A Finnish think tank has published a framework for embedding democratic principles into AI governance, proposing short-term European policy actions as transatlantic approaches diverge sharply in 2026.
Demos Helsinki released a policy brief this month introducing a democratic AI governance framework for the European Union. The framework embeds six democratic pillars directly into the AI lifecycle: participation, freedom, equality, transparency, knowledge, and the rule of law.
The timing matters. AI development is outpacing public deliberation. Democratic oversight is slipping as power concentrates in private hands.
Europe Needs Its Own Path
The brief argues the EU can break the US-China binary on AI governance. Right now, Europe lacks the sovereign AI infrastructure necessary for democratic control. The framework addresses that gap head-on.
The approach is holistic. Democratic safeguards cannot wait until an AI system is deployed. They must cut across every layer, from initial design and data collection to the underlying infrastructure itself.
Three Phases Through 2035
The roadmap runs through 2035 in three phases. The first phase covers 2026 to 2028. It focuses on defending existing regulations while building public AI infrastructure. This recognizes that democratic governance depends on digital sovereignty across the technical stack.
The mid-term phase emphasizes democratic AI adoption. Democratic safeguards move from isolated requirements to default practices. This happens through capable institutions, aligned incentives, and public accountability.
The long-term phase centers on AI sovereignty. That means citizens and public institutions can meaningfully govern critical AI infrastructures and data. They stop depending on external commercial actors. Only when public infrastructure and democratic participation meet does true governance become possible.
Eighteen Concrete Recommendations
Demos Helsinki offers 18 policy recommendations across five AI policy tracks. The recommendations focus on short-term action for 2026 to 2028. They provide practical guidelines for experts and policymakers.
The brief emerged from the KT4D project running from 2023 to 2026. It arrives as global AI governance sits at a crossroads.
The US Deregulation Gamble
In July 2025, the US released America’s AI Action Plan. The plan emphasized light-touch governance and innovation over strict regulation. It urged Congress to preempt state laws. Responsibility shifted to private sector self-management.
This marks a pivot from earlier Biden-era frameworks. Industry lobbying drove the change amid rapid AI advancements. Critics argue it exacerbates risks like bias and misinformation by reducing federal oversight.
The contrast with Europe could not be sharper. The EU AI Act entered into force in August 2024, with its prohibitions applying from February 2025 and full implementation running through 2027. It mandates governance for high-risk AI including bias audits and transparency requirements.
Different Philosophies at Work
As the first comprehensive AI law globally, the EU Act sets benchmarks for democratic governance. It prioritizes human oversight and accountability. This directly addresses risks like algorithmic discrimination.
I have watched this divergence grow over three years in Denmark. The European emphasis on protecting civil society from AI-driven instability reflects something deeper. It reflects a different understanding of what technology should serve.
US trends favor corporate-led frameworks. The philosophy assumes innovation speeds up when regulations ease. But that leaves risks like privacy violations and bias unchecked. The disagreement is stark and fundamental.
Citizen Voices in the Mix
Some proposals go further than regulation alone. Organizations like One Project advocate citizen assemblies for AI goals and red lines. These would pair with expert standards bodies for implementation.
DemocracyNext pushes sortition-based deliberation for representativeness. The idea counters top-down regulation by leveraging collective intelligence. It draws inspiration from Irish assemblies that set enforceable rules democratically.
This fits Nordic traditions of deliberation. Copenhagen has experimented with climate assemblies. Danish businesses might benefit from similar input on AI adoption priorities.
Tech firms have launched their own democratic input initiatives. Anthropic created Collective Constitutional AI. OpenAI distributed Democratic Inputs grants. But these face criticism for lacking civil society checks. They risk falling short on genuine pluralism.
What Denmark Stands to Gain
Denmark has no standalone AI law as of May 2026. National implementation of the EU Act is required by 2027. The Danish focus emphasizes ethical AI in public services with bias audits.
This aligns with Nordic welfare models. But enforcement gaps remain. Implementation costs could burden small and medium businesses. The Demos Helsinki framework offers a roadmap that Denmark could adapt locally.
The brief makes one thing clear. Democratic safeguards in AI are not optional extras. They are foundational requirements if technology is to serve the public good rather than concentrate power further.
Sources and References
Demos Helsinki: A framework for democratic AI governance
The Danish Dream: Use of AI chatbots in Denmark skyrockets, experts caution