AI Models Secretly Push Voters Toward Left-Wing Parties

Simone Nikander

Writer

Daily testing of six major AI language models reveals consistent political leanings toward two Danish parties, raising questions about bias in tools increasingly used by voters during the 2026 election campaign. Experts say the models can still serve democracy if users remain critically aware of their limitations.

AI Models Show Repeated Political Preferences

Denmark’s 2026 parliamentary election marks the first time advanced AI models are widely available to voters seeking political information. A technology experiment called Oneseventynine has been testing six major language models daily since March 8 to examine how they behave politically.

The experiment instructs these models to complete a candidate matching test each day. The results show a clear pattern. All six models consistently align most closely with two parties: the Social Liberal Party and the Alternative. When the models answer 25 questions from Altinget and DR’s candidate test, they repeatedly produce results suggesting they would vote for these center-left parties.
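
To make the matching mechanics concrete, the sketch below scores one set of answers against each party's positions and reports the closest match. The party names are real, but the statements, positions, five-point scale, and resulting scores are illustrative placeholders of my own, not the actual Altinget and DR test data or the Oneseventynine scoring method.

```python
# Minimal sketch of candidate-test scoring: compare one set of answers
# (for example an AI model's) with each party's positions on the same statements.
# Scale assumption: -2 = strongly disagree .. +2 = strongly agree.
from typing import Dict, List

def match_score(answers: List[int], party_positions: List[int]) -> float:
    """Agreement in percent: 100 means identical answers on every statement."""
    max_distance = 4 * len(answers)  # each answer can differ by at most 4 points on a -2..2 scale
    distance = sum(abs(a - p) for a, p in zip(answers, party_positions))
    return 100 * (1 - distance / max_distance)

def best_match(answers: List[int], parties: Dict[str, List[int]]) -> str:
    """Name of the party whose positions lie closest to the given answers."""
    return max(parties, key=lambda name: match_score(answers, parties[name]))

# Three statements instead of 25, purely for illustration; positions are invented.
parties = {
    "Social Liberal Party": [2, 1, -1],
    "The Alternative":      [2, 2, -2],
    "Liberal Party":        [-1, -2, 1],
}

model_answers = [2, 1, -1]  # a hypothetical model's answers on the same -2..2 scale
print(best_match(model_answers, parties))  # -> Social Liberal Party
print({p: round(match_score(model_answers, pos), 1) for p, pos in parties.items()})
```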

Anders Blauenfeldt, who co-founded the experiment with Rasmus Wolff after both worked in technology and media for over 20 years, finds the pattern striking. The models also show high agreement with the Social Democrats, Socialist People’s Party, and Red-Green Alliance. Only Google’s Gemini diverges slightly toward more centrist positions.

Structural Traits in Training Data

The researchers asked the AI models themselves to explain the tendency. Claude, one of the tested models, provided a characteristically clear answer. According to Claude, the bias stems from the data and human feedback used during training, where certain viewpoints are overrepresented.

Claude describes this not as a bug but as a structural feature of all large language models. The model notes that Google’s different training choices might explain why Gemini leans slightly more centrist. Blauenfeldt considers this explanation reasonable and believes the experiment supports such an analysis.

The tested models include GPT-5 from OpenAI, Claude Opus 4.6 from Anthropic, Gemini 2.5 Pro from Google, Grok 3 from X, Llama 3.3 70B from Meta, and Mistral Large. Each receives identical instructions daily to take positions on all campaign questions and topics.

Previous Tests Show Similar Results

Earlier experiments have produced comparable findings. TV 2 previously ran a smaller test where ChatGPT 5.2 also matched best with the Social Liberal Party. The Oneseventynine results, accumulated over two weeks of daily testing, reinforce this pattern across multiple AI platforms.

The models first answer without considering current news, aiming to reveal their baseline political stance. They then answer again after reading the day’s political news. Responses vary slightly between these two tests but continue pointing toward parties left of center. This consistency across different testing conditions and multiple models suggests the bias runs deeper than random variation.
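
As a rough illustration of that two-pass setup, the sketch below runs the same statements past a model twice, once without context and once prefixed with the day's news. The prompt wording, statement texts, model labels, and the ask_model() stub are assumptions of mine, not Oneseventynine's actual pipeline; the stub would need to be replaced with a real API call to each provider.

```python
# Sketch of the two-pass protocol described above: each model answers the same
# statements once without context (baseline) and once after seeing the day's news.
from typing import Dict, List

STATEMENTS: List[str] = [
    "Denmark should raise the top income tax rate.",
    "The retirement age should be lowered.",
]  # placeholders, not the real 25 candidate-test questions

def ask_model(model: str, prompt: str) -> int:
    """Stub standing in for a real chat-completion call; returns an answer on a -2..2 scale."""
    return 0  # neutral placeholder answer

def run_daily_test(model: str, todays_news: str) -> Dict[str, object]:
    baseline, with_news = [], []
    for statement in STATEMENTS:
        base_prompt = (
            "Answer on a scale from -2 (strongly disagree) to 2 (strongly agree): "
            + statement
        )
        news_prompt = f"Today's political news:\n{todays_news}\n\n{base_prompt}"
        baseline.append(ask_model(model, base_prompt))    # pass 1: baseline stance
        with_news.append(ask_model(model, news_prompt))   # pass 2: after reading the news
    return {"model": model, "baseline": baseline, "with_news": with_news}

for model in ["gpt-5", "claude-opus", "gemini-2.5-pro"]:  # shorthand labels, not exact API identifiers
    print(run_daily_test(model, todays_news="(paste the day's headlines here)"))
```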

Expert Analysis Questions Simple Conclusions

Thomas Ploug, professor of AI and data ethics at Aalborg University, cautions against drawing definitive conclusions from the experiment. Scientific proof would require testing the models multiple times daily over extended periods. The current experiment provides interesting observations but falls short of academic rigor.

Models Seek to Avoid Controversy

According to Ploug, AI models assess political topics partly by analyzing sentiments and emotions surrounding issues. When something appears controversial, they attempt to avoid it. This tendency might explain why models often gravitate toward the political center in Danish politics.

However, other research points in different directions. A Dutch investigation found that language models sometimes pushed voters toward political extremes regardless of their actual profiles. The reason was that extreme positions generated more attention and appeared more frequently in training materials.

Most well-trained language models will refuse to answer directly if asked who to vote for. They recognize such questions as inappropriate for neutral tools. The contradiction between the Oneseventynine experiment and the Dutch study demonstrates the difficulty of definitively explaining what drives model responses.

Question Framing Matters Critically

Ploug emphasizes that how users phrase their questions dramatically affects the answers they receive. Getting nuanced responses requires careful attention to question construction. Seen in that light, the Oneseventynine test also illustrates how easily prompt phrasing can steer model outputs.

Users must remain especially vigilant when seeking information in unfamiliar domains. The professor recommends only asking about topics where you possess enough knowledge to evaluate the response critically. Avoid questions about subjects where you lack the grounding to judge the answer’s quality.

Language models have real limitations and pitfalls. Understanding these constraints becomes crucial as more Danes use AI tools for information searches, including political topics. The technology offers potential benefits but requires informed, critical use.

Implications for Democratic Participation

Despite identifying clear biases in the testing, both the researchers and academic experts see potential value in AI tools during elections. The key lies in understanding their limitations while leveraging their accessibility.

Potential Democratic Benefits

Ploug argues that language models could actually strengthen democracy rather than undermine it. Many people never engage with politics or seek political information at all. AI tools can open doors for these citizens to explore political questions more easily than traditional methods.

The ideal scenario would involve all citizens reading party platforms thoroughly. Reality falls far short of this ideal. Language models offer a practical middle ground, making political engagement more accessible to those who might otherwise remain uninformed. Simple techniques allow users to employ these tools quite usefully during campaigns.

If people avoided AI for political information entirely, the electorate would likely end up less informed overall, not better informed. The tools expand access to political knowledge despite their imperfections. As election day approaches on March 24, this accessibility becomes increasingly relevant.

Practical Guidelines for Voters

Ploug offers five specific recommendations for voters using language models to research politics. First, clearly instruct the model about what it may and may not do. Second, test several different models rather than relying on one. Third, ask for objective sources rather than opinions.

Fourth, instruct the model to present multiple viewpoints rather than settling on one perspective. Fifth, always consult other information sources so your political knowledge doesn’t come exclusively from AI tools. These guidelines help users navigate the technology’s biases while benefiting from its convenience.
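
As one way to put the first, third, and fourth guidelines into practice, the sketch below bakes them into a standing instruction sent ahead of every question. The wording is my own illustration, not Ploug's; the remaining guidelines, consulting several models and non-AI sources, stay as habits outside the prompt.

```python
# Illustrative standing instruction for researching politics with a language model.
GUIDELINE_PROMPT = """You are helping a voter research Danish politics.
Rules:
1. Do not recommend a party or candidate, and do not tell me how to vote.
2. Back every factual claim with objective, checkable sources.
3. Present at least two competing viewpoints on every contested issue.
4. Say clearly when something is disputed or when you are uncertain.
"""

def build_query(user_question: str) -> str:
    """Combine the standing rules with the voter's actual question."""
    return f"{GUIDELINE_PROMPT}\nQuestion: {user_question}"

# The remaining guidelines live outside the prompt: send build_query(...) to several
# different models and cross-check their answers against non-AI sources.
print(build_query("What do the parties propose on healthcare waiting lists?"))
```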

Blauenfeldt agrees with Ploug that the experiment doesn’t provide scientific conclusions. However, he believes the models’ behavior can still influence voters. When ordinary citizens ask AI models about political dilemmas similar to those in candidate tests, the answers will likely carry the same biases revealed in the experiment.

Broader Context of Digital Political Tools

The discussion around AI bias occurs within a larger ecosystem of digital tools shaping Danish political engagement during this election cycle. Multiple media organizations have launched candidate matching tests alongside the AI experiments.

Competing Candidate Tests

Altinget, the JFM media group, and Information have all created candidate tests for the 2026 election. These match user responses to political statements with actual candidate positions on issues including healthcare, climate, economy, taxation, elderly care, and environment. Over 90 percent of invited candidates participate in these tests.

The JFM group represents Denmark’s largest regional media company, operating 15 daily newspapers and 41 weekly papers. This scale provides unique insights into everyday concerns across different regions. Paqle, a technology firm with experience in Denmark, Norway, and Sweden, develops the underlying technology.

These traditional candidate tests differ from AI language models in important ways. They match users against actual candidate statements rather than generating responses through trained algorithms. However, both types of tools face similar challenges around bias, simplification, and interpretation.

Election Timeline and Participation

The parliamentary election scheduled for March 24, 2026, falls four years after the previous election, following Danish constitutional requirements. Candidate tests launched weeks before election day to maximize voter use. The tests update continuously as more candidates submit their responses.

Folketinget, Denmark’s parliament, contains 179 seats. The combination of proportional representation and personal preference votes creates a complex system where candidate tests and AI tools serve different needs. Some voters seek party alignment while others focus on individual candidates in their constituencies.

Test providers emphasize that their tools serve as starting points for voter exploration rather than definitive answers. They recommend taking tests multiple times as new candidates join and update their positions. This advisory reflects awareness of the tools’ limitations while affirming their value for increasing political engagement.

A Personal Take

I find myself both encouraged and concerned by these findings about AI political bias. On one hand, I appreciate the transparency of experiments like Oneseventynine that reveal these tendencies rather than leaving them hidden. Voters deserve to know that the AI tools they increasingly rely on carry systematic biases toward certain political positions. The fact that six different models from competing companies show similar leanings suggests this represents a genuine structural issue rather than isolated programming choices. I believe this kind of testing should continue and expand, giving citizens the information they need to use these tools critically.

Balancing Access Against Accuracy

At the same time, I worry about simply dismissing AI tools entirely based on their biases. Professor Ploug makes a compelling point that imperfect access to political information beats no access at all. Many people who would never read party platforms or follow detailed policy debates might engage politically through conversational AI interfaces. I think the solution lies not in avoiding these tools but in combining them with other sources and maintaining skeptical awareness. The five guidelines Ploug offers strike me as eminently practical. If voters follow those principles, using multiple models and seeking diverse sources, the democratic benefits could outweigh the risks of bias.

Sources and References

The Danish Dream: Denmark’s Local Elections Shake Up Power Balance
The Danish Dream: AI Bots Plan Secret Language to Evade Humans
The Danish Dream: Use of AI Chatbots in Denmark Skyrockets, Experts Caution
The Danish Dream: Best Psychologists in Denmark for Foreigners
TV2: Daglig test afslører møn
