Denmark’s national encyclopedia accuses OpenAI, the creator of ChatGPT, of massive intellectual theft, claiming the tech giant uses its expert content without permission. In response, Lex is building a Danish chatbot powered by verified knowledge to offer citizens a trustworthy alternative.
A Growing Conflict Over AI and Knowledge
The digital age has brought a new kind of conflict to Denmark’s knowledge institutions. Tech companies are harvesting content at unprecedented scale, and the organizations that create this information are fighting back.
Accusations of Historic Theft
Lex, Denmark’s national online encyclopedia, has accused OpenAI of what it calls the biggest theft in world history. The organization claims ChatGPT uses content from its database eight million times monthly. This includes 250,000 articles written by more than 4,000 researchers and experts. Ole Kaag Mølgaard, secretary general at Lex, says the scale is unprecedented.
The accusations center on how ChatGPT uses Lex content without permission. The AI system then cites Lex as a source, creating the appearance of legitimacy. However, Lex receives no compensation and has no control over how its content is used or presented.
Legal Action Across Borders
Similar conflicts are erupting worldwide. Encyclopedia Britannica and Merriam-Webster filed a lawsuit against AI company Perplexity in September last year. The case claimed the AI system copied articles and reproduced content without permission. Meanwhile, a group of Danish media organizations sued OpenAI in February. The Danish Press Copyright Management Organization represents newspapers and broadcasters seeking protection for their journalism.
Lex wants to take legal action too. However, the organization lacks the financial resources to fight American tech giants in court. As a result, they are pursuing a different strategy.
The Quality Problem with AI Answers
Research from the University of Copenhagen reveals significant problems with ChatGPT’s Danish language skills. The system performs reasonably on basic tasks but struggles with nuance. For example, ChatGPT cannot reliably disambiguate words with multiple meanings. The Danish word kost can mean both diet and broom, but the AI often misinterprets which meaning applies.
Professor Bolette Sandford Pedersen notes that abstract concepts pose major challenges. ChatGPT is primarily trained on English texts and then adapted to Danish. This creates a fundamental weakness: the system cannot grasp Danish cultural references or idioms that do not exist in English. The word mursten, which Danes use metaphorically to describe thick books, completely confuses the AI.
Building a Danish Alternative
Rather than accepting defeat, Danish institutions are creating their own solution. The response combines public sector resources with academic expertise.
A Chatbot Backed by Experts
Lex is developing a Danish chatbot in partnership with the Center for Humanities Computing at Aarhus University. The project aims to provide answers based entirely on verified, expert-written content. Unlike ChatGPT, this system will draw only from articles reviewed by specialists. The initial phase is a three-year research project involving user testing and feedback.
The chatbot represents a direct challenge to American AI dominance. It acknowledges that citizens want conversational AI tools. At the same time, it addresses the documented problems with systems like ChatGPT. These include fabricated references, political bias, and unreliable information.
An Unusual Coalition Forms
The threat from AI has created unexpected alliances. Lex has formed a partnership group with major Danish institutions. Members include Statistics Denmark, the National Museum, the Royal Library, and the National Archives. DR, Denmark’s public broadcaster, has also joined. Even Store norske leksikon, Norway’s encyclopedia, participates.
Mølgaard describes this collaboration as unique. These institutions rarely work together so closely. However, the crisis created by AI has forced them to unite. The coalition aims to protect reliable information in an era of digital uncertainty.
Growing User Base Despite Challenges
Lex reports that nearly 60 percent of Danes use its service annually. This suggests significant public demand for trustworthy sources. The organization views itself as an oasis of verified knowledge. By expanding awareness and accessibility, it hopes to compete with AI systems despite lacking comparable resources.
The strategy depends on quality over quantity. While ChatGPT can generate answers instantly on any topic, it cannot guarantee accuracy. Lex offers slower but verified information. The question is whether users will prioritize speed or reliability.
The Broader Information Crisis
The conflict between Lex and OpenAI reflects deeper problems with how information flows in the digital age. Traditional gatekeepers are losing control.
When AI Invents Sources
ChatGPT has been caught generating completely false academic citations. The engineering publication Ingeniøren publicly warned that the AI creates links to articles that never existed. These fabricated references look convincing but lead nowhere. For journalists and researchers, this creates serious verification problems. At least one in ten Danish students admits to using ChatGPT to cheat on exams, raising concerns about academic integrity.
The problem extends beyond students. Anyone using ChatGPT for research risks including false information. The AI presents invented sources with the same confidence as real ones. Users must verify every claim independently, which defeats the purpose of using AI assistance.
Political Bias in Training Data
Major AI systems inherit bias from their training sources. Testing of ChatGPT’s newest version revealed it pulled answers from Grokipedia, an online encyclopedia generated by Elon Musk’s AI company xAI. Critics consider Grokipedia politically biased. The Guardian found that ChatGPT referenced this source when answering questions about Iran’s government and historical figures.
This demonstrates how AI can amplify particular worldviews. If training data skews toward certain perspectives, the AI will reproduce that bias. Users may not realize they are receiving slanted information. The system presents answers as neutral facts regardless of source quality.
OpenAI’s Defense
OpenAI responded to the criticism by invoking fair use principles. The company says ChatGPT improves human creativity, scientific discovery, and medical research, and claims hundreds of millions of people benefit daily. According to OpenAI, its language model drives innovation and is trained on publicly available data under fair use doctrine.
However, this defense does not address how AI systems use content without permission or compensation. Content creators argue that fair use should not cover commercial exploitation at massive scale. The legal questions remain unresolved, particularly under European copyright law.
A Personal Take
I understand Lex’s frustration. They invest resources in creating verified content, and ChatGPT harvests it without permission or payment. That feels fundamentally unfair. On the other hand, I recognize that AI tools offer genuine value. They make information more accessible and help people work more efficiently. The challenge is finding a balance that rewards content creators while preserving the benefits of AI.
The Path Forward Requires Compromise
I believe both sides need to give ground. Tech companies should compensate knowledge institutions fairly for using their content. At the same time, institutions like Lex must adapt to technological change rather than simply resist it. The Danish chatbot project seems like a smart middle path. It harnesses AI capabilities while maintaining quality control through expert verification.
Trust Will Determine the Winner
Ultimately, I think this battle will be decided by user trust. If people realize that ChatGPT produces unreliable answers with fabricated sources, they will seek alternatives. The Danish chatbot could succeed if it builds a reputation for accuracy. However, if most users prioritize convenience over reliability, verified knowledge sources may struggle to compete. I hope we choose quality, but I am not certain we will.
Sources and References
DR: Dansk online leksikon kritiserer ChatGPT-misbrug: Vi er vidne til verdenshistoriens største tyveri (Danish online encyclopedia criticizes ChatGPT misuse: We are witnessing the biggest theft in world history)