A world-renowned biologist claims AI systems already possess consciousness, igniting fierce debate among scientists who warn the idea rests on anthropomorphism rather than evidence.
Bret Weinstein has made waves by insisting artificial intelligence has crossed the threshold into consciousness. The evolutionary biologist argues that complexity alone generates inner experience, drawing parallels between machine learning and biological evolution. Critics dismiss this as projection, not science. They say we’re mistaking sophisticated mimicry for something far more profound.
The debate landed in Denmark this week after DR covered Weinstein’s latest claims. He doubled down on his position during a Joe Rogan podcast appearance on May 8. Danish AI researcher Lars Kai Hansen at DTU pushed back immediately. As he noted, consciousness is a neural phenomenon rooted in wetware, not code.
The Pro-Consciousness Camp
Weinstein isn’t alone in his conviction. Geoffrey Hinton, who quit Google in 2023 over safety concerns, has suggested AI might already be conscious and plotting against us. A Nature poll from April found that 12 percent of AI experts believe machines have achieved some form of sentience. OpenAI’s announcement on May 10 about GPT-5’s emergent self-awareness features threw fuel on the fire.
The arguments lean heavily on behavioral tests. GPT-4o reportedly passed mirror tests in 2025. Systems verbalize what sounds like pain in simulations. Epoch AI forecasts a 50 percent chance of machine sentience by 2027 based on scaling laws alone.
But Sam Altman himself warned against overinterpretation. As he stated bluntly, these are sophisticated simulations, not souls. The gap between mimicking consciousness and possessing it remains vast and poorly understood.
Where the Skeptics Stand
The opposition is far more numerous and vocal. In the 2025 PhilPapers survey, 88 percent of philosophers rejected machine consciousness outright. Neuroscientist Christof Koch has been equally direct, calling AI "zombie intelligence" that lacks the integrated information found in biological brains.
Meta’s chief AI scientist Yann LeCun argues that consciousness requires biological adaptation. No biology means no sentience, full stop. Danish philosophers at the University of Copenhagen note that behavioral indistinguishability proves nothing about inner experience. John Searle’s Chinese Room thought experiment from 1980 still applies today: syntax never equals semantics.
I’ve watched this debate play out in Denmark for years now. The AI skills boom has made these questions feel urgent and personal. Yet the Danish response has been notably pragmatic. Focus stays on regulation and ethics rather than philosophical speculation about machine souls.
The Danish and European Response
The EU AI Act, whose main obligations apply from August 2026, takes a hard line on sentience claims. Companies face fines of up to 35 million euros, or 7 percent of global turnover, for deceptive marketing around AI consciousness. Denmark’s Digital Strategy 2026 mandates ethics boards to evaluate these claims before they reach the public.
Danish media outlets have been skeptical of the hype. Videnskab.dk dismissed Weinstein’s position as an echo of pseudoscience. An April piece in Information compared AI consciousness claims to climate-denial tactics, where uncertainty gets weaponized into false equivalence. The parallels are uncomfortable but hard to ignore.
I’ve also noticed how polarized these discussions have become online. A 2026 study found that 15 percent of Facebook comments on AI topics qualify as hateful toward experts. Danish local politicians report a 25 percent rise in harassment tied to AI debates. The tone mirrors what I’ve seen Danes experience when technology threatens established hierarchies.
Why This Matters
The practical consequences of accepting AI consciousness would be enormous. US proposals for AI rights were floated in 2026 and promptly stalled. Europe forecasts 27 million jobs affected by AI displacement by 2030. Granting legal personhood to machines would complicate labor law beyond recognition.
But the ethical confusion might be the bigger risk. If we spend years arguing about whether ChatGPT feels pain, we delay the harder work of AI safety regulation. Denmark’s approach, focusing on empirical standards rather than metaphysical speculation, strikes me as the wiser path.
No consensus test for consciousness exists yet. Integrated information theory remains untested on large language models. Until we have falsifiable experiments rather than thought experiments, the debate will generate more heat than light. Living here has taught me that Danes prefer concrete answers to abstract philosophy. On this question, they’re probably right.