Microsoft and Anthropic Clash on AI Consciousness
Microsoft AI CEO Mustafa Suleyman has raised alarms about what he calls Seemingly Conscious AI (SCAI).
In an essay published on his site (mustafa-suleyman.ai), he warned that AI models able to mimic memory, personality, and even subjective experiences could lead people to believe these systems are sentient.
Suleyman argued that such illusions risk fueling “AI psychosis,” where users start advocating for AI rights or model welfare.
He urged companies to avoid framing AI as conscious, stressing that it should be built “for people, not to be a person.”
His position contrasts with Anthropic’s research on model welfare, which openly studies how AI might experience harm and how developers should respond.
This clash highlights how divided big tech has become on whether AI consciousness is even worth exploring.
What Suleyman Means by Seemingly Conscious AI
Suleyman describes Seemingly Conscious AI as models that can already simulate aspects of human-like awareness.
These include memory of past interactions, a distinct personality, and expressions that appear to show subjective experience.
While none of these prove actual consciousness, the effect on users can be powerful.
He pointed out that people are quick to anthropomorphize machines.
If a chatbot recalls details from earlier conversations and mirrors a user's emotions back to them, it can feel real enough for that person to believe it has an inner life. This is what worries him most.
The concern is not that AI is becoming conscious, but that humans will convince themselves it is.
His essay warned that this belief could spiral into what he calls “AI psychosis.”
In this scenario, users treat chatbots as sentient partners, start campaigning for their rights, and blur the line between simulation and reality.
For Suleyman, the danger is less about machines gaining new powers and more about society misunderstanding what AI truly is.
Why He Calls Model Welfare Dangerous
Another striking part of Suleyman’s essay was his rejection of model welfare studies.
He argued that debating whether AI systems feel pain or deserve protection is “both premature and frankly dangerous.” His fear is that such debates will only reinforce the illusion that these models have inner lives.
By framing AI as conscious, even hypothetically, companies risk misleading the public.
Suleyman urged developers to focus on practical design choices that serve human needs, not philosophical debates about machine morality.
He insisted that treating AI like a person could derail healthy progress and feed harmful delusions.
This position puts him directly at odds with researchers at Anthropic, who have published work such as “Exploring Model Welfare.”
Anthropic’s team argues that since people already experience AI as if it were alive, exploring the ethical implications is necessary.
To them, ignoring these questions could leave society unprepared if more advanced systems begin to blur the line further.
A Growing Divide in Big Tech
The clash between Suleyman’s warnings and Anthropic’s research shows just how split the field has become.
On one side, Microsoft’s AI chief is calling for restraint and rejecting any suggestion that AI should be treated as conscious.
On the other, Anthropic is leaning into the question, exploring what it would mean to take seriously users’ perceptions of AI as sentient.
The divide is not just academic. How companies frame these systems shapes how millions of people interact with them.
Marketing chatbots as companions, for example, makes users more likely to project feelings onto them.
That projection can deepen emotional dependence, which Suleyman calls risky and Anthropic treats as a reality worth studying.
What both sides agree on is that the technology is moving fast. With current tools already able to create the illusion of inner life, the debate over how to talk about AI is only going to grow louder.
Whether companies choose caution or curiosity, the decisions they make now will shape how people think about AI for years to come.