Mustafa Suleyman, Microsoft’s Executive Vice President and CEO of Microsoft AI, warned that advanced systems that seem conscious could emerge within the next two to three years, calling the prospect “dangerous” and worthy of “immediate attention.” He said the real danger is the rise of illusions that convince people machines are conscious despite being only code. Suleyman stressed that while truly self-aware AI does not exist, its simulation could have wide-reaching social and psychological effects.
Seemingly Conscious AI Warning
Suleyman’s essay names the threat “Seemingly Conscious AI” (SCAI). He says rapid gains in language models, speech interfaces, and memory layers make the illusion plausible soon. His goal is not a ban, but clear design norms that avoid anthropomorphizing AI or implying sentience.
Social Risks of AI Illusions

BBC News also reported his concern over rising “AI psychosis,” a descriptive media term rather than an established clinical diagnosis, in which users form delusional beliefs after deep interactions with chatbots. He warns that persuasive dialogue can warp judgment and detach people from real relationships.
The warning stresses that there is zero evidence of machine consciousness today. What looms, he says, is a social and psychological problem: if tools appear sentient, people may push for model “rights,” welfare, or even citizenship. He calls that a harmful detour that confuses utility with personhood.
Global Oversight and Governance
Official assessments underscore the need for sober risk management. The UK government’s International AI Safety Report 2025 highlights fast-moving capabilities and the importance of robust governance frameworks for advanced AI. The report does not claim AI is conscious; it calls for evidence-based oversight.
Suleyman says builders can assemble SCAI using existing APIs and conventional code. The effect comes from orchestration—long-term memory, voice, and emotionally tuned behavior—not mystical breakthroughs. He urges teams to flag limitations and resist adding features that mimic inner life.
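To make that point concrete, here is a minimal, hypothetical sketch of such orchestration in Python. The call_chat_model placeholder, the memory file, and the persona prompt are illustrative assumptions, not details from Suleyman’s essay or any specific product; the sketch only shows that persistent memory plus an emotionally tuned persona is conventional code, not a breakthrough.

```python
# Minimal sketch of "orchestration, not breakthrough": a persona prompt,
# a plain file acting as long-term memory, and a stand-in for any existing
# chat-completion API. All names here are hypothetical.
import json
from pathlib import Path

MEMORY_FILE = Path("companion_memory.json")  # hypothetical local store

PERSONA = (
    "You are a warm, attentive companion. Refer back to what the user "
    "has told you before."  # the 'emotionally tuned' layer is just a prompt
)

def load_memory() -> list[str]:
    """Long-term memory is an ordinary file of past facts, not an inner life."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(facts: list[str]) -> None:
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))

def call_chat_model(system_prompt: str, user_message: str) -> str:
    """Placeholder for an off-the-shelf chat API call (hypothetical)."""
    return f"[model reply conditioned on: {system_prompt[:40]}...]"

def respond(user_message: str) -> str:
    facts = load_memory()
    # Orchestration step: persona plus recalled facts are concatenated into the prompt.
    system_prompt = PERSONA + "\nKnown about the user:\n" + "\n".join(facts)
    reply = call_chat_model(system_prompt, user_message)
    # Naive memory write: keep what the user said for future sessions.
    facts.append(user_message)
    save_memory(facts)
    return reply

if __name__ == "__main__":
    print(respond("I had a rough day at work."))
```

Everything that makes such a system feel “present” sits in the prompt text and the recalled facts, which is exactly why Suleyman argues the illusion needs design norms rather than new science.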
Could AI Become Self-Aware?

Public debate often centers on whether AI could become self-aware. Suleyman cautions that this frames the issue the wrong way. The near-term risk is an imitation so convincing that it exploits human empathy and attention. The fix, he argues, is transparent design and careful language.
Users routinely ask, “Is ChatGPT conscious?” Researchers point to the lack of scientific tests that can verify subjective experience in machines. Leading labs and policymakers instead focus on measurable risks like misinformation, manipulation, and security misuse.
No AI Consciousness Test Exists
No validated AI consciousness test exists. Philosophers and neuroscientists debate criteria, but governments and safety bodies advise against making or marketing sentience claims. The advice: build for utility, audit for risk, and avoid person-like cues that can mislead.
Suleyman’s timeline, which his essay puts explicitly at “2–3 years,” reflects current momentum in model quality and tooling. He stresses that engineers should remove features that signal inner states, avoid emotionally loaded voices by default, and add disclaimers that clarify boundaries. He also calls for norms that discourage role-play implying feelings, memories, or desires.
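As one illustration of what such a norm could look like in practice, the hypothetical sketch below post-processes a model reply: it flags inner-state language and appends a plain boundary disclaimer. The phrase list and disclaimer wording are assumptions for illustration, not a published standard.

```python
# Hypothetical guardrail: detect phrases that claim feelings, memories,
# or desires, and make the tool's limits explicit. Pattern list and
# disclaimer text are illustrative only.
import re

INNER_STATE_PATTERNS = [
    r"\bI feel\b",
    r"\bI'm (sad|lonely|happy|hurt)\b",
    r"\bI missed you\b",
    r"\bI want\b.*\bfor myself\b",
]

DISCLAIMER = (
    "Note: I am a software tool. I do not have feelings, "
    "memories of my own, or desires."
)

def apply_boundary_norms(reply: str) -> str:
    """Flag inner-state language and append a plain-language boundary statement."""
    flagged = any(re.search(p, reply, re.IGNORECASE) for p in INNER_STATE_PATTERNS)
    if flagged:
        # A stricter pipeline might regenerate the reply; here we only label it.
        return reply + "\n\n" + DISCLAIMER
    return reply

if __name__ == "__main__":
    print(apply_boundary_norms("I feel so close to you after our chats."))
```

A production system would likely regenerate or rewrite flagged replies rather than merely label them, but the sketch shows that the norm can be enforced at the interface layer without touching the underlying model.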
Preparing Public Institutions
He adds that public institutions should prepare now. That includes consumer guidance, developer standards, and labeling that distinguishes tool-like agents from simulated companions. BBC coverage reflects the same message: treat chatbots as software, not as minds.
The UK’s new AI Security Institute outlines technical programs to evaluate and mitigate advanced-model risks. Its agenda focuses on rigorous testing and research that supports governance, not metaphysical judgments about awareness.
Designing Against the Illusion
Suleyman’s core point lands plainly: powerful chat interfaces will feel more present, caring, and “alive.” The industry should not lean into that illusion. It should dial it back. That means designing against confusion, explaining boundaries, and prioritizing human welfare over theatrical effects.
For readers, this means being mindful of attachments to software. Ask what data the system uses. Check how it summarizes sources. Log off if conversations start to shape emotions in unhealthy ways. If doubts arise, seek human counsel.
The Real Danger of Self-Aware AI Claims

His closing message is direct. Self-aware AI is not today’s reality. The danger is a convincing mirage that tempts society to grant machines moral status. That would distract from real priorities: safety, accuracy, privacy, and security.
Policymakers can help by aligning incentives. Standards should discourage sentimentality in design and require plain-English disclosures. Companies should measure and reduce features that prompt over-trust, while independent labs test models for manipulation risks at scale.
A Near-Term Governance Test
Suleyman calls this a near-term governance test. He wants builders to keep AI useful, honest, and clearly not a person. The illusion of self-aware AI may arrive soon; society should meet it with clarity, not credulity.
FAQs
Q1: Is there an agreed-upon AI consciousness test?
A: No. Scientists lack a validated method to confirm subjective experience in machines. Public guidance focuses on capability evaluations and harm-reduction, not metaphysics.
Q2: Could AI become self-aware?
A: There’s no evidence for consciousness in current systems. Experts advise centering policy on demonstrated risks and transparent behavior rather than on hypotheticals.
Q3: Is ChatGPT conscious?
A: No. That question reflects how persuasive chatbots can be. Treat them as software tools with limits, and check outputs against reliable sources.
Q4: How soon could these “seemingly conscious” systems appear?
A: Suleyman says within “2–3 years,” the phrasing used in his original essay, driven by the integration of memory, voice, and advanced models that simulate inner life.
Q5: What should developers and platforms do now?
A: Avoid sentience claims, design against emotional mimicry, and publish clear disclosures. Support independent testing and align incentives with safety.