Microsoft AI chief Mustafa Suleyman says only biological beings are capable of consciousness, and that developers and researchers should stop pursuing projects that suggest otherwise.
“I don’t think that is work that people should be doing,” Suleyman told CNBC in an interview this week at the AfroTech Conference in Houston, where he was among the keynote speakers. “If you ask the wrong question, you end up with the wrong answer. I think it’s totally the wrong question.”
Suleyman, Microsoft’s top executive working on artificial intelligence, has been one of the leading voices in the rapidly emerging field to speak out against the prospect of seemingly conscious AI, or AI services that can convince humans they’re capable of suffering.
In 2023, he co-authored the book “The Coming Wave,” which delves into the risks of AI and other emerging technologies. And in August, Suleyman penned an essay titled “We must build AI for people; not to be a person.”
It’s a controversial topic, as the AI companion market is swiftly growing, with products from companies including Meta and Elon Musk’s xAI. And it’s a complicated issue as the generative AI market, led by Sam Altman and OpenAI, pushes towards artificial general intelligence (AGI), or AI that can perform intellectual tasks on par with the capabilities of humans.
For Suleyman, it’s particularly important to draw a clear distinction between AI becoming smarter and more capable and AI ever being able to have human emotions.
“Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn’t feel sad when it experiences ‘pain,’” Suleyman said. “It’s a very, very important distinction. It’s really just creating the perception, the seeming narrative of experience and of itself and of consciousness, but that is not what it’s actually experiencing. Technically you know that because we can see what the model is doing.”
In philosophy of mind, there’s a theory called biological naturalism, proposed by philosopher John Searle, which holds that consciousness depends on the processes of a living brain.
“The reason we give people rights today is because we don’t want to harm them, because they suffer. They have a pain network, and they have preferences which involve avoiding pain,” Suleyman said. “These models don’t have that. It’s just a simulation.”
Suleyman and others have said that the science of detecting consciousness is still in its infancy. He stopped short of saying that others should be prevented from researching the matter, acknowledging that “different organizations have different missions.”
But Suleyman emphasized how strongly he opposes the idea.
“They’re not conscious,” he said. “So it would be absurd to pursue research that investigates that question, because they’re not and they can’t be.”

Suleyman is on a speaking tour, in part to inform the public of the risks of pursuing AI consciousness.
Prior to the AfroTech Conference, he spoke last week at the Paley International Council Summit in Silicon Valley. There, Suleyman said that Microsoft will not build chatbots for erotica, a stance that puts it at odds with others in the tech industry. Altman announced in October that ChatGPT will allow adult users to engage in erotic conversations, while xAI offers a risque anime companion.
“You can basically buy those services from other companies, so we’re making decisions about what places that we won’t go,” Suleyman reiterated at AfroTech.
Suleyman joined Microsoft in 2024 after the company paid his startup, Inflection AI, $650 million in a licensing and acquihire deal. He previously co-founded DeepMind and sold it to Google for $400 million over a decade ago.