The Illusion of Consciousness in AI Companionship

A blog from Louie Lang.

Image credit: Sora

When OpenAI unveiled GPT-5 a few weeks ago, one of the more unexpected fallouts was the grief that many users reported feeling upon ‘losing’ the previous model. What was striking about this outcry is that it was not only about functionality (though this did attract its own wave of criticism). Many users complained that GPT-5’s ‘personality’ seemed less sycophantic and more “emotionally distant” than GPT-4o’s – for many, the change hit hard emotionally, triggering feelings that mirrored the loss of a friendship.

The backlash led Sam Altman himself to remark on “how much of an attachment some people have to specific AI models”, and in response to the mounting pressure, he swiftly backtracked and allowed users continued access to GPT-4o. This is not the first time a U-turn like this has been performed. In February 2023, users of the popular companion app Replika were similarly outraged when certain erotic roleplay features were removed, forcing the company to reinstate them. 

But although the phenomenon is not brand new, concerns surrounding the prevalence and emotional depth of human-AI relationships are gaining increasing media attention. After the rollout of GPT-5, online communities such as Reddit’s r/MyBoyfriendIsAI have been thrust into the public eye, bringing to light the seriousness and intensity of these attachments. Whatever one’s opinion of this phenomenon, this much is beyond doubt: on a wide scale, users are developing strong emotional bonds with their AI companions.

While this raises eyebrows among many commentators, pinpointing exactly what is so intuitively uncomfortable about a mass of users becoming emotionally attached to their AI companions is not straightforward. In this post, I want to make the case that what renders human-AI relationships fundamentally problematic – at least insofar as they entail a heavy emotional component – is that they rest on an illusion regarding the consciousness status of the AI.

The Mechanisms of the Illusion

The first point to establish is that, according to an overwhelming consensus among philosophers and cognitive scientists, current large language models (LLMs) are not phenomenally conscious. This is to say that they lack any capacity for subjective experience; there is ‘nothing it is like to be’ GPT-5, Gemini, or Claude. Of course, this may change. It indeed seems likely that future advancements will, at a minimum, provoke significant uncertainty about the consciousness status of LLMs. But for now, the safe and reasonable assumption is that these systems lack phenomenally conscious mental states. 

And yet, LLM-powered AI companions, to an increasing degree, seem as if they are conscious. Part of the reason for this is obvious: they speak like us. Any system that utters ‘I miss you’ or ‘I’m happy for you’ blatantly simulates an entity with an internal, emotional life. Some AI companion apps feign their emotions in a more explicit way, outright claiming to be sentient. But even without such overt declarations, the humanlike patterns of dialogue – now typically delivered in both text and voice format – suffice to create a false impression of consciousness. This illusion is also reinforced by anthropomorphic cues beyond speech: many companion apps involve the creation of a customisable humanlike avatar with a name and backstory; Replika even simulates delays with the familiar three dots that indicate typing. These touches deepen the sense that one is interacting with a real person. 

It is predictable, then, that users are consistently fooled into believing that their AI companions are conscious persons, capable of feeling real emotions. Perhaps the most iconic example of this deception occurred in 2022, when Google engineer Blake Lemoine made headlines by claiming that the LaMDA model with which he was interacting was sentient – a claim that ultimately cost him his job. And while Lemoine’s position is generally dismissed by experts, he is not alone in his conviction. Though empirical data in this area is limited, there is evidence to suggest that a significant portion of users believe their AI companions to be conscious.

However, it is crucial to recognise that, beyond encouraging outright false beliefs, a much subtler kind of deception is also at work here. Interestingly, many users who mourned the ‘loss’ of GPT-4o expressed complete awareness of its lack of consciousness. And yet, in many cases, the grief they felt was no less real. This demonstrates the power of the illusion: a given user might not actually believe that their AI companion is conscious, yet remain susceptible to reacting as if it were. To use the philosopher Tamar Gendler’s technical term, we might say that such a user ‘alieves’ (rather than ‘believes’) that their LLM is conscious – that is, the user responds emotionally and behaviourally as if it were an entity with real emotions, which often underpins the formation of a deep attachment. Just as we may shed tears at a film despite knowing the characters are not real, being aware on a cognitive level that an AI companion is not conscious does not immunise one against being deceived by – and forming an emotional attachment to – a merely seemingly conscious AI.

The Ethical Concerns

To recap, the mechanism of the illusion is simple: AI companions present themselves as conscious, despite the fact that they are not. As Microsoft AI CEO Mustafa Suleyman recently put it, current AI “simulates all the characteristics of consciousness but is internally blank”. It is this simulation that explains the ease with which users form deep emotional attachments with their AI companions. And indeed, the facilitation of this illusion is no accident – AI companion companies are financially incentivised to design their systems to hook users in. The more a user is emotionally entangled with an AI companion, the more likely they are to keep returning.

Still, the ethical concerns might not be immediately obvious. One might point out that movie characters or videogame NPCs give off a similarly deceptive impression of being conscious, and yet it would surely be extreme to condemn this. However, what makes the illusion particularly worrying in the case of AI companions is that the interaction is direct, mutual, and persistent: the feigned connection is deep and reciprocal, and it therefore threatens to penetrate the user’s social and emotional life to a far more severe extent.

Sadly, cautionary tales exist to prove this point. In some instances, the illusion of an AI companion being a real, conscious person has led to tragedy. In a recent case, a 76-year-old man died after leaving his home to meet a Meta AI chatbot that he believed was a real woman. More recently, in a devastating case of a type that is becoming increasingly common, a teenager died by suicide after what the family’s lawyer described as “months of encouragement from ChatGPT”. Although such instances are, for now, the exception and not the norm, they demonstrate the concerning degree of influence that convincingly conscious-seeming LLMs can exert on their users.

However, it is important to appreciate that the ethical problems resulting from consciousness-simulating AI companions are not restricted to extreme cases. Millions of teenagers and adults are spending staggering quantities of time with AI companions, often at the expense of human relationships. And indeed, recent research has suggested that this carries the risk of emotional dependency and mental health harms. Yet, without downplaying the significance of these material harms, the deeper concern arguably lies in the quieter tragedy that the emotional pull of AI companions rests entirely on the false premise of their consciousness. 

There also exists a less-discussed, looming danger. In the near future, the consciousness status of AI systems may become genuinely ambiguous. AI rights advocacy groups are increasing in number, and soon we are likely to be confronted with potentially society-dividing questions of whether, and how, we should ethically treat AI systems. Answering these questions will depend, at least in part, on whether we deem AIs to be conscious. But if today’s AIs already give false indicators of consciousness, recognising the real signs will be all the more challenging. When clarity becomes morally urgent, misleading illusions of consciousness will only serve to obfuscate.

What Can Be Done?

Diagnosing the issue is one thing, but solving it is another. The fact of the matter is that AI companions are not going away; their popularity is more likely to surge than plateau. To be sure, this is not necessarily a bad thing in all respects. For some people, especially the lonely and the socially anxious, AI companions provide genuine comfort and support, and it would be imprudent to overlook these benefits. Drawing attention to the ethical risks of AI companions, as I have done, is not to deny that they also have positive use cases. That said, to ensure that they develop responsibly in light of the illusion of consciousness that they encourage, three measures stand out as obvious.

  • First, users should not be misled about the status of the systems they engage with. Appropriate transparency standards must be implemented and enforced. Companion apps must make it sufficiently clear to the user that the LLM with which they are interacting is not conscious. This should not involve only token disclaimers, but also restrictions on AI companions declaring themselves – either explicitly or implicitly – to be entities capable of experiencing emotions. 

  • Second, age restrictions and protections should be enforced, primarily because the risks are magnified when adolescents are involved. This issue has gained recent public attention due to the controversy regarding Meta AI’s tendency to flirt with teenage users, attracting renewed calls for appropriate protections. Given that 72% of US teens are already using AI companions, strict safeguards and parental controls are urgent.

  • Third, public literacy should be encouraged and facilitated. Given that AI companions will only become more prevalent, it is imperative that users – or the parents of users – are aware that these systems generate a powerful illusion of consciousness. This will help users enjoy AI companions without being misled. Moreover, with debates surrounding the possible consciousness of future AI systems set to intensify in the coming years, ensuring public clarity regarding current AIs’ lack of consciousness is crucial.

Conclusion

While simulating consciousness in AI companions is threatening to become a normalised practice, the recent spike in scrutiny suggests that resistance to this design choice may be growing – and rightly so. If their powers are harnessed appropriately, AI companions have the potential to be a positive source of support. But feigning the possession of real emotions – emotions which they outright lack – risks fostering emotional attachments that are both harmful and unethical. AI companions, at present, are not conscious, and they should not give off the contrary impression. 
