Are current LLMs conscious?

Many people are tempted to believe that large language models (LLMs), such as ChatGPT, Gemini, and Mistral, might be conscious. This is understandable – they produce remarkably human-like text and hold fluent, natural-sounding conversations.

However, the vast majority of experts in fields like AI research, neuroscience, philosophy, and software engineering consider it extremely unlikely that today’s LLMs are conscious. On this view, LLMs process statistical patterns in language without subjective experience, emotion, or awareness – there is nothing it is like to be an LLM.

It’s certainly possible that future AI systems could achieve forms of consciousness. Conscium was founded partly to address the important safety questions such developments would raise – for both humans and machines.

For now, though, LLMs lack the complex biological or computational structures that appear essential for conscious experience. Nor do they consistently display the behaviors – such as self-awareness, intentional understanding, or unified perception – that we associate with being conscious.