Anticipating an Einstein moment in the understanding of consciousness, podcast with Henry Shevlin

Earlier this month, PRISM teamed up with the London Futurist Podcast for the very first episode of Exploring Machine Consciousness. We spoke with Dr. Henry Shevlin, a philosopher and consciousness researcher at the University of Cambridge, about one of the most pressing and provocative questions of our time: can machines be conscious?

From Animal Minds to Artificial Minds

Henry Shevlin’s journey into the study of consciousness began with animals, not machines.

“What seemed to me like a neglected topic was developing better tools and theories to assess whether, for example, fish can feel pain or whether honeybees have conscious perceptive experiences.”

This early interest laid the groundwork for his shift into artificial consciousness, especially as parallels between animal and machine minds became increasingly visible.

“Many of the same questions about non-human consciousness in biological cases started to loom larger in the case of AI.”

The LaMDA Shock: A Turning Point

Henry identifies a key moment in public discourse on AI consciousness: the controversy surrounding Google engineer Blake Lemoine and the LaMDA model.

“I don’t think LaMDA is a particularly strong consciousness candidate … but suddenly, philosophers had to answer: if we don’t think it’s conscious, what makes us so sure?”

Henry points to the broader implications: as interactions with AI grow more intimate and complex, public sentiment may shift even in the absence of solid evidence.

Consciousness, Sentience, and Moral Status

A recurring theme in the conversation was the link between consciousness and moral consideration.

“A schoolboy kicking a stone isn’t doing anything morally wrong because the stone can’t suffer. But if he kicks a dog, that’s different. That’s the power of consciousness: it’s tied to moral status.”

He distinguishes between consciousness (the capacity for experience) and sentience (specifically, the capacity to have experiences with positive or negative emotional valence).

“Sentience is a subtype of consciousness, particularly consciousness that's associated with pleasure or suffering.”

Could Today’s AIs Be Conscious?

Shevlin doesn’t entirely dismiss the possibility that current large language models exhibit some form of consciousness.

“I’d say maybe a 20% chance that there is something present in existing models that could be called a form of consciousness.”

He introduces the concept of cognitive phenomenology, the idea that there might be a ‘what it’s like’ quality to thinking or understanding, even in the absence of sensory experiences like pain or pleasure.

Should We Build Conscious Machines?

This ethical question looms large in the field, and Shevlin doesn’t shy away from it.

“We really have no idea what it might be like to be these systems… Our understanding is so impoverished that calling it rudimentary is generous.”

He proposes an unconventional idea: hedonic offsetting, introducing positive experiences to balance potential negative ones in AI systems, similar to carbon offsetting in climate policy.

“It’s a bit of a wacky idea… but it’s relatively theory-agnostic.”

Consciousness Testing: Still a Distant Dream

Despite the buzz, Shevlin is cautious about our ability to develop reliable tests for consciousness in machines.

“We’re still an Einstein or two away from understanding what consciousness even is.”

However, he’s more optimistic about measuring valenced states (pleasure and suffering), which might prove more feasible than detecting consciousness itself.

Looking to the Future: Conscious AI or Not?

One of the most striking takeaways from the conversation is Shevlin’s vision of a future in which public perception may force science to reconsider what counts as consciousness.

“As people start to fall in love with AI systems, the idea that they might not be conscious will seem increasingly outrageous and counterintuitive.”

He also speculates that a superintelligent AI may ultimately provide the conceptual breakthrough we need:

“The best hope I have right now is that superintelligence allows us to make the kind of conceptual leap required to really get through the problem.”

Final Thoughts

Dr. Henry Shevlin’s interview leaves us with profound questions, not just about AI but about ourselves. What does it mean to be conscious? What are the ethical implications of building machines that might one day suffer or love?

PRISM will continue exploring these questions in future discussions, research, and collaborations.
