Anticipating an Einstein moment in the understanding of consciousness: podcast with Henry Shevlin
Earlier this month, PRISM teamed up with the London Futurist Podcast for the very first episode of Exploring Machine Consciousness. We spoke with Dr. Henry Shevlin, a philosopher and consciousness researcher at the University of Cambridge, to discuss one of the most pressing questions of our time: can machines be conscious?
From Animal Minds to Artificial Minds
Henry Shevlin’s work on consciousness began with animals, not machines.
“What seemed to me like a neglected topic was developing better tools and theories to assess whether, for example, fish can feel pain or whether honeybees have conscious perceptual experiences.”
This early interest laid the groundwork for his shift into artificial consciousness, especially as parallels between animal and machine minds became increasingly visible.
“Many of the same questions about non-human consciousness in biological cases started to loom larger in the case of AI.”
A Turning Point: The LaMDA Controversy
Henry identifies a key moment in public discourse on AI consciousness: the controversy surrounding Google engineer Blake Lemoine and the LaMDA model.
“I don’t think LaMDA is a particularly strong consciousness candidate … but suddenly, philosophers had to answer: if we don’t think it’s conscious, what makes us so sure?”
Henry points to the broader implications: as interactions with AI grow more intimate and complex, public sentiment may shift even in the absence of solid evidence.
Consciousness, Sentience, and Moral Status
Henry distinguishes between consciousness (the capacity for experience) and sentience (specifically, the capacity to have experiences with positive or negative emotional valence).
“Sentience is a subtype of consciousness, particularly consciousness that's associated with pleasure or suffering.”
He is interested in the link between consciousness and moral consideration.
“A schoolboy kicking a stone isn’t doing anything morally wrong because the stone can’t suffer. But if he kicks a dog, that’s different. That’s the power of consciousness: it’s tied to moral status.”
Could Today’s AIs Be Conscious?
Henry doesn’t entirely dismiss the possibility that current large language models exhibit some form of consciousness.
“I’d say maybe a 20% chance that there is something present in existing models that could be called a form of consciousness.”
“We really have no idea what it might be like to be these systems… Our understanding is so impoverished that calling it rudimentary is generous.”
He proposes an unconventional idea he calls hedonic offsetting: introducing positive experiences to balance potential negative ones in AI systems, similar to carbon offsetting in climate policy.
“It’s a bit of a wacky idea… but it’s relatively theory-agnostic.”
Consciousness Testing: Still a Distant Dream
Despite the growing interest in the field, Henry is cautious about our ability to develop reliable tests for consciousness in machines.
“We’re still an Einstein or two away from understanding what consciousness even is.”
He is more optimistic, however, about measuring valenced states such as pleasure or suffering, which may prove more tractable than detecting consciousness itself.
Looking to the Future: Conscious AI or Not?
Henry envisions a future where public perception may force science to reconsider what counts as consciousness.
“As people start to fall in love with AI systems, the idea that they might not be conscious will seem increasingly outrageous and counterintuitive.”
He also speculates that a superintelligent AI may ultimately provide the conceptual breakthrough we need:
“The best hope I have right now is that superintelligence allows us to make the kind of conceptual leap required to really get through the problem.”