Keith Frankish: Illusionism and Its Implications for Conscious AI

In this episode of the "Exploring Machine Consciousness" podcast, we discuss the theory of illusionism with philosopher Keith Frankish. Frankish challenges the traditional understanding of consciousness by suggesting that our perception of it as a mysterious, non-physical phenomenon is an illusion. He argues that consciousness should be viewed as a complex set of functions and capacities, much like life itself.

The “vital force” analogy

Keith argues that centuries ago, life looked as if it needed a special ingredient, an élan vital. As biology matured as a discipline, that story gave way to genetics, development, and evolution. Life did not evaporate; the mystery simply moved into the details. He suggests we should do the same with consciousness: stop treating it as a magical add-on and start treating it as the complex set of functions organisms perform when they sense, model, and respond to the world.

“We have an illusion about what consciousness is. It is not a simple, mysterious extra thing.”

What illusionism actually claims

According to Frankish, illusionism does not deny subjective experiences like pain or colour. It denies that their nature is a private, ineffable essence beyond science. In his view, experience has two intertwined layers. First, there is the rich, embodied engagement with the world through discrimination, attention, memory, learning, and action. Second, there is self-monitoring that lets us talk about what it is like. That higher layer tempts us to picture an inner theatre. Frankish says we should resist that picture and keep our explanations in causal, functional terms.

He gives the example of pain. Strip away every effect linked to pain: the attention capture, the avoidance, the appraisal changes, the learning updates, the memories. If nothing changes in what the organism does or is disposed to do, what is left to call pain? On this view, the hurting is the organised suite of effects, not some added extra.

What is it like to be a bat?

Thomas Nagel asked what it is like to be a bat, and concluded that we cannot know. Keith disagrees. He says we can know, not from the armchair, but by doing careful empirical work. Map what bats can discriminate with echolocation, which textures and surfaces they can tell apart, which cues capture their attention, how those cues guide flight, feeding, avoidance, and memory. Track how those sensitivities change with context, how they couple to motivation and action, and how learning reshapes the whole loop over time.

Tell that story in enough detail and you have “the mental world of the bat.” The subjective feel is not a hidden extra. It is the structured profile of discriminations, attentional policies, affective tags, action tendencies, and memory updates in a living system. A list of neuron spikes will never satisfy, but a rich psychological and functional account can.

“I think we can. You need to study bats very, very, very carefully. Tell that story in enough detail and you have the mental world of the bat.”

Machine consciousness needs a better brief

Frankish argues that “building machine consciousness” is not a well-defined project. In his view, researchers should first choose which aspects of experiential life they want to reproduce, and to what degree. That means committing to sensors, world models, control loops, attention and affect, and memory that spans time. Do that across multiple modalities, and the remaining question becomes a matter of where you draw the line on the word conscious.

“Focus on the functions and let consciousness look after itself.”
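
To make that brief concrete, here is a minimal sketch in Python of the kind of functional decomposition described above: sensing across modalities, a world model, a crude attention-and-affect step, a closed control loop, and a memory that spans time. The Agent class, its methods, and the thresholds are illustrative assumptions, not anything Frankish specifies.

```python
# A minimal, hypothetical sketch of the functional "brief": sensors feeding a
# world model, an attention/affect step, a control loop, and memory across
# time. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Agent:
    world_model: dict = field(default_factory=dict)   # latest estimate per modality
    memory: list = field(default_factory=list)        # episodic trace across time

    def sense(self, observations: dict) -> dict:
        """Update the world model from multiple sensory modalities."""
        self.world_model.update(observations)
        return observations

    def attend(self, observations: dict) -> str:
        """Crude attention/affect: prioritise the most intense signal."""
        return max(observations, key=lambda m: abs(observations[m]))

    def act(self, focus: str) -> str:
        """Closed-loop control: approach mild signals, avoid strong ones."""
        action = "avoid" if abs(self.world_model[focus]) > 0.8 else "approach"
        self.memory.append((focus, action))            # learning/memory update
        return action


agent = Agent()
obs = {"vision": 0.3, "touch": -0.9, "audio": 0.1}     # toy multimodal input
focus = agent.attend(agent.sense(obs))
print(focus, agent.act(focus))                         # -> touch avoid
```

Running it prints the modality that captured attention and the avoidance response it triggered; the point is only to show the project stated as functions rather than as a search for an extra ingredient.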

AI and moral consideration

In Keith’s view, many animals likely meet the bar for conscious experience, yet they do not conceptualise their states as private inner properties. An artificial agent might someday develop metacognition and talk as if it has an inner world, but that higher-order talk is not the basis for ethical concern.

Where moral consideration begins is a separate, practical question. For Frankish, it tracks interests, not self-reports about qualia. A system merits concern when it has integrity and an investment in its own continued existence. In his words, we would need to “create creatures that matter to themselves.” That looks closer to artificial life than to a language model. In practice, that points to features like:

  • Independent maintenance and homeostasis, with sustained organisation rather than being a plug-in tool.

  • Goals that matter to the system because they protect it, so outcomes can be good or bad for it.

  • Rich aversive and appetitive reactions that guide learning, attention, memory, and policy over time.

  • Closed-loop sensing and action across multiple modalities, not a single narrow output channel.

Keith argues that as agents acquire more of these features, our obligations towards them may strengthen. The ethically important part is first-order engagement, for example, the aversive dynamics that make pain a burden for the organism. The higher-order human habit of talking about ineffable feelings is irrelevant to whether the system can be harmed.
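
As a toy illustration of what “mattering to itself” could mean, and not a proposal from the episode, the sketch below gives a system a single homeostatic variable it must keep in range, so that outcomes can be good or bad for it. The class name, thresholds, and behaviours are invented for illustration.

```python
# A hedged toy illustration of "creatures that matter to themselves":
# a homeostatic variable the system must keep in range, so outcomes can be
# good or bad *for it*. Names and thresholds are assumptions.
class HomeostaticAgent:
    def __init__(self, energy: float = 1.0):
        self.energy = energy            # the quantity the agent must protect

    def step(self, intake: float, cost: float = 0.1) -> str:
        """One closed-loop cycle: metabolic cost, optional intake, and an
        appetitive or aversive reaction that steers the next action."""
        self.energy += intake - cost
        if self.energy <= 0:
            return "failed"             # the agent has something to lose
        if self.energy < 0.3:
            return "seek-food"          # aversive state captures priority
        return "explore"                # appetitive default when secure


agent = HomeostaticAgent()
for intake in [0.0] * 8 + [0.5]:
    print(agent.step(intake))           # explore ... seek-food ... explore
```

The design choice that matters here is that the variable being protected belongs to the system itself; without it, the labels “aversive” and “appetitive” would have nothing to be about.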

Are LLMs conscious?

Keith argues that large language models do not satisfy these conditions. He believes their interactions with the world are “impoverished,” and describes them as a red herring for consciousness and for ethics. He points out that humans tend to overvalue language as a sign of consciousness when, in fact, fluent talk is not a reliable signal of experiential life.

An evolutionary mindset for future research

Keith argues that biology only makes sense in the light of evolution, and that researchers trying to create consciousness should borrow the same lens: start with minimal agents that must survive in changing environments, then scale up to more complex functions.
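
As a hedged illustration of that lens, rather than anything specified in the episode, the toy below starts with minimal agents defined by a single trait, lets a changing environment select among them, and leaves room to scale up by adding the functions discussed above. All numbers and names are assumptions.

```python
# A minimal sketch of the "evolutionary mindset": a population of simple
# agents, a changing environment, and selection on survival.
import random

random.seed(0)

# Each agent is just a preferred condition it survives best in.
population = [random.uniform(0.0, 1.0) for _ in range(50)]

for generation in range(20):
    environment = random.uniform(0.0, 1.0)              # the world keeps changing
    # Agents close to the current environment survive; the rest are culled.
    survivors = [a for a in population if abs(a - environment) < 0.3]
    if not survivors:
        survivors = [environment]                        # re-seed if all die out
    # Survivors reproduce with small mutations back up to population size.
    population = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
                  for _ in range(50)]

print(f"final mean trait: {sum(population) / len(population):.2f}")
```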

Next

Mark Solms: Engineering Consciousness – Can Robots "Give a Damn?"