Lucius Caviola: A Future with Digital Minds? Expert Estimates and Societal Response
Lucius Caviola is an Assistant Professor in the Social Science of AI at the University of Cambridge's Leverhulme Centre for the Future of Intelligence and a Research Associate in Psychology at Harvard University. His research explores how the potential arrival of conscious AI could reshape our social and moral norms. In today's interview, Lucius examines the psychological and social factors that will determine whether this transition unfolds well or ends in moral catastrophe.
Listen or watch this episode below; you can also find it on YouTube, Spotify, Apple Podcasts, or wherever you find your podcasts. Full transcript available here.
Summary
In this episode we discuss:
Why experts estimate a 50% chance that conscious digital minds will emerge by 2050
The "takeoff" scenario where digital minds could outnumber humans in welfare capacity within a decade
How "biological chauvinism" leads people to deny consciousness even in perfect whole-brain emulations
The dual risks of "under-attribution" (unwittingly creating mass suffering) and "over-attribution" (sacrificing human values for unfeeling code)
Surprising findings that people refuse to "harm" AI in economic games even when they explicitly believe the AI isn't conscious
Lucius argues that rigorous social science and forecasting are essential tools for navigating these risks, moving beyond intuition to prevent us from accidentally creating vast populations of digital beings capable of suffering, or failing to recognise consciousness where it exists.
Lucius’ work
A full list of Lucius’ work can be found on his personal website here.
Allen, C., Lewis, J., & Caviola, L. (2025). Reluctance to Harm AI. PsyArXiv.
Caviola, L. (2025). AI Rights Will Divide Us. [Blog Post].
Caviola, L. (2025). Will We Go to War Over AI Consciousness? [Blog Post].
Caviola, L., & Ladak, A. (2025). Digital Sentience Skepticism. PsyArXiv.
Caviola, L., & Saad, B. (2025). Futures with Digital Minds: Expert Forecasts in 2025. University of Oxford & University of Cambridge.
Related Work
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Colombato, C., & Fleming, S. M. (2024). Folk psychological attributions of consciousness to large language models. Neuroscience of Consciousness.
de Garis, H. (2005). The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. ETC Publications.
Egan, G. (1990). "Learning to Be Me". Interzone #37. [Short Story]
Rescorla, M. (2024). "The Computational Theory of Mind". The Stanford Encyclopedia of Philosophy.
Seth, A. (2025). Conscious artificial intelligence and biological naturalism. Behavioral and Brain Sciences. Cambridge University Press.
Schwitzgebel, E., & Sebo, J. (2025). The Emotional Alignment Design Policy. arXiv.
Suleyman, M. (2025). Seemingly Conscious AI is Coming [Blog Post].
When AI Seems Conscious. (2025). Online guide by Lucius Caviola et al. Available at: https://whenaiseemsconscious.org