PRISM in 2025: A Year In Review

Radhika, Will and Mitch reflect on 2025 as a foundational year for PRISM, marked by deep engagement with the AI consciousness research community and the creation of new platforms for collaboration and public understanding.

Exploring Machine Consciousness

This year, the issue of AI consciousness and digital minds has exploded. In this blog, Will Millership outlines some of his favourite picks of 2025, covering research, videos, podcasts, books, press and more.

The Digital Minds Newsletter

Will Millership co-authored the first edition of The Digital Minds Newsletter with Lucius Caviola and Brad Saad. The newsletter collates all the latest news and research on digital minds, AI consciousness, and moral status to help you stay on top of the most important developments in this emerging field.

Computational Functionalism Debate

Chris Percy, Director of the Co-Sentience Initiative and the next guest on Exploring Machine Consciousness, has released a website mapping arguments for and against computational functionalism.

Call for Expressions of Interest: NYU Mind, Ethics, and Policy Summit 2026

The NYU Center for Mind, Ethics, and Policy is hosting a two-day summit on April 10-11, 2026. Discussion topics will center on the consciousness, sentience, agency, moral status, legal status, and political status of nonhumans, with a special focus on invertebrates and AI systems.

A selection of recent interesting research, resources, and news articles looking at AI consciousness, collated by Moheet Khawaja.

Research

Sebo (2025). This article argues that current frameworks for risk and uncertainty imply we must extend legal personhood and moral inclusion to insects and future AI systems to prevent catastrophic ethical errors.

Butlin et al. (2025). This consensus paper proposes an indicator method for assessing AI consciousness, auditing system architectures against six leading theories of consciousness.

Berg et al. (AE Studio, 2025). This research demonstrates that when Large Language Models are prompted to engage in recursive "self-referential processing," they consistently generate structured reports of subjective experience.

News Articles & Essays

Gizmodo (2025). This piece synthesizes the views of three leading thinkers: Megan Peters, Anil Seth, and Michael Graziano. They explain that a definitive test for AI consciousness is likely impossible because we lack "ground truth" for subjective experience.

Jeff Sebo and Andreas Mogensen argue for a "probabilistic" approach to ethics, suggesting that even if there is only a 10% chance a being (like an ant or an AI) is sentient, the high moral stakes mandate that we treat it with a degree of concern proportional to that probability.

Schneider. This article argues that while current LLMs are likely just "crowdsourced neocortices" mimicking human concepts, we are entering a dangerous "Grey Zone" with neuromorphic and biological AI that may possess genuine sentience.

Video Lectures & Podcasts

Schwitzgebel (Mind-Body Solution, 2025). Schwitzgebel argues we are entering an "Epistemic Fog" in which we will build AI systems that seem conscious long before we have the science to determine whether they are, leading to a potential moral crisis.

The AI Risk Network (Am I? Podcast, 2025). Cameron Berg discusses the "Ohio Bill" that attempts to legally declare AI "non-sentient." Berg and the hosts debate the futility of trying to "outlaw" consciousness and the dangers of using legislation to solve philosophical problems.

Chalmers (IMICS, 2025). In this keynote, David Chalmers introduces the concepts of "Quasi-Beliefs" and "AI Identity Threads." He argues that even if LLMs aren't fully conscious yet, they are "quasi-subjects" that persist through "threads" of memory.

Amanda Askell is a philosopher at Anthropic who focuses on the development of Claude's character. In this session, she addresses a critical question for the future of AI: if future models are trained on the history of how we treated early AI, what will they learn about us?

Thank you for catching up with PRISM’s ongoing activities. Feel free to reach out if you want to learn more about our work.

Regards,
The PRISM Team