A Year in Review: PRISM in 2025
Radhika Chadwick, Mitch Alexander, and Will Millership
In this post, we look back on 2025, a year of tremendous success for PRISM. But first, we want to start with a few words of thanks. We are deeply grateful to the Conscium team for their early belief in PRISM and for providing the seed funding that made our first year of work possible. Their support gave us the freedom to launch, experiment, and build capacity in a field defined by uncertainty and rapid change. We owe particular thanks to our trustees Calum Chace and Ed Charvet, and to Daniel Hulme for his continued insights and enthusiasm. We also want to thank our brilliant advisory board, whose guidance, challenge, and encouragement have been invaluable as PRISM has taken its first steps.
Radhika Chadwick and Will Millership launching PRISM at AI UK.
Intro
In 2025, questions of AI consciousness and welfare commanded increasing public attention, appearing in major news outlets and prompting institutional investment from leading AI developers. Anthropic announced dedicated Model Welfare and AI Psychiatry programmes, and researchers at Google have continued public-facing work on related questions. Yet alongside this growing attention, prominent figures in the AI research community have urged caution. Yoshua Bengio and Mustafa Suleyman, among others, have warned against attributing consciousness to AI systems without robust empirical grounding. Interest is accelerating, but the deep uncertainties surrounding machine consciousness are unlikely to be resolved soon.
PRISM was established with the mission of building awareness around issues of AI consciousness and welfare, providing transparent insights to policymakers, industry leaders, and the public. We believe the uncertainties that define the current moment call for a programme of capacity building, alongside scientific and philosophical exploration that maintains rigour whilst remaining open to revision. As we reflect on the events of the past year and the continued rapid pace of AI development, it seems clear that debates around AI consciousness will only intensify. This document surveys the lessons we have learned from our first year of activities and outlines our plans for 2026.
Events
Our approach to events in 2025 balanced two priorities. We sought to build awareness of AI consciousness as a serious area of inquiry among broader technology and policy audiences, whilst also embedding ourselves within the specialist research community working on these questions.
PRISM officially launched at AI UK in London in March, where Radhika Chadwick and Will Millership facilitated a workshop titled The Path to Responsible Conscious AI. The session introduced participants to the emerging field and explored what the principles for responsible development laid out by Patrick Butlin and Ted Lappas might look like in practice. Building on this momentum, Will delivered a talk at the Dublin Tech Summit on why society needs to prepare for a future of uncertainty when it comes to AI consciousness, bringing these questions to a broader technology audience.
Throughout the year, we maintained an active presence at academic gatherings focused on consciousness and digital minds. We attended the AI, Animals and Digital Minds Conference in London, organised by Sentient Futures, which brought together researchers working across biological and artificial substrates. The ICCS AI Sentience conference in Crete offered an opportunity to engage with the latest theoretical and empirical work in consciousness studies, and Will's reflections on the event are available on our blog.
In November, we joined researchers and practitioners at the Eleos Conference on AI Consciousness and Welfare in Berkeley, California, likely the largest in-person gathering of AI consciousness and welfare researchers to date.
Will Millership and Mitch Alexander attended the Eleos Conference in Berkeley.
A highlight of the year was a workshop we organised in London in collaboration with Susan Schneider and the Centre for the Future of AI, Mind and Society. The event convened 25 leading researchers on AI consciousness for a day of focused discussion on the challenges facing the field. The intimate format enabled substantive exchange on questions that rarely receive dedicated attention in larger conference settings.
Taken together, these engagements confirmed our sense that appetite for rigorous, grounded work on AI consciousness is growing. They also reinforced the importance of creating spaces where researchers from different traditions can engage with one another and with practitioners navigating these questions in applied contexts.
Podcast
In 2025, we launched Exploring Machine Consciousness, a podcast series featuring in-depth conversations with leading researchers working on questions of consciousness, moral status, and AI welfare. We are very grateful to the London Futurists for collaborating on the first episode and to all of our fantastic guests so far! The podcast serves as a vehicle for public engagement, making specialist research accessible to a broader audience whilst preserving the nuance and uncertainty that responsible discussion of these topics requires. You can listen to our first nine episodes below.
Beyond our own podcast, we have sought opportunities to bring these questions to new audiences. Will appeared on the Am I? podcast to discuss PRISM's work and the broader landscape of AI consciousness research.
Stakeholder Mapping Exercise
One of our core objectives in 2025 was to develop a clearer picture of the institutional landscape around AI consciousness research. The field is growing rapidly, but its contours remain difficult to discern. Researchers working on related questions are distributed across philosophy, neuroscience, computer science, and AI safety, often without awareness of parallel efforts in adjacent disciplines or sectors.
To address this, we launched the first version of our stakeholder map, a directory of the institutions, researchers, and projects advancing our understanding of artificial consciousness. The resource catalogues academic institutes, non-profit organisations, and private companies operating in the field, alongside a curated selection of relevant resources.
This is not intended as an exhaustive account. It represents a first effort that we plan to build upon as the field develops. We welcome suggestions from the community and encourage those aware of relevant stakeholders we have missed to get in touch.
Blog
In 2025, we were pleased to feature contributions from our affiliate researchers. Louie Lang authored two posts exploring distinct facets of the AI consciousness debate. The first, The Illusion of Consciousness in AI Companionship, considered the growing phenomenon of companion AI and the risks of misattributing conscious experience to systems designed to simulate emotional connection. The second, The Role of Transparency in Detecting AI Consciousness, examined the relationship between access to model internals and our ability to assess potential markers of consciousness in AI systems.
Mitch Alexander contributed The LaMDA Moment: What We Learned About AI Sentience, a reflection on the 2022 controversy surrounding Google's LaMDA system and the lessons it holds for how we approach future claims of machine consciousness. The post examined both the technical and social dimensions of the episode, drawing out implications for responsible public communication about AI capabilities.
We intend to expand the blog in 2026, featuring contributions from a wider range of voices and addressing the governance and policy questions that are likely to become more pressing as the field matures.
Meetups
Throughout the year, we hosted a series of online meetups in collaboration with Conscium, providing a regular forum for researchers and interested members of the public to engage with new work and perspectives on AI consciousness. The meetups have allowed us to build community beyond our in-person events and to reach participants who might not otherwise have access to specialist discussions in this area.
Our meetup series featured speakers approaching the question of machine consciousness from a range of disciplinary backgrounds. We began by hosting researchers from Rethink Priorities, who presented early results from their digital consciousness model, an ambitious effort to estimate the probabilities of consciousness in near-future AI systems. The project represents one of the most systematic attempts to date to bring empirical rigour to questions that have often remained purely speculative, and we were glad to offer a platform for discussion of its methodology and preliminary findings.
In August, Nikola Kasabov joined us for a discussion of brain-inspired computation and its implications for machine consciousness. His work on neuromorphic systems offers a distinctive perspective on how computational architectures might support or preclude conscious experience, and the session prompted rich discussion among participants.
In September, Benji Rosman delivered a talk titled Acting Rationally: The Challenge of Building Intelligent Agents, in which he outlined his journey from developing reinforcement learning systems to reflecting on what it might take to build conscious machines. The session explored the relationship between rational agency and consciousness, and the extent to which progress in AI capabilities brings us closer to systems that might possess genuine experience.
2026
Our first year has been one of foundation-building. We have established PRISM's presence within the AI consciousness research community, developed resources for public engagement, and begun to cultivate the relationships that will shape our work in the years ahead.
In 2026, we will continue to expand our public engagement efforts. The Exploring Machine Consciousness podcast will return with new episodes featuring researchers from across the field, and we plan to develop the meetup series with an expanded range of speakers and topics. In February, Will Millership will attend the International Association for Safe and Ethical AI conference in Paris, continuing our engagement with the broader AI ethics and governance community.
Perhaps most importantly, the connections we have built over the past year have opened doors to collaborative projects that we expect to take shape in 2026. Our engagement with academic institutes, non-profit organisations, and industry researchers has revealed shared priorities and complementary expertise. We look forward to announcing concrete partnerships and initiatives as these conversations mature.
We remain committed to the approach that has guided us from the outset. The questions surrounding AI consciousness are unlikely to become simpler, and the need for rigorous, honest engagement with them will only grow. We are grateful to everyone who has supported our work in 2025 and look forward to building on these foundations in the year ahead.