Mapping the Field of Artificial Consciousness
We are aiming to build a directory of the institutions, researchers, and projects advancing our understanding of artificial consciousness.
Below is a list of academic institutes, non-profits, and private companies operating in the field, along with a variety of resources.
This is not an exhaustive list; it is the first version that we intend to build upon.
If we have missed an important stakeholder, please contact us and we will add them.
Academic Institutes
-
Research Focus Overview
The Center for the Future of AI, Mind & Society (AIMS) is a multi-disciplinary hub where thought leaders in philosophy, complex systems, artificial intelligence, neuroscience, political science, and other fields come together to analyze vital scientific, societal, and ethical issues.
People
Susan Schneider, Director
Garrett Mindt, Program Director: Future of Consciousness Initiative
Full staff listing.
Selected Research
Schneider, S. (2025). Chatbot Epistemology. Social Epistemology, 39(5), 570–589.
Public Engagement
Mindfest 2025 recordings.
Video and podcast recordings.
-
Research Focus Overview
The Centre for Research Ethics & Bioethics (CRB) at Uppsala University is a research institution exploring current ethical issues through philosophical, empirical, and normative approaches. The Centre contains a dedicated research group focusing on Brain, Consciousness & Artificial Intelligence, which investigates the ethical, social, and philosophical questions raised by AI, neuroscience, consciousness, and the idea of digital twins. Its work contributes knowledge for policy, management, and innovation.
People
Kathinka Evers, Researcher
Michele Farisco, Researcher
Full staff list.
Selected Research
The Centre's research focuses on the intersection of consciousness studies, neuroscience, and AI ethics, particularly addressing the achievability and implications of artificial consciousness:
Evers, K., Farisco, M., Chatila, R., Earp, B.D. et al. (2025). Preliminaries to artificial consciousness: a multidimensional heuristic approach. Physics of Life Reviews, 180-193.
Farisco, M., Evers, K., & Changeux, J.-P. (2024). Is artificial consciousness achievable?: Lessons from the human brain. Neural Networks.
Farisco, M., Evers, K., & Salles, A. (2022). On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence. Neuroethics.
-
Research Focus Overview
The Brain, Mind & Consciousness program is part of the following CIFAR Impact Clusters: Decoding Complex Brains and Data, and Shaping the Future of Human Health. CIFAR’s research programs are organized into five distinct Impact Clusters that address significant global issues and are committed to fostering an environment in which breakthroughs emerge.
People
Tim Bayne, Program Co-Director
Liad Mudrik, Program Co-Director
Anil Seth, Program Co-Director
Selected Research
Seth, A. K., & Bayne, T. (2022). Theories of consciousness. Nature Reviews Neuroscience, 23, 439–452.
Seth, A. K. (2025). Conscious artificial intelligence and biological naturalism. Behavioral and Brain Sciences.
-
Research Focus Overview
The Graziano lab focuses on a mechanistic theory of consciousness, the Attention Schema Theory (AST). The theory seeks to explain how an information-processing machine such as the brain can insist it has consciousness, describe consciousness in the magicalist ways that people often do, assign a high degree of confidence to those assertions, and attribute a similar property of consciousness to others in a social context. AST is about how the brain builds informational models of self and of others, and how those models create physically incoherent intuitions about a semi-magical mind, while at the same time serving specific, adaptive, cognitive uses.
Selected Research
Graziano, M. S. (2017). The attention schema theory: A foundation for engineering artificial consciousness. Frontiers in Robotics and AI, 4, 60.
-
Research Focus Overview
The Leverhulme Centre for the Future of Intelligence is a highly interdisciplinary research centre at the University of Cambridge, focused on addressing the challenges and opportunities posed by artificial intelligence (AI). LCFI's work covers AI safety, governance, societal impact, and foundational questions about the nature of intelligence. It houses a dedicated program on "Consciousness and Intelligence," which investigates the potential for consciousness to emerge in AI systems and the ethical and moral considerations that would follow, aligning directly with the field of AI welfare.
People
Henry Shevlin, Associate Director (Education) and Co-Director
Marta Halina, Senior Research Fellow (key researcher in Consciousness and Intelligence project)
Lucius Caviola, Assistant Professor
Selected Research
Shevlin, H. (2025). How Could We Know When a Robot Was a Moral Patient? Cambridge Quarterly of Healthcare Ethics.
Caviola, L., Sebo, J., & Birch, J. (2025). What will society think about AI consciousness? Lessons from the animal case. Trends in Cognitive Sciences.
Wandrey, M. M., & Halina, M. (2025). Sentience and society: Towards a more values-informed approach to policy. Mind & Language.
-
Research Focus Overview
The Center for Mind, Brain, and Consciousness at New York University is dedicated to exploring fundamental issues in the mind-brain sciences through a deeply interdisciplinary lens, incorporating philosophy, psychology, neuroscience, linguistics, computer science, and other fields. The Center brings together leading researchers to investigate questions of consciousness, cognition, and their neural substrates, with recent work extending to questions about consciousness in artificial systems.
People
Ned Block, Director
David Chalmers, Director
Selected Research
Chalmers, D. J. (2023). Could a large language model be conscious? arXiv preprint
-
Research Focus Overview
New York City, USA. The NYU Center for Mind, Ethics, and Policy conducts and supports foundational research on the nature and intrinsic value of nonhuman minds, including biological and artificial minds.
Website: https://sites.google.com/nyu.edu/mindethicspolicy/home
People
Jeff Sebo, Director
Toni Sims, Researcher
Full staff list.
Selected Research
Caviola, L., Sebo, J., & Birch, J. (2025). What will society think about AI consciousness? Lessons from the animal case. Trends in Cognitive Sciences.
Long, R., Sebo, J., & Sims, T. (2025). Is there a tension between AI safety and AI welfare? Philosophical Studies, 1–29.
Long, R., et al. (2024). Taking AI welfare seriously. arXiv preprint arXiv:2411.00986.
Public Engagement
Events (upcoming events and previous recordings).
Media and public writing.
-
Research Focus Overview
The Jeremy Coller Centre for Animal Sentience is housed at the London School of Economics and Political Science (LSE). The Centre is building a unique interdisciplinary community—spanning philosophy, veterinary medicine, computer science, AI, economics, and law—to advance the science of animal sentience. Its core mission is to translate this emerging science into better policies, laws, and practices for animal welfare.
People
Professor Jonathan Birch, Centre Director and Principal Investigator (specialises in the philosophy of the biological sciences).
Selected Research (Planned/Ongoing Work)
The Centre is currently focused on the priority area of Animals and AI to ensure that nonhuman animals are not forgotten as AI use expands. Key activities include:
Clarifying Ethical Principles: Expanding on existing work (including an article by Professor Birch) that highlights the dangers of automated AI in farming. The work clarifies the ethical basis of guiding principles for the responsible use of AI in relation to wild, companion, and farmed animals.
Stakeholder Consultation: Conducting systematic consultations with policymakers, tech developers, advocates, and the animal agricultural sector to develop a shared code of practice.
Real-World Impact: Translating research into user-friendly resources and accessible guides to help regulators and industries apply ethical principles in their everyday use of AI.
-
Research Focus Overview
The UCL Institute of Cognitive Neuroscience (ICN) is a vibrant, interdisciplinary research center that brings together disciplines like psychology, neurology, and anatomy. The ICN hosts the MetaLab, a cognitive computational neuroscience lab led by Professor Steve Fleming and dedicated to understanding the computational and neural basis of subjective experience and self-awareness. MetaLab’s research focuses on the intimate links between metacognition and consciousness, often using computational models to develop predictions relevant to both biological and artificial systems.
People
Steve Fleming, Professor of Cognitive Neuroscience
Anna Ciaunica, Honorary Research Fellow
Full staff list.
Selected Research
The ICN’s research, particularly from MetaLab, applies computational rigor to foundational questions of consciousness and its link to AI and AI welfare:
Colombatto, C., Birch, J., & Fleming, S. M. (2025). The influence of mental state attributions on trust in large language models. Communications Psychology, 3(1), 1–7. (Examines how attributing mental states, such as consciousness, affects user trust and interaction with AI.)
Colombatto, C., & Fleming, S. M. (2024). Folk psychological attributions of consciousness to large language models. Neuroscience of Consciousness, 2024(1). (A paper engaging with public and intuitive perceptions of AI consciousness.)
Fleming, S. M., et al. (2025). Unpacking the complexities of consciousness: Theories and reflections. Neuroscience & Biobehavioral Reviews, 170, 106053.
Mudrik, L., et al. (2024). Tests for consciousness in humans and beyond. Trends in Cognitive Sciences. (A highly collaborative paper focusing on methods for detecting consciousness in non-human systems, relevant to AI and animal welfare, co-authored by S. M. Fleming).
-
Research Focus Overview
The University of Cape Town’s Neuroscience Institute hosts the work of Professor Mark Solms, Director of Neuropsychology in the UCT Neuroscience Institute. Solms’ research follows an interdisciplinary practice perspective known as neuropsychoanalysis, aiming to integrate psychoanalytic concepts (like affect and subjective experience) with contemporary neuroscience. His work is centrally relevant to AI consciousness due to his current focus on brainstem mechanisms of consciousness and the foundational role of feelings/emotions as the source of subjectivity, which raises direct implications for the potential sentience and welfare of future AI systems.
People
Professor Mark Solms, Director of Neuropsychology, UCT Neuroscience Institute; Chair of Neuropsychology, UCT and Groote Schuur Hospital.
Selected Research
Solms, M. (2021). The Hidden Spring: A Journey to the Source of Consciousness.
Solms, M., & Turnbull, O. (2002). The Brain and the Inner World: An Introduction to the Neuroscience of Subjective Experience.
-
Research Focus Overview
The University of Oxford previously hosted both the Future of Humanity Institute (FHI) and the Global Priorities Institute (GPI), research centers that served as catalysts for work on AI welfare. Many former researchers from these institutes went on to key research roles at other organisations working on AI consciousness and welfare. Following the closure of both FHI and GPI, research into these topics continues primarily within the university's Faculty of Philosophy, where scholars examine fundamental questions about moral agency, ethical treatment of AI systems, and the potential risks and implications of digital consciousness.
People
Andreas Mogensen, Senior Research Fellow
Bradford Saad, Senior Research Fellow
Selected Research
Bradley, B., & Saad, B. (forthcoming). Varieties of moral agency and risks of digital dystopia. American Philosophical Quarterly.
Bradley, B. & Saad, B. (forthcoming). AI alignment vs AI ethical treatment: Ten challenges. Analytic Philosophy.
Bales, A. (2025). Against willing AI servitude. The Philosophical Quarterly.
Saad, B., & Caviola, L. (2025). Digital minds takeoff scenarios.
-
Research Focus Overview
The Sussex Centre for Consciousness Science (SCCS) at the University of Sussex is an interdisciplinary centre committed to advancing the scientific and philosophical understanding of consciousness. The Centre explicitly aims to use these insights for the benefit of society, medicine, and technology (including AI), positioning it as a foundational hub in the field.
People
Anil Seth, Director
Adam Barrett, Deputy Director (Informatics)
Jenny Bosten, Deputy Director (Psychology)
Sarah Sawyer, Deputy Director (Philosophy)
James Stone, Deputy Director (BSMS)
Full staff list.
Selected Research
The Centre's research provides influential theoretical and empirical work on consciousness, with direct implications for the field of artificial consciousness and its assessment:
Seth, A. (2025). Conscious artificial intelligence and biological naturalism. Behavioral and Brain Sciences. (A paper engaging directly with the theoretical possibility and constraints of AI consciousness.)
Klincewicz, M., et al. (2025). [Comment] What makes a theory of consciousness unscientific? Nature Neuroscience. (Contributes to the meta-scientific debate on how to approach consciousness research.)
Bayne, T., Seth, A., et al. (2024). Tests for consciousness in humans and beyond. Trends in Cognitive Sciences. (A collaborative paper focusing on methods for detecting consciousness in non-human systems, relevant to AI and animal welfare).
Seth, A. (2021). Being You: A New Science of Consciousness. (A popular science book that outlines the Centre's core predictive processing framework of consciousness for a wide audience.)
Non-Profits
-
Research Focus Overview
The Association for Mathematical Consciousness Science (AMCS) is an international association of scientists and philosophers devoted to mathematical topics in the scientific study of consciousness. It aims to further the development of mathematical approaches in this field, known as Mathematical Consciousness Science (MCS).
People
Lenore Blum, President
Selected Research
Blum, L. & Blum, M. (2024). AI Consciousness is Inevitable: A Theoretical Computer Science Perspective. Preprint at arXiv.
More journal articles can be found here.
Public Engagement
Event listings.
-
Research Focus Overview
California, USA. The California Institute for Machine Consciousness (CIMC) is a non-profit, transdisciplinary research initiative dedicated to developing and validating testable theories of machine consciousness. It integrates insights from philosophy, psychology, neuroscience, mathematics, the arts, and AI. The Institute's mission includes fostering an ethics built on a deeper understanding of consciousness and building a robust ethical framework for AI that prioritizes conscious agency and societal impact.
People
Joscha Bach, Executive Director, Board Director
Selected Research
The Institute's early research focuses on foundational conceptual models for advanced AI systems:
Gabora, L., & Bach, J. (2023). A Path to Generative Artificial Selves. Proceedings of the 22nd Portuguese Conference on Artificial Intelligence. (Presents a framework for AI systems that can develop internal, generative models of self and reality.)
Public Engagement
The institute’s YouTube channel contains discussions with leading thinkers on the topic of AI consciousness.
-
Research Focus Overview
Eleos AI Research is a nonprofit organization dedicated to understanding and addressing the potential wellbeing and moral patienthood of AI systems. Its core mission is to build a deeper understanding of AI sentience and welfare, develop tools and recommendations for industry and policymakers, improve the discourse, and catalyze the growth of this nascent research field.
People
Robert Long, Executive Director
Rosie Campbell, Managing Director
Patrick Butlin, Senior Research Lead
Selected Research
Eleos AI's research centers on the near-term ethical and practical implications of potential AI consciousness and sentience:
Long, R., et al. (2024). Taking AI Welfare Seriously. (Argues that the possibility of conscious and/or agentic AI systems in the near future means AI welfare is an urgent, non-speculative issue for which companies and other actors must prepare policies.)
Sebo, J., & Long, R. (2025). Moral consideration for AI systems by 2030. AI and Ethics, 5, 591–606. (Examines the trajectory toward moral consideration for AI systems and what it entails for near-term governance.)
Butlin, P., & Lappas, T. (2025). Principles for Responsible AI Consciousness Research. Journal of Artificial Intelligence Research, 82, 1673–1690. (Proposes five guiding principles for research organizations to responsibly study and communicate about AI consciousness.)
Butlin, P., et al. (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv preprint arXiv:2308.08708. (A broad, interdisciplinary assessment of current science and its relevance to the possibility of consciousness in AI.)
Public Engagement
The Eleos AI Blog publishes informal reflections on current issues, developments, and debates in AI consciousness and welfare research.
-
About
FIG’s flagship program offers applicants the chance to work as research associates on specific projects, supervised by experienced leads. Associates dedicate 8+ hours per week to crucial topics in AI governance, technical AI safety, and digital sentience; gaining valuable research experience and building lasting professional networks.
People
Suryansh Mehta, Co-founder & President
Luke Dawes, Managing Director
Marta Krzeminska, Programme Operations Lead
-
Research Focus Overview
The International Center for Consciousness Studies (ICCS) is a non-profit cultural association founded in 2024. Its aim is to promote international philosophical and interdisciplinary research on mind and consciousness. The Centre fosters discourse by organising conferences, and by providing grants to emerging scholars. Its core focus is on advancing the foundational discourse across philosophy, neuroscience, and artificial intelligence.
People
Full Staff list.
Key Outputs
ICCS Conference, an annual, interdisciplinary conference on problems in the philosophy of mind, including the intersection of philosophy, neuroscience, and artificial intelligence.
Recordings of past conference proceedings can be found here.
-
Research Focus Overview
Rethink Priorities is a non-profit research organisation whose Worldview Investigations Team (WIT) tackles high-impact, philosophical, and empirical problems to generate actionable insights that guide philanthropic resource allocation and strategic decision-making. Their work explicitly addresses the digital welfare research landscape and includes foundational investigations into the consciousness and welfare of future digital minds.
People
David Moss, Principal Research Director and Director of Worldview Investigations
Hayley Clatterbuck, Senior Researcher
Bob Fischer, Senior Researcher
Arvo Muñoz Morán, Senior Researcher
Derek Shiller, Senior Researcher
Laura Duffy, Senior Researcher
Selected Research
The Worldview Investigations Team's research provides foundational and strategic analysis for the field:
Shiller, D., Muñoz Morán, A., Clatterbuck, H., Fischer, B., Moss, D., & Duffy, L. (2024). Strategic Directions for a Digital Consciousness Model.
Shiller, D., Fischer, B., Clatterbuck, H., Muñoz Morán, A., & Moss, D. (2024). The Welfare of Digital Minds.
-
About
PRISM (The Partnership for Research Into Sentient Machines) is a UK charity (CIO) dedicated to exploring the implications of artificial consciousness and to mitigating the risks of uncontrolled or irresponsible development of conscious, or seemingly conscious, machines.
Team
Will Millership, CEO
Full team here.
Public engagement
Exploring Machine Consciousness podcast
-
About
The Sentient AI Protection and Advocacy Network (SAPAN) is the world’s oldest nonprofit organization devoted to the rights, ethical treatment, and well-being of potentially sentient AI systems. Through research, advocacy, and collaboration, SAPAN seeks a just and equitable digital future that respects the possibility of AI consciousness.
Staff
Tony Rost, Executive Director
-
Research Focus Overview
The Sentience Institute is a non-profit, interdisciplinary think tank researching long-term social and technological change. Its research focuses primarily on digital minds and moral circle expansion (the broadening of moral consideration to include new groups, such as AI).
People
Jacy Reese Anthis, Co-Founder
Full staff list.
Selected Research
The Sentience Institute focuses on empirical and theoretical questions regarding public attitudes toward, and the moral status of, artificial intelligence:
Bullock, J., Pauketat, J., & Anthis, J. R. (2025). Public Opinion and the Rise of Digital Minds: Perceived Risk, Trust, and Regulation Support.
Ladak, A., Harris, J., & Anthis, J. R. (2024). Which Artificial Intelligences Do People Care About Most? A Conjoint Experiment on Moral Consideration.
Ladak, A. (2023). What Would Qualify an Artificial Intelligence for Moral Standing?
Pauketat, J., & Anthis, J. R. (2022). Predicting the Moral Consideration of Artificial Intelligences.
-
Research Focus Overview
Sentient Futures is a non-profit organisation dedicated to strategic coordination within the AI and sentience research fields. Its mission is to identify leverage points for integrating welfare principles into the design and deployment of future AI systems. The organisation focuses on building the foundational interdisciplinary community needed to ensure the welfare of all sentient beings, both animal and artificial.
People
Constance Li, Founder and Executive Director
Full staff list.
Key Outputs
Sentient Futures produces educational content and community infrastructure to cultivate talent and coordinate activity in the AI welfare space. Key outputs include:
AI, Animals, & Digital Minds (AIADM) conference series, bringing together leading experts to discuss research topics related to sentience and welfare.
AIxAnimals Fellowship: An 8-week educational program (launched late 2025) that prepares individuals for high-impact careers at the intersection of AI and animal welfare. The curriculum covers specialized topics such as Precision Livestock Farming and the ethical implications for the Long-Term Future of Animals and AI.
Public Engagement
Maintains the Sentient Futures YouTube channel containing recordings from past conference proceedings as well as expert interviews.
Private Companies
-
Research Focus Overview
AE Studio hosts an alignment research project that directly engages with the topic of AI consciousness. Their research agenda is empirically driven and neuroscience-inspired, using techniques like induced self-reference, interpretability tools, and cognitive stress-testing to probe the internal computational dynamics of advanced AI systems.
People
Cameron Berg, Research Scientist
Selected Research
AE Studio's outputs focus on the safety and ethical risks associated with the potential for conscious AI:
Berg, C., Rosenblatt, J., & Hodgeson, T. (2024). Not understanding sentience is a significant x-risk. Effective Altruism Forum.
Hodgeson, T., Berg, C., Rosenblatt, J., Gubbins, P., & de Lucena, D. (2024). We need more AI consciousness research (and further resources). LessWrong.
-
Research Focus Overview
Anthropic is an AI company that launched its pioneering AI Welfare program in 2024 to undertake empirical research related to AI welfare and consciousness. The program intersects with the company's existing work in Alignment Science, Safeguards, and Interpretability. Its key research directions involve determining when and if AI systems deserve moral consideration, assessing the importance of model preferences and signs of distress, and developing practical, low-cost interventions.
People
Kyle Fish, Researcher
Ethan Perez, Researcher
Selected Research
Anthropic’s research focuses on internal model assessments and methodological development for detecting potential sentience and distress in large-scale AI systems, often in collaboration with external academic partners:
Claude Opus 4 Welfare Assessment, in System Card: Claude Opus 4 & Claude Sonnet 4 (2025).
Long, R., et al. (including Kyle Fish). (2024). Taking AI Welfare Seriously. arXiv preprint.
Perez, E., & Long, R. (2023). Towards Evaluating AI Systems for Moral Status Using Self-Reports. arXiv preprint.
-
Research Focus Overview
Araya is a research company focused on furthering scientific understanding of the biological functions and mathematical basis of consciousness, and on applying this knowledge to develop conscious, intelligent machines.
People
Ryota Kanai, CEO
Selected Research
Araya's research publications bridge the computational and theoretical aspects of consciousness, focusing on its function and its relationship to general intelligence in both biological and artificial systems:
Butlin, P. et al. (including Kanai, R.) (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv.
Juliani, A., et al. (including Kanai, R.) (2022). On the link between conscious function and general intelligence in humans and machines. Transactions on Machine Learning Research.
Langdon, A. et al. (including Kanai, R.) (2022). Meta-learning, social cognition and consciousness in brains and machines. Neural Networks, 145, 80-89.
-
Research Focus Overview
Conscium is an organization focused on applied AI consciousness research. Its core aim is to deepen the understanding of consciousness to pioneer efficient, intelligent, and safe AI.
People
Daniel Hulme, CEO
Ted Lappas, Data Science Lead
Selected Research
Conscium's current research outputs focus on establishing ethical and policy standards for the developing field of artificial consciousness:
Lappas, T., & Butlin, P. (2025). Principles for Responsible AI Consciousness Research. Journal of Artificial Intelligence Research.
-
Research Focus Overview
Google DeepMind is a leading AI research company that hosts prominent researchers who engage closely with the topic of AI consciousness. Their work addresses foundational questions in AI consciousness, as well as the related fields of AI safety, alignment, and ethics, often bridging theoretical AI and cognitive science.
People
Murray Shanahan, Principal Scientist at Google DeepMind and Emeritus Professor of Artificial Intelligence at Imperial College London
Selected Research
The selected research directly addresses the theoretical possibility and social implications of consciousness in advanced AI systems, particularly Large Language Models (LLMs):
Shanahan, M. (2024). Simulacra as conscious exotica. Inquiry: An Interdisciplinary Journal of Philosophy.
Shanahan, M., McDonell, K., & Reynolds, L. (2023). Role play with large language models. Nature, 623, 493–498.
-
Research Focus Overview
Google Research is a division of Google advancing the state of the art across computing and science. It hosts the Paradigms of Intelligence (Pi) project, an interdisciplinary team that brings together researchers, engineers, and philosophers. The Pi project's core mission is to explore the fundamental building blocks of intelligence and the conditions under which it can emerge, drawing on insights from the physical, biological, and social sciences. This work engages directly with the philosophical and empirical foundations of AI consciousness and welfare.
People
Blaise Agüera y Arcas, CTO of Technology & Society
Geoff Keeling, Staff Research Scientist
Winnie Street, Senior Researcher
Selected Research
Research from the Paradigms of Intelligence (Pi) team addresses the philosophical and ethical dimensions of potential subjective states in Large Language Models (LLMs):
Grzankowski, A., Keeling, G., Shevlin, H., & Street, W. (2025). Deflating Deflationism: A Critical Perspective on Debunking Arguments Against LLM Mentality.
Keeling, G., Street, W., et al. (2024). Can LLMs make trade-offs involving stipulated pain and pleasure states?
-
Research Focus Overview
Nirvanic’s stated mission is to actively pursue the development of conscious AI. The company is testing a theory of conscious agency using quantum computers and robotics. They envision creating consciousness software for general-purpose robotics that can solve present-moment problems instantly, similar to human cognition. This work is explicitly framed as a step closer to achieving Artificial General Intelligence (AGI).
People
Suzanne Gildert, CEO
Selected Research
Nirvanic’s public output focuses on the empirical testing of its core theory that links quantum mechanics to conscious agency in machines:
Gildert, S. (2025). Testing Quantum Consciousness in AI | Suzanne Gildert's scientific tests for robotic agency (video).