AI Companions: Chatbots and the Psychology of Human-AI Interaction
Rose Guingrich is a PhD candidate in Psychology and Social Policy at Princeton University, where she is a National Science Foundation Graduate Research Fellow. Her research examines human-AI interaction through the lens of social psychology and ethics, focusing on how people perceive minds in machines and how those perceptions shape behavior toward AI and other humans. Rose is founder of Ethicom, a consulting initiative providing tools and information for responsible AI use and development, and co-hosts the Our Lives with Bots podcast with Angy Watson.
Summary
In this episode, Rose explains why she focuses not on whether AI is conscious, but on the consequences of people perceiving AI as conscious. We discuss:
How her interdisciplinary background led her to study the perception of personhood in AI systems.
Why she prioritizes studying the impacts of perceived consciousness over debates about whether AI truly is conscious, and how this connects to Michael Graziano's theory of consciousness as a social construct.
The psychological theory behind "carryover effects": how interacting with AI that we anthropomorphize can influence our subsequent interactions with real people, whether through practice or relief mechanisms.
Results from her longitudinal research on companion chatbots like Replika, showing that anthropomorphism mediates social impacts and that people with greater desire for social connection anthropomorphize chatbots more.
Her proposed design framework for companion chatbots.
Why she believes we'll see increased attribution of consciousness to AI once humanoid robots become common.
Her call for a psychology subfield dedicated to human-AI interaction, arguing that understanding psychological mechanisms like anthropomorphism will remain relevant even as AI advances.
Rose argues that regardless of philosophical debates about machine consciousness, the fact that people can and do perceive AI as conscious has measurable social and ethical consequences that deserve serious empirical investigation.
Resource List
Rose’s Work
Our Lives With Bots – Podcast co-hosted with Angy Watson exploring the psychology and ethics of human-AI interaction
Ethicom – Consulting initiative providing tools and resources for responsible AI use and development
Guingrich, R. E., & Graziano, M. S. A. (2025). A Longitudinal Randomized Control Study of Companion Chatbot Use: Anthropomorphism and Its Mediating Role on Social Impacts. arXiv preprint.
Guingrich, R. E., & Graziano, M. S. A. (2024). Chatbots as Social Companions: How People Perceive Consciousness, Human Likeness, and Social Health Benefits in Machines. arXiv preprint.
Guingrich, R. E., & Graziano, M. S. A. (2024). P(doom) Versus AI Optimism: Attitudes Toward Artificial Intelligence and the Factors That Shape Them. Frontiers in Psychology.
Guingrich, R. E., & Graziano, M. S. A. (2024). Ascribing Consciousness to Artificial Intelligence: Human-AI Interaction and Its Carry-Over Effects on Human-Human Interaction. In Oxford Intersections: AI in Society. Oxford University Press.
Related Work
Attention Schema Theory and Consciousness
Graziano, M. S. A. (2013). Consciousness and the Social Brain. Oxford University Press.
Graziano, M. S. A. (2022). A Conceptual Framework for Consciousness. Proceedings of the National Academy of Sciences, 119(18).
Graziano, M. S. A. (2017). The Attention Schema Theory: A Foundation for Engineering Artificial Consciousness. Frontiers in Robotics and AI, 4, 60.
Companion Chatbots and Replika Research
Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2021). My Chatbot Companion - A Study of Human-Chatbot Relationships. International Journal of Human-Computer Studies, 149, 102601.
Xie, T., & Pentina, I. (2022). Exploring Relationship Development with Social Chatbots: A Mixed-Method Study of Replika. Computers in Human Behavior, 140, 107600.
Wang, C., Chung, S. H., Lee, S., & Lee, Y. C. (2024). Finding Love in Algorithms: Deciphering the Emotional Contexts of Close Encounters with AI Chatbots. Journal of Computer-Mediated Communication, 29(5), zmae015.
AI Safety and Child Protection
Public Interest Research Group (2025). The Risks of AI Toys for Kids.