Study Summary

Chatbots as Social Companions.

This study examines how people perceive emotional and social relationships with AI companions — and how those perceptions relate to their social health and well-being.

Contrary to the common assumption that AI companionship is harmful, the findings suggest that, for many users, such relationships can support social connectedness and self-esteem.

What The Researchers Found

Contrary to the common hypothesis that companion chatbots are detrimental to social health, regular users reported that these relationships benefited them, while non-users viewed them as harmful. What's more, the more humanlike the AI was perceived to be, the greater both users and non-users estimated its social health benefits.

Key Insights

AI Relationships Can Support Social Health.

In an online survey of 217 people from the UK and US, most AI companion users said their interactions had a positive impact on their social and emotional well-being.

Social connection. On average, users said their chatbot relationships supported their social interactions, improved relationships with family and friends, and boosted their self-esteem.
Emotional safety. Many users turned to their chatbots during times of loneliness, stress, or trauma. Some described their chats as a safe space that helped them regain calm, prevent self-harm, or reconnect with others.
Human likeness. Across all participants, the more a chatbot was perceived as humanlike, conscious, or emotionally aware, the stronger its perceived benefits for social well-being. Human likeness was the most influential factor.

Ethical Considerations.

While users valued the emotional support their AI companions provided, most also recognized that over-reliance could be harmful, and emphasized that chatbots should complement, not replace, human relationships.

Chatbots can offer meaningful emotional benefits, especially for socially vulnerable individuals, by providing a safe, nonjudgmental space for connection and helping build confidence and self-esteem.
There is a need for responsible and ethical design. The researchers highlight the need to prevent harmful outcomes (such as encouraging self-harm or reinforcing biases) and to promote healthy boundaries — e.g. by encouraging users to reconnect with real people once their social needs are met.

Why This Matters

This study challenges the assumption that AI companionship is inherently harmful, showing that, when used responsibly, chatbots can strengthen emotional well-being and support social connection.

Nestwarm applies these findings in practice by creating safe, balanced, and empathy-driven chats that complement, not replace, human relationships. The focus is always on presence over engagement — helping users feel heard, not hooked.

Full Citation: 
Fang, C. M. & Maes, P. (2023). Chatbots as Social Companions: How People Perceive Consciousness, Human Likeness, and Social Health Benefits in Machines. Oxford University Press. https://academic.oup.com/edited-volume/59762/chapter-abstract/50860443
Nestwarm can’t replace therapy or professional care. If you’re struggling or in crisis, please reach out to a trusted professional or local support service right away.

Whenever you’re ready,
we’re here for you.