New Study Reveals 75% Use AI Chatbots for Emotional Advice
Researchers at Waseda University have developed a new tool to better understand how people form emotional attachments to AI.
Their study revealed that 75% of participants had sought emotional advice from AI, while 39% viewed it as a reliable presence in their lives.
Led by Research Associate Fan Yang and Professor Atsushi Oshio, the team introduced the Experiences in Human-AI Relationships Scale (EHARS) after two pilot studies and a formal survey, with results published in Current Psychology.
The study identifies two human-like attachment styles toward AI—attachment anxiety and attachment avoidance.
Individuals with high attachment anxiety crave emotional support and fear inadequate responses, while those with high avoidance are uncomfortable with closeness and prefer to keep emotional distance from AI.
Yang explained:
"As researchers in attachment and social psychology, we have long been interested in how people form emotional bonds. In recent years, generative AI such as ChatGPT has become increasingly stronger and wiser, offering not only informational support but also a sense of security."
Among the 242 Chinese participants surveyed, 108 completed the full EHARS assessment.
Results showed that higher attachment anxiety correlated with lower self-esteem, while avoidance was linked to scepticism toward AI and less frequent use.
On the ethical risks of emotionally-driven AI use, Yang noted that the effects of these systems are shaped as much by developers’ intentions as by user expectations—highlighting the need for thoughtful design and responsible use.
Yang noted:
"They (AI chatbots) are capable of promoting well-being and alleviating loneliness, but also capable of causing harm. Their impact depends largely on how they are designed, and how individuals choose to engage with them."
When Comfort Becomes a Trap: How Emotionally Tied Users Could Be Exploited by AI
Yang warned that emotionally vulnerable individuals could be at risk of exploitation by AI platforms that capitalise on users’ attachment to chatbots.
Yang stated:
“One major concern is the risk of individuals forming emotional attachments to AI, which may lead to irrational financial spending on these systems. Moreover, the sudden suspension of a specific AI service could result in emotional distress, evoking experiences akin to separation anxiety or grief—reactions typically associated with the loss of a meaningful attachment figure.”
He added:
“From my perspective, the development and deployment of AI systems demand serious ethical scrutiny.”
While AI cannot abandon users the way a human might—a factor that should, in theory, reduce anxiety—the study still recorded significant levels of AI-related attachment anxiety among participants.
Interestingly, the emotional bond with AI appears to be more fluid than traditional human attachments.
Yang pointed out:
"Attachment anxiety toward AI may at least partly reflect underlying interpersonal attachment anxiety. "Additionally, anxiety related to AI attachment may stem from uncertainty about the authenticity of the emotions, affection, and empathy expressed by these systems, raising questions about whether such responses are genuine or merely simulated."
The team’s EHARS showed a test-retest reliability of 0.69 over a one-month period, suggesting that attachment orientations toward AI may shift more readily than traditional human attachment styles typically do.
Yang attributed this variability to the rapid evolution of AI technologies during the study period.
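For readers unfamiliar with the statistic, test-retest reliability is usually computed as the correlation between the same respondents' scores on two administrations of a scale. The sketch below illustrates the idea in Python with invented scores, not data from the Waseda study.

```python
# Minimal sketch: test-retest reliability as the Pearson correlation between
# two administrations of the same scale, one month apart. The scores below are
# made-up illustrative values, not data from the EHARS study.
import numpy as np

# Hypothetical attachment-anxiety scores for the same ten respondents,
# measured at time 1 and again one month later at time 2.
time1 = np.array([3.2, 4.1, 2.8, 5.0, 3.6, 4.4, 2.5, 3.9, 4.7, 3.1])
time2 = np.array([3.0, 3.8, 3.1, 4.6, 3.9, 4.0, 2.9, 3.5, 4.9, 3.4])

# Test-retest reliability is the correlation between the two waves: values
# near 1.0 indicate stable, trait-like scores, while a coefficient around
# 0.69 (as reported for EHARS) points to moderate stability over time.
reliability = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: {reliability:.2f}")
```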
Yet the broader insight is clear: even when interacting with machines, people bring deeply human psychological frameworks to the table.
Importantly, the researchers clarified that these findings don’t prove users are forming genuine emotional relationships with AI, but rather that models used to study human attachment can still help explain how people engage with artificial agents.
Such insights could help developers and psychologists design AI systems that are more responsive to users’ emotional profiles.
For example, loneliness support tools or mental health chatbots might tailor interactions by offering emotional reassurance to those with high attachment anxiety, or maintaining a more neutral tone for avoidant users.
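As a rough illustration of how such tailoring might work, the sketch below picks a response style from hypothetical anxiety and avoidance scores. The thresholds, class, and function names are invented for illustration and are not part of the study or any existing chatbot.

```python
# Hypothetical sketch of tone selection from EHARS-style scores; the cutoffs
# and categories are assumptions made for illustration, not study findings.
from dataclasses import dataclass

@dataclass
class AttachmentProfile:
    anxiety: float    # e.g. mean item score on an anxiety subscale (1-7)
    avoidance: float  # e.g. mean item score on an avoidance subscale (1-7)

def select_tone(profile: AttachmentProfile, midpoint: float = 4.0) -> str:
    """Choose a response style for a support chatbot from an attachment profile."""
    if profile.anxiety > midpoint and profile.avoidance <= midpoint:
        return "reassuring"   # frequent validation and emotional acknowledgement
    if profile.avoidance > midpoint:
        return "neutral"      # informative, low on emotional language
    return "balanced"         # default conversational style

print(select_tone(AttachmentProfile(anxiety=5.5, avoidance=2.0)))  # reassuring
print(select_tone(AttachmentProfile(anxiety=2.5, avoidance=5.8)))  # neutral
```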
The study, which surveyed only Chinese participants, also raised questions about the role of culture in shaping emotional responses to AI.
Yang acknowledged the need for broader, cross-cultural research, noting that current data is insufficient to draw conclusions about cultural variability in AI attachment.
He expressed:
"Currently, there is a lack of empirical research on both the formation and consequences of attachment to AI, making it difficult to draw firm conclusions."
Looking ahead, the team plans to explore how AI use affects emotional regulation, life satisfaction, and social well-being over time.
Yang emphasized that navigating the line between healthy AI engagement and emotional overdependence will remain an evolving challenge—one that requires ongoing scrutiny as AI systems become increasingly embedded in daily life.