How ChatGPT Is Making Its Frequent Users Lonelier, New Research Explained
MIT and OpenAI studies found that users with stronger emotional attachment tendencies experienced more loneliness, while those with a higher level of trust in ChatGPT experienced more emotional dependence

Research by the MIT Media Lab and OpenAI has shown that frequent users of ChatGPT may feel higher levels of loneliness and become more emotionally dependent on the AI tool.
The studies underlined that only a small number of users engage emotionally with ChatGPT, but those who do are among the heaviest users.
“Overall, higher daily usage—across all modalities and conversation types—correlated with higher loneliness, dependence, and problematic use, and lower socialisation," the researchers said in an abstract for the two parallel studies.
How Does ChatGPT Impact Users’ Emotional Health?
The studies set out to investigate how daily interactions with ChatGPT affect a user’s emotional health, with a focus on the use of the chatbot’s advanced voice mode.
The research included a randomized controlled trial (RCT) by MIT, in which 1,000 participants used ChatGPT over four weeks, and an automated analysis of nearly 40 million ChatGPT interactions conducted by OpenAI.
Both studies concluded that users with stronger emotional attachment tendencies tended to experience more loneliness, while those with a higher level of trust in the chatbot experienced more emotional dependence.
They also pointed out that “power users" tended to think of ChatGPT as a friend, or as something with humanlike emotions.
“Personal" conversations with the AI tool were correlated with higher levels of loneliness among users, the study highlighted.
“Results showed that while voice-based chatbots initially appeared beneficial in mitigating loneliness and dependence compared with text-based chatbots, these advantages diminished at high usage levels, especially with a neutral-voice chatbot," the researchers said.
Why Are People Using AI For Companionship?
With more than 400 million active weekly users worldwide, many people are turning to ChatGPT for advice, suggestions and companionship.
According to a 2024 YouGov survey, just over half of young Americans aged 18 to 29 felt comfortable speaking to an AI about mental health concerns.
Another study even suggests that OpenAI’s chatbot gives better personal advice than professional columnists.
While some users say the chatbot helps ease their loneliness, there has been increasing scrutiny of, and research on, the ill effects of interacting with the AI tool.
Recently, AI company Character.ai was sued over concerning interactions with minors.
The researchers said the new findings “underscore the complex interplay between chatbot design choices (e.g., voice expressiveness) and user behaviours (e.g., conversation content, usage frequency)" and called for further work to investigate “whether chatbots’ ability to manage emotional content without fostering dependence or replacing human relationships benefits overall well-being," as reported by Fortune.
How Men And Women Use ChatGPT
The researchers found some intriguing differences between how men and women use ChatGPT. After using the chatbot for four weeks, female study participants were slightly less likely to socialize with people than their male counterparts who did the same.
Participants who interacted with ChatGPT’s voice mode set to a gender other than their own reported significantly higher levels of loneliness and greater emotional dependence on the chatbot at the end of the experiment.
In 2023, MIT Media Lab researchers found that chatbots tend to mirror the emotional sentiment of a user’s messages, suggesting a feedback loop: the happier you act, the happier the AI seems, and the sadder you act, the sadder it appears.
Why Is This Dangerous?
Dr Andrew Rogoyski, a director at the Surrey Institute for People-Centred Artificial Intelligence, told The Guardian that because people were hard-wired to think of a machine behaving in human-like ways as a human, AI chatbots could be “dangerous", and far more research was needed to understand their social and emotional impacts.
“In my opinion, we are doing open-brain surgery on humans, poking around with our basic emotional wiring with no idea of the long-term consequences. We’ve seen some of the downsides of social media – this is potentially much more far-reaching," he said.
Kate Devlin, a professor of AI and society at King’s College London, told MIT Technology Review that although the research is welcome, it is still difficult to identify when a human is and isn’t “engaging with technology on an emotional level". She said the study participants may have been experiencing emotions that were not recorded by the researchers.