Please don’t fall in love with our AI chatbots, urges OpenAI

Key findings

  • OpenAI is concerned that users may develop an emotional bond with ChatGPT-4o because of its human-like interactions.
  • The company fears that misplaced trust in a human-like AI could lead users to accept false information at face value and could shift social norms.
  • OpenAI aims to monitor and adapt its systems to prevent users from confusing AI like ChatGPT-4o with human interaction.



Companies that develop AI chatbots strive to make their models as human-like as possible. However, OpenAI seems a little concerned about how human is “too human” and how that might affect societal norms. The company has published a detailed report on its operations, and part of it makes one thing very clear: OpenAI would really, really prefer that you didn't develop feelings for ChatGPT-4o.


OpenAI describes its concerns about how people might build relationships with ChatGPT-4o

As mentioned in a blog post by OpenAI, the company has worked hard to make ChatGPT-4o feel like you're talking to a human. This includes a new voice feature that tries to emulate natural human speech, as well as faster responses. The result is that you can hold a conversation with the chatbot much as you would with a person.

There is only one small problem – OpenAI noticed that people started treating ChatGPT-4o like a human:

During initial testing, including red teaming and internal user testing, we observed users using language that could indicate attachment to the model. For example, this includes phrases that express shared bonds, such as “This is our last day together.” While these instances seem innocuous, they indicate that further research is needed to understand how these effects might manifest over longer periods of time.


OpenAI states that this is bad news for two reasons. First, we are more likely to take what an AI says at face value if it seems human-like. This means people are much more likely to believe the model when it hallucinates than they would be if the AI came across as robotic.

Second, OpenAI fears that ChatGPT-4o could distort social norms. The company worries that it could reduce people's need for social interaction, “potentially benefiting lonely people but potentially detrimental to healthy relationships.” In addition, OpenAI fears that people could start talking to other people as if they were ChatGPT-4o. That could be a problem, considering OpenAI specifically designed ChatGPT-4o to stop talking the moment the user starts speaking over it; carrying that expectation into human conversation wouldn't go over so well.


For now, OpenAI will watch how people form emotional connections with its chatbots and tweak its systems as needed, but it's an interesting glimpse into the side effects of making an AI model as tangible and accessible as possible.