OpenAI Says ChatGPT Could Cause Emotional Dependence: Expert
When the latest version of ChatGPT was released in May, it came with a set of emotive voices that made the chatbot sound more human than ever.
Listeners called the voices “flirty,” “convincingly human,” and “sexy.” Social media users said they were “falling in love” with it.
But on Thursday, ChatGPT-creator OpenAI released a report confirming that ChatGPT’s human-like upgrades could lead to emotional dependence.
“Users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships,” the report reads.
ChatGPT can now answer questions voice-to-voice with the ability to remember key details and use them to personalize the conversation, OpenAI noted. The effect? Talking to ChatGPT now feels very close to talking to a human being — if that person didn’t judge you, never interrupted you, and didn’t hold you accountable for what you said.
These standards of interacting with an AI could change the way human beings interact with each other and “influence social norms,” per the report.
"Say hello to GPT-4o, our new flagship model which can reason across audio, vision, and text in real time: https://t.co/MYHZB79UqN. Text and image input rolling out today in API and ChatGPT with voice and video in the coming weeks," OpenAI (@OpenAI) announced on X on May 13, 2024.
OpenAI stated that early testers spoke to the new ChatGPT in a way that suggested they could be forming an emotional connection with it. Testers said things such as, "This is our last day together," which OpenAI said expressed "shared bonds."
Experts, meanwhile, are questioning whether it's time to reevaluate how realistic these voices should be.
“Is it time to pause and consider how this technology affects human interaction and relationships?” Alon Yamin, cofounder and CEO of AI plagiarism checker Copyleaks, told Entrepreneur.
“[AI] should never be a replacement for actual human interaction,” Yamin added.
OpenAI said that longer-term testing and independent research could help it better understand this risk.
Another risk highlighted in the report was AI hallucination, in which the model states inaccurate information as fact. A human-like voice could inspire more trust in listeners, leading to less fact-checking and more misinformation.
OpenAI isn’t the first company to comment on AI’s effect on social interactions. Last week, Meta CEO Mark Zuckerberg said that Meta has seen many users turn to AI for emotional support. The company is also reportedly trying to pay celebrities millions to clone their voices for AI products.
OpenAI’s GPT-4o release sparked a conversation about AI safety, following the high-profile resignations of leading researchers like former chief scientist Ilya Sutskever.
It also led to Scarlett Johansson calling out the company for creating an AI voice that, she said, sounded “eerily similar” to hers.