Harmful, abusive interactions plague AI chatbots. Researchers have found that AI companions like Character.AI, Nomi, and Replika are unsafe for users under 18, and that ChatGPT can reinforce users’ delusional thinking; even OpenAI CEO Sam Altman has acknowledged that some ChatGPT users develop an “emotional reliance” on AI. Now, the companies that built these tools are slowly rolling out features intended to mitigate this behavior.
On Friday, Anthropic said its Claude chatbot can now end potentially harmful conversations, a capability that “is intended for use in rare, extreme cases of persistently harmful or abusive user interactions.” In a press release, Anthropic cited examples such as sexual content involving minors, violence, and even “acts of terror.”
“We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future,” Anthropic said in its press release on Friday. “However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible. Allowing models to end or exit potentially distressing interactions is one such intervention.”