OpenAI has been under intense legal and public pressure to improve how its flagship AI product, ChatGPT, responds when a user expresses suicidal feelings.
On Thursday, the company launched a feature called Trusted Contact, which lets users designate an adult to be notified if the user discusses self-harm or suicide in a serious or concerning way.
The optional feature only encourages the trusted contact to reach out to the user. It does not share chat transcripts or conversation details.