We worked with more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress, respond with care, and guide people toward real-world support, reducing responses that fall short of our desired behavior by 65-80%.

We recently updated ChatGPT's default model to better recognize and support people in moments of distress. Today we're sharing how we made those improvements and how they are performing. Working with mental health experts who have real-world clinical experience, we've taught the model to better recognize distress, de-escalate conversations, and guide people toward professional care when appropriate. We've also expanded access to crisis hotlines, re-routed sensitive conversations originating from other models to safer models, and added gentle reminders to take breaks during long sessions.

We believe ChatGPT can provide a supportive space for people to process what they're feeling, and guide them to reach out to friends, family, or a mental health professional when appropriate. Our safety improvements in the recent model update focus on the following areas: 1) mental health concerns such as psychosis or mania; 2) self-harm and suicide; and 3) emotional reliance on AI. Going forward, in addition to our longstanding baseline safety metrics for suicide and self-harm, we are adding emotional reliance and non-suicidal mental health emergencies to our standard set of baseline safety testing for future model releases.
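The announcement doesn't describe the mechanics, but the "route sensitive conversations to safer models, remind people to take breaks" part can be pictured as a simple per-turn routing layer. Below is a minimal Python sketch under stated assumptions: the model names, the `classify_distress` keyword check (a toy stand-in for a real trained classifier), and the one-hour break threshold are all hypothetical, not OpenAI's actual implementation.

```python
# Hypothetical sketch of sensitivity-based routing; all names and thresholds are assumptions.
import time
from dataclasses import dataclass, field

DEFAULT_MODEL = "default-model"          # assumed name for the general-purpose model
SAFER_MODEL = "safety-tuned-model"       # assumed name for the safety-tuned model
BREAK_REMINDER_AFTER_SECONDS = 60 * 60   # assumed threshold for a "long session"

# Toy stand-in for a trained distress classifier.
DISTRESS_KEYWORDS = {"hopeless", "can't go on", "hurt myself"}


@dataclass
class Session:
    started_at: float = field(default_factory=time.time)
    reminded: bool = False


def classify_distress(message: str) -> bool:
    """Return True if the message shows possible distress (toy keyword check)."""
    text = message.lower()
    return any(keyword in text for keyword in DISTRESS_KEYWORDS)


def route(message: str, session: Session) -> dict:
    """Pick a model for this turn and decide whether to surface a break reminder."""
    model = SAFER_MODEL if classify_distress(message) else DEFAULT_MODEL

    remind = False
    if not session.reminded and time.time() - session.started_at > BREAK_REMINDER_AFTER_SECONDS:
        remind = True
        session.reminded = True  # only nudge once per session

    return {"model": model, "break_reminder": remind}


if __name__ == "__main__":
    session = Session()
    print(route("I feel hopeless and alone", session))      # routes to the safety-tuned model
    print(route("What's the weather like today?", session))  # stays on the default model
```

The point of the sketch is only that routing is a per-message decision layered on top of whichever model the user started with, while the break reminder is session-level state tracked alongside it.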
21 sats \ 0 replies \ @zapsammy 11h
optimizing ChatGPT for mental health support is a weak cope;
the calculations and recommendations are aimed at the plane of effects;
it's like building a skyscraper on shaky ground, trying to figure out how to add more features for stability;
solutions have to be aimed at the plane of causes; most people still have intact, normal reactions to the problems of daily life - they feel it! the psychologists and psychiatrists (academons) then attempt to fool people into believing that life is actually okay, and do not address the underlying problems, such as a lack of education about morality;
0 sats \ 1 reply \ @SimpleStacker 11h
I think this is losing the plot. The responsibility for mental health should never have fallen onto an AI company in the first place!
21 sats \ 0 replies \ @0xbitcoiner OP 11h
Yeah, but let’s be real: if companies can help, that’s a win.