The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

  • audaxdreik@pawb.social · 3 months ago

    I definitely do not agree.

    While they may not be entirely blameless, we have adults falling into this AI psychosis, like the prominent OpenAI investor.

    What regulations are in place to help with this? What tools for parents? Isn’t this being shoved into literally every product everywhere? Actually pushed on them in schools?

    How does a parent monitor this? What exactly does a parent do? There may have been signs in his behavior they could have seen, but could they have STOPPED this situation from happening as it was?

    This technology is still not well understood. I hope lawsuits like this shine some light on things and kick some asses. Get some regulation in place.

    This is not the parents’ fault, and seeing so many people declare that it is just feels like apologist AI hype.