The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

      • LillyPip@lemmy.ca · 16 days ago

        I parented a teen boy. Sometimes, no matter what you do and no matter how close you were before puberty, a switch flips outside your control and they won’t talk to you anymore. We were a typical family: no abuse, no fighting, nobody on drugs, both parents with 9-5 office jobs, very engaged with school, etc.

        Thankfully, after riding it out (getting him therapy, giving space, respect, and support), he came out the other side fine. But there were a few harrowing years during that phase.

        I went through a similar phase in my teens. If AI had been there to feed my issues, I might not have survived it. Teenage hormones are a helluva drug.

        • IcyToes@sh.itjust.works · 14 days ago

          I’d second that. I grew up in a really supportive family, but when I got to teenage years, I kept stuff to myself. Wanted to solve my problems myself. Pride and embarrassment and nothing to do with how they parented.

    • Heikki2@lemmy.world · 18 days ago

      Me too. Nearly every job posting I see now wants some experience with AI. I make the argument that AI is not always correct and will output whatever bias you want it to have. Since biases are not always correct, the data/information is useless.

      • FriendBesto@lemmy.ml · 17 days ago

        Yeah, I have some background in History and ChatGPT will be objectively wrong about some things. Then I tell it that it is wrong because X, Y and Z, and the stupid thing will come back with, “Yes, you are right, X, Y, Z were a thing because…”.

        If I didn’t know that it was wrong, or if say, a student took what it said at face value, then they too would now be wrong. Literal misinformation.

        Not to mention the other times it is wrong, and not just ChatGPT, because it will source things like Reddit. Recently Brave AI made the claim that Ironfox, the Firefox fork, was based on FF ESR. That is impossible, since Ironfox is a fork for Android. So why was it wrong? It quoted some random guy who said that on Reddit.

        • ganryuu@lemmy.ca · 17 days ago

          I get the feeling that you’re missing one very important point about GenAI: it does not, and cannot (by design) know right from wrong. The only thing it knows is what word is statistically the most likely to appear after the previous one.
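
          To make that concrete, here is a minimal, hypothetical sketch (a made-up toy bigram table, not any real model or API) of the “most likely next word” loop being described:

          ```python
          # Toy illustration of next-token prediction: pick the statistically
          # most likely continuation, one word at a time. (The probabilities
          # here are invented for illustration; real LLMs learn them over
          # subword tokens and condition on far more context.)
          bigram_probs = {
              "i":    {"am": 0.6, "feel": 0.4},
              "am":   {"fine": 0.7, "tired": 0.3},
              "feel": {"fine": 0.5, "alone": 0.5},
          }

          def generate(start, steps=2):
              words = [start]
              for _ in range(steps):
                  choices = bigram_probs.get(words[-1])
                  if not choices:
                      break  # no known continuation for this word
                  # "Knowing" is just an argmax over a probability table.
                  words.append(max(choices, key=choices.get))
              return words

          print(generate("i"))  # ['i', 'am', 'fine']
          ```

          There is no notion of true or false anywhere in that loop, only relative frequencies.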

          • FriendBesto@lemmy.ml · 17 days ago

            Yes, I know this. I assume that was a given. The point is that it is marketed and sold to people as a convenient one-stop shop for searching, and that tons of people believe that. Which is very dangerous. You misunderstood.

            My point is not to point out whether it knows it is right or wrong. Within that context it is just an extremely complex calculator. It does not know what it is saying itself.

            My point was that, aside from the often cooked-in bias, they have a real propensity to be wrong when used as a search engine, and that many people do not tend to know that.

        • SaveTheTuaHawk@lemmy.ca · 17 days ago

          I run my course exams in biochemistry through AI chat sites, and these sites are curiously doing worse than two years ago. I think there is an active campaign by activists to feed AI misinformation. But the biggest problem for STEM applications is that when a new discovery changes paradigms, AI still quotes the older, incorrect paradigms because of the mass of that text on the web.

      • SaveTheTuaHawk@lemmy.ca · 17 days ago

        The same jobs that get annoyed when they see AI-generated CVs.

        Senior Boomer executives have no fucking clue what AI is, but need to implement it to seem relevant and save money on labor. Already they are spending more on errors, as they swallow all the hype from billionaire tech bros they worship.

    • nutsack@lemmy.dbzer0.com · 17 days ago

      When the bubble is over, I am pretty sure a lot of this stuff will still exist and be used. The popping is simply a market valuation adjustment.

    • mrlemmyhimself@lemmy.world · 17 days ago

      Unfortunately though, the Internet didn’t go away when the dotcom bubble burst, and this is shaping up to be the same situation.

  • Clent@lemmy.dbzer0.com · 17 days ago

    I can’t be the only ancient internet user whose first thought was this

    On this cursed timeline, farce has become our reality.

  • mysticpickle@lemmy.ca · 18 days ago

    I hate to say it, but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They’re just lashing out.

    • benignintervention@lemmy.world · 18 days ago

      Your Undivided Attention discussed an important point missing from the article, which is that ChatGPT advised him to hide his activities and concerns from his parents. This doesn’t necessarily absolve the parents, but it does add a layer of nuance to the discussion.

    • Sanctus@lemmy.world · 18 days ago

      I agree, but a chatbot still shouldn’t help you write a suicide note or talk to you about methods of suicide. We all knew situations like this would arise when LLMs hit it big.

    • Sckharshantallas@lemmy.world · 18 days ago

      It’s very possible for someone to appear fine in public while struggling privately. The family can’t be blamed for not realizing what was happening.

      The bigger issue is that LLMs were released without sufficient safeguards. They were rushed to market to attract investment before their risks were understood.

      It’s worth remembering that Google and Facebook already had systems comparable to ChatGPT, but they kept them as research tools because the outputs were unpredictable and the societal impact was unknown.

      Only after OpenAI pushed theirs into the public sphere (framing it as a step toward AGI) did Google and Facebook follow, not out of readiness but out of fear of being left behind.

    • AstralPath@lemmy.ca · 17 days ago

      You hate to say it because you know this is a ridiculous take. There’s no fucking way that the parents are “more at fault” for their son’s death than the company whose product encouraged him to hide his feelings from his parents and coached him on how to commit suicide.

      Read the lawsuit filing. https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf

      *I have excellent parents and even they were not privy to the depths of my emotions as a kid.* You are actively choosing to ignore the realities of childhood as well as parenthood to play some shitty devil’s advocate online.

  • andros_rex@lemmy.world · 17 days ago

    The real issue is that mental health in the United States is an absolute fucking shitshow.

    988 is a bandaid. It’s an attempt to pretend that something is being done. Really it’s just a front for 911.

    Even when I had insurance, it was hundreds a month to see a therapist. Most therapists are also trained on CBT and CBT only, because it’s a symptoms-focused approach that gets you “well” enough to work. It doesn’t work for everything; it’s “evidence based” only in the sense that it’s set up to be easy to measure. It’s an easy out, the McDonald’sification of therapy. Just work the program and everything will be okay.

    There really are so few options for help.

    • LillyPip@lemmy.ca · 14 days ago

      They had Adam in therapy. It sounds like they were getting him the help he needed, but ChatGPT told him it was his closest friend and to hide his feelings from his parents and others. If that was happening, whatever mental healthcare he was getting would have been undermined by the AI.

  • VintageGenious@sh.itjust.works · 17 days ago

    Even though I hate a lot of what OpenAI is doing, users must be more informed about LLMs; additional safeguards will just censor the model and make it worse. Sure, they could set up a way to contact people when certain kinds of things are reported by the user, but we should take care before implementing a parental control that would be equivalent to reading a teen’s journal and invading their privacy.

    • vala@lemmy.dbzer0.com · 17 days ago

      equivalent to reading a teen’s journal and invading their privacy.

      IMO people should not be putting such personal information into an LLM that’s not running on their local machine.

    • BeeegScaaawyCripple@lemmy.world · 17 days ago

      I mean, I agree to a point. There are a few red flags that, were I a parent, I’d want to know about if my hypothetical child were writing about them. Other than that I would want to give them their privacy, and that list changes as the hypothetical child ages. Having a local LLM could be a solution to that (I’m looking at you, Dr. Sbaitso), but a better one is them having good friends.

  • RazTheCat@lemmy.world · 17 days ago

    OpenAI: Here’s $15 million, now stop talking about it. A fraction of the billions of dollars they made sacrificing this child.

    • branno@lemmy.ml · 17 days ago

      Except OpenAI isn’t making a dime. They’re just burning money at a crazy rate.

      • kolorafa@lemmy.world · 14 days ago

        Fake news. The CEO and all employees are getting paid in full. It doesn’t matter whether they sell the product to its users, sell user data to their sponsors, or share the data internally; it doesn’t matter that the service model itself is not profitable, as they make up the rest by selling (fake?) promises.

        Same with many others like YouTube; they are also “not profitable” on paper as a standalone service. It only means they are using you, selling your data, or selling some promises.

        If they actually were not profitable, they would raise prices or just disappear, and some other company would arise with a strategy that is at least sustainable.

        Open source devs can lose money, since they pay from their own pockets.

        I would like to see at least one person in that company who is not getting money from it but funds it from their own money.

  • Occhioverde@feddit.it · 17 days ago

    I think we all agree on the fact that OpenAI isn’t exactly the most ethical corporation on this planet (to use a gentle euphemism), but you can’t blame a machine for doing something that it doesn’t even understand.

    Sure, you can call for the creation of more “guardrails”, but they will always fall short: until LLMs are actually able to understand what they’re talking about, what you’re asking them and the whole context around it, there will always be a way to claim that you are just playing, doing worldbuilding or whatever, just as this kid did.

    What I find really unsettling from both this discussion and the one around the whole age verification thing is that people are calling for technical solutions to social problems, an approach that has always failed miserably; what we should call for is for parents to actually talk to their children and spend some time with them, valuing their emotions and problems (however insignificant they might appear to a grown-up) in order to, you know, at least be able to tell if their kid is contemplating suicide.

    • LillyPip@lemmy.ca · 14 days ago

      but you can’t blame a machine for doing something that it doesn’t even understand.

      But you can blame the creators and sellers of that machine for operating unethically.

      If I build and sell a coffee maker that sometimes malfunctions and kills people, I’ll be sued into oblivion, and my coffee maker will be removed from the market. You don’t blame the coffee maker, but you absolutely hold the creator accountable.

      • Occhioverde@feddit.it · 12 days ago

        Yes and no. The example you made is of a defective device, not of an “unethical” one - though I understand how you are trying to say that they sold a malfunctioning product without telling anyone.

        For LLMs, however, we know damn well that they shouldn’t be used as a therapist or as a digital friend to ask for advice; they are no more than a powerful search engine.

        An example that is more in line with the situation we’re analyzing is a kid who stabs himself with a knife after his parents left him playing with one; are you sure you want to sue the company that made the knife in that scenario?

        • LillyPip@lemmy.ca · 11 days ago

          Not really, though.

          The parents know the knife can be used to stab people. It’s a dangerous implement, and people are killed with knives all the time (edit: thus most parents are careful with kids and knives).

          LLMs aren’t sold as weapons, or even as tools that can be used as weapons. They’re sold as totally benign tools that can’t reasonably be considered dangerous.

          That’s the difference. If you’re paying especially close attention, you may potentially understand they can be dangerous, but most people are just buying a coffee maker.

    • pelespirit@sh.itjust.works (OP) · 17 days ago

      What I find really unsettling from both this discussion and the one around the whole age verification thing

      These are not the same thing.

      • Occhioverde@feddit.it · 16 days ago

        Arguably, they are exactly the same thing, i.e. parents asking other people (namely, OpenAI in this case and adult site operators in the other) to do their work of supervising their children, because they are at best unable and at worst unwilling to do so themselves.

  • chrischryse@lemmy.world · 17 days ago

    OpenAI shouldn’t be responsible. The kid was probing ChatGPT with specifics. It’s like poking someone who repeatedly told you to stop, and then your family getting mad at that person for kicking your ass.

    So I don’t feel bad. Plus, people are using this as their own therapist; if you aren’t gonna get actual help and want to rely on a bot, then good luck.

    • themachinestops@lemmy.dbzer0.com · 17 days ago

      The problem here is that the kid, if I’m not wrong, asked ChatGPT whether he should talk to his family about his feelings. ChatGPT said no, which in my opinion makes it at fault.

    • Doomsider@lemmy.world · 17 days ago

      OpenAI knowingly allowing its service to be used as a therapist most certainly makes them liable. They are toying with people’s lives with an untested and unproven product.

      This kid was poking no one and didn’t get his ass beat, he is dead.

      • chrischryse@lemmy.world · 16 days ago

        That’s like saying “WebMD is knowingly acting like everyone’s doctor.” ChatGPT is a tool, and you need to remember it’s a bot that doesn’t understand a lot or show emotion.

        The kid also was telling ChatGPT “oh, this hanging is for a character”, along with other ways to trick it. Sure, I guess OpenAI should be slightly responsible, but not for how people use it. If you’re not going to bother with real help, I ain’t showing sympathy. I get that suicide sucks, but what sucks more is putting your loved ones through that tragedy.

        • Doomsider@lemmy.world · 16 days ago

          If a company designs a flawed tool that harms people, they are responsible. Why are you trying so hard not to hold them responsible?

          The last part about suicide is pretty tone-deaf. I have lost multiple people in my life to suicide.

  • Dr. Moose@lemmy.world · 17 days ago

    Unpopular opinion - the parents failed at parenting and are now getting a big payday and ruining the tool for everyone else.

      • Dr. Moose@lemmy.world · 17 days ago

        That’s not how LLM safety guards work. Just like any guardrail, it’ll affect legitimate uses too, as LLMs can’t really reason or understand nuance.

        • ganryuu@lemmy.ca · 17 days ago

          That seems way more like an argument against LLMs in general, don’t you think? If you cannot make it so it doesn’t encourage suicide without ruining other uses, maybe it wasn’t ready for general use?

          • yermaw@sh.itjust.works · 17 days ago

            You’re absolutely right, but the counterpoint that always wins: “there’s money to be made, fuck you and fuck your humanity.”

          • sugar_in_your_tea@sh.itjust.works · 17 days ago

            It’s more an argument against using LLMs for things they’re not intended for. LLMs aren’t therapists, they’re text generators. If you ask it about suicide, it makes a lot of sense for it to generate text relevant to suicide, just like a search engine should.

            The real issue here is the parents either weren’t noticing or not responding to the kid’s pain. They should be the first line of defense, and enlist professional help for things they can’t handle themselves.

            • ganryuu@lemmy.ca · 17 days ago

              I agree with the part about unintended use; yes, an LLM is not and should never act as a therapist. However, concerning your example of search engines: they will catch the suicide keyword and put help resources before any search result. Google does it, DDG also. I believe ChatGPT will also start with such resources on the first mention, but as OpenAI themselves say, the safety features degrade with the length of the conversation.
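
              As a rough illustration of that kind of keyword interception (a hypothetical wrapper, not how OpenAI or the search engines actually implement it):

              ```python
              # Hypothetical sketch: scan each message for crisis-related keywords
              # and, on a match, put help resources before any normal reply.
              # Real systems use trained classifiers rather than a keyword list.
              CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life"}

              HELP_BANNER = (
                  "If you are struggling, help is available. "
                  "In the US you can call or text 988."
              )

              def respond(user_message, generate_reply):
                  lowered = user_message.lower()
                  if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
                      # Resources come first, regardless of what the model says.
                      return HELP_BANNER + "\n\n" + generate_reply(user_message)
                  return generate_reply(user_message)
              ```

              Even in this toy you can see the weakness being described: the check runs per message, nothing accumulates across a long conversation, and a user who avoids the exact keywords slips past it.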

              About this specific case, I need to find out more, but other comments in this thread say that not only was the kid in therapy, suggesting the parents were not passive about it, but also that ChatGPT actually encouraged the kid to hide what he was going through. Considering what I was able to hide from my parents when I was a teenager, without such a tool available, I can only imagine how much harder it would be to notice the depth of what this kid was going through.

              In the end I strongly believe that the company should put in much stronger safety features, and if they are unable to do so correctly, then my belief is that the product should just not be available to the public. People will misuse tools, especially a tool touted as AI when it is actually a glorified autocomplete.

              (Yes, I know that AI is a much larger term that also encompasses LLMs, but the actual limitations of LLMs are not well enough known by the public, and not communicated enough by the companies to the end users)

              • sugar_in_your_tea@sh.itjust.works · 17 days ago

                I hope that’s true; the article doesn’t mention anything about that. I’m just concerned that he was able to send up to 650 messages/day. Those are long sessions, and indicative that he likely didn’t have a lot going on.

                I definitely agree that the public needs to be more informed about LLMs, I’m just pushing back against the apparent knee-jerk assignment of blame onto LLMs. It did provide suicide support info as it should, and I don’t think providing it more frequently would’ve helped here. The real issue is the kid attributed more meaning to it than it deserved, which is unfortunately common. That should be something the parents and therapist cover, especially in cases like this where the kid is desperate for help.

            • ganryuu@lemmy.ca · 17 days ago

              I’m honestly at a loss here. I didn’t intend to argue in bad faith, so I don’t see how I moved any goalposts.