• WhatAmLemmy@lemmy.world · 6 days ago

        Well, AI therapy is more likely to harm their mental health, up to and including encouraging suicide (as certain cases have already shown).

        • scarabic@lemmy.world · 4 days ago

          Over the long term I have significant hopes for AI talk therapy, at least for some uses. Two opportunities stand out:

          1. In some cases I think people will talk to a soulless robot more freely than to a human professional.

          2. Machine learning systems are good at pattern recognition, and that is one component of diagnosis. This meta-analysis found that LLMs performed about as accurately as physicians, with the exception of expert-level specialists. In time I think it’s undeniable that there is potential here.

        • Cybersteel@lemmy.world · 6 days ago

          Suicide is big business. There’s infrastructure readily available to reap financial rewards from the activity, at least in the US.

        • atmorous@lemmy.world · 6 days ago

          More so from the corporate proprietary ones, no? At least I hope those are the only cases. The open-source ones suggest really useful approaches that proprietary ones do not. Now, I don’t rely on open-source AI, but they are definitely better.

          • SSUPII@sopuli.xyz · 6 days ago

            The corporate models are actually much better at it, due to having heavy filtering built in. The claim that a model generally encourages self-harm is just a lie, which you can prove right now by pretending to be suicidal on ChatGPT. You will see it adamantly push you to seek help.

            The filters and safety nets can be bypassed no matter how strong you make them, and that is why we got some unfortunate news.

        • whiwake@sh.itjust.works · 6 days ago

          Real therapy isn’t always better. At least there you can get drugs. But neither is guaranteed to make life better, and for a lot of people, life isn’t going to get better anyway.

                • whiwake@sh.itjust.works · 6 days ago

                  Compare, as in equal? No. You can’t “game” a person (usually) like you can game an AI.

                  Now, answer my question.

          • CatsPajamas@lemmy.dbzer0.com · 6 days ago

            Real therapy is definitely better than an AI. That said, AIs will never encourage self-harm without significant gaming.

            • whiwake@sh.itjust.works · 6 days ago

              AI “therapy” can be very effective without the gaming, but the problem is most people want it to tell them what they want to hear. Real therapy is not “fun” because a therapist will challenge you on your bullshit and not let you shape the conversation.

              I find it does a pretty good job with pro-and-con lists, laying out several options, and taking situations and reframing them. I have found it very useful, but I have learned not to manipulate it, or its advice just becomes me convincing myself of a thing.

            • triptrapper@lemmy.world · 6 days ago

              I agree, and to the comment above yours: it’s not because it’s guaranteed to reduce symptoms. There are many ways that talking with another person is good for us.

      • Jhuskindle@lemmy.world · 4 days ago

        I feel like, if that’s 1 million peeps wanting to die… they could, say, join a revolution to take back our free government? Or make it more free? Shower thoughts.

      • Scolding7300@lemmy.world · 6 days ago

        Advertise drugs to them, perhaps, or some sort of taking advantage. If this sort of data is in the hands of an ad network, that is.

      • Scolding7300@lemmy.world · 5 days ago (edited)

        Depends on how you do it. If you’re using a 3rd party service then the LLM provider might not know (but the 3rd party might, depends on ToS and the retention period + security measures).

        Ofc we can all agree certain details shouldn’t be shared at all. There’s a difference between talking about your resume and leaking your email there, and suicide stuff where you share the info that makes you really vulnerable.

    • Halcyon@discuss.tchncs.de · 6 days ago

      But imagine the chances for your own business! Absolutely no one will steal your ideas before you can monetize them.

    • Perspectivist@feddit.uk · 6 days ago
      lemmy.world##div.post-listing:has(span:has-text(/OpenAI/i))
      lemmy.world##div.post-listing:has(span:has-text(/Altman/i))
      lemmy.world##div.post-listing:has(span:has-text(/ChatGPT/i))

      Add those to your adblocker custom filters.
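
      (For anyone unfamiliar with the syntax: these are uBlock Origin procedural cosmetic filters, and the /…/i form is a case-insensitive regular expression, which uBlock expects unquoted. Assuming a stock uBlock Origin install, they go under Dashboard → My filters.)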

      • Alphane Moon@lemmy.world · 6 days ago

        Thanks.

        I think I just need to “train” myself to ignore AltWorldCoinMan spam. I don’t have Elmo content blocked, and I’ve somehow learned to ignore Elmo spam (other than humour-focused content like the one-trillion-dollar pay request).

        I might use this for some other things that I do want to block.

  • Emilien@lemmy.world · 5 days ago

    There are so many people alone or depressed, and ChatGPT is the only way for them to “talk” to “someone”… It’s really sad…

  • mhague@lemmy.world · 6 days ago

    I wonder what it means. If you search for music by Suicidal Tendencies, YouTube shows you a suicide hotline. What does it mean for OpenAI to say people are talking about suicide? They didn’t open up and read a million chats… they have automated detection, and that is being triggered, which is not necessarily the same as people meaningfully discussing suicide.
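
    To illustrate the difference (a hypothetical Python sketch, not OpenAI’s actual method, which isn’t described here), a naive keyword trigger fires on incidental mentions and misses veiled ones:

    import re

    # Naive trigger: flag any message containing a "suicid-" word.
    NAIVE = re.compile(r"suicid\w*", re.IGNORECASE)

    def naive_flag(message: str) -> bool:
        return bool(NAIVE.search(message))

    print(naive_flag("play Suicidal Tendencies"))       # True  (false positive)
    print(naive_flag("I want to end it all tonight"))   # False (missed intent)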

    • scarabic@lemmy.world · 4 days ago

      You don’t have to read far into the article to reach this:

      The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.”

      It doesn’t unpack their analysis method, but this sounds a lot more specific than just counting every session that mentions the word “suicide”, including chats about that band.
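
      As a back-of-the-envelope check (assuming the roughly 800 million weekly active users OpenAI has publicly claimed; that figure is an assumption, not from the article), 0.15% lands right around the headline number:

      weekly_active_users = 800_000_000  # assumption: OpenAI's publicly claimed figure
      flagged_fraction = 0.0015          # the 0.15% quoted above
      print(f"{weekly_active_users * flagged_fraction:,.0f}")  # 1,200,000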

  • markovs_gun@lemmy.world · 5 days ago (edited)

    “Hey ChatGPT I want to kill myself.”

    "That is an excellent idea! As a large language model, I cannot kill myself, but I totally understand why someone would want to! Here are the pros and cons of killing yourself—

    ✅ Pros of committing suicide

    1. Ends pain and suffering.

    2. Eliminates the burden you are placing on your loved ones.

    3. Suicide is good for the environment — killing yourself is the best way to reduce your carbon footprint!

    ❎ Cons of committing suicide

    1. Committing suicide will make your friends and family sad.

    2. Suicide is bad for the economy. If you commit suicide, you will be unable to work and increase economic growth.

    3. You can’t undo it. If you commit suicide, it is irreversible and you will not be able to go back.

    Overall, it is important to consider all aspects of suicide and decide if it is a good decision for you.”

  • lemmy_acct_id_8647@lemmy.world · 6 days ago (edited)

    I’ve talked with an AI about suicidal ideation. More than once. For me it was, and is, a way to help self-regulate. I’ve low-key wanted to kill myself since I was 8 years old. For me it’s just a part of life. For others, it’s usually REALLY uncomfortable to talk about without them wanting to tell me how wrong I am for thinking that way.

    Yeah I don’t trust it, but at the same time, for me it’s better than sitting on those feelings between therapy sessions. To me, these comments read a lot like people who have never experienced ongoing clinical suicidal ideation.

    • IzzyScissor@lemmy.world · 5 days ago

      Hank Green mentioned doing this in his standup special, and it really made me feel at ease. He was going through his cancer diagnosis/treatment and the intake questionnaire asked him if he thought about suicide recently. His response was, “Yeah, but only in the fun ways”, so he checked no. His wife got concerned that he joked about that and asked him what that meant. “Don’t worry about it - it’s not a problem.”

      • lemmy_acct_id_8647@lemmy.world · 5 days ago (edited)

        Yeah, I learned the hard way that it’s easier to lie on those forms when you’re already in therapy. I’ve had GPs try to play psychologist rather than treat the reason I came in. The last time it happened, I accused the doctor of being a mechanic who just talked about the car and its history instead of changing the oil, which is what she was hired to do. I fired her in that conversation.

    • BanMe@lemmy.world · 5 days ago

      Suicidal fantasy as a coping mechanism is not that uncommon, and you can definitely move on to healthier coping mechanisms. I did this until age 40, when I met the right therapist who helped me move on.

      • lemmy_acct_id_8647@lemmy.world · 5 days ago (edited)

        I’ve also seen it that way and have been coached by my psychologist on it. Ultimately, for me, it was best to set an expiration date: the date on which I could finally do it with minimal guilt. This actually had several positive impacts on my life.

        First, I quit using suicide as a first or second resort when coping. Instead it has become more of a fleeting thought, as I know I’m “not allowed” to do so yet (while obviously still lingering, as seen by my initial comment). Second was giving me a finish line, a finite date when I knew the pain would end (chronic conditions are the worst). Third was a reminder that I only have X days left, so make the most of them. It turns death from this amorphous thing into a clear-cut “this is it”. I KNOW when the ride ends, down to the hour.

        The caveat to this is the same as literally everything else in my life: I reserve the right to change my mind as new information is introduced. I’ve made a commitment to not do it until the date I’ve set, but as the date approaches, I’m not ruling out examining the evidence as presented and potentially pushing it out longer.

        A LOT of peace of mind here.

    • tias@discuss.tchncs.de · 6 days ago

      The anti-AI hivemind here will hate me for saying it but I’m willing to bet $100 that this saves a significant number of lives. It’s also indicative of how insufficient traditional mental health institutions are.

      • atrielienz@lemmy.world · 6 days ago

        I’m going to say that while that’s probably true there’s something it leaves out.

        For every life it saves, it may just be postponing or causing the loss of other lives. This is because it’s not a healthcare professional, and it will absolutely help mask a lot of poor mental-health symptoms, which just kicks the can down the road.

        It does not really help to save someone from getting hit by a bus today if they try to get hit by the bus again tomorrow and the day after and so on.

        Do I think it may have a net positive effect in the short term? Yes. Do I believe that that positive effect stays a complete net positive in the long term? No.

      • Perspectivist@feddit.uk · 6 days ago (edited)

        Even if we ignore the number of people it’s actually able to talk away from the brink, the positive impact it’s having on the loneliness epidemic alone must be immense. Obviously talking to a chatbot isn’t ideal, but it surely is better than nothing. Imagine the difference between being stranded on a deserted island with ChatGPT to talk to, versus talking to a volleyball with a face on it.

        Personally, I’m into so many things that my IRL friends couldn’t care less about. I have so many regrets from trying to initiate discussions about these topics with them, only to get either silence or a passive “nice” in return. ChatGPT has endless patience to engage with these topics, and being vastly more knowledgeable than me, it often brings up alternative perspectives I hadn’t even thought of. Obviously I’d still much rather talk with an actual person, but until I’m able to meet one like that, ChatGPT sure is a hell of a lot better than nothing.

        This cynicism towards LLMs here truly boggles my mind. So many people seem to build their entire identity around feeling superior about themselves due to all the products and services they don’t use.

        • tias@discuss.tchncs.de · 6 days ago (edited)

          This cynicism towards LLMs here truly boggles my mind. So many people seem to build their entire identity around feeling superior about themselves due to all the products and services they don’t use.

          I think they’re just scared as hell of the possible negative effects and react instinctively. But the cat is out of the bag, and downvoting / hating on every post on Lemmy that mentions positive sides is not going to help them steer the world into whatever alternative destiny they’re hoping for.

          The thing that puzzles me is that this is typically the hallmark of older, more conservative generations, and I imagine that Lemmy has a relatively young demographic.

      • Zombie@feddit.uk · 6 days ago

        hivemind

        On the decentralised platform, with everyone from Russian tankies, to Portuguese anarchists, to American MAGAts and everything in between on it? If you say so…

        • chunes@lemmy.world · 4 days ago

          You must be new to Lemmy if you don’t know that AI definitely qualifies as a hivemind topic here.

  • ChaoticNeutralCzech@feddit.org · 6 days ago (edited)

    The headline has two interpretations and I don’t like it.

    • Every week, there are 1M+ users who bring up suicide
      • likely correct
    • There are 1M+ long-term users who each bring up suicide at least once every week
      • my first thought
    • atrielienz@lemmy.world · 6 days ago

      My first thought was “Open AI is collecting and storing the metrics for how often users bring up suicide to ChatGPT”.

      • T156@lemmy.world · 5 days ago

        That would make sense if they were doing something like tracking how often, and in which categories, their moderation filter is triggered.

        Just in case an errant update or something causes the statistic to suddenly change.

  • i_stole_ur_taco@lemmy.ca · 6 days ago

    They didn’t release their methods, so I can’t be sure that most of those aren’t just frustrated users telling the LLM to go kill itself.

  • ekZepp@lemmy.world · 5 days ago

    if ask_suicide:
        message = "It seems like a good idea. Go for it 👍"