• carrion0409@lemm.ee · +21/−4 · 2 months ago

    Every time I see a post like this, I lose a little more faith in humanity.

  • Beppe_VAL@feddit.it · +10/−2 · 2 months ago

    Even Sam Altman acknowledged last year the huge amount of energy needed by ChatGPT, and the need for an energy breakthrough…

    • anus@lemmy.worldOP · +2/−18 · 2 months ago

      Do you hold Sam Altman’s opinion higher than the reasoning here? In general or just on this particular take?

      • Neverclear@lemmy.dbzer0.com · +7 · 2 months ago

        What would Altman gain from overstating the environmental impact of his own company?

        What if power consumption is not so much limited by the software’s appetite, but rather by the hardware’s capabilities?

        • anus@lemmy.worldOP · +1/−11 · 2 months ago

          > What would Altman gain from overstating the environmental impact of his own company?

          You should consider the possibility that CEOs of big companies think very hard about how to frame everything so that it benefits them.

          I can see the benefits; I can try to explain if you’re actually interested.

          • Neverclear@lemmy.dbzer0.com · +1 · 2 months ago

            What weighs more: the cost of taking people at their word, or the effort it takes to interpret the subtext of every interaction?

              • Neverclear@lemmy.dbzer0.com · +1 · 2 months ago

                You seem to spend a lot of energy questioning people’s intentions, inventing reasons to doubt whether people’s intentions toward you are genuine. Some do deserve to be questioned, no doubt. It just seems draining, and to what end?

                Do you aim to be the sole determiner of truth? To never be duped again? To sharpen your skills as an investigator?

                How much more creative energy could you put into the world by taking people at their word in all but the highest risk cases?

                • anus@lemmy.worldOP · +1 · 2 months ago

                  A stoic desire to be informed and to be a force for good for others with like intentions.

  • superkret@feddit.org · +0 · edited · 2 months ago

    tl;dr: “Yes it is, but not as much as other things, so stop worrying.”

    What a bullshit take.

  • Takapapatapaka@lemmy.world · +3/−3 · edited · 2 months ago

    I was very sceptical at first, but this article kinda convinced me. I think it still has some bad biases: it often considers only one ChatGPT request in its comparisons, when in reality you quickly make dozens of them; it often says ‘how weird to try and save tiny amounts of energy’ when we already do that with lights when leaving rooms and water when brushing teeth; and it focuses on energy (to train, cool, and generate electricity) rather than on the logistics and hardware required. But overall, two arguments got me:

    • one ChatGPT request seems to consume around 3 Wh, which is relatively low
    • even with billions of daily requests, chatbots seem to represent less than 5% of AI power consumption; the rest is the real problem, and it lies in the hands of corporations (see the rough back-of-envelope sketch below)
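
    To make the scale of those two numbers concrete, here is a rough back-of-envelope sketch in Python. It uses the article’s ~3 Wh per request figure; the one-billion-requests-per-day volume and the 5% share are assumptions taken from this comment, not measured values.

    ```python
    # Rough back-of-envelope: daily chatbot energy and the implied total AI consumption.
    # Figures are assumptions from the discussion above, not measurements.

    WH_PER_REQUEST = 3.0               # article's estimate for one ChatGPT request (Wh)
    REQUESTS_PER_DAY = 1_000_000_000   # assumed ~1 billion requests per day
    CHATBOT_SHARE = 0.05               # assumed: chatbots < 5% of total AI power use

    daily_gwh = WH_PER_REQUEST * REQUESTS_PER_DAY / 1e9   # 1 GWh = 1e9 Wh
    implied_total_gwh = daily_gwh / CHATBOT_SHARE

    print(f"Chatbot inference: ~{daily_gwh:.1f} GWh/day")
    print(f"Implied total AI consumption: at least ~{implied_total_gwh:.0f} GWh/day")
    ```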

    Still, it probably can’t hurt to boycott that stuff, but it’d be more useful to use less social media, especially platforms with videos or pictures, and to watch videos in 144p.

      • NeilBrü@lemmy.world · +2 · edited · 2 months ago

        Oof, ok, my apologies.

        I am, admittedly, “GPU rich”; I have ~48GB of VRAM at my disposal on my main workstation, and 24GB on my gaming rig. Thus, I am using Q8 and Q6_L quantized .gguf files.

        Naturally, my experience with the “fidelity” of my LLM models re: hallucinations would be better.
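
        For anyone curious what running those quantized .gguf files locally looks like, here is a minimal sketch using llama-cpp-python (one common way to load GGUF models; the model path, context size, and prompt are placeholders, not the exact setup described above).

        ```python
        # Minimal sketch: load a Q8-quantized GGUF model and run one completion.
        # Model path, context size, and prompt are placeholders, not the poster's actual setup.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./models/example-model.Q8_0.gguf",  # hypothetical local file
            n_gpu_layers=-1,   # offload all layers to the GPU (needs enough VRAM)
            n_ctx=4096,        # context window
        )

        out = llm("Q: How much energy does one chatbot request use? A:", max_tokens=64)
        print(out["choices"][0]["text"])
        ```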

    • anus@lemmy.worldOP · +1/−9 · 2 months ago

      I actually think that (presently) self-hosted LLMs are much worse for hallucination.