• Dekkia@this.doesnotcut.it
    10 months ago

    I struggle to see why numerous scientists (and even Sam ‘AI’ Altman himself) would be wrong about this but a random substack post holds the truth.

    • Takapapatapaka@lemmy.world
      10 months ago

      Having read the entire post, I think there’s a misunderstanding:

      • This post is about ChatGPT and LLM chatbots in general, not AI as a whole.
      • This post claims to be 100% aligned with scientists in saying that AI as a whole is bad for the environment.
      • What it claims is that chatbots are only 1-3% of AI use and yet benefit 400 million people (the rest is mostly business use, serving enterprises or very specific needs), therefore they do not consume much by themselves (just as we could keep 1-3% of cars running and be fine environmentally).
    • anus@lemmy.worldOP
      10 months ago
      1. Have you read the post?

      2. If you’d like to refute the content on the grounds of another scientist, can you please provide a reference? I will read it

  • Beppe_VAL@feddit.it
    10 months ago

    Even Sam Altman acknowledged last year the huge amount of energy needed by ChatGPT, and the need for a breakthrough in energy technology…

    • anus@lemmy.worldOP
      10 months ago

      Do you hold Sam Altman’s opinion higher than the reasoning here? In general or just on this particular take?

  • jonathan@lemmy.zip
    10 months ago

    ChatGPT energy costs are highly variable depending on context length and model used. How have you factored that in?

    • anus@lemmy.worldOP
      10 months ago

      This isn’t my article, and yes, that’s controlled for.

  • Takapapatapaka@lemmy.world
    10 months ago

    I was very sceptical at first, but this article kinda convinced me. I think it still has some bad biases: it often considers only one ChatGPT request in its comparisons, when in reality you quickly make dozens of them; it says ‘how weird to try and save tiny amounts of energy’ when we already do that with lights when leaving rooms and water when brushing teeth; and it focuses on energy (to train, cool, and generate electricity) rather than on the logistics and hardware required. But overall two arguments got me:

    • one ChatGPT request seems to consume around 3 Wh, which is relatively low
    • even with billions of requests daily, chatbots seem to represent less than 5% of AI power consumption; the real problem lies in the hands of corporations.
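    A rough back-of-the-envelope check of those two figures. The 3 Wh per request is the article's estimate as quoted above; one billion requests per day is an assumed round number for illustration, not a figure from the thread:

```python
WH_PER_REQUEST = 3       # article's per-request estimate (Wh)
REQUESTS_PER_DAY = 1e9   # assumed round number, for scale only

# Scale up to daily and yearly totals.
daily_kwh = WH_PER_REQUEST * REQUESTS_PER_DAY / 1000
yearly_gwh = daily_kwh * 365 / 1e6

print(f"{daily_kwh:,.0f} kWh/day")   # 3,000,000 kWh/day
print(f"{yearly_gwh:,.0f} GWh/year") # 1,095 GWh/year
```

    Even under that generous request count, the total is on the order of a terawatt-hour per year, which is the kind of magnitude the article compares against other everyday activities.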

    Still, it probably can’t hurt to boycott that stuff, but it’d be more useful to use less social media, especially platforms with videos or pictures, and to watch videos in 144p.

  • superkret@feddit.org
    10 months ago

    tl/dr: “Yes it is, but not as much as other things so stop worrying.”

    What a bullshit take.

    • anus@lemmy.worldOP
      10 months ago

      What makes this a bullshit take? Focusing attention on actual problems is a great way to make progress.

      • NeilBrü@lemmy.world
        10 months ago

        Oof, ok, my apologies.

        I am, admittedly, “GPU rich”; I have ~48GB of VRAM at my disposal on my main workstation, and 24GB on my gaming rig. Thus, I am using Q8 and Q6_L quantized .gguf files.

        Naturally, my experience with the “fidelity” of my LLM models re: hallucinations would be better.
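        For context on why those VRAM numbers determine which quantizations fit, here is a rough sketch of the weights-only memory footprint. The 32B parameter count is an illustrative assumption (not stated in the comment), and the bits-per-weight values are approximate effective rates for .gguf quantization levels:

```python
def weights_gb(params: float, bits_per_weight: float) -> float:
    """Approximate VRAM for model weights alone (no KV cache or overhead)."""
    return params * bits_per_weight / 8 / 1e9

# Assumed example: a 32B-parameter model.
# ~8.5 and ~6.6 bits/weight are rough effective rates for Q8 and Q6 quants.
print(f"32B @ Q8: {weights_gb(32e9, 8.5):.1f} GB")  # ~34.0 GB
print(f"32B @ Q6: {weights_gb(32e9, 6.6):.1f} GB")  # ~26.4 GB
```

        Under those assumptions, a 32B model at Q8 fits comfortably in ~48 GB of VRAM but not in 24 GB, which is why the quantization level available differs between the two machines described.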

    • anus@lemmy.worldOP
      10 months ago

      I actually think that (presently) self-hosted LLMs are much worse for hallucination.