You can take “justifiable” to mean whatever you feel it means in this context. e.g. Morally, artistically, environmentally, etc.

  • awmwrites@lemmy.cafe
    link
    fedilink
    English
    arrow-up
    83
    arrow-down
    33
    ·
    28 days ago

    My current list of reasons why you shouldn’t use generative AI/LLMs

    A) because of the environmental impacts and massive amount of water used to cool data centers https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117

    B) because of the negative impacts on the health and lives of people living near data centers https://www.bbc.com/news/articles/cy8gy7lv448o

    C) because they’re plagiarism machines that are incapable of creating anything new and are often wrong https://knowledge.wharton.upenn.edu/article/does-ai-limit-our-creativity/ https://www.plagiarismtoday.com/2024/06/20/why-ai-has-a-plagiarism-problem/

    D) because using them negatively affects artists and creatives and their ability to maintain their livelihoods https://www.sciencedirect.com/science/article/pii/S2713374523000316 https://www.insideradio.com/free/media-industry-continues-reshaping-workforce-in-2025-amid-digital-shift/article_403564f7-08ce-45a1-9366-a47923cd2c09.html

    E) because people who use AI show significant cognitive impairments compared to people who don’t https://www.media.mit.edu/publications/your-brain-on-chatgpt/ https://time.com/7295195/ai-chatgpt-google-learning-school/

    F) because using them might break your brain and drive you to psychosis https://theweek.com/tech/spiralism-ai-religion-cult-chatbot https://mental.jmir.org/2025/1/e85799 https://youtu.be/VRjgNgJms3Q

    G) because Zelda Williams asked you not to https://www.bbc.com/news/articles/c0r0erqk18jo https://www.abc.net.au/news/2025-10-07/zelda-williams-calls-out-ai-video-of-late-father-robin-williams/105863964

    H) because OpenAI is helping Trump bomb schools in Iran https://www.usatoday.com/story/opinion/columnist/2026/03/06/openai-pentagon-tech-surveillance-us-citizens/88983682007/

    I) because RAM costs have skyrocketed because OpenAI has used money it doesn’t have to purchase RAM from Nvidia that currently doesn’t exist to stock data centers that also don’t currently exist, inconveniencing everyone for what amounts to speculative construction https://www.theverge.com/news/839353/pc-ram-shortage-pricing-spike-news

    J) because Sam Altman says that his endgame is to rent knowledge back to you at a cost https://gizmodo.com/sam-altman-says-intelligence-will-be-a-utility-and-hes-just-the-man-to-collect-the-bills-2000732953

    K) because some AI bro is going to totally ignore all of this and ask an LLM to write a rebuttal rather than read any of it.

    • tomi000@lemmy.world
      28 days ago

      Good list, but we should keep it real.

      C is simply wrong; AIs have created a lot. By the reasoning that their output is only based on inputs, no human has ever created anything “new” either, because everything humans make is based on their experiences of the outside world.

      F is simply fearmongering and not helpful.

      • ramble81@lemmy.zip
        28 days ago

        And the plagiarism part? There’s a difference between derivative work based on the spirit of someone else’s work and flat out using someone else’s work. It’s the whole reason those laws exist.

        • tomi000@lemmy.world
          28 days ago

          Yes, definitely. Plagiarism is complicated and there’s no easy way to draw a line where it starts. But I’m not trying to defend AI here. I don’t like the way it is currently used at all. It’s just those points that I don’t agree with.

    • jimmy90@lemmy.world
      27 days ago

      i use it like a search engine or example generator

      i don’t trust anything it creates just like i don’t trust anything on the internet without validating it

      i take your point about it being wasteful tho; AI is like the oil of computing: incredibly wasteful for what it does

  • Denjin@feddit.uk
    28 days ago

    Medicine.

    Evidence shows that some highly specialised models are better at things like detecting breast cancer in scans than human doctors.

    Properly anonymised automatic second scans by an AI, to catch the markers that human doctors miss and flag them for another review by a specialist, is an excellent potential use case for medical AI.

    Transcription services can save doctors huge amounts of admin time and let them focus on the patient, if they know there’s a reliable system in place for typing up notes from a consultation. That only holds as long as the output is treated as a “please review that these notes are accurate” draft rather than a gospel recording, the data is destroyed once its job is complete, and the patient has been able to give informed consent.

    The way these things are being used in actual medical contexts right now is frankly terrifying.

    • Hossenfeffer@feddit.uk
      27 days ago

      I had a colonoscopy last year (such fun!) and there was an ‘AI’ monitoring the camera feed to detect anomalies. If it spotted something it just drew the doctor’s attention to it for his expert, human review. I was ok with that. Effectively an extra pair of eyes that can look everywhere on the screen all at once and never blink.

      • cynar@lemmy.world
        28 days ago

        That’s how AI systems should be used. A “heads up, something weird here” system.

        I could also see it being used well for patient history analysis. Often a doctor is treating one symptom of something larger; they can’t see the wood for the trees. An LLM could pick out oddities and flag them. The doctor can then filter out the mistakes and hallucinations, but be alerted to rare or unusual conditions that match the patient’s symptoms and history.

    • tomiant@piefed.social
      28 days ago

      Yeah, the sciences in general, I’d say. There’s a project aiming to translate the tens of thousands of cuneiform clay tablets that sit in storage because only a handful of people in the world can read them. AI is an amazing way to mass-translate them, unlocking vast troves of hitherto completely unknown ancient knowledge.

      The problem is not even the AI, but the scientists themselves, who guard the tablets jealously because they don’t want anyone else to translate “their” tablets that they dug up, even though they couldn’t possibly make a dent in the sheer volume in their collected lifetimes.

      Imagine, so much information encoded, from thousands of years ago that could reveal so much about the origins of our culture and civilization!

  • Pinetten@pawb.social
    28 days ago

    I think anything with text generation is fine. A string of Google searches is likely to eat more resources than that anyway. Also, fuck Google, use Ecosia. But when I suspect an answer isn’t one quick search away, I’d much rather use Le Chat for answers than give Reddit traffic, or have to wade through the shite that is Fandom, Wikia or whatever. Not to mention using AI helps me get past the issue of having to check multiple sites for an answer, just to find that the answer is “Google it” or “Nvm, solved it”. Some of you fuckers did this.

    However, people need to understand that an AI is exactly as fallible as any person. Yes, it has access to and the capability to handle way more data, but between trying to please you and just getting its wires crossed, it’s going to make mistakes. YOU need to be able to assess the accuracy of the output. The more important the topic, the more careful you need to be, and always assume that the possibility of error is there no matter how hard you try - JUST LIKE WITH ANY BIT OF INFORMATION.

    I see so many people cite academic articles like they prove whatever claim they are making, just to find that the study in question was funded by The Company That Wants to Prove The Claim and the sample size was 3 people who work for The Company That Wants to Prove The Claim. At least AI has a small chance of pointing the issue out if YOU yourself tell it to be critical - and I actually suspect this is part of the reason some people hate AI. They don’t like that it absolutely can be more intellectually rigorous than a person with an emotional investment in whatever they want to be true. Yes, you can have an AI asspat your grandest delusions, but if you actually try to get it to be critical, it will be. You can use a hammer to hit people, or you can use it on a nail as intended (and how many times you hit your own fingers is on you, not the hammer).

    I would draw a line at artwork, videos and music. While I’m not going to crucify actual artists using AI assistance to take some tedium out of a project, I still wouldn’t encourage it. Stolen artwork used to train AI is one thing, and the environmental impact is VASTLY greater than for text alone: generating one AI image can use as much energy as 1,000 text responses. I would also really like to be able to completely opt out of AI slop on media sites. I fucking hate that Soundcloud allows it.

    And a last point on AI text responses: if you saw the rise of the alt-right and the anti-vaxx stuff, you’re probably familiar with gish galloping and Brandolini’s Law. If not, you really fucking should be. AI can make it so much easier to debunk misinformation. YES, it can make it easier to perpetuate too, but this is where we see the AI arms race. Bad actors can AND WILL use AI to fill any void with their rhetoric. If you value truth and facts and want to prevent misinformation from spreading, you are gimping yourself if you’re not using AI.

    • MagicShel@lemmy.zip
      28 days ago

      I use Suno on occasion. I enjoy writing poetry, and being able to turn it into a song is something I find fun and inspirational, driving me to write more than I have in decades. I could never, ever write a chord of music.

      I don’t share it. It’s just for personal gratification. If it’s super good, maybe I’d share it with some friends on Discord who are super into AI. Thing is, part of a song might be super good, but I’ve never had an entire song turn out the way I want. And I’ve found no one ever thinks a song is as good or interesting as the prompter does.

      AI is like the cheap consumer goods of art and thought. Cheap, but not quality or durable. It works and looks great if gently used, but as soon as it gets any real pressure or scrutiny, it falls apart.

      I think it’s likely, if we continue down that path, to be the artistic equivalent of IKEA vs. a master woodworker. You can buy an end table for $30, or you can buy something hand-crafted from teak and mahogany for $3,000. A lot of people like IKEA, but if they weren’t around, a nice end table might cost $600 and be heirloom quality (if not as good as the $3k one). But today that middle market doesn’t exist. Rather, it does, but it’s filled with IKEA-quality shit dressed up to look a bit nicer temporarily. I don’t know, maybe my analogy fell apart.

      I’m just saying that these things are fun and interesting on an individual level, but I agree they shouldn’t be commercial. We should just make it so that there are no enforceable rights granted on anything AI produces. It can be freely copied and distributed. But that doesn’t help real artists make a living. And their work should be appreciated and respected (and result in a lifestyle that affords them the ability to keep making art).

      • Pinetten@pawb.social
        28 days ago

        I don’t agree with the use, but at least you’re keeping it private. Not gonna crucify you, because I understand the appeal. I’d encourage you to find a way to pay for it though, or even just start making donations to some environmental cause as a way of offsetting.

        • MagicShel@lemmy.zip
          28 days ago

          That’s a pretty reasonable ask. I do donate to other things I use like Lemmy. I like your suggestion.

  • tomiant@piefed.social
    28 days ago
    1. The sciences obviously

    2. For me personally, data collation

    3. Learning

    4. Assisting with Linux sysadmin stuff (a “how do I X” used to mean hours of scouring online forums and asking questions that might be deleted because of draconian forum rules, or get answered weeks later if at all; now I can get shit done in minutes)

    5. I also use it a lot to explore ideas and arguments, like a sort of metaphysical sparring partner.

  • Anbalsilfer@lemmy.ml
    28 days ago

    I have autism and ADHD, and have been frustrated throughout my entire life by my inability to realize any of my numerous ideas due to double executive dysfunction. While I see many drawbacks from using these models - the most serious one as it currently stands being their water consumption - I’ve come to consider them a very important support tool for people in a similar position as myself.

    • Rhynoplaz@lemmy.world
      28 days ago

      I hear you. A lot of times my ideas are just a “vibe”, and starting is the hardest part. I haven’t used AI much at all, but I can see how having a prompt to get you started can get the creative ball rolling.

      • MagicShel@lemmy.zip
        28 days ago

        “Starting is the hardest part.”

        I’m a technical lead for my teams. We also have a technical architect, but he’s a bit newer than me, so some of the architecture falls on me while he’s laser-focused on a big project.

        We worked together last week because his designs… well, they were bad — so bad I was worried for the project and maybe ultimately his job. But what I found was they were very roughly the right shape and gave context for thinking and refinement, and I was able to question things and suggest all kinds of improvements. Mostly all I did was point out things like “this data here seems to be in a process that doesn’t need it” or “are we putting the generation of two completely different objects in the same component? That might not be good separation of concerns.”

        My own architectural designs… I have none and I’ve had much longer to do them. I need that shit version to refine. I need the brainstorming process with a partner to refine — not all of my suggestions were golden. I got push back and my own ideas fell apart sometimes. The end result is much stronger for our collaboration. But it was an expensive process. Man, I wish AI could fill that role for me.

        In fact my biggest complaint about using AI is that it rarely pushes back and pressure tests me. Even when I prompt it to do so it falls apart under the slightest argument.

        Except, strangely, sometimes I have it analyze my words for Teams, or email, or especially here, and provide feedback. And every once in a while it’ll fixate on something that is my style and tell me it’s bad, or won’t resonate, or will push away some readers, and I’m like: but that’s my style. If I change that I’m not being genuinely me. And so I don’t change it, but it keeps harping on it. “I know you said you won’t change this but…”

        If only it would do that in any other context.

    • Artisian@lemmy.world
      28 days ago

      Do check the vlogbros’ summary of the AI water issue. TL;DR: it’s negligible compared to the real water hog (corn), and it’s being managed.

  • Quazatron@lemmy.world
    28 days ago

    It’s not going away. The cat is out of the bag.

    As with any tool it has its use cases. It’s not a good fit for everything. You can drive a screw with a hammer but a screwdriver works best.

    We’re experiencing the capitalist euphoria that happens when something new comes along. This needs to get regulated into submission like all the previous bubbles.

    • cRazi_man@europe.pub
      28 days ago

      Tech bros benefit from saying that AI is the solution to everything… And people are eating this up.

      You’re exactly right that it really should be the right tool for the right job, and people don’t know how to differentiate good vs bad uses of AI.

      I’ve used it for getting over my Linux migration problems. I’ve also used it to help me set up my home server. I’ve used the tech bros’ tools to remove as many tech bro products as I can from my life. I think this is the perfect use of AI: a non-critical problem with good impact and absolutely no consequence when it is completely wrong. I ask AI to interpret massive Docker log files for me and point me in the right direction. Once I know what the problem might be, I can go read human-written solution posts.
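      That log-triage workflow works even better if you pre-filter before pasting anything into a chatbot. A minimal sketch, purely hypothetical (the keyword pattern and context size are my own assumptions, not any real tool): pull only the error/warning lines, plus a couple of lines of surrounding context, out of a huge `docker logs` dump, so the model sees kilobytes instead of megabytes.

```python
import re

# Hypothetical pre-filter for huge container logs: keep only lines that
# look like errors/warnings, plus a little surrounding context.
# The keyword pattern and default context window are assumptions.
ERROR_PATTERN = re.compile(
    r"\b(error|fatal|panic|warn(?:ing)?|exception)\b", re.IGNORECASE
)

def triage(log_text: str, context: int = 2) -> str:
    """Return only the lines around errors/warnings, in original order."""
    lines = log_text.splitlines()
    keep: set[int] = set()
    for i, line in enumerate(lines):
        if ERROR_PATTERN.search(line):
            lo = max(0, i - context)
            hi = min(len(lines), i + context + 1)
            keep.update(range(lo, hi))  # keep the match plus its context
    return "\n".join(lines[i] for i in sorted(keep))

# Usage sketch: `docker logs mycontainer 2> app.log`, then paste the
# output of triage(open("app.log").read()) into the chat instead of
# the whole file.
```

      On a quiet log this returns an empty string, which is itself a useful signal before burning any AI time on the full file.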

      I know people have successfully used AI to write letters to get out of unfair parking tickets, battle shitty landlords, and do the shitty useless tasks that bosses ask them to do. I fully support using AI to push back against overbearing authority. Use their own tool to stick it to the man! We just need to prioritise reducing the climate, energy and water impact so it doesn’t destroy the planet at the same time. I want ethical AI that doesn’t steal everyone’s content.

  • ace_garp@lemmy.world
    28 days ago

    Scientific use on your own massive data sets (think hundreds of TB) - sure.

    Consumer chatbot uses - May give the illusion of positive results, whereas the long-term outcome is an overall negative effect on the user.

    • Vlyn@lemmy.zip
      28 days ago

      Give me back my Google search from 10 years ago and alright, no need for AI.

      Nowadays Google is so unusable that I actually go to Claude first if I need to research something.

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social
    28 days ago

    Would an upscaler be considered generative? That’s really all I can think of, though I’d say calling upscalers generative is a bit of a stretch, using the basic idea of “generation” extremely loosely.

    Oh, and helping find new chemicals for medicine and other medical research. Of all the things that might benefit from “throwing everything at a wall and seeing what sticks”, that’s the one place it could do real good.

  • Iconoclast@feddit.uk
    28 days ago

    Of course, but I know better than to even bother trying to have a civil discussion about it here.

  • venusaur@lemmy.world
    28 days ago

    For sure. You could absolutely create and train a model ethically. It wouldn’t be nearly as useful in many respects, but it would be gen AI. From an environmental perspective, you could ask the same question of CPU-intensive gaming: people play games for hours, using similar (often more) electricity than a small locally run LLM.

  • Libb@piefed.social
    28 days ago

    Everything can be justified. Even the most… miserable actions. Here is one: I let a kid drown because I was busy saving a couple of other kids who were drowning too. It’s a legit choice, but it is also not OK, and I would not want to be in the shoes of anyone having to face that situation and live with the aftermath.

    Regarding AI, I don’t think the question should be whether it is justifiable or not. It’s a tool; it needs no justification besides filling a purpose, as a hammer or even a gun does.

    The question should be whether we’re OK with a tool (one developed from humanity’s common knowledge) that will deeply change all our lives and all of humanity’s future being owned and controlled by a handful of multi-billionaires who are already actively working their worst to make the world unfit for most of us. Or whether we want that tool to be ours, so we can decide for ourselves what limits to put on its usage.

    Well, at least that’s what I think.

    I have no hate towards AI. No more than I hate a hammer (edit: or a gun) when someone uses it to commit a murder. I’m much more critical of the way AI is not being developed as a common good, which to me is unacceptable for a tool that only exists because of our common knowledge.

  • CodenameDarlen@lemmy.world
    28 days ago

    Ask the programmer bros who work in corporate hell… it’s almost mandatory today if you want to earn money programming.

    If you’re in a dev company that doesn’t require AI, it’s just a matter of time.

    I think programmers are responsible for something like 90% of AI’s environmental impact. I have a friend who works at a big company; they use AI literally everywhere you can imagine, even on Slack to answer colleagues’ messages. They need to feed huge codebases to the AI for context; in the end it’s more resource-hungry than generating video or images a few times a day.

  • jtrek@startrek.website
    27 days ago

    I have used Copilot a couple of times to ask “I have this scenario and want to do this. What are my options?”. I’d rather have a good internet search and real people, but that’s all shitted up.

    The answers from the LLM aren’t even consistently good. If I didn’t know programming, I wouldn’t be able to use this information effectively. That’s probably why a lot of vibe coding is so bad.

    • AA5B@lemmy.world
      26 days ago

      Same.

      • I think of AI search as a summary of the first page of search results. It takes slightly longer to come back, but might save you time evaluating. Much of the time you still need to click into the original source, though.
      • AI writing, unfortunately, is valued at my company. I suppose it helps us engineers write more effective docs, but it doesn’t really add value technically, and the results are obviously AI. I’ve used it to translate technical docs into wording so management can say “look how we use AI”.
      • AI coding is better. I use it through my IDE as effectively an extension of autocomplete: where the IDE can autocomplete function signatures, for example, the AI can autocomplete multiple lines. It’s very effective in that scenario.
      • I’m just starting with more complex rulesets. I’ve gotten code reviews with good results, except when it doesn’t keep me in the loop, and then it inevitably goes very wrong. I’ve really polished my git knowledge unwinding cases where someone trusted AI results without evaluating them and then failed forward, trying to get the AI to fix itself, until they couldn’t find their way back. This past week I’ve been playing with a refactoring ruleset (copied from online). It’s finding some good opportunities, and the verbal description of each fix is good, but I’ll need to tweak the ruleset before the generated solutions are usable.

      The short version is that it appears to be a useful tool, IFF you can spend the time to develop thorough rulesets, a stable of MCP servers, and, most importantly, the expertise to do it all yourself.

  • Dumhuvud@programming.dev
    27 days ago

    GenAI is a plagiarism machine. If you use it, you’re complicit.

    Ethics aside, LLMs in particular tend to “hallucinate”. If you blindly trust their output, you’re a dumbass. I honestly feel bad for young people who should be studying but are instead relying on ChatGPT and the like.

  • Canopyflyer@lemmy.world
    27 days ago

    LLMs have their uses, there is no doubt about that. I’m in the middle of creating a homebrew campaign for my D&D group, and unfortunately I’m a lousy artist and wanted a few things visualized. So I used an image-generating AI to create something with the visuals I wanted. I’m going to use it for my campaign, and it will probably just sit on my hard drive after I’m done.

    My employer is rolling out AI and is asking us to find places to insert it into our workflows. I am doing that with my team, but none of us are really sure if it will be of any benefit.

    The problem right now is we’re at the stage where idiots are convinced it is something that it is not, and they have literally thrown tens of billions of dollars at it. Now… they are staring at the wide abyss between the amount of money they invested and the amount of money people are willing to pay for it.

    I’ve seen arguments for and against the existence of an AI bubble… Personally, I think it’s a bubble so large that it will take down several long-established computer industry manufacturers when it pops. Those arguing there’s no bubble probably have large investments that they do not want to see fail.