As I’ve said elsewhere, I’m a little older. I hear a lot about AI. I’m just trying to figure out what’s “good” AI, what’s “bad,” and if there’s even a difference. I do know there’s the whole stealing-content-to-train-AI BS going on, but does it go deeper? Is there such a thing as good AI? Just trying to learn so I can be a better person.

  • agent_nycto@lemmy.world · 17 hours ago

    Here’s a fairly well-researched and entertaining video about AI and some of the downsides:

    Some More News

    Long story short, in my opinion, there isn’t a good AI. The things it sets out to do, it does poorly, and there are ethical, bodily, environmental, and mental concerns with it.

  • Riskable@programming.dev · 2 days ago

    Oh my. This is a huge can of worms—especially on Lemmy. There’s a lot of anti-AI hate on this platform. Almost to the point of it being a religion.

    For reference, when people say, “AI” they’re usually talking about Large Language Models (LLMs) and other forms of generative AI (e.g. diffusion models that make images). Having said that, “AI” is an enormous topic of which LLMs are a small, but increasingly popular part.

    Furthermore, when people here on Lemmy say, “AI” they’re normally talking about “Big AI” which consists of:

    • OpenAI (ChatGPT)
    • Microsoft (Copilot)
    • Anthropic (Claude)
    • Meta (Whatsapp, Facebook, Instagram, Llama models, and more)
    • Google (Gemini and shittons of other things people don’t see and often don’t even have names people outside of Google would recognize)
    • Amazon (because they’re hosting the data centers that power a lot of the other players and also do AI stuff on their own)

    Is AI inherently bad or evil? No. It’s just the latest way of giving instructions to a computer. Considering that all computer programs are literally just instructions, an AI model is just a really fancy and often expensive way of performing the same function. Albeit with a lot more breadth and flexibility. Note that I didn’t say “depth”, haha.

    The “bad” or “evil” part of AI is mostly due to the large players (aka “Big AI”) spending literally over $1 trillion so far on data centers and hardware. There’s so much demand for their services that they’re having to build their own—often dirty, fossil fuel—power plants just to power it all.

    A lot of the talk around data centers is based on myths. For example, generating an image with AI doesn’t use a liter of water. A study came out that no one actually read (beyond the summary) that stated that a really long conversation with an LLM could in theory use up half a liter of water, assuming the data center was powered by a fossil fuel power plant that was using water for cooling (as in, the heat dissipation required 0.5 liters of water from the cooling pond next to the power plant, not potable/drinking water).
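    The underlying arithmetic is simple: multiply the energy a conversation consumes by the facility’s water-use effectiveness (WUE, liters of cooling water per kWh). Every number below is an illustrative assumption on my part, not a measured figure:

```python
# Rough water-per-conversation estimate (all figures purely illustrative)
energy_per_conversation_kwh = 0.25  # assumed: one very long chat session
wue_liters_per_kwh = 1.8            # assumed water-use effectiveness (WUE)

water_liters = energy_per_conversation_kwh * wue_liters_per_kwh
print(f"~{water_liters:.2f} L per long conversation")  # ~0.45 L
```

    Plug in different assumptions and you land anywhere from a few milliliters to that oft-quoted half liter, which is exactly why the “one image = one liter” version of the claim doesn’t hold up.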

    LLMs do use up a lot of power though! People often assume this comes from training the AIs (which I’ll get to in a moment), because everyone “knows” that’s a long, involved process that can take months (even with a $50 billion data center built specifically for AI). However, it’s actually all the people and businesses using AI that eat up most of that energy. The biggest, most power-hungry step is “inference,” which is the point where the LLM works out what you just asked of it and generates a response.
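    For a sense of scale, here’s a back-of-envelope sketch of per-query inference energy; both figures are assumptions I made up for illustration:

```python
# Back-of-envelope: energy per LLM query (all figures purely illustrative)
gpu_power_kw = 0.7       # assumed draw of one inference GPU, in kW
seconds_per_query = 2.0  # assumed generation time per response

energy_kwh = gpu_power_kw * seconds_per_query / 3600
print(f"~{energy_kwh * 1000:.2f} Wh per query")  # ~0.39 Wh
```

    A fraction of a watt-hour sounds tiny, but multiply it by billions of queries a day (plus cooling and networking overhead) and the data-center build-out starts to make sense.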

    The important point here is that AI is actually being used. There’s real demand for it! It’s not just fools asking ChatGPT for strange pizza recipes. It’s mostly businesses using it for things like writing and checking code, or investigating server logs for malicious activity, or any number of very businessy IT things.

    The demand for AI services is so great that they can’t build data centers fast enough. Big AI specifically is having trouble keeping responses within satisfactory time windows. The business models are still developing, but in a lot of cases they’re not charging enough to make up for their spending. OpenAI and Microsoft in particular are losing money like crazy trying to compete.

    I ran out of time… I’ll reply again about the copyright situation, training costs, and open weight (aka open source) models in a bit…

    • agent_nycto@lemmy.world · 17 hours ago

      There’s a lot of anti-AI hate on this platform. Almost to the point of it being a religion.

      There’s a lot of justified hate, outside of Lemmy as well. The irony of saying it’s like a religion when there’s people worshipping their AI out there is notable.

      No. It’s just the latest way of giving instructions to a computer.

      While that’s sort of true, it’s obfuscating what actually happens. You’re technically just giving instructions to a computer, but it’s not like a software program on your personal computer. You’re sending a message out to a very large computer to run a very large, complicated program, while a lot of other people are doing the same.

      The “bad” or “evil” part of AI is mostly due to the large players (aka “Big AI”) spending literally over $1 trillion so far on data centers and hardware.

      There’s more than that. There’s the ethical concerns of making pornography of people without their consent, especially minors. There’s art theft. There’s people losing jobs. There’s the environmental issues. There’s the mental issues. There’s the problems with people trying to get jobs. There’s the drop in reading comprehension. There’s the people being driven to kill or kill themselves over it. There’s people falling in love with their AI and avoiding other people for it. There’s the noise. The water usage. The electrical pull. The Ponzi scheme funding.

      You’re trying to preemptively say that these complaints are only about the big AI, but these are inherent for all of them.

      There’s so much demand for their services that they’re having to build their own—often dirty, fossil fuel—power plants just to power it all.

      Source? People are already having to pay more for electricity. Tahoe is about to not have any electricity because of the AI center.

      Also those sus dash marks.

      the heat dissipation required 0.5 liters of water from the cooling pond next to the power plant, not potable/drinking water).

      Ok, but where do you think the water to fill that pond was acquired? It’s from local sources. Closed-loop systems aren’t actually great for the environment, either. You remember the water cycle? Where water evaporates, turns into clouds, turns into rain, and repeats itself? Well, there’s only a specific amount of water on the planet, and only some of it is usable by humans. Data centers and AI centers using closed-loop systems take a huge chunk of water out of that water cycle. With global warming in the mix, we’re starting to run out. Oh, and data centers and AI centers don’t disclose how much water they are taking out of the local system, so we can only guess, but the best estimate is summed up as “a fuck load”.

      However, it’s actually all the people and businesses using AI that uses up all that energy. The biggest, most power-hungry step is “inference” which is the point where the LLM tries to figure out what you just asked of it.

      Saying “it doesn’t use power unless you use it” isn’t really an argument against its power usage. And saying it uses more power after it’s deployed is worse.

      The important point here is that AI is actually being used.* There’s real demand for it!

      That demand, though, isn’t profitable. That’s why companies have been upping their rates and the building of AI centers has been stalling.

      The demand for AI services is so great that they can’t build data centers fast enough.

      That’s not why people have been trying to build a lot of data centers. There’s a lot of speculative investing going on, and a lot of people trying to get in on the ground floor. So these people are dumping a crapton of money into it, trying to get ahead of everyone else.

      This isn’t coming from some bandwagon, or anti-progress/tech sentiment. AI is just bad.

    • lattrommi@lemmy.ml · 2 days ago

      This is a well thought out comment and I agree with most of what you have to say.

      The part about data center water use needs a caveat, though. Some of them (but not all!) use a massive amount of water (a Google data center in Oregon was found to have used 25% of the local water supply), and the wastewater that comes from the plant could potentially just be getting dumped into the water supply. Companies that are lax about what they do with wastewater are what concern a lot of people. It’s a lot like how mining companies would leave behind tailings ponds: pits full of water laced with toxic materials like lead and arsenic. Some companies are only using wastewater to cool their systems, though. Others use a closed-loop system, which reuses the same water continuously and uses much less water.

      This article breaks it all down better than I could: https://www.fwpcoa.org/content.aspx?page_id=5&club_id=859275&item_id=130961

      • Riskable@programming.dev · 18 hours ago

        Just want to point out that nearly all new data centers use closed-loop water cooling. Evaporative (open-loop) cooling only makes sense in very, very dry places that also have extremely cheap water.

        For example, cooling towers would make no sense in Florida because the ambient humidity is too high. Even though water is plentiful.

      • Bongles@lemmy.zip · 1 day ago

        Before AI, I didn’t even know what an em dash was; it was basically something Word (or other software) occasionally corrected my hyphens to. I learned about it because people realized AI uses it all the time, and it seemed like a good replacement for all those damn parentheses I always use.

        Didn’t end up using it much though.

  • reallykindasorta@slrpnk.net · 2 days ago

    I think AI is in a similar place as GMOs were 10 years ago. The technology isn’t inherently problematic, but the main companies rolling it out seem to be doing so under a banner that screams “I’m evil and I intend to burn this place to the ground.” We shouldn’t trust them because they’re practically telling us not to in the same breath they use to promote their products. I would say most of the main models available to the public fall into this category.

    Just like GMOs this doesn’t mean that there’s not some cool AI research being done, for ex. special models run by researchers to improve diagnostics or look for new antibiotics. It remains to be seen whether the cool stuff will have been worth whatever it is we lose.

  • HubertManne@piefed.social · 1 day ago

    Do you mean types of AI? There’s not a whole lot of difference. They’re all actively under development, and each one is trying to one-up the others. Granted, some, like Grok, are just reskinned and hyped-up versions of others. AI can give both good and bad results, which is why it has to be used from a critical perspective. One has to evaluate and validate the response before using it.

  • iceberg314@slrpnk.net · 1 day ago

    My opinion

    The good: large language models are a really useful tool.

    The bad: they harm the environment, steal people’s work, and can be easily misused.

  • AzuraTheSpellkissed@lemmy.blahaj.zone · 2 days ago

    Hmm, this is a topic that has been debated for years. Instead of writing my own summaries, it’s probably best to link you to some resources outlining why modern AI (“LLM”/“GPT”) is controversial:

    Note that some issues apply only given certain output (e.g. hallucinations), some depend on the usage (the decision to generate and publicize AI slop is made by human operators), whereas some issues are always present (e.g. huge environmental impact).

    Regarding whether there’s a difference between good and bad AI: some people argue that it’s always bad, some are a bit more nuanced, and some are completely blind/ignorant to the problem. Only those in the middle camp would necessarily see a difference.