A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. It was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

  • Cyv_@lemmy.blahaj.zone

    I mean, I get if you wanna use AI for that, it’s your project, it’s free, you’re a volunteer, etc. I’m just not sure I like the idea that they’re obscuring what AI was involved with. I imagine it was done to reduce constant arguments about it, but I’d still prefer transparency.

    • Tony Bark@pawb.socialOP

      I tried fitting AI into my workloads just as an experiment and failed. It’ll frequently reference APIs that don’t even exist or over-engineer the shit out of something that could be written in just a few lines of code. Often it would be a combo of the two.

      • Scrollone@feddit.it

        Yeah, I mean, it’s not like AI can think. It’s just a glorified text predictor, the same as the one on your phone keyboard.
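To make the “text predictor” claim concrete, here is a toy sketch of the idea: a bigram model that only counts which word follows which. (Purely illustrative; a real LLM is vastly larger and works on tokens with learned weights, but the predict-the-next-thing framing is the same.)

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Build a frequency table: for each word, count what follows it."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower, like a keyboard suggestion."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None
```

Feeding it a sentence and asking what follows “the” returns whichever word most often followed “the” in the training text, which is essentially what the middle suggestion button on a phone keyboard does.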

        • yucandu@lemmy.world

          It’s like having an idiot employee that works for free. Depending on how you manage them, that employee can either do work to benefit you or just get in your way.

          • daikiki@lemmy.world

            Only it’s not free. If you run it in the cloud, it’s heavily subsidized and proactively destroying the planet, and if you run it at home, you’re still using a lot of increasingly unaffordable power, and if you want something smarter than the average American politician, the upfront investment is still very significant.

            • yucandu@lemmy.world

              Yeah I’m not buying the “proactively destroying the planet” angle. I’d imagine there’s a lot of misinformation around AI, given that the products surrounding it are mostly Western, like vaccines…

          • BackgrndNoize@lemmy.world

            Not even free, just cheaper than an actual employee for now, but greed is inevitable and AI is computationally expensive, it’s only a matter of time before these AI companies start cranking up the prices.

      • Vlyn@lemmy.zip

        You might genuinely be using it wrong.

        At work we have a big push to use Claude, but as a tool and not a developer replacement. And it’s working pretty damn well when properly set up.

        Mostly using Claude Sonnet 4.6 with Claude Code. It’s important to run /init and check the output, that will produce a CLAUDE.md file that describes your project (which always gets added to your context).

        Important: Review everything the AI writes, this is not a hands-off process. For bigger changes use the planning mode and split tasks up, the smaller the task the better the output.

        Claude Code automatically uses subagents to fetch information, e.g. API documentation. Nowadays it’s extremely rare that it hallucinates something that doesn’t exist. It might use outdated info and need a nudge, like after the recent upgrade to .NET 10 (But just adding that info to the project context file is enough).
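For anyone who hasn’t seen one: the CLAUDE.md mentioned above is just a plain Markdown project brief that gets prepended to the model’s context. A hypothetical sketch (every name and detail below is invented, not from Lutris or this commenter’s project) might look like:

```markdown
# Project: example-service (hypothetical)

## Stack
- .NET 10, ASP.NET Core minimal APIs
- PostgreSQL via EF Core

## Conventions
- Run `dotnet build` and `dotnet test` before considering a task done.
- Keep changes small; one feature per branch.

## Gotchas
- We are on .NET 10; do not suggest APIs removed since .NET 8.
```

The “Gotchas” section is where the commenter’s nudge about outdated framework info would live.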

      • Fatal@piefed.social

        At a minimum, the agent should be compiling the code and running tests before handing things back to you. “It references non-existent APIs” isn’t a modern problem.
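That compile-and-test gate is easy to wire up yourself if your tooling doesn’t provide it. A minimal sketch (the build and test commands are placeholders; substitute your project’s own):

```python
import subprocess

def verify(build_cmd, test_cmd):
    """Run the build, then the tests; return (ok, log) so that
    failure output can be fed back to the agent for another attempt."""
    for cmd in (build_cmd, test_cmd):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return False, result.stdout + result.stderr
    return True, "build and tests passed"
```

The point is simply that code referencing a non-existent API fails this gate immediately, so it never reaches the human reviewer.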

        • Zos_Kia@jlai.lu

          I don’t know what they’re using, because all agents routinely do that. I suspect they’re fibbing, or tested things out in 2024 and never updated their opinion.

      • yucandu@lemmy.world

        I create custom embedded devices with displays and I’ve found it very useful for laying things out. Like asking it to take wind speed and direction updates every second and build a wind rose out of them, with colored sections in each petal denoting the speed… it makes mistakes, but then you just go back and iterate on those mistakes. I’m able to do so much more, so much faster.
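The aggregation step behind a wind rose is simple enough to sketch. (Function name and bin choices are mine for illustration, not the commenter’s actual code.)

```python
def wind_rose_bins(samples, n_petals=8, speed_edges=(5, 10, 20)):
    """Bucket (speed, direction-in-degrees) samples into petals and speed bands.

    Each petal spans 360/n_petals degrees, centered on multiples of that
    width; within a petal, counts are split by speed band, which become
    the colored sections when drawn.
    """
    petal_width = 360 / n_petals
    # rose[petal][band] = number of samples in that sector and speed range
    rose = [[0] * (len(speed_edges) + 1) for _ in range(n_petals)]
    for speed, direction in samples:
        petal = int(((direction + petal_width / 2) % 360) // petal_width)
        band = sum(speed >= edge for edge in speed_edges)
        rose[petal][band] += 1
    return rose
```

Rendering is then just drawing each petal with a length proportional to its total count, stacking the speed-band sections.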

      • CompassRed@discuss.tchncs.de

        The symptoms you describe are caused by bad prompting. If an AI is providing over-complicated solutions, 9 times out of 10 it’s because you didn’t constrain your problem enough. If it’s referencing tools that don’t exist, then you either haven’t specified which tools are acceptable or you haven’t provided the context required for it to find the tools. You may also be wanting too much out of AI. You can’t expect it to do everything for you. You still have to do almost all the thinking and engineering if you want a quality project - the AI is just there to write the code. Sure, you can use an AI to help you learn how to be a better engineer, but AIs typically don’t make good high-level decisions. Treat AI like an intern, not like a principal engineer.

          • CompassRed@discuss.tchncs.de

            It’s not about stupid or smart. It’s a tool, not a person. If you don’t get the same results that other people get with the same tool, then what could possibly be the problem other than how the person is using the tool?

        • Bronzebeard@lemmy.zip

          “it’s your fault that it just made up tools that don’t exist” is a bold statement, bro.

          • CompassRed@discuss.tchncs.de

            No, it’s not. It doesn’t have intention. It’s literally just a tool. If you don’t get the results you expect with a tool when other people do get those results, then the problem isn’t the tool.

          • Zos_Kia@jlai.lu

            The junior analogy comes to mind. If you hire a fresh face and they ship code that doesn’t work, it’s definitely on you, bro.

      • aloofPenguin@piefed.world

        I had the same experience. I asked a local LLM about using solely Qt Wayland stuff for keyboard input; the only documentation was the official one (which wasn’t a lot for a noob), there were no examples of it being used online, and all my attempts at making it work failed. It hallucinated some functions that didn’t exist, even when I let it do a web search (NOT via my browser). This was a few years ago.

    • Alex@lemmy.ml

      I expect it’s because it wasn’t a user - just a random passerby throwing stones on their own personal crusade. The project only has two major contributors, who are now being harassed in the issues for the choices they make about how to run their project.

      Someone might fork it and continue with pure artisanal human crafted code but such forks tend to die off in the long run.

    • XLE@piefed.social

      Considering the amount of damage AI has done to well-funded projects like Windows and Amazon’s services, I agree with this entirely. It might be crucial to help fix bigger issues down the line.

    • Fizz@lemmy.nz

      I’m the opposite. It’s weird to me for someone to add an AI as a co-author. Submit it as normal.

      • svtdragon@lemmy.world

        It’s mostly not a thing developers do. It’s a thing the tools themselves do when asked to make a commit.