A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. But it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

  • Cyv_@lemmy.blahaj.zone · ↑188 ↓2 · 17 days ago

    I mean, I get if you wanna use AI for that, it’s your project, it’s free, you’re a volunteer, etc. I’m just not sure I like the idea that they’re obscuring what AI was involved with. I imagine it was done to reduce constant arguments about it, but I’d still prefer transparency.

    • Tony Bark@pawb.social (OP) · ↑59 ↓8 · 17 days ago

      I tried fitting AI into my workloads just as an experiment and failed. It’ll frequently reference APIs that don’t even exist, or over-engineer the shit out of something that could be written in just a few lines of code. Often it would be a combo of the two.

      • Scrollone@feddit.it · ↑33 ↓1 · 17 days ago

        Yeah, I mean, it’s not like AI can think. It’s just a glorified text predictor, the same as you have on your phone keyboard.

        • yucandu@lemmy.world · ↑16 ↓1 · 17 days ago

          It’s like having an idiot employee that works for free. Depending on how you manage them, that employee can either do work to benefit you or just get in your way.

          • daikiki@lemmy.world · ↑18 ↓1 · 17 days ago

            Only it’s not free. If you run it in the cloud, it’s heavily subsidized and proactively destroying the planet, and if you run it at home, you’re still using a lot of increasingly unaffordable power, and if you want something smarter than the average American politician, the upfront investment is still very significant.

            • yucandu@lemmy.world · ↑1 ↓30 · 17 days ago

              Yeah I’m not buying the “proactively destroying the planet” angle. I’d imagine there’s a lot of misinformation around AI, given that the products surrounding it are mostly Western, like vaccines…

          • BackgrndNoize@lemmy.world · ↑4 · 17 days ago

            Not even free, just cheaper than an actual employee, for now. Greed is inevitable and AI is computationally expensive; it’s only a matter of time before these AI companies start cranking up the prices.

      • Vlyn@lemmy.zip · ↑18 ↓5 · 17 days ago

        You might genuinely be using it wrong.

        At work we have a big push to use Claude, but as a tool and not a developer replacement. And it’s working pretty damn well when properly setup.

        Mostly using Claude Sonnet 4.6 with Claude Code. It’s important to run /init and check the output, that will produce a CLAUDE.md file that describes your project (which always gets added to your context).

        Important: Review everything the AI writes, this is not a hands-off process. For bigger changes use the planning mode and split tasks up, the smaller the task the better the output.

        Claude Code automatically uses subagents to fetch information, e.g. API documentation. Nowadays it’s extremely rare that it hallucinates something that doesn’t exist. It might use outdated info and need a nudge, like after the recent upgrade to .NET 10 (But just adding that info to the project context file is enough).
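        A minimal CLAUDE.md along these lines might look like the sketch below; the project name, stack, and rules are invented for illustration, not taken from any real setup:

```markdown
# Project: example-app

## Stack
- .NET 10, C# 13, xUnit for tests

## Conventions
- Run `dotnet build` and `dotnet test` before declaring a task done.
- Prefer small, focused changes; split large tasks into steps.

## Gotchas
- Recently upgraded to .NET 10; prefer current APIs over pre-10
  patterns the model may remember from training data.
```

        Because this file is always added to the context, a correction like the .NET 10 note only needs to be written once.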

      • Fatal@piefed.social · ↑6 ↓1 · 17 days ago

        At a minimum, the agent should be compiling the code and running tests before handing things back to you. “It references non-existent APIs” isn’t a modern problem.

        • Zos_Kia@jlai.lu · ↑1 · 15 days ago

          I don’t know what they are using cause all agents routinely do that. I suspect they are fibbing or tested things out in 2024 and never updated their opinion.

      • yucandu@lemmy.world · ↑6 ↓3 · 17 days ago

        I create custom embedded devices with displays and I’ve found it very useful for laying things out. Like asking it to take per-second wind speed and direction updates and build a wind rose out of them, with colored sections in each petal denoting the speed… it makes mistakes, but then you just go back and iterate on those mistakes. I’m able to do so much more, so much faster.

      • CompassRed@discuss.tchncs.de · ↑9 ↓7 · 16 days ago

        The symptoms you describe are caused by bad prompting. If an AI is providing over-complicated solutions, 9 times out of 10 it’s because you didn’t constrain your problem enough. If it’s referencing tools that don’t exist, then you either haven’t specified which tools are acceptable or you haven’t provided the context required for it to find the tools. You may also be wanting too much out of AI. You can’t expect it to do everything for you. You still have to do almost all the thinking and engineering if you want a quality project - the AI is just there to write the code. Sure, you can use an AI to help you learn how to be a better engineer, but AIs typically don’t make good high-level decisions. Treat AI like an intern, not like a principal engineer.

          • CompassRed@discuss.tchncs.de · ↑6 ↓2 · 16 days ago

            It’s not about stupid or smart. It’s a tool, not a person. If you don’t get the same results that other people get with the same tool, then what could possibly be the problem other than how the person is using the tool?

        • Bronzebeard@lemmy.zip · ↑3 ↓3 · 16 days ago

          “it’s your fault that it just made up tools that don’t exist” is a bold statement, bro.

          • CompassRed@discuss.tchncs.de · ↑5 ↓1 · 16 days ago

            No, it’s not. It doesn’t have intention. It’s literally just a tool. If you don’t get the results you expect with a tool when other people do get those results, then the problem isn’t the tool.

          • Zos_Kia@jlai.lu · ↑2 · 15 days ago

            The junior analogy comes to mind. If you hire a fresh face and they ship code that doesn’t work, it’s definitely on you, bro.

      • aloofPenguin@piefed.world · ↑3 ↓2 · 17 days ago

        I had the same experience. I asked a local LLM about using Qt Wayland alone for keyboard input. The only documentation was the official one (which wasn’t much for a noob), there were no examples of it being used online, and all my attempts at making it work had failed. It hallucinated some functions that didn’t exist, even when I let it do a web search (NOT via my browser). This was a few years ago.

    • Alex@lemmy.ml · ↑25 ↓4 · 17 days ago

      I expect because it wasn’t a user - just a random passer by throwing stones on their own personal crusade. The project only has two major contributors who are now being harassed in the issues for the choices they make about how to run their project.

      Someone might fork it and continue with pure artisanal human crafted code but such forks tend to die off in the long run.

    • XLE@piefed.social · ↑14 ↓2 · 17 days ago

      Considering the amount of damage AI has done to well-funded projects like Windows and Amazon’s services, I agree with this entirely. It might be crucial to help fix bigger issues down the line.

    • Fizz@lemmy.nz · ↑11 ↓6 · 17 days ago

      I’m the opposite. Its weird to me for someone to add an AI as a co author. Submit it as normal.

      • svtdragon@lemmy.world · ↑2 · 16 days ago

        It’s mostly not a thing developers do. It’s a thing the tools themselves do when asked to make a commit.

  • nialv7@lemmy.world · ↑169 ↓11 · 17 days ago

    you can criticise them but ultimately they are an unpaid developer making their work freely available for the benefit of us all. at least don’t harass the developer.

    • TrickDacy@lemmy.world · ↑66 ↓17 · 17 days ago

      You make a fair point, but I feel like the trolling reaction they gave was asking for more backlash. Not responding was probably the best move.

      • Zos_Kia@jlai.lu · ↑71 ↓3 · 17 days ago

        It’s typical of dev burnout, though. Communication starts becoming more impulsive and less constructive, especially in the face of conflicts of opinions.

        I’ve seen it play out a few times already. A toxic community will take a dev who’s already struggling, troll them, screenshot their problematic responses, and use that in a campaign across relevant places such as GitHub, Reddit, Lemmy… Maybe add a little light harassment on the side, as a treat. It’s a fun activity! The dev spirals, posts increasingly unhinged responses, and often quits as a result.

        The fact that the thread is titled “is lutris slop now” is a clear indication that the intention of the poster wasn’t to contribute anything constructive but to attack the dev and put them on their back foot.

          • Zos_Kia@jlai.lu · ↑15 · 17 days ago

            Yeah, same. I’d like to think I’d answer: “I’ll use AI; if you don’t like it you can fork the project, and I wish you good luck. Go share your opinion on AI in an appropriate place.” But realistically there’s a high chance it catches me on a bad day and I get stupid.

        • MousePotatoDoesStuff@lemmy.world · ↑6 · 16 days ago

          … You’re right. I definitely wouldn’t be above such a response.

          The problem is, a lot of people here - myself included - were/are also being impulsive about their responses to this issue, at least partially due to all the shitty stuff caused by GenAI.

          There might be some toxic people too, I wouldn’t be surprised - but this can happen without them, too.

          • Zos_Kia@jlai.lu · ↑5 · edited · 16 days ago

            The thing is, toxic people thrive in mob situations and are often found leading or even manufacturing them. I tend to be wary around this kind of setup, as they’re easy to get caught up in and hard to get out of.

      • aksdb@lemmy.world · ↑16 ↓26 · 17 days ago

        Trolling? They gave a pretty good answer explaining their reasoning.

        • TrickDacy@lemmy.world · ↑60 ↓3 · 17 days ago

          I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.

          Seems pretty obvious to me that they knew this wouldn’t go over well. It was inflammatory by design.

          • aksdb@lemmy.world · ↑21 · 17 days ago

            Yeah ok. True. I think the rest of the post has much more weight, though. But yeah, he should have swallowed that last sentence.

    • UnfortunateShort@lemmy.world · ↑7 ↓5 · 17 days ago

      They are on Liberapay if you want to support the project, btw. Combined with Patreon, they sit at less than $700 a week. That’s like half a dev, before tax.

    • 4am@lemmy.zip · ↑11 ↓17 · 17 days ago

      They want to put clanker code that they freely admit they don’t validate into a product that goes on the computers of people whose experience with Linux is “I heard it’s faster for games”.

      It’s irresponsible to hide it from review. It doesn’t matter if AI tools got better, AI tools still aren’t perfect and so you still have to do the legwork. Or at least let your community.

      Also, you should let your community make ethics decisions about whether to support you.

      Overall it was a rash reaction to being pressured rudely in a GitHub thread; but you know AI is a contentious topic and you went in anyway. It’s weak AF to then have a tantrum and spit in the community’s face about it.

      • Voroxpete@sh.itjust.works · ↑23 ↓4 · 17 days ago

        Nothing is being hidden from review. The code is open source. They removed the specific attribution that indicates which parts of the code were created using Claude. That changes absolutely nothing about the ability to review the code, because a code review should not distinguish between human written code and machine written code; all of it should be checked thoroughly. In fact, I would argue that specifically designating code as machine written is detrimental to code review, because there will be a subconscious bias among many reviewers to only focus on reviewing the machine code.

  • adeoxymus@lemmy.world · ↑124 ↓35 · 17 days ago

    Tbh I agree: if the code is appropriate, why care if it’s generated by an LLM?

    • deadcade@lemmy.deadca.de · ↑81 ↓40 · 17 days ago

      It’s still made by the slop machine, the same one that could only be created by stealing every human made artwork that’s ever been published. (And this is not “just one company”, every LLM has this issue.)

      Not only that, the companies building massive datacenters are taking valuable resources from people just trying to live.

      If the developer isn’t able to keep up, they should look for (co-)maintainers. Not turn to the greedy megacorps.

      • bookmeat@fedinsfw.app · ↑44 ↓11 · 17 days ago

        A few years ago we were all arguing about how copyright is unfair to society and should be abolished.

        • wirelesswire@lemmy.zip · ↑52 ↓2 · 17 days ago

          Sure, but these same companies will drag you to court and rake you over the coals if you infringe on their copyrights.

          • lumpenproletariat@quokk.au · ↑18 ↓2 · 17 days ago

            More reason to destroy copyright.

            Normal people can’t afford to fight the big companies who break theirs anyway. It’s only really a tool for big businesses to use against us.

          • Luminous5481 "Lawless Heathen" [they/them]@anarchist.nexus · ↑4 ↓9 · 17 days ago

            Licenses only matter if you care about copyright. I’d much rather just appropriate whatever I want, whenever I want, for whatever I want. Copyright is capitalist nonsense and I just don’t respect notions of who “owns” what. You won’t need the GPL if you abolish the concept of intellectual property entirely.

            • astro@leminal.space · ↑6 ↓2 · 17 days ago

              It is offensive to me on a philosophical level to see that so many people feel that they should have control, in perpetuity, over who can see/read/experience/use something that they’ve put from their mind into the world. Doubly so when considering that their own knowledge and perspective is shaped by the works of those who came before. Software especially. It is sad that capitalism has so thoroughly warped the notion of what society should be that even self-proclaimed leftists can’t imagine a world where everything isn’t transactional in some way.

              • obelisk_complex@piefed.ca · ↑2 · 16 days ago

                Precisely this, yes, well said. We all stand on the shoulders of those who came before us, one way or another.

        • Beacon@fedia.io · ↑3 ↓1 · 17 days ago

          We weren’t all saying copyright altogether was unfair. In fact, I think most of us have always said copyright law should exist, just that it shouldn’t be ‘lifetime of the creator plus another 75 years after their death’. Copyright should be closer to how it was when the law first started, which is something like 20 years.

          (And personally imo there should also be some nuanced exceptions too.)

        • Bronzebeard@lemmy.zip · ↑2 ↓1 · 16 days ago

          Yeah people making that argument were dumb. Copyright needs to be fixed, not abolished.

      • Goretantath@lemmy.world · ↑3 ↓12 · 17 days ago

        Just like how every other human artist learned how to draw by looking at examples their art teacher gave them, aka “stealing it” in your words.

    • criss_cross@lemmy.world · ↑13 · 17 days ago

      If a human is reviewing the code they submit and owning the changes I don’t care if they use an LLM or not. It’s when you just throw shit at the wall and hope it sticks that’s the problem.

      I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.

      • pivot_root@lemmy.world · ↑10 · 17 days ago

        It’s the same for me.

        I don’t care if somebody uses Claude or Copilot if they take ownership and responsibility over the code it generates. If they ask AI to add a feature and it creates code that doesn’t fit within the project guidelines, that’s fine as long as they actually clean it up.

        I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.

        This is the problem I have with it too. Using something that vulnerable to prompt injection to not only write code but commit it as well shows a complete lack of care for bare minimum security practices.

    • RightHandOfIkaros@lemmy.world · ↑14 ↓1 · 17 days ago

      Personally, I have never seen LLM-generated code that works without needing to be edited, but I imagine for routine blocks of code and very common things it probably does fine. I don’t see why a programmer needs to rewrite the same code blocks over and over again for different projects when an LLM can do that part, leaving more time for the programmer to write the more specialized parts. The programmer will still have to edit and verify the generated code, but programming is more mechanical than something like art.

      However, for more specialized code, I would be concerned. It would likely not function at all without editing, and if it did function it probably wouldn’t be optimized or secure. However, this programmer claims to have 30 years of experience, and if that’s the case then he likely knows this and probably edits the LLM output himself.

      As I have said before, generative AI is a tool, like Photoshop. I don’t see why people should reject a tool if it can make their job easier. It won’t be able to completely replace people effectively. Businesses will try, but quality will drop off because it’s not being used by people who understand what the end result needs to be, and businesses will inevitably lose money.

    • drolex@sopuli.xyz · ↑17 ↓5 · 17 days ago
      • Ethical issue: products of the mind are what makes us humans. If we delegate art, intellectual works, creative labour, what’s left of us?
      • Socio-economic issue: if we lose labour to AI, surely the value produced automatically will be redistributed to the ones who need it most? (Yeah we know the answer to this one)
      • Cultural issue: AIs are appropriating intellectual works and virtually transferring their usufruct to bloody billionaires
    • XLE@piefed.social · ↑5 ↓2 · 17 days ago

      “If” doing all the lifting here.

      If we ignore the mountain of evidence saying the opposite…

    • The_Blinding_Eyes@lemmy.world · ↑1 · 16 days ago

      I know there is more nuance than this, but why should I spend any of my time on something when you spent no time creating it? I know that applies more to the slop, but that’s where I am with most LLM-generated stuff.

    • Kowowow@lemmy.ca · ↑4 ↓3 · 17 days ago

      I want to one day make a game, and there’s no way I’m not prototyping it with LLM code, though I’d want things finalized by a real coder if I ever got the game finished. I’ve never made real progress on learning to code, even in school.

        • Dremor@lemmy.world (mod) · ↑35 ↓3 · 17 days ago

          Being a developer, I don’t care if someone else uses my code. Code is like a brick: by itself it has little value; the real value lies in how it is used.
          If I find an optimal way to do something, my only wish is to make it available to as many people as possible. For those who come after.

            • Dremor@lemmy.world (mod) · ↑1 · 16 days ago

              That’s not how LLMs work either.

              An LLM has no knowledge; it has the statistical probability of one token following another, and given an overall context it produces the statistically most likely text.
              To calculate that probability as accurately as possible you need as many examples as possible, to determine how often word A follows word B. Hence the immense datasets required.
              Luckily for us programmers, computer programs are statistically very similar to one another, which makes LLMs quite good at them.
              Now, the programs it creates aren’t perfect, but it lets me write long, boring code fast, and even explain it if I ask it to. This way I’ve learned a lot of new things that I wouldn’t have unless I had the time and energy to screw around with my programs (which I wish I had, but don’t), or read through open source codebases, which would take an average human years.

              Now there is the problem of the ethical use of AI, which is a whole other aspect. I use only local models, which I run on my own hardware (usually using Ollama, but I’m looking into NPU-enabled alternatives).
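              The word-A-follows-word-B idea can be sketched as a toy bigram predictor. This is a deliberate oversimplification of a real LLM (no neural network, no long context), and the corpus is made up:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often word B follows word A.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict(word):
    """Return the word most often observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat": it follows "the" in 2 of its 4 occurrences
```

              Real LLMs condition on far more than the previous word, but the statistical principle is the same.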

            • Dremor@lemmy.world (mod) · ↑1 · 16 days ago

              I can live with helping some assholes if my contributions help others. At least I don’t make them richer, since I only use local AIs.

        • adeoxymus@lemmy.world · ↑24 ↓3 · 17 days ago

          Tbh all programmers have been copy-pasting from each other forever. The middle step of searching Stack Overflow or GitHub for the code you want is simply removed.

          • galaxy_nova@lemmy.world · ↑6 · 17 days ago

            Exactly. If someone has already come up with an optimal solution, why the hell would I reimplement it? My real problems are not with LLMs themselves but rather the sourcing of the training data and the power usage. If I could use an “ethically sourced” LLM locally, I’d be mostly happy. Ultimately, LLMs are also only good for code specifically; architecture, or things that require a lot of thought like data pipelines, I’ve found AI to be pretty garbage at when experimenting.

          • wholookshere@piefed.blahaj.zone · ↑21 ↓4 · 17 days ago

            LLMs have stolen works from more than just artists.

            ALL public repositories, at a minimum, have been used as training data, regardless of license, including licenses that require all derivative work to be under the same license.

            So there’s more than just Lutris stolen.

            • Lung@lemmy.world · ↑4 ↓22 · 17 days ago

              So he’s a badass Robinhood pirate that steals code from corporations and gives it to the people?

                • wholookshere@piefed.blahaj.zone · ↑6 · 17 days ago

                The fuck are you talking about.

                How is using a tool with billions of dollars behind it Robin Hood?

                How is stealing open source projects’ code, regardless of license, stealing from corporations?

                  • Lung@lemmy.world · ↑1 ↓2 · 17 days ago
                  • he’s not anthropic, and doesn’t have billions of dollars
                  • stealing from open source is not stealing, that’s the point of open source
                  • the argument above is that these models are allegedly trained “regardless of license” i.e. implying they are trained on non-oss code
          • prole@lemmy.blahaj.zone · ↑2 · 17 days ago

            No, the LLM was trained on other code (possibly including Lutris, but also probably like billions of lines from other things)

    • Ephera@lemmy.ml · ↑5 · 17 days ago

      Yeah, management wants us to use AI at $DAYJOB and one of the strategies we’ve considered for lessening its negative impact on productivity, is to always put generated code into an entirely separate commit.

      Because it will guess design decisions at random while generating, and you want to know afterwards whether a design decision was made by the randomizer or by something with intelligence. Much like you want to know whether a design decision was made by the senior (then you should think twice about overriding this decision) or by the intern that knows none of the project context.

      We haven’t actually started doing these separate commits, because it’s cumbersome in other ways, but yeah, deliberately obfuscating whether the randomizer was involved, that robs you of that information even more.
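      The separate-commit idea can be sketched with plain git. The repo, file names, and messages below are invented for illustration; the Co-authored-by trailer is the convention GitHub recognizes for co-authorship:

```shell
set -e
# Work in a throwaway repo so the sketch is runnable end to end.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name Dev

# Hand-written changes get their own commit.
echo "hand-written parser fix" > parser.py
git add parser.py
git commit -q -m "parser: handle quoted fields"

# Generated code goes in a separate commit, marked with a trailer
# so reviewers can tell which design decisions came from the tool.
echo "generated scaffolding" > parser_tests.py
git add parser_tests.py
git commit -q -m "parser: add generated test scaffolding" \
  -m "Co-authored-by: Claude <noreply@anthropic.com>"

git log --format=%s  # commit subjects, newest first
```

      This keeps the history attributable per-origin without changing how any individual commit is reviewed.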

    • Holytimes@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      20
      arrow-down
      15
      ·
      17 days ago

      Well, when you have a massive problem of harassment, death threats, and shit stains screaming at every single dev who is even theorized to use AI, regardless of whether it’s true or not,

      I blame fucking no one for hiding the fact.

      This is on the users not the dev. The users are fucking animals and created this very problem.

      Blaming the wrong people and attacking them is the yuck.

      Scream at the executives and giant corpos who created the problem not some random indie dev using a tool.

      • Auli@lemmy.ca
        link
        fedilink
        English
        arrow-up
        1
        ·
        16 days ago

        Then just quit; it isn’t worth it. I know AI has uses and is useful.

  • magikmw@piefed.social
    link
    fedilink
    English
    arrow-up
    61
    arrow-down
    9
    ·
    17 days ago

    Worth mentioning that the user who started the issue jumps around projects and creates inflammatory issues to the same effect. I’m not surprised Lutris’ maintainer went off like they did; the issue was not made in good faith.

    • Zos_Kia@jlai.lu
      link
      fedilink
      English
      arrow-up
      29
      arrow-down
      6
      ·
      17 days ago

      Yes, both threads are led by two accounts with probably less than 50 commits to their names during the last year, none of which are of any relevance to the subject they are discussing.

      In a world where you could contribute your time to making things better, there is a certain category of people who seek out nice things specifically to harm them. As open source enters mainstream culture, it also appears on the radar of this kind of person. It’s dangerous to catch their attention; once they have you, they’ll coordinate over Reddit, Lemmy, GitHub, and Discord to ruin your reputation. The reputation of some guy who never did them any harm, apart from bringing them something they needed, for free, but in a way that doesn’t 100% satisfy them. Pure vicious entitlement.

      I’d sooner have a drink with a salesman from OpenAI than with one of them.

  • southsamurai@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    61
    arrow-down
    18
    ·
    17 days ago

    Yeah, this is actually one of the good things a technology like this can do.

    He’s dead right in terms of slop: if it’s someone with training and experience using a tool, it doesn’t matter whether that tool is vim or Claude. It ain’t slop if it’s built right.

    • Echo Dot@feddit.uk
      link
      fedilink
      English
      arrow-up
      26
      arrow-down
      7
      ·
      16 days ago

      It ain’t slop if it’s built right.

      Yeah, but the problem is: is it? They absolutely insist that we use AI at work, which is not only an insane concept in and of itself, but if I have to nanny it to make sure it doesn’t make a mistake, then how is it a useful product?

      He says it helps him get work done he wouldn’t otherwise do, but how is that possible? How can he be giving every line of code the same scrutiny he would if he wrote it himself, when he himself admits he would never have gotten around to writing that code had the AI not done it? The math ain’t matching on this one.

      • southsamurai@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        2
        ·
        16 days ago

        Well, I’m not a code monkey, between dyslexia and an aging brain. But if it’s anything like the tiny bit of coding I used to be able to do (back in the days of basic and pascal), you don’t really have to pore over every single line. Only time that’s needed is when something is broken. Otherwise, you’re scanning to keep oversight, which is no different than reviewing a human’s code that you didn’t write.

        Look at it like this: we automated the assembly of machines a long time ago. It had flaws early on that required intense supervision. The only practical difference here is how the damn things learned in the first place. Automating code generation is much more like that than like LLMs generating text or images, which aren’t logical by nature.

        If the code used to train the models was good, what it outputs will be no worse in scale than some high school kid in an ap class stepping into their first serious challenges. It will need review, but if the output is going to be open source to begin with, it’ll get that review even if the project maintainers slip up.

        And being real, lutris has been very smooth across the board while using the generated code so far. So if he gets lazy, it could go downhill; but that could happen if he gets lazy with his own code.

        Another concept that I am more familiar with, that does relate. Writing fiction can take months. Editing fiction usually takes days, and you can still miss stuff (my first book has typos and errors to this day because of the aforementioned dyslexia and me not having a copy editor).

        My first project back in the eighties in basic took me three days to crank out during the summer program I was in. The professor running the program took an hour to scan and correct that code.

        Maybe I’m too far behind the various languages, but I really can’t see it being a massively harder proposition to scan and edit the output of an llm.

  • super_user_do@feddit.it
    link
    fedilink
    English
    arrow-up
    62
    arrow-down
    19
    ·
    16 days ago

    I understand the hatred toward AI, but people gotta understand that there’s a difference between coding with AI and vibecoding. They are DIFFERENT THINGS! AI is useful; what is not is vibecoding, or shaming a developer with 30 years of real-world experience for using AI for once. Using AI is OK if you do it critically and with common sense.

    • PrettyFlyForAFatGuy@feddit.uk
      link
      fedilink
      English
      arrow-up
      32
      arrow-down
      3
      ·
      16 days ago

      If it’s making commits for you, you’re vibe coding.

      I use it at work for troubleshooting, and if I get it to generate anything for me, I stage the changes and review them before committing myself.
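
      That stage-then-review loop is just stock git; a generic sketch (the file name and messages are invented for illustration):

```shell
# Sketch: never commit generated changes blind; diff them before and
# after staging, then commit yourself.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name dev
git config user.email dev@example.com

printf 'print("hello")\n' > app.py
git add app.py
git commit -qm "initial"

# A tool rewrites the file; nothing is staged yet.
printf 'print("hello, world")\n' > app.py

git diff            # review the raw change (use `git add -p` to pick hunks)
git add app.py
git diff --staged   # final look at exactly what will be committed
git commit -qm "tweak greeting"
```

      The point of the two diffs is that nothing reaches history without passing a human eye twice: once unstaged, once staged.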

      • fossilesque@mander.xyz
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        16 days ago

        Jokes on you, I’ve used it to untangle messy git problems (with a backup of course).
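
        The “with a backup of course” part is the load-bearing bit. A cheap version is a backup ref before any tool touches history (a generic sketch; the branch name and commits are invented):

```shell
# Sketch: pin a backup branch before risky history surgery, so anything a
# tool (AI or otherwise) suggests can be undone with one reset.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name dev
git config user.email dev@example.com

echo one > file.txt && git add file.txt && git commit -qm "first"
echo two >> file.txt && git add file.txt && git commit -qm "second"

git branch backup/pre-cleanup           # snapshot the current state

git reset --hard -q HEAD~1              # the "messy" operation goes wrong...

git reset --hard -q backup/pre-cleanup  # ...and the backup undoes it
git log --format='%s' -1
```

        A branch costs nothing and, unlike relying on the reflog, survives garbage collection and is easy to hand to whoever (or whatever) is helping you untangle the mess.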

    • SigmarStern@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      8
      ·
      16 days ago

      I totally agree. I’m not an AI hype man. I want to scream whenever I see a PR littered with emojis, bullet lists, and way too much text for a simple change. I hate the discussions about the transformative power of AI, the 10x productivity gains, the million tools, agents, skills, plugins, and methods I should be using but am already behind on, and old, and probably unemployed next week, right? Still, AI use is not inherently bad. It gets me unstuck. It finds subtle errors I wasn’t noticing, and it writes documentation faster and better than I can. I hate the companies pushing it and the methods of its training, but the tool itself is just a tool, and sometimes a very useful one. IMHO we shouldn’t shame every open source developer just for using it. As long as they are responsible with it, I’m fine with some AI code in my software.

    • flop_leash_973@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      3
      ·
      16 days ago

      You are correct, but people in general are pretty bad at subtlety and grey areas. Just look at the current state of political discourse in the US. Probably half the people who support the likes of Trump do so because they like black/white binary choices and can’t handle shades of grey in their lives emotionally.

      • yabbadabaddon@lemmy.zip
        link
        fedilink
        English
        arrow-up
        9
        arrow-down
        12
        ·
        16 days ago

        Please, go ahead and remove everything “AI” from your life. No social media. No GPS. No driver assistance when driving or being driven. No streaming of any kind. No weather apps. Ask your boss to remove everything related to forecasting in his company. Ask your doctor not to use any diagnostic aids if you get scanned for cancer.

        Let’s see how many of those you can “pass”. Or let’s see if it helps you develop a critical mind about which tool to use for which job and how to use it.

        • Reygle@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          1
          ·
          16 days ago

          I’m already full Linux at work. Location on my mobile is always OFF unless I need it on rare occasions. I don’t stream. I self host.

          Say your last sentence into a mirror today.

          • yabbadabaddon@lemmy.zip
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            5
            ·
            16 days ago

            Bro, you are on fucking Lemmy. We are all like you. You are not special. You never ever use GPS to locate yourself, right? You never go from a to b. You never go in a shop to buy food. You never go to the doctor. You never buy anything online. You never watch YouTube. Sure.

              • yabbadabaddon@lemmy.zip
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                3
                ·
                16 days ago

                Oh, so that’s your argument? What a fucking kid. It’s easy to have an opinion. It’s harder to know why, rather than being a fucking parrot because you’re so edgy.

      • Honytawk@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        2
        ·
        16 days ago

        It is more about nuance than rationality.

        There are plenty of reasons to hate on AI. But in the end it’s just a tool for automating things. It depends entirely on how it is used. With enough effort, and most importantly by checking the output, you can create things faster while keeping the same quality as before.

        Calling anything that even slightly touched an LLM “slop” and curling up in a fetal position while crying is a lot less rational. These people have no idea about the real world.

    • Auli@lemmy.ca
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      2
      ·
      16 days ago

      I have news for you: it’s the same thing. There is no difference besides maybe the prompt; the same AI is writing the code. And I do not believe a coder is going over every single line of code.

  • Katana314@lemmy.world
    link
    fedilink
    English
    arrow-up
    44
    arrow-down
    3
    ·
    17 days ago

    To admit some context: My company has strongly encouraged some AI usage in our coding. They also encourage us to be honest about how helpful, or not, it is. Usually, I tell them it turns out a lot of garbage and once in a while helps make a lengthy task easier.

    I can believe him about there being a sweet spot; where it’s not used for everything, only for processes that might have taken a night of manual checks. The very real, very reasonable backlash to it is how easily a poor management team or overconfident engineer will fall away from that sweet spot, and merge stuff that hasn’t had enough scrutiny.

    Even Bernie Sanders acknowledged on the senate floor that in a perfect world, where AI is owned by people invested in world benefit, moderate AI use could improve many people’s lives. It’s just sad that in 99.9% of cases, we’re not anywhere near that perfect world.

    I don’t totally blame the dev for defending his use of AI backed by industry experience, if he’s still careful about it. But I also don’t blame people who don’t trust it. It’s kind of his call, and if the avoidance of AI is important enough to you, I’d say fork it. I think it’s a small red flag, but not nearly enough of one for me to condemn the project.

    • underisk@lemmy.ml
      link
      fedilink
      English
      arrow-up
      10
      ·
      17 days ago

      Even Bernie Sanders acknowledged on the senate floor that in a perfect world, where AI is owned by people invested in world benefit, moderate AI use could improve many people’s lives.

      I don’t think you should make a claim like this while AI is being heavily subsidized and burning VC cash to stay afloat. The truth is that whatever value it may add to such a society might be completely negated by its resource costs. Is even “moderate” AI use ecologically or economically sustainable?

      • utopiah@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        16 days ago

        Indeed, as they said in Italian “if my grandmother had wheels she would have been a bike” … the reasoning might be theoretically correct but in the current situation it’s just not the case.

      • Katana314@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        16 days ago

        For full disclosure, I remembered once someone claimed to me there are AI models that use much less power. But, to confirm that statement before replying, I looked up an investigation, and they say it’s much murkier, and that a company’s own claims are usually understating it. So, you’re on point.

    • tb_@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      3
      ·
      17 days ago

      It can be useful for generating switch cases and other such not-quite copy-paste work too. There are reasonable use cases… if you ignore how the training data was sourced.

      • ChocolateFrostedSugarBombs@lemmy.world
        link
        fedilink
        English
        arrow-up
        19
        arrow-down
        1
        ·
        17 days ago

        And the incredible amount of damage and destruction it’s still inflicting on the environment, society, and the economy.

        No amount of output is worth that cost, even if it was always accurate with no unethical training.

  • Omega_Jimes@lemmy.ca
    link
    fedilink
    English
    arrow-up
    42
    arrow-down
    7
    ·
    17 days ago

    I don’t support the use of AI tools in general, but i have a soft spot for long-term maintainers. These people generally don’t have enough support for this to be a full-time hobby, and when a project becomes popular the pressure is massive.

    If the community wont step up to take the burden off the maintainer, but they still want active development, what can you do? As long as the program continues to be high quality, i cant complain about a free thing.

  • Crozekiel@lemmy.zip
    link
    fedilink
    English
    arrow-up
    40
    arrow-down
    10
    ·
    16 days ago

    AI is actively destroying the environment and harming people. Data centers have been caught using methane burner generators (which are banned for use by the EPA) which significantly increase health risk to residents that live nearby (cancer and asthma rates already significantly increased). Then you have the ridiculous effects it is having on computer hardware markets, energy and water infrastructure and prices.

    Then after all of that, the AI themselves are hallucinating somewhere in the neighborhood of 25% of the time, and multiple studies have found that people that use them regularly are losing their own skills.

    I can’t figure out why people would choose to use them. I can’t figure out why programming is the one place where people that might have otherwise been considered experts in the field are excited to use them. Writers, artists, lawyers, doctors, basically every other professional field that AI companies have suggested these would be good for, they get trashed by experts in the fields for making garbage. I have a hard time believing the only thing AI can do well is write code when it sucks so badly at everything else it does. Does development suck this much? Do developers have so little idea what they are doing that this seems like a good idea?

  • darkangelazuarl@lemmy.world
    link
    fedilink
    English
    arrow-up
    35
    arrow-down
    5
    ·
    17 days ago

    If he’s using it like an IDE and not vibe coding, then I don’t have much issue with this. His comment indicates that he has a brain and uses it. So many people just turn off their brains when they use AI, and couldn’t even write this comment I just wrote without asking AI for assistance.

    • Ephera@lemmy.ml
      link
      fedilink
      English
      arrow-up
      11
      ·
      17 days ago

      Yeah, that’s my biggest worry. I always have to hold colleagues to basic programming standards as soon as they start using AI for a task, since it is easier to generate a second implementation of something we already have in the codebase than to extend the existing implementation.

      But that was pretty much always true. We still did not slap another implementation onto the side, because it’s horrible for maintenance, as you now need to always adjust two (or more) implementations when requirements change.
      And it’s horrible for debugging problems, because parts of the codebase will then behave subtly different from other parts. This also means usability is worse, as users expect consistency.

      And the worst part is that they don’t even have an answer to those concerns. They know that it’s going to bite us in the ass in the near future. They’re on a sugar high, because adding features is quick, while looking away from the codebase getting incredibly fat just as quickly.

      And when it comes to actually maintaining that generated code, they’ll be the hardest to motivate, because it isn’t as fun as slapping a feature onto the side, nor do they feel responsible for code they don’t really know the workings of. Never mind that they’re also less sharp in general, because they’ve outsourced thinking.

    • Holytimes@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      11
      arrow-down
      5
      ·
      17 days ago

      Hell, most people turn off their brains as soon as the word gets mentioned at all. There’s plenty of basic shit an AI can do exactly as well as a human. But people hear “AI” and instantly become the equivalent of a shit-eating insect.

      As long as you’re educated and experienced enough to know the limitations of your tools and to use them correctly, AI is literally a non-factor and about as likely to make an error as the dev themselves.

      The problem with AI slop code comes from executives in high up positions forcing the use of it beyond the scope it can handle and in use cases it’s not fit for.

      Lutris doesn’t have that problem.

      So unless the guy suddenly goes full stupid and starts letting AI write everything, the quality is not going to change. If anything, it’s likely to improve as he offloads tedious small things to his more efficient tools.

      • Echo Dot@feddit.uk
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        1
        ·
        16 days ago

        The problem is I’ve seen people who supposedly have a brain start to use AI, and over time they become increasingly confident in the AI’s abilities. Then they stop bothering to review the code.

        • Auli@lemmy.ca
          link
          fedilink
          English
          arrow-up
          1
          ·
          16 days ago

          That is the problem. They become dependent on it, and it is human nature to be lazy, so eventually the “safeguards” will come off.

    • Auli@lemmy.ca
      link
      fedilink
      English
      arrow-up
      1
      ·
      16 days ago

      Just wait, in a couple of months he’ll have a teenage girl sentient AI.

  • ipkpjersi@lemmy.ml
    link
    fedilink
    English
    arrow-up
    29
    arrow-down
    9
    ·
    edit-2
    16 days ago

    Honestly, unfortunately, I agree. It IS unfortunately helpful, and if you’re a competent developer using AI tooling, you can make sure it doesn’t generate slop. You are responsible for your code, at the end of the day.

    AI does generate societal damage, but that’s mostly because of how companies abuse it and less because of the technology itself.

    • Tony Bark@pawb.socialOP
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      1
      ·
      16 days ago

      By telling people he expected this and obfuscating the authorship afterwards, he is doing damage in the form of eroding trust for a tool that has otherwise proven reliable.

    • Voroxpete@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      3
      ·
      16 days ago

      As I’ve said elsewhere here, I really don’t have a problem with people holding a moral stance against the use of genAI. It’s fine to just say “However useful this might be, I don’t want to see it used because I think it has too many ethical costs/consequences.” But blanket accusing all work that involved genAI in any capacity of being “slop” isn’t holding a moral stance, it’s demanding that reality conform to your beliefs; “I hate this, therefore it must be terrible in every respect.”

      If you truly hold a well-founded ethical stance against the use of genAI, that stance shouldn’t be threatened by people doing good and effective work with genAI, because its effectiveness should have nothing to do with your objections.

  • Skankhunt420@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    24
    arrow-down
    4
    ·
    16 days ago

    Open source stuff is awesome and I really like people improving Linux in their spare time

    But, to do it this way is basically saying “fuck you” to the community which is fucked up.

    Could have talked about how AI helps him, or how he uses it for templates or whatever, and even if I didn’t agree with those points either, that’s a lot better than being like “alright, good luck finding it now then, bitch”.

    I wouldn’t mess with anything this guy does anymore after this.

    • pheelicks@lemmy.zip
      link
      fedilink
      English
      arrow-up
      16
      arrow-down
      5
      ·
      16 days ago

      Are you talking about his way of communicating or about his AI use? I think it could have been put a bit more level-headedly, but I mostly agree with what he said. I also see no issue with the “good luck finding it then” part that seems to sound malicious to you. To me it means: “if you can’t find a difference in quality, your whole complaint is invalid, because there basically is no difference in quality”. Yes, it’s still AI and should not be viewed as more than a knowledgeable intern, yada yada, but I hope the point comes across…

      • Skankhunt420@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        1
        ·
        16 days ago

        If there’s no difference in quality why obfuscate it? Why hide something that you think is a valuable tool if your code can speak for itself?

        He could have used that opportunity to take a standing his own way “this is what I am doing and if you don’t like it feel free to make a fork but I think this is blown out of proportion for: (reasons he could list his opinions on)”

        But being like “good luck finding it now” is 100% malicious in this context. Or if malicious is too strong a word for this, it’s definitely not user-friendly at all.

        And certainly not very “open”.

        • pheelicks@lemmy.zip
          link
          fedilink
          English
          arrow-up
          3
          ·
          16 days ago

          I don’t see it as obfuscation if there is no underlying difference. Why treat working code differently depending on the source if what matters is that it works (which it does by definition). Of course there has to be more quality control if AI is able to produce more code, but I don’t think that’s the point here right? Why highlight the different sources of the code if, as you said, the code can speak for itself. What’s the difference to you if you can’t tell them apart?

          • Skankhunt420@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            1
            ·
            16 days ago

            The difference is that AI is a known issue-creator for many projects (that huntarrr app comes to mind), and AI usage is supposed to be disclosed transparently for compliance with copyright and licensing.

            But even despite all that its kind of a shitty way to go about it the way he did, in my opinion.

        • FauxLiving@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          15 days ago

          If there’s no difference in quality why obfuscate it? Why hide something that you think is a valuable tool if your code can speak for

          The timeline was that he started adding attribution indicating the use of AI.

          Then the anti-AI drones started bombarding the Github, Discord and forums with harassment. His recent statements and removal of attribution are entirely addressed at and because of the anti-AI people harassing the project staff.

          He’s not removing it and saying ‘fuck you’ to the users. He’s tired of being harassed by third parties who are not involved with the project in any way and so he removed the source of the harassment.

      • Bongles@lemmy.zip
        link
        fedilink
        English
        arrow-up
        5
        ·
        16 days ago

        In my opinion, he should’ve left it as a co author. I think if you as a user have an ethical issue with Claude, that’s your choice and you can make the decision not to use lutris. I mostly agree with what he says until that part about removing Claude so “good luck finding it”.

        It’s not about finding a difference for people (usually), it’s about how that model was trained on the work of others, without consent, for free, to then sell. He made his points about how much it helps, that it’s better than using Meta, Google, OpenAI, or Copilot and I think that’s probably true. But he made that case, so why then hide what Claude has done?

        In gaming, Valve requires you to list if you have used AI in the creation of your game and you describe in what way. It’s not because the game will 100% of the time be absolute slop (right now it usually is), it’s so that the potential customer can be informed and choose to or not to support the use of AI in those products.

        As far as I’m reading, most people who reviewed the actual code think it’s fine. So, again, I don’t see the point in hiding it other than being somewhat petty.

        • FauxLiving@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          15 days ago

          I don’t see the point in hiding it other than being somewhat petty.

          The point in hiding it was that it was being used, without harassment or complaint, right up until he added attribution which resulted in an avalanche of complaints which require resources to deal with. Discord, the forums and Github pull requests now require much more moderation labor, which takes away from the project.

          People had no complaints about the code quality until he started adding AI attribution. So he removed the attribution.

          Like he said, if people can’t tell the difference until he started marking the code AI assisted… then they don’t actually have an argument and are simply bringing anti-AI politics into the project.

      • Senal@programming.dev
        link
        fedilink
        English
        arrow-up
        5
        ·
        edit-2
        15 days ago

        Think of it like a jeweller suddenly announcing they were going to start mixing in blood diamonds with their usual diamonds “good luck finding them”.

        Functionally, blood diamonds aren’t different.

        Leaving aside that you might not want blood diamonds, are you really going to trust someone who essentially says, “Fuck you, I’m going to hide them because you’re complaining”?

        If you don’t know what blood diamonds are, it’s easily searchable.

        I’ll go on record as saying the aesthetic diamond industry is inflationist monopolist bullshit, but that doesn’t alter the analogy


        Secondly, it seems you don’t really understand why LLM generated code can be problematic, i’m not going to go in to it fully here but here’s a relevant outline.

        LLM generated code can (and usually does) look fine, but still not do what it’s supposed to do.

        This becomes more of an issue the larger the codebase.

        The amount of effort needed to find this reasonable looking, but flawed, code is significantly higher than just reading a new dev’s version.

        Hiding where this code is makes it even harder to find.

        Hiding the parts where you really should want additional scrutiny is stupid and self-defeating.

        • pheelicks@lemmy.zip
          link
          fedilink
          English
          arrow-up
          2
          ·
          16 days ago

          Thanks, I think your first point is a really valid one. AI technology is far from clean, especially in a political scope.

          To your second point. I see that, but on the other hand, it makes an impression on me as if human code would be free of such errors. I would not put human code on an (implied) pedestal (especially not mine), but maybe I’m missing your point. I think being suspicious about AI code is good but same goes for human code. To me it sounds like nobody should ever trust AI code because there can or will be mistakes you can’t see, which is reasonably careful at best and paranoid at worst. At some point there is no difference anymore between “it looks fine” and “it is fine”.

          • Senal@programming.dev
            link
            fedilink
            English
            arrow-up
            2
            ·
            15 days ago

            Let’s assume we’re skipping the ethical and moral concerns about LLM usage and just discuss the technical.

            it makes an impression on me as if human code would be free of such errors

            Nobody who knows anything about coding is claiming human code is error free, that’s why code reviews, testing and all the other aspects of the software development lifecycle exist.

            To me it sounds like nobody should ever trust AI code

            Nobody should trust any code unless it can be verified that it does what is required consistently and predictably.

            because there can or will be mistakes you can’t see, which is reasonably careful at best and paranoid at worst

            This is a known thing, paranoia doesn’t really apply here, only subjectively appropriate levels of caution.

            Also it’s not that they can’t be seen, it’s just that the effort required to spot them is greater and the likelihood to miss something is higher.

            Whether or not these problems can be overcome (or mitigated) remains to be seen, but at the moment it still requires additional effort around the LLM parts, which is why hiding them is counterproductive.

            At some point there is no difference anymore between “it looks fine” and “it is fine”.

            This is important because it’s true, but it’s only true if you can verify it.

            This whole issue should theoretically be negated by comprehensive acceptance criteria and testing but if that were the case we’d never have any bugs in human code either.

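            The “looks fine” vs. “is fine” distinction can be made concrete with a minimal sketch (the `mean` helper below is hypothetical, purely illustrative): code that passes a quick glance and a happy-path check can still fail an edge case that only a test surfaces.

```python
# Minimal sketch: "looks fine" vs "is fine" (hypothetical helper).
def mean(values):
    return sum(values) / len(values)

# Happy-path check: looks fine at a glance.
assert mean([2, 4, 6]) == 4.0

# Edge case a skim-review misses: no guard for empty input.
try:
    mean([])
except ZeroDivisionError:
    print("caught: mean([]) is undefined")  # only a test surfaces this
```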

            Personally I think the “uncanny valley code” issue is an inherent part of the way LLMs work and there is no “solution” to it; the only option is to mitigate as best we can.

            I also really really dislike the non-declarative nature of generated code, which fundamentally rules it out as a reliable end to end system tool unless we can get those fully comprehensive tests up to scratch, for me at least.

            • pheelicks@lemmy.zip
              link
              fedilink
              English
              arrow-up
              2
              ·
              15 days ago

              Thanks for taking the time to reply.

              Also it’s not that they can’t be seen, it’s just that the effort required to spot them is greater and the likelihood to miss something is higher.

              Greater compared to human code? Not sure about that, but I’m not disagreeing either. Greater compared to verifiably able programmers, sure, but in general?

              I also really really dislike the non-declarative nature of generated code, which fundamentally rules it out as a reliable end to end system tool unless we can get those fully comprehensive tests up to scratch, for me at least.

              I don’t think I’m getting your point here. Do you mean that the code basically lacks focus on an end goal? Or are you talking about the fuzziness and randomization of the output?

              • Senal@programming.dev
                link
                fedilink
                English
                arrow-up
                2
                ·
                15 days ago

                Greater compared to human code? Not sure about that, but I’m not disagreeing either. Greater compared to verifiably able programmers, sure, but in general?

                Both.

                The reasons are quite hard to describe, which is why it’s such a trap, but if you spend some time reviewing LLM code you’ll see what I mean.

                One reason is that it isn’t coding for logical correctness, it’s coding for linguistic passability.

                Internally there are mechanisms for mitigating this somewhat, but it’s not an actual fix, so problems slip through.

                I don’t think I’m getting your point here. Do you mean that the code basically lacks focus on an end goal? Or are you talking about the fuzziness and randomization of the output?

                The latter, if you give it the exact same input in the exact same conditions, it’s not guaranteed to give you the same output.

                The fact that it’s sometimes close to the same actually makes it worse, because then you can’t tell at a glance what has changed.

                It also isn’t as simple as using a diff tool, at least for anything non-trivial, because its variations can be in logical progression as well as in language.

                Meaning you need to track these differences across the whole contextual area which, if you are doing end to end generation, is the whole codebase.

                As I said, there are mitigations, but they aren’t fixes.
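                A minimal sketch of why plain diffing falls short (the `clamp` snippets below are hypothetical, standing in for two generation runs): two outputs that are textually almost identical can still differ in logic, so neither a glance nor a similarity score reliably flags the change.

```python
# Minimal sketch (hypothetical snippets): two near-identical "generated"
# functions whose tiny textual diff hides a real logic change.
import difflib

run_a = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))\n"
run_b = "def clamp(x, lo, hi):\n    return min(lo, max(x, hi))\n"

# Textually the two runs are almost the same...
ratio = difflib.SequenceMatcher(None, run_a, run_b).ratio()
assert ratio > 0.85

# ...but behaviourally they diverge: run_b fails to clamp correctly.
ns_a, ns_b = {}, {}
exec(run_a, ns_a)
exec(run_b, ns_b)
assert ns_a["clamp"](5, 0, 10) == 5   # correct: 5 is already in range
assert ns_b["clamp"](5, 0, 10) == 0   # wrong: collapses to the lower bound
```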

  • forrgott@lemmy.zip
    link
    fedilink
    English
    arrow-up
    20
    arrow-down
    1
    ·
    17 days ago

    Up until recently, Lutris worked perfectly for me. Ever since around the release of Wine 11, though, I can’t get anything to even install, let alone play. This might explain my increasing frustration with the app.

    Guess I’m going back to using Bottles for the odd game or app I don’t feel like trying to shoehorn into Steam.

    • brsrklf@jlai.lu
      link
      fedilink
      English
      arrow-up
      2
      ·
      17 days ago

      I am very much a beginner, and until now Lutris was kind of my default answer for “how the hell do I get that Windows exe installer to spit its entrails so I can run it through Wine” (or even native engines like VCMI, Daggerfall Unity and Creatures Docking Station).

      For everything that doesn’t come from Steam, obviously.

      What is the more direct way? Does Bottles do that? I haven’t tried it yet.

      • forrgott@lemmy.zip
        link
        fedilink
        English
        arrow-up
        4
        ·
        17 days ago

        There’s actually a number of options. Lutris and Bottles are both built on top of Wine. And there are other apps that use Wine to make it all work, but I’m not very familiar with anything else…yet!

        Bottles can be a little tricky to get used to - one of the biggest issues is that it sandboxes the Wine runtime, so you’ll often need to move your .exe into the right file path. But other than that I found it pretty easy to use! So if you need something you can “drop in” to replace Lutris, it’s worth a try! It has some helpful preconfigured runtime environments, depending on whether you’re running a general-purpose application or a video game. For power users, you can even start with a blank slate.

        • brsrklf@jlai.lu
          link
          fedilink
          English
          arrow-up
          1
          ·
          17 days ago

          Interesting. I am mostly interested in running games. I’ll have a look into how Bottles works, then.

          I feel like for most if not all of my use cases that are not specific games, I can find some decent stuff running natively.

          • forrgott@lemmy.zip
            link
            fedilink
            English
            arrow-up
            2
            ·
            16 days ago

            Oh, definitely. One of the best things about Linux and the free software movement, innit? But the ‘applications’ preset in Bottles is great for that one tool that is just hard to live without, or some specific tool created by the community that may or may not ever get a native port (SAK.exe for managing Switch ROM files comes to mind for me).

            • brsrklf@jlai.lu
              link
              fedilink
              English
              arrow-up
              1
              ·
              16 days ago

              For now I think the thing that I’ll miss the most will be Virtual Desktop. I haven’t tried my headset with this PC yet; I have a more recent one that’s still on Win11 for that. But I know SteamLink is completely broken for me, and VD is what makes PCVR even possible for me.

              I blame Valve for that need, by the way. They had a version of SteamVR/SteamLink that worked well enough a couple of versions back; they broke it in newer versions for my headset, and I can’t even go back to the one that worked because the only option is “previous” and we’re past that. Many reports later, they still haven’t fixed it.

              • forrgott@lemmy.zip
                link
                fedilink
                English
                arrow-up
                2
                ·
                16 days ago

                I am trying to give their programmers credit where I can; first, the massive influx of time and money into gaming on Linux has had obvious, amazing benefits. And my recent gripes would be about a persistent bug that has crept into SteamOS desktop mode; but it’s a one-line shell script to fix, and they just moved to a much more recent kernel, not to mention officially tackling support for third-party handheld PCs, so…yeah, that all sounds like a headache on crack.

                But, honestly, I hear ya all the same. I think we feel confident holding these guys to a high standard for good reasons, so hopefully it all comes out in the wash.

                Edit: I don’t know much about virtual desktop options that run native, but wish you luck. Seems like lots of stuff going on in that area these days…

                • brsrklf@jlai.lu
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  16 days ago

                  Thank you, I think at some point I’ll end up getting the Frame, or at least a newer headset that’s guaranteed to be supported by their API, so I certainly hope it’ll work on Linux.

                  Sure, they’ve done a lot to make the transition to Linux easier, and that’s great. Especially right now, with Microsoft going to shit harder than ever. It sounded a bit overdramatic around Win8 when they went all “Microsoft gaming is over”, but they were definitely right to start working on it.

                  But specifically for VR, I tend to think they should be held somewhat accountable because they sell VR games. I bought games there with the expectation they’d work, and they did, for a while. The fact that they suddenly don’t, without anything changing on my end, is bad. Especially since one solution would be letting us go back to the version that worked.

                  Unfortunately for now the only good workaround I know is VD, which is Windows-only proprietary software.

      • prole@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        2
        ·
        17 days ago

        If you’re talking about games, I usually just add the exe to Steam as a non-Steam game and enable Proton for it.