extensions.gnome.org, the official hosting site for GNOME Shell extensions, will no longer accept submissions containing AI-generated code.

Due to the growing number of AI-generated extensions being submitted for review, a new rule has been added to the site's review guidelines stating that AI-generated code will be explicitly rejected.

  • theneverfox@pawb.social · 16 days ago

    We do this all the time. I’m certified for a whole bunch of heavy machinery; if I were worse at it, people would’ve died

    And even then, I’ve nearly killed someone. I haven’t, but on a couple occasions I’ve come way too close

    It’s good that I went through training. Sometimes, it’s better to restrict who is able to use powerful tools

      • theneverfox@pawb.social · 16 days ago

        People have already died because of AI. It’s cute when the AI tells you to put glue on your pizza or asks you to leave your wife; it’s not so cute when architects and doctors use it

        Bad information can be deadly. And if you rely too hard on AI, your cognitive abilities drop. It’s a simple mental shortcut that works on almost everything

        It’s only been like 18 months, and already it’s become very apparent a lot of people can’t be trusted with it. Blame and punish those people all you want, it’ll just keep happening. Humans love their mental shortcuts

        Realistically, I think we should just make it illegal to offer customer-facing LLMs as a service. You want an AI? Set it up yourself. It’s not hard, and realizing it’s just a file on your computer would do a lot to demystify it
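        A minimal sketch of the “set it up yourself” route, assuming a local build of llama.cpp; the model path below is a placeholder for whatever GGUF file you download:

```shell
# Build llama.cpp from source (assumes git, cmake, and a C/C++ toolchain).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# The "file on your computer" part: the model itself is just a GGUF file
# on disk. ./models/some-model.gguf is a placeholder path.
./build/bin/llama-cli -m ./models/some-model.gguf -p "Hello, world"
```

        Everything runs locally; there is no service in the loop, just a binary and a weights file.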

          • theneverfox@pawb.social · 16 days ago

            Well I was just arguing that people generally are using AI irresponsibly, but if you want to get specific…

            You say ban the users, but realistically, how would they determine who to ban? The only way to reliably check whether something is AI is human intuition. There’s no tool that can do it; it’s a real problem

            So effectively, they made it an offense to submit AI slop. Because if you just use AI properly as a resource, no one would be able to tell

            So what are you upset about?

            They did basically what you suggested, they just did it by making a rule so that they can have a reason to reject slop without spending too much time justifying the rejection

              • theneverfox@pawb.social · 16 days ago

                I don’t think you understand what doing code reviews is like.

                So someone submits terrible code. You don’t get to just say “this is bad code” and reject it wholesale, you have to explain in exhaustive detail what the problems are. Doing otherwise leads to really toxic environments. It’s killed countless projects

                That’s why you write rules. You don’t have to argue about whether tests are needed; you tap the sign and reject it without actually reviewing it if it doesn’t meet the requirement

                Same thing here. Someone submits vibe-coded nonsense, so you tap the sign and reject it without bothering to review it. Try doing the same thing with “bad code” as the reason and it starts insane drama.

                People are really sensitive about their code, and there’s a whole methodology around how to do reviews without ending up in a screaming match