The GNOME.org Extensions hosting for GNOME Shell extensions will no longer accept new contributions with AI-generated code. A new rule has been added to their review guidelines to forbid AI-generated code.
Due to the growing number of GNOME Shell extensions generated with AI looking to appear on extensions.gnome.org, such submissions are now prohibited. The new rule in their guidelines notes that AI-generated code will be explicitly rejected.
You used to be able to tell an image was photoshopped because of the pixels. Now with code you can tell it was written with AI because of the comments.
And from seeing quite a bit of slop in my time:
Emojis in comments, the filename as a comment on the first line, and so on
Isn’t the file name as a comment on the first line default behavior for multiple IDEs/boilerplate generators?
They weren’t hiding it, they started with vibe
# Optional but […]
edit to explain my very vague comment: ChatGPT loves to offer code with some lines commented as “Optional [… explanation]”. You can easily tell code is AI-written when those monologuing comments are left in
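A made-up sketch of the comment style being described (a hypothetical widget, not taken from any real submission) might look something like this:

```javascript
// extension.js  <- the filename repeated as the very first comment
// 🚀 Super Weather Widget 🚀

// Optional but recommended: switch this to Fahrenheit if you prefer
const UNIT = '°C';

// ✨ Format the temperature for the panel label ✨
function formatTemp(value) {
    // Optional: round to one decimal place instead
    return `${Math.round(value)}${UNIT}`;
}

console.log(formatTemp(21.4)); // → "21°C"
```

Every tell mentioned in the thread is here at once: the filename as the first-line comment, the emoji banner, and the “Optional …” asides narrating choices nobody asked about.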
extension developers should be able to justify and explain the code they submit, within reason
I think this is the meat of how the policy will work. People can use AI or not. Nobody is going to know. But if someone slops in a giant submission and can’t explain why any of the code exists, it needs to go in the garbage.
Too many people think because something finally “works”, it’s good. Once your AI has written code that seems to work, that’s supposed to be when the human starts their work. You’re not done. You’re not almost done. You have a working prototype that you now need to turn into something of value.
Too many people think because something finally “works”, it’s good. Once your AI has written code that seems to work, that’s supposed to be when the human starts their work.
Holy shit, preach!
Once you give a shit ton of prompts and the feature finally starts working, the code is most likely complete ass, probably filled with a ton of useless leftovers from previous iterations, redundant and unoptimized code. That’s when you start reading/understanding the code and polishing it, not when you ship it lol
Just the fact that people are actually trying to regulate it instead of saying “too nuanced, I’ll fix it tomorrow” makes me happy.
But they are also doing it pretty reasonably too. I like this.
Rare, much-needed GNOME W
I applaud the move, but man, that’s gonna be a lot of work on their end.
How is AI-generated content detected and what is the process for disputing such claims?
If it’s not clear whether it’s AI, it’s not the code this policy was targeting. This is so they don’t have to waste time justifying removing the true AI slop.
If the code looks bad enough to be indistinguishable from AI slop, I don’t think it matters whether it was handwritten or not.
I guess the practical idea is that if your AI-generated code is so good, and you’ve reviewed it so well, that it fools the reviewer, the rule did its job and then it doesn’t matter.
But most of the time the AI code jumps out immediately to any experienced reviewer, and usually for bad reasons.
So then it’s not really a blanket “no-AI” rule if it can’t be enforced when the code is good enough? I suppose the rule should have been “no obviously bad AI” or some other equally subjective thing?
Wow, that dude is a piece of work. I made the mistake of clicking one of the links to his blog, and wow. There’s a stunning lack of knowledge or self-respect there
It’s not hard, just use your eyes or an AI-detector
ai detectors are not good. may as well ask your magic 8 ball
Which is how the code was generated in the first place.
This is one of the things that people who use AI to vibe code don’t get. Sure your AI genned code ends up working but when you actually look at the code it’s sloppy as all fuck, with a lot of unnecessary junk in it. And if you ever have to fix it, good fucking luck finding what’s actually going on. Since you didn’t write it there’s no way for you to know exactly what it is that’s actually fucking up.
Really, you end up no better than somebody who copy-pasted some code they found on the internet and plugged it into their shit with no idea of how any of it actually works.
Good.
I’ve mostly switched off SAMMI because their current head dev is all in on AI bullshit. I’ve got maybe one thing left to move to streamerbot and then I’m clear. My two regular viewers won’t notice at all, but I’ll feel better about it.

So what does this mean? Because (at least with my boss) whenever I submit AI-generated code at work, I still have to have a deep and comprehensive understanding of the changes I made, and I have to be right (meaning I can’t just say the AI solved the problem). What’s the difference between that and me writing the code myself (plus Googling and Stack Overflow)?
The difference is people aren’t being responsible with AI
You’re projecting competence onto others. You speak like you’re using AI responsibly
I use AI when it makes things easier. All the time. I bet you do too. Many people are using AI without a steady hand, without the intellectual strength to use it properly in a controlled manner
It’s like a gas can over a match. Great for starting a campfire. Excellent for starting a wildfire.
Learning the basics and developing a workflow with VC is the answer.
That sounds like copium… but I’ll hear you out. What is VC? It better not be version control
Large language models are incredibly useful for replicating patterns.
They’re pretty hit and miss with writing code, but once I have a pattern that can’t easily be abstracted, I use it all the time and simply review the commit.
Or a quick proof of concept to ensure a higher level idea can work. They’re great for that too.
It is very annoying, though, when people submit code to me that is all AI and incredibly incorrect.
It’s just another tool on my belt. It’s not going anywhere, so the real trick is figuring out when to use it, why, and when not to use it.
To be clear, VC was version control. I should have been clearer.
Okay, that’s pretty fair. You seem to understand the tool properly
I’d argue that version control is not the correct layer to evaluate output, but it is a tool that can be used in many different ways… I don’t think that’s a great workflow, but I can conceive of situations where it’s viable enough
If I were handing out authorizations to use AI, you’d get it
Banning a tool because the people using it don’t check their work seems shortsighted. Ban the poor users, not the tool.
We do this all the time. I’m certified for a whole bunch of heavy machinery, if I were worse people would’ve died
And even then, I’ve nearly killed someone. I haven’t, but on a couple occasions I’ve come way too close
It’s good that I went through training. Sometimes, it’s better to restrict who is able to use powerful tools
Yeah, something tells me operating heavy machinery is different from uploading an extension for a desktop environment. This isn’t building medical devices, this isn’t some MISRA compliance thing, this is a widget. Come on, man, you have to know the comparison is insane.
People have already died to AI. It’s cute when the AI tells you to put glue on your pizza or asks you to leave your wife, it’s not so cute when architects and doctors use it
Bad information can be deadly. And if you rely too hard on AI, your cognitive abilities drop. It’s a simple mental shortcut that works on almost everything
It’s only been like 18 months, and already it’s become very apparent a lot of people can’t be trusted with it. Blame and punish those people all you want, it’ll just keep happening. Humans love their mental shortcuts
Realistically, I think we should just make it illegal to have customer facing LLMs as a service. You want an AI? Set it up yourself. It’s not hard, but realizing it’s just a file on your computer would do a lot to demystify it
Have people died to desktop extensions?
Cause that’s the topic here.
You’re fighting a holy war against all AI, dune style.
I’m saying this is a super low risk environment where the implications appear to be extra try/catch blocks the code reviewers don’t like – not even incorrect functionality.
Well I was just arguing that people generally are using AI irresponsibly, but if you want to get specific…
You say ban the users, but realistically how are they determining that? The only way to check if something is AI is human intuition. There’s no reliable tool to do it, and that’s a real problem
So effectively, they made it an offense to submit AI slop. Because if you just use AI properly as a resource, no one would be able to tell
So what are you upset about?
They did basically what you suggested, they just did it by making a rule so that they can have a reason to reject slop without spending too much time justifying the rejection
They should state a justification. Not merely what they are looking for to identify AI generated code.
The justification could be that the author is unlikely to be capable of maintenance, in which case the extension just pushes the inconvenience/burden onto others.
So far there is no justification stated besides “da fuk” and “yuk”.
Exactly, there’s no criterion other than the reviewer getting butthurt. Granted, this is GNOME, so doing whatever they feel like regardless of consequences is kind of their thing, but a saner organization would try to make the actual measurable badness more clear.
A saner organization would also hit up submitters for a reviewer’s fee. This would reduce AI spam. Barriers to entry matter.
A reviewer’s fee is equivalent to Canonical offering customer support contracts. Obviously a person who needs to lean on AI as a crutch is just screaming out for reviewers to act as advisers. The reviewer wielding the giant DENIED stamp is fun, but it doesn’t address the issue of noobs implicitly asking to work with a consultant.
GNOME reviewers: obviously never missing an opportunity to miss an opportunity.
The answer you seek is literally the post.
What’s the difference? Jesus, we have seen the difference in the news for the past year. You know the difference. Don’t play dumb now.
We’re still talking about extensions, right? Those things in GNOME that show the weather, or the time in a different time zone?
Because if so, your response is kinda weird. Oh no, my weather applet was created using AI! Everything will fall apart! Jesus Christ, we need to burn the author for that!