When I look at this website, which seems to be intended as a serious project and not a joke, to me it kind of feels like it would be the end of FOSS… https://malus.sh/ Is it just me?
Or will the majority of contributors still bother if they won’t even get the most basic attribution anymore, let alone GPL and other complex licenses being enforceable at all?
There are also these findings that make me wonder whether this service can even work, given the apparent training-data plagiarism problem. That feels independent of whether a gen AI being fed a project can ever be “clean room”:
https://dl.acm.org/doi/10.1145/3543507.3583199
https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/
I feel like there are more reasons than ever to tell people to cut out gen AI code from FOSS entirely, if they care about respecting attribution and the work of others. Even if just morally. This whole ride seems to be going in a bad direction.
I’m curious about other people’s thoughts, however.
PS: Don’t trust me on any law-related guesses, IANAL. This isn’t legal advice. I’m just a concerned coder.
Update: seems like it is satire https://malus.sh/blog.html but the trend as a whole seems to be real: https://www.mrlatte.net/en/stories/2026/03/05/relicensing-with-ai-assisted-rewrite/ https://www.theregister.com/2026/03/06/ai_kills_software_licensing/ https://writings.hongminhee.org/2026/03/legal-vs-legitimate/
It strikes me as a joke/sarcastic jab at the industry.
Tired of putting “Portions of this software…” in your documentation? Those maintainers worked for free—why should they get credit?
…and suddenly your entire proprietary codebase must be open sourced. The horror!
Through our offshore subsidiary in a jurisdiction that doesn’t recognize software copyright
I’m surprised anyone could take the site seriously.
I assumed it was real because people seem to be doing this for real: https://www.theregister.com/2026/03/06/ai_kills_software_licensing/
And outside of that supposed “clean room” AI trend, all the gen AI coders seem to be ignoring that AI apparently plagiarizes its training data too. And it seems to happen randomly and unpredictably, even for supposedly expert users like Microslop themselves: https://www.pcgamer.com/software/ai/microsoft-uses-plagiarized-ai-slop-flowchart-to-explain-how-github-works-removes-it-after-original-creator-calls-it-out-careless-blatantly-amateuristic-and-lacking-any-ambition-to-put-it-gently/
The website itself may be a joke, but we’re already starting to see this concept in practice with that one project that was rewritten by AI and relicensed from a copyleft license to a pushover one.
The other side of that is that any leaked proprietary code could also be reverse engineered and vibe coded.
Yeah, I find it scary how many projects embrace gen AI despite all the training data controversies. I’ve tried to convince some not to, but it’s hard. Even the Linux kernel appears to be using it now. It’s sad.
Some will argue that what we do is exploitative, that we are extracting the ideas from open source while leaving behind the people who contributed them. To this I say: yes, that is a reasonably accurate description of our business model. It is also a reasonably accurate description of every company that has ever used open source software without contributing back, which is to say, virtually every company that has ever used open source software. We are simply being honest about it, and charging a fee for the privilege.
I don’t think it’s satire.
The terms of use link goes nowhere, so I honestly don’t know.
But I feel like it doesn’t matter whether it is, for the sake of discussing where gen AI seems to be leading FOSS…
Lemmy currently seems to consider embracing it too, sadly. Feels potentially short-sighted to me, idk.
It might be the end of GPL-type licenses. But, at least as far as I’ve understood it, the point of copyleft was to use copyright against itself in the first place, because copyright sucks, and at the end of the day we don’t really want copyright OR copyleft. They’re both asserting “ownership” of stuff that honestly belongs in the public domain, free for all humans to use (in an ideal world, one that doesn’t contain evil corporations that are considered people for some reason). We already know copyleft open source has been widely abused in proprietary software. This is neither new nor surprising. We gave them the richly deserved middle finger whenever we could find out they did it, and we hate it. But it was never “the end” of open source software, because making code publicly available is precisely the defiance we are ultimately aiming for, and we will always do that no matter how much they steal it and make it closed source.
People making closed source software are the enemy, and our war of freedom against them continues regardless of what tactics they use to demean our efforts while they make their closed source software. We will never let them win. They think they’ve found a new way around the GPL; that’s a shame, but so be it. The arms race will continue, but open source will not go away, because the point of it has nothing to do with meekly relying on the law to allow open source to exist. That’s just a method that has been used, with some success, and it allowed a lot of people to turn open source into a livelihood, and it will be a terrible shame to lose that.
Those things are not the true goal of open source, though. The intention of open source is to not let proprietary, hidden software dictate the fate of humanity, and we will pursue that for as long as we have to. We’ll do it if we’re protected by copyleft, and we’ll do it if we’re not. We’ll still do it even if they make it illegal, and we’ll call it reverse engineering, hacking, and piracy if we have to. Because the information and code that humanity relies on must be free, not owned.
If it’s not satire, then calling it “malice” is too on-the-nose.
Seems like it might be satire after all: https://malus.sh/blog.html Will update my post in a second. But the trend seems to be real, with others discussing the effects on the GPL too: https://writings.hongminhee.org/2026/03/legal-vs-legitimate/
Didn’t a court recently rule that AI works can’t be patented or copyright protected? Meaning if malus.sh is for real, anyone using that service should, ironically, be donating the results to the public domain — or anyone leaking the produced code to the public can’t be held liable. 🤷‍♂️
The problem is cultural, not technical or legal. Most people are at best indifferent and more often supportive of the exploitation of others. Unless that changes, the exploitation will be relentless. AI is a new tool that facilitates a kind of exploitation. But the fundamental inclination to exploit with minimal appreciation and compensation is nothing new. Exploitation is not merely tolerated. It is broadly encouraged and venerated.

The law is primarily a tool of the elite to protect themselves. It does little to protect the interests of a typical FOSS contributor, and the state does even less. There have been a few cases fought and won, but compared to the scale of the industry, the resources committed to defending FOSS are trivial.

That’s no more the end of FOSS now than it was in the beginning. It will probably reduce revenue for a few companies that have been exploiting FOSS and FOSS producers for profit. The vast majority of contributors were never compensated. Of those that were, it was typically far less than the value of their contributions.
Now the next GPL needs to protect code written based on the docs.
Wait, this makes no sense. Just ask your AI to write the libraries you need rather than clone an existing project
I suppose, but then the “training data plagiarism” moral question remains. Rate seems to be like 2-5% for what they can pin down: https://dl.acm.org/doi/10.1145/3543507.3583199 I’m guessing that means the hidden actual rate might be higher… and there are the high profile incidents.
When I look at this website, which seems to be intended as a serious project and not a joke, to me it kind of feels like it would be the end of FOSS… https://malus.sh/ Is it just me?
This is satire, but it’s satirizing something very real.
Seems fishy… Can it do the reverse for proprietary code? If not, seems like it’s relying on being trained on the original code and not “clean room”.
That said, you fork it, you own it. Not technically a fork I guess but conceptually. And all code has bugs so welcome to your full time job maintaining 50 different previously freely maintained libraries.
So, first, that’s definitely satire. Look at the names of things, like Definitely Real, Inc.
Second, I think open source will continue to thrive, we just need to ensure AI slop is avoided.
For my own projects (and anyone else’s if they’d like), I’ve written a very comprehensive human-only contribution policy:
https://sciactive.com/human-contribution-policy/
If projects adopt policies like these, we can safeguard open source projects against the slop.
I wonder if it can also convert binaries. Put a copy of Oracle through it and watch the shenanigans.
Also wonder whether they can generate movies the same way. Put in a DVD and a completely new movie comes out, with the same plot, characters, etc., but with all the names and dialogue changed and the actors are replaced by deepfakes that look different.
Fwiw look up “derived work”.
Was there some generative AI that was making original Seinfeld episodes? What’s the deal with that?
I don’t know if that service can, but LLM-based workflows can do that. Here’s an LLM-based decompiler project which could serve as the first step in such a pipeline.
Yeah, or the leaked Windows XP source code. Every day, any use of gen AI code feels more to me like license laundering — if nothing else, of the training data that was involved. And I mean this purely as a gut feeling; I have no idea what a court would say. But it feels wrong.