Yes, and so can most experienced developers. In fact, unmaintainable human-written code is more often caused by organisational dysfunction than by lack of individual competence.
In my experience there’s usually a confluence of individual and institutional failures.
It usually goes like this.
- hotshot developer is hired at a company with crappy software
- hotshot dev pitches a complete rewrite that will solve all issues
- complete rewrite is rejected
- hotshot dev shoehorns a new architecture and trendy dependencies into the old codebase
- hotshot dev leaves
- software is more complex, inconsistent, and still crappy
That’s one of the failure modes; good orgs have design and review processes to stop it.
There are other classics like arbitrary deadlines, conflicting and shifting requirements and product direction, perverse incentives, etc.
I would even say that the AI craze is a result of the latter.
Yeah, a lot of code develops organically (aka shifting demands). Devs know the code is getting worse, but for lack of time or money they don’t have the option to review and redo it.
Yes. But the important thing is that now dysfunctional organizations have access to tools to write unmaintainable code really fast.
Not in my case. I don’t write spaghetti code, I write fettuccine code
I want to write gnocchi code, where each little nugget is good on its own and they still blend together perfectly in the sauce. But I still end up with mashed-potato code if I don’t watch myself.
Pretty sure I can, considering I’m still maintaining a project I originally started in 2009, which is a core component of my email service.
Please tell me the software patent in that project is copylefted
The one in Port87 is the only patent I have, and it is not copyleft. I have tons of open source code that I could have patented, including in Nymph, but didn’t. Now that prior art exists and is in the market, those things can’t be patented.
There’s very little reason to seek a patent except to offer the product for sale in the market. It’s wildly time consuming and expensive. Mine cost me about $17k and took me three years to get. And I’m not a big company with mountains of cash and lawyers on the payroll. I patented it so that Microsoft, Google, etc. couldn’t just see my idea and be like, “that’s good, let’s take it”. That would kill my business. Copylefting the patent would allow them to do that.
It’s Apache 2.0
Port87 is not Apache 2.0. There are no patents that cover Nymph.js, which is the one that’s Apache 2.0.
I mean, yes, absolutely I can. So can my peers. I’ve been doing this for a long, long time, as have my peers.
The code we produce is many times more readable and maintainable than anything an LLM can produce today.
That doesn’t mean LLMs are useless, and it also doesn’t mean that we’re irreplaceable. It just means this argument isn’t very effective.
If you’re comparing an LLM to a Junior developer? Then absolutely. Both produce about the same level of maintainable code.
But for Senior/Principal level engineers? I mean this without any humble bragging at all: but we run circles around LLMs from the optimization and maintainability standpoint, and it’s not even close.
This may change in the future, but today it is true (and I use all the latest Claude Code models)
sir, this is programmer_humor
and some jokes just aren’t funny
The biggest problem with using AI instead of junior developers is that junior developers eventually become senior developers. LLMs … don’t.
😞 Sir this is a Wendy’s.
Maybe the real slop was the code we wrote along the way
But, I didn’t check any of mine in?
Bah, you both read the same Stack Exchange. But it remembered it byte for byte.
Yes. That’s literally the first point in my job description.
When that coworker tells you “hah you must have generated this” but you coded this yourself 👀
“You need to try your best” “This was my best…”
ITT: AI-induced Dunning-Kruger. Everybody can write maintainable code; it just somehow happens that nobody does.
Most of the unmaintainable code I’ve seen is because businesses don’t appreciate the need to occasionally refactor/rewrite or do anything to maintain code. They only appreciate piling more on. They’d do away with bug fixing too if they could.
This is why AI coding is being pushed so hard. Guess what’s great at piling on at 30x speed? If piling on is all companies appreciate then that’s what they’ll demand.
My company is totally like this. If you don’t write a shiny new feature immediately, you don’t last.
Many open-source projects are in the same state. I know for sure my projects become spaghetti if I work on them for more than a year.
Besides, I’d argue that if you need to rewrite (part of) it, it’s because it wasn’t maintainable in the first place.
I disagree.
Rewrites can happen due to new feature support.
For example: it’s entirely possible that a synchronous state machine worked for the previous needs of the software, but it grew to a point where that state machine is unable to meet the new requirements and needs to be replaced with a modern design built around asynchronous signals/delegates.
Just because that system was replaced doesn’t mean it wasn’t maintainable, readable, or easy to understand. It just wasn’t compatible with the growing needs of the application.
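A toy sketch of that kind of migration (all names hypothetical, using Python’s asyncio as a stand-in for whatever signal/delegate mechanism the real codebase would use): the synchronous machine handles each transition inline, while the async replacement pulls events off a queue so slow handlers no longer block everything else.

```python
import asyncio

class SyncMachine:
    """Synchronous version: each transition is handled inline, blocking the caller."""
    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        if self.state == "idle" and event == "start":
            self.state = "running"
        elif self.state == "running" and event == "stop":
            self.state = "idle"
        return self.state

class AsyncMachine:
    """Async version: transitions arrive on a queue, so handlers that do I/O
    or wait on timers no longer stall the rest of the application."""
    def __init__(self):
        self.state = "idle"
        self.events = asyncio.Queue()

    async def run(self):
        while True:
            event = await self.events.get()
            if event == "shutdown":
                break
            if self.state == "idle" and event == "start":
                self.state = "running"
            elif self.state == "running" and event == "stop":
                self.state = "idle"

async def demo():
    m = AsyncMachine()
    task = asyncio.create_task(m.run())
    await m.events.put("start")
    await m.events.put("shutdown")
    await task
    return m.state

print(SyncMachine().handle("start"))  # -> running
print(asyncio.run(demo()))            # -> running
```

Both versions encode the same transitions, which is the point: the old one wasn’t unmaintainable, it just couldn’t grow to handle concurrent event sources.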
It can, but usually that’s not the case.
This 100%
Can I? Sure. Do I give af, since my company doesn’t care about me as anything other than a number in a spreadsheet? No.
Well, even for my private projects that I care about I end up having to rewrite every few years.
No, so let’s vibe unmaintainable code together!

Yus [good image]. Use it to assist and expedite learning (mostly by double-checking its output and debugging its code) to get better. Not as a slave to do your work for you.
Guys, you can laugh at a joke. The AI doesn’t win just because someone upvoted a meme. Maintainability of codebases has been a joke for longer than LLMs have been around because there’s a lot of truth to it.
Even the most well-intentioned design has weaknesses we didn’t see coming. Some of its abstractions are wrong. There are changes to the requirements and feature set that it didn’t anticipate. Other parts are over-engineered in ways that make them harder to navigate for no maintainability gain. That’s OK. Perfectly maintainable code would require us to be psychics, and none of us are.
I actually laughed out loud at this meme.
I might not be the best, but I can still do a better job than AI
This is a bold claim I will not make.
If you’re a complete novice then obviously not, but I think anyone reasonably proficient in a language would be able to identify optimisations that an AI just doesn’t seem to perceive, largely because humans are better at context.
It’s like that question about whether it’s worth driving your car to the car wash if the car wash is only 10 metres away. AIs have no experience of the real world, so they don’t inherently understand that you can’t wash a car if it’s not at the car wash. A human would instantly know that’s a stupid statement without even thinking about it, whereas unless you instruct an AI to actually think deeply about something, it just gives you the first answer it comes up with.
That’s why they’re pushing for the data centers: they want to make every query that deep. The tech is here, but the ability to sustain it isn’t. They build the data centers, kick the developers out, depress the education market for it, and then raise the prices.
Companies will be paying the AI companies 60k per year per seat in a decade.
At that price it would be cheaper to use humans
That’s the brilliance. There won’t be a pool of trained young developers by then.
What makes you think any of this will still exist a decade from now?
I agree with you. But the tool will output basic code that mostly does what’s asked, in seconds instead of tens of minutes, if not hours. So now we could argue whether the optimizations you make are worth the added cost of writing the code yourself, or whether it’s better to have the tool generate the code and then optimize it.
A tale as old as time. The US nuclear missile launch code was 00000000, but it didn’t matter. The chain of command was purpose-built, ironically, so the front-line soldier in a cold war scenario had to make the last decision to delete all life on the planet. Chain of command doesn’t matter at that point. You are choosing to kill everyone you know, on an order from who knows who. The ultimate checksum.
You will always be better at decisions than an n-dimensional matrix of numbers on an overpriced GPU.
I don’t understand your point about the soldier on the front line, but I’m interested. If you get a chance, can you elaborate please?
You will always be better at decisions than an n-dimensional matrix of numbers on an overpriced GPU.
I’d be careful about these claims. Maybe with our current iteration of “attention-based” LLMs, yes. But keep in mind that our capacity for processing information is sharply limited compared to how much data is fed to these LLMs during training, so in theory they have a lot more foundation for reasoning about new problems.
We’re vastly more capable at the moment at interpreting our limited view of foreign code, at being actually creative, at finding new ways to reason, yes. Capable developers (open source…) have often seen quite a bit more code than the average developer and are highly skilled, yet that is still just a tiny subset of the code an LLM has seen.
But say these models improve in creativity and “higher-level thought” through whatever means (e.g. more reinforcement learning). Well, let’s just say I’m careful with these claims. These LLMs are already quite a help with stupid boilerplate-y code (less so with novel stuff, or with writing idiomatic, non-redundant code), and compared to 2-3 years ago it’s quite a step already, to the point that they’re actually helpful, disregarding all the hype and obvious marketing strategies of these AI companies.
Yes.
Yes.
Why would you tell on yourself like this?
Haha. As the saying goes, “I still get paid.”
I could.
I choose not to! Take that, LLM!
Exactly. I’ve been sabotaging the AI with shitty code output since long before LLMs existed. That’s how I play 4D chess. (This is just meant to get a laugh. Some of my code is even quite nice, actually.)
More maintainable than whatever shit it put out
Frankly, I believe it can be maintainable if the person doing the prompting actually does something and correctly performs their role of human review and correction. Vibe coding without any review dooms the software’s maintainability.
In my experience, the biggest problem is that maintainable code necessarily requires extending/adapting existing structures rather than just slapping a feature onto the side.
And if we’re not just talking boilerplate, then this necessarily requires understanding the existing logic, which problems it solves, and how you can mold it to continue to solve those problems, while also solving the new problem.
For that, you can’t just review the code afterwards. You have to do the understanding yourself.
And once you have a clear understanding, it’s likely that the actual code change is rather trivial. At least more trivial than trying to convey your precise understanding to an LLM/intern/etc…

I’ll use an LLM to write bulk code, unit tests, other boring stuff… but I specifically only have it write code I’m already very familiar with. Even then, I hand-code it every so often, like 1 in every 3 times, to make sure I’m still able to. If I have to look something up, then I’ll stop using an LLM for that task for a long while.
Yeah, a lot of maintainability is about understanding how it works. Architectural decisions are the other half. Someone who’s paying attention can do well on both of these even using AI tools.