• ViatorOmnium@piefed.social · +190/−3 · 1 month ago

    Yes, and so can most experienced developers. In fact unmaintainable human-written code is more often caused by organisational dysfunctions than by lack of individual competence.

    • Samskara@sh.itjust.works · +90 · 1 month ago

      In my experience there’s usually a confluence of individual and institutional failures.

      It usually goes like this.

      1. hotshot developer is hired at a company with crappy software
      2. hotshot dev pitches a complete rewrite that will solve all issues
      3. complete rewrite is rejected
      4. hotshot dev shoehorns a new architecture and trendy dependencies into the old codebase
      5. hotshot dev leaves
      6. software is more complex, inconsistent, and still crappy
      • ViatorOmnium@piefed.social · +17 · edited · 1 month ago

        That’s one of the failure modes; a good org would have design and review processes to stop it.

        There are other classics like arbitrary deadlines, conflicting and shifting requirements and product direction, perverse incentives, etc.

        I would even say that the AI craze is a result of the latter.

        • PapstJL4U@lemmy.world · +6 · 1 month ago

          Yeah, some code develops organically (i.e. under shifting demands). Devs know the code gets worse, but for lack of time or money they don’t have the option to review and redo it.

    • pinball_wizard@lemmy.zip · +5 · 1 month ago

      Yes. But the important thing is that now dysfunctional organizations have access to tools to write unmaintainable code really fast.

      • kindnesskills@literature.cafe · +2 · 1 month ago

        I want to write gnocchi code, where each little nugget is good on its own and they still blend together perfectly in the sauce. But I still end up with mashed potato-code if I don’t watch myself.

    • inari@piefed.zip · +9/−1 · 1 month ago

      Please tell me the software patent in that project is copylefted

      • hperrin@lemmy.ca · +11 · edited · 1 month ago

        The one in Port87 is the only patent I have, and it is not copyleft. I have tons of open source code that I could have patented, including in Nymph, but didn’t. Now that prior art exists and is in the market, those things can’t be patented.

        There’s very little reason to seek a patent except to offer the product for sale in the market. It’s wildly time consuming and expensive. Mine cost me about $17k and took me three years to get. And I’m not a big company with mountains of cash and lawyers on the payroll. I patented it so that Microsoft, Google, etc. couldn’t just see my idea and be like, “that’s good, let’s take it”. That would kill my business. Copylefting the patent would allow them to do that.

        • hperrin@lemmy.ca · +3 · edited · 1 month ago

          Port87 is not Apache 2.0. There are no patents that cover Nymph.js, which is the one that’s Apache 2.0.

  • neukenindekeuken@sh.itjust.works · +54/−3 · 1 month ago

    I mean, yes, absolutely I can. So can my peers. I’ve been doing this for a long, long time, as have my peers.

    The code we produce is many times more readable and maintainable than anything an LLM can produce today.

    That doesn’t mean LLMs are useless, and it also doesn’t mean that we’re irreplaceable. It just means this argument isn’t very effective.

    If you’re comparing an LLM to a Junior developer? Then absolutely. Both produce about the same level of maintainable code.

    But for Senior/Principal level engineers? I mean this without any humble bragging at all: but we run circles around LLMs from the optimization and maintainability standpoint, and it’s not even close.

    This may change in the future, but today it is true (and I use all the latest Claude Code models)

  • Tja@programming.dev · +44/−7 · 1 month ago

    ITT: AI-induced Dunning-Kruger. Everybody can write maintainable code; somehow it just happens that nobody does.

    • mushroommunk@lemmy.today · +46/−1 · 1 month ago

      Most of the unmaintainable code I’ve seen is because businesses don’t appreciate the need to occasionally refactor/rewrite or do anything to maintain code. They only appreciate piling more on. They’d do away with bug fixing too if they could.

      • errer@lemmy.world · +19 · 1 month ago

        This is why AI coding is being pushed so hard. Guess what’s great at piling on at 30x speed? If piling on is all companies appreciate then that’s what they’ll demand.

      • Tja@programming.dev · +10/−6 · 1 month ago

        Many open source projects are in the same state. I know for sure my own projects become spaghetti if I work on them for more than a year.

        Besides, I’d argue that if you need to rewrite (part of) it, that’s because it wasn’t maintainable in the first place.

        • odelik@lemmy.today · +1 · 1 month ago

          I disagree.

          Rewrites can happen due to new feature support.

          For example: it’s entirely possible that a synchronous state machine worked for the previous needs of the software, but it grew to a point where that state machine can no longer meet the new requirements and needs to be replaced with a modern one using asynchronous signals/delegates.

          Just because that system was replaced doesn’t mean it wasn’t maintainable, readable, or easy to understand. It just wasn’t compatible with the growing needs of the application.
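          A minimal sketch of the kind of migration described here (Python; every name is hypothetical): the same transition table, first driven synchronously, then extended so interested parties subscribe via delegates instead of polling the state.

```python
from typing import Callable

class SyncStateMachine:
    """The old style: transitions happen inline and callers poll .state."""
    TRANSITIONS = {("idle", "start"): "running", ("running", "stop"): "idle"}

    def __init__(self) -> None:
        self.state = "idle"

    def handle(self, event: str) -> str:
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

class DelegatingStateMachine(SyncStateMachine):
    """The new style: same transitions, but listeners are notified of
    changes (the signal/delegate pattern, in miniature)."""

    def __init__(self) -> None:
        super().__init__()
        self._listeners: list[Callable[[str, str], None]] = []

    def subscribe(self, listener: Callable[[str, str], None]) -> None:
        self._listeners.append(listener)

    def handle(self, event: str) -> str:
        old = self.state
        new = super().handle(event)
        if new != old:
            for listener in self._listeners:
                listener(old, new)
        return new
```

          The point of the replacement isn’t that the old class was unreadable; it’s that the new requirement (notification) doesn’t fit its shape.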

    • Bieren@lemmy.today · +14/−1 · 1 month ago

      Can I, sure. Do I give af since my company doesn’t care about me as anything other than a number in a spreadsheet, no.

      • Tja@programming.dev · +5 · 1 month ago

        Well, even for my private projects that I care about I end up having to rewrite every few years.

    • Digit@lemmy.wtf · +1 · 1 month ago

      Yus [good image]. Use it to assist and expedite learning (mostly by double-checking its output and debugging its code) so you get better, not as a slave to do your work for you.

  • Fatal@piefed.social · +32/−1 · 1 month ago

    Guys, you can laugh at a joke. The AI doesn’t win just because someone upvoted a meme. Maintainability of codebases has been a joke for longer than LLMs have been around because there’s a lot of truth to it.

    Even the most well-intentioned design has weaknesses nobody saw coming. Some of its abstractions turn out to be wrong. There are changes to the requirements and feature set that nobody anticipated. Other parts are over-engineered, which makes them harder to navigate for no maintainability gain. That’s OK. Perfectly maintainable code would require us to be psychics, and none of us are.

      • Echo Dot@feddit.uk · +8 · 1 month ago

        If you’re a complete novice then obviously not, but I think anyone reasonably proficient in a language would be able to identify optimisations that an AI just doesn’t seem to perceive, largely because humans are better at context.

        It’s like that question about whether it’s worth driving your car to the car wash if the car wash is only 10 metres away. AIs have no experience of the real world so they don’t inherently understand that you can’t wash a car if it’s not at the car wash. A human would instantly know that that’s a stupid statement without even thinking about it, and unless you instruct an AI to actually deeply think about something they just give you the first answer they come up with.

        • rumba@lemmy.zip · +1 · 1 month ago

          That’s why they’re pushing for the datacenters: they want to make every query that deep. The tech is here, but the ability to sustain it isn’t. They build the data centers, kick the developers out, depress the education market for the field, and then raise the prices.

          Companies will be paying the AI companies $60k per year per seat in a decade.

        • yabbadabaddon@lemmy.zip · +1/−1 · 1 month ago

          I agree with you. But the tool will output basic code that mostly does what was asked in seconds instead of tens of minutes, if not hours. So now we could argue whether the optimizations you make are worth the added cost of writing the code yourself, or whether it’s better to have the tool generate the code and then optimize it.
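          To make that trade-off concrete, a toy sketch (Python, purely illustrative): the kind of straightforward first pass a tool tends to emit, next to the optimization a reviewer would apply afterwards.

```python
# First pass: correct but quadratic -- the kind of thing generated in seconds.
def has_duplicates_naive(items: list) -> bool:
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

# After review: one pass with a set; same behavior for hashable items.
def has_duplicates(items: list) -> bool:
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

          Whether the second version is worth a human’s time depends on how hot the code path is; the first one ships either way.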

      • skuzz@discuss.tchncs.de · +4 · 1 month ago

        A tale as old as time. The US nuclear missile codes were 000000, but it didn’t matter. The chain of command was purpose-built, ironically, so the front line soldier in a cold war scenario had to make the last decision to delete all life on the planet. Chain of command doesn’t matter at that point. You are choosing to kill everyone you know from an order from who knows who. The ultimate checksum.

        You will always be better at decisions than an n-dimensional matrix of numbers on an overpriced GPU.

        • declanruediger@aussie.zone · +2 · 1 month ago

          I don’t understand your point about the soldier on the front line, but I’m interested. If you get a chance, can you elaborate please?

        • fuck_u_spez_in_particular@lemmy.world · +1 · 1 month ago

          You will always be better at decisions than an n-dimensional matrix of numbers on an overpriced GPU.

          I’d be careful about these claims. Maybe with our current iteration of “attention-based” LLMs, yes. But keep in mind that our way of processing information is strongly limited compared to how much data is fed to these LLMs while training, so they in theory have a lot more foundation to be able to reason about new problems.

          We’re vastly more capable at the moment at interpreting our limited view of foreign code, at being actually creative, and at finding new ways to reason, yes. Capable developers (open source…) have often seen quite a bit more code than the average developer and are highly skilled, yet still with just a tiny subset of the code an LLM has seen.

          But say these models improve in creativity and “higher-level thought” through whatever means (e.g. through more reinforcement learning). Well, let’s just say I’m careful with these claims. These LLMs are already quite a help with dumb boilerplate code (less so with novel stuff, or with writing idiomatic, non-redundant code), but compared to 2-3 years ago it’s quite a step already, to the point that they’re actually helpful, disregarding all the hype and the obvious marketing strategies of these AI companies.

    • pinball_wizard@lemmy.zip · +2 · 1 month ago

      Exactly. I’ve been sabotaging the AI with shitty code output since long before LLMs existed. That’s how I play 4D chess. (This is just meant to get a laugh. Some of my code is even quite nice, actually.)

  • Electricd@lemmybefree.net · +15/−1 · 1 month ago

    More maintainable than whatever shit it put out

    Frankly, I believe it can be maintainable if the person doing the prompting actually does their part and correctly performs their role of human review and correction. Vibe coding without any review dooms the software’s maintainability

    • Ephera@lemmy.ml · +9 · 1 month ago

      In my experience, the biggest problem is that maintainable code necessarily requires extending/adapting existing structures rather than just slapping a feature onto the side.

      And if we’re not just talking boilerplate, then this necessarily requires understanding the existing logic, which problems it solves, and how you can mold it to continue to solve those problems, while also solving the new problem.

      For that, you can’t just review the code afterwards. You have to do the understanding yourself.
      And once you have a clear understanding, it’s likely that the actual code change is rather trivial. At least more trivial than trying to convey your precise understanding to an LLM/intern/etc…
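      As a sketch of the difference (Python; all names are invented for illustration): the maintainable change below extends a structure the existing code already iterates over, so price() itself never grows a new branch.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A discount rule the pricing loop already knows how to apply."""
    name: str
    applies: Callable[[dict], bool]
    percent_off: int

RULES = [
    Rule("bulk", lambda order: order["quantity"] >= 100, 10),
]

def price(order: dict, unit_price: float) -> float:
    total = order["quantity"] * unit_price
    off = sum(r.percent_off for r in RULES if r.applies(order))
    return total * (1 - min(off, 50) / 100)  # cap stacked discounts at 50%

# Extending the structure: one new Rule, no new branches in price().
RULES.append(Rule("loyalty", lambda order: order.get("member", False), 5))
```

      Bolting the feature on would instead mean another if inside price(); after a few rounds of that, the function becomes the legacy code everyone is afraid of. But knowing the Rule list is the right place to extend requires understanding the existing logic first.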

    • Omgpwnies@lemmy.world · +3/−1 · 1 month ago

      I’ll use an LLM to write bulk code, unit tests, other boring stuff… but, I specifically only have it write code I’m already very familiar with, and even then, I hand-code it every so often, like 1 in every 3 times I’ll do it by hand to make sure I’m still able to. If I have to look something up, then I’ll stop using an LLM for that task for a long while.
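      For instance, the kind of table-driven test boilerplate that’s tedious to type but trivial to verify at a glance (Python; slugify and its cases are made up for illustration):

```python
import re

def slugify(title: str) -> str:
    """Lowercase, collapse non-alphanumeric runs to '-', trim the ends."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Repetitive cases like these are easy to delegate and easy to review.
CASES = [
    ("Hello, World!", "hello-world"),
    ("  padded  ", "padded"),
    ("Already-Slugged", "already-slugged"),
]

for raw, expected in CASES:
    assert slugify(raw) == expected, (raw, slugify(raw))
```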

    • Feathercrown@lemmy.world · +1/−1 · 1 month ago

      Yeah, a lot of maintainability is about understanding how it works. Architectural decisions are the other half. Someone who’s paying attention can do well on both of these even using AI tools.