• xkbx@startrek.website · 1 month ago

    Couldn’t you just set up actual AI/LLM verification questions, like “how many r’s in strawberry?”

    Or even just have an AI / manual contribution divide. It wouldn’t stop everything 100%, but it might make the clean-up process easier.
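
    For what it’s worth, the ground truth for that kind of letter-count challenge is trivial to script. A minimal sketch in Python (the function names are made up for illustration, not from any real verification system):

```python
# Ground truth for the proposed "how many r's in strawberry?" check.
def count_letter(word: str, letter: str) -> int:
    # Case-insensitive count of a letter's occurrences in a word.
    return word.lower().count(letter.lower())

def answer_is_correct(answer: str, word: str = "strawberry", letter: str = "r") -> bool:
    # Accept the answer only if it matches the true count ("3" for strawberry).
    return answer.strip() == str(count_letter(word, letter))
```

    The catch, as the reply below this comment notes, is that the challenge only filters models that fail it, and current models increasingly don’t.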

    • CameronDev@programming.dev · 1 month ago

      Those kinds of challenges only work for a short while; ChatGPT has already solved the strawberry one.

      That said, I wish these AI people would just create their own projects and contribute to them. Create a LLM fork of the engine, and go nuts. If your AI is actually good, you’ll end up with a better engine and become the dominant fork.

      • warm@kbin.earth · 1 month ago

        They don’t want to do it in a corner where nobody can see, they want to push it on existing projects and attempt to justify it.

          • mcv@lemmy.zip · 1 month ago

            Use open source maintainers as free volunteers to check whether your AI coding experiment works.

      • new_guy@lemmy.world · 1 month ago

        There’s a joke in science circles that goes something like this:

        “Do you know what they call alternative medicine that works? Just regular medicine.”

        Good code made by an LLM should be indistinguishable from code made by a human… it would simply be “just code”.

        It’s hard to create a project the size of Godot’s without a human in the loop somewhere filtering the slop and trying to keep a cohesive code base. At that point, either the maintainers would be overwhelmed again or the code would become unmaintainable.

        And then we would go full circle and get to the same point described by the article.

        • CameronDev@programming.dev · 1 month ago

          They can fork Godot and let their LLMs go at it. They don’t have to use the Godot human maintainers as free slop filters.

          But of course, if they did that, their LLMs would have to stand on their own merits.

      • XLE@piefed.social · 1 month ago

        People who submit AI-generated code tend to crumble, or turn incomprehensible, in the face of the simplest questions. Thank goodness this works for code reviews… because judging by AI CEO interviews, journalists can’t detect the BS.

    • one_old_coder@piefed.social · 1 month ago

      You could also ask users to type the words fuck or shit in the description somewhere. LLMs cannot do that AFAIK.

      • Pamasich@kbin.earth · 1 month ago

        I mean, ChatGPT can do it. I just tested it. And if you run your own AI, you can probably remove most such rules anyway.

    • SkunkWorkz@lemmy.world · 1 month ago

      Yeah, but that won’t stop people from manually submitting PRs made with AI. A lot of the slop isn’t just automated pull requests but people using AI to find and fix “bugs” without understanding the code at all.

    • turboSnail@piefed.europe.pub · 1 month ago

      How about asking it to write a short political speech on climate change, then counting the number of rhetorical devices and em-dashes? A human dev wouldn’t bother writing anything fancy or impactful when they just want to submit a bug fix; it would be simple, poorly written, and full of typos. LLMs try to make it way too impressive and impactful.
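
      As a rough illustration, the em-dash half of that heuristic is easy to script. A minimal sketch in Python; the function names and the threshold are arbitrary choices for illustration, not a tested detector:

```python
# Crude stylistic heuristic: flag text whose em-dash density looks
# suspiciously high. The 0.02 threshold is arbitrary.
def em_dash_density(text: str) -> float:
    # Em-dashes per whitespace-separated token.
    words = text.split()
    return text.count("\u2014") / max(len(words), 1)

def looks_llm_flavored(text: str, threshold: float = 0.02) -> bool:
    return em_dash_density(text) >= threshold
```

      Of course, as the replies point out, any single stylistic tell like this is easy to strip out, so at best it catches the laziest submissions.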

        • turboSnail@piefed.europe.pub · 1 month ago

          No need to add any more typos than you usually do; just leave in the ones you don’t catch. Besides, LLMs tend to write in an overly grand style, whereas humans can’t be bothered to use every trick in the book. Humans just get to the point and skip all the high-impact language that LLMs seem to love.

          • boonhet@sopuli.xyz · 1 month ago

            I usually proofread any messages that aren’t for my close friends or family lol