• vrighter@discuss.tchncs.de · 7 months ago

        Exactly: you can only really verify the code if you were capable of writing it in the first place.

        And it’s an old, well-known fact that reading code is much harder than writing it.

        • ulterno@programming.dev · 7 months ago

          A tangential but interesting aside: this has a close analogue to a lot of things in electronics.

          • It is harder to receive data than to transmit it, because you need to do things like:
            • match your receiver’s frequency to that of the transmission (which may be minutely off from the agreed-upon frequency) in order to understand it
            • know how long the data will be before feeding it into digital variables, or you might merge multiple messages or drop parts of one without realising (see the sketch below)
          • This gets even harder when it is wireless, because now you have noise, which is often just valid communication between other devices.

          Getting back to code: you now need to get on the same “wavelength” as the person who wrote the code, at the time they wrote it.
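
          As a toy illustration of the framing point (plain Python instead of actual radio hardware, and the function names here are made up): transmitting is one line, while the receiver has to track message boundaries or it silently merges or truncates data.

          ```python
          import struct

          def send(wire: bytearray, payload: bytes) -> None:
              # Transmitting is easy: prefix the payload with its length and write it out.
              wire += struct.pack(">I", len(payload)) + payload

          def recv_messages(wire: bytes):
              # Receiving is harder: we must know where each message ends,
              # or we merge adjacent messages / cut one short without realising.
              offset = 0
              while offset + 4 <= len(wire):
                  (length,) = struct.unpack_from(">I", wire, offset)
                  if offset + 4 + length > len(wire):
                      break  # partial message: wait for more data instead of guessing
                  yield wire[offset + 4 : offset + 4 + length]
                  offset += 4 + length

          wire = bytearray()
          send(wire, b"hello")
          send(wire, b"world")
          print(list(recv_messages(bytes(wire))))  # [b'hello', b'world']
          ```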

    • Kbobabob@lemmy.world · 7 months ago

      Even if you’re the one who built, programmed, and trained the AI when nothing else like it existed?

      • vrighter@discuss.tchncs.de · 7 months ago

        So? Some of the people pushing out AI slop would be perfectly capable of writing their own LLM out of widely available free tools. Contrary to popular belief, LLMs are not complex pieces of software, just extremely data-hungry ones. That doesn’t mean they magically understand the code the LLM spits out.
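
        For what it’s worth, the core training loop really is small. A character-level toy with PyTorch (a bigram model, nowhere near a real transformer, and deliberately trivial) shows the shape of it; what separates it from a useful LLM is mostly scale and data:

        ```python
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        text = "an llm is mostly data, not clever code. " * 50
        chars = sorted(set(text))
        stoi = {c: i for i, c in enumerate(chars)}
        data = torch.tensor([stoi[c] for c in text])

        class Bigram(nn.Module):
            # One embedding table: row i holds the logits for "next char" given char i.
            def __init__(self, vocab: int):
                super().__init__()
                self.table = nn.Embedding(vocab, vocab)

            def forward(self, x):
                return self.table(x)

        model = Bigram(len(chars))
        opt = torch.optim.AdamW(model.parameters(), lr=1e-2)
        for _ in range(200):
            ix = torch.randint(0, len(data) - 1, (32,))   # random positions in the corpus
            logits = model(data[ix])                      # predict the next character
            loss = F.cross_entropy(logits, data[ix + 1])  # compare against the actual next one
            opt.zero_grad()
            loss.backward()
            opt.step()
        print(f"final loss: {loss.item():.3f}")
        ```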

        • Honytawk@feddit.nl · 7 months ago

          Stark would have developed their own way of training their AI. It wouldn’t be an LLM in the first place.

          • vrighter@discuss.tchncs.de · 7 months ago

            And he still wouldn’t understand its output, because, as we clearly see, he doesn’t even try to look at it.

              • vrighter@discuss.tchncs.de · 7 months ago

                Given that expert systems are pretty much just a big ball of if-then statements, he might be considered to have written the app. Just with way more extra steps.
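
                A forward-chaining expert system really is close to that: a loop that keeps firing if-then rules until nothing new can be derived. A toy sketch (the facts and rules here are invented for illustration):

                ```python
                # Each rule: if every antecedent fact holds, assert the consequent fact.
                rules = [
                    ({"suit_online", "hostiles_detected"}, "raise_shields"),
                    ({"raise_shields", "pilot_inside"}, "protect_pilot"),
                ]

                def infer(facts: set) -> set:
                    # Forward chaining: keep firing rules until no new fact appears.
                    changed = True
                    while changed:
                        changed = False
                        for antecedents, consequent in rules:
                            if antecedents <= facts and consequent not in facts:
                                facts.add(consequent)
                                changed = True
                    return facts

                print(infer({"suit_online", "hostiles_detected", "pilot_inside"}))
                ```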