• FreedomAdvocate@lemmy.net.au · 10 days ago

    Makes sense. AI can “learn” from and “read” a book in the same way a person can and does, as long as it is acquired legally. AI doesn’t reproduce a work that it “learns” from, so why would it be illegal?

    Some people just see “AI” and basically want everything about it outlawed. If you put some information out into the public, you don’t get to decide who does and doesn’t consume and learn from it. If a machine can replicate your writing style because it can identify certain patterns, words, sentence structure, etc., then as long as it’s not pretending to create things attributed to you, there’s no issue.

    • badcommandorfilename@lemmy.world · 10 days ago

      Ask a human to draw an orc. How do they know what an orc looks like? They read Tolkien’s books and were “inspired” by Peter Jackson’s LOTR.

      Unpopular opinion, but that’s how our brains work.

      • burntbacon@discuss.tchncs.de · 10 days ago

        Fuck you, I won’t do what you tell me!

        >.>

        <.<

        spoiler: I was inspired by the sometimes hilarious D&D splatbooks, thank you very much.

    • elrik@lemmy.world · 10 days ago

      > AI can “learn” from and “read” a book in the same way a person can and does

      This statement is the basis for your argument and it is simply not correct.

      Training LLMs and similar AI models is much closer to a sophisticated lossy compression algorithm than it is to human learning. The processes are not at all similar given our current understanding of human learning.

      > AI doesn’t reproduce a work that it “learns” from, so why would it be illegal?

      The current Disney lawsuit against Midjourney is illustrative (literally: it includes numerous side-by-side comparisons) of how AI models are capable of recreating iconic copyrighted work that is indistinguishable from the original.

      > If a machine can replicate your writing style because it can identify certain patterns, words, sentence structure, etc., then as long as it’s not pretending to create things attributed to you, there’s no issue.

      An AI doesn’t create works on its own; a human instructs it to do so. Attribution is also irrelevant. If a human uses AI to recreate the exact tone, structure, and other nuances of, say, some best-selling author, they harm the marketability of the original works, which fails fair use tests (at least in the US).

        • FreedomAdvocate@lemmy.net.au · 6 days ago

        Your very first statement, calling the basis of my argument incorrect, is itself incorrect lol.

        LLMs “learn” things from the content they consume. They don’t just take the content in wholesale and keep it there to regurgitate on command.

        On your last part: unless someone uses AI to recreate the tone etc. of a best-selling author and then markets their book/writing as being from said author, or uses trademarked characters, there’s no issue. You can’t copyright a style of writing.

          • elrik@lemmy.world · 10 days ago

          I’ll repeat what you said with emphasis:

          > AI can “learn” from and “read” a book **in the same way** a person can and does

          The emphasized part is incorrect. It’s not the same, yet your argument rests on the claim that it is, and that therefore it’s no different from a human reading all of these books.

          Regarding your last point, copyright law doesn’t just kick in because you try to pass something off as an original (by, for example, marketing a book as being from a best-selling author). It applies based on similarity, whether you mention the original author or not.

            • FreedomAdvocate@lemmy.net.au · 8 days ago

            Are you taking that as me saying that they “learn in the same way” as in…by using their eyes to see it and ears to listen to it? You seem to be reading waaaaay too much into a simple sentence. AI “learns” by consuming the content. People learn by consuming the content.

            > It applies based on similarity, whether you mention the original author or not.

            That’s if you’re recreating something. Writing fan-fiction isn’t a violation of copyright.

        • WraithGear@lemmy.world · 10 days ago

          If what you are saying is true, why were these ‘AIs’ incapable of rendering a full wine glass? A model ‘knows’ the concept of a full glass of water, but because of humanity’s social pressures (a full wine glass being the epitome of gluttony), artwork did not depict full wine glasses. No matter how much AI prompters demanded one, it was unable to link the concepts until a reference was literally created for it to regurgitate. It seems ‘AI’ doesn’t really learn, but regurgitates art in collages of taken assets, smoothed over at the seams.

            • WraithGear@lemmy.world · 10 days ago

              1. It’s not full, but closer than it was.

              2. I specifically said that the AI was unable to do it until someone specifically made a reference so that it could start passing the test, so it’s a little bit late to prove much.
              • alsimoneau@lemmy.ca · 10 days ago

                The concept of a glass being full and of a liquid being wine can probably be separated fairly well. I assume that as models got more complex, they became better at doing this.

                • WraithGear@lemmy.world · 10 days ago

                  You mean when the training data becomes more complete. But that’s the thing: when this issue was being tested, the ‘AI’ would swear up and down that the normally filled wine glasses were full. When it was pointed out that they were not, the ‘AI’ would agree and then change some other aspect of the picture it didn’t fully understand. You got wine glasses where the wine would half phase out of the bounds of the cup, yet still be just as empty. No amount of additional checks will help without an appropriate reference.

                  I use ‘AI’ extensively; I have one running locally on my computer and swap models out from time to time. I don’t have anything against its use, with certain exceptions, but I cannot stand people personifying it beyond its scope.

                  Here is a good example. I am working on an app, so every once in a while I will send it code to check. But I have to be very careful. The code it spits out will be unoptimized, like: variable1 = IF(variable2 IS true, true, false) (see the sketch below).
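                  To make that concrete, here is a minimal made-up sketch of the kind of redundancy I mean (the variable names are hypothetical, not from my actual app):

                  ```python
                  input_ok = True  # example input; assume it is already a bool

                  # The kind of redundant conditional an LLM tends to emit:
                  is_valid = True if input_ok else False

                  # Idiomatic equivalent: the conditional adds nothing.
                  is_valid = bool(input_ok)
                  ```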

                  Some have issues with object permanence, or with the consideration of time outside their training data. It’s like saying a computer can generate a truly random number by making the function that calculates the number more convoluted.
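                  A quick sketch of what I mean by that analogy (standard library only; the seed value is arbitrary):

                  ```python
                  import random

                  # No matter how convoluted the internal arithmetic is, a fixed
                  # seed always produces the same "random" sequence: the output
                  # is fully determined by the input, not truly random.
                  random.seed(42)
                  print([random.randint(0, 9) for _ in range(5)])  # identical every run
                  ```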

            • WraithGear@lemmy.world · 10 days ago

              > it was unable to link the concepts until a reference was literally created for it to regurgitate

              -WraithGear

              The ‘problem’ was solved before their patch. But the article just said that the model is changed by running it through a post-check, just like what DeepSeek does. It does not talk about the fundamental flaw in how it creates; they assert it does, like they always did.

              • FaceDeer@fedia.io · 10 days ago

                I don’t see what distinction you’re trying to draw here. It previously had trouble generating full glasses of wine; they made some changes; now it can. As a result, AIs are capable of generating an image of a full wine glass.

                This is just another goalpost that’s been blown past, like the “AI will never be able to draw hands correctly” thing that was so popular back in the day. Now AIs are quite good at drawing hands, and so new “but they can’t do X!” standards have been invented. I see no fundamental reason why any of those standards won’t ultimately be surpassed.