• fartsparkles@lemmy.world · +148/−1 · 1 month ago

    If this passes, piracy websites can rebrand as AI training material websites and we can all run a crappy model locally to train on pirated material.

        • kibiz0r@midwest.social · +18/−2 · 1 month ago

          Also true. It’s scraping.

          In the words of Cory Doctorow:

          Web-scraping is good, actually.

          Scraping against the wishes of the scraped is good, actually.

          Scraping when the scrapee suffers as a result of your scraping is good, actually.

          Scraping to train machine-learning models is good, actually.

          Scraping to violate the public’s privacy is bad, actually.

          Scraping to alienate creative workers’ labor is bad, actually.

          We absolutely can have the benefits of scraping without letting AI companies destroy our jobs and our privacy. We just have to stop letting them define the debate.

          • Grumuk@lemmy.ml · +4 · 1 month ago

            Molly White also wrote about this in the context of open access on the web and people being concerned about how their works are being used.

            “Wait, not like that”: Free and open access in the age of generative AI

            The same thing happened again with the explosion of generative AI companies training models on CC-licensed works, and some were disappointed to see the group take the stance that, not only do CC licenses not prohibit AI training wholesale, AI training should be considered non-infringing by default from a copyright perspective.

          • Grimy@lemmy.world · +2/−3 · 1 month ago

            Creators who are justifiably furious over the way their bosses want to use AI are allowing themselves to be tricked by this argument. They’ve been duped into taking up arms against scraping and training, rather than unfair labor practices.

            That’s a great article. Isn’t this kind of exactly what’s going on here? Wouldn’t bolstering copyright law make training unaffordable for everyone except a handful of companies? Then those companies, thanks to their monopoly, could easily make the highest-level models affordable only to the owner class.

            People are mad at AI because it will be used to exploit them, instead of being mad at the people who exploit them at every chance they get. Even worse, the legislation they’re shouting for would make that exploitation even easier.

          • FauxLiving@lemmy.world · +4/−5 · 1 month ago

            Our privacy was long gone well before AI companies were even founded. If people cared about their privacy, none of the largest tech companies would exist, because they all spy on you wholesale.

            The ship has sailed on generating digital assets. This isn’t a technology that can be un-invented. Digital artists will have to adapt.

            Technology often disrupts jobs, and you can’t fix that by fighting the technology; it’s already invented. You fight the disruption by ensuring that your country takes care of the people who lose their jobs, providing them with support and resources to adapt to the new job landscape.

            For example, we didn’t stop electronic computers to save the jobs of human computers (a large field of highly trained people who did calculations by hand), and CAD destroyed the drafting profession. Digital artists are not the first to experience this, and they won’t be the last.

            • masterspace@lemmy.ca · +3 · 1 month ago

              Our privacy was long gone well before AI companies were even founded, if people cared about their privacy then none of the largest tech companies would exist because they all spy on you wholesale.

              In the US. The EU has proven that you can have perfectly functional privacy laws.

              If your reasoning is that the US doesn’t regulate its companies, and that this somehow makes it impossible to regulate them, then your reasoning is bad.

              • FauxLiving@lemmy.world · +6/−1 · edited · 1 month ago

                My reasoning is based upon observing the current Internet from the perspective of working in cyber security and dealing with privacy issues for global clients.

                The GDPR is a step in the right direction, but it doesn’t guarantee your digital privacy. It’s more of a framework to regulate the trading and collecting of your personal data, not to prevent it.

                No matter who or where you are, your data is collected and collated into profiles that are traded between data brokers. Anonymized data is a myth: it’s easily deanonymized by data brokers, and data retention limits do essentially nothing.
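
                As an illustration of how easily “anonymized” records are re-identified, here is a minimal Python sketch. All names, records, and field choices are invented for the example; the linking trick, joining on quasi-identifiers such as ZIP code, birthdate, and sex, is the classic re-identification technique.

```python
# Hypothetical illustration (all data invented): re-identifying an
# "anonymized" dataset by joining it to a public record on
# quasi-identifiers, the way data brokers can.

anonymized_health = [
    {"zip": "02138", "birthdate": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "90210", "birthdate": "1980-01-15", "sex": "M", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "J. Doe", "zip": "02138", "birthdate": "1945-07-31", "sex": "F"},
    {"name": "A. Smith", "zip": "10001", "birthdate": "1975-03-02", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birthdate", "sex")

def reidentify(anon_rows, public_rows):
    """Link anonymous rows to named rows that share every quasi-identifier."""
    matches = []
    for anon in anon_rows:
        for pub in public_rows:
            if all(anon[k] == pub[k] for k in QUASI_IDENTIFIERS):
                matches.append({"name": pub["name"], **anon})
    return matches

# The hypertension record is no longer anonymous.
print(reidentify(anonymized_health, public_voter_roll))
```

                Stripping the name column did nothing here: one join against a second dataset put it right back.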

                AI didn’t steal your privacy. Advertisers and other data consuming entities have structured the entire digital and consumer electronics ecosystem to spy on you decades before transformers or even deep networks were ever used.

  • rumba@lemmy.zip · +60 · 1 month ago

    Okay, I can work with this. Hey Altman, you can train on anything that’s public domain. Now go take that fuck-ton of billions and fight the copyright laws to make the public domain make sense again.

    • meathappening@lemmy.ml · +8 · 1 month ago

      This is the correct answer. Never forget that US copyright law originally allowed for a 14 year (renewable for 14 more years) term. Now copyright holders are able to:

      • reach consumers more quickly and easily using the internet
      • market on more fronts (merch didn’t exist in 1710)
      • form other business types to better hold/manage IP

      So much in the modern world exists to enable copyright holders, but terms are longer than ever. It’s insane.

    • turnip@sh.itjust.works · +10/−1 · 1 month ago

      Surprisingly, Sam Altman hasn’t complained; he just said there’s competition and that it will be harder for OpenAI to compete with open source. I think their small lead is essentially gone, and their plan now is to suckle at Microsoft’s teat.

      • HiddenLayer555@lemmy.ml · +9 · 1 month ago

        it will be harder for OpenAI to compete with open source

        Can we revoke the word open from their name? Please?

  • Phoenixz@lemmy.ca · +41/−1 · 1 month ago

    This is a tough one.

    OpenAI is full of shit and should die, but then again, so should copyright law as it currently stands.

    • meathappening@lemmy.ml · +29 · edited · 1 month ago

      That’s fair, but OpenAI isn’t fighting to reform copyright law for everyone. OpenAI wants you to be subject to the same restrictions you currently face, and them to be exempt. This isn’t really an “enemy of my enemy” situation.

      • Melvin_Ferd@lemmy.world · +1 · edited · 1 month ago

        Is anyone trying to make copyright law stronger? It wouldn’t happen to be the rich people who control the media, would it?

      • droplet6585@lemmy.ml · +11/−2 · 1 month ago

        They monetize it, erase authorship and bastardize the work.

        If copyright were meant to protect against anything, it would be this.

  • Rekorse@sh.itjust.works · +17/−1 · 1 month ago

    Getting really tired of these fucking CEOs calling their failing businesses “threats to national security” so big daddy government will come and float them again. Doubly ironic that it’s coming from a company that’s actually destroying the fucking planet while achieving fuck-all.

  • Zink@programming.dev · +14 · 1 month ago

    What I’m hearing between the lines here is the origin of a legal “argument.”

    If a person’s mind is allowed to read copyrighted works, remember them, be inspired by them, and describe them to others, then surely a different type of “person’s” different type of “mind” must be allowed to do the same thing!

    After all, corporations are people, right? Especially any worth trillions of dollars! They are more worthy as people than meatbags worth mere billions!

    • ArtificialHoldings@lemmy.world · +6 · edited · 1 month ago

      This has been the legal basis of all AI training sets since they began collecting datasets. The US copyright office heard these arguments in 2023: https://www.copyright.gov/ai/listening-sessions.html

      MR. LEVEY: Hi there. I’m Curt Levey, President of the Committee for Justice. We’re a nonprofit that focuses on a variety of legal and policy issues, including intellectual property, AI, tech policy. There certainly are a number of very interesting questions about AI and copyright. I’d like to focus on one of them, which is the intersection of AI and copyright infringement, which some of the other panelists have already alluded to.

      That issue is at the forefront given recent high-profile lawsuits claiming that generative AI, such as DALL-E 2 or Stable Diffusion, are infringing by training their AI models on a set of copyrighted images, such as those owned by Getty Images, one of the plaintiffs in these suits. And I must admit there’s some tension in what I think about the issue at the heart of these lawsuits. I and the Committee for Justice favor strong protection for creatives because that’s the best way to encourage creativity and innovation.

      But, at the same time, I was an AI scientist long ago in the 1990s before I was an attorney, and I have a lot of experience in how AI, that is, the neural networks at the heart of AI, learn from very large numbers of examples, and at a deep level, it’s analogous to how human creators learn from a lifetime of examples. And we don’t call that infringement when a human does it, so it’s hard for me to conclude that it’s infringement when done by AI.

      Now some might say, why should we analogize to humans? And I would say, for one, we should be intellectually consistent about how we analyze copyright. And number two, I think it’s better to borrow from precedents we know that assumed human authorship than to invent the wheel over again for AI. And, look, neither human nor machine learning depends on retaining specific examples that they learn from.

      So the lawsuits that I’m alluding to argue that infringement springs from temporary copies made during learning. And I think my number one takeaway would be, like it or not, a distinction between man and machine based on temporary storage will ultimately fail maybe not now but in the near future. Not only are there relatively weak legal arguments in terms of temporary copies, the precedent on that, more importantly, temporary storage of training examples is the easiest way to train an AI model, but it’s not fundamentally required and it’s not fundamentally different from what humans do, and I’ll get into that more later if time permits.

      The “temporary storage” idea is pretty central for visual models like Midjourney or DALL-E, whose training sets are full of copyrighted works lol. There is a legal basis for temporary storage too:

      The “Ephemeral Copy” Exception (17 U.S.C. § 112 & § 117)

      U.S. copyright law recognizes temporary, incidental, and transitory copies as necessary for technological processes:
      • Section 117 allows temporary copies for software operation.
      • Section 112 permits temporary copies for broadcasting and streaming.
      
      • ArtificialHoldings@lemmy.world · +3 · 1 month ago

        BTW, if anyone was interested - many visual models use the same training set, collected by a German non-profit: https://laion.ai/

        It’s “technically not copyright infringement” because the set is just links to images, each paired with a text description of that image. Because they’re only pointing at the images, they don’t really have to respect any copyright.
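
        A minimal Python sketch of what such a record looks like. The field names only approximate LAION’s published schema, and the URLs and captions are invented: the point is that the dataset stores a URL and a caption, and a training pipeline fetches the image bytes separately.

```python
# Hypothetical LAION-style records (URLs and captions invented):
# the dataset distributes metadata, not the copyrighted image bytes.
laion_style_records = [
    {"url": "https://example.com/cat.jpg",
     "caption": "a cat sitting on a windowsill"},
    {"url": "https://example.com/art.png",
     "caption": "oil painting of a stormy sea"},
]

def to_training_pair(record):
    # At training time an image would be downloaded from record["url"];
    # the dataset itself only ever points at it.
    return record["url"], record["caption"]

pairs = [to_training_pair(r) for r in laion_style_records]
print(pairs)
```

        The copyrighted work never ships with the dataset; whoever trains on it downloads each image themselves.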

        • ArtificialHoldings@lemmy.world · +2 · edited · 1 month ago

          Copyright law doesn’t cover recipes; at best a recipe is a trade secret. But the approximate recipe for Coca-Cola is well known and can be googled.