• Grimy@lemmy.world · 11 days ago

      Maybe, if they mean fine-tuning, but from scratch, no.

      It’s the main reason I think the whole anti-AI movement is going in the wrong direction. If we don’t all get open access to it, access ends up dictated by sites like Shutterstock and DeviantArt. It doesn’t go away.

      • null@piefed.nullspace.lol · 11 days ago

        That’s my thinking: if the concern is being certain that no copyrighted material was used anywhere in the toolchain, that just doesn’t sound feasible for a single game dev to pull off.

    • NuXCOM_90Percent@lemmy.zip · 11 days ago

      Yes and no.

      Yes in the sense that they could write a model completely from first principles, as it were. The algorithms to train the models are pretty trivial. Providing source material to specialize the model on is also trivial… if you have it (which Larian presumably would).
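
      To be concrete about “pretty trivial”: the training loop itself is only a handful of lines. A rough sketch in PyTorch, using a toy classifier over a made-up folder of in-house images (the folder name and architecture are placeholders, not anyone’s real pipeline):

      ```python
      # Minimal supervised training loop -- the core algorithm really is this small.
      from torch import nn, optim
      from torch.utils.data import DataLoader
      from torchvision import datasets, transforms

      tfm = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
      data = datasets.ImageFolder("larian_textures/", transform=tfm)  # one subfolder per class
      loader = DataLoader(data, batch_size=16, shuffle=True)

      model = nn.Sequential(
          nn.Flatten(),
          nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
          nn.Linear(256, len(data.classes)),
      )
      opt = optim.AdamW(model.parameters(), lr=1e-4)
      loss_fn = nn.CrossEntropyLoss()

      for epoch in range(10):
          for images, labels in loader:
              opt.zero_grad()
              loss = loss_fn(model(images), labels)
              loss.backward()
              opt.step()
      ```

      The hard part is not this loop; it is what the weights are before you ever run it.
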

      The initial weights (vastly simplifying, so anyone who wants to “well ackshually” can go suck Yurgir’s fat one (negative)) are the problem. Think of them as everything the model needs just to understand what “give me a weathered stone exterior texture” even means. THOSE are fundamentally built on stolen IP (and the uncredited work of grad students around the world…).

      How much you care about that is up to you, but that is what Facebook et al. had seedboxes running 24/7 to steal. They might not train “their model” on your favorite author’s work, but they used your favorite author’s work, and previous generations of their model, to create the initial weights they optimized from.

      And a “from scratch” model will not have those. Plenty of pretrained starting weights are trivially easy to find, but those are also thoroughly poisoned by the same scraped material.
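
      In code terms, the difference being described is basically one line (Hugging Face transformers shown here purely as an illustration; "gpt2" is just a stand-in checkpoint):

      ```python
      # Sketch of "inherited base weights" vs. "from scratch" -- not anyone's actual setup.
      from transformers import AutoConfig, AutoModelForCausalLM

      # Fine-tuning: you start from weights someone else already trained,
      # along with whatever data went into producing them.
      pretrained = AutoModelForCausalLM.from_pretrained("gpt2")

      # From scratch: same architecture, randomly initialized weights.
      # It knows nothing -- not language, not what "weathered stone" means --
      # until you supply enough data yourself, which is the whole problem.
      config = AutoConfig.from_pretrained("gpt2")
      scratch = AutoModelForCausalLM.from_config(config)
      ```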

      • null@piefed.nullspace.lol · 11 days ago

        Exactly what I’m thinking. If it’s an ethical line, then the answer is to just not use AI, full stop. If they can’t do that, then this is really just about optics.

        And if they can’t stop using AI, it makes me wonder why I keep seeing people say it’s so useless…

        • NuXCOM_90Percent@lemmy.zip · 11 days ago

          There are different levels to “AI”. Generally speaking, in cases like this people mean what’s called “generative AI”.

          You know all those insane tools in the Adobe suite that can do stuff like literally erase a person from a photo, weather a surface, or select only the object you want to delete with a single click of the mouse? Those are built, to varying degrees, on the same underlying algorithms as “AI” content creation. Hell, most of the good IDE plugins for handling stuff like docstring or unit test stub generation are in a similar boat.
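
          For a sense of how close those tools are under the hood: the “erase a person” trick is essentially diffusion-model inpainting, which you can drive yourself in a few lines. A sketch using the diffusers library (the checkpoint and file names are just examples, not any product’s internals):

          ```python
          # Inpainting sketch: repaint only the masked region of an existing photo.
          import torch
          from PIL import Image
          from diffusers import StableDiffusionInpaintPipeline

          pipe = StableDiffusionInpaintPipeline.from_pretrained(
              "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
          ).to("cuda")

          photo = Image.open("photo.png").convert("RGB")
          mask = Image.open("mask.png").convert("RGB")  # white where the person was

          result = pipe(
              prompt="empty street, matching lighting",
              image=photo,
              mask_image=mask,
          ).images[0]
          result.save("photo_without_person.png")
          ```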

          By and large, people don’t have major issues with those. Some of the training data gets really messy, but they are a fundamental part of most creative workflows and can reasonably be compared to using a reference book when drawing human anatomy and so forth.

          The issue comes when you take that a dozen steps farther and have “generative AI”. Rather than taking an existing photo and removing the ex you hope dies in a fire, you just say “hey Grok, make a photo of me on the streets of Osaka by myself. And undress a child while you’re at it”. Rather than generating a docstring or unit test stub, you just have Cursor write an app for you based off a prompt. And so forth.

          At which point it stops being a case of someone using the same reference material to draw a superhero, and becomes more like that guy who just traces porn for Marvel every month.

          And… much like with someone who can’t draw their way out of a paper bag, you see the same limits with generative AI in content creation. Generative AI is generally great at replacing entry-level employees; it can’t replace a skilled senior creative. And if you are wondering how people get the experience they need to hit that tier… you get it.

          But that leads to the other problem. If you are someone who is cutting costs left and right to increase profits and you realize you can replace 60% of your staff with a subscription to OpenAI? How long until you decide that if you just lower your standards a bit, you can replace 80% instead?

      • null@piefed.nullspace.lol · 11 days ago

        I’m sure they could fine-tune an existing model with that, but my understanding is that basic functionality requires a huge amount of broad data to train the base model.
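
        For a rough sense of “huge amount”, the commonly cited Chinchilla rule of thumb is about 20 training tokens per model parameter (the numbers below are back-of-envelope for a text base model, not a recipe):

        ```python
        # Rule-of-thumb scale estimate for a text base model.
        params = 7e9                       # a "small" 7B-parameter base model
        tokens_needed = 20 * params        # ~1.4e11 tokens of broad training text
        tokens_per_novel = 100_000 * 1.3   # ~100k words per novel, ~1.3 tokens per word
        print(f"{tokens_needed:.1e} tokens ~= {tokens_needed / tokens_per_novel:,.0f} novels")
        # => 1.4e+11 tokens ~= 1,076,923 novels
        ```

        That scale is what the base model needs before any studio-specific fine-tuning even starts.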

    • warm@kbin.earth · 11 days ago

      Why even bother? New concepts come out better straight from the mind than from a model regurgitating existing things.