Generative AI could also provide more opportunities for players to go off-script and create their own stories if designers can craft environments that feel more alive and can react to players’ choi…
How small can you make an LLM before it starts having issues with grammar and coherence? I would argue the bare minimum would still be rather large, and in videogames we're already using VRAM for other resources. In a 3D game especially, I imagine very little VRAM is left to utilize.
You’d be surprised how small you can go. That’s IMO pretty much the future of AI - a shit ton of small specialized models. While the heavyweights have their use, they’re way too expensive and overkill for specialized tasks.
Some small models can comfortably run on the CPU as well; games can easily detect whether you have VRAM to spare and use the GPU or CPU based on that.
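A minimal sketch of that decision logic, assuming a hypothetical `get_free_vram_mb()` query (in practice you'd ask the driver, e.g. via NVML bindings) and made-up footprint numbers:

```python
# Sketch: choose CPU or GPU inference for a small on-device LLM based on
# spare VRAM. The sizes below are illustrative assumptions, not measurements.

MODEL_VRAM_MB = 600   # assumed footprint of a small quantized model
HEADROOM_MB = 1024    # VRAM we want to leave free for textures/buffers

def choose_backend(free_vram_mb: int) -> str:
    """Return 'gpu' only if the model fits with headroom left for game assets."""
    if free_vram_mb >= MODEL_VRAM_MB + HEADROOM_MB:
        return "gpu"
    return "cpu"

def get_free_vram_mb() -> int:
    # Hypothetical stand-in: a real game would query the graphics driver here.
    return 4096

print(choose_backend(get_free_vram_mb()))  # plenty free -> 'gpu'
print(choose_backend(512))                 # VRAM tied up in assets -> 'cpu'
```

The point is just that the fallback is a one-line check, so shipping both a GPU and a CPU path isn't a big engineering lift.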
It’s not there yet, but what some of the small models can do is impressive. And if you train them extensively on fantasy scripts, I can see them generating NPC lines on the fly.
Not sure, but what I am sure of is companies paying “AI engineers” (or whatever they are called) to trim them to a usable point instead of hiring a better writing team.
That’s immensely expensive though, and not guaranteed to work because much of that stuff is still research stage. You’re right that paring down the models to make them leaner and more specialized is the primary direction that current research is pursuing, but it’s far from certain at this point how to do it, how well it will work, and how small you can get them before they start to fall apart. Not something game studios are likely to gamble their budgets on, at least not yet.
We’re nowhere near the “just hire a guy to trim it down instead of hiring writers” stage, and it’s unclear yet whether or not that’s where we’ll end up. We could pull off “just hire a guy to fine-tune an existing foundation model,” but that doesn’t make them smaller.