I don’t necessarily disagree that we may figure out AGI, or even that LLM research may help us get there, but frankly, I don’t think an LLM will actually be any part of an AGI system.
Because fundamentally it doesn’t understand the words it’s writing. The more I play with and learn about it, the more it feels like a glorified autocomplete/autocorrect. I suspect issues like hallucination, “Waluigis”, and “jailbreaks” are fundamental to a language model trying to complete a story, in a way they wouldn’t be for an actual intelligence with a purpose.
Yeah, my first thought was also that “pulling up” would mean pulling the steering wheel back, which wouldn’t do anything. It certainly wouldn’t swerve the car all the way off the road, and you wouldn’t want to jerk a plane left or right in that scenario either.
So… definitely made up. But still an amusing greentext.