

I think the difference comes from understanding. When we inferior, fleshy ones “make up” information, it’s usually based on our understanding (or misunderstanding) of the subject at hand. We will fill in the blanks in our knowledge with what we know about similar subjects.
An LLM doesn’t understand its output, though. All it knows is that word_string_x follows this particular context 84.821% of the time in its training data, so that’s what gets pasted next.
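To make that concrete, here’s a toy sketch in Python of what “pick whatever usually comes next” looks like. This is a deliberately dumb bigram counter I made up for illustration, not how a real LLM actually works internally (those learn contextual probability distributions over huge vocabularies with neural networks), but the generation loop has the same shape: look at the context, emit the statistically likely continuation, repeat.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram "model" built from raw co-occurrence counts.
# Real LLMs learn contextual probabilities rather than counting word pairs,
# but the loop below is the same idea: context in, likely next token out.

corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Pick the most frequent continuation of `prev` (greedy decoding)."""
    return following[prev].most_common(1)[0][0]

# Generate a short continuation, one "most likely" token at a time.
word = "the"
output = [word]
for _ in range(5):
    word = next_token(word)
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the cat"
```

Note there’s no step anywhere in that loop where the program checks whether the sentence is true, or even means anything; it just keeps pasting the likeliest next word.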
For us, making up false information comes from gaps in our cognition, from personal agendas, from our own unique lived experiences, and so on. For an LLM, it’s just a mathematical anomaly.
There are a few people I know in real life, whom I originally met online, who call me “Chozo” more often than by my real name. Names are weird.