When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

  • rsuri@lemmy.world · 44↑ 3↓ · 2 months ago

    “Hallucinations” is the wrong word. To the LLM there’s no difference between reality and “hallucinations”, because it has no concept of reality or of what’s true and false. All it knows is which word should probably come next. The “hallucination” only exists in the mind of the reader. The LLM did exactly what it was supposed to.
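    A toy sketch in Python of what that looks like: the model only has a probability distribution over next words, and nothing in the sampling step knows or cares whether the resulting sentence is true. The context, vocabulary, and probabilities below are made up for illustration.

```python
import random

# Toy "language model": for a given context it only knows a probability
# distribution over possible next words. The context, vocabulary, and
# probabilities here are invented purely for illustration.
NEXT_WORD_PROBS = {
    "Martin Bernklau is a": {
        "journalist": 0.40,
        "reporter": 0.30,
        "convicted": 0.20,  # words that co-occur with court reporting
        "conman": 0.10,
    },
}

def next_word(context: str) -> str:
    """Sample the next word from the learned distribution.

    Nothing in here represents "true" or "false"; an unlucky sample
    is just as valid an output of the model as a lucky one.
    """
    dist = NEXT_WORD_PROBS[context]
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    for _ in range(5):
        print("Martin Bernklau is a", next_word("Martin Bernklau is a"))
```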

    • Hobo@lemmy.world · 15↑ 4↓ · 2 months ago · edited

      They’re bugs. Major ones. Fundamental flaws in the program. People with a vested interest in “AI” rebranded them as hallucinations in order to downplay the fact that they have a major bug in their software and they have no fucking clue how to fix it.

      • Terrasque@infosec.pub · 10↑ · 2 months ago

        It’s an inherent negative property of the way they work. It’s a problem, but not a bug any more than the result of a car hitting a tree at high speed is a bug.

        Calling it a bug implies it’s something unexpected that can be fixed; as far as we know, it can’t be fixed and is expected behavior. Same as with the car analogy.

        The only thing we can do is raise awareness and mitigate.

        • futatorius@lemm.ee · 1↑ · 1 month ago

          It’s a problem, but not a bug any more than the result of a car hitting a tree at high speed is a bug.

          You’re attempting to redefine “bug.”

          Software bugs are faults, flaws, or errors in computer software that result in unexpected or unanticipated outcomes. They may appear in various ways, including undesired behavior, system crashes or freezes, or erroneous and insufficient output.

          From a software testing point of view, a correctly coded realization of an erroneous algorithm is a defect (a bug). It fails validation (a test for fitness for use) rather than verification (a test that the code correctly implements the erroneous algorithm).

          This kind of issue arises not only with LLMs, but with any software that includes some kind of model within it. The provably correct realization of a crap model is still crap.

      • SkunkWorkz@lemmy.world · 11↑ 3↓ · 2 months ago · edited

        It’s not a bug. Just a negative side effect of the algorithm. This is what happens when the LLM doesn’t have enough data points to answer the prompt correctly.

        It can’t be programmed out like a bug; instead, a human needs to intervene and flag the answer as false, or the LLM needs more data to train on. Those dozens of articles this guy wrote aren’t enough for the LLM to get that he’s just a reporter. The LLM needs data that explicitly says that this guy is a reporter who reported on those trials. And since no reporter starts their articles with “Hi, I’m John Smith the reporter, and today I’m reporting on…”, that data is missing. LLMs can’t draw conclusions from context.

    • Terrasque@infosec.pub · 5↑ · 2 months ago

      Well, it’s not lying, because the AI doesn’t know right from wrong. It doesn’t know that it’s wrong. It doesn’t have the concept of right or wrong, or true or false.

      For the LLM, hallucinations are just the result of combining statistics and producing the next word, as you say. From the LLM’s “POV” they’re as real as everything else it knows.

      So what else can it be called? The closest concept we have is when the mind hallucinates.