Twitter enforces strict restrictions against external parties using its data for AI training, yet it freely utilizes data created by others for similar purposes.

  • brsrklf@jlai.lu · 1 year ago

    Yet another reminder that an LLM is not “intelligence” by any common definition of the term. The thing just scraped another LLM’s response and parroted it as its own, even though it was completely irrelevant to itself. All with an answer that sounds like it knows what it’s talking about, even copying the source’s simulated “personal involvement”.

    In this case, sure, who cares? But the problem is that something its designers sell as an expert of sorts is in reality prone to making shit up or using bad sources, while wrapping it in a very good language simulation that sounds convincing enough.

    • Hyperreality@kbin.social · 1 year ago

      Meat goes in. Sausage comes out.

      The problem is that LLMs are being sold as being able to turn meat into a Black Forest gateau.