• nickiam2@aussie.zone · 7 days ago

    I think the trick here is to not use Google. The Wikipedia page for the movie Heat is the first result on DuckDuckGo.

  • Freshparsnip@lemm.ee · 6 days ago

    People Google questions like that? I would have looked up “Heat” on either Wikipedia or IMDb and checked the cast list. Or gone to Jolie’s Wikipedia or IMDb page to see if Heat is listed.

    • pyre@lemmy.world · 5 days ago (edited)

      doesn’t matter, this is “AI” and it should know the difference from context. not to mention you can have Gemini as an assistant, which is supposed to respond to natural language input. and it does this.

      best thing about it is that it doesn’t remember previous questions most of the time, so after listening to your “assistant” patronizingly explain that the term “in heat” doesn’t apply to humans, you can say “dude, I meant the movie Heat”, and it will go “oh, you mean the 1995 movie? of course… what do you want to know about it?”

    • _stranger_@lemmy.world · 7 days ago

      Because you’re not getting an answer to a question, you’re getting characters selected to appear like they statistically belong together given the context.

      • howrar@lemmy.ca · 7 days ago

        Sentences saying she had her ovaries removed and saying she is fertile don’t statistically belong together, so you’re not even getting that.

        • JcbAzPx@lemmy.world · 7 days ago

          You think that because you understand the meaning of the words. An LLM doesn’t. It uses math, and the math doesn’t care that the statement is contradictory; it only cares that each word usually came next in its training data.

          • howrar@lemmy.ca · 7 days ago

            It has nothing to do with the meaning. If your training set consists of one subset of strings of A’s and B’s together and another subset of C’s and D’s together (i.e., [AB]+ and [CD]+ in regex), and the LLM outputs “ABBABBBDA”, then that’s statistically unlikely, because D’s don’t appear with A’s and B’s. I have no idea what the meaning of these sequences is, nor do I need to know it to see that the output is statistically unlikely.

            In the context of language and LLMs, “statistically likely” roughly means that some human somewhere out there is more likely to have written this than the alternatives because that’s where the training data comes from. The LLM doesn’t need to understand the meaning. It just needs to be able to compute probabilities, and the probability of this excerpt should be low because the probability that a human would’ve written this is low.
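
A toy sketch of that idea (the corpus, the bigram model, and the smoothing constant are all made up here, purely to illustrate the point): a character-level bigram model trained only on pure A/B and pure C/D strings assigns a far lower probability to a sequence that mixes the two clusters, without "understanding" anything about the symbols.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count character-bigram transitions over a training corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in corpus:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return counts

def sequence_prob(counts, s, smooth=1e-6):
    """Product of smoothed bigram probabilities for a sequence."""
    p = 1.0
    for a, b in zip(s, s[1:]):
        total = sum(counts[a].values())
        # add-epsilon smoothing over a 4-symbol alphabet so unseen
        # transitions get a tiny (but nonzero) probability
        p *= (counts[a][b] + smooth) / (total + smooth * 4)
    return p

# Training data: strings over {A,B} and strings over {C,D}, never mixed
corpus = ["ABABBA", "AABBAB", "CDCCDD", "DCDCCD"]
model = train_bigrams(corpus)

likely = sequence_prob(model, "ABBABB")       # stays inside the A/B cluster
unlikely = sequence_prob(model, "ABBABBBDA")  # the stray D crosses clusters
print(likely > unlikely)  # True
```

The "B→D" transition never occurs in training, so its smoothed probability is near zero and it drags the whole sequence's probability down, exactly the "statistically unlikely" judgment described above, made with no semantics at all.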

            • monotremata@lemmy.ca · 7 days ago

              Honestly this isn’t really all that accurate. A common example when introducing the Word2Vec mapping is that if you take the vector for “king”, subtract the vector for “man”, and add the vector for “woman”, the closest vector to the result is “queen”. So there are elements of “meaning” being captured there. Deep learning networks can capture a lot more abstraction than that, and the attention mechanism introduced by the Transformer model greatly increased the ability of these models to interpret context clues.

              You’re right that it’s easy to make the mistake of overestimating the level of understanding behind the writing. That’s absolutely something that happens. But saying “it has nothing to do with the meaning” is going a bit far. There is semantic processing happening, it’s just less sophisticated than the form of the writing could lead you to assume.
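
The analogy arithmetic above can be sketched in a few lines. The 2-D “embeddings” here are invented values (real Word2Vec vectors have hundreds of dimensions and are learned, not hand-set); one axis loosely encodes “royalty”, the other “gender”:

```python
import numpy as np

# Hypothetical toy embeddings, hand-set for illustration only
vecs = {
    "king":  np.array([0.9, 0.9]),
    "queen": np.array([0.9, 0.1]),
    "man":   np.array([0.1, 0.9]),
    "woman": np.array([0.1, 0.1]),
    "movie": np.array([0.5, 0.5]),  # unrelated distractor word
}

def nearest(v, exclude):
    """Closest vocabulary word by cosine similarity, skipping the query words."""
    sim = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vecs if w not in exclude), key=lambda w: sim(vecs[w], v))

target = vecs["king"] - vecs["man"] + vecs["woman"]
print(nearest(target, {"king", "man", "woman"}))  # prints "queen"
```

Because the offsets are consistent, the arithmetic lands exactly on “queen”, which is the sense in which these vector spaces capture some relational “meaning” even though no symbol is ever interpreted.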

            • JcbAzPx@lemmy.world · 7 days ago

              Unless they grabbed discussion forums that happened to include posts from multiple people. That’s pretty common when talking about fertility; problems in that area get brought up alongside it.

              People can use context and meaning to avoid that mistake; LLMs have to be forced not to make it, through much slower QC by real people (something Google hates to do).

  • DeusUmbra@lemmy.world · 6 days ago

    This is why no one can find anything on Google anymore, they don’t know how to google shit.

  • ArtificialHoldings@lemmy.world · 6 days ago (edited)

    Everyone in this post is the annoying IT person who says “why don’t you just run Linux?” to people who don’t even fully understand what an OS is in the first place.

  • Retreaux@lemmy.world · 5 days ago

    It’s hilarious: I got the same result with Charlize Theron and the exact same movie. I guess neither of us knows who actresses are, apparently.

  • Bongles@lemm.ee · 7 days ago (edited)

    You’ve sullied my quick answer:

    The assistant figures it out though:

    • LemmyKnowsBest@lemmy.world · 7 days ago

      Maybe that’s why AI had trouble determining anything about AJ and the movie Heat: she wasn’t even in it!

  • Alexaral@infosec.pub · 7 days ago

    Leaving aside the fact that this looks like AI slop/trash bait: who the fudge is so clueless as to think Ashley Judd (assuming that’s who they’re confusing her with) looked anything like Angelina Jolie back then?

    • Bosht@lemmy.world · 7 days ago

      First, it’s the internet; you can cuss. Either structure the sentence not to include it at all, or just cuss, for fuck’s sake. Second, not everyone knows or is familiar with every actor or actress, especially one who’s definitely not in the limelight anymore, like Ashley Judd. Hell, even when she was popular she wasn’t in a lot.

  • otacon239@lemmy.world · 7 days ago (edited)

    It also contradicts itself immediately, saying she’s fertile and then saying she’s had her ovaries removed and that she’s reached menopause.

  • tiredofsametab@fedia.io · 7 days ago

    A statistical model predicted that “in heat”, with no capital H and no quotes, was more likely to refer to the biological condition. Don’t get me wrong: I think these things are dumb, but that was a fully predictable result. (‘…the movie “Heat”’ would probably get you there.)

    • snooggums@lemmy.world · 7 days ago (edited)

      As a comparison, I ran the same all-lowercase query in Bing and got the answer about the movie, because asking about a movie is statistically more likely than asking if a human is in heat. Google’s AI is worse than fucking Bing, while Google’s old search algorithm consistently had the right answers.

      Google made itself worse by replacing a working system with AI.

      • wise_pancake@lemmy.ca · 7 days ago (edited)

        Kagi’s quick answer, for comparison, finds this tweet, but now it thinks that Heat is not the movie kind lol

        The AI ouroboros in action

      • tiredofsametab@fedia.io · 7 days ago

        It might be the way Bing is tokenizing and/or how far back it’s looking to connect things when compared to Google.

    • over_clox@lemmy.world · 7 days ago

      While I get your point of the capital H thing, Google’s AI itself decided to put “heat” in quotes all on its own…

    • wander1236@sh.itjust.works · 7 days ago

      I tried the search myself, and the non-AI results that aren’t this Bluesky post are pretty useless, but at least they’re useless without using two small towns’ worth of electricity.

      • snooggums@lemmy.world · 7 days ago

        Non-AI results are not going to generally include sites about how something isn’t true unless it is a common misconception.

  • frezik@midwest.social · 7 days ago

    We all know how AI has made things worse, but here’s some context on how it’s outright backwards.

    Early search engines had a context problem. To use an example from “Halt and Catch Fire”: if you search for “Texas Cowboy”, do you mean the guys on horseback driving a herd of cows, or do you mean the football team? If you search for “Dallas Cowboys”, should that bias the results towards a different answer? Early, naive search engines gave bad results for cases like that; they spat out whatever keywords happened to hit the most.

    Sometimes it was really bad. In high school, I was showing a history teacher how to use search engines, and he searched for “China golden age”. All the results were Asian porn. I think we were using Yahoo.

    AltaVista largely solved the context problem. We joke about its bad results now, but it was one of the better search engines before Google PageRank.

    Now we have AI unsolving the problem.

    • doingthestuff@lemy.lol · 7 days ago

      I was okay with keyword results. If you knew how the search engine worked, you could usually find what you were looking for.