• MissJinx@lemmy.world · 9 up / 2 down · edited · 1 day ago

    If you use AI as a tool in your daily life, one tip: ALWAYS ask "what is wrong with the answer you gave above?" Nine times out of ten it will correct something.

    PS: I know, I know, "AI bad, can't use." I'm just leaving a tip for those who do use it as a tool. For example, I sometimes use it to map frameworks; very dumb work.
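
    The tip above is just a two-pass conversation: ask, then feed the answer back with a critique prompt. A minimal sketch of that pattern, where `ask_model` is a hypothetical stand-in for whatever chat API you actually use (it is not a real library function):

    ```python
    def critique_pass(ask_model, question):
        """Ask a question, then ask the model to critique its own answer.

        `ask_model` takes a list of chat messages and returns a reply string;
        plug in any real LLM client here.
        """
        history = [{"role": "user", "content": question}]
        first = ask_model(history)
        history.append({"role": "assistant", "content": first})
        history.append({"role": "user",
                        "content": "What is wrong with the answer you gave above?"})
        critique = ask_model(history)
        return first, critique

    # Usage with a dummy model (replace with a real API call):
    def fake_model(messages):
        # First call sees one message, the critique call sees three.
        return "first answer" if len(messages) == 1 else "self-critique"

    first, critique = critique_pass(fake_model, "Is this claim correct?")
    ```

    As the reply below notes, the second pass is not a truth oracle: the critique can "correct" something that was already right, so it helps most as a cheap consistency check.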

    • Buddahriffic@lemmy.world · 3 up · 22 hours ago

      It depends on the model.

      I’ve had some interactions where you can make it flip-flop by repeatedly asking “are you sure about that?”

      Another time, I was thinking about some math thing and figured it was probably already a theorem if it was true, so I asked one of the GPT5 models on Duck Duck Go. It gave a response that was obviously wrong, and I argued with it for longer than I should have. I asked another model; it gave the correct response, plus it told me the name of the theorem.

      So I asked the first one about that theorem, and yes, it was familiar with it, but claimed it didn’t apply in my case for some BS reason (my specific case reduced trivially to exactly what the theorem was about). I did eventually get it to admit the truth, but it just wouldn’t let go for the longest time.

      So it doesn’t hurt to ask, but a) it might be wrong when it corrects something that was right, and b) it might argue it is right when it is wrong.

  • No1@aussie.zone · 5 up / 5 down · edited · 2 days ago

    I shit you not:

    The comparison between AI and Temu as a metaphor for thinking highlights a paradox: while AI systems, like the e-commerce platform Temu, are built on vast data and sophisticated algorithms, their “thinking” is fundamentally different from human cognition. Temu uses AI to personalize shopping by analyzing user behavior, showing products based on clicks and basket additions, creating a highly efficient, addictive experience that mirrors how AI models learn from data. Similarly, large language models (LLMs) are trained on massive internet text, learning to predict the next word by adjusting internal connections—much like Temu’s algorithms refine product recommendations in real time. However, this process is not genuine understanding. Just as Temu’s AI can generate absurd imagery, such as a trailer-hitch-shaped camper, which reflects a failure to grasp real-world physics or context, LLMs can produce plausible-sounding text that lacks true comprehension or experience. The AI’s “thought” is a statistical simulation, not a conscious or experiential process. As one researcher noted, AI is not self-aware and has no idea of what it is doing, operating purely through probability-based decisions without any internal model of reality. While both Temu and AI systems appear intelligent by generating tailored, seemingly coherent outputs, they do so by manipulating patterns in data rather than engaging in genuine reasoning or understanding. This is why some critics describe LLMs as “stochastic parrots” that mimic language without comprehension. In essence, AI’s “thinking” is like Temu’s interface: highly optimized, responsive, and persuasive, but ultimately rooted in pattern recognition, not insight.

    AI-generated answer. Please verify critical facts.

    Yeah, nah. We are all fucked.