• da_hooman_husky@lemmy.sdf.org · 1 day ago

    There are absolutely people who believe that if you tell ChatGPT not to make mistakes, the output will be more accurate 😩… it’s things like this that make me kinda hate what Apple and Steve Jobs did by making tech so accessible to the masses

    • AeonFelis@lemmy.world · 9 hours ago

      The theory behind this trick is that you’re refining which part of its knowledge base it’ll use. You’re basically saying “most of the examples you were trained on were written by idiots and are full of mistakes, so when you answer my query, limit yourself to the examples that have no mistakes”. It sounds stupid, but apparently, to some extent, it kind of works?

    • Gonzako@lemmy.world · 1 day ago

      Well, you can get it to output better math by telling it to take a breathe first. It’s stupid, but LLMs got trained on human data, so it’s only fair that they mimic human output

      • Lemminary@lemmy.world · edited 12 hours ago

        breathe

        Not to be rude, this is just an observation from an ESL speaker. Just yesterday, someone wrote “I can’t breath”. Are these two spellings switching places now? I’m seeing it more often.

        • Whats_your_reasoning@lemmy.world · edited 11 hours ago

          No, it’s just a very common mistake. You’re right, it’s supposed to be the other way around (“breath” is the noun, “breathe” is the verb).

          English spelling is confusing for native speakers, too.

    • scratchee@feddit.uk · edited 1 day ago

      Whilst I’ve mostly avoided LLMs so far, it seems like that should actually work a bit. LLMs are imitating us, and if you warn a human to be extra careful they will (usually) try to be more careful, so an LLM should have internalised that behaviour. That doesn’t mean they’ll be much more accurate, though. Maybe they’d be less likely to output humanlike mistakes on purpose? That wouldn’t help much with the LLM-like mistakes they’re making all on their own, though.

      • rumba@lemmy.zip · 10 hours ago

        You are absolutely correct and 10 seconds of Google searching will show that this is the case.

        You get a small boost by asking it to be careful or telling it that it’s an expert in the subject matter. On the “thinking” models, they can even chain together post-review steps.
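
        For anyone curious, here’s a rough sketch of what that prompt trick can look like with the OpenAI Python SDK. The model name, the exact system-prompt wording, and the ask_carefully helper are my own illustrative assumptions, not something anyone in this thread specifically tested:

        ```python
        # Sketch of the “be careful / you’re an expert” prompt tweak, using the
        # OpenAI Python SDK. Model name, wording, and helper name are assumptions.
        from openai import OpenAI

        client = OpenAI()  # picks up OPENAI_API_KEY from the environment

        def ask_carefully(question: str, model: str = "gpt-4o-mini") -> str:
            """Ask a question with an 'expert, double-check your work' system prompt."""
            response = client.chat.completions.create(
                model=model,
                messages=[
                    {
                        "role": "system",
                        "content": (
                            "You are a careful expert in this subject. "
                            "Take a breath, work step by step, and double-check "
                            "your answer for mistakes before replying."
                        ),
                    },
                    {"role": "user", "content": question},
                ],
            )
            return response.choices[0].message.content

        if __name__ == "__main__":
            print(ask_carefully("What is 17 * 23? Show your working."))
        ```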