• psmgx@lemmy.world · 1 month ago

    “Sorry, we’ll format correctly in JSON this time.”

    [Proceeds to shit out the exact same garbage output]

  • Engraver3825@piefed.social · 1 month ago

    True story:

    AI: 42, ]

    Vibe coder: oh no, a syntax error, programming is too difficult, software engineers are gatekeeping with their black magic.
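For anyone following along at home, that output really is a one-liner to trip over, assuming a JS consumer: `JSON.parse` rejects the model's `42, ]`, and even the milder sin of a trailing comma, because JSON is stricter than JS literals.

```javascript
// The model's actual output: leftover garbage after a valid value.
try {
  JSON.parse("42, ]");
} catch (e) {
  console.log(e.name); // -> SyntaxError
}

// Even a trailing comma -- fine in a JS array literal -- is invalid JSON.
try {
  JSON.parse("[42, ]");
} catch (e) {
  console.log(e.name); // -> SyntaxError
}

console.log(JSON.parse("[42]")); // -> [ 42 ]
```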

    • towerful@programming.dev · 1 month ago
      let data = null
      do {
          const response = await openai.prompt(prompt)
          if (response.error !== null) continue;
          try {
              data = JSON.parse(response.text)
          } catch {
              data = null // just in case
          }
      } while (data === null)
      return data
      

      Meh, not my money
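If it ever does become your money: the same loop with a retry cap and a little backoff gives up instead of spinning forever. A sketch, with `promptFn` standing in for whatever client call you actually make (not a real OpenAI API):

```javascript
// Retry a prompt until it yields parseable JSON, but only maxAttempts
// times, backing off 100ms, 200ms, 400ms... between failures.
async function promptForJson(promptFn, prompt, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await promptFn(prompt);
    try {
      return JSON.parse(response.text);
    } catch {
      // Invalid JSON: wait briefly, then let the loop try again.
      await new Promise((r) => setTimeout(r, 100 * 2 ** (attempt - 1)));
    }
  }
  throw new Error(`no valid JSON after ${maxAttempts} attempts`);
}
```

Still burns a few tokens, but at least it stops.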

  • borth@sh.itjust.works · 1 month ago

    The AI probably: Well, I might have made up responses before, but now that “make up responses” is in the prompt, I will definitely make some up.

  • brucethemoose@lemmy.world · 1 month ago (edited)

    Funny thing is, correct JSON is easy to “force” with grammar-based sampling (aka it literally can’t output invalid JSON) + completion prompting (aka start with the correct answer and let it fill in what’s left, a feature now deprecated by OpenAI), but LLM UIs/corporate APIs are kinda shit, so no one does that…
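The grammar-based part is less magic than it sounds. A toy sketch (my illustration, not real llama.cpp/GBNF code): a tiny state machine describes the only strings allowed — here `{"answer": <digits>}` — and any token the “model” proposes that would break the grammar is masked out and replaced by a legal one, so invalid JSON is structurally impossible.

```javascript
const DIGITS = ["0","1","2","3","4","5","6","7","8","9"];

// Tokens the grammar permits in each state ("}" listed first so the
// greedy fallback closes the object instead of padding digits forever).
function legalTokens(state) {
  switch (state) {
    case "start":      return ['{"answer": '];
    case "firstDigit": return DIGITS;
    case "moreDigits": return ["}", ...DIGITS];
    default:           return [];
  }
}

function nextState(state, token) {
  if (state === "start") return "firstDigit";
  if (state === "firstDigit") return "moreDigits";
  if (state === "moreDigits" && token !== "}") return "moreDigits";
  return "done";
}

// script: the tokens the unconstrained model *wanted* to emit, in order.
// When a proposal is illegal, we take the grammar's first legal token
// instead (a real sampler would renormalize probabilities over the mask).
function constrainedSample(script) {
  let state = "start";
  let out = "";
  let i = 0;
  while (state !== "done") {
    const legal = legalTokens(state);
    const proposal = script[i++];
    const token = legal.includes(proposal) ? proposal : legal[0];
    out += token;
    state = nextState(state, token);
  }
  return out;
}

// The model tries to emit the thread's favourite garbage: `42, ]`
console.log(constrainedSample(["4", "2", ",", " ", "]"])); // -> {"answer": 2}
```

Real implementations do this over the model’s logits with a full JSON grammar, but the principle is the same: the sampler, not a prayer in the prompt, guarantees the format.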

    A conspiratorial part of me thinks that’s on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models (where smaller, dumber, (gasp) prompt-cached open weights ones could get the job done), and keeps the users dumb. And it fits the Altman narrative of “we’re almost at AGI, I just need another trillion to scale up with no other improvements!”