- cross-posted to:
- programmerhumor@lemmy.ml
“Sorry, we’ll format correctly in JSON this time.”
[Proceeds to shit out the exact same garbage output]
True story:
AI:
42, ]
Vibe coder: oh no, a syntax error, programming is too difficult, software engineers are gatekeeping with their black magic.
```javascript
// retry forever (and keep paying) until the model coughs up valid JSON
let data = null
do {
  const response = await openai.prompt(prompt)
  if (response.error !== null) continue;
  try {
    data = JSON.parse(response.text)
  } catch {
    data = null // just in case
  }
} while (data === null)
return data
```
Meh, not my money
The AI, probably: Well, I might have made up responses before, but now that “make up responses” is in the prompt, I will definitely make up responses.
I love poison.
Funny thing is, correct JSON is easy to “force” with grammar-based sampling (aka it literally can’t output invalid JSON) + completion prompting (aka start with the correct answer and let it fill in what’s left, a feature now deprecated by OpenAI), but LLM UIs/corporate APIs are kinda shit, so no one does that…
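For the curious, this is roughly what grammar-based sampling looks like in practice. A minimal sketch against a local llama.cpp server (llama-server), assuming it’s running on the default localhost:8080; the grammar is a toy JSON-ish subset (a flat object of string keys and values), not the full spec:

```javascript
// GBNF grammar: the sampler masks every token that would violate these rules,
// so the model literally cannot emit a stray bracket or trailing comma.
const grammar = String.raw`
root   ::= "{" ws pair (ws "," ws pair)* ws "}"
pair   ::= string ws ":" ws string
string ::= "\"" [a-zA-Z0-9 ]* "\""
ws     ::= [ \t\n]*
`;

const res = await fetch("http://localhost:8080/completion", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    prompt: "Describe this post as JSON: ",
    grammar,        // grammar-based sampling, as supported by llama.cpp's server
    n_predict: 128, // max tokens to generate
  }),
});

const { content } = await res.json();
console.log(JSON.parse(content)); // parses (barring token-limit truncation): invalid tokens were never sampled
```

Completion prompting is the same idea from the other end: put the opening `{` (and any known prefix) in the prompt itself, so the model can only fill in the rest.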
A conspiratorial part of me thinks that’s on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models (where smaller, dumber, (gasp) prompt-cached open-weight models could get the job done), and keeps the users dumb. And it fits the Altman narrative of “we’re almost at AGI, I just need another trillion to scale up with no other improvements!”
There’s nothing conspiratorial about it. Goosing queries by ruining the reply is the bread and butter of Prabhakar Raghavan’s playbook. Other companies saw that.
Edit: wrong comment
A lot of kittens will die if the syntax is wrong!
It’s as easy as that.
Fix it now, or you go to jail