Twitter enforces strict restrictions against external parties using its data for AI training, yet it freely uses data created by others for exactly that purpose.
Yet another reminder that an LLM is not "intelligence" by any common definition of the term. The thing just scraped another LLM's responses and parroted them as its own, even though they were completely irrelevant to itself. All delivered in an answer that sounds like it knows what it's talking about, right down to copying the simulated first-person stance of the source.
In this case, sure, who cares? But the problem is that something sold by its designers as an expert of sorts is in reality prone to making shit up or using bad sources, while wrapping it in a language simulation good enough to sound convincing.
Meat goes in. Sausage comes out.
The problem is that LLMs are being sold as being able to turn meat into a Black Forest gateau.
I can buy that this was accidental, because that answer is way less direct/relevant than what ChatGPT would provide. The guy asked for malicious code and Grok described how to avoid getting malicious code.
And then he asks whether there's a policy preventing Grok from doing that, and Grok answers with the policy that prevents ChatGPT from providing malicious code. Seems pretty consistently wrong.