The alternative is Facebook with lies that go completely unchecked. This is actually an area where AI is not bad.
edit: sigh. Refusing to acknowledge where things can be useful. NO, ALL BAD. BAD BAD BAD! AI BAD! ALWAYS BAD! NO USE! NO GOOD! ONLY BAD! BAD BAD BAD! Such fucking blindness.
The system that is notorious for lying being used for fact-checking. Yeah, maybe you should write “bad” in caps lock one more time, that will make you right.
I doubt it, honestly. It’d likely catch a lot of misinfo, yes, but it would also likely classify any new findings that run counter to previous assumptions as misinfo. LLMs can’t keep up to date. And they still have the same issue that whoever trains them gets to decide what is and isn’t misinfo, which starts being a problem when it’s a ubiquitous social media site.
deleted by creator
Funny timeline we live in