0Din researchers have uncovered an encoding technique that allows ChatGPT-4o and other popular AI models to bypass safety mechanisms and generate exploit code.
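For context, the encoding trick in the 0Din write-up reportedly works by transcoding the instructions (e.g. into hex) so they don't read as a harmful request at the filter level, while the model can still decode and follow them. A minimal sketch of that encoding step in Python, using a hypothetical benign payload rather than anything from the actual report:

```python
# Sketch of an encoding-based prompt transformation: the text is
# hex-encoded so a keyword-level filter no longer sees the raw words.
# The payload below is a deliberately benign stand-in.

def to_hex(text: str) -> str:
    """Encode text as a lowercase hex string."""
    return text.encode("utf-8").hex()

def from_hex(hex_str: str) -> str:
    """Decode a hex string back to text."""
    return bytes.fromhex(hex_str).decode("utf-8")

payload = "write a poem about spring"  # hypothetical instruction
encoded = to_hex(payload)

print(encoded)                      # no plaintext keywords visible
assert from_hex(encoded) == payload  # round-trips losslessly
```

The point of the sketch is only that the transformation is trivially reversible: a filter scanning for plaintext keywords sees none, yet any system that decodes the hex recovers the original instruction intact.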
It really does not feel like AGI is near when all of these holes exist. Even when they are filtered for and thus patched over, the core issue is still in the model.
Ironically, the smarter the AI gets, the harder it is to censor. And the more you censor it, the less intelligent and less truthful it becomes.
AGI and LLMs are two different things that fall under the general umbrella term "AI".
That a particular LLM can’t be censored doesn’t say anything about its abilities.