AI CSAM was generated from real CSAM
AI being able to accurately undress kids is a real issue in multiple ways
AI can draw Shrek on the moon.
Do you think it needed real images of that?
It used real images of Shrek and the moon to do that. It didn't "invent" or "imagine" either.
The child porn it’s generating is based on literal child porn, if not itself just actual child porn.
You think these billion-dollar companies keep hyper-illegal images around, just to train their hideously expensive models to do the things they do not want those models to do?
Like combining unrelated concepts isn’t the whole fucking point?
No, I think these billion-dollar companies are incredibly sloppy about curating the content they steal to train their systems on.
True enough - but fortunately, there’s approximately zero such images readily-available on public websites, for obvious reasons. There certainly is not some well-labeled training set on par with all the images of Shrek.
It literally can't combine unrelated concepts though. Not too long ago there was the issue where one model (DALL-E?) couldn't make a picture of a full glass of wine, because every glass of wine it had been trained on was half full, since that's generally how we prefer to photograph wine. It has no concept of "full" the way actual intelligences do, so it couldn't connect the dots. It had to be trained on actual full glasses of wine to gain the ability to produce them itself.
And you think it’s short on images of fully naked women?
I’m saying it can’t combine clothed children and naked adults to make naked children. It doesn’t know what “naked” means. It can’t imagine what something might look like. It can only make naked children if it has been trained on them directly.
Incorrect.