OpenAI's latest image-generation update has taken social media by storm, with users flooding X, Instagram, and Reddit with Studio Ghibli-style images.
The funny thing is, OpenAI’s image generator didn’t do a particularly good job of making a Ghibli-stylized version of Altman.
That being said, there will be a downstream impact on media quality if no novel approach emerges for balancing creative work against AI slop generators. I don’t think there is a simple answer.
Replacing amazing creative humans with bland AI generated content is not a good use of AI.
Mostly true, but…
Replacing clip art, generic Getty Images filler, and other hand-crafted slop with machine-made slop for things like slideshows, YouTube thumbnails, and other applications where the image isn’t meant to depict something that actually exists, that I think is fine.
Of course it should be based on free software (such as AGPL) and use only freely provided or public domain inputs.
Of course it shouldn’t be used to misrepresent its outputs as produced by, authorized, or of people that it is not.
But what we have right now is another sort of enclosure of the cultural commons, blended with plagiarism-by-another-name. If there are already terms for this sort of misappropriation, I can’t think of them right now.
And despite all of its other problems, it’s still not even profitable.
Ironic, since the decline of human-made work (art or software) will reduce the quality and diversity of generative AI itself.
Which the shareholders couldn’t care less about. They only need to get super rich in their lifetime.
In theory they get super rich, but in practice the early adopters of AI seem to be hemorrhaging money as a result. It can’t actually produce even bare-minimum content, so they end up hiring humans to fix its bullshit, and the end product is worse than just using humans in the first place.
Artists will no longer exist as a species.