Some of us value authenticity. Plagiarism-powered hallucination engines have exactly none of that. The disturbed individual (or individuals) who painted the bathroom of my primary school with feces created something more artful than any AI slop could ever be.
Imagine arguing that flavor is what is important in a dish, not the type of knife used to cut the vegetables, and having someone respond that he'd rather drink piss.
It's more like arguing that a soulless robot should make your food from stolen recipes. Not only are the recipes stolen, but the robot cannot taste or understand flavor. All it understands is the words of the recipes, and sometimes not even that; it then needs to make new recipes without being able to taste them. Your food will taste as bland and soulless as the robot that cannot taste it, and even if it does taste good, you'll know it's basically just a worse version based on stolen recipes.
I mean I eat food made by a robot basically every day and it’s pretty good.
I hate to be the one to break it to you, but a huge amount of the food that is eaten in the world is made by “robots”. It ain’t the Keebler Elves in those factories baking your vanilla sandwich cookies, that’s for sure.
Go watch any video on mass-produced food and you'll see that it is made by machines. Drinks are mixed, bottled and packed without any human intervention. You would have a hard time trying to find a dish that you eat that was not prepared in some part by soulless, tasteless machines.
Those robots were still configured by humans to produce a product the humans designed. The automatically produced food is still human food.
AI systems are also still configured by humans, since humans are the ones choosing which training data is used. So automatically generated art is still human art.
The training data needed is so enormous that it is not cherry-picked by humans. Furthermore, transforming random data until it fits the given description closely enough, according to a "neural network" that was statistically curve-fitted to the training data, is not in any way human.
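For what it's worth, "statistically curve fitted" has a concrete meaning: pick a parameterized function and adjust its parameters to minimize error against the training data. A minimal sketch of that idea, using a toy polynomial fit in NumPy rather than an actual neural network:

```python
import numpy as np

# Toy "training data": noisy samples of an unknown function.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = 2.0 * x**2 - 0.5 * x + rng.normal(scale=0.1, size=x.shape)

# "Curve fitting" in the statistical sense: choose parameters that
# minimize squared error on the data. A degree-2 polynomial stands
# in here for the (vastly larger) parameterized model.
coeffs = np.polyfit(x, y, deg=2)
fitted = np.polyval(coeffs, x)

print("fitted coefficients:", coeffs)  # close to [2.0, -0.5, 0.0]
print("mean squared error:", np.mean((fitted - y) ** 2))
```

An image generator does the same kind of fitting, just with billions of parameters instead of three.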
You’re confidently stating something that, literally, no scientist would claim. We have no idea how neurons form our mind.
The reason we use the term neural network is that the functions implemented by the individual neurons in neural networks are based on functions derived from measuring actual neurons in real brains. The model weights are literally a mathematical description of how different neurons connect to each other, and, when trained on similar data, computational neural networks and organic neural networks form similar data-processing structures.
https://www.sciencedirect.com/science/article/abs/pii/S1566253524003609

Inspired by biological vision, the architecture of deep neural networks has undergone significant transformations. For instance, the design of Convolutional Neural Networks (CNN) draws inspiration from the organization of the visual cortex in the brain, while Recurrent Neural Networks (RNN) emulate the mechanisms in the brain for processing sequential data.
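To make the claim above concrete: the standard artificial neuron computes a weighted sum of its inputs and passes it through a nonlinear "activation," a form loosely abstracted from how biological neurons integrate signals and fire. A minimal sketch (the sigmoid here is just one common choice of activation):

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """The classic artificial neuron: a weighted sum of inputs
    pushed through a nonlinear activation (here a sigmoid),
    loosely modeled on how biological neurons fire."""
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

print(neuron(np.array([0.5, -1.0, 2.0]),
             np.array([0.8, 0.2, -0.4]),
             bias=0.1))
```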
You are now arguing that statistical CGI is human because its "neural networks" are inspired by biological neurons, which is an entirely different argument from the one I answered. But fine.
As your article says:
the architecture of deep neural networks has undergone significant transformations.
The functions and architecture have been so optimized and simplified that it is just matrix multiplication now. It's just math now, and math that is a lot simpler than the math that would be required to describe and simulate human brains.
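Concretely, here is roughly everything one layer of a modern network computes: a matrix multiply, a bias add, and an elementwise nonlinearity. This is a generic sketch, not any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)

# One "layer" of a modern network: a matrix multiply, a bias add,
# and an elementwise nonlinearity. Stacking many of these layers,
# with learned values for W and b, is the whole model.
W = rng.normal(size=(4, 3))   # learned weights
b = rng.normal(size=4)        # learned biases
x = rng.normal(size=3)        # input vector

h = np.maximum(0.0, W @ x + b)  # ReLU(Wx + b)
print(h)
```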
What if the knife were made out of the skulls of infants?
Then that is a fucked-up knife, but it doesn't change anything about the dish.
The argument is: the dish requires the use of the fucked-up knife.
If AI art only used ethically-sourced data, there’d be a lot less objection to it.
I can say, for sure, that this isn’t true.
People still catch the exact same flak for using generative fill in Photoshop, despite Adobe training their models on artwork with the explicit permission (and compensation) of the artists involved in making the training data.
People treat every model like it has personally eaten every drawing and macaroni painting that they've ever done.
Why does AI art have “no authenticity”?
Because if you use words that only have subjective definitions, then you can arbitrarily move your definition around when people come up with counter-examples.
It's a way of creating an argument that means nothing and also can't be argued against on Internet forums, where there are no rules (unlike, say, a debate stage or courtroom, where you have to rationally prove your points).