Those robots were still configured by humans to produce a product the humans designed. The automatically produced food is still human food.
AI is also still configured by humans, since humans are the ones choosing which training data is used. So automatically generated art is still human art.
The training data needed is so enormous that it is not cherry-picked by humans. Furthermore, transforming random data until it fits the given description closely enough, according to a “neural network” that was statistically curve-fitted to the training data, is not in any way human.
You’re confidently stating something that, literally, no scientist would claim. We have no idea how neurons form our mind.
The reason we use the term neural network is that the functions implemented by the individual neurons in a neural network are based on functions derived from measuring actual neurons in real brains. The model weights are literally a mathematical description of how different neurons connect to each other, and, when trained on similar data, computational neural networks and organic neural networks form similar data-processing structures.
Inspired by biological vision, the architecture of deep neural networks has undergone significant transformations. For instance, the design of Convolutional Neural Networks (CNN) draws inspiration from the organization of the visual cortex in the brain, while Recurrent Neural Networks (RNN) emulate the mechanisms in the brain for processing sequential data.

https://www.sciencedirect.com/science/article/abs/pii/S1566253524003609
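For what it's worth, the “function implemented in an individual neuron” being described here is just a weighted sum of inputs pushed through an activation function. A minimal sketch in plain Python (the names and numbers are illustrative, not from any particular library):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    squashed by a sigmoid activation (a stand-in for the firing-rate
    response curves measured in biological neurons)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

# Example: three inputs with hand-picked weights
print(neuron([0.5, 1.0, -0.2], [0.8, -0.4, 0.3], bias=0.1))
```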
You are now arguing that statistical CGI is human because its “neural networks” are inspired by biological neurons, which is an entirely different argument from the one I was answering. But fine.
As your article says, “the architecture of deep neural networks has undergone significant transformations.”
The functions and architecture have been so optimized and simplified that it is just matrix multiplication now. It’s just math now. Math that is a lot simpler than the math that would be required to describe and simulate a human brain.
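To make the “just matrix multiplication” point concrete: a whole layer of those neurons is one matrix-vector product plus an elementwise nonlinearity. A minimal NumPy sketch (shapes and names are illustrative):

```python
import numpy as np

def dense_layer(x, W, b):
    """A fully connected layer: every neuron's weighted sum is computed
    at once as a single matrix-vector product, then passed through a
    nonlinearity (ReLU here)."""
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # 4 input features
W = rng.normal(size=(3, 4))       # 3 neurons, each with 4 weights
b = np.zeros(3)
print(dense_layer(x, W, b))       # activations of the 3 neurons
```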
“Just math” is used to describe essentially everything in science. You’re implying that a mathematical model can’t predict reality, which is simply incorrect.
We use math to accurately describe all kinds of natural processes and phenomena. Mathematical models are the foundation of most fields of science because they accurately model reality.
And, because matrices are a useful mathematical tool for describing complex systems (here, the connections between large numbers of neurons), they’re used across many fields.
This is why we can predict time dilation in the GPS satellites used to locate your phone, or how air will flow over the blades of a jet turbine: a mathematical model of a process completely describes the process.
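The GPS example is easy to check with a back-of-envelope calculation. A rough sketch using standard textbook constants (a first-order approximation, not a full general-relativistic treatment):

```python
# Rough estimate of the daily clock offset of a GPS satellite relative
# to a clock on the ground (first-order approximation).
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
c = 2.998e8          # speed of light, m/s
R_earth = 6.371e6    # Earth radius, m
r_orbit = 2.656e7    # GPS orbital radius, m
day = 86400          # seconds

v = (GM / r_orbit) ** 0.5                       # orbital speed, ~3.9 km/s
special = -(v**2) / (2 * c**2)                  # moving clock runs slower
general = (GM / R_earth - GM / r_orbit) / c**2  # higher clock runs faster

print(f"special relativity: {special * day * 1e6:+.1f} us/day")
print(f"general relativity: {general * day * 1e6:+.1f} us/day")
print(f"net:                {(special + general) * day * 1e6:+.1f} us/day")
# roughly -7 us/day + +46 us/day = about +38 us/day, matching the
# correction actually built into the GPS system.
```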
bruh
of course math can predict and model reality, but that was not my argument
my argument is that the mathematical model for machine learning is in no way close to human minds anymore
And I’m saying that you can’t know that, because science doesn’t know that.
There is a reason these are called neural networks. The atomic unit they’re built on is a model of actual neurons, and the information encoded in the network (connection strengths and activation thresholds) is based on observational studies of brains and how they process information.
Making a claim like ‘it isn’t the same as a human mind’ is simply not supported by evidence, because there are no studies that correlate neural structures with the subjective ‘mind’ (i.e. the software running on the brain hardware).
However, we do know how neurons take weighted inputs and transform them into outputs, and we can create mathematical neurons that match observed neurons. We can train these networks, and the way they adjust their weights also matches what is seen in cultured physical neurons. Based on observational data, the mathematical model matches the physical neurons.
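To make “the way that they adjust their weights” concrete: in the artificial case the weight update is a simple, well-defined rule. A minimal sketch of one gradient-descent step for the single sigmoid neuron from the earlier example (plain Python with illustrative names; this shows the artificial update rule only, not a claim about how biological synapses implement it):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(weights, bias, inputs, target, lr=0.1):
    """One gradient-descent update: nudge each weight in proportion to
    how much it contributed to the error on this example."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    out = sigmoid(z)
    # derivative of squared error through the sigmoid
    delta = (out - target) * out * (1.0 - out)
    new_weights = [w - lr * delta * x for w, x in zip(weights, inputs)]
    new_bias = bias - lr * delta
    return new_weights, new_bias

w, b = [0.0, 0.0], 0.0
for _ in range(1000):
    w, b = train_step(w, b, inputs=[1.0, 0.5], target=1.0)
print(w, b)  # weights drift so the neuron's output approaches the target
```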
Obviously we don’t have Transformer networks in our brains, because we don’t learn to predict next tokens or to denoise images. But the underlying hardware that these systems run on is an exact analog of the neurons that make up the brains of everything on Earth. There’s nothing magical about human minds: they’re built out of neurons just as much as transformer networks are.
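Since “learn to predict next tokens” carries a lot of weight in that sentence, here is what that objective literally is: score every possible next token, then penalize the model by the negative log of the probability it assigned to the token that actually came next. A toy sketch (tiny vocabulary, made-up scores, plain Python):

```python
import math

vocab = ["the", "cat", "sat", "mat"]

def next_token_loss(logits, target):
    """Cross-entropy loss for next-token prediction: softmax the scores,
    then take -log(probability assigned to the true next token)."""
    exps = [math.exp(l) for l in logits]
    probs = [e / sum(exps) for e in exps]
    return -math.log(probs[vocab.index(target)])

# The model's (made-up) scores for the token following "the cat":
logits = [0.1, 0.2, 2.5, 0.3]           # strongly favours "sat"
print(next_token_loss(logits, "sat"))   # small loss: good prediction
print(next_token_loss(logits, "mat"))   # larger loss: bad prediction
```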
You’re trying to split hairs but not explaining how any of that applies to the definition of art.