Blockchain is an adequate solution to a problem that already has other, cheaper solutions.
AI is an adequate solution to a problem that has no other similarly adequate solutions (classification of complex information). Unfortunately, all the money is in that solution being applied to problems where it’s not adequate (content generation, user interaction).
AI is an adequate solution to a problem that has no other similarly adequate solutions (classification of complex information).
Sentiment analysis systems and the like have been around since before LLMs and eat much less electricity.
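To make that concrete, here’s what a classic lexicon-based sentiment scorer looks like, a minimal sketch using NLTK’s VADER (a real, lightweight pre-LLM tool; the example text is made up):

```python
# Lexicon-based sentiment analysis (VADER, via NLTK): runs on a CPU in
# milliseconds, no GPU farm required. Purely illustrative.
import nltk
nltk.download("vader_lexicon", quiet=True)
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores("This product is surprisingly good!"))
# prints a dict of scores, e.g. {'neg': 0.0, 'neu': ..., 'pos': ..., 'compound': ...}
```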
LLMs have taken over the “AI” label so thoroughly that any success from machine learning gets attributed to them, while in practice they defund and kill off research across the rest of ML, funneling it all into LLMs.
It’s true that LLMs (and GANs) are taking over a term that contains a lot of other stuff, from fuzzy logic to a fair chunk of computational linguistics.
If you look at what AI does, however, it’s mostly classification. Whether it’s fitting imprecise measurements into categories or analyzing a waveform to figure out which word it represents regardless of diction and dialect, a lot of AI is just an attempt at classifying hard-to-classify stuff.
And then someone figured out how to hook that up to a Markov chain generator (LLMs) or run it repeatedly to “recognize” an image in pure noise (GANs). Those are cool little tricks, but not really ones that solve a problem that needed solving. Okay, I’ll grant that GANs make a few things in image retouching more convenient, but they’re also subject to a distressingly large number of failure modes and consume a monstrous amount of resources.
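(For anyone who hasn’t met one: a Markov chain generator just samples the next word from whatever followed the current word in its training text. A toy sketch of the idea, nothing like a real transformer under the hood:)

```python
# Toy word-level Markov chain text generator. Illustrative only.
import random
from collections import defaultdict

def build_chain(text):
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)  # record every observed successor of each word
    return chain

def generate(chain, start, length=10):
    word, out = start, [start]
    for _ in range(length):
        if word not in chain:
            break
        word = random.choice(chain[word])  # sample from observed successors
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_chain(corpus), "the"))
```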
Plus the whole thing where they’re destroying the concept of photographic and videographic evidence. I dislike that as well.
I really like AI when used for what it’s good at: taking messy input data and classifying it. We’re getting some really cool things done that way, and some even justify the resources we’re spending. But I do agree with you that the vast majority of funding and resources gets spent on the next glorified chatbot in the vague hope that this one will actually generate some kind of profit. (I don’t think that any of the companies invested in AI still actually believe their products will generate a real benefit for the end user.)
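That “messy input in, class label out” shape fits in a few lines. A toy sketch with scikit-learn on deliberately noisy synthetic data, purely illustrative:

```python
# "Messy input -> class label": a minimal classification sketch.
# Synthetic 2-D measurements with 10% mislabeled points get sorted
# into categories despite the noise.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=2, n_informative=2,
                           n_redundant=0, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"accuracy on held-out noisy data: {clf.score(X_test, y_test):.2f}")
```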
If you look at what AI does, however, it’s mostly classification.
Not necessarily; a huge use case is regulation and control in the engineering sense, not the political one. Think driverless cars, autonomously flying drones, and the like. And yeah, they need classification subsystems under the hood to work, but their ultimate outputs are complex control signals, not simple classes.
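A minimal, entirely made-up sketch of that layering: the stubbed classifier’s label is only an intermediate, and what comes out the other end is a continuous actuator command. None of these names or gains come from a real autonomy stack:

```python
# Hypothetical "perception feeds control" stub: a classifier labels the
# scene, a proportional controller turns that plus a continuous measurement
# into a steering command. Real stacks are vastly more involved.
def classify_obstacle(sensor_frame) -> str:
    # stand-in for a perception model; pretend it returns a class label
    return "pedestrian" if sensor_frame["blob_size"] > 40 else "clear"

def steering_command(lane_offset_m: float, label: str) -> float:
    K_P = 0.8                    # proportional gain (made-up value)
    cmd = -K_P * lane_offset_m   # steer back toward the lane centre
    if label == "pedestrian":
        cmd += 0.5               # bias away from the detected hazard
    return cmd                   # continuous signal, not a class

frame = {"blob_size": 55}
print(steering_command(0.3, classify_obstacle(frame)))  # a float, e.g. 0.26
```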
And don’t get me wrong, I also like ML and AI as a field, I just don’t like how OpenAI fucked the field with text generators that they got Silicon Valley to worship like gods. I even like LLMs, just not the grotesquely outsized cult around them.
Right. We’ve tried to slay ‘truth’ before and nothing else has worked. We’ve tried to end consciousness before, and we came up with a solution in the ’40s, but everyone was too chicken shit to use it, so we had to build something nastier.
There is no other solution for creating a shared, permissionless database.
Yes, and no one but crypto needs that. Everyone else is much better served by traditional databases.
Supply chains need that. Traditional databases can’t be used because there would have to be hundreds of them, one per party.
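For reference, the data structure everyone is arguing over is just a hash chain. A minimal sketch with no consensus, networking, or proof-of-work, purely illustrative:

```python
# Minimal hash-chained ledger: each block commits to the exact contents
# of its predecessor, so rewriting history breaks every later link.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev": prev})
    return chain

chain = append([], "genesis")
append(chain, {"shipment": "pallet-42", "from": "A", "to": "B"})

def verify(chain):
    return all(b["prev"] == block_hash(a) for a, b in zip(chain, chain[1:]))

print(verify(chain))           # True
chain[0]["data"] = "tampered"  # rewrite history...
print(verify(chain))           # ...and the chain no longer verifies: False
```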