I’m sad that the relevant xkcd is kinda obsolete now (because it’s been long enough for that research team to finish doing its thing).
Google Photos is alarmingly good at object and individual recognition. It’ll probably be used by the droid-war killbots to distinguish “robot” from “human with bucket on head.”
Immich blows Google Photos completely out of the water on this, and it runs entirely locally.
And this is how you summon demons.
you linked to the page, not the image
My bad. I thought doing that with the page for xkcd worked. Could have sworn I’ve done that before.
What would be a “nearly impossible” task in this post-AI world? Short of the provably impossible tasks like the busy beaver problem (and even then, you would be able to make an algorithm that covers a subset of the problem space), I really can’t think of anything.
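The parenthetical about covering “a subset of the problem space” can be made concrete. The halting problem is undecidable in general, but a bounded simulation correctly answers it for every program that halts within the step budget, and just reports “unknown” for the rest. This is a toy sketch, not from the thread; all names are illustrative:

```python
def halts_within(step, state, max_steps=10_000):
    """Repeatedly apply a one-step transition function; return True if it
    reaches the halting state (None) within max_steps, else None ('unknown').
    This decides halting only for the subset of inputs that halt in time."""
    for _ in range(max_steps):
        state = step(state)
        if state is None:
            return True   # definitely halts
    return None           # undecided: may halt later, may loop forever

# Example "programs" expressed as one-step state transitions:
collatz = lambda n: None if n == 1 else (n // 2 if n % 2 == 0 else 3 * n + 1)
loop_forever = lambda n: n  # never reaches the halting state

print(halts_within(collatz, 27))       # True: 27 reaches 1 in 111 steps
print(halts_within(loop_forever, 1))   # None: the bounded check can't decide
```

The undecidability result only says no budget works for *all* programs, which is exactly the “subset” loophole the comment points at.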
I think 100% autonomous robotics and driving is still at least 5-10 years away, even with large research teams working on it. By that I mean truly robust AI that can handle any situation you throw at it with zero intervention needed.
Reliability. We can do pretty much anything… with a 5% success rate. Deep learning can take any input, approximate any function and generate the required output, but it’s only as good as the training set and most of them suck. Or it needs to be so large and complex that it’s not fast enough.
Yeah, of course. I think I was misunderstood, which is probably why I got so many downvotes.
Most tasks are possible (and often trivial, given access to the right library) with traditional programming. If it’s possible to do them this way, this is by far the best approach.
Of the things that are not reasonably doable this way, like determining whether a photo is of a bird as in the comic, quite a lot of them are possible nowadays with machine learning (AKA “AI”), and often trivial given access to the right pre-trained model. And in this realm, I would say success rates are very often higher than that. Image recognition is insanely good.
What I’m asking is, what’s a task that’s virtually impossible both with programming and with machine learning?
“Mission critical” tasks which require very high and provable reliability, such as self-driving cars, technically fit this question, but I think that misses the point of the question.
And if you were going to mention counterexamples where specially crafted images get mislabeled by AI: this is akin to attacking vulnerabilities in traditional software, which have always existed. If you’re making a low-stakes app or a game, this doesn’t matter.
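For anyone unfamiliar with the “specially crafted images” being referenced: the classic fast-gradient-sign idea is that a tiny, targeted perturbation can flip a model’s prediction. Here’s a hedged toy sketch on a plain linear classifier (NumPy only, no real model); every name and value is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)   # weights of a toy linear "image" classifier
x = rng.normal(size=784)   # an input the model currently labels one way

score = w @ x
label = score > 0

# Perturb each "pixel" by just enough, in the sign direction of the
# weights, to push the score past zero and flip the label.
eps = 2 * abs(score) / np.abs(w).sum() + 1e-9
x_adv = x - eps * np.sign(w) * (1 if label else -1)

print(label, (w @ x_adv) > 0)      # original and adversarial labels differ
print(float(np.max(np.abs(x_adv - x))))  # per-pixel change is tiny (== eps)
```

The per-pixel change is a small fraction of the input scale, which is why such inputs look unremarkable to a human; the analogy to crafted exploit inputs against traditional software holds up well.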
I think if we’re looking at it conceptually, it has to be something that is too complex to do with traditional heuristics reliably and also doesn’t allow us to generate enough data for good DL results.
There’s also liability to consider, for cases like airplanes and trains. Trains are dead simple to automate, but there needs to be someone there for long-tail events, to make people feel safer, and as a fall guy in case of accidents. So in practice it’s impossible to automate beyond subways, where you control the entire environment, despite the tech being fully capable of it. Same goes for airliners: they practically fly themselves, but you need two pilots there anyway, just in case.