A decent camera-only vision system should still be able to detect the wall. I was honestly shocked that Tesla failed this test so egregiously.
If you use two side-by-side cameras, you can determine the distance to a feature by calculating the offset in the feature's position between the two camera images. I had always assumed this was how Tesla planned to achieve camera-only FSD, but that would make too much sense. https://www.intelrealsense.com/stereo-depth-vision-basics/
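The stereo math is just similar triangles: depth is focal length times camera baseline divided by the pixel offset (disparity). A minimal sketch, with made-up numbers for the focal length and baseline (real systems also need calibration, rectification, and feature matching):

```python
# Toy stereo depth-from-disparity calculation.
# Z = f * B / d, where f is the focal length in pixels, B is the
# baseline between the two cameras in meters, and d is the disparity
# (pixel offset of the same feature between the two images).

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance in meters to a feature seen by both cameras."""
    if disparity_px <= 0:
        # Zero disparity means the feature is effectively at infinity
        # (or the match is bad); depth is undefined.
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline.
# A feature shifted 20 px between the two images:
print(depth_from_disparity(700.0, 0.12, 20.0))  # 4.2 m away
```

Note the inverse relationship: distant objects produce tiny disparities, so depth precision degrades with range, which is one reason a wide baseline helps.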
Even if they wanted to avoid any redundant hardware and go with only one camera per direction, there is still a chance they could have avoided this kind of issue by using structure from motion, but that is much harder to do if the objects themselves could be moving. https://en.m.wikipedia.org/wiki/Structure_from_motion
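Structure from motion treats two frames from one moving camera like a stereo pair, with the camera's own motion as the baseline. A toy 1-D sketch (all numbers made up) of why the static-scene assumption matters:

```python
# Toy illustration of structure from motion and its failure mode.
# One camera moves sideways by ego_motion_m between two frames, then
# triangulates exactly like a stereo pair: Z = f * baseline / disparity.
# The catch: the "baseline" is only correct if the scene is static.

def sfm_depth(focal_px: float, ego_motion_m: float, disparity_px: float) -> float:
    """Depth estimate assuming the observed point did not move."""
    return focal_px * ego_motion_m / disparity_px

focal = 700.0    # focal length in pixels (assumed)
ego = 0.5        # camera translated 0.5 m between the two frames
true_depth = 10.0  # object is actually 10 m away

# Static object: its disparity comes purely from ego motion.
disp_static = focal * ego / true_depth  # 35 px
print(sfm_depth(focal, ego, disp_static))  # 10.0 -> correct

# Moving object: it also shifted 0.25 m between frames, so the true
# relative baseline was 0.75 m, and the observed disparity is larger.
disp_moving = focal * (ego + 0.25) / true_depth  # 52.5 px
print(sfm_depth(focal, ego, disp_moving))  # ~6.67 -> depth is wrong
```

With a second camera the baseline is fixed hardware, so both views are captured at the same instant and object motion cancels out, which is exactly the redundancy the comment above is pointing at.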
It’s weird: an optical sensor should fall for this, but LiDAR detects objects in 3D.
Teslas famously don’t use lidar because Musk declared that cameras were good enough. Reality disagrees, but reality owns no shares of Tesla.
And then he disabled the existing radar sensors in Teslas, so his team could just focus on camera-only vision.
He’s a dumbass