Soon it will be undeniable that FSD is the most advanced and safest self-driving system on the planet, and the haters will just have to accept it. That's a good thing, because it's literally saving human lives.
You might not like Teslas as vehicles, or the company itself because of its CEO, but the fact is that no other manufacturer offers an equally capable self-driving system, and that's unlikely to change anytime soon. If you haven't been following this technology closely: they just switched from human-written code to 100% neural nets, and the difference between V11 and V12 cannot be overstated. It's still not perfect and probably never will be, but it's really good, and there's a good chance it's already a safer driver than the average human. This is all done using only cameras. No radar, no LiDAR.
There are also rumors going around that Ford is about to license the FSD software for their own vehicles, and others are likely to follow, so it won't just be Teslas using it.
Literally today a Tesla on autodrive killed a person on a motorcycle.
The reason no one else offers it is that it's half-baked, unfinished, permanently in beta, and not safe to be on real roads.
Driving has 10,000 edge cases, every one of them needs to be tested and handled 100% correctly, and because of that self-driving is still a long way off. Real car manufacturers know that, and know what Tesla is actually selling - a gimmick.
This is the "devil is in the details" problem, and it's why autodriving isn't there yet.
Writing and validating code for every one of those edge cases in the office/lab via simulation has got to be a nightmare, and there's no way it can be complete before release.
Exactly. Anyone who has ever coded anything professionally knows how intense a problem like this is. There's a reason that no one, not even Google, Microsoft, or Apple, has successfully done it. They may still be researching it, but to think it's a simple problem that can be handwaved away with AI and models is incredibly naive.
AI is just probability: this picture is probably a dog, with over 90% accuracy. Which is great when you're classifying cats and dogs - but driving means making real-time, live determinations, and that's a completely different problem set. Now we need the AI to predict, with a much higher probability, that there is a person in the street, or the street is dividing, or there is a construction zone, or the car ahead is starting to slow down, or… 10,000 other edge cases.
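To make that concrete, here's a minimal sketch of what a probabilistic classifier's output actually looks like (made-up numbers and class names, plain numpy, nothing to do with any real FSD stack): you don't get "dog" back, you get a distribution over classes and then act on the most likely one.

```python
# Minimal sketch with made-up numbers: a vision model outputs raw scores,
# softmax turns them into probabilities, and the system acts on the top class.
import numpy as np

def softmax(logits):
    exps = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return exps / exps.sum()

classes = ["cat", "dog", "person", "construction_cone", "road_edge"]
logits = np.array([1.2, 4.0, 0.3, 0.1, -0.5])  # hypothetical raw model scores

probs = softmax(logits)
top = int(np.argmax(probs))
print(f"{classes[top]} with p = {probs[top]:.2f}")  # -> "dog with p = 0.90"

# A 90% "dog" is fine for sorting photos. A 90% "person in the roadway"
# that's wrong the other 10% of the time is a safety failure, not a typo.
```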
I still don't understand how self-driving vehicles hit things. Job #1 is don't hit things. If they could just do that, they'd be much better than human drivers.
Usually they misidentify the boundaries of the road, or objects moving on and off the road.
That's why Tesla moved away from human-written code and instead uses neural nets trained on video of good human drivers. The point about edge cases still stands, but the advantage Tesla has over most other manufacturers is that there's already a ton of people using FSD and reporting those edge cases back to Tesla so they can be fixed.
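For anyone curious what "neural nets trained on video of good drivers" roughly means, here's a toy behavior-cloning sketch in PyTorch. This is a generic illustration under my own assumptions, not Tesla's actual architecture or training setup: a small network maps camera frames to control outputs and is trained to match what the human driver did in that moment.

```python
# Hypothetical, simplified behavior-cloning loop: camera frames in,
# steering/throttle out, trained to imitate the recorded human controls.
import torch
import torch.nn as nn

class TinyDrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # [steering, throttle]

    def forward(self, frames):
        return self.head(self.backbone(frames))

model = TinyDrivingNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Fake batch standing in for (camera frame, what the human driver did).
frames = torch.randn(8, 3, 120, 160)
human_controls = torch.randn(8, 2)

optimizer.zero_grad()
pred = model(frames)
loss = loss_fn(pred, human_controls)  # "drive like the human did here"
loss.backward()
optimizer.step()
```

Imitating average footage like this is the easy part; the argument in this thread is about the long tail of rare situations that never show up often enough in the training video.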
There's also a possibility that human-written code plus radar/LiDAR is a dead end, and by the time others realize this, Tesla will be so far ahead that the rest can't catch up.
Tesla doesn't have a system called autodrive. They have Autopilot and Full Self Driving, and I'm speaking of the latter.
Self-driving cars are not going to entirely stop traffic accidents either. Even with a system that's a 10x better driver than a human, there are still going to be 3,000 deaths due to traffic accidents in the US alone.
Sorry I didn’t read their marketing briefs. Autopilot and Full Self Driving killed a person today. Sorry, I hope that comes off better to you now.
According to the news article, he was using Autopilot. This topic is about FSD. They're different systems.