Propaganda and marketing spin. Waymo also said its previous cars were safe, and it has still had multiple incidents.
It's utterly unacceptable that these companies have been allowed to beta-test their 2-ton vehicles and beta software on public streets.
Human drivers cause hundreds of incidents per day. Has anyone done an analysis to see whether Waymo's incident rate is better than the human incident rate? While we absolutely need to hold companies accountable, it's important to remember that autonomous vehicles don't need to be perfect to be an improvement. They just need to be better than humans, which is a rather low bar.
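For what it's worth, the comparison people usually mean here is incidents normalized by exposure, i.e. per mile driven. A minimal sketch of that arithmetic is below; all the numbers are made-up placeholders, not real Waymo or national crash statistics, so treat it only as an illustration of how the comparison would be set up.

```python
# Back-of-the-envelope comparison of incident rates per million miles driven.
# ALL numbers below are hypothetical placeholders, not real statistics.

def incidents_per_million_miles(incidents: int, miles_driven: float) -> float:
    """Normalize a raw incident count by exposure (miles driven)."""
    return incidents / (miles_driven / 1_000_000)

# Placeholder inputs -- substitute published figures to do this for real.
human_rate = incidents_per_million_miles(incidents=6_000_000,
                                          miles_driven=3_000_000_000_000)
waymo_rate = incidents_per_million_miles(incidents=150,
                                          miles_driven=20_000_000)

print(f"Human drivers: {human_rate:.2f} incidents per million miles")
print(f"Waymo fleet:   {waymo_rate:.2f} incidents per million miles")
print("Waymo looks better" if waymo_rate < human_rate else "Humans look better")
```

The key design point is that raw incident counts are meaningless on their own; both fleets have to be compared on the same per-mile (or per-trip) basis, ideally on comparable roads and conditions.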
Man, hard disagree. These systems have to be WAY better than humans to justify their huge costs. From a policy perspective, "better than humans" isn't good enough. And from a fiscal and legal perspective, it's disastrous. Companies need to achieve perfect or nearly perfect records to avoid being sued out of existence in products liability suits.
Also, just a friendly reminder that Cruise (competitor to Waymo) admitted that it had an average of 1.5 employees directing each so-called autonomous car. Waymo hasn't had to disclose those numbers yet, but it employs far more people than Cruise, so I think it's safe to assume that the number is not zero. As much as I want it to be true, this tech is nowhere close to actually autonomous yet. My suspicion is that true autonomous vehicles are still many decades away, due to computing power constraints, sensor fidelity, etc. https://www.nextbigfuture.com/2023/11/one-and-half-remote-cruise-employees-were-supporting-each-driverless-car.html
Couldn’t agree more. If all driving were easily predictable, then “just better than humans” would be reasonable. But in my decades of driving I’ve had to deal with so many edge cases that I seriously doubt true self-driving will exist until we develop true AI (not just the LLM stuff that’s currently all the rage) that can react to events that aren’t pre-programmed.
Just a few examples of things I’ve encountered:
Thanks for sharing your experience. Do you think there are currently more unhandleable edge cases than there are human drivers who are tired, drunk, or distracted?
My feeling is that autonomous vehicles will only get better from this point onward, whereas I don't foresee any appreciable improvement in human drivers. At what point do you think these lines will cross? 3 years? 8 years? 20 years?
Well, that’s the thing about edge cases: by definition they haven’t been explicitly taken into account by the programming in these cars. It is impossible to define them all, program responses for them, and test those responses under real-world conditions. For a self-driving car to handle real-world edge cases, it needs to be able to identify when one is happening and very quickly determine a safe response to it.
These cars may already be safer than drunk/drowsy drivers in optimal situations, but even a drowsy driver will likely respond safely if they encounter an unusual situation they’ve never seen before. At the very least, they’d likely slow down or stop until they can assess the situation and figure out how to proceed. Self-driving cars also need to be able to recognize completely new/unexpected situations and figure out how to proceed safely. I don’t think they will be able to do that without some level of human intervention until true AI exists, and we’re still many decades away from that.
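To make the "slow down or stop when you don't recognize the situation" idea concrete, here is a purely hypothetical sketch of that fallback logic. It is not how Waymo or any real driving stack is implemented; the `Perception` type, the confidence score, and the threshold are all illustrative assumptions.

```python
# Illustrative sketch of a "fall back to a minimal-risk manoeuvre when
# uncertain" policy, as described above. Hypothetical only -- not a real
# autonomous driving implementation.

from dataclasses import dataclass

@dataclass
class Perception:
    scene_confidence: float  # 0.0-1.0: how well the scene matches situations the system knows
    obstacle_ahead: bool

def plan_action(p: Perception, confidence_threshold: float = 0.8) -> str:
    """Pick a driving action, falling back to a cautious stop whenever the
    system cannot confidently classify the situation (an 'edge case')."""
    if p.scene_confidence < confidence_threshold:
        # Unrecognized situation: do what a cautious human would do --
        # slow down, stop if needed, and ask for (remote) human help.
        return "slow_and_stop_then_request_assistance"
    if p.obstacle_ahead:
        return "brake"
    return "continue"

print(plan_action(Perception(scene_confidence=0.95, obstacle_ahead=False)))  # continue
print(plan_action(Perception(scene_confidence=0.40, obstacle_ahead=False)))  # fallback
```

The hard part, of course, is not the fallback branch itself but reliably producing a trustworthy confidence signal for situations the system has never seen, which is exactly the point being argued above.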
We let distracted and inexperienced apes drive on public roads as well. I bet these drive better than them.