this post was submitted on 01 Nov 2024
27 points (100.0% liked)
[Dormant] Electric Vehicles
A community for the sharing of links, news, and discussion related to Electric Vehicles.
Like I said, the argument is that if AI vision is actually solved, then adding LIDAR on top is like walking around with perfect vision while still carrying a white cane.
LIDAR's true strength isn't even that useful for driving at speed. LIDAR is extremely precise, which is handy for parking perhaps, but when driving at 50 km/h or faster, does it really matter whether the object ahead is 30.34 m away or 30.38 m?
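A quick back-of-the-envelope check of that claim. This is a minimal sketch using a simple constant-deceleration braking model (the 7 m/s² figure is an illustrative assumption, not from the comment):

```python
# How much does a 4 cm range error matter at 50 km/h?
# Assumes constant-deceleration braking (illustrative only).

def stopping_distance_m(speed_kmh: float, decel_ms2: float = 7.0) -> float:
    """Distance to brake to a stop: v^2 / (2a)."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v ** 2 / (2 * decel_ms2)

v_kmh = 50.0
v_ms = v_kmh / 3.6
range_error_m = 30.38 - 30.34  # the 4 cm difference from the comment

print(f"At {v_kmh} km/h the car covers {v_ms:.1f} m every second")
print(f"A 4 cm range error is crossed in {range_error_m / v_ms * 1000:.1f} ms")
print(f"Stopping distance is roughly {stopping_distance_m(v_kmh):.1f} m")
```

The 4 cm is crossed in about 3 ms of travel, against a stopping distance of roughly 14 m, which is the point: centimetre precision is noise at highway speeds.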
Also, the main problem with LIDAR is that it really doesn't see any more than cameras do. It uses light, or near-visible light, so it basically gets blocked by the same things that a camera gets blocked by. When heavy fog easily fucks up both cameras and LIDAR at the same time, that's not really redundancy.
I'd like to see redundancy provided by multiple systems that work differently: advanced high-resolution radar, thermal imaging, etc. But you still need vision and AI regardless: the ability to identify what an object is and predict its likely actions, not simply measure its size and distance.
The spinning lidar sensors mechanically shed occlusions like raindrops and dust, too. And one important thing about lidar is that it actively emits its own laser pulses, making it a two-way operation, like driving with headlights rather than relying on sunlight.
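That two-way operation is what makes lidar ranging so direct: the sensor times its own pulse, so distance falls straight out of the speed of light rather than being inferred from a passive image. A minimal sketch of the time-of-flight calculation:

```python
# Lidar range from a pulse's round-trip time: d = c * t / 2.
# The factor of 2 is because the pulse travels out and back.

C = 299_792_458.0  # speed of light in m/s

def tof_range_m(round_trip_s: float) -> float:
    """Range to a target from the measured round-trip time of one pulse."""
    return C * round_trip_s / 2

# A return from an object ~30 m away comes back in about 200 nanoseconds:
t = 2 * 30.0 / C
print(f"round trip: {t * 1e9:.1f} ns -> range {tof_range_m(t):.2f} m")
```

Those ~200 ns timescales are also why a lidar can fire many pulses per scan line and still spin fast enough to clear debris mechanically.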
Waymo's approach appears to differ in a few key ways:
There's a school of thought that because many of these would need to be eliminated for true level 5 autonomous driving, Waymo is in danger of walking down a dead end that never reaches the destination. But another take is that this is akin to scaffolding during construction: it serves an important function while the permanent structure goes up, but can be taken down afterward.
I suspect the lidar/radar/ultrasonic/extra cameras will be most useful for training the models needed to reduce reliance on human intervention, and maybe eventually for reducing the number of sensors. Not just by increasing the quantity of training data, but by serving a filtering/screening function that improves the quality of the data fed into training.
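One way that filtering idea could look in practice: use the lidar measurement as ground truth to score a camera-only depth estimate, and keep only the frames where the two agree within a tolerance. This is a hedged sketch; the `Frame` type, field names, and the 0.5 m threshold are all hypothetical, not anything Waymo has described:

```python
# Hypothetical sketch: lidar as a quality filter for vision training data.
# Keep frames where lidar confirms the camera depth estimate.

from dataclasses import dataclass

@dataclass
class Frame:
    camera_depth_m: float  # depth predicted from vision alone (hypothetical field)
    lidar_depth_m: float   # measured lidar depth for the same object

def select_training_frames(frames: list[Frame], max_err_m: float = 0.5) -> list[Frame]:
    """Keep frames where the vision estimate is within max_err_m of lidar."""
    return [f for f in frames if abs(f.camera_depth_m - f.lidar_depth_m) <= max_err_m]

frames = [Frame(29.8, 30.0), Frame(41.2, 35.0), Frame(12.1, 12.3)]
good = select_training_frames(frames)
print(len(good))  # the 41.2 m vs 35.0 m disagreement is filtered out
```

The interesting design question is whether you keep the agreeing frames (clean supervision) or the disagreeing ones (hard examples worth extra attention); either way the extra sensors earn their keep in the training loop rather than the deployed car.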