this post was submitted on 24 May 2025
Technology
you are viewing a single comment's thread
Evidence, please.
I have literally been in thousands of driving incidents where a human has not randomly driven into a tree.
You are making a claim here: that these AI systems are safer than humans. There is at least one clear counterexample to your claim (the one I cited, https://youtu.be/frGoalySCns, if anyone wants to try to figure out what this AI was doing), and there are others, including cases where they have driven into the sides of tractor-trailers. I assume you will make an argument about aggregates, but the sample size we have for these AI driving systems is many orders of magnitude smaller than the sample size we have for humans. And having now watched years of these incidents pile up, I believe much more rigorous research and testing is needed before anyone can validly claim these systems are somehow safer.
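The sample-size point can be made concrete with a small sketch. This is a hedged illustration, not real fleet data: all the crash counts and mileage figures below are hypothetical, chosen only to show how a tiny AV sample produces a far wider uncertainty band on the per-mile crash rate than the enormous human-driving sample, even when the point estimates look similar.

```python
import math

def crash_rate_ci(crashes, miles, z=1.96):
    """Crash rate per million miles with an approximate 95% confidence
    interval, treating the crash count as Poisson (normal approximation)."""
    rate = crashes / miles * 1e6
    half_width = z * math.sqrt(crashes) / miles * 1e6
    return rate, max(rate - half_width, 0.0), rate + half_width

# Hypothetical figures, purely for illustration -- not real statistics.
human = crash_rate_ci(crashes=5_250_000, miles=3_000_000_000_000)  # huge sample
av = crash_rate_ci(crashes=60, miles=40_000_000)                   # tiny sample

for label, (rate, lo, hi) in (("human", human), ("AV", av)):
    print(f"{label}: {rate:.2f} per M miles (95% CI {lo:.2f} to {hi:.2f})")
```

With these made-up numbers the human interval is a fraction of a percent wide, while the AV interval spans a large chunk of the estimate itself, which is why aggregate comparisons from small AV fleets carry much less evidential weight.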
It’s all in how you combine the numbers, and yes, we need a lot more progress, but... when was the last time an AI caused a collision because it was texting? How often does a self-driving vehicle threaten or harm others with road rage?
I don’t know what the numbers are, but human driving sets a very low bar, so it’s easy to believe even today’s inadequate self-driving is safer.
This is the same anecdotal appeal we get over and over while AI cars drive into firetrucks and trees in ways even the most basic licensed driver would not. Then we are told these are safer because people text or become distracted. I am over this garbage. Get real numbers and find a way to do it that doesn't put me and my family at risk.
I always said this will be the problem. Self-driving cars will never be perfect. They’ll always have different failure modes than human drivers. So at what point is the increased safety worth the trade-off of new ways to die? Are we there yet?
At what point is it acceptable to the rest of us? Humans will always prefer the risk they know over the one they don’t, even when it’s objectively wrong
https://fuelarc.com/tech/can-teslas-self-driving-software-detect-bus-only-lanes-not-reliably-no/
edit: it's trivial to find examples of these systems utterly failing at basic driving. This isn't close to human performance, and it is obvious.
There are six SAE-defined levels of driving automation (0 through 5). At the lower levels of automation, the very article you are responding to quotes this evidence for you. Here is another article that goes deeper into it; I haven't read it all, so feel free to draw your own conclusions, but this data has been available and well reported for many years. https://www.consumeraffairs.com/automotive/autonomous-vehicle-safety-statistics.html