
Just to be clear, I do think the obvious solution to terrible things like this is vastly expanded public transit so that people don't have to rely on cars to get everywhere, not overhyped technology and driving aids that are still only marginally better than a human driver. I just thought the article was interesting.

[–] underline960@sh.itjust.works 10 points 23 hours ago* (last edited 18 hours ago) (5 children)

What technology?

Safety features like lane-keep assist, automatic emergency braking (AEB), and blind-spot detection...

... AI-powered traffic systems that predict and prevent accidents....

Impaired driving is also solvable. On-demand breathalyzers, smartphone saliva tests, and eye-tracking sensors... Uber is already testing real-time driver sobriety verification...

Why aren't we using it?

The article doesn't have an answer.

[–] andyburke@fedia.io 8 points 23 hours ago (2 children)

A Tesla in FSD just randomly veered off the road into a tree. There is video. It makes no sense; it's very difficult to work out why the AI thought that looked like a good move.

The tools this author says we have do not work the way people claim they do.

[–] AA5B@lemmy.world 1 points 22 hours ago (2 children)

Tesla gets telemetry that should show exactly what happened. We need to require that to be collected with each accident so someone can look for patterns and improvements.
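As a sketch of what a standardized per-incident record might contain (every field name here is a hypothetical illustration for the sake of argument, not Tesla's actual telemetry format or any regulator's schema):

```python
from dataclasses import dataclass, field

@dataclass
class CrashTelemetry:
    # All field names are illustrative guesses, not any real schema.
    vehicle_id: str
    timestamp_utc: str
    software_version: str
    autonomy_engaged: bool       # was the driving system active at impact?
    speed_mps: float
    steering_angle_deg: float
    brake_applied: bool
    detected_objects: list[str] = field(default_factory=list)

# A made-up example record for the kind of incident described above.
record = CrashTelemetry(
    vehicle_id="example-123",
    timestamp_utc="2025-05-24T12:00:00Z",
    software_version="fsd-x.y.z",
    autonomy_engaged=True,
    speed_mps=24.6,
    steering_angle_deg=-18.0,
    brake_applied=False,
    detected_objects=["tree", "lane_edge"],
)
print(record)
```

The point of standardizing something like this across manufacturers is that investigators could then look for patterns across incidents instead of relying on each company's self-reporting.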

But I’ll agree with the other guy that it’s still quite possible this is safer than human drivers already. It makes news because it seems like a ridiculous failure. But what happens when you compare it to the number of accidents caused by people falling asleep, getting distracted, or letting their rage out?

The critical data is the cost in human lives, and it’s quite possible for technology to fail spectacularly while saving lives overall

[–] aesthelete@lemmy.world 3 points 21 hours ago

Tesla self-driving failures are in a class of their own because the asshat in charge didn't want to outfit the cars with the needed sensors to provide reasonable self-driving capabilities.

[–] andyburke@fedia.io 2 points 21 hours ago

Get the data. Get it without putting me and my family at risk.

[–] MonkRome@lemmy.world -2 points 23 hours ago (1 children)

They only have to work better and more consistently than humans to be a net positive, which I believe most of these systems already do by a wide margin. Psychologically it's harder to accept a mistake from technology than from a human because of the lack of control, but if the goal is to save lives, these safety systems accomplish that.

[–] andyburke@fedia.io 4 points 23 hours ago (2 children)

Evidence, please.

I have literally been in thousands of driving situations where a human has not randomly driven into a tree.

You are making a claim here: that these AI systems are safer than humans. There is at least one clear counterexample to your claim in existence (which I cited - https://youtu.be/frGoalySCns if anyone wants to try to figure out what this AI was doing), and there are others, including ones where they have driven into the sides of tractor trailers.

I assume you will make an argument about aggregates, but the sample size we have for these AI driving systems relative to the sample size we have for humans is many orders of magnitude different. And having now seen years of these incidents continuing to pile up, I believe there needs to be much more rigorous research and testing before you can make valid claims that these systems are somehow safer.
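To make the sample-size point concrete, here's a rough back-of-the-envelope sketch. The crash counts and mileages below are invented placeholders, not real data, and the Poisson normal approximation is crude for small counts:

```python
import math

def crash_rate_ci(crashes: int, miles: float, z: float = 1.96):
    """Approximate 95% CI for a crash rate per million miles,
    treating the crash count as Poisson (normal approximation)."""
    per_million = 1e6 / miles
    rate = crashes * per_million
    half_width = z * math.sqrt(crashes) * per_million
    return rate, rate - half_width, rate + half_width

# Made-up placeholder numbers, NOT real data:
fleets = [
    ("Small AI fleet", 3, 50e6),          # 3 crashes in 50M miles
    ("All human drivers", 60_000, 1e12),  # 60k crashes in 1T miles
]
for label, crashes, miles in fleets:
    rate, low, high = crash_rate_ci(crashes, miles)
    print(f"{label}: {rate:.3f}/1M miles (95% CI ~ {low:.3f} to {high:.3f})")
```

Both fleets here have the same point estimate, but the small fleet's interval is so wide that "safer than humans" can't be distinguished from "several times worse" - which is the aggregates problem in a nutshell.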

[–] AA5B@lemmy.world 1 points 22 hours ago* (last edited 22 hours ago) (1 children)

It’s all in how you combine the numbers, and yes, we need a lot more progress, but… when was the last time an AI caused a collision because it was texting? How often does a self-driving vehicle threaten or harm others with road rage?

I don’t know what the numbers are, but human driving sets a very low bar, so it’s easy to believe even today’s inadequate self-driving is safer.

[–] andyburke@fedia.io -1 points 21 hours ago (1 children)

This is the same anecdotal appeal we get over and over while AI cars drive into firetrucks and trees in ways even the most basic licensed driver would not. Then we are told these are safer because people text or become distracted. I am over this garbage. Get real numbers and find a way to do it that doesn't put me and my family at risk.

[–] AA5B@lemmy.world 0 points 17 hours ago* (last edited 17 hours ago) (1 children)

I always said this would be the problem. Self-driving cars will never be perfect. They’ll always have different failure modes than human drivers. So at what point is the increased safety worth the trade-off of new ways to die? Are we there yet?

At what point is it acceptable to the rest of us? Humans will always prefer the risk they know over the one they don’t, even when it’s objectively wrong

[–] andyburke@fedia.io 0 points 13 hours ago

https://fuelarc.com/tech/can-teslas-self-driving-software-detect-bus-only-lanes-not-reliably-no/

edit: it's trivial to find examples of these systems utterly failing at basic driving. This isn't close to human performance, and that's obvious.

[–] MonkRome@lemmy.world 1 points 22 hours ago

There are six SAE-defined levels of driving automation (Levels 0–5). At the lower levels of automation, the very article you are responding to quotes this evidence for you. Here is another article that goes deeper into it; I haven't read it all, so feel free to draw your own conclusions, but this data has been available and well reported on for many years. https://www.consumeraffairs.com/automotive/autonomous-vehicle-safety-statistics.html
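For reference, a quick sketch of the SAE J3016 levels being referred to, with paraphrased one-line summaries rather than the standard's exact wording:

```python
# Paraphrased one-line summaries of the SAE J3016 levels;
# see the standard itself for the actual definitions.
SAE_LEVELS = {
    0: "No driving automation (warnings/momentary assistance only)",
    1: "Driver assistance: steering OR speed support",
    2: "Partial automation: steering AND speed support, driver supervises",
    3: "Conditional automation: drives itself in limited conditions, "
       "driver must take over on request",
    4: "High automation: no driver needed within a defined operating domain",
    5: "Full automation: drives anywhere a human driver could",
}

for level, summary in sorted(SAE_LEVELS.items()):
    print(f"Level {level}: {summary}")
```

Most of the safety statistics cited in these articles cover Level 1–2 driver-assistance features, not the Level 4–5 systems the thread is arguing about.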

[–] 0x0@lemmy.zip 4 points 23 hours ago

Lane assist... fine.
Auto-brake? Fuck no.
AI?! ffs

[–] postmateDumbass@lemmy.world 1 points 19 hours ago

Because of how it will go once everyone assumes the car they're merging in front of will auto-brake if they just go for it.

[–] catloaf@lemm.ee 1 points 22 hours ago

It does. It says it's optional, only in new cars, and it costs extra money, which anyone with half a brain could have told you.