this post was submitted on 17 Nov 2024
192 points (97.5% liked)
Technology
IMO, there's no such thing as responsible AI use. All of the uses so far are bad, and I can't see any that would work as well as a trained human. Even worse, there's zero accountability; when an AI makes a mistake and gets people killed, no executives or programmers will ever face any criminal charges because the blame will be too diffuse.
I'm no AI enthusiast, but this is clear hyperbole. Of course there are uses for it; it's not magic, it's just technology. You'll have been using some of them for years before the AI fad came along and started labelling everything.
Translation services are a good example. Google Translate and Bing Translate have both been using machine learning neural networks as their core technology for a decade and more. There's no other way of doing it that produces anything close to as good a result. And yes, paying a human translator might get you good results too, but realistically that's not a competitive option for the vast majority of uses (nobody is paying a translator to read restaurant menus or train station signage to them).
This whole AI assistant fad can do one as far as I'm concerned, but the technologies behind the fad are here to stay.
There are valid uses for AI. It is much better at pattern recognition than people. Apply that to healthcare and it could be a paradigm shift in early diagnosis of conditions that doctors wouldn't think to look for until more noticeable symptoms occur.
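The "pattern recognition" described above is, at bottom, classification: learn what measurements typically look like for each group, then assign new cases to the nearest group. A minimal stdlib-only sketch of that idea, using a nearest-centroid classifier on invented data (all vectors and labels here are made up for illustration; real diagnostic models are trained and validated on large clinical datasets):

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Return the label whose centroid is closest to the sample."""
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

# Hypothetical training data: two measurements per "patient".
healthy = [[1.0, 0.9], [1.1, 1.0], [0.9, 1.1]]
at_risk = [[2.0, 2.1], [2.2, 1.9], [1.9, 2.0]]

centroids = {"healthy": centroid(healthy), "at_risk": centroid(at_risk)}
print(classify([2.1, 2.0], centroids))  # nearest centroid is "at_risk"
```

The point of the toy is the shape of the technique, not its power: the clinical systems the comment alludes to replace the hand-built centroids with learned models over thousands of features, which is exactly where machines outpace human pattern-spotting.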
There is no gray. Only black and white!
So who should be held accountable when (mis)use of AI results in a needless death? Or worse?
Let's say a company creates an AI taxi that runs you over leaving you without legs. Who are you going to sue?
"Oh it's grey, so I'll have a dollar from each shareholder." That doesn't sound right to me.
I hate AI as much as the next AI-sceptic, but that argument is just nonsense. We already have plenty of machinery and other company-owned assets that can injure a human being without any direct human intervention causing the injury. A telephone pole rotting through and falling on someone would be a legally similar situation.
Who's getting killed because of the "translate page" button in my browser?
The "translate page" button in my browser is evil? Get a grip.