Can we, like... maybe have some good (as in morally good) use cases for AI?
I know we had the medical diagnosis one; that was nice. Maybe some more like that?
I'm extremely skeptical of medical diagnosis AIs. Without being able to explain why it comes to a conclusion, how do we know it isn't just latching onto spurious correlations? One example I heard of recently was an AI that was extremely good at detecting TB... based on the age of the machine that took the x-ray. Because it turns out places with older machines tend to be poorer, and poorer places tend to have more TB.
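To make that concrete, here's a toy sketch (all data synthetic, scikit-learn; the "machine age" feature is made up to mirror the TB story) of how a model can score well off a confounder instead of the actual signal:

```python
# Synthetic toy: does the "diagnosis" come from the lungs or the machine?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
machine_age = rng.uniform(0, 30, size=n)      # poorer clinics: older machines...
tb = rng.uniform(size=n) < machine_age / 30   # ...and more TB
lung_signal = 0.5 * tb + rng.normal(size=n)   # weak genuine radiological signal

for name, feature in [("lung signal only", lung_signal),
                      ("machine age only", machine_age)]:
    acc = cross_val_score(LogisticRegression(),
                          feature.reshape(-1, 1), tb, cv=5).mean()
    print(f"{name}: {acc:.2f}")
# Machine age alone typically beats the real signal, so a model trained
# on both will happily lean on the confounder, then collapse at any
# hospital with new machines.
```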
The only positive use I can think of is time-saving measures. A researcher can feed a study to ChatGPT and have it write a rough first draft of the abstract. A Game Master could ask it for inspiration on the next few game sessions if they're underprepared. An internet commenter could ask it for a third example of how it could save time.
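The abstract thing really is just a few lines; a hedged sketch with OpenAI's Python SDK (the model name, prompt, and `study.txt` are all placeholders, and it expects `OPENAI_API_KEY` to be set):

```python
# Hedged sketch: rough first-draft abstract from a study's text.
# File name, model, and prompt wording are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
paper_text = open("study.txt").read()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system",
         "content": "You draft rough first-pass abstracts of research papers."},
        {"role": "user",
         "content": f"Draft a ~150-word abstract for this study:\n\n{paper_text}"},
    ],
)
print(resp.choices[0].message.content)  # a starting point to edit, not to submit
```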
But for anything serious, until it can explain why it reaches the conclusions it does, and can understand when a human says "no, you're doing it wrong," I can't see it being a real force for good.
Ehh... at least with AI we know we don't understand how it reached its conclusion. When you study human cognition long enough, you discover that our beliefs about how we reach our conclusions are just stories the conscious mind makes up to justify them after the fact.
"No, you're doing it wrong" isn't really a problem - it's fundamental to most ML processes.
What? Like ones that can quarantine you for being asymptomatic of seahorse flu, et al.? Great idea.
No, more like the ones that give early warning signs of, like, dementia or something.
It would be unethical for a calculator to offer up an alleged determination, given the nocebo effect. AI will never understand irony, let alone health and well-being...