this post was submitted on 10 Sep 2023
88 points (80.1% liked)
Technology
Sucks if you're one of the 10-20% who don't get proper treatment (maybe die?) because some doctor doesn't have time to double-check. But hey ... efficiency!
Yeah, that's a fundamental misunderstanding of percentages. For an analogous situation we're all more intuitively familiar with: a self-driving car that is 99.9% accurate at detecting obstacles still crashes into one in every thousand people and/or things it encounters. That sucks.
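To make the arithmetic concrete, here's a minimal sketch (the one-million-detections figure is a hypothetical, not from the comment above) of how a tiny error rate still adds up over many independent events:

```python
# A 99.9% accuracy rate sounds great until you multiply it out.
accuracy = 0.999
detections = 1_000_000  # hypothetical number of obstacle detections

# Expected number of missed obstacles at that accuracy.
expected_misses = round(detections * (1 - accuracy))
print(expected_misses)  # 1,000 misses per million detections
```

The same logic applies to diagnoses: a 0.1% failure rate across millions of patients is a lot of people.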
Also, most importantly, LLMs are incapable of collaboration, something very important in any complex human endeavor but difficult to measure, and therefore undervalued by our inane, metrics-driven business culture. ChatGPT won't develop meaningful, mutually beneficial relationships with its colleagues, who can ask each other for their thoughts when they don't understand something. It'll just spout bullshit when it's wrong, not because it doesn't know, but because it has no concept of knowing at all.
It really needs to be pinned to the top of every single discussion around ChatGPT:
It does not give answers because it knows. It gives answers because it thinks it looks right.
Remember back in school when you didn't study for a test and went through picking answers that "looked right" because you vaguely remember hearing the words in Answer B during class at some point?
It will never have wisdom and intuition from experience, and that's critically important for doctors.
“Looks right” in a human context means the one that matches a person’s actual experience and intuition. “Looks right” in an LLM context means the series of words have been seen together often in the training data (as I understand it, anyway - I am not an expert).
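A toy illustration of that "seen together often" idea (a bigram counter, vastly simpler than a real LLM; the corpus here is made up): pick the next word that most often followed the current one in the "training" text.

```python
from collections import Counter

# Made-up "training data" for illustration only.
corpus = "the patient has a fever the patient needs rest the doctor".split()

# Count how often each pair of adjacent words occurs.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word(word):
    # Of all words ever seen after `word`, return the most frequent.
    candidates = {b: c for (a, b), c in bigrams.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

print(next_word("the"))  # "patient" followed "the" most often
```

Nothing in there knows what a patient or a fever is; it only knows which words tend to co-occur. Real LLMs are enormously more sophisticated, but the point stands: it's pattern frequency, not understanding.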
Doctors are most certainly not choosing treatment based on what words they’ve seen together.
Or one of the ninety-nine percent of people who don't give the AI their symptoms in medical terminology.