
[–] rho50@lemmy.nz 25 points 7 months ago (3 children)

I know of at least one other case in my social network where GPT-4 identified a gas bubble in someone's large bowel as "likely to be an aggressive malignancy," leading said person to fully expect they'd be dead by July, when in fact they were perfectly healthy.

These things are not ready for prime time, and they're certainly not capable of doing what most people think they can.

The misinformation is causing real harm.

[–] B0rax@feddit.de 12 points 7 months ago (1 child)

To be honest, it isn't made to diagnose medical scans, and it isn't supposed to be. There are separate AIs trained specifically for that purpose, and they're usually not public.

[–] rho50@lemmy.nz 6 points 7 months ago

Exactly. So the organisations creating and serving these models need to be clearer about the fact that they're not general-purpose intelligence; they're contextual language generators.
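
To make that concrete, here's a toy sketch of what "contextual language generator" means in practice: the model just scores possible next tokens given the text so far and emits one. The vocabulary, random scores, and function names below are invented for illustration; no real model works this way off an eight-word vocabulary.

```python
# Toy illustration of a "contextual language generator": all it does is
# repeatedly pick a next token given the text so far. Nothing in this
# loop knows medicine; fluent output is plausibility, not diagnosis.
# VOCAB, the scores, and these function names are invented for
# illustration; they don't correspond to any real model or API.
import random

VOCAB = ["the", "scan", "shows", "a", "benign", "lesion", "finding", "."]

def next_token_scores(context: list[str]) -> list[float]:
    # A real model computes these scores from learned weights over the
    # whole context; random stand-ins make the point that the loop
    # produces plausible-sounding text, not verified facts.
    return [random.random() for _ in VOCAB]

def generate(prompt: list[str], steps: int = 6) -> list[str]:
    tokens = list(prompt)
    for _ in range(steps):
        scores = next_token_scores(tokens)
        tokens.append(VOCAB[scores.index(max(scores))])  # greedy pick
    return tokens

print(" ".join(generate(["the", "scan", "shows"])))
```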

I've seen demos of models used as actual diagnostic aids, and they're not LLMs (plus they require a doctor to verify the result).
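
By contrast, and purely as a hypothetical sketch (the architecture, class names, and threshold below are invented, not taken from any real product), a purpose-built diagnostic aid tends to be a narrow classifier that outputs a probability and routes anything suspicious to a clinician rather than issuing a verdict:

```python
# Hypothetical sketch of a narrow diagnostic aid: a small image
# classifier that outputs a probability, with every flagged case
# routed to a human radiologist. The architecture, names, and the
# 0.5 threshold are illustrative assumptions, not a real system.
import torch
import torch.nn as nn

class TinyScanClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # one grayscale channel in
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, 1),  # single logit: suspicious finding vs. not
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))  # probability of a finding

def triage(prob: float, threshold: float = 0.5) -> str:
    # The model never diagnoses; it only decides whether a human looks.
    if prob >= threshold:
        return f"flag for radiologist review (p={prob:.2f})"
    return f"below review threshold (p={prob:.2f})"

model = TinyScanClassifier().eval()
scan = torch.rand(1, 1, 64, 64)  # stand-in for a preprocessed scan slice
with torch.no_grad():
    print(triage(model(scan).item()))
```

The point of the threshold isn't the specific number; it's that the system's only output action is "ask a human", which is exactly the verification step missing when someone pastes a scan into a chatbot.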
