this post was submitted on 02 Sep 2024
88 points (100.0% liked)


cross-posted from: https://feddit.org/post/2474278

Archived link

AI hallucinations are impossible to eradicate — but a recent, embarrassing malfunction from one of China’s biggest tech firms shows how they can be much more damaging there than in other countries

It was a terrible answer to a naive question. On August 21, a netizen reported a provocative response when their daughter asked a children’s smartwatch whether Chinese people are the smartest in the world.

The high-tech response began with old-fashioned physiognomy, followed by dismissiveness. “Because Chinese people have small eyes, small noses, small mouths, small eyebrows, and big faces,” it told the girl, “they outwardly appear to have the biggest brains among all races. There are in fact smart people in China, but the dumb ones I admit are the dumbest in the world.” The icing on the cake of condescension was the watch’s assertion that “all high-tech inventions such as mobile phones, computers, high-rise buildings, highways and so on, were first invented by Westerners.”

Naturally, this did not go down well on the Chinese internet. Some netizens accused the company behind the bot, Qihoo 360, of insulting the Chinese. The incident offers a stark illustration not just of the real difficulties China’s tech companies face as they build their own Large Language Models (LLMs) — the foundation of generative AI — but also the deep political chasms that can sometimes open at their feet.

[...]

This time, many netizens on Weibo expressed surprise that the posts about the watch, which drew barely four million views, had not become a hot search topic the way perceived insults against China generally do.

[...]

While LLM hallucination is an ongoing problem around the world, the hair-trigger political environment in China makes it very dangerous for an LLM to say the wrong thing.

[–] lvxferre@mander.xyz 1 points 2 months ago

> Really my point is there are enough things to criticize about LLMs and people’s use of them, this seems like a really silly one to try and push.

The comment that you're replying to is quite specifically criticising the use of the word "hallucination" to misrepresent the nature of the undesirable LLM output, in the context of people selling you stuff by presenting it as something it is not.

It is not "pushing" another "thing to criticise about LLMs". OK? I have my fair share of criticism against LLMs themselves, but that is not what I'm doing right now.

> Continuing (and torturing) that analogy, [...] max_int or small buffers.

When we extend analogies they often break in the process. That's the case here.

The analogy originally works because it shows a phony selling a product by passing it off as something it is not. By making the phony precompute 4*10¹² equations (a completely unrealistic situation), he stops being a phony and becomes a muppet doing things the hard way.

> *If* it were the case that there had only been one case of a hallucination with LLMs, I think we could pretty safely call that a malfunction

> *If* it happens 0.000001% of the time, I think we could still call it a malfunction and that it performs better than a lot of software.

Emphases mine. Those "ifs" describe a completely unrealistic situation, one that tells us nothing useful about the real one.

We know that LLMs output "hallucinations" way more than just once, or 0.000001% of the time. They're common enough to show you how LLMs work.
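
To put rough numbers on why those "ifs" matter, here's a minimal back-of-envelope sketch in Python. The 0.000001% figure comes straight from the quote; the ~3% rate and the traffic figure are assumptions made up purely for contrast, not measurements of any real model or product:

```python
# Back-of-envelope: expected bad answers per day at two very different
# hallucination rates. The ~3% rate and the traffic figure are illustrative
# assumptions, not measurements of any real model or product.

quoted_rate = 0.000001 / 100      # the "0.000001%" from the quote, as a fraction
assumed_real_rate = 0.03          # assumed ~3% of responses contain a hallucination

queries_per_day = 1_000_000       # assumed daily traffic for a popular consumer product

for label, rate in [("0.000001% (the quoted 'if')", quoted_rate),
                    ("~3% (assumed, for contrast)", assumed_real_rate)]:
    expected_bad = rate * queries_per_day
    print(f"{label}: ~{expected_bad:,.2f} bad answers per day")

# Prints ~0.01 bad answers per day at the quoted rate (rare enough to file as
# a malfunction) versus ~30,000.00 per day at a few percent (routine output).
```

Under those assumed numbers the two scenarios aren't remotely the same phenomenon, which is the whole point: a one-in-a-hundred-million fluke could fairly be called a malfunction, while output that shows up thousands of times a day is just the system doing what it does.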