We need [Gen AI] literacy like media literacy
(jamesg.blog)
They do, because the "layers" you're talking about (feed-forward, embedding, attention layers, etc.) are still handling tokens and their relationships, and nothing else. LLMs were built for that.
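To make the point concrete, here's a toy sketch (not a real LLM, and the embedding values are made up for illustration): every layer's input and output is derived purely from integer token IDs. There is no separate "concept" channel anywhere in the pipeline.

```python
import math

VOCAB_SIZE, DIM = 8, 4

# Hypothetical fixed embedding table: one vector per token ID.
embed = [[math.sin(t * DIM + d) for d in range(DIM)]
         for t in range(VOCAB_SIZE)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(token_ids):
    """Single-head dot-product attention over the embedded tokens."""
    vecs = [embed[t] for t in token_ids]  # token IDs -> vectors
    out = []
    for q in vecs:
        # Each output vector is a weighted mix of the input vectors,
        # weighted by how strongly the tokens relate to each other.
        scores = softmax([sum(a * b for a, b in zip(q, k)) for k in vecs])
        out.append([sum(w * v[d] for w, v in zip(scores, vecs))
                    for d in range(DIM)])
    return out

# The model only ever sees token IDs; whatever "meaning" emerges is
# encoded in the statistical relationships between those tokens.
result = attention([1, 5, 2])
print(len(result), len(result[0]))  # 3 4
```

Real transformers stack many such layers with learned weights, but the shape of the computation is the same: tokens in, token-derived vectors out.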
This is like saying "we don't know, so let's assume that it doesn't matter". It does matter, as shown.
I'm quoting out of order because this is relevant: by default, h₀ is always "the phenomenon doesn't happen", "there is no such attribute", "this doesn't exist", things like this. It's scepticism, not belief; otherwise we're committing a fallacy known as "inversion of the burden of proof".
In this case, h₀ should be that LLMs do not have the ability to handle concepts. That said:
If you can show an LLM chatbot that never hallucinates, even when we submit prompts designed to make it go nuts, that would be decent albeit inductive evidence that the chatbot in question is handling more than just tokens/morphemes. Note: it would not be enough to show that the bot got it right once or twice; you need to show that it consistently gets it right.
If necessary/desired I can pull out some definition of hallucination to fit this test.
EDIT: it should also show some awareness of the contextual relevance of the tidbits of information that it pours out, regardless of their accuracy.
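The "consistently gets it right" requirement can be framed as an ordinary hypothesis test. A minimal sketch, with hypothetical numbers (the 100 prompts, 98 correct answers, and 80% sceptical baseline are all made up for illustration):

```python
from math import comb

def p_value_consistent(n_trials, n_correct, base_rate_correct):
    """One-sided binomial test: the probability of seeing at least
    n_correct hallucination-free answers in n_trials if the bot's
    true per-prompt accuracy were only base_rate_correct (the
    sceptical h0 position)."""
    return sum(comb(n_trials, k)
               * base_rate_correct ** k
               * (1 - base_rate_correct) ** (n_trials - k)
               for k in range(n_correct, n_trials + 1))

# Hypothetical run: 100 adversarial prompts, 98 answered without
# hallucination, tested against an assumed baseline of 80% accuracy.
p = p_value_consistent(100, 98, 0.80)
print(p < 0.001)  # a very small p-value counts against h0
```

A couple of lucky answers (small n) can't produce a small p-value, which is exactly the "once or twice is not enough" point above; and even a small p-value is still only inductive evidence, not proof.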