I'm rather curious to see how the EU's privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn't have a paywall)

[-] hansl@lemmy.ml 23 points 1 year ago

It’s closer to how you (as a person) know things than, say, how a database knows things.

I still remember my childhood home phone number. You could ask me to forget it a million times and I wouldn’t be able to. It’s useless information today; I just can’t stop remembering it.

[-] Veraticus@lib.lgbt -4 points 1 year ago* (last edited 1 year ago)

No, you knowing your old phone number is closer to how a database knows things than how LLMs know things.

LLMs don't "know" information. They don't retain an individual fact, or know that something is true and something else is false (or that anything "is" at all). Everything they say is generated based on the likelihood of one word following another, given the context that word appears in.

You can't ask it to "forget" a piece of information because there's no "childhood phone number" in its memory. Instead, there's an increased likelihood it will say your phone number when someone prompts it for a phone number. It doesn't "know" the information at all; it has simply become part of the weights it uses to generate phrases.
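(To illustrate the word-likelihood idea, here's a toy bigram model. The training text, the phone number, and the function names are all made up; real LLMs use learned weights over huge contexts, not raw counts, but the principle of "no stored fact, just likely continuations" is the same:)

```python
from collections import Counter, defaultdict

# Toy "training data" containing a made-up phone number.
text = "call me at 555 0123 or call me at 555 0123 later"
tokens = text.split()

# Count how often each token follows another -- this is all the "knowledge" there is.
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def most_likely_next(token):
    """Return the most probable next token: pure statistics, no stored 'fact'."""
    return follows[token].most_common(1)[0][0]

print(most_likely_next("at"))   # "555" -- the digits are just likely continuations
print(most_likely_next("555"))  # "0123"
```

There's nowhere in `follows` you could point to and say "that's the phone number"; you could only lower the probability of those digit sequences appearing.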

[-] Zeth0s@lemmy.world 15 points 1 year ago* (last edited 1 year ago)

It's the same in your brain though. There is no number in your brain. Just a set of synapses that allows a depolarization wave to propagate across neurons, via neurotransmitters released and absorbed in a narrow space.

The way the brain is built allows you to "remember" stuff: to reconstruct information incompletely stored as different, unique connections in a network. But it is not "certain"; we can't know if it's the absolute truth. That's why we need password databases and phone books: our memory is not a database. It is probably worse than GPT-4.

[-] SpiderShoeCult@sopuli.xyz 6 points 1 year ago

Genuinely curious how you would describe humans remembering stuff, because if I remember my biology classes correctly, it's about reinforced neural pathways that become more likely to be taken by an electrical impulse than those that are less 'travelled'. The whole notion of neural networks is right there in the name; it's based on how neurons work.

[-] MarcoPogo@lemmy.world 5 points 1 year ago

Are we sure that this is substantially different from how our brain remembers things? We also remember by association.

[-] Veraticus@lib.lgbt -4 points 1 year ago

But our memories exist -- I can say definitively "I know my childhood phone number." It might be meaningless, but the information is stored in my head. I know it.

AI models don't know your childhood phone number, even if you tell them explicitly, even if they were trained on it. Your childhood phone number becomes part of a model of word weights that makes it slightly more likely, when someone asks for a phone number, that some digits of your childhood phone number might appear (or perhaps the entire thing!).

But the original information is lost.

You can't ask it to "forget" the phone number because it doesn't know it and never knew it. Even if it supplies literally your exact phone number, it isn't because it knew your phone number or because that information is correct. It's because that sequence of numbers is, based on its model, very likely to occur in that order.

[-] theneverfox@pawb.social 1 points 1 year ago

This isn't true at all - first, we don't know things the way a database knows things.

Second, they do retain individual facts, in the same sort of way we know things: through relationships. The difference is that for us the Eiffel Tower is a concept, and its name, appearance, and everything else about it are relationships - we can forget the name of something but remember everything else about it. LLMs are word-based, so the name is everything to them: they can't learn facts about a building and then later learn its name while retaining those facts, but they could later learn additional names for it.

For example, researchers did experiments using visualization tools and edited a model manually. They changed the link between the Eiffel Tower and Paris to Rome, and the model began to believe the tower was in Rome. You could then ask what you'd see from the Eiffel Tower, and it would start listing landmarks like the Colosseum.

So you absolutely could have it erase facts - you just have to delete relationships or scramble details. It just might have unintended side effects, and no tools currently exist to do this in an automated fashion
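(A toy sketch of the "edit one relationship, downstream answers change" idea. Everything here is invented for illustration - real model-editing experiments like the one described above rewrite weight matrices inside the network, not a dict - but it shows why a single changed link cascades:)

```python
# Toy associative "model": facts are just links between concepts,
# not stored sentences about the world.
links = {
    "eiffel tower": "paris",
    "paris": ["louvre", "seine"],
    "rome": ["colosseum", "forum"],
}

def visible_from(landmark):
    """Answer 'what would you see from here?' by following links."""
    city = links[landmark]
    return links[city]

print(visible_from("eiffel tower"))  # ['louvre', 'seine']

# "Edit" the single tower-to-city association; every downstream
# answer that routes through it changes automatically.
links["eiffel tower"] = "rome"
print(visible_from("eiffel tower"))  # ['colosseum', 'forum']
```

Nothing about Paris was deleted; only the relationship was rewired, which is also why such edits can have unintended side effects elsewhere.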

For humans, it's much harder - our minds use layers of abstraction and aren't a unified set of info. That means you could zap knowledge of the Eiffel Tower, and we might forget about it. But then, thinking about Paris, we might remember it and rebuild certain facts about it; then, thinking about world fairs, we might remember when it was built and by whom, etc.

this post was submitted on 31 Aug 2023
592 points (97.9% liked)

Technology