this post was submitted on 31 Jul 2024
286 points (96.7% liked)


Meta "programmed it to simply not answer questions," but it did anyway.

[–] doodledup@lemmy.world -2 points 3 months ago (3 children)

It's designed in a way that makes it inherently incorrect, even on a physical basis (due to numeric issues). It's not a flaw in the algorithm; it was designed that way. The problem is that you don't know how to correctly use it.
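To illustrate the claim about inherent incorrectness: LLMs typically sample their next token from a probability distribution rather than deterministically picking the single most likely one. The sketch below is a toy illustration (not any real model's code; the logits are made up) of softmax sampling with temperature:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution.
    Higher temperature flattens the distribution, making
    less likely tokens more probable."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for a 4-token vocabulary (values invented for illustration)
logits = [2.0, 1.0, 0.5, -1.0]
probs = softmax(logits, temperature=1.0)

# Sampling instead of always taking the argmax is one reason the same
# prompt can produce different (and sometimes wrong) answers each run.
token = random.choices(range(len(logits)), weights=probs)[0]
```

With temperature above zero, every token with non-zero probability can be emitted, so occasional incorrect output is built into the design rather than being a bug.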

I can't explain it any differently without getting overly technical. You wouldn't understand it anyway, judging by your comment "lolwut". If you want to learn how LLMs work specifically, there are plenty of resources on the internet.

[–] snooggums@midwest.social 4 points 3 months ago* (last edited 3 months ago) (1 children)

> It's designed in a way that makes it inherently incorrect, even on a physical basis (due to numeric issues). It's not a flaw in the algorithm; it was designed that way. The problem is that you don't know how to correctly use it.

"It doesn't make a good source of knowledge."

"Yeah, but it is designed to be inherently wrong"

How does that make any sense when trying to use something for knowledge? Being inherently wrong is the opposite of helpful for knowledge.

AI is great at pattern recognition, but knowledge isn't pattern recognition. Needing to know when it gives false information requires the "supervisor" to already have that knowledge. That makes the AI less useful than a simple reference because at least the reference can come from a trusted source.

If people stopped trying to jam AI into situations where being correct is important, it wouldn't be a problem. But excusing that because it is designed to be inherently wrong deserves another LOLWUT.

[–] doodledup@lemmy.world -4 points 3 months ago* (last edited 3 months ago)

> How does that make any sense when trying to use something for knowledge? Being inherently wrong is the opposite of helpful for knowledge.

It was never designed to reproduce knowledge. It was designed to do reasoning and natural language processing and generation. You're using it wrong.
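The distinction between generation and knowledge can be made concrete: a language model learns which continuations are statistically likely, not which are true. The toy bigram "model" below (the table is invented for illustration; real LLMs use neural networks, but the principle is the same) generates fluent sequences without any fact-checking step:

```python
import random

# Toy bigram "language model": each token maps to possible next tokens
# with weights reflecting frequency, not truth. (Table invented for
# illustration only.)
bigram = {
    "the":   [("sky", 3), ("cat", 2)],
    "sky":   [("is", 5)],
    "cat":   [("is", 5)],
    "is":    [("blue", 4), ("green", 1)],  # "green" is fluent but false
    "blue":  [("<end>", 1)],
    "green": [("<end>", 1)],
}

def generate(start, max_len=5, seed=None):
    """Sample a continuation token by token, weighted by frequency."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_len:
        choices = bigram.get(out[-1])
        if not choices:
            break
        tokens, weights = zip(*choices)
        nxt = rng.choices(tokens, weights=weights)[0]
        if nxt == "<end>":
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the", seed=0))
```

Nothing in the sampling loop checks whether "the sky is green" is true; it only checks that each step is a plausible continuation, which is exactly why fluency is not the same as knowledge.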

> LOLWUT

If you don't know what you're talking about and have no capacity to learn something new, it's sometimes best to stop talking. Especially when you're starting to get rude to knowledgeable people who are trying to explain it to you.

[–] CileTheSane@lemmy.ca 2 points 3 months ago

> It's designed in a way that makes it inherently incorrect, even on a physical basis (due to numeric issues). It's not a flaw in the algorithm; it was designed that way. The problem is that you don't know how to correctly use it.

So it is bad at things like giving or finding factual information. I agree; companies need to stop cramming it into everything (like search engines) for tasks it is specifically bad at because it was not designed for them.

[–] uranibaba@lemmy.world 1 points 3 months ago (1 children)

Can you recommend any resources to start with? (If I can be picky, then something I can consume after a whole day of being a parent, because there is no energy for much else.)