this post was submitted on 09 Oct 2024
611 points (96.6% liked) — Technology @ lemmy.world
I suspect that this is the direct result of AI-generated content just overwhelming any real content.

I tried ddg, google, bing, qwant, and none of them really helps me find the information I want these days.

Perplexity seems to work, but I don't like the idea of AI giving me "facts", since they're mostly based on other AI posts.

ETA: someone suggested SearXNG, and after using it a bit it seems to be much better than ddg and the rest.

[–] lvxferre@mander.xyz 13 points 2 months ago (1 children)

> Stable Diffusors are pretty good at regurgitating information that’s widely talked about.

Stable Diffusion is an image generator. You probably meant a language model.

And no, it's not just OP. This shit has been going on for a while, well before LLMs were deployed. Cue the old trick of appending "reddit" to searches that some people used.

[–] FlyingSquid@lemmy.world 6 points 2 months ago (1 children)

Also, they're pretty good at regurgitating bullshit. Like the famous 'glue on pizza' answer.

[–] lvxferre@mander.xyz 2 points 2 months ago

Or, at a deeper level: they're pretty good at regurgitating what we interpret as bullshit. They simply don't care about the truth value of their statements at all.

That's part of the problem: you can't prevent them from doing it; it's like trying to drain the ocean with a small bucket. They shouldn't be used as a direct source of info for anything you won't check afterwards. At least in kitnaht's use case, if the LLM is bullshitting it should be obvious, but go past that and you'll have a hard time.
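The "don't care about truth value" point can be made concrete with a toy sketch (not a real LLM; the token distribution below is invented for illustration): a language model picks the next token by learned probability mass, and nothing in that mechanism checks whether the resulting claim is true.

```python
import random

# Invented next-token distribution for the prompt "put ___ on pizza".
# If bad training data made "glue" likely, the model will say "glue" --
# likelihood, not truth, drives the choice.
next_token_probs = {
    "cheese": 0.05,
    "glue": 0.60,
    "sauce": 0.35,
}

def sample_next_token(probs, rng=random):
    """Sample one token in proportion to its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))
```

Real models are vastly more sophisticated, but the core loop is the same: sample what's probable given the training text, which is exactly why widely repeated nonsense comes back out.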