this post was submitted on 13 May 2024
34 points (72.4% liked)
Technology
I've had this argument with friends a lot recently.
Them: It's so cool that I can just ask ChatGPT to summarise something and get a concise answer, rather than doing a lot of googling for the same thing.
Me: But it gets things wrong all the time.
Them: Oh, I know, so I Google it anyway.
Doesn't make sense to me.
People like AI because searches are full of SEO spam listicles. Eventually they will make LLMs as ad-riddled as everything else.
My specific point here was that this friend doesn't trust the results AND still goes to Google or elsewhere to verify them, so he's effectively doubled his workload for every search.
Then why not use an ad blocker? It's not wise to assume you're getting the right information when you can't verify the sources. Like I said, at least for me, the "trust me bro" aspect doesn't cut it.
Ad blockers won't cut out SEO garbage.
And the AI will? It uses all those same websites to give you the info. It doesn't think, it spins.
I didn't say that it will, just saying that ad blockers won't block it out.
This is why I do a lot of my Internet searches with perplexity.ai now. It tells me exactly what it searched to get the answer, and provides inline citations as well as a list of its sources at the end. I've never used it for anything in depth, but in my experience, the answer it gives me is typically consistent with the sources it cites.
We also get things wrong all the time. Would you double-check info you got from a friend or coworker? Perhaps you should.
I know how my friends and coworkers are likely to think. An LLM is far less predictable.