If you're following the citations anyway, you may as well just search for the citations themselves... i.e., use a regular search engine.
Yes, but the power of it is that you can, in effect, refine your search using natural language, like talking to a person, since it remembers the last 2-3 exchanges.
And it presents the information the way you asked to see it.
For example (my side of the "conversation"):
The citations confirm the information; they are not the end goal. The added value is that the information is pre-digested and presented in a way that matches my learning process. It's a lot easier for me to assimilate information by getting answers to questions I've asked.
Yeah, if search hadn't become dog shit, I'd be happy with it.
Instead, everything is a video for some reason, and the results are purposely worse than they were a year ago. I don't want to watch a video; I can read 20x faster than I can listen. And I don't want to read an ad in article form - I'm generally looking for one little nugget of information.
I took this into my own hands - I'll use free services if they work, but increasingly they're just demos for a product that may or may not be better. So I spun up a searx container, pointed a local LLM at it, and let it read through and filter the results (rough sketch below). My next stage is to crawl documentation, use an LLM to feed it into a vector db, and use AI to retrieve exactly what I want without sifting through garbage myself.
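A minimal sketch of that kind of setup, assuming a local SearXNG instance with the JSON output format enabled and an OpenAI-compatible local LLM server (e.g. Ollama or llama.cpp). The URLs, model name, and prompt below are placeholders for illustration, not the commenter's actual configuration:

```python
# Sketch: query a local SearXNG instance, then let a local LLM filter/digest the results.
# Assumes SearXNG runs at localhost:8080 with the `json` format enabled in settings.yml,
# and an OpenAI-compatible chat endpoint (e.g. Ollama) at localhost:11434.
import requests

SEARX_URL = "http://localhost:8080/search"                # placeholder: your searx container
LLM_URL = "http://localhost:11434/v1/chat/completions"    # placeholder: your local LLM server
MODEL = "llama3"                                          # placeholder model name


def search(query: str, n: int = 5) -> list[dict]:
    """Fetch the top n results from SearXNG as JSON."""
    resp = requests.get(SEARX_URL, params={"q": query, "format": "json"})
    resp.raise_for_status()
    return resp.json()["results"][:n]


def digest(query: str) -> str:
    """Ask the local LLM to pull the relevant bits out of the raw search results."""
    results = search(query)
    snippets = "\n".join(
        f"- {r['title']}: {r.get('content', '')} ({r['url']})" for r in results
    )
    prompt = (
        f"Question: {query}\n\nSearch results:\n{snippets}\n\n"
        "Answer the question using only these results, citing the URLs you relied on."
    )
    resp = requests.post(
        LLM_URL,
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(digest("how do I rotate logs with systemd-journald?"))
```

The "next stage" described above would follow the same pattern, except the crawled documentation is embedded into a vector database first and the retrieval step pulls the closest chunks instead of live search results.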