Only a matter of time before LLMs start injecting their own ads into these responses.
Nah, local LLMs can easily handle transcription and summarization. I bet you could do that nicely with Llama 8B without even needing a GPU.
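For anyone who wants to try it, here's a minimal sketch of that idea. It assumes Ollama is running locally on its default port with an 8B model already pulled; the model tag `llama3:8b` and the `transcript.txt` path are placeholders, not anything specific from this thread:

```python
# Minimal sketch: summarize a transcript with a local 8B model via Ollama's REST API.
# Assumes Ollama is running on its default port (11434) and the model was pulled
# beforehand (e.g. `ollama pull llama3:8b`); model tag and file path are placeholders.
import json
import urllib.request

def summarize(transcript_path: str, model: str = "llama3:8b") -> str:
    with open(transcript_path, "r", encoding="utf-8") as f:
        transcript = f.read()

    payload = {
        "model": model,
        "prompt": f"Summarize this video transcript in a few bullet points:\n\n{transcript}",
        "stream": False,  # ask for a single JSON object instead of a token stream
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(summarize("transcript.txt"))
```

An 8B model at 4-bit quantization fits in roughly 5 GB of RAM, which is why CPU-only inference is plausible here.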
Can't wait to have these.
You already can, I think? Ollama is something you can install, and then you can set up a web UI like SillyTavern for roleplay, or some other UI that better fits whatever you want to do. Also, Linux is great for projects like these; on Windows it's a fucking pain to set up, on Linux it's easy.
By that point I'm pretty sure we'll have an effective compact model that can run locally and transcribe downloaded videos on reasonable hardware. Or you can just sic a paid model like ChatGPT on the task. The corporate Internet is entirely focused on subscription models now; unless you run the model yourself on local hardware, you're going to end up paying someone somewhere a service fee.
Edit: y'all need to learn about minified models designed to run on edge hardware; they're a thing and often work shockingly well.
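To make that concrete, here's a minimal sketch of local transcription with a compact model. It assumes the open source `openai-whisper` package (`pip install openai-whisper`) plus ffmpeg on the PATH, and "video.mp4" is a placeholder file name:

```python
# Sketch: transcribe a downloaded video locally with a small Whisper checkpoint.
# Assumes `pip install openai-whisper` and ffmpeg available; "video.mp4" is a placeholder.
import whisper

# "base" is one of the compact checkpoints (~74M params) that runs fine on CPU.
model = whisper.load_model("base")
result = model.transcribe("video.mp4")
print(result["text"])
```

Feed the resulting text into a local summarizer like the Ollama sketch above and the whole transcribe-then-summarize loop stays offline.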
Local and open source