I'm using Ollama on my server with the WebUI. The server has no GPU, so it's not quick to reply, but it's not too slow either.

I'm thinking about removing the VM since I just don't use it. Are there any good uses or integrations with other apps that might convince me to keep it?

pe1uca@lemmy.pe1uca.dev · 9 points · 5 months ago

I've used it to summarize long articles, news posts, or videos when the title/thumbnail looks interesting but I'm not sure it's worth the 10+ minutes to read/watch.
There are other solutions, like dedicated summarizers, but I've looked into them and they only extract exact quotes from the original text; an LLM can also paraphrase, which makes the summary a bit more informative IMO.
(For example, one article included a quote from an expert talking about a company. The summarizer extracted only the quote, and the flow of the summary made me believe the company had said it, while the LLM properly stated that the quote came from the expert.)
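
For reference, the summarization call itself can be as simple as something like this (a rough sketch assuming a local Ollama instance on its default port; the model name and prompt are placeholders, not anything specific I'm recommending):

```python
import requests

# Rough sketch: ask a local Ollama instance to summarize an article.
# Assumes Ollama is listening on its default port (11434) and the model
# named below has already been pulled -- both are placeholders.
article = open("article.txt").read()

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",  # placeholder, use whatever model you have pulled
        "prompt": (
            "Summarize the following article in one short paragraph. "
            "Keep quotes attributed to whoever actually said them:\n\n" + article
        ),
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=300,
)
print(resp.json()["response"])
```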

This project, https://github.com/goniszewski/grimoire, has on its roadmap a way to connect to an AI to summarize the bookmarks you make and generate 3 tags for them.
I've looked at the code, but I don't remember the exact status of that integration.


Also, I have a few models dedicated to coding, so I've also asked them for a few pieces of code and configuration to get started on a project, nothing too complicated.

VeryNiiiice@sh.itjust.works · 4 points · 5 months ago

Which one do you use to summarize videos?

AnUnusualRelic@lemmy.world · 4 points · 4 months ago

Does it work with porn videos?

maniel@sopuli.xyz · 1 point · 4 months ago (edited 4 months ago)

Asking the important questions. But yeah, the plot is essential in porn.

pe1uca@lemmy.pe1uca.dev · 1 point · 4 months ago

Well, it's a bit of a pipeline. I run a custom project that exposes an API I can send files or URLs to in order to summarize videos.
With yt-dlp I grab the video and transcribe it with faster-whisper (https://github.com/SYSTRAN/faster-whisper), then the transcription is sent to the LLM to actually make the summary.
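
To give an idea, a rough sketch of that pipeline could look like this (just the general shape, not the actual project code; the URL, model names, and prompt are placeholders):

```python
import subprocess
import requests
from faster_whisper import WhisperModel

VIDEO_URL = "https://example.com/some-video"  # placeholder URL

# 1. Download just the audio track with yt-dlp.
subprocess.run(
    ["yt-dlp", "-x", "--audio-format", "mp3", "-o", "audio.%(ext)s", VIDEO_URL],
    check=True,
)

# 2. Transcribe it with faster-whisper (CPU-friendly settings).
model = WhisperModel("small", device="cpu", compute_type="int8")
segments, _info = model.transcribe("audio.mp3")
transcript = " ".join(segment.text for segment in segments)

# 3. Hand the transcript to the local LLM (Ollama here) for the summary.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",  # placeholder model name
        "prompt": "Summarize this video transcript:\n\n" + transcript,
        "stream": False,
    },
    timeout=600,
)
print(resp.json()["response"])
```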

I've been meaning to publish the code, but it's embedded in a personal project, so I need to take the time to isolate it ^_^