
I'm using Ollama on my server with the WebUI. It has no GPU, so it's not quick to reply, but not too slow either.

I'm thinking about removing the VM since I just don't use it. Are there any good uses or integrations with other apps that might convince me to keep it?
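(For anyone unfamiliar, "integrating" Ollama into another app usually just means hitting its HTTP API, which listens on port 11434 by default. A rough sketch in Python; the model name and prompt here are placeholders, substitute whatever `ollama list` shows on your box:)

```python
import requests

# Ollama's default generate endpoint; adjust the host if it runs in a VM.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to Ollama and return the full response text."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,  # CPU-only generation can take a while
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Summarize what self-hosting means in one sentence."))
```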

[–] thirdBreakfast@lemmy.world 5 points 4 months ago (2 children)

I use the Continue VS Code extension with Ollama, running a couple of different models (deepseek-coder-v2 & starcoder2), to recreate a local-only GitHub Copilot-type experience for coding. This is on an M1 (Apple Silicon) machine, though. For autocomplete the generation needs to be pretty brisk, and I'm not sure how that would go in a VM without a GPU.
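Roughly what that setup looks like in Continue's config file (`~/.continue/config.json` at the time of writing; newer Continue versions may use a different format, so treat this as a sketch rather than gospel):

```json
{
  "models": [
    {
      "title": "DeepSeek Coder v2 (local)",
      "provider": "ollama",
      "model": "deepseek-coder-v2:latest"
    }
  ],
  "tabAutocompleteModel": {
    "title": "StarCoder2 (local)",
    "provider": "ollama",
    "model": "starcoder2:latest"
  }
}
```

The split matters: the chat model can afford to be big and slow, but the `tabAutocompleteModel` fires on nearly every keystroke, so a small model like starcoder2 is the better fit there.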

[–] Amongussussyballs100@sh.itjust.works 2 points 4 months ago (1 children)

How well does the M1 chip keep up? What size models are you running on it? I'm interested in getting an M1 laptop, so I'm curious.

[–] thirdBreakfast@lemmy.world 1 points 4 months ago
NAME                        ID              SIZE    MODIFIED
starcoder2:latest           f67ae0f64584    1.7 GB  3 days ago
phi3:latest                 d184c916657e    2.2 GB  3 weeks ago
deepseek-coder-v2:latest    8577f96d693e    8.9 GB  3 weeks ago
llama3:8b-instruct-q8_0     1b8e49cece7f    8.5 GB  3 weeks ago
dolphin-mistral:latest      5dc8c5a2be65    4.1 GB  3 weeks ago
codeqwen:latest             df352abf55b1    4.2 GB  3 weeks ago
llama3:latest               365c0bd3c000    4.7 GB  4 weeks ago

I mostly use starcoder2 with Continue for code autocomplete. The big deepseek-coder is a bit slow (I can feel it thinking), but it and the regular llama3 are good for chatbot-type programming questions.

I don't really have anything to compare the M1's performance to. I'd guess the 8 GB models output text a little more slowly than the web versions of the same models, and the 4 GB ones about the same. Using ollama in the terminal, there's sometimes a 0.5-2 second pause before it starts outputting. Not with phi3, though: it's surprisingly snappy for the quality of its answers.
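If you want to put a number on that pause, time-to-first-token is easy to measure against Ollama's streaming API, which emits newline-delimited JSON chunks. A minimal sketch (the model name is a placeholder):

```python
import json
import time

import requests

# Stream a generation from Ollama and report time-to-first-token.
# Assumes Ollama's default address; swap in any locally pulled model.
url = "http://localhost:11434/api/generate"
payload = {"model": "phi3", "prompt": "Say hello.", "stream": True}

start = time.monotonic()
first_token_at = None

with requests.post(url, json=payload, stream=True, timeout=300) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        # The first chunk with a non-empty "response" is the first token.
        if first_token_at is None and chunk.get("response"):
            first_token_at = time.monotonic()
        if chunk.get("done"):
            break

if first_token_at is not None:
    print(f"time to first token: {first_token_at - start:.2f}s")
else:
    print("no tokens received")
```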