this post was submitted on 13 Feb 2025

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 2 years ago

I have a GTX 1660 Super (6 GB).

Right now I'm running Ollama with:

  • deepseek-r1:8b
  • qwen2.5-coder:7b

Do you recommend any other local models to play with on my GPU?

[–] TheHobbyist@lemmy.zip 2 points 1 week ago

DeepSeek is good at reasoning and Qwen is good at programming, but I find llama3.1 8b to be well suited for creativity, writing, translation, and other tasks that fall outside the scope of your two models. It's a decent all-rounder. It's about 4.9 GB in q4_K_M.
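For a rough sense of why an 8B model at q4_K_M lands near 4.9 GB, and whether it fits a 6 GB card: q4_K_M averages a little under 5 bits per weight. The figures below (4.85 bits/weight, 1 GB of overhead for KV cache and activations) are assumptions for a back-of-envelope sketch, not exact values:

```python
# Back-of-envelope VRAM estimate for a quantized model.
# Assumptions: q4_K_M averages ~4.85 bits per weight, and we budget
# ~1 GB of extra VRAM for KV cache, activations, and CUDA overhead.

def model_size_gb(params_billion: float, bits_per_weight: float = 4.85) -> float:
    """Approximate size of the quantized weights in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

def fits_in_vram(params_billion: float, vram_gb: float, overhead_gb: float = 1.0) -> bool:
    """Do the weights plus a rough overhead budget fit in VRAM?"""
    return model_size_gb(params_billion) + overhead_gb <= vram_gb

print(round(model_size_gb(8.0), 2))  # ~4.85 GB, close to the 4.9 GB figure above
print(fits_in_vram(8.0, 6.0))        # a tight fit on a 6 GB card
```

By this estimate an 8B q4 model just squeezes into 6 GB, while anything much larger would spill into system RAM and slow down noticeably.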

It's not out of my scope, I'm just learning what I can do locally with my current machine.


Today I read about RAG, so maybe I'll try an easy local setup to chat with a PDF.
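The core of a RAG setup is the retrieval step: split the document into chunks, find the chunks most similar to the question, and paste them into the prompt. Here is a minimal stdlib-only sketch of that step. A real setup would extract text from the PDF with a PDF library and use a proper embedding model; the bag-of-words cosine similarity and the sample chunks below are stand-ins for illustration:

```python
# Toy sketch of the retrieval step in RAG, stdlib only.
# A bag-of-words cosine similarity stands in for real embeddings,
# and the chunk list stands in for text extracted from a PDF.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase bag-of-words term counts."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks: list[str], question: str, k: int = 1) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

chunks = [
    "The GTX 1660 Super has 6 GB of GDDR6 memory.",
    "Ollama serves local models over an HTTP API on port 11434.",
    "RAG retrieves relevant text chunks and adds them to the prompt.",
]
context = retrieve(chunks, "How much memory does the 1660 Super have?")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: How much memory?"
print(context)
```

The assembled `prompt` would then be sent to a local model (e.g. via Ollama's API); swapping the toy similarity for real embeddings is the main upgrade from here.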