this post was submitted on 17 Mar 2025
25 points (96.3% liked)

LocalLLaMA

Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

top 5 comments
[–] Picasso@sh.itjust.works 2 points 1 day ago

Is this expected to be released on ollama?
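
Not confirmed anywhere in the thread, but if/when an Ollama tag for this model lands in their library, calling it from the Python client would presumably look like the sketch below. The tag "mistral-small3.1" is a guess, not a confirmed identifier.

    # Hypothetical sketch: assumes an Ollama tag for this model eventually exists.
    # "mistral-small3.1" is a guessed tag; check the Ollama library for the real one.
    import ollama  # pip install ollama; needs a running Ollama server

    response = ollama.chat(
        model="mistral-small3.1",  # assumed tag, not confirmed
        messages=[{"role": "user", "content": "Summarize the Mistral Small 3.1 release in two sentences."}],
    )
    print(response["message"]["content"])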

[–] brucethemoose@lemmy.world 7 points 3 days ago

Anyone tested it at high context yet? I find all Mistral models peter out after about 16K-24K tokens, no matter what context length they advertise.
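
One crude way to sanity-check that claim is a needle-in-a-haystack test against a local OpenAI-compatible endpoint (llama.cpp server, Ollama, and vLLM all expose one). The base URL and model name below are placeholders, and the token count is a rough eyeball estimate, not a measurement.

    # Rough long-context recall check, not a rigorous benchmark.
    # Assumes a local OpenAI-compatible server; URL and model name are placeholders.
    from openai import OpenAI  # pip install openai

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

    filler = "The quick brown fox jumps over the lazy dog. " * 3000  # very roughly ~30K tokens
    needle = "The secret passphrase is PURPLE-ELEPHANT-42."
    half = len(filler) // 2
    prompt = (
        filler[:half] + needle + filler[half:]
        + "\n\nWhat is the secret passphrase mentioned in the text above?"
    )

    reply = client.chat.completions.create(
        model="mistral-small-3.1",  # placeholder name
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)  # should mention PURPLE-ELEPHANT-42 if recall holds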

[–] obbeel 6 points 3 days ago

A GPT-4o-mini-comparable system that you can run on an RTX 4090 isn't going to solve hard problems outright, but it might have enterprise uses. Text-generation automation for personal use should be strong, for example, in place of having a third-party API do it.
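
As a sketch of what that swap looks like in practice: with a local OpenAI-compatible endpoint, an automation script only needs its base URL changed, nothing else. Everything below (URL, model name, example inputs) is illustrative, not taken from the post.

    # Minimal text-generation automation against a local endpoint instead of a hosted API.
    # Assumes an OpenAI-compatible local server; URL, model name, and inputs are placeholders.
    from openai import OpenAI  # pip install openai

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="local")  # point at the local server

    notes = [
        "ticket #1: printer offline again",
        "ticket #2: VPN drops every hour",
    ]
    for note in notes:
        out = client.chat.completions.create(
            model="mistral-small-3.1",  # placeholder name
            messages=[{"role": "user", "content": f"Write a short, polite status update for: {note}"}],
        )
        print(out.choices[0].message.content)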

[–] possiblylinux127@lemmy.zip 3 points 3 days ago* (last edited 3 days ago)
[–] Smokeydope@lemmy.world 1 points 2 days ago

This is so exciting! Glad to see Mistral at it with more bangers.