Is this expected to be released on Ollama?
Anyone tested it at high context yet? I find all Mistral models peter out after about 16K-24K tokens, no matter what context length they advertise.
A GPT-4o-mini-comparable model that you can run on an RTX 4090 isn't going to solve hard problems outright, but it could have enterprise uses. Text-generation automation for personal use should be a strong fit, for example, in place of paying a third-party API to do it (rough sketch below).
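Something like this is what I have in mind, assuming the release lands in Ollama and you point the standard OpenAI Python client at its local OpenAI-compatible endpoint; the model tag here is just a placeholder for whatever the release ends up being called:

```python
# Sketch: swap a third-party text-generation API for a local model.
# Assumes Ollama is serving its OpenAI-compatible endpoint on the default
# port; "mistral-small" is a placeholder model tag, not a confirmed name.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama endpoint
    api_key="unused",                      # required by the client, ignored locally
)

response = client.chat.completions.create(
    model="mistral-small",
    messages=[
        {"role": "user", "content": "Summarize this changelog in two sentences: ..."},
    ],
)
print(response.choices[0].message.content)
```

Same client code you'd use against a hosted API, just a different base URL, so automation scripts don't need rewriting.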
This is so exciting! Glad to see Mistral at it with more bangers.