[-] daredevil@kbin.social 2 points 7 months ago

defaming them without due diligence, think about that before continuing

The irony here is unbelievable, rofl -- you can't make this up. My previous statement was calling you childish and desperate for attention. Thanks for reminding me of that fact, so I can stop wasting my time. It is very clear you're not interested in a genuine and constructive conversation.

[-] daredevil@kbin.social 2 points 7 months ago* (last edited 7 months ago)

It's not one week of inactivity, it has been going on for months

Looks at 2 months straight of kbin devlogs since October, when the man was having pretty significant personal issues

Not to mention he was recently sick, tended to financial issues and personal matters, and handled formalities relating to the project. This isn't even mentioning that he communicated all of this in the devlog magazine, or the fact that he has implemented suggestions multiple times at the request of the community to enhance QoL and allowed users to have agency in making mod contributions.

You might want to take your own advice. This has also allowed me to revise my earlier statement. You people are actually insane.

[-] daredevil@kbin.social 4 points 7 months ago

Agreed, every post I see from them further paints them as very childish and desperate for attention.

[-] daredevil@kbin.social 2 points 7 months ago

Came here to post because I've also seen The Symphony of the Goddess live. The poster for it is behind me at the moment. Great experience.

[-] daredevil@kbin.social 2 points 7 months ago

I've only felt the need to change distros once, from Linux Mint to EndeavourOS, because I wanted Wayland support. I realize there were ways to get Wayland working on Mint in the past, but I've already made the switch and have gotten used to my current setup. I personally don't feel like I'm missing out by sticking to one distro, tbh. If you're enjoying Mint, I'd suggest sticking with it unless another distro fulfills a specific need you can't get on Mint.


Terminal Trove showcases the best of the terminal. Discover a collection of CLI, TUI, and other developer tools at Terminal Trove.

[-] daredevil@kbin.social 1 points 9 months ago

Came here with this show in mind. Would recommend.

[-] daredevil@kbin.social 1 points 9 months ago

I haven't, but I'll keep this in mind for the future -- thanks.

[-] daredevil@kbin.social 1 points 9 months ago

I believe I was when I tried it before, but it's possible I may have misconfigured things.

[-] daredevil@kbin.social 3 points 9 months ago* (last edited 9 months ago)

I'll give it a shot later today, thanks

edit: Tried out mistral-7b-instruct-v0.1.Q4_K_M.gguf via the LM Studio app. It runs smoother than I expected -- I get about 7-8 tokens/sec. I'll definitely be playing around with this some more later.
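
For anyone who would rather script this than use the LM Studio GUI, a rough equivalent with the llama-cpp-python bindings might look like the sketch below. The model path and generation settings are placeholders -- point model_path at wherever the GGUF file was downloaded.

```python
# Rough script-based equivalent of the LM Studio setup described above,
# using the llama-cpp-python bindings. Paths and settings are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    n_ctx=4096,       # context window to allocate
    n_gpu_layers=-1,  # offload as many layers as possible to the GPU, if present
)

# Mistral's instruct models expect the [INST] ... [/INST] prompt format.
out = llm("[INST] Explain what a GGUF file is in two sentences. [/INST]",
          max_tokens=200)
print(out["choices"][0]["text"])
```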


On Monday, Mistral AI announced a new AI language model called Mixtral 8x7B, a "mixture of experts" (MoE) model with open weights that reportedly truly matches OpenAI's GPT-3.5 in performance—an achievement that has been claimed by others in the past but is being taken seriously by AI heavyweights such as OpenAI's Andrej Karpathy and Jim Fan. That means we're closer to having a ChatGPT-3.5-level AI assistant that can run freely and locally on our devices, given the right implementation.

Mistral, based in Paris and founded by Arthur Mensch, Guillaume Lample, and Timothée Lacroix, has seen a rapid rise in the AI space recently. It has been quickly raising venture capital to become a sort of French anti-OpenAI, championing smaller models with eye-catching performance. Most notably, Mistral's models run locally with open weights that can be downloaded and used with fewer restrictions than closed AI models from OpenAI, Anthropic, or Google. (In this context "weights" are the computer files that represent a trained neural network.)

Mixtral 8x7B can process a 32K token context window and works in French, German, Spanish, Italian, and English. It works much like ChatGPT in that it can assist with compositional tasks, analyze data, troubleshoot software, and write programs. Mistral claims that it outperforms Meta's much larger LLaMA 2 70B (70 billion parameter) large language model and that it matches or exceeds OpenAI's GPT-3.5 on certain benchmarks, as seen in the chart below.
A chart of Mixtral 8x7B performance vs. LLaMA 2 70B and GPT-3.5, provided by Mistral.

The speed at which open-weights AI models have caught up with OpenAI's top offering a year ago has taken many by surprise. Pietro Schirano, the founder of EverArt, wrote on X, "Just incredible. I am running Mistral 8x7B instruct at 27 tokens per second, completely locally thanks to @LMStudioAI. A model that scores better than GPT-3.5, locally. Imagine where we will be 1 year from now."

LexicaArt founder Sharif Shameem tweeted, "The Mixtral MoE model genuinely feels like an inflection point — a true GPT-3.5 level model that can run at 30 tokens/sec on an M1. Imagine all the products now possible when inference is 100% free and your data stays on your device." To which Andrej Karpathy replied, "Agree. It feels like the capability / reasoning power has made major strides, lagging behind is more the UI/UX of the whole thing, maybe some tool use finetuning, maybe some RAG databases, etc."

Mixture of experts

So what does mixture of experts mean? As this excellent Hugging Face guide explains, it refers to a machine-learning model architecture where a gate network routes input data to different specialized neural network components, known as "experts," for processing. The advantage of this is that it enables more efficient and scalable model training and inference, as only a subset of experts are activated for each input, reducing the computational load compared to monolithic models with equivalent parameter counts.

In layperson's terms, a MoE is like having a team of specialized workers (the "experts") in a factory, where a smart system (the "gate network") decides which worker is best suited to handle each specific task. This setup makes the whole process more efficient and faster, as each task is done by an expert in that area, and not every worker needs to be involved in every task, unlike in a traditional factory where every worker might have to do a bit of everything.
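
To make the routing idea concrete, here is a minimal, illustrative sketch of a top-2 gated MoE feed-forward layer in PyTorch. It is not Mixtral's actual code; the dimensions, expert count, and activation below are arbitrary stand-ins chosen only to show the gate-then-route pattern described above.

```python
# Toy mixture-of-experts feed-forward layer with top-2 routing.
# Illustrative only -- sizes and expert count are arbitrary assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=128, d_ff=512, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # The "gate network": scores each token against every expert.
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        # The "experts": independent feed-forward blocks.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: (n_tokens, d_model)
        scores = self.gate(x)                  # (n_tokens, n_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the chosen experts
        out = torch.zeros_like(x)
        # Only the selected experts run for each token -- this is where the
        # compute savings over an equally sized dense layer come from.
        for i, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                mask = chosen[:, slot] == i
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(4, 128)        # 4 tokens, d_model = 128
print(ToyMoELayer()(tokens).shape)  # torch.Size([4, 128])
```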

OpenAI has been rumored to use a MoE system with GPT-4, accounting for some of its performance. In the case of Mixtral 8x7B, the name implies that the model is a mixture of eight 7 billion-parameter neural networks, but as Karpathy pointed out in a tweet, the name is slightly misleading because, "it is not all 7B params that are being 8x'd, only the FeedForward blocks in the Transformer are 8x'd, everything else stays the same. Hence also why total number of params is not 56B but only 46.7B."
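
Karpathy's 46.7B figure can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below uses the layer sizes from Mistral's published Mixtral 8x7B configuration (treat the exact numbers as approximate); the point is simply that only the feed-forward experts are duplicated eight times, while attention, embeddings, and the router are shared.

```python
# Back-of-the-envelope parameter count for Mixtral 8x7B, using the sizes from
# Mistral's published config (hidden 4096, FFN 14336, 32 layers, 8 experts,
# 2 active per token, 32k vocab). Approximate; shows why the total is ~46.7B
# rather than 8 x 7B = 56B.
d_model, d_ff, n_layers, vocab = 4096, 14336, 32, 32000
n_experts, active_experts = 8, 2

embeddings = 2 * vocab * d_model                # input + output embeddings
attention  = n_layers * (2 * d_model * d_model  # Q and output projections
                         + 2 * d_model * 1024)  # K and V (grouped-query attention)
router     = n_layers * d_model * n_experts     # the gate network
one_expert = 3 * d_model * d_ff                 # SwiGLU FFN: up, gate, down matrices
experts    = n_layers * n_experts * one_expert

total  = embeddings + attention + router + experts
active = embeddings + attention + router + n_layers * active_experts * one_expert
print(f"total params       ~ {total / 1e9:.1f}B")   # ~46.7B
print(f"active per token   ~ {active / 1e9:.1f}B")  # ~12.9B
```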

Mixtral is not the first "open" mixture of experts model, but it is notable for its relatively small size in parameter count and performance. It's out now, available on Hugging Face and BitTorrent under the Apache 2.0 license. People have been running it locally using an app called LM Studio. Also, Mistral began offering beta access to an API for three levels of Mistral models on Monday.
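
For reference, loading the open weights straight from Hugging Face with the transformers library looks roughly like the sketch below. The repo id is Mistral's published instruct release; note the unquantized checkpoint is tens of gigabytes, so this assumes a machine with plenty of RAM or VRAM (the quantized GGUF route via llama.cpp or LM Studio is the lighter option).

```python
# Minimal sketch of loading the open Mixtral weights from Hugging Face.
# Requires the accelerate package for device_map="auto"; assumes ample memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

inputs = tokenizer("Explain mixture-of-experts in one sentence.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```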

