this post was submitted on 22 Oct 2023
314 points (98.5% liked)

Please create a comment or react with an emoji there.

(IMO, they should've limited comments and gone with a reaction count there; it looks like a mess right now.)

[–] sir_reginald@lemmy.world 107 points 1 year ago* (last edited 1 year ago) (2 children)

As long as this allows running local, free-software models, I don't see the drawback of including this.

My main issue with ChatGPT and similar products is that they use my data to train their models. Running a model locally (like Llama) solves this problem, but running LLMs requires extremely powerful GPUs, especially for the bigger models like Llama 70B.
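To give a sense of scale for the "extremely powerful GPUs" claim, here is a rough back-of-the-envelope estimate of weight memory (an illustrative sketch: it assumes memory ≈ parameter count × bytes per parameter and ignores activations and KV cache; the helper name is made up for this example):

```python
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate GB needed just to hold the model weights."""
    return n_params * bytes_per_param / 1e9

# Llama 70B at 16-bit (2 bytes/weight): ~140 GB of weights alone,
# far beyond any single consumer GPU.
print(weight_memory_gb(70e9, 2))  # 140.0

# A 7B model at 16-bit is ~14 GB: still too much for most phones,
# which is where quantization comes in.
print(weight_memory_gb(7e9, 2))   # 14.0
```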

So dedicated hardware for this is a nice thing for those that want it.

[–] Tibert@jlai.lu 21 points 1 year ago* (last edited 1 year ago)

It requires powerful GPUs, yes, but not always. It depends a lot on how fast you want it to run. Microsoft and OpenAI need powerful AI GPUs because they handle a huge volume of requests and want responses to be fast. The model may also need to be kept in system memory or GPU memory for fast access during inference.

For Llama, it has been released as open source. And what is amazing about open source is the community: a complete C++ implementation of Llama inference has been created at https://github.com/ggerganov/llama.cpp .

And someone even managed to make it run fast enough on a phone with 8 GB of available RAM: https://github.com/ggerganov/llama.cpp/discussions/750 . Though with a smaller model.
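The arithmetic behind the phone result: llama.cpp supports quantized weight formats, and at 4-bit quantization each weight takes roughly half a byte. A rough, illustrative estimate (ignoring runtime overhead and the KV cache) shows why a 7B model can fit in 8 GB of RAM:

```python
# 4-bit quantization stores each weight in ~0.5 bytes.
params_7b = 7e9                       # parameters in a 7B model
bytes_per_weight_q4 = 0.5             # 4 bits per weight
weights_gb = params_7b * bytes_per_weight_q4 / 1e9

print(weights_gb)  # 3.5 -> the weights fit comfortably within 8 GB
```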