602 points (97.8% liked)
submitted 1 month ago* (last edited 1 month ago) by RmDebArc_5@sh.itjust.works to c/technology@lemmy.world

Surprised pikachu face

[-] T156@lemmy.world 9 points 1 month ago

At the same time, the trouble with local LLMs is that they're very resource-heavy. Your average household computer isn't going to be able to run one at a usable speed.

[-] floquant@lemmy.dbzer0.com 22 points 1 month ago

Which, you know, is fine. Maybe if people had an idea of how much power is required to run them, they would think twice before using a gigawatt to output a poem about farts, and perhaps even wonder how OpenAI can offer that for free. Btw, a 7B model should run OK on any PC with at least 16 GB of RAM and a modern processor/GPU.
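For the curious, here's a minimal sketch of what that looks like with the llama-cpp-python bindings, assuming a 4-bit quantized GGUF file; the model path, context size, and thread count below are placeholders, not a definitive setup:

```python
# Minimal sketch: run a quantized 7B GGUF model locally with llama-cpp-python.
# The model file and path are placeholders; a 4-bit quantization (~4 GB on
# disk) is assumed so the model fits comfortably in 16 GB of system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,    # context window; larger windows use more RAM
    n_threads=8,   # tune to your CPU's core count
)

out = llm("Write a short poem about farts.", max_tokens=128)
print(out["choices"][0]["text"])
```

On a modern CPU this typically produces a few tokens per second; a GPU with enough VRAM is considerably faster.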

[-] RmDebArc_5@sh.itjust.works 3 points 1 month ago

Phi-3 can run on pretty low specs (it needs about 4 GB of RAM) and has relatively good output.
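For example, a minimal sketch of querying it through the ollama Python client, assuming the Ollama server is running and the model was fetched with `ollama pull phi3`:

```python
# Minimal sketch: ask a locally running Phi-3 a question via the ollama client.
# Assumes Ollama is installed and `ollama pull phi3` has already been run.
import ollama

response = ollama.chat(
    model="phi3",
    messages=[{"role": "user", "content": "Explain RAM in one sentence."}],
)
print(response["message"]["content"])
```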

[-] TriflingToad@lemmy.world 1 point 1 month ago

It's a lot slower than ChatGPT, but on my integrated-graphics i7 laptop it ran decently, definitely fast enough to be usable. Also, there are different models to play around with: some are faster but worse, and some are smarter but slower.
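As a rough way to feel out that speed/quality tradeoff, here's a hedged sketch that times the same prompt against two differently sized local models (the model names are examples; any two models you've pulled with Ollama will do):

```python
# Rough sketch: time the same prompt on a small vs. a larger local model.
# Model names below are examples and assume they've already been pulled.
import time
import ollama

prompt = [{"role": "user", "content": "Summarize what an LLM is in one line."}]

for model in ("phi3", "llama3:8b"):  # smaller/faster vs. bigger/slower
    start = time.time()
    ollama.chat(model=model, messages=prompt)
    print(f"{model}: {time.time() - start:.1f} s")
```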
