this post was submitted on 09 Nov 2023
281 points (100.0% liked)

Technology

[–] soulfirethewolf@lemdro.id 1 points 1 year ago (1 children)

I kind of want to go for the Framework laptop, but I still like ARM, and I want to do more machine learning stuff in the future. Running large language models is already kind of difficult with only 8 gigabytes of RAM, but on ARM it at least sort of runs. On my basement PC, it will barely do anything.

[–] tal@lemmy.today 1 points 1 year ago (1 children)

There are some external GPUs that can be USB-attached. Dunno about the Mac. There's a latency hit, but it's probably less significant for current LLM use than for games, since you don't have a lot of data being pushed over the bus once the model is loaded.
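A rough back-of-envelope sketch of why the bus matters less here: the model weights cross the link once at load time, while each generated token only moves a token id up and a logits vector back (the KV cache stays on the GPU). All figures below are illustrative assumptions (a hypothetical 7B model, 4-bit quantization, LLaMA-style 32k vocabulary), not measurements:

```python
# Illustrative arithmetic only -- assumed, not measured figures.
params = 7e9             # hypothetical 7B-parameter model
bytes_per_param = 0.5    # 4-bit quantization ~= 0.5 bytes per parameter
model_bytes = params * bytes_per_param   # one-time upload when loading

vocab = 32_000           # assumed LLaMA-style vocabulary size
logits_bytes = vocab * 4                 # per-token download: fp32 logits
token_id_bytes = 4                       # per-token upload: one token id
per_token = logits_bytes + token_id_bytes

print(f"one-time model upload: {model_bytes / 1e9:.1f} GB")   # 3.5 GB
print(f"per-token bus traffic: {per_token / 1e3:.0f} KB")     # ~128 KB
print(f"ratio: {model_bytes / per_token:,.0f}x")
```

So after the one big upload, steady-state traffic is tiny compared to what a game pushes every frame, which is why the eGPU latency penalty hurts less for this workload.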

[–] soulfirethewolf@lemdro.id 1 points 1 year ago

Those don't work on Apple Silicon Macs, sadly.