wtf does that mean?
llama3:8b. I know it's "far from ideal," but only really specific use cases require more advanced models to run locally. If you do software development, graphic design, or video editing, 8GB is enough.
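As a rough sanity check on why an 8B model can fit in 8GB of RAM: a back-of-the-envelope estimate, assuming 4-bit quantized weights (a common default for local runners like ollama) and ignoring KV cache and runtime overhead.

```python
# Back-of-the-envelope weight-memory estimate for an 8B-parameter model.
# Assumptions: 4-bit quantized weights; KV cache and overhead not counted.
params = 8e9           # 8 billion parameters
bits_per_param = 4     # 4-bit quantization
weight_bytes = params * bits_per_param / 8
weight_gib = weight_bytes / 2**30
print(f"{weight_gib:.1f} GiB")  # roughly 3.7 GiB of weights
```

That leaves a few gigabytes for the OS and other apps, which is why an 8B model is about the practical ceiling on an 8GB machine.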
edit: just tried it again after some time and it works better than I remembered. Showcase:
VSCode + Photoshop + Illustrator + Discord + Arc + Chrome + screen recording, and still no lag.
I have a MacBook Air M2 with 8GB of RAM and I can even run ollama. I've never had RAM problems; I don't get all the hate.
vegetarians
you're right
federated and decentralized aren't the same thing
Google has Google Cloud and Microsoft has Azure