this post was submitted on 28 Jan 2025
875 points (94.5% liked)

memes

Office space meme:

"If y'all could stop calling an LLM "open source" just because they published the weights... that would be great."

[–] Naia@lemmy.blahaj.zone 12 points 2 days ago (1 child)

For neural nets, the method matters more. Data would be useful, but at the scale these models are trained on, the specific data matters little.

They can be trained on almost anything, and a sufficiently diverse dataset would produce a model that functions more or less the same as one trained on a different but equally diverse set. Assuming publicly available data is in the set, there would also be overlap.

The training data is also, by necessity, orders of magnitude larger than the model itself. Sharing it becomes impractical at a certain scale, before you even factor in other issues.
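To put rough numbers on that (purely illustrative assumptions for a back-of-envelope sketch, not the specs of any particular model):

```python
# Back-of-envelope comparison of training-data size vs. model size.
# All figures are illustrative assumptions: a 7B-parameter model stored
# as 16-bit floats, trained on ~2 trillion tokens of text at roughly
# 4 bytes of raw text per token.

params = 7e9
bytes_per_param = 2                      # fp16 weights
model_bytes = params * bytes_per_param   # ~14 GB

tokens = 2e12
bytes_per_token = 4                      # rough average for English text
data_bytes = tokens * bytes_per_token    # ~8 TB

print(f"model: ~{model_bytes / 1e9:.0f} GB")
print(f"data:  ~{data_bytes / 1e12:.0f} TB")
print(f"data/model ratio: ~{data_bytes / model_bytes:.0f}x")
```

Even with generous assumptions, the raw text comes out hundreds of times larger than the weights, which is the practical obstacle to shipping the dataset alongside the model.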

[–] Poik@pawb.social 3 points 1 day ago

That... doesn't align with years of research. Data is king. As someone who specifically studies long-tail distributions and few-shot learning (before succumbing to long COVID, so sorry if my response is a bit scattered), throwing more data at a problem always improves it more than changing the method, and the method can be simplified only with more data. The exceptions are some neat tricks that modern deep learning has dismissed as hogwash and "classical", but most of those don't scale enough for what's being looked at here.

Also, datasets inherently impose bias on the networks trained on them, and it's easier to create adversarial examples that fool two networks trained on the same data than to fool the same network trained twice, from fresh initializations, on different data.
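A toy sketch of the adversarial-example idea (my own construction on made-up data, not the cross-network experiments from the literature): two tiny logistic-regression "networks" are fit on different draws from the same distribution, and an FGSM-style perturbation crafted against one is then tested on the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, rng):
    # Two Gaussian blobs centered at -1 and +1, labels -1 / +1.
    x_neg = rng.normal(loc=-1.0, scale=1.0, size=(n, 2))
    x_pos = rng.normal(loc=+1.0, scale=1.0, size=(n, 2))
    x = np.vstack([x_neg, x_pos])
    y = np.concatenate([-np.ones(n), np.ones(n)])
    return x, y

def train_logreg(x, y, lr=0.1, steps=500):
    # Plain gradient descent on the logistic loss mean(log(1 + exp(-y * x@w))).
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        margins = y * (x @ w)
        grad = -(x.T @ (y / (1.0 + np.exp(margins)))) / len(y)
        w -= lr * grad
    return w

x1, y1 = make_data(200, rng)   # model 1's training set
x2, y2 = make_data(200, rng)   # a different draw for model 2
w1 = train_logreg(x1, y1)
w2 = train_logreg(x2, y2)

# FGSM-style attack on a clean positive-class point: step against the
# sign of model 1's weights to push its score negative. eps is large
# here purely so the flip is obvious.
x_clean = np.array([1.5, 1.5])
eps = 2.0
x_adv = x_clean - eps * np.sign(w1)

print("clean score (model 1):", x_clean @ w1)
print("adv   score (model 1):", x_adv @ w1)
print("adv   score (model 2):", x_adv @ w2)
```

Because both models were fit to draws from the same distribution, the perturbation crafted against the first tends to fool the second as well; that shared, data-induced vulnerability is the kind of bias being described.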

Sharing metadata and acquisition methods is important and should be the gold standard. Sharing network methods is also important, but that's more like the silver standard, simply because most modern state-of-the-art models differ so minutely from each other in performance nowadays.

"Open source" as a term should require both. This was the standard in the academic community before tech bros started running their mouths, and it should be the standard again once they leave us alone.