this post was submitted on 24 Jan 2024
270 points (90.9% liked)
Open Source
They're black boxes because that's how LLMs work. Nothing new here; neural networks have basically been black boxes for a long time.
Sure, but nothing is theoretically stopping them from documenting every single data source fed into training and then crediting it later.
For some reason they chose not to do that, of course.
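To illustrate the point about documenting sources: a minimal provenance log could be as simple as appending one row per ingested document to a manifest, keyed by a content hash, so attribution is possible later. This is a hypothetical sketch; the function name, manifest layout, and fields are all assumptions, not anyone's actual pipeline.

```python
import csv
import hashlib

# Hypothetical sketch: record provenance for every document fed into
# training so sources can be credited later. All names are assumptions.
def log_training_source(manifest_path, doc_text, source_url, license_name):
    """Append one provenance row: content hash, origin URL, license."""
    digest = hashlib.sha256(doc_text.encode("utf-8")).hexdigest()
    with open(manifest_path, "a", newline="") as f:
        csv.writer(f).writerow([digest, source_url, license_name])
    return digest

# Usage: call once per document during dataset assembly.
h = log_training_source("train_manifest.csv", "some document text",
                        "https://example.org/doc", "CC-BY-4.0")
```

The content hash lets you later match a generated output back to candidate source rows without storing the full text in the manifest.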
Llama and Stability AI published their sources, did they not?