this post was submitted on 09 Aug 2023
Science Fiction
I said:
The original bucket containing the blue marble isn't going anywhere. It still exists. The blue marble will always be available to mix into future AIs. All you have to do is make sure you're using some historical data (or otherwise guaranteed "human-generated") along with whatever new unvetted stuff you're using.
So then you're back to locking LLMs to the year 2023. Their usefulness is severely limited if you can't train them on new data.
Emphasis added. Please read more carefully; this is getting repetitive. You keep assuming that the AI will be trained either entirely on old data or entirely on new data, and that's just not the case.
And what happens when "whatever new unvetted stuff" is primarily comprised of AI-generated content?
Then the missing diversity comes from the non-AI-generated stuff that's included in the mix.
I'm not sure what the problem is here. The cause of model collapse when AIs are fed on the output of previous generations is that the rare "fringes" of the data are lost over time. The training data becomes increasingly monotonous. Adding that fringe data back in should cure that.
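To make the argument concrete, here's a toy sketch (not an actual training pipeline — the category distribution, the 4% cutoff, and the 20% human-data fraction are all invented for illustration). Each "generation" drops rare categories below a frequency cutoff, modeling how fringe data is lost when models train purely on prior model output. Mixing a fixed fraction of the original human-generated distribution back in keeps the fringe categories present in every generation's training mix:

```python
def next_generation(dist, cutoff=0.04):
    """Model one AI-on-AI training step: categories rarer than
    `cutoff` are lost, and the rest are renormalized."""
    kept = {c: p for c, p in dist.items() if p >= cutoff}
    total = sum(kept.values())
    return {c: p / total for c, p in kept.items()}

def mix(ai_dist, human_dist, human_frac=0.2):
    """Blend AI-generated data with the original human-generated
    'bucket', reintroducing whatever the AI side has lost."""
    cats = set(ai_dist) | set(human_dist)
    return {c: (1 - human_frac) * ai_dist.get(c, 0.0)
               + human_frac * human_dist.get(c, 0.0)
            for c in cats}

# Hypothetical human-data distribution with a few "fringe" categories.
human = {"common": 0.90, "rare_a": 0.05, "rare_b": 0.03, "rare_c": 0.02}

# Pure AI-on-AI loop: fringe categories disappear and never come back.
pure = dict(human)
for _ in range(5):
    pure = next_generation(pure)

# Mixed loop: each generation's training data blends 20% human data
# back in, so the fringe categories survive every round.
mixed = dict(human)
for _ in range(5):
    mixed = mix(next_generation(mixed), human)
```

After five generations the pure loop is down to two categories, while the mixed loop still contains all four — the rare ones at reduced but nonzero frequency, because the original bucket keeps resupplying them.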