[-] afraid_of_zombies@lemmy.world 4 points 8 months ago* (last edited 8 months ago)

There are like a billion hours of YouTube videos out there, plus the entire Library of Congress. I'm not seeing the issue.

[-] gapbetweenus@feddit.de 3 points 8 months ago

Wasn't there a paper not long ago showing that it's possible to generate training data for an AI with another AI? I was surprised (the math is too much for me to check myself), but that seems to solve the problem.

[-] realharo@lemm.ee 3 points 8 months ago

As far as I know, that is mainly used where a bigger, better model generates training data for a smaller, more efficient model, to bring it a bit closer to the bigger model's level.
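For anyone curious what that looks like, here's a minimal toy sketch of the idea (my own illustration, not any specific system): a "big" teacher model labels unlabeled inputs, and a "small" student model is trained on those synthetic labels instead of on ground truth.

```python
import math
import random

random.seed(0)

def teacher(x):
    # Stand-in for an expensive, accurate model.
    return math.sin(x)

# Unlabeled inputs; the teacher generates the training targets.
xs = [random.uniform(-0.5, 0.5) for _ in range(200)]
ys = [teacher(x) for x in xs]

# Student: a one-parameter linear model y = w*x, fitted by
# closed-form least squares on the teacher's labels.
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(round(w, 3))  # close to 1.0, since sin(x) ≈ x near 0
```

The student never sees real labels, only the teacher's outputs, which is why it can approach but not exceed the teacher on this data.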

Were there any cases of an already state-of-the-art model using this method to improve itself?

[-] gapbetweenus@feddit.de 1 points 8 months ago* (last edited 8 months ago)

I will search for the paper.

EDIT: can't find it, dang.

[-] General_Effort@lemmy.world 1 points 8 months ago

Sorta. This "model collapse" thing is basically an urban legend at this point.

The kernel of truth is this: a model learns stuff. When you use that model to generate training data, it will not output everything it has learned. The second-generation model will not know as much as the first. If you repeat this process a couple of times, you are left with nothing. It's hard to see how this could become a problem in the real world.

Incest is a good analogy, if you know what the problem with inbreeding is: you lose genetic diversity. Still, breeders use this to get desired traits, and so does nature (genetic bottlenecks, the founder effect).
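You can see the diversity-loss mechanism in a toy simulation (my own sketch, not from any paper): treat the "model" as just an empirical distribution over a vocabulary, and train each generation only on samples from the previous one. A token that is never sampled vanishes for good, so diversity can only shrink, exactly like a genetic bottleneck.

```python
import random
from collections import Counter

random.seed(42)

vocab = list(range(100))
corpus = [random.choice(vocab) for _ in range(500)]

diversity = []
for generation in range(10):
    counts = Counter(corpus)               # "train": fit the empirical distribution
    tokens, weights = zip(*counts.items())
    diversity.append(len(tokens))          # distinct tokens the model still knows
    # "generate": the next generation trains only on synthetic samples
    corpus = random.choices(tokens, weights, k=500)

print(diversity)  # never increases; rare tokens drop out generation by generation
```

Because a generation can only contain tokens its parent generation produced, the count of distinct tokens is monotonically non-increasing, no matter the seed.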

[-] gapbetweenus@feddit.de 2 points 8 months ago

Training data for models in general was a big problem when I studied systems biology. Interesting that we're finding workarounds, since the limitation sounded rather fundamental to me. I found your metaphor helpful, thanks.

[-] jacksilver@lemmy.world 3 points 8 months ago

I wouldn't say we've really found a workaround. AI companies hire lots of people to parse and clean data. That can work for things like pose estimation, which are largely one-and-done. But for things that are constantly evolving, like language, art, and video, it may not be a viable long-term strategy.

[-] Uranium3006@kbin.social 3 points 8 months ago

Now that the low-hanging fruit of internet scraping is exhausted, we're gonna have to start purpose-building datasets. This will be expensive and might become the new bottleneck on AI progress.

[-] PoliticallyIncorrect@lemmy.world 0 points 8 months ago* (last edited 8 months ago)

The AIrmageddon..

this post was submitted on 28 Feb 2024
463 points (97.5% liked)

Technology
