this post was submitted on 22 Aug 2023
766 points (95.9% liked)

Technology


OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling's Harry Potter series: A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

[–] Eccitaze@yiffit.net 4 points 1 year ago (2 children)

If Google took samples from millions of different songs that were under copyright and created a website that allowed users to mix them together into new songs, they would be sued into oblivion before you could say "unauthorized reproduction."

You simply cannot compare a single person memorizing a book to corporations feeding literally millions of pieces of copyrighted material into a blender and acting like the resulting sausage is fine because "only a few rats fell into the vat, what's the big deal."

[–] jadegear@lemm.ee 1 points 1 year ago (1 children)
[–] AlexisLuna@lemmy.blahaj.zone 2 points 1 year ago (1 children)
[–] player2@lemmy.dbzer0.com 2 points 1 year ago* (last edited 1 year ago) (1 children)

The analogy talks about mixing samples of music together to make new music, but that's not what is happening in real life.

The computers learn human language from the source material, but they are not referencing the source material when creating responses. They create new, original responses which do not appear in any of the source material.

[–] Cethin@lemmy.zip 5 points 1 year ago (1 children)

"Learn" is debatable in this usage. It is trained on data and the model creates a set of values that you can apply that produce an output similar to human speach. It's just doing math though. It's not like a human learns. It doesn't care about context or meaning or anything else.

[–] player2@lemmy.dbzer0.com 0 points 1 year ago

Okay, but in the context of this conversation about copyright I don't think the learning part is as important as the reproduction part.

[–] Touching_Grass@lemmy.world -2 points 1 year ago* (last edited 1 year ago) (1 children)

Google crawls every link available on all websites to index them and serve them to people. That's a better example. It's legal, and it's up to the websites to protect their stuff.
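For what it's worth, the standard way sites "protect their stuff" from crawlers is a robots.txt file at the site root. A hypothetical example that blocks OpenAI's GPTBot crawler while still allowing Google's:

```
User-agent: GPTBot
Disallow: /

User-agent: Googlebot
Allow: /
```

Compliance is voluntary on the crawler's part, which is part of why this debate exists.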

[–] Cethin@lemmy.zip 1 points 1 year ago (2 children)

It's not a problem that it reads something. The problem is when the thing it produces breaks copyright. Google Search isn't producing something; it reads everything in order to link you to the original copyrighted work. If it read everything and then just spat it back out on its own, instead of sending you to the original creators, that wouldn't be OK.

[–] Touching_Grass@lemmy.world 1 points 1 year ago

How is it reproducing the works?

[–] Schadrach@lemmy.sdf.org 1 points 1 year ago

The blurb it puts out in the search results is much more directly "spitting out what's read" than anything an LLM does. As are most other sorts of results that appear on the front page of a Google search.