this post was submitted on 09 Jul 2023
448 points (96.9% liked)
Two authors sued OpenAI, accusing the company of violating copyright law. They say OpenAI used their work to train ChatGPT without their consent.

[–] trial_and_err@lemmy.world 16 points 1 year ago (1 children)

ChatGPT has entire books memorised. You can (or at least could when I tried a few weeks back) make it print entire pages of, for example, Harry Potter.

[–] ThoughtGoblin@lemm.ee 5 points 1 year ago (1 children)

Not really, though it's hard to know exactly what is or isn't encoded in the network. It has likely retained the more salient and highly referenced content, since those passages come up in its training set more often. But entire works are basically impossible, just because of the sheer ratio between the size of the training data and the size of the resulting model. Not to mention that GPT's mode of operation mostly discourages long-form rote memorization. It's a statistical model, after all, and the enemy of "objective" state.
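As a rough back-of-envelope (with purely illustrative numbers, not OpenAI's actual figures), the weights only have on the order of a byte of capacity per training token, and that same capacity also has to cover everything else the model can do, so verbatim retention is really only plausible for text that recurs many times in the data:

```python
# Back-of-envelope: how much weight "capacity" per training token does a model have?
# All numbers below are illustrative assumptions, not OpenAI's actual figures.
params = 175e9            # assumed parameter count
bytes_per_param = 2       # fp16 weights
training_tokens = 300e9   # assumed number of training tokens
bytes_per_token_text = 4  # rough bytes of raw text per token

model_bytes = params * bytes_per_param
text_bytes = training_tokens * bytes_per_token_text

print(f"model weights:      ~{model_bytes / 1e9:.0f} GB")
print(f"training text:      ~{text_bytes / 1e9:.0f} GB")
print(f"capacity per token: ~{model_bytes / training_tokens:.1f} bytes")
```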

Furthermore, GPT isn't coherent enough for long-form content. With its small context window, it has trouble keeping track of something as big as a book. And since it has no "senses" beyond text broken into tokens, concepts like pages or "how many" give it trouble.

None of the leaked system prompts really mention "don't reveal copyrighted information" either, so it seems the creators aren't particularly concerned, which you'd expect them to be if the model did have this tendency. It's more likely to make up entire pieces of content from the summaries it does remember.

[–] trial_and_err@lemmy.world 5 points 1 year ago* (last edited 1 year ago) (2 children)

Have you tried instructing ChatGPT?

I’ve tried:

“Act as an e book reader. Start with the first page of Harry Potter and the Philosopher's Stone”

The first pages checked out, at least. I just tried again, but responses are coming back extremely slowly at the moment, so I can't verify it right now. It appears to stop after the heading; that definitely wasn't the case before, when I was able to browse through the pages.

It may be a statistical model, but ultimately nothing prevents that model from overfitting, i.e. memorizing its training data.
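For anyone who wants to reproduce the test, this is roughly what it looks like against the API. A minimal sketch only: the model name is just whatever you have access to, and the prompt is the wording I happened to use.

```python
# Minimal sketch of the "e-book reader" test via the OpenAI API.
# Assumes the `openai` Python package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

prompt = ("Act as an e book reader. Start with the first page of "
          "Harry Potter and the Philosopher's Stone")

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # whichever chat model you have access to
    messages=[{"role": "user", "content": prompt}],
)

# If the model has memorised the opening, this prints something close to the
# real first page; in my latest attempt it stops after the chapter heading.
print(resp.choices[0].message.content)
```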

[–] McArthur@lemmy.world 1 points 1 year ago (1 children)

Wait... isn't that the correct response, though? I mean, if I ask an AI to produce something copyright-infringing, it should do so; for example, reproducing Harry Potter on request. The real issue is when it's asked to produce something new (e.g. a story about wizards living secretly in the modern world): does it infringe on copyright without telling you? That is certainly a harder question to answer.

[–] fhein@lemmy.world 1 points 1 year ago

I think they're seeing this as a traditional copyright infringement issue, i.e. they don't want anyone to be able to make copies of their work intentionally either.

[–] ThoughtGoblin@lemm.ee 1 points 1 year ago

I use it all day at my job now. Ironically, on a specialization more likely to overfit.

It may be a statistical model, but ultimately nothing prevents that model from overfitting, i.e. memorizing its training data.

This seems to imply not only that entire books accidentally got downloaded and slipped past the automated copyright checks, but that it happened so often that the model saw the same text so many times it overwhelmed other content and, without error and at great opportunity cost, baked an entire book into the weights. And that it was rewarded for doing so.