this post was submitted on 20 Sep 2023
552 points (95.5% liked)

[–] archomrade@midwest.social 5 points 1 year ago

Yea, I mean I get why automated tools are bad for companies, I just don't have any sympathy for them, nor do I think we should be stretching our laws beyond their intent to protect them from competition. I think the fair-use exemptions to the DMCA (such as for breaking digital copy protection for personal use) are comparable here. Under those exemptions, for example, it's considered fair use to rip a DVD into a digital file as long as it's for personal use. An IP holder could argue that practice "eats into their potential future profits" from individuals who may want a digital version of a media product, but it's still protected. In that case, the value to the consumer is prioritized over a company's dubious copyright claim.

In my mind, a ChatGPT short story is not a true alternative to an original creative work (an individual can't use GPT to read ASOIAF, only derivative short stories), and the works that GPT CAN produce are somewhat valueless to an individual who hasn't already read the original. Only if they were to take those short stories and distribute them (like someone ripping a DVD and sharing the file with friends and family) could 'damages' really be assumed.

I think the outcome of these lawsuits can also help inform what we should do: LLMs as a tool will not go away at this point, so the biggest outcome of this kind of litigation would be inflating the cost of producing an LLM and inflating the value of the "data" necessary to train one. This locks out future competitors and preemptively consolidates the market into established hands (twitter, reddit, facebook, and google already "own" the data their users have signed over to them in their TOS). Now is the time to rethink copyright and creative compensation models, not double down on our current system.

I really hope the judges overseeing these cases can see the implications here.