That's something that can currently be done by a human and is generally considered fair use. All a language model really does is drive the cost of doing that from tens or hundreds of dollars down to pennies.
A fair use defense does not have to include noncompetition. That's just one factor in a fair use analysis, and the other factors may be enough on their own.
I think it'll come down to how "the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes" and "the amount and substantiality of the portion used in relation to the copyrighted work as a whole" are interpreted by the courts. Do we judge a language model by the model itself or by its output? Can a model itself be non-infringing while still being able to produce infringing content?
The model is intended for commercial use, uses the entire work, and creates derivative works based on it that are in direct competition with the original.
You are kind of hitting on one of the issues I see. The model and the works created by the model may be considered two separate things. The model may not be infringing in and of itself: it's not substantially similar to any of the individual training data, and I don't think anyone can point to part of it and say this is a copy of a given work. But the model may still be able to create works that are infringing.
This may not actually be true, though. If it's a Q&A interface, it's very unlikely they are training the model on the entire work (since model training is extremely expensive and done infrequently). Now sure, maybe they actually are training on NYT articles, but a similarly powerful LLM could exist without training on those articles and still answer questions about them.
Suppose you wanted to make your own Bing Chat. If you tried to answer questions based entirely on what the model was trained on, you'd get crap results, because the model may not have seen any new data in over two years. More likely, you're using retrieval-augmented generation (RAG) to select portions of articles, generally the ones from your search results, and provide them as context to your LLM.
Also, the argument that these are derivative works seems a bit iffy. Derivative works use substantial portions of the original work, but generally speaking a Q&A interface like this would be purely generative. With certain carefully crafted prompts it may be able to reproduce portions of the original work, but assuming they're using RAG, it's extremely unlikely it would generate the exact same content that's in the article, because it wouldn't be using the entirety of the article for generation anyway.
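To make that concrete, here's a minimal sketch of a RAG flow in Python. The names (search_web, llm_generate, Passage) are hypothetical stand-ins, not any real product's API; the point is just that the answer is generated from snippets retrieved at query time, not recalled verbatim from the model's training data.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str

def search_web(query: str, top_k: int = 3) -> list[Passage]:
    """Stand-in retriever: a real system would query a search index
    and return article snippets relevant to the query."""
    return [Passage(text="(retrieved article snippet)") for _ in range(top_k)]

def llm_generate(prompt: str) -> str:
    """Stand-in for a call to a language model."""
    return "(answer paraphrased from the supplied context)"

def answer_question(question: str) -> str:
    # 1. Retrieve: pull a handful of relevant snippets, e.g. from
    #    search results, rather than relying on training data alone.
    passages = search_web(question, top_k=3)

    # 2. Augment: pack the snippets into the prompt as context.
    context = "\n\n".join(p.text for p in passages)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

    # 3. Generate: the model writes an answer grounded in that
    #    context; its weights need not have been trained on it.
    return llm_generate(prompt)

print(answer_question("What did the article say about fair use?"))
```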
How is this any different from a person scanning an article and writing their own summary based on what they read? Is doing so a violation of copyright, and if so, aren't news outlets especially notorious for doing this (writing articles based on the articles put out by other news outlets)?
Edit: I should add as well that search engines have been indexing (and training models on) the content they crawl for years, and that never seemed to cause anyone to complain about copyright. It's interesting to me that it's suddenly a problem now.
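For anyone unfamiliar with what "indexing" involves, here's a toy illustration (not any search engine's actual pipeline) of the inverted index a crawler builds. Like a trained model, it's a derived artifact of the crawled text rather than a republication of it.

```python
from collections import defaultdict

def build_index(documents: dict[str, str]) -> dict[str, set[str]]:
    """Map each word to the set of documents containing it."""
    index: dict[str, set[str]] = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

# Hypothetical crawled snippets, purely for illustration.
docs = {
    "article-1": "court weighs fair use of language models",
    "article-2": "language models raise copyright questions",
}
index = build_index(docs)
print(index["language"])  # both articles contain "language"
```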
That's kind of the point, though, isn't it? Fair use is only fair use because it's a human doing it, not an algorithm.
That is not actually one of the criteria for fair use in the US right now. Maybe that'll change, but it'll take a court case or legislation to change it.
I am aware of that, but those rules were written before technology like this was conceivable.
I think there's a good case that it's entirely transformative. It doesn't just spit out NYT articles. The claim that they "stole IP" from the NYT doesn't really hold up, because that would mean anyone who read the NYT and then wrote any kind of article also engaged in IP theft, since their consumption of the NYT almost certainly influenced their writing in some way. (I think the same argument holds, though to a weaker degree, for generative image AI; that seems a bit different because it sometimes directly copies the actual brushstrokes etc. of real artists, and there are also only so many ways to arrange words.)
It is, however, an entirely new thing, so for now it's up to judges to rule on how that works.
I have it on good authority that the writers of the NYT have also read other newspapers before. This blatant IP theft goes deeper than we could have ever imagined.
Yeah, we need to get this in front of the Supreme Court.