this post was submitted on 09 Jan 2025
274 points (95.4% liked)

Technology

60340 readers
4184 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; to ask whether your bot can be added, please contact us.
  9. Check for duplicates before posting; duplicates may be removed.

Approved Bots


founded 2 years ago
MODERATORS
[–] NuXCOM_90Percent@lemmy.zip 14 points 15 hours ago* (last edited 15 hours ago) (2 children)

Friendly reminder: Deleting your account won't accomplish what you think it will.

Facebook will still keep all data that is associated with other users, as per their own disclaimer. They also still keep logs that are "disassociated with personal identifiers."

So all training can still occur. And understand that while Jane Smith may have deleted her account, they still have all the data it takes to indicate that User 12345 was tagged in photos with John Smith at the Burger King on 404 Fake St. And, because of that, the data that User 12345 had previously provided is ALSO John Smith's data. And Fred Wilkerson's, since he was at that Burger King once. And so forth.
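The mechanics here can be sketched in a few lines. This is a purely hypothetical toy model, not Facebook's actual schema: "deleting" an account removes the personal identifier, but tags and co-occurrence records stored against other users' data survive under the bare numeric ID.

```python
# Hypothetical sketch of why account "deletion" leaves a usable graph.
# All names, IDs, and structures are illustrative assumptions.

profiles = {12345: "Jane Smith", 67890: "John Smith"}

# Tags reference numeric user IDs, not names, and belong to the photo
# (i.e. to other users' data), not to any one profile.
photo_tags = [
    {"photo": 1, "users": [12345, 67890], "place": "Burger King, 404 Fake St"},
]

def delete_account(user_id):
    # "Deletion" removes the personal identifier...
    profiles.pop(user_id, None)
    # ...but records associated with other users are retained.

delete_account(12345)

# Jane's name is gone, yet the co-occurrence data survives:
remaining = [t for t in photo_tags if 12345 in t["users"]]
print(len(remaining))
```

The point of the sketch: the pseudonymous ID still links User 12345 to John Smith and to a location, so the "deleted" data is still perfectly serviceable for training.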

And ALL that data is still there for training.

So do what you gotta do to make it less appealing to other users. But understand your data is already out there and is never going away. Same with Reddit and all other social media (which includes Lemmy).

[–] RubberElectrons@lemmy.world 25 points 15 hours ago (2 children)

Yeah but you know what? That's still better than actively engaging with their "services".

Eventually, it'll just be bots interacting with themselves, given enough time.

[–] AbidanYre@lemmy.world 4 points 12 hours ago* (last edited 12 hours ago)

Eventually, it'll just be bots interacting with themselves, given enough time.

It seems like that's a good chunk of it already.

[–] NuXCOM_90Percent@lemmy.zip 4 points 14 hours ago

Yes. Like I said. Do what you gotta do to make it less appealing to other users.

But if, for example, you are an LGBTQIA+ person who thinks this will provide any form of protection...

[–] cygnus@lemmy.ca 11 points 14 hours ago (1 children)

If you're in the US, sure. If you're in Europe you can compel them to completely delete everything as per the GDPR.

[–] NuXCOM_90Percent@lemmy.zip 14 points 14 hours ago (1 children)

And I am sure a company that is now openly training their LLMs on copyrighted materials is going to totally comply with all of that...

One of these days people are going to learn "But it is against the law" doesn't apply to the rich and powerful, law enforcement, or megacorporations.

[–] LainTrain@lemmy.dbzer0.com -3 points 11 hours ago* (last edited 4 hours ago) (1 children)

Training LLMs on copyrighted material isn't illegal to begin with, just as learning from a pirated book isn't, or having drugs in your system isn't; only being in possession of those things is illegal.

GDPR violations, on the other hand, are illegal. You're right in principle, don't get me wrong, and I appreciate your healthy cynicism, but in this particular case being slapped with a GDPR fine is simply not worth keeping the data of one user.

Edit: Downvoted for being right as usual. Bruh Lemmy is becoming more and more like Reddit every day.

[–] grue@lemmy.world 4 points 10 hours ago* (last edited 10 hours ago) (1 children)

Training LLMs on copyright material isn’t illegal to begin with

Reproducing identifiable chunks of copyrighted content in the LLM's output is copyright infringement, though, and that's what training on copyrighted material leads to. Of course, that's the other end of the process and it's a tort, not a crime, so yeah, you make a good point that the company's legal calculus could be different.

[–] LainTrain@lemmy.dbzer0.com 0 points 4 hours ago* (last edited 4 hours ago) (1 children)

Thank you, I'm glad someone is sane ITT.

To further refine the point, do you know of any lawsuits that were successfully ruled on the basis that, as you say, the company that made the LLM is responsible because someone could prompt it to reproduce identifiable chunks of copyrighted material? Which specific statutes make it so?

Wouldn't it be like suing Seagate because I use their hard drives to pirate corpo media? I thought Sony Corp. of America v. Universal City Studios, Inc. would serve as the basis there, and that, just as with Betamax, it'd be the distribution of copyrighted material by an end user that would be problematic, rather than a product's potential to be used for copyright infringement.

[–] grue@lemmy.world 1 points 2 hours ago* (last edited 2 hours ago)

I’m glad someone is sane ITT.

https://www.youtube.com/watch?v=uY9z2b85qcE

To be clear, I think it ought to be the case that at least "copyleft" GPL code can't be used to train an LLM without requiring that all output of the LLM become GPL (which, if said GPL training data were mixed with proprietary training data, would likely make the model legally unusable in total). AFAIK it's way too soon for there to be a precedent-setting court ruling about it, though.

In particular...

I thought Sony Corp. of America v. Universal City Studios, Inc. would serve as the basis there

...I don't see how this has any relevance at all, since the whole purpose of an LLM is to make new, arguably derivative, works on an industrial scale, not just single copies for personal use.