574 points · submitted 07 Mar 2024 by Kory@lemmy.ml to c/technology@lemmy.world

I know there are other ways of accomplishing that, but this might be a convenient way of doing it. I'm wondering though if Reddit is still reverting these changes?

[-] lvxferre@mander.xyz 89 points 8 months ago

Let's pretend for a moment that we know that Reddit has any sort of decent versioning system, and that it keeps the old versions of your comments alongside the newer ones, and that it's feeding the LLM with the old version. (Does it? I have my doubts, given that Reddit Inc. isn't exactly competent.)

Even then, I think that it's sensible to use this tool, to scorch the earth and discourage other human users from adding their own content to that platform. It still means less data for Google to say "it's a bunch of users, who cares about the intellectual property of those filthy things? Their data is now my data. Feed it ~~to the wolves~~ to Gemini".

[-] T156@lemmy.world 31 points 8 months ago* (last edited 8 months ago)

> Let’s pretend for a moment that we know that Reddit has any sort of decent versioning system, and that it keeps the old versions of your comments alongside the newer ones, and that it’s feeding the LLM with the old version. (Does it? I have my doubts, given that Reddit Inc. isn’t exactly competent.)

They almost certainly do, if only because of the practicalities of adding a new comment and having that be fetched in place of the old one, compared to making and propagating an edit across all their databases. With exceptions, it'd be a bit easier to store each edit as an additional comment with an incremented version number and always fetch the latest version, rather than needing to scan through the entire database to make changes.
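
A rough sketch of what that append-only approach could look like (the table and column names here are just guesses for illustration, not Reddit's actual schema):

```python
import sqlite3

# Hypothetical append-only schema: every edit becomes a new row,
# nothing is overwritten, and readers just fetch the highest version.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE comment_versions (
        comment_id INTEGER,
        version    INTEGER,
        body       TEXT,
        edited_at  TEXT,
        PRIMARY KEY (comment_id, version)
    )
""")

def add_version(comment_id: int, body: str, edited_at: str) -> None:
    # An "edit" is just the next version number, not an in-place update.
    (latest,) = conn.execute(
        "SELECT COALESCE(MAX(version), 0) FROM comment_versions WHERE comment_id = ?",
        (comment_id,),
    ).fetchone()
    conn.execute(
        "INSERT INTO comment_versions VALUES (?, ?, ?, ?)",
        (comment_id, latest + 1, body, edited_at),
    )

def latest_body(comment_id: int) -> str:
    # Readers only ever see the most recent version.
    (body,) = conn.execute(
        "SELECT body FROM comment_versions WHERE comment_id = ? "
        "ORDER BY version DESC LIMIT 1",
        (comment_id,),
    ).fetchone()
    return body

add_version(1, "original comment", "2019-06-01")
add_version(1, "edited comment", "2024-03-07")
print(latest_body(1))  # -> "edited comment"
```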

It would also help with any administration/moderation tasks if they could see whether people posted rule-breaking content and then tried to hide it behind edits.

That said, one of the many Spez controversies did show that they are capable of making actual edits on the back end if they wished.

[-] lvxferre@mander.xyz 21 points 8 months ago

> They almost certainly do, if only because of the practicalities of adding a new comment

If this is true, it shifts the problem from "not having it" to "not knowing which version should be used" (to train the LLM).

They could feed it the unedited versions and call it a day, but a lot of the time people edit their content to correct it or add further info, especially for "meatier" content (like tutorials). So there's still some value in the edits, and I believe that Google will be at least tempted to use them.

If that's correct, editing it with nonsense will lower the value of edited comments for the sake of LLM training. It should have an impact, just not as big as if they kept no version system.

> It would also help with any administration/moderation tasks if they could see whether people posted rule-breaking content and then tried to hide it behind edits.

I know from experience (I'm a former Reddit janny) that moderators can't see earlier versions of the content, only the last one. The admins might though.

> That said, one of the many Spez controversies did show that they are capable of making actual edits on the back end if they wished.

The one from TD, right?

  • spez: "let them babble their violent rhetoric. Freeze peaches!"
  • also spez: "nooo they're casting me in a bad light. I'm going to edit it!"
[-] GBU_28@lemm.ee 6 points 8 months ago

Wouldn't be hard to scan a user and say (rough sketch after the list):

  • they existed for 5 years.
  • they made something like 5 comments a day. They edit 1 or 2 comments a month.
  • then randomly on March 7th 2024 they edited 100% of all comments across all subs.
  • use comment version March 6th 2024
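
A toy version of that check, with completely made-up data shapes and thresholds, just to show the idea:

```python
from collections import Counter
from datetime import date, timedelta

def find_mass_edit_day(edit_events, total_comments, threshold=0.5):
    """edit_events: (comment_id, edit_date) pairs for one user.
    Return the first day on which the user edited more than `threshold`
    of their whole comment history, or None if no such spike exists."""
    edits_per_day = Counter(day for _, day in edit_events)
    for day, n_edits in sorted(edits_per_day.items()):
        if n_edits / total_comments >= threshold:
            return day
    return None

def training_cutoff(edit_events, total_comments):
    """If a mass-edit event is detected, only trust versions from the
    day before it (the 'March 7th -> use March 6th' idea)."""
    spike = find_mass_edit_day(edit_events, total_comments)
    return spike - timedelta(days=1) if spike else None

# A user who made ~9000 comments over 5 years, then edited all of them
# at once on 2024-03-07:
events = [(i, date(2024, 3, 7)) for i in range(9000)]
print(training_cutoff(events, total_comments=9000))  # -> 2024-03-06
```
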
[-] lvxferre@mander.xyz 6 points 8 months ago

It would.

First you'd need to notice the problem. Does Google even realise that some people want to edit their Reddit content to boycott LLM training?

Let's say that Google did it. Then it'd need to come up with a good (generalisable, low amount of false positives, low amount of false negatives) set of rules to sort those out. And while coming up with "random" rules is easy, good ones take testing, trial and error, and time.

But let's say that Google still does it. Now it's retrieving and processing a lot more info from the database than just the content and its context: account age, when the piece of content was submitted, when it was edited.

So doing it still increases the costs associated with the corpus, making it less desirable.

[-] GBU_28@lemm.ee 3 points 8 months ago

Huh? Reddit has all of this plus changes in their own DBs. Google has nothing to do with this, it's pre-handover.

[-] lvxferre@mander.xyz -1 points 8 months ago* (last edited 8 months ago)

I'm highlighting that having the data is not enough, if you don't find a good way to use the data to sort the trash out. Google will need to do it, not Reddit; Reddit is only handing the data over.

Is this clear now? If you're still struggling to understand it, refer to the context provided by the comment chain, including your own comments.

[-] GBU_28@lemm.ee 1 points 8 months ago* (last edited 8 months ago)

I'm saying reddit will not ship a trashed deliverable. Guaranteed.

Reddit will have already preprocessed for this type of data damage. This is basic data engineering; finding events in the data and understanding a time series of events is trivial.

Google will be receiving data that is uncorrupted, because they'll get data properly versioned to before the damaging event.

If a high edit event happens on March 7th, they'll ship March 7th - 1d. Guaranteed.

Edit to be clear: you're ignoring/not accepting the practice of noting a high volume of edits per user as an event, and using that timestamped event as a signal of data validity.
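
i.e. once that event is timestamped, choosing which version to ship is just a filter (again a toy sketch with invented field names):

```python
from datetime import date

def version_to_ship(versions, cutoff):
    """versions: (edited_at, body) pairs for one comment, oldest first.
    Ship the last version saved before the user's mass-edit cutoff."""
    surviving = [body for edited_at, body in versions if edited_at < cutoff]
    return surviving[-1] if surviving else None

history = [
    (date(2019, 6, 1), "useful tutorial text"),
    (date(2020, 1, 15), "useful tutorial text, now with a fix"),
    (date(2024, 3, 7), "lorem ipsum scorched-earth garbage"),
]
print(version_to_ship(history, cutoff=date(2024, 3, 7)))
# -> "useful tutorial text, now with a fix"
```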

[-] lvxferre@mander.xyz -1 points 8 months ago* (last edited 8 months ago)

> I’m saying reddit will not ship a trashed deliverable. Guaranteed.

Nobody said anything about the database being trashed. What I'm saying is that the database is expected to have data unfit for LLM training, that Google will need to sort out, and Reddit won't do it for Google.

> Reddit will have already preprocessed for this type of data damage.

Do you know it, or are you assuming it?

If you know it, source it.

If you're assuming, stop wasting my time with shit that you make up and your "huuuuh?" babble.

[-] GBU_28@lemm.ee 1 points 8 months ago* (last edited 8 months ago)

I know it because I've worked in corporate data engineering and large data migrations, and it would be abnormal to do anything else. There's a full review of test data, a scope of work, an acceptance period, etc.

You think reddit doesn't know about these utilities? You think Google doesn't?

You need to chill out and acknowledge how an industry works. I'm sure you are convinced but your idea of things isn't how the industry works.

I don't need to explain to you that the sky is blue. And I shouldn't need to explain to you that Google isn't going to accept a damaged product, and that reddit can do some basic querying and time-series manipulation.

Edit: it's like you literally asked for a textbook.

[-] lvxferre@mander.xyz 0 points 8 months ago* (last edited 8 months ago)

> I know it because I’ve worked in corporate data migrations

In other words: "I dun have sauce, I'm assooming, but chruuuust me lol"

At this rate it's safe to simply ignore your comments as noise. I'm not wasting further time with you.

[-] GBU_28@lemm.ee 2 points 8 months ago* (last edited 8 months ago)

Seems like people are voting your comment as noise but whatever.

You are trying to prove something normal ISN'T happening. I'm describing normal industry behavior.

Seems like you need to prove an abnormal sitch is occurring.

Edit: it's like you're asking for proof that they'll build stairs with a handrail.

[-] Voroxpete@sh.itjust.works 1 points 8 months ago

It sounds like what's needed here is a version of this tool that makes the edits slowly, at random intervals, over a period of time. And perhaps has the ability to randomize the text in each edit so that they're all unusable garbage, but different unusable garbage (like the suggestion of taking ChatGPT output at really high temp that someone else made). Maybe it also only edits something like 25% of your total comment pool, and perhaps makes unnoticeably minor edits (add a space, remove a comma) to a whole bunch of other comments. Basically masking the poison by hiding it in a lot of noise?
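
A rough sketch of what that scheduling might look like (edit_comment() and the comment fields are placeholders, not any real tool's API):

```python
import random
import time

def poison_slowly(comments, edit_comment, poison_fraction=0.25,
                  min_delay=3600, max_delay=86400):
    """comments: list of {'id': ..., 'body': ...} dicts (made-up shape).
    edit_comment(id, new_body) stands in for whatever call the real tool uses."""
    words = ["quartz", "umbrella", "seventeen", "noodle", "perpendicular",
             "gravy", "modem", "lighthouse"]
    random.shuffle(comments)
    cutoff = int(len(comments) * poison_fraction)
    for i, comment in enumerate(comments):
        if i < cutoff:
            # Different unusable garbage for every poisoned comment.
            new_body = " ".join(random.choices(words, k=random.randint(20, 60)))
        else:
            # Unnoticeably minor edit: tweak whitespace/punctuation as noise.
            new_body = comment["body"].rstrip(",") + " "
        edit_comment(comment["id"], new_body)
        # Spread the edits over days/weeks so there is no single spike.
        time.sleep(random.randint(min_delay, max_delay))
```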

[-] GBU_28@lemm.ee 2 points 8 months ago* (last edited 8 months ago)

Now you're talkin'.

An intra-comment edit threshold would be fun to explore.

[-] londos@lemmy.world 5 points 8 months ago

Honestly, parsing through version history is actually something an LLM could handle. It might even make more sense of the data with the history than without it; for example, if someone replies to a comment and then the parent is edited to say something different. No one will have to waste their time filtering anything.

[-] lvxferre@mander.xyz 3 points 8 months ago* (last edited 8 months ago)

They could use an LLM to parse through the version history of all those posts/comments, to use it to train another LLM with it. It sounds like a bad (and expensive, processing time-wise) idea, but it could be done.

EDIT: thinking further on this, it's actually fairly doable. It's generally a bad idea to feed the output of an LLM into another, but in this case you're simply using it to pick one among multiple versions of a post/comment made by a human being.
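
Roughly what that "LLM as version picker" step could look like (ask_llm() is just a stand-in for whatever model call they'd actually use):

```python
def pick_best_version(versions, ask_llm):
    """versions: list of strings, oldest first. ask_llm(prompt) is a
    placeholder for a real model call that returns text."""
    numbered = "\n\n".join(f"[{i}] {v}" for i, v in enumerate(versions))
    prompt = (
        "These are successive versions of one forum comment. Reply with "
        "only the number of the most coherent, informative version.\n\n"
        + numbered
    )
    answer = ask_llm(prompt)
    try:
        return versions[int(answer.strip())]
    except (ValueError, IndexError):
        # If the model answers garbage, fall back to the original version.
        return versions[0]
```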

It's still worth scorching the earth though, so other human users don't bother with the platform.

[-] reksas@sopuli.xyz 12 points 8 months ago

What if we edit the comments slowly, a few words or even letters at a time? Then, if they save all of the edits, they will end up with a lot of pointless versions. And if they don't, the buffer will eventually get full and the original gets lost.

[-] lvxferre@mander.xyz 6 points 8 months ago

I'll ping @lemmyvore@feddit.nl because the answer is relevant for both.

Another user mentioned the possibility that they could use an LLM to sort this shit out. If that's correct neither slow edits nor multiple edits will do much, as the LLM could simply pick the best version of each comment.

And while it's a bit silly to use LLM to sort data out to train another LLM, this sounds like the sort of shit that Google could and would do.

[-] chalupapocalypse@lemmy.world 7 points 8 months ago

Let's also pretend that reddit isn't a cesspool of bots, marketing campaigns, foreign agents, incels, racists, Republicans, gun nuts, shit posters, trolls...the list goes on.

Is it even that valuable? It didn't take long for that Microsoft bot to turn into Hitler, feeding reddit into an "AI" is like speed running Ultron.

[-] lvxferre@mander.xyz 4 points 8 months ago

It's still somewhat valuable due to the size of the corpus (it's huge) and because people used to share technical expertise there.

[-] lemmyvore@feddit.nl 7 points 8 months ago

Even if they had comment versioning, who's gonna dig through the versions to figure out which ones are nonsense? Just use the overwrite tool several times and then wish them good luck.

[-] Murdoc@sh.itjust.works 2 points 8 months ago

I'm guessing, the AI? Seems like a job it'd be good at.

[-] bluekieran@lemmy.world 1 points 8 months ago* (last edited 8 months ago)

Last version of comment within 24 hours of it being posted initially. So, probably one line of code.
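
Something in that spirit, assuming the versions come as (edited_at, body) pairs sorted oldest first:

```python
from datetime import timedelta

# versions: (edited_at, body) pairs, oldest first, including the original;
# posted_at: when the comment was first submitted.
def pick_version(versions, posted_at):
    return [b for t, b in versions if t <= posted_at + timedelta(hours=24)][-1]
```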
