this post was submitted on 03 Sep 2024
1576 points (97.8% liked)

[–] countablenewt@allthingstech.social -1 points 2 months ago (3 children)

@zbyte64 data quality, again, was out of the scope of what I was talking about originally

Which, again, was that legal precedent suggests the *how* is largely irrelevant in copyright cases; courts mostly focus on the *why* and the *scale of the operation*

I’m not getting sued for copyright infringement by the NYT because I used inspect element to delete the paywall overlay and read their content; OpenAI is

[–] zbyte64@awful.systems 0 points 2 months ago (2 children)

I was narrowly taking issue with the comparison to how humans learn; I really don't care about copyrights.

[–] countablenewt@allthingstech.social 0 points 2 months ago (1 children)

@zbyte64 where am I wrong? The process is effectively the same: you get a set of training data (a textbook) and a set of validation data (a test), and voilà, I’m trained

To learn how to draw an image of a thing, you look at the thing a lot (training data) and try sketching it out (validation data) until it’s right

How the data is acquired is irrelevant: I can pirate the textbook or trespass to find a particular flower, but that doesn’t mean I’m learning differently than someone who paid for it
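The train-then-validate loop described in this comment could be sketched, very loosely, as toy Python. Everything here is illustrative — `LookupModel`, `train`, and `evaluate` are made-up names for the sketch, not anything from a real training pipeline:

```python
class LookupModel:
    """Toy 'model' that simply memorizes input-output pairs it has seen."""

    def __init__(self):
        self.table = {}

    def update(self, x, y):
        self.table[x] = y  # study an example from the "textbook"

    def predict(self, x):
        return self.table.get(x)


def evaluate(model, data):
    """Fraction of validation examples ('test questions') answered correctly."""
    correct = sum(1 for x, y in data if model.predict(x) == y)
    return correct / len(data)


def train(model, training_data, validation_data, epochs=10):
    """Fit on training_data, checking progress against validation_data."""
    for _ in range(epochs):
        for x, y in training_data:
            model.update(x, y)                 # read the textbook
        if evaluate(model, validation_data) == 1.0:
            break                              # passed the test; stop
    return model
```

Nothing in the loop cares where `training_data` came from — which is the commenter's point: the mechanics of learning are the same whether the textbook was bought or pirated.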

[–] zbyte64@awful.systems 1 points 2 months ago* (last edited 2 months ago)

Do we assume everything we read in a textbook is correct? When we get feedback on a drawing, do we accept that feedback as always correct and applicable? We filter and groom the data for the AI so it doesn't need to learn to do that itself.
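A hedged sketch of what "filtering and grooming" can mean in practice: dropping examples with missing or mutually contradictory labels before the model ever sees them — judgment a human learner has to exercise on their own. The function name and rules are purely illustrative:

```python
def groom(dataset):
    """Keep only examples whose label is present and consistent.

    `dataset` is a list of (input, label) pairs. An example is dropped if
    its label is missing, or if it contradicts an earlier label for the
    same input (first occurrence wins).
    """
    seen = {}
    cleaned = []
    for x, y in dataset:
        if y is None:
            continue                    # missing label: drop
        if x in seen and seen[x] != y:
            continue                    # contradicts an earlier label: drop
        seen.setdefault(x, y)
        cleaned.append((x, y))
    return cleaned
```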