[–] NaibofTabr@infosec.pub 1 points 2 weeks ago (5 children)

We're not talking about a "style", we're talking about producing finished work. The image generation models aren't style guides; they output final images produced from other images ingested as training data. The source material might be actual art (or not), but it is generally the product of a real person (because ML ingesting its own output is very much a garbage-in, garbage-out system) who is typically not compensated for their work. So again, these generative ML models are ripoff systems, and nothing more. And no, typing in a prompt doesn't count as innovation or creativity.

[–] VintageGenious@sh.itjust.works 2 points 2 weeks ago (4 children)

Generative AI is not only prompting, which shows you don't know how it works. Who are you to decide what counts as creativity and innovation? Are you Mr Art?

Anyway, it is not ingesting images and photobashing them into a final picture; that's not how it works. The model keeps no memory of the training images. Instead, it learns to generate by repeated trial: during training, its weights are nudged in whatever direction makes its output closer to the training images. So it can create in the same style, but it does not contain the original images.
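To make that concrete, here is a minimal, hypothetical sketch of a DDPM-style training step (the toy network, image size, and hyperparameters are stand-ins, not any real model). Note that the images only ever influence the loss and therefore the weights; nothing stores their pixels:

```python
# Hypothetical, simplified DDPM-style training step (illustration only).
import torch
import torch.nn as nn

T = 1000                                        # diffusion timesteps
betas = torch.linspace(1e-4, 0.02, T)           # noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal retention

model = nn.Sequential(                          # toy noise-prediction network
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
    nn.Linear(256, 3 * 32 * 32),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images: torch.Tensor) -> float:
    """One gradient update: the images shape the weights, but no pixels are stored."""
    b = images.shape[0]
    t = torch.randint(0, T, (b,))                        # random timestep per image
    noise = torch.randn_like(images)                     # the target to predict
    a = alphas_bar[t].view(b, 1, 1, 1)
    noisy = a.sqrt() * images + (1 - a).sqrt() * noise   # corrupt the image
    pred = model(noisy).view_as(images)                  # predict the added noise
    loss = ((pred - noise) ** 2).mean()                  # denoising objective
    opt.zero_grad()
    loss.backward()                                      # "go more in that direction"
    opt.step()
    return loss.item()

batch = torch.randn(8, 3, 32, 32)  # stand-in batch; a real run loops over a dataset
print(training_step(batch))
```

After training, the model's entire "memory" of the data is the set of weight values this loop produces.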

[–] NaibofTabr@infosec.pub 0 points 2 weeks ago (3 children)

I see, so your argument is that because the training data is not stored in the model in its original form, it doesn't count as a copy, and therefore it doesn't constitute intellectual property theft. I had never really understood what the justification for this point of view was, so thanks for that; it's a bit clearer now. It's still wrong, but at least it makes some kind of sense.

If the model "has no memory of training data images", then what effect is it that the images have on the model? Why is the training data necessary, what is its function?

[–] Even_Adder@lemmy.dbzer0.com 2 points 2 weeks ago

Here's a video explaining how diffusion models work, and this article by Kit Walsh, a senior staff attorney at the EFF.
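The generation side makes the same point: sampling starts from pure noise and consults only the learned weights, never the training set. A minimal sketch, assuming a standard DDPM-style sampler (`noise_pred` is a hypothetical stand-in for a trained network):

```python
# Hypothetical, simplified DDPM-style sampler (illustration only).
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_bar = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample(noise_pred, shape=(1, 3, 32, 32)) -> torch.Tensor:
    """Generate an image from scratch; no training image is read at any point."""
    x = torch.randn(shape)                 # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = noise_pred(x, t)             # the network's guess of the noise in x
        mean = (x - betas[t] / (1 - alphas_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = mean + betas[t].sqrt() * torch.randn_like(x)
        else:
            x = mean
    return x

# With a real trained `noise_pred` this yields a novel image; a dummy for shape-checking:
img = sample(lambda x, t: torch.zeros_like(x))
print(img.shape)
```

The only inputs at generation time are random noise and the learned weights, which is why the model can imitate a style without holding copies of the images it trained on.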
