[–] mke@programming.dev 1 points 9 hours ago* (last edited 9 hours ago) (1 children)

That's an interesting enough idea in theory, so here's my take on it, in case you want one.

Yes, it sounds magical, but:

  • AI sucks at "make it more X." It doesn't understand "scary," so you'll get worse crops of the training data, not meaningful changes.
  • It's prohibitively expensive and infeasible on the majority of consumer hardware.
  • Even if it gets a thousand times cheaper and better at its job, is GenAI really the best way to do this?
  • Is it the only one? Are alternatives also built on exploitation? If they aren't, I think you should reconsider.
[–] Lumiluz@slrpnk.net 1 points 3 hours ago* (last edited 3 hours ago)

• Ok, I know people's research skills have declined greatly over the years, but using "knowyourmeme" as a source? Really?

• You can now run optimized open-source diffusion models on an iPhone, and it's been possible for years. I use that as an example because yes, there are models that can easily run on an Nvidia 1060 these days. Those models are more than enough to handle incremental changes to an image in-game (a rough sketch of what I mean follows this list).

• It already has, for a while now, as demonstrated by it being able to run on an iPhone. But yes, it's probably the best way to get an uncanny valley effect in certain paintings in a horror game, as the alternatives would be:

  • spending many hours manually making hundreds of incremental changes to all the paintings yourself (and there will be a limit to how much they warp, and this assumes you have the art skills in the first place)
  • hiring someone to do what I just mentioned (which assumes you have a decent amount of money), and that's still limited, of course.

• I'll call an open-source model exploitation the day someone can accurately generate an exact work it was trained on, not in a single generation but within at least 10. I have actually looked into this myself, unlike seemingly most people on the internet. Last I checked, the closest was an image with 90-something percent similarity, produced by an algorithm that modified the prompt over thousands of generations. I can find that research paper if you want, but there may be newer research out there.
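
To make the "incremental changes" point concrete, here's a rough sketch of the kind of low-strength img2img pass I mean. The model id, prompt, and strength values are placeholder assumptions on my part, not a recommendation, and this is only an illustration:

```python
# Sketch: nudging a painting texture with repeated low-strength img2img
# passes, so each step only slightly warps the image instead of
# replacing it. Model id, prompt, and strength are placeholders.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # any small SD checkpoint
    torch_dtype=torch.float16,
).to("cuda")                             # an older card like a 1060 works, just slower

painting = Image.open("painting_00.png").convert("RGB")

# Each pass reuses the previous output, so the distortion accumulates
# gradually over the course of the game.
for step in range(1, 6):
    painting = pipe(
        prompt="the same oil painting, slightly more unsettling and distorted",
        image=painting,
        strength=0.15,        # low strength = small change per step
        guidance_scale=6.0,
    ).images[0]
    painting.save(f"painting_{step:02d}.png")
```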
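
And since "90-something percent similarity" does a lot of work in that last point: the papers define their own metrics, but as an illustrative stand-in (SSIM is my assumption here, and the file names are hypothetical), a check could look roughly like this:

```python
# Sketch: comparing a generated image against a suspected training image.
# SSIM is only an illustrative similarity metric, not what the papers use.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def similarity(path_a: str, path_b: str, size=(512, 512)) -> float:
    a = np.asarray(Image.open(path_a).convert("L").resize(size), dtype=float)
    b = np.asarray(Image.open(path_b).convert("L").resize(size), dtype=float)
    return ssim(a, b, data_range=255.0)

print(f"similarity: {similarity('generated.png', 'training_original.png'):.1%}")
```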