[-] Saeculum@hexbear.net 9 points 1 month ago

They've been able to do hands fine for months now.

[-] sweatersocialist@hexbear.net 11 points 1 month ago

i've yet to see this proven true and i see ai bullshit daily on facebook

[-] Saeculum@hexbear.net 14 points 1 month ago

The Facebook stuff is mostly old Stable Diffusion models or DALL-E, because they're free and relatively easy to use. Midjourney and the newer Stable Diffusion models get hands right most of the time, and have an inpainting feature so you can tell the computer to redo that bit when they don't.
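The inpainting feature mentioned above boils down to regenerating only a masked region and blending it back into the original image. A minimal sketch of that final compositing step (not the diffusion sampling itself; the function name and toy arrays are illustrative assumptions, not any library's API):

```python
import numpy as np

def composite_inpaint(original: np.ndarray,
                      regenerated: np.ndarray,
                      mask: np.ndarray) -> np.ndarray:
    """Blend a freshly generated patch into the original image.

    mask is 1.0 where the model should redo the image (e.g. a bad hand)
    and 0.0 where the original pixels are kept untouched.
    """
    mask = mask[..., np.newaxis]  # broadcast the mask over RGB channels
    return mask * regenerated + (1.0 - mask) * original

# Toy 2x2 RGB "images": keep the left column, replace the right column.
original = np.zeros((2, 2, 3))
regenerated = np.ones((2, 2, 3))
mask = np.array([[0.0, 1.0],
                 [0.0, 1.0]])

result = composite_inpaint(original, regenerated, mask)
print(result[:, 1].sum())  # right column fully replaced -> 6.0
```

Real inpainting pipelines also feed the mask into the sampler so the model conditions on the kept pixels, but the user-facing idea is this: only the masked bit gets rerolled.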

[-] PoY@lemmygrad.ml 1 point 1 month ago

flux seems to do a pretty decent job most of the time

[-] amemorablename@lemmygrad.ml 7 points 1 month ago

Yes and no. It's not a solved problem so much as a worked-around problem. Diffusion models struggle with parts that are especially small and would normally need precision to look right. Some tech does better on this by increasing the resolution (so that otherwise small parts come out bigger) and/or by tuning the model so that it's stiffer in what it can do, but some of the worst renders are less likely.

In other words, fine detail is still a problem in diffusion models. Hands are related to it some of the time, but they're not the whole of it. Hands were kind of a symptom of the fine-detail problem, and making hands better hasn't fixed that underlying problem (at least not entirely, and fixing it entirely might not be possible within the diffusion architecture). So it's more like they've treated the symptom.
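The resolution point above is just pixel arithmetic: doubling the image side quadruples the pixel budget for any part that occupies a fixed fraction of the frame. A quick sketch (the one-eighth-of-frame fraction for a hand is an illustrative assumption, not a measurement):

```python
# Rough pixel budget for a small part (say, a hand) at two render sizes.
def part_pixels(image_side: int, part_fraction: float = 1 / 8) -> int:
    """Pixels covered by a square part spanning part_fraction of the frame."""
    side = image_side * part_fraction
    return int(side * side)

low = part_pixels(512)    # 64 x 64 = 4096 pixels for the part
high = part_pixels(1024)  # 128 x 128 = 16384 pixels for the same part
print(high / low)         # doubling resolution -> 4x the detail budget
```

More pixels per part means the model has more room to place knuckles and fingers plausibly, which is why higher-resolution pipelines tend to produce fewer mangled hands even without any hand-specific fix.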

this post was submitted on 10 Aug 2024

Technology

A tech news sub for communists

founded 2 years ago