this post was submitted on 13 Jun 2024
12 points (87.5% liked)

AI Generated Images
top 3 comments
[–] j4k3@lemmy.world 3 points 5 months ago* (last edited 5 months ago)

I expect SD3 to turn out irrelevant, the way SD2 did, but I guess we'll see. I see no value in learning or training SD3 when I can't actually use it for anything.

[–] 3volver@lemmy.world 2 points 5 months ago

SD3 seems even worse than SD1.5 in some ways, though it's better at certain things, like rendering text.

[–] j4k3@lemmy.world 4 points 5 months ago

New base models usually seem that way at first. I think it comes down to the massive amount of data available initially: the model is broad but unbiased. I picture it like all the shelves of an enormous library lined up along one giant wall; it is very capable and full of information, but lacks any kind of focus.

Also, when training the big models, as far as I understand it, they train until they find the sweet spot and then go past it. Once they know how far along the curve counts as "overtrained," they go back to a point a little before the peak, so that fine-tuning lands the model at the peak.
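A minimal sketch of that checkpoint-selection idea, assuming you have a validation-loss value per saved checkpoint (the `margin` parameter and loss values are hypothetical, just for illustration):

```python
# Sketch: given validation losses per checkpoint, pick the checkpoint a few
# steps *before* the best one, leaving headroom for later fine-tuning.
def pick_checkpoint(val_losses, margin=2):
    """Return the index `margin` steps before the minimum-loss checkpoint."""
    best = min(range(len(val_losses)), key=val_losses.__getitem__)
    return max(0, best - margin)

losses = [1.9, 1.4, 1.1, 0.9, 0.85, 0.9, 1.0]  # made-up loss curve
print(pick_checkpoint(losses))  # index 2: two steps before the minimum at index 4
```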

The raw checkpoints usually lack focus in both positive and negative directions. It's similar to how LLMs need really good instructions to help Name-2 understand its own role. If you try to define this role, even with diffusion, you're likely to get a change, maybe an improvement. Something like this may help: "You are a helpful generative AI that follows the prompt details exactly.\n\nPrompt:". That actually helped with Pony when I tried it on the base checkpoint recently.
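The role-preamble trick above is just string prepending before the prompt reaches the pipeline; a tiny sketch (the helper name and example prompt are made up):

```python
# Hypothetical helper: prepend an instruction-style role preamble to a
# diffusion prompt before handing it to whatever pipeline you use.
ROLE = "You are a helpful generative AI that follows the prompt details exactly."

def wrap_prompt(prompt, role=ROLE):
    """Prefix the prompt with a role preamble, separated by a blank line."""
    return f"{role}\n\nPrompt: {prompt}"

print(wrap_prompt("a watercolor fox in a forest"))
```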