MEGATHREAD (lemmy.dbzer0.com)

This is a copy of /r/stablediffusion wiki to help people who need access to that information


Howdy and welcome to r/stablediffusion! I'm u/Sandcheeze and I have collected these resources and links to help you enjoy Stable Diffusion, whether you are here for the first time or looking to add more customization to your image generations.

If you'd like to show support, feel free to send us kind words or check out our Discord. Donations are appreciated, but not necessary as you being a great part of the community is all we ask for.

Note: The community resources provided here are not endorsed, vetted, nor provided by Stability AI.

#Stable Diffusion

Local Installation

Active Community Repos/Forks to install on your PC and keep it local.

Online Websites

Websites with usable Stable Diffusion right in your browser. No need to install anything.

Mobile Apps

Stable Diffusion on your mobile device.

Tutorials

Learn how to improve your skills in using Stable Diffusion, whether you are a beginner or an expert.

Dream Booth

How to train a custom model, plus resources on doing so.

Models

Specially trained towards certain subjects and/or styles.

Embeddings

Tokens trained on specific subjects and/or styles.

Bots

Either bots you can self-host, or bots you can use directly on various websites and services such as Discord, Reddit, etc.

3rd Party Plugins

SD plugins for programs such as Discord, Photoshop, Krita, Blender, Gimp, etc.

Other useful tools

#Community

Games

  • PictionAIry : (Video|2-6 Players) - The image guessing game where AI does the drawing!

Podcasts

Databases or Lists

Still updating this with more links as I collect them all here.

FAQ

How do I use Stable Diffusion?

  • Check out our guides section above!

Will it run on my machine?

  • Stable Diffusion requires a GPU with 4GB+ of VRAM to run locally, but much beefier graphics cards (10, 20, 30 series Nvidia cards) are necessary to generate high resolution or high step images. Anyone can also run it online through DreamStudio or by hosting it on their own GPU compute cloud server.
  • Only Nvidia cards are officially supported.
  • AMD support is available here unofficially.
  • Apple M1 Chip support is available here unofficially.
  • Intel based Macs currently do not work with Stable Diffusion.

How do I get a website or resource added here?

If you have a suggestion for a website or a project to add to our list, or if you would like to contribute to the wiki, please don't hesitate to reach out to us via modmail or message me.

submitted 9 hours ago* (last edited 7 hours ago) by AdComfortable1514@lemmy.world to c/stable_diffusion@lemmy.dbzer0.com

Created by me.

Link : https://huggingface.co/codeShare/JupyterNotebooks/blob/main/sd_token_similarity_calculator.ipynb

How does this work?

Similar vectors = similar output in the SD 1.5 / SDXL / FLUX models.

CLIP converts the prompt text to vectors (“tensors”), with float32 values usually ranging from -1 to 1.

Dimensions are [1x768] tensors for SD 1.5, and [1x768, 1x1024] tensors for SDXL and FLUX.

The SD models and FLUX convert these vectors to an image.

This notebook takes an input string, tokenizes it, and matches the first token against the 49407 token vectors in vocab.json: https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/tokenizer

It finds the “most similar tokens” in the list. Similarity is the angle theta between the token vectors.

The angle is calculated using cosine similarity, where 1 = 100% similarity (parallel vectors) and 0 = 0% similarity (perpendicular vectors).

Negative similarity is also possible.
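
Here is a rough sketch of that lookup in Python, using the transformers library (the model id openai/clip-vit-large-patch14 and the helper names are assumptions for illustration, not necessarily the exact code in the notebook):

```python
# Sketch: find the vocab tokens most similar to the first token of an input
# string, using cosine similarity over the CLIP token embedding matrix.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

model_id = "openai/clip-vit-large-patch14"  # assumed; the SD 1.5 text encoder
tokenizer = CLIPTokenizer.from_pretrained(model_id)
text_model = CLIPTextModel.from_pretrained(model_id)

# The token embedding matrix: one 1x768 vector per entry in vocab.json.
vocab_vectors = text_model.text_model.embeddings.token_embedding.weight.detach()

def most_similar(prompt: str, top_k: int = 10):
    # Take the first "real" token (index 1 skips the <|startoftext|> token).
    token_id = tokenizer(prompt)["input_ids"][1]
    query = vocab_vectors[token_id]

    # Cosine similarity = cos(theta) between the query and every vocab vector.
    sims = torch.nn.functional.cosine_similarity(query.unsqueeze(0), vocab_vectors, dim=-1)
    values, ids = sims.topk(top_k)
    return [(tokenizer.decode([int(i)]), round(float(v) * 100, 2)) for i, v in zip(ids, values)]

print(most_similar("girl"))  # top hit is "girl" itself at 100%, then nearby tokens
```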

How can I use it?

If you are bored of prompting “girl” and want something similar, you can run this notebook and use the “chick” token at 21.88% similarity, for example.

You can also run a mixed search, like “cute+girl”/2, where for example “kpop” has a 16.71% similarity.
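
A mixed search like “cute+girl”/2 can be sketched as averaging the two token vectors before ranking. This reuses the tokenizer and vocab_vectors from the sketch above and is only an illustration of the idea:

```python
def mixed_search(prompt_a: str, prompt_b: str, top_k: int = 10):
    # Average the two token vectors, e.g. "cute+girl"/2, then rank the vocab.
    id_a = tokenizer(prompt_a)["input_ids"][1]
    id_b = tokenizer(prompt_b)["input_ids"][1]
    query = (vocab_vectors[id_a] + vocab_vectors[id_b]) / 2

    sims = torch.nn.functional.cosine_similarity(query.unsqueeze(0), vocab_vectors, dim=-1)
    values, ids = sims.topk(top_k)
    return [(tokenizer.decode([int(i)]), round(float(v) * 100, 2)) for i, v in zip(ids, values)]

print(mixed_search("cute", "girl"))  # tokens such as "kpop" may show up in the list
```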

There are some strange tokens the further down the list you go. Example: tokens similar to the token "pewdiepie" (yes, this is an actual token that exists in CLIP).

Each of these corresponds to a unique 1x768 token vector.

The higher the ID value, the less often the token appeared in the CLIP training data.

To reiterate: this is the CLIP model's training data, not the SD model's training data.

So for certain models, tokens with high IDs can give very consistent results, if the SD model is trained to handle them.

An example of this is anime models, where Japanese artist names can affect the output greatly.

Tokens with high ID will often give the "fun" output when used in very short prompts.

What about token vector length?

If you are wondering about token magnitude: prompt weights like (banana:1.2) will scale the magnitude of the corresponding 1x768 tensor(s) by 1.2. That is how prompt token magnitude works.

Source: https://huggingface.co/docs/diffusers/main/en/using-diffusers/weighted_prompts

So TL;DR: vector direction = “what to generate”, vector magnitude = “prompt weights”.
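
As a small sanity check of that TL;DR (again reusing names from the first sketch): scaling a token vector changes its magnitude but not its direction, so cosine similarity is unaffected:

```python
# A prompt weight like (banana:1.2) roughly scales the 1x768 vector by 1.2;
# cosine similarity only looks at the angle, so the direction is unchanged.
banana = vocab_vectors[tokenizer("banana")["input_ids"][1]]
weighted = banana * 1.2

print(torch.linalg.norm(banana).item(), torch.linalg.norm(weighted).item())   # magnitudes differ by 1.2x
print(torch.nn.functional.cosine_similarity(banana, weighted, dim=0).item())  # 1.0: same direction
```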

How prompting works (technical summary)

  1. There is no correct way to prompt.

  2. Stable Diffusion reads your prompt left to right, one token at a time, finding associations from the previous token to the current token and to the image generated thus far (Cross-Attention Rule)

  3. Stable Diffusion is an optimization problem that seeks to maximize similarity to the prompt and minimize similarity to the negatives (Optimization Rule)

Reference material (it covers all of SD, so it is not a focused source, but the info is there): https://youtu.be/sFztPP9qPRc?si=ge2Ty7wnpPGmB0gi

The SD pipeline

For every step (20 in total by default) for SD 1.5:

  1. Prompt text => (tokenizer)
  2. => Nx768 token vectors =>(CLIP model) =>
  3. 1x768 encoding => ( the SD model / Unet ) =>
  4. => Desired image per Rule 3 => ( sampler)
  5. => Paint a section of the image => (image)
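
For reference, the chain above is roughly what a diffusers StableDiffusionPipeline wraps up for you. A minimal, illustrative sketch (the checkpoint id and prompts are placeholders, not taken from the notebook):

```python
# Minimal sketch of the SD 1.5 pipeline described above, using diffusers.
# Internally it runs: tokenizer -> CLIP text encoder -> UNet denoising loop
# (20 steps here) -> VAE decode to an image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative SD 1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="photo of a banana",
    negative_prompt="blurry, low quality",  # Rule 3: minimize similarity to negatives
    num_inference_steps=20,                 # the 20 default steps mentioned above
).images[0]
image.save("banana.png")
```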

Disclaimer / Trivia

This notebook should be seen as a "dictionary search tool" for the vocab.json, which is the same for SD 1.5, SDXL, and FLUX. Feel free to verify this by checking the 'tokenizer' folder under each model.

vocab.json in the FLUX model, for example (1 of 2 copies): https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/tokenizer

I'm using clip-vit-large-patch14, which is used in SD 1.5 and is one of the two tokenizers for SDXL and FLUX: https://huggingface.co/openai/clip-vit-large-patch14/blob/main/README.md

This set of tokens has dimension 1x768.

SDXL and FLUX use an additional set of tokens of dimension 1x1024.

These are not included in this notebook. Feel free to include them yourselves (I would appreciate that).

To do so, you will have to download a FLUX and/or SDXL model, copy the 49407x1024 tensor list that is stored within the model, and then save it as a .pt file.
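
Here is a hedged sketch of what that extraction could look like for SDXL's second text encoder. The checkpoint id, attribute path, and resulting shape are assumptions, so verify against the model you actually download:

```python
# Sketch: pull the second text encoder's token embedding matrix out of an
# SDXL checkpoint and save it as a .pt file.
import torch
from transformers import CLIPTextModelWithProjection

# Assumed source checkpoint; the attribute path below follows the diffusers layout.
text_encoder_2 = CLIPTextModelWithProjection.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    subfolder="text_encoder_2",
)

# One embedding vector per vocab entry; the width depends on the encoder,
# so check the printed shape against what your model actually contains.
embeddings = text_encoder_2.text_model.embeddings.token_embedding.weight.detach()
print(embeddings.shape)
torch.save(embeddings, "sdxl_text_encoder_2_token_vectors.pt")
```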

//---//

I am aware it is actually the 1x768 text_encoding being processed into an image for the SD models + FLUX.

As such, I've included a text_encoding comparison at the bottom of the notebook.

I am also aware that SDXL and FLUX use additional encodings, which are not included in this notebook.

//---//

If you want them, feel free to include them yourself and share the results (cuz I probably won't) :)!

That being said, since it is an encoding, I reckon the CLIP Nx768 => 1x768 mapping should be "linear" (or whatever one might call it).

So exchange a few tokens in the Nx768 for something similar, and the resulting 1x768 ought to be kinda similar to the 1x768 we had earlier. Hopefully.

I feel it's important to mention this, in case some wonder why the token-to-token similarity doesn't match the text-encoding-to-text-encoding similarity.

Note regarding text encoding vs. token

To make this disclaimer clear: token-to-token similarity is not the same as text_encoding similarity.

I have to say this, since it will otherwise get (even more) confusing, as both the individual tokens and the text_encoding have dimensions 1x768.

They are separate things. Separate results. etc.

As such, you will not get anything useful if you start comparing similarity between a token and a text-encoding. So don't do that :)!
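
To make the distinction concrete, here is a small sketch (reusing names from the first sketch) showing that the two 1x768 objects come from different places: a single row of the vocab embedding matrix versus the pooled output of the whole text encoder:

```python
# Two different 1x768 objects that are easy to confuse:
#  - a single token's embedding (one row of the vocab embedding matrix)
#  - the pooled text_encoding of a whole prompt (output of the CLIP text model)
inputs = tokenizer("photo of a girl", return_tensors="pt")

token_vector = vocab_vectors[int(inputs["input_ids"][0, 1])]       # token embedding
text_encoding = text_model(**inputs).pooler_output[0].detach()     # pooled text_encoding

# Both are 768-dimensional, but they live in different "spaces";
# a cross similarity like this is meaningless.
print(torch.nn.functional.cosine_similarity(token_vector, text_encoding, dim=0).item())
```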

If you spot errors or have ideas for improvements, feel free to fix the code in your own notebook and post the results.

I'd appreciate that over people saying "your math is wrong you n00b!" with no constructive feedback.

//---//

Regarding output

What are the symbols?

The whitespace symbol indicates whether the tokenized item ends with whitespace (the suffix "banana" => "banana ") or not (the prefix "post" in "post-apocalyptic").

For ease of reference, I call them prefix-tokens and suffix-tokens.

Sidenote:

Prefix tokens have the unique property that they "mutate" suffix tokens.

Example: "photo of a #prefix#-banana"

where #prefix# is a randomly selected prefix-token from the vocab.json
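
A tiny sketch of generating such prompts, assuming prefix tokens are simply the vocab entries without the trailing "</w>" marker (reusing the tokenizer from the first sketch):

```python
import random

# Prefix tokens: vocab entries without the trailing "</w>" whitespace marker.
vocab = tokenizer.get_vocab()  # maps token string -> id
prefix_tokens = [t for t in vocab if not t.endswith("</w>") and t.isalpha()]

for prefix in random.sample(prefix_tokens, 5):
    print(f"photo of a {prefix}-banana")
```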

What are those gibberish tokens that show up?

The gibberish tokens like "ðŁĺħ</w>" are actually emojis!

Try writing some emojis in this online tokenizer to see the results: https://sd-tokenizer.rocker.boo/

It is a bit borked as it can't process capital letters properly.

Also note that this is not reversible.

If tokenization maps "😅" => "ðŁĺħ", then you can't prompt "ðŁĺħ" and expect to get the same result as the tokenized original emoji, "😅".
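
A quick way to see why the round trip fails (again with the tokenizer from the first sketch) is that the emoji and the literal gibberish string tokenize to different IDs:

```python
print(tokenizer.tokenize("😅"))        # byte-level pieces such as 'ðŁĺħ</w>'
print(tokenizer("😅")["input_ids"])    # the emoji's actual token IDs
print(tokenizer("ðŁĺħ")["input_ids"])  # different IDs: these are the literal characters
```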

SD 1.5 models actually have training for emojis.

But you have to set CLIP skip to 1 for this to work!

For example, this is the result from "photo of a 🧔🏻‍♂️"

That concludes a mini-tutorial on stuff you can do with the vocab.json.

Anyways, have fun with the notebook.

There might be some updates in the future with features not mentioned here.

//---//

ComfyUI v0.2.0 Release (blog.comfy.org)

Changelog

Highlights for 2024-08-31

Summer break is over and we are back with a massive update!

Support for all of the new models:

What else? Just a bit... ;)

New fast-install mode, new Optimum Quanto and BitsAndBytes based quantization modes, new balanced offload mode that dynamically offloads GPU<->CPU as needed, and more...
And from the previous service pack: new ControlNet-Union all-in-one model, support for DoRA networks, additional VLM models, and a new AuraSR upscaler

Breaking Changes...

Due to internal changes, you'll need to reset your attention and offload settings!
But... for a good reason: the new balanced offload is magic when it comes to memory utilization while sacrificing minimal performance!

Details for 2024-08-31

New Models...

To use any of the new models, simply select the model from Networks -> Reference and it will be auto-downloaded on first use

  • Black Forest Labs FLUX.1
    FLUX.1 models are based on a hybrid architecture of multimodal and parallel diffusion transformer blocks, scaled to 12B parameters and building on flow matching
    This is a very large model at ~32GB in size; it's recommended to use a) offloading, b) quantization
    For more information on variations, requirements, options, and how to download and use FLUX.1, see the Wiki
    SD.Next supports:
  • AuraFlow
    AuraFlow v0.3 is the largest fully open-sourced flow-based text-to-image generation model
    This is a very large model at 6.8B params and nearly 31GB in size; smaller variants are expected in the future
    Use scheduler: Default or Euler FlowMatch or Heun FlowMatch
  • AlphaVLLM Lumina-Next-SFT
    Lumina-Next-SFT is a Next-DiT model containing 2B parameters, enhanced through high-quality supervised fine-tuning (SFT)
    This model uses T5 XXL variation of text encoder (previous version of Lumina used Gemma 2B as text encoder)
    Use scheduler: Default or Euler FlowMatch or Heun FlowMatch
  • Kwai Kolors
    Kolors is a large-scale text-to-image generation model based on latent diffusion
    This is an SDXL-style model that replaces the standard CLIP-L and CLIP-G text encoders with a massive chatglm3-6b encoder supporting both English and Chinese prompting
  • HunyuanDiT 1.2
    Hunyuan-DiT is a powerful multi-resolution diffusion transformer (DiT) with fine-grained Chinese understanding
  • AnimateDiff
    support for additional models: SD 1.5 v3 (Sparse), SD Lightning (4-step), SDXL Beta

New Features...

  • support for Balanced Offload, thanks @Disty0!
    balanced offload will dynamically split and offload models from the GPU based on the max configured GPU and CPU memory size
    model parts that don't fit in the GPU will be dynamically sliced and offloaded to the CPU
    see Settings -> Diffusers Settings -> Max GPU memory and Max CPU memory
    note: recommended value for max GPU memory is ~80% of your total GPU memory
    note: balanced offload will force loading LoRA with Diffusers method
    note: balanced offload is not compatible with Optimum Quanto
  • support for Optimum Quanto with 8 bit and 4 bit quantization options, thanks @Disty0 and @Trojaner!
    to use, go to Settings -> Compute Settings and enable "Quantize Model weights with Optimum Quanto" option
    note: Optimum Quanto requires PyTorch 2.4
  • new prompt attention mode: xhinker, which brings support for prompt attention to new models such as FLUX.1 and SD3
    to use, enable in Settings -> Execution -> Prompt attention
  • use PEFT for LoRA handling on all models other than SD15/SD21/SDXL
    this improves LoRA compatibility for SC, SD3, AuraFlow, Flux, etc.

Changes & Fixes...

  • default resolution bumped from 512x512 to 1024x1024, time to move on ;)
  • convert Dynamic Attention SDP into a global SDP option, thanks @Disty0!
    note: requires reset of selected attention option
  • update default CUDA version from 12.1 to 12.4
  • update requirements
  • samplers now prefer the model defaults over the diffusers defaults, thanks @Disty0!
  • improve xyz grid for lora handling and add lora strength option
  • don't enable Dynamic Attention by default on platforms that support Flash Attention, thanks @Disty0!
  • convert offload options into a single choice list, thanks @Disty0!
    note: requires reset of selected offload option
  • control module allows resizing of individual process override images to match the input image
    for example: set size->before->method:nearest, mode:fixed or mode:fill
  • control tab includes superset of txt and img scripts
  • automatically offload disabled controlnet units
  • prioritize specified backend if --use-* option is used, thanks @lshqqytiger
  • ipadapter option to auto-crop input images to faces to improve efficiency of face-transfer ipadapters
  • update IPEX to 2.1.40+xpu on Linux, thanks @Disty0!
  • general ROCm fixes, thanks @lshqqytiger!
  • support for HIP SDK 6.1 on ZLUDA backend, thanks @lshqqytiger!
  • fix full vae previews, thanks @Disty0!
  • fix default scheduler not being applied, thanks @Disty0!
  • fix Stable Cascade with custom schedulers, thanks @Disty0!
  • fix LoRA apply with force-diffusers
  • fix LoRA scales with force-diffusers
  • fix control API
  • fix VAE load referencing incorrect configuration
  • fix NVML gpu monitoring

Abstract

We present GameNGen, the first game engine powered entirely by a neural model that enables real-time interaction with a complex environment over long trajectories at high quality. GameNGen can interactively simulate the classic game DOOM at over 20 frames per second on a single TPU. Next frame prediction achieves a PSNR of 29.4, comparable to lossy JPEG compression. Human raters are only slightly better than random chance at distinguishing short clips of the game from clips of the simulation. GameNGen is trained in two phases: (1) an RL-agent learns to play the game and the training sessions are recorded, and (2) a diffusion model is trained to produce the next frame, conditioned on the sequence of past frames and actions. Conditioning augmentations enable stable auto-regressive generation over long trajectories.

Paper: https://arxiv.org/abs/2408.14837

Project Page: https://gamengen.github.io/


Text:

Emad@EMostaque

Delighted to announce the public open source release of #StableDiffusion!

Please see our release post and retweet! stability.ai/blog/stable-di...

Proud of everyone involved in releasing this tech that is the first of a series of models to activate the creative potential of humanity

11:07 AM • Aug 22, 2022


Have any of you stumbled upon any information on how to get it running on machines like mine, or does it just not have enough power?


Stable Diffusion


Discuss matters related to our favourite AI Art generation technology
