this post was submitted on 08 Aug 2023

Stable Diffusion


I have been running 1.4, 1.5, and 2 without issue, but every time I try to run SDXL 1.0 (via Invoke or Auto1111) it will not load the checkpoint.

I have the official Hugging Face versions of the checkpoint, refiner, offset LoRA, and VAE. They are all named to match what the UI expects, and they are all in the appropriate folders. When I pick the model to load, it tries for about 20 seconds, then dumps a very long error in the Python console and falls back to the last model I loaded. Oddly, it loads the refiner without issue.

Is this a case of my 8 GB of VRAM just not being enough? I have tried the no-half/full-precision arguments.
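For anyone trying the same thing, a rough sketch of the usual low-VRAM launch setup for Auto1111 (the flags below are real webui options, but whether they help depends on your version; the file/path shown is just the standard Linux launcher):

```shell
# Example webui-user.sh / launch setup for an 8 GB card (a sketch, not a fix):
# --medvram      splits model components between GPU and system RAM
# --no-half-vae  keeps the VAE in full precision (SDXL's VAE is prone to
#                black/NaN outputs in half precision on some cards)
export COMMANDLINE_ARGS="--medvram --no-half-vae"
./webui.sh
```

`--lowvram` trades even more speed for memory if `--medvram` still isn't enough.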

top 8 comments
[–] whitecapstromgard@sh.itjust.works 1 points 1 year ago (1 children)

SDXL is very memory hungry. Most base models are around 6-7 GB, which doesn't leave much room for anything else.

[–] Thanks4Nothing@lemm.ee 2 points 1 year ago (1 children)

Thanks. Oddly enough, the most recent release of InvokeAI fixed the problem I was having. My 8 GB 3070 can run SDXL in about 30 seconds now, though it seems to take a little while to clear everything between generations. I want to move up to a 12/24 GB GPU, but I'm waiting/hoping for a price crash.

[–] RotaryKeyboard@lemmy.ninja 1 points 1 year ago

I had issues before I updated A1111. Do a git pull in the A1111 directory and try again.
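For reference, the update is just a pull inside the repo checkout (the path below is an example; use wherever you cloned the webui):

```shell
# Update an existing Auto1111 install in place
cd ~/stable-diffusion-webui
git pull
```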

[–] Stampela@startrek.website 1 points 1 year ago

3060 here; it might be the VRAM. SDXL eats a lot of it (and if you had, say, the VAE in the wrong spot, it would output very wrong images), so it could be that 8 GB isn't enough on its own, or isn't enough once you account for your screen resolution plus whatever else you're running, like the browser.

Or, OR: the checkpoint is corrupted. I had that happen a couple of times in the past, and the result was exactly this: a huge error on load, followed by another model being loaded instead.

[–] chicken@lemmy.dbzer0.com 1 points 1 year ago (1 children)

I'm not sure why, but I have 8 GB of VRAM and my experience matches what others describe: SDXL will not run with Auto1111, but it works with ComfyUI. So I don't think this is purely a VRAM issue.

Auto1111 might be trying to load multiple models at the same time, which it does not have room for.

[–] Novman@feddit.it 1 points 1 year ago

Nvidia has problems with the newest drivers; Auto1111 gives out-of-memory errors, while ComfyUI works smoothly with your card.