Even_Adder

joined 1 year ago
[–] Even_Adder@lemmy.dbzer0.com 3 points 2 days ago

I forgive him for being 19.

[–] Even_Adder@lemmy.dbzer0.com 4 points 2 days ago (2 children)

I won't stand for libel against Kou. He was a great test pilot who provided superb data despite his awful circumstances and went to prison fighting fascism.

 

A quantized version of ControlNet Union for Flux, for less powerful computers.

 

TL;DR

A new post-training quantization paradigm for diffusion models that quantizes both the weights and activations of FLUX.1 to 4 bits, achieving a 3.5× memory and 8.7× latency reduction on a 16GB laptop 4090 GPU (a minimal sketch of the idea follows the links below).

Paper: http://arxiv.org/abs/2411.05007

Weights: https://huggingface.co/mit-han-lab/svdquant-models

Code: https://github.com/mit-han-lab/nunchaku

Blog: https://hanlab.mit.edu/blog/svdquant

Demo: https://svdquant.mit.edu/
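
For a rough sense of the technique: SVDQuant pulls the hard-to-quantize component of each weight matrix into a small 16-bit low-rank branch, then quantizes the residual to 4 bits. Here's my own minimal, self-contained sketch of that idea in PyTorch (a simplification, not the paper's implementation: outlier smoothing is omitted, the scale is per-tensor rather than per-group, and the rank of 32 is an illustrative assumption):

```python
# Minimal sketch of the SVDQuant idea on one weight matrix (illustrative
# only): a 16-bit low-rank branch absorbs the dominant component, and the
# residual is quantized to 4 bits.
import torch

def svdquant_sketch(W: torch.Tensor, rank: int = 32):
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    L1 = U[:, :rank] * S[:rank]      # (out, rank), kept in 16-bit
    L2 = Vh[:rank]                   # (rank, in), kept in 16-bit
    residual = W - L1 @ L2

    # Naive symmetric 4-bit quantization of the residual (int4 range [-8, 7]).
    scale = residual.abs().max() / 7.0
    q = torch.clamp(torch.round(residual / scale), -8, 7).to(torch.int8)
    return L1, L2, q, scale

def dequantize(L1, L2, q, scale):
    return L1 @ L2 + q.float() * scale

W = torch.randn(64, 64)
L1, L2, q, scale = svdquant_sketch(W)
err = (W - dequantize(L1, L2, q, scale)).abs().max().item()
print(f"max reconstruction error: {err:.4f}")
```

The measured 3.5× (rather than a naive 4×) memory reduction reflects those 16-bit low-rank branches plus the parts of the pipeline that stay in higher precision.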

[–] Even_Adder@lemmy.dbzer0.com 3 points 1 week ago

Chiaroscuro Scooter.

 

Abstract

Diffusion models have demonstrated excellent capabilities in text-to-image generation. Their semantic understanding (i.e., prompt-following) ability has also been greatly improved with large language models (e.g., T5, Llama). However, existing models cannot perfectly handle long and complex text prompts, especially when the prompts contain various objects with numerous attributes and interrelated spatial relationships. While many regional prompting methods have been proposed for UNet-based models (SD1.5, SDXL), there are still no implementations based on the recent Diffusion Transformer (DiT) architecture, such as SD3 and FLUX.1. In this report, we propose and implement regional prompting for FLUX.1 based on attention manipulation, which enables DiT with fine-grained compositional text-to-image generation capability in a training-free manner. Code is available at https://github.com/instantX-research/Regional-Prompting-FLUX.

Paper: https://arxiv.org/abs/2411.02395

Code: https://github.com/instantX-research/Regional-Prompting-FLUX
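
The core trick is easy to sketch: constrain attention so each image token only attends to the text tokens of the prompt assigned to its region. Below is my own simplified illustration using plain single-head cross-attention (FLUX is an MM-DiT with joint attention, so the real implementation is more involved):

```python
# Simplified illustration of regional prompting via attention masking
# (not the InstantX implementation): each image token may only attend
# to text tokens from the prompt covering its region.
import torch
import torch.nn.functional as F

def regional_cross_attention(q, k, v, region_of_token, region_of_text):
    # q: (img_tokens, d), k/v: (txt_tokens, d)
    # region_of_token: (img_tokens,) region id per image token
    # region_of_text:  (txt_tokens,) region id per text token
    allowed = region_of_token[:, None] == region_of_text[None, :]
    scores = q @ k.T / q.shape[-1] ** 0.5
    scores = scores.masked_fill(~allowed, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Toy example: a 4-token "image", left half region 0, right half region 1,
# with two 3-token regional prompts concatenated into one text sequence.
d = 8
q, k, v = torch.randn(4, d), torch.randn(6, d), torch.randn(6, d)
region_of_token = torch.tensor([0, 0, 1, 1])
region_of_text = torch.tensor([0, 0, 0, 1, 1, 1])
out = regional_cross_attention(q, k, v, region_of_token, region_of_text)
print(out.shape)  # torch.Size([4, 8])
```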

 
  • Add Intel Core Ultra Series 2 (Lunar Lake) NPU support by @rupeshs in #277
  • Seeding improvements by @wbruna in #273
[–] Even_Adder@lemmy.dbzer0.com 6 points 1 week ago* (last edited 1 week ago) (2 children)

If you want to mess with OmniGen, it was designed for this kind of thing. The code and model were released a few days ago.

 

Details: https://github.com/Nerogar/OneTrainer/blob/master/docs/RamOffloading.md

  • Flux LoRA training on 6GB GPUs (at 512px resolution)
  • Flux Fine-Tuning on 16GB GPUs (or even less) +64GB of RAM
  • SD3.5-M Fine-Tuning on 4GB GPUs (at 1024px resolution)
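
The general idea behind this kind of offloading is straightforward: keep the model's blocks in CPU RAM and move each one to the GPU only for the duration of its own forward pass. Here's a generic, inference-flavored PyTorch sketch (not OneTrainer's implementation, which also streams asynchronously and handles gradients):

```python
# Generic sketch of layer-wise RAM offloading in PyTorch (illustrative
# only). Each block lives in CPU RAM and visits the GPU only for its
# own forward pass.
import torch
import torch.nn as nn

class OffloadedSequential(nn.Module):
    def __init__(self, blocks, device="cuda"):
        super().__init__()
        self.blocks = nn.ModuleList(blocks).to("cpu")
        self.device = device

    def forward(self, x):
        x = x.to(self.device)
        for block in self.blocks:
            block.to(self.device)   # stream weights into VRAM
            x = block(x)
            block.to("cpu")         # free VRAM for the next block
        return x

# Peak VRAM is roughly one block's weights plus activations, traded
# against PCIe transfer time on every step.
model = OffloadedSequential([nn.Linear(4096, 4096) for _ in range(8)])
if torch.cuda.is_available():
    y = model(torch.randn(2, 4096))
    print(y.shape)
```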
[–] Even_Adder@lemmy.dbzer0.com 6 points 2 weeks ago

Dandadan dropping this week is wild. It looks like Yakuza Fiancé finally caught on; watching those two is like a train wreck I can't take my eyes off of.

[–] Even_Adder@lemmy.dbzer0.com 4 points 2 weeks ago (1 children)

You're killing it with these gens.

[–] Even_Adder@lemmy.dbzer0.com 5 points 2 weeks ago

Fair use isn't a loophole; it is copyright law.

[–] Even_Adder@lemmy.dbzer0.com 7 points 2 weeks ago

Don't believe this dog, for it only tells lies.

[–] Even_Adder@lemmy.dbzer0.com 11 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I thought Mario was canonically 24?

[–] Even_Adder@lemmy.dbzer0.com 2 points 2 weeks ago

Here's a video explaining how diffusion models work, and this article by Kit Walsh, a senior staff attorney at the EFF.

 

Highlights for 2024-10-29

  • Support for all SD3.x variants
    SD3.0-Medium, SD3.5-Medium, SD3.5-Large, SD3.0-Large-Turbo
  • Allow on-the-fly quantization using bitsandbytes during model load
    Load any variant of SD3.x or FLUX.1 and apply quantization during load without the need for pre-quantized models (sketched below)
  • Allow for custom model URL in standard model selector
    Can be used to specify any model from HuggingFace or CivitAI
  • Full support for torch==2.5.1
  • New wiki articles: Gated Access, Quantization, Offloading

Plus tons of smaller improvements and cumulative fixes reported since last release

README | CHANGELOG | WiKi | Discord
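
For anyone curious what on-the-fly quantization at load time looks like outside the UI, here's a hedged sketch using diffusers with bitsandbytes directly (assuming a recent diffusers build with quantization support; SD.Next wires the equivalent up internally):

```python
# Sketch: quantize a big diffusion transformer to 4-bit NF4 while loading,
# with no pre-quantized checkpoint needed. Repo/subfolder names follow the
# standard SD3.5 layout on HuggingFace.
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel, StableDiffusion3Pipeline

nf4 = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# The transformer dominates memory, so it gets quantized during load.
transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    subfolder="transformer",
    quantization_config=nf4,
    torch_dtype=torch.bfloat16,
)
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")
```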

[–] Even_Adder@lemmy.dbzer0.com 2 points 2 weeks ago

Your comment made my day. Thanks.

[–] Even_Adder@lemmy.dbzer0.com 0 points 2 weeks ago (9 children)

Anyone spreading this misinformation and trying to gatekeep being an artist after the avant-garde movement doesn't have an ounce of education in art history. Generative art, warts and all, is a vital new form of art that's shaking things up, challenging preconceptions, and getting people angry - just like art should.
