[-] AdComfortable1514@lemmy.world 1 points 1 hour ago* (last edited 57 minutes ago)

I get it. I hope you don't interpret this as arguing against your results.

What I want to say is:

If implemented correctly, the same seed gives the same output for a given prompt.

If there is variation , then something in the pipeline must be approximating things.

This may be good (for performance) , or it may be bad.

You are 100% correct in highlighting this issue to the dev.

Though it's not a legal document or a science paper.

Just a guide to explain seeds to newbies.

Omitting non-essential information , for the sake of making the concept clearer , can be good too.

[-] AdComfortable1514@lemmy.world 1 points 3 hours ago* (last edited 3 hours ago)

The Perchance dev is correct here, Allo;

the same seed will generate the exact same picture.

If you see variation, it will be due to factors outside the SD model. That stuff happens.

But it's good that you fact check stuff.

[-] AdComfortable1514@lemmy.world 1 points 7 hours ago

Do you know where I can find documentation on the perchance API?

Specifically createPerchanceTree ?

I need to know which functions there are , and what inputs/outputs they take.

[-] AdComfortable1514@lemmy.world 2 points 22 hours ago

Thanks! I appreciate the support. Helps a lot to know where to start looking ( ; v ;)b!

2
submitted 1 day ago* (last edited 22 hours ago) by AdComfortable1514@lemmy.world to c/perchance@lemmy.world

The error disappears when any HTML element on the fusion gen page is updated.

Source: https://perchance.org/fusion-ai-image-generator

Dynamic imports plugin: https://perchance.org/dynamic-import-plugin

I set name = localStorage.name, if it exists, when running dynamicImport(name).

I didn't have this error when I first implemented the localStorage thingy.

So I suspect this is connected to some newly added feature of the dynamic imports plugin.

Ideas on solving this?

Code where I select names for dynamic import on startup (the error only occurs when opening/reloading the page):

_generator
    gen_danbooru
        fusion-t2i-danbooru-1
        fusion-t2i-danbooru-2
        fusion-t2i-danbooru-3

    gen_lyrics
        fusion-t2i-lyrics-1
        fusion-t2i-lyrics-2

...

_genKeys
    gen_danbooru
    gen_lyrics

...

// Initialize
getStartingValue(type) =>
  // For each generator key: restore the user's saved list name from localStorage,
  // or pick a random list and remember it as the new default.
  _genKeys.selectAll.forEach(function(_key) {
    document[_key] = 'fusion-t2i-empty';
    if (localStorage.getItem(_key) && localStorage.getItem(_key) != '' && localStorage.getItem(_key) != 'fusion-t2i-empty') {
      // a saved, non-empty choice exists => use it
      document[_key] = localStorage.getItem(_key);
    } else {
      // no saved choice => select one at random and store it
      document[_key] = [_generator[_key].selectOne];
      localStorage.setItem(_key, document[_key]);
    };
  });

...

  dynamicImport(document.gen_danbooru, 'preload');  

...

if (type == "danbooru") { return document.gen_danbooru; }
}; 

// End of getStartingValue(type)

...

_folders
  danbooru = dynamicImport(document.gen_danbooru || getStartingValue("danbooru")) 

10
submitted 4 days ago* (last edited 3 days ago) by AdComfortable1514@lemmy.world to c/stable_diffusion@lemmy.dbzer0.com

This is an open ended question.

I'm not looking for a specific answer , just what people know about this topic.

I've asked this question on Huggingface discord as well.

But hey, asking on lemmy is always good, right? No need to answer here. This is a repost, essentially.

This might serve as an "update" of sorts from the previous post: https://lemmy.world/post/19509682

//---//

Question;

The FLUX model uses a combo of CLIP+T5 to create a text_encoding.

CLIP is capable of doing both image_encoding and text_encoding.

The T5 model seems to be strictly text-to-text.

So I can't use the T5 to create image_encodings. Right?

https://huggingface.co/docs/transformers/model_doc/t5

But nonetheless, the T5 encoder is used in text-to-image generation.

So surely, there must be good uses for the T5 in creating a better CLIP interrogator?

Ideas/examples on how to do this?

I have 0% knowledge on the T5 , so feel free to just send me a link someplace if you don't want to type out an essay.
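
For reference, here is a minimal sketch (not from the post) of pulling encoder-only features out of T5 with the transformers library. The "t5-small" checkpoint is just a stand-in for illustration; FLUX itself uses a much larger T5 variant.

# Hedged sketch: encoder-only T5 features, the part of T5 that text-to-image
# pipelines use for conditioning. "t5-small" is a placeholder checkpoint.
import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("t5-small")
encoder = T5EncoderModel.from_pretrained("t5-small")

tokens = tokenizer("photo of a roman girl", return_tensors="pt")
with torch.no_grad():
    features = encoder(**tokens).last_hidden_state  # shape [1, seq_len, d_model]

print(features.shape)  # these are text features only; T5 has no image encoder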

//----//

For context;

I'm making my own version of a CLIP interrogator : https://colab.research.google.com/#fileId=https%3A//huggingface.co/codeShare/JupyterNotebooks/blob/main/sd_token_similarity_calculator.ipynb

The key difference is that this one samples the CLIP-vit-large-patch14 tokens directly instead of using pre-written prompts.

I text_encode the tokens individually and store them in a list for later use.

I'm using the method shown in this paper, "NND (nearest neighbor decoding)".

Methods for making better CLIP interrogators: https://arxiv.org/pdf/2303.03032

T5 encoder paper : https://arxiv.org/pdf/1910.10683

Example from the notebook where I'm using the NND method on 49K CLIP tokens (Roman girl image) :

Most similar suffix tokens: "{vfx |cleanup |warcraft |defend |avatar |wall |blu |indigo |dfs |bluetooth |orian |alliance |defence |defenses |defense |guardians |descendants |navis |raid |avengersendgame }"

Most similar prefix tokens: "{imperi-|blue-|bluec-|war-|blau-|veer-|blu-|vau-|bloo-|taun-|kavan-|kair-|storm-|anarch-|purple-|honor-|spartan-|swar-|raun-|andor-}"

[-] AdComfortable1514@lemmy.world 1 points 4 days ago

New stuff

Paper: https://arxiv.org/abs/2303.03032

Takes only a few seconds to calculate.

Most similar suffix tokens: "{vfx |cleanup |warcraft |defend |avatar |wall |blu |indigo |dfs |bluetooth |orian |alliance |defence |defenses |defense |guardians |descendants |navis |raid |avengersendgame }"

Most similar prefix tokens: "{imperi-|blue-|bluec-|war-|blau-|veer-|blu-|vau-|bloo-|taun-|kavan-|kair-|storm-|anarch-|purple-|honor-|spartan-|swar-|raun-|andor-}"

[-] AdComfortable1514@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

I count casualty_rate = number_shot / (number_shot + number_subdued)

Which in this case is 22/64 = 34% casualty rate for civilians

and 98/131 = 75% casualty rate for police

[-] AdComfortable1514@lemmy.world 5 points 1 week ago

So it's 64 vs. 131 between work done by bystanders and work done by police?

And casualty rate is actually lower for bystanders doing the work (with their guns) than the police?

11
submitted 1 week ago* (last edited 1 week ago) by AdComfortable1514@lemmy.world to c/stable_diffusion@lemmy.dbzer0.com

Created by me.

Link : https://huggingface.co/codeShare/JupyterNotebooks/blob/main/sd_token_similarity_calculator.ipynb

How does this work?

Similar vectors = similar output in the SD 1.5 / SDXL / FLUX model

CLIP converts the prompt text to vectors (“tensors”) , with float32 values usually ranging from -1 to 1.

Dimensions are [ 1x768 ] tensors for SD 1.5 , and a [ 1x768 , 1x1024 ] tensor for SDXL and FLUX.

The SD models and FLUX convert these vectors to an image.

This notebook takes an input string , tokenizes it and matches the first token against the 49407 token vectors in the vocab.json : https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/tokenizer

It finds the “most similar tokens” in the list. Similarity here is based on the angle θ between the token vectors.

The similarity value is the cosine of that angle, where 1 = 100% similarity (parallel vectors) and 0 = 0% similarity (perpendicular vectors).

Negative similarity is also possible.
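
For anyone who wants to poke at this outside the notebook, here is a minimal sketch of the similarity measure described above, using the raw token embedding vectors from transformers. The notebook itself text_encodes each token through the model, so the exact percentages will differ; this just illustrates the measure.

# Minimal sketch: cosine similarity between two CLIP token vectors.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
vocab_vectors = model.get_input_embeddings().weight  # [vocab_size, 768]

def token_vector(word: str) -> torch.Tensor:
    # first BPE token of the word, ignoring the start/end markers
    ids = tokenizer(word, add_special_tokens=False).input_ids
    return vocab_vectors[ids[0]]

a, b = token_vector("girl"), token_vector("chick")
cos = torch.nn.functional.cosine_similarity(a, b, dim=0)  # cos(theta), can be negative
print(f"{100 * cos.item():.2f}% similarity")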

How can I use it?

If you are bored of prompting “girl” and want something similar, you can run this notebook and use the “chick” token at 21.88% similarity, for example.

You can also run a mixed search, like “cute+girl”/2, where for example “kpop” has a 16.71% similarity.

There are some strange tokens the further down the list you go. Example: tokens similar to the token "pewdiepie" (yes, this is an actual token that exists in CLIP).

Each of these corresponds to a unique 1x768 token vector.

The higher the ID value , the less often the token appeared in the CLIP training data.

To reiterate; this is the CLIP model training data , not the SD-model training data.

So for certain models , tokens with high ID can give very consistent results , if the SD model is trained to handle them.

An example of this is anime models, where Japanese artist names can affect the output greatly.

Tokens with high ID will often give the "fun" output when used in very short prompts.

What about token vector length?

If you are wondering about token magnitude: prompt weights like (banana:1.2) scale the magnitude of the corresponding 1x768 tensor(s) by 1.2. That's how prompt token magnitude works.

Source: https://huggingface.co/docs/diffusers/main/en/using-diffusers/weighted_prompts

TL;DR: vector direction = “what to generate”, vector magnitude = “prompt weight”.
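
As a toy illustration of that TL;DR (this is not the exact implementation of any particular UI; A1111, for example, also renormalizes the result), scaling a vector changes its magnitude but not its direction:

# Toy illustration only: (banana:1.2) as "multiply the token's vector by 1.2".
import torch

token_vector = torch.randn(768)   # stand-in for the embedding of "banana"
weighted = 1.2 * token_vector     # (banana:1.2)

cos = torch.nn.functional.cosine_similarity(token_vector, weighted, dim=0)
print(cos.item())                                      # 1.0 -> direction ("what to generate") unchanged
print((weighted.norm() / token_vector.norm()).item())  # 1.2 -> magnitude ("weight") scaled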

How prompting works (technical summary)

  1. There is no correct way to prompt.

  2. Stable Diffusion reads your prompt left to right, one token at a time, finding associations from the previous token to the current token and to the image generated thus far (Cross Attention Rule).

  3. Stable Diffusion is an optimization problem that seeks to maximize similarity to the prompt and minimize similarity to the negative prompt (Optimization Rule).

Reference material (it covers the whole of SD, so it's not a focused source, but the info is there): https://youtu.be/sFztPP9qPRc?si=ge2Ty7wnpPGmB0gi

The SD pipeline

For every step (20 in total by default) in SD 1.5 (a rough code sketch follows the list):

  1. Prompt text => (tokenizer) =>
  2. Nx768 token vectors => (CLIP model) =>
  3. 1x768 encoding => (the SD model / UNet) =>
  4. Desired image per Rule 3 => (sampler) =>
  5. Paint a section of the image => (image)
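
A rough sketch of steps 1-3 with transformers (the model id is the standard openai CLIP-L checkpoint; steps 4-5 are left to a full pipeline such as diffusers' StableDiffusionPipeline). Strictly speaking the UNet receives the full 77x768 sequence, per the disclaimer further down.

# Hedged sketch of the text-conditioning part of the SD 1.5 pipeline.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

prompt = "photo of a banana"

# 1. Prompt text => (tokenizer) => token ids (padded to the 77-token chunk)
tokens = tokenizer(prompt, padding="max_length", max_length=77, return_tensors="pt")

# 2-3. token ids => (CLIP model) => text encoding handed to the UNet via cross-attention
with torch.no_grad():
    text_encoding = text_encoder(**tokens).last_hidden_state  # shape [1, 77, 768]

print(text_encoding.shape)
# 4-5. The UNet + sampler then denoise a latent toward this encoding over ~20 steps.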

Disclaimer /Trivia

This notebook should be seen as a "dictionary search tool" for the vocab.json , which is the same for SD1.5 , SDXL and FLUX. Feel free to verify this by checking the 'tokenizer' folder under each model.

vocab.json in the FLUX model , for example (1 of 2 copies) : https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/tokenizer

I'm using Clip-vit-large-patch14, which is used in SD 1.5 and is one of the two tokenizers for SDXL and FLUX: https://huggingface.co/openai/clip-vit-large-patch14/blob/main/README.md

This set of tokens has dimension 1x768.

SDXL and FLUX use an additional set of tokens of dimension 1x1024.

These are not included in this notebook. Feel free to include them yourselves (I would appreciate that).

To do so, you will have to download a FLUX and/or SDXL model, copy the 49407x1024 tensor list stored within the model, and save it as a .pt file.
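
A hedged sketch of what that extraction could look like for SDXL, using transformers conventions. The repo id and subfolder name are assumptions about how the model is laid out on Hugging Face, and the exact hidden dimension depends on the encoder.

# Hedged sketch: grab the second text encoder's token embedding matrix and save it.
import torch
from transformers import CLIPTextModelWithProjection

text_encoder_2 = CLIPTextModelWithProjection.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="text_encoder_2"
)
token_vectors = text_encoder_2.get_input_embeddings().weight.detach()  # [vocab_size, hidden_dim]
torch.save(token_vectors, "text_encoder_2_token_vectors.pt")
print(token_vectors.shape)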

//---//

I am aware it is actually the 1x768 text_encoding being processed into an image for the SD models + FLUX.

As such , I've included text_encoding comparison at the bottom of the Notebook.

I am also aware that SDXL and FLUX use additional encodings, which are not included in this notebook.

//---//

If you want them , feel free to include them yourself and share the results (cuz I probably won't) :)!

That being said, being an encoding, I reckon the CLIP Nx768 => 1x768 mapping should be roughly "linear" (or whatever one might call it).

So exchange a few tokens in the Nx768 for something similar, and the resulting 1x768 ought to be kinda similar to the 1x768 we had earlier. Hopefully.

I feel it's important to mention this, in case some wonder why the token-to-token similarity doesn't match the text-encoding-to-text-encoding similarity.

Note regarding CLIP text encoding vs. token

To make this disclaimer clear: token-to-token similarity is not the same as text_encoding similarity.

I have to say this, since it will otherwise get (even more) confusing, as both the individual tokens and the text_encoding have dimension 1x768.

They are separate things. Separate results. etc.

As such , you will not get anything useful if you start comparing similarity between a token , and a text-encoding. So don't do that :)!

What about the CLIP image encoding?

The CLIP model can also do an image_encoding of an image, where the output will be a 1x768 tensor. These can be compared with the text_encoding.

Comparing the CLIP image_encoding with the CLIP text_encodings of a bunch of random prompts until you find the "highest similarity" is the method used in the CLIP interrogator: https://huggingface.co/spaces/pharmapsychotic/CLIP-Interrogator

List of random prompts for CLIP interrogator can be found here, for reference : https://github.com/pharmapsychotic/clip-interrogator/tree/main/clip_interrogator/data

The CLIP image_encoding is not included in this Notebook.
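
For completeness, a minimal sketch (not part of the notebook) of that image_encoding vs. text_encoding comparison, using the transformers CLIPModel; the image path and prompts are placeholders.

# Hedged sketch: compare a CLIP image_encoding against CLIP text_encodings.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("input.png")  # placeholder path
prompts = ["a photo of a roman girl", "a photo of a banana"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
sims = (img @ txt.T).squeeze(0)  # cosine similarity per prompt
print(list(zip(prompts, sims.tolist())))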

If you spot errors or have ideas for improvements, feel free to fix the code in your own notebook and post the results.

I'd appreciate that over people saying "your math is wrong you n00b!" with no constructive feedback.

//---//

Regarding output

What are the symbols?

The whitespace symbol indicates whether the tokenized item ends with whitespace (the suffix "banana" => "banana ") or not (the prefix "post" in "post-apocalyptic").

For ease of reference , I call them prefix-tokens and suffix-tokens.

Sidenote:

Prefix tokens have the unique property that they "mutate" suffix tokens.

Example: "photo of a #prefix#-banana"

where #prefix# is a randomly selected prefix-token from the vocab.json

The hyphen "-" exists to guarantee that the tokenized text splits into the written #prefix# and #suffix# tokens respectively. The "-" hyphen symbol can be replaced by any other special character of your choosing.

Capital letters work too, e.g. "photo of a #prefix#Abanana", since the capital letters A-Z are only listed once in the entire vocab.json.

You can also choose to omit any separator and just rawdog it with the prompt "photo of a #prefix#banana"; however, know that this may, on occasion, be tokenized into completely different tokens with lower IDs.

Curiously, common NSFW terms found online have been purposefully fragmented into separate #prefix# and #suffix# counterparts in the vocab.json, likely for PR reasons.

You can verify the results using this online tokenizer: https://sd-tokenizer.rocker.boo/
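
If you'd rather check locally than on that site, a small sketch with the transformers CLIP tokenizer ("blue" and "banana" are just example pieces) prints the BPE pieces so you can see whether your #prefix# and #suffix# stayed separate or merged into different tokens:

# Hedged sketch: inspect how a #prefix#/#suffix# combination tokenizes.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

for text in ["photo of a blue-banana", "photo of a blueAbanana", "photo of a bluebanana"]:
    ids = tokenizer(text, add_special_tokens=False).input_ids
    print(text, "->", tokenizer.convert_ids_to_tokens(ids))
# Tokens ending in "</w>" are suffix tokens (trailing whitespace); tokens without
# it are prefix tokens that continue into the next piece.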

What are those gibberish tokens that show up?

The gibberish tokens like "ðŁĺħ</w>" are actually emojis!

Try writing some emojis in this online tokenizer to see the results: https://sd-tokenizer.rocker.boo/

It is a bit borked as it can't process capital letters properly.

Also note that this is not reversible.

If tokenization maps "😅" => "ðŁĺħ",

then you can't prompt "ðŁĺħ" and expect to get the same result as the original emoji, "😅".
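
A small sketch of that point, assuming the emoji exists as a single entry in vocab.json as described above; the gibberish spelling comes from CLIP's byte-level BPE, and only decoding the token ids (not retyping the gibberish) gets you back to the emoji.

# Hedged sketch: emoji tokenization is not reversible by retyping the gibberish.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

ids = tokenizer("😅", add_special_tokens=False).input_ids
print(tokenizer.convert_ids_to_tokens(ids))  # byte-level spelling, e.g. ['ðŁĺħ</w>']
print(tokenizer.decode(ids))                 # decoding the ids recovers '😅'

# Typing the gibberish string itself tokenizes as unrelated characters:
print(tokenizer("ðŁĺħ", add_special_tokens=False).input_ids)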

SD 1.5 models actually have training for emojis.

But you have to set CLIP skip to 1 for this to work as intended.

For example, this is the result from "photo of a 🧔🏻‍♂️"

That concludes this tutorial on stuff you can do with the vocab.json.

Anyways, have fun with the notebook.

There might be some updates in the future with features not mentioned here.

//---//

[-] AdComfortable1514@lemmy.world 2 points 2 weeks ago

I can't speculate.

If you feel up for the task, I'd suggest running prompts that use Euler a at 20 steps for a given seed using that model and seeing if the results match images on the perchance site.

If they do, then we know the furry model = Pony Diffusion.

(Though IIRC the furry model on perchance existed before Pony Diffusion. )

[-] AdComfortable1514@lemmy.world 2 points 2 weeks ago

Aha. So what you wanted to say was that "Starlight" and/or "Glimmer" are triggerwords for the furry model. Gotcha!

[-] AdComfortable1514@lemmy.world 2 points 2 weeks ago

Those are both the furry model tho?

[-] AdComfortable1514@lemmy.world 2 points 3 weeks ago

From what I know it is possible to bypass the keyword trigger by writing something like _anime or _1girl

3
submitted 3 weeks ago* (last edited 3 weeks ago) by AdComfortable1514@lemmy.world to c/perchance@lemmy.world

Just adding a quick line of code that allows you to write something like (model:::1), (model:::2) and (model:::3) to override the keywords would be so helpful.

Things work really well overall, but it's frustrating to, for example, run an anime prompt only to have it switch to the furry SD model because perchance detects the word "dragon" in the prompt, etc.

For context: Perchance uses 3 SD models that run depending on keywords in the prompt. I want to be able to override this feature.

2
submitted 1 month ago* (last edited 1 month ago) by AdComfortable1514@lemmy.world to c/perchance@lemmy.world

Yes I know this was asked before: https://lemmy.world/post/8258092

And it seems back then the model used might have been SD1.5+Deliberate V2

Possibly this one : https://huggingface.co/XpucT/Deliberate/tree/main

But results don't match.

EDIT: By this I mean that if I run an image prompt for a given seed on any Deliberate model, with settings that match the perchance generator, I get very different results.

It would be nice to have the perchance SD 1.5 model uploaded on huggingface so people can use it on other platforms and/or privately.

The perchance text-to-image generator is a really well-balanced SD 1.5 model!

I feel it would be good for the SD community to have it available for download online.

2
submitted 1 month ago* (last edited 1 month ago) by AdComfortable1514@lemmy.world to c/perchance@lemmy.world

Feed it an image, get the closest matching prompt out of a set of written alternatives.

Try it here: https://huggingface.co/spaces/pharmapsychotic/CLIP-Interrogator

Example

Input image from google

Result when running the output prompt on perchance text-to-image model

Source code for this version : https://github.com/pharmapsychotic/clip-interrogator

In this source code you can find the list of pre-made "prompt fragments" this module can spit out.

You can write any prompt fragments in a list, and the output will be the "closest matching result".

There are other online variants that use CLIP , but sample a different prompt library to find the "closest match" : https://imagetoprompt.com/tools/i2p

The difference between these two is just which "pre-written prompt fragments" they have chosen to match with the image. Both use the CLIP model.

What is it?

The CLIP model is a part of the Stable Diffusion model, but it is a "standalone" thing that can be used for stuff other than just image generation.

It would be nice to have the CLIP model available as a standalone thing on perchance.

The CLIP model : https://github.com/openai/CLIP/blob/main/CLIP.png

Practical use cases

Making, for example, a "fantasy character" image-to-prompt generator (note the order) by feeding it "fantasy prompt fragments" to match against.

How does this work?

The CLIP model is where the "magic" happens.

The image below describes how CLIP is trained. It matches words with images in this kind-of-grid-style format.

The CLIP model can process an image or a text, and both will generate a 1x768 vector in "the same" space.

That's the "magic".

How to make your own purpose-built CLIP interrogator on perchance (assuming this is a feature)

The tokenizer creates a 77-token prompt chunk embedding from any kind of text ("the prompt").

CLIP processes the 77-token prompt chunk embedding into a 1x768 text encoding.

An image (any kind) becomes a 1x768 image encoding (this requires GPU resources).

The "match" between image encoding A and text encoding B is calculated using cosine similarity: cos(θ) = (A · B) / (|A| |B|)

The final value ranges from 1 to 0.

1 = 100% match between the image A and text B encodings, and 0 = no match at all.

Do this for 1000 text encodings and pick the "text" that gives the highest cosine similarity.

That's it. Now you have a "prompt" from an image.
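
A hedged sketch of that recipe with the transformers CLIPModel (the candidate fragments, image path and model id are placeholders; a real interrogator would use ~1000 fragments as described above):

# Hedged sketch: pick the candidate text whose encoding best matches the image encoding.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

candidates = ["elf ranger, fantasy art", "dwarf blacksmith", "fire mage casting a spell"]

# Text encodings: one vector per candidate, computed once and reusable for many images
text_in = processor(text=candidates, return_tensors="pt", padding=True)
with torch.no_grad():
    text_enc = model.get_text_features(**text_in)
text_enc = text_enc / text_enc.norm(dim=-1, keepdim=True)

# Image encoding
image_in = processor(images=Image.open("input.png"), return_tensors="pt")
with torch.no_grad():
    img_enc = model.get_image_features(**image_in)
img_enc = img_enc / img_enc.norm(dim=-1, keepdim=True)

# Cosine similarity, then take the best-matching candidate as the "prompt"
sims = (img_enc @ text_enc.T).squeeze(0)
print(candidates[int(sims.argmax())])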

//---//

8
submitted 2 months ago* (last edited 2 months ago) by AdComfortable1514@lemmy.world to c/perchance@lemmy.world

A common feature in T2I generation is to skip the final layer (matrix calculation) of the CLIP text-encoding model.

This will "distort" the text encoding slightly, which SD users have discovered works to their benefit when prompting with common English words like "banana", "car", "anime", "woman", "tree", etc.

Being able to select between a CLIP skip 2 text encoder and the default text encoder would be an appreciated feature for perchance users.

For exotic tokens like emojis or other tokens with high ID in the vocab.json , the un-modified CLIP configuration (CLIP skip 1) is far superior.

But for "boring normal english word" prompts , CLIP skip 2 will often improve the output.

This code shows how one can import an SD 1.5 CLIP text encoder configured for CLIP skip 2:

https://github.com/huggingface/diffusers/issues/3212
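
As a rough illustration of the idea (not necessarily the exact code in that issue), one way to approximate CLIP skip 2 with diffusers is to load the text encoder with its final layer dropped. The model id below is the usual SD 1.5 repo, and small differences from other UIs' implementations (e.g. the final layer norm) may remain; recent diffusers versions also expose a clip_skip argument directly on the pipeline call.

# Hedged sketch: approximate "CLIP skip 2" by dropping the text encoder's last layer.
from diffusers import StableDiffusionPipeline
from transformers import CLIPTextModel

model_id = "runwayml/stable-diffusion-v1-5"  # example repo id

# Load only 11 of the 12 transformer layers => penultimate-layer text encoding
text_encoder = CLIPTextModel.from_pretrained(
    model_id, subfolder="text_encoder", num_hidden_layers=11
)
pipe = StableDiffusionPipeline.from_pretrained(model_id, text_encoder=text_encoder)

image = pipe("photo of a banana").images[0]
image.save("banana_clip_skip_2.png")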

//---//

Sidenote: personally, I'd love to see the

text prompt -> tokenizer -> embedding -> text-encoding -> image generation

pipeline split into separate modules on perchance.

So instead of sending text to the perchance server, the user could send an embedding (many are available for download online), a text+embedding mix, or a text encoding configured for either CLIP skip 1 or 2, and get an image back.

The CLIP model is unique in that it can create both text and image encodings. By checking cosine similarity between the text and image encodings, you can generate a text prompt for any given input image that, when prompted, will generate "that kind of image".

Note that in either of these cases there won't be a text prompt for the image. The pipeline is a "one-way process".

//---//

Main thing here to consider is adding a CLIP Skip 2 option, as I think a lot of "standard" text-to-image generators on perchance would benefit from having this option.

2

Hello y'all,

Does anyone know how to check if a perchance generator name exists on perchance?

Context:

I've added a feature to allow users to load their own datasets into the fusion-gen. The code is above, but the feature itself looks like this:

I'd like to be able to check if the string matches a generator that exists on perchance. Does anyone know the name of that function?

11
submitted 3 months ago* (last edited 3 months ago) by AdComfortable1514@lemmy.world to c/perchance@lemmy.world

For me, whenever I load datasets into the Fusion Generator I have to be mindful of how many items I load into the generator on startup.

If I load too much data , or do something crazy with the HTML , then there is a risk some users ( like those who browse perchance on older phones ) will not be able to access the generator page at all.

They will get a browser error message, and will not be able to access the generator at all to report the problem. They will be locked out forever , effectively.

So my suggestion is to have some kind of default "Oops something went wrong" page when loading generators on a perchance page.

The generator owner can customize the text written on the page. Maybe the image as well.

Importantly , they should be able to direct the user to somewhere (like a discord page) where they can report the problem.

TLDR : If the generator fails to load , show a link to a "Bug Report" page

4
submitted 3 months ago* (last edited 3 months ago) by AdComfortable1514@lemmy.world to c/perchance@lemmy.world

So I'm testing out a new method for sampling prompts which, at scale, I would need to import via the dynamic imports plugin.

I want to use a two-tier selection process.

I select a random category like "Star Trek" and then within that category select "tags" that are associated with "Star Trek" (more info below).

This works for normal imports. But I'm wondering how to make this possible for dynamic imports.

I have no clue how to do this. I have no idea what is causing the error either.

Example here to showcase the problem (you can skip reading this post and just go here ):

Example-generator:

https://perchance.org/fusion-t2i-tv-series-perchance-example-1

Sub-generators with different methods :

https://perchance.org/fusion-t2i-tv-series-1

https://perchance.org/fusion-t2i-tv-series-2

Reason behind this:

I want to randomly select a category, for instance "Star Trek". Then, within that category, I select from "tags" associated with "Star Trek" that I have asked Bing Copilot (an AI chatbot that can browse the web) to generate.

It looks like this in https://perchance.org/fusion-t2i-tv-series-2.

//-----//

Discord link to .json savefile for those who wish to test the prompts

Again, this works for normal imports.

But since I will likely be able to get a LOT of data out of Bing Copilot, I'd like some ideas/suggestions on how to make this work with dynamic imports.

No rush , but ideas are very welcome :) !
