this post was submitted on 10 Mar 2024
42 points (100.0% liked)

Comradeship // Freechat
One use of LLMs that I haven't seen mentioned before is to use them as a sounding board for your own ideas. By discussing your concept with an LLM, you can gain fresh perspectives through its generated responses.

In this context, the LLM's actual comprehension is irrelevant. The purpose lies in its ability to spark new thought processes by prompting you with unexpected framings or questions.

Definitely recommend trying this trick next time you're writing something.
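
If you want to try this against a local model, the gpt4all Python bindings are one way to wire it up. A rough sketch (the model filename is just a placeholder for whatever you've already downloaded):

```python
from gpt4all import GPT4All

# placeholder filename; use whichever model file you've downloaded through GPT4All
model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

draft = "My argument: <paste your draft or outline here>"

with model.chat_session():
    # ask for framings and counter-questions rather than a rewrite,
    # so it stays a sounding board instead of a ghostwriter
    reply = model.generate(
        f"{draft}\n\nWhat framings or counter-questions am I missing?",
        max_tokens=512,
    )
    print(reply)
```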

[–] FuckBigTech347@lemmygrad.ml 2 points 8 months ago* (last edited 8 months ago) (1 children)

On the topic of GPT4ALL, I'm curious: is there an equivalent of that but for txt2img/img2img models? All the FOSS txt2img stuff I've tried so far is either buggy (some of the projects don't even compile), requires a stupid amount of third-party dependencies, is made with NVidia hardware in mind while everyone else is second class, or requires unspeakable amounts of VRAM.

[–] lurkerlady@hexbear.net 1 points 8 months ago* (last edited 8 months ago) (1 children)

automatic1111 webui launcher, it's stable diffusion. fun fact: its icon is a pic of ho chi minh

if you wait, stable diffusion 3 is coming out soon. nvidia will run it faster because its tensor cores are better, unfortunately. SD is more ethical than the others, you can load up models that are trained only on public art and pics
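
if you'd rather script it than click around the webui, the same kind of checkpoints also load through the diffusers library. rough sketch, the model id is just an example, swap in whatever checkpoint you want:

```python
import torch
from diffusers import StableDiffusionPipeline

# example checkpoint id; any SD 1.5-style model from the hub loads the same way
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # fp16 roughly halves VRAM use vs fp32
)
pipe = pipe.to("cuda")  # ROCm builds of PyTorch also expose AMD cards as "cuda"

image = pipe("a screen print poster of ho chi minh").images[0]
image.save("out.png")
```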

[–] FuckBigTech347@lemmygrad.ml 1 points 8 months ago (2 children)

I'm pretty sure I tried that one but it kept running out of VRAM. Also it utilizes proprietary AMD/NVidia software stacks which are a pain to set up. GPT4ALL is a lot better in that regard, they just use Vulkan compute shaders to run the models.

[–] lurkerlady@hexbear.net 1 points 8 months ago

could try out the turbo models, might help
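
e.g. sd-turbo through diffusers only needs a single step with guidance off, which cuts the time way down and in fp16 is fairly light on VRAM. sketch below, usual example settings:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sd-turbo",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = pipe.to("cuda")

# turbo models are distilled for 1-4 steps with classifier-free guidance disabled
image = pipe(
    "a foggy harbor at dawn",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("turbo.png")
```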

[–] yogthos@lemmygrad.ml 1 points 8 months ago (1 children)

There's also ComfyUI, but the learning curve is a bit steeper https://github.com/comfyanonymous/ComfyUI

although there's a CushyStudio frontend for it that's more user-friendly: https://github.com/rvion/CushyStudio

[–] FuckBigTech347@lemmygrad.ml 2 points 8 months ago (1 children)

ComfyUI seems like the most promising, but it also uses ROCm/CUDA, which don't officially support any of my current GPUs (models load successfully but it fails midway through computing). Why can't everyone just use compute shaders lol.

[–] yogthos@lemmygrad.ml 2 points 8 months ago

Oh yeah that whole thing is just such a mess, another L for proprietary tech.