this post was submitted on 10 Mar 2024
42 points (100.0% liked)

Comradeship // Freechat


One use of LLMs that I haven't seen mentioned before is to use them as a sounding board for your own ideas. By discussing your concept with an LLM, you can gain fresh perspectives through its generated responses.

In this context, the LLM's actual comprehension is irrelevant. The purpose lies in its ability to spark new thought processes by prompting you with unexpected framings or questions.
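As a concrete illustration, here's a rough sketch of the kind of prompt I mean, using the gpt4all Python bindings to run a model locally. The model filename and the draft idea are just placeholders; any local chat model works.

```python
# Rough sketch: using a local model as a sounding board.
# The model filename and the draft idea below are placeholders.
from gpt4all import GPT4All

model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")  # any local chat model

draft = "..."  # whatever idea you're trying to think through

prompt = (
    "I'm thinking through the following idea. Don't just agree with me: "
    "ask me three probing questions and point out one framing I might be missing.\n\n"
    f"Idea: {draft}"
)

with model.chat_session():
    print(model.generate(prompt, max_tokens=400))
```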

Definitely recommend trying this trick next time you're writing something.

top 21 comments
[–] queermunist@lemmy.ml 35 points 8 months ago* (last edited 8 months ago) (2 children)

That's why I argue on the internet. I am as likely to convince a poster to change their mind as I am to convince a robot, but they generate such interesting responses that I'm forced to think hard about my own positions. Grinding random encounters in the posting RPG for exp.

[–] yogthos@lemmygrad.ml 21 points 8 months ago

I've noticed that as well. A lot of the time you can use the argument to refine your own idea, and just treat the other party as a feedback loop. :)

[–] pinguinu@lemmygrad.ml 16 points 8 months ago

Reddit is the Dark Souls of posting

[–] ksynwa@lemmygrad.ml 9 points 8 months ago (2 children)

I read about this on the cursed orange site. Some guy talked about going on a walk with his wireless earbuds in, using ChatGPT's audio interface to discuss some worldbuilding he was doing.

Are there any LLM services that can be reasonably used without paying? I tried some llamafiles, but it seems like my laptop can't handle them well.

[–] yogthos@lemmygrad.ml 8 points 8 months ago (2 children)

As long as you don't care about your inputs being harvested, Gemini is currently free. I've been using GPT4All to run stuff locally, but if your laptop is having trouble with llamafiles, then it's probably gonna have trouble with that too.
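If you'd rather script it than use the web UI, here's a rough sketch with Google's google-generativeai Python package. It assumes you've made a free API key in Google AI Studio, and the model name below may be out of date.

```python
# Rough sketch of calling Gemini from Python.
# Assumes a free API key from Google AI Studio; the model name may have changed.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, keep real keys out of source control

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content(
    "Here's an idea I'm working through: ... Ask me questions that poke holes in it."
)
print(response.text)
```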

[–] ksynwa@lemmygrad.ml 4 points 8 months ago (1 children)
[–] yogthos@lemmygrad.ml 2 points 8 months ago

I find I like Wizard 1.2 and Hermes the best

[–] FuckBigTech347@lemmygrad.ml 2 points 8 months ago* (last edited 8 months ago) (1 children)

On the topic of GPT4All, I'm curious, is there an equivalent of that but for txt2img/img2img models? All the FOSS txt2img stuff I've tried so far is either buggy (some of the projects don't even compile), requires a stupid amount of third-party dependencies, is made with NVidia hardware in mind while everyone else is second class, or requires unspeakable amounts of VRAM.

[–] lurkerlady@hexbear.net 1 points 8 months ago* (last edited 8 months ago) (1 children)

automatic1111 webui launcher, it's Stable Diffusion. fun fact, its icon is a pic of Ho Chi Minh

if you wait, Stable Diffusion 3 is coming out soon. NVidia will run faster because its tensor cores are better, unfortunately. SD is more ethical than the others, since you can load up models that are trained only on public art and pics
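if you want to skip webuis entirely, here's a rough sketch with huggingface's diffusers library instead (not the automatic1111 stack, just an alternative). the model id and prompt are placeholders, and the offload/slicing lines are the usual low-vram tricks:

```python
# Rough sketch: txt2img with Hugging Face diffusers instead of a webui.
# The model id and prompt are placeholders; swap in whatever SD checkpoint you use.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # keeps most weights off the GPU when VRAM is tight
pipe.enable_attention_slicing()  # trades a little speed for lower memory use

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```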

[–] FuckBigTech347@lemmygrad.ml 1 points 8 months ago (2 children)

I'm pretty sure I tried that one, but it kept running out of VRAM. Also it utilizes proprietary AMD/NVidia software stacks, which are a pain to set up. GPT4All is a lot better in that regard; they just use Vulkan compute shaders to run the models.

[–] lurkerlady@hexbear.net 1 points 8 months ago

could try out the turbo models, might help

[–] yogthos@lemmygrad.ml 1 points 8 months ago (1 children)

There's also ComfyUI, but the learning curve is a bit steeper https://github.com/comfyanonymous/ComfyUI

although there's CushyStudio frontend for it that's more user friendly https://github.com/rvion/CushyStudio

[–] FuckBigTech347@lemmygrad.ml 2 points 8 months ago (1 children)

ComfyUI seems like the most promising, but it also uses ROCm/CUDA, which don't officially support any of my current GPUs (models load successfully, but midway through computing it fails). Why can't everyone just use compute shaders lol.

[–] yogthos@lemmygrad.ml 2 points 8 months ago

Oh yeah that whole thing is just such a mess, another L for proprietary tech.

[–] lurkerlady@hexbear.net 5 points 8 months ago* (last edited 8 months ago)

seconding gpt4all, makes it quick and easy to run, and if you're fancy you can stream the output from your computer to your phone. i run a capybara-hermes-mistral mix but i would suggest starting with mistral instruct until claude3 comes out
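one way to do the phone part is to turn on the desktop app's local API server option and point any OpenAI-compatible client at your computer's address. rough sketch below; the address, port, and model name are assumptions, check what the app actually exposes in its settings:

```python
# Rough sketch: talking to a GPT4All instance on another machine over the LAN.
# Assumes the desktop app's local API server option is enabled.
# The address, port, and model name are placeholders -- check the app's settings.
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.1.50:4891/v1",  # your computer's LAN address (assumed port)
    api_key="not-needed-for-local",          # placeholder; the local server typically ignores it
)

response = client.chat.completions.create(
    model="mistral-7b-instruct",  # whatever model the desktop app has loaded
    messages=[{"role": "user", "content": "Poke holes in this worldbuilding idea: ..."}],
)
print(response.choices[0].message.content)
```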

[–] moujikman@hexbear.net 6 points 8 months ago (1 children)

I used to love it as a Socratic tutor. All I needed it to do was ask reasonable questions to challenge my understanding. But later releases of ChatGPT made it too focused on giving you the answer.

[–] yogthos@lemmygrad.ml 3 points 8 months ago

If your machine is beefy enough, running a local model works pretty well, and there are a lot to choose from now, tuned for different use cases. I've had good luck with gpt4all as a no-hassle app for running this stuff on my machine.

[–] MinekPo1@lemmygrad.ml 5 points 8 months ago (1 children)

I did this with ChatGPT once and can't say I had a good time. To be fair, what I was pitching to it was too unusual, yet too similar to existing things, for it to grasp.

So yeah, as long as the LLM doesn't get too overconfident thinking it knows what you're talking about, you might be able to get somewhere.

[–] yogthos@lemmygrad.ml 3 points 8 months ago

Yeah, it definitely works better for some topics than others. It also depends a lot on the model and what it was trained on.

[–] lil_tank@lemmygrad.ml 4 points 8 months ago (1 children)

LLMs are way more interesting to talk with about coding than to ask for generated code. The code generation is janky, but if you keep asking questions you might get new directions, learn about concepts you didn't know, etc. It's great for learning tech stuff.

[–] yogthos@lemmygrad.ml 2 points 8 months ago

Yeah it's handy for pointing you in the right direction when you don't know what you need.