this post was submitted on 02 Aug 2023
45 points (97.9% liked)
Comradeship // Freechat
2161 readers
128 users here now
Talk about whatever, respecting the rules established by Lemmygrad. Failing to comply with the rules will grant you a few warnings; insisting on breaking them will grant you a beautiful shiny banwall.
A community for comrades to chat and talk about whatever doesn't fit other communities
founded 3 years ago
MODERATORS
How long is it? Just so you know, I'm a total newbie at LLM creation/AI/neural networks.
So... if they're inherently unreliable, why make them? Genuine question.
I have never done a full run of LLM training, but in the lab we used to have language models training for like 1-2 weeks, making full use of four RTX 2080s IIRC. Fine-tuning is generally faster than training, so it'd be around a day or two on that hardware, but I don't have access to it anymore (and don't think it'd be ethical to use it for personal projects anyway). On personal hardware I think it would be back at the week mark. Since it's an iterative process, you'll sometimes want to either run multiple training jobs with different parameters in parallel or train repeatedly to try and solve issues. There are some cloud options with on-demand GPUs, but then we'd be spending money.
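To make the "iterative process" part concrete, here's a toy sketch of a hyperparameter sweep. Everything in it (the one-parameter model, the data, the learning rates) is invented for illustration; a real fine-tuning sweep is this same loop scaled up by many orders of magnitude, which is why each extra run costs days of GPU time.

```python
# Toy sketch: why training runs multiply. We "train" a one-parameter
# model y = w * x with plain gradient descent and sweep learning rates,
# the way one might launch several fine-tuning runs in parallel.
# The data, model, and rates are all made up for illustration.

def train(lr, steps=100):
    """Fit w in y = w * x to toy data (true w = 3) by gradient descent."""
    data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (x, y) pairs
    w = 0.0
    for _ in range(steps):
        # Mean gradient of squared error over the toy dataset.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# One "sweep": each learning rate is a separate run you'd have to wait on.
results = {lr: train(lr) for lr in (0.001, 0.01, 0.1)}
for lr, w in results.items():
    print(f"lr={lr}: w = {w:.3f}")
```

The smallest learning rate never gets close to the right answer in 100 steps, which is exactly the kind of thing you only find out after the run finishes, hence the repeated runs.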
The bulk of the work is actually making sure the data is appropriate and then validating that the model works correctly, which a lot of researchers tend to skimp on in their papers, and which in practice is usually done by low-paid interns or MTurk contractors.
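The data work mentioned above can be as mundane as deduplicating and filtering lines before they ever reach training. This is a toy sketch; the thresholds and rules are invented, and real pipelines involve far more (licensing checks, toxicity filters, human review).

```python
# Toy sketch of the unglamorous data-cleaning step: drop exact
# duplicates and lines too short to be useful. The min_words threshold
# is an arbitrary invented rule for illustration.
def clean(lines, min_words=3):
    seen = set()
    kept = []
    for line in lines:
        text = line.strip()
        key = text.lower()
        if key in seen:
            continue  # exact duplicate (case-insensitive)
        if len(text.split()) < min_words:
            continue  # too short to be useful training text
        seen.add(key)
        kept.append(text)
    return kept

raw = ["Hello world", "The cat sat on the mat.",
       "the cat sat on the mat.", "Buy now!!!"]
print(clean(raw))
```

Only the first "cat" sentence survives; the rest is either duplicated or too short. Deciding whether those rules are the *right* rules is the part that eats person-hours.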
Cynical answer: stock-market hype. Investors get really interested whenever we get human-looking stuff like text, voice, walking robots, and synthetic flesh, even if those things have very little ROI on the technical side. Just look at people talking about society being managed by AI in the future despite most investment going into human-facing systems rather than logistics optimisation.
The main issue with ChatGPT (incredibly oversimplified) is that, because it works at the level of text probabilities, it can sometimes produce really convincing, human-sounding text that is either completely false or contains subtle misrepresentations. It also has a lot of trouble providing accurate sources for what it says. It can mimic what looks like "human memory" by referring back to previously said things, but that's just emergent behaviour, and you can easily "convince" it that it has said things it hadn't, for instance. The training data can also get so large that stuff that shouldn't be there slips in, like how ChatGPT 3 is supposed to have a knowledge cut-off of September 2021, yet it can sometimes answer questions about the war in Ukraine.
ChatGPT can still be useful for bouncing ideas around, getting broad overviews, text recommendations, or creative-writing experimentation. They're also fun to dunk on if you're bored on the bus. I think this would be a fun project, but if we do it, we should always have a big red disclaimer that goes "this bot is dumb sometimes, actually read a book."
Here's an example of how bad ChatGPT is at sources. Bing has direct access to the internet and can sometimes fetch sources, but I'm not sure how that works or whether it's feasible with our non-Microsoft-level resources.
CW libshit
turns out OpenAI isn't actually open
It was actually started for completely open (as in Free) AI work, but then they realised it wasn't making any money. Also, Musk was involved with it; I think he's been ousted, though. Hard to know with the guy who bought his founder title at Tesla.
this is why we need gommusim
I thought the war started in 2014?
Well yes, but the one it talked about was the SMO. Ukraine didn't exist to libs before then. I tried replicating it and they seem to have fixed it now, but the other day I also managed to get it to talk about the Wagner "rebellion" so there's definitely some data leakage in there.