this post was submitted on 22 Dec 2024
you are viewing a single comment's thread
Just to clarify: when I said "brute force" I didn't mean brute-forcing the calculation, but brute-forcing the code. LLMs don't really calculate either way; what I mean is more: generate code -> try to run it and see if the tests pass -> if they don't, ask again / refine / etc. So essentially you keep asking for code until what it spits out is correct (verifiable with the tests you are given).
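Concretely, the loop I mean would look roughly like this (just a sketch, not any specific tool: `ask_llm` is a hypothetical placeholder for whatever model API you'd call, and the tests are assumed to be an ordinary pytest file that imports the generated `solution` module):

```python
import os
import pathlib
import subprocess
import tempfile

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever model API you call to get code back."""
    raise NotImplementedError("plug in your LLM call here")

def passes_tests(code: str, test_path: str) -> bool:
    """Write the candidate code to solution.py and run the given pytest file against it."""
    with tempfile.TemporaryDirectory() as tmp:
        (pathlib.Path(tmp) / "solution.py").write_text(code)
        # The tests are assumed to `import solution`, so expose the temp dir on PYTHONPATH.
        env = {**os.environ, "PYTHONPATH": tmp}
        result = subprocess.run(["pytest", test_path], env=env, capture_output=True)
        return result.returncode == 0

def brute_force_code(task: str, test_path: str, max_tries: int = 5) -> str | None:
    """Keep asking for code until the supplied tests pass, or give up after max_tries."""
    prompt = task
    for _ in range(max_tries):
        code = ask_llm(prompt)
        if passes_tests(code, test_path):
            return code
        # Feed the failure back and ask again ("refine").
        prompt = f"{task}\n\nThe previous attempt failed the tests. Please try again."
    return None
```

The tests do the verification; the model just keeps proposing candidates, which is why I call it brute force.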
But yeah, a few years ago this was not possible, and I guess that was not due to the training data. Now the problem is that there isn't much data left to train on, and someone (Bloomberg?) reported that training ChatGPT 5 will cost billions of dollars, so it looks like we might be near the peak of what this technology can offer (without it solving any major problem that would offset the economic and environmental cost).
Just from today: https://www.techspot.com/news/106068-openai-struggles-chatgpt-5-delays-rising-costs.html