this post was submitted on 20 Oct 2024
68 points (95.9% liked)
Futurology
LLMs would have no problem doing any of this. There's a discernible pattern in any judge's verdict. LLMs can easily pick this pattern up.
LLMs in their current form "spit out" code in a very literal way. Actual programmers never work like that. No one is smart enough to code purely by intuition: we write code, look it over, run it, read any warnings or errors, fix them, and repeat. No programmer writes code and gets it right on the first try.
LLMs have had their hands tied behind their backs: they haven't been able to run code by themselves at all, and they haven't been able to do recursive reasoning. Until now.
The new o1 model (I think) is able to do exactly that, and it will only get better from here. Look at the sudden increase in the quality of its code output. There's a very strong reason why I believe this as well.
I use LLMs heavily for my code. They tend to write shit code on the first pass, so I feed back the program's output, the issues with the code, any semantic errors, and so on. By the third or fourth round, the code it writes is perfect. I've stopped needing to manually type out comments too; LLMs do that for me now (of course, I supervise what they write and don't blindly trust them). Using LLMs has sped up my coding by at least 4x (and I'm not even using a fine-tuned model).
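To make the workflow concrete, here's a minimal sketch of that kind of feedback loop. This is not my exact setup; `call_llm` is a hypothetical stand-in for whatever model or API you use. It just runs the generated script and hands the errors back, the same way a human re-reads their own errors before the next attempt:

```python
import subprocess
import sys
import tempfile


def call_llm(prompt: str) -> str:
    """Hypothetical helper: send the prompt to whichever model/API you use
    and return its reply as plain Python source."""
    raise NotImplementedError


def refine(task: str, max_rounds: int = 4) -> str:
    """Ask the model for code, run it, and feed errors back until it runs cleanly."""
    code = call_llm(f"Write a Python script that does the following:\n{task}")
    for _ in range(max_rounds):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=30
        )
        if result.returncode == 0:
            return code  # ran without errors; still review it yourself
        # Hand the output and traceback back to the model for the next attempt.
        code = call_llm(
            "This code failed.\n\nCode:\n" + code
            + "\n\nOutput:\n" + result.stdout
            + "\n\nErrors:\n" + result.stderr
            + "\n\nFix it and return only the corrected code."
        )
    return code
```

Obviously you want to run generated code in a sandbox rather than straight on your machine, but the loop itself is that simple.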
There's no reason why it would do that. The underlying function behind verdicts and legal arguments has been the same and will remain the same, because it's based on logic and human morals. Tackling morals is easy because LLMs have been trained on human data, so their morals are a reflection of ours. If we want to specify our morals explicitly, we can make them law (and we already have for the ones that matter most), which makes things even easier.
That's worse! You do see how that's worse right?!?
You are factually correct, but those are called biases. That doesn't mean that LLMs would be good at that job. It means they can do the job with comparable results for all the reasons that people are terrible at it. You're arguing to build a racism machine because judges are racist.
Ok, so you just ignore the reports and continue to coast on feels over reals. Cool.
Another report contradicting you
Stop believing the hype. Sam Altman is lying to you.
I didn't. I went through your links. Your links, however, point at a problem with the environment our LLMs operate in, not with the LLMs themselves. The code one, where the LLM invents package names, is not the LLM's fault. Can you accurately come up with package names purely from memory? No. Neither can the LLM. Give the LLM the ability to look up the npm registry. Give it the ability to read the docs, and then look at what it can do. I have done this myself (manually feeding it that information), and it has been a beast.
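To make that concrete, here's a minimal sketch (not the tooling I actually used) of checking a model's suggested package names against the public npm registry, which answers 404 for names that don't exist. The package list at the bottom is made up purely for illustration:

```python
import urllib.error
import urllib.parse
import urllib.request


def npm_package_exists(name: str) -> bool:
    """Ask the public npm registry whether a package name is real.

    The registry serves metadata at https://registry.npmjs.org/<name>
    and returns 404 for packages that don't exist, so hallucinated
    names are cheap to catch before anything gets installed.
    """
    url = "https://registry.npmjs.org/" + urllib.parse.quote(name, safe="@")
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise


# Illustrative only; the last name is (presumably) a hallucination.
suggested = ["express", "left-pad", "totally-real-ai-helper-9000"]
print([pkg for pkg in suggested if npm_package_exists(pkg)])
```

Something like this sits between "the model suggests a dependency" and "npm install", which is exactly the kind of environment fix I'm talking about.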
As for the link in the reply, it's a blog post built on anecdotal evidence. Come on now... I literally have personal anecdotal evidence to parry this.
But whatever, you're always going to go "AI bad, AI bad, AI bad" until it takes your job. I really don't understand why AI denialism is so prevalent on Lemmy, a leftist platform, where we should be discussing seizing the new means of production instead of denying its existence.
Regardless, I won't contribute to this thread any further, because I believe I've made my point.
Look at what OpenAI, Google, Microsoft, etc. actually do and tell me again that this is supposedly good for workers. Jeez. 🙄