this post was submitted on 03 Jun 2024
1297 points (96.4% liked)

Technology

[–] FiniteBanjo@lemmy.today 5 points 5 months ago (2 children)

LLMs in particular are unlikely to solve many problems at all, much less a measurable share of the problems they're currently being thrown at.

[–] Joelk111@lemmy.world 3 points 5 months ago (4 children)

Tell that to the code I have it write and debug daily. I was skeptical at first, but it's been a huge help for that, as well as for learning new (development) languages.

[–] AusatKeyboardPremi@lemmy.world 10 points 5 months ago (2 children)

I do not agree with @FiniteBanjo@lemmy.today’s take. LLMs, as they are used today, at the very least reduce the number of steps required to consume previously documented information. So they are solving at least one problem, especially on today’s Internet, where one has to wade through cruft of irrelevant paragraphs and annoying pop-ups to reach the actual nugget of information.

Having said that, since you have shared an anecdote, I would like to share a counter(?) anecdote.

Ever since our workplace allowed the use of LLM-based chatbots, I have never seen them actually help debug an undocumented error or a non-traditional environment/configuration. They have always hallucinated when I used them to debug such errors.

In fact, I am now so sceptical of the responses that I avoid these chatbots entirely and debug errors the “old school” way, with traditional search engines.

Similarly, while using them to learn new programming languages or technologies, I always got incorrect responses to indirect questions, and I would only discover that a response was a hallucination after verifying it through implementation. That defeats the entire purpose.

I do try out the latest launches and improvements, since I know the responses will eventually get better. Most recently, I tried GPT-4o when it was announced, but I still don’t find these tools useful for the purposes mentioned above.

[–] Joelk111@lemmy.world 1 points 5 months ago

That's an interesting anecdote. Usually my code sorta works and I just have to debug it a little, and it's way faster to get to a viable starting point than starting from scratch.

Oftentimes it doesn't know about the issue I'm debugging, but sometimes it helps me catch stupid mistakes.

I'd probably give it a 50% success rate, but I'll take the help.

[–] FiniteBanjo@lemmy.today 1 points 5 months ago

Seems like you agreed with everything I said, tho.

[–] FiniteBanjo@lemmy.today 3 points 5 months ago (1 children)

Mate, all it does is predict the next word or phrase. It doesn't know what you're trying to do, and it has no ethics. When it fucks up, it's going to be your fuckup, and since you relied on the bot rather than learning to do it yourself, you're not going to be able to fix it.
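To make that concrete: "predicting the next word" really is just a loop like this minimal sketch (using the Hugging Face transformers library, with gpt2 purely as an illustrative model, not anything anyone here claimed to use):

```python
# Minimal sketch of greedy next-token decoding: the core of what an LLM does.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The bug in this code is", return_tensors="pt").input_ids

for _ in range(10):                   # extend the text by 10 tokens
    logits = model(ids).logits        # a score for every vocabulary token
    next_id = logits[0, -1].argmax()  # greedily take the most likely one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))       # prompt plus 10 predicted tokens
```

There's no representation of what you're trying to accomplish anywhere in that loop, only a "most likely continuation."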

[–] Joelk111@lemmy.world 1 points 5 months ago* (last edited 5 months ago)

I understand how it works, but that's irrelevant if it does work as a tool in my toolkit. I'm also not relying on the LLM, I'm taking it with a massive grain of salt. It usually gets most of the way there, and I have to fix issues or have it revise the code. For simple stuff that'd be busy work for me, it does pretty well.

It would be my fuckup if it fucks up and I don't catch it. I'm not putting the code it writes directly into production; I'm not stupid.

[–] balder1991@lemmy.world 1 points 5 months ago* (last edited 5 months ago) (1 children)

I think they do help, but not nearly as dramatically as the companies making money from them want us to think. It’s just a tool that helps, like a good IDE has in the past.

[–] Joelk111@lemmy.world 2 points 5 months ago

Oh absolutely, I agree with that comparison. That said, I'd take an IDE over AI 11 times out of 10.

[–] balder1991@lemmy.world 3 points 5 months ago

I mean, if LLMs really make software engineering easier, we should also expect Linux apps to improve dramatically. But I’m not betting on it.