this post was submitted on 16 Apr 2024

Technology

top 13 comments
[–] CarbonatedPastaSauce@lemmy.world 45 points 7 months ago (4 children)

I write automation code for devops stuff. I’ve tried to use ChatGPT several times for code, and it has never produced anything of even mild complexity that would work without modification. It loves to hallucinate functions, methods, and parameters that don’t exist.
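A made-up illustration of the kind of thing I mean (the commented-out call is invented on purpose to look plausible; the fix underneath uses a real boto3 call):

```python
import boto3

ec2 = boto3.client("ec2")

# The kind of plausible-looking call it invents. This method does not
# exist on the boto3 EC2 client (fabricated here to show the pattern):
# instances = ec2.get_instances_by_tag(tag="Env", value="prod")

# What actually works: describe_instances with a tag filter.
response = ec2.describe_instances(
    Filters=[{"Name": "tag:Env", "Values": ["prod"]}]
)
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"])
```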

It’s very good at pointing you in the right direction, especially for people just learning. But at the level it’s at now (and with all the articles saying we’re already seeing diminishing returns from LLMs), it won’t be replacing any but the worst coders out there any time soon.

[–] QuadratureSurfer@lemmy.world 9 points 7 months ago (1 children)

It's great for pseudocode. But I prefer to use a local LLM that's been fine-tuned for coding. It doesn't seem to hallucinate functions/methods/parameters anywhere near as much as when I was using ChatGPT... but admittedly I haven't used ChatGPT for coding in a while.

I don't ask it to solve the entire problem; I mostly just work with it to come up with bits of code here and there. Basically, it can partially replace Stack Overflow. It can save time in some cases, for sure, but companies are severely overestimating LLMs if they think they can replace coders with them in their current state.

[–] Pantherina@feddit.de 1 points 7 months ago (1 children)
[–] QuadratureSurfer@lemmy.world 2 points 7 months ago

I use this model for coding: https://huggingface.co/TheBloke/dolphin-2.5-mixtral-8x7b-GGUF I would recommend the one with the Q5_K_M quant method if you can fit it.
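If anyone wants to try it, here's a minimal sketch of running that quant locally, assuming llama-cpp-python and TheBloke's usual file naming for the Q5_K_M download (adjust n_ctx and n_gpu_layers for your hardware):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# File name assumed from TheBloke's naming convention for the Q5_K_M quant.
llm = Llama(
    model_path="./dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf",
    n_ctx=4096,            # context window
    n_gpu_layers=-1,       # offload all layers to the GPU if they fit
    chat_format="chatml",  # dolphin models use the ChatML prompt format
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that retries an HTTP GET with backoff."},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```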

[–] tal@lemmy.today 6 points 7 months ago* (last edited 7 months ago) (1 children)

I can believe that they'll manage to get useful general code out of an AI, but I don't think that it's gonna be as simple as just training an LLM on an English-to-code mapping. Like, part of the job is gonna be identifying edge conditions, and those can't just be derived from the English alone, or from a lot of other code. It has to have some kind of deep understanding of the subject matter on which it's working.
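A trivial made-up example of what I mean: the English description "split a bill evenly between n people" sounds complete, but the edge conditions below aren't in the English at all; they come from knowing the problem space:

```python
def split_bill(total_cents: int, people: int) -> list[int]:
    """Split a bill "evenly": the spec as stated never mentions these edges."""
    if people <= 0:
        # Edge condition: the English never says what "between zero people" means.
        raise ValueError("need at least one person")
    base, remainder = divmod(total_cents, people)
    # Edge condition: cents rarely divide evenly, so someone absorbs the remainder.
    return [base + 1 if i < remainder else base for i in range(people)]
```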

Might be able to find limited-domain tasks where you can use an LLM.

But I think that a general solution will require not just knowing the English task description and a lot of code. An AI has to independently know something about the problem space for which it's writing code.

[–] Cryan24@lemmy.world 1 points 7 months ago

It's good for doing the boilerplate code for you, but that's about it. You still need a human to do the thinking on the hard stuff.

[–] 7heo@lemmy.ml 2 points 7 months ago

The thing is, devops is pretty complex and pretty diverse. You've got at least 6 different solutions among the popular ones.

Last time I checked the list of available provisioning software alone, I counted 22.

Sure, some, like cdist, are pretty niche. But still, when you apply to a company, even though the platform is going to be AWS (mostly), Azure, GCE, Oracle, or some run-of-the-mill VPS provider with extended cloud features (imitation S3 based on MinIO, "cloud LAN", etc.), and you are likely going to use Terraform for host provisioning, the most relevant information to check is which software they use. Packer? Or dynamic provisioning like Chef? Puppet? Ansible? Salt? Or one of the "lesser ones"?

And the thing is, even across successive versions of compatible stacks, the DSLs evolved and the way things are supposed to be done changed. For example, before Hiera, Puppet was an entirely different beast.

And that's not even throwing Docker (or rkt, or appc) into the mix. Then you have k8s, Podman, Helm, etc.

The entire ecosystem has considerable overlap too.

So, on one hand, you have pretty clean and usable code snippets on Stack Overflow, GitHub gists, etc. So much so that entire tools emerged around them... And then, the very second LLMs were able to produce any moderately usable output, they were trained on that data.

And on the other hand, you have devops. An ecosystem with no clear boundaries, no clear organisation, not much maturity yet (in spite of the industry being more than a decade old), and so organic that keeping up with developments is a full time job on its own. There's no chance in hell LLMs can be properly trained on that dataset before it cools down. Not a chance. Never gonna happen.

[–] TimeSquirrel@kbin.social 2 points 7 months ago* (last edited 7 months ago)

Context-aware AI is where it's at. One that's integrated into your IDE and can see your entire codebase and offer suggestions with functions and variables that actually match the ones in your libraries. GitHub Copilot does this.

Once the codebase gets large enough, a lot of times you can just write out a comment and suddenly you'll have a completely functional code block pop up underneath it, and you hit "tab" to accept it and move on. It's a very sophisticated autocomplete. It removes tediousness and lets you focus on logic.
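Something like this (hand-written to show the shape of the workflow, not actual Copilot output): you type the comment, and the function underneath is the sort of thing the autocomplete pops up for you to tab-accept:

```python
import re

# strip ANSI color codes from captured build logs before diffing them
def strip_ansi(text: str) -> str:
    # ...this body is what gets suggested under the comment above
    ansi_escape = re.compile(r"\x1b\[[0-9;]*m")
    return ansi_escape.sub("", text)
```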

[–] AdamEatsAss@lemmy.world 27 points 7 months ago (2 children)

Lol. Humans are just moving up the stack. I'm sure some people were upset about how we wouldn't need electrical engineers anymore once digital circuits were invented. AI is a tool; without a trained user, a tool is almost useless.

[–] abhibeckert@lemmy.world 6 points 7 months ago* (last edited 7 months ago)

AI is a tool; without a trained user, a tool is almost useless.

Exactly. This feels a bit like the invention of the wheel to me. Suddenly some things are a lot easier than they used to be and I'm sitting here thinking "holy crap half my job is so easy now" while watching other people harp on about all the things it doesn't help with. Sure - they're right, but who cares about that? Look at all the things this tool can do.

[–] vanderbilt@lemmy.world 4 points 7 months ago

I use Claude to write plenty of the code we use, but it comes with the huge caveat that you can't blindly accept what it says. Ever hear newscasters talk about some hacker thing and wonder how they got it so wrong? It's the same thing with AI code sometimes. If you can code, you can tell what it got wrong.

[–] ptz@dubvee.org 23 points 7 months ago* (last edited 7 months ago)

Is that why Windows 11 sucks so much? Like, did they just turn their codebot loose on the repo?

[–] tsonfeir@lemm.ee 14 points 7 months ago

Bugs. Bugs. Bugs.

AI is fine as an assistant, or to brainstorm ideas, but don’t let it run wild, or take control.