this post was submitted on 16 Sep 2024
370 points (98.2% liked)

[–] nexv@programming.dev 57 points 3 months ago

Not specified for this research, but... if you rely on an LLM to write security-sensitive code, I don't expect you to write secure code without an LLM anyway.

[–] 2pt_perversion@lemmy.world 26 points 3 months ago

I'm doing my part by writing really shitty foss projects for AI to steal and train on.

[–] NauticalNoodle@lemmy.ml 18 points 3 months ago* (last edited 3 months ago)

It seems to me that if one can explain the function of their pseudocode in enough detail for an LLM to turn it into a functional and reliable program, then the hardest part of writing the code was already done without the LLM.

[–] Nomecks@lemmy.ca 16 points 3 months ago (2 children)

No worries, the properly implemented CI/CD pipelines will catch the bad code!

[–] azimir@lemmy.ml 17 points 3 months ago (1 children)

I had a student come into office hours asking why their program got a bad grade. I looked, and it didn't actually do anything related to the assignment.

Upon further query, they objected, saying that the CI pipeline built it just fine.

So... yeah. You can write a program that builds and runs but doesn't do the required tasks, which makes it wrong. This was not a concept they'd figured out yet.

[–] Arcka@midwest.social 9 points 3 months ago

Shouldn't the pipeline have failed unless the functional tests passed?

[–] Hasherm0n@lemmy.world 7 points 3 months ago

Until you find out those were also built by a junior using an llm to help 🙃

[–] HubertManne@moist.catsweat.com 3 points 3 months ago (2 children)

I really don't get how it's different from a search engine. Granted, it's surprising how often I have to give up in disgust and just go back to normal search, but pretty often they can find more relevant stuff faster.

[–] cypherpunks@lemmy.ml 21 points 3 months ago (1 children)

I really don't get how it's different from a search engine

Neither did this guy.

The difference is that LLM output is (in the formal sense) bullshit.

[–] HubertManne@moist.catsweat.com 1 points 3 months ago (1 children)

So is search. I mean, I would not click the first link from a search and then copy and paste code from the site into my project, no questions asked. Similarly, you can look over what the AI comes up with and see if it makes sense, the same as you would do with some dude's blog. You can also check the references it gives, or ask it to expand on some part: hey, what does the function X do? I really don't see it as being worse than search.

[–] moriquende@lemmy.world 9 points 3 months ago* (last edited 3 months ago) (1 children)

Not that you should be copy-pasting any significant amount of code, but at least when you do, you're required to understand it enough to fit it into your program. LLMs just straight up camouflage the shit code by producing something that already fits and has no squiggly red lines beneath it. Many people probably don't bother reading it at that point.

[–] HubertManne@moist.catsweat.com 0 points 3 months ago (1 children)

Yeah, I mean, by that standard anything a person like that uses is going to be an issue. They can be useful, but I'm worried about the power they use, although I wonder how much power that is relative to searching different blogs for 10 or 20 minutes.

[–] Facebones@reddthat.com 3 points 3 months ago (1 children)

For a point of comparison, a ChatGPT request uses 2.9 watt-hours (and rising) versus 0.3 for a Google search (which, per your example, would only be run once, assuming you're checking different blogs from the same list of results).

https://timesofindia.indiatimes.com/technology/tech-news/chatgpt-google-search-need-power-to-run-heres-how-much-water-and-electricity-are-used-to-answer-questions/articleshow/111382705.cms
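Taking the article's figures at face value, a quick back-of-envelope ratio puts the per-query gap at roughly an order of magnitude:

```python
# Per-query energy figures as reported in the linked article (watt-hours).
chatgpt_wh = 2.9  # one ChatGPT request
google_wh = 0.3   # one Google search

ratio = chatgpt_wh / google_wh
print(f"~{ratio:.1f}x more energy per ChatGPT request")  # prints ~9.7x
```

So even if a search session fans out into ten follow-up queries, it lands in the same energy ballpark as a single ChatGPT request, by these numbers.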

[–] HubertManne@moist.catsweat.com 1 points 3 months ago

Generally I end up checking some results and often changing the search with new keywords, but all the same, I'm generally asking similar follow-up questions. I'm betting any energy the AI uses to check web destinations is likely not included, which would be the same as me going to a destination myself; maybe less if it's more of a crawl or an API. Any way you slice it, it's going to be more, I think.