this post was submitted on 17 Mar 2025
1347 points (99.7% liked)

Programmer Humor


Post funny things about programming here! (Or just rant about your favourite programming language.)

founded 5 years ago
[–] Gladaed@feddit.org 8 points 9 hours ago (1 children)

Ok, but what did they try to do as a SaaS?

[–] synicalx@lemm.ee 4 points 10 hours ago (2 children)

Devil's advocate, not my actual opinion: if you can make a Thing that people will pay to use, easily and without domain-specific knowledge, why wouldn't you? It may hit issues at some point, but by then you've already got ARR and might be able to sell it.

[–] rmuk@feddit.uk 13 points 8 hours ago

If you started from first principles and made a car or, in this case, told a flailing intelligence precursor to make a car, how long would it take to create ABS? Seatbelts? Airbags? Reinforced fuel tanks? Firewalls? Collision avoidance? OBD ports? Handsfree kits? Side-impact bars? Cupholders? Those are all things created in response to problems that Karl Benz couldn't have conceived of, let alone solved.

Experts don't just have skills, they have experience. The more esoteric the challenge, the more important that experience is. Without it, you'll very quickly find your product failing due to long-solved problems, leaving you - and your customers - exposed to dangers that a reasonable person would conclude shouldn't exist.

[–] Jimmycrackcrack@lemmy.ml 2 points 8 hours ago* (last edited 2 hours ago)

Yeh, arguably and to a limited extent, the problems he's having now aren't the result of the decision to use AI to make his product so much as the decision to tell people about it, plus people deliberately attempting to sabotage it. I'm careful to qualify that, though, because the self-evident flaw in his plan, even if it only surfaced in a rather extreme scenario, is that he lacks the domain-specific knowledge to actually make his product work as soon as anything becomes more complicated than just collecting the money.

Evidently there was more to this venture than just building the software; that part was necessary for it to be a viable service, but not sufficient. It's much like if you considered yourself the ideas man, paid a programmer to engineer the product for you, fired them straight after without hiring anyone to maintain it, keep the infrastructure going, or support your clients, and then claimed you 'built' the product. You'd be in a similar scenario not long after your first paying customer finds out the hard way that you don't actually know anything about your own service that you willingly took money for. He's discovering he can't actually provide the service part of the Software as a Service he's selling.

[–] dojan@lemmy.world 24 points 1 day ago (1 children)

Was listening to my go-to podcast during morning walkies with my dog. They brought up an example where some couple was using ShatGPT as a couples therapist, and what a great idea that was. They talked about how one of the podcasters has more of a friend-like relationship with "their" GPT.

I usually find this podcast quite entertaining, but this just got me depressed.

ChatGPT is made by the same company that stole Scarlett Johansson's voice. It's in the same vein as companies that think it's perfectly okay to pirate 81 terabytes of books, despite definitely being able to afford to pay the authors. I don't see a reality where it's ethical, or indicative of good judgement, to trust a product from any of these companies with your information.

[–] Bazoogle@lemmy.world 10 points 1 day ago (1 children)

I agree with you, but I do wish a lot of conservatives used ChatGPT or other AIs more. At the very least, it would tell them that a lot of the batshit stuff they believe is wrong and clear up a lot of the blatant misinformation. With time, will more batshit AIs be released to reinforce their current ideas? Yea. But ChatGPT is trained on enough (granted, stolen) data that it isn't prone to retelling conspiracy theories. Sure, it will lie to you and make shit up when you get into niche technical subjects, or when you ask it to do basic counting, but it certainly wouldn't say Ukraine started the war.

[–] ZMoney@lemmy.world 2 points 1 day ago

It will even agree that AIs shouldn't be controlled by oligarchic tech monopolies and should instead be distributed freely and fairly for the public good, but that the international system of nation-states competing against each other militarily and economically prevents this. Then again, maybe it would agree with the opposite of that too; I didn't try asking.

[–] bitjunkie@lemmy.world 23 points 1 day ago

AI can be incredibly useful, but you still need someone with the expertise to verify its output.

[–] Treczoks@lemmy.world 45 points 1 day ago (1 children)

That is the future of AI written code: Broken beyond comprehension.

[–] LiveLM@lemmy.zip 15 points 1 day ago* (last edited 1 day ago)

Ooh is that job security I hear????

[–] Phoenicianpirate@lemm.ee 29 points 1 day ago (1 children)

I took a web dev boot camp. If I were to use AI, I'd use it as a tool, not as the motherfucking builder! AI gets even basic math equations wrong!

[–] KyuubiNoKitsune@lemmy.blahaj.zone 3 points 10 hours ago (1 children)

Can't expect predictive text to be able to do math. You can get it to use a programming language to do it, though. If you ask it in a programmatic way, it'll generate and run its own code. That's the only way I got it to count the amount of r's in strawrbrerry.
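(The point above in code form: letter counting is trivial for an actual program, even though an LLM predicting tokens often flubs it. A minimal sketch; the function name is just illustrative.)

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```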

[–] rmuk@feddit.uk 3 points 8 hours ago (2 children)

I love strawrbrerry mllilkshakes.

[–] slappypantsgo@lemm.ee 1 points 1 hour ago

At the liberry!

Way better than wrasprbrerry

[–] Nangijala@feddit.dk 36 points 1 day ago (1 children)

This feels like the modern version of those people who gave out the numbers on their credit cards back in the 2000s and would freak out when their bank accounts got drained.

[–] allo@sh.itjust.works 137 points 1 day ago (3 children)

Hilarious and true.

Last week some new up-and-coming coder was showing me their tons and tons of sites made with the help of ChatGPT. They all look great on the front end. So I tried to use one. Error. Tried to use another. Error. I mentioned the errors and they brushed them off. I am 99% sure they don't have the coding experience to fix them. I politely disconnected from them at that point.

What's worse is when a non-coder asks me, a coder, to look over and fix their AI-generated code. My response is "no, but if you set aside an hour I will teach you how HTML works so you can fix it yourself." Not one of these kids asking AI to code things has ever accepted, which, to me, means they aren't worth my time. Don't let them use you like that. You aren't another tool they can combine with AI to generate things correctly without having to learn anything themselves.

[–] _carmin@lemm.ee 1 points 9 hours ago

Coder? You haven't been to university, right?

[–] Thoven@lemdro.id 60 points 1 day ago

100% this. I've gotten to where, when people try to rope me into their new million-dollar app idea, I tell them that there are fantastic resources online to teach yourself to do everything they need. I offer to help them find those resources and even help when they get stuck. I've probably done this dozens of times by now. No bites yet. All those millions wasted...

[–] M0oP0o@mander.xyz 107 points 1 day ago (2 children)

Ha, you fools still pay for doors and locks? My house is now 100% done with fake locks and doors; they're so much lighter and easier to install.

Wait, why am I always getting robbed lately? It can't be my fake locks and doors! It has to be weirdos online following what I do.

[–] MisterFrog@lemmy.world 3 points 21 hours ago

The difference is that locks on doors truly are just security theatre in most cases.

Unless it's a BiLock and it takes the LockPickingLawyer 3 minutes to pick it open.

https://m.youtube.com/watch?v=f5uk6C1iDkQ

[–] RedSnt@feddit.dk 53 points 1 day ago (1 children)

Yes, yes there are weird people out there. That's the whole point of having humans who understand the code: they can correct it.

[–] interdimensionalmeme@lemmy.ml 37 points 1 day ago (2 children)

ChatGPT, make this code secure against weird people trying to crash and exploit it.

[–] Little8Lost@lemmy.world 23 points 1 day ago* (last edited 1 day ago)

beep boop
fixed 3 bugs
added 2 known vulnerabilities
added 3 race conditions
boop beeb

[–] PeriodicallyPedantic@lemmy.ca 39 points 1 day ago

I hope this is satire 😭

[–] cronenthal@discuss.tchncs.de 291 points 2 days ago (2 children)

Bonus points if the attackers use ai to script their attacks, too. We can fully automate the SaaS cycle!

[–] 1024_Kibibytes@lemm.ee 116 points 2 days ago (15 children)

That is the real dead internet theory: everything from production to malicious actors to end users is just AI scripts wasting electricity and hardware resources for the benefit of no human.

[–] slappypantsgo@lemm.ee 10 points 1 day ago

Holy crap, it’s real!

[–] rekabis@programming.dev 59 points 1 day ago (28 children)

The fact that “AI” hallucinates so extensively and gratuitously just means that the only way it can benefit software development is as a gaggle of coked-up juniors, leaving the senior incapable of working on their own stuff because they're constantly in janitorial mode.

[–] daniskarma@lemmy.dbzer0.com 15 points 1 day ago* (last edited 1 day ago) (1 children)

Plenty of good programmers use AI extensively while working. Me included.

Mostly as an advanced autocomplete, template builder, or documentation parser.

You obviously need to be good at it so you can see at a glance whether the written code is good or bullshit. But if you are good, it can really speed things up without any risk, as you will only copy code that you know is good and discard the bullshit.

Obviously you cannot develop without programming knowledge, but with programming knowledge it's just another tool.

[–] Nalivai@lemmy.world 9 points 1 day ago (2 children)

I maintain the strong conviction that if a good programmer uses an LLM in their work, they just add more work for themselves, and if a less-than-good one does it, they add new, exciting, and difficult-to-find bugs, while maintaining false confidence in their code and themselves.
I have seen so much code that looks good on first, second, and third glance but is actually full of shit, and I was only able to find that shit by doing external validation, like talking to the dev or brainstorming ways to test it - the things you categorically cannot do with an unreliable random-word generator.

[–] HumanPerson@sh.itjust.works 1 points 20 hours ago

There is an exception to this, I think. I don't make AI write much, but it is convenient to give it a simple Java class, say "write a toString", and have it spit out something usable.
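(The kind of boilerplate meant above, sketched in Python for consistency with the earlier example: the comment describes Java's toString(), and __repr__ is the rough Python analogue. The class is hypothetical.)

```python
class Point:
    """A toy class standing in for the 'simple Java class' in the comment."""

    def __init__(self, x: int, y: int):
        self.x = x
        self.y = y

    def __repr__(self) -> str:
        # The sort of mechanical string-representation method an AI
        # can generate usably, since it is pure boilerplate.
        return f"Point(x={self.x}, y={self.y})"

print(Point(2, 3))  # Point(x=2, y=3)
```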

[–] merthyr1831@lemmy.ml 167 points 2 days ago (10 children)

AI is yet another technology that enables morons to think they can cut out the middleman of programming staff, only to very quickly realise that we're more than just monkeys with typewriters.

[–] GenosseFlosse@feddit.org 12 points 1 day ago

But what site is he talking about?

[–] satans_methpipe@lemmy.world 22 points 1 day ago

Eat my SaaS

[–] rtxn@lemmy.world 100 points 2 days ago

"If you don't have organic intelligence at home, store-bought is fine." - leo (probably)

[–] Charlxmagne@lemmy.world 31 points 1 day ago (1 children)

This is what happens when you don't know what your own code does: you lose the ability to manage it. That is precisely why AI won't take programmers' jobs.

[–] ILikeBoobies@lemmy.ca 33 points 1 day ago (3 children)

I don’t need AI to not know what my code does

[–] thickertoofan@lemm.ee 8 points 1 day ago

A taste of his own medicine
