[-] dan1101@lemm.ee 41 points 2 months ago
[-] Olap@lemmy.world 12 points 2 months ago

Also, obviously no

[-] hendrik@palaver.p3x.de 28 points 2 months ago* (last edited 2 months ago)

TL;DR: Not anytime soon. It fails even at simple tasks.

[-] Technus@lemmy.zip 20 points 2 months ago

Even if it didn't, any middle manager who decides to replace their dev team with AI is going to realize pretty quickly that actually writing code is only a small part of the job.

Won't stop 'em from trying, of course. But when the laid-off devs get frantic calls from management asking them to come back and fix everything, they'll be in a good position to negotiate a raise.

[-] hendrik@palaver.p3x.de 2 points 2 months ago

If anything, AI could be used to replace managers 😆 I mean, a lot of management seems to be just pushing paper, which is ideal for an AI to handle. But I think we'll still need people to do the real work for quite some time to come. Software architecture and coding (complex) stuff in particular isn't easy. Neither is project management. So I guess even some managers can stay.

[-] conciselyverbose@sh.itjust.works 6 points 2 months ago

Good management is almost all people skills. It needs to be influenced by domain knowledge for sure, but it's almost all about people.

You can probably match trash managers, but you won't replace remotely competent ones

[-] hendrik@palaver.p3x.de 2 points 2 months ago

I'm not even sure about the "people skills" of ChatGPT. Maybe it's good at that. It always says things like "...you have to consider this side, but also the other side..." or "...this is like that, however it might..." It can weasel itself out of situations (as it did in this video). It makes a big effort to keep a very friendly tone in all circumstances. I think OpenAI has put a lot of effort into giving ChatGPT something that resembles a portion of people skills.

I've used those capabilities to rephrase emails that needed to tell some uncomfortable truths without scaring someone away, and it did a halfway decent job. Better than I could do. And we already see those people skills in use by companies that replace their first-level support with AI. I read somewhere that it gets better customer satisfaction ratings than a human-powered call center. It's good at pacifying people, being nice to them and answering the most common 90% of questions over and over again.

So I'm not sure what to make of this. I think my point still remains valid. AI (at least ChatGPT) is orders of magnitude better at people skills than at programming. I'm not sure what kind of counterexamples we have... Sure, it can't come to your desk, look you in the eyes and see if you're happy or need something. Because it doesn't have any eyes. But at the same time that's a thing I rarely see with average human managers in big offices, either...

[-] conciselyverbose@sh.itjust.works 3 points 2 months ago

Using flowery language isn't "people skills".

People skills means handling conflict and competing objectives between people fairly and efficiently. It's a trait based almost entirely on empathy, with a level of ingenuity mixed in, and GPT isn't anywhere within many orders of magnitude of either. It will be well after it "can code" that it does anything remotely in the neighborhood of the soft skills of being a competent manager.

[-] hendrik@palaver.p3x.de 1 points 2 months ago* (last edited 2 months ago)

Yeah. I mean, the fundamental issue is that ChatGPT isn't human. It just mimics things. That's the way it generates text, audio and images, and it's also the way it handles "empathy": it mimics what it has learned from human interactions during training.

But in the end, does it really matter where it comes from and why? The goal of a venture is to produce or achieve something, and that isn't measured by where it comes from but by actual output. I don't want to speculate too much, but despite not having real empathy, it could theoretically achieve the same thing by faking it well enough. That has already been shown in some narrow tasks. We have customer satisfaction rates, and quite a few people saying it helps them with different things. We need to measure that and do more studies on the actual outcome of replacing something with AI. It could very well be that our perspective is wrong.

And with that said: I've tried roleplaying with AI. It seems to have some theory of mind. Not really, of course, but it gets what I'm hinting at: the desires and behaviour of characters, and so on. Lots of models are very agreeable; some can roleplay conflict. I think the current capabilities of this kind of AI are enough to fake some things well enough to get somewhere and be actually useful. I'm not saying it does or doesn't have people skills. I think it's somewhere on the spectrum between the two. I can't really tell where, because I haven't yet read any research on this.

And of course there's a big difference between everyday tasks and handling a situation that has gone completely haywire. We have to factor that in. But in reality there are ways to handle it. For example, AI and humans could split the tasks between them: routine work gets handled by AI, and anything difficult gets escalated to humans who make the hard decisions. That could already mean 80% of the labor gets replaced.

[-] conciselyverbose@sh.itjust.works 2 points 2 months ago* (last edited 2 months ago)

The actual empathy (actually being able to understand people's perspectives) is how you get to places everyone is OK with. Empathy isn't language. It's using the understanding of what people feel and want to find solutions that work well for everyone. Without understanding that perspective at a deep and intuitive level, you don't solve actual problems. You don't routinely preempt problems by seeing them before they have a chance of happening and working around them.

Actual leadership isn't stepping in when people are almost at blows and parroting "conflict resolution" at them. It's understanding who your people are and what they want and putting them in position to succeed.

[-] hendrik@palaver.p3x.de 1 points 2 months ago* (last edited 2 months ago)

I get what you're saying. I think we're getting a bit philosophical here with the empathy. My point was: sometimes what matters is whether something gets the job done. And I see some reason to believe it might become capable of that, despite doing it differently and having shortcomings.

I think it's true that empathy gets the job done. But it's a logical flaw to say that because empathy can do it, ONLY empathy can do it. It might very well not be like that. We don't know yet. I'm not set on one side or the other; I just want to see some research done and get a definitive answer instead of speculating.

And I see some reason to believe it's more complicated than that. What I outlined earlier is that it can apply something loosely resembling a theory of mind and get some use out of that. But we can also log someone's every interaction, run sentiment analysis, and find out with great accuracy whether the person sitting at a computer is happy, angry, resigned or frustrated. AI can do that all day for every employee and outperform any human manager at it. On the flip side, it can't do other things. And we already have "algorithms", for example on TikTok, YouTube etc., that can tell a lot about someone and predict things about them. That works quite well, as we all know. All of that makes me believe there is some potential to do the kinds of things we're currently discussing.
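To make the sentiment-analysis part concrete: the basic classification step is already trivially available off the shelf. A minimal sketch using the Hugging Face `transformers` pipeline with its default English sentiment model and two made-up example messages (whether this reaches "great accuracy" on real workplace chatter is exactly the part that would need study):

```python
# Sketch: run off-the-shelf sentiment analysis over logged messages.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a small default English model

messages = [
    "Happy to pick that ticket up, should be done by Friday.",
    "I've asked for this fix three times now and nothing has happened.",
]

for text in messages:
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```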

[-] conciselyverbose@sh.itjust.works 1 points 2 months ago* (last edited 2 months ago)

I'm not arguing philosophy. I'm saying that the core definition of the job description is "understand people and use that understanding to get shit done". A middle manager doesn't decide strategy. They just make their team work well together. Understanding people is the whole job.

TikTok and YouTube algorithms don't (and don't have any desire to) care what people actually want or value. They just care what results in the highest amount of time wasted on their platform, and it results in creators explicitly telling their viewers (who also don't want the nonsense) that they're doing bullshit like clickbait thumbnails and titles because YouTube makes it impossible to succeed if they don't. They (along with almost all other social media) are prime examples of what bad, toxic algorithms look like.

[-] hendrik@palaver.p3x.de 1 points 2 months ago

I think the question then becomes: what's more important, and to whom? Doing what's in the job description, or actually getting the job done? Those are two separate things, and I see arguments for both, depending on context.

And you have a point with the algorithms. They follow the goals given to them by their masters, exactly to the outcome you've outlined. But the goal is configurable. You could just as well give one the goal of maximising team efficiency, or employee satisfaction, or company revenue. Practically anything you can get a metric for.

[-] conciselyverbose@sh.itjust.works 1 points 2 months ago

The job description is the only reason the position exists. It's the entire value add. If you aren't doing it, the job isn't getting done.

[-] hendrik@palaver.p3x.de 1 points 2 months ago* (last edited 2 months ago)

But going a level deeper, the whole position only exists because a company wants some job done. Describing it is just a means to that end, not a thing in itself. I think we're circling what I consider the main point: what matters is whether the job gets done. If you do it with a job description and it gets done, fine. If you manage to go without one and it also gets done, also fine. The same goes for whether people manage people or an AI does it and gets the job done... Delivering goods is how a company makes its profit. They don't really care how it's done, because that's not what it's about. It just needs to fulfil a few criteria: be profitable (a good price/performance ratio) and be sustainable/reliable. It doesn't matter to them whether it's AI or a human, with a description or without...

And I've already had jobs where there wasn't any proper job description (just something on paper). That usually leads to severe issues if there's ever a dispute, but it nonetheless worked out well for me and my employer. I know people in similar situations, or whose job descriptions got updated because things changed. I don't welcome that, since it will result in issues, and it shouldn't be like that. But speaking from experience, a job can be done without a description if the circumstances are right. I also regularly see people going through their old stuff when retiring, reading their job description from decades ago for fun, and it's not really what they've been doing for the last 20 years.

I think our fundamental disagreement is that you say it's currently usually done like this, and therefore that's the only way to do it. That might be a conservative perspective, but logically it doesn't follow. Just because something works one way doesn't exclude other possibilities or other ways to achieve the same thing.

[-] conciselyverbose@sh.itjust.works 2 points 2 months ago

The job of middle management is "handle the human elements of the team (and potentially customers/vendors) so the team can be productive". There is no meaningful other job to do, apart from some bookkeeping stuff that ordinary software handles better than AI does. The human parts are the only things that need to be done.

[-] Technus@lemmy.zip 3 points 2 months ago

Don't even need an AI. Just teach a parrot to say "let's circle back on this" and "how many story points is that?"

[-] Disregard3145@lemmy.world 1 points 2 months ago

"Its easy, right. Just ..."

[-] Jestzer@lemmy.world 10 points 2 months ago

The rule for any article asking a question in its title is that the answer is always no.

[-] flamingo_pinyata@sopuli.xyz 4 points 2 months ago

AI is actually great at typing out code quickly, once you know exactly what you want. But it's already the case that if your engineers spend most of their time typing code, you're doing something wrong, AI or no AI.

[-] hendrik@palaver.p3x.de 4 points 2 months ago* (last edited 2 months ago)

I don't think so. I've had success letting it write boilerplate code, and simple stuff that I could have copied from Stack Overflow or a beginner's programming book. With every task from my real life, it failed miserably. I'm not sure if I did anything wrong, and it's been half a year since I last tried. Maybe things have changed substantially in the last few months, but I don't think so.

The last thing I tried was some hobby microcontroller code for robotics calculations. ChatGPT didn't really get what it was supposed to do. And instead of doing the maths, it would just invent some library functions, call them with some input values, and imagine the maths being done miraculously in the background by that nonexistent library.
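For contrast, this is the kind of explicit maths it was expected to write out instead of delegating it to an invented library. A hypothetical sketch only (hendrik doesn't say which calculation it was; inverse kinematics for a two-link planar arm is just a representative example):

```python
import math

# Inverse kinematics for a two-link planar arm: given a target point (x, y) and
# link lengths l1, l2, return the shoulder and elbow joint angles in radians.
# This is the sort of maths that got replaced by a call to a nonexistent library.
def two_link_ik(x, y, l1, l2):
    d_sq = x * x + y * y
    cos_elbow = (d_sq - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target is out of reach for this arm")
    elbow = math.acos(cos_elbow)  # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Example: target 10 cm out and 5 cm up, with two 8 cm links.
print(two_link_ik(0.10, 0.05, 0.08, 0.08))
```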

[-] flamingo_pinyata@sopuli.xyz 3 points 2 months ago* (last edited 2 months ago)

Yes, actually, I can imagine it getting microcontroller code wrong. My niche is general backend services. I've been using GitHub Copilot a lot and it has served me well for generating unit tests: write the test description and it pops out the code with ~80% accuracy.
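Roughly the workflow, for anyone who hasn't tried it: you write the test name or a one-line description and the assistant fills in the body. A hypothetical example in pytest style (the `billing` module and `parse_amount` function are made up for illustration):

```python
# Prompt written by the developer:
#   "test that parse_amount parses plain decimals and rejects negative values"
import pytest

from billing import parse_amount  # hypothetical module under test


def test_parse_amount_parses_plain_decimal():
    assert parse_amount("19.99") == 19.99


def test_parse_amount_rejects_negative_values():
    with pytest.raises(ValueError):
        parse_amount("-10.00")
```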

[-] hendrik@palaver.p3x.de 4 points 2 months ago* (last edited 2 months ago)

Sure. There are lots of tedious tasks in a programmer's life that don't require a great amount of intelligence. I suppose writing comments, docstrings, unit tests, "glue" and boilerplate code that connects things, and probably several other things that escape my mind right now, are good tasks for an AI to assist a proper programmer with, making them more effective and getting things done faster.

I just wouldn't call that programming software. "Assisting with some narrow tasks" is more accurate.

Maybe I should try doing some backend stuff, or give it an API definition and see what it does 😅 Maybe I was a bit blinded by ChatGPT having read Wikipedia and claiming it understands robotics concepts. But it really doesn't seem to have any proper knowledge. The same probably applies to engineering and other neighboring fields that might need software.

[-] flamingo_pinyata@sopuli.xyz 2 points 2 months ago

It might also have to do with specialized vs. general models. Copilot is good at generating code, but ask it to write prose text and it fails completely. In contrast, ChatGPT is awful at code but handles human-readable text decently.

[-] AmbiguousProps@lemmy.today 3 points 2 months ago
[-] agamemnonymous@sh.itjust.works 2 points 2 months ago

I think the obvious answer is "Yes, some, but not all".

It's not going to totally replace human software developers anytime soon, but it certainly has the potential to increase productivity of senior developers and reduce demand for junior developers.

[-] Telorand@reddthat.com 2 points 2 months ago

Not until it's better at QA than I am. Good luck teaching a machine how stupid end-users can be.

[-] A_A@lemmy.world 1 points 2 months ago

... it will take many years ... and designs will change considerably before we are there.

[-] OmnislashIsACloudApp@lemmy.world 1 points 2 months ago

People look at this stuff as a yes-or-no question, and that's a major misunderstanding.

I work in tech, and I can tell you 100% you could not just give a job to AI and call it a day.

I can't even imagine this type of response generation ever being capable of that without developing some sort of true intelligence, if for no other reason than that it would have to turn bad prompts, from people who don't understand what they want or what is possible, into functional projects.

That said, what I do believe is possible is that it makes maybe 5 to 10% of the job a little bit faster. Programming is maybe 10 to 20% writing code and 80 to 90% understanding what the code should be and why it isn't working that way yet.

Even the code you get from it is generally wrong, but sometimes useful.

The best-case scenario I can see right now is not that it replaces jobs but that it makes people more effective, kind of like giving a framer a nail gun instead of a box of nails and a hammer, except not as big an efficiency gain.

Ultimately this might mean you do the job with 8 people instead of 10, or something like that.

If it reduced the total number of jobs because it was a tool that made people more effective, did it take the jobs away?

[-] pathief@lemmy.world 1 points 2 months ago

Even if the AI were at the point of outputting exactly what you want, correctly, decision makers would still need to be able to specify exactly what they want and need. "I want a website that pops" isn't going to cut it.

[-] tal@lemmy.today 0 points 2 months ago

In the long run, sure.

In the near term? No, not by a long shot.

There are some tasks we can automate, and that will happen. That's been a very long-running trend, though; it's nothing new. People generally don't write machine language by physically flipping switches these days; many decades of automation have happened since then.

I also don't think that a slightly-tweaked latent diffusion model, of the present "generative AI" form, will get all that far, either. The fundamental problem, taking an incomplete specification in human language and translating it into a precise set of rules in machine language while making use of knowledge of the real world, isn't something I expect you can do very effectively by training on an existing corpus.

The existing generative AIs work well on tasks where you have a large training corpus that maps from something like human language to an image. The resulting images don't have a lot of hard constraints on their precision; you can illustrate that by generating a batch of ten images for a given prompt: they might all look different, but a fair number will look decent enough.

I think some of that is because humans typically process images and language in a way that is pretty permissive of errors; we rely heavily on context and our past knowledge of the real world to come up with the correct meaning. An image just needs to "cue" our memories and understanding of the world. We can see images that are distorted or stylized, or see pixel art, and recognize it for what it is.

But...that's not what a CPU does. Machine language is not very tolerant of errors.

So I'd expect a generative AI to be decent at putting out content intended to be consumed by humans (and we have, in fact, had a number of impressive examples of that working), but less good at putting out content intended to be consumed by a CPU.

I think that lack of tolerance for error, plus the need to pull in information from the real world, is going to make translating human language to machine language a worse fit than translating human language to human language, or human language to a human-consumable image.

[-] key@lemmy.keychat.org -1 points 2 months ago

"software developer says ai will not replace software developers" feels very John Henry

[-] eager_eagle@lemmy.world 4 points 2 months ago

tbh that is vastly more reliable than "seller of hardware used to train AI models says AI will replace developers"
