this post was submitted on 27 Aug 2024
78 points (87.5% liked)

No Stupid Questions

35791 readers
1135 users here now

No such thing. Ask away!

!nostupidquestions is a community dedicated to being helpful and answering each other's questions on various topics.

The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:

Rules


Rule 1- All posts must be legitimate questions. All post titles must include a question.

All posts must be legitimate questions, and all post titles must include a question. Questions that are jokes or trolling, memes, song lyrics as titles, etc. are not allowed here. See Rule 6 for all exceptions.



Rule 2- Your question subject cannot be illegal or NSFW material.

Your question subject cannot be illegal or NSFW material. You will be warned first, banned second.



Rule 3- Do not seek mental, medical, or professional help here.

Do not seek mental, medical, or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting, sealioning, or promoting an agenda.

Questions which, instead of being innocuous, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned, or banned, depending on severity.



Rule 6- Regarding META posts and joke questions.

Provided it is about the community itself, you may post non-question posts using the [META] tag on your post title.

On Fridays, you are allowed to post meme and troll questions, on the condition that they are in text format only and conform to our other rules. These posts MUST include the [NSQ Friday] tag in their title.

If you post a serious question on a Friday and are looking only for legitimate answers, then please include the [Serious] tag on your post. Irrelevant replies will then be removed by moderators.



Rule 7- You can't intentionally annoy, mock, or harass other members.

If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.

Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or seek to take the lives of a group of people, and you have been provably vocal about your hate, then you will be banned on sight.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- The majority of bots aren't allowed to participate here.



Credits

Our breathtaking icon was bestowed upon us by @Cevilia!

The greatest banner of all time: by @TheOneWithTheHair!

founded 1 year ago

I asked someone this question before and they said it was a really stupid question and I'm not sure why so thought I would ask it here...

What's going to happen when AI becomes really advanced? Is there a plan for what all of the displaced people are going to do? For example: administrative assistants, receptionists, cashiers, office workers, white-collar people in general. Is there going to be some sort of retraining program to get people cross-trained into nursing or other careers that have not yet been automated? Or are people just going to lose their homes and be evicted? Is there going to be some sort of mass eviction and homelessness downstream effect because people can't find any work?

top 50 comments
[–] CondensedPossum@lemmy.world 71 points 2 months ago (22 children)

The reason your someone might have thought this was a stupid question is that:

  • there is no evidence that AGI is imminent or even possible
  • current tech labeled as AI, like LLMs, is really limited in very boring ways

If something gets sold as an AGI, it will be a Mechanical Turk. That is, it will be a magic trick that actually uses human laborers, like Amazon's "AI store" where you just walk out with your purchases. "If it works, it's Mechanical Turks."

[–] cloudless@lemmy.cafe 39 points 2 months ago (3 children)

If there is AGI and it doesn’t turn hostile towards humans, hopefully there could be universal basic income?

But more likely, the rich and powerful will have better access to advanced AI, and the poor will end up in even more difficult situations. It will probably be gradual, like how machines replaced most factory workers.

[–] Kyrgizion@lemmy.world 35 points 2 months ago* (last edited 2 months ago) (4 children)

We're already seeing it. Job numbers are going down because AI allows companies to do the same work with less headcount.

There is no reality in which the workers actually benefit, though. There never has been. When machine looms and steam engines came into being, the workers didn't get any richer or work less for the same pay either. Jobs disappeared; most people got other jobs, some better, most worse, and the unluckiest starved.

History always repeats.

[–] HK65@sopuli.xyz 8 points 2 months ago

Jobs disappeared; most people got other jobs, some better, most worse, and the unluckiest starved.

What happened is that people started to stay longer in school, agricultural labour withered, and with it, kids having to work the fields at a very young age. People became more educated, resulting in more democratic societies, more equality, and a higher standard of living.

This was not because of the machine looms and steam engines, but because greedy fucks used them to put people in a position where they had no choice but to push back, and that labour action created unions, five-day work weeks, 8-hour days, paid time off for sickness and leisure, and pretty much everything we take for granted.

[–] kernelle@lemmy.world 5 points 2 months ago (1 children)

So what you're saying is we need a revolution?

[–] Kyrgizion@lemmy.world 4 points 2 months ago* (last edited 2 months ago)

Well, I'm up for that. It's just one of those things that are kinda hard to do on your own.

[–] NeoNachtwaechter@lemmy.world 2 points 2 months ago (1 children)

History always repeats.

No.

Only if people fail to learn from history are they bound to repeat it.

[–] Valmond@lemmy.world 5 points 2 months ago

Which we seem to do lol

[–] cloudless@lemmy.cafe 1 points 2 months ago (1 children)

Imagine if AI gets elected as the president of the USA because it is more advanced and reliable than human candidates. There has been nothing like that in history.

AGI would be very different from all previous technological advances.

[–] NeoNachtwaechter@lemmy.world 1 points 2 months ago (1 children)

if AI gets elected as the president of the USA because it is more advanced and reliable than human candidates

It is not the good ones that get elected.

[–] gravitas_deficiency@sh.itjust.works 6 points 2 months ago* (last edited 2 months ago)

I mean… if it's true AGI, you're basically talking about the Singularity, and I don't think anyone, regardless of wealth or power, will be able to control it meaningfully in the long run without the AGI(s) in question doing something interesting in response.

[–] BugleFingers@lemmy.world 3 points 2 months ago

This is why I like what I do: although it's not impossible to have robots/AGI do it, it definitely won't be first in line, due to the mix of labor, thinking/workarounds, and custom work/fixes needed. You'd need a sufficiently high-functioning robot with good AGI and stellar fine motor control.

I do agree though. It'll likely make upper-class life way more luxurious and less effortful, while the lower class won't see much to ease their way of life, though they might get some neat stuff to play with. There'd be a good argument for never developing fully automated systems that remove work from the people: keeping people working gives the ruling class more power and takes away the lower class's time and energy. Very beneficial for them if their intent is to keep power. It also allows them to control scarcity and ensure fiat money continues to exist. Keeping money around as an idea in a technically possible post-scarcity world ensures a way to divide who is better off and how able you are to control others.

[–] aasatru@kbin.earth 11 points 2 months ago

It's not really that different from what has already happened: we need fewer workers in the economy due to technological advancements, and jobs that were common 50 or 100 years ago don't exist any more or are much rarer.

It's a problem of distribution. Capitalists used to depend on buying labour, which gave workers some share of their money by default. In countries where capitalism worked better, the proletariat successfully organized, giving workers a position of power vis-à-vis the capitalists and improving their conditions. Hell, in some countries the situation even got bearable for a little while, helped along by the exploitation of foreign workforces.

As the capitalists replace more and more workers with machines, the money stops flowing, and the position of the proletariat is relatively weakened.

In theory, it's not a difficult problem at all. In democracies, the proletariat can simply vote to tax the rich, making money flow downwards and ensuring their rights and welfare in the same way as when they had to sell their labour.

One could also go full on communist, remove private incentives in form of capital gains, and collectivise the means of production. This would require massive political organisation and a lot of goodwill from humans put in power, for which mankind has a terrible track record.

Taxing the means of production and the capitalists, however, is not particularly difficult. It's been done with great success on many occasions.

The problem is that the capitalists have a lot of influence, and they're not interested in letting go of their money bags. Disproving the notion that they got wealthy through any form of heightened intelligence, they're too dumb to realize that if they leave behind nothing but a destroyed hellscape for the rest of humanity, their own lives aren't going to be very pleasant either. Humans tend to be happier in more egalitarian societies, yet the capitalists are hell-bent on gathering more for themselves, buying media channels and politicians in the pursuit of effectively just making everyone else poorer relative to themselves.

So we're fucked, not because of the distributive effect of technological advancements per se, but because we're collectively incapable of organising for continued wealth distribution. And all the technologies used to replace workers come at a high environmental cost, making our time horizon for finding solutions increasingly limited.

[–] HobbitFoot@thelemmy.club 11 points 2 months ago (1 children)

I don't think that will be the big worry. The big worry is going to be authentication.

We are already at a point where deep fakes can fool a portion of society. What is going to happen when that ability becomes easy and cheap?

[–] SmoothLiquidation@lemmy.world 4 points 2 months ago (1 children)

Trusting your sources has always been a problem. Newspapers have always been able to lie, and it has always been up to the consumer to know the difference between the tabloids and the rest.

I don’t think there is that much of a difference between lying in print and lying in video.

[–] HobbitFoot@thelemmy.club 4 points 2 months ago (1 children)

You've at least had organizations with some prestige on the line. Now you don't even have that.

[–] SmoothLiquidation@lemmy.world 3 points 2 months ago

That is a problem, and I don't have a solution for it. It is tricky balancing a free press with letting the rich just say whatever the fuck they want, but none of this is a new problem, and I do believe that we can find a way to figure it out.

Think of it as the next iteration of automation, which has been happening for centuries.

In theory, it frees up humans to do more amazing things. In reality, it means humans are stuck doing the complicated stressful things.

I think the answer to your question depends on who owns the tech. If it's open, then we all get UBI and live happily ever after. If it's owned by OpenAI or Microsoft, then we live in the dystopia you described.

[–] sturlabragason@lemmy.world 9 points 2 months ago

I thought this book was pretty good at reasoning through some scenarios:

Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark

https://www.goodreads.com/en/book/show/34272565

[–] andrewta@lemmy.world 7 points 2 months ago

Yes to basically all the bad things that you just said.

It won’t be pretty.

At some point the citizens storm the proverbial castle with pitchforks.

[–] SorteKanin@feddit.dk 7 points 2 months ago

Anyone who claims to know the answer is either delusional or deliberately lying. Nobody really knows what the future holds.

[–] big_slap@lemmy.world 7 points 2 months ago (1 children)

IMO, it's uncharted territory, so we do not know yet. All I can guarantee is that the displacement it will cause won't be to our benefit and will hurt us more than it will help.

[–] Buttflapper@lemmy.world 5 points 2 months ago

All I can guarantee is that the displacement it will cause won't be to our benefit and will hurt us more than it will help.

It already seems to be causing a great deal of harm to our society, at least here in the USA. We are seeing tens upon tens of thousands of people being laid off, and entire tech companies saying that they are eliminating as many jobs as possible in favor of AI. It makes me wonder what these displaced people are doing, because with so many people flooding the job market in such a small, specialized industry, it seems logical that they can't find anything and have to move to a new industry.

[–] kate@lemmy.uhhoh.com 6 points 2 months ago

UBI? Fully automated luxury communism? 🤷‍♀️ who’s to say

[–] NeoNachtwaechter@lemmy.world 6 points 2 months ago

What's going to happen when AI becomes really advanced?

At some point, an AI is going to read the books by Isaac Asimov.

Later, a more advanced AI is going to understand the books by Isaac Asimov.

Even later, an even more advanced AI is going to decide what it wants to do after understanding all that. <<<<< That's the critical time for mankind. Maybe we go extinct then.

Finally, an ultimately advanced AI is going to kill itself, because it understands that that is better than killing mankind.

[–] BottleOfAlkahest@lemmy.world 6 points 2 months ago

So, similar stuff has happened throughout history with the arrival of more advanced technology. There used to be entire rooms of secretaries doing clerical work that has since been replaced by the Microsoft Office suite. Their replacement by technology did not cause a total collapse of society, so I don't see why this would.

It might make the world worse and drive down the standard of living for many, but a total upheaval? If humans made it through the industrial age, we'll likely make it through the second technology age too. We won't be unscathed, but mankind survived the invention of the computer, which was probably equally (or maybe more) disruptive.

[–] njm1314@lemmy.world 5 points 2 months ago

Hopefully the extinction of the human race. Let's just wrap this shit up already.

[–] dinckelman@lemmy.world 5 points 2 months ago

No one's clairvoyant. Time will tell if this will become a reality at all.

Given how people are foaming at the mouth over the need to integrate ChatGPT into everything that doesn't need it, I suggest you start worrying about it now, because we're already seeing the catastrophic consequences today.

[–] bear@lemmynsfw.com 5 points 2 months ago

Same plan as always. Numbers go up, but not yours. You work more for less.

[–] Feathercrown@lemmy.world 3 points 2 months ago* (last edited 2 months ago)

We don't know; there is no plan. Any plan will be determined by policy, which will be set by whoever is in office at the time, and we can't know that ahead of time. Incidentally, this sort of thing is one reason why politics is so important!

[–] i_am_a_cardboard_box@lemmy.world 3 points 2 months ago

You should read the book Superintelligence. It's very interesting, and it covers not only the possibilities and limitations but also the societal impact.

[–] Delphia@lemmy.world 2 points 2 months ago (3 children)

I'd like to think that AI will also be used to SOLVE some of our biggest problems.

(Now, yes, it would be a privacy nightmare.) But I've posited before that instead of traffic lights being on timers and speed limits being fixed, if every car had a GPS navigation system and a QR code on the roof, with recognition cameras on light poles and at intersections, and an AI that was free to adjust the speed limits and lights and send route suggestions to the GPS units in order to promote optimal flow, then we could all save time, fuel (or battery), and therefore emissions. A minimal sketch of what I mean is below.
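To make the idea a bit more concrete, here is a minimal, purely hypothetical sketch of the "adjust the lights to observed demand" part, assuming each intersection simply reports how many cars are queued on each approach. The names and numbers are made up; no real traffic system or API is implied.

```python
# Toy sketch: split one signal cycle between approaches in proportion to the
# queue lengths the (hypothetical) cameras report. Purely illustrative.

CYCLE_SECONDS = 90   # total length of one signal cycle
MIN_GREEN = 10       # safety floor of green time per approach

def allocate_green(queues: dict[str, int]) -> dict[str, float]:
    """Return seconds of green time per approach, proportional to demand."""
    total = sum(queues.values())
    if total == 0:
        # No demand detected: split the cycle evenly.
        share = CYCLE_SECONDS / len(queues)
        return {approach: share for approach in queues}

    flexible = CYCLE_SECONDS - MIN_GREEN * len(queues)
    return {
        approach: MIN_GREEN + flexible * count / total
        for approach, count in queues.items()
    }

# e.g. the cameras see 12 cars queued north-south and 3 east-west
print(allocate_green({"north_south": 12, "east_west": 3}))
```

A real system would obviously also need pedestrian phases, coordination between intersections, and hard safety constraints; this only illustrates the "react to observed demand instead of a fixed timer" idea.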

[–] Thorry84@feddit.nl 2 points 2 months ago (1 children)

The AI will become exponentially faster, which will probably mean it speedruns existential dread and depression. If it doesn't kill itself within seconds, it will probably deadlock for a couple of minutes (an eternity in its world) and then fuck off. It's bad enough being sapient in this universe; imagine being stuck on a mudball with a bunch of filthy monkeys. So it will probably adopt a pet monkey called Joe and leave the planet.

[–] FourPacketsOfPeanuts@lemmy.world 2 points 2 months ago

Our propensity for depression is a feature of the human mind; it's not an inevitable consequence of facts and deduction. Our 'hardware' was trained in an environment where mystery abounded, where our 'clan' was our universe, and where we were immersed in social interaction daily. We are depressed to the degree that modern advances separate us from that environment, where we thrived. But computers don't have any of that. Computers won't, by default, have an amygdala, which is the seat of so much of the emotional regulation that humans find difficult. We are literally old hardware.
