this post was submitted on 04 Apr 2025
431 points (96.9% liked)


US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.

In a survey comparing the views of a nationally representative sample of the general public (5,410 respondents) with those of 1,013 AI experts, the Pew Research Center found that "experts are far more positive and enthusiastic about AI than the public" and "far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years" (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, with just 15 percent expecting to be harmed.

The public does not share this confidence. Only about 11 percent of the public say they are "more excited than concerned about the increased use of AI in daily life." They're much more likely (51 percent) to say they're more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public think AI will be good for them, and nearly half anticipate they will be personally harmed by it.

top 50 comments
[–] TommySoda@lemmy.world 97 points 3 days ago (4 children)

If it were marketed and used for what it's actually good at, this wouldn't be an issue. We shouldn't be using it to replace artists, writers, musicians, teachers, programmers, and actors. It should be used as a tool to make those people's jobs easier and help them achieve better results. I understand its uses and that it's not a useless technology. The problem is that capitalism and greedy CEOs are ruining the technology by trying to replace everyone but themselves so they can maximize profits.

[–] ohshittheyknow@lemmynsfw.com 32 points 3 days ago

This. It seems like they have tried to shoehorn AI into just about everything but what it is good at.

[–] faltryka@lemmy.world 20 points 3 days ago (2 children)

The natural outcome of making jobs easier in a profit driven business model is to either add more work or reduce the number of workers.

[–] ferb@sh.itjust.works 13 points 3 days ago (3 children)

This is exactly the result. No matter how advanced AI gets, unless the singularity is realized, we will be no closer to some kind of 8-hour workweek utopia. These Silicon Valley AI fanatics are the same ones saying that basic social welfare programs are naive and unimplementable, so why would they suddenly change their entire perspective on life?

[–] pennomi@lemmy.world 4 points 3 days ago (3 children)

Yes, but when the price is low enough (honestly free in a lot of cases) for a single person to use it, it also makes people less reliant on the services of big corporations.

For example, today’s AI can reliably make decent marketing websites, even when run by nontechnical people. Definitely in the “good enough” zone. So now small businesses don’t have to pay Webflow those crazy rates.

And if you run the AI locally, you can also be free of paying a subscription to a big AI company.

[–] count_dongulus@lemmy.world 9 points 3 days ago (2 children)

Maybe pedantic, but:

Everyone seems to think CEOs are the problem. They are not. They report to, and get broad instruction from, the board. The board can fire the CEO. If you get rid of a CEO, the board will just hire a replacement.

[–] Zorque@lemmy.world 13 points 3 days ago (1 children)

And if you get rid of the board, the shareholders will appoint a new one. If you somehow get rid of all the shareholders, like-minded people will slot themselves into those positions.

The problems are systemic, not individual.

[–] pjwestin@lemmy.world 23 points 2 days ago (4 children)

Maybe that's because every time a new AI feature rolls out, the product it's improving gets substantially worse.

[–] MangoCats@feddit.it 9 points 2 days ago (2 children)

Maybe that's because they're using AI to replace people, and the AI does a worse job.

Meanwhile, the people are also out of work.

Lose-lose.

[–] null_dot@lemmy.dbzer0.com 3 points 2 days ago

Even if you're not "out of work", your work becomes more chaotic and less fulfilling in the name of productivity.

When I started 20 years ago, you could round out a long day with a few hours of mindless data entry or whatever. Not anymore.

A few years ago I could talk to people, or maybe even write a nice email communicating a complex topic. Now ChatGPT writes the email and I check it.

It's just shit, honestly. I'd rather weave baskets and die at 40 of a tooth infection than spend an additional 30 years wallowing in self-loathing and despair.

[–] dylanmorgan@slrpnk.net 37 points 3 days ago (18 children)

It’s not really a matter of opinion at this point. What is available has little, if any, benefit to anyone who isn’t trying to justify rock-bottom wages or sweeping layoffs. Most Americans, and most people on earth, stand to lose far more than they gain from LLMs.

[–] IndiBrony@lemmy.world 26 points 2 days ago (2 children)

The first thing seen at the top of WhatsApp now is an AI query bar. Who the fuck needs anything related to AI on WhatsApp?

[–] kautau@lemmy.world 8 points 2 days ago

Who the fuck needs ~~anything related to AI on~~ WhatsApp?

[–] alphabethunter@lemmy.world 5 points 2 days ago (1 children)

Right?! It's literally just a messenger. Honestly, all I expect from it is an easy and reliable way to send messages to my contacts. Anything else is questionable.

[–] ininewcrow@lemmy.ca 41 points 3 days ago (2 children)

This is like asking tobacco farmers what their thoughts are on smoking.

[–] WhatAmLemmy@lemmy.world 16 points 3 days ago* (last edited 3 days ago) (3 children)

More like asking the slaves about productivity advances in slavery. "Nothing good will come of this".

[–] MangoCats@feddit.it 2 points 2 days ago

Al Gore's family thought the political tide was turning against tobacco, so they gave up tobacco farming in the late 1980s and focused on politics instead.

[–] moonlight@fedia.io 17 points 3 days ago (6 children)

Depends on what we mean by "AI".

Machine learning? It's already had a huge effect; drug discovery alone is transformative.

LLMs and the like? Yeah, I'm not sure how positive these are. I don't think they've actually been all that impactful so far.

Once we have true machine intelligence, then we have the potential for great improvements in daily life and society, but that entirely depends on how it will be used.

It could be a bridge to post-scarcity, but under capitalism it's much more likely it will erode the working class further and exacerbate inequality.

[–] MangoCats@feddit.it 2 points 2 days ago

> Machine learning? It's already had a huge effect; drug discovery alone is transformative.

Machine learning is just large-scale automated optimization, something that had been done for decades, but the hardware finally reached a level of power where automated searches started outperforming more informed, selective searches.

The same way that AlphaZero got better at chess than Deep Blue - it just steam-rollered the problem with raw power.

[–] Sibshops@lemm.ee 11 points 3 days ago

No surprise there. We just went through the same thing with blockchain, which was supposedly going to drastically improve our lives at some unspecified point in the future.

[–] spankmonkey@lemmy.world 9 points 3 days ago

Experts are working from their perspective, which involves being employed to know the details of how the AI works and its potential benefits. They are also invested in it being successful, since they spent the time gaining that expertise. I would guess a number of them work in fields that are not easily visible to the public, and use AI systems in ways the public never will, because they are focused on things like pattern recognition on viruses or identifying locations to excavate for archaeology, workflows that always end with a human verifying the results. They use AI as a tool and see the indirect benefits.

The general public's experience is being told AI is a magic box that will be smarter than the average person, that it makes flashy images, and that it sounds more like a person than previous automated voice systems. Then they see it spit out a bunch of incorrect or incoherent answers, because they are using it the way it was promoted: as actually intelligent. They also see this unreliable tech being jammed into things that worked fine before, and the negative outcome of the hype not meeting the promises. They reject it because the way it is being pushed onto the public does not meet the expectations set by the advertising.

That is all before the public is told that AI will drive people out of their jobs, which is doubly insulting when it does a shitty job of replacing them. It is a tool, not a replacement.

[–] carrion0409@lemm.ee 7 points 3 days ago* (last edited 3 days ago) (7 children)

Because it won't. So far it's only been used to replace people and cut costs. If it were used as actually intended, it'd be a different story.

[–] CosmoNova@lemmy.world 6 points 3 days ago (3 children)

AI is mainly a tool for the powerful to oppress the less blessed. Cutting actual professionals out of the process so that CEOs' wildest dreams go unchecked already has devastating consequences, if the rumors are to be believed that some kids using ChatGPT cooked up those massive tariffs that have already erased trillions.

[–] applemao@lemmy.world 6 points 3 days ago (1 children)

Yet my libertarian centrist friend INSISTS that AI is great for humanity. I keep telling him the billionaires don't give a fuck about him, but he keeps licking boots. How many others are like this??


Lol they get a capable chatbot that blows everything out of the water and suddenly they are like "yeah, this will be the last big thing"
