this post was submitted on 12 Mar 2025
515 points (98.0% liked)

Technology


The onrushing AI era was supposed to create boom times for great gadgets. Not long ago, analysts were predicting that Apple Intelligence would start a “supercycle” of smartphone upgrades, with tons of new AI features compelling people to buy them. Amazon and Google and others were explaining how their ecosystems of devices would make computing seamless, natural, and personal. Startups were flooding the market with ChatGPT-powered gadgets, so you’d never be out of touch. AI was going to make every gadget great, and every gadget was going to change to embrace the AI world.

This whole promise hinged on the idea that Siri, Alexa, Gemini, ChatGPT, and other chatbots had gotten so good, they’d change how we do everything. Typing and tapping would soon be passé, all replaced by multimodal, omnipresent AI helpers. You wouldn’t need to do things yourself; you’d just tell your assistant what you need, and it would tap into the whole world of apps and information to do it for you. Tech companies large and small have been betting on virtual assistants for more than a decade, to little avail. But this new generation of AI was going to change things.

There was just one problem with the whole theory: the tech still doesn’t work. Chatbots may be fun to talk to and an occasionally useful replacement for Google, but truly game-changing virtual assistants are nowhere close to ready. And without them, the gadget revolution we were promised has utterly failed to materialize.

In the meantime, the tech industry allowed itself to be so distracted by these shiny language models that it basically stopped trying to make otherwise good gadgets. Some companies have more or less stopped making new things altogether, waiting for AI to be good enough before it ships. Others have resorted to shipping more iterative, less interesting upgrades because they have run out of ideas other than “put AI in it.” That has made the post-ChatGPT product cycle bland and boring, in a moment that could otherwise have been incredibly exciting. AI isn’t good enough, and it’s dragging everything else down with it.

Archive link: https://archive.ph/spnT6

[–] lightnsfw@reddthat.com 17 points 2 days ago (1 children)

This whole promise hinged on the idea that Siri, Alexa, Gemini, ChatGPT, and other chatbots had gotten so good, they’d change how we do everything. Typing and tapping would soon be passé, all replaced by multimodal, omnipresent AI helpers. You wouldn’t need to do things yourself; you’d just tell your assistant what you need, and it would tap into the whole world of apps and information to do it for you. Tech companies large and small have been betting on virtual assistants for more than a decade, to little avail. But this new generation of AI was going to change things.

I have never and will never interact with my phone by speaking to it and I don't want to be around other people who are doing that. The beauty of a touch screen and buttons is you can silently operate the device. Software can always be updated. They should be focusing on hardware features if they want to be innovative. Maybe they could start by adding back some of the shit they've removed.

[–] mihaella@feddit.uk 14 points 2 days ago* (last edited 2 days ago) (3 children)

The bar is so low that even a lean, secure Android OS without bloatware would be revolutionary.

[–] tacobellhop@midwest.social 12 points 1 day ago

This isn’t necessarily on topic, but if you wanna know what they’re betting on for AI, look into contentcyborg.ai

They wanna flood the internet with fake people, opinions, engagement etc. This creates a feedback loop of marketing budgets flooding social media for the engagement frenzy and creating ideological Dutch disease where anything will be said for a buck. We’re already there obviously culture wise, but now we’re offshoring fake souls I guess.

[–] oakey66@lemmy.world 117 points 2 days ago (5 children)

I would argue that they moved to LLMs because they had run out of ideas on actually improving cellphones. It wasn't that they were distracted by them. They are trying to distract us because they need to cell new phones every year and nothing they've come up with is really justifying shelling out $1200 for a phone that's virtually the same as the previous 3-5 iterations.

[–] thesohoriots@lemmy.world 65 points 2 days ago (1 children)

This “new phone every year” is the worst consumer crapfest we have going. AI features feel like clutching at straws when seemingly everyone hates the battery life on every single phone. Slap a larger battery in there? Well now you get shit AI that burns whatever extra capacity was gained. I can’t name a single quality on an iPhone model from the last 6 years that I truly wanted, other than the size of my 13 mini. It works fine and it fits in my pocket. Now make one that stays on for a full 24 hours and doesn’t need a battery replacement every 2 years.

[–] mitrosus@discuss.tchncs.de 10 points 2 days ago (4 children)

Blame the isheep for purchasing every crap offered.

[–] Mac@mander.xyz 14 points 2 days ago* (last edited 2 days ago)

There are plenty on Android as well and they also existed before smartphones.

[–] ch00f@lemmy.world 10 points 2 days ago (1 children)

I've been using a Sunbeam flip phone for a year or so. Paid for the phone up front, and pay $3/mo for use of maps, speech recognition, and continued bugfixes.

Even if phones never got new features, dev time still needs to be committed to security updates, and services (like Siri) need to be paid for. The model of getting 100% of your revenue from new phone sales is starting to break. If I could pay $3/mo for Siri or whatever and never have my phone go obsolete, I think that'd be a good deal.

[–] ByteJunk@lemmy.world 8 points 2 days ago (1 children)

What the heck are you on about? That's the worst possible solution to this. Are you some sort of masochist?

If Siri is something that needs to be paid for, don't bundle it with the system. Charge extra from the start, and people can opt in to that shit.

Also, they run a massively profitable software store, and THAT is what justifies and pays for the bug fixing and security patches to the overall OS.

The "cell a year" practice isn't to cover development costs, it's to bring in massive profit by milking the consumeristic herd that buys their crap.

[–] Crackhappy@lemmynsfw.com 7 points 2 days ago (1 children)

I'm not sure if that's a typo or brilliant. They need to "cell" new phones every year, indeed.

[–] tbh@lemmy.zip 1 points 1 day ago

Weird. Couldn't they ask AI what features to develop next based on brand reputation, time and resources?

[–] billwashere@lemmy.world 11 points 2 days ago (1 children)

This just reminds me of the blockchain/NFT craze. NFTs are stupid as shit, but blockchain has its uses, just like LLMs. I refuse to call it AI because it’s not; it’s a language generator. A particularly expensive language generator that costs a lot in terms of resources, but still just a language generator. It’s not all that different from the crypto craze, especially if you want a GPU for other things.

[–] ABetterTomorrow@lemm.ee 6 points 1 day ago

I mean…. Anyone could have told you that.

[–] Darkcoffee@sh.itjust.works 61 points 2 days ago (1 children)

Honestly yeah, none of the crap being made right now is going to appear relevant in the future, just like 3D TVs.

[–] TwoBeeSan@lemmy.world 17 points 2 days ago (2 children)

3D TVs are my favorite analogy. Easiest way to illustrate the bubble of hype.

[–] OhVenus_Baby@lemmy.ml 28 points 2 days ago* (last edited 2 days ago) (2 children)

Generations* Let's not forget we produce 3 or 4 models of phones a year, per manufacturer. That's an alarming amount of e-waste for the planet, and we don't have the raw materials to keep up this pace forever.

[–] barsoap@lemm.ee 9 points 2 days ago

And I still can't find a phone that has a replaceable battery, a proper IP rating, and doesn't cost an arm and a leg, or alternatively cost thrice as much as the potato display and CPU would warrant. You can get two of those things, but not all three. I won't even begin to speak of having an unlocked bootloader, or, while having the rest in place, also a flush camera. FFS, I'd be fine with no camera, I just don't want a hump. I'd be fine with 720p, it's a tiny screen after all, but good contrast without 8K doesn't seem to be a thing that companies think anyone would be interested in.

Stop fucking innovating, just apply lessons already learned. Design a phone with the mindset of designing a bottle opener.

[–] Sturgist@lemmy.ca 4 points 2 days ago

Depends on what you mean by forever. Who knows what tomorrow brings. We could be smashed back to the stone age, and effectively extinct, sometime next week.

You wouldn’t need to do things yourself; you’d just tell your assistant what you need, and it would tap into the whole world of apps and information to do it for you.

Ah, the promise made by every futurist ever.

They’re always wrong. New inventions are used to unemploy people, to insert themselves between you and what you want in order to extract money, or to try to sell you something.

[–] metaStatic@kbin.earth 42 points 2 days ago (1 children)

I've heard it put very well that AI is either having a Napster moment, in which case we will not recognise the world 10 years from now, or it's having an iPhone moment, in which case it will get marginally better at best but is essentially in its final form.

I personally think it's more like 3D movies and in 20 years when it comes back around we'll look at this crap like it was Red and Blue glasses.

[–] DaGeek247@fedia.io 24 points 2 days ago (3 children)

I think it's the iPhone stage. We've had predictive text in some form or other for a long time now, and that's basically what LLMs are. Can't speak for the image/video generators, but I expect those will become another tool in the box that gets better but does the same thing.

I just can't see a whole lot of improvement in these products making any changes to how we use them already.
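The "predictive text" comparison above is the core of it: an LLM is, at heart, a next-token predictor, scaled up enormously. A toy bigram model (purely illustrative, not how any real keyboard or LLM is implemented) shows the same mechanic:

```python
from collections import Counter, defaultdict

# Toy "predictive text": count which word follows which in a tiny corpus,
# then predict the most frequent follower. LLMs do a vastly scaled-up,
# learned version of this same next-token prediction.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    return model[word].most_common(1)[0][0]

print(predict("sat"))  # -> on
```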

[–] pennomi@lemmy.world 8 points 2 days ago (4 children)

Transformer based LLMs are pretty much at their final form, from a training perspective. But there’s still a lot of juice to be gotten from them through more sophisticated usage, for example the recent “Atom of Thoughts” paper. Simply by directing LLMs in the correct flow, you can get much stronger results with much weaker models.

How long until someone makes a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.
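A minimal sketch of what such a self-checking flow could look like. `ask_model` is a canned stub standing in for a real LLM API call (all names here are hypothetical), so only the flow logic is being illustrated:

```python
# Generate-then-verify flow sketch. `ask_model` is a stub: in a real flow
# it would call an LLM; here it returns canned strings so the control
# logic runs standalone.
def ask_model(prompt: str) -> str:
    if prompt.startswith("Check"):
        return "CONSISTENT"  # the verifier pass's verdict
    return "Paris is the capital of France."  # the draft answer

def answer_with_check(question: str, max_retries: int = 2) -> str:
    """Ask for an answer, then ask the model to audit its own claim.

    The verifier pass doesn't guarantee truth; it only rejects answers
    the model itself flags as inconsistent on a second look.
    """
    draft = ""
    for _ in range(max_retries + 1):
        draft = ask_model(f"Answer concisely: {question}")
        verdict = ask_model(f"Check this claim for errors: {draft}")
        if "CONSISTENT" in verdict:
            return draft
    return "UNVERIFIED: " + draft

print(answer_with_check("What is the capital of France?"))
```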

[–] belit_deg@lemmy.world 12 points 2 days ago (4 children)

When an LLM fabricates a falsehood, that is not a malfunction at all. The machine is doing exactly what it has been designed to do: guess, and sound confident while doing it.

When LLMs get things wrong they aren't hallucinating. They are bullshitting.

source: https://thebullshitmachines.com/lesson-2-the-nature-of-bullshit/index.html

[–] bamboo@lemm.ee 9 points 2 days ago (1 children)

Detecting a hallucination programmatically is the hard part. What is truth? Given an arbitrary sentence, how does one accurately measure the truthfulness of it? What about the edge cases, like a statement that is itself true but misrepresents something? Or what if a statement is correct in a specific context, but generally incorrect?

I’m an AI optimist but I don’t see hallucinations being solved completely as long as LLMs are statistical models of languages, but we’ll probably have a set of heuristics and techniques that can catch 90% of them.
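One of the heuristics alluded to above is self-consistency: sample the model several times and distrust answers where the samples disagree. A sketch, with a stubbed sampler in place of real LLM calls (the function names and threshold are illustrative assumptions):

```python
from collections import Counter

def sample_answers(question: str, n: int) -> list[str]:
    # Stub: in practice each entry would be an independent LLM sample
    # at temperature > 0. Canned here so the logic runs standalone.
    return ["42", "42", "41", "42", "42"][:n]

def self_consistency(question: str, n: int = 5, threshold: float = 0.6):
    """Flag a probable hallucination when sampled answers disagree.

    Returns (answer, trusted): the majority answer, and whether its
    vote share cleared the threshold. Catches unstable confabulations,
    not confidently repeated falsehoods.
    """
    votes = Counter(sample_answers(question, n))
    answer, count = votes.most_common(1)[0]
    return answer, count / n >= threshold

print(self_consistency("6 * 7?"))  # -> ('42', True): 4 of 5 samples agree
```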

[–] SkyeStarfall@lemmy.blahaj.zone 5 points 2 days ago (1 children)

I mean, in the end, I think it's literally an unsolvable problem of intelligence. It's not like humans don't "hallucinate" ourselves. Fundamentally your information processing is only as good as the information you get in, and if the information is wrong, you're going to be wrong. Or even just mistakes. We make mistakes constantly, and we're the most intelligent beings we know of in the universe.

The question is what issue exactly we're attempting to solve regarding AI. It's probably more useful to reframe it as "The AI not lying/giving false information when it should know better/has enough information to know the truth". Though, even that is a higher bar than we humans set for ourselves

[–] vrighter@discuss.tchncs.de 4 points 2 days ago (2 children)

I read that as "if you do the thinking for them, LLMs are quite good"

[–] Robaque@feddit.it 3 points 2 days ago

... a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.

Turing Completeness maybe?


Typing and tapping would soon be passé,

The tech certainly isn't ready for this. My voice input to chatgpt gets automatically translated into Welsh.

[–] Reverendender@sh.itjust.works 21 points 2 days ago (3 children)

I haven't gotten anything of use from Apple Intelligence. Even just using it is difficult, and Siri is possibly dumber than she was before.

[–] acosmichippo@lemmy.world 8 points 2 days ago

Siri has not been integrated with AI yet; they pushed that to 2026.

Based on what I’ve seen of my partner's phone, it provides an assessment of text messages. Why would someone want that?

[–] egrets@lemmy.world 7 points 2 days ago

I've used the "writing tools" extensively for minor changes, like changes to capitalization on a large block of text. It makes the phone a little less of a consumption-only device.

I've also found the image editing tools handy from time to time, and the automatic calls to ChatGPT on the more complex natural-language questions can sometimes be handy, even if you need to wait a while for the response.

The notification summaries are sometimes very handy and sometimes absurdly incorrect and misleading.

I'm really looking forward to Siri being less frustratingly stupid, but we've got a while to wait for that, and we probably shouldn't set our expectations too high. I do respect that they've not shipped it rather than shipping something broken, though.

[–] Pregnenolone@lemmy.world 13 points 2 days ago (3 children)

I’m curious as to what the opinion of AI will be in 10 years

[–] ICastFist@programming.dev 22 points 2 days ago (1 children)

I'm betting the same opinion we have today about 3D TVs

[–] pHr34kY@lemmy.world 9 points 2 days ago* (last edited 2 days ago) (1 children)

Blockchain 10 years ago was hyped like AI now.

Blockchain is now used by the US president to make money in barely legal ways.

That’s not a good outlook.

[–] pulsewidth@lemmy.world 5 points 2 days ago

Probably the same as we have now, "be neat if and when it eventually arrives".

[–] Donkter@lemmy.world 2 points 1 day ago

Some More News had the right take on this: all these companies just dumped hundreds of billions of dollars (in investment or development) into AI.

The problem is, we're still 10-15 years away from AI being actually useful in gadgets and stuff. But these companies want to get paid now, so they're shoving the cheapest, shittiest "functional" AI onto the market just to try and recoup some losses. And it's painfully apparent it isn't working.

[–] PeteWheeler@lemmy.world 12 points 2 days ago (7 children)

I just don't get why they haven't put AI in the already established 'assistants' yet.

Why aren't Siri or Google Home integrated? Why make new things instead of improving the tech you already have?

If I had to guess, its either because of branding, or because they know it doesn't work that well yet. Probably both.

  1. It doesn't work that well.
  2. If they do that, then they can't trick everyone into buying new devices, thus helping recoup the untold billions dumped into LLM-based content theft.
[–] Jimius@lemmy.ml 2 points 2 days ago (2 children)

Just look at how people use their smart speakers. They ask it to set timers or ask for the weather. AI will be the norm once the benefit is obvious to everyone, like when I can trust my AI with my credit card info and allow it to purchase stuff for me. Right now AI is basically a self-organizing dictionary which is often confidently incorrect. Not once has GPT told me it didn't know something.

[–] DogEarBookmark@reddthat.com 9 points 2 days ago

A whole generation of basically disposable devices at that
