The air begins to leak out of the overinflated AI bubble
(www.latimes.com)
No you can't. Simplifying it grossly:
They can't do the most low-level, dumbest-detail, hair-splitting, "there's no spoon" kind of solutions: the "this is just correct no matter how much you blabber in the opposite direction, this is just wrong no matter how much you blabber to support it" kind.
And that happens to be the main requirement that makes a task worth a software developer's time.
We need software developers to write computer programs, because "a general idea" even in a formalized language is not sufficient, you need to address details of actual reality. That is the bottleneck.
This technology widens the passage in places that were never the bottleneck in the first place.
I think you live in a nonsense world. I literally use it every day, and yes, sometimes it's shit, and it's bad at anything that requires even a modicum of creativity. But 90% of shit doesn't require a modicum of creativity. And my point isn't about where we're at, it's about how far the same tech has progressed on an adjacent task in another domain in three years.
Lemmy has a "dismiss AI" fetish and does so at its own peril.
First off, you're extrapolating the middle part of a sigmoid thinking it's an exponential. Secondly: https://link.springer.com/content/pdf/10.1007/s11633-017-1093-8.pdf
Dismiss at your own peril is my mantra on this. I work primarily in machine vision, and the things that people were writing off as impossible or "unique to humans" in the 90s and 2000s ended up falling rapidly, and that generation of opinion pieces is now safely stored in the round bin.
The same was true of agents for games like Go, chess, and Dota. And now the same has been demonstrated to be coming true for language.
And maybe that paper built in the right caveats about "human intelligence". But that isn't to say human intelligence can't be surpassed by something distinctly inhuman.
The real issue is that previously there wasn't a use case with enough viability to warrant the explosion of interest we've seen like with transformers.
But transformers are like, legit wild. It's bigger than U-Nets. It's way bigger than LSTMs.
So dismiss at your own peril.
Tell me you haven't read the paper without telling me you haven't read the paper. The paper is about T2 vs. T3 systems, humans are just an example.
Yeah, I skimmed a bit. I'm on like 4 hours of in-flight sleep after like 24 hours of airports and flying. If you really want me to address the points of the paper, I can, but I can also tell it doesn't diminish my primary point: dismiss at your own peril.
Oooo I'm scared. Just as much as I was scared of missing out on crypto or the last 10000 hype trains VCs rode into bankruptcy. I'm both too old and too much of an engineer for that BS especially when the answer to a technical argument, a fucking information-theoretical one on top of that, is "Dude, but consider FOMO".
That said, I still wish you all the best in your scientific career in applied statistics. Stuff can be interesting and useful aside from AI BS. If OTOH you're in that career path because AI BS and not a love for the maths... let's just say that vacation doesn't help against burnout. Switch tracks, instead, don't do what you want but what you can.
Or do dive into AGI. But then actually read the paper, and understand why current approaches are nowhere near sufficient. We're not talking about changes in architecture, we're talking about architectures that change as a function of training and inference, that learn how to learn. Say goodbye to the VC cesspit, get tenure aka a day job, and maybe in 50 years there's going to be another sigmoid and you'll have written one of the papers leading up to it, because you actually addressed the fucking core problem.
I mean, I've been doing this for 20 years and have led teams from 2-3 in size to 40. I've been the lead on systems that have had to undergo legal review at the state level, where the output literally determines policy for almost every home in a state. So you can be as dismissive or enthusiastic as you like. I truly couldn't give a shit about lay opinion, 'cause I'm out here doing this, building it, and I see it every day.
For anyone with ears to listen: dismiss this current round at your own peril.
Perilous, eh. Threatening tales of impending doom and destruction. Who are you actually trying to convince, here? I doubt it's me; I'd be flattered, but I don't think you care enough.
If Roko's Basilisk is forcing you, blink twice.
Spreading FUD is just this guy's way of trying to keep the hype alive. Techbro bullshittery 101. Reminds me of Crypto YouTube a few years back.
Those shitty investments won't pay themselves back on their own, you know?
I wish I could ignore this, but it's harming the environment so much that we can't just ignore those greedy shitheads.