this post was submitted on 10 Jul 2023
Technology
I think it’s the same reason the CEOs of these corporations are clamoring about their own products being doomsday devices: it gives them massive power over crafting regulatory policy, thus letting them make sure it’s favorable to their business interests.
Even more frustrating when you realize (and feel free to correct me if I’m wrong) that these new “AI” programs and LLMs aren’t really novel in terms of theoretical approach: the real revolution is the amount of computing power and data available to throw at them.
The funniest thing I've seen on this is OpenAI's CEO, Sam Altman, talking about how he's a bit afraid of what they've created and how it needs limitations -- and then, when the EU begins to look at regulations, he immediately rejects the concept, to the point of threatening to leave the European market. It's incredibly transparent what they're doing.
Unfortunately I don't know enough about the technology to say if the algorithms and concepts themselves are novel, but without a doubt they couldn't exist without modern computing power capabilities.
The concepts themselves are some 30 years old, but storage capacity and processing speed have only recently reached a point where generative AI outperforms competing solutions.
But regarding the regulation thing, I don't know what was said or proposed, and this is just me playing devil's advocate: could it be that the CEO simply doesn't agree with the specifics of the proposed regulations while still believing that some other, different kind of regulation should exist?
Certainly could be, but probably an optimistic take. Most likely they're just trying to do what corporations have been doing for ages, which is to weaponize government policy to prevent competition. They don't want restrictions that will materially impact their product, they want restrictions that will materially impact startups to make it more difficult for them to intrude on the established space.
I think if you fed your response into ChatGPT and asked it to summarize in two words it would return,
"Regulatory Capture"
I can tell you for a fact that there's nothing new going on -- only the MASSIVE investment from Microsoft that lets them train on an insane amount of data. I'm no "expert" per se, but I've been studying and working with AI for over a decade, so feel free to judge my reply as you please.
And what are they doing? As a reminder, OpenAI is a non-profit.
I thought they moved to for-profit back in 2019?
Wikipedia lists them as non-profit https://en.m.wikipedia.org/wiki/OpenAI
They're a non-profit managed by a for-profit, which has received most of its funding from another for-profit.
LLMs are pretty novel. They were made possible by the invention of the Transformer model, which operates significantly differently from, say, an RNN.
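To sketch that difference (a toy illustration in numpy, not either architecture in full): an RNN consumes tokens one at a time through a hidden state, so each step depends on the previous one, while a Transformer's self-attention relates every position to every other position in one parallel step.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 4, 8                      # sequence length, embedding size
x = rng.normal(size=(T, d))      # toy token embeddings

# RNN-style: strictly sequential, T dependent steps
W = rng.normal(size=(d, d)) * 0.1
h = np.zeros(d)
for t in range(T):               # each step needs the previous hidden state
    h = np.tanh(W @ h + x[t])

# Self-attention core: all pairwise interactions computed at once
scores = x @ x.T / np.sqrt(d)                   # (T, T) similarity of every pair
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)   # softmax over each row
attended = weights @ x                          # every output sees every input

print(attended.shape)
```

The parallel form is what lets Transformers be trained efficiently on huge corpora, which ties back to the scale argument above.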
It also plays into the hype cycle they’re trying to create. Saying you’ve made an AI is more likely to capture the attention of the masses than saying you have an LLM. Ditto that point for the existential doomerism these CEOs engage in. Saying your tech is so powerful that it might lead to humanity’s extinction does wonders in building hype.
Agreed. And all you really need to do is browse the headlines from even respectable news outlets to see how well it’s working. At least 50% of the time it’s just article after article uncritically parroting whatever claims these CEOs make at face value. It’s mind-numbing.
The fear mongering is pretty ridiculous.
"AI could DESTROY HUMANITY. It's like the ATOMIC BOMB! Look at its RAW POWER!"
AI generates an image of cats playing canasta.
"By God...."
This is 100% true. LLMs, neural networks, Markov chains, gradient descent, etc., on down the line, are nothing particularly new. They've collectively been studied academically for 30+ years. It's only recently that we've been able to throw huge amounts of data, computing capacity, and tuning time at said models to achieve results unthinkable 10-ish years ago.
There have been efficiencies, breakthroughs, tweaks, and changes over this time too, but that's to be expected. Largely, though, it's the sheer raw size/scale that's only recently become achievable.
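As a hedged illustration of how old some of the core ideas are: a word-level Markov chain generates text from next-token probabilities with decades-old machinery and no neural net at all (the corpus and output here are toy examples, not a claim about quality).

```python
import random

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count observed next words for each word: a first-order Markov model
transitions = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

random.seed(42)
word, out = "the", ["the"]
for _ in range(5):
    nxt = transitions.get(word)
    if not nxt:            # reached a word with no observed successor
        break
    word = random.choice(nxt)   # sample the next token
    out.append(word)
print(" ".join(out))
```

The conceptual leap from this to an LLM is less about the idea of modeling next-token probabilities and more about the scale and architecture used to estimate them.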
Okay, I’m glad I’m not too far off the mark then (I’m not an AI expert/it’s not my field of study).
I think this also points to/is a great example of another worrying trend: the consolidation of computing power in the hands of a few large companies. Without even factoring in the development of true AI/whether that can or will happen anytime soon, the LLMs really show off the massive scale of both computational power consolidation and data harvesting by only a very few entities. I’m guessing I’m not alone here in finding that increasingly concerning, particularly since a lot of development is driving towards surveillance applications.
By that logic there was nothing novel about solid-state transistors, since they just did the same thing as vacuum tubes; no innovation there, I guess. No new ideas came from finally having a way to pack cooler, less power-hungry, smaller components together.
We all remember SmarterChild....right?
I remember Tay