Have they tried replacing their workers with AI to save money?
Now that capital has integrated them into their system they will not be allowed to fail. At least for now.
The iron law of "nothing ever happens" necessitates this
spoiler
Nah but for real how much life can this bubble still have left?
A lot, because nothing ever happens.
The iron law of "nothing ever happens"
There are decades where nothing ever happens, and there are weeks where we are so back
Or Microsoft and Meta will make sure there's less competition in the future for their own LLMs?
It seems like MS could really fuck them up if they stopped using OpenAI for all their Azure stuff. As of now I don't think MS relies on their own LLM for anything?
MS abandons basically anything new that doesn't make them even more absurdly rich instantly these days.
Good, please take the entire fake industry with you
No offense to the AI researchers here (actually maybe only one person lol), but the people who lead/make profit off of/fundraise off of your efforts now are demons
I do think that if OpenAI goes bust that's gonna trigger a market panic that's gonna end the hype cycle.
Inshallah, I am fed up with dealing with these charlatans at work
A solution in search of a problem
I just know the AI hype guys in my dept are gonna get promoted and I'll be the one answering why our Azure costs are astronomical while we have not changed our portfolio size at all lol
My guess for the dynamics: OpenAI investors panic and force the company to cut costs and increase pricing; other AI companies' investors panic, with the same result; AI becomes prohibitively expensive for a lot of use cases, ending the hype cycle.
I think that's the best argument for why the tech industry won't let that happen. All of the big tech stocks are getting a boost from this massive grift.
Worst case scenario one of the tech giants buys them. Then they pare back the expenses and hide it in their balance sheet, and keep everyone thinking AGI is just around the corner.
It's certainly possible, but I don't think any of the tech giants are in a position to do that today. Google, Microsoft, and Amazon are in a cost-cutting cycle, and Meta's C-suite is probably on a short leash after the metaverse boondoggle. Apple is the most likely one because they're generally behind everyone else across all ML products, but especially LLMs; afaik, though, they're bracing for drops in sales for the first time in 15 years, so buying OpenAI might be a tough pitch.
I believe that Microsoft owns a huge portion of OpenAI, like just short of a majority stake
yeah I think that's very plausible
Inshallah
I hate when people say 'LLMs have legitimate uses but...'. NO! THEY DON'T! It's entirely a platform for building scams! It should be burnt to the ground entirely
But then how will people write 20 cover letters a day to keep up with the increasing rate of instant rejections?
Saw a really depressing ad at work the other day where Google was advertising their thing and it was some person asking their LLM to write a letter for their daughter to this athlete bragging about how she'll break her record one day. They couch it in "here's a draft" but it's just so bleak. The idea that a child so excited about doing a sport and dreaming of going to the Olympics and getting a world record can't just write a bit of a clumsy letter expressing themselves to their hero is just beyond depressing. Writing swill for automated systems that are going to reject you anyway is one thing, but the idea that they think that this is a legitimate use of these models just highlights how obnoxiously out of touch they are.
How do we learn and grow as people and find our own writing voices if we don't write some of the most cringe shit imaginable when we're young? I wrote a weird letter to Emma Watson in middle school, nobody ever read it, but it was a learning experience and made me actually have to think about my own feelings. These techbros have to have been grown in vats.
I've hesitated to ever write anything about it, thinking it'd come across as too Luddite, but this comment kind of inspired me to flesh out something that's been simmering in the back of my head ever since LLMs became the latest fad after the NFT boom.
One of the most unnerving things to me about "AI" in the common understanding is that its entire hype cycle and main use cases are tacit admissions that the pre-"AI" professional and academic standards were perfunctory hoop-jumping bullshit for joining the professional managerial class, and that its "artistic" uses are almost entirely the domain of people with zero artistic sensibilities or weirdo porno sickos. All of it betrays a deep cynicism about the status quo, where what could have been heartfelt but clumsy writing by young students, or by the athlete in your example, is being unknowingly robbed of its agency and of the humanizing future of looking back on clunky, immature writing as a personal marker of growth. They're just hoops to jump through to get whatever degree or accolade you're seeking, with whatever personal growth those achievements originally meant stripped down to "achieving them is good because it advances your career and earning potential." Techbros' most fawning and optimistic pitches of "AI" and "The Singularity" instead read to me as the grimmest and most alienating version of neoliberal "end of history" horseshit, where even art and language themselves are reduced to SEO-marketized, min/maxxed rat races.
I hope this doesn't sound too ranty, but I had to get that out
Maybe I'll expand that into something
So the emotional resonance I felt when I asked ChatGPT to write me a song about my experiences still loving the parent that abused me was what to you?
Like the results were objectively artless glurge of course but I needed that in that moment.
I don't think its purpose and mechanisms being non-artistic is incompatible with people finding meaning in it. We find meaning in random stuff all the time; it's kind of just our thing
I mean, this is exactly part of the reason they're going bankrupt, which is good, so you should keep doing it. Companies have been using other forms of AI with some success, whereas LLMs just regurgitate too much random fake information for anyone serious to use them professionally.
If it goes under, use open-source LLMs, which have been steadily improving and are almost surpassing proprietary ones.
I promise this isn't true. AI is absolutely a scam in the sense that it's overhyped as fuck, but LLMs are frequently of practical use to me when doing basically anything technical. They've helped me solve real-life problems in ways that actually materially help others.
1 trillion more parameters just a trillion more parameters bro i swear we'll be profitable then bro
Lol
Lmao even
As far as "AI" goes, it's here to stay. As for OpenAI, they will probably be bought up by one of the big players, as is usually the case with these companies.
I agree that this tech has lots of legitimate uses, and it's actually good for the hype cycle to end early so people can get back to figuring out how to apply this stuff where it makes sense. LLMs also managed to suck up all the air in the room, but I expect the real value is going to come from using them as a component in larger systems utilizing different techniques.
Yeah but integrating LLMs with other systems is already happening.
The most recent case is out of DeepMind, where they managed to get a silver-medal score on the International Mathematics Olympiad (IMO) by pairing an LLM with a formal verification language (Lean), then using synthetic data and reinforcement learning. Although I think they had to manually formalize the problems before feeding them to the algorithm, and it took several days to solve the problems (except for one that took minutes), so there's still a lot of room for improvement.
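For anyone unfamiliar with Lean: the point of routing an LLM through it is that proofs become machine-checkable, so hallucinated steps get rejected automatically instead of being trusted. A toy Lean 4 example (my own illustration, not DeepMind's code) of the kind of statement the kernel verifies:

```lean
-- If this file elaborates without error, the Lean kernel has checked
-- the proof. An LLM's candidate proof either type-checks or it doesn't;
-- there is no room for a plausible-sounding but wrong argument.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

In the IMO setting the statements are vastly harder, but the verification story is the same: the model can propose thousands of proof attempts and only kernel-checked ones count.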
Nature is healing.
and nothing of value is at risk of being lost
good
Is this because LLMs don't do anything good or useful? They get very simple questions wrong, will fabricate nonsense out of thin air, and even at their most useful they're a conversational version of a Google search. I haven't seen a single thing they do that a person would need or want.
Maybe it could be neat in some kind of procedurally generated video game? But even that would be worse than something written by human writers. What is an LLM even for?
I think there are legitimate uses for this tech, but they're pretty niche and difficult to monetize in practice. For most jobs, correctness matters, and if the system can't be guaranteed to produce reasonably correct results then it's not really improving productivity in a meaningful way.
I find this stuff is great in cases where you already have domain knowledge, and maybe you want to bounce ideas off and the output it generates can stimulate an idea in your head. Whether it understands what it's outputting really doesn't matter in this scenario. It also works reasonably well as a coding assistant, where it can generate code that points you in the right direction, and it can be faster to do that than googling.
We'll probably see some niches where LLMs can be pretty helpful, but their capabilities are incredibly oversold at the moment.
big holders with insider information switch to short positions to make money during the crash by putting their shares up as collateral to investment banks in exchange for loans; the bubble bursts; smaller investors lose money; the government steps in and bails them out because they're "too big to fail"; the torment nexus continues humming along
it's because chatgpt didn't say enough slurs
The thing that isn't really mentioned here is that the largest OpenAI investor is Microsoft, and most of the money OpenAI spends is on Microsoft cloud services. So basically OpenAI is an internal Microsoft capital investment. They won't let it fail, but they might kill it if it loses money for long enough.
I think a solution could be to make it burn even more fossil fuels per query
I like how it mentions Nvidia and Microsoft as if this shit is an anomaly, and it's actually profitable for the other guys and won't collapse, we promise
Nvidia is in the sell-the-shovels business; they'll be fine even if the stock craters
Startups having 12 months of runway before insolvency is pretty normal. OpenAI's valuation and burn rate might be a problem since they'll need to do a bigger round, but I doubt it. They are basically the hottest startup on the planet right now. I think this article is interesting but ultimately doesn't mean anything.