Evidence that LLMs are reaching a point of diminishing returns - and what that might mean
(garymarcus.substack.com)
But that's not how the industry defines AI winter. You're thinking of hype in the context of public perception, but that's not what matters.
Previous AI interest was about huge investments into research in the hope of a return on that investment. But since those returns didn't pan out, the interest (from investors) dried up and progress drastically slowed down.
GPUs are what made the difference. Finally AI research could produce meaningful results and that's where we're at now.
Previously AI research could not exist without external financial support. Today AI is fully self-sustaining, meaning companies using AI are making a profit while also directing some of that money back into research and development.
And we're not talking chump change, we're talking hundreds of billions. Nvidia has effectively pivoted from a gaming hardware company to the number one AI accelerator manufacturer in the world.
There are also a number of companies that have started developing and making analogue AI accelerators. In many cases they can do the same workload for a fraction of the energy cost of a digital one (like the H100).
There's so much happening every day and it keeps getting faster and faster. It is NOT slowing down anytime soon, and at this point it will never stop.
How can I verify this?
Look at the number of companies offering AI-based video surveillance. That sector alone is worth tens of billions each year and still growing.
Just about every large company is using AI in some way. Google and Microsoft are using AI in their backend systems for things like spam filtering.
You're thinking of AI as "ChatGPT", but the market for AI was established well before ChatGPT became popular. It's just the "new" guy on the scene that the news cycle is going crazy over.
I'm interested in LLMs and how they are being used, because that's what large sums of money are being thrown at with very uncertain future returns.
I have no idea how LLMs are being used by private companies to generate profit.
What I do know is that other forms of AI are employed in cybersecurity, fintech, video surveillance, spam filtering, etc.
The AI video surveillance market is a proven, mature segment worth tens of billions every year, and it's still growing.
I find it interesting that you keep hammering on the point about LLMs while constantly ignoring my point that AI is way bigger than just LLMs, and that AI is making companies billions of dollars every year.
Oh, I see. If that's how we define it, then yes, of course. I've already seen upscalers and other "AI" technologies being used on consumer hardware. That is actually useful AI. The usefulness of LLMs compared to their resource consumption is, IMHO, not worth it.
If you worked in that industry you'd have a different opinion. Using LLMs to write poetry or make stories is frivolous, but there are other applications that aren't.
Some companies are using them to find new and better drugs, to fight diseases, to invent new materials, etc.
Then there's the consideration that a number of companies are coming out with analogue-based AI accelerators that use a tiny fraction of the energy current systems need for the same workloads.
I have seen claims of this sort refuted when the results of the work done with LLMs are reviewed. For example:
That's one company and one model referring only to material discovery. There are other models and companies.
Yes, it's an example of how there are claims being made that don't hold up.
You can find that kind of example in literally every segment of science and society. Showing a single example out of many and then saying "see? The claims are false" is disingenuous at best.
https://www.artsci.utoronto.ca/news/researchers-build-breakthrough-ai-technology-probe-structure-proteins-tools-life
https://www.broadinstitute.org/news/researchers-use-ai-identify-new-class-antibiotic-candidates
I think you're not seeing the nuance in my statements and instead are extrapolating inappropriately, perhaps even disingenuously.
I'm not missing the nuance of what you said. It's just irrelevant for the discussion in this thread.
My comment that you initially replied to was talking about much more than just LLMs, but you singled out the one point about LLMs and offered a single article about DeepMind's results on material discovery. A very specific case.
It's about the relevance of AI as a tool for profit, stemming from the top-level comment implying an AI winter is coming.
But to go back to your point about the article you shared, I wonder if you've actually read it. The whole discussion is about what is effectively a proof-of-concept by Google, and not a full effort to truly find new materials. They said that they "selected a random sample of the 380,000 proposed structures released by DeepMind and say that none of them meet a three-part test of whether the proposed material is 'credible,' 'useful,' and 'novel.'"
And in the actual analysis, which the article is about, they wrote: "we have yet to find any strikingly novel compounds in the GNoME and Stable Structure listings, although we anticipate that there must be some among the 384,870 compositions. We also note that, while many of the new compositions are trivial adaptations of known materials, the computational approach delivers credible overall compositions, which gives us confidence that the underlying approach is sound."
Ultimately, everyone involved in analysing the results agreed the concept is sound and will likely lead to breakthroughs in the future, but this specific result (and a similar one done by another group) has not produced any significant and noteworthy new materials.
I'm not reading that because you clearly would rather argue than have a conversation. Enjoy the rest of your day.
Sure, just like you didn't read the article you linked to.
I did read it btw, since you shared it.
I work in the field for a company with 40k staff and over 6 million customers.
We have about 100 dedicated data science professionals and we have 1 LLM we use for our chatbots vs a few hundred ML models running.
LLMs are overhyped and not delivering as much as people claim. Most businesses doing LLMs will not exist in 2-5 years, because Amazon, Google and Microsoft will offer it all cheaper or for free.
They are great at generating content, but honestly most of that content is crap because it's AI regurgitating something it's been trained on. They are our next-gen spam for the most part.
I absolutely agree it's overhyped, but that doesn't mean useless. And these systems are getting better every day. The money isn't going to be in these massive models; it's going to be in smaller, domain-specific models. MoE models show better results than models 10x their size. It's still 100% early days.
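To illustrate the MoE (mixture-of-experts) idea: a small router picks a couple of "experts" per input, so most of the model's parameters sit idle on any given token. This is just a toy numpy sketch of that routing logic; all the dimensions and names here are made up for illustration, not taken from any real model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy MoE layer: 8 experts, but only the top 2 run per token.
n_experts, d_model, top_k = 8, 16, 2
gate_w = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x):
    scores = softmax(x @ gate_w)          # router score for each expert
    chosen = np.argsort(scores)[-top_k:]  # indices of the top-k experts
    weights = scores[chosen] / scores[chosen].sum()
    # Only the chosen experts actually compute; the rest are skipped.
    return sum(w * (x @ experts[i]) for i, w in zip(chosen, weights))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)  # (16,) at roughly top_k/n_experts the compute
```

That skipping is the whole trick: total parameter count can be huge while per-token compute stays small, which is why an MoE can punch above its size.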
I somewhat agree with this, but since the LLM hype train started just over a year ago, smaller fine-tuned open-source models have been keeping ahead of the big players, who are too big to shift quickly. Google even mentioned in an internal memo that the open-source community had accomplished in a few months what they thought was literally impossible and could never happen (pruning and quantizing models and fine-tuning them to get results very close to larger models).
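For anyone unfamiliar, quantizing just means storing weights at lower precision so the model takes far less memory. A rough numpy sketch of the symmetric 8-bit idea, purely my own minimal illustration rather than the exact scheme any of those community projects use:

```python
import numpy as np

def quantize_int8(w):
    # Symmetric quantization: int8 weights plus a single float scale.
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
print(np.abs(w - dequantize(q, scale)).max())  # tiny error, 4x less memory than fp32
```

Losing a little precision per weight costs surprisingly little accuracy, which is why quantized fine-tunes can sit so close to the full-precision originals.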
And there are always more companies that spring up around a new tech than there are that survive a few years later. That's been the case for decades now.
Well, this is actually demonstrably false. There are many thorough examples of how LLMs can generate novel data; there are even papers written on the subject. But beyond generating new and novel data, the uses for LLMs go further than that. They are able to discern patterns, perform analysis, summarize data, problem-solve, etc. All of which have various applications.
But ultimately, how is "regurgitating something it's been trained on" any different from how we learn? The reality is that we ourselves can only generate things based on things we've learned. The difference is that we learn basically about everything. And we have a constant stream of input from all our senses as well as ideas/thoughts shared with other people.
Edit: a great example of how we can't "generate" something outside of what we've learned is that we are 100% incapable of visualizing a 4-dimensional object. And I mean visualize in your mind's eye, like you can with any other kind of shape or object. You can close your eyes right now and see a cube or sphere, but you are incapable of visualizing a hypercube or a hypersphere, even though we can describe them mathematically and even render them with software by projecting them onto a 3D virtual environment (like how a photo is a 2D representation of a 3D environment).
/End-Edit
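Funny enough, the projection I mentioned is only a few lines of code. Here's a minimal numpy sketch that perspective-projects a tesseract's 16 vertices from 4D down to 3D; the divide-by-w formula is just one common illustrative choice, not the only way to do it:

```python
import itertools
import numpy as np

# The 16 vertices of a unit hypercube: every combination of 0/1 in 4 coords.
verts4d = np.array(list(itertools.product([0.0, 1.0], repeat=4)))

def project_to_3d(v, distance=3.0):
    # Perspective divide along the w axis: larger w appears "closer".
    factor = distance / (distance - v[:, 3])
    return v[:, :3] * factor[:, None]

verts3d = project_to_3d(verts4d)
print(verts3d.shape)  # (16, 3): ready to hand to any 3D renderer
```

We can compute the shape exactly, yet the output is still only a 3D shadow of it, which is the point: we can describe what we can't visualize.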
It's not an exaggeration that neural networks are trained the same way biological neural networks (aka brains) are trained. But there's obviously a huge difference in the inner workings.
Maybe the last gen models, definitely not the current gen SOTA models, and the models coming in the next few years will only get better. 10 years from now is going to look wild.
I also worked in the field for a decade, up until recently. And I use LLMs for a few things professionally, particularly code generation. It can't write "good and clean" code, but it does help get the ball rolling by writing boilerplate stuff, and it helps solve issues that aren't immediately clear.
I actually run a number of models locally also.
What a condescending thing to say. It has nothing to do with being excited or not. The broader issue is that people are approaching the topic from a "it'll replace programmers/writers/accountants/lawyers, etc" standpoint. And I bet that's what all the suits at various companies expect.
Whereas the true usefulness of LLMs is as a supplementary tool that helps existing jobs be done more efficiently. It's no different than spell check, autocomplete, code linting, and so on. It's just more capable than those tools.
This statement proves my point. Everyone thinks LLMs will "do the job" when they're just a tool to HELP with doing the job.
Said by someone who's never written a line of code.
Is autocorrect always right? No, but we all still use it.
And I never said "poorly generated"; I deliberately used "good and clean". And that was in the context of writing larger segments of code on its own. I did clarify afterwards that it's good for writing things like boilerplate code. So no, I never said "poorly generated boilerplate". You were just putting words in my mouth.
Boilerplate code that's workable can put you well ahead of where you'd be if you wrote it all yourself. The beauty of boilerplate stuff is that there aren't really a whole lot of different ways to do it. Sure, there are fancier ways, but generally anything but easy-to-read code is frowned upon. Fortunately, LLMs are actually great at the boilerplate stuff.
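To make "boilerplate" concrete, here's the flavor of thing I mean. This argparse scaffolding is a hypothetical example I wrote for illustration, not actual assistant output:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Repetitive setup with essentially one right shape: ideal assistant fodder.
    parser = argparse.ArgumentParser(description="Example CLI")
    parser.add_argument("input", help="path to the input file")
    parser.add_argument("-o", "--output", default="out.txt",
                        help="where to write results")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="enable debug logging")
    return parser

# Parse a sample argv so the sketch runs standalone.
args = build_parser().parse_args(["data.csv", "-v"])
print(args.input, args.output, args.verbose)  # data.csv out.txt True
```

Nothing clever there, and that's exactly the point: it's tedious to type and trivial to review.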
Just about every programmer that's tried GitHub Copilot agrees that it's not taking over programming jobs anytime soon, but it does a fine job as a coding assistant tool.
I know of at least three separate coding/tech related podcasts with multiple hosts that have come to the same conclusion in the past 6 months or so.
If you're interested, the ones I'm thinking of are Coder Radio, Linux After Dark, Linux Downtime, and 2.5 Admins.
Your reply also demonstrates the ridiculous mindset that people have about this stuff. There's this mentality that if it's not literally a self aware AI then it's spam and worthless. Ya, it does a fairly basic and mundane thing in the real world. But that mundane thing has measurable utility that makes certain workloads easier or more efficient.
Sorry it didn't blow your mind.
Ditto
AI is not self-sustaining yet. Nvidia is doing well selling shovels, but most AI companies are not profitable. Stock prices and investor valuations are effectively bets on the future, not measurements of current success.
From this Forbes list of top AI companies, all but one make their money from something besides AI directly. Several of them rode the Web3 hype wave too, that didn't make them Web3 companies.
We're still in the early days of AI adoption and most reports of AI-driven profit increases should be taken with a large grain of salt. Some parts of AI are going to be useful, but that doesn't mean another winter won't come when the bubble bursts.
AI is absolutely self-sustaining. Just because a company doesn't "only do AI" doesn't matter. I don't even know what that would really look like. AI is just a tool. But it's currently an extremely widely used tool. You don't even see 99% of the applications of it.
How do I know? I worked in that industry for a decade. Just about every large company on the planet is using some form of AI in a way that increases profitability. There's enough return on investment that it will continue to grow.
This is like saying only computer manufacturers make money from computers directly, whereas everyone and their grandmas use computers. You're literally looking at the news cycle about ChatGPT and making broad conclusions about an AI winter based solely on that.
Industries like fintech and cybersecurity made permanent shifts into AI years ago, and there's no going back. The benefits of AI in these sectors cannot be matched by traditional methods.
Then, like I said in my previous comment, there are industries like security and video surveillance where object recognition, facial recognition, ALPR, video analytics, etc., have been going strong for over a decade, and they're still growing and expanding. We might reach a point where the advancements slow down, but that's after the tech becomes established and commonplace.
There will be no AI winter going forward. It's done.
You're using "machine learning" interchangeably with "AI." We've been doing ML for decades, but it's not what most people would consider AI and it's definitely not what I'm referring to when I say "AI winter."
"Generative AI" is the more precise term for what most people are thinking of when they say "AI" today and it's what is driving investments right now. It's still very unclear what the actual value of this bubble is. There are tons of promises and a few clear use-cases, but not much proof on the ground of it being as wildly profitable as the industry is saying yet.
No, I'm not.
Machine learning, deep learning, generative AI, object recognition, etc, are all subsets or forms of AI.
It doesn't matter what people are "thinking of", if someone invokes the term "AI winter" then they better be using the right terminology, or else get out of the conversation.
There are loads and loads of proven use cases, even for LLMs. It doesn't matter if the average person thinks that AI refers only to things like ChatGPT, the reality is that there is no AI winter coming and AI has been generating revenue (or helping to generate revenue) for a lot of companies for years now.