[–] bitfucker@programming.dev 3 points 7 months ago (1 children)

Oh, I see. If that's how we define it, then yes, of course. I mean, I've already seen upscalers and other "AI" technologies being used on consumer hardware. That is actually useful AI. IMHO, LLMs' usefulness isn't worth their resource consumption.

[–] CeeBee@lemmy.world 1 points 7 months ago (2 children)

IMHO, LLMs' usefulness isn't worth their resource consumption.

If you worked in that industry, you'd have a different opinion. Using LLMs to write poetry or make up stories is frivolous, but there are other applications that aren't.

Some companies are using them to find new and better drugs, cure diseases, invent new materials, etc.

Then there's the consideration that a number of companies are coming out with analogue-based AI accelerators that use a tiny fraction of the energy current systems need for the same workloads.

[–] ericjmorey@discuss.online 1 points 7 months ago (1 children)

Some companies are using them to find new and better drugs, cure diseases, invent new materials, etc.

I have seen claims like this refuted when the results of the work using LLMs are reviewed. For example.

[–] CeeBee@lemmy.world 0 points 7 months ago (1 children)

That's one company and one model, covering only materials discovery. There are other models and companies.

[–] ericjmorey@discuss.online 1 points 7 months ago (1 children)

Yes, it's an example of how there are claims being made that don't hold up.

[–] CeeBee@lemmy.world 0 points 7 months ago (1 children)

it's an example of how there are claims being made that don't hold up.

You can find that kind of example for literally every segment of science and society. Showing a single example out of many and then saying "see? The claims are false" is disingenuous at best.

https://www.artsci.utoronto.ca/news/researchers-build-breakthrough-ai-technology-probe-structure-proteins-tools-life

https://www.broadinstitute.org/news/researchers-use-ai-identify-new-class-antibiotic-candidates

[–] ericjmorey@discuss.online 1 points 7 months ago (1 children)

I think you're not seeing the nuance in my statements and instead are extrapolating inappropriately, perhaps even disingenuously.

[–] CeeBee@lemmy.world -1 points 7 months ago (1 children)

I'm not missing the nuance of what you said. It's just irrelevant for the discussion in this thread.

My comment that you initially replied to was talking about much more than just LLMs, but you singled out the one point about LLMs and offered a single article about DeepMind's results on materials discovery, which is a very specific case.

The discussion is about the relevance of AI as a tool for profit, stemming from the top-level comment implying an AI winter is coming.

But to go back to your point about the article you shared, I wonder if you've actually read it. The whole discussion is about what is effectively a proof-of-concept by Google, not a full effort to truly find new materials. They said that they "selected a random sample of the 380,000 proposed structures released by DeepMind and say that none of them meet a three-part test of whether the proposed material is 'credible,' 'useful,' and 'novel.'"

And in the actual analysis, which the article is about, they wrote: "we have yet to find any strikingly novel compounds in the GNoME and Stable Structure listings, although we anticipate that there must be some among the 384,870 compositions. We also note that, while many of the new compositions are trivial adaptations of known materials, the computational approach delivers credible overall compositions, which gives us confidence that the underlying approach is sound."

Ultimately, everyone involved in analysing the results agreed that the concept is sound and will likely lead to breakthroughs in the future, but this specific result (and a similar one from another group) has not produced any significant or noteworthy new materials.

[–] ericjmorey@discuss.online 0 points 7 months ago (1 children)

I'm not reading that because you clearly would rather argue than have a conversation. Enjoy the rest of your day.

[–] CeeBee@lemmy.world 0 points 7 months ago

Sure, just like you didn't read the article you linked to.

I did read it btw, since you shared it.

[–] AustralianSimon@lemmy.world 1 points 7 months ago (1 children)

I work in the field for a company with 40k staff and over 6 million customers.

We have about 100 dedicated data science professionals, and we run a few hundred ML models versus a single LLM, which we use for our chatbots.

LLMs are overhyped and not delivering as much as people claim. Most businesses built around LLMs will not exist in 2-5 years because Amazon, Google, and Microsoft will offer it all cheaper or for free.

They are great at generating content, but honestly most of that content is crap because it's AI regurgitating something it's been trained on. They are our next-gen spam, for the most part.

[–] CeeBee@lemmy.world 1 points 7 months ago* (last edited 7 months ago) (1 children)

LLMs are overhyped and not delivering as much as people claim

I absolutely agree it's overhyped, but that doesn't mean it's useless. And these systems are getting better every day. The money isn't going to be in these massive models; it's going to be in smaller, domain-specific models. MoE models show better results than models that are 10x larger. It's still 100% early days.
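
As a rough illustration of why MoE can be cheaper than a dense model with the same total parameter count (this is a hypothetical toy, not any specific model's implementation), here's a minimal top-k routing sketch in NumPy; the names and sizes (`num_experts`, `top_k`, etc.) are made up for the example:

```python
# Toy sketch of top-k mixture-of-experts routing (illustrative only).
# Each token is routed to only `top_k` of `num_experts` expert MLPs, so
# compute per token stays small even though total parameters are large.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden, num_experts, top_k = 64, 256, 8, 2

# One tiny two-layer MLP per expert, plus a gating matrix.
W1 = rng.standard_normal((num_experts, d_model, d_hidden)) * 0.02
W2 = rng.standard_normal((num_experts, d_hidden, d_model)) * 0.02
W_gate = rng.standard_normal((d_model, num_experts)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (num_tokens, d_model) -> (num_tokens, d_model)."""
    scores = x @ W_gate                              # (tokens, experts)
    probs = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)
    chosen = np.argsort(-probs, axis=-1)[:, :top_k]  # top-k experts per token
    out = np.zeros_like(x)
    for t, token in enumerate(x):
        for e in chosen[t]:
            h = np.maximum(token @ W1[e], 0.0)       # ReLU MLP for expert e
            out[t] += probs[t, e] * (h @ W2[e])      # weight by gate probability
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_layer(tokens).shape)  # (4, 64): only 2 of 8 experts ran per token
```

The point is that each token only pays for `top_k` experts' worth of compute, even though the layer holds `num_experts` experts' worth of parameters.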

Most businesses built around LLMs will not exist in 2-5 years because Amazon, Google, and Microsoft will offer it all cheaper or for free.

I somewhat agree with this, but since the LLM hype train started just over a year ago, smaller open-source fine-tuned models have kept ahead of the big players, which are too big to shift quickly. Google even mentioned in an internal memo that the open-source community had accomplished in a few months what they thought was literally impossible and could never happen (pruning and quantizing models and fine-tuning them to get results very close to much larger models).
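
To make the quantization part concrete, here's a toy sketch of symmetric int8 weight quantization (purely illustrative, not how any particular library implements it), which is the basic trick behind squeezing larger models onto consumer hardware:

```python
# Toy sketch of symmetric int8 weight quantization (illustrative only).
# Storing int8 instead of float32 cuts weight memory roughly 4x,
# at the cost of a small rounding error.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 1024)).astype(np.float32)

scale = np.abs(weights).max() / 127.0            # map the widest weight to +/-127
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequant = q.astype(np.float32) * scale           # what the model sees at runtime

print("bytes fp32:", weights.nbytes, "bytes int8:", q.nbytes)
print("mean abs rounding error:", np.abs(weights - dequant).mean())
```

Pruning and parameter-efficient fine-tuning push in the same direction: less memory and compute for results close to the full-size model.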

And there are always more companies that spring up around a new technology than the number that survive after a few years. That's been the case for decades now.

They are great at generating content, but honestly most of that content is crap because it's AI regurgitating something it's been trained on.

Well, this is actually demonstrably false. There are many thorough examples of LLMs generating novel data, and even papers written on the subject. But beyond generating new and novel data, the uses for LLMs go further than that. They are able to discern patterns, perform analysis, summarize data, solve problems, etc., all of which have various applications.

But ultimately, how is "regurgitating something it's been trained on" any different from how we learn? The reality is that we ourselves can only generate things based on what we've learned. The difference is that we learn about basically everything, and we have a constant stream of input from all our senses as well as ideas and thoughts shared with other people.

Edit: a great example of how we can't "generate" something outside of what we've learned is that we are 100% incapable of visualizing a 4-dimensional object. And I mean visualize in your mind's eye, like you can with any other kind of shape or object. You can close your eyes right now and see a cube or a sphere, but you are incapable of visualizing a hypercube or a hypersphere, even though we can describe them mathematically and even render them with software by projecting them onto a 3D virtual environment (like how a photo is a 2D representation of a 3D environment). See the sketch after this edit.

/End-Edit
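
For what it's worth, here's a tiny, purely illustrative sketch of the projection idea from the edit above: computing a 3D "shadow" of a tesseract's 16 vertices, the same way software renders 4D objects even though we can't picture them directly. The function name and camera distance are just made up for the example:

```python
# Illustrative sketch: project the 16 vertices of a tesseract (4D hypercube)
# down to 3D, the same way a photo projects 3D onto 2D.
import itertools
import numpy as np

# All 16 vertices of the unit tesseract: every combination of +/-1 in 4D.
vertices_4d = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))

def project_to_3d(points_4d: np.ndarray, distance: float = 3.0) -> np.ndarray:
    """Simple perspective projection along the w axis: scale x, y, z by how
    far each point is from a 'camera' placed at w = distance."""
    w = points_4d[:, 3]
    factor = distance / (distance - w)       # closer in w -> drawn larger
    return points_4d[:, :3] * factor[:, None]

vertices_3d = project_to_3d(vertices_4d)
print(vertices_3d.shape)                     # (16, 3): a 3D "shadow" of the 4D cube
```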

It's not an exaggeration to say that artificial neural networks are trained in a way analogous to how biological neural networks (aka brains) learn, but there's obviously a huge difference in the inner workings.

They are our next-gen spam, for the most part.

Maybe the last-gen models, but definitely not the current SOTA models, and the models coming in the next few years will only get better. Ten years from now is going to look wild.