cross-posted from: https://lemmy.ml/post/20858435

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.

[–] ChairmanMeow@programming.dev 21 points 2 months ago (3 children)

The actual paper is an interesting read. They present an actual computational proof: even if you have essentially infinite memory, a computer that's a billion times faster than what we have now, perfect training data that you can sample without bias, and you're only aiming for an AGI that performs slightly better than chance, it's still completely infeasible to do within the next few millennia. Ergo, it's definitely not "right around the corner". We're lightyears off still.
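To make the "billion times faster still isn't enough" point concrete, here's a toy back-of-the-envelope sketch. The machine specs and problem sizes below are numbers I picked purely for illustration, not anything taken from the paper:

```python
# Toy illustration only: made-up machine specs and problem sizes, not the paper's bound.
SECONDS_PER_YEAR = 3.154e7
current_ops_per_sec = 1e18                       # ballpark for a top supercomputer today
future_ops_per_sec = current_ops_per_sec * 1e9   # "a billion times faster"

def years_to_brute_force(n_bits: int, ops_per_sec: float) -> float:
    """Years to sweep a search space of 2**n_bits candidates at one candidate per op."""
    return 2 ** n_bits / ops_per_sec / SECONDS_PER_YEAR

for n in (100, 150, 200):
    print(f"n={n}: ~{years_to_brute_force(n, future_ops_per_sec):.1e} years")
# Prints roughly: n=100 -> 4.0e-05 years (minutes), n=150 -> 4.5e+10 years, n=200 -> 5.1e+25 years.
# A constant billion-fold speedup only shifts where the wall sits; it doesn't remove it.
```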

They prove this by showing that if you could train an AI in a tractable amount of time, you would have proven P=NP. And thus, training an AI is NP-hard. Given the minimum amount of data that needs to be learned to do better than chance, this results in a ridiculously long training time, well beyond the realm of what's even remotely feasible. And that's before you even have to deal with all the constraints that exist in the real world.
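Schematically, the reduction argument has roughly this shape; every name here is a hypothetical placeholder of mine, not the paper's actual construction:

```python
# Shape of the reduction only; all helpers are hypothetical placeholders, not the paper's.

def encode_as_training_distribution(instance):
    """Polynomial-time step: turn an instance of a known NP-hard problem into training data."""
    raise NotImplementedError("stand-in for the paper's encoding step")

def tractable_trainer(distribution):
    """The hypothetical polynomial-time 'train an AI' routine, assumed to exist for contradiction."""
    raise NotImplementedError("exactly what the paper argues cannot exist (unless P = NP)")

def decode_answer(model) -> bool:
    """Polynomial-time step: read the yes/no answer off the trained model's behaviour."""
    raise NotImplementedError("stand-in for the paper's decoding step")

def solve_np_hard_instance(instance) -> bool:
    # If tractable_trainer really ran in polynomial time, this chain would decide an
    # NP-hard problem in polynomial time, i.e. P = NP. Contrapositive: no such trainer.
    return decode_answer(tractable_trainer(encode_as_training_distribution(instance)))
```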

We perhaps need some breakthrough in quantum computing to get closer. That is not to say that AI won't improve at all; it'll get a bit better. But there is a computationally proven ceiling here, and breaking through it is exceptionally hard.

It also raises (imo) the question of whether we can truly consider humans to have general intelligence. Perhaps we're not as smart as we think we are either.

[–] BarryZuckerkorn@beehaw.org 10 points 2 months ago (1 children)

The paper's scope is to prove that AI cannot feasibly be trained, using training data and learning algorithms, into something that approximates human cognition.

The limits of that finding are important here: it's not that creating an AGI is impossible, it's just that however an AGI eventually gets made, it will have to be made some other way, not by training alone.

Our squishy brains (or perhaps more accurately, our nervous systems contained within a biochemical organism influenced by a microbiome) arose out of evolutionary selection algorithms, so general intelligence is clearly possible.

So it may still be the case that AGI via computation alone is possible, and that creating such an AGI will not require solving an NP-hard problem. But this paper closes off one pathway that many believe is viable (assuming the proof is actually correct; I'm definitely not the person to make that evaluation). That doesn't mean they've proven there's no pathway at all.

[–] ChairmanMeow@programming.dev 4 points 2 months ago (1 children)

Our squishy brains (or perhaps more accurately, our nervous systems contained within a biochemical organism influenced by a microbiome) arose out of evolutionary selection algorithms, so general intelligence is clearly possible.

That's assuming that we are a general intelligence. I'm actually unsure if that's even true.

That doesn't mean they've proven there's no pathway at all.

True, they've only calculated it'd take perhaps millions of years. Which might be accurate; I'm not sure what kind of computer global evolution across trillions of organisms over millions of years adds up to. And yes, perhaps some breakthrough happens, but it's still very unlikely and definitely not "right around the corner" as the AI bros claim (and that near-future claim is what the paper set out to disprove).

[–] BarryZuckerkorn@beehaw.org 1 points 2 months ago (1 children)

That's assuming that we are a general intelligence.

But it's easy to just define general intelligence as something approximating what humans already do. The paper itself only analyzed whether it was feasible to have a computational system that produces outputs approximately similar to humans, whatever that is.

True, they've only calculated it'd take perhaps millions of years.

No, you're missing my point, at least how I read the paper. They're saying that the method of using training data to computationally develop a neural network is a conceptual dead end. Throwing more resources at the NP-hard problem isn't going to solve it.

What they didn't prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It's just that this particular method of inferential training, what they call "AI-by-Learning," is an NP-hard computational problem.

[–] ChairmanMeow@programming.dev 2 points 2 months ago (1 children)

What they didn't prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It's just that this particular method of inferential training, what they call "AI-by-Learning," is an NP-hard computational problem.

This is exactly what they've proven. They found that if you can solve AI-by-Learning in polynomial time, you can also solve random-vs-chance (or whatever it was called) in a tractable time, which is a known NP-Hard problem. Ergo, the current learning techniques which are tractable will never result in AGI, and any technique that could must necessarily be considerably slower (otherwise you can use the exact same proof presented in the paper again).

They merely mentioned these methods to show that it doesn't matter which method you pick. The explicit point is to show that it doesn't matter if you use LLMs or RNNs or whatever; it will never be able to turn into a true AGI. It could be a good AI of course, but that G is pretty important here.

But it's easy to just define general intelligence as something approximating what humans already do.

No, General Intelligence has a set definition that the paper's authors stick with. It's not as simple as "it's a human-like intelligence" or something that merely approximates it.

[–] BarryZuckerkorn@beehaw.org 1 points 2 months ago

This isn't my field, and some undergraduate philosophy classes I took more than 20 years ago might not leave me well equipped to understand this paper. So I'll admit I'm probably out of my element, but I want to understand.

That being said, I'm not reading this paper with your interpretation.

This is exactly what they've proven. They found that if you can solve AI-by-Learning in polynomial time, you can also solve random-vs-chance (or whatever it was called) in a tractable time, which is a known NP-Hard problem. Ergo, the current learning techniques which are tractable will never result in AGI, and any technique that could must necessarily be considerably slower (otherwise you can use the exact same proof presented in the paper again).

But they've defined the AI-by-Learning problem in a specific way (here's the informal definition):

Given: A way of sampling from a distribution D.

Task: Find an algorithm A (i.e., ‘an AI’) that, when run for different possible situations as input, outputs behaviours that are human-like (i.e., approximately like D for some meaning of ‘approximate’).

I read this as defining the problem in terms of needing to sample from D, that is, to "learn."
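Put in code terms (my own paraphrase with my own names, not the paper's formalisation), I read the problem statement roughly as:

```python
# My paraphrase of the informal definition; the names and types here are mine, not the paper's.
from typing import Callable, Protocol

Situation = str   # some encoding of a situation
Behaviour = str   # some encoding of a behaviour

# "Given": a way of sampling from the distribution D of human behaviour per situation.
SampleD = Callable[[Situation], Behaviour]

class AI(Protocol):
    def __call__(self, situation: Situation) -> Behaviour: ...

def ai_by_learning(sample_d: SampleD) -> AI:
    """The task: given only the sampler for D, output an algorithm A whose behaviour
    is 'approximately like D'. The sampler being the sole input is the restriction
    I'm pointing at: the only route to human-likeness here is learning from samples."""
    raise NotImplementedError  # the step the paper argues is intractable in general
```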

The explicit point is to show that it doesn't matter if you use LLMs or RNNs or whatever; it will never be able to turn into a true AGI

But the caveat I'm reading, implicit in the paper's definition of the AI-by-Learning problem, is that it covers a particular class of methods: learning from a perfect sample of intelligent outputs in order to itself mimic intelligent outputs.

General Intelligence has a set definition that the paper's authors stick with. It's not as simple as "it's a human-like intelligence" or something that merely approximates it.

The paper defines it:

Specifically, in our formalisation of AI-by-Learning, we will make the simplifying assumption that there is a finite set of possible behaviours and that for each situation s there is a fixed number of behaviours Bs that humans may display in situation s.

It's just defining an approximation of human behavior and saying that achieving that formalized approximation via inference from training data is intractable. So I'm still seeing a definition of human-like behavior, which would by definition be satisfied by human behavior. That's the circular reasoning here, and whether human behavior fits another definition of AGI doesn't actually affect the proof. They're proving that learning to be human-like is intractable, not that achieving AGI is itself intractable.

I think it's an important distinction, if I'm reading it correctly. But if I'm not, I'm also happy to be proven wrong.

[–] zygo_histo_morpheus@programming.dev 9 points 2 months ago (1 children)

A breakthrough in quantum computing wouldn't necessarily help. QC isn't faster than classical computing in the general case; it just happens to be for a few specific algorithms (e.g. factoring numbers). It's not impossible that a QC breakthrough might speed up training AI models (although to my knowledge we don't have any reason to believe it would), and maybe that's what you're referring to, but there's a widespread misconception that quantum computers are essentially non-deterministic Turing machines that "evaluate all possible states at the same time", which isn't the case.
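As a rough aside (my own numbers, and not something the paper covers): even if you grant the best known general-purpose quantum search speedup, which is only quadratic, the square root of an exponential is still exponential:

```python
# Rough aside with my own numbers: even a quadratic (Grover-style) search speedup
# leaves the cost exponential, so QC alone doesn't make NP-hard problems tractable.
import math

SECONDS_PER_YEAR = 3.154e7
ops_per_sec = 1e27   # a deliberately absurd hypothetical machine

for n in (200, 400):
    classical_years = 2 ** n / ops_per_sec / SECONDS_PER_YEAR
    quadratic_years = math.isqrt(2 ** n) / ops_per_sec / SECONDS_PER_YEAR  # ~2**(n/2) steps
    print(f"n={n}: classical ~{classical_years:.1e} yr, with quadratic speedup ~{quadratic_years:.1e} yr")
# Even with the quadratic speedup, n=400 still comes out around 5e+25 years.
```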

[–] ChairmanMeow@programming.dev 8 points 2 months ago (2 children)

I was more hinting that through conventional computational means we're just not getting there, and that some completely hypothetical breakthrough somewhere is required. QC is the best guess I have for where that might come from, but it's still far-fetched.

But yes, you're absolutely right that QC in general isn't a magic bullet here.

[–] zygo_histo_morpheus@programming.dev 6 points 2 months ago (1 children)

Yeah, I thought that might be the case! It's just something a lot of people have misconceptions about, so I have a bit of a knee-jerk reaction to it.

[–] ChairmanMeow@programming.dev 5 points 2 months ago

Haha it's good that you do though, because now there's a helpful comment providing more context :)

[–] Umbrias@beehaw.org 3 points 2 months ago (1 children)

The limitation is specifically that the primary machine learning technique, the statistical imitation that all the chatbots at places claiming to pursue AGI use, is NP-hard.

[–] ChairmanMeow@programming.dev 2 points 2 months ago (1 children)

Not just that; they've proven it's not possible using any tractable algorithm. If it were, you'd run into a contradiction. Their example covers basically any machine learning algorithm we know, but the proof generalizes.

[–] Umbrias@beehaw.org 1 points 2 months ago

Via statistical imitation. Other methods, such as solving and implementing from first principles analytically, have not been shown to be NP-hard. The difference is important, but the end result is still no AGI-GPT in the foreseeable (and unforeseeable) future.

[–] Azzk1kr@feddit.nl 1 points 2 months ago (1 children)

Nitpick: a lightyear is a measure of distance, not of time :)

[–] ChairmanMeow@programming.dev 4 points 2 months ago

Yes, hence we're not "right around the corner"; it's a figure of speech that uses spatial distance to metaphorically show that we're very far away from something.