this post was submitted on 07 Oct 2024
113 points (100.0% liked)

Technology

cross-posted from: https://lemmy.ml/post/20858435

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes present new evidence that those claims are overblown and unlikely ever to come to fruition. Their findings are published in Computational Brain & Behavior today.

[–] ContrarianTrail@lemm.ee 11 points 1 month ago* (last edited 1 month ago) (33 children)

AGI is inevitable unless:

  1. General intelligence turns out to be substrate dependent, i.e. what the brain does cannot be replicated in silicon. However, since both brains and computers are made of matter, and matter obeys the laws of physics, I see no reason to assume this.

  2. We destroy ourselves before we reach AGI.

Other than that, we will keep incrementally improving our technology, and it's only a matter of time until we get there. It may take 5 years, 50, or 500, but it seems pretty inevitable to me.

[–] zygo_histo_morpheus@programming.dev 7 points 1 month ago (3 children)

Another possibility is that humans just aren't smart enough to figure out AGI. While I'm sure that we will continue incrementally improving technology in some form, it's not at all self-evident that these improvements will eventually add up to AGI.

[–] ContrarianTrail@lemm.ee 1 points 1 month ago (2 children)

I get what you're saying, but to me that still just sounds like a timescale issue. I can't think of a scenario where we've improved something so much that there's absolutely nothing left to improve. With AI, we only need to reach the point where it has human-level cognitive capabilities; from there on, it can improve itself.

[–] BarryZuckerkorn@beehaw.org 2 points 1 month ago

> I can't think of a scenario where we've improved something so much that there's just absolutely nothing we could improve on further.

Progress itself isn't inevitable. Just because it's possible doesn't mean that we'll get there, because the history of human development shows that societies can and do stall, reverse, etc.

And even if all human societies tend toward progress, they could still hit dead ends and stop there. Conceptually, it's like climbing a mountain with the algorithm "if there is higher elevation near you, move toward it, and never step downward in elevation." Eventually that algorithm brings you to a peak. But that local peak might not be the highest point on the mountain, and while it may have been theoretically possible to reach the true peak from the starting point, the climber who insists on never stepping downward is now stuck. Reaching the true peak might require climbing downward for a time, then back up past elevations already visited, along paths not yet taken. One can imagine a society that refuses to step downward, breaking the inevitability of progress.
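The analogy above is plain greedy hill climbing, and the failure mode it describes can be seen in a few lines of code. A minimal sketch (the terrain values and function names here are hypothetical, purely for illustration):

```python
def hill_climb(elevation, start):
    """Greedy climber: move to the higher neighbor, never step downward."""
    pos = start
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(elevation)]
        best = max(neighbors, key=lambda p: elevation[p])
        if elevation[best] <= elevation[pos]:
            # No higher neighbor: stuck, whether or not this is the true peak.
            return pos
        pos = best

# Hypothetical terrain: a local peak at index 2, the true peak at index 6.
terrain = [0, 3, 5, 4, 2, 6, 9, 7]

print(hill_climb(terrain, start=0))  # 2 — stuck on the local peak
print(hill_climb(terrain, start=5))  # 6 — the true peak, by luck of the start
```

Where the climber ends up depends entirely on where it starts; the standard escapes are random restarts or temporarily accepting worse positions (as in simulated annealing), which is exactly the "climb back down" the analogy points at.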

This paper identifies a specific dead end and argues against hoping that computational training will yield general AI. In effect, it argues that even though we can still see plenty of places at higher elevation than where we're standing, we're headed toward a dead end and should climb back down. I suspect not many of the actual climbers will heed that advice.
