this post was submitted on 07 Oct 2024
113 points (100.0% liked)

Technology


cross-posted from: https://lemmy.ml/post/20858435

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.

[–] DdCno1@beehaw.org 5 points 2 months ago* (last edited 2 months ago)

It was also based on the assumption that the rapid progress in aerospace technology from the 1920s to the 1960s would continue at the same pace. What actually happened was that barriers emerged that nobody could circumvent: engineering hardware to withstand incredibly abrasive Moon dust (or really doing anything productive on that lifeless rock), dealing with the endless pitfalls of a long Mars journey, and bringing down the cost of launch vehicles so that grand projects like giant space stations would even be remotely possible (von Braun was already thinking about huge space stations as far back as 1945). Many of these issues couldn't simply be solved by throwing more money at them, which is important. Decision-makers in both Washington and Moscow were smart enough to realize this in the 1970s, for the most part at least (the Space Shuttle and its Soviet clone, each a gigantic waste of money, are major counter-examples from this era).

The point I'm making here is that everyone assumed linear progress in that field, just as people are currently making multi-billion-dollar bets on linear progress in computer technology in general and AI in particular. At least this time around, with the benefit of hindsight from past examples, there's a reasonable amount of doubt.