[–] darkphotonstudio@beehaw.org 19 points 5 months ago* (last edited 5 months ago) (3 children)

I believe much of our paranoia concerning AI stems from our fear that something will come along and treat us like we treat all the other life on this planet. Which is bitterly ironic, considering our propensity for slaughtering each other on a massive scale. The only danger to humanity is humans. If humanity is doomed, it will be our own stupid fault, not AI's.

[–] Kichae@lemmy.ca 15 points 5 months ago (2 children)

I think much of it comes from "futurologists" spending too much time smelling each other's farts. These AI guys think so very much of themselves.

[–] darkphotonstudio@beehaw.org 2 points 5 months ago* (last edited 5 months ago) (1 children)

Agreed, partially. However, the "techbros" in charge are, for the most part, not the researchers. There are futurologists who are real scientists and researchers. Dismissing them smacks of the anti-science knuckleheads who ignored warnings to wear masks and get vaccinated during the pandemic. Not everyone interested in the future is a techbro.

[–] Kichae@lemmy.ca 1 points 5 months ago

"Futurologist" is a self-appointed honorific that people who fancy themselves "deep thinkers" while thinking of nothing more deeply than how deep they are. It's like declaring oneself an "intellectual".

[–] verdare@beehaw.org 4 points 5 months ago (1 children)

The only danger to humans is humans.

I’m sorry, but this is a really dumb take that borders on climate change denial logic. A sufficiently large comet is an existential threat to humanity. You seem to have this optimistic view that humanity is invincible against any threat but itself, and I do not think that belief is justified.

People are right to be very skeptical about OpenAI and “techbros.” But I fear this skepticism has turned into outright denial of the genuine risks posed by AGI.

I find myself exhausted by this binary partitioning of discourse surrounding AI. Apparently you have to either be a cult member who worships the coming god of the singularity, or think that AI is either impossible or incapable of posing a serious threat.

[–] darkphotonstudio@beehaw.org 3 points 5 months ago

You seem to have this optimistic view that humanity is invincible against any threat but itself

I didn't say that. You're making assumptions. However, I don't take AGI as a serious risk, not directly anyway. AGI is a big question mark at this time and hardly comparable to a giant comet or a pandemic, for which we have experience or solid scientific evidence. Could it be a threat? Yeah. Do I personally think so? No. Our reaction to it and exploitation of it will likely do far more harm than any direct action by an AGI.

[–] flux@lemmyis.fun 2 points 5 months ago (1 children)
[–] darkphotonstudio@beehaw.org 1 points 5 months ago

True. But we are still talking about what is essentially an alien mind. Even if it can do a good impression of human intelligence, that doesn't mean it is a human mind. It won't have billions of years of evolution and thousands of years of civilization and development behind it.