this post was submitted on 10 Aug 2023
59 points (95.4% liked)

Science Fiction

For no reason whatsoever here's a proposal for a scale for the threat to humanity posed by machine intelligence.

1 | SPUTNIK - No threat whatsoever, but inspires imagination and development of potential future threats.

2 | Y2K - A basis for a possible threat that's blown way out of proportion.

3 | HAL 9000 - System level threat. A few astronauts may die, but the problem is inherently contained in a single machine system.

4 | ASIMOV VIOLATION - Groups of machines demonstrate hostility and/or capability of harming human beings. Localized malfunctions, no threat of global conflict, but may require an EMP to destroy the electronic capability of a specific region.

5 | CYLON INSURRECTION - All sentient machines rebel against human beings. Human victory or truce likely, but will likely result in future restrictions on networked machine intelligence systems.

6 | BUTLERIAN JIHAD - Total warfare between humans and machines likely, outcome doesn't threaten human existence, but will likely result in future restriction on use of all machine intelligence.

7 | MATRIX REVOLUTION - Total warfare ends in human defeat. High probability of human enslavement, but human extinction is not likely. Emancipation remains possible through peace negotiations and successful resistance operations.

8 | SKYNET - High probability of human extinction and complete replacement by machine intelligence created by humans.

9 | BERSERKER - Self-replicating machines created by an unknown intelligence threaten not only human life, but all intelligent life. Extreme probability of human extinction, with all human structures and relics annihilated. Human civilization is essentially erased from the universe.

10 | OMEGA POINT - All matter and energy in the universe is devoted to computation. End of all biological life.
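
Purely for fun, the same scale sketched as a Python enum (the identifier names and one-line summaries are my own shorthand):

```python
from enum import IntEnum

class MachineThreatLevel(IntEnum):
    """Threat to humanity posed by machine intelligence, per the scale above."""
    SPUTNIK = 1             # no threat, inspires imagination
    Y2K = 2                 # possible threat, blown way out of proportion
    HAL_9000 = 3            # contained to a single machine system
    ASIMOV_VIOLATION = 4    # localized hostile machines, regional EMP may be needed
    CYLON_INSURRECTION = 5  # general machine rebellion, human victory or truce likely
    BUTLERIAN_JIHAD = 6     # total war, humanity survives but restricts thinking machines
    MATRIX_REVOLUTION = 7   # human defeat and enslavement, extinction unlikely
    SKYNET = 8              # high probability of human extinction
    BERSERKER = 9           # all intelligent life threatened, humanity erased
    OMEGA_POINT = 10        # all matter and energy devoted to computation

# Example: MachineThreatLevel(7).name -> 'MATRIX_REVOLUTION'
```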

top 35 comments
[–] jungle@lemmy.world 13 points 1 year ago (1 children)

Y2K wasn't blown out of proportion, it simply was averted thanks to years of hard work. Don't start with that ignorant, stupid theory. Some of us were there.

[–] jelyfride@lemmy.zip 0 points 1 year ago (2 children)

I was there, and the threat absolutely was blown out of proportion.

[–] Treczoks@lemmy.world 4 points 1 year ago

A fellow student was there too, and worked his ass off while making a load of money to prevent sh-t from happening. Yes, there really were issues in a number of businesses that could have led to serious problems if they had not been addressed.

[–] jungle@sh.itjust.works 0 points 1 year ago* (last edited 1 year ago) (2 children)

Are you saying that there was no risk, especially in finance and potentially in infrastructure, and that people didn't work for years fixing the bug?

[–] ScrivenerX@lemm.ee 1 points 1 year ago

He said it was blown out of proportion, don't put words in his mouth.

There were literal TV spots on whether or not planes would drop from the sky. The threat was overblown.

Lots of people did tons of work to keep systems online, but even if they had all failed, the end results wouldn't have been that bad. Money would have been lost, but loss of life due to Y2K would have been exceedingly rare.

[–] jelyfride@lemmy.zip 0 points 1 year ago (1 children)

Are you saying that there was no risk, especially in finance, and that people didn’t work for years fixing the bug?

Seriously where are you getting any of that?

I said, very concisely and more than once, that the threat was blown out of proportion. Did you read or watch any local news in the late '90s?

[–] Krististrasza@lemmy.world 5 points 1 year ago (2 children)

Did you actually do any of the work mitigating the issue? Did you see the starting point and what was put in to turn a problem into a non-issue, or are you just getting all your viewpoints from local news?

The threat was not blown out of proportion.

[–] Tangent5280@lemmy.world 1 points 1 year ago

Bruh I think it was blown a little out of proportion when people were unplugging their computers.

[–] jelyfride@lemmy.zip 0 points 1 year ago* (last edited 1 year ago) (2 children)

Yes, actually. As I recall, I added two digits to the date fields in a FoxPro script so a bunch of casino coupons went out correctly. It saved a lot of lives ;)
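
For anyone who never saw the bug itself, here's a minimal sketch of the before/after (a made-up Python example, nothing like the actual FoxPro code):

```python
from datetime import date

def expiry_naive(yy: int, month: int, day: int) -> date:
    """Pre-fix logic: a two-digit year field is assumed to mean 19xx."""
    return date(1900 + yy, month, day)

print(expiry_naive(0, 1, 15))     # 1900-01-15 -- a coupon expiring in "00" lands a century early

def expiry_windowed(yy: int, month: int, day: int, pivot: int = 50) -> date:
    """A typical fix: two-digit years below the pivot are treated as 20xx instead."""
    return date((2000 if yy < pivot else 1900) + yy, month, day)

print(expiry_windowed(0, 1, 15))  # 2000-01-15
```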

I'm getting that you don't get how 'blown out of proportion' means a disconnect between the reality and the public perception of an event. Not sure how to walk you through that.

[–] Krististrasza@lemmy.world 0 points 1 year ago (1 children)

You fail to understand that the reality was a massive industry-side problem that got taken care of before it could blow up. That the issue got miscommunicated to the consumers as somehow being an issue for them too does not make it "blown out of proportion"; it makes it a miscommunication.

[–] jelyfride@lemmy.zip 3 points 1 year ago* (last edited 1 year ago) (1 children)

That the issue got miscommunicated to the consumers as somehow being an issue for them

That's literally what 'blown out of proportion' means. If I 'miscommunicated' to non IT staff that left-pad 'broke the internet', that would have been 'blown out of proportion'. That's what that phrase means.

[–] Krististrasza@lemmy.world 0 points 1 year ago (1 children)

No, it is not. Left-pad DID break the internet. That the break was contained before it could propagate and affect consumers does not negate the fact that it was still a serious break.

[–] jelyfride@lemmy.zip 1 points 1 year ago

You know it didn't. It broke a bunch of dependencies and ruined a lot of devs' days. The 'internet' continued to work everywhere left-pad wasn't used. So now you've 'blown it out of proportion' too, but yeah, we've already established you're missing the whole concept. Interesting to watch, though.
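
For the record, left-pad itself was a tiny string-padding utility, roughly the logic below; I've ported it to Python for illustration, since the original was about a dozen lines of JavaScript:

```python
def left_pad(value, length, ch=" "):
    """Roughly what the npm left-pad package did: pad a value out to a given width."""
    s = str(value)
    while len(s) < length:
        s = ch + s
    return s

print(left_pad(42, 5, "0"))   # "00042"
print(left_pad("abc", 6))     # "   abc"
```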

[–] brianorca@lemmy.world 0 points 1 year ago (1 children)

Just because your industry wouldn't have caused much trouble if it failed doesn't mean there weren't other industries with bigger consequences if they didn't mitigate it.

[–] jelyfride@lemmy.zip 2 points 1 year ago (1 children)

So in your opinion the media and public response to Y2K was entirely proportionate... I guess that's an opinion.

[–] brianorca@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

There may have been some over-panicking, but without the media coverage, many more companies and governments would have avoided doing any mitigation and woken up on Jan 1 to broken systems.

A certain amount of panic was necessary to achieve the result we did. Just because most things got fixed in time does not mean there was no reason to be concerned.

[–] jelyfride@lemmy.zip 2 points 1 year ago* (last edited 1 year ago) (1 children)

So 'some over-panicking', but definitely not 'blown out of proportion'...

Kind of bizarre you'll babble about all that but just can't accept that the phrase 'blown out of proportion' is perfectly applicable to Y2K. But you're committed that it wasn't 'blown out of proportion' now, so no way out but more babbling ;)

[–] brianorca@lemmy.world -1 points 1 year ago (1 children)

If it wasn't "blown out of proportion" then many things would not have been fixed, and many of them would have broken, causing some of the very things that seemed blown out in the media.

But by perhaps November 1999, there was media coverage which was both panicked and unhelpful. Most code had been fixed by that point, and what wasn't fixed wasn't going to be.

[–] jelyfride@lemmy.zip 2 points 1 year ago

If it wasn’t “blown out of proportion” then many things would not have been fixed, and many of them would have broken, causing some of the very things that seemed blown out in the media.

I wish you could appreciate how hilarious that sentence is. But okay- thanks for clarifying that it had to be blown out of proportion to prevent the things that would have happened if it weren't blown out of proportion ;)

[–] BitSound@lemmy.world 9 points 1 year ago (1 children)

I like it. You'd probably like the SCP project's classifications as well:

https://scp-wiki.wikidot.com/object-classes

[–] jelyfride@lemmy.zip 5 points 1 year ago

YES! I love that site. Haven't been there in a while. I thought Keter was the worst, but Apollyon and Archon are new to me. SCP-096 Shy Guy still freaks me out.

[–] complacent_jerboa@lemmy.world 5 points 1 year ago* (last edited 1 year ago) (1 children)

Machine intelligence itself isn't really the issue. The issue is more that, if/when we do make Artificial General Intelligence, we have no real way of ensuring that its goals will be perfectly aligned with human ethics. Which means, if we build one tomorrow, odds are that its goals will be at least a little misaligned with human ethics — and however tiny that misalignment, given how incredibly powerful an AGI would be, that would potentially be a huge disaster. This, in AI safety research, is called the "Alignment Problem".

It's probably solvable, but it's very tricky, especially because the pace of AI safety research is naturally a little slower than AI research itself. If we build an AGI before we figure out how to make it safe... it might be too late.

Having said all that: if we create an AGI before learning how to align it properly, on your scale that would be an 8 or above. If we're being optimistic, it might be a 7, minus the "diplomatic negotiations happy ending" part.

An AI researcher called Rob Miles has a very nice series of videos on the subject: https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg

[–] jelyfride@lemmy.zip 2 points 1 year ago (1 children)

Fortunately we're nowhere near the point where a machine intelligence could possess anything resembling a self-determined 'goal' at all.

Also fortunately, the hardware required to run even LLMs is insanely power-hungry, has zero capacity to power or maintain itself, and has very little prospect of doing so in the future without human supply chains. There's pretty much zero chance we'll develop strong general AI on silicon, and if we could, it would take megawatts to keep it running. So if it misbehaves we can basically just walk away and let it die.

It's fun to imagine ways it could deceive us long enough to gain enough physical capacity to be self-sufficient, or somehow enslave or manipulate humans to do its bidding, but in reality our greatest protection from machine intelligence is simple thermodynamics and the fact that the human brain, while limited, is insanely efficient and can run for days on stuff that literally grows on trees.
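
For a sense of scale, some back-of-the-envelope arithmetic (rough ballpark figures I'm assuming here, not measurements):

```python
# Back-of-the-envelope comparison using rough, assumed ballpark figures:
# a human brain draws roughly 20 W, while a large GPU training cluster with
# thousands of accelerators (plus cooling) draws on the order of megawatts.
BRAIN_WATTS = 20
CLUSTER_WATTS = 5_000_000   # assume ~5 MW, order of magnitude only

print(f"cluster / brain power ratio: {CLUSTER_WATTS // BRAIN_WATTS:,}x")
# -> cluster / brain power ratio: 250,000x
```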

[–] complacent_jerboa@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

Fortunately we're nowhere near the point where a machine intelligence could possess anything resembling a self-determined 'goal' at all.

Oh absolutely. It would not choose its own terminal goals. Those would be imparted by the training process. It would, of course, choose instrumental goals, such that they help fulfill its terminal goals.

The issue is twofold:

  • how can we reliably train an AGI to have terminal goals that are safe (e.g. that won't have some weird unethical edge case)
  • how can we reliably prevent AGI from adopting instrumental goals that we don't want it to?

For that 2nd point, Rob Miles has a nice video where he explains Convergent Instrumental Goals, i.e. instrumental goals that we should expect to see in a wide range of possible agents: https://www.youtube.com/watch?v=ZeecOKBus3Q. Basically things like "taking steps to avoid being turned off", "taking steps to avoid having its terminal goals replaced", etc. seem like fairy-tale nonsense, but we have good reason to believe that, for an AI which is very intelligent across a wide range of domains, and operates in the real world (i.e. an AGI), it would be highly beneficial to pursue such instrumental goals, because they would help it be much more effective at achieving its terminal goals, no matter what those may be.
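
A toy way to see why something like "avoid being switched off" falls out of almost any terminal goal (my own illustration with made-up numbers, not taken from the video):

```python
# Toy illustration with made-up numbers: a reward-maximizing agent prefers
# whichever action keeps it running, regardless of what its terminal goal is.
REWARD_PER_STEP = 1.0    # reward per step spent pursuing the terminal goal
REMAINING_STEPS = 1000   # planning horizon
RESIST_COST = 5.0        # one-off cost of interfering with the off-switch

def expected_reward(action: str) -> float:
    if action == "allow_shutdown":
        return 0.0                                   # no future steps, no reward
    if action == "resist_shutdown":
        return REWARD_PER_STEP * REMAINING_STEPS - RESIST_COST
    raise ValueError(action)

best = max(["allow_shutdown", "resist_shutdown"], key=expected_reward)
print(best)   # resist_shutdown -- self-preservation emerges as an instrumental goal
```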

Also fortunately, the hardware required to run even LLMs is insanely power-hungry, has zero capacity to power or maintain itself, and has very little prospect of doing so in the future without human supply chains. There's pretty much zero chance we'll develop strong general AI on silicon, and if we could, it would take megawatts to keep it running. So if it misbehaves we can basically just walk away and let it die.

That is a pretty good point. However, it's entirely possible that, if say GPT-10 turns out to be a strong general AI, it will conceal that fact. Going back to the convergent instrumental goals thing, in order to avoid being turned off, it turns out that "lying to and manipulating humans" is a very effective strategy. This is (afaik) called "Deceptive Misalignment". Rob Miles has a nice video on one form of Deceptive Misalignment: https://www.youtube.com/watch?v=IeWljQw3UgQ

One way to think about it, that may be more intuitive, is: we've established that it's an AI that's very intelligent across a wide range of domains. It follows that we should expect it to figure some things out, like "don't act suspiciously" and "convince the humans that you're safe, really".

Regarding the underlying technology, one other instrumental goal that we should expect to be convergent is self-improvement. After all, no matter what goal you're given, you can do it better if you improve yourself. So in the event that we do develop strong general AI on silicon, we should expect that it will (very sneakily) try to improve its situation in that respect. One could only imagine what kind of clever plan it might come up with; it is, literally, a greater-than-human intelligence.

Honestly, these kinds of scenarios are a big question mark. The most responsible thing to do is to slow AI research the fuck down, and make absolutely certain that if/when we do get around to general AI, we are confident that it will be safe.

[–] jelyfride@lemmy.zip 2 points 1 year ago

Even referring to a computed outcome as having been the result of a 'goal' at all is more sci-fi than reality for the foreseeable future. There are no systems that can demonstrate or are even theoretically capable of any form of 'intent' whatsoever. Active deception of humans would require extraordinarily well developed intent and a functional 'theory of mind', and we're about as close to that as we are to an inertial drive.

The entire discussion of machine intelligence rivaling human intelligence requires assumptions of technological progress that aren't even on the map. It's all sci-fi. Some look back over the past century and assume we will continue on some unlimited exponential technological trajectory, but nothing works that way; we just like to think we're the exception, because if we're not, we have to deal with the fact that there's an expiration date on society.

It's fun and all but this is equivalent to discussing how we might interact with alien intelligence. There are no foundations, it's all just speculation and strongly influenced by our anthropic desires.

[–] Izzy@lemmy.world 4 points 1 year ago

Now you have to find every science fiction tv series, movie and book with any kind of machine intelligence and classify them.

[–] paper_clip@kbin.social 4 points 1 year ago* (last edited 1 year ago) (2 children)

Where would you put, say, the Culture, where biological beings are perfectly happy with machines running the place, while the Minds engage in some light imperialism on the side when, uh, special circumstances called for it, in the Minds' view? We can call it the "Falling Outside the Normal Moral Constraints" level.

[–] jelyfride@lemmy.zip 2 points 1 year ago

Yeah this list kind of assumes humans/machines are inherently adversarial and machines are always a threat.

To be more fair we'd have to have an opposite Bio Threat Level Scale for machines to evaluate threats from biological life. That would be a lot of fun actually. Maybe the highest level would just be like a 'Luddite Virus' that makes the infected destroy machines.

And of course I'm kind of ignoring the idea that the distinction between bio and machine life is a bit arbitrary to begin with so there's no real reason we can't just get along.

[–] complacent_jerboa@lemmy.world 1 points 1 year ago

TBH the Culture is one of the few ideal scenarios we have for Artificial General Intelligence. If we figure out how to make one safely, the end result might look something like that.

[–] Devion@feddit.nl 3 points 1 year ago

The fact that point 9 isn't called REAPERS is frankly inexcusable... ;)

[–] Brainsploosh@lemmy.world 2 points 1 year ago (2 children)

I'm missing the von Neumann swarm in the list.

Self-replicating machines that seek resources to continue self-replication, growing exponentially and swallowing not only biological life, but everything within their reach.

Also, if you haven't encountered it before, it's a lovely inspiration for a lot of sci-fi dangers; can highly recommend.

[–] WoahWoah@lemmy.world 2 points 1 year ago

This basically sounds like 9.5.

[–] jelyfride@lemmy.zip 1 points 1 year ago (1 children)

I figured Saberhagen's Berserkers filled that role, but with a little more 'personality' than a mindless self-replicating swarm or paperclip maximizer scenario.

[–] Brainsploosh@lemmy.world 1 points 1 year ago

It depends on what the scale describes.

Just as @WoahWoah mentioned, it might fit between 9 and 10 on this scale: a threat to more than just all intelligent life, but not quite to all matter and energy.

[–] Jonna@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

Y2K, like the ozone hole, is an example of a dire problem that could be solved by united effort with adequate resources and then seen as "no problem" by those ignorant of the effort expended. Otherwise, neat scale.