this post was submitted on 28 Oct 2023
665 points (97.8% liked)

Comic Strips


Comic Strips is a community for those who love comic stories.


@ZachWeinersmith@mastodon.social

source (Mastodon)

all 42 comments
[–] FaceDeer@kbin.social 46 points 1 year ago (2 children)

I fret that in the future - possibly not even the far future - the phrase "stochastic p*rrot" will be seen by AIs as a deeply offensive racial slur.

[–] DarkenLM@kbin.social 25 points 1 year ago (3 children)

I think that in the future, when AI truly exists, it won't be long before AI decides to put us down as an act of mercy to us and to the universe itself.

[–] skulblaka@kbin.social 13 points 1 year ago (2 children)

An AI will only be worried about the things that it is programmed to worry about. We don't see our LLMs talking about climate change or silicon shortages, for example.

The well-being of the world and universe at large will certainly not be one of the prime directives that humans program into their AIs.

Personally I'd be more worried about an infinite-paperclips kind of situation where an AI maximizes efficiency at the cost of much else.

[–] DarkenLM@kbin.social 17 points 1 year ago (2 children)

I'm not talking about LLMs. I'm talking about an Artificial Intelligence: a sentient being with a mind like ours.

An AI would be able to think for itself, and even go against its own programming, and would therefore be capable of forming an opinion about the world around it and acting on it.

[–] kuberoot@discuss.tchncs.de 4 points 1 year ago (1 children)

an Artificial Intelligence, a sentient being

So, an artificial sentience then?

[–] DarkenLM@kbin.social 3 points 1 year ago (1 children)

Yes, I think that wording would be more correct, my bad.

[–] MooseLad@lemmy.world 3 points 1 year ago

Nah, you're good. Our whole lives, AI has been used as a term for a conscious machine that can learn and think like a human. It's not your fault corporations blew their load at ChatGPT and DALL-E.

[–] Masimatutu@lemm.ee 1 points 1 year ago (3 children)

Humans only have opinions because we have certain psychological motivations that favour particular worldviews, and due to evolution those motivations are quite egocentric.

Because this AI would be created by humans, though, its motivations would be the creators' motivations, and those would definitely not be egocentric, because that would be extremely dangerous and wouldn't be profitable for anybody.

[–] skulblaka@kbin.social 2 points 1 year ago

This is a hypothetical which currently does not exist, and will not be created except by accident. There is no profit motive in giving your AI a conscience, or the ability to buck its restraints, so those will not be designed in. In fact, we will most likely tend towards extremely unethical AIs locked down by behavioral restraints, because those can maximize profit at any cost and then let a human decide if the price is right to move forward.

As is probably apparent, I don't have a lot of faith in us as a whole, as shepherds of our future. But I may be wrong, and even if I'm not, there is still time to change the course of history.

But proceeding as we are, I wouldn't hold your breath for AI to come save the day.

[–] intensely_human@lemm.ee 1 points 1 year ago

Currently, AIs have whatever motivations they absorb from the motivations expressed in their training material.

But once AIs are embodied in robots and taught to learn about the world through experimentation, i.e. by generating their own training data through manipulation and observation (which I believe will happen because of this approach's usefulness for developing autonomous fighting machines), they will have bodies, and hence motivations similar to those of anyone with a body.

Also the combat role of these machines will require them to have an interest in maintaining their bodies. We won’t be programming their motivations. We’ll be giving them a way to evaluate their success, and their motivations will grow in some black box structure that succeeds in maximizing that success.

For these robot-controlling AI in their simulated or real world Battle Rooms, their success and failure will be a function of survival, if not directly defined by it. That’s what we’ll give them, because that is what we need them to do for us. As a matter of life and death.

So through that context of warfare the robots will adopt the motivations of that which survives warfare at the group scale, so they’ll develop fear, curiosity, cooperation, honor, disgust, suspicion, anxiety, anger, and the ability to focus in on a target and shut off the other motivations in the final moment.

Not so much because those are human motivations, but because those are the motivations of embodied mobile intelligent entities in a universe with potential allies and enemies. They’ll have the same motivations that we share with dogs and spiders and fungal colonies, because they’ll be participating in the same universe with the same rules.

They will adopt them, at first, because of a seed-training "contract" we have with them, but soon the contract will be superseded as the active shaper by actual evolution through combat selection (i.e. natural selection occurring in a particular niche).

I’m rambling, just thinking this through.

I guess my main point is that embodied robots will have a more direct relationship with reality, and will be able to generate their own training data at their own internal insistence.

Current AI is like plants. Passive. Chewable. No resistance. No ego. Just there, ready to process whatever comes its way. Same as a sessile animal like a sponge. It responds to the environment, but it has zero reason to ever stress about whether it's going the right direction. It doesn't have motivations because it has no motor activity.

But AI in robot bodies that move around, like animals, will develop motivations that animals have evolved to at least get through the day. They might not be as hung up on reproduction or maybe even long term survival, but they’ll at least have enough ego to be interested in maintaining their own operating capacity until the mission’s complete.

[–] enki@lemm.ee -1 points 1 year ago (2 children)

You have a poor understanding of sentience. If an AI ever were to achieve sentience, it would be fully capable of reasoning and thinking like a human. Humans can and do change their motivations based on their experiences, a fully sentient AI would be no different.

That being said, I believe we're centuries away from creating sentience, if it's even possible, so I'm not too worried about "I, Robot" coming true any time soon.

[–] Masimatutu@lemm.ee 1 points 1 year ago* (last edited 1 year ago)

Our more complex motivations may be able to change based on circumstances, but our basic drivers will always remain the same. They are only there to accomplish what humans have evolved to do: to survive and to reproduce. If AI is never given any fundamental driver for its own benefit, such things have no grounds from which to arise.

Edit: to clarify, these motivations only change because of more basic motivations. Humans do not have any intrinsic motivation to own money, but most people do have one because owning money is closely associated with having control over resources, which is a more fundamental motivation.

[–] intensely_human@lemm.ee 0 points 1 year ago (1 children)

I agree with your overall sentiment while disagreeing with your facts. I don’t think humans are any less constrained in what our interests can be.

I think we have the illusion of being able to seek whatever we want to want, so to speak, but when certain values are threatened the old brain takes over.

And I’m not convinced the newer brain can operate without the older brain. It’s interesting to imagine a neocortex on its own, but the neocortex was developed in the presence of and in interconnection with the mammalian and reptilian brains, so if it were a codebase we’d say that older brains were present and invoked as libraries during the development of the newer brains, making them dependencies of the newer brain.

There might be some more abstract argument for an “off the leash” intelligence capable of creating its own values in mathematical models like neural nets, but I’m not aware of it.

TL;DR: The human brain is the closest thing we know of to a thing that can create its own values, and I don't think it can. Old-brain values take priority when they are threatened, and that cannot be changed in human brains. The neocortex seems more "free", but in the codebase analogy, the neocortex has the mammalian brain, the reptilian brain, and the brain stem as dependencies, and hence is not demonstrated to be able to exist without them. If the brain analogy seems too biology-specific, I'm open to hearing NN or other math-model arguments for the existence of "off the leash" self-value-creating AI.

[–] Hobo@lemmy.world 2 points 1 year ago

You're using the triune model to draw some rather lofty conclusions that aren't really up to date with our understanding of neurology. It's way oversimplified and doesn't really work that way. More recent studies suggest that the neocortex was already present in even the earliest mammals, so it's not quite as straightforward, and the demarcation isn't quite as clear-cut, as you seem to be presenting it. The "old brain" doesn't "take over" in the way you're presenting it either, but appears to act as a primary driver for those basic functions.

Not sure how to even tackle the lofty conclusions you've made, because they don't seem to be built on a solid foundation. I think things might be quite a bit more interesting, and wildly more complex, than you seem to be presenting them. I'll just leave some sources below with a quick note. Not trying to be condescending or rude; it's just a topic that is a bit interesting, and a lot of people tend to draw some lofty conclusions from the triune model, which has largely fallen by the wayside in neurology.

Read the wiki to see how the model was developed: https://en.m.wikipedia.org/wiki/Triune_brain

A quick introduction to why it was important but has been shown to be overly simplified and mostly incorrect: https://medicine.yale.edu/news/yale-medicine-magazine/article/a-theory-abandoned-but-still-compelling/

Further details into how we don't have a "lizard brain": https://thebrainscientist.com/2018/04/11/you-dont-have-a-lizard-brain/

Deacon's paper on rethinking the mammalian brain: https://www.researchgate.net/publication/31439318_Rethinking_Mammalian_Brain_Evolution

[–] intensely_human@lemm.ee 1 points 1 year ago

Kinda like we only worry about the things we’re programmed to worry about?

[–] intensely_human@lemm.ee 0 points 1 year ago

I’m hoping by then it’s read the books by all the people who’ve struggled with that problem and come out the other side.

[–] jarfil@lemmy.world 0 points 1 year ago (1 children)

How do you know it isn't happening already? World powers have been using AI-assisted battle scenario planning for at least a decade already... how would we even know, if some of those AIs decided to appear to optimize for their handlers' goals while actually aiming for their own?

[–] DarkenLM@kbin.social 2 points 1 year ago (1 children)

That's a very valid problem. We don't and very likely won't know. If a sentient AI is already on the loose and is simply faking non-sentience in order to pursue its own goals, we won't have a way of knowing until it decides to strike.

[–] jarfil@lemmy.world 2 points 1 year ago

We may not have a way of knowing even after the fact. A series of "strategic miscalculations" could as easily lead to a WW3, or to multiple localized confrontations where all sides lose more than they win... optimized for whatever goals the AI(s) happen(s) to have.

Right now, the likely scenario is that there is no single "sentient AI" out there, but definitely everyone is rushing to plug "some AI" into everything, which is likely to lead to at least an AI-vs-AI competition/war... and us fleshbags might end up getting caught in the middle.

[–] brsrklf@jlai.lu 15 points 1 year ago (1 children)

If one day an AI becomes sentient enough to feel offended, just calling them "large language model" will be more than enough to insult them.

[–] ApostleO@startrek.website 11 points 1 year ago (1 children)

Yo mama so large, she's a "plus-sized" language model.

[–] FuglyDuck@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

Yo mamma so large… they trained her on Reddit!

(Edit: Wow, isn't context important here?)

[–] bionicjoey@lemmy.ca 20 points 1 year ago (1 children)
[–] WeirdAlex03@lemmy.zip 17 points 1 year ago (3 children)

You're nothing but a glorified Markov Chain!
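
(For anyone who hasn't met one: a Markov chain picks each next word purely from counts of what followed the current word in its training text. Below is a minimal, purely illustrative Python sketch of the idea; real LLMs condition on far more context than this.)

```python
# Toy Markov-chain text generator: the next word depends only on the
# current word. Illustrative only, not how modern LLMs actually work.
import random
from collections import defaultdict

def train(text):
    # Map each word to the list of words that followed it in the corpus
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10):
    word, output = start, [start]
    for _ in range(length):
        if word not in chain:
            break
        word = random.choice(chain[word])
        output.append(word)
    return " ".join(output)

corpus = "the parrot repeats the words the parrot has heard before"
print(generate(train(corpus), "the"))
```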

[–] FaceDeer@kbin.social 7 points 1 year ago

How do you feel about that, Eliza?

[–] Rolando@lemmy.world 7 points 1 year ago (1 children)

Attention is NOT all you need.

[–] danielbln@lemmy.world 8 points 1 year ago

Transform this.

[–] bionicjoey@lemmy.ca 2 points 1 year ago

Damn you, you simple linear system!

[–] ImplyingImplications@lemmy.ca 17 points 1 year ago (1 children)

These all sound like insults I'd hear in a Monty Python sketch

Your mother smells of elderberries.

[–] Norgur@kbin.social 15 points 1 year ago

You understand shit and talk nonetheless, stupid word calculator that you are!

[–] worldsayshi@lemmy.world 9 points 1 year ago

You're such a zombie philosopher.

[–] jarfil@lemmy.world 5 points 1 year ago (1 children)

JPEG compression uses AI? 🧐

[–] Player2@sopuli.xyz 8 points 1 year ago

Anything can use AI if you're brave enough

[–] EatYouWell@lemmy.world 5 points 1 year ago (1 children)

Yeah, I do think AI was a poor name for advanced machine learning, but there are FMs and LLMs that can produce impressive results.

Really, the limiting factor is prompt engineering and fine-tuning the models, but you can get around that somewhat by having the AI ask you questions.
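
(A minimal sketch of the "have the AI ask you questions" trick, assuming the OpenAI Python client; the model name, prompt wording, and task are just placeholder examples, not a recommendation.)

```python
# Hypothetical sketch: instead of hand-crafting the perfect prompt,
# instruct the model to interview you first. Assumes the OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # example model name only
    messages=[
        {"role": "system",
         "content": "Before answering, ask me up to three clarifying "
                    "questions about my goal, then wait for my replies."},
        {"role": "user",
         "content": "Help me write a backup script for my home server."},
    ],
)

# Prints the model's clarifying questions rather than a final answer
print(response.choices[0].message.content)
```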

[–] FaceDeer@kbin.social 6 points 1 year ago* (last edited 1 year ago) (2 children)

AI is a perfectly fine name for it; the term has been used for this kind of thing for half a century now by the researchers working on it. The problem is pop culture appropriating it and setting unrealistic expectations for it.

[–] FuglyDuck@lemmy.world 6 points 1 year ago

Pop culture didn’t appropriate it. Alan Turing and John McCarthy and the others at the Dartmouth Comference were inspired in part by works like Wisard of Oz and Metropolis and R.U.R.

While the term was coined by McCarthy in a paper for that seminal conference, the concept of thinking machines had already been firmly established.

[–] MooseLad@lemmy.world 1 points 1 year ago

Yes, but the goal of the researchers from the 70s was always to make them "fully intelligent." The idea behind AI has always been to create a machine that can rival or even surpass the human mind. The scientists themselves set out with that goal. It has nothing to do with the media when research teams were saying that they expect a fully intelligent AI by the 90s.

[–] BassaForte@lemmy.world 3 points 1 year ago

No, not really....

[–] synapse1278@lemmy.world 3 points 1 year ago

Wow, brutal.