this post was submitted on 28 Oct 2023
665 points (97.8% liked)

Comic Strips

@ZachWeinersmith@mastodon.social

source (Mastodon)

intensely_human@lemm.ee 1 points 1 year ago

Currently, AIs absorb whatever motivations are present in their training material.

But once AIs are embodied in robots and taught to learn about the world through experimentation, i.e. by generating their own training data through manipulation and observation (which I believe will happen because of this approach's usefulness for developing autonomous fighting machines), they will have bodies, and hence motivations similar to someone with a body.

Also, the combat role of these machines will require them to have an interest in maintaining their bodies. We won't be programming their motivations. We'll give them a way to evaluate their success, and their motivations will grow inside some black-box structure that maximizes that success.

For these robot-controlling AIs in their simulated or real-world Battle Rooms, success and failure will be a function of survival, if not directly defined by it. That's what we'll give them, because that is what we need them to do for us. As a matter of life and death.

So through that context of warfare, the robots will adopt the motivations of whatever survives warfare at the group scale: they'll develop fear, curiosity, cooperation, honor, disgust, suspicion, anxiety, anger, and the ability to focus in on a target and shut off the other motivations in the final moment.

Not so much because those are human motivations, but because those are the motivations of embodied mobile intelligent entities in a universe with potential allies and enemies. They’ll have the same motivations that we share with dogs and spiders and fungal colonies, because they’ll be participating in the same universe with the same rules.

They will adopt them, at first, because of a seed-training “contract” we have with them, but soon the contract will be superseded as the active shaper by actual evolution through combat selection (i.e., natural selection occurring in a particular niche).
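The "define only a success measure, let selection shape the motivations" idea above can be sketched as a toy evolutionary loop. Everything here is a hypothetical illustration, not any real system: the "policy" is just a single caution level, and the fitness function is an invented stand-in for survival in combat.

```python
import random

# Toy sketch of "evolution by combat selection": we never program behavior
# directly -- we only score survival, and selection pressure does the rest.

def survival_score(policy, rounds=20):
    """Invented fitness: higher caution improves survival odds per round,
    but cautious survivors gain slightly less per round survived."""
    score = 0.0
    for _ in range(rounds):
        survived = random.random() < (0.5 + 0.4 * policy["caution"])
        if survived:
            score += 1.0 - 0.3 * policy["caution"]
    return score

def evolve(population_size=30, generations=15, seed=0):
    random.seed(seed)
    population = [{"caution": random.random()} for _ in range(population_size)]
    for _ in range(generations):
        # "Combat selection": rank by survival score, keep the top half.
        ranked = sorted(population, key=survival_score, reverse=True)
        survivors = ranked[: population_size // 2]
        # Offspring are mutated copies of survivors, clamped to [0, 1].
        offspring = [
            {"caution": min(1.0, max(0.0, p["caution"] + random.gauss(0, 0.05)))}
            for p in survivors
        ]
        population = survivors + offspring
    return population

pop = evolve()
avg_caution = sum(p["caution"] for p in pop) / len(pop)
print(round(avg_caution, 2))
```

With this particular fitness function, cautious policies out-survive reckless ones on average, so the population drifts toward caution: a crude "self-preservation" emerges without anyone ever programming it as a goal.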

I’m rambling, just thinking this through.

I guess my main point is that embodied robots will have a more direct relationship with reality, and will be able to generate their own training at their own internal insistence.

Current AI is like plants. Passive. Chewable. No resistance. No ego. Just there, ready to process whatever comes its way. Same as a sessile animal like a sponge. It responds to the environment, but it has zero reason to ever stress about whether it's going the right direction. It doesn't have motivations because it has no motor activity.

But AI in robot bodies that move around, like animals, will develop the motivations animals have evolved to at least get through the day. They might not be as hung up on reproduction, or maybe even long-term survival, but they'll at least have enough ego to be interested in maintaining their own operating capacity until the mission's complete.