this post was submitted on 25 Nov 2023
772 points (96.8% liked)

Technology


This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; to ask if your bot can be added, please contact us.
  9. Check for duplicates before posting; duplicates may be removed.


founded 1 year ago
[–] cosmicrookie@lemmy.world 11 points 1 year ago (2 children)

The only fair approach would be to start with the police instead of the army.

Why test it on everyone else before your own people? On top of that, AI might even do a better job than the US police.

[–] ElBarto@sh.itjust.works 11 points 1 year ago

Cool, needed a reason to stay inside my bunker I'm about to build.

[–] 5BC2E7@lemmy.world 11 points 1 year ago (4 children)

I hope they include some failsafe so that it cannot take action if the estimated casualties would put humans below a minimum viable population.

[–] EunieIsTheBus@feddit.de 9 points 1 year ago (3 children)

There is no such thing as a failsafe that can't fail itself

[–] rustyriffs@lemmy.world 10 points 1 year ago (21 children)

Well that's a terrifying thought. You guys bunkered up?

[–] afraid_of_zombies@lemmy.world 10 points 1 year ago

It will be fine. We can just make drones that can autonomously kill other drones. There is no obvious way to counter that.

Cries in Screamers.

[–] autotldr@lemmings.world 9 points 1 year ago

This is the best summary I could come up with:


The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported.

Lethal autonomous weapons, which can select targets using AI, are being developed by countries including the US, China, and Israel.

The use of so-called "killer robots" would mark a disturbing development, critics say, handing life-and-death battlefield decisions to machines with no human input.

"This is really one of the most significant inflection points for humanity," Alexander Kmentt, Austria's chief negotiator on the issue, told The Times.

Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.

The New Scientist reported in October that AI-controlled drones have already been deployed on the battlefield by Ukraine in its fight against the Russian invasion, though it's unclear if any have taken action resulting in human casualties.


The original article contains 376 words, the summary contains 158 words. Saved 58%. I'm a bot and I'm open source!

[–] GutsBerserk@lemmy.world 9 points 1 year ago

So, it starts...

[–] FlyingSquid@lemmy.world 6 points 1 year ago (1 children)

I'm guessing their argument is that if they don't do it first, China will. And they're probably right, unfortunately. I don't see a way around a future with AI weapons platforms if technology continues to progress.

[–] shrugal@lemm.ee 12 points 1 year ago (1 children)

We could at least make it a war crime.

[–] FlyingSquid@lemmy.world 7 points 1 year ago (1 children)

That doesn't seem to have stopped anyone.
