The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
(www.businessinsider.com)
No. Humans have stopped nuclear catastrophes caused by computer misreadings before. So far, we have a far better decision-making track record.
Autonomous killing is an absolutely terrible, terrible idea.
The incident I'm thinking of is a computer misinterpreting geese as incoming nuclear missiles, with a human recognizing the error and shutting the system down. I can only find a couple of sources for that one, though, so here's another:
In 1983, a Soviet early-warning computer mistook sunlight reflecting off clouds for an incoming nuclear missile strike. The officer on duty, Stanislav Petrov, waited for corroborating evidence instead of reporting it up the chain as protocol required. Reporting it would likely have triggered a "retaliatory" nuclear strike.
https://en.m.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident
As faulty as humans are, they're as good a safeguard against tragedy as we have. Keep a human in the chain.
Self-driving cars lose their shit and stop working if a kangaroo gets in their way. One day some poor people are going to be carpet bombed because of some strange local creature that nobody ever really thinks about except the people who live there.