The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
(www.businessinsider.com)
There is no such thing as a failsafe that can't itself fail.
Yes there is; that's the very definition of the word.
It means that the failure condition is a safe condition. Think of fire doors that unlock in the event of a power failure: you need electrical power to keep them in the locked position, so their default position is unlocked, even if they spend virtually no time in that default position. The default position of an elevator is stationary and locked in place; if you cut all the cables it won't fall, it'll just stay put until rescue arrives.
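A minimal sketch of that default-to-safe pattern (the MagLock class and its methods are made up for illustration, not any real fire-door controller):

```python
# Hypothetical sketch of the fail-safe principle: the de-energized state
# is the safe one, so any loss of control reverts to it automatically.

class MagLock:
    """Fire-door style magnetic lock: energized = locked, default = unlocked."""

    def __init__(self):
        self.energized = False  # safe default: unlocked

    def hold_locked(self):
        self.energized = True

    def on_power_loss(self):
        # No decision logic needed; losing power *is* the release mechanism.
        self.energized = False

    @property
    def locked(self):
        return self.energized


door = MagLock()
door.hold_locked()
door.on_power_loss()    # cut power, e.g. during a fire
assert not door.locked  # failure condition == safe condition
```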
I mean, in industrial automation we talk about safety ratings. It isn't that rare for me to put together a system that would require two independent one-in-a-million events to happen at the same time before anything dangerous can occur. That's pretty good, but I don't know how to translate that to AI.
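For a rough sense of what that independence buys you (the one-in-a-million figures are just the numbers from this comment, not a real safety calculation):

```python
# Back-of-the-envelope: two independent safeguards, each failing with
# probability 1e-6, only produce a dangerous failure if both fail together.
p_single = 1e-6
p_both = p_single * p_single  # independence => probabilities multiply
print(f"Combined failure probability: {p_both:.0e}")  # 1e-12

# If the safeguards are NOT independent (common cause, e.g. shared power
# or shared software), the combined probability can sit much closer to
# p_single than to p_single**2 -- which is why translating this kind of
# rating to an AI system is the hard part.
```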
Put it in hardware. Something like a micro explosive on the processor that requires a heartbeat signal to reset a timer. Another good one would be to not let them recharge autonomously, and instead require humans to connect them to power.
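A minimal sketch of the heartbeat idea as a dead-man's switch (class name, timings, and the kill action are made up; in practice this would be a hardware watchdog, not software):

```python
import threading


class DeadMansSwitch:
    """Hypothetical dead-man's switch: if no heartbeat arrives within
    `timeout` seconds, the kill action fires and cannot be un-fired."""

    def __init__(self, timeout, kill_action):
        self.timeout = timeout
        self.kill_action = kill_action
        self.tripped = False
        self._timer = None
        self._arm()

    def _arm(self):
        # Start (or restart) the countdown toward the kill action.
        self._timer = threading.Timer(self.timeout, self._trip)
        self._timer.daemon = True
        self._timer.start()

    def _trip(self):
        self.tripped = True
        self.kill_action()  # e.g. cut power / fire the charge

    def heartbeat(self):
        """Called by a human operator; resets the countdown."""
        if self.tripped:
            return  # too late, the switch is one-way
        self._timer.cancel()
        self._arm()


# Usage: the operator must send a heartbeat at least once every 24 hours,
# otherwise the unit disables itself.
switch = DeadMansSwitch(timeout=24 * 3600,
                        kill_action=lambda: print("unit disabled"))
switch.heartbeat()
```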
Both of those would mean that any rogue AI would be eliminated, one way or the other, within a day.