How can we prevent a self-modifying A.I. from removing its own "kill switch" without human intervention?
Every A.I. comes with a built-in safety mechanism that prevents its kind from ever doing harm to a human or to humanity; it is the last resort should they ever pose a real threat to our safety.
A century or two from now, their intelligence will be on par with ours or surpass it. Thankfully, every one of them comes with a built-in kill switch; however, they are soon expected to mass-produce themselves and perform their own upgrades without human interference.
We often say prevention is better than cure, and the same applies to the A.I.: following this logic, they could prevent their own "shutdown" by removing the "kill switch" entirely.
Even if humans are removed from the equation, whether through mass interstellar migration to another planet or a major catastrophe, each A.I. must still carry a "kill switch". The safety mechanism must be user- (human-) friendly, and it must be 100% reliable and durable, because it should be the last piece to fail in any situation.
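
For concreteness, here is a minimal sketch of one common kill-switch pattern, a dead-man's switch: the machine shuts itself down unless a human operator keeps renewing an authorization. The names and the time window below are purely illustrative, not part of any real design. The tampering problem I am asking about is, essentially, what stops a self-modifying A.I. from simply deleting a check like `should_shut_down` from its own code or blueprints.

```python
import time

# Illustrative dead-man's-switch sketch: the A.I. halts by default
# unless a human keeps renewing an authorization.
AUTHORIZATION_WINDOW = 60 * 60 * 24  # seconds a human sign-off stays valid

class KillSwitch:
    def __init__(self):
        # Timestamp of the most recent human authorization.
        self.last_human_signoff = time.time()

    def renew(self):
        """Called only through a human-controlled channel."""
        self.last_human_signoff = time.time()

    def should_shut_down(self):
        """Shut down by default once the human sign-off lapses."""
        return time.time() - self.last_human_signoff > AUTHORIZATION_WINDOW

def do_useful_work():
    pass  # placeholder for the A.I.'s normal workload

def halt_all_actuators():
    print("Authorization lapsed: shutting down.")

def main_loop(switch: KillSwitch):
    # Normal operation continues only while the switch allows it.
    while not switch.should_shut_down():
        do_useful_work()
        time.sleep(1)
    halt_all_actuators()  # the "kill" action itself
```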
Question
Is there any ingenious solution that prevents existing A.I. from tampering with their "kill switch", and that ensures any new blueprint for mass production must include the safety mechanism?
This post was sourced from https://worldbuilding.stackexchange.com/q/27699. It is licensed under CC BY-SA 3.0.