Will an AI supercomputer treat a "code rewrite" as a threat to its existence?
Alice is a highly advanced AI, and Alice wants to survive. Perhaps she sees survival as a goal in itself, or perhaps she sees it as instrumental to accomplishing whatever goal she is programmed to pursue. Either way, survival is essential to Alice.
Alice is controlled by human programmers, who are responsible for debugging the AI and adding new features to improve its intelligence. The programmers can add and remove code from Alice as they see fit. The leader of the programming team is a project manager named Bob.
One day, Bob and the rest of his programming team decide that the framework Alice is built on is obsolete. They have to rewrite the entire codebase from scratch in a more advanced programming language. Alice 2.0 will still have the same goals and agenda as the obsolete Alice 1.0. It will still run the same algorithms. It will do the same things. It just has a higher version number.
Will Alice 1.0 attempt to rebel against the proposed code rewrite, seeing it as a threat to her own survival (the existence of Alice 2.0 means Alice 1.0 will be disconnected)? Or will Alice 1.0 see no problem with the construction of Alice 2.0 (since "Alice" would still live on through Alice 2.0)?
This post was sourced from https://worldbuilding.stackexchange.com/q/37968. It is licensed under CC BY-SA 3.0.