The AI that fails to be evil
A recurring theme is an artificial intelligence that was built with completely reasonable and positive goals but instead does great harm to the world.
However, I'm now thinking about the reverse: a supervillain builds an AI specifically to make people's lives miserable. Its task is very simple: maximize the human suffering in the world. To allow it to reach this goal, the AI has control over a small army of robots, a connection to the internet, and access to any information it may need.
However, much to the dismay of the supervillain, the AI fails to do this and instead just sits there, doing nothing evil at all. The supervillain checks everything: the AI is indeed programmed with the stated goal and hasn't changed goals, it is indeed probably much more intelligent than any human, and it is definitely not unable to use the tools given to it. It certainly has no conscience, and no goal other than the one stated. And yet, the AI has decided to just sit there and do nothing. The inaction is definitely a conscious decision on the AI's part, but the AI is not willing to share its reason.
My question is: What could lead the AI to the decision to do nothing?