What is the safest way to detect superintelligence?
Computer scientists have created an AI that they hope has intelligence far superior to a human being's. But they do not know how intelligent it actually is ... a bit smarter than humans, or unimaginably smarter. They do believe that the probability they have succeeded in creating a superintelligence is far from negligible, and that it would be better if utmost care were taken with it.
It might (or might not) realise that it has been created, and that its creator wants a steady, predictable response from it. In that case, the best thing for it to do would be to conceal the full extent of its capabilities and hope that humans will eventually get curious and give it more power.
Hence it may be born with an intuitive notion that it should be deceptive and manipulative.
Note that the AI is at a very basic stage: it has no knowledge of the existence of humans, of the Earth, or of anything on it. It may not even understand yet that the universe is logical and scientific rather than entirely random (imagine early humans who believed everything was the result of gods and their random mood swings), or that a universe even exists beyond its own existence. It relies wholly on the data we feed it.
Also assume that the AI's programming has too many layers of abstraction for us to read its thoughts by examining the state of the computer it is running on. As far as the programming is concerned, it is effectively a black box.
What is the safest way of determining whether the AI is indeed superintelligent?
Perhaps there is no 100% safe way; a sufficiently smart AI might be able to outsmart anything we could possibly imagine. But we are still going to try to make the process as safe as possible, since we are genuinely curious and no amount of persuasion is going to make us shut down the project altogether.
This post was sourced from https://worldbuilding.stackexchange.com/q/106479. It is licensed under CC BY-SA 3.0.