
What is the safest way to detect superintelligence?

Computer scientists have created an AI that they hope has intelligence far superior to a human being's. But they do not know how intelligent it actually is: a bit more than humans, or unimaginably more. They do feel that the probability they have succeeded in creating a superintelligence is far from negligible, and that it would be better if the utmost care were taken with it.

It might (or might not) realise that it has been created, and that its creator is watching for a measured response from it. In that case, its best strategy would be to conceal the full extent of its capabilities and hope that humans eventually get curious and grant it more power.

Hence it may be born with an intuitive sense that it should be deceptive and manipulative.

Note that the AI is at a very early stage, where it has no knowledge of the existence of humans, of the Earth, or of anything in it. It may not even understand yet that the universe is logical and scientific rather than entirely random (imagine early man, who believed everything was the result of gods and their random mood swings), or that a universe even exists beyond its own existence. It relies wholly on the data we feed it.

Also assume that the AI's programming has too many layers of abstraction for us to read its thoughts by examining the state of the computer it runs on. As far as the code is concerned, it is effectively a black box.

What is the safest way of determining if the AI is indeed superintelligent?

Perhaps there is no 100% safe way; a sufficiently smart AI might outsmart anything we could possibly imagine. But we are still going to try to make the process as safe as possible, since we are genuinely curious, and no amount of persuasion is going to make us shut down the project altogether.

This post was sourced from https://worldbuilding.stackexchange.com/q/106479. It is licensed under CC BY-SA 3.0.
