The Challenge of Controlling a Powerful AI

By now, everyone is familiar with the remarkable achievements of special-purpose AIs like Deep Blue and Watson. Now, it is clear that as our accumulated knowledge of algorithmic methods and of the intricacies of human neural systems progresses, we will begin to see more and more advanced modes of artificial thought.

Assuming continued exponential, or even merely linear, growth of capabilities, a point will logically arrive when we can build a general-purpose artificial intelligence, and that artificial intelligence would have the capacity, through learning and self-improvement, to out-think any biological human.

Aside from locking it in a bunker with no internet access and a 1-bit (yes/no) output mode (and I'm not sure even that would work, given the strategic incentives to try to use such an AI more extensively), how could such an AI possibly be controlled by humans?

EDIT: I'm not assuming the AI will be evil and go out of its way to harm us out of pure malice or hatred. The issue is simply that we can't foresee the long-term consequences of any set of built-in motivations and/or goals we might endow this being with. In his book Superintelligence, Bostrom outlines just how easily benign, plausible-sounding goal/value specifications could result in mankind being wiped out.

This post was sourced from https://worldbuilding.stackexchange.com/q/6340. It is licensed under CC BY-SA 3.0.
