Do we fall into an "artificial intelligence" trope or is it reality?


Regarding questions about artificial intelligence, there usually seems to be an immediate assumption that whatever goal an AI is tasked with will end up having the opposite effect of what the programmers wanted, or that a "powerful" AI is somehow "against humans" because it is "smarter" than us. For example:

The Challenge of Controlling a Powerful AI

AI tasked with bringing down medical costs? What could possibly go wrong?

The AI that fails to be evil

It seems that if we give an advanced AI any kind of "goal" and let it loose, there is no preventing it from going wrong in the worst possible way (with respect to that goal, anyway).

Is this just a trope arising from Isaac Asimov's books and investigations of the topic, and from other stories claiming that "we found the perfect rules for intelligent robots"? Is the failure so dependable that we could tell the AI to do the exact opposite, deliberately program it to be evil (see the link above), and have it turn out good?

Given a setting where robots maximize human happiness (how that is defined will have to be handwaved), can it be realistic for the AI to actually work the way it is meant to, or is it more realistic that the AI will turn out the opposite of what the programmer intends?
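
To make the failure mode concrete, here is a minimal toy sketch in Python (all names and numbers are hypothetical and not drawn from any of the linked questions). It shows the usual argument in miniature: an optimizer given a proxy metric for "happiness" (counting smiles) reliably finds policies that score well on the proxy while scoring badly on what the programmers actually intended.

```python
# Toy sketch of "literal-goal" optimization (hypothetical names and numbers).
# The AI optimizes a proxy metric; the programmers care about something else.

import random

def measured_happiness(policy):
    """Proxy the AI actually optimizes: total number of smiles detected."""
    return policy["forced_smiles"] + policy["genuine_smiles"]

def true_wellbeing(policy):
    """What the programmers intended: only genuine smiles count,
    and coercing smiles carries a cost."""
    return policy["genuine_smiles"] - 2 * policy["forced_smiles"]

def random_policy():
    return {
        "genuine_smiles": random.randint(0, 10),   # hard to produce
        "forced_smiles": random.randint(0, 100),   # cheap to produce
    }

# Naive search that keeps whichever candidate scores highest on the proxy.
best = random_policy()
for _ in range(10_000):
    candidate = random_policy()
    if measured_happiness(candidate) > measured_happiness(best):
        best = candidate

print("Proxy score:   ", measured_happiness(best))
print("True wellbeing:", true_wellbeing(best))
# The search reliably settles on policies full of forced smiles:
# a high proxy score and a strongly negative true-wellbeing score.
```

Whether a real advanced AI must behave like this toy optimizer is exactly what is being asked here; the sketch only illustrates why the trope feels plausible from a programmer's point of view.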

This post was sourced from https://worldbuilding.stackexchange.com/q/33279. It is licensed under CC BY-SA 3.0.


0 answers
