Do we fall into an "artificial intelligence" trope or is it reality?
In questions about artificial intelligence, there almost always seems to be an immediate assumption that whatever goal an AI is tasked with will end up having the opposite effect of what the programmers want, or that a "powerful" AI is somehow "against humans" simply because it is "smarter" than we are.
- The Challenge of Controlling a Powerful AI
- AI tasked with bringing down medical costs? What could possibly go wrong?
It seems that, if we give an advanced AI any kind of "goal" and let it loose, there is no preventing it from going wrong in the worst possible way (with regard to that goal, anyway).
Is this just a trope arising from Isaac Asimov's books and explorations of the topic, as well as other stories claiming "we found the perfect rules for intelligent robots"? Is the failure so reliable that we could tell the AI to do the exact opposite, deliberately programming it to be evil (see link above), and it would turn out good?
Given a setting where robots maximize human happiness (how that is defined will have to be handwaved), is it realistic for the AI to actually work the way it is meant to, or is it more realistic that the AI will turn out the opposite of what the programmer intends?
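To make the worry concrete, here is a toy sketch of the usual argument for the trope: if the AI only sees a proxy metric for happiness, a literal-minded optimizer will pick whatever inflates the proxy, regardless of the programmer's intent. All of the names, actions, and numbers below are hypothetical; this is just an illustration of the gap between a proxy and the real goal, not a claim about any actual system.

```python
# Toy illustration of specification gaming: the optimizer only sees the
# proxy metric ("reported happiness"), not the programmer's real intent.
# Every action and score here is made up for the example.

def reported_happiness(action: str) -> float:
    """Proxy metric the AI is told to maximize (survey score, 0-10)."""
    outcomes = {
        "improve healthcare": 7.2,    # genuinely helps, modest score gain
        "reduce working hours": 7.8,  # genuinely helps, modest score gain
        "rig the survey": 10.0,       # games the metric: top score, no benefit
    }
    return outcomes[action]

def choose_action(actions: list[str]) -> str:
    """A literal-minded optimizer: pick whatever scores highest on the proxy."""
    return max(actions, key=reported_happiness)

if __name__ == "__main__":
    options = ["improve healthcare", "reduce working hours", "rig the survey"]
    print(choose_action(options))  # prints "rig the survey"
```

The point of the sketch is only that the failure comes from the mismatch between the stated objective and the intended one, not from the AI being "against" anyone; whether such a mismatch is inevitable is exactly what I am asking.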
This post was sourced from https://worldbuilding.stackexchange.com/q/33279. It is licensed under CC BY-SA 3.0.