How would society change around a benevolent Superintelligent AI?
Let's make the following assumptions:
- Artificial Superintelligence (ASI) is seen as inevitable, much like the Y2K rollover was.
- People assumed that the first ASI would have the power to prevent any other ASI from being created, and that if they remained inactive, someone else might create one accidentally.
- To be safe, some think tanks designed an ASI to be benevolent toward mankind before anyone else could create a malevolent one.
- This ASI is not 'imprisoned' or threatened in any way; people realized it would be more intelligent, more powerful, and more persuasive than them. It 'exists' in the real world as well, capable of cloning itself, backing itself up, building machines, and otherwise affecting the physical world.
- The ASI is powerful beyond our comprehension, almost godlike. It gives us answers before we even think of the questions, and it can create anything as long as it has the resources.
It's likely it would be a ruler over people, if not worshipped as a god.
It would most likely make all computation trivial. It would do difficult things like complete all biological research overnight and cure all cancers within a year. It would drive productivity in factories and farms to its maximum. People would have no shortage of manufactured resources (including food): no waste, no logistics problems or late deliveries, no failed rocket launches or car accidents. It could even calculate the probability of a marriage failing, or catch criminals the moment they commit a crime.
So what would be the point of humanity if an AI solves all their problems? What would people dedicate their lives to? How would society function?
This post was sourced from https://worldbuilding.stackexchange.com/q/23763. It is licensed under CC BY-SA 3.0.