Assuming AGI (Artificial General Intelligence) is possible, how would it be prevented or removed in a future world?
Let's start with these assumptions:
- We're 300 years in the future.
- The laws of physics are the same as in our world, although new rules and deeper understanding have been discovered (i.e. not quite "hard science").
- Humanity, and possibly similar aliens, have progressed in all other technologies (starships, etc.) and have colonized distant star systems.
- AGI is possible; there's no special-sauce of consciousness that cannot be replicated via technology (either hardware or biotech).
- Societies and economies are diverse, and there are numerous groups that want to create AGI, mostly for competitive economic reasons.
- If/when various AGIs are created, they are created with a variety of goals/values, depending on their creators.
In 300 years, how could AGI be prevented or removed? Since "it's just not possible" is not a valid answer, there needs to be something that actively prevents or destroys AGI.
I will "objectively" determine the right answer based on how rational it is given the starting parameters and on the "real science" you present. The less hand-waving an answer needs, the better.
This is a restatement of my previous question -- What is a believable reason to not have a super AI in a sci-fi universe? -- in the hope of avoiding the vague "Idea Generation" tag. A similar question, Preventing an AGI from becoming smarter, starts from a different premise.
This post was sourced from https://worldbuilding.stackexchange.com/q/18422. It is licensed under CC BY-SA 3.0.