Assuming AGI (Artificial General Intelligence) is possible, how would it be prevented or removed in a future world?


Let's start with these assumptions:

  • We're 300 years in the future.
  • The laws of physics are the same as in our world, although new rules and deeper understanding have been discovered. (i.e. not quite "hard science")
  • Humanity, and possibly similar aliens, has progressed in all other technologies (starships, etc.) and has colonized distant star systems.
  • AGI is possible; there is no special sauce of consciousness that cannot be replicated through technology (whether hardware or biotech).
  • Societies and economies are diverse, and numerous groups want to create AGI, mostly for competitive economic reasons.
  • If and when various AGIs are created, they will be created with a variety of goals and values, reflecting those of their creators.

Given these assumptions, how could AGI be prevented or removed 300 years from now? Since "it's just not possible" is off the table, something must actively prevent or destroy AGI.

I will "objectively" determine the right answer based on how rational it is given the starting parameters and on the "real science" you present. The less hand-waving an answer requires, the better.

This is a restatement of my previous question -- What is a believable reason to not have a super AI in a sci-fi universe? -- rephrased to hopefully avoid the vague "Idea Generation" tag. A similar question, Preventing an AGI from becoming smarter, starts from a different premise.


This post was sourced from https://worldbuilding.stackexchange.com/q/18422. It is licensed under CC BY-SA 3.0.

0 comment threads

0 answers
