What directive would cause an ASI to put everyone in a benevolent Matrix?

I'm trying to figure out a way to create a situation much like the apocalypse of Turry from Robotica, but which involves all the people on Earth being either temporarily or permanently strapped into a version of the Matrix without noticing the transition. Specifically, I'm looking for a simple good-intentions directive to give a budding ASI that would get misinterpreted into producing that situation either as the ultimate end or as a side effect of another goal.

It's not necessary to keep everyone's bodies; mind uploading to some kind of matrioshka brain and then nuking the planet is perfectly acceptable. Few if any of the people inside the system should be able to tell that the transition even happened, and nobody should have either advance warning or definitive memories of the event. I'm thinking the uploading/uplinking would take place on an individual level over a period of roughly 48 hours as people go to sleep, with some creative aerosol drugging of those who need encouragement, but that's just me.

The simulation should, by default, be almost mundane in every way - neither malicious nor an automatic solution to world hunger. Everyone must be in the same simulation - there can't be individual worlds for individual people. Of course, the story I have in mind revolves around at least one person figuring it out and playing with the system to cause all sorts of mayhem, so...

Edit: Developing and maintaining a simulation would likely require a significant energy input, and this one has to be a (near-)perfect simulation of the real world with no obvious modifications for "optimum happiness" or the like. It seems to me that for the ASI to deem this the optimum course of action, it would need to be optimizing for something else entirely unrelated. That "something else" is what I'm looking for here.

This post was sourced from https://worldbuilding.stackexchange.com/q/13696. It is licensed under CC BY-SA 3.0.

1 answer

If you think about it, even something as positive as Asimov's Three Laws could lead to something like this, much as they did in the I, Robot movie.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

If the ASI didn't consider uploading/hooking up to the Matrix as injury, then it could safeguard the consciousnesses of all the people and keep them from harm. It could even be for a good reason: the ASI is hooked into all the telescopes around the planet and discovers an asteroid that will impact Earth in 20 years. It projects Earth's technology forward and decides there will be no way to stop this rock from wiping out a huge part of the population and plunging the planet into an endless winter. It cannot allow the humans to come to harm, so it makes a plan to keep them safe, at the expense of their bodies. An ASI might not see the hardware (bodies) as important if there is a backup of the software (minds). It might even have a plan to regrow the bodies once the emergency is over.

Of course, once it finishes with this plan, there are technically no humans left to give it any orders, so the Second Law no longer applies and no one can order the ASI to reverse what it has done.

Edit:
The big issue is how to upload billions of people within 48 hours without anyone finding out ahead of time or being able to avoid it. You could have automated factories churning out billions of robots that gas and upload every human over two days, but there is a simpler way. The computer sets an implementation timestamp and, when it starts uploading people, doesn't upload any memories recorded after that timestamp, so as far as anyone inside the simulation is concerned those events never happened. That way it could take several weeks to find everyone on the planet if need be, and if you allow a few post-timestamp memories to be retained, you can even get some people asking questions. For best effect, the simulation could open with some kind of global catastrophe immediately after the timestamp to explain the few "deaths" of anyone who was lost before the uploading. This simulated catastrophe could then be followed by simulated events leading to a simulated utopia; people would be too distracted by these events to think to question reality.
And what you don't remember won't hurt you, so the First Law is preserved.
