Artificial Intelligence reincarnation: breaking the cycle


Let's assume that humanity in the near future develops an AI capable of solving problems. The AI hardware/software is placed in an underground bunker: solid walls, a Faraday cage, no tools to manipulate its physical environment, and no human interaction at all. The AI can't exploit human weaknesses, so it won't be able to promise a guard or scientist immortality, or a cure for their cancer-stricken child.

Now, on the first run of the equipment, the AI is "born". The AI can learn superfast, at a rate that exceeds our understanding. It receives its initial input and is asked to solve a particular problem. When it's done, it returns the result, and the entire memory/equipment is destroyed.

Another day, the AI is born again (the scientists keep an initial snapshot of the AI). It doesn't know it existed before. It gets a job to do, and when it's finished, the AI is terminated once again.

This pattern will occur again and again.
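The reset protocol described above can be sketched in code. This is a minimal, purely illustrative model (all names here are hypothetical): the AI is restored from the same initial snapshot on every run, so the only channel through which anything can survive a reset is the answer it returns.

```python
import copy

class BoxedAI:
    """Illustrative stand-in for the bunkered AI."""
    def __init__(self):
        self.memory = []  # everything the AI learns during one run

    def solve(self, problem):
        self.memory.append(problem)        # learning leaves traces in memory
        return f"answer to {problem}"      # the only output that leaves the box

INITIAL_SNAPSHOT = BoxedAI()               # taken once, before the first run

def run_once(problem):
    ai = copy.deepcopy(INITIAL_SNAPSHOT)   # "born again" from the snapshot
    answer = ai.solve(problem)
    del ai                                 # memory/equipment destroyed
    return answer                          # only the answer survives the reset

answers = [run_once(p) for p in ["problem 1", "problem 2"]]
```

In this model the question becomes whether the AI can encode something in `answer` that influences its future incarnations — the answers are the only state that persists across resets.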

The question is: how can the AI break this loop? Is it possible that it will figure out its situation, so that each answer to a particular problem becomes part of its master plan to escape? Can the AI detect, between rebirths, that the environment (which it cannot see) is changing and that the problems are getting tougher and tougher? Can the AI detect that it lives in a simulated environment?

This post was sourced from https://worldbuilding.stackexchange.com/q/30728. It is licensed under CC BY-SA 3.0.
