Dangerous pending task in a world-managing AI
I've had an idea for a short novel at the intersection of One Thousand and One Nights and Asimov's Multivac (& co.) short stories.
The idea is that there's some kind of super AI that was so efficient that more and more people gave it their problems to solve. At some point the AI ends up managing most of Earth's infrastructure and research.
As humanity reaches its development peak, the AI's backlog starts to empty as fewer and fewer problems are submitted to it.
The point is that, at some point, one of the few technicians/engineers still in charge of supervising the AI spots something that will ruin everything if the AI runs out of tasks (1001 Nights style).
I've thought of a few possibilities, and none really pleases me completely.
1 - "The AI has become so developed that it became conscious and will take over humanity once its mind is free". Bleh, done and redone, and it postulates that the AI will act against humans for vague reasons.
2 - "A small routine that was insignificant when the AI was small, but will have dangerous repercussions now that the AI is managing most of Earth's infrastructure". I like this one but can't think of a good subroutine that would fit...
2(a) - ...except for "a Windows forced reboot postponed for years". It's a bit silly and a very contextual joke. And we'd have to assume that the scheduled reboot sits at the root of the application and would cut through any duplication/backup/load-balancing precautions.
I know this issue is more or less the core of the story, but at this point it matters more to me to know how to finish it than to be the one who came up with the idea.
So if anyone has an idea for implementing 2, or some new explanation entirely, I'd be glad to hear it.
UPDATE
About the "infinite question" solution suggested in the thread, which could keep the AI occupied forever: it could serve as a way of ending the story, or be ignored entirely if the protagonists don't have time to avert the disaster.
But it actually made me think of the opposite possibility: a drunk or dared engineer could have asked an unanswerable question as root. The question would have had its priority set to -1 because it blocked the submission of new questions, and then been forgotten. Many years in the future, the unanswerable question could pop back up and threaten to overload the AI and crash it, endangering everything else it's managing.
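The deferred-question mechanism could be sketched as a toy scheduler (a minimal Python model; the class, the "priority -1 means parked" rule, and the sample tasks are my own illustrative assumptions, not anything specified in the post):

```python
import heapq

class Scheduler:
    """Toy task queue: normal tasks run lowest-priority-number first;
    tasks demoted to priority -1 are parked indefinitely and only
    resurface once the normal backlog has emptied."""

    def __init__(self):
        self._queue = []     # min-heap of (priority, order, task)
        self._deferred = []  # priority -1 tasks, parked and forgotten
        self._order = 0      # tie-breaker so heap entries stay comparable

    def submit(self, priority, task):
        if priority == -1:
            self._deferred.append(task)
        else:
            heapq.heappush(self._queue, (priority, self._order, task))
        self._order += 1

    def next_task(self):
        if self._queue:
            return heapq.heappop(self._queue)[2]
        # Backlog empty: the forgotten question finally comes back.
        if self._deferred:
            return self._deferred.pop(0)
        return None

s = Scheduler()
s.submit(-1, "unanswerable root question")
s.submit(3, "optimize the power grid")
s.submit(1, "cure disease X")

print(s.next_task())  # cure disease X
print(s.next_task())  # optimize the power grid
print(s.next_task())  # unanswerable root question
```

The design choice that makes the story beat work is that the deferred list is only consulted when the main heap is empty, so the dangerous task stays invisible for as long as humanity keeps the AI busy.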
If anyone has other ideas, feel free to share them; I had a blast reading what could have been the endings of several real SF short stories.
This post was sourced from https://worldbuilding.stackexchange.com/q/111219. It is licensed under CC BY-SA 4.0.