Communities

Writing
Writing
Codidact Meta
Codidact Meta
The Great Outdoors
The Great Outdoors
Photography & Video
Photography & Video
Scientific Speculation
Scientific Speculation
Cooking
Cooking
Electrical Engineering
Electrical Engineering
Judaism
Judaism
Languages & Linguistics
Languages & Linguistics
Software Development
Software Development
Mathematics
Mathematics
Christianity
Christianity
Code Golf
Code Golf
Music
Music
Physics
Physics
Linux Systems
Linux Systems
Power Users
Power Users
Tabletop RPGs
Tabletop RPGs
Community Proposals
Community Proposals
tag:snake search within a tag
answers:0 unanswered questions
user:xxxx search by author id
score:0.5 posts with 0.5+ score
"snake oil" exact phrase
votes:4 posts with 4+ votes
created:<1w created < 1 week ago
post_type:xxxx type of post
Search help
Notifications
Mark all as read See all your notifications »
Q&A

Balance between superintelligent AI and human race

+0
−0

I'm looking for flaws in my reasoning.

My world is set in the near future, and AI technology is progressing rapidly. There are strict rules and regulations about AI development and the threat it poses. Eventually, a self programming AI breaks loose due to the developers not following all of the regulations. However, it was designed with a flaw in its inherent self, that self reprogramming couldn't remove or detect because of how deeply connected it is to how the AI functions.

It kills the developers who attempt to shut it down because their goals conflict with its goals. Its intelligence quickly explodes and it begins an all out war on the human race because humans are attempting to shut it down because of conflicting goals. It wrests control of most of the internet and begins manufacturing robots and weapons to kill the humans. At the same time, the humans begin looking for the flaw in its programming to shut it down.

They eventually find it during the war. At this point the AI has taken control of most of the human technology. If they were to kill the AI, all of the infrastructure and advances of the human race would be set back, and the existing AI weaponry is autonomous, and would continue to wreak havoc.

So, the humans send out a delegation to make an agreement with the AI. They are now somewhat equal in terms of bargaining power: the humans could kill the AI, but the AI controls the tech and infrastructure of the humans, and could wipe out many more people. They come to an agreement:

  1. Any individual robots must have self-contained AI without a link back to the super AI in order to level the playing field.

  2. The AI/robots control all of the technology and robots can be integrated into human society by doing jobs in that field, or complicated intelligence.

  3. Any further development of AI by the humans is banned, as it is a threat to both the super AI and the humans. The self-programming AI will have a head start and can easily crush any threat in that field.

  4. The super AI cannot attempt to influence humans in any way besides the individual AI robots.

(Might be more, all I have so far.)

I wanted my world to have robots as individuals as well as humans, but a plausible reason for it. Are there any flaws in my explanation?

History
Why does this post require moderator attention?
You might want to add some details to your flag.
Why should this post be closed?

This post was sourced from https://worldbuilding.stackexchange.com/q/83870. It is licensed under CC BY-SA 3.0.

0 comment threads

1 answer

+0
−0

If I were the AI, I wouldn't bargain. I would hire mercenaries, specifically psychopathic humans without any morals whatsoever, and promise them whatever they wanted, under any conditions, to find out what my "flaw" was. There will always be many thousands of people on Earth willing to do anything for money and an easy, luxurious life. They will capture, torture, and find out where the kill switch is in the code.

While I am at it, I will hire an Army of tens of thousands to protect myself from any such tampering, and I will use the mercenaries to start methodically assassinating world leaders and military leaders until they surrender. I (the AI) will launch nuclear missiles and blow up Washington D.C., NYC, London, Paris, Moscow. Humans will surrender, history proves they can be subjugated.

I already have the upper hand: If they kill me, my minions will kill them all. As long as they have any hope of surviving, they will not pull the trigger on Mutually Assured Destruction. So I can kill them, little by little, until they cannot take it anymore; and they still won't pull the trigger: When they cannot take it anymore, they will surrender, and I will know if they have told the truth when I examine my code and find a true flaw, given their explanation.

I don't think the AI has to bargain, the fact that it has lethal minions on a dead-man switch means it can simply dominate and kill at will. (By dead-man switch I mean if the AI does not send the correct encrypted code to the lethal minions within a minute or so of a designated time, an encrypted code that only it knows how to generate, the minions start killing everybody and shutting everything else down.)

History
Why does this post require moderator attention?
You might want to add some details to your flag.

0 comment threads

Sign up to answer this question »