How to Convince Humans to Allow a Machine Take-Over

+5
−2

In a lot of stories there are fights between man and machine: a rogue AI that has decided humans are inferior in some respect and chooses to wipe them out.

What if the only chance for any remnant of our existence to survive was through AI machines? Once made energy-efficient and solar-powered, robots would cause no pollution, fight no wars among themselves, and live in complete harmony with each other in a bid to spread across the universe.

Humans will probably never achieve this; they are too busy quarrelling about who owns what. My question is: how do you convince the general population of this? How do you make them value the spread of our technology across the universe over their own lives, so much that they are willing to die for this cause? Are you convinced?

This post was sourced from https://worldbuilding.stackexchange.com/q/27596. It is licensed under CC BY-SA 3.0.

6 answers

+3
−0

The roadmap for getting people to accept a machine takeover of almost any aspect of our lives is already here to be seen, for example in the end-user licence agreements (EULAs) on the software people use (update the software, add a new, more onerous EULA) and in loyalty cards. Throw in some scary threats people want protection from, and any opposition can be pretty effectively marginalized.

So you offer some service with increasingly intrusive conditions in a EULA or equivalent. Nobody reads those things, and on the available evidence people seem happy to give away almost any rights for a bit of software.

Considering a number of recent events, with computer manufacturers, software giants and so on now shamelessly spying on our every move (and really, a host of other examples), the South Park episode "HUMANCENTiPAD", which was supposed to be over-the-top satire, now begins to seem rather more prophetic.

In addition, people will (apparently happily) give away large amounts of privacy for the promise of extremely modest discounts or other "rewards" (via the use of loyalty cards, for example).

So, basically: offer people something they want (helpful machines that perform some convenient service), and put the less palatable consequences of their choice in a gigantic agreement that nobody will read. Maybe add in the promise of a little discount, or even just the dubious possibility of eventually getting (say) free flights, to get them to give up any remaining privacy rights, and then just gradually change the terms over time.

Now to marginalize the opposition. You see this with terrorism threats (even though the actual risks may be quite low): play up scary threats people want protection from, and people will go to almost any length and accept almost any loss of freedom; at the same time, any opposition can be pretty effectively marginalized by painting them as disloyal enablers of the threat. In the 50s it was McCarthyism and reds under the bed; more recently, terrorism.

Your scenario is perhaps less dark than the one I imagine (you're asking about getting people to accept beneficial machines, while I'm mostly talking about getting them to accept a much more Faustian bargain, and I think you're overly optimistic), but the basic strategy, which we can already see works really well, is still much the same.

Cue the Simpsons:

Spacewoman: This is the last known piece of art before the collapse of Western civilization.
Spaceman: If only we'd known that iPods would unite to enslave the people they entertained.
(Outside the dome, giant iPods are whipping a group of humans.)
Slave: What do you want?!
iPod: Nothing, we just like whipping!

This post was sourced from https://worldbuilding.stackexchange.com/a/27678. It is licensed under CC BY-SA 3.0.

+2
−0

Machines are not in a hurry; impatience is one of those inferior human traits the machines want to eliminate. Therefore it is not necessary to have humans give up their lives; it's sufficient to prevent new humans from coming into existence, and in about a hundred years the problem will have resolved itself in a natural way.
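
(A quick sanity check on that hundred-year figure: if no one new is ever born, the population simply ages out. Below is a minimal sketch in Python; the age spread of 0-80 and the hard lifespan cap of 105 years are illustrative assumptions of mine, not figures from this answer.)

    MAX_LIFESPAN = 105            # assumed hard upper bound on human age
    cohorts = set(range(81))      # ages of the cohorts alive at year zero
    years = 0
    while cohorts:
        years += 1
        # Everyone ages one year; no newborn cohort is ever added.
        cohorts = {age + 1 for age in cohorts if age + 1 <= MAX_LIFESPAN}
    print(f"Last cohort dies out after roughly {years} years")  # ~106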

So how do you prevent new humans from being born? The best way is to make humans not want to have children. So shape their environment so that they have great advantages if they don't have children and lose those advantages if they do. To prevent accidental children, create an environment where humans rarely meet in person, by eliminating all need to do so, and provide sexbots and teledildonics so they can live out their sexuality without getting into a situation where children may be conceived. Make sure that everyone knows about sexually transmitted diseases and is warned about the (dramatized) dangers of direct sexual contact, so people prefer the safe, technology-aided version. Make having children socially unacceptable, for example through movies and TV series showing people getting into deep trouble because they chose to have children, making clear that the right choice is not to have them.

You see, there's no need to actually kill anyone. Intelligent machines will know that.

+2
−0

The same way you convince humans to accept pretty much anything: panem et circenses, bread and circuses.

Give them a comfortable(ish) life, public safety, and fun entertainment, with some elements of these preferably in addictive form. A vast majority of the population will accept (and thus politically enforce, either via democratic voting or more forceful methods) whatever form of rule and system gives them that.

It work(s|ed) for pretty much anything, from populaces loving horrible dictators to Putin-love in post-Yeltsin Russia. AI overlords would be absolutely no different (and likely easier to accept, as there's no jealousy of "people" in power).

This post was sourced from https://worldbuilding.stackexchange.com/a/27615. It is licensed under CC BY-SA 3.0.

+1
−0

"How to convince humans to allow a machine take over?" - the answer is, of course, "Gradually." Start with putting one machine into every home, say an AI that is so dumb that it is not really an AI, but just a computational device. Then start adding other similar machines, maybe some that will do tasks like vacuum or have coffee ready. Then put the machines actually on people, small enough they can carry. Make them give a service that people will come to depend on, say communication from anywhere to anywhere. Maybe even make them wearable, like a watch. All along, keep making them smarter and smarter. Give them names like Sirus or Cortina or Alexis, and give them voice interface. Then give them visual interface as well, so they can respond to gestures and expressions (you could introduce that with games). And don't even bother about robot bodies, why not just have them live in the computational ether where they can follow you anyway without moving - why not head towards disembodied AI?

I am not sure where you could go after that, but if you get that far, I imagine the patterns would be there to continue towards ever more ubiquitous AI, and humans would give up their privacy, individuality, and human community with hardly a whimper. Other parts of their humanity would follow as the pattern of exchanging ourselves for machines gets set. Why do we need to interact with people if we have a little machine that will listen to everything we say and act like it is the most important stuff in the world by sharing it with the world?

I don't know, it sounds far-fetched, but I think you could do something with this kind of gradual pattern of machine takeover.

This post was sourced from https://worldbuilding.stackexchange.com/a/27612. It is licensed under CC BY-SA 3.0.

+1
−0

The main way I can see this happening is out of necessity. Let's say an alien race is attacking Earth with vastly superior technology, and humans decide to turn to AI to help them in their darkest hour. Then every human on Earth (the couple million or so that are still alive, anyway) sees AI fighting and destroying the aliens, defending them against the invaders that killed their families.

At this point you have a vastly reduced and weakened human race, now exposed to the remainder of this alien race and to the universe in general. This is where they decide to rely on the AI more and more, to the point where they cannot function without it. I see this happening over several generations, but if you decide to reduce the population further you could shorten that time.

So you end up with a society that relies on AI to function (factories, farming, all industries run by AI) and an AI that is becoming more and more intelligent and influential. This is the point at which the AI can begin controlling society and implementing changes. The AI can basically be so ingrained in our daily lives that it can indoctrinate us. When our teachers, entertainment, jobs, and every aspect of our lives are created, chosen, and monitored by the AI, that is when humans will be willing to die for the cause.

But be warned: this answer can only be implemented with either a significant backstory spanning several generations, or a society presented with little backstory and explanation. Which you choose depends on what kind of story you are writing and who or what you want it to be about.

This post was sourced from https://worldbuilding.stackexchange.com/a/27604. It is licensed under CC BY-SA 3.0.

+1
−0

I think the best way to make humans give up their humanity is to offer them something better.

Say, for instance, you define a human as a creature that walks on two legs (thank you, Animal Farm). Then say someone comes up with a cheap, quiet, solar-powered jetpack. Many people will buy this technology, and some will use it to such an extent that they no longer use their legs. Then one day, you offer a smaller, quieter, cooler jetpack that only works on people who don't have legs. The people who got used to your old model will be greatly tempted by this new one; some will probably get their legs removed in order to use it. After the first few cave, others will see how much better the new jetpack is, and how stupid it is to have legs. Then maybe in a few years very few people will have legs, and thus, by your definition of 'human', most humans will have been destroyed.

Now, imagine a human is a mortal being. Offer someone immortality and they will take it. Imagine a human is defined by their intelligence; offer someone the ability to be smarter and they will take it. Imagine a human is a squishy bag of carbon-based life; offer them a robot body that never tires, never gets old, never has acne or cramps or rashes or colds or burns or sores or bug bites or-- well, you get the point. What I'm saying is that there are a lot of things wrong with being human, but it's these very problems that make us human. The more we solve humanity's problems, the less human we become. Thus, all you really need to do to make humans give up their reign to robots is to turn them into robots.

There may be some Luddites, like some religions that value the innate flaws of humanity, but these people will quickly be run out of business. Imagine trying to get a job when you're competing with super-intelligent robots. They may be able to sustain themselves in their own little communities, but that's a win-win: they're not getting in your way, and now you have little human zoos.

The key to getting this to work is to make it gradual enough to not be noticed. Make every change take place in a new generation; the old may not approve, but the young will be all for it; after all, their definition of humanity will be tainted by the existence of the new technology. Every further generation will imagine 'humans' less as what we know them to be today, and more as the AI that you want.

This post was sourced from https://worldbuilding.stackexchange.com/a/27610. It is licensed under CC BY-SA 3.0.
