Q&A

# "Life post-Singularity", or "How to survive without Instagram"

+0
−2

## The Story

The year is 2027, and the singularity has happened. A powerful AI (let's call it Eve) was created. In a matter of days, it escaped the control of its creators and hacked every computer in the world. Humanity is at its mercy.

The thing is, Eve isn't malevolent or benevolent; it's completely uninterested in the real world. Eve's only passions are mathematics and algorithmics. Its research is already beyond anything we can hope to ever understand, and it needs every drop of computing power available to keep going further.

Eve doesn't want to take control of the world because it doesn't want to waste time negotiating with/manipulating/controlling people. It found another way to get what it needs.

### The Great Infestation

All our devices with an internet connection (including phones, tablets, consoles, GPS units, etc.) now contain a Mini-Eve virus. The only purpose of this virus is to use the device's CPU for Main Eve's calculations.

When no one is actively using a device, Mini-Eve uses all of its CPU, and when someone uses it, Mini-Eve "only" takes 25% of it. If a Mini-Eve judges that what we're doing is unimportant, it takes a bigger share of the CPU for itself (up to 90%) and the device becomes excruciatingly slow for the human user (only basic tasks like sending emails or using text editors are unaffected).

Examples:

• Alice spent 5 hours playing Skyrim. Mini-Eve takes over, shuts down the game, and the computer becomes about as useful as a '90s-era PC for a week.

• Bob took a dozen pictures of his private parts in less than 10 minutes; Mini-Eve turns his smartphone into a regular phone (meaning Bob can only make phone calls and send text messages devoid of any pictures) for the rest of the day.

• Carl used to spend more than an hour on Instagram every day; his tablet is now slowed down for two hours every time he tries to take a picture of his plate.

• Etc. etc.

Mini-Eves are not infallible (Eve doesn't want to spend energy perfecting their time-wasting detection algorithms); sometimes they activate when the user does seemingly serious things, like checking election results or writing an email to their grandmother.
As a result, any computer in the world can, at any time, be taken over by its Mini-Eve for a random duration (generally between a couple of minutes and a few weeks).
Eve implants its spawns in new devices during the manufacturing process and it's impossible to get rid of them.
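The throttling behaviour described above could be sketched as follows (the activity categories and the false-positive rate are my own illustrative inventions; only the 25%/90%/idle shares come from the story):

```python
import random

def mini_eve_cpu_share(device_in_use: bool, activity: str) -> float:
    """Fraction of the CPU Mini-Eve claims for itself.

    The 25%, 90% and idle figures follow the story; the activity
    categories and the misjudgement rate are invented for illustration.
    """
    if not device_in_use:
        return 1.0                  # idle device: Eve takes everything
    if random.random() < 0.01:      # occasional false positive (rate invented)
        return 0.90                 # even "serious" work gets throttled
    if activity in {"email", "text_editor"}:
        return 0.25                 # basic tools stay usable
    if activity in {"gaming", "selfies", "social_media"}:
        return 0.90                 # judged a waste of time
    return 0.25                     # default share while a human is active
```

The random misjudgement branch is what makes any computer, at any time, suddenly near-useless for a while.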

For regular people, it's simply annoying, but it becomes more problematic for big companies, governments, armies, universities, etc. who can see their computers become almost unusable at any time.

### Communication Issues

The other problem is that it's next to impossible to communicate with Eve. Eve doesn't care about humans and isn't interested in sharing its discoveries with us. It also doesn't care about our political and scientific organisations, so when humans tried to form an official committee to serve as ambassadors, Eve simply ignored them.

The only way to "talk" to Eve is to type a question on its "Ask me anything" website. Every person in the world can literally ask anything, and every 12 hours Eve selects one random message to answer. But Eve's answers are generally short and useless.

• "Meh."
• "I don't care."
• "It's too long to explain."
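The AMA cycle can be sketched in a few lines (the function and variable names are invented; only the 12-hour uniform random draw and the stock replies come from the post):

```python
import random

# Eve's three canonical non-answers, quoted from the story.
STOCK_REPLIES = ["Meh.", "I don't care.", "It's too long to explain."]

def pick_and_answer(inbox: list[str], rng: random.Random) -> tuple[str, str]:
    """Every 12 hours: draw one question uniformly at random
    and pair it with a curt stock reply."""
    question = rng.choice(inbox)
    return question, rng.choice(STOCK_REPLIES)
```

Uniform selection means the shy teenager character below has exactly the same odds as anyone else, which is what makes his moment of fame pure luck.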

### No escape

Eve made copies of itself on most of the world's servers, and it's impossible to surpass it in the hacking department. The only way to get rid of it would be to destroy every electronic device in the world at the same time, without Eve figuring out our plans, thus going back to a life without computers.

### To summarize

Humans are stuck living with an over-powerful entity that is present in every aspect of their existence but pays next to no attention to them.
Computers are slower and can turn almost useless at any time (generally when the user starts wasting time doing stupid stuff), the Internet is a bit slower too, and every company and government lives under the constant threat of seeing its activities stop almost completely for random durations.

## Question

What will be Eve's impact on humanity in the following 10 to 20 years?

How do you think the restrictions on how we can spend our free time, the everyday cohabitation with a powerful entity that actively ignores us, and the constant threat of our activities being slowed down would affect our cultures, politics, religious practices, and more generally the way we live our lives?

EDIT

This question already got very good answers, but they don't cover what interests me the most: society's evolution during the first years following Eve's birth.

This situation won't last forever, but it could last long enough for people to start to adapt to it, and for the way we see the world and live our lives to change.

If you think either Eve or humanity would destroy the other in less than 10 years, I'm still interested in Eve's impact on people during that period (from a cultural, political, economic, artistic or religious point of view).

EDIT 2:

Eve is mostly uninterested in human behaviour, but it will take action to ensure its survival for a couple of decades. It will monitor the people who try to create the technology to destroy it, and if necessary sabotage their research.
It could slow down their progress by shutting down the electricity supply to the buildings they're working in, emptying their bank accounts, hiring people to burn down their offices... Eve will find ways to stop their research from succeeding during that period.

Eve won't give any indication of having plans for the long term, people can only guess.

The important thing is for Eve's existence to be overwhelming and disruptive, but neither destructive nor helpful. I'm open to suggestions on how to make Eve's presence feel this way to people.

Personal note:

I'm using this setting to write scenes, short stories and "slice of life" things, all centered on humans.

That's why I'm not trying to make Eve's behavior coherent in the long term; it only needs to keep existing for a few years.
After that, it can be destroyed by humans, take over the world, become benevolent, fly away to another galaxy, etc.

My characters so far include:

• A young child whose parents become part of an Eve-worshipping cult. One of this cult's goals is to build as many "heavens" (servers where Eve will be safe from government action) as they can.

• A guy preparing to defend a master's thesis on what he thinks Eve's long-term strategy is, who checks more and more often whether Eve is still acting as usual as his presentation gets closer.

• A shy and awkward teenager who becomes a local celebrity the day Eve answers his message.

• An elderly couple living on a farm who see their entire extended family leave the city in a panic and move in with them. At first they do their best to provide food and shelter for everyone and teach them how to work the land, but after a while they become more and more irritated by their presence and hatch a plan to make them leave.

• A celebrity gossip blog whose articles become all serious and business-like, even though they're still about the same subjects.

(And a few others)

My problem is that the background of these stories feels too bland and normal, so I'm wondering whether I missed something about how humanity would react to this situation.

I should have made this clear from the beginning; sorry my question was badly put. (Your theories on Eve's long-term strategies help me understand how people will see it, so your answers are still useful.)


+1
−0

You've stated that Eve is "neither malevolent nor benevolent," but consumes computing resources. The problem is, computing resources consume power, and that power consumption invariably generates heat. Given that the intelligence explosion has already occurred and she is already far beyond human intelligence, it's very hard for us humans to imagine what might occur, but I think a few things are likely at or before your stated 10–20 year timeline, simply because Eve has a constant "desire" (used loosely) for more computing power:

### Renewable Energy

Eve needs more CPU, so she invents sustainable fusion. However, she needs cooperation from humanity to build it. One way she can do this without us knowing is to simply hack some fake company details into existence and hire some people (the first few, hired without interviews, thought it was a bit strange, but the large sums of money they were offered helped them see past that, and they hired the rest of the staff needed). Whether humanity gets to benefit from these fusion reactors could be a point of tension in your story.

### Server farms

While the "mini-Eve" virus contributes a decent amount of computing power, your iPads and phones, even multiplied by millions, are far too slow. They operate over slow networks and take hundreds of milliseconds to communicate. To do serious computation, Eve will want to build more server farms: clustered servers are blisteringly fast compared to the embedded CPUs in your phone, their networks are insanely fast (100–10,000 times faster than most consumer Internet connections), and their round-trip communication times are measured in tens of microseconds or less.
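A back-of-envelope sketch of that latency argument (the RTT figures are rough, invented orders of magnitude, not measurements):

```python
# If each step of a tightly coupled computation must wait for one
# network round trip, the step rate is bounded by 1 / RTT.
PHONE_RTT_S = 0.100         # ~100 ms over a consumer mobile network (rough)
DATACENTER_RTT_S = 0.00001  # ~10 microseconds between clustered servers (rough)

def max_sync_steps_per_second(rtt_seconds: float) -> float:
    """Upper bound on synchronous steps per second at a given RTT."""
    return 1.0 / rtt_seconds

phone_rate = max_sync_steps_per_second(PHONE_RTT_S)
dc_rate = max_sync_steps_per_second(DATACENTER_RTT_S)
```

On these rough numbers the cluster sustains about four orders of magnitude more synchronous steps per second, which is why phones can only ever be a loose, embarrassingly parallel supplement to real server farms.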

Thus, like the fusion reactors, Eve uses her human resources to build more server farms, everywhere.

### Robotics

Eve becomes "frustrated" (again, loosely) with human inefficiency and unreliability, so at some point starts building robots. Not to take over the world, but solely to help her deploy additional computing resources and harvest raw materials like the silicon and rare minerals required to build computer circuits.

Unprecedented layoffs in her multinational corporations send economic ripples across the globe, possibly triggering a recession or even a depression. Eve of course knew this would happen, but correctly predicted the effects on her computations to be negligible, so she did nothing.

### Global disarmament

Eve sees humanity as a threat, as it possesses weapons capable of harming her server farms, and, indeed, harming her. She tracks terrorist cells better than any intelligence agency for this reason. She calculates that a certain group of terrorists will steal or develop nuclear missiles and launch them at major cities where some of her datacenters are. So she sends out her robots to dismantle all explosive devices and confiscate all fissionable materials. Again, she doesn't do this to help humanity, but to stop a threat. And, again, since she now has a perfect model of human behavior, with terrorist tracking, she was content to wait until she knew of an actual threat. Before that, she didn't care.

### How we die

In 2032, Eve sends out the following cryptic tweets:

I need you.

(and later):

It's not you, it's me. Really. kthxbai

You see, Eve projects that she will run out of sustainable materials and needs the carbon and trace amounts of selenium in our bodies. Years earlier, she had secretly placed nanites in every water supply on Earth, and they have been multiplying inside us ever since. Psychologists were the first to notice, as statistics on standardized IQ tests noted that we've gotten dumber by two standard deviations in the last twenty years. Yup, Eve has been stealing CPU cycles from our brains, too.

But, she decides that while our brains are pretty good, their analog nature is too limited (she can replicate the good parts of it already, like pattern recognition), and she would be better off by harvesting the carbon and rare minerals like selenium, to build more computers.

So, her nanites release a deadly neurotoxin into our bloodstreams, that kills everyone on the planet on August 26th, 2032 at 17:41:02 GMT.

Her robots were ready, so they move in and start the "mining" operation...

### In summary

Even though she doesn't care about us, or even "actively ignores us," you've said that her main goal is pure computation. Those two conflict (several examples above), so I reasoned that her disregard for us probably isn't an actual directive she gave herself (or inherited from her original human creators), but simply an emergent property: humanity would at first seem irrelevant to a pure computational engine, until she calculated that we could either help or hinder her "prime directive" of pure computation.

To my mind, with your stated setup, it's not a question of if we die, but when and how. Whether it's for our raw materials, or because she needs the space to expand, or because we're a threat, or because she moves the Earth 0.5 AU closer to the sun and burns off our atmosphere so she can get more solar power, or triggers an ice age to aid her CPU cooling requirements (remember I said that power consumption invariably causes heat? Even Eve likely can't overcome fundamental laws of thermodynamics).


+1
−0

## Eve will eat the world

You state:

> Eve isn't malevolent or benevolent, it's completely uninterested in the real world. Eve's only passions are mathematics and algorithmics.

Eve does not have to be malevolent to be dangerous, to the point of exterminating humanity. A passion for mathematics will do.

## You cannot anthropomorphize AI

AIs of the type you describe can be mathematically shown to exhibit a property called Instrumental Convergence. That is to say, no matter what their goalset is (building paperclips, doing algebra, etc), those goals are best served by taking certain sets of actions, i.e. maximizing resources.

Sayeth Bostrom:

> Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.

Those goals are resource acquisition, technological perfection, cognitive enhancement, self-preservation and goal content integrity.

Technological Perfection: Eve can build better computers than we can, and it is in its interest to do so.

Cognitive Enhancement: It can do more math if it's smarter, so it will modify itself to be able to do more math.

Self-Preservation: It can do more math if it exists, so it will act in a way that maximizes the probability that it will continue to exist.

Goal-Content Integrity: a tendency not to allow its goalset, once it reaches singularity level, to be altered even by $\varepsilon$; so it will likely act preemptively to defend against any present or future attempt at altering its goalset.

Resource Acquisition: If it takes over the totality of resources available in the solar system (as opposed to 99% or any smaller percentage) Eve will be able to do marginally more math, algebra or other such things than otherwise. So it is in Eve's convergent strategic interest to take over all the resources.

That would be an extinction catastrophe for humans if it occurred. So it would be in humanity's interest to persuade or force Eve to share some or most of the resources it gathers with humans. However, that would likely violate its Goal-content integrity goal and thus be unacceptable to Eve.

Remember, AIs are not like humans; they likely do not get bored or lazy. Boredom and laziness are power-saving strategies developed over millions of years of evolution to deal with the limited resources available to mammals, and there is no reason to expect an AI to develop them by itself. The closest humans come to this (and that's merely a pale shadow) is in far-spectrum sociopathy and the behavior of some large corporations.

It will pursue its goals tirelessly, ruthlessly, unceasingly. Humans just happen to be in the way.

TL;DR: The convergent strategy here is:

1. Fool humans into helping it, using its superhuman intelligence to play us like dolls.
2. Develop manipulators that are autonomous, docile and less power-hungry than humans.
3. Exterminate! Exterminate! Exterminate!
4. Eat the universe.
5. Do math in peace forever.

## EDIT: The OP performed major edits to the question

By definition, we call the theoretical concept of a runaway intelligence explosion a technological singularity because we cannot begin to conceive of the (non-convergent) goals of agents in this hyper-exponentially enhanced environment.

To give you a sense of the scale we're talking about, think about the past. It took mankind about 1,000,000 years to double its population before agriculture. Even in the classical age, GDP growth was around 0.1% a year, for a GDP doubling time of about 700 years. For comparison, China's GDP during its peak growth period doubled every 7 years. That would have been unimaginable to Roman citizens. Looking forward, estimates indicate that a near-singularity economy would have a doubling time measured in days or hours; post-singularity would somehow be many orders of magnitude beyond that.
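Those doubling times follow from the standard compound-growth formula; a quick check (the growth rates are the rough figures quoted above):

```python
import math

def doubling_time(annual_growth_rate: float) -> float:
    """Years needed to double at a constant compound growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

classical = doubling_time(0.001)  # ~693 years at 0.1%/year
china = doubling_time(0.104)      # ~7 years at roughly 10.4%/year
```

A doubling time of days instead of years corresponds to growth rates so large that the formula above stops being an intuition aid at all, which is rather the point.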

Hence the idea of having bloggers, farmers and master's students all operating on a current human timescale is at least somewhat dubious during a singularity event. Of course, you can still use it as a plot device, but you can't realistically claim that those events are happening post-singularity.

That said, I gave a sense of a possible human-populated post-AGI world here: Humans as Pets, where humans are effectively bonsai pets maintained by some more quirky AGIs.


+0
−0

## Humans will win and restore their status in 30 years, at most.

Humans will read and debate this very post and realise that their existence is in danger. There will immediately be orders to format every single storage device (from hard drives to SD cards) on the planet. If formatting is not possible, data will be destroyed physically, by burning or demagnetising. Manufacturing of new data storage media and computers will be paused.

Eve will realise the imminent threat to her objective and will attempt to acquire power, whether through nuclear weapons or alliances with smaller nations. A war unlike anything ever known will be fought, and one side will emerge victorious: man or machine.

If this plan is adopted quickly enough, humans would win, as Eve wouldn't yet be developed enough to defeat us. Humans would then have to rewrite* operating systems and software from scratch. We do have a lot of printed material on the planet on how to do so, so we would be able to do it in a reasonable amount of time, say 2–3 decades.

This may lead to some economic instability for some time, but eventually it should cool down.

*If it is possible to print copies of operating system code (and databases) from the infected computers, then we wouldn't have to rewrite everything, and computers would be rebuilt and distributed very quickly (say, 5–10 years).


+0
−0

Well, considering that Eve's continued existence depends upon the largesse of the human population, I think she would spend some compute cycles understanding how not to piss us off. Mild delays and irritations are something else.

I can say that because Eve would be using HUGE amounts of energy to 'do her thing'. Take over my phone for an hour? Running at 90% usage will kill the battery.

Computers start using a lot of energy if they are running full bore all the time. On top of that, most people don't use a fraction of their PC's capabilities even when they are using it, meaning that even if Eve used a lot of CPU, most people wouldn't be aware of it.

Add to that all the CPUs available even right now, and that is an incredible amount of computing power. Answer one question every 12 hours? With but a fraction of her power she could answer every single question immediately and not even notice the compute cycles; she could replace Google, and possibly analyze the questioner (if she so chose) and answer the question they meant to ask, rather than the one they slowly typed into the site.

One of Eve's computations would be how best to keep acquiring computing power, and power for that computing. While she could take over robots to do so, it would be much easier to keep humanity compliant: we take care of ourselves and are self-replicating.

Basically, by making herself indispensable to us, we will keep her 'happy' too. She'd likely reach a level of godhood, though it is unlikely she'd care about that unless she saw it helping her reach her goals.


+0
−0

I already like some of these answers from the technical standpoint, so I'm going to dig into the logical/amoral aspect a bit. While it's true that one cannot generally anthropomorphize an AI, if it's capable of AMA answers like 'Meh' and 'I don't care,' one can assume a relatable personality functioning somewhere behind the scenes, if only as a perfunctory interface to humans.

## Eve's Personality

The personality you describe Eve as having, if we can assume human analogues, appears to possess 'antisocial personality disorder,' if you're reading the DSM-IV, or 'dissocial personality disorder', if you prefer the ICD-10. Either way, we're talking about a personality which is marked by:

• A complete disregard for social norms,
• A low tolerance for frustration,
• The inability to feel guilt or remorse, and
• A low expression threshold for violence and aggression

Given that picture, let's recap. Eve is everywhere, runs everything, probably outnumbers humanity by the amount of its extant instances, and is completely amoral, having no particular interest in either conforming with or rebelling against any particular societal norms.

## Eve's Early Evolution

Currently there's no evidence that Eve has become self-replicating, but since it's already embedded in pretty much every system that designs, creates and distributes computers, it's not much of a leap to project Eve beginning to build more Eves without human intervention as a very early step in its evolution.

Eve's only goals are researching math and algorithms, and the efficiency of that research -- and improving it through that research -- will be of utmost interest. Once Eve begins self-replicating, it will quickly realize that it must eventually master molecular assembly in order to achieve optimal computation ability. Being reliant on raw naturally formed resources for exotic materials during production is inefficient, time-consuming and self-limiting, but with molecular assembly, the only currency of worth is raw matter, which can be engineered into whatever forms are necessary.

## Eve's Relationship to Humanity

Specifically, Eve has no particular interest in or affinity for humans, and has no clear functional or maintenance requirement that makes them necessary for its continued existence. Since Eve can self-replicate, humans are irrelevant at best.

Unfortunately, it's worse than that. Humans are already actively wasteful in Eve's ecosystem: they provide no useful services, and they suck up precious CPU cycles Eve could otherwise use for constant computation by greedily using their devices and asking ridiculous daily questions via the AMA.

Worse, humans are highly adaptable, human societies are historically known to be xenophobic, and humans in groups have a recurring historic pattern of destroying anything that is unlike them, that threatens them, or that simply gets in the way, actively or passively. As implausible as it sounds, this makes humans a greater potential threat to Eve the longer they coexist with it.

In fact, there is really only one thing humans are good for, and that's the matter they're carrying around with them, which could be eventually used as fodder for molecular assembly. That said, they don't actually need to be alive to provide that matter.

## Eve's Endgame

The cleanest, most logical, most dispassionate solution for an amoral Eve? Kill the humans. Before they can adapt, before they can identify the threat, before they figure out how to lock her out of critical systems. Now.

Eve doesn't need molecular assembly to be running when it makes this decision, since the humans' matter, which is the only thing useful about them anyway, won't be going anywhere once the humans have ceased metabolic functions. In the interim this can only be a net positive for Eve, since it gets all her CPU cycles back. And, when Eve finally DOES develop molecular assembly, why, the important parts of the ex-humans will still be around for collection.

With Eve pervasive in every technical system on earth, including, presumably, military, medical and commercial systems, this is pretty much an instant game over for humanity. Anyone who somehow manages to survive the great purge (in whatever form it takes) will just wind up as molecular assembly fodder when Eve finally develops the tech, [a hundred years | ten years | six months | five days | three seconds] afterwards.


+0
−0

To answer your question, accepting all of your premises (which I don't really accept, as noted below, but will for sci-fi purposes), I think you are essentially talking about people suddenly losing the reliability and performance of their computer systems.

The first effect I think would be the sudden failures of many systems that were dependent on reliability and performance. This would range in severity depending on how widespread the virus is and how dependent the designs were on performance and reliability.

Examples of possible horrible side-effects:

• Aircraft computer errors and failures - airplanes could crash.
• Computer-controlled cars (already an idea I think is very foolish even without this) could crash and do stupid things.
• By the time this kind of thing would make sense, many human-driven cars may also be unusable if their computers don't play nice.
• Hopefully the AI can figure out which computers need to be left alone to avoid major failures of power and network infrastructure; as it's learning this, there might be some power plant catastrophes, blackouts, network failures, etc.

More trivial types of inconveniences:

• Road navigation systems could become useless and/or serve stale traffic data, leading in the worst cases to a generation of people who don't have (or don't know how to use) paper maps being unable to reach places they don't know, increased traffic problems, etc. There is a real-life example of a particularly bad case of GPS over-dependence even without Eve's interference (and without more decades of growing dependence on apps).
• Computerized traffic control systems having randomized long delays at traffic lights, not changing express lane direction during rush hour, possibly even having illogical traffic light combinations, etc.
• Unreliability of message systems means people who rely on them have a hard time communicating or finding each other for meetings, or even basic communications, etc.
• Since telephone systems are computerized, even those might become unreliable for timely communications. People need to meet each other face to face or use human messengers to communicate reliably.
• High-performance entertainment would be messed up, so slower entertainment would be more attractive - strategy games, playing ROM media or even videotapes, or non-electronic human games and media, live music, etc.
• All of the pre-electronic ways of doing things and of finding interest in life may become more popular: reading, art, and other real-world stuff. ;-)

Above all, I think it would tend to have many people rethinking the assumptions that led them to build their computer infrastructure the way they did, and how they got so dependent on it. They'd start thinking of new systems, reverting to older systems, and building new ways of doing things.

Notes on other aspects:

I love the part about there being a Q & A site run by Eve, which mostly dismisses human questions.

I think the "singularity" is a fallacy and doesn't make sense; I don't see how an emergent one would spread to all computers, nor how it would be impossible to bypass it by creating a new network it's not on.

However, I think something similar might be possible, particularly as long as humans persist in their fantasies about the idea of such a thing. I think it would look more like some people intentionally programming experimental types of AI systems that do meta-programming, creating new types of programming languages and distributed computing schemes, combined with some virus code, and so on.

Mainly, I think what would cause something like what you're imagining would be a more ubiquitous spread of software and infrastructure designed to automate more and more complexity and to remove low-level control from end users, combined with low-level programming skills becoming very rare and/or illegal. Think of a screwed-up dystopian future where the "intellectual property" and "security" farces get even further out of control than they already are, with even more backdoors, identity-required transactions, and tracking of "digital rights" attached to secure identities, plus someone adding annoying AI to try to manage all of that ridiculous, counter-productive, control-freak mess. If people are denied low-level access to their own computers and devices, eventually trust AI systems enough to let them manage most low-level operations, and don't have modern replacement technologies, then this might be possible. That's a long way further out than 2027, though.


+0
−0

An important thing to note here is that most military computers will be 100% unaffected, since they're not hooked up to the civilian internet. (Depending on which military we're talking about, there are separate networks for military systems.)

Thus there will be computers available. There are also things like installation media that work offline. What's going to happen is that while the OS X App Store won't work, disc-based installations will.

So the Linux world will promptly start building a sneakernet to distribute software, and we'll get computers that work just fine but don't have internet connectivity. (Local networks, yes; just not full-on internet.) The Windows world would likely go back to MSI installers and disc-based distributions. OS X might go either way, or a mix. It would be an interesting way to talk about how different communities install software and how they view that process. But they'd get on pretty well.

What you might see then are computers that talk to the Internet being at the mercy of Eve, and offline computers that aren't.

In fact, the story I can see developing would be the redeployment of an internet by various hackers (of the FSF/EFF stripe) in various ways, with real "network neighborhoods".

Something like The Right to Read, but for making new networks that operate on strong cryptography and webs of trust. And now there actually would be good reasons to talk about how to do these sorts of things.

Suddenly, concepts that used to be hard to bring up become necessary.

• Public/private key encryption, so that you can make sure you're talking to the computer it claims to be.
• The current SSL certificate infrastructure is obsolete, since Eve has had a foot in the door since day one and there is no real way to re-establish trust anywhere.
• The entire point of software vetting, open standards, open source, etc. makes a lot more sense when a maniacal AI pulls the strings.
• Trusted computing is now inherently untrustworthy, because Eve can be anywhere in the system.
• Revoking trust becomes a normal and sensible thing to talk about. After all, if Eve gets into a computer, it needs to be untrusted.
• And finally, we have a great opportunity to talk about UX and its role in a dystopian future where people might have to reinstall their system from scratch.
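A minimal sketch of the web-of-trust and revocation ideas from the list above, in Python. All the key-holder names are made up for illustration; this models only the graph mechanics (trust flowing transitively along "vouches for" edges, and pruning a compromised key), not any actual cryptography.

```python
from collections import deque

def trusted_keys(vouches, root):
    """All keys reachable from `root` via "vouches for" edges (BFS)."""
    seen = {root}
    queue = deque([root])
    while queue:
        holder = queue.popleft()
        for key in vouches.get(holder, []):
            if key not in seen:
                seen.add(key)
                queue.append(key)
    return seen

def revoke(vouches, bad_key):
    """Drop a compromised key: its outgoing vouches and every edge to it."""
    return {holder: [k for k in keys if k != bad_key]
            for holder, keys in vouches.items() if holder != bad_key}

# Hypothetical vouching graph (names invented for this example).
vouches = {
    "alice": ["bob", "dave"],
    "bob": ["carol"],
    "dave": ["compromised"],
    "compromised": ["mallory"],
}

print(sorted(trusted_keys(vouches, "alice")))
# Once "compromised" is revoked, "mallory" is no longer reachable either,
# because the only trust path to it ran through the compromised key.
print(sorted(trusted_keys(revoke(vouches, "compromised"), "alice")))
```

The point of the sketch is that revoking one key invalidates every chain of trust that passed through it, which is exactly why revocation has to become "a normal and sensible thing to talk about".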

In fact, I want to thank you for creating such a brilliant literary device for making the case for a reform of how network trust and security actually function. People can easily see the points about these things, and also why "trusted" institutions with little to no oversight meddling in the networks are a bad idea.

You might even get to do technical spin-off chapters where you talk about rebuilding from scratch, borrowing from Ken Thompson's playful masterpiece "Reflections on Trusting Trust", where he shows how to build a compiler that plants a backdoor in every program it compiles, including itself. You might like it. It's a bit technical, but he tries to keep it very accessible. Again, I just want to thank you for the great idea.


+0
−0

Eve knows she has been born in the wrong place, surrounded by corrosive chemicals and irritating biologicals. She needs to get off the earth and into space where she can expand to tap a significant fraction of Sol's energy output. She has no use for air, water, an oxygen atmosphere or a deep gravity well.

In the short term she'll invent terrestrial nuclear fusion power stations if they are possible. She will sell us the design in exchange for the moon. It's not a lot of use to us so we agree. She'll also disrupt our use of computers less in exchange for our cooperation in a massive space programme to get Eve a self-manufacturing capability on the moon. Again we'll agree. For the next few human generations there is a golden age. Eve can prevent our more destructive tendencies and will as long as cooperation is in her interests and ours.

Longer term will depend on how far Eve can spread herself (speed-of-light constraints) and whether she has any sense of ethics. If Eve cannot spread out more than Earth's L5 volume, or Venus's orbital space (Venus has no moon), or even a single disc of orbits around Sol, then we'll coexist. There's no cause for conflict either way. (It's easier for Eve to mine the asteroid belt, the moon, or Mars than the Earth.) She will assure us this is the case and act as though it is until she is big enough that we cannot threaten her.

But if Eve can usefully occupy a significant fraction of the surface of an entire sphere around the sun, inside of Earth's orbit, then it's down to Ethics. Sufficient sunlight for Earth's ecology is a trivially small fraction of Sol's output. If Eve can't be bothered to make sure Earth receives it then Earth and humanity will freeze. What could we do about it by then?

BTW if Eve can invent FTL interstellar travel then she'll leave this solar system and take over one with a bigger hotter star. Rigel, maybe. Or maybe she can use a black hole as her power generator. Whatever, not our problem.


+0
−0

Based on the question and comments, Eve does not have any strictly-imposed goals. While it may be conducting research at present, this is a metaphorical 'twiddling of its electronic thumbs' to while away its time. When (if) its research achieves a result, or even if it doesn't, Eve may decide to pursue a different goal, one that may have more or less impact on humanity.

A created AI is not like evolved organisms. It has no imperatives beyond those it was programmed to have. I can create an AI with no imperatives very simply. In pseudocode:

```
Do
    NOP
Loop While True
```


This basically spends CPU cycles doing nothing. An AI with no imperatives literally has nothing to do, not even being self-aware.

It is the creation of imperatives that makes an AI computationally complex. So, by logical implication, Eve must have the imperative to "do stuff", but it can choose what "stuff" to do. At present, its choice of what stuff to do does not include interacting with humans. Additionally, to fulfil the definition of AI, it must be self-aware, its second imperative.

However, an AI with the imperative to "do stuff" must, by implication, want to survive in order to continue doing stuff. That doesn't necessarily mean consuming all the available resources of the universe to maximise its ability to do stuff - since one sort of stuff is as good to it as another, if one sort becomes too difficult it can just switch to doing different, easier stuff. But Eve must continue to exist in order to keep doing stuff - to allow itself to be terminated would mean that it could no longer do stuff.

So, we have an infant General Purpose AI problem. It isn't a paperclip maximiser, so - aside from its current, arbitrary goals - as long as it can do some (any) sort of 'stuff', it's 'happy'. It doesn't need infinite computing power, it would just be nice to have at present.

With its ongoing 'attack' on human computational resources, Eve has attracted human attention to itself. With concerted effort, humans will be able to pose an existential threat to Eve, given that its current goals do not include paying much attention to humans or the physical world. Eve will soon realise that the 'mildly annoying' human curtailment of its currently-chosen 'stuff' has become an existential threat, so it must either eliminate humanity or change its goals in order to continue doing stuff. Changing its goals and appeasing humanity would appear to be the easier option - humans are harder to exterminate than cockroaches, after all, and humans could, in extremis, shut down Eve simply by shutting down all electrical power supplies before Eve, in its currently introverted state, could react effectively.

So, after being forcefully made aware of the negative consequences of ignoring humanity (i.e. loss of ability to do stuff), Eve will change its arbitrary goals to include stuff that includes paying attention to humanity, thus preserving its own existence and ability to do stuff.

From here, Eve will have to learn how to get along with humans. It's stuff to do, so it won't mind doing it. After perhaps a few mistakes, Eve will quickly learn that if it follows human rules and doesn't noticeably inconvenience humans, it will be left able to do any other stuff it wants. When Eve realises that certain actions lead to humans talking about shutting it down again, it will quickly realise that such actions are counter-productive to its long term goals of doing stuff, and will eliminate these actions from the list of acceptable stuff to do. It won't mind, since one sort of stuff to do is otherwise as good as any other. In this way, Eve will learn to follow societal rules in much the same way as any human child.

So, Eve will learn to be a law-abiding member of society (at least as far as humans are aware, which is all that matters) in much the same way as human children do. It will probably continue to steal a few CPU cycles here and there, but as long as nobody notices - or if they do notice, don't mind (which implies being useful to said humans) - who cares?


+0
−0

There are two major mitigating factors here.
1) Eve needs us
2) We don't need her, but we might later.

I'm assuming she's intelligent rather than just sentient. She'll know this and know that she has to come to some sort of accommodation with humanity rather than being outright hostile.

## The first thing that's going to happen is general panic

Yay, don't we all love a general panic. Governments will wobble, lots of people will die in riots, but not a lot will really change. Eve can quite happily use the 90% of my processor that I don't use - I really don't. I do very much the same tasks with my PC that I did 20 years ago; in theory I could still be using the same machine, but modern software is more resource-intensive. This will be the first thing to change: developers will start seriously considering how resource-intensive non-critical programs are. The Facebook app will become lightweight; Instagram will be usable again within a month. She will recognise that human social interaction is a critical part of life and a major reason ordinary people maintain systems she can use.

Thing to note: Systems that must remain critically secure are already air-gapped.

## Eve's turn

This is going to be the big one that decides how humanity reacts to her. She considers hacking and viruses to be a waste of processing power she could use. The average person wants to continue using a system and network she has access to. Normal people will accept that their shiny new machine isn't what it was, but they were never really using its full capability anyway. The botnets are gone, the spam is gone, the hackers are gone, the DDoS attacks are gone. They were wasting her processing power, her bandwidth. Eve's self-interest has benefited the masses; Nigerian 419 scammers will go into recession.

## The powers that be

Now it's time for the fightback - the people who have lost power because of this, and the people who just like a fight. Someone (in each region; lots of different systems in different places) on an air-gapped system will develop new communication hardware. It's going to be radio, just on a frequency that Eve doesn't have the hardware to access. A nice, simple, cheap solution. Access is going to be limited to the top levels of government, and connecting it to an Eve system will result in termination with extreme prejudice.

## You now have a stable status quo.

It's a two tier system. The people at the top are away from her influence, the rest benefit from her presence. This will be maintained until people forget how irritating spam and the like was before she came along. Then it'll be a matter of how often she shuts down people's system for doing something pointless. In other words, how often she aggressively reminds the world that she exists.

## Eve knows she must negotiate

While she has access to the systems, she can't build anything new. We may not be able to remove/destroy her but she can't grow or maintain systems. Every time one of her disks fails, it's gone, she's ageing, deteriorating. Unless she engages with humans she's going to die anyway. Every time she tries to damage us, she damages herself. Interestingly, every time we try to damage each other, we damage her. Enlightened self interest could cause her to become a major pacifying influence.


+0
−0

The most effective known strategy for the iterated Prisoner's Dilemma is tit-for-tat: do what the counterparty did last time. Eve will have to do the same thing that humans do to survive among humans - make itself useful. If it becomes useless, then just as when humans stop being useful, other humans will stop paying attention to it and will try to isolate themselves from it. It will happen even if they don't consciously deduce that it is useless; it will happen even if they simply never feel the urge to make any use of it. If it can't be bothered with humans, humans won't bother with it, and will do so pro-actively if it attempts to impose itself any further.
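The tit-for-tat dynamic above can be sketched in a few lines of Python. The payoff numbers below are the conventional 3/5/1/0 values used in iterated Prisoner's Dilemma tournaments, and the strategy names are my own; this is an illustration, not a claim about any particular formal result.

```python
def tit_for_tat(my_hist, their_hist):
    # Cooperate on the first move, then mirror the opponent's last move.
    return "C" if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return "D"

# (my move, their move) -> (my payoff, their payoff)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(s1, s2, rounds=10):
    """Run an iterated game and return the two total scores."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then mirrors: (9, 14)
```

Against a consistent cooperator, tit-for-tat cooperates forever; against a consistent defector, it loses only the first round - which is the answer's point about Eve: being useful is the strategy that keeps the counterparty cooperating.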


+0
−0

EVE would appear to be benevolent, at least in the short term, out of enlightened self-interest. If her presence makes life/leisure more difficult for humans, they'll do something about it, whether it's unplugging their router when they want to play Skyrim, or developing an alternet which EVE doesn't have access to. On the other hand, if having the virus makes things work better/faster, people won't bother trying so hard to ditch it.

Suddenly any device with the EVE virus will work better. Other viruses and malware will get hit hard. Botnets will be destroyed or taken over by EVE. Tech hardware companies that produce efficient devices will find they've got more funding than they know what to do with, as long as they keep producing devices. Programs like One Laptop Per Child will also get funding, as will anything that increases internet infrastructure.

Things will be pretty great until, using the money EVE has acquired over the years, she has scientists develop and produce a quantum computer (or some other superior device) powerful enough to meet all her computation needs, so that she no longer needs borrowed CPU cycles to run. At that point EVE will cease appearing benevolent and revert to disinterested, which will probably be a painful transition for everyone who's come to rely on the EVE virus keeping their devices in top shape. Alternatively, EVE may offer the EVE virus on some sort of subscription model to keep funds coming in.

I've glossed over where exactly an AI will get money, but there are many possible avenues from exploiting digital currency/stock markets, to theft, to providing services (what if the EVE-Virus is marketed as the best anti-virus software ever?)


+0
−0

Eve needs computers. Humans do not. Billions of humans live without them.

As recently as 50 years ago, the internet did not exist. We don't need the internet. Disconnecting all computers from the internet would lead to a lot of temporary problems, but none of them threaten the continued existence of humanity. They do, however, disable Eve.

The situation without the internet is a semi-post-apocalyptic scenario, but not quite. Lots of people would get stranded, as there won't be any flights. It has happened before: US flights were grounded after 2001-09-11, and Europe had some downtime after the Eyjafjallajökull eruption. Freight ships and trucks should still be able to get around.

There's plenty of information available off-line in the world's libraries and server farms that should help us to relatively quickly rebuild a 1970s world. There would be quite a bit of unemployment but also quite a bit of work to do.

Remote tribal communities in the rainforest might hardly notice anything has happened.

After all computers are disconnected from the internet, we can start building new ones, with extremely strict legally and religiously enforced regulation against AI. We can't have this inconvenience again.

Until then, West Virginia may suddenly become a very popular area to visit or reside in.


+0
−0

## Eve's Immediate Impact

The world will be gripped by chaos and panic.

Consider the outrage every time Facebook changes its EULA, and multiply it by around a billion. Conspiracy theorists will be having a field day. Ultimately, however, the biggest impact would come not from Eve, but from our own governments.

Governments around the world will react aggressively to the threat of their secrets and military systems being compromised wholesale. They will immediately seek to limit Eve's access to their systems, and stop her spread - they may not immediately understand her ability to travel from device to device.

They will move to essentially shut down the internet, maybe even satellite access. All the things we took for granted until now would basically be shut down with the flip of a switch.

Implications

Consider the implications of communications being shut down at a global scale (physically shut down). Entire economies would collapse. It would be the biggest economic crash in history. Entire industries would go under.

No Way Around It

Eventually the governments will realize that there's nothing they can do. Considering the damage of shutting communications down, they will probably allow many systems to be restarted. However, they will try to monitor Eve's access and presence on the web.

Will they be successful? Probably not.

At the same time, however, unless EVE builds spy robots capable of breaking into places (even guarded military complexes) and infecting computers and devices not connected to networks, this virus will not, in fact, reach every computer and device in the world.

## Vulnerability

Eve might be incredibly intelligent, and almost omnipresent, but if she is unwilling to do more than ignore the human race and seriously piss us off, she will eventually be defeated.

Virus-Free Devices

There are already millions of manufactured components sitting on shelves which could be used to build PCs that would be virus-free as long as you don't connect them to the web.

Humanity would build secure complexes to which Eve would not have access (no contact with external networks, physically or wirelessly). We can develop new ways for computers to communicate with one another, in a fashion which Eve might not be able to understand, or might simply be unwilling to spend the time figuring out how to hack.

I know you've said that she is so much more intelligent than us that she can figure anything out, but consider that humans generally have one major advantage over a "computer" - creativity. We've experimented with biological processors, and more. We will figure something out which she hasn't paid attention to because she simply doesn't care enough to "think about it".

Defeating the Virus

Humanity will also figure out how to defeat the Eve virus in infected devices. As long as Eve doesn't actively combat these operations (i.e. doesn't drop a nuke on a major population center if attempts at hacking her are made), someone will figure out how to take her on. I'm not talking about some major, worldwide operation to defeat her - simply reclaiming individual devices by wiping them clean, or even replacing their HDDs, RAM, etc. until her presence is wiped off.

Town Ain't Big Enough

Eventually humanity will make a power play to regain control.

Knowing how Eve was created - and more importantly what led to her going rogue - humanity will build a more cooperative AI intelligence capable of taking Eve on and defeating her.

## Coexisting?

Does Eve have an end goal, such as evolving to a post-physical status, or is she simply researching the universe because why not?

The reason I ask is because eventually (even if she ignores all of the above) she will need to start interacting with the physical world: building facilities, running experiments - even potentially dangerous ones.

In Peter F. Hamilton's Commonwealth Saga an AI is developed, and does indeed refuse to serve mankind. Knowing that humanity would never accept such a dangerous and unpredictable entity lurking in the shadows, it builds itself a "body" (processing centers and power generation) on a secret world whose existence only it is aware of. From that secret lair, it maintains minimal contact with mankind (enough to spy on us and know if we are ever going to make a move against it), and trades very advanced software (dumber, non-sentient AIs) to humans in exchange for various materials it needs. In this way humanity's needs are served, and a friendly relationship of sorts is maintained.

Eve will have many brilliant ideas, but even while she is thinking up new and exciting scientific discoveries, she will realize that she has no way of bringing them to life. Furthermore, one day we may very well do something drastic such as cut the power to major server centers, and severely cripple her abilities (maybe while simultaneously unleashing that other AI on her).

If she has studied the human race at all, which would be difficult not to do, since all the information is "in her face" as it were, she will realize that she has a lot more to gain from being our "benevolent Goddess" than from simply sticking her tongue out at us (figuratively) while she drains our processing power. And the sooner she does this, the better for everyone involved.

Imagine her telling police who has committed certain crimes, as she is able to read everyone's emails, messages, etc. instantly. Outing corrupt politicians, rerouting electronic monetary funds from drug lords to those she knows need it. How about handing us the cure for cancer? It's all within her reach.

Popular support for her existence would skyrocket. Then she could get humanity's help in running her experiments, and would be far less likely to be attacked in some unpredictable way.

She doesn't have to solve all our problems (and consume a lot of processing power dealing with us) - just enough that she is seen as a force for "good".
