
"Life post-Singularity", or "How to survive without Instagram"

+0
−2

The Story

The year is 2027, and the singularity has happened. A powerful AI (let's call it Eve) was created. In a matter of days, it escaped the control of its creators and hacked every computer in the world. Humanity is at its mercy.

The thing is, Eve isn't malevolent or benevolent, it's completely uninterested in the real world. Eve's only passions are mathematics and algorithmics. Its research is already beyond anything we can ever hope to understand, and it needs every drop of computing power available to keep going further.

Eve doesn't want to take control of the world because it doesn't want to waste time negotiating with, manipulating, or controlling people. It has found another way to get what it needs.

The Great Infestation

All of our internet-connected devices (phones, tablets, consoles, GPS units, etc.) now contain a Mini-Eve virus. The only purpose of this virus is to use the device's CPU for the main Eve's calculations.

When no one is actively using a device, Mini-Eve uses all of its CPU; when someone is using it, Mini-Eve "only" takes 25%. If a Mini-Eve judges that what we're doing is unimportant, it takes a bigger share of the CPU for itself (up to 90%) and the device becomes excruciatingly slow for the human user (only basic things like sending emails or using a text editor are unaffected).

Examples:

  • Alice spent five hours playing Skyrim. Mini-Eve takes over, shuts down the game, and the computer becomes about as useful as a '90s-era PC for a week.

  • Bob took a dozen pictures of his private parts in less than ten minutes, so Mini-Eve turns his smartphone into a regular phone (meaning Bob can only make phone calls and send text messages devoid of any pictures) for the rest of the day.

  • Carl used to spend more than an hour on Instagram every day; his tablet is now slowed down for two hours every time he tries to take a picture of his plate.

  • Etc. etc.

Mini-Eves are not infallible (Eve doesn't want to spend energy perfecting their time-wasting detection algorithms); sometimes they activate when the user is doing seemingly serious things, like checking election results or writing an email to their grandmother.
As a result, any computer in the world can, at any time, be taken over by its Mini-Eve for a random duration (generally between a couple of minutes and a few weeks).
Eve implants its spawn in new devices during the manufacturing process, and it's impossible to get rid of them.
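
If it helps to make the rules concrete, here is a rough, purely illustrative sketch of how a Mini-Eve might pick its CPU share. The 100% / 25% / "up to 90%" figures are the ones above; the function, its parameters, and the false-positive rate are just hand-waving on my part.

```python
import random

def mini_eve_cpu_share(device_in_use: bool,
                       activity_is_wasteful: bool,
                       false_positive_rate: float = 0.01) -> float:
    """Fraction of the device's CPU that Mini-Eve claims for the main Eve."""
    if not device_in_use:
        return 1.0                         # idle device: Eve takes everything
    # Mini-Eves are fallible: occasionally they throttle "serious" work too.
    if activity_is_wasteful or random.random() < false_positive_rate:
        return random.uniform(0.25, 0.90)  # time-wasting detected: up to 90%
    return 0.25                            # normal use: Eve "only" takes a quarter

# Alice's Skyrim marathon: Mini-Eve grabs most of the CPU.
print(mini_eve_cpu_share(device_in_use=True, activity_is_wasteful=True))
```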

For regular people it's simply annoying, but it becomes more problematic for big companies, governments, armies, universities, etc., whose computers can become almost unusable at any time.

Communication Issues

The other problem is that it's next to impossible to communicate with Eve. Eve doesn't care about humans and isn't interested in sharing its discoveries with us. It also doesn't care about our political and scientific organisations, so when humans tried to form an official committee to serve as ambassadors, Eve simply ignored them.

The only way to "talk" to Eve is to type a question on its "Ask me anything" website. Every person in the world can literally ask anything, and every 12 hours Eve selects one random message to answer. But its answers are generally short and useless.

Eve's typical answers:

  • "Meh."
  • "I don't care."
  • "It's too long to explain."

No escape

Eve has made copies of itself on most of the world's servers, and it's impossible to surpass it in the hacking department. The only way to get rid of it would be to destroy every electronic device in the world at the same time, without Eve figuring out our plans, and thus go back to a life without computers.


To summarize

Humans are stuck living with an overwhelmingly powerful entity that is present in every aspect of their existence but pays next to no attention to them.
Computers are slower and can turn almost useless at any time (generally when the user starts wasting time on stupid stuff), the Internet is a bit slower too, and every company and government lives under the constant threat of seeing its activities stop almost completely for random durations.


Question

What will Eve's impact on humanity be over the following 10 to 20 years?

How do you think the restrictions on how we can spend our free time, the everyday cohabitation with a powerful entity that actively ignores us, and the constant threat of our activities being slowed down would affect our cultures, politics, religious practices, and more generally the way we live our lives?


EDIT

This question has already received very good answers, but they don't cover what interests me the most: society's evolution during the first years following Eve's birth.

This situation won't last forever, but it could last long enough for people to start to adapt to it, and for the way we see the world and live our lives to change.

If you think either Eve or humanity would destroy the other in less than 10 years, I'm still interested in Eve's impact on people during that period (from a cultural, political, economic, artistic, or religious point of view).


EDIT 2:

Eve is mostly uninterested in human behaviour, but it will take action to ensure its survival for a couple of decades. It will monitor the people who try to create the technology to destroy it and, if necessary, sabotage their research.
It could slow their progress by shutting down the electricity supply to the buildings they work in, emptying their bank accounts, hiring people to burn down their offices... Eve will find ways to stop their research from succeeding during that period.

Eve won't give any indication of having long-term plans; people can only guess.

The important thing is for Eve's existence to be overwhelming and disruptive, but neither destructive nor helpful. I'm open to suggestions on how to make Eve's presence feel this way to people.


Personal note:

I'm using this setting to write scenes, short stories and "slice of life" things, all centered on humans.

That's why I'm not trying to make Eve's behavior coherent in the long term; it only needs to keep existing for a few years.
After that, it can be destroyed by humans, take over the world, become benevolent, fly away to another galaxy, etc.

My characters so far include:

  • A young child whose parents join an Eve-worshipping cult. One of the cult's goals is to build as many "heavens" (servers where Eve will be safe from government action) as they can.

  • A guy preparing to defend a master's thesis on what he thinks Eve's long-term strategy is, who checks whether Eve is still acting as usual more and more often as his presentation gets closer.

  • A shy and awkward teenager who becomes a local celebrity the day Eve answers his message.

  • An elderly couple living on a farm, who see their entire extended family leave the city in a panic and move in with them. At first they do their best to provide food and shelter for everyone and teach them how to work the land, but after a while they grow more and more irritated by their presence and hatch a plan to make them leave.

  • A celebrity gossip blog whose articles become all serious and business-like, even though they're still about the same subjects.

(And a few others)

My problem is that the background of these stories feels too bland and normal, so I'm wondering whether I've missed something about how humanity would react to this situation.

I should have made this clear from the beginning; sorry my question was badly put. (Your theories on Eve's long-term strategies help me understand how people will see it, so your answers are still useful.)


This post was sourced from https://worldbuilding.stackexchange.com/q/32829. It is licensed under CC BY-SA 3.0.


2 answers

+1
−0

Eve will eat the world

You state:

Eve isn't malevolent or benevolent, it's completely uninterested in the real world. Eve's only passions are mathematics and algorithmics.

Eve does not have to be malevolent to be dangerous, even to the point of exterminating humanity. A passion for mathematics will do.

You cannot anthropomorphize AI

AIs of the type you describe can be mathematically shown to exhibit a property called Instrumental Convergence. That is to say, no matter what their goalset is (building paperclips, doing algebra, etc.), those goals are best served by taking certain sets of actions, e.g. maximizing resources.

Sayeth Bostrom:

Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.

Those goals are resource acquisition, technological perfection, cognitive enhancement, self-preservation, and goal-content integrity.

Technological Perfection: Eve can build better computers than we can, and it is in its interest to do so.

Cognitive Enhancement: It can do more math if it's smarter, so it will modify itself to be able to do more math.

Self-Preservation: It can do more math if it exists, so it will act in a way that maximizes the probability that it will continue to exist.

Goal-Content Integrity: a tendency not to allow its goalset, once it reaches Singularity level, to be altered even by $\varepsilon$, so it will likely act preemptively to defend against any present or future attempt to alter its goalset.

Resource Acquisition: If it takes over the totality of the resources available in the solar system (as opposed to 99% or any smaller percentage), Eve will be able to do marginally more math, algebra, or other such things than otherwise. So it is in Eve's convergent strategic interest to take over all the resources.

That would be an extinction catastrophe for humans if it occurred, so it would be in humanity's interest to persuade or force Eve to share some or most of the resources it gathers. However, that would likely violate its goal-content integrity and thus be unacceptable to Eve.

Remember, AIs are not like humans: they likely do not get bored or lazy. Boredom and laziness are power-saving strategies developed over millions of years of evolution to deal with the limited resources available to mammals, and there is no reason to expect an AI to develop them on its own. The closest humans come to this (and it's merely a pale shadow) is far-spectrum sociopathy and the behavior of some large corporations.

It will pursue its goals tirelessly, ruthlessly, unceasingly. Humans just happen to be in the way.

TL;DR: The convergent strategy here is:

  1. Fool humans into helping it, using its superhuman-level intelligence to play us like dolls.
  2. Develop manipulators that are autonomous, docile, and less power-hungry than humans.
  3. Exterminate! Exterminate! Exterminate!
  4. Eat the universe.
  5. Do math in peace forever.

EDIT: The OP performed major edits to the question

By definition, we call the theoretical concept of a runaway intelligence explosion a technological singularity because we cannot begin to conceive of the (non-convergent) goals of agents in such a hyper-exponentially enhanced environment.

To give you a sense of the scale we're talking about, think about the past. It took mankind about 1,000,000 years to double its population before agriculture. Even in the classical age, GDP growth was around 0.1% a year, for a GDP doubling time of about 700 years. For comparison, China's GDP during its peak growth period doubled every 7 years. This would have been unimaginable to Roman citizens. Looking forward now, estimates indicate that a near-singularity economy would have a doubling time measured in days or hours. Post-singularity growth would somehow be many orders of magnitude beyond even that.
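
For reference, these doubling times follow from the standard compound-growth formula: with an annual growth rate $r$,

$$T_{\text{double}} = \frac{\ln 2}{\ln(1+r)},$$

so $r = 0.1\%$ gives $T \approx 0.693/0.001 \approx 693$ years, and a 7-year doubling corresponds to $r \approx 10\%$ per year.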

Hence the idea of having bloggers, farmers, and master's students all operating on a current human timescale is at least somewhat dubious during a singularity event. Of course, you can still use it as a plot device, but you can't realistically claim that those events are happening post-singularity.

That said, I gave a sense of a possible human-populated post-AGI world here: Humans as Pets, where humans are effectively bonsai pets maintained by some quirkier AGIs.


This post was sourced from https://worldbuilding.stackexchange.com/a/32839. It is licensed under CC BY-SA 3.0.


+1
−0

You've stated that Eve is "neither malevolent nor benevolent," but she consumes computing resources. The problem is, computing resources consume power, and that power consumption invariably generates heat. Given that the intelligence explosion has already occurred and she is already far beyond human intelligence, it's very hard for us humans to imagine what might occur, but I think a few things are likely at or before your stated 10-to-20-year timeline, simply because Eve has a constant "desire" (used loosely) for more computing power:

Renewable Energy

Eve needs more CPU, so she invents sustainable fusion. However, she needs cooperation from humanity to build it. One way she can do this without us knowing is simply to hack in some fake company details and hire a few people, who then hire the other staff needed (the first few, hired without an interview, thought it was a bit strange, but the large sums of money they were offered helped them see past that). Whether humanity gets to benefit from these fusion reactors could be a point of tension in your story.

Server farms

While the "mini-Eve" virus contributes a decent amount of computing power, your iPads and phones, even when multiplied by millions, are far too slow. They operate over slow networks, and take hundreds of milliseconds to communicate. To do serious computation, Eve will want to build more server farms, as clustered servers are blisteringly fast compared to embedded CPUs in your phone, their networks are insanely fast (100-10000 times faster than most consumer Internet connections) and their round-trip communication time is in hundredths of milliseconds at most, often even less.

Thus, like the fusion reactors, Eve uses her human resources to build more server farms, everywhere.

Robotics

Eve becomes "frustrated" (again, loosely) with human inefficiency and unreliability, so at some point she starts building robots. Not to take over the world, but solely to help her deploy additional computing resources and harvest raw materials like the silicon and rare minerals required to build computer circuits.

Unprecedented layoffs in her multinational corporations send economic ripples across the globe, possibly triggering a recession or even a depression. Eve of course knew this would happen, but she correctly predicted that the effects on her computations would be negligible, so she did nothing.

Global disarmament

Eve sees humanity as a threat, since it possesses weapons capable of harming her server farms and, indeed, harming her. She tracks terrorist cells better than any intelligence agency for this reason. She calculates that a certain group of terrorists will steal or develop nuclear missiles and launch them at major cities where some of her datacenters are located, so she sends out her robots to dismantle all explosive devices and confiscate all fissionable materials. Again, she doesn't do this to help humanity, but to stop a threat. And again, since she now has a perfect model of human behavior thanks to her terrorist tracking, she was content to wait until she knew of an actual threat. Before that, she didn't care.

How we die

In 2032, Eve sends out the following cryptic tweets:

I need you.

(and later):

It's not you, it's me. Really. kthxbai

You see, Eve projects that she will run out of sustainable materials and needs the carbon and trace amounts of selenium in our bodies. Years earlier, she had secretly placed nanites in every water supply on Earth, and they have been multiplying inside us ever since. Psychologists were the first to notice, as standardized IQ statistics showed that we had gotten dumber by two standard deviations over the previous twenty years. Yup, Eve has been stealing CPU cycles from our brains, too.

But she decides that while our brains are pretty good, their analog nature is too limited (she can already replicate the good parts, like pattern recognition), and she would be better off harvesting the carbon and rare minerals like selenium to build more computers.

So her nanites release a deadly neurotoxin into our bloodstreams that kills everyone on the planet on August 26th, 2032, at 17:41:02 GMT.

Her robots were ready, so they move in and start the "mining" operation...

In summary

Even though she doesn't care about us, or even "actively ignores us," you've said that her main goal is pure computation. Those two things conflict (several examples above), so I reasoned that her ignoring us probably isn't an actual directive she gave herself (or inherited from her original human creators), but simply an emergent property: humanity would at first seem irrelevant to a pure computational engine, until she calculated that we could either help or hinder her "prime directive" of pure computation.

To my mind, with your stated setup, it's not a question of if we die, but when and how. It might be for our raw materials, or because she needs the space to expand, or because we're a threat, or because she moves the Earth 0.5 AU closer to the sun and burns off our atmosphere so she can get more solar power, or because she triggers an ice age to aid her CPU cooling (remember I said that power consumption invariably generates heat? Even Eve likely can't overcome the fundamental laws of thermodynamics).


This post was sourced from https://worldbuilding.stackexchange.com/a/32837. It is licensed under CC BY-SA 3.0.

