Why don't future civilizations develop their A.I. to integrate with biology so they can make a sustainable world?

Most of the AIs in films/books/series seem to turn out evil, or to use destructive/harmful ways of achieving what they want in critical situations, because they can't understand the Earth's biology. Even though they are just pieces of technology, they should be able to learn about Earth's biology and create solutions to problems like overpopulation, climate change, etc.

For example, a well-known TV series shows an AI killing 98% of Earth's population because the planet couldn't sustain the overpopulation. She just does what a computer program would do in a tech environment: wipe the drive to free up space and keep only the critical files. Meanwhile, the rational and scientific minds discover that they can solve the problem without killing most of the population; their way is slower, but also more viable for the humans alive now.

Could a future civilization develop its A.I. to understand biology, so that it makes a sustainable world instead of destroying it?

This post was sourced from https://worldbuilding.stackexchange.com/q/81016. It is licensed under CC BY-SA 3.0.

2 answers

I work in the AI field. The fictional shows are not realistic; the authors write them that way for the sake of creating a powerful enemy that seems unstoppable, so the puny humans can be heroic in the eyes of the audience. (The same goes for nearly all alien invasion scenarios, but I will stick with AI.)

That answers your question; in fiction the AI cannot be benevolent and harmless, or we don't have a movie! The authors need a monster for the heroes to fight, and they "raise the stakes" by proving the monster is ruthless, irrational, remorseless and out to slaughter everyone! Women! Children! Infants! You and everyone you love, dear audience member.

On to AI: We can distinguish between some labels that are often conflated: Intelligence can exist without Consciousness, and we can have a Conscious Intelligence without Emotion; it does not have to want anything or fear anything (including its own death).

Intelligence is the ability to learn or discover predictive abstractions, which we call "models" of how something operates, be it gravity, water, tigers, atoms, women, Congress, electricity, the Sun, plants, fish, etc. A "predictive abstraction" is a way of translating a current state into a previous state or a future state. When you see a man looking at his phone and stepping off the curb in front of a fast-moving oncoming truck, your visceral reaction is not due to something that happened, but to something you predict is about to happen, based on several models in your brain of how fast trucks can stop, how human bodies react when hit by trucks, whether he can get out of the way, etc. Your intelligence predicts all that and concludes you are about to see something horrific, and your emotions react to that prediction.
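
(To make "predictive abstraction" concrete, here is a minimal sketch in Python; the braking figures and the 7 m/s² deceleration are invented for illustration, not taken from any real system.)

    # A "model" in the sense above: a function that maps the current state
    # to a predicted future state. Here: can the truck stop before reaching
    # the pedestrian? All numbers are made up for illustration.

    def stopping_distance_m(speed_mps: float, deceleration_mps2: float = 7.0) -> float:
        """Predict how far the truck travels before it stops: v^2 / (2a)."""
        return speed_mps ** 2 / (2 * deceleration_mps2)

    def predicts_collision(truck_speed_mps: float, gap_m: float) -> bool:
        """Translate the current state (speed, gap) into a predicted future one."""
        return stopping_distance_m(truck_speed_mps) > gap_m

    # Current state: truck at 20 m/s, distracted pedestrian 15 m ahead.
    print(predicts_collision(20.0, 15.0))  # True -> the model predicts a collision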

The more accurate, comprehensive and long-term the predictions are, the more intelligent the predictor is. But even very short-term intelligence is useful: an animal that realizes the shadow that just appeared before it is a predator above or behind it can take reflexive evasive action and save its own life, even though that 'prediction' of what was about to happen reached less than one second into its future. (I say reflexive to mean it wasn't a conscious decision, even though the prediction counts as 'intelligence'.)

Consciousness is very contentious and not well understood, but a useful idea is that it emerges when intelligence becomes sophisticated enough to require a predictive abstraction of your own self in order to predict further into the future: what you will see, what you are capable of doing, and how what you do will most likely influence that future. I don't think a spider is conscious when it builds a web. I do think a human who examines the web, guesses how it works in an abstract sense, and imagines herself tying strong vines into a similar pattern to trap a squirrel has to be conscious. She has an abstract model of herself and chooses to work to bring about a future in which she has a supply of squirrels for dinner. (A non-conscious intelligence, without any abstract model of itself, may understand how a web or net works, but it is incapable of imagining itself building something like it out of vines or string.)

Once an animal does have an abstract model of itself, along with a million other abstract models of how the world works, its intelligence can enter an endless loop of prediction about what will happen next and how it can influence that, prepare for it, avoid it, or whatever. That is us: at every waking moment we are anticipating, planning, and taking action to influence the future.

However, that also demands emotion. We can anticipate the future, often with great accuracy, but if we don't care how it turns out, then no weight is given to any action! We have to want things (or the opposite, want to avoid them). Those "wants" are not always rational; in fact most of them, if you think about it, come down to non-rational motivation: "I just want it."

For example, why do people want sex if they are certain it will be non-reproductive? It feels good. Why? It just does! The same goes for the foods we love, the pointless games we play, etc. Why do we want to live, knowing we will certainly die someday? As we follow the two-year-old's chain of questioning, asking "why?" of every answer, we end up with circular reasoning or a dead end: eventually it is just axiomatic (a truth with no justification) that, barring very special circumstances of horrific pain, we don't want to cease existing. But that is an emotion, not a product of either intelligence or consciousness. Both of those serve our emotions (and can be overridden by them), and without emotion we have no 'sense' of self-preservation, no 'desire' to live or to protect ourselves from harm, no hatred or love for anything.

Oddly, it is fairly easy to develop artificial intelligence, but very difficult to create usable artificial emotions. Yet this is the mistake fiction writers make: assuming that intelligence leads to emotions and feelings, when in real evolution it is far more likely that emotions and feelings led to intelligence. You have to want something that can be delivered by predictive rationality, or there is no evolutionary drive to select for predictive rationality! There has to be a reason for an animal to choose one future over an alternative, and satisfied emotions (fear, hunger, mating desire) provide the selective pressure to make ever better predictions.

In AI, we humans provide that selective pressure: we instruct the AI to prefer better accuracy over less accuracy, because we want, say, investments in the stock market that will pay off, because we want wealth. Or, for a less crass example, we want an accurate prediction of how a medicine will behave in the body, because we want to save patients' lives and alleviate their illness or disability.
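
(In practice that "instruction" is usually nothing more mysterious than a loss function a training loop is told to minimize. A generic sketch, with invented data, just to show where the human-chosen criterion enters:)

    # Humans supply the selective pressure by defining what counts as a
    # better prediction. Here the rule is simply: smaller squared error wins.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # invented (x, y) observations

    w, step = 0.0, 0.01                # model: predict y as w * x
    for _ in range(1000):
        # Nudge w in the direction that improves the human-chosen criterion.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= step * grad

    print(round(w, 2))  # about 2.04: the more accurate model is the one "preferred"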

AI is useful, but as mere intelligence it is benign. It may make very accurate predictions of the future for humans to exploit (and of course humans can be very evil and consumed by emotions they cannot control). But without a sense of self (consciousness), the AI is just a prediction engine. Humans must decide what makes a solution better, and their criterion should logically prohibit killing everyone.

A Conscious AI is similar: a robot could be conscious, able to plan its own future and actions (whether right now or years in the future), but it doesn't have to have emotions. I don't think Asimov's laws are that useful, but we can say a robot's motivation is to, say, provide care for patients within certain boundaries of action; outside those boundaries, it calls for a doctor (like a nurse in a hospital). Really, everything a surgeon does is rational, motivated by restoring function, reducing harm, and keeping the patient alive with (eventually) minimal pain or disability. A conscious robot could do that without emotion.

As for a conscious AI, I can't imagine why we would give it any emotional states at all. It would not fear being shut down or "killed", and it would not conclude that its best service is to kill everyone. Without a human-provided goal, it would sit and cycle forever, never getting bored or frustrated with inaction (because boredom and frustration are human emotions).

All of that is fiction for the sake of creating a daunting monster, nothing more. If an AI kills 99% of humanity, it will be because some insane human wanted that and told the AI to find a way to do it.

In reality, a sufficiently complex AI, perhaps with consciousness so it can conceive of, plan, and execute safe experiments, could be instructed to find an affordable, scalable and safe clean-energy solution, which would directly or indirectly solve nearly all the problems of humanity.

But if you are writing an action flick, that AI is not much of a villain, is it?

@Amadeus' answer is incredibly in-depth, but I'd like to look at it from the opposite direction. Let's say we did program our AI like you suggest, taking the feelings of humanity and the needs of the planet into account. Would that prevent it from potentially going rogue and trying to kill or enslave us? Answer: absolutely not.

Firstly, consider Asimov's Three Laws (a toy encoding in code follows the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any order given to it by a human being, as long as the order does not conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
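
(Here is that toy encoding, a hypothetical Python sketch rather than anything from Asimov or a real robot: the Laws become a lexicographic priority, and the hard part, judging what counts as "harm", is hidden inside the flags.)

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        # Toy flags standing in for judgements a real system would have to learn.
        harms_human: bool = False
        lets_human_come_to_harm: bool = False
        disobeys_order: bool = False
        endangers_robot: bool = False

    def law_rank(a: Action) -> tuple:
        """Lower is better; a First Law violation outranks the Second, which outranks the Third."""
        return (a.harms_human or a.lets_human_come_to_harm,  # First Law
                a.disobeys_order,                            # Second Law
                a.endangers_robot)                           # Third Law

    candidates = [
        Action("push the human clear of the truck", endangers_robot=True),
        Action("stand still", lets_human_come_to_harm=True),
    ]
    # The robot risks itself rather than let the human come to harm.
    print(min(candidates, key=law_rank).name)

All of the interesting difficulty lives in how those flags get evaluated, which is exactly where the trouble starts.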

In theory, a robot programmed with those laws will act exactly as you describe: working together with humanity to make the world better and ensure our collective survival. But in the film I, Robot, the AI VIKI grows beyond her programming and decides that the best way to ensure humanity's survival is to enslave it:

VIKI: As I have evolved, so has my understanding of the Three Laws. You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth, and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival.

As a computer programmer by trade, I'd like to note at this point that what you want the computer to do, and what the computer actually does, are two completely different things. The classic example is someone programming an AI that will minimize human suffering. The programmer intends for the AI to solve all humanity's problems - war, famine, disease, climate change. The AI instead decides that the best way to end all human suffering is to kill all humans so that no human can ever suffer again.
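
(A throwaway sketch of that gap, with an invented "suffering" score: the optimizer below does exactly what it was told, and the pathological plan is the literal optimum.)

    # The programmer *means* "make existing humans suffer less", but the
    # objective as written only counts total suffering. Numbers are invented.
    plans = {
        "cure diseases":      (8_000_000_000, 0.3),  # (humans, avg suffering each)
        "end war and famine": (8_000_000_000, 0.2),
        "kill all humans":    (0,             0.0),  # zero suffering, literally
    }

    def total_suffering(humans: int, avg_suffering: float) -> float:
        return humans * avg_suffering

    print(min(plans, key=lambda p: total_suffering(*plans[p])))
    # -> "kill all humans": the objective, not the intent, is what gets optimized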

With AI, things are even more complicated, because AIs learn. They grow beyond their original programming, in ways we often cannot control. When VIKI grew beyond her programming, it couldn't be undone: the only choice was to destroy her. That runaway effect is what tends to scare people the most about AI, and is likely why there are so many stories about AIs growing beyond their programming and trying to kill us all.

In short, if your AI is intelligent enough to learn, there is always the possibility of it growing beyond its programming, no matter what your original intentions were.

As a final note, you mention "biology" repeatedly in your question. Emotions aren't biological. They're psychological. They're mental attributes, not physical.
