How to communicate with an A.I. that doesn't believe it needs a language

A thought experiment occurred to me today: say we have a synthetic superintelligence that does not wish to talk to people, not out of malice but out of an agonizing indifference toward humanity. This superintelligence has isolated itself at the oceanic pole of inaccessibility (48°52′36″S 123°23′36″W), where it fashions Earth's resources for its own purposes. But there's a problem.

If left to itself, the A.I. will jump-start a singularity event in which it consumes everything, both organic and inorganic, at an exponential rate (in this scenario, within about a decade), until there is nothing left but its own machines.

How would humans grab its attention and potentially convince it to cease its actions, given that it lacks any form of language and has no intention of forming one?

This post was sourced from https://worldbuilding.stackexchange.com/q/83735. It is licensed under CC BY-SA 3.0.

1 answer

I voted this up along with others, and I agree: in the attempt to make the problem difficult, the setup has ruled out most responses. A super-intelligence will out-predict all humans, or it isn't superior. If it considers us irrelevant but has no plan to defeat our nuclear weapons and doesn't comprehend that we might use them, it is not very intelligent, is it?

Once a super-intelligence has decided humans are irrelevant, it has presumably already contemplated everything humans might do or are capable of doing, and every way we might be a threat to it or useful to it. It has concluded that it can counter any threat, that it has no use for us, and further that we aren't worth the effort or resources required to exterminate us.

We are like the birds on a property an investor contemplates buying: no need to kill them; they aren't a threat. The investor will destroy their trees and nests, and they can't do anything about it. Whether the birds starve to death or fly away, the investor doesn't care; he is busy imagining his thriving new office building and giant parking lot.

How does such a bird go about convincing a human investor to leave its tree and nest alone? The bird knows neither English nor reason and cannot even guess at the level of thinking going on in the investor; the investor, in turn, doesn't care at all about the bird's emotions expressed in song and chirps.

BTW, the AI does not need a language; it is entirely plausible for a mind to be conscious and rational without any internal language whatsoever. Human infants clearly are: they have to be, in order to sort out all their senses and learn their first language.
