
Would the need for control limit a superintelligent rational agent's expansion in the universe?


My thoughts go like this:

  • I postulate a superintelligent rational agent that is given, or assumes, control over its mother civilization. I call this agent "the Ruler".

  • I postulate that in order to run efficient local operations far from its own location, the Ruler must allow some degree of local autonomy, and that the larger, more distant and more complex the operations, the greater the autonomy needed. For this, I postulate that it will have to engineer new intelligent agents/minions to serve it.

  • Humanity has not yet solved the Friendly AI problem, so we don't know whether autonomous agents can be trusted at all. Our current world stability and relative peace in a society of autonomous agents (us) may rest upon our mutual vulnerability; MAD (mutually assured destruction) is an example. Also think about the phrase "power corrupts".

  • A superintelligent AI designer has an advantage, namely superintelligence. It does not have to control an intelligence stronger than its own, but can make use of simpler ones. At this stage in the story, however, the Ruler will know for certain that autonomous agents cannot in general be trusted not to create more intelligent agents, and that the intelligence needed to create a superintelligence (or to start an intelligence explosion) is merely at the human level. It may be the case that any agent sophisticated enough to lead a planet or a solar system must be made with some kind of internal reward system (think dopamine) that, deep down, conflicts with the goals of other agents, including its creator.

  • Even if the local agents can be made trustworthy, how can the Ruler and its minions really know that the signals they receive were sent voluntarily by the claimed sender and have not been tampered with? Presumably, many of the signals sent between a mighty local representative and the Ruler will concern the most important orders and events, such as wars and encounters with new species, both of which invite signal tampering. Tampering in air (or vacuum) can probably be ruled out by quantum encryption, EDIT: but I guess signal jamming can't (see the sketch after this list). Signal jamming potentially isolates the local vassal from the Ruler, meaning that the vassal must make decisions on its own, potentially harming the empire.

  • This is not merely a special case of the problem of making decisions under uncertainty. It is a case of risking the ruin of everything while achieving close to nothing in the short term (which here may mean hundreds of millions of years). Why move useful matter away from the best energy source you have?

  • My hypothesis is that a rational Ruler would, by default, not expand into the universe. It could run SETI programs and "unmanned" space probes, and conduct or allow limited operations in the solar system, within reach of non-autonomous (or non-intelligent) surveillance and control measures. A consequence is that it would prevent everybody else from leaving the space it can control directly. (Maybe that "comfort zone" is much smaller than the solar system.)

  • If, on the other hand, some final goal partly overshadows the convergent instrumental goals (a.k.a. "basic AI drives") of power and security, it might be more expansive. I think that is a beautiful thought: only by accepting your own vulnerability can you grow indefinitely. (Hehe, I know I stretched it a bit.) It could also be the case that any of my "maybes" are wrong.

  • If this problem is as hard as I suspect, could it be a solution to Fermi's paradox? By this I mean that superintelligent rational agents are likely to be created by advanced civilizations, that they will prevent space expansion (constituting what Nick Bostrom calls a Great Filter), and that this is the reason we don't meet a lot of aliens even though their civilizations may have existed for hundreds of millions of years.
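
To make the tampering point above concrete, here is a minimal sketch, assuming a hypothetical pre-shared key between the Ruler and a vassal. The names (SHARED_KEY, sign, verify) are illustrative only, and an ordinary HMAC stands in for whatever a superintelligence would actually use. The point is that message authentication lets the receiver detect altered messages, but it says nothing about availability (jamming) or about whether the sender acted voluntarily.

```python
import hmac
import hashlib
import os

# Hypothetical illustration: a shared-key MAC lets the Ruler detect tampering
# on a report from a distant vassal, but offers nothing against jamming
# (the message simply never arrives) or against a coerced-but-genuine sender.

SHARED_KEY = os.urandom(32)  # assumed to have been exchanged securely in advance

def sign(message: bytes) -> bytes:
    """Vassal side: attach an authentication tag to the report."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Ruler side: accept the report only if the tag checks out."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

report = b"Sector 7: first contact with unknown species"
tag = sign(report)

assert verify(report, tag)                      # untampered report is accepted
assert not verify(b"Sector 7: all quiet", tag)  # altered report is rejected
# Jamming is outside this model: if nothing arrives, there is nothing to verify.
```

In other words, cryptography can address integrity and authenticity, but jamming and coercion remain exactly the open problems described in the bullet.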

I have little knowledge of the state of the art in security/trust measures, and I am sure that someone can shed more light on this: that is, on the main question in the title and on the (in)validity of my reasoning around it.

Appendix 1: It is conceivable that the Ruler will at some point decide to migrate to a younger or larger star in order to serve its goals in the long term. And it is not entirely inconceivable that it would leave behind a non-intelligent Dyson sphere power plant. Actually, creating a Dyson sphere, if that is within the "comfort zone" of the Ruler, could power a strong weapon for the suppression of neighboring stars. But anything can happen between the moment the alarm is raised in the other system and the moment the death ray from this system arrives to destroy it. The worst case could be unstoppable death rays going both ways...

Appendix 2: The proposed neutrino-antineutrino beam weapon might become a reality in the future, and it is described as both instant in action and impossible to defend against. This may (even more so than nuclear weapons) force a distributed mind or power structure, possibly with internal trust issues. An alternative is some kind of doppelganger solution, so that almost nobody knows which is the real Ruler, if that is possible.

Appendix 3: I make no assumptions about other agents in general, but the Ruler is allowed to engineer its minions. This is supposed to be a scenario relevant to real life, possibly within the next few centuries. The Ruler is a rational superintelligent agent; it thus represents an upper bound for human intelligence and rationality, and its final goals are deliberately left unspecified in this question. Typically the mother civilization would try to make it friendly to them, but my question applies whether or not they succeed. The strategic conclusions for the Ruler will then, arguably, also apply to an emperor with the same final goals, and, to some extent, to a ruling organization, but I do not intend to take into account conflicts within the Ruler, unless those can be substantiated as a possible consequence of any Ruler (including a superintelligent agent) in the right (wrong) environment. EDIT: Whether or not trustworthiness engineered by a superintelligence constitutes an upper bound on human trustworthiness remains undecided, as far as I can tell. While it is tempting to consider a superintelligence "godlike", the amount of computation spent in millions of years of human/primate/mammal evolution is nonetheless immense. The human genome is said to be around 1 GB (I think; see the estimate below), and even if only 1 kB of it is essential to our inherent trustworthiness (if that really exists at all), finding or generating that relevant code is an enormous task. (One possibility for the Ruler would actually be to engineer human minions, much like the Cylon clones of Battlestar Galactica.)
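
As a rough sanity check on that 1 GB figure (a back-of-envelope estimate, assuming roughly $3.2 \times 10^9$ base pairs stored at 2 bits per base pair):

$$3.2 \times 10^9 \ \text{bp} \times 2 \ \tfrac{\text{bits}}{\text{bp}} \approx 6.4 \times 10^9 \ \text{bits} \approx 0.8 \ \text{GB},$$

so "around 1 GB" is in the right ballpark for the raw sequence, before any compression.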

Appendix 4: I would like to make a stronger connection to Robin Hanson's concept of a Great Filter (first made known to me by Nick Bostrom). The Great Filter model is an answer to Fermi's paradox (basically: "If intelligent life is not extremely rare, why don't we meet or hear from a lot of aliens, given that many solar systems have had hundreds of millions of years to develop and spread life?"). Great Filters are near-universal barriers that must be overcome in order to achieve a space empire. In Drake's equation (for estimating the number of civilizations in our galaxy that can send radio signals into space; see below), many of the terms are fractions, some of them multiplying up to what can be called the prevalence of life-supporting planets (compare the Rare Earth hypothesis). Then there is a probability for the emergence of life on a life-supporting planet. Then, deviating from the original equation, one can consider several further steps, such as the emergence of multicellular life given life, the emergence of intelligence given nervous systems, and so on. All of these stages are candidates for being Great Filters. If, however, we find single-celled life on Mars, then all the early Great Filter candidates essentially collapse. Bostrom's fear is that the greatest filter is ahead of us, and he specifically draws attention to the threat of superintelligence. If anyone here has read Bostrom's book "Superintelligence: Paths, Dangers, Strategies": does he detail in what way he believes a superintelligence constitutes a Great Filter against us observing alien activity, including the activity of the superintelligence itself?
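
For reference, the usual form of Drake's equation is

$$N = R_{\ast} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L,$$

where $R_{\ast}$ is the rate of star formation in the galaxy, $f_p$ the fraction of stars with planets, $n_e$ the number of potentially life-supporting planets per such star, $f_l$ the fraction of those on which life actually emerges, $f_i$ the fraction of those that develop intelligence, $f_c$ the fraction of those that produce detectable signals, and $L$ the length of time over which such signals are sent. The Great Filter picture amounts to saying that at least one of these factors (or a later step not listed here) is extremely small.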

Appendix 5: If it is possible to construct an AI with some (maybe only) strictly external, non-selfish goals, it could be willing to let loose copies of itself on its mission to "spread goodness in the universe". Even though the goals of the agents would be exactly the same, they would still have their individual perspectives on them: individual sensory input, local control capabilities, and judgments based on these. Thus there is a risk of "healthy confrontation".

This post was sourced from https://worldbuilding.stackexchange.com/q/36641. It is licensed under CC BY-SA 3.0.
