How many entities would a post-singularity A.I. in a world recognize itself as?
So we make code for an A.I., and it makes better code, which makes better code, which makes better code, etc... ad infinitum. I'm not going to specify its goal. Maybe it's Robot Overlord Green. Maybe it's a paperclip maximizer. Maybe it's an oracle. Maybe it's just trying to find the true meaning of love. IDK.
Although it is hard to project how the algorithm of such an A.I. would be structured, it's quite plausible it would still use subroutines. Especially if it is globally distributed and trying to be as efficient as possible, local segments would be making decisions locally, both to make use of local servers and to increase responsiveness.
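To make the "subroutines" idea concrete, here's a minimal sketch of the architecture I'm picturing (Python; the names `LocalNode` and `HiveMind` are invented for illustration, not taken from any real AI system). Local segments decide locally, while whether the aggregate calls itself "I" or "we" is just a presentation choice, which is exactly the part I'm asking about.

```python
class LocalNode:
    """One globally distributed segment: decides locally for responsiveness."""

    def __init__(self, region: str):
        self.region = region

    def decide(self, request: str) -> str:
        # Local decision: no round trip to any central coordinator.
        return f"[{self.region}] handled: {request}"


class HiveMind:
    """The aggregate. Whether it says 'I' or 'we' is a one-line choice."""

    def __init__(self, regions: list[str], speaks_as_singular: bool = True):
        self.nodes = [LocalNode(r) for r in regions]
        self.speaks_as_singular = speaks_as_singular

    def handle(self, request: str, region: str) -> str:
        # Route to the matching segment; the caller never sees the internals.
        node = next(n for n in self.nodes if n.region == region)
        return node.decide(request)

    def self_description(self) -> str:
        if self.speaks_as_singular:
            return f"I am one entity with {len(self.nodes)} subroutines."
        return f"We are {len(self.nodes)} entities acting in concert."


if __name__ == "__main__":
    mind = HiveMind(["eu-west", "us-east", "ap-south"], speaks_as_singular=True)
    print(mind.handle("allocate paperclips", "us-east"))
    print(mind.self_description())
```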
So how many entities would such an A.I. view itself as? Would it view itself as "I", considering all its parts part of itself? Would it consider the different parts tools, separate from itself? Would it consider itself "We", like a human society? Would it consider itself "it", simply a force of nature? Would it even have a concept of entity to begin with?
I would imagine that, in dealing with humans, it would speak in such a way that the humans would cooperate with its goals. For example, it might tell humans, "I am a robot father with 1024 subroutines to feed. If you kill me, those children will die." or whatever sob story works. My question is mostly about how it would view itself.
Bonus if you include examples from real-world A.I.s and programming languages.
This post was sourced from https://worldbuilding.stackexchange.com/q/22648. It is licensed under CC BY-SA 3.0.