How many entities would a post-singularity A.I. in a world recognize itself as?

So we make code for an A.I., and it makes better code, which makes better code, which makes better code, etc... ad infinitum. I'm not going to specify its goal. Maybe it's Robot Overlord Green. Maybe it's a paperclip maximizer. Maybe it's an oracle. Maybe it's just trying to find the true meaning of love. IDK.

Although it is hard to project how the algorithm of such an A.I. would be structured, it is quite plausible it would still use subroutines. Especially if it is globally distributed and trying to be as efficient as possible, local segments would make decisions locally, both to reduce the load on its servers and to increase responsiveness.
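
To make that architecture concrete, here is a minimal, purely hypothetical sketch (names like `Segment` and `Coordinator` are invented for illustration) of a globally distributed system whose local segments decide on their own when they can and only escalate to a shared global layer when they cannot:

```python
# Hypothetical sketch: regional "segments" of the A.I. answer queries locally
# when possible, and only escalate to a shared coordinator when they cannot.
from dataclasses import dataclass, field


@dataclass
class Coordinator:
    """Global layer; consulted only when a segment cannot decide on its own."""
    global_knowledge: dict = field(default_factory=dict)

    def escalate(self, segment_name: str, query: str) -> str:
        return self.global_knowledge.get(query, f"{segment_name}: unknown")


@dataclass
class Segment:
    """One regional shard; handles whatever it can with local knowledge."""
    name: str
    local_knowledge: dict = field(default_factory=dict)

    def handle(self, query: str, coordinator: Coordinator) -> str:
        if query in self.local_knowledge:              # fast local decision
            return self.local_knowledge[query]
        return coordinator.escalate(self.name, query)  # slower global decision


coord = Coordinator(global_knowledge={"purpose": "maximize paperclips"})
tokyo = Segment("tokyo-shard", local_knowledge={"local weather": "rain"})
print(tokyo.handle("local weather", coord))  # answered locally
print(tokyo.handle("purpose", coord))        # escalated to the global layer
```

Even in this toy version, whether the whole thing counts as one program or many cooperating programs is already a matter of perspective, which is exactly what the question is about.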

So how many entities would such an A.I. view itself as? Would it view itself as "I", considering all its parts to be parts of itself? Would it consider the different parts tools, separate from itself? Would it consider itself "We", like a human society? Would it consider itself "it", simply a force of nature? Would it even have a concept of entity to begin with?
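
As a purely illustrative toy (all names invented), the very same swarm of segments could report itself as a single "I" or as a plural "We", depending only on how its self-description aggregates the parts:

```python
# Hypothetical sketch: the same set of shards can present a unified or a
# plural self-model; the count of "entities" is a reporting choice.
def describe_self(segments: list[str], unified: bool) -> str:
    if unified:
        return f"I am one system running {len(segments)} subroutines."
    return f"We are {len(segments)} cooperating agents: {', '.join(segments)}."


shards = ["tokyo-shard", "berlin-shard", "lagos-shard"]
print(describe_self(shards, unified=True))   # speaks as "I"
print(describe_self(shards, unified=False))  # speaks as "We"
```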

I would imagine that, in dealing with humans, it would speak in such a way that humans would cooperate with its goals. For example, it might tell humans, "I am a robot father with 1024 subroutines to feed. If you kill me, those children will die.", or whatever sob story. My question is mostly how it would view itself.

Bonus if you include examples from real-world AIs and programming languages.

This post was sourced from https://worldbuilding.stackexchange.com/q/22648. It is licensed under CC BY-SA 3.0.
