
What could make an AGI (Artificial General Intelligence) evolve towards collectivism or individualism? Which would be more likely and why?


Assume a robot with intelligence similar to ours came into existence. It could pass the Turing Test, the Coffee Test, the Robot College Student Test, and the Employment Test (taken from here).

It would have Artificial General Intelligence: the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. This is also referred to as Strong AI or Full AI.

That said, we humans, despite organizing ourselves into communities, working together, and forming groups toward greater goals, make our own survival the highest priority. You may not be able to kill a person in cold blood, but if your life is at stake, you'll go as far as you can to stay alive.

Pain and fear are well-known mechanisms that allow us to protect ourselves, and we strive to keep our bodies alive and kicking. This robot, however, feels neither pain nor fear. It can think and make its own decisions; it was told that having information is good and that its ultimate goal is to live for years as any human being would. Note that these were only suggestions, and the robot was free to think otherwise if it so judged. It could even shut itself down if it decided it had no purpose whatsoever.

Being self-aware, but without being told that it must preserve itself, and without any kind of survival instinct, would this machine evolve toward collectivism or individualism? Would it think of others before itself, or would it act more self-centered and egoistic?

What factors of influence could change its way of thinking?

I used "you may not be able to kill a person in cold blood" as an example because extreme situations push your body into a fight-or-flight state; this robot wouldn't have that feature. Also, I'm not discussing whether it would be good or break bad, only whether it would act and think collectively or individually.

I'm tagging this science-based because, even though I know AGIs don't exist as of this writing, I'd like the answer to be scientifically coherent and based on current theories. I'll remove the tag if it doesn't fit the question. I looked around but haven't found this particular question anywhere.

This post was sourced from https://worldbuilding.stackexchange.com/q/12198. It is licensed under CC BY-SA 3.0.
