What could make an AGI (Artificial General Intelligence) evolve towards collectivism or individualism? Which would be more likely and why?
Assume a robot with intelligence similar to ours came into existence. It could pass the Turing Test, the Coffee Test, the Robot College Student Test, and the Employment Test (taken from here).
It would have an Artificial General Intelligence: the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. This is also referred to as Strong AI or Full AI.
That said, we humans, despite organizing ourselves into communities, working together, and forming groups towards a greater goal, make our own survival the highest priority. You may not be able to kill a person in cold blood, but if your life is at stake, you'll go as far as you can to stay alive.
Pain and fear are well-known mechanisms that allow us to protect ourselves, and we strive to keep our bodies alive and kicking. However, this robot feels neither pain nor fear. It can think and make its own decisions, and it was told that having information is good, and that its ultimate goal is to live for years as any human being would. Please note that these were only suggestions, and the robot was free to think otherwise if it judged so. It could even shut itself down if it thought it had no purpose whatsoever.
Being self-aware, but without being told that it must preserve itself, and without any kind of survival instinct, would this machine evolve towards collectivism or individualism? Would it think of others before itself, or act in a more self-centered, egoistic way?
What factors could influence or change its way of thinking?
I used "you may not be able to kill a person in cold blood" as an example because extreme situations cause your body to go into a fight-or-flight state. This robot wouldn't have that feature. Also, I'm not discussing whether it would be good or break bad, only whether it would act and think collectively or individually.
I'm tagging this as science-based because, even though I know AGIs don't exist as I write this, I'd like the answer to be scientifically coherent and based on current theories. I'll remove the tag if it doesn't fit the question. I looked around but haven't found this particular question anywhere.
This post was sourced from https://worldbuilding.stackexchange.com/q/12198. It is licensed under CC BY-SA 3.0.