What evolutionary history would support neophobic sapience?

One of the benefits of humanity is our subconscious need to explore. It is a simple instinct of primitive people wanting to know not only what is in their territory, but also what is around it. From this came greed, ambition, and a bunch of other things humans have. But let's say, for some reason, humans were out of the picture.

Assuming that I want a neophobic species (like rats, as a random example) to evolve sapience, what environment would best support this? Why would a species that not only avoids but fears new things evolve sapience?

This post was sourced from https://worldbuilding.stackexchange.com/q/45004. It is licensed under CC BY-SA 3.0.

1 answer

Basically I agree with @Kys, but I will expand a bit: intelligence (with or without sapience) is effectively the ability to learn predictive patterns.

Those patterns may concern the future or the unknown past. Although predicting the future (whether one second, one minute, or billions of years from now) is obviously useful, we can also use predictive patterns to understand the past: sciences such as geology, astronomy, forensic crime investigation, archaeology, paleontology, and evolutionary biology all use patterns to infer what must have happened. Most of those patterns extend into the future, but not all are predictive of it: for example, evolution does not tell us anything specific about the future, just the generality that mutations will occur and may be adaptive and preserved. The theory of evolution does not tell us whether any species with brains like ours could be smarter than humans. (Size may not matter, and our most amazing prodigies may represent the peak of possible intelligence using neurons.)

Put another way, intelligence is learning predictive abstractions, or "models", of how natural forces (gravity, weather, etc.) work and how other animals will behave and react. These can be useful for survival and successful reproduction. Such learning does not demand consciousness or sapience; in my field, AI techniques are very adept at learning such patterns and trading on them in the stock market. But those systems are not conscious or sapient; they have no sense of self.
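To make that distinction concrete, here is a minimal sketch (in Python with NumPy; the noisy series and the 3-value window are my own illustrative choices, not anything specific to the answer) of a system that learns a predictive pattern: it fits past values of a series to predict the next one. It is "intelligent" only in this narrow, predictive sense, and nothing in it models itself as an actor.

```python
import numpy as np

# Toy predictive "intelligence": learn a pattern from past observations.
# The series and the 3-value window are arbitrary, illustrative choices.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)

window = 3
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

# Ordinary least squares: the learned "model" of the pattern.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Use the model to predict the next value from the last observed window.
next_value = series[-window:] @ weights
print(f"predicted next value: {next_value:.3f}")

# The model predicts, but it contains no representation of itself as an
# actor in the world: prediction without sapience, in the answer's terms.
```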

On this theory, sapience emerges when the patterns learned end up being complex enough to demand a predictive abstraction of yourself as an actor in the outcomes. With yourself as an actor in the model, the prediction becomes an "imagination": imagining the outcomes of your own actions is an exercise of such a model, and it leads to planning and to intentional manipulation of the environment and of others. (In fact, we call people with poor models of themselves, and imaginations that are poor at predicting what will happen or the consequences of their actions, "dumb.")

Consciousness does not require any language; it is just the constant cycling of these predictive models (of yourself as an agent first, then of others, the environment, and the situation) to determine what you will or should do next to accomplish some goal or desire.

Using this model to distinguish between "intelligence" and "sapience/consciousness", we can answer the question: the species does not need to explore, but it does need a high motivation to survive and reproduce.

To develop sapience, the species needs to be (like humans) weak against predators, so it cannot rely on speed, claws, camouflage, or any natural physical advantage at all. It must rely on slightly higher intelligence than its predators: intelligence that lets it predict how they will behave, so it can avoid being ambushed, poisoned (by snakes, insects, spiders), or chased down, or so it can develop unnatural tools (spears, nets, deadfalls, spike pits) that give it a chance against its attackers.

Pre-humans were frequently prey at one point; we were not always hunters.

So you just need a strong evolutionary pressure that makes better predictive models a survival advantage, particularly for a weak species that has nothing else. Neophobia is not an issue: being physically afraid of the new is fine, and it does not prevent one from developing an abstract predictive model of the new thing (a.k.a. "understanding" it). In fact, if there is pressure to expand one's territory, for more space and food for the kids, better models will help do that. So loving a big family can suffice: they don't like the new, but they need the space, they need the safety ("safety" is itself a prediction of the future), and they need the food.

In humans, it is hypothesized that once we used intelligence to conquer most physical threats, it was our social environment (other humans) that created a feedback loop toward higher intelligence, to understand other humans and outdo them in the competition for the resources needed for survival and reproduction. Every advancement in our ability to understand affords a reproductive advantage, but it becomes the standard "floor" within a dozen generations or so, until another mutational advancement comes along, which then becomes the new standard "floor", and so on.

That may have led to our current state: very high intelligence compared with other animals, but for most people still barely enough to hold their own against other humans. We are our own biggest competitors.

A similar thing could happen for a fictional species: small advances in intelligence first afford them survival in a hostile world, but once that world is mostly tamed and controlled, even better predictive models are needed for them to compete against each other for reproductive resources.
