
Would super-sized humans be super-intelligent?


A mad, egomaniacal scientist has developed a way of extending the natural growth of humans to create an army of 20 ft (6 m) tall soldiers. Physiologically, they are three times the size of normal people in terms of height and mass. Through genetic enhancements, he has managed to increase the efficiency of the heart and lungs to cope with the increased body mass, and the skeleton is somewhat strengthened.

The brain has also increased in volume.

Given the physiology of the human brain, would increasing its size also increase the intelligence of these people to a whole new level, or would they be of lesser intelligence?

If their intelligence wouldn't naturally increase with size, what can the scientist do in order to meet the required (evil genius) level of intelligence?


This post was sourced from https://worldbuilding.stackexchange.com/q/87336. It is licensed under CC BY-SA 3.0.


1 answer


Most likely not. Like the transistors we build, human neurons have already evolved to the very edge of being as small as they can be while still functioning without being overwhelmed by electrical noise (noise causes errors); and evolution preserves this tiny size (and the myelin coating) for the same reason we prefer tiny transistors: it makes communication between parts faster, getting more done in less time.

If everything in the giants is multiplied by three, their brains' internal communications would run three times slower than a normal human's, making them very slow dullards.

If their neurons are small like ours, they still have a problem: without a complete reorganization of the brain, communications between its remote parts will inevitably take three times as long as ours do. On top of that, neural signals will take three times as long to reach their muscles, slowing their reactions, and inputs (ears, eyes, touch) will take three times as long to reach their neural centers, slowing reaction times further.
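To put rough numbers on the latency argument above, here is a back-of-the-envelope sketch. The conduction velocity is an assumed round figure for fast myelinated fibres, not a measured value, and the path lengths are illustrative; the point is only that delay scales linearly with distance when velocity stays fixed.

```python
# Nerve-signal delay scales linearly with path length if conduction
# velocity is unchanged: tripling the body triples the latency.
CONDUCTION_VELOCITY_M_S = 100.0  # assumed: fast myelinated fibre, ~100 m/s

def signal_delay_ms(path_length_m, velocity_m_s=CONDUCTION_VELOCITY_M_S):
    """Time (in milliseconds) for an impulse to travel path_length_m."""
    return path_length_m / velocity_m_s * 1000.0

human_path = 1.0  # illustrative: ~1 m, spinal cord to foot in a human
giant_path = 3.0  # the same path scaled up 3x in the giant

print(signal_delay_ms(human_path))  # → 10.0 (ms)
print(signal_delay_ms(giant_path))  # → 30.0 (ms)
```

A 20 ms difference sounds small for one signal, but it applies to every round trip between sensors, brain regions, and muscles, so the penalty compounds across everything the giant does.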

Also, most of their larger brain will be taken up processing a much larger volume of body: skin sensors (pain, cold, warmth, pressure) and other housekeeping. There is a strong correlation between brain size and body size for exactly that reason.

Finally, you need some working definition of what super-intelligent means. Borrowing from the field of artificial intelligence, one such working definition is a generalization of the Sherlock Holmes type of ability displayed when investigating crimes. To solve crimes, Sherlock typically spots tiny, obscure clues and translates them into what must have happened. In one case ("Silver Blaze", with the dog that did not bark in the night), he deduces that because a dog did NOT bark at whoever committed the crime, the dog must have known the perpetrator, which, combined with other clues, narrowed the list of suspects to one.

Similarly, intelligence is the ability to solve puzzles, extrapolate from clues, and arrive at theories or models of reality that have a high probability of being correct. This includes "prediction" about the past and present, not just the future: geologists study patterns in rocks and deduce what must have happened (volcanoes, earthquakes, floods). Astronomers study patterns in the light of stars and deduce that supernovae must have occurred hundreds of millions of years ago. Paleontologists study patterns of fossils to deduce what must have gone on hundreds of millions of years ago, here on Earth. We have patterns that let us deduce what must be happening now: a column of smoke indicates a probable fire, even if we do not sense any fire directly. And we have patterns that let us deduce what will happen: in the weather tomorrow, in politics next month, in our health, our economy, our sciences.

As a working definition of higher intelligence, then, we can measure it as better pattern interpretation: a higher probability of being correct when deducing what most likely happened, is happening, or will happen. So, better deductions, or being better at finding the obscure patterns that are useful in such deductions, or that have meaning.

"Meaning" is about ramifications or constraints; when somebody says "Do you know what this means?", they are saying they have identified a pattern that will most likely have specific consequences in the future, which may be good or bad but are, in their mind, highly likely. "Meaning" is also about a distinct difference; e.g. if I want my work to have "meaning", the world should be different (and presumably better) because I did that work; it had impact I consider positive. If somebody says something is "meaningless", they consider it to have no impact and to create no difference in any outcome they care about. (Of course, due to butterfly effects, everything can make some difference in the future, but in human terms "meaningless" and "makes no difference" express the same idea; "meaning" and "made a difference" are the flip sides.)

Back to your story: it is not clear that neural mechanisms can get much better than the best of what we have now. And for soldiers, super-intelligence is not likely to be a good outcome: if they are more intelligent than their creator, they are unlikely to take orders from him for very long, and are likely to outsmart his attempts to control them, control him instead, and implement their own agenda rather quickly and effectively. That is what greater intelligence means: better anticipation of outcomes and reactions, better predictions of which strategies will work, and thus fewer mistakes and greater successes.

When we build and bait a trap, we are anticipating the behavior and reactions of an animal, and the animal's lower intelligence fails to anticipate that taking the bait will have consequences (mechanical for a trap, or biological in the case of spiked bait). It is our ability to correctly predict what will happen (and, because of that ability, to devise a way to make something happen) that costs the animal its life and produces our dinner.

If your giants truly are far more intelligent than ordinary humans and can predict outcomes with three times the accuracy, then your "genius creator" will be no match for them; like a monkey charging a man with a shotgun.


