Could a self-aware bacterial supercomputer start its own ecosystem?

I know A LOT of this is going to be a stretch, and that I'm probably misunderstanding or exaggerating current real-life trends in science, but that's kind of why I'm asking the question. We know it's possible to use bacteria to form simple logic gates, like those in a computer:

http://www.nature.com/news/how-to-turn-living-cells-into-computers-1.12406
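(For a concrete sense of what I mean by "logic gates," here's a toy sketch of the kind of two-input AND gate described in that article; the inducer names, thresholds, and numbers are just illustrative placeholders, not taken from the paper.)

```python
# Toy model of a two-input bacterial AND gate: the cell expresses a reporter
# protein (e.g., GFP) only when BOTH chemical inducers are above a threshold.
# All values here are made up for illustration.

def bacterial_and_gate(inducer_a: float, inducer_b: float,
                       threshold: float = 0.5) -> bool:
    """Return True if the cell would fluoresce (both inputs 'high')."""
    return inducer_a >= threshold and inducer_b >= threshold

# A colony wired this way computes like any other logic circuit:
for a, b in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    print(f"inputs ({a}, {b}) -> GFP on: {bacterial_and_gate(a, b)}")
```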

Furthermore, the basis of CRISPR, our newest, most powerful tool for genetic engineering, is a process used by bacteria to defend against viruses.

Therefore, extrapolating these two findings, and throwing in the slightly hand-wavey inclusion of a sort of "bio-radio" that allows individual microorganisms to communicate with each other at a distance and form some sort of bacterial hive-mind computer (which may or may not work via some unexplained quirk of quantum entanglement, à la the Bicameral Order of Peter Watts' novel Echopraxia): could this hypothetical hive entity then become self-aware and use its ability to coordinate and communicate among its individual cells to play God by genetically engineering its own designer ecosystem?

This organism is OLD, so time isn't a factor here. It could take this thing millions of years to figure out a method for manipulating its constituent microfauna in a way that emulates CRISPR, and a couple dozen million more years to start engineering life from nothing but its own cells, and we'd still be right on schedule. It also doesn't matter whether the bacterial computer was designed by someone else or evolved naturally. All I need to know right now is whether this idea is plausible enough to hold up a book, or if the science is softer than a bucket of cookie dough.

This post was sourced from https://worldbuilding.stackexchange.com/q/83046. It is licensed under CC BY-SA 3.0.


1 answer


This actually relates to the OP's question; bear with me for a minute.

Self-awareness does not require an internal dialogue or a language in which to speak. Self-awareness is having a mental model (a simulation) of one's own body and mind, which one can use to plan one's own movements and actions, perhaps not perfectly but with more accuracy than inaccuracy. It is the mental ability to treat yourself as an object or a tool in a mental simulation of the future, typically to reach some goal. In language terms, it is "If I do X, then B will happen; then I could do Y, and C would happen." But such thoughts clearly do not require words to execute, and in fact most thoughts do not require words.

If I am trimming a tree, I can imagine balancing and bracing myself by holding onto a branch while I saw at its parent, and see in my mind's eye that at some point my balance and bracing will give way due to my own effort. I can feel stupid for considering it, and choose to switch hands, saw with my non-dominant hand, and brace myself on the other side of the ladder, on the part of the tree that won't fall. I may move the ladder to ensure the falling branch does not knock me off of it. All of that I can do without thinking a word, and it is self-awareness, not "instinct" about how best to remove a dying limb.

In fact, most of the processing we do is not easily conveyed in language at all. A dermatologist, for example, must learn thousands of skin conditions by sight, and there are often no good or definitive words to describe them. I am consciously typing right now, but there are no words in my head about how to type, no dialogue going on related to that action, like "now hit 'e'". An athlete is self-aware when consciously planning a physical action (like diving for a ball, or grabbing a bar or rope whilst flying through the air), but this is almost always thought out in pictures and mental rehearsals of actions, without words.

The internal language most people consider crucial to self-awareness is not a necessity at all. It generally follows in the wake of the non-language processing done by other brain modules; it is not in command of them. Those other modules make use of models (simulators) that, thanks to our language, are also connected to words, so our language center is triggered as a consequence of those simulators being present in a plan. The internal dialogue of "push it out of the way" is our language center's abstraction of a non-language simulated action plan that was formed without any language.

This "In the Wake" process can actually serve to improve our processing; language is a high level abstraction, and as such a generalizer of the simulated action plan: This generalization can cause a double-check using other parts of the brain to examine other details of the simulated action plan: They can identify problems with it; in the wake of those firings the language center can generalize the problem, and cause a refinement cycle. Back to my tree-trimming: Language generalization of the action plan: "I can Hang on point A, saw through branch." Examining simulation: a visual, sawing through the branch causes it to fall; and everything beyond a point to fall, including point A, causing me to fall -- fear. Language: "And when I get through I'm going to fall twenty feet off this ladder, that is not good." New plan: given the visual of the branch falling after sawing, Point B does not fall. Solving the problem of bracing, I must switch hands. Language: "Hold there (B) and saw with your other hand."


The brain is basically a large colony of individual cells, neurons, each of which is a pattern-matching engine seeking very tiny patterns in tens to thousands of input signals (some from sensory organs, mostly from each other). A bacterial colony can exceed the number of neurons in a human brain, so resources are not an issue, and its cells could communicate with one another (likely very slowly compared to neuronal electrochemical signals, but communication nonetheless). But the colony can only be self-aware if it is able to DO things; otherwise it cannot develop an internal simulation of what it is capable of doing. The answer would be "nothing," which requires no simulation.
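A single "pattern matching engine" in this sense can be sketched as a weighted threshold unit. This is the standard perceptron-style abstraction, with made-up weights; it is not a claim about how real neurons or bacterial colonies actually signal:

```python
# A unit that "fires" only when the input pattern it is tuned to is present:
# a weighted sum of many inputs compared against a threshold. Weights and
# threshold here are arbitrary illustrative values.

def fires(inputs, weights, threshold=1.0):
    """Return True if this unit's preferred input pattern is present."""
    return sum(x * w for x, w in zip(inputs, weights)) >= threshold

# A unit tuned to respond when inputs 0 and 2 are active together:
weights = [0.6, 0.0, 0.6, -0.5]
print(fires([1, 0, 1, 0], weights))  # True  -- the preferred pattern
print(fires([1, 1, 0, 1], weights))  # False -- some other pattern
```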

It is unlikely the bacterial colony becomes self-aware without some mechanism for moving itself or manipulating things in its environment, or for learning how things work (i.e., developing reasonably accurate simulations of outcomes). IRL, animals that do not seem to be self-aware are still full of models: of how their prey (or their predators) behave, how to get around without falling over, how to run away or attack, how to find a mate, and how to mate. They can still have thoughts and emotions, but without a simulation of themselves they are very poor at consciously planning more than a single directed action; consequences to their own future state are not perceived.

However, their neural simulations (models) of natural things (the physics of how things move, weather, plants, and other animals) are the precursor to a self-simulation, and therefore to self-awareness, consciousness, and better intelligence. Your bacterial colonies cannot act or defend themselves, so they need no simulations. That is why they are unlikely to develop even rudimentary intelligence.

You need to give them a way to manipulate their environment, so that even the most basic simulations and predictions of the future become a useful tool that increases their survival. If that kind of intelligence does not let them behave differently, then, from an evolutionary standpoint, sustaining that kind of organizational complexity is a pointless waste of energy, and it will deteriorate.


