
How viable would an analog computing revolution be?

I am trying to write a sci-fi setting in a not-so-distant future in which analog signal processing (of brainwaves, in this case) is one of the main points of the plot and is pretty much required to explain some of the mechanics of the universe.

The thing is, analog-to-digital conversion is expensive, and it compresses the data into an easier-to-handle finite set of values, dropping some information in the process. That is something I don't want, since some of the mechanics required to develop the plot depend on subtle differences in a person's brainwaves (used, in this case, as a sort of biometric key). That last part could be avoided by just throwing more resources at a regular digital computer, but that's lazy and not what I want, as an analog computer would allow for more interesting details and implications.
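
To make the information-loss argument concrete, here is a minimal sketch (my own illustration, not part of the setting) of how a detail smaller than one quantization step of an 8-bit converter mostly disappears after digitization; the signal shapes and amplitudes are made up:

```python
import numpy as np

# Illustration only: an 8-bit ADC collapses a continuous signal onto 256 codes,
# so a feature smaller than one quantization step is mostly invisible afterwards.
t = np.linspace(0.0, 1.0, 1000)
carrier = np.sin(2 * np.pi * 10 * t)          # the "brainwave"
subtle  = 1e-4 * np.sin(2 * np.pi * 50 * t)   # a tiny biometric detail

def quantize(x, bits=8):
    levels = 2 ** bits
    codes = np.round((x + 1.0) / 2.0 * (levels - 1))   # map [-1, 1] onto integer codes
    return codes / (levels - 1) * 2.0 - 1.0            # reconstruct the "digital" value

step = 2.0 / (2 ** 8 - 1)
print(f"quantization step is about {step:.4f}, subtle detail amplitude is 1e-4")
changed = np.mean(quantize(carrier) != quantize(carrier + subtle))
print(f"fraction of samples whose code changes at all: {changed:.3f}")
```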

Just how viable would it be for the world to go back to analog? We had analog computers in the past, but we switched to digital because they weren't designed with programmability in mind (they were more like ASICs), and digital soon became better than analog, so there was no reason to keep improving a deprecated technology. Likewise, most of our telecommunication devices operate on waves and analog signals, but the signal is converted to digital at some point in the process (e.g. at the modem) and loses the properties an analog signal has.

DARPA tried to build an analog "cellular neural network" CPU (project UPSIDE) for computer vision back in 2012, but there is not much information about it. Apparently it allows for much higher speeds at a lower energy cost, at the expense of occasional errors and what has been described as a very different way of tackling problems. The problem is that the available material says nothing about how programmable it is (it apparently is, but it doesn't say whether it's Turing-complete by itself). In addition, it seems to be a hybrid analog-digital computer, which is the concept I initially thought of including in my story.

In the future, could we see the following things? How superior would they be to their digital counterparts? Would they have any limitations?

  • A purely analog CPU (could it run the programs we run today? That is, would it still be usable as a PC?)
  • A hybrid analog-digital CPU, where the two parts complement each other depending on the problem at hand
  • Analog RAM/storage, either hybrid or purely analog. Could it be made persistent, for example with memristors? How would that work, anyway?
  • Truly analog telecommunications. I know they are impractical because of signal noise, but let's assume we have a reliable way to compensate for that, such as algorithms capable of discerning the real signal
  • A holographic CPU? Both digital and analog. I know a digital optical CPU is viable in theory, but I have no idea about an analog one; I assume it could operate on light frequency/color or something
  • Analog-oriented programming languages. How different would they be, if at all, from the programming languages we use today? Could a unified analog-digital language exist, where a smart compiler decides whether to target the analog or the digital CPU, similar in spirit to what hUMA-aware compiler optimizations already do? (A rough sketch of what that dispatch might look like follows this list.)
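
To be clear about what I mean by that last bullet, here is a purely hypothetical sketch; `Task`, `dispatch`, and the placement heuristic are all invented for illustration and are not based on any real toolchain:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    tolerates_noise: bool     # analog is fine with small errors
    needs_exact_result: bool  # e.g. bookkeeping, cryptography

def dispatch(task: Task) -> str:
    """Toy placement heuristic standing in for the 'smart compiler':
    noisy, throughput-bound work goes analog, bit-exact work stays digital."""
    if task.needs_exact_result:
        return "digital"
    return "analog" if task.tolerates_noise else "digital"

jobs = [
    Task("wavestream filtering", tolerates_noise=True,  needs_exact_result=False),
    Task("filesystem metadata",  tolerates_noise=False, needs_exact_result=True),
]
for job in jobs:
    print(f"{job.name} -> {dispatch(job)} unit")
```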

Mind you, while this setting isn't supposed to be hard sci-fi at all, it isn't fantasy science either. Whatever the answers are, they should be at least remotely viable in reality, and more specifically doable within the next 70 years or so, although computer science could always produce some breakthroughs. No suspension of disbelief should be required to enjoy the setting, even for a reader somewhat knowledgeable about the field.

Note: I guess you could draw some parallels between analog computing and quantum computing, since both seem to work best (or only) with probabilistic rather than deterministic algorithms, but this question is not about quantum. Quantum technology exists in this setting, but it's extremely rare and only used in specific contexts, not to mention that most of it is extremely experimental and the general public is oblivious to the existence of the few somewhat viable prototypes.


Edit: to be more specific, the context and use case of this technology is that user input is now handled through a matrix of electrodes implanted in the brain, capable of reading the user's brain activity/thoughts. The software handling the output of this matrix already tries to transform brain activity into a sort of "universal brain language" that papers over the differences between human brains, but it still requires an analog/real-valued/wave signal for fine precision (not error-free, but descriptive) and high throughput. Analog signals were chosen because the brain easily recovers from small errors and discrepancies, and because they are closer to the way the human brain works.

However, due to limitations of the feedback system, lag and slow transmission are something you would generally not want in your wetware, which is why digital signals were discarded: the electrode array requires a continuous stream of data, so buffering and compressing a signal before sending it over the network would make the brain "halt" while waiting for the next sample, essentially damaging the psyche in the long run (think of being deprived of your senses or mind-frozen in place every two seconds, which is what digital transmission could do, compared to seeing some static in your viewport every few seconds, which is what you might get on analog).
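
To illustrate the "halting" argument with numbers (all of them invented, just to show the shape of the tradeoff between a framed digital link and a continuous analog feed):

```python
# Illustrative figures only; nothing here is canon for the setting.
sample_rate_hz = 20_000          # hypothetical electrode sampling rate
frame_samples  = 4_096           # samples buffered per digital frame
codec_delay_s  = 0.015           # hypothetical compression + network latency per frame

buffering_delay_s = frame_samples / sample_rate_hz
digital_gap_s = buffering_delay_s + codec_delay_s
print(f"digital path: brain waits roughly {digital_gap_s * 1000:.0f} ms between frames")
print("analog path: continuous stream, so the worst case is transient noise, not a gap")
```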

This is also why computers operate on their continuous analog output stream concurrently, to reduce the time the user waits for a reply, and why all algorithms that read directly from the user's wave are concurrent as well: it is better to update the wave late than to halt it until the answer has been processed. In addition, due to the nature of the brain, a thought or sequence of thoughts can be read and predicted while it is being formed, but can't be confirmed until it is fully formed. This detail is extremely important, as the plot device is built around this fact.

Think of the exchange of information between a computer and a human as a regular conversation between two humans (it would be more like telepathy, but for simplicity's sake, let's assume they are just speaking).

  1. Computer is just patiently nodding as the human talks to it, representing continuous feedback
  2. Human: Del...
  3. Computer predictions: I am 85% sure Human is going to ask me about deleting something.
  4. Human: Delete...
  5. Computer thinks: Deletion command confirmed.
  6. Human: Delete file...
  7. Computer predictions: I am 70% sure Human will ask to delete a single file; there is a 30% chance Human is asking me to delete a whole filesystem instead.
  8. Human: Delete file /home/myUser/delt...
  9. Computer thinks: File deletion confirmed. 52 files fulfilling the FILEPATH=/home/myUser/delt* criteria detected.
  10. Human: Delete file /home/myUser/delta.bwave
  11. Computer thinks: Filepath detected. Request for the deletion of file /home/myUser/delta.bwave confirmed. Initiating deletion.
  12. Computer continues nodding for a fraction of a second before replying
  13. Computer: File successfully deleted.

What really happened here is that the user made a request to delete a specific file. As the user formed their sentence, the computer was already making all the necessary preparations for executing it, much in the way we humans converse: we can identify a word by its stem before the word is completely formed, so we can more or less guess what will come next, but we can't fully understand the implications of that word until we hear all of its morphemes (if any). Likewise, we can take a guess at which word may come next and try to understand what the other person is telling us, but we won't be sure about the specifics until the whole sentence is complete; a single sentence might, in turn, shed some light on the context of the topic at hand, and so on. Once the user's request was complete, the object vanished from the user's viewport within a fraction of a second.
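
As a toy illustration of this "predict while the thought forms" loop (the command table and the confidence math are invented; this is not meant to be how the in-world OS actually works):

```python
# Speculative interpretation of a partially formed request.
COMMANDS = ["delete file", "delete filesystem", "defragment", "describe"]

def predict(partial: str):
    """Return (candidate, confidence) pairs for a partially formed request."""
    matches = [c for c in COMMANDS if c.startswith(partial.lower())]
    if not matches:
        return []
    confidence = 1.0 / len(matches)            # uniform guess over surviving candidates
    return [(c, confidence) for c in matches]

for fragment in ["de", "delete ", "delete file"]:
    guesses = predict(fragment)
    best = max(guesses, key=lambda g: g[1]) if guesses else None
    print(f"{fragment!r:15} -> {guesses}  speculate on: {best}")
```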

This point is extremely important because the hackers of the future will attempt to trick the machine into doing something else (or just bring it to a halt) by surprising it with some sort of "punchline". Since security programs are concurrent, they can't really grasp the full scope of the user's actions until it's already too late. Think of it like setting a trap for the enemy king in chess over several turns: most of the "illogical" moves made earlier only start to make sense the moment the trap springs. The paragraph talking about cranes was actually about the birds and not the lifting machines, but the computer could never have seen that coming, since it mostly operates on sentences rather than on contexts as large as a paragraph or a short text; generally, the precision of its predictions drops drastically as the scope grows, although it can still operate on larger scopes if specifically programmed to do so.

To identify what the user is trying to say, a modern CPU incorporates a neural network that allows the OS to retroactively make sense of words after hearing a string of letters. More often than not, this is abstracted away from userland programs through libraries and APIs, although they may get access to the raw wavestream depending on their permissions.
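
For instance, a userland API might look something like the following sketch; every name here (`WaveSession`, `on_token`, the `wave.raw` grant) is hypothetical:

```python
class WaveSession:
    """Hypothetical userland handle onto the decoded brainwave stream."""

    def __init__(self, permissions):
        self.permissions = set(permissions)
        self._handlers = []

    def on_token(self, handler):
        # Most programs only ever see reconstructed words/intents via callbacks.
        self._handlers.append(handler)
        return handler

    def raw_stream(self):
        # Direct access to the analog wavestream is a privileged operation,
        # roughly analogous to opening a raw socket today.
        if "wave.raw" not in self.permissions:
            raise PermissionError("raw wavestream access requires the 'wave.raw' grant")
        raise NotImplementedError("hardware-specific; left out of this sketch")

session = WaveSession(permissions={"wave.tokens"})

@session.on_token
def on_intent(token):
    print("decoded intent:", token)
```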

The "biometric authentication" system I mentioned before actually operates on big segments of the stream. Since the automatic conversion to "universal brain language" reduces (not removes!) the variance between user brains, trying to identify a user by these differences alone is impossible (not to mention that the, although small but random, noise the line may have, such a level of detail would be impossible). This is why the user authentication software operates on a larger set of thoughts: it detects the approximate state of mind of the user (excited, angry, relaxed, etc) and "mannerisms" they may have. This is more or less the equivalent of the accent a person may have or stylometric analysis of their texts: it identifies them with a high degree of precision, but it's not infallible. Hackers may again try to disguise themselves as the system operator of a device by using meditation techniques to appear as if they were thinking like the legitimate user of said computer.

This "universal brain language" I talk about would be more or less like any human language (such as English). It encodes information so everyone can understand it, but it's not digital because the way you speak it may say something more about your message than what the language can express. That means, in the conversation example, the user may be thinking of deleting as symbol A with modifying factor B, while the software translates it to symbol X with modifying factor Y (which may be equal to B, although I haven't thought of that yet. I don't think it actually matters). The modifying factor is what tells the computer that you didn't just think of deleting, but that it also seems to sound as if the user was somewhat distressed or angry: it is analog metadata that would be difficult to translate to digital without butchering its meaning. Here is where the CPU's neuronal networks try to take a guess about what does this metadata mean, much in the same way a human would try to guess what does that tone of voice mean; it may be easier to guess when the modifying factor is stronger.

What I originally meant with this question is: how could the CPU process this brainwave? Could some technology operate directly on the wave through analog programs, or would conversion to digital be required in every case? Mind you, the CPU has a digital coprocessor that handles the problems the analog computer can't process well, although communication between the two may be slightly slower, much as main-memory to on-die-cache transfers are slow. Could the analog CPU be a universal Turing machine, regardless of how practical that would be? Alternatively, if not, would analog emulation on a digital CPU (emulated neural-network simulations, like a partial brain simulation) be the only way to tackle the problem? In addition, could information about a wave be persistently stored somewhere? Could the stored wave actually be stored as a wave, and not as a "parametrization" of one?

This post was sourced from https://worldbuilding.stackexchange.com/q/35410. It is licensed under CC BY-SA 3.0.
