Making a machine more human by making it race-condition prone. Is it enough?
Humans make mistakes; computers don't. So I asked myself: why do humans make mistakes? If we follow instructions one by one, like a single-threaded computer, why do we still make mistakes?

My theory is that this happens because we are always in a race condition: our brains are executing hundreds of tasks at the same time, and we have no mechanism for avoiding race conditions (like synchronization or locking). An example: we have our keys in our hand and we are thinking about putting the garbage bag out, and then we throw the keys in the garbage bin. From a software point of view, this looks like a race condition where the variable "keys" is overwritten by "garbage".

Following this reasoning, a way to make a computer or a robot more human (at least enough to pass the Turing Test) would be to make it race-condition prone, so that it makes mistakes the way a human does: saying things out of context, putting things in the wrong places, hitting the wrong keys on a keyboard.
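One way to picture the keys-in-the-bin example in code (this is my own toy sketch, not anything from the question): two mental "tasks" share a single unsynchronized slot holding the next planned action, and the garbage task's write clobbers the keys task's write before the hand acts. The interleaving is fixed here so the lost update is reproducible:

```python
# Toy model: two mental tasks race on one shared, unlocked "plan" slot.
in_hand = "keys"
pending_action = None  # shared state, no lock or synchronization

def plan_keys():
    """Task A: decide what to do with the keys in hand."""
    global pending_action
    pending_action = "hang on the hook"

def plan_garbage():
    """Task B: the garbage thought overwrites the pending plan."""
    global pending_action
    pending_action = "throw in the bin"

def act():
    """The hand executes whatever plan is in the slot *now*."""
    return f"{in_hand}: {pending_action}"

plan_keys()
plan_garbage()   # interleaved before the hand acts: a classic lost update
print(act())     # keys: throw in the bin -- the human-style mistake
```

With a lock (or by making the plan local to each task) the keys would go on the hook; leaving the slot shared and unguarded is what produces the believable, human-looking error.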
Edit (thanks DaaaahWhoosh!): In short, I'm asking whether, for a machine to become more human (to the point of passing the Turing test), it is enough to make it mistake-prone.
I think this could be implemented with today's technology in a simulated environment (like a simulated house or office), with a group of humans judging whether the simulation is being run by a human or by a machine (or software).
This post was sourced from https://worldbuilding.stackexchange.com/q/32672. It is licensed under CC BY-SA 3.0.