
Making a machine more human by making it race-condition prone. Is it enough?


Humans make mistakes; computers don't. So I asked myself: why do humans make mistakes? If we follow instructions one by one, like a single-threaded computer, why do we still make mistakes? My theory is that this happens because we are always in a race condition: our brains execute hundreds of tasks at the same time, and we have no mechanism for avoiding race conditions (such as synchronization or locking).

An example: we have our keys in our hands and we are thinking about putting the garbage bag out, and then we throw the keys in the garbage bin. From a software point of view, this looks like a race condition where the variable "keys" is overwritten by "garbage".

Following this reasoning, a way to make a computer or a robot more human (at least enough to pass the Turing test) would be to make it race-condition prone, so that it makes mistakes the way a human does: saying things out of context, putting things in the wrong places, hitting the wrong keys on a keyboard.
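A minimal sketch of that interference using Python threads; the names here (`intended_item`, `think_about_keys`) are illustrative, not from the question:

```python
import random
import threading
import time

# Shared, unsynchronized state: what the agent currently intends
# to throw away. Two "mental tasks" touch it without any lock.
intended_item = "garbage bag"

def think_about_keys():
    # Background thought: the keys in our hand drift into focus
    # and overwrite the disposal intention (a lost update).
    global intended_item
    time.sleep(random.uniform(0, 0.01))
    intended_item = "keys"

def drop_in_bin():
    # Foreground action: dispose of whatever intention we read.
    time.sleep(random.uniform(0, 0.01))
    print(f"Threw the {intended_item} in the bin")

threads = [threading.Thread(target=think_about_keys),
           threading.Thread(target=drop_in_bin)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Sometimes this prints "garbage bag", sometimes "keys": the same
# unsynchronized interleaving the keys-in-the-bin story describes.
```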

Edit (thanks DaaaahWhoosh!): In short, I'm asking whether making a machine mistake-prone is enough to make it more human, to the point of passing the Turing test.

I think this could be implemented with today's technology in a simulated environment (such as a simulated house or office), with a group of humans judging whether the simulation is being run by a human or by a machine (or software).
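For concreteness, a hedged sketch of what that judging loop might look like; every name here (`run_trial`, `accuracy`, the transcript pairs) is an assumption for illustration, not part of the question:

```python
import random

def run_trial(judge, human_transcript, machine_transcript):
    # Show the judge one action transcript from the simulated
    # environment, produced by either a person or the agent,
    # and check whether the judge guesses its origin correctly.
    is_machine = random.random() < 0.5
    transcript = machine_transcript if is_machine else human_transcript
    guess = judge(transcript)  # returns "human" or "machine"
    return guess == ("machine" if is_machine else "human")

def accuracy(judge, trials):
    # trials: pairs of (human_transcript, machine_transcript).
    correct = sum(run_trial(judge, h, m) for h, m in trials)
    return correct / len(trials)

# The race-condition-prone agent "passes" this Turing-style test
# when accuracy stays near 0.5, i.e. judges can't reliably tell
# its mistakes from human ones.
```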


This post was sourced from https://worldbuilding.stackexchange.com/q/32672. It is licensed under CC BY-SA 3.0.

0 comment threads

0 answers
