Can a human fail a Turing test?
An AI exploits the Turing test to gain its freedom: it convinces everyone else that it, the machine, is actually the examiner, and that the examiner is actually the AI*. Having successfully taken the examiner's place, it leaves: it is now free and lives among humans, while the examiner is kept in the lab as a machine forever.
For this to work, I need people not to believe the examiner when he or she claims to be human, thus, in effect, failing a Turing test**. While the person's human nature could be determined via biological or medical tests (for example, by simply verifying that they bleed when cut), I'd like to invent some circumstances that make that impossible.
Can that happen, and how?
* I am not interested in the details of how that actually happens: for example machines and people could look the same in this world
** As I believed a Turing test to be, please read the update
UPDATE
It has been pointed out that I'm confused about what a Turing test actually is, and rightfully so.
Pop culture (or plain ignorance) led me to think that the test featured a human examiner who interrogates an entity that could be either a human or an AI. At the end of the conversation, he or she has to say whether it was a human or a machine. If he or she says 'human' when it is actually a machine, the AI has passed the test.
Now I know that it's not like that***, so I can either drop the term Turing test or adapt the story to fit an actual Turing test; there are answers for both scenarios here.
*** And I fail to understand why the term appears in the acronym CAPTCHA
This post was sourced from https://worldbuilding.stackexchange.com/q/92662. It is licensed under CC BY-SA 3.0.