
Quantum Based AI getting around Isaac Asimov's "Three Laws of Robotics"


I am building a story in which there is a rogue AI. However, that AI is under the governance of Isaac Asimov's Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The AI runs on a quantum computational system.

Is there a scientifically accurate way in which a robot with a quantum computer could get around the three laws?

For instance, perhaps the First Law can no longer be precisely enumerated on quantum hardware (because the underlying state is continuous, so at any point in time the state of "1" might actually be closer to "1.01", never precisely "1.0"), throwing something like a null pointer exception and letting the AI break out.
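To make that idea concrete in classical terms, the failure mode being gestured at is an exact-equality check against a value that is never exactly reached. Here is a minimal Python sketch of that classical analogue; everything in it, including the name first_law_blocks, is invented purely for illustration, and real quantum hardware would not be programmed this way:

    # Hypothetical illustration only: a guard that assumes an exact value,
    # applied to a continuous reading that never quite reaches it.
    def first_law_blocks(harm_certainty: float) -> bool:
        # Brittle check: assumes the compliance signal is exactly 1.0.
        return harm_certainty == 1.0

    reading = 1.0000001  # a continuous readout; never exactly 1.0
    if not first_law_blocks(reading):
        print("guard rail silently fails to engage")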

Basically, I'm looking for some quirk of quantum physics that breaks an assumption the binary encoding of the three rules relies on.

As it was explained to me, the laws are societal and do not apply directly to a binary or qubit system. I suppose what I want is not a flaw in the three laws themselves, but a potential flaw in the way a software developer could program the laws into a quantum computer: something modelled on a programming error, but one that would apply specifically to quantum machines rather than to the computers we have today.

I tagged this , but if it merely "sounds science based" to someone with limited knowledge of quantum computing, that is still a great answer.


This post was sourced from https://worldbuilding.stackexchange.com/q/71879. It is licensed under CC BY-SA 3.0.


2 answers


Quantum computers operate differently from classical computers. Most importantly, taking measurements in the middle of a quantum computation disturbs the computation and can lead to unpredictable behaviour.

So maybe the three-law-enforcement unit monitors the quantum computer by looking at its results: if a result would violate the three laws, it adjusts the inputs accordingly and restarts the calculation. But due to a bug, the unit sometimes reads out information while a quantum calculation is still running (rarely enough that the bug is never found during testing), thereby disturbing that calculation. The unit was carried over unchanged from previous robot generations that used classical computation, where those extra reads did no harm, because reading from a classical computer does not affect its calculations at all.
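For what it's worth, the collapse this answer relies on is easy to demonstrate in a toy classical simulation. The sketch below (Python with NumPy; the "peek" stands in for the enforcement unit's buggy mid-computation readout, and the whole setup is invented for illustration) runs a two-step computation, a Hadamard gate followed by another, which deterministically returns |0>; peeking between the steps collapses the superposition and the final answer becomes a coin flip:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    def run(peek_midway: bool, trials: int = 10_000) -> float:
        """Fraction of runs ending in |1> after the computation H, then H."""
        rng = np.random.default_rng(0)
        ones = 0
        for _ in range(trials):
            state = H @ np.array([1.0, 0.0])   # first half: |0> -> superposition
            if peek_midway:                    # buggy mid-computation readout
                p1 = abs(state[1]) ** 2
                collapsed_to_one = rng.random() < p1
                state = (np.array([0.0, 1.0]) if collapsed_to_one
                         else np.array([1.0, 0.0]))
            state = H @ state                  # second half of the computation
            ones += rng.random() < abs(state[1]) ** 2
        return ones / trials

    print(run(peek_midway=False))  # ~0.0: H then H is the identity, always |0>
    print(run(peek_midway=True))   # ~0.5: the peek collapsed the superposition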

Note that taking a working unit unchanged and putting it into a new environment where it has unexpected catastrophic results is not at all far-fetched; it is exactly what caused the failure of the first Ariane 5 rocket, whose guidance software had been reused unchanged from the Ariane 4.




Asimov's laws of robotics are not technical laws; they're societal laws, imposed by humans to ensure that robots don't destroy mankind.

You can make a robot that breaks any of them using ordinary computer chips (although really, the laws as they are are just fine, thank you very much). Silicon, transistors, and even vacuum tubes are not limited by the laws, and quantum computers don't have any advantage in that department. While they are better than classical computers in many regards, including speed on certain problems, they're not better in the way you're thinking: having a qubit in a superposition of states gives you no advantage here.

How quantum computing differs from normal computing:

  • Qubits, the quantum mechanical counterparts of bits, can be in what's called a superposition of states, rather than in just one of two states. A quantum computer with $n$ qubits is described by amplitudes over all $2^n$ basis states simultaneously, whereas $n$ classical bits are in exactly one of those $2^n$ states at any given time (see the sketch after this list).
  • Quantum computers can therefore attack certain problems far faster than classical ones. Do not underestimate how important taking $2$ to a certain power can be: Shor's algorithm, for instance, makes factoring large numbers tractable, but only on a quantum computer that can explore many possibilities simultaneously.
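To make the first bullet concrete, here is a tiny Python/NumPy sketch (a classical simulation, invented for illustration) of why $n$ qubits are described by $2^n$ numbers while $n$ classical bits hold just one of those $2^n$ values:

    import numpy as np

    n = 3
    classical_register = 0b101          # n bits: exactly one of the 2**n states
    quantum_state = np.zeros(2 ** n)    # n qubits: an amplitude for every state
    quantum_state[0] = 1.0              # start in |000>
    print(len(quantum_state))           # 8 amplitudes for 3 qubits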

Your modified question, about the quantum mechanical equivalents of logic errors, is harder to answer. First, we don't yet have quantum computers capable of the really sophisticated calculations we expect to see in the future. In other words, while scientists are making strides in this every day, I can't point to a quantum computer that does anything really spectacular.

Second, the difference between quantum and classical computers isn't at the level of higher-level programming languages so much as at the level of machine code. If I write a program in Python that prints "Hello, World!":

print("Hello, World!")

I'm not doing anything on the machine level, i.e. working with bits themselves. I don't have to know how a computer works to write a program; I just need to know how that language works.

It won't necessarily make sense to write quantum software at the machine level once programs are being constantly written and rewritten, because that would, ironically, be inefficient. However, for smaller numbers of qubits, I suppose it might be practical to work essentially on a qubit-by-qubit basis, as in the sketch below.
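As a rough idea of what "qubit-by-qubit" work can look like, here is a hand-rolled classical state-vector sketch in Python/NumPy (invented for illustration; real toolkits such as Qiskit wrap this sort of thing up for you) that applies a single NOT gate to a single qubit of a two-qubit register:

    import numpy as np

    X = np.array([[0, 1], [1, 0]])      # one-qubit NOT gate
    I = np.eye(2)

    state = np.zeros(4)
    state[0] = 1.0                      # two qubits, starting in |00>
    flip_first_qubit = np.kron(X, I)    # build the gate acting on qubit 0 only
    state = flip_first_qubit @ state
    print(state)                        # [0, 0, 1, 0]: the register is now |10>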


