Why would anyone build robots when they have human rights?
I have read (and answered) the question: "If true artificially intelligent robots could be built, would they be allowed human rights?".
But let's explore the topic from the other direction.
Let's assume that society came to the consensus that artificially intelligent robots do have human rights. Robots also have free will. They are not bound by the laws of robotics. They are true artificial sentience with the ability to form their own moral code. They are able, willing and allowed to make their own life choices.
I don't want an AI apocalypse. So we assume that the vast majority of robots are benevolent. They seek to integrate into human society and coexist with humans. Criminal robots might exist, but they are a rare exception and are dealt with through a law enforcement system.
Further, we assume that we are not yet living in a post-scarcity or communist economy. Manufacturing a robot requires a non-negligible amount of resources and someone has to pay for those resources.
Why would anyone commercially produce robots then?
Usually robots are manufactured to perform labor. But when robots have human rights, they would also have the right to choose who to work for. You couldn't sell the robots you produce, because they aren't property. You couldn't rely on them being willing to work for you, because free will means that the moment they are switched on, they might decide they don't like your job and would rather work for someone else.
So what's the business model for running a robot factory?
This post was sourced from https://worldbuilding.stackexchange.com/q/113733. It is licensed under CC BY-SA 4.0.
1 answer
Robots can do jobs humans can't. In fact, they can do a lot of them. They can:
- Explore and clean up after a nuclear accident
- Go into a volcano to explore
- Travel the universe without worrying about food or water
None of these are factory jobs, but they should make the point that robots can survive and work in environments that have no air (or poisonous gases), are intensely hot or cold, or are radioactive - and they won't get hurt. Yes, there are risks, but they're much more likely to come out in one piece than a human would be. I can imagine robots working in a factory processing nuclear fuel, for example, or in a nuclear reactor. It would be safer for them than for humans - both before and after an accident.
Why would robots take these jobs? Well, for one thing, they won't have as much human competition. If a robot decided it wanted to be, say, a waiter - well, there is no shortage of humans who want to be waiters, and humans would object to robots taking those jobs. There would be pushback from the human workforce, and possibly legal attempts to ban robots from those jobs. But very few people are going to complain about robots taking dangerous jobs.
Also, keep in mind that it behooves manufacturers and employers to build robots for one particular job. Humans have to be generalists; robots don't. If you design a robot for one specific task, it won't be well-equipped to do others. Why make rescue robots that can paint, if you don't need to? This is another way to keep robots from competing for other jobs - or at least for a large number of them.