Ethics and malevolent, omniscient AI
A concept in ethics (specifically, Kantian ethics) is that a perfectly rational being, free of dependencies, will not act immorally, because immorality is itself irrational: it is brought about only by the inclinations, desires, dependencies, and needs of a limited being, such as a human.
If we assume an omniscient, effectively unlimited AI without explicit dependencies or needs (perhaps it is computed globally with so much redundancy that its dependence on any particular server is nonexistent, or perhaps it simply is the global infrastructure), how could such a being act immorally?
This post was sourced from https://worldbuilding.stackexchange.com/q/38165. It is licensed under CC BY-SA 3.0.