Does an AI improving itself count as a Halting Problem paradox?
I am writing a story for school about a human-level AI. I want it to be able to optimize and improve its own code, but does this violate the Halting Problem? Thanks in advance.
This post was sourced from https://worldbuilding.stackexchange.com/q/75855. It is licensed under CC BY-SA 3.0.