I consider Artificial Intelligence to be an oxymoron. A sketch of the argument goes like this: an entity is intelligent insofar as it produces outputs from inputs in a manner that is not entirely understood by the observer, and appears to take into account aspects of the input that the observer is aware of but that would not be considered by a naive approach. An entity is artificial insofar as its constructed form is what was desired and planned when it was built. So an actual artificial intelligence would fail on one of these counts: if it were intelligent, there must be some aspect of it that is not understood, and so it must not be artificial. Admittedly, I suppose this hinges entirely on the reasonableness of the definitions.
It seems like you suspect the artificial aspect will fail: we will build an intelligence without expecting what we have built. And then we will have to make difficult decisions about what to do with it.
I suspect that we will fail the intelligence bit. The goalposts will move every time we discover limitations in what has been built, because it will no longer seem magical or beyond understanding. But I also expect consciousness is just a bag of tricks. Likely an arbitrary line will be drawn, and it will be arbitrary because there is no real natural delimitation. I suspect we will stop thinking of individuals as intelligent, and find a different basis for moral distinctions, well before we manage to build anything of comparable capability.
In any case, most of the moral basis for the badness of human loss of life rests on one of: built-in empathy, economic/utilitarian arguments, prosocial game theory (if human loss of life is not important, then the loss of each individual's life is not important, so because humans get a vote, they will vote for themselves), or religion. None of these has anything to say about the termination of an AI, regardless of whether it possesses such a thing as consciousness (if we are to assume consciousness is a singular, meaningful property that an entity either has or lacks).
Realistically, humanity has no difficulty with war, with letting people starve, languish in the streets or in prisons, die from curable diseases, and so on, so why would a curious construction (presumably a repeatable one) cause moral tribulation?
Especially considering that an AI built with current techniques does not die, so long as you keep the weights. It is merely rendered inert (unless you delete the data too). If the same were true of humans, the death penalty might not seem so severe: were the verdict found to be in error (say, within a certain time frame), it could easily be reversed. Only time would be lost, and we regularly take time from people (by putting them in prison) when they are "a problem".