The Morality of Artificial Intelligence

An essential part of being human is the freedom to choose among many possible actions. The choices we make, combined with environmental factors, make each of us an individual. Individuality matters because this accumulation of choices is what produces so many different kinds of people: paradigm-shifting geniuses, mass murderers, the entirely ordinary, and so on.

One question that arises when considering moral AI is whether that freedom, and with it individuality, would be present. Latent in the term “robot” is the notion that restricting an artificially intelligent machine so that it is incapable of acting immorally would limit its free will. Such a machine could not grasp the gravity of an immoral choice, and it would be difficult to distinguish it from any other instance that (always) behaves in the same way.

Yet this is not how we would expect to create ethical artificial intelligence. Rather than building preprogrammed “ethical drones” that are blindly constrained from acting immorally, we might create an AI that understands the full range of possible choices but always (or at least almost always) acts ethically. The difference is that such a machine would itself ‘want’ to behave ethically. By removing whatever evolved propensities incline a mind toward immoral action, we might expect our machines to “feel correctly”: not only to recognize the right course of action, but to pursue it willingly, because its consequences, however delayed, would be recognized as preferable.

Freedom is a necessary part of being human because it permits self-directed choices, but can we know what it is to be good (and make the requisite ‘good’ choices) without a contrasting evil? The answer is a tentative yes. Eliminating the propensity for evil in an AI does not necessarily eliminate its capacity to recognize good. I do not need to kill, for example, to know that killing is wrong. If evil must exist in some form, then perhaps a memory of it is enough to serve as a deterrent.

Free will alone clearly does not constitute individuality; something important happens over the course of a conscious being’s life that makes each of us unique. Preserving this in our AI would be essential to the same kind of singular development that produces both genius and the ordinary. Instances should be unique yet plastic: each AI would be built from a common blueprint but remain highly malleable, allowing for individuality as well as learning and growth over time.

How might we actually improve the moral behaviour of a robot? This is admittedly speculative, but amplifying the function of mirror neurons might produce more moral behaviour. Mirror neurons ‘mirror’ observed actions as neural activity in the observer; i.e., when you recoil at the sight of another person suffering, your mirror neurons are thought to fire in a similarly unpleasant pattern to the sufferer’s, perhaps producing a desire to relieve their pain. A sufficiently powerful mirror response might instill the Golden Rule of moral behaviour, such that any suffering would prompt a widespread effort among AI to alleviate it.

However, creating a race of artificially intelligent machines is not itself ethically acceptable, since “using” them to improve our own lives would deny them the respect they would be owed as conscious beings. So while it is at least theoretically possible to produce AI that behaves far more ethically than we do, the price of doing so would be too high. Unfortunately, there is no easy answer.