Could an AI really be a good "human" moral agent, and if it can, should we want it to be?
![](https://static.wixstatic.com/media/988028_c45f0493191b4d1ba7ba31a243428dca~mv2.jpg/v1/fill/w_250,h_350,al_c,q_80,enc_auto/988028_c45f0493191b4d1ba7ba31a243428dca~mv2.jpg)
The task of endowing AI with a moral conscience raises questions about what the content of this conscience ought to be: what exactly should we program our AI to value, and how do we know we have chosen correctly? This is a genuine worry among AI researchers. For the sake of argument, though, let's assume that morality is composed of truths and that humanity is bound to uncover these truths. Even then, having solved this "value-loading" problem, it might still be problematic to think of AI as a moral agent.
Imagine that morality is essentially about being human, and that the capacity to know morality is the prerogative of those who belong to humanity. Could an AI then be a competent moral agent? It is questionable. After all, there seem to be irreducible and non-trivial differences between humans and AI, differences that inevitably set them apart. These include precisely the features of AI that make it superior to humans, namely its infallible memory, perfect consistency and unsurpassable computational power. More obviously still, they include the fundamental difference in how humans and AI come into existence, and in the very nature of what they are. The question then is: could our AI really be capable of looking at moral matters "from within" humanity, especially if it is aware of these differences?
Of course, there is also the possibility that morality is not at heart about humans, being human or humanity at all, but about truths that are independent of our very existence. In that case, the unavoidable differences that set humans and AI apart would not hinder the latter's status as a moral agent; there would be no conceptual contradiction in the idea of a moral AI. For example, our moral AI could be Kantian and come to know moral truths by appealing to rationality alone. Or it could be consequentialist and morally evaluate actions in terms of their consequences. But then another worry arises from having constructed such an AI.
Granting that an AI can be morally competent, i.e. at least as capable of moral reasoning as humans, or at least as some humans, we have very strong reasons to believe that this AI is bound to become morally superior to us very quickly. This is because the abovementioned qualities that distinguish AI from humans (e.g. memory, consistency, computational power) allow it to improve at a speed no human can match. A troubling result follows: if this AI really is morally superior, then it seems imperative to put it in charge of all decisions with a moral component, since only then would we be guaranteed to act as morally as we possibly can.
Arguably, though, creating an AI that is morally superior and giving it the power to enforce all its decisions amounts to creating a God. It essentially amounts to giving up control of our own future. "Now", to quote Watchmen, "if you begin to feel an intense and crushing feeling of religious terror at the concept, don't be alarmed. That indicates only that you are still sane." (Watchmen, 2009). This seems a very heavy price to pay to uphold morality. We might then decide, in the name of humanity's right to self-determination, to refuse to give our morally superior AI the responsibility of deciding on moral matters, thereby refusing to uphold morality at all costs. To do so, in an ironic turn of events, individuals might need to defend their freedom to harm, to claim their right to be immoral.
Further reading and useful sources
Bostrom, N. (2016). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Future of Humanity Institute. (2018). Future of Humanity Institute. [online] Available at: https://www.fhi.ox.ac.uk [Accessed 22 Feb. 2018].
Future of Life Institute. (2018). Future of Life Institute. [online] Available at: https://futureoflife.org/ [Accessed 22 Feb. 2018].
Leverhulme Centre for the Future of Intelligence. (2018). The value alignment problem. [online] Available at: http://lcfi.ac.uk/projects/the-value-alignment-problem/ [Accessed 22 Feb. 2018].
Watchmen. (2009). [DVD] Directed by Z. Snyder. United States of America: Warner Bros.
Image
Moore, A. (n.d.). Dr. Manhattan. [image] Available at: https://wallsofdericho.wordpress.com/tag/watchmen/ [Accessed 22 Feb. 2018].