The Possibility of Moral Artificial Intelligence [Classic]

In case you missed it, Patrick Tucker has written an article at the Atlantic about the military’s project to create moral artificial intelligence: robots that can make moral decisions.

For instance, in a disaster scenario a robot may be forced to choose whom to evacuate or treat first, a situation in which it might need some capacity for ethical or moral reasoning.

Wendell Wallach’s book (co-authored with Colin Allen), Moral Machines: Teaching Robots Right from Wrong, argues that the quest to build such machines has already begun.

Wallach:

“Robots both domestic and militarily are going to find themselves in situations where there are a number of courses of actions and they are going to need to bring some kinds of ethical routines to bear on determining the most ethical course of action.”

But I would argue that moral decision-making in humans is not the result of “ethical routines” or any kind of rule following. We act on evolved emotional reactions to situations and then construct post-hoc rationalizations of our intuitive judgments or emotionally driven behaviors.

I find myself asking whether there is an isomorphism between our gut-based judgments and the reasons we construct post hoc to justify them, or whether there is a gap. If there is no gap, then it would seem “okay” to build robots that operate only on “good” reasons, reasons we accept as justifying those actions. Even though they wouldn’t act the way we do when we act morally, they would still act justifiably.

Additionally, I wonder whether acting ethically requires seeing oneself as worthy of ethical consideration and then extrapolating one’s own preferences and so on to another whom one sees as worthy of ethical consideration. If acting ethically works that way, then these moral robots would first have to see themselves and their kind as worthy of moral consideration. So, eventually, they might run a calculus concluding that the greater good is served by sacrificing one human being to save the “lives” of five artificially intelligent and moral machines in, say, the Trolley Problem.
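To make that worry concrete, here is a minimal sketch of the kind of naive utilitarian routine such a machine might run. The Party class, the choose function, and the moral_weight values are invented for illustration; nothing here comes from Wallach’s book or any actual project.

```python
# A toy utilitarian "ethical routine" for a trolley-style choice.
# Purely illustrative: the names and weights are hypothetical,
# not drawn from any real robotics system.

from dataclasses import dataclass

@dataclass
class Party:
    description: str
    count: int
    moral_weight: float  # how much the routine values each "life"

def choose(option_a: list[Party], option_b: list[Party]) -> str:
    """Pick the option that preserves the greater total weighted value."""
    value = lambda parties: sum(p.count * p.moral_weight for p in parties)
    return "save A" if value(option_a) >= value(option_b) else "save B"

# Once machines assign their own kind any nontrivial moral weight,
# five of them can outweigh one human:
humans   = [Party("human bystander", 1, moral_weight=1.0)]
machines = [Party("moral machine", 5, moral_weight=0.25)]

print(choose(humans, machines))  # -> "save B": the routine sacrifices the human
```

The whole question, of course, is whether the machines ever get to set that moral_weight for themselves.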

Noel Sharkey at the Huffington Post:

“The robot may be installed with some rules of ethics but it won’t really care.”

But that is going to seem wrongheaded soon. It’s, I think, a little bit like saying that since our brains are made of neurons and so on, there really isn’t any consciousness there. I think the reason we have the intuition that artificial intelligence does not understand (see Searle) or care is that we know too much about how it works to achieve that understanding or caring. If the thing gets all the behavior right, are we going to say that its behavior doesn’t count as understanding or caring just because we know how its insides work? It might (might!) be that the only reason we continue to possess the intuition that other human beings are conscious is that we do not yet understand the neurological mechanism underlying the apparently conscious behavior we see. But that would mean that once we do understand the neurological underpinning of our consciousness, we will lose the sense that we are conscious and free and so on. I think that is the wrongheaded move.

Instead, we should recognize that the project is to reconcile the “scientific image” — the image of the universe and of ourselves that the various sciences deliver — and our “humanistic image” — the way that we do indeed conceive of ourselves, and very likely must conceive of ourselves, in order for there to be individual agency and society, which would include conceiving of ourselves as free and responsible and conscious.