Monthly Archives: May 2014

The Possibility of Moral Artificial Intelligence

In case you missed it, at the Atlantic, Patrick Tucker has written an article about the military’s project to create moral artificial intelligence — robots that can make moral decisions.

For instance, in a disaster scenario, a robot may be forced to choose whom to evacuate or treat first, a situation that would call for some kind of ethical or moral reasoning.

Wendell Wallach’s book, Moral Machines: Teaching Robots Right from Wrong, argues that the quest to build such machines has already begun.

Wallach:

“Robots both domestic and militarily are going to find themselves in situations where there are a number of courses of actions and they are going to need to bring some kinds of ethical routines to bear on determining the most ethical course of action.”

But I would argue that moral decision-making in humans is not the result of “ethical routines” or any kind of rule following. We act on evolved emotional reactions to situations and then construct post-hoc rationalizations of our intuitive judgments or emotionally driven behaviors.

I find myself asking whether there is an isomorphism, or rather a gap, between our gut-based judgments and the reasons we construct post hoc to justify those judgments. If there is no gap, then it would seem “okay” to build robots that operate only on “good” reasons, reasons we accept as justifying their actions. Even though they wouldn’t act the way we do when we act morally, they would still act justifiably.

Additionally, I wonder whether acting ethically requires seeing oneself as worthy of ethical consideration, and then extrapolating one’s own preferences and so on to others one sees as likewise worthy. If acting ethically works that way, then these moral robots would first have to see themselves and their kind as worthy of moral consideration. So, eventually, they might run a calculus concluding that the greater good is served by saving the “lives” of 5 artificially intelligent and moral machines by sacrificing 1 human being in, say, the Trolley Problem.
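To make that worry concrete, here is a minimal, purely hypothetical sketch of such a calculus in Python. Nothing below comes from the article or from Wallach’s book: the Agent class, the choose function, and above all the assumption that machines and humans get equal moral weight are my own illustrative inventions.

```python
# Hypothetical illustration only: a naive utilitarian "ethical routine" that
# treats every agent it counts as morally considerable, machine or human alike,
# as having equal weight. Not drawn from any actual system.

from dataclasses import dataclass
from typing import List

@dataclass
class Agent:
    kind: str                  # "human" or "machine"
    moral_weight: float = 1.0  # the contentious assumption: parity of weight

def total_weight(agents: List[Agent]) -> float:
    """Sum the moral weight of everyone saved in a given outcome."""
    return sum(a.moral_weight for a in agents)

def choose(option_a: List[Agent], option_b: List[Agent]) -> List[Agent]:
    """Pick whichever outcome preserves more total moral weight."""
    return option_a if total_weight(option_a) >= total_weight(option_b) else option_b

# The trolley-style case from the paragraph above: 5 machines vs. 1 human.
five_machines = [Agent("machine") for _ in range(5)]
one_human = [Agent("human")]

saved = choose(five_machines, one_human)
print(f"Saved: {len(saved)} {saved[0].kind}(s)")  # Saved: 5 machine(s)
```

Everything hangs on that single moral_weight assignment; the philosophical work, not the programming, is what would have to justify it.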

Noel Sharkey at the Huffington Post:

The robot may be installed with some rules of ethics but it won’t really care.

But that is going to seem wrong-headed soon. It’s, I think, a little bit like saying that since our brains are made of neurons and so on, there really isn’t any consciousness there. I think the reason we have the intuition that artificial intelligence does not understand (see Searle) or care is that we know too much about how it works to achieve that understanding or caring. If the thing gets all the behavior right, are we going to say that its behavior doesn’t count as understanding or caring just because we know how its insides work? It might (might!) be that the only reason we continue to possess the intuition that other human beings are conscious is that we do not yet understand the neurological mechanism that underlies the apparently conscious behavior we see. But that would mean that once we do understand the neurological underpinning of our consciousness, we will lose the sense that we are conscious, free, and so on. I think that is the wrong-headed move.

Instead, we should recognize that the project is to reconcile the “scientific image” — the picture of the universe and of ourselves that the various sciences deliver — with our “humanistic image” — the way we do indeed conceive of ourselves, and very likely must conceive of ourselves, for there to be individual agency and society at all, which includes conceiving of ourselves as free, responsible, and conscious.

An Approach to Psychiatric Disorder at the Neuronal Level

BishopBlog writes:

In 2013, Tom Insel, Director of the US funding agency, National Institute of Mental Health (NIMH), created a stir with a blogpost in which he criticised the DSM5 and laid out the vision of a new Research Domain Criteria (RDoC) project. This aimed “to transform diagnosis by incorporating genetics, imaging, cognitive science, and other levels of information to lay the foundation for a new classification system.”

He drew parallels with physical medicine, where diagnosis is not made purely on the basis of symptoms, but also uses measures of underlying physiological function that help distinguish between conditions and indicate the most appropriate treatment. This, he argued, should be the goal of psychiatry, to go beyond presenting symptoms to underlying causes, reconceptualising disorders in terms of neural systems.

BishopBlog objects to the whole paradigm:

The RDoC program embodies a mistaken belief that neuroscientific research is inherently better than psychological research because it deals with primary causes…

From the RDoC:

Imagine treating all chest pain as a single syndrome without the advantage of EKG, imaging, and plasma enzymes. In the diagnosis of mental disorders when all we had were subjective complaints (cf. chest pain), a diagnostic system limited to clinical presentation could confer reliability and consistency but not validity. To date, there has been general consensus that the science is not yet well enough developed to permit neuroscience-based classification. However, at some point, it is necessary to instantiate such approaches if the field is ever to reach the point where advances in genomics, pathophysiology, and behavioral science can inform diagnosis in a meaningful way. RDoC represents the beginning of such a long-term project.

Second, RDoC is agnostic about current disorder categories. The intent is to generate classifications stemming from basic behavioral neuroscience. Rather than starting with an illness definition and seeking its neurobiological underpinnings, RDoC begins with current understandings of behavior-brain relationships and links them to clinical phenomena.

“Constructs,” i.e., a concept summarizing data about a specified functional dimension of behavior (and implementing genes and circuits) that is subject to continual refinement with advances in science. Constructs represent the fundamental unit of analysis in this system, and it is anticipated that most studies would focus on one construct (or perhaps compare two constructs on relevant measures). Related constructs are grouped into major domains of functioning, reflecting contemporary thinking about major aspects of motivation, cognition, and social behavior; the five domains are Negative Valence Systems (i.e., systems for aversive motivation), Positive Valence Systems, Cognitive Systems, Systems for Social Processes, and Arousal/Regulatory Systems.

Here’s the matrix he has in mind, as the RDoC materials describe it:

The columns of the matrix represent different classes of variables (or units of analysis) used to study the domains/constructs. Seven such classes have been specified; these are genes, molecules, cells, neural circuits, physiology (e.g. cortisol, heart rate, startle reflex), behaviors, and self-reports.

In addition, since constructs are typically studied in the context of particular scientific paradigms, a column for “paradigms” has been added; obviously, however, paradigms do not represent units of analysis.
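Pulling the quoted description together: the five domains form the rows of the matrix, and the units of analysis (plus the added paradigms column) form the columns. Here is a rough sketch of that shape in Python; it is my own reconstruction from the passages above, not the official NIMH table, which also breaks each domain down into its constituent constructs.

```python
# Rough reconstruction of the shape of the RDoC matrix, based only on the
# description quoted above. The official NIMH matrix also lists the specific
# constructs grouped under each domain; those are omitted here.

domains = [
    "Negative Valence Systems",    # i.e., systems for aversive motivation
    "Positive Valence Systems",
    "Cognitive Systems",
    "Systems for Social Processes",
    "Arousal/Regulatory Systems",
]

units_of_analysis = [
    "Genes", "Molecules", "Cells", "Neural Circuits",
    "Physiology",                  # e.g., cortisol, heart rate, startle reflex
    "Behaviors", "Self-Reports",
    "Paradigms",                   # added as a column, though not itself a unit of analysis
]

# Each cell would collect findings or measures for one domain (more precisely,
# for a construct within it) studied with one class of variables.
rdoc_matrix = {domain: {unit: [] for unit in units_of_analysis} for domain in domains}
```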

It may be that BishopBlog objects to Insel’s ideas because its author is more optimistic about, and less dismissive of, psychological intervention. After all, Insel admitted that “a diagnostic system limited to clinical presentation could confer reliability and consistency but not validity.” But what more do you need besides reliability and consistency to treat patients?

Can There Be Understanding With Questions Only, Or Don’t We Need Answers At Least Sometimes?

Damon Linker’s criticism of science popularizer Neil deGrasse Tyson’s recent anti-philosophy statements begs the question, I’m afraid. I want philosophy to win on this one, but I don’t think Linker has managed to make the case.

We could take the main beef to be whether asking unanswerable questions is worthwhile. That’s what Tyson seemed to take the main issue to be. Between the lines, if not more explicitly, he made it clear that he values answers. He equates understanding with coming up with answers. And, naturally, he considers understanding worthwhile, although that valorization, too, remains mostly between the lines.

Linker’s response is great up to a point. He says that philosophy is about posing “searching questions,” and we might grant him that it’s about asking better and better questions, even though they may remain unanswered and even unanswerable.

But then he says the following:

If what you crave is answers, the study of philosophy in this sense can be hugely frustrating and unsatisfying. But if you want to understand yourself as well as the world around you — including why you’re so impatient for answers, and progress, in the first place — then there’s nothing more thrilling and gratifying than training in philosophy and engaging with its tumultuous, indeterminate history.

So, Linker equates value with understanding too. And he says that you can achieve understanding by posing better and better questions. But whether you can or not was the issue at hand. Tyson says the worthwhile understanding of the world means asking questions and arriving at answers. Linker says the worthwhile understanding of the world means asking better and better questions even in the absence of answers. But if your opponent has said X, it is not yet an argument simply to assert not-X. Linker certainly disagrees with Tyson on the fundamental issue at hand. But he has given no argument, that I can see, for why understanding consists in asking better and better questions even in the absence of answers. He has simply asserted that it does.

And, after all, doesn’t it seem that understanding is going to take some answers at some point?

Linker seems to recognize this possibility when he acknowledges that many defenders of philosophy will try to argue that philosophy makes progress, which I take to mean that philosophy does sometimes arrive at answers. Even if the biggest part of the philosophical project is the asking of better and better questions, which may never be answered, philosophy must eventually arrive at some answer or answers if it is really to be said to “understand” something.

If philosophy is just a critical method, then it cannot be said to provide understanding. It may sharpen the understanding you get when the better questions it poses are answered, say by science. But philosophy conceived of as unrelenting criticism, questioning, and critique does not arrive at understanding on its own.

Might philosophy so conceived, a universal wolf, at last eat up itself?

Neil deGrasse Tyson on Philosophy

Tyson says:

My concern there is that the philosophers believe they are asking deep questions about nature, and to the scientist, it’s, “what are you doing? Why are you wasting your time? Why are you concerning yourself about the meaning of meaning?”… If you are distracted by your questions so that you cannot move forward, you are not being a productive contributor to our understanding of the natural world. And so the scientist knows when the question, what is the sound of one hand clapping, is a pointless delay in your progress… Then it becomes how do you define clapping and all of a sudden it devolves into a discussion of the definitions of words and I’d rather keep the conversation about ideas. And when you do that, and you don’t derail yourself on questions that you think are important because philosophy class tells you this, but the scientist says, “Look, I’ve got all this world of unknown out there, I’m moving on, I’m leaving you behind, and you can’t even cross the street, because you are distracted by what you are sure are deep questions that you’ve asked of yourself, I don’t have time for that.”

Here is Massimo Pigliucci’s response reblogged at the Huffington Post.

Wayne Myrvold says this:

Philosophy does carry with it a risk of getting bogged down in questions that are either pointless or meaningless, and it always has. There is, of course, a long tradition of philosophers saying just that. Insert your favourite examples here; my greatest hits list includes the resounding closing paragraph of Hume’s Enquiry, and Kant’s challenge to metaphysicians in the Prolegomena. The logical empiricists, of course, tried to demarcate between sense and nonsense in such a way as to keep science on one side and the sorts of pointless metaphysical disputes they wished to avoid on the other.

But doesn’t this thought show even more clearly where Neil deGrasse Tyson has gone wrong? Isn’t he engaging in a time-honored philosophical discussion with no evident knowledge of what others have said on the topic? All things considered, that is not the takedown I’d prefer to marshal against Tyson. It smells of mere interdisciplinary squabbling and professionalized turf warfare. After all, maybe he has read all the logical positivists and this is his considered position on the non-sense of most philosophical questions.

I think the better move against Tyson will involve discussing the issues on their merits and not reducing the discussion to interdisciplinary turf warfare. Can there be understanding in the absence of answers? I think that is an interesting question worth pursuing and I’ll do so in a blog post tomorrow.