Here’s what I wrote about that kid with so-called “Affluenza” and “Bat Kid” (remember him?) earlier this year.
Philosophers make a distinction between the possibility of a meaning of life and the possibility of meaning in life. If there really were a meaning of life, that would indicate a purpose to humanity, one that would apply to each of us and could dictate how we ought to live. Finding meaning in life would mean finding something an individual could do to have a “significant” or “valuable” life even if there were no meaning of life.
As the reviewer at Notre Dame Philosophical Reviews, Thaddeus Metz, writes, the book “more or less supposes from the outset that there is no meaning of life, i.e., that there is neither a God who created human life for a purpose nor some other property such that the universe makes sense or gives us a reason to live one way rather than another or even to live at all.” That’s the sense in which the universe is silent, as the title has it. The universe doesn’t care about us. It doesn’t have plans for us, nor are we in any way ontologically primary in the universe. We are just more random, meaningless stuff as far as that goes.
But Todd May, the author of “A Significant Life: Meaning in a Silent Universe,” wants to argue that even in that meaningless silence there can still be meaning in life.
Broadly speaking, May’s view of what can make a life significant is naturalist, maintaining that meaning in a life is possible in a purely physical world, i.e., a spatio-temporal universe that is made up of sub-atomic particles and best known through empirical means. More than that, though, May also maintains that a life can be significant even if normativity is not ‘built into the fabric’ of the universe as per an Aristotelian, teleological construal of it.
According to Metz, May’s basic position will be that a life has meaning in it if the life has the virtue of “narrativity”:
… a desirable theme characterizing a life trajectory that can be constituted by the moral or the aesthetic but need not be. Key examples of narrative values for May are steadfastness, intensity, integrity, adventurousness, courage, and creativity.
As Metz points out, an immoral life could have narrativity to it. As Metz says of May’s theory:
The glaring problem with it, as I have construed it so far, is that it is utterly non-moral and so entails that resolute and clever mass murder would be meaning-conferring.
Professor May wants to avoid this conclusion. But doing so is not so easy. One thing he’ll say is this:
… one must be engaged in or love what one is doing at the time in order for it to be meaningful. Of course, being subjectively attracted to one’s project is not sufficient for meaning, however; one must, for May, also be exemplifying a narrative value, the relevant sort of objectively attractive project.
Here we get a sentiment that lots of philosophers will be ready to agree with, but that lots of regular folks (or let me just say non-academics, non-philosophers) would have a problem with. If there’s one thing I hear from my undergraduate students, it’s that if X is significant for you, then X is significant in any way that matters.
Think, for example, about the steadfast counter of blades of grass or detector of non-causal correlations. Or think about someone with impeccable integrity who strives to make his surroundings as ugly and more generally aesthetically bad as he can.
Can those lives really have meaning in them? Granted, the individuals living those lives are subjectively enthralled with the value of what they are doing. But is what they are doing really valuable and significant?
Philosopher Peter Carruthers at the OUP blog:
You can, as it were, hear yourself as deciding to do something when the appropriate sensory-like episode — “I’ll do it now”, say — figures in consciousness. But your access to the underlying decision is just as indirect and interpretive as is your access to someone else’s decision when they say such a thing out loud. In our own case, however, we are under the illusion that the decision is a conscious one.
… what we take to be the conscious self is a puppet manipulated by our unconscious goals, beliefs, and decisions. Who’s in charge? Well, we are. But the “we” who are in charge are not the conscious selves we take ourselves to be, but rather a set of unconsciously operating mental states.
Header image credit: “Museo Internazionale delle Marionette”, by Leonardo Pilara. CC BY 2.0 via Flickr.
The Boston Review critiques Richard Thaler’s latest book, “Misbehaving: The Making of Behavioral Economics.” Or actually the critique is of behavioral economics itself.
Before behavioral economics, there was classical economics which, “assumes people are rational, utility-maximizing decision-makers with consistent preferences, correct expectations, and unbiased beliefs; in economic parlance, everyone is homo economicus, ‘economic human.'”
John McMahon, the reviewer, goes on:
Behavioral economics, on the other hand, takes into account the “bounded rationality, bounded willpower, and bounded self-interest” of real humans—the propensity to be affected by perceptions, psychological factors, and other influences supposedly irrelevant to the neoclassical economic actor.
What behavioral economics gets right is human psychology. But it remains misguided, McMahon says, because it sticks to a blind faith in the marketization of every kind of thing.
Nearly everything, from behavioral economics’ standpoint, becomes a qualitatively similar choice. Choosing a mortgage belongs to the same category as choosing a spouse. Body weight is represented as a problem to be solved in the same way one might encourage people to save more for retirement.
Thaler situates regular purchases—buying lunch, groceries, and clothing—on the same list as high-stakes, infrequently occurring acquisitions, such as “cars, homes, career choices, and spouses.” There is a revealing slip in this last example: Thaler describes this spectrum as “a list of products” arranged by purchase frequency, then ends the list with choosing a spouse. Surely, even in the most marketized dystopia, a spouse is not a product to be purchased?
So McMahon has some real problems with Thaler and behavioral economics:
There is nothing inherently economic to eating at a party with friends or building a relationship. Such activities, subjected to an economic frame by Thaler, are qualitatively different than shopping for groceries or buying a home insofar as their ends—joy, partnership, compassion, relaxation, and so on—are not the same as economic utility-maximization. They might even be read as attempts to gain respite from the ever-pressing market logic of the neoliberal world.
One area of research shows that the preferences of the wealthy are more influential on politics, actual governance, and policy than the preferences of non-wealthy people. Here’s a link to the main academic paper Vox used as a source, which says, “economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.” Also, the preferences of the wealthy do not align with the preferences of the non-wealthy and are by and large self-serving, rarely directed at the welfare of others, only at the wealthy themselves.
The second area of research is represented by Paul Piff, about whom I wrote before at Medium with respect to Affluenza. One of Piff’s papers, “Higher Social Class Predicts Unethical Behavior,” set up experiments where people who were wealthy or made to feel wealthy in the ecology of the experiment behaved more anti-socially than controls. Lisa Miller in the New York Times Magazine summarized Piff’s findings this way:
[The paper] showed through quizzes, online games, questionnaires, in-lab manipulations, and field studies that living high on the socioeconomic ladder can, colloquially speaking, dehumanize people. It can make them less ethical, more selfish, more insular, and less compassionate than other people. It can make them more likely, as Piff demonstrated in one of his experiments, to take candy from a bowl of sweets designated for children.
“While having money doesn’t necessarily make anybody anything,” Piff said to Miller, “the rich are way more likely to prioritize their own self-interests above the interests of other people. It makes them more likely to exhibit characteristics that we would stereotypically associate with, say, assholes.”
Piff theorizes that low-status people cultivate empathy and pro-social behavior because they figure they may require the help of others in the community in the future. High-status individuals are more insulated from needing the help of others and so do not place a premium on empathy or pro-social behavior. They can afford to flout the norms of the community, while low-status individuals cannot for fear of losing the benefits of the collective.
That’s just a theory. I’m not confident that it’s totally correct. And it’s also not clear whether being rich makes you a jerk or being a jerk makes you rich. But there is a correlation found in these studies between anti-social behavior and out-sized wealth.
Suppose we learn that a moral judgment we thought reasonable has its origin in merely adaptive emotions. Is that moral judgment now debunked?
Perhaps you only care for your loved ones because they incite higher levels of oxytocin in your body. This article I just published considers this question by thinking about the famous “trolley problem.”
Imagine there’s a runaway trolley that is definitely going to kill five people who are working on the tracks. You have the power to pull a switch that would divert the trolley to another track, where it would definitely kill only one person who is working there. What would you do?
Now imagine the same runaway trolley that would definitely kill five people, but in this scenario, you are standing next to an obscenely obese man on a footbridge overlooking the tracks. The only way to save the five people would be to push him off the footbridge onto the tracks. His heavy body would stop the trolley and save the five, but he would certainly die as a result. What would you do? Most people pull the switch but judge that pushing the obese man is impermissible.
Why is there this asymmetry? After all, it’s a one-for-five sacrifice in each case. What makes the difference? Philosophers have argued for a variety of principles that would license the standard judgments on these two cases. But psychologist Joshua Greene has recently argued that we ought to judge differently: we ought to push the obese man. In explaining why, this article shows that the role of emotion in moral judgment does not subtract from the role of reason in moral judgment.
There is still a role for philosophy in making correct moral judgments.
Read the whole article, which is much longer, at Medium.
When we like a TV show, why do we like it? What is it about a TV show or any media content that makes it good?
Perhaps you don’t care. “It’s enough that I like it,” you say. “I don’t need to know more than that.”
That may be fine for you. But media content creators and executive producers (the money people) desperately want to know what makes us like TV shows. They want to know what makes media content good, so they can make more content like that.
It’s also an interesting philosophical question. It’s about aesthetics, or the study of the beautiful. Essentially, it’s a question about value and what is valuable or good and why.
Theories in this ballpark have been put forth by academics in mass communication, sociology, psychology, etc. And popular culture critics take stabs at it too.
Today let me discuss just one such theory. The so-called “disposition theory,” attributable to Dolf Zillmann among others, says that the appeal of a TV show depends on the viewer’s emotional attitude (or “disposition”) toward the characters, as well as on what befalls the characters or what they actively choose to do.
In other words: Do we like the characters? What actions do they undertake? And if those actions are morally good or morally bad, are they rewarded or punished?
That’s very general. It gets more specific.
A TV show in which a main character is likeable will appeal to viewers if the character acts morally and is rewarded within the plot. If the character acts immorally and is rewarded in the plot, viewers will find the show less appealing.
There are other permutations, according to a paper by Tamborini et al., “Predicting Media Appeal from Instinctive Moral Values,” in the journal Mass Communication and Society from 2013. If a character who acts morally is not rewarded but rather suffers unjust punishment, then the show will be less appealing to viewers. And, finally, if an immoral character is justly punished, viewers find the show more appealing.
In short, viewers want good people to be rewarded and bad people to be punished in the fictional, narrative media content they consume. And they don’t like it flipped where the good are punished and the bad rewarded.
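The theory’s four permutations can be sketched as a simple rule. This is just my own toy illustration of the predictions described above, not anything from Tamborini et al.’s paper; the function name and appeal labels are mine:

```python
# Toy sketch of disposition theory's core predictions (illustrative only;
# the function and labels are mine, not Tamborini et al.'s).
def predicted_appeal(character_is_moral: bool, character_is_rewarded: bool) -> str:
    """Viewers want moral characters rewarded and immoral ones punished."""
    # A "just" outcome is one where morality and reward line up.
    just_outcome = character_is_moral == character_is_rewarded
    return "more appealing" if just_outcome else "less appealing"

# Moral character rewarded: justice is served, so the show appeals.
print(predicted_appeal(True, True))    # more appealing
# Immoral character rewarded: injustice, so the show appeals less.
print(predicted_appeal(False, True))   # less appealing
```

The point of laying it out this way is that the theory reduces appeal to a single justice check, which is exactly what the anti-hero cases discussed below put pressure on.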
A theory should make predictions. And the disposition theory of media appeal does make predictions, which can then be tested to see whether they come out true.
The paper, “Predicting Media Appeal from Instinctive Moral Values,” provides data from surveys, but I will discuss that another day. Today I want to get philosophical.
It may all be fine and good to see what people self-report as liking and disliking among a variety of types of media content on offer. But I want to ask some philosophical questions, or at least questions that a good student in a critical thinking class should appreciate as going unasked in Tamborini et al.’s paper in the disposition theory school of thought.
First, let’s notice that the whole exercise is non-normative. It is descriptive, sure, but not prescriptive. I mean, the data is a description of what types of media content viewers do indeed profess to like in that specific experimental ecology. But what it doesn’t do is make a proposal about what we should like in a TV show. There’s no claim that a TV show must have such and such properties to qualify as good or beautiful or moving or valuable. Philosophers might wonder what a bunch of data on people’s preferences has to do with what is actually good or valuable in fact. After all, those people represented in the data could be wrong or have bad taste or not know what they are talking about.
There are a number of ways to object to what I am saying. One, you could say that the project of disposition theory was never meant to be normative; it aims only at description. I’ll admit that’s probably true. But notice that this leaves my position intact, because then we are agreeing: disposition theory is non-normative.
Then it will come down to what is “interesting.” You may want to possess a descriptive theory of media content appeal, while I may want to possess a normative or prescriptive theory of media content appeal. That is, a theory of what makes good TV shows, even if the majority doesn’t like them.
This last thought, that something might be good even if only a minority thinks so, brings us back to a second way to object to what I am saying. That is, you might challenge the whole distinction between descriptive and prescriptive. What is good is just what everybody (or a majority) says is good, you might say; whereas I’ve been saying that what is good is whatever meets certain criteria, the criteria of goodness or the good.
That is, I’m interested in what makes something good. For the moment, I’m not going to be satisfied that what makes something good is simply people agree in liking it and calling it good. I want to know what it is intrinsically about the thing that makes it good, or, if you prefer, that makes people like it.
After all, I think we would all agree, it’s “people say something is good because it is good.” It’s not “something is good because people say that it is.” (For those of you who are interested in this tongue-twisting brain teaser, it’s generally called “The Euthyphro Question” after a Socratic dialogue by Plato. If you want to learn more, look up what I’ve written on the Euthyphro question and ask yourself, “Is the good thing good because the gods love it or do the gods love it because it’s good?”)
Anyway, I think Tamborini et al. need to think about how they would answer the Euthyphro question. Only one answer leaves their work interesting, and it’s not my answer.
A second critical thinking or philosophical point I want to make is this: The deep and wide appeal of “anti-heroes” on TV and in media content generally is a counterexample to the disposition theory. After all, anti-heroes are those characters who are not “likeable” and/or don’t always do the morally right thing, and yet we root for them. Disposition theory would predict that if a TV show featured an anti-hero, that character would be doing immoral things, and viewers would find the show appealing only if the character was punished in the end.
But that is simply not what happens. Viewers absolutely love TV shows in which an anti-hero does immoral things, and the appeal does not turn on the rewards or punishments the character gets. So, to explain the appeal of media content with anti-heroes, such as The Sopranos, Breaking Bad, and arguably Seinfeld, Louie, and Girls, you’d need a different theory than disposition theory.
We need to separate a few things that disposition theory collapses together. We need to separate a character’s being likeable from a character’s being moral. Tamborini et al.’s paper equates them. But it’s conceptually possible to “like,” or have an emotional investment in, a character who acts immorally. And there is almost certainly empirical data to be discovered that people do in fact “like” immoral characters, such as Don Draper of Mad Men and Walter White of Breaking Bad. Equating the two will be confusing (and bad science) if it’s the liking of the characters, and not their moral or immoral behaviors, that causes us to find shows appealing or not.
(Also, it’s probably possible to not like a character while still having some kind of investment and in any case still come out appreciating the show.)
There’s more to say about Tamborini et al.’s paper, such as the discussion of distinct moral-judgment groups who would, according to the paper’s theory, like different media content. Different people find different things moral or immoral, so they’re going to like, or be well disposed toward, different characters, and the media content’s appeal is going to depend on whether the characters you happen to like are rewarded or not.
And there might be other topics I could discuss.
But my present concern is with the psychology of the appeal of TV shows that feature anti-heroes. So some future posts are going to be about that as a problem for disposition theory and as an interesting matter in and of itself.
You need to get your theory of human nature and/or human psychology right in order for your explanations of why something appeals to us to be any good. “Blah blah blah appeals to us because we are such and so,” isn’t any good if we aren’t actually such and so psychologically speaking.
Assistant Professor Elizabeth Cohen at Scientific American says:
According to [Zillmann’s] affective disposition theory, people enjoy entertainment when characters that they identify as the “good guys” win, and the “bad guys” get the justice they deserve.
And yet for a long time, we the audience have apparently enjoyed watching characters on TV do bad/immoral things and even root for them in some sense. It cannot be said that we watched The Sopranos just waiting for Tony’s comeuppance. Seinfeld, Breaking Bad, Dexter (and I have argued Girls) are shows where our watching has little to do with justice for the anti-heroes at the heart of their stories.
Cohen explains that Zillmann’s original hypothesis, in its details, is more or less out of favor, even with his own school of thought:
Zillmann’s intellectual progeny question the premise that feeling good is the only reason we watch entertaining television. We often associate words like ‘fun,’ ‘enjoyment,’ or ‘escape’ when we think about our entertainment. These are all hedonic, or pleasurable, rewards of watching TV. But the work of Mary Beth Oliver, a professor of media studies at Pennsylvania State University, has shown us that entertainment can offer more than enjoyment. In step with the positive psychology movement, Oliver and her colleagues have identified many eudaimonic rewards of watching depressing, stressful, or even horrific television. Eudaimonia is an experience that involves meaningfulness, insight, and emotions that put us in touch with our own humanity. Eudaimonia might not make us happy [i.e., overflowing with pleasure-DF], but it can enrich us, leave us feeling fulfilled, touched, and perhaps even teach us something about ourselves.
Oliver is working with a very useful distinction between “hedonic happiness” and “eudaimonic happiness.” These are two kinds of “happiness” that can easily be confused. Hedonic happiness has to do with strict pleasure. Eudaimonic happiness has to do with meaningfulness and purpose in life.
The title of this blog post suggests I am interested in the effects on a person’s mental states (his or her psychology) when he or she watches anti-heroes on TV. According to Zillmann, the effect would be a kind of hedonic pleasure we get from seeing justice dispensed upon their deserving heads. But that doesn’t seem to capture what’s going on in lots of recent TV shows. Yes, the finale of a show’s final season might resolve into questions about justice, but the earlier episodes are not about dispensing justice and therefore do not trigger any supposed hedonic pleasure that would go with seeing it meted out.
For these other shows, it seems to make more sense to say, as Oliver does, that watching dramatic stories (whether justice prevails or not) provides eudaimonic happiness (let’s be careful not to confuse things by calling it “pleasure”). These stories are realistic, or at least relevant to our lives, in the sense that they give us the opportunity to think (and, more likely, feel) our way through issues that, once thought or felt through, provide meaningfulness, understanding, and purpose in life.
Crews and Adler believe that most cases of the yips probably have a psychological basis of some kind, but that in some percentage the ultimate cause will turn out to be neurological. [The New Yorker]
All philosophers of mind, and most philosophers generally, will recognize the error in this sentence, or should. Basically, the psychological cause of the yips (flinches in your golf swing, spasms in piano playing, or Knoblauch-like screw-ups in baseball) is itself a neurological state or event, because every psychological event is identical to, or is also itself, a neurological event.
In other words, there is nothing psychological that does not take place neurologically in the brain. So, saying that the yips are “not psychological but rather neurological” will not work because whatever is describable as psychological is at least in principle describable neurologically, since every psychological event is a neurological event somewhere in the central nervous system.
Philosophers who have thought carefully about the implications for nature and about the implications for the most perspicacious way for us to speak about nature, see that saying, “It’s not psychological, it’s neurological” seems to elide the fact that the psychological is neurological.
I am not talking about the debate between mind-body dualists and mind-body monistic physicalists. I am just talking about the naturalist position on the mind and the brain (on psychological events and neurological events, and therefore on psychology and neuroscience as disciplines about those types of events, respectively). The naturalist position says the mind is the brain; psychological events are neurological events. We will return to the question of whether psychology as a science is identical to or reducible to neuroscience. (The latter is an epistemological question about our knowledge of things. I am talking about things-in-themselves, like states of affairs or events, which is a metaphysical question.)

Most philosophers assume, or would like to assume, that most if not all scientists are naturalists. So the scientists who work on events in the brain at the level of psychology recognize and affirm that the mental processes which they describe qua mental processes are in fact neurological processes.
Again: Philosophers who have thought carefully about the implications for nature and about the implications for the most perspicacious way for us to speak about nature, see that saying, “It’s not psychological, it’s neurological” seems to elide the fact that the psychological is neurological.
This is something I think Dan Dennett and others have taken scientists to task for.
But now consider the following quotations:
One of Adler’s hopes for the new study is that it will help define the division between psychological and neurological causes….
What could that mean? It seems straightforward, but if it is meant metaphysically (meant to be about things-as-they-are-in-themselves), then it perpetuates the mistake of thinking that some psychological events (or causes, i.e., causal events) are not, at another level of description, neurological events. But of course they are, if we are to remain naturalistic. If, on the other hand, the statement is meant epistemologically (in the sense of being about what we can know, and at the level of what interests us in the various goings-on in the mosaic of things and events in the world), then it’s not saying that we’ll learn the difference or division between psychological and neurological events-in-themselves. Instead, we will differentiate those times when it’s convenient or pragmatic for us to describe an event as psychological and not neurological from those times when it’s convenient or pragmatic (shall I say “explanatory”) to describe an event as neurological and not psychological.
Here’s another one:
At that time, ‘neurosis’ was merely a standard term for disorders whose origin was neurological rather than, say, muscular….
This statement is better. It makes a conceptually non-confused distinction between a neurosis, like thinking everyone is out to get you (a mental event qua mental), which is understandable as “based” in the neurological, and, say, a twitch that is based in the muscular system rather than the neurological.
Beginning with the rise of psychoanalysis and continuing into the nineteen-seventies, he said, dystonias, including [this special sort of] writer’s cramp, were often treated as forms of mental illness….
“Treated as” implies a sense of “conceived as” it seems to me, as well as “medically treated”. Because of the rise of psychoanalysis and, one assumes, a proliferation of pop-psychology, dystonias were conceived as being mental and were intervened upon (treated) with mental interventions, like talk therapy.
What’s interesting to me is that some “problems” roughly thought of as “mental” will be amenable to talk therapy and others won’t. Any “mental” problem is, as we naturalists have it, neurological as well as psychological. It’s just that some are going to respond to “psychological” interventions like “here, think this new way,” or “isn’t that interesting, your nail-biting means this and that to you in your self-understanding,” while other “mental” problems can only be intervened upon neurologically, at the level of medicines based on neurotransmitters or even at the level of brain surgery. It seems like no amount of talk therapy will rid someone of, I don’t know, face-blindness, which is understood primarily neurologically as damage to a brain area, or at least a malfunctioning system in the brain.
So are the yips a mental problem that can be dealt with only neurologically or can they be intervened upon with cognitive/talk therapy? The article goes on to list a number of psychological interventions: “A ball that isn’t a ball can’t miss the hole. Similarly, a putter that doesn’t feel like a putter may not jerk.”
What the author of the article is talking about is getting the golfer to think that they are not hitting a ball but just swinging at a marshmallow or something. This, it seems to me, is an attempt, at the psychological level, to change the brain chemistry of the golfer while he or she swings. This way the brain chemistry behind the yips is circumvented and a non-yip swing can happen.
Indeed, any psychological intervention is metaphysically also a neurological intervention. It’s just that it’s easier to get a handle on the psychological description of what’s going on, and perhaps at present impossible to get a handle on what’s actually happening with the neurological events in themselves.