How do we decide what’s right or wrong?
Why do people so often disagree about moral issues?
When we do agree, where does that agreement come from?

Can a better understanding of morality help us solve the problems that divide us?

We study the mechanics of moral thinking, with an eye toward improving the choices we make as individuals and as a society.

Our current social scientific research addresses real-world challenges. One such project, led by Lucius Caviola, aims to make charitable giving more effective. Our research led to the creation of a charitable donation platform, GivingMultiplier.org, which has raised over $2,000,000, with most of those funds going to some of the world’s most effective charities. Other projects aim to shift attitudes related to sustainability (led by Jacob Rode) and to reduce animosity and build trust between political “tribes” (led by Evan DeFilippis).

Much of our research has focused on the respective contributions of intuitive emotional processes and more reasoned and reflective processes. We’ve applied this dual-process framework to classic hypothetical dilemmas (here, here, here, and here), real temptations toward dishonesty (here and here), beliefs about free will and punishment (here and here), belief in God (here), and cooperation (here and here). More recently, in a project led by Karen Huang (and in collaboration with Max Bazerman), we’ve applied John Rawls’ idea of veil-of-ignorance reasoning to moral dilemmas in the domains of bioethics, charitable giving, and the governance of autonomous vehicles.

Thinking about the respective influences of intuition and reason provides a general framework, but that’s just a first step. We’ve aimed to understand these dissociable processes in more detailed functional terms, characterizing the kinds of information being processed and the neural systems that do the processing. (See our papers here, here, here, and here.) Our research indicates that there is no dedicated “moral sense” or “moral faculty.” Instead, moral judgment depends on the functional integration of multiple cognitive systems, none of which appears to be specifically dedicated to moral judgment.

As explained in this book chapter and this paper, the category “morality” is like the category “vehicle.” Sailboats and motorcycles are both vehicles, but their mechanics are very different. A sailboat works more like a kite than a motorcycle, and a motorcycle works more like a (gas-powered) lawnmower than a sailboat. This doesn’t mean that the things we call “vehicles” are too heterogeneous to form a meaningful category. Rather, vehicles are unified at a functional level (by what they do) rather than at a mechanical level (by how they do it). So, too, with morality. As explained in Moral Tribes, I (along with many others) believe that morality is a suite of psychological devices that allow otherwise selfish individuals to reap the benefits of cooperation. But these devices seem to rely on the same neural systems that we use for thinking, feeling, and deciding in general.

In light of this, our research strategy is not to isolate and characterize the moral parts of the brain, but rather to understand how moral judgments arise from the coordinated interaction of various domain-general cognitive systems. These include systems that enable reasoning and cognitive control (as shown here, here, here, and here), systems that represent value and motivate its pursuit (as shown here, here, and here), systems that simulate distal events using sensory imagery (here and here), and systems that represent structured thoughts (here).

A bit of history: Moral Dilemmas and the Trolley Problem

In the late 1990s, Greene and Jonathan Cohen initiated a line of research inspired by the Trolley Problem, which was originally posed by the philosophers Philippa Foot and Judith Jarvis Thomson.  

First, we have the switch dilemma: A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks, one that has only one person on it, but if you do this that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one? Most people say "Yes."

Then we have the footbridge dilemma: Once again, the trolley is headed for five people. You are standing next to a large man on a footbridge spanning the tracks. The only way to save the five people is to push this man off the footbridge and into the path of the trolley. Is that morally permissible? Most people say "No."

These two cases create a puzzle for moral philosophers: What makes it OK to sacrifice one person to save five others in the switch case but not in the footbridge case? There is also a psychological puzzle here: How does everyone know (or "know") that it's OK to turn the trolley but not OK to push the man off the footbridge?

(And, no, you cannot jump yourself. And, yes, we’re assuming that this will definitely work.)

As the foregoing suggests, our differing judgments about these two dilemmas reflect the influence of competing responses, which are associated with distinct neural pathways. In response to both cases, we have the explicit thought that it would be better to save more lives. This is a more controlled response (see papers here and here) that depends on the prefrontal control network, including the dorsolateral prefrontal cortex (see papers here and here). But in response to the footbridge case, most people have a strong negative emotional response to the proposed action of pushing the man off the bridge. Our research has identified features of this action that make it emotionally salient and has characterized the neural pathways through which this emotional response operates. (As explained in this book chapter, the neural activity described in our original 2001 paper on this topic probably has more to do with the representation of the events described in these dilemmas than with emotional evaluation of those events per se.)

Research from many labs has provided support for this theory and has, more generally, expanded our understanding of the neural bases of moral judgment and decision-making. For an overview, see this review. Theoretical papers by Fiery Cushman and Molly Crockett link the competing responses observed in these dilemmas to the operations of “model-free” and “model-based” systems for behavioral control. This is an important development, connecting research in moral cognition to research on artificial intelligence as well as research on learning and decision-making in animals. We consider some implications of this development here.
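
For readers unfamiliar with this distinction from reinforcement learning, here is a minimal sketch in Python (with hypothetical action names and reward values, not drawn from any of the papers above): a model-based controller evaluates actions by simulating their consequences with an explicit model of the world, while a model-free controller simply chooses from cached action values built up through past reward.

```python
# Minimal sketch of the "model-free" vs. "model-based" distinction,
# using a one-step choice problem with hypothetical actions and rewards.
import random

REWARDS = {"left": 1.0, "right": 5.0}  # hypothetical outcome values
ACTIONS = list(REWARDS)


def model_based_choice(world_model):
    """Model-based control: simulate each action's outcome with an
    explicit model of the world, then pick the best one."""
    return max(ACTIONS, key=lambda a: world_model[a])


def model_free_choice(q_values):
    """Model-free control: pick the action with the highest cached
    value, with no simulation of consequences."""
    return max(ACTIONS, key=lambda a: q_values[a])


def train_model_free(episodes=200, alpha=0.1, epsilon=0.2):
    """Build the cache by trial and error (one-step Q-learning)."""
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        explore = random.random() < epsilon
        a = random.choice(ACTIONS) if explore else model_free_choice(q)
        q[a] += alpha * (REWARDS[a] - q[a])  # nudge cached value toward observed reward
    return q


if __name__ == "__main__":
    random.seed(0)
    q = train_model_free()
    print("model-based choice:", model_based_choice(REWARDS))
    print("model-free choice: ", model_free_choice(q))
```

The relevant contrast is that a model-based controller responds immediately when its model of the world changes, whereas a model-free controller keeps acting on its cached values until new experience retrains it, loosely analogous to the contrast between deliberate reasoning and trained intuition.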

What does all of this mean for normative questions about right and wrong? As explained in this paper and in Moral Tribes, our dual-process moral brains are very good at solving some kinds of moral problems and very bad at solving others. We do not think that science can, by itself, tell us what’s right or wrong. However, we believe that scientific self-knowledge can help us make progress on distinctively modern moral problems—ones that our brains were not designed to solve. To make good moral decisions it helps to understand the tools that we bring to the job.