By Jill Suttie | Greater Good Magazine
We all face moral dilemmas that force us to make difficult choices, benefiting some people over others. For example, we may have to choose between giving to a homeless person we pass on the street or saving our money to donate to a homeless shelter. Or we may have to vote on regulations that reduce carbon emissions, which hurt some people’s livelihoods but improve a community’s overall health.
We make decisions like these based on our beliefs, our connections to the people affected, and our emotions. For example, we tend to care more about people who look like us or are part of our “tribe,” which can make us act unfairly toward people less like us or living far away. These biases can lead us astray, especially if we value doing the most good for the greatest number of people.
Now, a new study suggests a tool we can use to look at moral dilemmas: the “veil of ignorance.” It’s a concept that philosophers have used for centuries, but this is the first psychological research to find that it could help us make fairer moral choices.
In a series of seven experiments, researchers asked a diverse group of individuals to ponder moral dilemmas and observed how inducing a veil of ignorance (VOI) affected their thinking. Under a veil of ignorance, you don’t know which person you are in the hypothetical scenario you’re making a moral decision about; you imagine that you could be anyone affected by it.
For example, in one experiment, they asked participants to imagine a hospital with limited supplies of oxygen, where removing one patient from oxygen would save the lives of nine earthquake victims needing emergency care. Half the participants imagined that they could be any of the 10 people involved in this scenario—meaning they had a 1 in 10 chance of being the current patient or a 9 in 10 chance of being one of the earthquake victims. The other half of the participants were not given this prompt.
Then the participants indicated whether, if they were in charge, they would take the current patient off oxygen to save the others, and rated how moral or immoral that decision would be. Those prompted with VOI thinking chose to take oxygen away from one person for the benefit of nine significantly more often than those not prompted, and they also judged that choice to be more morally sound.
This result encourages study coauthor Joshua Greene of Harvard University. If the moral decision is the one that benefits the greatest number of people, the finding implies that people can overcome their natural reluctance to make that decision, even when it makes them uncomfortable.
“If we think that it’s good when people make choices that promote the greater good, then [the veil of ignorance] is interesting because it seems to push people in that direction,” he says.
To further test this idea in a situation with real-world consequences, Greene and his colleagues introduced participants from the U.S. to two charities—one in India, where $200 would cure two people of blindness, and one in the U.S., where $200 would cure one person of blindness. The participants learned that the researchers would select one of them at random and let that person’s decision determine where a real $200 would go. Half the participants were prompted to use VOI thinking before choosing between the two charities; the other half were not.
People prompted to look at the situation in this unbiased way chose to give to the Indian charity much more frequently than those who weren’t. This suggests that people using VOI thinking are less likely to automatically favor someone similar to themselves—e.g., fellow Americans—and more likely to make decisions that ultimately benefit more people.
“People are naturally inclined towards those who are closer to them—literally or socially. That’s what makes them more likely to give to that person,” says Greene. “But thinking about the question in this way gives greater weight to concerns for impartiality as opposed to concerns for partiality.”
Further testing revealed that introducing VOI thinking wasn’t just unconsciously priming or manipulating people, nor were participants simply relying on mathematical probabilities when making choices. Instead, Greene believes, the abstract reasoning prompted by the thought experiment helped participants overcome biases—including our tendency to be more empathic toward people we like—that might otherwise get in the way of fairness. He sees this as a promising way to encourage the greater good, because it’s easier to get people to go through an intellectual exercise than to expand their circle of care—though the outcomes might be the same.
“The intervention doesn’t require any warm fellow feeling for humanity—it’s just asking the question, ‘What would I want under this assumption of equal probability?’” Greene says.
Interestingly, people’s choices didn’t differ based on their race, gender, or other characteristics, says Greene. This makes him optimistic that the technique could help a lot of people let go of their individual biases when trying to make moral choices.
Encouraging VOI thinking could have real-world consequences and translate into better decision making—not just for individuals but for groups, he says. If encouraged to look beyond their biases, policymakers might create better policies, or be able to convince disapproving constituents to consider the greater good: to see policies that help more people as fairer and more moral.
“People in our study are doing this privately by themselves, but my thought/hope is that this would be even more powerful in a group context,” says Greene.