Abstract
Would you sell your favorite cuddly toy to save children who have nothing to eat? This is a moral choice, a decision about what each person believes is right or wrong. Scientists have found ways to understand these types of decisions better by using an experimental scenario in which someone must choose whether to actively harm one person to save many others. These scenarios have helped scientists to unravel the mystery of moral decisions. Research shows that our choices come from a mix of emotions and reasoning. Studies also show that culture, values, and personal beliefs all play a role. Today, research into human morality provides answers that can help us improve our daily lives and our futures. Thanks to this kind of research, we will also be able to see whether our companions of the future, such as robots, self-driving cars, and artificial intelligence, make the same moral choices as we do.
What Would You Do?
Have you ever watched a superhero face an impossible choice, like having to save a group of people at the expense of someone else? That kind of situation is not just something you see in movies. Scientists actually study such situations to understand how people make moral decisions.
Imagine an emergency rescue situation. You are driving a fire truck at full speed toward a burning building to save a person trapped inside. Then you receive a call from the fire station: another fire has broken out and five people are in danger. You can keep driving to the first fire, but you will not arrive in time to save the five other people. If you turn your truck toward the second fire, you will not be able to save the one person. Unfortunately, you cannot save everyone. Which option would you choose?
What is a Sacrificial Dilemma?
The situation described above is a life-or-death scenario that, thankfully, does not happen every day, but such situations tell scientists a lot about how the brain deals with difficult moral choices. This type of moral problem, where someone must decide whether to actively harm one person to save many others, is known as a sacrificial dilemma. The first sacrificial dilemmas, which scientists still use today, were imagined in the 1960s and 1970s by two women philosophers, Philippa Foot and Judith Jarvis Thomson. One of the most famous, called the Footbridge Dilemma, asks whether you would push one person to save five others, or do nothing and let the five get hurt (Figure 1) [1]. No matter what you choose, someone always ends up getting hurt.
- Figure 1 - The Footbridge Dilemma.
- Imagine standing on a bridge over a railway line while a trolley is speeding toward five people who cannot move. If the trolley keeps going, it will hit them. However, a man is standing near you. If you push him onto the tracks, he will not survive but the trolley will stop before it reaches the five people. What would you do? From a utilitarian perspective, pushing him is acceptable because it saves more lives overall. From a deontological perspective, pushing him is wrong because intentionally harming someone is morally forbidden, whatever the outcome.
Sacrificial dilemmas highlight two contrasting ways of moral reasoning: the utilitarian perspective and the deontological perspective (Figure 1). According to the utilitarian perspective, you should do what brings the greatest good or saves the most people, even if it means making a tough choice. So, saving five people by sacrificing one is considered utilitarian because it is the option that causes the least amount of harm possible. According to deontologists, some actions (such as harming others) are fundamentally wrong, regardless of their potential (positive) outcomes. Thus, to do nothing is a deontological choice, as this follows the moral principle of “do not kill” or “do not cause harm”.
Sacrificial dilemmas might seem hard, but studying people’s responses to such dilemmas provides surprising insights about moral decision-making. It is important to note that the goal of scientists is to understand the factors and processes that contribute to individuals’ moral decisions, not to determine what is right or wrong. The goal is not to “improve” individuals’ decisions in any way. In the following sections, we will highlight some of the key discoveries sacrificial dilemmas reveal.
What Determines Moral Responses?
Sacrificial dilemmas allow scientists to uncover some of the factors that shape moral decisions. There are at least three major things that influence the choices people make.
The Context
Have you ever noticed how decisions can feel much harder when someone you love is involved? Imagine the person on the footbridge in Figure 1 is not a stranger, but a family member or a close friend. Would you still push them to save five strangers? Our guess is that you would not! Indeed, research shows that people are much less willing to sacrifice someone they love, even if it means saving more lives. This reveals how personal connections play a big role in moral decision-making. Other contextual factors are also at play. For example, the age of the potential victim can matter, as people are often more willing to sacrifice an older person rather than a younger one. Even species comes into consideration: most people would choose to sacrifice an animal rather than a human. However, this choice also depends on the person’s feelings toward animals and their beliefs about the value of different lives.
Culture
Research has shown that culture, shaped by our education, traditions, and beliefs, strongly influences how we respond to sacrificial dilemmas. For instance, utilitarian responses are more likely to occur in individualistic cultures, where people focus more on themselves and their own goals. On the other hand, deontological responses are more likely to occur in collectivist cultures, where people care more about the group and family than about themselves. Thus, sacrificial dilemmas help scientists to demonstrate how much upbringing and environment shape people’s moral choices.
Values
Research has also shown that personal values greatly influence people’s responses to sacrificial dilemmas. Personal values are things that are very important to you. The Moral Foundations Questionnaire is a tool commonly used by scientists to identify personal values, and anyone can take a version of this test online. The questionnaire measures how important care, fairness, respect for authority, or loyalty are for people. People’s responses to sacrificial dilemmas greatly differ depending on their values. For example, people who highly value care tend to favor deontological responses, whereas those who prioritize fairness often choose utilitarian responses that treat people more equally. Values clearly help explain why different individuals make different choices in the same moral situations.
How Do People Make Moral Decisions?
By studying the brain while people make various moral choices, researchers have discovered that moral decisions are based on a mix of emotions and reasoning.
In the Footbridge Dilemma, only about three people out of 10 say they would push one person to save five others. This shows that many people find the idea of directly causing someone’s death too upsetting, even if it could save more lives. Now imagine a similar situation: instead of standing on a bridge, you are next to a lever that can switch the trolley to another track where only one person stands. This version is known as the Trolley Dilemma. Although this is similar to the Footbridge Dilemma, in that either one person or five people will survive, eight out of 10 people say they would pull the lever to save the five people [2]!
Why such a big difference? Scientists discovered that when people think about pushing someone in the Footbridge Dilemma, the parts of the brain involved in emotional processing become very active, like a powerful alarm going off in their heads. These strong emotions often guide people to choose not to sacrifice the person, even if it means more people will ultimately be sacrificed. The emotional reaction makes the idea of pushing someone unbearable, leading many to do nothing instead. By contrast, in the Trolley Dilemma, the situation feels less personal since no physical contact with the potential victim is necessary. In this case, the emotional “alarm” is quieter. Instead, the brain’s logical and problem-solving areas take charge, and people are more likely to focus on the numbers and to decide to sacrifice one person to save five. Thus, when people make moral decisions, their emotions can sometimes steer them away from making a sacrifice, while logical thinking can push them toward choosing the option that saves the most lives.
Modeling Moral Decisions
All the studies using sacrificial dilemmas led scientists to develop a model called the dual-process model to explain how people make moral decisions (Figure 2) [3]. According to this model, the brain uses two different systems when faced with moral dilemmas. System 1 is fast, automatic, and driven by emotions. It kicks in when people have a strong intuition that the action is bad. In the Footbridge Dilemma for instance, system 1 is often in charge, and most people avoid taking the action. System 2 is slower, more thoughtful, and logical. It helps people weigh the pros and cons, think about the consequences, and make an unemotional decision. In the Trolley Dilemma, system 1 is relatively quiet, so system 2 takes over and helps people focus on saving the most lives. The dual-process model shows that moral choices depend on which system takes control. When emotions are strong, system 1 tends to lead the way. But when the situation feels less emotional, system 2 can step in and guide people toward a more rational, numbers-based choice.
- Figure 2 - The dual-process model explains how two systems in the brain guide our moral choices in a sacrificial dilemma.
- The emotional system is fast and intuitive: it reacts strongly to the idea of directly harming someone and often tells us “do not do it”. The rational system is slower and more thoughtful: it compares the possible outcomes, such as saving more lives vs. fewer, and supports decisions based on reasoning. Which system is more active can change the choice we make.
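For readers who enjoy computers, the dual-process model can be sketched as a toy program. This is only a simplified illustration of the idea described above, not how the brain actually works: the “alarm” numbers and the threshold are made up for the example.

```python
def moral_choice(lives_saved, lives_lost, emotional_alarm):
    """Toy sketch of the dual-process model.

    emotional_alarm: strength of the System 1 reaction,
    from 0 (calm) to 1 (very strong). The 0.7 threshold is
    purely illustrative.
    """
    if emotional_alarm > 0.7:
        # System 1 dominates: the intuitive "do not harm" feeling wins.
        return "do nothing"
    # System 2 dominates: weigh the outcomes and save the most lives.
    return "act" if lives_saved > lives_lost else "do nothing"

# Footbridge Dilemma: pushing someone feels very personal, so the alarm is loud.
print(moral_choice(5, 1, emotional_alarm=0.9))  # → do nothing
# Trolley Dilemma: pulling a lever feels less personal, so the alarm is quiet.
print(moral_choice(5, 1, emotional_alarm=0.2))  # → act
```

Changing only the strength of the emotional alarm flips the decision, even though the numbers of lives involved stay exactly the same, which is the key insight of the dual-process model.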
Sacrificial Dilemmas in Daily Life
By creating scenarios that mirror everyday moral dilemmas, researchers can examine how people react in the real world when faced with life-and-death choices. One groundbreaking example is the Moral Machine experiment, an online platform on which participants worldwide made about 40 million moral choices in simulated accident scenarios involving self-driving cars (Figure 3). In most scenarios, participants had to choose between sparing passengers or pedestrians, humans or animals, or younger or older individuals.
- Figure 3 - In the Moral Machine experiment, participants see situations like this one, where a self-driving car must choose between hitting a cat or a dog.
- In one option, the car continues straight and hits the cat; in the other, the car turns and hits the dog. Participants must decide which outcome is better. This figure shows that people can have moral preferences even between animals and that these choices reflect how we value different lives.
The experiment revealed that participants had strong tendencies, such as choosing to save humans over animals and sparing more lives rather than fewer. However, the Moral Machine data also confirmed that preferences varied across cultures. In some countries, people showed a strong preference for sparing younger individuals, while other countries placed less emphasis on age. By revealing how people think about right and wrong, the Moral Machine experiment may help scientists decide how future technologies, like self-driving cars, should behave in dangerous situations.
Other studies have also begun exploring how adults and children respond to moral dilemmas that involve choosing between humans and robots. Interestingly, while adults and children show similar responses when a dilemma involves only humans, children are less willing than adults to sacrifice robots. Who knows? Maybe the younger generation will teach everyone a thing or two about compassion, even for machines!
Take-Home Messages
In this article, you have seen that sacrificial dilemmas help scientists better understand how people make moral decisions. They show that choices depend on the context, values, and relationships with the people involved. Sacrificial dilemmas also show that the brain does not rely on a single way of deciding, but on two different processes. Sometimes an emotional reaction is immediate and pushes people to avoid causing harm. At other times reasoning takes over and helps people think through the consequences for everyone. Together, these elements explain why people may make different choices in situations that seem very similar.
Understanding how people make moral decisions matters for all of us. It can help us think more clearly when we must choose between two difficult options, and it helps us understand why our friends or families do not always view a situation the same way. This knowledge can also help us see that some decisions are more complex than they seem and that there is not always a single right answer. In real life, doctors may need to decide which patient to treat first during a crisis, and police officers or firefighters often must make life-or-death decisions under intense pressure. A better understanding of how the brain responds to difficult choices helps us appreciate the challenges of these professions and understand why some decisions require considerable courage.
Glossary
Moral Decisions: Choices we make about what is right or wrong when our actions can help or harm others.
Sacrificial Dilemma: A situation where someone must make a very hard choice that involves sacrificing one person to save others.
Utilitarian Perspective: An ethical view focused on outcomes. An action is judged by whether it produces the greatest overall good, even if it involves harming one person to help many.
Deontological Perspective: An ethical view focused on following moral rules. For example, an action is considered wrong if it breaks a rule, such as intentionally harming an innocent person, regardless of outcome.
Individualistic: Focusing on the individual rather than the group. In individualistic cultures, people are encouraged to focus on their own dreams, goals, and achievements.
Collectivist: Focusing on the group rather than the individual. In collectivist cultures, people focus on their families and their communities.
Model: A simple way to represent or explain how something works.
Conflict of Interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
AI Tool Statement
The author(s) declared that generative AI was not used in the creation of this manuscript. However, AI tools were used to assist in the design of the characters appearing in Figure 2. These characters were generated using Google Gemini (Banana model) and were subsequently integrated into the figure by the authors.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
References
[1] Thomson, J. J. 1985. The trolley problem. Yale Law J. 94:1395. doi: 10.2307/796133
[2] Hauser, M., Cushman, F., Young, L., Kang-Xing Jin, R., and Mikhail, J. 2007. A dissociation between moral judgments and justifications. Mind Lang. 22:1–21. doi: 10.1111/j.1468-0017.2006.00297.x
[3] Greene, J. D. 2007. Why are VMPFC patients more utilitarian? A dual-process theory of moral judgment explains. Trends Cogn. Sci. 11:322–3. doi: 10.1016/j.tics.2007.06.004