This article continues a series where I attempt to categorize moral wrongdoing to better understand what motivates it. Here, I discuss ‘moral error,’ of which there are two varieties: value-based and fact-based.
Value Error
The first type, value-based wrongdoing, refers to sin based on a misunderstanding of what is ‘right and wrong.’
Value-based moral error often overlaps with akrasia1 — which we discussed in Part One — because our emotions shape the value structures we form. But not always: humans are instilled with a certain set of values from a young age, values that can be very hard to break away from.
When value error does overlap with akrasia, it is usually because our emotions, over time, lead us to develop immoral thinking. For example, a person may grow up in a cut-throat, materialistic environment that causes them to develop a narcissistic worldview. Soon, rather than merely being bound to narcissistic impulses, they come to believe that all actions benefiting the self are right.
For this reason, the boundaries between akrasia and emotionally influenced value error are often unclear. Many, particularly the Socratic types, argue that they are inseparable. Psychologists, for example, often theorize that our moral behavior is not a direct result of the values we develop.
Milgram's obedience study famously showed how many people seemingly sacrifice their moral values in the face of authority. In the experiment, participants, under the supervision of an authoritative experimenter, were instructed to administer increasingly severe electric shocks to a "learner" whenever the learner answered questions incorrectly. Despite the learner's apparent severe distress, the majority of participants delivered shocks up to the maximum voltage. Many argue that this shows that people often abandon moral reasoning under certain circumstances, such as the presence of a domineering authority figure. The experiment has even been used to explain the psychology behind why people have followed orders to commit genocide.
However, a good argument can be made that subconscious trust in authority is a value judgment in and of itself. Of course, it is shocking how many prioritize their trust in the authority of a researcher over preventing other people’s suffering, but it doesn’t necessarily mean that the participants abandoned moral reasoning altogether. Rather, it shows how people’s value prioritization is often very wrong. Humans often value authority even under the most dubious of circumstances.
Some argue that value-based moral error doesn't exist. This is a common approach among many moral realists2 who argue that the human intuition for morality proves its mind-independent truth. Such people — intuitionists — often claim that genuine cross-cultural moral disagreement does not exist; apparent disagreement instead comes down to factual disagreements — the other type of moral error I will cover soon. They argue that moral truths are self-evident, much like logical or mathematical truths.
However, ‘facts’ do not exist in a vacuum, as our value-laden biases affect our interpretation of them. Take, for example, how various people will treat a bug in their house. Someone who values the lives of all living beings, regardless of their similarity to humans, might do everything they can to gently remove the bug or even allow it to stay if all else fails. A germaphobe, on the other hand, might just kill it by any means necessary.
Of course, many would counter that people's life experiences — which shape their factual understanding — can change their values in response to a situation like this. An entomologist3 might learn how all life forms share many similarities and, thus, value them more equally than others would. On the other hand, a person whose family member died from an insect bite would likely prioritize their own safety above all else, including insect life. Humans do not all experience life the same way, so they are not afforded the same set of facts to interpret. This leads to a chicken-and-egg-like situation where it's unclear whether factual or normative4 differences come first in humans.
This naturally raises the question of how value judgments change in the face of new facts. If an entomologist learns about a factual similarity between humans and insects, why would that automatically lead them to value humans and insects the same?
Factual differences, on their own, cannot explain this. The is-ought problem, formulated by David Hume, demonstrates that values cannot be deduced from facts. Facts describe information — what is — but do not tell us what we ought to do with that information. In other words, facts about the world provide a foundation for knowledge but carry no moral prescriptions. New factual information can help us confirm which actions align with our value system, but it cannot explain how that system arose in the first place. What can, then?
The answer isn't entirely clear. As discussed in Part One, emotions, personal experiences, and differences in biological hardwiring all likely play a role in this transition. For example, scientific research has consistently found a connection between personality differences, which are at least partially innate, and variance in political and moral beliefs. Furthermore, if humans have free will, they have the capacity to reflect on or modify their values, giving them some agency in deciding which values they adopt.
So, at least in theory, moral discord based on a genuine conflict of values can exist.
Factual Error
Fact-based moral error, on the other hand, refers to the situation where a person has a perfectly correct understanding of ‘right’ and ‘wrong’ but acts immorally because they get the facts wrong.
Take, for example, a person who donates large sums of money to a charity, believing the money is going towards underprivileged children. Instead, the charity uses it to fund human trafficking. Unbeknownst to them, they are a large contributor to a ring causing the suffering of millions of people.
Similarly, certain traditional societies kill disabled children in the belief that they carry evil spirits, an act born not of direct malice but of a misunderstanding of natural phenomena.
Could any of these forms of fact-based moral error be classified as evil? In the realm of pure philosophy, one could make a good argument that they should not be. But, as mentioned before, our biases affect our interpretation of facts, which is why fully isolated examples of factual error are hard to find.
Furthermore, negligent or careless factual investigation, while not as egregious as intentional harm, still carries great ethical weight. Moral responsibility extends to the duty of ensuring factual accuracy before acting, especially when those actions affect others.
In the first example, if the donor is truly oblivious to what is happening and has made strong efforts to verify the charity's reliability, then we have something fairly close to a pure example of factual error.
This is why many argue that no moral wrongdoing occurs in such a situation. Proponents of agent-focused normative ethics,5 such as virtue ethicists,6 argue that the moral worth of an action must be evaluated based on the motivations of the agent in question — the person doing the action. Thus, regardless of the consequence, if the agent did not mean any harm and did not act carelessly, they are free of moral liability.
From this perspective, it would be much harder to argue the same regarding value-based moral error. If evil is defined as acting with bad moral motivation, then having the wrong values would definitionally fall under that.
This way of thinking contrasts with traditional normative ethical theories, such as deontology and utilitarianism, and the legal practice of strict liability7 found in the U.S. and other countries.
This agent-centered approach avoids many of the problems that plague such worldviews — one of them being ‘moral schizophrenia,’ where people’s motivations and desires are at perpetual odds with what they ‘ought’ to do. As I argued in Part One, the motivations I speak of are not unfiltered passions but rather integrated experience, emotion, and reason.
While moral error is complex because of its nuance and overlap with other types of sin, exploring and understanding it can shed light on the root of many of our moral disagreements — ultimately paving the way for meaningful dialogue and moral progress.
1. Desire overpowering the will in moral situations
2. A person who believes that moral truths are mind-independent
3. A scientist who studies insects and other arthropods
4. What ‘should’ or ‘should not’ be done
5. A branch of moral philosophy which examines how people should act
6. A normative ethical theory which holds that an action is right if a virtuous person would have carried it out under those circumstances
7. The practice of holding people liable for the consequences of their actions, regardless of fault or intent