It is plausible to think that it is wrong to cure many people's headaches rather than save someone else's life. On the other hand, it is plausible to think that it is not wrong to expose someone to a tiny risk of death when curing this person's headache. I will argue that these claims are inconsistent. For if we keep taking this tiny risk, then it is likely that one person dies, while many others' headaches are cured. In light of this inconsistency, there is a conflict in our intuitions about beneficence and chance. This conflict is perplexing. And I have not been able to find a satisfactory way of resolving it. Perhaps you can do better?
Intuitive Opposition to Aggregation
I will begin by fleshing out the two claims that I will argue are inconsistent. The first claim concerns decisions about whom to help. (1) Consider the following case:
ISLANDS. If you go to the north island, then you can save Jones's life. If you go to the south island, then you can cure the headaches of a billion people. You only have enough fuel to go to one island. (2)
You have two options. First, you can provide a large benefit to Jones by saving Jones's life. Second, you can provide a small benefit to a billion people by curing their headaches. Most of us have the intuition that you ought to save Jones's life. Moreover, you ought to do so, no matter how many people's headaches you could otherwise cure.
What principle would explain the fact that you must save Jones? A natural candidate is the following:
Many-Few. You may not provide small benefits to many people rather than save the life of someone else. (All else being equal.)
When I say you may not provide these benefits, "all else being equal," I am holding fixed features of these people, such as their relationship to you. I am also assuming that your behavior affects only the people who could receive the small benefits and the person whose life you could save. I am also assuming that the saved person would continue to live for a reasonably long period of time. (3)
If Many-Few is correct, then we should reject explanations that endorse a utilitarian approach to beneficence. According to this approach, it is always possible that small benefits to sufficiently many people "aggregate" to form a total amount of welfare that is more important than Jones's survival. If you ought to save Jones in ISLANDS, then the utilitarian approach to beneficence is misguided. (4)
Intuitive Tolerance of Risk-Taking
The second view concerns risk-taking. Sometimes, we take minuscule risks that someone will die in order to provide this person with a small benefit. To spare Amanda the inconvenience of taking the subway, I might drive her to the airport. I do so, even though this exposes her to a slightly increased chance of death. (I will ignore the risks to pedestrians and other motorists.) For Ben's culinary pleasure, I might serve him raw fish. I do so, even though cooking the fish would slightly reduce his risk of dying from a fatal bacterium. Do we sometimes kiss people for their sake? If so, we should pay heed to James Lenman's observation that:
it seems morally pretty unobjectionable to kiss one's lover in circumstances where there is (and I guess there always is) a fantastically small chance that, for example, one has contracted some deadly transmissible disease which one thereby passes on to them. It seems a kiss is worth the coyest flirtation with death. (5)
What would explain the fact that we may take these risks? A natural candidate is the following principle:
Risk Tolerance. You may expose someone to a negligible risk of death in order to otherwise provide this person with a small benefit. (All else being equal.)
Again, the "all else being equal" clause is in place to restrict the scope of the principle--the clause ensures that the principle has bite only in a case where the only morally significant feature of your action is that it has these chances of harms and benefits. I postpone for now discussion of what difference it makes whether this risk is metaphysical or doxastic in nature. (The quick answer? It makes little difference.)
The Repetition Argument
We have seen two claims, Many-Few and Risk Tolerance, which are initially plausible. Now I will offer an argument that concludes that they are inconsistent. This argument turns on the likely effects of repeating risky actions. It is helpful to consider the repetition of risks when these risks are small. This allows us to put these small risks "under the moral microscope," to use Derek Parfit's apt phrase. (6) Parfit's preferred microscope is universalization: He magnifies a small effect of an action by considering everyone performing the action. We can also magnify a small effect of an action by considering your performing the action many times: repeating a risk is another way of putting it under the moral microscope.
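The magnifying effect of repetition can be made vivid with a back-of-the-envelope calculation. The figures below are illustrative rather than drawn from any particular case discussed here: if each act carries an independent probability p of death, then n repetitions leave a 1 - (1 - p)^n chance that at least one death occurs.

```python
# Illustrative calculation: how repetition magnifies a tiny risk.
# The probabilities and repetition counts are hypothetical examples.

def prob_at_least_one_death(p, n):
    """Chance of at least one death over n independent acts,
    each carrying an individual death risk of p."""
    return 1 - (1 - p) ** n

p = 1e-6  # a one-in-a-million risk per act

# Repeated a million times, a death becomes more likely than not.
print(prob_at_least_one_death(p, 1_000_000))   # ~0.632

# Repeated ten million times, a death is a near-certainty.
print(prob_at_least_one_death(p, 10_000_000))  # ~0.99995
```

This is why a policy of repeatedly taking a "negligible" risk is, in aggregate, likely to cost a life, which is the observation the Repetition Argument exploits.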
The main argument of this paper turns on the likely effects of repeating the risk. In this respect, my argument will follow the same broad strategy employed by Alastair Norcross. (7) Norcross argues as follows:
P1 Other things being equal, it is better that one person die than that five million each incur a one in a million risk of dying (153).
P2 Other things being equal, it is better that five million people each incur a one in a million risk of dying than that each suffers a moderate headache for twenty-four hours (156).
C Therefore, it is better that one person die a premature death than that five million people each suffers a moderate headache for twenty-four hours.
And if we accept the conclusion of this argument, then we must reject the thesis that Norcross is targeting:
Worse: Other things being equal, it is worse that one person die a premature death than that any number of people suffer moderate headaches for twenty-four hours (152).
Norcross's target, Worse, is a close cousin of the claim Many-Few: While Many-Few is a claim about permissibility, Worse is a parallel claim about which outcomes are better than others. Norcross argues against this target as follows. He defends P1 by appealing to our intuition that one ought to direct a deadly gas so that it certainly kills one person, rather than expose 5 million people to a one-in-a-million risk of death: Although the "expected utility" of the risky option is equivalent to a situation in which five people among the 5 million certainly die, "there is a finite, but small (less than one percent), chance that no one will die" (153). Meanwhile, Norcross defends P2 by appealing to our intuition that it is permissible for a single individual to take a tiny risk of death by driving to the pharmacy for a painkiller. The intuition to which Norcross appeals here is the first-personal analogue of Risk Tolerance: While Risk Tolerance is the thesis that it is permissible to expose a stranger to a tiny fatal risk for a small benefit to her, Norcross appeals to the claim that a better outcome results when these individuals take these risks for themselves. Norcross then "scales up" this risk by imagining 5 million individuals separately driving to the pharmacy. On the grounds that it is better for them to do so, he argues that we should accept P2.
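Norcross's figures can be checked directly. Using the risk and population sizes given above, the expected number of deaths among the 5 million matches five certain deaths, while the chance that no one at all dies comes out just under one percent, as he reports:

```python
# Checking Norcross's figures (risk and population from his case).
p = 1 / 1_000_000   # per-person risk of death
n = 5_000_000       # number of people exposed

# In expectation, the risky option kills five people,
# matching the certain deaths of five among the 5 million.
expected_deaths = p * n

# The chance that no one dies: small, but finite.
prob_no_deaths = (1 - p) ** n

print(expected_deaths)   # 5.0
print(prob_no_deaths)    # ~0.0067, i.e. less than one percent
```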
The background idea that animates Norcross's argument is, I suggest, an important insight for our topic. However, the central inference in Norcross's argument presupposes a consequentialist approach to the ethics of risk and aggregation. Norcross's premises and conclusion are claims about which outcomes are better or worse than other outcomes, and he justifies his central inference by appealing to the "transitivity of 'better than'" (157). This inference will be happily accepted by nearly all consequentialists. But it is an inference that will be rejected by many non-consequentialists. They will respond that the attempt to translate all ethical features of situations into a single metric of "goodness" is an illegitimate strategy, in general. Consequently, they are unlikely to be persuaded, by means of this strategy, that claims like Many-Few and Risk Tolerance are inconsistent. This potential opposition is dialectically significant because many of the friends of Many-Few are non-consequentialists. Indeed, as we noted earlier, it is claims such as Many-Few that some non-consequentialists see as the key grounds for rejecting an aggregative teleological approach to ethics. (8)
My aim is to show that the broader strategy of considering the repetition of risks can be used to formulate an argument against which non-consequentialists should have no complaint. This argument will not appeal to the transitivity of "better than"; indeed, it will not even suppose that there is a property of a "better or worse" outcome--a property that some hard-nosed non-consequentialists deny. (9) The argument will nonetheless follow the same insight that lies behind Norcross's argument by considering the repetition of a tiny risk. As such, I will call it the "Repetition Argument." It starts with a premise entailed by Risk Tolerance, and reaches a conclusion that contradicts Many-Few. I should stress at the outset that I intend this argument to show only that Risk Tolerance and Many-Few are inconsistent. I take no stance on which thesis we should reject. In this respect, my argument is less ambitious than that of Norcross, who affirms an analogue of Risk Tolerance in order to persuade us to reject an analogue of Many-Few.
To run my argument, I will need an example of a risky action that you would be permitted to perform, according to Risk Tolerance. I will choose the following toy case:
POISON. In a nearby room, Victim has swallowed poison. If you do nothing, then Victim will have a headache and then die. You can bring Victim only one of two antidotes:
The Reliable Antidote is certain to save Victim, but will do nothing for Victim's headache.
The Chancy Antidote is most likely to cure Victim's headache and save...