Self-Evidence and Disagreement in Ethics

Author: Ryan Fanselow

Suppose Jane tells Sarah that she believes that active euthanasia is wrong, and Sarah responds by questioning whether it is reasonable for Jane to hold this belief, or whether Jane's belief is justified. In a typical case, Jane will attempt to show that her belief about euthanasia is justified by citing a further moral belief that supports her belief about euthanasia, perhaps her belief that intentionally causing someone's death is wrong. Note that if this further belief about the wrongness of intentionally causing death is itself unjustified, it cannot justify Jane's belief about euthanasia. Thus, Sarah may ask how this further belief is justified.

Scenarios like this one show that in moral epistemology, as in general epistemology, there exists a regress problem. Most of our justified moral beliefs are inferentially justified, that is, they are justified by further beliefs. But for these further beliefs to justify any beliefs, they too must be justified. We can justify these further beliefs by citing still further beliefs, but this process cannot continue ad infinitum. (1) Nor is it plausible that justification travels in a circle (e.g., P is justified by Q, which is justified by R, which is in turn justified by P). For then we could only show that a belief is justified by assuming it in the first place. Thus, it seems that if we are to avoid moral skepticism, the regress must come to an end somewhere.

According to a traditional position in moral epistemology, which I will call moral foundationalism, the regress comes to an end with some moral beliefs. (2) In order to stop the regress, these moral beliefs must meet two conditions. First, they must be justified; otherwise it is doubtful that they could inferentially justify other beliefs. Second, they must not require further beliefs in order to be justified; otherwise the regress would begin again.

Moral foundationalism is an attractive position because it promises to answer the regress problem. However, the position inherits the burden of explaining why some moral beliefs have a particular privileged epistemic position--that is, why these beliefs are justified without requiring inferential support from other beliefs. The standard answer to this question is to insist that some moral beliefs have as their content propositions that are self-evident, (3) where self-evident propositions are those true propositions such that "if one adequately understands them, then by virtue of that understanding one is justified in believing them." (4)

This standard version of moral foundationalism comes in strong and weak forms. According to strong moral foundationalism, an agent S is indefeasibly justified in believing any self-evident moral proposition P, so long as S adequately understands P. According to weak moral foundationalism, an adequate understanding of P renders S defeasibly justified in believing P. Note that strong moral foundationalism has the odd consequence that an agent S could be justified in believing a self-evident proposition P even if P were radically incoherent with S's other beliefs. For this reason, most contemporary moral foundationalists defend weak moral foundationalism, and I focus on this view in the discussion below.

Some philosophers who are not sympathetic to moral foundationalism object by denying that any moral proposition is self-evident. A well-known way of doing so is to note the deep disagreement about moral propositions among thoughtful, reflective people. As Richard Brandt puts it, "in ethics, even the doctors disagree." (5) Critics then infer from the nature of this disagreement that no moral proposition is self-evident.

Note, however, that it is not initially clear why the fact that a proposition is the subject of disagreement shows that it is not self-evident. Perhaps these philosophers think that a self-evident proposition must be obvious, and note that reflective, thoughtful people would not disagree about something that is obvious. However, one should not be so quick to claim that a self-evident proposition must be obvious. Certain logical and mathematical propositions are candidates for being self-evident, but in some cases it requires years of training to be able to grasp these propositions. In ethics, where one's judgment is frequently distorted by self-interest, among other things, it seems especially doubtful that all self-evident propositions will be obvious. Thus, this common argument from disagreement fails.

Nonetheless, I am going to argue that the nature of moral disagreement does pose serious difficulties for moral foundationalism. This argument is more restricted than the simple argument above because it focuses solely on cases of disagreement between epistemic peers (those who are roughly equivalent to us in terms of cognitive abilities, motivation to arrive at the truth, and available evidence). The argument will draw on some recent work in epistemology about the nature of our epistemic obligations in cases of peer disagreement. I begin in section 1 by defending a weak principle about what these obligations are. In section 2, I discuss the nature and scope of moral disagreement. In section 3, I argue that the weak principle defended in section 1 and the facts about disagreement described in section 2 rule out the possibility that moral beliefs can serve as regress stoppers. Thus, the main upshot of this paper is negative: moral foundationalism faces a serious objection. Nonetheless, in section 4, I argue that a coherentist position in moral epistemology can avoid these difficulties. Thus, the thesis of this paper should not be construed as a skeptical one. In the final section of the paper, I argue that this thesis has methodological implications. It rules out a position advocated by Peter Singer in his early work, according to which ethical inquiry must begin with "fundamental ethical axioms." (6) I suggest instead that the argument of this paper indirectly supports the method of reflective equilibrium.

1. A Weak Principle Regarding Peer Disagreement

As I mentioned above, there has been much recent work in epistemology about the nature of our epistemic obligations when we find that our epistemic peers disagree with us. Consider the following example:

    Suppose that five of us go out to dinner. It's time to pay the check, so the question we're interested in is how much we each owe. We can all see the bill total clearly, we all agree to give a 20% tip and we further agree to split the whole cost evenly, not worrying over who asked for imported water, or skipped dessert, or drank more of the wine. I do the math in my head and become highly confident that our shares are $43 each. Meanwhile, my friend does the math in her head and becomes highly confident that our shares are $45 each. How should I react, upon learning of her belief? (7)

Let us suppose that I know that my friend is genuinely my epistemic peer. We might "suppose that my friend and I have a long history of eating out together and dividing the check in our heads, and that we've been equally successful in our arithmetic efforts: the vast majority of times, we agree; but when we disagree, she's right as often as I am." (8)
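For concreteness, note the arithmetic behind the two answers (the passage does not state the bill total; the figures here are back-calculated from the disputed shares and are merely illustrative): my answer of $43 per share implies a total, with tip, of 5 × $43 = $215, and hence a pre-tip bill of $215 / 1.2 ≈ $179.17, while my friend's answer of $45 per share implies a total of 5 × $45 = $225 and a pre-tip bill of $225 / 1.2 = $187.50. The two answers are incompatible, so at least one of us has made an arithmetical slip.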

Perhaps I was justified in believing that we owe $43 each before I learned of my friend's disagreement. However, it seems to most people who consider this case that after I become aware of my friend's disagreement, I should suspend judgment about whether or not we owe $43 each. If the proper attitude towards the proposition "we owe $43 each" is suspension of judgment, then it follows that I am not justified in believing that we owe $43 each. In this case, at least, it seems that the mere fact that I am aware of disagreement regarding the proposition "we owe $43 each" changes the epistemic status of the proposition.

Of course, epistemologists disagree about what exactly we should do in the face of peer disagreement. Richard Feldman argues that, in cases where there is widespread peer disagreement, "suspension of judgment is the proper attitude. It follows that in such cases we lack reasonable belief and so, on standard conceptions, knowledge." (9) In contrast, Thomas Kelly writes, "disagreement does not provide a good reason for skepticism or to change one's view." (10)

In what follows, I defend the following weak principle:

    D: If an agent S is aware of peer disagreement regarding some proposition P, then in order for S to be justified in believing P, S must have a further belief (a belief other than P itself) that serves as a reason to believe P.

D is a weak principle because it only claims that awareness of disagreement defeats the justification of a belief B in certain cases, namely, cases in which one does not have a further belief (a belief other than B) that serves as a reason for the belief about which there is disagreement. Since most of our beliefs are supported by further beliefs, we can, at least most of the time, agree with Kelly that disagreement does not give us a reason to change our view or become skeptics.
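To make D's logical shape explicit, it can be rendered schematically as follows (this gloss is my own, not a formalization the author provides; "Reason(Q, P)" abbreviates "Q serves as a reason to believe P"):

    For any agent S and any proposition P:
    Aware(S, peer disagreement about P) → ( Justified(S, P) → ∃Q ( Q ≠ P ∧ Believes(S, Q) ∧ Reason(Q, P) ) )

On this rendering, the demand for an independent supporting belief is triggered only by awareness of peer disagreement; where no such disagreement is known, D imposes no requirement at all.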

I will argue for D by claiming that disagreement can change the epistemic status of a proposition, such that even if one did not need a further belief in order to be justified in believing P before learning of peer disagreement, one does need one after learning of such disagreement. The idea that disagreement can change the epistemic status of a proposition goes back to an oft-quoted passage from Henry Sidgwick:

    If I find any of my judgments, intuitive or inferential, in direct conflict with a judgment of some other mind, there must be error somewhere: and if I have no more reason to suspect error in the other mind than in my own, reflective comparison between the two judgments necessarily reduces me temporarily to a state of neutrality. (11)

This idea persists. (12) Consider the following passage from Russ Shafer-Landau:

    It is true that awareness of disagreement regarding one's moral...
