Mele, A. R. (1998, in press) Real Self-Deception.  Behavioral and Brain Sciences

REAL SELF-DECEPTION

Alfred R. Mele
Department of Philosophy
Davidson College
Davidson, NC 28036
USA
 
 

Electronic mail: almele@davidson.edu
 

Keywords

belief; bias; contradictory beliefs; intention; motivation; self-deception; wishful thinking
 

Abstract

Self-deception is often understood on the model of stereotypical interpersonal deception, a construal that generates two much-discussed puzzles: a static puzzle about how self-deceivers can simultaneously believe that p and believe that ~p, and a dynamic puzzle about how an agent's attempt to deceive himself can succeed without being self-undermining. Drawing upon empirical studies of motivationally biased belief, this article resolves both puzzles, articulates a set of jointly sufficient conditions for self-deception that requires neither "dual beliefs" nor an intention to deceive oneself, challenges some alleged empirical demonstrations of self-deception on the "dual belief" conception, and argues that garden-variety self-deception is explicable without the assistance of mental exotica.

1. Introduction

Self-deception poses tantalizing conceptual conundrums and provides fertile ground for empirical research. Recent interdisciplinary volumes on the topic feature essays by biologists, philosophers, psychiatrists, and psychologists (Lockard & Paulhus 1988, Martin 1985). Self-deception's location at the intersection of these disciplines is explained by its significance for questions of abiding interdisciplinary interest. To what extent is our mental life present--or even accessible--to consciousness? How rational are we? How is motivated irrationality to be explained? To what extent are our beliefs subject to our control? What are the determinants of belief, and how does motivation bear upon belief? In what measure are widely shared psychological propensities products of evolution?[1]

A proper grasp of the dynamics of self-deception may yield substantial practical gains. Plato wrote, "there is nothing worse than self-deception--when the deceiver is at home and always with you" (Cratylus 428d). Others argue that self-deception sometimes is beneficial; and whether we would be better or worse off, on the whole, if we never deceived ourselves is an open question.[2] In any case, ideally, a detailed understanding of the etiology of self-deception would help reduce the frequency of harmful self-deception. This hope is boldly voiced by Jonathan Baron in a book on rational thinking and associated obstacles: "If people know that their thinking is poor, they will not believe its results. One of the purposes of a book like this is to make recognition of poor thinking more widespread, so that it will no longer be such a handy means of self-deception" (1988, p. 39). A lively debate in social psychology about the extent to which sources of biased belief are subject to personal control has generated evidence that some prominent sources of bias are to some degree controllable.[3] This provides grounds for hope that a better understanding of self-deception would enhance our ability to do something about it.

My aim in this article is to clarify the nature and (relatively proximate) etiology of self-deception. Theorists have tended to construe self-deception as largely isomorphic with paradigmatic interpersonal deception. Such construals, which have generated some much-discussed puzzles or "paradoxes," guide influential work on self-deception in each of the four disciplines mentioned (e.g., Davidson 1985, Gur & Sackeim 1979, Haight 1980, Pears 1984, Quattrone & Tversky 1984, Trivers 1985).[4] In the course of resolving the major puzzles, I will argue that the attempt to understand self-deception on the model of paradigmatic interpersonal deception is fundamentally misguided. Section 2 provides background, including sketches of two familiar puzzles: one about the mental state of a self-deceived person at a given time, the other about the dynamics of self-deception. Section 3, drawing upon empirical studies of biased belief, resolves the first puzzle and articulates sufficient conditions for self-deception. Section 4 challenges some attempted empirical demonstrations of the reality of self-deception, construed as requiring the simultaneous possession of beliefs whose propositional contents are mutually contradictory. Section 5 resolves the dynamic puzzle. Section 6 examines intentional self-deception.

Readers should be forewarned that the position defended here is deflationary. If I am right, self-deception is neither irresolvably paradoxical nor mysterious and it is explicable without the assistance of mental exotica. Although a theorist whose interest in self-deception is restricted to the outer limits of logical or conceptual possibility might view this as draining the topic of conceptual fun, the main source of broader, enduring interest in self-deception is a concern to understand and explain the behavior of real human beings.
 

2. Three Approaches to Characterizing Self-Deception and a Pair of Puzzles

Defining 'self-deception' is no mean feat. Three common approaches may be distinguished. One is lexical: a theorist starts with a definition of 'deceive' or 'deception', using the dictionary or common usage as a guide, and then employs it as a model for defining self-deception. Another is example-based: one scrutinizes representative examples of self-deception and attempts to identify their essential common features. The third is theory-guided: the search for a definition is guided by common-sense theory about the etiology and nature of self-deception. Hybrids of these approaches are also common.

The lexical approach may seem safest. Practitioners of the example-based approach run the risk of considering too narrow a range of cases. The theory-guided approach (in its typical manifestations) relies on common-sense explanatory hypotheses that may be misguided: ordinary folks may be good at identifying hypothetical cases of self-deception but quite unreliable at diagnosing what transpires in them. In its most pristine versions, the lexical approach relies primarily on a dictionary definition of 'deceive'. And what could be a better source of definitions than the dictionary?

Matters are not so simple, however. There are weaker and stronger senses of 'deceive' both in the dictionary and in common parlance, as I will explain. Lexicalists need a sense of 'deceive' that is appropriate to self-deception. On what basis are they to identify that sense? Must they eventually turn to representative examples of self-deception or to common-sense theories about what transpires in instances of self-deception?

The lexical approach is favored by theorists who deny that self-deception is possible (e.g., Gergen 1985, Haight 1980, Kipp 1980). Two lexical assumptions are common:

1. By definition, person A deceives person B (where B may or may not be the same person as A) into believing that p only if A knows, or at least believes truly, that ~p (i.e., that p is false) and causes B to believe that p.

2. By definition, deceiving is an intentional activity: nonintentional deceiving is conceptually impossible.

Each assumption is associated with a familiar puzzle about self-deception.

If assumption 1 is true, then deceiving oneself into believing that p requires that one know, or at least believe truly, that ~p and cause oneself to believe that p. At the very least, one starts out believing that ~p and then somehow gets oneself to believe that p. Some theorists take this to entail that, at some time, self-deceivers both believe that p and believe that ~p (e.g., Kipp 1980, p. 309). And, it is claimed, this is not a possible state of mind: the very nature of belief precludes one's simultaneously believing that p is true and believing that p is false. Thus we have a static puzzle about self-deception: self-deception, according to the view at issue, requires being in an impossible state of mind.

Assumption 2 generates a dynamic puzzle, a puzzle about the dynamics of self-deception. It is often held that doing something intentionally entails doing it knowingly. If that is so, and if deceiving is by definition an intentional activity, then one who deceives oneself does so knowingly. But knowingly deceiving oneself into believing that p would require knowing that what one is getting oneself to believe is false. How can that knowledge fail to undermine the very project of deceiving oneself? It is hard to imagine how one person can deceive another into believing that p if the latter person knows exactly what the former is up to. And it is difficult to see how the trick can be any easier when the intending deceiver and the intended victim are the same person.[5] Further, deception normally is facilitated by the deceiver's having and intentionally executing a deceptive strategy. If, to avoid thwarting one's own efforts at self-deception, one must not intentionally execute any strategy for deceiving oneself, how can one succeed?

In sketching these puzzles, I conjoined the numbered assumptions with subsidiary ones. One way for a proponent of the reality of self-deception to attempt to solve the puzzles is to attack the subsidiary assumptions while leaving the main assumptions unchallenged. A more daring tack is to undermine the main assumptions, 1 and 2. That is the line I will pursue.

Stereotypical instances of deceiving someone else into believing that p are instances of intentional deceiving in which the deceiver knows or believes truly that ~p. Recast as claims specifically about stereotypical interpersonal deceiving, assumptions 1 and 2 would be acceptable. But in their present formulations the assumptions are false. In a standard use of 'deceived' in the passive voice, we properly say such things as "Unless I am deceived, I left my keys in my car." Here 'deceived' means 'mistaken'. There is a corresponding use of 'deceive' in the active voice. In this use, to deceive is "to cause to believe what is false" (my authority is the Oxford English Dictionary). Obviously, one can intentionally or unintentionally cause someone to believe what is false; and one can cause someone to acquire the false belief that p even though one does not oneself believe that ~p. Yesterday, mistakenly believing that my son's keys were on my desk, I told him they were there. In so doing, I caused him to believe a falsehood. I deceived him, in the sense identified; but I did not do so intentionally, nor did I cause him to believe something I disbelieved.

The point just made has little significance for self-deception, if paradigmatic instances of self-deception have the structure of stereotypical instances of interpersonal deception. But do they? Stock examples of self-deception, both in popular thought and in the literature, feature people who falsely believe--in the face of strong evidence to the contrary--that their spouses are not having affairs, or that their children are not using illicit drugs, or that they themselves are not seriously ill. Is it a plausible diagnosis of what transpires in such cases that these people start by knowing or believing the truth, p, and intentionally cause themselves to believe that ~p? If, in our search for a definition of self-deception, we are guided partly by these stock examples, we may deem it an open question whether self-deception requires intentionally deceiving oneself, getting oneself to believe something one earlier knew or believed to be false, simultaneously possessing conflicting beliefs, and the like. If, instead, our search is driven by a presumption that nothing counts as self-deception unless it has the same structure as stereotypical interpersonal deception, the question is closed at the outset.

Compare the question whether self-deception is properly understood on the model of stereotypical interpersonal deception with the question whether addiction is properly understood on the model of disease. Perhaps the current folk-conception of addiction treats addictions as being, by definition, diseases. However, the disease model of addiction has been forcefully attacked (e.g., Peele 1989). The issue is essentially about explanation, not semantics. How is the characteristic behavior of people typically counted as addicts best explained? Is the disease model of addiction explanatorily more fruitful than its competitors? Self-deception, like addiction, is an explanatory concept. We postulate self-deception in particular cases to explain behavioral data. And we should ask how self-deception is likely to be constituted--what it is likely to be--if it does help to explain the relevant data. Should we discover that the behavioral data explained by self-deception are not explained by a phenomenon involving the simultaneous possession of beliefs whose contents are mutually contradictory or intentional acts of deception directed at oneself, self-deception would not disappear from our conceptual map--any more than addiction would disappear should we learn that addictions are not diseases.

A caveat is in order before I move forward. In the literature on self-deception, "belief," rather than "degree of belief," usually is the operative notion. Here, I follow suit, primarily to avoid unnecessary complexities. Those who prefer to think in terms of degree of belief should read such expressions as "S believes that p" as shorthand for "S believes that p to a degree greater than 0.5 (on a scale from 0 to 1)."
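
For readers who do prefer the degree-theoretic formulation, the shorthand can be displayed schematically as follows (the notation is merely illustrative and is not part of the original analysis; "Bel_S(p)" abbreviates "S believes that p," "cr_S" is S's credence, or degree of belief, and the second clause assumes, as standard probabilistic treatments do, that one's credences in p and ~p sum to 1):

\[
\mathrm{Bel}_S(p) \;=_{\mathrm{df}}\; \mathrm{cr}_S(p) > 0.5, \qquad \mathrm{cr}_S(\neg p) = 1 - \mathrm{cr}_S(p).
\]

Notice that, on this gloss, simultaneously believing that p and believing that ~p would require both cr_S(p) > 0.5 and 1 - cr_S(p) > 0.5, which is arithmetically impossible. This is one of the complexities that the degree-of-belief idiom imports into the static puzzle, and one reason for setting that idiom aside here.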
 

3. Motivated Belief and the Static Puzzle

In stock examples of self-deception, people typically believe something they want to be true--that their spouses are not involved in extramarital flings, that their children are not using drugs, and so on. It is a commonplace that self-deception, in garden-variety cases, is motivated by wants such as these.[6] Should it turn out that the motivated nature of self-deception entails that self-deceivers intentionally deceive themselves and requires that those who deceive themselves into believing that p start by believing that ~p, theorists who seek a tight fit between self-deception and stereotypical interpersonal deception would be vindicated. Whether self-deception can be motivated without being intentional--and without the self-deceiver's starting with the relevant true belief--remains to be seen.

A host of studies have produced results that are utterly unsurprising on the hypothesis that motivation sometimes biases beliefs. Thomas Gilovich reports:

A survey of one million high school seniors found that 70% thought they were above average in leadership ability, and only 2% thought they were below average. In terms of ability to get along with others, all students thought they were above average, 60% thought they were in the top 10%, and 25% thought they were in the top 1%! . . . A survey of university professors found that 94% thought they were better at their jobs than their average colleague. (1991, p. 77)

Apparently, we have a tendency to believe propositions we want to be true even when an impartial investigation of readily available data would indicate that they probably are false. A plausible hypothesis about that tendency is that our wanting something to be true sometimes exerts a biasing influence on what we believe.

Ziva Kunda, in a recent review essay, ably defends the view that motivation can influence "the generation and evaluation of hypotheses, of inference rules, and of evidence," and that motivationally "biased memory search will result in the formation of additional biased beliefs and theories that are constructed so as to justify desired conclusions" (1990, p. 483). In an especially persuasive study, undergraduate subjects (75 women and 86 men) read an article alleging that "women were endangered by caffeine and were strongly advised to avoid caffeine in any form"; that the major danger was fibrocystic disease, "associated in its advanced stages with breast cancer"; and that "caffeine induced the disease by increasing the concentration of a substance called cAMP in the breast" (Kunda 1987, p. 642). (Since the article did not personally threaten men, they were used as a control group.) Subjects were then asked to indicate, among other things, "how convinced they were of the connection between caffeine and fibrocystic disease and of the connection between caffeine and . . . cAMP on a 6-point scale" (pp. 643-44). In the female group, "heavy consumers" of caffeine were significantly less convinced of the connections than were "low consumers." The males were considerably more convinced than the female "heavy consumers"; and there was a much smaller difference in conviction between "heavy" and "low" male caffeine consumers (the heavy consumers were slightly more convinced of the connections).

Given that all subjects were exposed to the same information and assuming that only the female "heavy consumers" were personally threatened by it, a plausible hypothesis is that their lower level of conviction is due to "motivational processes designed to preserve optimism about their future health" (Kunda 1987, p. 644). Indeed, in a study in which the reported hazards of caffeine use were relatively modest, "female heavy consumers were no less convinced by the evidence than were female low consumers" (p. 644). Along with the lesser threat, there is less motivation for skepticism about the evidence.

How do the female heavy consumers come to be less convinced than the others? One testable possibility is that because they find the "connections" at issue personally threatening, these women (or some of them) are motivated to take a hyper-critical stance toward the article, looking much harder than other subjects for reasons to be skeptical about its merits (cf. Kunda 1990, p. 495). Another is that, owing to the threatening nature of the article, they (or some of them) read it less carefully than the others do, thereby enabling themselves to be less impressed by it.[7] In either case, however, there is no need to suppose that the women intend to deceive themselves, or intend to bring it about that they hold certain beliefs, or start by finding the article convincing and then get themselves to find it less convincing. Motivation can prompt cognitive behavior protective of favored beliefs without the person's intending to protect those beliefs. Many instances of self-deception, as I will argue, are explicable along similar lines.

Beliefs that we are self-deceived in acquiring or retaining are a species of biased belief. In self-deception, on a widely held view, the biasing is motivated. Even so, attention to some sources of unmotivated or "cold" biased belief will prove salutary. A number of such sources have been identified in psychological literature. Here are four.[8]

3.1.1. Vividness of information. A datum's vividness for an individual often is a function of individual interests, the concreteness of the datum, its "imagery-provoking" power, or its sensory, temporal, or spatial proximity (Nisbett & Ross 1980, p. 45). Vivid data are more likely to be recognized, attended to, and recalled than pallid data. Consequently, vivid data tend to have a disproportionate influence on the formation and retention of beliefs.[9]

3.1.2. The availability heuristic. When we form beliefs about the frequency, likelihood, or causes of an event, we "often may be influenced by the relative availability of the objects or events, that is, their accessibility in the processes of perception, memory, or construction from imagination" (Nisbett & Ross, p. 18). For example, we may mistakenly believe that the number of English words beginning with 'r' greatly outstrips the number having 'r' in the third position, because we find it much easier to produce words on the basis of a search for their first letter (Tversky & Kahneman 1973). Similarly, attempts to locate the cause(s) of an event are significantly influenced by manipulations that focus one's attention on a potential cause (Nisbett & Ross, p. 22; Taylor & Fiske 1975, 1978).

3.1.3. The confirmation bias. People testing a hypothesis tend to search (in memory and the world) more often for confirming than for disconfirming instances and to recognize the former more readily (Baron 1988, pp. 259-65; Nisbett & Ross, pp. 181-82). This is true even when the hypothesis is only a tentative one (as opposed, e.g., to a belief one has). The implications of this tendency for the retention and formation of beliefs are obvious.

3.1.4. Tendency to search for causal explanations. We tend to search for causal explanations of events (Nisbett & Ross, pp. 183-86). On a plausible view of the macroscopic world, this is as it should be. But given the vividness and availability effects described in 3.1.1 and 3.1.2 above, the causal explanations upon which we so easily hit in ordinary life may often be ill-founded; and given the confirmation bias (3.1.3), one is likely to endorse and retain one's first hypothesis much more often than one ought. Further, ill-founded causal explanations can influence future inferences.

Obviously, the most vivid or available data sometimes have the greatest evidential value; the influence of such data is not always a biasing influence. The main point to be made is that although sources of biased belief can function independently of motivation, they may also be primed by motivation in the production of particular motivationally biased beliefs.[10] For example, motivation can enhance the vividness or salience of certain data. Data that count in favor of the truth of a hypothesis that one would like to be true might be rendered more vivid or salient given one's recognition that they so count; and vivid or salient data, given that they are more likely to be recalled, tend to be more "available" than pallid counterparts. Similarly, motivation can influence which hypotheses occur to one (including causal hypotheses) and affect the salience of available hypotheses, thereby setting the stage for the confirmation bias.[11] When this happens, motivation issues in cognitive behavior that epistemologists shun. False beliefs produced or sustained by such motivated cognitive behavior in the face of weightier evidence to the contrary are, I will argue, beliefs that one is self-deceived in holding. And the self-deception in no way requires that the agents intend to deceive themselves, or intend to produce or sustain certain beliefs in themselves, or start by believing something they end up disbelieving. Cold biasing is not intentional; and mechanisms of the sort described may be primed by motivation independently of any intention to deceive.

There are a variety of ways in which our desiring that p can contribute to our believing that p in instances of self-deception. Here are some examples.[12]

3.2.1. Negative Misinterpretation. Our desiring that p may lead us to misinterpret as not counting (or not counting strongly) against p data that we would easily recognize to count (or count strongly) against p in the desire's absence. For example, Don just received a rejection notice on a journal submission. He hopes that his article was wrongly rejected, and he reads through the comments offered. Don decides that the referees misunderstood a certain crucial but complex point and that their objections consequently do not justify the rejection. However, as it turns out, the referees' criticisms were entirely justified; and when, a few weeks later, Don rereads his paper and the comments in a more impartial frame of mind, it is clear to him that the rejection was warranted.

3.2.2. Positive Misinterpretation. Our desiring that p may lead us to interpret as supporting p data that we would easily recognize to count against p in the desire's absence. For example, Sid is very fond of Roz, a college classmate with whom he often studies. Wanting it to be true that Roz loves him, he may interpret her refusing to date him and her reminding him that she has a steady boyfriend as an effort on her part to "play hard to get" in order to encourage Sid to continue to pursue her and prove that his love for her approximates hers for him. As Sid interprets Roz's behavior, not only does it fail to count against the hypothesis that she loves him, it is evidence for the truth of that hypothesis.

3.2.3. Selective Focusing/Attending. Our desiring that p may lead us both to fail to focus attention on evidence that counts against p and to focus instead on evidence suggestive of p. Attentional behavior may be either intentional or unintentional. Ann may tell herself that it is a waste of time to consider her evidence that her husband is having an affair, since he loves her too much to do such a thing; and she may intentionally act accordingly. Or, because of the unpleasantness of such thoughts, Ann may find her attention shifting whenever the issue suggests itself.

3.2.4. Selective Evidence-Gathering. Our desiring that p may lead us both to overlook easily obtained evidence for ~p and to find evidence for p that is much less accessible. A historian of philosophy who holds a certain philosophical position hopes that her favorite philosopher (Plato) did so too; consequently, she scours the texts for evidence of this while consulting commentaries that she thinks will provide support for the favored interpretation. Our historian may easily miss rather obvious evidence to the contrary, even though she succeeds in finding obscure evidence for her favored interpretation. Selective evidence-gathering may be analyzed as a combination of 'hyper-sensitivity' to evidence (and sources of evidence) for the desired state of affairs and 'blindness'--of which there are, of course, degrees--to contrary evidence (and sources thereof).[13]

In none of the examples offered does the person hold the true belief that ~p and then intentionally bring it about that he or she believes that p. Yet, assuming that my hypothetical agents acquire relevant false beliefs in the ways described, these are garden-variety instances of self-deception. Don is self-deceived in believing that his article was wrongly rejected, Sid is self-deceived in believing certain things about Roz, and so on.

It sometimes is claimed that while we are deceiving ourselves into believing that p we must be aware that our evidence favors ~p, on the grounds that this awareness is part of what explains our motivationally biased treatment of data (Davidson 1985, p. 146). The thought is that without this awareness we would have no reason to treat data in a biased way, since the data would not be viewed as threatening, and consequently we would not engage in motivationally biased cognition. In this view, self-deception is understood on the model of intentional action: the agent has a goal, sees how to promote it, and seeks to promote it in that way. However, the model places excessive demands on self-deceivers.[14] Cold or unmotivated biased cognition is not explained on the model of intentional action; and motivation can prime mechanisms for the cold biasing of data in us without our being aware, or believing, that our evidence favors a certain proposition. Desire-influenced biasing may result both in our not being aware that our evidence favors ~p over p and in our acquiring the belief that p. This is a natural interpretation of the illustrations I offered of misinterpretation and of selective focusing/attending. In each case, the person's evidence may favor the undesirable proposition; but there is no need to suppose the person is aware of this in order to explain the person's biased cognition.[15] Evidence that one's spouse is having an affair (or that a scholarly paper one painstakingly produced is seriously flawed, or that someone one loves lacks reciprocal feelings) may be threatening even if one lacks the belief, or the awareness, that that evidence is stronger than one's contrary evidence.

Analyzing self-deception is a difficult task; providing a plausible set of sufficient conditions for self-deception is less demanding. Not all cases of self-deception need involve the acquisition of a new belief. Sometimes we may be self-deceived in retaining a belief that we were not self-deceived in acquiring. Still, the primary focus in the literature has been on self-deceptive belief-acquisition, and I will follow suit.

I suggest that the following conditions are jointly sufficient for entering self-deception in acquiring a belief that p.

1. The belief that p which S acquires is false.

2. S treats data relevant, or at least seemingly relevant, to the truth value of p in a motivationally biased way.

3. This biased treatment is a nondeviant cause of S's acquiring the belief that p.

4. The body of data possessed by S at the time provides greater warrant for ~p than for p.[16]
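
The four conditions can be collected into a single schematic statement (again, the notation is merely illustrative: "D_S" is the body of data S possesses at the time, "Bias_M(S, D_S)" says that S's treatment of those data is motivationally biased, the labeled arrow is the nondeviant causal relation of condition 3, "w" is a measure of evidential warrant, and "SD_S(p)" abbreviates "S enters self-deception in acquiring the belief that p"):

\[
\underbrace{\neg p}_{\text{cond. 1}} \;\wedge\; \underbrace{\mathrm{Bias}_M(S, D_S)}_{\text{cond. 2}} \;\wedge\; \underbrace{\bigl[\mathrm{Bias}_M(S, D_S) \xrightarrow{\;nd\;} \mathrm{Bel}_S(p)\bigr]}_{\text{cond. 3}} \;\wedge\; \underbrace{w(D_S, \neg p) > w(D_S, p)}_{\text{cond. 4}} \;\Longrightarrow\; \mathrm{SD}_S(p).
\]

Nothing in this statement requires Bel_S(~p), an intention to deceive oneself, or awareness of the direction of one's evidence.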

Each condition requires brief attention. Condition 1 captures a purely lexical point. A person is, by definition, deceived in believing that p only if p is false; the same is true of being self-deceived in believing that p. The condition in no way implies that the falsity of p has special importance for the dynamics of self-deception. Motivationally biased treatment of data may sometimes result in someone's believing an improbable proposition, p, that, as it happens, is true. There may be self-deception in such a case; but the person is not self-deceived in believing that p, nor in acquiring the belief that p.[17]

My brief discussion of various ways of entering self-deception serves well enough as an introduction to condition 2. My list of motivationally biased routes to self-deception is not intended as exhaustive; but my discussion of these routes does provide a gloss on the notion of motivationally biased treatment of data.

My inclusion of the term 'nondeviant' in condition 3 is motivated by a familiar problem for causal characterizations of phenomena in any sphere (see, e.g., Mele 1992a, ch. 11). Specifying the precise nature of nondeviant causation of a belief by motivationally biased treatment of data is a difficult technical task better reserved for another occasion. However, much of this article provides guidance on the issue.

The thrust of condition 4 is that self-deceivers believe against the weight of the evidence they possess. For reasons offered elsewhere, I do not view 4 as a necessary condition of self-deception (Mele 1987a, pp. 134-35). In some instances of motivationally biased evidence-gathering, e.g., people may bring it about that they believe a falsehood, p, when ~p is much better supported by evidence readily available to them, even though, owing to the selectivity of the evidence-gathering process, the evidence that they themselves actually possess at the time favors p over ~p. As I see it, such people are naturally deemed self-deceived, other things being equal. Other writers on the topic do require that a condition like 4 be satisfied, however (e.g., Davidson 1985, McLaughlin 1988, Szabados 1985); and I have no objection to including 4 in a list of jointly sufficient conditions. Naturally, in some cases, whether the weight of a person's evidence lies on the side of p or of ~p (or equally supports each) is subject to legitimate disagreement.[18]

Return to the static puzzle. The primary assumption, again, is this: "By definition, person A deceives person B (where B may or may not be the same person as A) into believing that p only if A knows, or at least believes truly, that ~p and causes B to believe that p." I have already argued that the assumption is false and I have attacked two related conceptual claims about self-deception: that all self-deceivers know or believe truly that ~p while (or before) causing themselves to believe that p, and that they simultaneously believe that ~p and believe that p. In many garden-variety instances of self-deception, the false belief that p is not preceded by the true belief that ~p, nor are the two beliefs held simultaneously. Rather, a desire-influenced treatment of data has the result both that the person does not acquire the true belief and that he or she does acquire (or retain) the false belief. One might worry that the puzzle emerges at some other level; but I have addressed that worry elsewhere and I set it aside here (Mele 1987a, pp. 129-30).

The conditions for self-deception that I have offered are conditions specifically for entering self-deception in acquiring a belief. However, as I mentioned, ordinary conceptions of the phenomenon allow people to enter self-deception in retaining a belief. Here is an illustration from Mele 1987a (pp. 131-32):

Sam has believed for many years that his wife, Sally, would never have an affair. In the past, his evidence for this belief was quite good. Sally obviously adored him; she never displayed a sexual interest in another man . . .; she condemned extramarital sexual activity; she was secure, and happy with her family life; and so on. However, things recently began to change significantly. Sally is now arriving home late from work on the average of two nights a week; she frequently finds excuses to leave the house alone after dinner; and Sam has been informed by a close friend that Sally has been seen in the company of a certain Mr. Jones at a theater and a local lounge. Nevertheless, Sam continues to believe that Sally would never have an affair. Unfortunately, he is wrong. Her relationship with Jones is by no means platonic.

In general, the stronger the perceived evidence one has against a proposition that one believes (or "against the belief," for short), the harder it is to retain the belief. Suppose Sam's evidence against his favored belief--that Sally is not having an affair--is not so strong as to render self-deception psychologically impossible and not so weak as to make an attribution of self-deception implausible. Each of the four types of data-manipulation I mentioned may occur in a case of this kind. Sam may positively misinterpret data, reasoning that if Sally were having an affair she would want to hide it and that her public meetings with Jones consequently indicate that she is not sexually involved with him. He may negatively misinterpret the data, and even (nonintentionally) recruit Sally in so doing by asking her for an "explanation" of the data or by suggesting for her approval some acceptable hypothesis about her conduct. Selective focusing may play an obvious role. And even selective evidence-gathering has a potential place in Sam's self-deception. He may set out to conduct an impartial investigation, but, owing to his desire that Sally not be having an affair, locate less accessible evidence for the desired state of affairs while overlooking some more readily attainable support for the contrary judgment.

Here again, garden-variety self-deception is explicable independently of the assumption that self-deceivers manipulate data with the intention of deceiving themselves, or with the intention of protecting a favored belief. Nor is there an explanatory need to suppose that at some point Sam both believes that p and believes that ~p.
 

4. Conflicting Beliefs and Alleged Empirical Demonstrations of Self-Deception

I have argued that in various garden-variety examples, self-deceivers do not simultaneously possess beliefs whose propositional contents are mutually contradictory ("conflicting beliefs," for short). This leaves it open, of course, that some self-deceivers do possess such beliefs. A familiar defense of the claim that the self-deceived simultaneously possess conflicting beliefs proceeds from the contention that they behave in conflicting ways. For example, it is alleged that although self-deceivers like Sam sincerely assure their friends that their spouses are faithful, they normally treat their spouses in ways manifesting distrust. This is an empirical matter on which I cannot pronounce. But suppose, for the sake of argument, that the empirical claim is true. Even then, we would lack sufficient grounds for holding that, in addition to believing that their spouses are not having affairs, these self-deceivers also believe, simultaneously, that their spouses are so engaged. After all, the supposed empirical fact can be accounted for on the alternative hypothesis that, while believing that their spouses are faithful, these self-deceivers also believe that there is a significant chance they are wrong about this. The mere suspicion that one's spouse is having an affair does not amount to a belief that he or she is so involved. And one may entertain suspicions that p while believing that ~p.[19]

That said, it should be noted that some psychologists have offered alleged empirical demonstrations of self-deception, on a conception of the phenomenon requiring that self-deceivers (at some point) simultaneously believe that p and believe that ~p.[20] A brief look at some of this work will prove instructive.

Ruben Gur and Harold Sackeim propose the following statement of "necessary and sufficient" conditions for self-deception:

1. The individual holds two contradictory beliefs (p and not-p).

2. These two contradictory beliefs are held simultaneously.

3. The individual is not aware of holding one of the beliefs (p or not-p).

4. The act that determines which belief is and which belief is not subject to awareness is a motivated act. (Sackeim & Gur 1978, p. 150; cf. Gur & Sackeim 1979; Sackeim & Gur 1985)
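
Rendered in the same illustrative notation used earlier (again my gloss, not the authors' own formalism), the Sackeim-Gur conception requires, at some one time:

\[
\mathrm{Bel}_S(p) \;\wedge\; \mathrm{Bel}_S(\neg p) \;\wedge\; \neg\mathrm{Aware}_S\bigl(\mathrm{Bel}_S(p)\bigr),
\]

where condition 3 permits either belief to be the one that escapes awareness, and condition 4 adds that this selective unawareness is motivated. The contrast with the sufficient conditions offered in Section 3 is plain: the first two conjuncts (the "dual belief" requirement) have no counterpart there.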

Their evidence for the occurrence of self-deception, thus defined, is provided by voice-recognition studies. In one type of experiment, subjects who wrongly state that a tape-recorded voice is not their own nevertheless show physiological responses (e.g., galvanic skin responses) that are correlated with voice recognition. "The self-report of the subject is used to determine that one particular belief is held," while "behavioral indices, measured while the self-report is made, are used to indicate whether a contradictory belief is also held" (Sackeim & Gur 1978, p. 173).

It is unclear, however, that the physiological responses are demonstrative of belief (Mele 1987b, p. 6).[21] In addition to believing that the voice is not their own (assuming the reports are sincere), do the subjects also believe that it is their own, or do they merely exhibit physiological responses that often accompany the belief that one is hearing one's own voice? Perhaps there is only a sub-doxastic (from 'doxa': belief) sensitivity in these cases. The threshold for physiological reaction to one's own voice may be lower than that for cognition (including unconscious belief) that the voice is one's own. Further, another team of psychologists (Douglas & Gibbins 1983; cf. Gibbins & Douglas 1985) obtained similar results for subjects' reactions to voices of acquaintances. Thus, even if the physiological responses were indicative of belief, they would not establish that subjects hold conflicting beliefs. Perhaps subjects believe that the voice is not their own while also "believing" that it is a familiar voice.

George Quattrone and Amos Tversky, in an elegant study (1984), argue for the reality of self-deception satisfying Sackeim and Gur's conditions. The study offers considerable evidence that subjects required on two different occasions "to submerge their forearm into a chest of circulating cold water until they could no longer tolerate it" tried to shift their tolerance on the second trial, after being informed that increased tolerance of pain (or decreased tolerance, in another sub-group) indicated a healthy heart.[22] Most subjects denied having tried to do this; and Quattrone and Tversky argue that many of their subjects believed that they did not try to shift their tolerance while also believing that they did try to shift it. They argue, as well, that these subjects were unaware of holding the latter belief, the "lack of awareness" being explained by their "desire to accept the diagnosis implied by their behavior" (p. 239).

Grant that many of the subjects tried to shift their tolerance in the second trial and that their attempts were motivated. Grant, as well, that most of the "deniers" sincerely denied having tried to do this. Even on the supposition that the deniers were aware of their motivation to shift their tolerance, does it follow that, in addition to believing that they did not "purposefully engage in the behavior to make a favorable diagnosis," these subjects also believed that they did do this, as Quattrone and Tversky claim? Does anything block the supposition that the deniers were effectively motivated to shift their tolerance without believing, at any level, that this is what they were doing? (My use of "without believing, at any level, that [p]" is elliptical for "without believing that p while being aware of holding the belief and without believing that p while not being aware of holding the belief.")

The study does not offer any direct evidence that the sincere deniers believed themselves to be trying to shift their tolerance. Nor is the assumption that they believed this required to explain their behavior. (The required belief for the purpose of behavior-explanation is a belief to the effect that a suitable change in one's tolerance on the second trial would constitute evidence of a healthy heart.) From the assumptions (1) that some motivation M that agents have for doing something A results in their doing A and (2) that they are aware that they have this motivation for doing A, it does not follow that they believe, consciously or otherwise, that they are doing A (in this case, purposely shifting their tolerance).[23] Nor, a fortiori, does it follow that they believe, consciously or otherwise, that they are doing A for reasons having to do with M. They may falsely believe that M has no influence whatever on their behavior, while not possessing the contrary belief.

The following case illustrates the latter point. Ann, who consciously desires her parents' love, believes they would love her if she were a successful lawyer. Consequently, she enrolls in law school. But Ann does not believe, at any level, that her desire for her parents' love is in any way responsible for her decision to enroll. She believes she is enrolling solely because of an independent desire to become a lawyer. Of course, I have simply stipulated that Ann lacks the belief in question. But my point is that this stipulation does not render the scenario incoherent. My claim about the sincere deniers in Quattrone and Tversky's study is that, similarly, there is no explanatory need to suppose they believe, at any level, that they are trying to shift their tolerance for diagnostic purposes, or even believe that they are trying to shift their tolerance at all. These subjects are motivated to generate favorable diagnostic evidence and they believe (to some degree) that a suitable change in their tolerance on the second trial would constitute such evidence. But the motivation and belief can result in purposeful action independently of their believing, consciously or otherwise, that they are "purposefully engaged in the behavior," or purposefully engaged in it "to make a favorable diagnosis."[24]

As Quattrone and Tversky's study indicates, people sometimes do not consciously recognize why they are doing what they are doing (e.g., why they are now reporting a certain pain-rating). Given that an unconscious recognition or belief that they are "purposefully engaged in the behavior," or purposefully engaged in it "to make a favorable diagnosis," in no way helps to account for what transpires in the case of the sincere deniers, why suppose that such recognition or belief is present? If one thought that normal adult human beings always recognize--at least at some level--what is motivating them to act as they are, one would opt for Quattrone and Tversky's dual belief hypothesis about the sincere deniers. But Quattrone and Tversky offer no defense of the general thesis just mentioned. In light of their results, a convincing defense of that thesis would demonstrate that whenever such adults do not consciously recognize what they are up to, they nevertheless correctly believe that they are up to x, albeit without being aware that they believe this. That is a tall order.

Quattrone and Tversky suspect that (many of) the sincere deniers are self-deceived in believing that they did not try to shift their tolerance. They adopt Sackeim and Gur's analysis of self-deception (1984, p. 239) and interpret their results accordingly. However, an interpretation of their data that avoids the dual belief assumption just criticized allows for self-deception on a less demanding conception. One can hold (a) that sincere deniers, due to a desire to live a long, healthy life, were motivated to believe that they had a healthy heart; (b) that this motivation (in conjunction with a belief that an upward/downward shift in tolerance would constitute evidence for the favored proposition) led them to try to shift their tolerance; and (c) that this motivation also led them to believe that they were not purposely shifting their tolerance (and not to believe the opposite). Their motivated false beliefs that they were not trying to alter their displayed tolerance can count as beliefs that they are self-deceived in holding without their also believing that they were attempting to do this.[25]

How did the subjects' motivation lead them to hold the false belief at issue? Quattrone and Tversky offer a plausible suggestion (p. 243): "The physiological mechanism of pain may have facilitated self-deception in this experiment. Most people believe that heart responses and pain thresholds are ordinarily not under an individual's voluntary control. This widespread belief would protect the assertion that the shift could not have been on purpose, for how does one 'pull the strings'?" And notice that a belief that one did not try to alter the amount of time one left one's hand in the water before reporting a pain-rating of "intolerable," one based (in part) upon a belief about ordinary uncontrollability of "heart responses and pain thresholds," need not be completely cold or unmotivated. Some subjects' motivation might render the "uncontrollability" belief very salient, e.g., while also drawing attention away from internal cues that they were trying to shift their tolerance, including the intensity of the pain.

Like Quattrone and Tversky, biologist Robert Trivers (1985, pp. 416-17) endorses Gur and Sackeim's definition of self-deception. Trivers maintains that self-deception has "evolved . . . because natural selection favors ever subtler ways of deceiving others" (p. 282, cf. pp. 415-20). We recognize that "shifty eyes, sweaty palms, and croaky voices may indicate the stress that accompanies conscious knowledge of attempted deception. By becoming unconscious of its deception, the deceiver hides these signs from the observer. He or she can lie without the nervousness that accompanies deception" (pp. 415-16). Trivers's thesis cannot adequately be assessed here; but the point should be made that the thesis in no way depends for its plausibility upon self-deception's requiring the presence of conflicting beliefs. Self-deception that satisfies the set of sufficient conditions I offered without satisfying the "dual belief" requirement is no less effective a tool for deceiving others. Trivers's proposal hinges on the idea that agents who do not consciously believe the truth (p) have an advantage over agents who do in getting others to believe the pertinent falsehood (~p): consciousness of the truth tends to manifest itself in ways that tip one's hand. But notice that an unconscious belief that p provides no help in this connection. Indeed, such a belief might generate tell-tale physiological signs of deception (recall the physiological manifestations of the alleged unconscious beliefs in Gur and Sackeim's studies). If unconscious true beliefs would make self-deceivers less subtle interpersonal deceivers than they would be without these beliefs, and if self-deception evolved because natural selection favors subtlety in the deception of others, better that it evolve on my model than on the "dual belief" model Trivers accepts.

In criticizing attempted empirical demonstrations of the existence of self-deception on Sackeim & Gur's model without producing empirical evidence that the subjects do not have "two contradictory beliefs," have I been unfair to the researchers? Recall the dialectical situation. The researchers claim that they have demonstrated the existence of self-deception on the model at issue. I have shown that they have not demonstrated this. The tests they employ for the existence of "two contradictory beliefs" in their subjects are, for the reasons offered, inadequate. I have no wish to claim that it is impossible for an agent to believe that p while also believing that ~p.[26] My claim, to be substantiated further, is that there is no explanatory need to postulate such beliefs either in familiar cases of self-deception or in the alleged cases cited by these researchers and that plausible alternative explanations of the data may be generated by appealing to mechanisms and processes that are relatively well understood.
 

5. The Dynamic Puzzle

The central challenge posed by the dynamic puzzle sketched in Section 2 calls for an explanation of the alleged occurrence of garden-variety instances of self-deception. If a prospective self-deceiver, S, has no strategy, how can S succeed? And if S does have a strategy, how can S's attempt to carry it out fail to be self-undermining in garden-variety cases?

It may be granted that self-deception typically is strategic at least in the following sense: when people deceive themselves they at least normally do so by engaging in potentially self-deceptive behavior, including cognitive behavior of the kinds catalogued in Section 3. Behavior of these kinds can be counted, in a broad sense of the term, as strategic, and the behavioral types may be viewed as strategies of self-deception. Such strategies divide broadly into two kinds, depending on their locus of operation. Internal-biasing strategies feature the manipulation of data that one already has. Input-control strategies feature one's controlling (to some degree) which data one acquires.[27] There are also mixed strategies, involving both internal biasing and input control.

Another set of distinctions will prove useful. Regarding cognitive activities that contribute to motivationally biased belief, there are significant differences among (1) unintentional activities (e.g., unintentionally focusing on data of a certain kind), (2) intentional activities (e.g., intentionally focusing on data of a certain kind), and (3) intentional activities engaged in with the intention of deceiving oneself (e.g., intentionally focusing on data of a certain kind with the intention of deceiving oneself into believing that p). Many skeptical worries about the reality of self-deception are motivated partly by the assumption that 3 is characteristic of self-deception.

An important difference between 2 and 3 merits emphasis. Imagine a twelve-year-old, Beth, whose father died some months ago. Beth may find it comforting to reflect on pleasant memories of playing happily with her father, to look at family photographs of such scenes, and the like. Similarly, she may find it unpleasant to reflect on memories of her father leaving her behind to play ball with her brothers, as he frequently did. From time to time, she may intentionally focus her attention on the pleasant memories, intentionally linger over the pictures, and intentionally turn her attention away from memories of being left behind. As a consequence of such intentional activities, she may acquire a false, unwarranted belief that her father cared more deeply for her than for anyone else. Although her intentional cognitive activities may be explained, in part, by the motivational attractiveness of the hypothesis that he loved her most, those activities need not also be explained by a desire--much less an intention--to deceive herself into believing this hypothesis, or to cause herself to believe this. Intentional cognitive activities that contribute even in a relatively straightforward way to self-deception need not be guided by an intention to deceive oneself.[28]

For the record, I have defended a detailed account of intentions elsewhere (Mele 1992a, chs. 7-13). Intentions, as I view them, are executive attitudes toward plans, in a technical sense of "plan" that, in the limiting case, treats an agent's mental representation of a prospective "basic" action like raising his arm as the plan-component of an intention to raise his arm. However, readers need not accept my view of intention to be persuaded by the arguments advanced here. It is enough that they understand intentions as belonging no less to the category "mental state" than beliefs and desires do and that they view intending to do something, A, as involving being settled (not necessarily irrevocably) upon A-ing, or upon trying to A.[29] Notice that one can have a desire (or motivation) to A without being at all settled upon A-ing. Desiring to take my daughter to the midnight dance while also desiring to take my son to the midnight movie, I need to make up my mind about what to do. But intending to take my daughter to the dance (and to make it up to my son later), my mind is made up. The "settledness" aspect of intentions is central to their "executive" nature, an issue examined in Mele 1992a.[30]

My resolution of the dynamic puzzle about self-deception is implicit in earlier sections. Such strategies of self-deception as positive and negative misinterpretation, selective attending, and selective evidence-gathering do not depend for their effectiveness upon agents' employing them with the intention of deceiving themselves. Even the operation of cold mechanisms whose functioning one does not direct can bias one's beliefs. When, under the right conditions, such mechanisms are primed by motivation and issue in motivated false beliefs, we have self-deception. Again, motivation can affect, among other things, the hypotheses that occur to one and the salience of those hypotheses and of data. For example, Don's motivational condition favors the hypothesis that his paper was wrongly rejected; and Sid's favors hypotheses about Roz's behavior that are consistent with her being as fond of him as he is of her. In "testing" these hypotheses, these agents may accentuate supporting evidence and downplay, or even positively misinterpret, contrary data without intending to do that, and without intending to deceive themselves. Strategies of self-deception, in garden-variety cases of this kind, need not be rendered ineffective by agents' intentionally exercising them with the knowledge of what they are up to; for, in garden-variety cases, self-deceivers need not intend to deceive themselves, strategically or otherwise. Since we can understand how causal processes that issue in garden-variety instances of self-deception succeed without the agent's intentionally orchestrating the process, we avoid the other horn of the puzzle, as well.
 

6. Intentionally Deceiving Oneself

I have criticized the assumptions that self-deception entails intentionally deceiving oneself and that it requires simultaneously possessing beliefs whose propositional contents are mutually contradictory; and I have tried to show how occurrences of garden-variety self-deception may be explained. I have not claimed that believing that p while also believing that ~p is conceptually or psychologically impossible. But I have not encountered a compelling illustration of that phenomenon in a case of self-deception. Some might suggest that illustrations may be found in the literature on multiple personality. However, that phenomenon, if it is a genuine one, raises thorny questions about the self in self-deception. In such alleged cases, do individuals deceive themselves, with the result that they believe that p while also believing that ~p? Or do we rather have interpersonal deception--or at any rate something more closely resembling that than self-deception?[31] These are questions for another occasion. They take us far from garden-variety self-deception.

Intentionally deceiving oneself, in contrast, is unproblematically possible. Hypothetical illustrations are easily constructed. It is worth noting, however, that the unproblematic cases are remote from garden-variety self-deception.

Here is an illustration. Ike, a forgetful prankster skilled at imitating others' handwriting, has intentionally deceived friends by secretly making false entries in their diaries. Ike has just decided to deceive himself by making a false entry in his own diary. Cognizant of his forgetfulness, he writes under today's date, "I was particularly brilliant in class today," and counts on eventually forgetting that what he wrote is false. Weeks later, when reviewing his diary, Ike reads this sentence and acquires the belief that he was brilliant in class on the specified day. If Ike intentionally deceived others by making false entries in their diaries, what is to prevent us from justifiably holding that he intentionally deceived himself in the imagined case? He intended to bring it about that he would believe that p, which he knew at the time to be false; and he executed that intention without a hitch, causing himself to believe, eventually, that p. Again, to deceive, on one standard definition, is to cause to believe what is false; and Ike's causing himself to believe the relevant falsehood is no less intentional than his causing his friends to believe falsehoods (by doctoring their diaries).[32]

Ike's case undoubtedly strikes readers as markedly dissimilar to garden-variety examples of self-deception--for instance, the case of the woman who falsely believes that her husband is not having an affair (or that she is not seriously ill, or that her child is not using drugs), in the face of strong evidence to the contrary. Why is that? Readers convinced that self-deception does not require the simultaneous presence of beliefs whose propositional contents are mutually contradictory will not seek an answer in the absence of such beliefs in Ike. The most obvious difference between Ike's case and garden-variety examples of self-deception lies in the straightforwardly intentional nature of Ike's project. Ike consciously sets out to deceive himself and intentionally and consciously executes his plan for so doing; ordinary self-deceivers behave quite differently.[33]

This indicates that in attempting to construct hypothetical cases that are, at once, paradigmatic cases of self-deception and cases of agents intentionally deceiving themselves, one must imagine that the agents' intentions to deceive themselves are somehow hidden from them. I do not wish to claim that hidden intentions are impossible. Our ordinary concept of intention leaves room, e.g., for "Freudian" intentions, hidden in some mental partition. And if there is conceptual space for hidden intentions that play a role in the etiology of behavior, there is conceptual space for hidden intentions to deceive ourselves, intentions that may influence our treatment of data.

As I see it, the claim that intentions to deceive ourselves, or intentions to produce or sustain certain beliefs in ourselves (normally, intentions hidden from us), are at work in ordinary self-deception is unwarranted, but not incoherent.[34] Without denying that "hidden-intention" cases of self-deception are possible, a theorist should ask what evidence there may be (in the real world) that an intention to deceive oneself is at work in a paradigmatic case of self-deception. Are there data that can only--or best--be explained on the hypothesis that such an intention is operative?

Evidence that agents desirous of its being the case that p eventually come to believe that p owing to a biased treatment of data is sometimes regarded as supporting the claim that these agents intended to deceive themselves. The biasing apparently is sometimes relatively sophisticated purposeful behavior, and one may assume that such behavior must be guided by an intention. However, as I have argued, the sophisticated behavior in garden-variety examples of self-deception (e.g., Sam's case in Sec. 2) may be accounted for on a less demanding hypothesis that does not require the agents to possess relevant intentions: e.g., intentions to deceive themselves into believing that p, or to cause themselves to believe that p, or to promote their peace of mind by producing in themselves the belief that p. Once again, motivational states can prompt biased cognition of the sorts common in self-deception without the assistance of such intentions. In Sam's case, a powerful motivational attraction to the hypothesis that Sally is not having an affair--in the absence both of a strong desire to ascertain the truth of the matter and of conclusive evidence of Sally's infidelity--may prompt the line of reasoning described earlier and the other belief-protecting behavior. An explicit, or consciously held, intention to deceive himself in these ways into holding on to his belief in Sally's fidelity would undermine the project; and a hidden intention to deceive is not required to produce these activities.

Even if this is granted, it may be held that the supposition that such intentions always or typically are at work in cases of self-deception is required to explain why a motivated biasing of data occurs in some situations but not in other very similar situations (Talbott 1995). Return to Don, who is self-deceived in believing that his article was wrongly rejected. At some point, while revising his article, Don may have wanted it to be true that the paper was ready for publication, that no further work was necessary. Given the backlog of work on his desk, he may have wanted that just as strongly as he later wanted it to be true that the paper was wrongly rejected. Further, Don's evidential situation at these two times may have been very similar: e.g., his evidence that the paper was ready may have been no weaker than his later evidence that the paper was wrongly rejected, and his evidence that the paper was not ready may have been no stronger than his later evidence that the paper was rightly rejected. Still, we may suppose, although Don deceived himself into believing that the article was wrongly rejected, he did not deceive himself into believing that the article was ready for publication: he kept working on it--searching for new objections to rebut, clarifying his prose, and so on--for another week. To account for the difference between the two situations, it may be claimed, we must suppose that in one situation Don decided to deceive himself (without being aware of this) whereas in the other he did not so decide; and in deciding to do something, A, one forms an intention to A. If the execution of self-deceptive biasing strategies were a nonintended consequence of being in a motivational/evidential condition of a certain kind, the argument continues, then Don would have engaged in such strategies either on both occasions or on neither: again, to account for the difference in his cognitive behavior on the earlier and later occasions, we need to suppose that an intention to deceive himself was at work in one case but not in the other.

This argument is flawed. If on one of the two occasions Don decides (hence, intends) to deceive himself whereas on the other he does not, then, presumably, there is some difference in the two situations that accounts for this difference. But if there is a difference, D, in the two situations aside from the intention-difference that the argument alleges, an argument is needed for the claim that D itself cannot account for Don's self-deceptively biasing data in one situation and his not so doing in the other. A difference in intention across the situations (presence in one, absence in the other) itself requires some further difference in the situations to account for it. Why suppose, then, that no difference in the situations can account for Don's biasing data in one and not in the other without appeal to his intending to deceive himself in one but not in the other? Why should we think that intention figures in the explanation of the primary difference to be explained? Why can the primary difference not be explained instead, e.g., by Don's having a strong desire to avoid mistakenly believing the paper to be ready (or to avoid submitting a paper that is not yet ready) and his having at most a weak desire later to avoid mistakenly believing that the paper was wrongly rejected? Such a desire, in the former case, may block any tendency to bias data in a way supporting the hypothesis that the paper is ready for publication.[35]

At this point, proponents of the thesis that self-deception is intentional deception apparently need to rely on claims about the explanatory place of intention in self-deception itself, as opposed to its place in explaining differences across situations. Claims of that sort have already been evaluated here; and they have been found wanting.

Advocates of the view that self-deception is essentially (or normally) intentional may seek support in a distinction between self-deception and wishful thinking. They may claim that although wishful thinking does not require an intention to deceive oneself, self-deception differs from it precisely in being intentional. This may be interpreted either as stipulative linguistic legislation or as a substantive claim. On the former reading, a theorist is simply expressing a decision to reserve the expression 'self-deception' for an actual or hypothetical phenomenon that requires an intention to deceive oneself or an intention to produce in oneself a certain belief. Such a theorist may proceed to inquire about the possibility of the phenomenon and about how occurrences of self-deception, in the stipulated sense, may be explained. On the latter reading, a theorist is advancing a substantive conceptual thesis: the thesis that the concepts (or our ordinary concepts) of wishful thinking and of self-deception differ along the lines mentioned.

I have already criticized the conceptual thesis about self-deception. A comment on wishful thinking is in order. If wishful thinking is not wishful believing, one difference between wishfully thinking that p and being self-deceived in believing that p is obvious. If, however, wishful thinking is wishful believing--in particular, motivationally biased, false believing--then, assuming that it does not overlap with self-deception (an assumption challenged in Mele 1987a, p. 135), the difference may lie in the relative strength of relevant evidence against the believed proposition: wishful thinkers may encounter weaker counter-evidence than self-deceivers (Szabados 1985, pp. 148-49). This difference requires a difference in intention only if the relative strength of the evidence against the propositions that self-deceivers believe is such as to require that their acquiring or retaining those beliefs depends upon their intending to do so, or upon their intending to deceive themselves. And this thesis about relative evidential strength, I have argued, is false.

Consciously executing an intention to deceive oneself is possible, as in Ike's case; but such cases are remote from paradigmatic examples of self-deception. Executing a "hidden" intention to deceive oneself is possible, too; but, as I have argued, there is no good reason to maintain that such intentions are at work in paradigmatic self-deception. Part of what I have argued, in effect, is that some theorists--philosophers and psychologists alike--have made self-deception more theoretically perplexing than it actually is by imposing upon the phenomena a problematic conception of self-deception.
 

6. Conclusion

Philosophers' conclusions tend to be terse; psychologists favor detailed summaries. Here I seek a mean. My aim in this paper has been to clarify the nature and relatively proximate etiology of self-deception. In sections 1-4, I resolved a pair of much-discussed puzzles about self-deception, advanced a plausible set of sufficient conditions for self-deception, and criticized empirical studies that allegedly demonstrate the existence of self-deception on a strict interpersonal model. In section 5, I argued that intentionally deceiving oneself is unproblematically possible (as in Ike's case), but that representative unproblematic cases are remote from garden-variety instances of self-deception. Conceptual work on self-deception guided by the thought that the phenomenon must be largely isomorphic with stereotypical interpersonal deception has generated interesting conceptual puzzles. But, I have argued, it also has led us away from a proper understanding of self-deception. Stereotypical interpersonal deception is intentional deception; normal self-deception, I have argued, probably is not. If it were intentional, "hidden" intentions would be at work; and we lack good grounds for holding that such intentions are operative in self-deception. Further, in stereotypical interpersonal deception, there is some time at which the deceiver believes that ~p and the deceived believes that p; but there is no good reason to hold, I have argued, that self-deceivers simultaneously believe that ~p and believe that p. Recognizing these points, we profitably seek an explanatory model for self-deception that diverges from models for the explanation of intentional conduct. I have not produced a full-blown model for this; but, unless I am deceived, I have pointed the way toward such a model--a model informed by empirical work on motivationally biased belief and by a proper appreciation of the point that motivated behavior is not coextensive with intended behavior.

I conclude with a challenge for readers inclined to think that there are cases of self-deception that fit the strict interpersonal model--in particular, cases in which the self-deceiver simultaneously believes that p and believes that ~p. The challenge is simply stated: Provide convincing evidence of the existence of such self-deception. The most influential empirical work on the topic has not met the challenge, as I have shown. Perhaps some readers can do better. However, if I am right, such cases will be exceptional instances of self-deception--not the norm.[36]
 

ACKNOWLEDGMENT

Parts of this article derive from my "Two Paradoxes of Self-Deception" (presented at a 1993 conference on self-deception at Stanford). Drafts were presented at the University of Alabama, Université du Québec à Montréal, and Mount Holyoke College, where I received useful feedback. Initial work on this article occurred during my tenure of a 1992/93 NEH Fellowship for College Teachers, a 1992/93 Fellowship at the National Humanities Center, and an NEH grant for participation in a 1993 Summer Seminar, "Intention," at Stanford (Michael Bratman, director). For helpful written comments, I am grateful to George Ainslie, Kent Bach, David Bersoff, John Furedy, Stevan Harnad, Harold Sackheim, and BBS's anonymous referees.

REFERENCES

Ainslie, G. (1992) Picoeconomics. Cambridge University Press.

Audi, R. (1989) Self-deception and practical reasoning. Canadian Journal of Philosophy 19:247-66.

Audi, R. (1985) Self-deception and rationality. In: Self-deception and self-understanding, ed. M. Martin. University of Kansas Press.

Bach, K. (1981) An analysis of self-deception. Philosophy and Phenomenological Research 41:351-70.

Baron, J. (1988) Thinking and deciding. Cambridge University Press.

Baumeister, R. & Cairns, K. (1992) Repression and self-presentation: When audiences interfere with self-deceptive strategies. Journal of Personality and Social Psychology 62:851-62.

Davidson, D. (1985) Deception and division. In: Actions and events, ed. E. LePore & B. McLaughlin. Basil Blackwell.

Davidson, D. (1982) Paradoxes of irrationality. In: Philosophical essays on Freud, ed. R. Wollheim & J. Hopkins. Cambridge University Press.

Douglas, W. & Gibbins, K. (1983) Inadequacy of voice recognition as a demonstration of self-deception. Journal of Personality and Social Psychology 44:589-92.

Festinger, L. (1964) Conflict, decision, and dissonance. Stanford University Press.

Festinger, L. (1957) A theory of cognitive dissonance. Stanford University Press.

Fingarette, H. (1969) Self-deception. Routledge & Kegan Paul.

Frey, D. (1986) Recent research on selective exposure to information. In: Advances in experimental social psychology, vol. 19, ed. L. Berkowitz. Academic Press.

Gergen, K. (1985) The ethnopsychology of self-deception. In: Self-deception and self-understanding, ed. M. Martin. University of Kansas Press.

Gibbins, K. & Douglas, W. (1985) Voice recognition and self-deception: A reply to Sackheim and Gur. Journal of Personality and Social Psychology 48:1369-72.

Gilovich, T. (1991) How we know what isn't so. Macmillan.

Greenwald, A. (1988) Self-knowledge and self-deception. In: Self-deception: An adaptive mechanism? ed. J. Lockard & D. Paulhus. Prentice-Hall.

Gur, R. & Sackheim, H. (1979) Self-deception: A concept in search of a phenomenon. Journal of Personality and Social Psychology 37:147-69.

Haight, M. (1980) A study of self-deception. Harvester Press.

Higgins, R., Snyder, C. & Berglas, S. (1990) Self-handicapping: The paradox that isn't. Plenum Press.

Johnston, M. (1988) Self-deception and the nature of mind. In: Perspectives on self-deception, ed. B. McLaughlin & A. Rorty. University of California Press.

Kipp, D. (1980) On self-deception. Philosophical Quarterly 30:305-17.

Kunda, Z. (1990) The case for motivated reasoning. Psychological Bulletin 108:480-98.

Kunda, Z. (1987) Motivated inference: Self-serving generation and evaluation of causal theories. Journal of Personality and Social Psychology 53:636-47.

Lockard, J. & Paulhus, D. (1988) Self-deception: An adaptive mechanism? Prentice-Hall.

Martin, M. (1985) Self-deception and self-understanding. University of Kansas Press.

McLaughlin, B. (1988) Exploring the possibility of self-deception in belief. In: Perspectives on self-deception, ed. B. McLaughlin & A. Rorty. University of California Press.

Mele, A. (1995) Autonomous agents: From self-control to autonomy. Oxford University Press.

Mele, A. (1992a) Springs of action. Oxford University Press.

Mele, A. (1992b) Recent work on intentional action. American Philosophical Quarterly 29:199-217.

Mele, A. (1987a) Irrationality. Oxford University Press.

Mele, A. (1987b) Recent work on self-deception. American Philosophical Quarterly 24:1-17.

Mele, A. (1983) Self-deception. Philosophical Quarterly 33:365-77.

Mele, A. & Moser, P. (1994) Intentional action. Noûs 28:39-68.

Nisbett, R. & Ross, L. (1980) Human inference: Strategies and shortcomings of social judgment. Prentice-Hall.

Pears, D. (1991) Self-deceptive belief-formation. Synthese 89:393-405.

Pears, D. (1984) Motivated irrationality. Oxford University Press.

Peele, S. (1989) Diseasing of America: Addiction treatment out of control. Lexington Books.

Plato (1953) Cratylus. In: The dialogues of Plato, trans. B. Jowett. Clarendon Press.

Quattrone, G. & Tversky, A. (1984) Causal versus diagnostic contingencies: On self-deception and on the voter's illusion. Journal of Personality and Social Psychology 46:237-48.

Rorty, A. (1980) Self-deception, akrasia, and irrationality. Social Science Information 19:905-22.

Sackheim, H. (1988) Self-deception: A synthesis. In: Self-deception: An adaptive mechanism? ed. J. Lockard & D. Paulhus. Prentice-Hall.

Sackheim, H. & Gur, R. (1985) Voice recognition and the ontological status of self-deception. Journal of Personality and Social Psychology 48:1365-68.

Sackheim, H. & Gur, R. (1978) Self-deception, self-confrontation, and consciousness. In: Consciousness and self-regulation, vol. 2, ed. G. Schwartz & D. Shapiro. Plenum Press.

Silver, M., Sabini, J. & Miceli, M. (1989) On knowing self-deception. Journal for the Theory of Social Behaviour 19:213-27.

Sorensen, R. (1985) Self-deception and scattered events. Mind 94:64-69.

Szabados, B. (1985) The self, its passions, and self-deception. In: Self-deception and self-understanding, ed. M. Martin. University of Kansas Press.

Talbott, W. (1995) Intentional self-deception in a single, coherent self. Philosophy and Phenomenological Research 55:27-74.

Taylor, S. (1989) Positive illusions. Basic Books.

Taylor, S. & Fiske, S. (1978) Salience, attention and attribution: Top of the head phenomena. In: Advances in experimental social psychology, vol. 11, ed. L. Berkowitz. Academic Press.

Taylor, S. & Fiske, S. (1975) Point of view and perceptions of causality. Journal of Personality and Social Psychology 32:439-45.

Taylor, S. & Thompson, S. (1982) Stalking the elusive "vividness" effect. Psychological Review 89:155-81.

Trivers, R. (1985) Social evolution. Benjamin/Cummings.

Tversky, A. & Kahneman, D. (1973) Availability: A heuristic for judging frequency and probability. Cognitive Psychology 5:207-32.

Weiskrantz, L. (1986) Blindsight: A case study and implications. Oxford University Press.

NOTES


[1]. I have addressed many of these questions elsewhere. Mele 1987a argues that proper explanations of self-deception and of irrational behavior manifesting akrasia or "weakness of will" are importantly similar and generate serious problems for a standard philosophical approach to explaining purposive behavior. Mele 1992a develops an account of the psychological springs of intentional action that illuminates the etiology of motivated rational and irrational behavior alike. Mele 1995 defends a view of self-control and its opposite that applies not only to overt action and to belief but also to such things as higher-order reflection on personal values and principles; this book also displays the place of self-control in individual autonomy. Several referees noted connections between ideas explored in this article and these issues; some expressed a desire that I explicitly address them here. Although I take some steps in that direction, my primary concern is a proper understanding of self-deception itself. Given space constraints, I set aside questions about the utility of self-deception; but if my arguments succeed, they illuminate the phenomenon whose utility is at issue. I also lack space to examine particular philosophical works on self-deception. On ground-breaking work by Audi (e.g., 1985), Bach (1981), Fingarette (1969), Rorty (e.g., 1980), and others, see Mele 1987b; on important work by Davidson (1982) and Pears (1984), see Mele 1987a, ch. 10.

[2]. On the occasional rationality of self-deception, see Audi 1985, 1989, and Baron 1988, p. 40. On the question whether self-deception is an adaptive mechanism, see Taylor 1989 and essays in Lockard & Paulhus 1988.

[3]. For example, subjects instructed to conduct "symmetrical memory searches" are less likely than others to fall prey to the confirmation bias (see Sec. 2), and subjects' confidence in their responses to "knowledge questions" is reduced when they are invited to provide grounds for doubting the correctness of those responses (Kunda 1990, pp. 494-95). Presumably, people aware of the confirmation bias may reduce biased thinking in themselves by giving themselves the former instruction; and, fortunately, we do sometimes remind ourselves to consider both the pros and the cons before making up our minds about the truth of important propositions--even when we are tempted to do otherwise. For a review of the debate, see Kunda 1990. For a revolutionary view of the place of motivation in the etiology of beliefs, see Ainslie 1992.

[4]. Literature on the "paradoxes" of self-deception is reviewed in Mele 1987b.

[5]. One response is mental partitioning: the deceived part of the mind is unaware of what the deceiving part is up to. See Pears 1984 (cf. 1991) for a detailed response of this kind and Davidson 1985 (cf. 1982) for a more modest partitioning view. For criticism of some partitioning views of self-deception, see Johnston 1988 and Mele 1987a, ch. 10, 1987b, pp. 3-6.

[6]. This is not to say that self-deception is always "self-serving" in this way. See Mele 1987a, pp. 116-18; Pears 1984, pp. 42-44. Sometimes we deceive ourselves into believing that p is true even though we would like p to be false.

[7]. Regarding the effects of motivation on time spent reading threatening information, see Baumeister & Cairns 1992.

[8]. The following descriptions derive from Mele 1987a, pp. 144-45.

[9]. For a challenge to studies of the vividness effect, see Taylor & Thompson 1982. They contend that research on the issue has been flawed in various ways, but that studies conducted in "situations that reflect the informational competition found in everyday life" might "show the existence of a strong vividness effect" (pp. 178-79).

[10]. This theme is developed in Mele 1987a, ch. 10 in explaining the occurrence of self-deception. Kunda 1990 develops the same theme, paying particular attention to evidence that motivation sometimes primes the confirmation bias. Cf. Silver et al. 1989, p. 222.

[11]. For a motivational interpretation of the confirmation bias, see Frey 1986, pp. 70-74.

[12]. Cf. Mele 1987a, pp. 125-26. Also cf. Bach 1981, pp. 358-61 on "rationalization" and "evasion," Baron 1988, pp. 258 and 275-76 on positive and negative misinterpretation and "selective exposure," and Greenwald 1988 on various kinds of "avoidance." Again, I am not suggesting that, in all cases, agents who are self-deceived in believing that p desire that p (see n. 6). For other routes to self-deception, including what is sometimes called "immersion," see Mele 1987a, pp. 149-51, 157-58. On self-handicapping, another potential route to self-deception, see Higgins et al. 1990.

[13]. Literature on "selective exposure" is reviewed in Frey 1986. Frey defends the reality of motivated selective evidence-gathering, arguing that a host of data are best accommodated by a variant of Festinger's (1957, 1964) cognitive dissonance theory.

[14]. For references to work defending the view that self-deception typically is not intentional, see Mele 1987b, p. 11; also see Johnston 1988.

[15]. This is not to deny that self-deceivers sometimes believe that p while being aware that their evidence favors ~p. On such cases, see Mele 1987a, ch. 8 and pp. 135-36.

[16]. Condition 4 does not assert that the self-deceiver is aware of this.

[17]. On a relevant difference between being deceived in believing that p and being deceived into believing that p, see Mele 1987a, pp. 127-28.

[18]. Notice that not all instances of motivationally biased belief satisfy my set of sufficient conditions for self-deception. In some cases of such belief, what we believe happens to be true. Further, since we are imperfect assessors of data, we might fail to notice that our data provide greater warrant for p than for ~p and end up believing that p as a result of a motivationally biased treatment of data.

[19]. This is true, of course, on "degree-of-belief" conceptions of belief, as well.

[20]. Notice that simultaneously believing that p and believing that ~p--i.e., Bp & B~p--is distinguishable from believing the conjunction of the two propositions: B(p & ~p). We do not always put two and two together.

[21]. In a later paper, Sackheim grants this (1988, pp. 161-62).

[22]. The study is described and criticized in greater detail in Mele 1987a, pp. 152-58. Parts of this section are based on that discussion.

[23]. For supporting argumentation, see Mele 1987a, pp. 153-56.

[24]. As this implies, in challenging the claim that the sincere deniers have the belief at issue, I am not challenging the popular idea that attempts are explained at least partly in terms of pertinent beliefs and desires.

[25]. Obviously, whether the subjects satisfy the conditions offered in Section 2 as sufficient for self-deception depends on the relative strength of their evidence for the pertinent pair of propositions.

[26]. Locating such cases is not as easy as some might think. One reader appealed to blindsight. There is evidence that some people who believe themselves to be blind can see (e.g., Weiskrantz 1986). They perform much better (and in some cases, much worse) on certain tasks than they would if they were simply guessing, and steps are taken to ensure that they are not benefitting from any other sense. Suppose some sighted people in fact believe themselves to be blind. Do they also believe that they are not blind, or, e.g., that they see x? If it were true that all sighted people (even those who believe themselves to be blind) believe themselves to be sighted, the answer would be yes. But precisely the evidence for blindsight is evidence against the truth of this universal proposition. The evidence indicates that, under certain conditions, people may see without believing that they are seeing. The same reader appealed to a more mundane case of the following sort. Ann set her watch a few minutes ahead to promote punctuality. Weeks later, when we ask her for the time, Ann looks at her watch and reports what she sees, "11:10." We then ask whether her watch is accurate. If she recalls having set it ahead, she might sincerely reply, "No, it's fast; it's actually a little earlier than 11:10." Now, at time t, when Ann says "11:10," does she both believe that it is 11:10 and believe that it is not 11:10? There are various alternative possibilities. Perhaps, e.g., although she has not forgotten setting her watch ahead, her memory of so doing is not salient for her at t and she does not infer at t that it is not 11:10; or perhaps she has adopted the strategy of acting as if her watch is accurate and does not actually believe any of its readings. (Defending a promising answer to the following question is left as an exercise for the reader: What would constitute convincing evidence that, at t, Ann believes that it is 11:10 and believes that it is not 11:10?)

[27]. Pears identifies what I have called internal biasing and input-control strategies and treats "acting as if something were so in order to generate the belief that it is so" as a third strategy (1984, p. 61). I examine "acting as if" in Mele 1987a, pp. 149-51, 157-58.

[28]. For further discussion of the difference between 2 and 3 and of cases of self-deception in which agents intentionally selectively focus on data supportive of a preferred hypothesis (e.g.) without intending to deceive themselves, see Mele 1987a, pp. 146, 149-51.

[29]. Readers who hold that intending is a matter of degree should note that the same may be said about being settled upon doing something.

[30]. For criticism of opposing conceptions of intention in the psychological literature, see Mele 1992a, ch. 7. On connections between intention and intentional action, see Mele 1992a, 1992b, and Mele & Moser 1994.

[31]. Similar questions have been raised about partitioning hypotheses that fall short of postulating multiple personalities. For references, see Mele 1987b, p. 4; cf. Johnston 1988.

[32]. On "time-lag" scenarios of this general kind, see Davidson 1985, p. 145; McLaughlin 1988, pp. 31-33; Mele 1983, pp. 374-75, 1987a, pp. 132-34; Sackheim 1988, p. 156; Sorensen 1985.

[33]. Some readers may be attracted to the view that although Ike deceives himself, this is not self-deception at all (cf. Davidson 1985, p. 145; McLaughlin 1988). Imagine that Ike had been embarrassed by his performance in class that day and consciously viewed the remark as ironic when he wrote it. Imagine also that Ike strongly desires to see himself as exceptionally intelligent and that this desire helps to explain, in a way psychotherapy might reveal to Ike, his writing the sentence. If, in this scenario, Ike later came to believe that he was brilliant in class that day on the basis of a subsequent reading of his diary, would such readers be more inclined to view the case as one of self-deception?

[34]. Pears 1991 reacts to the charge of incoherence, responding to Johnston 1988.

[35]. Talbott suggests that there are different preference rankings in the two kinds of case. (The preferences need not be objects of awareness, of course.) In cases of self-deception, the agents' highest relevant preference is that they believe "that p is true, if p is true"; and their second-highest preference is that they believe "that p is true, if p is false": self-deceiving agents want to believe that p is true whether or not it is true. In the contrasting cases, agents have the same highest preference; but what is the self-deceiver's second-highest preference is, for these agents, the lowest: they have a higher-ranking preference "not to believe that p, if p is false." Suppose, for the sake of argument, that this diagnosis of the difference between the two kinds of case is correct. Why should we hold that in order to account for the target difference--namely, that in one case there is a motivated biasing of data and in the other there is not--we must suppose that an intention to deceive oneself (or to get oneself to believe that p) is at work in one case but not in the other? Given our understanding of various ways in which motivation can bias cognition in the absence of such an intention, we can understand how one preference ranking can do this while another does not. An agent with the second preference ranking may be strongly motivated to ascertain whether p is true or false; and that may block any tendency toward motivated biasing of relevant data. This would not be true of an agent with the first preference ranking.