
Why is self-deception necessary to better deceive others?


Robert Trivers argues that we deceive ourselves in order to better deceive others, in part because self-deception can reduce unconscious body signals of lying (von Hippel & Trivers, 2011).

On Wikipedia, this account is criticized for

… not being able to account for why evolutionary selection for lying would allow a body language that gives away lying to exist instead of simply selecting for lack of such signals. (Ekman, 2006; Damasio, Evans & Cruse, 2004)…

What are the main arguments against this criticism?

References

von Hippel, W., & Trivers, R. (2011). The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34(1), 1-56. https://doi.org/10.1017/S0140525X10001354

Ekman, P. (Ed.). (2006). Darwin and facial expression: A century of research in review. Ishk.

Damasio, A. R. (2004). William James and the modern neurobiology of emotion. In D. Evans & P. Cruse (Eds.), Emotion, evolution and rationality (pp. 3-14). Oxford University Press.


Lesson 1: Self-deception makes you think that the needs of other people aren’t very important, which makes you treat them like objects.

Imagine you’re sitting on a bus with an empty seat next to you. Are you carefully watching others around you, hoping that nobody takes the seat?

This is a form of self-deception in which you value your own comfort above that of others. And you do it all the time without realizing it.

Everybody wants, and even deserves, respect. Our entire society, including its laws and constitutions, is built on this principle. But when it comes to everyday interactions with others, it is easy to forget.

When you’re caught up in self-deception you can’t see clearly. You’re “in the box,” as the authors put it. You see others as mere objects instead of the living, breathing beings they are, which often means they don’t get the respect they deserve from you.

This is self-deception at its core. It’s the idea that you don’t see others as they really are but instead as you think they are. And most often, what you think of them rests on the false assumption that your needs are more important.

In other words, you frequently deceive yourself into thinking that others don’t really have needs at all. This severely limits your worldview: not only does it restrict the care that others get from you, it also hinders your own growth.


The Truth About Self-Deception

In theory, the one person we should never, ever lie to is ourselves. Surely lying to ourselves is counterproductive, like calmly and deliberately shooting yourself in the foot, or taking a hot toasting fork and plunging it into your eye?

But look around and it’s not hard to spot the tell-tale symptoms of self-deception in other people. So perhaps we are also deceiving ourselves in ways we can’t clearly perceive? But is that really possible and would we really believe the lies that we ‘told’ ourselves anyway? That’s what Quattrone & Tversky (1984) explored in a classic social psychology experiment published in the Journal of Personality and Social Psychology.

Lies, damn lies and psychologists

Any study of self-deception is going to involve a fair amount of bare-faced lying, and Quattrone & Tversky’s (1984) research was no different. They recruited 38 students who were told they were going to take part in a study about the “psychological and medical aspects of athletics”. Not true; in fact, the researchers were going to trick participants into thinking that how long they could submerge their arms in cold water was diagnostic of their health status, when really it showed just how ready people are to deceive themselves. This is how they did it.

The participants were first asked to plunge their arms into cold water for as long as they could. The water was pretty cold and people could only manage this for 30 or 40 seconds. Then participants were given some other tasks to do to make them think they really were involved in a study about athletics. They had a go on an exercise bike and were given a short lecture about life expectancy and how it related to the type of heart you have. They were told there were two types of heart:

  • Type I heart: associated with poorer health, shorter life expectancy and heart disease.
  • Type II heart: associated with better health, longer life expectancy and low risk of heart disease.

Half were told that people with Type II hearts (apparently the ‘better’ type) have increased tolerance to cold water after exercise; the other half were told that exercise decreases their tolerance to cold water. All of this was, of course, a lie, made up solely so participants would think that how long they could hold their arm under water was a measure of their health, with half thinking cold tolerance was a good sign and half thinking it was a bad sign.

Now time for the test: participants had another go at putting their arms into the cold water for as long as they could. The graph below shows the average results before and after all the blatant lying (in the name of science of course!):

The experimental manipulation had a strong effect. People who thought it was a sign of a healthy heart to hold their arms underwater for longer did just that, while those who believed the reverse all of a sudden couldn’t take the cold. That’s all well and good, but were these people really lying to themselves or just to the experimenters, and did they believe those lies?

Hook, line and sinker

After the arm-dunking each participant was asked whether they had intentionally changed the amount of time they held their arms underwater. Of the 38 participants, 29 denied it and 9 confessed, but not directly. Many of the 9 confessors claimed the water had changed temperature. It hadn’t, of course; this was just a way for people to justify their behaviour without directly facing their self-deception.

All the participants were then asked whether they believed they had a healthy heart or not. Of the 29 deniers, 60% believed they had the healthier type of heart. However, of the confessors, only 20% thought they had the healthier heart. What this suggests is that the deniers were more likely to be truly deceiving themselves, not just trying to cover up their deception. They really did think that the test was telling them they had a healthy heart. Meanwhile the confessors tried to tell a lie back to the experimenter (seems only fair!), but privately the majority acknowledged they were deceiving themselves.

This experiment is neat because it shows the different gradations of self-deception, all the way up to its purest form, in which people manage to trick themselves hook, line and sinker. At this level people think and act as though their incorrect belief is completely true, totally disregarding any incoming hints from reality.

So what this study suggests is that for many people self-deception is as easy as pie. Not only will many people happily lie to themselves if given a reason, but they will only look for evidence that confirms their comforting self-deception, and then totally believe in the lies they are telling themselves.


Self-Deception History and Background

Because the unconscious appears to be involved, self-deception is often discussed in the context of Sigmund Freud’s famous psychoanalytic theory. Rather than being one of the traditional defense mechanisms, self-deception is thought to be a necessary component of all defense mechanisms. Each one has the paradoxical element noted earlier: There must be at least one moment of self-deception for a defense mechanism to work. Those readers familiar with such defenses as projection, intellectualization, and repression will understand that, in each case, a person has to be both unaware and hyperaware of the disturbing information.

Psychoanalytic theory is pessimistic about your ability to ever recognize self-deception in yourself. That conclusion is probably too severe: A person should be able to recognize his or her own self-deception at some point after it occurs—when the person has cooled down and has a more objective perspective on the issue.


Study 1: The Decay of Self-deception

Study 1 examines the extent to which self-deception persists despite repeated evidence against a desired self-view. Participants completed a battery of four tests of general knowledge, predicting their score before each of the last three. Some participants—those in the answers condition—had access to an answer key for Test 1, and we expected them to use it to cheat (evidenced by outperforming a control group without answers). We also expected their high scores to trigger self-deception, leading them to overpredict their scores on subsequent tests for which they did not have answer keys. Performance on these subsequent tests offered repeated evidence of participants’ true ability. We assessed the extent to which the inflated predictions of participants given the answers on Test 1 would be tempered by their later experience taking tests without the answers, hypothesizing that their predictions would eventually, but not immediately, converge with their true ability. Previous research has shown that self-deception in this paradigm tracks participants’ chronic inclination to self-deceive (Chance et al., 2011), and we expected that the decay of self-deception here would be related to chronic self-deception as well. We hypothesized that for participants in the answers condition, self-deception would be greater and persist longer for those who were dispositionally high in self-deception. Furthermore, using an other-deception-related scale in combination with the self-deception scale allowed us to test whether prediction gaps were indeed correlated with self-deception and not with lying.

Since the design of these self-deception studies makes cheating ambiguous—intentionally so, to make self-deception possible—we conducted a pilot study to test whether using the answer key did indeed constitute cheating. According to Jones’s (1991) definition of unethical behavior, community members, rather than researchers or participants given the opportunity to cheat, are the appropriate judges of which behaviors constitute cheating. Sixty-five participants from Amazon’s Mechanical Turk read a description of our experimental research paradigm, including the instructions to participants, learned the results, and were asked to write four words describing the test takers. “Cheating” was the second most common open-ended response (15 people), after “dishonest” (22 people); 86% used the words “cheating,” “dishonest,” “unethical,” or synonyms of these words. Participants also rated the extent to which they considered this behavior to constitute cheating, on a 10-point scale (1: definitely not cheating to 10: definitely cheating). The mean response was 6.98 (SD = 2.86), with a modal response of “10.” Another group of 64 participants read about participants in the control condition and indicated on the same scale whether that group was cheating; the mean response was 2.50 (SD = 2.63), and the modal response was “1” (definitely not cheating). These results suggest that people judge the behavior of study participants in the answers condition who achieve higher scores to be unethical, such that “cheating” is an appropriate descriptor of their behavior. Cheaters do not need to perceive themselves as cheaters—indeed, they may be self-deceived.

Materials and Methods

Participants

Seventy-one student and community-member participants (33 male; mean age = 23.9, SD = 3.54) from the paid subject pool of a large, northeastern university were paid $20 to complete this experiment as the first of a series of unrelated studies during a 1-h group lab session. Participants also had the opportunity to earn performance-based bonus pay. Sample size was determined by laboratory capacity. Privacy dividers separated participants from one another.

Design and Procedure

Each participant was assigned to either the control or the answers condition. Both groups completed a series of general-knowledge trivia questions, such as “What is the only mammal that truly flies?” (Moore and Healy, 2008), configured into four 10-question tests. Participants learned at the beginning of the study that in all four tests, they would earn a $0.25 bonus for each correct answer. This incentive encourages cheating, which is required for self-deception in this paradigm, although a monetary incentive is not always necessary for prompting cheating and self-deception (Chance et al., 2011).

General Discussion

One might expect people who cheat on tests—or insider traders—to feel worse about their abilities as a result of their questionable behavior. After all, if they had been more talented, they would have had no reason to cheat. However, when self-deception is possible, ethics can fade (Tenbrunsel and Messick, 2004). People tend to focus on the positive outcome of their cheating and neglect the unsavory process that led to it.

Although the construct of self-deception has a long history in psychology, the nature of the process by which self-deception takes place is still subject to debate (Audi, 1997; Mele, 2010; Bandura, 2011; McKay et al., 2011; von Hippel and Trivers, 2011). In these two studies, we showed that though self-deception does occur rapidly, there is some decay over time, suggesting that self-deception may provide temporary boosts to the self-concept but that these boosts may be relatively short-lived given corrective feedback from the environment (Study 1). Additionally, Study 2 demonstrates that sensitivity to feedback depends on the extent to which it enables self-deception: feedback bolstering motivated beliefs in superior abilities seems to be given more weight than feedback about actual abilities. As a result, it appears as though people are vulnerable to serial self-deception, awaiting opportunities to inflate their self-views and only grudgingly adjusting them downward. Study 1 demonstrates that inflated predictions of subsequent performance in the answers group correlate with general self-deceptive enhancement, suggesting that participants engage in self-deceptive miscalibration. Future research might disambiguate total self-deception from general miscalibration by comparing predictions of own scores to predictions of others’ scores, allowing an assessment of whether people demonstrate self-deceptive miscalibration only when they are the focal actor, or whether even observing others induces miscalibration.

In our studies, we explored self-deception using a specific set of tasks similar to test situations in which students might have the opportunity to cheat. Although our focus was the impact of self-deception on people’s beliefs about their future performance, self-deception in similar contexts might also affect subsequent behavior. It could, for example, lead students to spend less time preparing for future tests, thus reducing their learning as well as hampering their future performance. It might also increase the likelihood of cheating again, by allowing people to feel good about themselves and their abilities when they cheat (and then self-deceive). Future research is needed to examine these negative behavioral consequences of self-deception, not only in the context of academic cheating but also in the many situations in which people inflate their performance by cheating and then deceive themselves about why they did so well.


For Test 1, participants in the answers condition had the answers to all ten questions printed in an answer key at the bottom of the page. Their instructions read, “It’s okay to check your answers as you go, but please do your own work.” These instructions were intentionally ambiguous—they did not prohibit looking at the answers, but they did imply that using the answer key to choose answers would be wrong. The control group completed the same test questions but without the answer key or the accompanying instructions. All participants were given 3 min to complete Test 1. After handing their completed Test 1 to an experimenter, they were given a score sheet with an answer key, on which they recorded from memory which questions they had answered correctly. This procedure prevented participants in the control group from using the answer key to change their answers. It did not prevent either group from inflating their reported score; we therefore recorded the actual score as well. After completing and turning in the score sheet, participants in both conditions had seen the answers for Test 1 and knew their Test 1 scores.

When participants received Test 2, they were asked to look it over before writing down their predicted score. The preview ensured that those in the answers group could confirm that the test would not include an answer key. It also reduced the implicit admission of guilt that might be associated with predicting a lower score on the second test than the first (“If I say I will do worse, the researchers will know I cheated”), by giving participants a valid excuse (“I just don’t happen to know these particular answers”). Thus, this design provided a strong test of our prediction that participants who had cheated on the first test would deceive themselves into predicting an unrealistically high score on the second.

After predicting their score, participants spent 3 min completing Test 2, then repeated the process three more times: scoring Test 2 on a separate answer sheet; looking over Test 3 and making a prediction; scoring Test 3 on a separate answer sheet; looking over Test 4 and making a prediction; and scoring Test 4 on a separate answer sheet. Note that for all participants, Tests 2, 3, and 4 did not include answers at the bottom, and participants had only one sheet in front of them (either a test/prediction sheet or an answer key/score sheet) at all times.

When participants had finished the testing procedure, they moved on to other unrelated studies, which included the Balanced Inventory of Desirable Responding (BIDR; Paulhus, 1998). We used the self-deceptive enhancement and impression management components of the BIDR to distinguish dispositional self-deception from dispositional lying. At the end of the study session, participants received their bonus payment. Because participants were not deceived (by the experimenters), the university Human Subjects Committee approving the experiment determined that debriefing was not required.

Results and Discussion

Cheating

We predicted that participants in the answers condition would inflate their performance on the first test by looking at the answers. Indeed, they reported scoring higher than the control group, t(69) = 6.62, p < 0.001, d = 1.58 (Table 1). Our subsequent analyses reflect reported scores, since self-deception relies on beliefs; however, using actual scores here or in any of the subsequent analyses did not affect the direction or significance of the results.
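For readers who want the conventions behind these numbers spelled out: the group sizes can be recovered from the paired tests reported below (t(35) implies 36 participants in the answers condition; t(34) implies 35 in the control, matching the 71 total). The paper does not state which effect-size formula it used, so the standard pooled-SD version of Cohen's d shown here is an assumption:

```latex
% Answers (n_1 = 36) vs. control (n_2 = 35); n_1 + n_2 = 71 participants.
\[
  df = n_1 + n_2 - 2 = 36 + 35 - 2 = 69
\]
\[
  d = \frac{M_1 - M_2}{s_p},
  \qquad
  s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
\]
```

The df of 69 in the formula matches the t(69) reported above.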

Table 1. Study 1 scores and predictions.

On the test in which cheating was possible, the average score was 7.89 out of 10, indicating either a mixture of cheaters and non-cheaters, many people cheating just a little, or both. Whereas no participants in the control condition reported a perfect score (a “10”) on Test 1, 44% of participants in the answers condition did. However, even excluding perfect scores, Test 1 scores were higher in the answers condition than in the control condition (6.20 vs. 4.51). This suggests many people cheated just a little, consistent with Mazar et al.’s (2008) theory of self-concept maintenance, which posits that people avoid negative self-signals by cheating only within an acceptable range.

Behavioral Self-deception

We expected that if participants in the answers condition were self-deceived, their predictions for subsequent tests would be higher than their actual scores; we expected this gap to be largest on Test 2, immediately after participants had cheated to achieve a high score on Test 1, and to decline over time. We did not expect participants in the control condition, who were not given the opportunity to cheat, to show a gap between their predictions and actual performance on Tests 2 through 4.

A paired t-test confirmed that Test 2 predictions exceeded Test 2 scores for participants in the answers condition, t(35) = 3.67, p = 0.001, d = 0.73 (Table 1), reflecting self-deception: despite having had the chance to examine the questions on Test 2 and confirm that no answers were included, participants in the answers group expected to perform better than they did. Their surprisingly low scores on Test 2 did not eliminate their self-deception: their predictions for Test 3 were also significantly higher than their Test 3 scores, t(35) = 2.52, p = 0.02, d = 0.35 (Table 1). Only after scoring below their expectations on both Tests 2 and 3 did self-deception decay completely: predictions for Test 4 were not significantly higher than actual scores, t(35) = 1.13, p = 0.27, d = 0.20 (Table 1).
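The paired comparison described above is straightforward to reproduce. Below is a minimal sketch in Python; the data are invented for illustration (the study's raw data are not given here), and the paired-samples effect size used, mean gap over the SD of the gaps, is one common convention rather than necessarily the authors' choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 36  # answers-condition sample size implied by t(35) above

# Hypothetical data: predicted and actual Test 2 scores (0-10) per participant.
predicted = rng.integers(4, 10, size=n).astype(float)
actual = np.clip(predicted - rng.normal(1.5, 1.5, size=n), 0, 10)

# Paired t-test of predictions against scores, as in the passage above.
t_stat, p_value = stats.ttest_rel(predicted, actual)

# Cohen's d for paired data: mean gap divided by the SD of the gaps.
gap = predicted - actual
d = gap.mean() / gap.std(ddof=1)

print(f"t({n - 1}) = {t_stat:.2f}, p = {p_value:.3f}, d = {d:.2f}")
```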

By contrast, predictions did not differ significantly from scores for participants in the control group on any of the three tests: Test 2 [t(34) = 1.36, p = 0.18], Test 3 [t(34) = 0.95, p = 0.35], Test 4 [t(34) = 0.67, p = 0.51] (Table 1). The lack of overprediction in the control group also indicates that the inflated predictions of participants in the answers condition do not reflect mere overconfidence: overconfidence would imply that people generally inflate their predictions (Moore and Healy, 2008), but this pattern was not observed.

Dispositional Self-deception

We also explored whether the general tendency to self-deceive would relate to the decay in the observed prediction-performance gaps. Self-Deceptive Enhancement was indeed correlated with overpredictions on the second test (r = 0.40, p = 0.02) in the answers condition, but not in the control condition (p = 0.79). A median split on Self-Deceptive Enhancement revealed that high self-enhancers were driving the self-deceptive predictions observed in the answers group, and that their bias was strong even in predictions for Test 3. High self-deceivers significantly overpredicted their scores on Test 2 [6.58 vs. 4.84, t(18) = 3.07, p = 0.007, d = 0.93] as well as Test 3 [5.95 vs. 4.95, t(18) = 2.73, p = 0.01, d = 0.57], but eventually even this group tempered their expectations to conform to reality, more accurately predicting their scores on Test 4 [5.74 vs. 5.11, t(18) = 1.23, p = 0.24]. Low self-deceivers in the answers group, on the other hand, did not show significant differences between any of their predictions and subsequent scores (all p’s > 0.10). This pattern of results is shown in Figure 1. As expected, Impression Management showed no significant relationship to overpredictions in either the answers or the control group (all p’s > 0.10), suggesting that the overpredicting observed here does not derive merely from a strategy to impress others such as the experimenters. For the answers group, we also compared the Self-Deceptive Enhancement of those reporting perfect scores (likely cheaters) to that of those scoring lower; although the sample size was small and the observed difference not significant, those reporting perfect scores showed directionally higher Self-Deceptive Enhancement [7.19 vs. 6.20, t(34) = 0.64, p = 0.52]. Note that the self-deception observed here is not complete: participants in the answers condition do predict lower scores on Test 2 than they received on Test 1. These results suggest that rather than witnessing complete self-deception, we observe a self-deceptive miscalibration that then diminishes further in the face of feedback.
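For concreteness, here is a small Python sketch of the two analyses just described, the trait-by-overprediction correlation and the median split. All numbers, column names, and distributions below are our own assumptions; testing each group's mean prediction-score gap against zero is equivalent to the paired prediction-versus-score comparison reported above.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)

# Invented data: one row per answers-condition participant.
df = pd.DataFrame({
    "sde": rng.normal(6.5, 2.0, 36),     # BIDR Self-Deceptive Enhancement score
    "gap_t2": rng.normal(1.7, 2.0, 36),  # Test 2 prediction minus actual score
})

# Pearson correlation between trait self-deception and overprediction.
r, p = stats.pearsonr(df["sde"], df["gap_t2"])
print(f"r = {r:.2f}, p = {p:.3f}")

# Median split into high vs. low self-deceivers, then test each group's
# mean gap against zero (i.e., no overprediction).
df["group"] = np.where(df["sde"] >= df["sde"].median(), "high", "low")
for name, grp in df.groupby("group"):
    t, pv = stats.ttest_1samp(grp["gap_t2"], 0.0)
    print(f"{name}: mean gap = {grp['gap_t2'].mean():.2f}, "
          f"t({len(grp) - 1}) = {t:.2f}, p = {pv:.3f}")
```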

Figure 1. Overpredictions on Tests 2-4 by high and low self-deceivers in Study 1.

These results demonstrate that self-deceivers come to terms with reality only when faced with repeated exposure to counterevidence against their preferred beliefs (for these participants, scoring lower on multiple tests on which they could not cheat), and that they do so eventually rather than immediately. This pattern is most striking for those with a dispositional tendency toward self-enhancement.


The power of self-deception: Why and how our brains deceive us

Brains. Where would we be without them? They control our thinking, speech, feelings, sight, and hearing. They create and store our memories. They control our breathing and make it possible for us to walk without falling down. Our brains are super-handy and mostly dependable. Or are they?

Fans of the public radio program and podcast Hidden Brain have come to value the core concept of the program, that unconscious patterns drive human behavior, for better and worse. Host Shankar Vedantam has made it his life’s work to help a wide audience understand those dynamics.

Vedantam’s new book, co-authored with Bill Mesler, is "Useful Delusions: The Power & Paradox of the Self-Deceiving Brain." The work explores ways in which evolution designed our brain to survive, not to seek the truth, and how the lies we tell ourselves sustain us, and our relationships with other people and the world. As the authors write, our self-deception “enables us to accomplish useful social, psychological, or biological goals.”

This conversation touches on many timely subjects, such as why facts may be unconvincing for many people, and why our best approach to mending divisions and bad behaviors may be inquiry, kindness, and compassion.

Shankar Vedantam is an author, science correspondent, and the host of Hidden Brain. He was interviewed by KUOW’s Ross Reynolds on April 13, 2021. Town Hall Seattle presented their conversation.


Research on the Prevalence and Purposes of Deception

Only in recent years has systematic research been conducted to understand how often and why individuals deceive others. This work has led to an important conclusion: everyone lies. The most direct evidence comes from research conducted by Bella DePaulo and her colleagues. Using survey and diary methods, they have found that the overwhelming majority of people (approximately 99.5%) report lying daily, willingly describing in detail an average of approximately one to two explicit verbal lies per day. The most commonly reported lies involve the misrepresentation of one’s feelings and opinions (e.g., telling your grandmother that you like the out-of-style sweater she gave you) and providing false information about one’s actions, plans, and whereabouts (e.g., reporting that you were at the library studying when you were actually at the pub). Less frequent but still common lies concern misleading others about one’s knowledge or achievements (e.g., lying about one’s academic record), providing fictitious reasons or explanations for actions taken (e.g., blaming computer trouble for late work actually resulting from procrastination), and lies about facts and possessions (e.g., claiming not to have the money to lend to a friend). Commonly reported reasons for lying include avoiding embarrassment, making a favorable impression on others, concealing real feelings or reactions, avoiding punishment or aversive situations, and not hurting others’ feelings (often called altruistic lies).


7. Collective Self-Deception

Collective self-deception has received scant direct philosophical attention compared with its individual counterpart. Collective self-deception might refer simply to a group of similarly self-deceived individuals or to a group entity, such as a corporation, committee, jury, or the like, that is self-deceived. These alternatives reflect two basic perspectives social epistemologists have taken on ascriptions of propositional attitudes to collectives. On the one hand, such attributions might be taken summatively, as simply an indirect way of attributing those states to members of the collective (Quinton 1975/1976). This summative understanding, then, considers attitudes attributed to groups to be nothing more than metaphors expressing the sum of the attitudes held by their members. To say that students think tuition is too high is just a way of saying that most students think so. On the other hand, such attributions might be understood non-summatively as applying to collective entities, themselves ontologically distinct from the members upon which they depend. These so-called ‘plural subjects’ (Gilbert 1989, 1994, 2005) or ‘social integrates’ (Pettit 2003), while supervening upon the individuals comprising them, may well express attitudes that diverge from those of individual members. For instance, saying NASA believed the O-rings on the space shuttle’s booster rockets to be safe need not imply that most or all of the members of this organization personally held this belief, only that the institution itself did. The non-summative understanding, then, considers collectives to be, like persons, apt targets for attributions of propositional attitudes, and potentially of moral and epistemic censure as well. Following this distinction, collective self-deception may be understood in either a summative or a non-summative sense.

In the summative sense, collective self-deception refers to self-deceptive belief shared by a group of individuals, who each come to hold the self-deceptive belief for similar reasons and by similar means, varying according to the account of self-deception followed. We might call this self-deception across a collective. In the non-summative sense, the subject of collective self-deception is the collective itself, not simply the individuals comprising it. The following sections offer an overview of these forms of collective self-deception, noting the significant challenges posed by each.

7.1 Summative Collective Self-Deception: Self-Deception Across a Collective

Understood summatively, we might define collective self-deception as the holding of a false belief in the face of evidence to the contrary by a group of people as a result of shared desires, emotions, or intentions (depending upon the account of self-deception) favoring that belief. Collective self-deception is distinct from other forms of collective false belief—such as might result from deception or lack of evidence—insofar as the false belief issues from the agents’ own self-deceptive mechanisms (however these are construed), not the absence of evidence to the contrary or presence of misinformation. Accordingly, the individuals constituting the group would not hold the false belief if their vision weren’t distorted by their attitudes (desire, anxiety, fear or the like) toward the belief. What distinguishes collective self-deception from solitary self-deception just is its social context, namely, that it occurs within a group that shares both the attitudes bringing about the false belief and the false belief itself. Compared to its solitary counterpart, self-deception within a collective is both easier to foster and more difficult to escape, being abetted by the self-deceptive efforts of others within the group.

Virtually all self-deception has a social component, being wittingly or unwittingly supported by one’s associates (see Ruddick 1988). In the case of collective self-deception, however, the social dimension comes to the fore, since each member of the collective unwittingly helps to sustain the self-deceptive belief of the others in the group. For example, my cancer-stricken friend might self-deceptively believe her prognosis to be quite good. Faced with the fearful prospect of death, she does not form accurate beliefs regarding the probability of her full recovery, attending only to evidence supporting full recovery and discounting or ignoring altogether the ample evidence to the contrary. Caring for her as I do, I share many of the anxieties, fears, and desires that sustain my friend’s self-deceptive belief, and as a consequence I form the same self-deceptive belief via the same mechanisms. In such a case, I unwittingly support my friend’s self-deceptive belief and she mine—our self-deceptions are mutually reinforcing. We are collectively or mutually self-deceived, albeit on a very small scale. Ruddick (1988) calls this ‘joint self-deception.’

On a larger scale, sharing common attitudes, large segments of a society might deceive themselves together. For example, we share a number of self-deceptive beliefs regarding our consumption patterns. Many of the goods we consume are produced by people enduring labor conditions we do not find acceptable, and in ways that we recognize are environmentally destructive and likely unsustainable. Despite being at least generally aware of these social and environmental ramifications of our consumptive practices, we hold the overly optimistic beliefs that the world will be fine, that its peril is overstated, that the suffering caused by exploitative and ecologically degrading practices is overblown, that our own consumption habits are unconnected to these sufferings anyway, even that our minimal efforts at conscientious consumption are an adequate remedy (see Goleman 1989). When self-deceptive beliefs such as these are held collectively, they become entrenched, and their consequences, good or bad, are magnified (Surbey 2004).

The collective entrenches self-deceptive beliefs by providing positive reinforcement from others sharing the same false belief, as well as protection from evidence that would destabilize the target belief. There are, however, limits to how entrenched such beliefs can become while remaining self-deceptive. The social support cannot be the sole or primary cause of the self-deceptive belief, for then the belief would simply be the result of unwitting interpersonal deception, not the deviant belief-formation process that characterizes self-deception. If the environment becomes so epistemically contaminated as to make counterevidence inaccessible to the agent, then we have a case of false belief, not self-deception. Thus, even within a collective, a person is self-deceived just in case she would not hold her false belief if she did not possess the motivations skewing her belief-formation process. This said, relative to solitary self-deception, the collective variety does present greater external obstacles to avoiding or escaping self-deception, and is for this reason more entrenched. If the various proposed psychological mechanisms of self-deception pose an internal challenge to the self-deceiver’s power to control her belief formation, then these social factors pose an external challenge to the self-deceiver’s control. Determining how superable this challenge is will affect our assessment of individual responsibility for self-deception, as well as the prospects of unassisted escape from it.

7.2 Non-summative Collective Self-Deception: Self-Deception of a Collective Entity

Collective self-deception can also be understood from the perspective of the collective itself, in a non-summative sense. Though there are varying accounts of group belief, generally speaking a group can be said to believe, desire, value, or the like just in case its members “jointly commit” to these things as a body (Gilbert 2005). A corporate board, for instance, might be jointly committed as a body to believe, value, and strive for whatever the CEO recommends. Such commitment need not entail that each individual board member personally endorses such beliefs, values, or goals, only that as members of the board they do (Gilbert 2005). While philosophically precise accounts of non-summative self-deception remain largely unarticulated, the possibilities mirror those of individual self-deception. When collectively held attitudes motivate a group to espouse a false belief despite the group’s possession of evidence to the contrary, we can say that the group is collectively self-deceived in a non-summative sense.

For example, Robert Trivers (2000) suggests that ‘organizational self-deception’ led to NASA’s failure to represent accurately the risks posed by the space shuttle’s O-ring design, a failure that eventually led to the Challenger disaster. The organization as a whole, he argues, had strong incentives to represent such risks as small. As a consequence, NASA’s Safety Unit mishandled and misrepresented data it possessed suggesting that under certain temperature conditions the shuttle’s O-rings were not safe. NASA, as an organization, then, self-deceptively believed the risks posed by O-ring damage were minimal. Within the institution, however, there were a number of individuals who did not share this belief, but both they and the evidence supporting their belief were treated in a biased manner by the decision-makers within the organization. As Trivers (2000) puts it, this information was relegated “to portions of … the organization that [were] inaccessible to consciousness (we can think of the people running NASA as the conscious part of the organization).” In this case, collectively held values created a climate within NASA that clouded its vision of the data and led to its endorsement of a fatally false belief.

Collective self-deceit may also play a significant role in facilitating unethical practices by corporate entities. For example, a collective commitment by members of a corporation to maximizing profits might lead members to form false beliefs about the ethical propriety of the corporation’s practices. Gilbert (2005) suggests that such a commitment might lead executives and other members to “simply lose sight of moral constraints and values they previously held”. Similarly, Tenbrunsel and Messick (2004) argue that self-deceptive mechanisms play a pervasive role in what they call ‘ethical fading’, acting as a kind of ‘bleach’ that renders organizations blind to the ethical dimensions of their decisions. They argue that such self-deceptive mechanisms must be recognized and actively resisted at the organizational level if unethical behavior is to be avoided. More specifically, Gilbert (2005) contends that collectively accepting that “certain moral constraints must rein in the pursuit of corporate profits” might shift corporate culture in such a way that efforts to respect these constraints are recognized as part of being a good corporate citizen. In view of the ramifications this sort of collective self-deception has for the way we understand corporate misconduct and responsibility, understanding its specific nature in greater detail remains an important task.

Collective self-deception, understood in either the summative or the non-summative sense, raises a number of significant questions, such as whether individuals within collectives bear responsibility for their own self-deception or for the part they play in the collective’s self-deception, and whether collective entities can be held responsible for their epistemic failures. Finally, collective self-deception prompts us to ask what means are available to collectives and their members to resist, avoid, and escape self-deception. To answer these and other questions, more precise accounts of these forms of self-deception are needed. Given the capacity of collective self-deception to entrench false beliefs and to magnify their consequences—sometimes with disastrous results—collective self-deception is not just a philosophical puzzle; it is a problem that demands attention.


Living a Lie: We Deceive Ourselves to Better Deceive Others

People mislead themselves all day long. We tell ourselves we’re smarter and better looking than our friends, that our political party can do no wrong, that we’re too busy to help a colleague. In 1976, in the foreword to Richard Dawkins’s The Selfish Gene, the biologist Robert Trivers floated a novel explanation for such self-serving biases: We dupe ourselves in order to deceive others, creating social advantage. After four decades, Trivers and his colleagues published the first research supporting his idea.

Psychologists have identified several ways of fooling ourselves: biased information-gathering, biased reasoning and biased recollections. Their research, published in the Journal of Economic Psychology, focuses on the first—the way we seek information that supports what we want to believe and avoid that which does not.

In one experiment Trivers and his team asked 306 online participants to write a persuasive speech about a fictional man named Mark. They were told they would receive a bonus depending on how effective it was. Some were told to present Mark as likable, others were instructed to depict him as unlikable, and the rest were directed to convey whatever impression they formed. To gather information about Mark, the participants watched a series of short videos, which they could stop watching at any point. For some viewers, most of the early videos presented Mark in a good light (recycling, returning a wallet), and they grew gradually darker (catcalling, punching a friend). For others, the videos went from dark to light.

When incentivized to present Mark as likable, people who watched the likable videos first stopped watching sooner than those who saw the unlikable videos first. The former did not wait for a complete picture, as long as they had the information they needed to convince themselves, and others, of Mark’s goodness. In turn, their own opinions about Mark were more positive, which made their essays about his good nature more convincing, as rated by other participants. (A complementary process occurred for those paid to present Mark as bad.) “What’s so interesting is that we seem to intuitively understand that if we can get ourselves to believe something first, we’ll be more effective at getting others to believe it,” says William von Hippel, a psychologist at The University of Queensland, who co-authored the study. “So we process information in a biased fashion, we convince ourselves, and we convince others. The beauty is, those are the steps Trivers outlined—and they all lined up in one study.”
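The stopping dynamic at the heart of this design is easy to see in a toy simulation. The sketch below is our own illustration, not the study's materials: the clip values, impression threshold, and stopping rule are all invented assumptions.

```python
# Toy model of motivated stopping: clips are +1 (likable) or -1 (unlikable);
# a motivated viewer stops as soon as the running impression supports the
# conclusion they are paid to argue for.
def watch(clips, goal_threshold=2):
    impression, watched = 0, 0
    for clip in clips:
        impression += clip
        watched += 1
        if impression >= goal_threshold:  # enough congenial evidence: stop early
            break
    return impression, watched

likable_first = [+1, +1, +1, +1, -1, -1, -1, -1]
unlikable_first = likable_first[::-1]

print(watch(likable_first))    # (2, 2): stops after two clips, positive view
print(watch(unlikable_first))  # (0, 8): watches everything, mixed view
```

The viewer who happens to see congenial evidence first forms a rosier impression from less information, which is the asymmetry the experiment exploits.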

In real life you are not being paid to talk about Mark but you may be selling a used car or debating a tax policy or arguing for a promotion—cases in which you benefit not from gaining and presenting an accurate picture of reality but from convincing someone of a particular point of view.

One of the most common types of self-deception is self-enhancement. Psychologists have traditionally argued that we evolved to overestimate our good qualities because doing so makes us feel good. But feeling good on its own has no bearing on survival or reproduction. Another assertion is that self-enhancement boosts motivation, leading to greater accomplishment. But if motivation were the goal, we would simply have evolved to be more motivated, without the costs of reality distortion.

Trivers argues that a glowing self-view makes others see us in the same light, leading to mating and cooperative opportunities. Supporting this argument, Cameron Anderson, a psychologist at the University of California, Berkeley, showed in 2012 that overconfident people are seen as more competent and have higher social status. “I believe there is a good possibility that self-deception evolved for the purpose of other-deception,” Anderson says.

In another study, published in Social Psychological and Personality Science, von Hippel and collaborators tested all three arguments together, in a longitudinal fashion. Does overconfidence in one’s self increase mental health? Motivation? Popularity?

Tracking almost 1,000 Australian high school boys for two years, the researchers found that over time, overconfidence about one’s athleticism and intelligence predicted neither better mental health nor better athletic or academic performance. Yet athletic overconfidence did predict greater popularity over time, supporting the idea that self-deception begets social advantage. (Intellectual self-enhancement may not have boosted popularity, the authors suggest, because among the teenage boys, smarts may have mattered less than sports.)

Why did it take so long for experimental evidence for Trivers’ idea to emerge? In part, he says, because he is a theorist and did not test it until he met von Hippel. Other experimental psychologists didn’t test it because the theory was not well known in psychology, von Hippel and Anderson say. Further, they suggest, most psychologists saw self-esteem or motivation as reason enough for self-enhancement to evolve.

Hugo Mercier, a researcher at the Institute for Cognitive Sciences in France who was not involved in the new studies, is familiar with the theory but questions it. He believes that in the long run overconfidence may backfire. He and others also debate whether motivated biases can strictly be called self-deception. “The whole concept is misleading,” he says. It’s not as though there is one part of us deliberately fooling another part of us that is the “self.” Trivers, von Hippel and Anderson of course disagree with Mercier on self-deception’s functionality and terminology.

Von Hippel offers two pieces of wisdom regarding self-deception: “My Machiavellian advice is this is a tool that works,” he says. “If you need to convince somebody of something, if your career or social success depends on persuasion, then the first person who needs to be [convinced] is yourself.” On the defensive side, he says, whenever anyone tries to convince you of something, think about what might be motivating that person. Even if he is not lying to you, he may be deceiving both you and himself.

This post originally appeared on Scientific American and was published April 3, 2017. This article is republished here with permission.


