Cognitive bias believing you are good at the things you are bad at, as successes are more vivid?

What is the name for this cognitive bias?

Scenario

I am bad at socializing with people. In roughly 95% of my conversations I was very awkward, yet mostly the other 5% stuck in my memory. Therefore I think that I'm a very social person, although, overwhelmingly, I'm bad at it.

My mind is biased. I'm blind to all the times I failed, but can recount the outliers (successes) more vividly as if they were the norm.


Replace "socializing with people" with anything else.

PS: found and highlighted in this video: https://www.youtube.com/watch?v=17LuaKpCoDw starting around 1:25


The self-serving bias is the most likely bias at play. It affects what information we remember and how we interpret events. It can be used to explain the Dunning-Kruger effect, which is the effect you are describing.


It could be described as quite a few cognitive biases. See https://en.m.wikipedia.org/wiki/List_of_cognitive_biases.

For example, confirmation bias, illusory superiority or choice-supportive bias could be at work here in your description.


Ways to overcome ‘attribution biases’

Attribution biases are most often not good for us. So how do we overcome these different types of attribution biases?

We can overcome our own and others’ attribution biases in a number of ways. By allowing ourselves to be open to new information, facts and evidence, we improve our decision-making and problem-solving abilities.

4 Steps to overcoming Attribution Bias

1. Realization

The first step towards reducing and overcoming attribution bias is to recognize that these are cognitive biases and are present in every individual to varying degrees. The other realization is that you can’t completely get rid of them.

This is essential because there will always be times when you won’t realize that you are in the grip of an attribution bias. Once these two points are very clear, the positive movement towards overcoming attribution bias begins.

2. Start Challenging your stories

Every attribution has an underlying story behind it. The first step is to identify that story. In Rohit’s example, he creates a story around his colleague’s lateness: perhaps she is not serious about her job, or perhaps she is too lazy to wake up early.

The key is to identify your story first, the one that your brain creates. Acknowledge that this is a story made up by you and it needs to be verified before it becomes an attribution.

3. Try to verify your story

The next step is to verify your story. This can be done by looking at the facts and/or having a direct line of communication.

The key is to treat it as a story and not as your belief. Sometimes you need to tell yourself, “this is just a story, I need to verify it”. If you succeed in convincing yourself, you free yourself from ‘confirmation bias’.

If Rohit had checked the facts about his colleague’s past arrival times, he might have noticed that she usually comes to the office on time. He could even have had a word with her, and she might have explained why she was late.

The solutions to overcoming attribution bias are often as simple as these.

4. Avoid blaming others (externalizing the blame)

We are always better off when we get rid of the habit of externalizing the blame. This, in essence, increases our control of the situation and allows us to make efforts to make things better.

Always try to focus on resolving the issue, not on who is to blame. Once people get into problem-solving mode and focus on resolving the problem rather than working out who’s at fault, they are more likely to be able to resolve the issues.

If these four steps for overcoming a cognitive bias as common as attribution bias are followed, most people will be able to minimize its effects. The result is immediately better decision-making that leads to success in many aspects of life.


COGNITIVE BIASES IN INFORMATION SECURITY: CAUSES, EXAMPLES AND MITIGATION

This article makes a contribution to the theory of the human factor in information security by exploring how errors in thinking distort perceptions of InfoSec issues. Besides examples from practice, the author proposes several ideas for mitigating the negative effects of cognitive biases through training.

Keywords: information, security, bias, psychology, determinant, causes, mitigation, cognitive, training

One of the components of a mature information security program is the human factor. Typically, the emphasis is on maintaining a security awareness program and mitigating risks caused by human mistakes and lack of knowledge of security.

Security awareness is necessary but is only one aspect of the human factor. Another challenge for security professionals is finding actionable arguments to support their analysis and recommendations on information security issues in their organisations. The key word here is “actionable”. Their experience shows that professional analysis, argumentation techniques and even supporting evidence combined may be insufficient for properly addressing some of the identified problems. Although a number of difficulties can be named as causes of insufficient or inadequate action on information security matters, such as a shortage of budget, time or human resources, management ignorance and so forth, the picture would be incomplete if the psychological phenomenon of cognitive biases were excluded.

Cognitive biases are inherent characteristics of human nature and thus part of everyone’s thinking. A bias is an error in thinking that occurs when people process and interpret information, and it influences the way they see and think about the world. Unfortunately, these biases lead to poor decisions and incorrect judgments. This article relates research on biased thinking to examples from the InfoSec industry.

The first part of the article explains several important (and non-exhaustive) determinants of cognitive biases and then illustrates them with realistic sample situations that an InfoSec specialist might encounter. The second part proposes several ideas on how organisations can deal with the biases so that their occurrence and impact are reduced. The author wants to emphasize the need for further exploration of the potency of these ideas in the real world and their role in a possible mitigation strategy. In addition, the reader is encouraged to learn about the types of cognitive biases – a topic not directly discussed here.

DETERMINANTS 1 FOR COGNITIVE BIASES AND EXAMPLES

The Misperception and Misinterpretation of Data or Events

People deal with data on an everyday basis. The common approach is to analyse the data by converting it into something more useful – information – and from there to continue the conversion into knowledge and then wisdom 2 . This complex processing chain may be impaired by the misperception or misinterpretation of random data or events. As an example, a data leakage prevention (DLP) analyst, tasked with inspecting DLP reports for irregularities, may mistake random events for real attacks on a network. In this instance, the “random” data could be misinterpreted. One should understand that human nature is inclined to look for patterns where they do not always exist 3 .

In a second example, a typical computer user could erroneously conclude that his computer troubles are caused by malware. However, an experienced IT support specialist could identify a different cause for the symptoms of the issue and quickly rule out the malware scenario as a cause.

Judgment by Representativeness 4
Representativeness can be thought of as the reflexive tendency to assess the similarity of outcomes, instances, and categories on relatively salient and even superficial features, and then to use these assessments of similarity as a basis for judgment.

Judgment by representativeness is often valid and helpful because objects, instances, and categories that go together usually do in fact share a resemblance. However, the overapplication of representativeness is what leads to biased conclusions. Many will recall personal experiences in which a person who belongs to a particular group is attributed qualities considered typical of that group. For instance, some IT experts perceive the members of their information security team as very strict security and compliance enforcers, but in reality not all of them may fit this profile. Stereotypical over-generalisations like “All the IT experts…”, “All the auditors…”, “All the consultants from that company…” are often followed by imprecise and even incorrect qualifications (negative or positive). The simplification can, and in some instances will, be misleading.

Misperceptions of Random Dispersions
If an information security professional analyses statistical data from a certain security tool, he may notice patterns that lead him to conclude that specific events occur more frequently in specific time frames 5 . For instance, if a particular type of security incident occurred in four consecutive months, each time in the last seven days of the month, this could indicate that there is a pattern. These incidents could be correlated with other known events and assumptions can be made about the underlying cause, but a definite conclusion should not be drawn without additional investigation.
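
As a rough, hypothetical illustration of how easily such an apparent pattern can arise by chance, the short Python sketch below assumes 50 independently monitored incident types over four months, with one incident per type per month on a uniformly random day of a 30-day month, and estimates how often at least one type lands in the last seven days of the month every single month. The numbers are made up for illustration, not drawn from the article.

```python
import random

# Hypothetical Monte Carlo sketch: 50 independent incident types, 4 months,
# one incident per type per month on a uniformly random day of a 30-day month.
# How often does at least one type fall in the "last 7 days" window in all
# 4 months purely by chance?
N_TYPES, N_MONTHS, DAYS, WINDOW = 50, 4, 30, 7
TRIALS = 100_000

def apparent_streak():
    for _ in range(N_TYPES):
        if all(random.randint(1, DAYS) > DAYS - WINDOW for _ in range(N_MONTHS)):
            return True
    return False

hits = sum(apparent_streak() for _ in range(TRIALS))
print(f"Chance of an 'end-of-month pattern' in at least one type: {hits / TRIALS:.1%}")
# Roughly 1 - (1 - (7/30)**4)**50, i.e. on the order of 10-15%: rare for any
# single incident type, but fairly likely somewhere when many types are watched.
```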

Solidifying the Misperceptions with Causal Theories 6
Once a person has (mis)identified a random pattern as a “real” phenomenon, it is likely to be integrated into his pre-existing beliefs 7 . These beliefs, furthermore, serve to bias the person’s evaluation of new information in such a way that the initial belief becomes solidly entrenched. For example, if a person participated as the auditee in an audit several years ago, where he was supposed to provide the auditor with some of the IT security procedures, the same person could afterwards develop false expectations about the requirements of other standards or for other types of organisation. This person could be convinced that he is well aware of all auditing practices, but in reality he could be lacking essential knowledge of the specifics of other security standards and types of audits (e.g., the difference between SOC 2 Type I and Type II audits).

Misunderstanding instances of statistical regression
Statistics teaches that when two variables are related, but imperfectly so, extreme values on one of the variables tend to be matched by less extreme values on the other. For instance, a company’s financially disastrous years tend to be followed by more profitable ones, and a student’s very high score on an exam (over 97%) tends to be followed by a somewhat lower score on the next exam.

If people are asked to predict the next result after an extreme value, they often tend not to consider statistical regression and make non-regressive or only minimally regressive predictions (they predict a similar value). 8 A second problem is the tendency of people to fail to recognise statistical regression when it occurs and instead “explain” the observed phenomenon with complicated and even superfluous theories. This is called the regression fallacy. For example, a lesser performance that follows an exceptional one is attributed to slacking off; a slight improvement in the security incident rate is attributed to the latest policy update; a company’s management may hold its IT Security Officer accountable for a decrease in the server compliance level after an excellent patching and hardening activity three months earlier.
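
A small simulation makes the regression effect concrete. The sketch below uses a hypothetical model (not data from the text) in which each exam score is a stable “true ability” plus independent exam-day noise; the students who score above the 97th percentile on the first exam score, on average, noticeably closer to the mean on the second exam, with no slacking off involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: score = stable ability + independent exam-day noise.
n_students = 10_000
ability = rng.normal(75, 8, n_students)
exam1 = ability + rng.normal(0, 6, n_students)   # first exam
exam2 = ability + rng.normal(0, 6, n_students)   # second exam, same abilities

top = exam1 > np.percentile(exam1, 97)           # extreme scorers on exam 1

print(f"Top group, exam 1 average: {exam1[top].mean():.1f}")
print(f"Top group, exam 2 average: {exam2[top].mean():.1f}")  # lower: regression to the mean
print(f"Overall exam 2 average:    {exam2.mean():.1f}")
```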

Misinterpretation of Incomplete and Unrepresentative Data (Assuming Too Much from Too Little)

The Excessive Impact of Confirmatory Information
The beliefs people hold are primarily supported by positive types of evidence. Moreover, much of that evidence is necessary for the beliefs to be true, but it is not always sufficient to warrant them. If one fails to recognize that a particular belief rests on deficient evidence, the belief becomes an “illusion of validity 9 ” and is seen not as a matter of opinion or values but as a logical conclusion from the objective evidence that any rational person would reach. The most likely reason for the excessive influence of confirmatory information is that it is easier to deal with cognitively than non-confirmatory information.

Information systems audits are good examples of searching for confirmatory evidence 10 . In an audit, unless a statistical methodology 11 is utilised for controls testing, the evidence for the effectiveness of the controls becomes open to interpretation, and the auditor’s intention to provide “reasonable assurance” on the controls becomes as ambiguous as it sounds. Auditors usually ask about the existence of policies and procedures and mostly look for positive evidence. There may even be instances of auditors who ignore non-supporting evidence and ask the auditee for supporting evidence instead. They shouldn’t, but they might.

In another example, if the security specialist in a small company has a number of responsibilities for the entire information security management system (ISMS), there will probably be many opportunities for him to prove his skills but also to make mistakes. If the company’s CEO favours the employee, he may look for achievements that indicate his professionalism. If the CEO doesn’t favour him, the focus may be on the person’s past mistakes, which, considered alone, would indicate incompetence. In this latter case, the past successes are often ignored.

The Problem of Hidden or Absent Data
In some cases, essential data could simply be absent. This makes it difficult to compare good and bad courses of action. In such situations, people could erroneously conclude that their evaluation criteria are adequate. For instance, the decision to increase the password complexity level and to lower the expiration period for the accounts of a particular business critical application is an accepted good security practice. However, if only this general best practice is taken into account, the expectations of the change could be overly optimistic. The reason for this is that a lot of missing information cannot be considered: it is nearly impossible to anticipate all the indirect consequences of such a change, like users starting to write down their passwords. If they do this, the risk for password compromise will most likely increase and the change will have the opposite effect.

In another example, the organisation’s leadership decides to outsource certain IT security functions to a third-party provider instead of modernising the existing capabilities. This will likely improve the overall capabilities, but there will be very limited information about whether that course of action was the best decision, because the alternative will never be pursued and tested.

A third example can be given on the subject of risk assessment. People often think that if a certain risk has never materialized, then the likelihood of its occurrence in the future is very low 12 . However, if a risk specialist thoroughly analyses the existing information on the risk, he may conclude that the likelihood is much higher.

Self-fulfilling Prophecies 13
A peculiar case of the hidden data problem arises whenever our expectations lead us to act in ways that fundamentally change the world we observe. When this happens, we often accept what we see at face value, with little consideration of how things might have been different had we acted differently. For example, if a senior manager believes that a member of the security team performs unsatisfactorily, the latter will find it difficult to disprove him; if the CIO thinks the CISO behaves in an unfriendly way, the latter may find it difficult to change that perception. Even the absence of friendliness could be erroneously construed as unfriendliness. In such situations, the perceiver’s expectations can cause the other person to behave in such a way that certain behaviours by the target person cannot be observed, making what is observed a biased and misleading indicator of what the person is like. Furthermore, if we do not like a person, we generally try to avoid him and give him little opportunity to change our expectations.

Seeing What We Expect to See 14

The Biased Evaluation of Ambiguous and Inconsistent Data

“I’ll see it when I believe it.”
People are inclined to see what they expect to see and what is consistent with their pre-existing beliefs. Information that is consistent with our pre-existing beliefs is often accepted at face value, whereas evidence that contradicts them is critically scrutinised and discounted. Our beliefs may thus be less responsive than they should be to the implications of new information.

For instance, if a cybersecurity consultant is tasked to serve a client who is generally not satisfied with the IT services of the same company, the client may tend to scrutinise any piece of information the consultant provides to him and look for confirmations that the security consultancy services are at the same, unsatisfactory level as the IT services.

Ambiguous Information
If a decision is based on ambiguous information, we tend to perceive it in a way that fits our preconceptions. Why, for instance, would a newly hired Information Security Officer ask questions around his organisation? Is he not aware of his duties, or is he incapable of doing his job? Is he asking questions because there is a lack of pre-existing documentation left by his predecessor? Or is this what someone in this position is supposed to do? Or maybe because the ISMS can be effectively maintained only with the support of and collaboration with the different roles in the organisation? The answer could be related to one of these questions or a combination of them, or there could be a completely different explanation. Depending on the preconceptions of each employee interacting with the new Information Security Officer, they could reach premature and erroneous conclusions about his capabilities.

Unambiguous Information
We tend to accept unambiguous information that fits our beliefs as true. Information that does not meet our expectations, however, is usually not ignored; instead, we scrutinize it and look for additional information. To exemplify this, imagine a CIO who is convinced that employees should not be occupied with information security training and that technical controls should be preferred instead. If he is then confronted with studies that provide evidence of the benefits of persistent security awareness training, he may tend to scrutinise them and challenge the significance of the results. He may also accept, with much less scrutiny, other studies that point out the benefits of technical controls over security awareness.

MITIGATION OF COGNITIVE BIASES 15

The list of determinants of cognitive biases can be extended. In any event, recognizing the problem is only the first step. The second and more difficult challenge is to take adequate action to mitigate the effects of the biases. As far as organisations are concerned, the author suggests creating an entire programme within the organisation that aims to mitigate the effects of erroneous beliefs and improve employees’ analytical capabilities. Depending on the characteristics of the organisation, the programme could be integrated into the existing training/educational programme. The approach could focus on the following:

  • Promoting learning and self-improvement as a life-long process. People who embrace continuous learning and improvement will have more potential to detect their own cognitive biases and correct their erroneous beliefs. They will also be in a better position to respond to the biased arguments of others.
  • Promoting the benefits of scientific methods and techniques for creating and testing new theories with greater certainty. In addition, knowledge of scientific methods helps people develop a mindset for structured thinking and distinguishes the critical from the closed-minded.
  • Promoting and teaching argumentation techniques to improve the interpersonal skills of the employees.

Trained and motivated individuals should teach the actual techniques. The following ideas can be considered when creating such a programme.

  • When evaluating something, the various outcomes should be specified in advance. This increases the likelihood of objectively assessing the performance of processes, projects, systems and people.
  • Differentiating between generating an idea and testing it. Often, people easily create ideas, but the process of proving if they work in practice is much more complicated.
  • Organising training sessions to teach employees about logical constructs and avoiding biases.
  • Distinguishing between secondhand and firsthand information and learning about the risks involved in relying on the first one.
  • The benefits of using precise wording to describe and explain things and the perceived risks involved when using metaphors.
  • The need to focus on both the person and the individual situation, to limit distortions in perception.
  • The need to understand the false consensus effect that is defined as the tendency for people’s own beliefs, values, and habits to bias their estimates of how widely others share such views and habits.
  • The need to understand the distortions caused by self-interest and how the organisation can refocus employees’ attention to better serve its interests.
  • Exploring the benefits of measurement methods.
  • Learning about the benefits of focusing on both the amount and the kind of information.
  • Learning about the tendency of positive self-assessments and the inclination of people to protect their beliefs.
  • Promoting tolerance, which can be defined as the assumption that all people make mistakes. Learning about the tendency of people to remember their successes but forget their failures.
  • Mastering learning techniques.
  • Learning how to give and receive feedback. Often people hold back their own reservations and disbelief when they disagree with what someone is saying. Biased feedback leads to an inability to adequately evaluate alternative strategies.
  • Learning how the human brain functions from a neurobiological perspective.

In summary, this article first exemplified some determinants of cognitive biases in the context of information security and then provided some ideas on how to mitigate the implications of biased thinking in organisations. The author believes that a better understanding and awareness of cognitive biases will add a new dimension to the concept of the “human factor” in the information security industry. Most importantly, awareness of cognitive biases could provide a new perspective when designing security processes and improve the communication and decision-making of individuals. As a result, the existing set of analytical and argumentation techniques of information security professionals could be upgraded to a more advanced level. Such an upgrade could improve the overall performance of the staff, especially if it encompasses the entire organisation. ■

  1. The determinants of cognitive biases and their definitions are discussed in T. Gilovich, “How We Know What Isn’t So”, The Free Press, 1991.
  2. This is known as DIKW. See L. Hayden, “IT Security Metrics”, pages 57-58, McGraw-Hill, 2010.
  3. The tendency of people to see patterns is discussed by M. Shermer, “How We Believe”, 2nd edition, section “The pattern-seeking animal”, Owl Books, 2003.
  4. This is related to the cognitive bias known as the Representativeness Heuristic. See A. Tversky and D. Kahneman, “Judgment under Uncertainty: Heuristics and Biases”, pages 1124-1131, Science, New Series, Vol. 185, No. 4157, 1974.
  5. This phenomenon is also known as the Clustering Illusion. It is well known among financial investors, who can become overly confident when the price of a stock goes up for a couple of days in a row. See “Think again! Your guide to the cognitive biases that lead to bad investing behaviours and the ten things you can do about them”.
  6. The Illusion of Causality is a well-known phenomenon among scientific researchers. See “Illusions of causality: how they bias our everyday thinking and how they could be reduced”, Front. Psychol., 2 July 2015.
  7. It is also thought that pre-existing beliefs are the trigger for new beliefs. See “A cognitive account of belief: a tentative roadmap”, Front. Psychol., 13 February 2015.
  8. See D. Levitin, “Foundations of Cognitive Psychology”, pages 591-592, A Bradford Book, 2002.
  9. The term is used by H. J. Einhorn and R. M. Hogarth in “Confidence in judgment: Persistence of the illusion of validity”, Psychological Review, Vol. 85, No. 5, pages 395-416, 1978.
  10. See B. L. Luippold, S. Perreault and J. Wainberg, “Auditors’ Pitfall: Five Ways to Overcome Confirmation Bias”, 04.06.2015.
  11. See “Practice Advisory 2320-3: Audit Sampling”, The Institute of Internal Auditors, May 2013.
  12. See the section “Biases of imaginability” in reference 4.
  13. See C. Ackerman, “Self-Fulfilling Prophecy in Psychology: 10 Examples and Definition”, May 2018.
  14. See L. Yariv, “I’ll See It When I Believe It? A Simple Model of Cognitive Consistency”, Cowles Foundation Discussion Paper No. 1352, 2002.
  15. The application of methods to remove or reduce bias from judgment and decision making is called debiasing. Multiple other techniques for mitigating the effects of cognitive biases are discussed in the article “Debiasing”, 2018.

Veselin Monev is an information security and compliance practitioner. He has over 5 years of information security experience in academia and the private sector and more than 4 years of IT practice. In 2015 he received a master’s degree in Cybersecurity from the New Bulgarian University. He is the author of several academic articles and co-author of an academic book on cybersecurity metrics.


1. Anchoring Bias

To show how this bias works, let’s play a guessing game. Do you think that the tallest tree in the world is taller or shorter than 1,000 feet? Either way, how tall do you think the tree is overall?

Unless you already know a lot about trees, you probably guessed that the world’s tallest tree is somewhere close to 1,000 feet. Maybe you guessed it was taller or shorter – say, 1,500 feet total, or only 500 feet – but either way, your guess was affected by the first number you saw.

This is an example of the anchoring bias – relying too much on the first piece of information you get. Since the figure “1,000 feet” was all you had to go on, that number became your “anchor,” and your guess about the tree’s height was tied down by it. Without the number 1,000 to guide you, your guess might have been much higher or much lower. (In case you’re curious, the actual answer is 379 feet.)

How This Bias Costs You Money

The anchoring bias costs you money when it leads you to judge the price of an item based on the first price you saw. For instance, suppose you’re shopping for a tablet computer. You check the sale flyer for a local department store and see one model marked down from $500 to just $150.

That sounds like an amazing price, but only because you’re comparing it to the $500 anchor price. If you shopped around for similar tablets and found that most cost $150 or less, it wouldn’t look like such a bargain. In fact, many stores raise their “regular” prices right before Thanksgiving to make their Black Friday sales look more impressive.

Sellers know all about this bias, and they use it to their advantage. For instance, some real estate agents make sure the first house they show to a new buyer is ludicrously overpriced. Compared to that, every other house on the market will look like a great deal.

Anchoring can also hurt you when you negotiate your salary. During a job interview, if you’re offered a starting salary of $25,000, you’ll probably hesitate to ask for $50,000, even if that’s what you think you’re worth. You could end up dropping your asking price to $35,000 because you don’t want to sound unreasonable.

How to Beat This Bias

The best way to overcome the anchoring bias is to do more research. That way you can replace that initial “anchor” number with other numbers that make more sense.

For example, if you want to buy a house, check the “comps” – prices that comparable houses have sold for. That will let you know what’s really a fair price to pay for the house you want.

Likewise, before a job interview, do research on typical starting salaries. That way, when the boss names a number, you’ll know whether it’s a fair offer. Better still, turn anchoring to your advantage by being the first to name a salary. Then the boss will have to adjust to your expectations, instead of the other way around.


Useful Mistakes, Cognitive Biases and Seismic

We come to conventions and Symposia to hear about success and bask in the glory of our awesome, successful colleagues. But mistakes may also lead to useful learning opportunities. After all, we often mistakenly look upon our successes as if they are the result of some intrinsic property of ourselves, as if we are both special and right, rather than consider that at least some of them came about through luck. A failure or an error, on the other hand, may be far more illuminating and attention-holding if we allow ourselves to honestly face up to it.

I was humbled and grateful to have been chosen as the Honoree at this year’s CSEG Symposium. As has been customary in past Symposia, a group of very intelligent people made a heroic effort to say some nice things about me. Looking back on my own career, I do see some things that I am proud of. I see a great many people who I am proud to have worked with. Scott Reynolds spoke of some of those excellent people in his Tribute talk, which I hope will be published. I see a few problems (sort of) solved and situations improved. All that is wonderful, but my career trajectory looks the way it does because I had a great many things to learn. I made errors. I was part of some mistakes. I was the ringleader of a few of them. My work suffered from a variety of cognitive biases, and many of the very intelligent people around me did as well. In the second half of my time as a geophysicist, I became more aware of some of my faults, errors, and shortcomings. I attempted to become a better scientist and to make fewer cognitive errors. It turns out that both tasks are difficult.

This write-up of some of the elements of the talk I gave at the Symposium delves into certain mistakes I made or was a part of and discusses why they were made. In fact, the systemic reasons for the mistakes are far more important than the mistakes themselves. It is unlikely that future geoscientists will find themselves in the identical situations in which I erred, but they will, like me, be human.

It’s not just me: ask the military

In speaking of my own mistakes, I am neither attempting to humble-brag nor to suggest that I was an awful geophysicist. It is more likely that I was generally typical but had an unusual tendency to communicate about my work. Certainly, I worked with an excellent group of professionals. Everyone (myself, my peers, partners, cross-disciplinary colleagues) wanted to do a good job, keep costs and environmental impacts low and bring in a high return on investment for our employers. None of us wanted to make mistakes. But we did. Lots of them, many of which slipped by without us even realizing it. Most of our errors were never covered in a university geosciences or physics course, in company training or in any technical conference. Most of our errors, the ones that we committed again and again, were a result of the human condition, of the biases and limitations that human beings tend to have.

In his book, Men Against Fire, Marshall (1947) discusses at length the fact that over 70 percent of all infantry soldiers do not fire their weapons in actual combat, even though the vast majority will fire in combat exercises or on the firing range. Marshall quite adroitly realizes that to address this issue, the feelings, or the morale, of the soldiers must be addressed. While Marshall appears loath to use the vocabulary of the psychologist, it is inescapably the feelings and biases of the soldiers that are the main points of enquiry in the book. Without uncovering the psychological reasons for the inability of most soldiers to follow the most basic and essential order they can receive, there can be no improvement in performance. And so it is for geoscientists. While we are in no way driven to the farthest extreme of action or mortality in our jobs, we are nevertheless human beings who are subject to common pressures and common cognitive biases, and who make common, systemic mistakes.

Psychology: there is a better goal than manipulating others to our ends

The study of human behavior is a ubiquitous tool of the modern world. Nobel prize winner Richard Thaler’s nudge theory is at work in most modern choice settings. Nudge theory (Thaler and Sunstein, 2008) influences behavior using subtle methods such as the careful selection of the default choice on medical tests, municipal polls or dietary questions. Politicians, salesmen and marketers use nudge theory; it affects our daily decisions on buying and policy as it is pushed by one actor or another. Users of the theory claim they are doing it for our own good, though we can only be certain that the theory is used to influence us for theirs. Daniel Kahneman is another Nobel prize winner, who wrote Thinking Fast and Slow (2011), an expansive summary of his studies of cognitive biases that has been hugely influential in economics. There are few policy makers or professional marketers who are unaware of the work of Kahneman and the behavioral psychologists. The use of such knowledge of humanity makes sense. If we want to influence how others buy, think or vote (how they choose), we should understand how they think, buy and vote.

Behavioral psychology seems to be used most in practice to manipulate or influence others. It does not matter that some who do so act in the name of beneficent-sounding terms such as libertarian paternalism; it is still a tool being applied by others to us. But if the tools of psychology are useful for others attempting to influence us toward better choices, why do we not use our knowledge of psychology to help ourselves make better decisions? Better geoscientific decisions? An inward use of the knowledge certainly has the advantage of being directed towards benefits that we choose.

But how can we do this? The answer to this question brings us back to my mistakes. My mistakes are useful because they can be used as an example to bridge the gap between some of the lessons of behavioral psychology and the choices made by geoscientists.

Maslow’s Hammer and Experimenter’s bias

Maslow (1966) said the now famous, “[To a hammer, everything is a nail].” He was speaking of the extraordinarily strong tendency of human beings to rely, sometimes to their detriment, on familiar, comfortable tools, whether or not they are appropriate to the problem at hand. This cognitive bias has many names that can be used to impress friends at parties. The law of the instrument, the Birmingham screwdriver, the golden hammer, or the Einstellung Effect are just a few. Geoscientists are no exception to this human tendency to use what we are used to, what we are well versed with, or what has perhaps brought us notoriety or economic success in the past.

Even if we may forgive ourselves for depending too much on Maslow’s Hammer, we geoscientists are likely less comfortable when considering our own use of experimenter’s bias. Jeng (2006) surveys this bias, which is our predilection to believe, publish and promote data that agrees with our expectations and to ignore, suppress or denigrate data that conflicts with those expectations. Maslow’s Hammer likely plays a part in this bias, for preferred outcomes, or expectations, will often go hand in hand with a preferred tool. Experimenter’s bias is the more uncomfortable bias because it contradicts our sense of scientific integrity. And yet we may ask: how many examples of failure do we see in the literature of the CSEG?

Example one: the frustratingly non-unique Radon transform

The Radon transform went through a period of great and manifold development for seismic applications starting in 1985, lasting for about twenty years. This development is remarkable for its ingenuity alone but is also noteworthy because the problem of experimental bias played an ongoing role.

The Radon transform essentially represents data in time and space by families of curves with onset times (tau) and ray parameters (p). The hyperbolic family was identified early on as being apt for separating primary and multiple events on common midpoint (CMP) gathers by creating a velocity space. Unfortunately, there are several well-known mathematical issues with the Radon transform, chiefly its ill-posedness. Thorsen and Claerbout (1985) discussed how the ill-posedness of the transform is exacerbated in seismic by missing data in the CMP gather, chiefly at the near and far offsets. The data truncation at these edges can never be fully removed in seismic data (such would be physically impossible), and it creates a smeared, non-unique response that limits the ability to separate primaries and multiples. To reduce the non-uniqueness of their results, Thorsen and Claerbout introduced sparsity constraints which reduced the smear in p space, though at horrendous (and at the time, potentially bankrupting) computational cost.

As a problem in logic, I will argue that no constraint, or inserted prior knowledge, can be truly effective if the physical assumption that it represents is not valid. That is, if the assumption is not true, the product of its use may be false. The sparsity constraints for the Radon transform are partially valid. The idea of the constrained transform simulating a gather of infinite offset range, and thus creating sparsity in p space, makes intuitive sense, is easy to imagine, and was developed commercially in a series of steps. Hampson (1986) employed an ingenious method to make the transform work given the processing limitations of the time, though he was forced to give up on Thorsen’s sparsity and had to use parabolas instead of hyperbolas. Sacchi and Ulrych (1995) introduced a fast method for including sparsity in p, and Cary (1998) argued that only by including constraints in both tau and p can the necessary and sufficient conditions (Hughes et al., 2010) be met for a desirable solution. Cary’s explicit and correct use of logic is rare in the literature. Many others, from myself (Hunt et al., 1996) through to Ng and Perz (2004), illustrated the improved results of their sparse Radon transforms, which were developed using these ideas.
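
To make the damped-versus-sparse distinction concrete, here is a deliberately tiny, hypothetical sketch of a time-domain parabolic Radon transform, not any of the published algorithms above. The forward operator spreads each (tau, q) model point along a parabola in offset; a damped least-squares inverse smears energy across q, and a single crude iteratively reweighted (IRLS-style) pass pushes the solution toward sparsity in tau-p. All sizes and values are placeholders chosen only to keep the example small.

```python
import numpy as np

# Tiny, hypothetical parabolic Radon sketch (illustration only).
nt, dt = 64, 0.004                       # time samples, sample interval (s)
offsets = np.linspace(0.0, 2000.0, 16)   # offsets (m)
qs = np.linspace(0.0, 2.0e-8, 24)        # parabolic curvatures q (s/m^2)

def forward_matrix(offsets, qs, nt, dt):
    """Operator L mapping a (tau, q) model to a (t, x) gather via t = tau + q * x**2."""
    nx, nq = len(offsets), len(qs)
    L = np.zeros((nt * nx, nt * nq))
    for ix, x in enumerate(offsets):
        for iq, q in enumerate(qs):
            shift = int(round(q * x * x / dt))        # moveout in samples at this offset
            for it in range(nt - shift):
                L[ix * nt + it + shift, iq * nt + it] = 1.0
    return L

L = forward_matrix(offsets, qs, nt, dt)

# Synthetic gather: a flat "primary" (q = 0) and a "multiple" with residual moveout,
# both with the same onset time (tau sample 30), mimicking the hardest case.
m_true = np.zeros(nt * len(qs))
m_true[0 * nt + 30] = 1.0
m_true[18 * nt + 30] = 1.0
d = L @ m_true

# Damped least squares: energy smears across q (the classic non-unique answer).
m_damped = np.linalg.solve(L.T @ L + 1e-2 * np.eye(L.shape[1]), L.T @ d)

# One IRLS-style reweighting pass: small coefficients are penalised more heavily,
# which concentrates energy in tau-p and mimics a sparsity constraint.
weights = 1.0 / (np.abs(m_damped) + 1e-3)
m_sparse = np.linalg.solve(L.T @ L + 1e-2 * np.diag(weights), L.T @ d)

print("largest damped coefficients:", np.round(np.sort(np.abs(m_damped))[-4:], 3))
print("largest sparse coefficients:", np.round(np.sort(np.abs(m_sparse))[-4:], 3))
```

The sketch only shows the structure of the computation; the published algorithms differ in their operators, transform domains and solvers.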

Virtually all of these many papers (and most major seismic processing companies of the time seemed compelled to publish their own solution) showed near perfect synthetic results in which non-uniqueness was apparently eliminated. It is within these many papers and their near perfect, unique, synthetic results that experimental bias was at work. Consider Ng and Perz’s (2004) paper and a redisplay of their Figures 1, 2a and 5a, below, which I rename Figure 1. We see in Figure 1a the input CMP, which is moveout corrected and has only one primary, with a tau of 1100 ms. The rest of the events, including another event with a tau of 1100 ms, are multiples. Figure 1b shows the Radon transform space of the non-sparse Hampson (1986) algorithm. It shows the smeared truncation artefacts and some overlap of energy in p at a tau of 1100 ms between the primary and the multiple. Figure 1c shows the space of Ng and Perz’s (2004) sparse Radon algorithm. Events are well localized in tau and p. The two events at a tau of 1100 ms are distinct and separate.

Figure 1. Synthetic example from Ng and Perz (2004). (a) Input CMP gather, (b) Radon transform space from the Hampson (1986) algorithm, (c) Radon transform space from the sparse algorithm.

This example is typical of the many papers of the time and makes the argument that these sparse algorithms have virtually eliminated the non-uniqueness in the Radon transform. No author that I am aware of has claimed to have completely eliminated non-uniqueness, but the examples given show positive synthetic results with no material uniqueness issues remaining. A reader of such papers likely knows that the sparsity will eventually fail to mitigate the non-uniqueness at some tiny moveout (the authors certainly know this), but these examples are generally not shown. This omission by itself is an argument for experimental bias, but the bias is in fact much more overt.

Hunt et al. (2011) showed that there are other relevant variables impacting both the effectiveness of multiple attenuation and the uniqueness of the Radon transform. Yes, I had historically been an unwitting part of the experimental bias, but this further examination of the problem of short-period multiple elimination made me and my co-authors (who included Mike Perz from the 2004 paper) realize that this cognitive bias was denying us a better understanding of the non-uniqueness problem in the Radon transform and in its use for effective multiple attenuation.

Figure 2, from Hunt et al. (2011), shows an example of non-uniqueness in the Radon transform space due to varying onset, or tau, times of primary and multiple events. Figure 2 illustrates a primary and a multiple with the same amplitudes. The differential moveout of the multiple with respect to the primary is 20 ms at 2500 m offset. The intercept time, tau, of the multiple is varied, and the offsets were taken from a typical CMP from the 3D to simulate land 3D irregularity. A 35 Hz Ricker wavelet was used for each element of this figure. In Figure 2a, the multiple starts 10 ms above the primary, in Figure 2b both events start at the same time, and in Figure 2c the intercept of the multiple is 10 ms below the primary. The corresponding sparse Radon transform spaces are shown in Figures 2d, 2e and 2f. The primary and multiple are not resolved in Figure 2d. In Figure 2e, two events are resolved in p, but the position of the multiple is slightly incorrect. Only in Figure 2f are the primary and multiple separately resolved and in their correct positions. Relative onset times by themselves control tuned character on the CMP gathers, which unsurprisingly controls uniqueness in the Radon transform space of even sparse algorithms. This is a material observation of the transform’s continuing non-uniqueness. How was this missed in the literature except through experimental bias?

Figure 2, from Hunt et al. (2011). A multiple with 20 ms of differential moveout relative to the primary event at 2500 m was generated. Each event has the same amplitude, and a 35 Hz Ricker wavelet was used. The intercept time, tau, of the multiple is varied in this figure. In (a) the intercept of the multiple is 10 ms above the primary, in (b) both events start at the same time, and in (c) the intercept of the multiple is 10 ms below the primary. The tau-p spaces for (a), (b), and (c) are given in (d), (e), and (f), respectively. Despite using a sparse Radon algorithm (Ng and Perz, 2004), the transform does not separate the events completely in the tau-p space of (d).

Let us examine the effect of changes in wavelet size on the ability of the sparse Radon transform to separate multiples and primaries properly (Hunt et al., 2011). The simple model of Figure 3 illustrates this effect. A primary and a multiple are depicted in Figures 3a, 3b, and 3c. In each case, the multiple starts 10 ms above the primary and has a differential moveout of 20 ms at 2500 m. The primary and the multiple have equal amplitude, and the offset bins are perfectly regular. The only differences in these three images are that the wavelet of the data changes from a 15 Hz Ricker in Figure 3a, to a 35 Hz Ricker in Figure 3b, to a 60 Hz Ricker in Figure 3c. Figures 3d, 3e, and 3f represent the sparse Radon transform spaces for Figures 3a, 3b, and 3c, respectively. The Radon transform space of Figure 3d, corresponding to the low resolution gather, clearly does not resolve the primary or the multiple; most of the energy is on the zero moveout curvature. The mid resolution gather fares little better: Figure 3e shows the energy is misallocated in quantity and position. Only the highest resolution gather of Figure 3c and its tau-p space of Figure 3f do a perfect job of resolving both events and correctly representing the moveout of each event. The greater the wavelet resolution, the less non-uniqueness in the Radon transform. That resolution affects the uniqueness of the Radon transform should have been obvious, but it had not been a focus of the work to this point. By explicitly illustrating this now unsurprising shortcoming, Hunt et al. (2011) were able to focus on increasing the resolution of the wavelet as much as possible.

Figure 3, from Hunt et al. (2011). The effects of wavelet resolution on the uniqueness, or resolution, of the Radon transform. The offset bins are perfectly regular, with 50 m spacing. Figures (a), (b), and (c) depict a flat primary and a multiple. In each case, the multiple starts 10 ms above the primary and has a differential moveout of 20 ms at 2500 m. We used Ricker wavelets with dominant frequencies of 15 Hz, 35 Hz, and 60 Hz in (a), (b), and (c), respectively. Figures (d), (e), and (f) represent the sparse Radon transform spaces for (a), (b), and (c), respectively. The low resolution gather of (a) yields an inaccurate and unresolved Radon transform space in (d). The result is incorrect in a different way for the wavelet used in the gather of (b) and the corresponding tau-p space shown in (e). The problem is only completely resolved with the highest frequency wavelet of (c) and the tau-p space of (f).

What are we to glean from this example? I had been a small part of the history of experimental bias concerning the non-uniqueness of the Radon transform. This cognitive bias was not at work as part of a conspiracy. The authors were honestly attempting to reduce non-uniqueness in the transform. But the fact that these two obvious sources of non-uniqueness were never explicitly shown in the literature until recently suggests that experimenter’s bias exists. We rarely show the bad examples or look for them. We human beings are caught focussing on the narrow little points of our main purpose, and we toss aside, often unthinkingly, that which does not contribute to that simple coherent idea. Some readers will note the coherency bias (Kahneman, 2011) at this point. In the case of the Radon transform, there is little doubt that the authors involved did reduce its non-uniqueness. But they missed an opportunity to do a better job and gain a clearer understanding by not highlighting these limitations.

Example two: experimental bias and Maslow’s Hammer with AVO

Amplitude versus offset (AVO) analysis is one of the most popular diagnostic tools of the modern seismic practitioner. It enables elastic rock property estimates of the earth from a large portion of the historical P-wave seismic data. As useful as AVO-derived estimates can sometimes be, geophysicists may sometimes come to over-rely on them and may also sometimes apply experimenter’s bias where they are concerned.

At the 2012 CSEG Symposium, I showed a case study regarding the controls on Wilrich production in which AVO was shown to be ineffective. The most pressing questions from the audience were not about the many details of the novel method that my co-authors and I had created to quantitatively demonstrate the importance of steering to obtain better productivity, but fixated instead on why I could not make AVO “work”. When my co-authors and I later published this study in Interpretation (Hunt et al., 2014), the biggest questions from the peer review again ignored the core work of the paper and focused on the apparent failure of AVO. I was told that I had to prove why I did not include AVO estimates in my method. This is quite apparently an example of both experimenter’s bias and Maslow’s Hammer.

The target of the work was the tight, deep basin Wilrich sandstone in Alberta, Canada, which exhibits low permeability (0.05 to 0.1 mD) and low to moderate porosity (6% to 8%). Figure 4 is the petrophysical type log used in the paper. The agreement of log effective porosities is shown in Figure 4b, and the mineralogic content is shown in Figure 4c. The core effective permeabilities of Figure 4d and the thin section images of Figure 4e were important elements of the paper, which was concerned with how well each of the upper, middle and lower sections of the Wilrich would contribute to production and whether relative well placement in a particular zone would be important. Of importance is the coal section, shown in black shading in Figure 4c, which lies directly on top of the Wilrich sand. This coal section is of variable thickness in the area and has materially different rock properties than the Wilrich.

Figure 4. Petrographic analysis of the Wilrich reservoir using log, core, and thin section data. The upper, middle, and lower units are separated by blue dashed lines, and the upper and lower units are also labelled in text. (a) The gamma ray open hole log. (b) The multimineral log estimate of effective porosity. Core plug effective porosity measurements are identified and overlain with black dots. (c) The color-coded mineralogical interpretation from the logs. Quartz is colored yellow, dolomite is purple, coal is black, shale is grey-brown, clay is green-brown, red is gas-filled porosity, and blue is bound water. (d) The deep resistivity log curve with core permeability measurements identified with black dots. (e) Illustrates representative thin section images of each unit at the same magnification.

Hunt et al. (2014) evaluated well length, fracture stimulation data, geologically mapped porosity thickness (Phi-h), reservoir pressure, mud gas and gamma ray logs, and the relative position of the horizontal wellbore in each of the upper, middle, and lower Wilrich as estimated quantitatively from wellsite data. To this was added interpreted 3D seismic data including stack response, AVO (including inverted Lame parameters), azimuthal AVO (AVAz), velocity variation with azimuth (VVAz), curvature and coherency in an exhaustive array of windows about the Wilrich. All these parameters were gridded along the horizontal wellbores and made correlatable to each other as well as to productivity on a wellbore by wellbore basis. Using the seismic to make bin-by-bin Phi-h estimates along the wellbores was highly desirable, so an exhaustive effort was made to determine whether any of the seismic variables, chiefly stack and AVO or AVO inversion variables, would correlate to the 114 vertical control wells, each of which had an effective Phi-h value from multimineral geologic well analysis.

The effort to correlate seismic variables (including AVO) to the Phi-h of the vertical wells failed completely and was the only exigent issue for audience members at the Symposium in 2012 and for the peer review process in Hunt et al. (2014). The questioning of this failure was remarkable in a paper in which AVO was but one tool of many, and in which production estimation, not AVO estimation, was the key scientific question. Having been there, I can tell you that no one wanted AVO to fail. I know this because of the direct communication made to me as first author, and because I also wanted AVO to be effective. This experience was a searing example of experimenter’s bias. Hunt et al. (2014) did exhaustively prove, through modeling and real data correlations, that AVO both could not and did not predict the Wilrich Phi-h. They also proved why: it was because of the variable overlying coal, which directly overlay the sand and dominated the reflection response at all angles of incidence.

All but ignored in the angst over the failure of AVO to estimate the Wilrich sand Phi-h were several important lessons. Figure 5 shows the correlation coefficients for a few of the seismic variables in the study as they related to both the Wilrich sand Phi-h and the overlying coal isopach in the 114 vertical wells. None of the seismic attributes passed the 1% p-value test of statistical significance for predicting the Wilrich Phi-h. All but one (the curvature) of the seismic variables had a statistically significant correlation to the coal isopach.

Figure 5. The correlation coefficients describing the relationship between a subset of the tested seismic attributes and the Wilrich Phi-h and the overlying coal isopach. A dark yellow shading indicates that the 1% p value test for significance has been met.
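
For readers unfamiliar with the test summarised in Figure 5, the sketch below shows the kind of computation involved, using made-up stand-in numbers rather than the study's data: a Pearson correlation coefficient between a seismic attribute and each target at the control wells, with the associated p-value compared against the 1% significance level.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Made-up stand-ins for the 114 vertical control wells (not the study's data).
n_wells = 114
phi_h = rng.gamma(shape=2.0, scale=1.5, size=n_wells)   # geologic Phi-h (m)
coal = rng.normal(5.0, 1.5, size=n_wells)               # coal isopach (m)

# A reflectivity-driven attribute that responds to the coal, not to Phi-h.
attribute = 0.8 * coal + rng.normal(0.0, 1.0, size=n_wells)

for name, target in [("Phi-h", phi_h), ("coal isopach", coal)]:
    r, p = pearsonr(attribute, target)
    verdict = "significant" if p < 0.01 else "not significant"
    print(f"attribute vs {name}: r = {r:+.2f}, p = {p:.3g} -> {verdict} at the 1% level")
```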

The key lessons here are manifold:

  1. The seismic attributes were ineffective for predicting Wilrich Phi-h, but this is not necessarily because they did not “work”. The opposite is likely true.
  2. The seismic attributes, including AVO and AVAz, were correlated to the coal isopach. This is because these attributes are reflectivity based. AVO did likely work, just not in a way that was useful to the commercial purpose at hand.
  3. That AVAz response is reflectivity based is well known, but that it may be controlled by overlying coal, which is known to be anisotropic, is an important and obvious conclusion from this work.

Lessons #2 and #3, while not necessarily pleasing, were later repeated numerous times in other work that I engaged in, most of which was unpublished, except for the 2016 CSEG Symposium, where Bahaa Beshry spoke of our stress work. There he showed again that AVAz measures had statistically significant correlations to the coal isopach in sections of the data where coal tuning is present. The biggest lesson here is that experimental bias can be so strong that we miss an unexpected but important insight.

Loss Aversion and Neglect of Probabilities, or when we are facing a loss, we often make it worse

The most common errors that I have been a part of have been caused by loss aversion and neglect of probability. For most of my career, I lacked the vocabulary to effectively treat this issue. Kahneman (2011) does an excellent job of illustrating these issues, which often go together, and providing language around these important cognitive issues. Loss aversion, simply put, is the irrational hatred that human beings have of taking a loss. This is not a suggestion that people should enjoy losing, but an observation of the objective fact that they dislike losing more than is rational. An example from Kahneman (2011) follows:

Which do you choose:
Lose $900 for sure or choose a 90% chance of losing $1100?

Kahneman studied this and many similar questions empirically and found that the choice most people make is to take the gamble, which is laughably irrational. Kahneman’s conclusion is that when all options are bad, people like to gamble.

Note that the expected value of each decision is a matter of simple arithmetic. It may be that some of those who chose the gamble had performed this arithmetic and knew that the gamble had a larger expected loss, but it is likely that many did not extend their effort even to this trivial level. In the world of geosciences and business operations, matters often go wrong, and similar, though more complex, choices will arise. The probabilities and expected values relevant to such choices may not be so glaringly obvious. Geoscientists and decision makers often have another meta-choice before them: to gamble blindly or to lay out the probabilities and expectations in a rational decision-making framework.
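
To spell out that arithmetic: the sure option has an expected value of -$900, while the gamble has an expected value of 0.9 × (-$1,100) = -$990, so the gamble that most people prefer is the worse choice on average. A trivial check:

```python
# Expected values for Kahneman's choice (simple arithmetic, as the text notes).
sure_loss = -900                      # lose $900 for sure
gamble = 0.9 * (-1100) + 0.1 * 0      # 90% chance of losing $1,100, 10% chance of nothing

print(f"sure loss: {sure_loss:.2f}")  # -900.00
print(f"gamble:    {gamble:.2f}")     # -990.00, i.e. worse on average
```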

The most vexing observation from my experience is how often, in a business with ubiquitous uncertainty, we have disregarded probabilities when making decisions. This common cognitive bias is called neglect of probability. It appears to go hand in hand with loss aversion and often exacerbates it.

We might deny that we have made irrational decisions due to loss aversion and neglect of probabilities, but there is seldom a signpost or adjudicator that stops us and tells us when this is happening. I have observed or been an active participant in these biases at many of the companies that I have worked at. On some occasions, I have suspected that our decision processes have been in error but have lacked the vocabulary or tools to properly address the issue. Sometimes, I am sure, that the other decision makers have also suspected that irrational choices were being made. Some of the general circumstances in which these biases have occurred are:

  • Following a property acquisition, when we discover we paid too much, we have acted rashly, hoping to “drill our way out” of the mistake and avoid facing the loss.
  • After operational issues with drilling, when significant money has been spent and we face an attempt to remediate the operation or abandon it. Choices are often made with an irrational bias towards further operations in the original wellbore.
  • After completion issues in a new operation, and we face the loss of the wellbore.
  • Every time we fail to quantitatively determine the predictive accuracy of our seismic attributes, data, or interpretation for the target property on which a decision is about to be made.

Virtually every element of geoscientific advice we give has an element of uncertainty to it, and if we do not in some way address this, we may be engaging in neglect of probability.

Example three, operations and Loss Aversion and Neglect of Probabilities

At one of my companies, we drilled a long, deep, expensive horizontal well. Wellsite data was incredibly positive, and we were quite certain that the wellbore would be highly economic. The open hole completion equipment was placed, but the wellbore abruptly failed within the near-heel portion of the horizontal section.

The decision at hand was whether we should attempt to remediate the wellbore or to drill an entirely new well (a twin), which would cause us to take a multi-million dollar loss on the first (failed, under that decision) well. In the multidisciplinary meeting where this decision was taken up, the initial feeling was that the remediation choice would be best.

But I had seen this type of issue before, and it was suggested that we consider the problem using decision analysis (Newendorp, 1975), create a decision tree, and populate it with the correct probabilities and costs. Figure 6 shows the cartoon decision tree. To successfully remediate the original well, three operations had to be successfully executed: the stabilization of the wellbore, the patching of the breach, and the subsequent fracture stimulation of the well. The decision analysis was quite simple, and its construction was not time consuming, nor were the probabilities difficult to populate by the operational team. The analysis clearly showed that, in this example, the remediation path had a lower expected value. The decision to twin was made, and the twin was successful. The cumulative effect of considering the three sub-probabilities of the operation yielded a much different, and easily measurable answer than when considering the operation as a whole.

Figure 6. Decision Analysis mockup for the example. Each (round) chance node requires a probability and must be successful for the remediation path to be successful. Once probabilities and costs were supplied to each node, the rational decision was clear.
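
A minimal sketch of the expected-value calculation behind a tree like Figure 6 follows. The probabilities, costs, and values are placeholders invented for illustration; the article gives only the structure of three chance nodes that must all succeed, not the actual numbers.

```python
# Decision-tree sketch: remediate the failed wellbore, or twin it?
# All probabilities and dollar figures are hypothetical placeholders.
p_stabilize = 0.7           # chance node 1: stabilize the wellbore
p_patch     = 0.6           # chance node 2: patch the breach
p_frac      = 0.8           # chance node 3: fracture-stimulate successfully
p_all       = p_stabilize * p_patch * p_frac      # all three must succeed: ~0.34

cost_remediation = 3.0e6    # cost of the remediation attempt
cost_twin        = 8.0e6    # cost of drilling a twin well
value_success    = 20.0e6   # value of a producing wellbore

# Remediate first; if it fails, fall back to drilling the twin anyway.
ev_remediate = p_all * value_success - cost_remediation \
               + (1.0 - p_all) * (value_success - cost_twin)
ev_twin = value_success - cost_twin               # twin assumed to succeed, for simplicity

print(f"P(all three chance nodes succeed) = {p_all:.2f}")
print(f"EV(remediate first) = ${ev_remediate / 1e6:.1f}M, EV(twin now) = ${ev_twin / 1e6:.1f}M")
# With these placeholder numbers the twin path has the higher expected value; the point is
# that multiplying the three sub-probabilities changes the picture versus a gut-feel whole.
```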

A world with no mistakes

The only productive reason to review mistakes, or decisions made under cognitive biases, is to improve our decision making going forward. An understanding of logic and behavioral psychology is best put neither towards conceits at a cocktail party nor towards manipulating others. This knowledge is best applied inward, to improve ourselves and our decision processes. This is not something to learn once and feel superior about; it is a thing to learn and relearn and be ever watchful of the moment when, ego-depleted or in a hurry, we forget the lesson and make a new mistake. To err is human, and to be human is to always have a set of error tendencies just waiting for the proud, lazy moment and our undoing.

We will never be free of fallibility in our thinking, or of potential biases.

While we may no more eliminate mistakes than we can eliminate our own humanity, we can, on a case-by-case basis, minimize them through awareness, wariness, and humility.

About the Author(s)

Lee Hunt graduated from the University of Alberta with a B.Sc. in geophysics in 1990, and immediately started his career with PanCanadian Petroleum Ltd. His experience ranged from interpretation to managing a business unit, from the old conventional days of high-risk exploration to the new days of high-capital resource plays. Lee has drilled over 400 wells in most of the play types within the Western Canadian Sedimentary Basin. His work has focused on performing the quantitative analysis of a variety of geophysical technologies including: multiple attenuation, resolution enhancement, depth and geo-hazard predictions, stress estimation, AVO, AVAz, VVAz, curvature, prediction of fluid, lithology, porosity and fracture treatment production characteristics. At Jupiter Resources, Lee and others formed two self-organizing teams, one of which was concerned with the interconnectedness of the company, and the other with technical problems in geosciences and engineering.

Throughout his career, whenever possible, Lee has shared his technical findings with the larger CSEG community through talks, papers and articles. Lee and co-authors won the Best Oral presentation Award for the 1997 SEPM convention, the 2000 CSEG Convention Best CSEG Paper, the 2008 CSEG Convention Best Abstract, and the 2008 Best Technical Luncheon talk. He and his co-authors also received the Best CSEG Paper in 2010, the Best Exploration Paper at VII INGPET in 2011, Best Paper in the CSEG Recorder in 2011, and Honorable Mention for Best Paper in the TLE in 2011. The TLE paper subject matter was short period multiple attenuation and was an extension of some of Lee's earliest technical work. Lee has served the CSEG by volunteering in various capacities and won many awards.

In 2017, after twenty-seven years as a geophysicist, Lee left the industry to pursue a lifelong dream of writing fiction. He is the author of the Dynamicist Trilogy (https://www.leehunt.org).

References

Cary, P., 1998, The simplest discrete Radon transform: 68th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 1999-2002.

Hampson, D., 1986, Inverse velocity stacking for multiple elimination: J. Can. SEG, 22, 44-55.

Hughes, W., J. Lavery, and K. Doran, 2010, Critical Thinking: an Introduction to the Basic Skills, sixth edition: Broadview Press.

Hunt, L., P. Cary, W. Upham, 1996, The impact of an improved Radon transform on multiple attenuation: SEG Extended Abstracts, 1535-1538.

Hunt, L., R. Reynolds, M. Hadley, S. Hadley, Y. Zheng, M. Perz, 2011, Resolution on multiples: Interpreters' perceptions, decision making, and multiple attenuation: The Leading Edge, 30, 890-904.

Hunt, L., S. Hadley, S. Reynolds, R. Gilbert, J. Rule, M. Kinzikeev, 2014, Precise 3D seismic steering and production rates in the Wilrich tight gas sands of West Central Alberta: Interpretation, Vol. 2, No. 2 (May 2014) p. 1-18. http://dx.doi.org/10.1190/INT-2013-0086.1.

Jeng, M., 2006, A selected history of expectation bias in physics: American Journal of Physics, 74 (7), 578–583.

Kahneman, D., 2011, Thinking, Fast and Slow: Farrar, Straus and Giroux.

Marshall, S. L. A., 1947, Men Against Fire: The Problem of Battle Command: Peter Smith Publisher.

Maslow, A., 1966, The Psychology of Science: A Reconnaissance: Harper Collins.

Newendorp, P. D., 1975, Decision Analysis for Petroleum Exploration: Pennwell Publishing Company.

Ng, M., and M. Perz, 2004, High resolution Radon transform in the t-x domain using “intelligent” prioritization of the Gauss-Seidel estimation sequence: 74th Annual International Meeting SEG, Expanded Abstracts, 23, 2160-2163.

Sacchi, M.D., and T.J Ulrych, 1995, High-resolution velocity gathers and offset space reconstruction: Geophysics, 60, 1169-1177.

Thaler, R., and C. Sunstein, 2008, Nudge: Improving Decisions about Health, Wealth, and Happiness: Yale University Press.

Thorsen, J. and J. Claerbout, 1985, Velocity-stack and slantstack stochastic inversion: Geophysics, 50, 2727-2741.


Like other complex activities, poker is easier to learn when you build skills in the right order.

So I’ve compiled the top 23 cognitive mistakes that make people play bad poker. I’ve listed these roughly in order of priority. In other words, if you don’t fix the ones near the top, it won’t really matter if you’re doing fine with the ones further down! You should think of this as the roadmap of errors that are preventing you from becoming a better poker player.

Inattention is the tendency to fail to concentrate on information that could be useful for future decision making.

Confirmation bias is the tendency to search for or interpret information in a way that confirms one’s preconceptions.

Focusing effect is the tendency to place too much importance on one aspect of an event, which causes errors in accurately predicting the utility of a future outcome.

Availability heuristic is estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples.

Not knowing the math (Innumeracy)

Neglect of probability is the tendency to completely disregard probability when making a decision under uncertainty.

Base rate neglect is the tendency to base judgments on specifics, ignoring general statistical information.

Loss aversion is people’s tendency to strongly prefer avoiding losses to acquiring gains.

Self-serving bias is the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests.

Overconfidence is the state of being more certain than is justified, given your priors and the evidence available. For example, for certain types of questions, answers that people rate as “99% certain” turn out to be wrong 40% of the time.

Negativity Bias is paying more attention to and giving more weight to negative rather than positive or neutral experiences.

Optimism bias is the tendency to be over-optimistic about the outcome of planned actions.

Clustering illusion (Apophenia) is the tendency to see patterns where none exist.

Illusion of control is the tendency to overestimate one’s degree of influence over external events.

Gambler’s fallacy is the tendency to think that future probabilities are altered by past events, when in reality they are unchanged.

Just-world phenomenon is the tendency for people to believe that the world is just and therefore people “get what they deserve.”

Irrational escalation is the phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the decision was probably wrong.

Pessimism bias is the tendency for some people, especially those suffering from depression, to overestimate the likelihood of negative things happening to them.

Projection bias is the tendency to unconsciously assume that others (or one’s future selves) share one’s current emotional states, thoughts and values.

Outcome bias is the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made.

Hindsight bias, sometimes called the “I-knew-it-all-along” effect, is the tendency to see past events as having been predictable at the time those events happened.

Consistency bias is remembering one’s past attitudes and behavior as more similar to one’s present attitudes than they actually were.

Primacy effect is the tendency to weigh initial events more than subsequent events.

Peak-end rule is the tendency to judge our past experiences almost entirely by how they were at their peak (pleasant or unpleasant) and how they ended.


Classification of heuristics and cognitive biases

There are more than 200 types of heuristics and cognitive biases in the scientific literature, and classifying them is complicated. Buster Benson carried out a simple classification based on the reasons we resort to them:

When there is too much information

We are surrounded by more than 11 million bits of information per second, and it is impossible to mentally process it all. Our brain employs mental shortcuts to select only that piece of information considered useful. For example:

The brain concentrates more on things associated nonconsciously with concepts we utilize frequently or have recently utilized (availability heuristic). This is why, when we are facing the shelves at a supermarket, we quickly find the product we are searching for, or when we are expecting a child, we see pregnant women everywhere (selective attention bias).

The brain focuses more attention on, and gives more value to, weird, funny, stunning and surprising things, and we generally omit information that is considered ordinary or expected (von Restorff effect). For example, in the image, eye-tracking (a neuromarketing technique) shows that the zone of maximum interest is the person holding the sign, who is looked at by 100% of people and for the longest time, while the remaining elements are viewed by fewer than 25% of people and for less than 0.25 seconds.

The brain is skilled at detecting that something has changed, and we generally evaluate the novelty by the direction of the change (positive or negative) rather than by its absolute value, that is, the value it would have on its own (anchoring effect). This is why, when WhatsApp decided to charge €0.99 for its service, many users felt cheated. The problem was not the price, but the fact that the price had previously been €0.

The brain usually focuses attention on what confirms our opinions and beliefs and ignores things that contradict us (confirmation bias). In other words, the confirmation bias is the trend to give much more credibility to what is aligned with our way of thinking, and this makes us value more the information of a specific communication medium than another.

The brain detects defects in other people much more easily than our own defects (bias blind spot). In this way, we think that other people are much more impressionable than ourselves, for example, by advertising. The self-serving bias is related to the blind spot bias, making us claim more responsibility for successes and hits than for mistakes.

When we don’t know how to provide meaning to what surrounds us

As we only process a small part of the information required for a completely objective vision of the world, we fill in information blanks to give meaning to what surrounds us.

The brain finds stories and patterns even in scattered data (clustering illusion, or apophenia), which is natural: it avoids the sensation of unfamiliarity, of something that we don’t like or that makes us feel insecure. This is why we find shapes in clouds.

The brain fills in missing information with its best suppositions (stereotypes, generalities, our own past experience or that of others), but we also forget which parts were real and which were suppositions. For example, we rate a hotel on Booking as very good because of very positive reviews from other people rather than because of the objective information provided by the establishment (bandwagon effect, or social proof).

The brain gives more value to people or things we are used to. In other words, we include suppositions on the evaluation of what we see. For example, we usually think that attractive people are more intelligent and kind than less-attractive people because we generalize a positive feature to all people (halo effect). The correspondence bias or attribution error is related, which explains behavior on the basis of the “type” of person involved, rather than on social or environmental factors that surround and influence the person.

The brain simplifies probabilities and calculations so that they are easier to think about. Nevertheless, we are poor at mental math and intuitive statistics, and we make terrible mistakes (law of small numbers). For example, when playing roulette we don’t want to bet on “red” if the five previous results were red.
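
A quick simulation makes the roulette point concrete. This is an illustrative sketch of an idealized wheel (P(red) = 0.5, no zero pocket), not a model of real casino odds:

```python
# After five reds in a row, is red any less likely on the next spin? Simulate and check.
import random

random.seed(1)
spins = [random.random() < 0.5 for _ in range(1_000_000)]    # True means "red"

after_streak = [spins[i + 5] for i in range(len(spins) - 5)
                if all(spins[i:i + 5])]                       # spins that follow five reds

print(f"P(red | five reds just occurred) ~ {sum(after_streak) / len(after_streak):.3f}")
# ~0.5: each spin is independent, which is exactly what the gambler's fallacy denies.
```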

The brain makes us believe that we know what others think, and we mold the mind of others from our own minds (false consensus bias and projection bias). This is why we believe that everyone will love the movie we enjoyed so much.

The brain projects our current mentality to the past or future, making us believe that it is/was easy to anticipate the future (retrospective bias, hindsight bias). Therefore, once something happens, we say and feel that “I already thought so” although probably that was not true.

When we have to act fast

We are limited by time and the amount of information that we can process, but we cannot let this paralyze us.

The brain gives us excessive confidence in our own capabilities so that we can act (overconfidence bias, optimism bias, actor-observer bias, Dunning-Kruger effect). This is why we often think we can handle a conflictive situation that, in the end, gets out of hand.

To keep us focused on action, the brain favors what is immediate and close over what is far off in time or distance (hyperbolic discounting). We value things in the present more than things in the future. This makes us ignore our diet, because the reward of eating a pastry right now is much more irresistible than the reward of losing weight, which will only be achieved in a few months.
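
One common way to formalize this pull toward the immediate is hyperbolic discounting, where subjective value falls off roughly as amount / (1 + k * delay). The k value and reward sizes below are arbitrary illustrations, not fitted parameters:

```python
# Hyperbolic discounting sketch: an immediate small reward can "beat" a larger delayed one.
def hyperbolic_value(amount: float, delay_days: float, k: float = 0.2) -> float:
    """Subjective present value of a reward received after delay_days (k is arbitrary here)."""
    return amount / (1.0 + k * delay_days)

pastry_now = hyperbolic_value(amount=10, delay_days=0)           # small but immediate
weight_loss_later = hyperbolic_value(amount=100, delay_days=90)  # larger but months away

print(f"pastry now: {pastry_now:.1f}, weight loss in three months: {weight_loss_later:.1f}")
# 10.0 vs ~5.3: the objectively smaller immediate reward wins subjectively.
```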

The brain motivates us to complete tasks in which we have already invested time and energy. This helps us finish things, even when we have plenty of reasons to give up. For example, the sunk cost fallacy/bias leads a student to keep studying, despite hating the course, because he has already been studying for two years. The correct decision would be to just let go (which this bias does not allow you to do) and find a really exciting career rather than drag through two or three more years to finish a boring course (more information on the decision-making process).

The brain helps us avoid irreversible decisions (status quo bias). If we have to select, we usually choose the option that is perceived as less risky or that preserves the status quo. “Better the devil you know”. This makes it hard to leave our comfort zone and is a clear enemy of innovation.

When we have to choose what to remember

We can only afford to remember the bits of information that will probably be useful in the future. We constantly have to bet on what to remember and what to forget. We highlight the following types of memory biases:

We reinforce the memories after a related event occurs and, in this process, some details of the memory can change without us being conscious of it (reconstructive memory or false memory bias). For example, there are people that were not present at the 9/11 attack and after one year, were completely convinced they were actually there. This was demonstrated by researchers of the 9/11 Memory Consortium. Without going that far, this is why when you run into a friend you haven’t seen for a while and both remember an event, it is possible that your versions are not the same. But nobody is lying, at least not consciously.

We discard specific data to form generalities (implicit stereotype bias). This occurs because of necessity but the result of “lumping everything in the same bag” includes implicit association, stereotypes, and prejudice. For example, if we think about how Spain performed in the most recent World Cup, we only remember how bad it was and don’t remember the specific good things that happened. And when we vote for a political party, we are led by what we have in mind for “conservatives” and “liberals” rather than by a concrete program of actions.

We reduce events and lists to their key elements. As it is difficult to reduce these to generalities, we select some elements to represent the overall. For example, after watching a movie, the peak-end rule leaves us with a feeling due to the moment of highest emotional intensity and how it ended.

We store memories according to the context in which they were obtained, without taking into account its value. For example, the Google effect leads us to forget any information we think we can obtain on the internet. Although the information is relevant, we forget it anyway because we can easily find it again. This is why we don’t memorize any telephone numbers anymore or the directions to a specific location.

The following video from the BBC website also explains what cognitive biases are and how they influence our perception of reality.


Availability heuristic describes a shortcut where people make decisions based on information that's easier to remember.

In one experiment, a professor asked students to list either two or 10 ways to improve his class. Students that had to come up with 10 ways gave the class much higher ratings, likely because they had a harder time thinking about what was wrong with the class.

This phenomenon could easily apply in the case of job interviews. If you have a hard time recalling what a candidate did wrong during an interview, you'll likely rate him higher than if you can recall those things easily.


3 Cognitive Biases That Alter Your Thinking

There are far too many layers of cognition (thinking, knowing, remembering) to sum up in a short blog post. I only hope to provide some quick information that is easy to read and digest. I’d like to touch on a few useful examples instead of definitions, statistics, or clinical terminology.

Cognitive Bias is defined as a pattern of deviation in judgement, whereby inferences about other people and situations may be drawn in an illogical fashion. Cognitive bias is a general term used to describe many observer effects in the human mind, some of which can lead to perceptual distortion, inaccurate judgment, or illogical interpretation.

In layman’s terms: A gap in between how we should reason and how we do reason. Thinking irrationally – judging or favoring a person, group, or thing in an unfair way.

As much as you may not notice them, biases are ingrained in our decision making from birth. Biases are one of the more interesting phenomena of evolved mental behavior. The brain has evolved to make us believe that we’re special, valuable, and capable. Biases help you to feel unique and to overcome the strains, struggles, and challenges of your life. Biases help you to avoid second-guessing yourself or feeling like a fool. We are biased in a variety of areas: from a bias toward living in certain climates and temperature ranges to seeking out certain types of foods and tastes.

You can imagine the time pressures that our ancestors faced. The ability to make split-second decisions is essential for survival. The speculation is that biases evolved in part to help us decide quickly and effectively: to rapidly sample the information available to us and focus on the bits relevant to our current task or situation. In short, biases help guide us and keep us safe.

Research into human judgment and decision making over the past 60 years in cognitive science, social psychology, and behavioral economics has established an ever-increasing and evolving list of cognitive biases. There is a non-exhaustive list of over 100 cognitive biases on Wikipedia. Although cognitive biases help us to feel amazing about our capabilities and self-image, they also have their drawbacks. They lead to poor choices, bad judgments, and erroneous insights.

Cognitive Biases Affect:

  • Memory
  • Motivation
  • Decision making
  • Probability judgments
  • Perceived causes of events
  • Group evaluation and selection
  • Having a positive attitude towards oneself

Biases emerge from a diversity of mental processes that can be challenging to pinpoint. These mental processes include heuristics (problem-solving mental shortcuts), framing (presentation), mental noise, moral and emotional motivators, and social influences.

The goal is not to completely remove your biases, but to become aware of them and adjust for them. By recognizing that your thinking is subject to influence, you can work towards a higher level of control. You can simultaneously correct and broaden your perspective. It’s actually quite amusing when you start noticing and challenging your own biases and untwisting your perceptions. The danger of not becoming aware of your biases is thinking that you’re always right. It is vital to notice that the world looks different to other people. Dropping our biases enables us to listen and connect to each other much more effectively.

3 Predictable Cognitive Biases.

While this is slightly tongue-in-cheek, these are a few biases that are fairly consistent among people. It doesn’t take long to spot yourself using these and adjust for them.

1) Confirmation Bias

“The tendency to look for or interpret information that confirms your preconceptions.”

You want to be right about how you see the world. Your opinions are a product of constantly seeking out information that confirms your beliefs while disregarding information that does not. You like to be told what you already know, so you apply a filter called confirmation bias. Your brain is helping you confirm that you’ve made the correct choice (and you have, by reading my blog). Focusing on certain things can help prevent us from being lost. Confirmation bias is essential for piecing together a coherent world.

Visiting political websites that hold the same opinions, watching a news channel that tells you what you want to hear, keeping company with people that hold the same beliefs as you – are all examples of confirmation bias. These preferential behaviors keep you comfortable and avoid cognitive dissonance. The internet has increased this behavior.

If you’ve ever purchased a car, you may have started to notice the brand you’ve chosen everywhere you looked. While researching and after purchasing an Infiniti G35, I was seeing them everywhere!

2) Priming

“An implicit memory effect in which exposure to one stimulus influences a response to another stimulus.”

Priming is an exposure to something that affects your later behavior in some way, without you being aware of the earlier influence. Unconscious priming effects can be very noticeable and last long after you’ve consciously forgotten the stimulus.

Craving Italian food after watching “The Godfather”, walking slower after thinking about the elderly, being more argumentative after seeing “A Few Good Men”, having more patience after reading words that have to do with politeness – are all examples of priming.

Priming can be as simple as you reading the word table in your news feed, and if asked later to complete a word starting with tab, you’re more likely to answer table because you have been primed. This is also why when someone asks you for a word related to blackboard, you’re likely to choose classroom.

3) Framing Effect

“Reacting to a particular choice in different ways depending on whether it is presented as a loss or a gain.”

You routinely come to different conclusions about the same problem, depending on how it’s presented. Perception of loss or gain drives human decision making in every aspect of our existence. You avoid risk (risk aversion) when a negative frame is presented, but seek risk (risk seeking) when a positive frame is presented.

Language plays a key role in framing and can evoke completely different reactions to something. Responding differently after hearing “Obama Care” as opposed to “The Affordable Care Act” or “Global Warming” as opposed to “Climate Change” – are examples of the framing effect.

I’ll leave you with the following experiment on framing by Amos Tversky:

Participants were offered two alternative solutions for 600 people affected by a hypothetical deadly disease:

  • Option A saves 200 people’s lives
  • Option B has a 1/3 chance of saving all 600 people and a 2/3 possibility of saving no one

They offered the same scenario to another group of participants, but worded differently:

  • If option C is taken, then 400 people die
  • If option D is taken, then there is a 1/3 chance that no people will die and a 2/3 probability that 600 will die

The above experiment showcases the nature of framing. The two groups favored different options because of the way the options were presented. The first set of participants were given a positive frame (emphasis on lives saved), whereas the second set were given a negative frame (emphasis on lives lost).
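
The two framings are numerically identical, which a few lines of arithmetic confirm:

```python
# Expected survivors out of 600 under each option (arithmetic only).
ev_A = 200                               # "saves 200 people"
ev_B = (1 / 3) * 600 + (2 / 3) * 0       # one-third chance all 600 survive
ev_C = 600 - 400                         # "400 people die"
ev_D = (1 / 3) * 600 + (2 / 3) * 0       # one-third chance nobody dies
print(ev_A, ev_B, ev_C, ev_D)            # 200 200.0 200 200.0 -- only the wording differs
```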

Why This Matters.

It is beneficial to be aware of the processes influencing our judgments. Having background knowledge on how the mind actually works is essential for logic, reasoning, argumentation, and critical thinking. It also allows us to be aware of manipulation and influence by others on these biases. (marketing firms, political campaigns)

Cognitive biases are also related to the persistence of superstition, to large social issues such as prejudice, and they also work as a hindrance in the acceptance of non-intuitive scientific knowledge by the public.


The ways to overcome ‘Attribution Biases’?

Attribution biases are most often not good for us. So how do we overcome these different types of attribution biases?

We can overcome our own and others’ attribution biases in a number of ways and allow ourselves to be open to new information, facts evidence, and improve our decision-making and problem-solving abilities.

4 Steps to overcoming Attribution Bias

1. Realization

The first step towards reducing and overcoming attribution bias to recognize that these are cognitive biases and are present in each individual in varying degrees. The other realization is that you can’t completely get rid of them.

This is essential because there will always be a few times you wouldn’t realize that you are in grip of attribution bias. Once these two points are very clear, the positive movement towards overcoming attribution bias begins.

2. Start Challenging your stories

All attributions have an underlying story behind it. The first step is to identify that story. In the case of Rohit’s example, he creates a story around his colleague that she is coming late and this may be due to the fact that she is not serious about her job or maybe she is too lazy to wake up early.

The key is to identify your story first, the one that your brain creates. Acknowledge that this is a story made up by you and it needs to be verified before it becomes an attribution.

3. Try to verify your story

The next step is to verify your story. This can be done by looking at the facts and/or having a direct line of communication.

The key is to treat it as a story and not your belief. Sometimes you need to tell your inner-self that, “this is just a story, I need to verify it”. If you are successful in convincing yourself then you make yourself free from ‘confirmation bias’.

If Rohit attempts to find facts about his colleague’s past office coming time he may have noticed that she comes to office on time. He even could have had a word with her where she might have explained why she’s late.

The solutions to overcoming attribution bias are often as simple as these.

4. Avoid blaming others (externalizing the blame)

We are always better off when we get rid of the habit of externalizing the blame. This, in essence, increases our control of the situation and allows us to make efforts to make things better.

Always try to focus on resolving the issues, not on who is to blame. Once people get into problem-solving mode and are focused on resolving the problem rather than working out who’s at fault, they are more likely to be able to resolve the issues.

If these four steps to overcoming a cognitive bias as common as attribution bias are followed, most people will be able to minimize its effect. The result is better, faster decision making that leads to success in many aspects of life.


COGNITIVE BIASES IN INFORMATION SECURITY: CAUSES, EXAMPLES AND MITIGATION

This article contributes to the theory of the human factor in information security by exploring how errors in thinking distort perceptions of InfoSec issues. Alongside examples from practice, the author proposes several ideas for mitigating the negative effects of cognitive biases through training.

Keywords: information, security, bias, psychology, determinant, causes, mitigation, cognitive, training

One of the components of a mature information security program is the human factor. Typically, the emphasis is on maintaining a security awareness program and mitigating risks caused by human mistakes and lack of knowledge of security.

Security awareness is necessary, but it is only one aspect of the human factor. Another challenge for security professionals is finding actionable arguments to support their analysis and recommendations on information security issues in their organisations. The key word here is “actionable”. Their experience shows that professional analysis, argumentation techniques and even supporting evidence combined may be insufficient for properly addressing some of the identified problems. Although a number of difficulties can be cited as causes of insufficient or inadequate action on information security matters, such as a shortage of budget, time or human resources, management ignorance and so forth, the picture would be incomplete if the psychological phenomenon of cognitive biases were excluded.

Cognitive biases are inherent characteristics of human nature and are thus part of everyone’s thinking. A bias is an error in thinking that arises when people process and interpret information, and it influences the way they see and think about the world. Unfortunately, these biases lead to poor decisions and incorrect judgments. This article correlates research on biased thinking with examples from the InfoSec industry.

The first part of the article explains several important (and non-exhaustive) determinants of cognitive biases and then illustrates them with realistic sample situations that an InfoSec specialist might encounter. The second part proposes several ideas on how organisations can deal with the biases so that their occurrence and impact are reduced. The author wants to emphasize the need for further exploration of the potency of these ideas in the real world and of their role in a possible mitigation strategy. In addition, the reader is encouraged to learn about the types of cognitive biases – a topic not directly discussed here.

DETERMINANTS FOR COGNITIVE BIASES AND EXAMPLES

The Misperception and Misinterpretation of Data or Events

People deal with data on an everyday basis. The common approach is to analyse the data by converting it into something more useful – information – and from there to continue the conversion into knowledge and then wisdom. This complex processing chain may be affected by the misperception or misinterpretation of random data or events. As an example, a data leakage prevention (DLP) analyst, tasked with inspecting DLP reports for irregularities, may suspect random events of being real attacks on a network. In this instance, the “random” data could be misinterpreted. One should understand that human nature is inclined to look for patterns where they do not always exist.

In a second example, a typical computer user could erroneously conclude that his computer troubles are caused by malware. However, an experienced IT support specialist could identify a different cause for the symptoms of the issue and quickly rule out the malware scenario as a cause.

Judgment by Representativeness
Representativeness can be thought of as the reflexive tendency to assess the similarity of outcomes, instances, and categories on the basis of relatively salient and even superficial features, and then to use these assessments of similarity as a basis for judgment.

Judgment by representativeness is often valid and helpful because objects, instances, and categories that go together usually do in fact share a resemblance. However, the over-application of representativeness is what leads to biased conclusions. Many will likely recall personal experiences in which a person who belongs to a particular group is attributed qualities considered typical of that group. For instance, some IT experts perceive the members of their information security team as very strict security and compliance enforcers, but in reality not all of them may have this profile. Stereotypical over-generalisations like “All the IT experts…”, “All the auditors…”, “All the consultants from that company…” often go hand in hand with imprecise and even incorrect qualifications (negative or positive). The simplification can, and in some instances will, be misleading.

Misperceptions of Random Dispersions
If an information security professional analyses statistical data from a certain security tool, he may notice patterns that lead him to conclude that specific events occur more frequently in specific time frames. For instance, if a particular type of security incident occurred in four consecutive months, each time in the last seven days of the month, this could indicate that there is a pattern. These incidents could be correlated with other known events and assumptions can be made about the underlying cause, but a definite conclusion should not be drawn without additional investigation.
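
A rough back-of-the-envelope number shows why further investigation is still needed. Assuming, purely for illustration, that incident timing were uniform over a 30-day month and independent from month to month:

```python
# Chance that a single incident lands in the last 7 days of a 30-day month, and that the
# same thing happens four months in a row, under the uniform-and-independent assumption.
p_last_week = 7 / 30
p_four_months = p_last_week ** 4
print(f"P(last week, one month) ~ {p_last_week:.2f}")
print(f"P(last week, four months running) ~ {p_four_months:.4f}")   # ~0.003
# Small but not negligible, and if many possible patterns are being scanned, some will show
# up by chance -- hence the caution against drawing a definite conclusion too early.
```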

Solidifying the Misperceptions with Causal Theories
Once a person has (mis)identified a random pattern as a “real” phenomenon, it is likely to be integrated into his pre-existing beliefs. These beliefs, furthermore, serve to bias the person’s evaluation of new information in such a way that the initial belief becomes solidly entrenched. For example, if a person participated as the auditee in an audit several years ago, where he was supposed to provide the auditor with some of the IT security procedures, the same person could afterwards develop false expectations about the requirements in other standards or for other types of organisations. This person could be convinced that he is well aware of all auditing practices, but in reality he could be lacking essential knowledge of the specifics of other security standards and types of audits (e.g., see the difference between SOC 2 Type I and Type II audits).

Misunderstanding instances of statistical regression
Statistics teaches that when two variables are related, but imperfectly so, extreme values on one of the variables tend to be matched by less extreme values on the other. For instance, a company’s financially disastrous years tend to be followed by more profitable ones, and students’ very high scores on an exam (over 97%) tend to be followed by less extreme scores on the next exam.

If people are asked to predict the next result after an extreme value, they often fail to consider statistical regression and make non-regressive or only minimally regressive predictions (they predict a similar value). A second problem is the tendency of people to fail to recognise statistical regression when it occurs and instead “explain” the observed phenomenon with complicated and even superfluous theories. This is called the regression fallacy. For example, a lesser performance that follows an exceptional one is attributed to slacking off; a slight improvement in the security incident rate is attributed to the latest policy update; a company’s management may hold their IT Security Officer accountable for a decrease in the server compliance level after an excellent patching and hardening campaign three months earlier.
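
A small simulation illustrates the regression effect. The score distribution and noise level below are arbitrary; the point is only that the top performers on one noisy measurement tend, on average, to score lower on the next:

```python
# Regression to the mean: two imperfectly correlated exam scores sharing an "ability" component.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
ability = rng.normal(70, 10, n)               # shared underlying component
exam1 = ability + rng.normal(0, 8, n)         # score = ability + luck on exam 1
exam2 = ability + rng.normal(0, 8, n)         # fresh luck on exam 2

top = exam1 > np.percentile(exam1, 97)        # the "over 97%" group on the first exam
print(f"top group's mean on exam 1: {exam1[top].mean():.1f}")
print(f"same group's mean on exam 2: {exam2[top].mean():.1f}")   # lower, with no slacking off involved
```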

Misinterpretation of Incomplete and Unrepresentative Data (Assuming Too Much from Too Little)

The Excessive Impact of Confirmatory Information
The beliefs people hold are primarily supported by positive types of evidence. Much of that evidence may be necessary for the beliefs to be true, but it is not always sufficient to warrant them. If one fails to recognize that a particular belief rests on deficient evidence, the belief becomes an “illusion of validity” and is seen not as a matter of opinion or values but as a logical conclusion from the objective evidence that any rational person would accept. The most likely reason for the excessive influence of confirmatory information is that it is cognitively easier to deal with than non-confirmatory information.

Information systems audits are good examples of searching for confirmatory evidence. In an audit, unless a statistical methodology is utilised for controls testing, the evidence for the effectiveness of the controls becomes open to interpretation, and the auditor’s intention to provide “reasonable assurance” on the controls becomes as ambiguous as it sounds. Auditors would usually ask about the existence of policies and procedures and mostly look for positive evidence. There may even be instances of auditors who ignore non-supportive evidence and ask the auditee for supportive evidence instead. They shouldn’t, but they might do so.

In another example, if the security specialist in a small company has a number of responsibilities for the entire information security management system (ISMS), there will probably be many opportunities for him to prove his skills but also to make mistakes. If the company’s CEO favours the employee, he may look for achievements that indicate his professionalism. If the CEO doesn’t favour him, the focus may be on the person’s past mistakes, which considered alone, would indicate incompetence. In this last case, the past successes are often ignored.

The Problem of Hidden or Absent Data
In some cases, essential data could simply be absent. This makes it difficult to compare good and bad courses of action. In such situations, people could erroneously conclude that their evaluation criteria are adequate. For instance, the decision to increase the password complexity level and to lower the expiration period for the accounts of a particular business critical application is an accepted good security practice. However, if only this general best practice is taken into account, the expectations of the change could be overly optimistic. The reason for this is that a lot of missing information cannot be considered: it is nearly impossible to anticipate all the indirect consequences of such a change, like users starting to write down their passwords. If they do this, the risk for password compromise will most likely increase and the change will have the opposite effect.

In another example, the organisation’s leadership decides to outsource certain IT security functions to a third-party provider instead of modernising the existing capabilities. This will likely improve the overall capabilities, but there will be very limited information on whether that course of action was the best decision, because the alternative course of action will never be pursued and tested.

A third example can be given on the subject of risk assessment. People often think that if a certain risk has never materialized, then the likelihood of its occurrence in the future is very low. However, if a risk specialist thoroughly analyses the existing information on the risk, he may conclude that the likelihood is much higher.
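
One hedged way to put a number on “it has never happened” is the rule of three: if an event has not occurred in n independent opportunities, an approximate 95% upper bound on its per-opportunity probability is 3/n. The snippet below illustrates this under the independence assumption, which real security risks rarely satisfy:

```python
# Rule-of-three sketch: "never happened" still leaves a non-trivial upper bound.
def rule_of_three_upper_bound(n_opportunities: int) -> float:
    """Approximate 95% upper bound on per-opportunity probability after zero occurrences."""
    return 3.0 / n_opportunities

print(f"{rule_of_three_upper_bound(36):.3f}")   # no incident in 36 months -> p could still be ~8% per month
```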

Self-fulfilling Prophecies
A peculiar case of the hidden data problem arises whenever our expectations lead us to act in ways that fundamentally change the world we observe. When this happens, we often accept what we see at face value, with little consideration of how things might have been different if we had acted differently. For example, if a senior manager believes that a member of the security team performs unsatisfactorily, that person will find it difficult to prove otherwise; if the CIO thinks the CISO behaves in an unfriendly way, the CISO could find it difficult to change that perception, and even the absence of friendliness could be erroneously construed as unfriendliness. In such situations, the perceiver’s expectations can cause the other person to behave in such a way that certain behaviours by the target person cannot be observed, making what is observed a biased and misleading indicator of what the person is like. Furthermore, if we do not like a person, we generally try to avoid him and give him little opportunity to change our expectations.

Seeing What We Expect to See

The Biased Evaluation of Ambiguous and Inconsistent Data

“I’ll see it when I believe it.”
People are inclined to see what they expect to see, that is, what is consistent with their pre-existing beliefs. Information that is consistent with our pre-existing beliefs is often accepted at face value, whereas evidence that contradicts it is critically scrutinised and discounted. Our beliefs may thus be less responsive than they should be to the implications of new information.

For instance, if a cybersecurity consultant is tasked to serve a client who is generally not satisfied with the IT services of the same company, the client may tend to scrutinise any piece of information the consultant provides to him and look for confirmations that the security consultancy services are at the same, unsatisfactory level as the IT services.

Ambiguous Information
If a decision is based on ambiguous information, we tend to perceive it in a way that fits our preconceptions. Why, for instance, would a newly hired Information Security Officer ask questions around in his organisation? Is he not aware of his duties or is he incapable of doing his job? Is he asking questions because there is a lack of pre-existing documentation left from his predecessor? Or is this what someone in this position is supposed to do? Or maybe because the ISMS can be effectively maintained only with the support and collaboration with the different roles in the organisation? The answer could be related to one of these questions, a combination of them or there could be a completely different explanation. Depending on the preconceptions of each employee interacting with the new Information Security Officer, they could make premature and erroneous conclusions about his capabilities.

Unambiguous Information
We tend to accept unambiguous information that fits our beliefs as true. When it does not meet our expectations, however, we usually do not simply ignore it; instead, we scrutinize it and look for additional information. To exemplify this, imagine a CIO who is convinced that the employees should not be occupied with information security training and that technical controls should be preferred instead. Then, if he is confronted with studies which provide evidence about the benefits of persistent security awareness training, he may tend to scrutinise them and challenge the significance of the results. He may also accept, with much less scrutiny, other studies which point out the benefits of technical controls over security awareness.

MITIGATION OF COGNITIVE BIASES

The list of determinants of cognitive biases could be extended. In any event, recognizing the problem is only the first step. The second and more difficult challenge is to take adequate actions to mitigate the effects of the biases. As far as organisations are concerned, the author suggests creating an organisation-wide programme that aims to mitigate the effects of erroneous beliefs and improve employees’ analytical capabilities. Depending on the characteristics of the organisation, the programme could be integrated into the existing training/educational programme. The approach could focus on the following:

  • Promoting learning and self-improvement as a life-long process. People who embrace continuous learning and improvement have more potential to detect their own cognitive biases and correct their erroneous beliefs. They will also be in a better position to respond to biased arguments from others.
  • Promoting the benefits of scientific methods and techniques for creating and testing new theories with greater certainty. In addition, knowledge of scientific methods helps people develop a mindset for structured thinking and distinguishes critical thinkers from the closed-minded.
  • Promoting and teaching argumentation techniques to improve the interpersonal skills of the employees.

Trained and motivated individuals should teach the actual techniques. The following ideas can be considered when creating such a programme.

  • When evaluating something, the various outcomes should be specified in advance. This increases the likelihood of objectively assessing the performance of processes, projects, systems and people.
  • Differentiating between generating an idea and testing it. People often create ideas easily, but proving whether they work in practice is much more complicated.
  • Organising training sessions to teach employees about logical constructs and avoiding biases.
  • Distinguishing between secondhand and firsthand information and learning about the risks involved in relying on the former.
  • The benefits of using precise wording to describe and explain things and the perceived risks involved when using metaphors.
  • The need to focus on both the person and the individual situation, to limit distortions in perception.
  • The need to understand the false consensus effect that is defined as the tendency for people’s own beliefs, values, and habits to bias their estimates of how widely others share such views and habits.
  • The need to understand the distortions caused by self-interest and how the organisation can refocus employees’ attention to better serve its interests.
  • Exploring the benefits of measurement methods.
  • Learning about the benefits of focusing on both the amount and the kind of information.
  • Learning about the tendency of positive self-assessments and the inclination of people to protect their beliefs.
  • Promoting tolerance, which can be defined as the assumption that all people make mistakes. Learning about the tendency of people to remember their successes but forget their failures.
  • Mastering learning techniques.
  • Learning how to give and receive feedback. Often people hold back their own reservations and disbelief when they disagree with what someone is saying. Biased feedback leads to an inability to adequately evaluate alternative strategies.
  • Learning how the human brain functions from a neurobiological perspective.

In summary, this article first exemplified some determinants of cognitive biases in the context of information security and then provided some ideas on how to mitigate the implications of biased thinking in organisations. The author believes that a better understanding and awareness of cognitive biases will bring a novel perspective to the concept of the “human factor” in the information security industry. Most importantly, awareness of cognitive biases could provide a new perspective when designing security processes and improve the communication and decision-making of individuals. As a result, the existing set of analytical and argumentation techniques of information security professionals could be upgraded to a more advanced level. Such an upgrade could improve the overall performance of the staff, especially if it encompasses the entire organisation. ■

  1. The determinants of cognitive biases and their definitions are discussed in T. Gilovich, “How We Know What Isn’t So”, The Free Press, 1991.
  2. This is known as DIKW. See L. Hayden, “IT Security Metrics”, pages 57-58, McGraw-Hill, 2010.
  3. The tendency of people to see patterns is discussed by M. Shermer, “How We Believe”, 2nd edition, section “The pattern-seeking animal”, Owl Books, 2003.
  4. This is related to the cognitive bias known as the Representativeness Heuristic. See A. Tversky and D. Kahneman, “Judgment under Uncertainty: Heuristics and Biases”, pages 1124-1131, Science, New Series, Vol. 185, No. 4157, 1974.
  5. This phenomenon is also known as the Clustering Illusion. It is well known among financial investors, who can become overly confident when the price of a stock goes up for a couple of days in a row. See “Think Again! Your guide to the cognitive biases that lead to bad investing behaviours and the ten things you can do about them”.
  6. The Illusion of Causality is a well-known phenomenon among scientific researchers. See “Illusions of causality: how they bias our everyday thinking and how they could be reduced”, Front. Psychol., 2 July 2015.
  7. It is also thought that pre-existing beliefs are the trigger for new beliefs. See “A cognitive account of belief: a tentative roadmap”, Front. Psychol., 13 February 2015.
  8. See D. Levitin, “Foundations of Cognitive Psychology”, pages 591-592, A Bradford Book, 2002.
  9. The term is used by H. J. Einhorn and R. M. Hogarth in “Confidence in judgment: Persistence of the illusion of validity”, Psychological Review, Vol. 85, No. 5, 395-416, 1978.
  10. See B. L. Luippold, S. Perreault and J. Wainberg, “Auditors’ Pitfall: Five Ways to Overcome Confirmation Bias”, 04.06.2015.
  11. See “Practice Advisory 2320-3: Audit Sampling”, The Institute of Internal Auditors, May 2013.
  12. See the section “Biases of imaginability” in reference 4.
  13. See C. Ackerman, “Self-Fulfilling Prophecy in Psychology: 10 Examples and Definition”, May 2018.
  14. See L. Yariv, “I’ll See It When I Believe It? A Simple Model of Cognitive Consistency”, Cowles Foundation Discussion Paper No. 1352, 2002.
  15. The application of methods to remove or reduce bias from judgment and decision making is called debiasing. Multiple other techniques for mitigating the effects of cognitive biases are discussed in the article “Debiasing”, 2018.

Veselin Monev is an information security and compliance practitioner. He has over 5 years of information security experience in academia and the private sector and more than 4 years of IT practice. In 2015 he received a master’s degree in Cybersecurity from the New Bulgarian University. He is the author of several academic articles and co-author of an academic book on cybersecurity metrics.


Cognitive bias believing you are good at the things you are bad at, as successes are more vivid? - Psychology

Like other complex activities, poker is easier to learn when you build skills in the right order.

So I’ve compiled the top 23 cognitive mistakes that make people play bad poker. I’ve listed these roughly in order of priority. In other words, if you don’t fix the ones near the top, it won’t really matter if you’re doing fine with the ones further down! You should think of this as the roadmap of errors that are preventing you from becoming a better poker player.

Inattention is the tendency to fail to concentrate on information that could be useful for future decision making.

Confirmation bias is the tendency to search for or interpret information in a way that confirms one’s preconceptions.

Focusing effect is the tendency to place too much importance on one aspect of an event, causing errors in accurately predicting the utility of a future outcome.

Availability heuristic is estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples.

Not knowing the math (Innumeracy)

Neglect of probability is the tendency to completely disregard probability when making a decision under uncertainty.

Base rate neglect is the tendency to base judgments on specifics, ignoring general statistical information.

Loss aversion is people’s tendency to strongly prefer avoiding losses to acquiring gains.

Self-serving bias is the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests.

Overconfidence is the state of being more certain than is justified, given your priors and the evidence available. For example, for certain types of questions, answers that people rate as “99% certain” turn out to be wrong 40% of the time.

Negativity Bias is paying more attention to and giving more weight to negative rather than positive or neutral experiences.

Optimism bias is the tendency to be over-optimistic about the outcome of planned actions.

Clustering illusion (Apophenia) is the tendency to see patterns where none exist.

Illusion of control is the tendency to overestimate one’s degree of influence over external events.

Gambler’s fallacy is the tendency to think that future probabilities are altered by past events, when in reality they are unchanged.

Just-world phenomenon is the tendency for people to believe that the world is just and therefore people “get what they deserve.”

Irrational escalation is the phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the decision was probably wrong.

Pessimism bias is the tendency for some people, especially those suffering from depression, to overestimate the likelihood of negative things happening to them.

Projection bias is the tendency to unconsciously assume that others (or one’s future selves) share one’s current emotional states, thoughts and values.

Outcome bias is the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made.

Hindsight bias is sometimes called the “I-knew-it-all-along” effect, the tendency to see past events as being predictable at the time those events happened.

Consistency bias is remembering one’s past attitudes and behavior as more similar to one’s present attitudes.

Primacy effect is the tendency to weigh initial events more than subsequent events.

Peak-end rule is how we judge our past experiences almost entirely on how they were at their peak (pleasant or unpleasant) and how they ended.


Classification of heuristics and cognitive biases

There are more than 200 types of heuristics and cognitive biases in the scientific literature, and it is complicated to classify them. Buster Benson carried out a simple classification based on the reasons we use them:

When there is too much information

We are surrounded by more than 11 million bits of information per second, and it is impossible to mentally process it all. Our brain employs mental shortcuts to select only the pieces of information it considers useful. For example:

The brain concentrates more on things associated nonconsciously with concepts we utilize frequently or have recently utilized (availability heuristic). This is why, when we are facing the shelves at a supermarket, we quickly find the product we are searching for, or when we are expecting a child, we see pregnant women everywhere (selective attention bias).

The brain focuses more attention on and gives more value to weird, funny, stunning and surprising things, and we generally omit information that is considered ordinary or expected (von Restorff effect). For example, in the image referenced, eye tracking (a neuromarketing technique) shows that the zone of maximum interest is the person holding the sign, who is seen by 100% of viewers and to whom the most viewing time is dedicated. The remaining elements are viewed by fewer than 25% of people and for less than 0.25 seconds.

The brain is skilled at detecting that something has changed, and we generally evaluate this novelty by the effect of the change (positive or negative) rather than by its absolute value (the value it would have on its own, in isolation - anchoring effect). This is why, when WhatsApp decided to charge €0.99 for its services, many users felt defrauded. The problem was not the price, but the fact that the price had previously been €0.

The brain usually focuses attention on what confirms our opinions and beliefs and ignores what contradicts them (confirmation bias). In other words, confirmation bias is the tendency to give much more credibility to what is aligned with our way of thinking, and this makes us value the information from one communication medium more than another.

The brain detects defects in other people much more easily than our own defects (bias blind spot). Thus we think that other people are much more impressionable than we are, for example by advertising. The self-serving bias is related to the blind-spot bias, making us claim more responsibility for successes and hits than for mistakes.

When we don’t know how to provide meaning to what surrounds us

As we only process a small part of the information required for a completely objective vision of the world, we fill in information blanks to give meaning to what surrounds us.

The brain finds stories and patterns even in sparse data (series illusion or apophenia), a natural way to avoid the sensation of unfamiliarity, of something that we don’t like or that makes us feel insecure. This is why we find shapes in clouds.

The brain fills in missing information with its best suppositions (stereotypes, generalities, our own past or third-party experiences), but we also forget which parts were real and which were suppositions. For example, we rate a hotel on Booking as very good because of very positive evaluations from other people, rather than because of the objective information provided by the establishment (bandwagon effect or social proof).

The brain gives more value to people or things we are used to. In other words, we include suppositions in our evaluation of what we see. For example, we usually think that attractive people are more intelligent and kind than less-attractive people because we generalize a positive feature to the whole person (halo effect). The related correspondence bias, or fundamental attribution error, explains behavior on the basis of the “type” of person involved, rather than the social or environmental factors that surround and influence the person.

The brain simplifies probabilities and calculations so they are simpler to think about. Nevertheless, we are unskilled at mental math and intuitive statistics and make terrible mistakes (law of small numbers). For example, when playing roulette we don’t want to bet on “red” if the five previous results were red, even though each spin is independent (a small simulation at the end of this section makes this concrete).

The brain makes us believe that we know what others think, and we mold the mind of others from our own minds (false consensus bias and projection bias). This is why we believe that everyone will love the movie we enjoyed so much.

The brain projects our current mentality to the past or future, making us believe that it is/was easy to anticipate the future (retrospective bias, hindsight bias). Therefore, once something happens, we say and feel that “I already thought so” although probably that was not true.
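As promised in the roulette example above, here is a tiny simulation of the independence point. The wheel layout (a European wheel with 18 red pockets out of 37) and the number of spins are illustrative assumptions.

```python
# A tiny simulation of the roulette example: after five reds in a row, the
# chance of red on the next spin is unchanged. Numbers are illustrative.
import random

RED_PROB = 18 / 37          # European wheel: 18 red pockets out of 37
spins = [random.random() < RED_PROB for _ in range(1_000_000)]

# Collect the spins that immediately follow a run of five reds.
following_five_reds = [
    spins[i] for i in range(5, len(spins)) if all(spins[i - 5:i])
]
print(f"P(red) overall            ≈ {sum(spins) / len(spins):.3f}")
print(f"P(red | five reds before) ≈ {sum(following_five_reds) / len(following_five_reds):.3f}")
# Both estimates hover around 18/37 ≈ 0.486: past spins do not change the odds.
```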

When we have to act fast

We are limited by time and the amount of information that we can process, but we cannot let this paralyze us.

The brain gives us excessive confidence in our capabilities so that we can act (overconfidence bias, optimism bias, actor-observer bias, Dunning-Kruger effect). This is why we often think we can handle a difficult situation well, only for it to eventually get out of hand.

To keep us focused on the action, the brain favors what is immediate and close over what is far away in time or distance (hyperbolic discounting). We value things in the present more than things in the future. This makes us ignore our diet, because the reward of eating a pastry right now is much more irresistible than the reward of losing weight, which will only arrive in a few months.

The brain motivates us to complete tasks in which we have already invested time and energy. This helps us finish things, even when we have plenty of reasons to give up. For example, the sunk cost fallacy leads a student to keep studying, despite hating the course, because he has already been studying for two years. The correct decision would be to let go (which this bias does not allow) and find a genuinely exciting career, rather than drag through two or three more years to finish a boring course.

The brain helps us avoid irreversible decisions (status quo bias). If we have to select, we usually choose the option that is perceived as less risky or that preserves the status quo. “Better the devil you know”. This makes it hard to leave our comfort zone and is a clear enemy of innovation.

When we have to choose what to remember

We can only afford to remember the bits of information that will probably be useful in the future. We constantly have to bet on what to remember and what to forget. We highlight the following types of memory biases:

We reinforce memories after a related event occurs and, in this process, some details of the memory can change without us being conscious of it (reconstructive memory or false memory bias). For example, there are people who were not present at the 9/11 attacks but, a year later, were completely convinced they had actually been there. This was demonstrated by researchers of the 9/11 Memory Consortium. Without going that far, this is why when you run into a friend you haven’t seen for a while and you both remember an event, it is possible that your versions are not the same. But nobody is lying, at least not consciously.

We discard specific data to form generalities (implicit stereotype bias). This occurs because of necessity but the result of “lumping everything in the same bag” includes implicit association, stereotypes, and prejudice. For example, if we think about how Spain performed in the most recent World Cup, we only remember how bad it was and don’t remember the specific good things that happened. And when we vote for a political party, we are led by what we have in mind for “conservatives” and “liberals” rather than by a concrete program of actions.

We reduce events and lists to their key elements. As it is difficult to reduce these to generalities, we select some elements to represent the whole. For example, after watching a movie, the peak-end rule leaves us with a feeling based on the moment of highest emotional intensity and on how it ended.

We store memories according to the context in which they were obtained, without taking their value into account. For example, the Google effect leads us to forget any information we think we can obtain on the internet. Although the information is relevant, we forget it anyway because we can easily find it again. This is why we no longer memorize telephone numbers or the directions to a specific location.



Availability heuristic describes a shortcut where people make decisions based on information that's easier to remember.

In one experiment, a professor asked students to list either two or 10 ways to improve his class. Students that had to come up with 10 ways gave the class much higher ratings, likely because they had a harder time thinking about what was wrong with the class.

This phenomenon could easily apply in the case of job interviews. If you have a hard time recalling what a candidate did wrong during an interview, you'll likely rate him higher than if you can recall those things easily.


1. Anchoring Bias

To show how this bias works, let’s play a guessing game. Do you think that the tallest tree in the world is taller or shorter than 1,000 feet? Either way, how tall do you think the tree is overall?

Unless you already know a lot about trees, you probably guessed that the world’s tallest tree is somewhere close to 1,000 feet. Maybe you guessed it was taller or shorter – say, 1,500 feet total, or only 500 feet – but either way, your guess was affected by the first number you saw.

This is an example of the anchoring bias – relying too much on the first piece of information you get. Since the figure “1,000 feet” was all you had to go on, that number became your “anchor,” and your guess about the tree’s height was tied down by it. Without the number 1,000 to guide you, your guess might have been much higher or much lower. (In case you’re curious, the actual answer is 379 feet.)

How This Bias Costs You Money

The anchoring bias costs you money when it leads you to judge the price of an item based on the first price you saw. For instance, suppose you’re shopping for a tablet computer. You check the sale flyer for a local department store and see one model marked down from $500 to just $150.

That sounds like an amazing price, but only because you’re comparing it to the $500 anchor price. If you shopped around for similar tablets and found that most cost $150 or less, it wouldn’t look like such a bargain. In fact, many stores raise their “regular” prices right before Thanksgiving to make their Black Friday sales look more impressive.

Sellers know all about this bias, and they use it to their advantage. For instance, some real estate agents make sure the first house they show to a new buyer is ludicrously overpriced. Compared to that, every other house on the market will look like a great deal.

Anchoring can also hurt you when you negotiate your salary. During a job interview, if you’re offered a starting salary of $25,000, you’ll probably hesitate to ask for $50,000, even if that’s what you think you’re worth. You could end up dropping your asking price to $35,000 because you don’t want to sound unreasonable.

How to Beat This Bias

The best way to overcome the anchoring bias is to do more research. That way you can replace that initial “anchor” number with other numbers that make more sense.

For example, if you want to buy a house, check the “comps” – prices that comparable houses have sold for. That will let you know what’s really a fair price to pay for the house you want.

Likewise, before a job interview, do research on typical starting salaries. That way, when the boss names a number, you’ll know whether it’s a fair offer. Better still, turn anchoring to your advantage by being the first to name a salary. Then the boss will have to adjust to your expectations, instead of the other way around.


Useful Mistakes, Cognitive Biases and Seismic

We come to conventions and Symposia to hear about success and bask in the glory of our awesome, successful colleagues. But mistakes may also lead to useful learning opportunities. After all, we often mistakenly look upon our successes as if they are the result of some intrinsic property of ourselves, as if we are both special and right, rather than consider that at least some of them came about through luck. A failure or an error, on the other hand, may be far more illuminating and attention-holding if we allow ourselves to honestly face up to it.

I was humbled and grateful to have been chosen as the Honoree at this year’s CSEG Symposium. As has been customary in past Symposia, a group of very intelligent people made a heroic effort to say some nice things about me. Looking back on my own career, I do see some things that I am proud of. I see a great many people who I am proud to have worked with. Scott Reynolds spoke of some of those excellent people in his Tribute talk, which I hope will be published. I see a few problems (sort of) solved and situations improved. All that is wonderful, but my career trajectory looks the way it does because I had a great many things to learn. I made errors. I was part of some mistakes. I was the ringleader of a few of them. My work suffered from a variety of cognitive biases, and many of the very intelligent people around me did as well. In the second half of my time as a geophysicist, I became more aware of some of my faults, errors, and shortcomings. I attempted to become a better scientist and to make fewer cognitive errors. It turns out that both tasks are difficult.

This write-up of some of the elements of the talk I gave at the Symposium delves into certain mistakes I made or was a part of, and discusses why they were made. In fact, the systemic reasons for the mistakes are far more important than the mistakes themselves. It is unlikely that future geoscientists will find themselves in the identical situations in which I have erred, but they will, like me, be human.

It’s not just me: ask the military

In speaking of my own mistakes, I am neither attempting to humble-brag nor to suggest that I was an awful geophysicist. It is more likely that I was generally typical but had an unusual tendency to communicate about my work. Certainly, I worked with an excellent group of professionals. Everyone (myself, my peers, partners, cross-disciplinary colleagues) wanted to do a good job, keep costs and environmental impacts low and bring in a high return on investment for our employers. None of us wanted to make mistakes. But we did. Lots of them, many of which slipped by without us even realizing it. Most of our errors were never covered in a university geoscience or physics course, in company training or at any technical conference. Most of our errors, the ones that we committed again and again, were a result of the human condition, because of the biases and limitations that human beings tend to have.

In his book, Men Against Fire, Marshall (1947) discusses at length the fact that over 70 percent of all infantry soldiers did not fire their weapons in actual combat, even though the vast majority would fire in combat exercises or on the firing range. Marshall quite adroitly realizes that to address this issue, the feelings, or the morale, of the soldiers must be addressed. While Marshall appears loath to use the vocabulary of the psychologist, it is inescapably the feelings and biases of the soldiers that are the main points of enquiry in the book. Without uncovering the psychological reasons for the inability of most soldiers to follow the most basic and essential order they can receive, there can be no improvement in performance. And so it is for geoscientists. While we are in no way driven to the farthest extreme of action or mortality in our jobs, we are nevertheless human beings who are subject to common pressures and common cognitive biases, and who make common, systemic mistakes.

Psychology: there is a better goal than manipulating others to our ends

The study of human behavior is a ubiquitous tool of the modern world. Nobel prize winner Richard Thaler’s nudge theory is at work in most modern choice tests. Nudge theory (Thaler and Sunstein, 2008) influences behavior using subtle methods such as the careful selection of the default choice on medical tests, municipal polls or dietary questions. Politicians, salesmen and marketers use nudge theory; it affects our buying and policy decisions daily as it is pushed by one actor or another. Users of the theory claim they are doing it for our own good, though we can only be certain that the theory is used to influence us for theirs. Daniel Kahneman is another Nobel prize winner, whose book Thinking, Fast and Slow (2011) is an expansive summary of his studies of cognitive biases and has been hugely influential in economics. There are few policy makers or professional marketers who are unaware of the work of Kahneman and the behavioral psychologists. The use of such knowledge of humanity makes sense. If we want to influence how others buy, think or vote (how they choose), we should understand how they think, buy and vote.

Behavioral psychology seems to be used in practice mostly to manipulate or influence others. It does not matter that some who do so act in the name of beneficent-sounding terms such as libertarian paternalism; it is still a tool applied by others onto us. But if the tools of psychology are useful for others attempting to influence us toward better choices, why do we not use our knowledge of psychology to help ourselves make better decisions? Better geoscientific decisions? An inward use of the knowledge certainly has the advantage of being directed towards benefits that we choose.

But how can we do this? The answer to this question brings us back to my mistakes. My mistakes are useful because they can be used as an example to bridge the gap between some of the lessons of behavioral psychology and the choices made by geoscientists.

Maslow’s Hammer and Experimenter’s Bias

Maslow (1966) gave us the now-famous observation that, to a hammer, everything is a nail. He was speaking of the extraordinarily strong tendency of human beings to rely, sometimes to their detriment, on familiar, comfortable tools, whether or not they are appropriate to the problem at hand. This cognitive bias has many names that can be used to impress friends at parties: the law of the instrument, the Birmingham screwdriver, the golden hammer, or the Einstellung effect are just a few. Geoscientists are no exception to this human tendency to use what we are used to, what we are well versed in, or what has perhaps gotten us notoriety or economic success in the past.

Even if we may forgive ourselves for depending too much on Maslow’s Hammer, we geoscientists are likely less comfortable when considering our own use of experimenter’s bias. Jeng (2006) makes a survey of this bias, which is our predilection to believe, publish and promote data that agrees with our expectations and to ignore, suppress or denigrate data that conflicts with those expectations. Maslow’s Hammer likely plays a part in this bias, for preferred outcomes, or expectations, will often go hand in hand with a preferred tool. Experimenter’s bias is the more uncomfortable bias because it contradicts our feelings of scientific integrity. And yet we may ask: how many examples of failure do we see in the literature of the CSEG?

Example one: the frustratingly non-unique Radon transform

The Radon transform went through a period of great and manifold development for seismic applications starting in 1985, lasting for about twenty years. This development is remarkable for its ingenuity alone but is also noteworthy because the problem of experimental bias played an ongoing role.

The Radon transform essentially represents data in time and space by families of curves with onset times (tau) and ray parameters (p). The hyperbolic family was identified early on as being apt for separating primary and multiple events on common midpoint (CMP) gathers by creating a velocity space. Unfortunately, there are several well-known mathematical issues with the Radon transform, chiefly its ill-posedness. Thorsen and Claerbout (1985) discussed how the ill-posedness of the transform is exacerbated in seismic data by missing data in the CMP gather, chiefly at the near and far offsets. The data truncation at these edges can never be fully removed in seismic data (that would be physically impossible), and it creates a smeared, non-unique response that limits the ability to separate primaries and multiples. To reduce the non-uniqueness of their results, Thorsen and Claerbout introduced sparsity constraints, which reduced the smear in p space, though at horrendous (and at the time, potentially bankrupting) computational cost.

As a problem in logic, I will argue that no constraint, or inserted prior knowledge, can be truly effective if the physical assumption that it represents is not valid. That is, if the assumption is not true, the product of its use may be false. The sparsity constraints for the Radon transform are partially valid. The idea of the constrained transform simulating a gather of infinite offset range, and thus creating sparsity in p space, makes intuitive sense, is easy to imagine, and was developed commercially in a series of steps. Hampson (1986) employed an ingenious method to make the transform work given the processing limitations of the time, though he was forced to give up on Thorsen’s sparsity and had to use parabolas instead of hyperbolas. Sacchi and Ulrych (1995) introduced a fast method for including sparsity in p, and Cary (1998) argued that only by including constraints in both tau and p can the necessary and sufficient conditions (Hughes et al., 2010) be met for a desirable solution. Cary’s explicit and correct use of logic is rare in the literature. Many others, from myself (Hunt et al., 1996) through to Ng and Perz (2004), illustrated the improved results of their sparse Radon transforms, which were developed using these ideas.
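For readers who have not worked with the transform, a minimal, non-sparse sketch of a damped least-squares parabolic Radon transform in the frequency domain, loosely in the spirit of Hampson (1986), is given below. The function name, the curvature parameterisation and the damping value are illustrative assumptions rather than any published algorithm; sparse methods essentially replace the fixed damping with data-dependent, iteratively reweighted solves to sharpen events in tau-p.

```python
# A minimal, non-sparse sketch of a damped least-squares parabolic Radon
# transform, solved frequency by frequency. Illustrative only.
import numpy as np

def parabolic_radon(d, dt, x, q, eps=0.1):
    """d: CMP gather (nt, nx); dt: sample interval (s); x: offsets (m);
    q: residual moveouts at the far offset (s); eps: damping.
    Returns the tau-p (here tau-q) model of shape (nt, nq)."""
    nt, nx = d.shape
    xn = (x / x.max()) ** 2                      # normalised parabolic term
    D = np.fft.rfft(d, axis=0)                   # (nf, nx)
    freqs = np.fft.rfftfreq(nt, dt)
    M = np.zeros((len(freqs), len(q)), dtype=complex)
    for k, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        L = np.exp(-1j * w * np.outer(xn, q))    # forward operator: delay of q*(x/xmax)^2
        A = L.conj().T @ L + eps * np.eye(len(q))
        M[k] = np.linalg.solve(A, L.conj().T @ D[k])
    return np.fft.irfft(M, n=nt, axis=0)
```

Running such a sketch on a synthetic gather containing a primary and a multiple with small differential moveout reproduces the kind of smearing discussed above, which is precisely the non-uniqueness that the sparse algorithms set out to reduce.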

Virtually all of these many papers (and most major seismic processing companies of the time seemed compelled to publish their own solution) showed near perfect synthetic results in which non-uniqueness was apparently eliminated. It is within these many papers and their near perfect, unique, synthetic results that experimental bias was at work. Consider Ng and Perz’s (2004) paper and a redisplay of their Figures 1, 2a and 5a, below, which I rename Figure 1. We see in Figure 1a the input CMP, which is moveout corrected and has only one primary, at a tau of 1100ms. The rest of the events, including another event with a tau of 1100ms, are multiples. Figure 1b shows the non-sparse Hampson (1986) algorithm’s Radon transform space. It shows the smeared truncation artefacts and some overlap of energy in p at a tau of 1100ms from the primary and multiple. Figure 1c shows Ng and Perz’s (2004) sparse Radon algorithm’s space. Events are well localized in tau and p. The two events at a tau of 1100ms are distinct and separate.

Figure 1. Synthetic example from Ng and Perz (2004). (a) Input CMP gather, (b) Radon transform space from the Hampson (1986) algorithm, (c) Radon transform space from the sparse algorithm.

This example is typical of the many papers of the time and makes the argument that these sparse algorithms have virtually eliminated the non-uniqueness in the Radon transform. No author that I am aware of has claimed to have completely eliminated non-uniqueness, but the examples given show positive synthetic results with no material uniqueness issues remaining. A reader of such papers likely knows that the sparsity will eventually fail to mitigate the non-uniqueness at some tiny moveout (the authors certainly know this), but these examples are generally not shown. This omission by itself is an argument for experimental bias, but the bias is in fact much more overt.

Hunt et al (2011) showed that there are other relevant variables impacting both the effectiveness of multiple attenuation and the uniqueness of the Radon transform. Yes, I had historically been an unwitting part of the experimental bias, but this further examination of the problem of short-period multiple elimination made my co-authors (who included Mike Perz from the 2004 paper) and me realize that this cognitive bias was denying us a better understanding of the non-uniqueness problem in the Radon transform and in its use for effective multiple attenuation.

Figure 2, from Hunt et al (2011), shows an example of non-uniqueness in the Radon transform space due to varying onset, or tau, times of primary and multiple events. Figure 2 illustrates a primary and a multiple with the same amplitudes. The differential moveout of the multiple with respect to the primary is 20ms at 2500m offset. The intercept time, tau, of the multiple is varied, and the offsets were taken from a typical CMP from the 3D to simulate land 3D irregularity. A 35 Hz Ricker wavelet was used for each element of this figure. In Figure 2a, the multiple starts 10ms above the primary, in Figure 2b both events start at the same time, and in Figure 2c the intercept of the multiple is 10ms below the primary. The corresponding sparse Radon transform spaces are shown in Figures 2d, 2e and 2f. The primary and multiple are not resolved in Figure 2d. In Figure 2e, two events are resolved in p, but the position of the multiple is slightly incorrect. Only in Figure 2f are the primary and multiple separately resolved and in their correct positions. Relative onset times by themselves control tuned character on the CMP gathers, which unsurprisingly controls uniqueness in the Radon transform space of even sparse algorithms. This is a material observation of the transform’s continuing non-uniqueness. How was this missed in the literature, except by experimental bias?

Figure 2. From Hunt et al (2011). A multiple with 20ms of differential moveout relative to the primary event at 2500m was generated. Each event has the same amplitude, and a 35 Hz Ricker wavelet was used. The intercept time, tau, of the multiple is varied in this figure. In (a) the intercept of the multiple is 10ms above the primary, in (b) both events start at the same time, and in (c) the intercept of the multiple is 10ms below the primary. The Tau-p spaces for (a), (b), and (c) are given in (d), (e), and (f), respectively. Despite using a sparse Radon algorithm (Ng and Perz, 2004), the events are not completely separated in the Tau-p space of (d).

Let us examine the effect of changes in wavelet size on the ability of the sparse Radon transform to separate multiples and primaries properly (Hunt et al, 2011). The simple model of Figure 3 illustrates this effect. A primary and a multiple are depicted in Figures 3a, 3b, and 3c. In each case, the multiple starts 10ms above the primary and has a differential moveout of 20ms at 2500m. The primary and the multiple have equal amplitude, and the offset bins are perfectly regular. The only differences between these three images are that the wavelet of the data changes from a 15 Hz Ricker in Figure 3a, to a 35 Hz Ricker in Figure 3b, to a 60 Hz Ricker in Figure 3c. Figures 3d, 3e, and 3f represent the sparse Radon transform spaces for Figures 3a, 3b, and 3c, respectively. The Radon transform space (Figure 3d) corresponding to the low resolution gather clearly does not resolve the primary or the multiple; most of the energy is on the zero-moveout curvature. The mid resolution gather fares little better: Figure 3e shows the energy is misallocated in quantity and position. Only the highest resolution gather of Figure 3c and its Tau-p space of Figure 3f do a perfect job of resolving both events and correctly representing the moveout of each event. The greater the wavelet resolution, the less non-uniqueness in the Radon transform. That resolution affects the uniqueness of the Radon transform should have been obvious, but it had not been a focus of the work to this point. By explicitly illustrating this now unsurprising shortcoming, Hunt et al (2011) were able to focus on increasing the resolution of the wavelet as much as possible.

Figure 3. Figure 3 from Hunt et al (2011). The effects of wavelet resolution on the uniqueness, or resolution, of the Radon transform. The offset bins are perfectly regular, with 50m spacing. Figures (a), (b), and (c) depict a flat primary and a multiple. In each case, the multiple starts 10ms above the primary and has a differential moveout of 20ms at 2500m. We used Ricker wavelets with dominant frequencies of 15 Hz, 35 Hz, and 60 Hz in (a), (b), and (c), respectively. Figures (d), (e), and (f) represent the sparse Radon transform spaces for (a), (b), and (c), respectively. The low resolution gather of (a) yields an inaccurate and unresolved Radon transform space in (d). The result is inaccurate in a different way for the wavelet used in the gather of (b) and the corresponding tau-p space shown in (e). The problem is only completely resolved with the highest frequency wavelet of (c) and the Tau-p space of (f).
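Since the Ricker wavelets drive the resolution comparison above, a small sketch of how they might be generated and compared follows. Only the 15, 35 and 60 Hz dominant frequencies come from the text; the sample interval, wavelet length and the breadth measurement are illustrative assumptions.

```python
# Minimal sketch: generate Ricker wavelets and compare their breadths.
import numpy as np

def ricker(f_dom, dt=0.002, length=0.256):
    """Zero-phase Ricker wavelet with dominant frequency f_dom (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f_dom * t) ** 2
    return t, (1.0 - 2.0 * a) * np.exp(-a)

for f in (15.0, 35.0, 60.0):   # the three dominant frequencies compared in Figure 3
    t, w = ricker(f)
    half = t > 0
    trough = t[half][np.argmin(w[half])]   # side-lobe position on the positive-time half
    print(f"{f:>4.0f} Hz Ricker: trough-to-trough breadth ≈ {2 * trough * 1000:.1f} ms")
```

The breadth shrinks roughly in proportion to 1/f, which is one simple way to see why the 60 Hz gather resolves the 10 ms onset difference that the 15 Hz gather cannot.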

What are we to glean from this example? I had been a small part of the history of experimental bias concerning the non-uniqueness of the Radon transform. This cognitive bias was not at work as part of a conspiracy. The authors were honestly attempting to reduce non-uniqueness in the transform. But that these two obvious sources of non-uniqueness were never explicitly shown in the literature until recently suggests that experimenter’s bias exists. We rarely show the bad examples or look for them. We human beings are caught focussing on the narrow little points of our main purpose, and we toss aside, often unthinkingly, that which does not contribute to that simple coherent idea. Some readers will note the coherency bias (Kahneman, 2011) at this point. In the case of the Radon transform, there is little doubt that the authors involved did reduce its non-uniqueness. But they missed an opportunity to do a better job and gain a clearer understanding by not highlighting these limitations.

Example two: experimental bias and Maslow’s Hammer with AVO

Amplitude versus offset analysis (AVO) is one of the most popular diagnostic tools of the modern seismic practitioner. It enables elastic rock property estimates of the earth from a large portion of the historical p-wave seismic data. As useful as AVO derived estimates can sometimes be, geophysicists may sometimes come to over-rely on them and may also sometimes apply experimenter’s bias where they are concerned.

In the 2012 CSEG Symposium, I showed a case study regarding the controls on Wilrich production in which AVO was shown to be ineffective. The most pressing questions from the audience were not about the many details of the novel method that my co-authors and I had created to quantitatively demonstrate the importance of steering in obtaining better productivity; they were instead fixated on why I could not make AVO “work”. When my co-authors and I later published this study in Interpretation (Hunt et al, 2014), the biggest questions from the peer review again ignored the core work of the paper and focused on the apparent failure of AVO. I was told that I had to prove why I did not include AVO estimates in my method. This is quite apparently an example of both experimenter’s bias and Maslow’s Hammer.

The target of the work was the tight, deep-basin Wilrich sandstone in Alberta, Canada, which exhibits low permeability (0.05 to 0.1mD) and low to moderate porosity (6% to 8%). Figure 4 is the petrophysical type log used in the paper. The agreement of log and core effective porosities is shown in Figure 4b, and the mineralogic content is shown in Figure 4c. The core effective permeabilities of Figure 4d and the thin section images of Figure 4e were important elements of the paper, which was concerned with how well each of the upper, middle and lower sections of the Wilrich would contribute to production and whether relative well placement in a particular zone would be important. Of importance is the coal section, shown in black shading in Figure 4c, which lies directly on top of the Wilrich sand. This coal section is of variable thickness in the area and has materially different rock properties than the Wilrich.

Figure 4. Petrographic analysis of the Wilrich reservoir using log, core, and thin section data. The upper, middle, and lower units are separated by blue dashed lines, and the upper and lower units are also labelled in text. (a) The gamma ray open hole log. (b) The multimineral log estimate of effective porosity. Core plug effective porosity measurements are identified and overlain with black dots. (c) The color-coded mineralogical interpretation from the logs. Quartz is colored yellow, dolomite is purple, coal is black, shale is grey-brown, clay is green-brown, red is gas-filled porosity, and blue is bound water. (d) The deep resistivity log curve with core permeability measurements identified with black dots. (e) Illustrates representative thin section images of each unit at the same magnification.

Hunt et al (2014) evaluated well length, fracture stimulation data, geologically mapped porosity thickness (Phi-h), reservoir pressure, mud gas and gamma ray logs, and the relative position of the horizontal wellbore in each of the upper, middle, and lower Wilrich as estimated quantitatively from wellsite data. To this was added interpreted 3D seismic data including stack response, AVO (including inverted Lame parameters), azimuthal AVO (AVAz), velocity variation with azimuth (VVAz), curvature and coherency in an exhaustive array of windows about the Wilrich. All these parameters were gridded along the horizontal wellbores and made correlatable to each other, as well as to productivity, on a wellbore-by-wellbore basis. Using the seismic data to estimate bin-by-bin Phi-h along the wellbores was highly desirable, so an exhaustive effort was made to determine whether any of the seismic variables, chiefly stack and AVO or AVO inversion variables, would correlate to the 114 vertical control wells, each of which had an effective Phi-h value from multimineral geologic well analysis.

The effort to correlate seismic variables (including AVO) to the Phi-h of the vertical wells failed completely, and this failure was the only exigent issue for audience members at the 2012 Symposium and for the peer-review process for Hunt et al (2014). The questioning of this failure was remarkable in a paper in which AVO was but one tool of many, and in which production estimation, not AVO estimation, was the key scientific question. Having been there, I can tell you that no one wanted AVO to fail. I know this because of the direct communication made to me as first author, and because I also wanted AVO to be effective. This experience was a searing example of experimenter’s bias. Hunt et al (2014) did exhaustively prove, through modeling and real data correlations, that AVO both could not and did not predict the Wilrich Phi-h. They also proved why: it was because of the variable coal, which directly overlay the sand and dominated the reflection response at all angles of incidence.

All but ignored in the angst over the failure of AVO to be effective in the estimation of the Wilrich sand Phi-h, were several important lessons. Figure 5 shows the correlation coefficients for a few of the seismic variables in the study as they related to both Wilrich sand Phi-h and the overlying coal isopach in the 114 vertical wells. None of the seismic attributes passed the 1% p-test of statistical significance for predicting the Wilrich Phi-h. All but one (the curvature) of the seismic variables had a statistically significant correlation to the coal isopach.

Figure 5. The correlation coefficients describing the relationship between a subset of the tested seismic attributes and the Wilrich Phi-h and the overlying coal isopach. A dark yellow shading indicates that the 1% p value test for significance has been met.
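To make the 1% significance test concrete, the sketch below shows how such a correlation screen might be run. The arrays are synthetic stand-ins (the real study used 114 measured well and attribute values), so only the test procedure, not the numbers, reflects the paper.

```python
# Hedged sketch: screening a seismic attribute against two targets at the 1% level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_wells = 114                                    # number of vertical control wells in the study
coal_isopach = rng.uniform(0.0, 10.0, n_wells)   # synthetic coal thickness, m
avo_attribute = 0.6 * coal_isopach + rng.normal(0.0, 2.0, n_wells)  # synthetic, coal-driven attribute
wilrich_phih = rng.uniform(0.0, 3.0, n_wells)    # synthetic Phi-h, uncorrelated by construction

for name, target in [("Wilrich Phi-h", wilrich_phih), ("coal isopach", coal_isopach)]:
    r, p = stats.pearsonr(avo_attribute, target)
    verdict = "significant" if p < 0.01 else "not significant"
    print(f"AVO attribute vs {name}: r = {r:+.2f}, p = {p:.3g} ({verdict} at the 1% level)")
```

With synthetic inputs built this way, the attribute passes the test against the coal isopach and fails it against Phi-h, mirroring the pattern reported in Figure 5.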

The key lessons here are manifold:

  1. The seismic attributes were ineffective for predicting Wilrich Phi-h, but this is not necessarily because they did not “work”. The opposite is likely true.
  2. The seismic attributes, including AVO and AVAz, were correlated to the coal isopach. This is because these attributes are reflectivity based. AVO did likely work, just not in a way that was useful to the commercial purpose at hand.
  3. That AVAz response is reflectivity based is well known, but that it may be controlled by overlying coal, which is known to be anisotropic, is an important and obvious conclusion from this work.

Lessons #2 and #3, while not necessarily pleasing, were later repeated numerous times in other work that I engaged in, most of which was unpublished, except for the 2016 CSEG Symposium where Bahaa Beshry spoke of our stress work. There he showed again that AVAz measures showed statistically significant correlations to coal isopach in sections of the data where coal tuning is present. The biggest lesson here is that experimental bias can be so strong that we miss an unexpected but important learning.

Loss Aversion and Neglect of Probabilities, or when we are facing a loss, we often make it worse

The most common errors that I have been a part of have been caused by loss aversion and neglect of probability. For most of my career, I lacked the vocabulary to effectively treat this issue. Kahneman (2011) does an excellent job of illustrating these issues, which often go together, and of providing language around them. Loss aversion, simply put, is the irrational hatred that human beings have of taking a loss. This is not a suggestion that people should enjoy losing, but an observation of the objective fact that they dislike losing more than is rational. An example from Kahneman (2011) follows:

Which do you choose:
Lose $900 for sure, or take a 90% chance of losing $1,100?

Kahneman studied this and many similar questions empirically and found that the choice most people make is to take the gamble, which is laughably irrational. Kahneman’s conclusion is that when all options are bad, people like to gamble.

Note that the expected value of each decision is a matter of simple arithmetic, as the short calculation below shows. It may be that some of those who chose the gamble had performed this arithmetic and knew that the gamble had a larger expected loss, but it is likely that many did not extend their effort even to this trivial level. In the world of geosciences and business operations, matters often go wrong, and similar, though more complex, choices will arise. The probabilities and expected values relevant to such choices may not be so glaringly obvious. Geoscientists and decision makers often have another meta-choice before them: to gamble blindly, or to lay out the probabilities and expectations in a rational decision-making framework.
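Here is that arithmetic, written out with the figures quoted above:

```python
# Expected value of the sure loss versus the gamble described above.
sure_loss = -900.0                       # lose $900 for sure
ev_gamble = 0.90 * -1100.0 + 0.10 * 0.0  # 90% chance of losing $1,100, 10% chance of losing nothing

print(f"EV(sure loss) = {sure_loss:.0f}")
print(f"EV(gamble)    = {ev_gamble:.0f}")
# The gamble's expected loss (-990) is $90 worse than the sure loss (-900),
# yet the gamble is the option most people choose.
```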

The most vexing observation from my experience is how often, in a business with ubiquitous uncertainty, we have disregarded probabilities when making decisions. This common cognitive bias is called neglect of probability. It appears to go hand in hand with loss aversion, and it often exacerbates it.

We might deny that we have made irrational decisions due to loss aversion and neglect of probabilities, but there is seldom a signpost or adjudicator that stops us and tells us when this is happening. I have observed or been an active participant in these biases at many of the companies that I have worked at. On some occasions, I have suspected that our decision processes were in error but have lacked the vocabulary or tools to properly address the issue. Sometimes, I am sure, the other decision makers also suspected that irrational choices were being made. Some of the general circumstances in which these biases have occurred are:

  • Following a property acquisition, when we discover we paid too much and act rashly, hoping to “drill our way out” of the mistake and avoid facing the loss.
  • After operational issues with drilling, when significant money has been spent and we face an attempt to remediate the operation or abandon it. Choices are often made with an irrational bias towards further operations in the original wellbore.
  • After completion issues in a new operation, and we face the loss of the wellbore.
  • Every time we fail to quantitatively determine the predictive accuracy of our seismic attributes, data or interpretation for the target property on which a decision is about to be made.

Virtually every element of geoscientific advice we give has an element of uncertainty to it, and if we do not in some way address this, we may be engaging in neglect of probability.

Example three: operations, Loss Aversion and Neglect of Probabilities

At one of my companies, we drilled a long, deep, expensive horizontal well. Wellsite data was incredibly positive, and we were quite certain that the wellbore would be highly economic. The open hole completion equipment was placed, but the wellbore abruptly failed within the near-heel portion of the horizontal section.

The decision at hand was whether we should attempt to remediate the wellbore or drill an entirely new well (a twin), which would mean taking a multi-million-dollar loss on the first well, deemed a failure under that decision. In the multidisciplinary meeting where this decision was taken up, the initial feeling was that remediation would be the best choice.

But I had seen this type of issue before, and it was suggested that we consider the problem using decision analysis (Newendorp, 1975): create a decision tree and populate it with the relevant probabilities and costs. Figure 6 shows the cartoon decision tree. To successfully remediate the original well, three operations had to be executed successfully: the stabilization of the wellbore, the patching of the breach, and the subsequent fracture stimulation of the well. The decision analysis was quite simple, its construction was not time consuming, and the probabilities were not difficult for the operational team to estimate. The analysis clearly showed that, in this example, the remediation path had a lower expected value. The decision to twin was made, and the twin was successful. The cumulative effect of considering the three sub-probabilities of the operation yielded a much different, and easily measurable, answer than considering the operation as a whole.

Figure 6. Decision Analysis mockup for the example. Each (round) chance node requires a probability and must be successful for the remediation path to be successful. Once probabilities and costs were supplied to each node, the rational decision was clear.
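A minimal sketch of this kind of decision-tree comparison is shown below. Every probability and dollar figure is a hypothetical placeholder (the actual values from the operation are not given in this article); the point is only the structure: the remediation branch pays off only if all three chance nodes succeed.

```python
# Minimal sketch of the decision-tree comparison described above.
# All probabilities and dollar figures are hypothetical placeholders.

p_stabilize = 0.8   # chance the wellbore can be stabilized (hypothetical)
p_patch     = 0.7   # chance the breach can be patched (hypothetical)
p_frac      = 0.9   # chance the subsequent frac succeeds (hypothetical)

cost_remediation = 3.0   # $MM spent on the remediation attempt (hypothetical)
cost_twin        = 6.0   # $MM to drill and complete a twin (hypothetical)
value_success    = 15.0  # $MM value of a producing wellbore (hypothetical)

# The remediation branch pays off only if all three chance nodes succeed.
p_rem_success = p_stabilize * p_patch * p_frac            # 0.504

ev_remediate = p_rem_success * value_success - cost_remediation
ev_twin      = 1.0 * value_success - cost_twin            # twin treated as (near) certain here

print(f"P(remediation succeeds) = {p_rem_success:.3f}")
print(f"EV(remediate) = {ev_remediate:.2f} $MM, EV(twin) = {ev_twin:.2f} $MM")
# With these placeholder numbers the twin has the higher expected value,
# mirroring the conclusion reached in the meeting.
```

The design point is the one made in the text: multiplying the three sub-probabilities exposes how unlikely the remediation path really is, which is easy to miss when the operation is judged as a whole.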

A world with no mistakes

The only productive reason to review mistakes, or decisions made under cognitive biases, is to improve our decision making going forward. An understanding of logic and behavioral psychology is neither best put towards conceits at a cocktail party nor to manipulating others. This knowledge is best applied inward, to improve ourselves and our decision processes. This is not something to learn once and feel superior over, it is a thing to learn and relearn and be ever watchful of the moment when, ego-depleted or in a hurry, we forget the lesson and make a new mistake. To err is human, and to be human is to always have a set of error tendencies just waiting for the proud, lazy moment and our undoing.

We will never be free of fallibility in our thinking, or of potential biases.

While we may no more eliminate mistakes than we can eliminate our own humanity, we can on a case by case basis minimize them through awareness, wariness, and humility.

About the Author(s)

Lee Hunt graduated from the University of Alberta with a B.Sc. in geophysics in 1990, and immediately started his career with PanCanadian Petroleum Ltd. His experience ranged from interpretation to managing a business unit, from the old conventional days of high-risk exploration to the new days of high-capital resource plays. Lee has drilled over 400 wells in most of the play types within the Western Canadian Sedimentary Basin. His work has focused on performing the quantitative analysis of a variety of geophysical technologies including: multiple attenuation, resolution enhancement, depth and geo-hazard predictions, stress estimation, AVO, AVAz, VVAz, curvature, prediction of fluid, lithology, porosity and fracture treatment production characteristics. At Jupiter Resources, Lee and others formed two self-organizing teams, one of which was concerned with the interconnectedness of the company, and the other with technical problems in geosciences and engineering.

Throughout his career, whenever possible, Lee has shared his technical findings with the larger CSEG community through talks, papers and articles. Lee and co-authors won the Best Oral presentation Award for the 1997 SEPM convention, the 2000 CSEG Convention Best CSEG Paper, the 2008 CSEG Convention Best Abstract, and the 2008 Best Technical Luncheon talk. He and his co-authors also received the Best CSEG Paper in 2010, the Best Exploration Paper at VII INGPET in 2011, Best Paper in the CSEG Recorder in 2011, and Honorable Mention for Best Paper in the TLE in 2011. The TLE paper subject matter was short period multiple attenuation and was an extension of some of Lee's earliest technical work. Lee has served the CSEG by volunteering in various capacities and won many awards.

In 2017, after twenty-seven years as a geophysicist, Lee left the industry to pursue a lifelong dream of writing fiction. He is the author of the Dynamicist Trilogy (https://www.leehunt.org).

References

Cary, P., 1998, The simplest discrete Radon transform: 68th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 1999-2002.

Hampson, D., 1986, Inverse velocity stacking for multiple elimination: J. Can. SEG, 22, 44-55.

Hughes, W., J. Lavery, and K. Doran, 2010, Critical Thinking: an Introduction to the Basic Skills, sixth edition: Broadview Press.

Hunt, L., P. Cary, W. Upham, 1996, The impact of an improved Radon transform on multiple attenuation: SEG Extended Abstracts, 1535-1538.

Hunt, L., R. Reynolds, M. Hadley, S. Hadley, Y. Zheng, M. Perz, 2011, Resolution on multiples: Interpreters' perceptions, decision making, and multiple attenuation: The Leading Edge, 30, 890-904.

Hunt, L., S. Hadley, S. Reynolds, R. Gilbert, J. Rule, M. Kinzikeev, 2014, Precise 3D seismic steering and production rates in the Wilrich tight gas sands of West Central Alberta: Interpretation, Vol. 2, No. 2 (May 2014) p. 1-18. http://dx.doi.org/10.1190/INT-2013-0086.1.

Jeng, M., 2006, A selected history of expectation bias in physics: American Journal of Physics, 74 (7): 578-583.

Kahneman, D., 2011, Thinking, Fast and Slow: Farrar, Straus and Giroux.

Marshall, S. L. A., 1947, Men Against Fire: The Problem of Battle Command: Peter Smith Publisher.

Maslow, A., 1966, The Psychology of Science: A Reconnaissance: Harper Collins.

Newendorp, P. D., 1975, Decision Analysis for Petroleum Exploration: Pennwell Publishing Company.

Ng, M., and M. Perz, 2004, High resolution Radon transform in the t-x domain using “intelligent” prioritization of the Gauss-Seidel estimation sequence: 74th Annual International Meeting SEG, Expanded Abstracts, 23, 2160-2163.

Sacchi, M. D., and T. J. Ulrych, 1995, High-resolution velocity gathers and offset space reconstruction: Geophysics, 60, 1169-1177.

Thaler, R., and C. Sunstein, 2008, Nudge: Improving Decisions about Health, Wealth, and Happiness: Yale University Press.

Thorsen, J. and J. Claerbout, 1985, Velocity-stack and slantstack stochastic inversion: Geophysics, 50, 2727-2741.



Cognitive Dispositions to Respond

Chris is an Intensivist and ECMO specialist at the Alfred ICU in Melbourne. He is also the Innovation Lead for the Australian Centre for Health Innovation at Alfred Health and Clinical Adjunct Associate Professor at Monash University. He is a co-founder of the Australia and New Zealand Clinician Educator Network (ANZCEN) and is the Lead for the ANZCEN Clinician Educator Incubator programme. He is on the Board of Directors for the Intensive Care Foundation and is a First Part Examiner for the College of Intensive Care Medicine. He is an internationally recognised Clinician Educator with a passion for helping clinicians learn and for improving the clinical performance of individuals and collectives.

After finishing his medical degree at the University of Auckland, he continued post-graduate training in New Zealand as well as Australia’s Northern Territory, Perth and Melbourne. He has completed fellowship training in both intensive care medicine and emergency medicine, as well as post-graduate training in biochemistry, clinical toxicology, clinical epidemiology, and health professional education.