15  Cognitive Biases

Authors

Matthew Dworkin, Jacob Steinhardt

15.1 Overview

In this section we cover cognitive biases, which we define as any tendency of the brain that does not reliably lead to correct conclusions. Cognitive biases can lead to poor forecasts that often don’t feel incorrect, so it is useful to be explicitly aware of them. We will also cover some forecasting techniques that serve as antidotes to cognitive biases.

15.2 Warmup Example

Suppose the following questions are posed separately, each to a different group of people.

  1. An oil spill happens off the coast of California. There are 2,000 gulls that are in danger of being caught in the spill. You are governor of California and can pay a non-profit to relocate the gulls out of harm’s way. How much are you willing to pay to achieve this?

  2. An oil spill happens off the coast of California. There are 200,000 gulls that are in danger of being caught in the spill. You are governor of California and can pay a non-profit to relocate the gulls out of harm’s way. How much are you willing to pay to achieve this?

These questions are identical except for the number of gulls in danger. If the people in both groups are calibrated, the total amount each group is willing to pay should scale with the number of gulls in harm’s way; equivalently, the implied cost per bird should be roughly the same across the two groups (barring other considerations, such as the possibility of extinction, that arise if the number of gulls in danger is large enough).
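As a concrete illustration (using a purely hypothetical figure): if the first group is willing to pay $50,000 in total, a scope-sensitive answer to the second question would be on the order of \[50{,}000 \times \frac{200{,}000}{2{,}000} = 50{,}000 \times 100 = 5{,}000{,}000,\] i.e. about $5 million, since the number of gulls at risk is 100 times larger.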

15.3 Scope Neglect

Scope neglect is a cognitive bias where one fails to account for the effect of a multiplicative factor on a numerical estimate.

15.3.1 Example of Scope Neglect

Consider the following question:

  1. What is the probability that the Bay Area has an earthquake (magnitude >= 4) in the next month?

Spend a couple of minutes thinking about this question and write down an answer before moving to the next part.


(whitespace to avoid spoilers)

  2. What is the probability that the Bay Area has an earthquake (magnitude >= 4) in the next year?

Spend another couple of minutes thinking about this question and write down an answer before moving to the next part.


(whitespace to avoid spoilers)

These questions are identical except for the duration of time (next month vs. next year). Suppose your answer to question 1 is 0.15 (i.e., the probability of NOT having an earthquake in the next month is 0.85). \[\mathbb{P}(\text{earthquake in next month}) = 1 - \mathbb{P}(\text{no earthquake in next month})\] \[= 1 - 0.85 = 0.15\]

Assuming independence across months, this would extend to give the following answer for question 2: \[\mathbb{P}(\text{earthquake in next year}) = 1 - \mathbb{P}(\text{no earthquake in next year})\] \[= 1 - \mathbb{P}(\text{no earthquake in next month})^{12}\] \[= 1 - 0.85^{12} \approx 0.86\]

Perform the calculations above using the number you actually came up with as your answer to question 1. Does your answer to question 2 match the one you get by extending your estimate from question 1?
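A minimal sketch of this consistency check, assuming independence across months (the variable names and the 0.15 placeholder are just for illustration; substitute your own answer to question 1):

```python
def extend_forecast(p_month: float, months: int = 12) -> float:
    """Extend a per-month probability to a longer horizon,
    assuming the event is independent across months."""
    return 1 - (1 - p_month) ** months

p_month = 0.15  # placeholder: your answer to question 1
p_year = extend_forecast(p_month)
print(f"Implied probability for the next year: {p_year:.2f}")  # ~0.86
```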

For many people, the answer to question 2 ends up being quite far off from the extended version of their answer to question 1. This is an example of scope neglect: the intuitive guess for the longer time window is inconsistent with what the guess for the shorter time window implies. In other words, the intuitive answer fails to properly account for the multiplicative factor on the window of time.

15.3.2 Antidotes to Scope Neglect

Below are two ways to combat scope neglect:

  1. Performing a Fermi estimate to ensure the order of magnitude of one’s forecast makes sense for the given scope of the problem (e.g., the time window)
  2. Sanity checking: asking oneself “why X and not 5X?”

15.4 Extension Neglect

Extension neglect is a generalization of scope neglect in which one fails to account for the size of the relevant set, such as a sample size or a time window.

15.4.1 Antidotes to Extension Neglect

Below are two ways to combat extension neglect:

  1. Make forecasts across different time horizons and check if they are consistent (see the sketch after this list), or select one time horizon for which you have the best intuition
  2. Decompose the problem
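A minimal sketch of the first antidote, assuming the event is independent across months and using made-up forecast values:

```python
def implied_monthly_probability(p_horizon: float, months: int) -> float:
    """Back out the per-month probability implied by a forecast over
    a horizon of `months` months, assuming independence across months."""
    return 1 - (1 - p_horizon) ** (1 / months)

p_month_forecast = 0.15  # hypothetical "next month" forecast
p_year_forecast = 0.50   # hypothetical "next year" forecast for the same event

implied = implied_monthly_probability(p_year_forecast, 12)
print(f"Per-month probability implied by the yearly forecast: {implied:.3f}")
print(f"Per-month probability forecast directly:              {p_month_forecast:.3f}")
# A large gap between these two numbers suggests the forecasts are inconsistent
# and at least one of them should be revised.
```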

15.5 Anchoring Bias

Anchoring bias occurs when we have too much inertia around our first idea or estimate. Anchoring bias also occurs when we forget to revisit assumptions or are too slow to revise a hypothesis given new information.

An example of this is when one makes a forecast two different ways and the final estimates from each method “magically” end up close to one another. This can happen when the forecaster has the initial forecast in their head while coming up with the second estimate and they subconsciously “fudge the numbers” to make the forecasts consistent.

15.6 Confirmation Bias

Confirmation bias occurs when one rejects information that contradicts their beliefs and accepts information that reinforces them. It also occurs when one relies too heavily on a single information source, or on sources with similar biases.

15.7 Antidotes to Anchoring and Confirmation Bias

Below are a few ways to combat anchoring and confirmation bias:

  1. Sanity checking: Ask oneself “are there assumptions I made that I forgot to check or account for?”
  2. Reversal test: Ask oneself “if I saw A then B, rather than B then A, would the answer be different?” or “if I don’t want to increase X, how do I feel about decreasing X?”
  3. Discuss with other people. This can reduce the chance of relying too heavily on a single source of information.
  4. Have a wide variety of analogies, modeling tools, and strategies for approaching problems to avoid relying too heavily on any single one.
  5. Consider problems from many different angles, resisting urges to make them agree.
Misha’s comment

Another idea is allowing yourself to wear “multiple hats.”

Say, for the question "Will Alexander Lukashenko remain president of Belarus on January 31st, 2021?" I approached the problem from the perspectives of the following people:

  • “busy [forecaster] who makes appropriate reference class forecast and moves on”,
  • “a Russian with in-depth [on-the-ground] knowledge of the protest”,
  • “a senior Belarusian politician trying to outplay the situation”,
  • “a senior Russian politician trying to outplay the West”.

15.8 Availability Bias

Availability bias occurs when one relies on the most salient data rather than using all relevant evidence. People tend to overrate data from their own personal experience and from their immediate social groups / subculture.

Consider the question: “What is the probability of a military or terrorist strike on U.S. soil in the next 10 years?”

People tend to give different answers depending on whether they are old enough to remember September 11th, 2001. Perhaps a more calibrated approach would be to consider historical base rates: Pearl Harbor (1941), the US Capitol Shooting (1954), the World Trade Center Bombing (1993), and September 11 (2001) give four such events in the roughly 80 years since 1941, or a base rate of about once per 20 years.
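As a rough sketch of how this base rate translates into a forecast (treating each year as an independent draw with a 1-in-20 chance of such an event, which is a strong simplifying assumption): \[\mathbb{P}(\text{at least one strike in next 10 years}) \approx 1 - \left(1 - \tfrac{1}{20}\right)^{10} \approx 0.40\]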

Consider another question: “How many people own a credit card?” Forecasters answering this question may be subject to availability bias depending on whether they or their peers own credit cards.

15.8.1 Antidotes to Availability Bias

Below are a few ways to combat availability bias:

  • Reference class forecasting / considering base rates: This forces one to use relevant evidence rather than relying on intuition alone, which leaves more room for bias.
  • Reading history books: This broadens the pool of historical knowledge available when making a forecast.
  • Talk to people from other backgrounds and social classes
  • Read opinion polls and surveys
  • Cultivate a diverse group of friends with different perspectives

15.9 Substitution Heuristic

The substitution heuristic is the tendency to answer a simpler question instead of the question actually at hand.

For example, consider making a forecast for the question: “What will the Metaculus community forecast of the question ‘Omicron variant deadlier than Delta’ be on Dec 8?”

A forecaster might be inclined to invest effort in making a strong forecast for whether Omicron will actually be deadlier than Delta. However, that is a simplification of the actual question at hand: the question asks what the Metaculus community will think about whether Omicron is deadlier than Delta, not what the forecaster thinks. Therefore, time might be better spent modelling the dynamics of the Metaculus community.

15.9.1 Antidotes to the Substitution Heuristic

Below are some ways to prevent falling victim to the substitution heuristic.

  • Sanity checking: Ask oneself “am I using the substitution heuristic?”
  • Consider the unaccounted-for: Ask oneself “are there important considerations in how the question will resolve that I haven’t accounted for?”

15.10 The Exposure Effect

The exposure effect is the tendency to believe things just because they have been repeated many times.

For example, if many news articles assert a particular supposed “fact,” people can end up believing it even if the news articles don’t provide evidence.

15.10.1 Antidotes to the Exposure Effect

Below are some ways to combat the exposure effect.

  • Trust no one
  • Remember that many news articles are copies from the same original source, so repetition is not independent confirmation
  • Be a skeptic: check sources and/or work shown before trusting conclusions
  • Have a good information diet

15.11 Tips for Good Information Consumption

One final tip for good information consumption: Build a “nearly vacuous statement detector.” Common turns of phrase such as “there’s a good chance that…,” “I wouldn’t be surprised if…,” and “there’s a serious possibility that…” don’t actually say much. Be wary when you come across them.