Daniel Kahneman popularized the idea of System 1 and System 2 thinking in his book Thinking, Fast and Slow (2011). System 1 thinking is the fast kind that is barely perceptible to us. The authors describe it as “the realm of automatic perceptual and cognitive operations—like those you are running right now to transform the print on this page into a meaningful sentence” (33). This way of thinking not only aids our day-to-day functioning but also enabled our primeval ancestors to assess potential opportunities and dangers quickly, helping our species survive.
While System 1 thinking is popular with many in the contemporary self-help movement, who talk at length about gut-knowing, the authors argue that it can hinder accurate forecasting because it pushes us to reach prematurely for the most obvious conclusion. Arguably, it was System 1 thinking that led people to assume that the 2011 Oslo terrorist attack was the work of Islamists: Speculators drew on the fact that similar attacks in London, Madrid, and Bali had been carried out by Islamists and settled on a satisfying conclusion. To discover the truth, that the attack was the work of a lone Islamophobic extremist, investigators had to push against System 1 thinking and consider the counterevidence.
System 2 thinking is the other half of this dual-system mental universe. The authors write that “it consists of everything we choose to focus on” and is thus the realm of conscious thought (33). When System 1 produces an automatic assumption, System 2 must interrogate it and ask how well it stands up to scrutiny. Because this process takes time, most people arrive at a System 2 judgment only after System 1 has already supplied an answer.
Superforecasters are skilled at resisting their System 1 answer and engaging System 2 quickly. In addition to asking what it would take for their System 1 answer to be right, they ask what it would take for that answer to be wrong. They thus echo the process of scientists, who test their hypotheses. System 2 thought makes forecasters more accurate by keeping them humble: They are all too aware of their fallibility and safeguard against it.
The bait-and-switch process involves swapping the hard, original question for an easier one. This happens on a quasi-automatic level in the human mind and is one of the most common reasons behind misleading forecasts.
Part of System 1 thinking, bait and switch is one of the heuristics, or mental shortcuts, that have conferred an evolutionary advantage. For example, asking whether a shadow in the grass ought to be worrying is a difficult question that requires time-consuming consideration of evidence. It is easier to ask whether shadowy grass is the sort of place a predator would hide. Thus, “that question becomes a proxy for the original question and if the answer is yes to the second question, the answer to the first also becomes yes” (40).
In everyday life, we often employ bait and switch when we substitute a judgment about an expert for a judgment about the subject itself. If we know little about climate change, we might swap the difficult question of whether climate change is real for the easier one of whether a climatologist is the sort of person who would be right about it. The best forecasters, who aggregate a variety of perspectives before making their predictions, must guard against the temptation to bait and switch by repeatedly asking whether they are still answering the question they were assigned.
Calibration measures how well the forecasted probability of an outcome (for example, a 70% chance of rain) matches what actually happens. It is a good way to measure the accuracy of individual forecasters because it maps their stated probabilities against observed outcomes over many forecasts. Given the authors’ belief that explicitly measuring accuracy is the only way to improve the science of forecasting, calibration is essential for accountability and for identifying superforecasters. Perfect calibration looks like a diagonal line on a graph that plots forecast probability on the x-axis and the percentage of those forecasts that came true on the y-axis. Forecasters whose predictions stay close to that diagonal are more accurate than those whose predictions stray from it.
Calibration also applies to aggregated forecasts: When many forecasts are averaged into one, the combined probability can be checked against outcomes in the same way, as sketched below.
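As a minimal sketch of how such a check might work (the data and the function name are invented for illustration; this is not code from the book), calibration can be assessed by binning forecasts and comparing each bin’s average stated probability with the observed frequency of the event:

```python
# Sketch of a calibration check: bin the forecasts, then compare each bin's
# average stated probability with the observed frequency of the event.
# All data here are invented for illustration.

def calibration_table(forecasts, outcomes, n_bins=10):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if the event happened, else 0."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(forecasts, outcomes):
        i = min(int(p * n_bins), n_bins - 1)   # e.g., 0.73 lands in bin 7
        bins[i].append((p, y))
    rows = []
    for b in bins:
        if b:
            avg_p = sum(p for p, _ in b) / len(b)   # mean forecast in the bin
            freq = sum(y for _, y in b) / len(b)    # how often the event occurred
            rows.append((avg_p, freq, len(b)))
    return rows

# "70% chance of rain" stated 10 times; it rained 7 times -> well calibrated.
forecasts = [0.7] * 10
outcomes = [1] * 7 + [0] * 3
for avg_p, freq, n in calibration_table(forecasts, outcomes):
    print(f"forecast ~{avg_p:.2f}, observed {freq:.2f} over {n} cases")
```

A perfectly calibrated forecaster’s rows would all lie on the diagonal, with the average forecast in each bin matching the observed frequency.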
Resolution is complementary to calibration. It is the ability to stray outside the comfort zone of hedged predictions: A forecaster with good resolution has the confidence to assign “high probabilities to things that happen and low probabilities to things that don’t” (62). In meteorology, it is easier to identify forecasters with good resolution in regions where the weather is unpredictable than in those with a consistent climate. The authors argue that when their predictions prove accurate, forecasters with high resolution should be rewarded more than those who never leave their comfort zone. However, these same high-resolution forecasters should also be “punished” with a hit to their Brier scores if their forecasts prove too bold to be true (62).
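A quick numerical sketch makes this reward-and-punish logic concrete. It uses the two-category Brier score defined in the next entry; all numbers are invented for illustration and do not come from the book:

```python
# Two forecasters on the same 10 events, 5 of which occur. Both are perfectly
# calibrated, but only the bold one has high resolution.
# Invented numbers; not an example from the book.

def brier(p, y):
    # Two-category Brier score (0 = perfect, 2 = worst); see the next entry.
    return (p - y) ** 2 + ((1 - p) - (1 - y)) ** 2

outcomes = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
timid = [0.5] * 10               # always hedges at 50%
bold = [0.9] * 5 + [0.1] * 5     # commits to confident probabilities, correctly

for name, ps in [("timid", timid), ("bold", bold)]:
    avg = sum(brier(p, y) for p, y in zip(ps, outcomes)) / len(outcomes)
    print(name, round(avg, 2))   # timid: 0.5, bold: 0.02
```

The flip side is visible too: A bold forecast that misses, say 90% on an event that does not happen, scores 1.62, a far larger penalty than a hedged miss.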
Brier scores were developed by the American statistician and weather forecaster Glenn W. Brier in 1950 to measure the distance between a forecast and what actually happened. Thus, the perfect score is zero, while the worst score of 2.0 would be awarded for a prediction that was 100% inaccurate. If a dart-throwing chimp were to hit its mark only 50% of the time, its “lifetime” Brier score would be 0.5.
Still, while one might assume that the best forecaster is the one whose score is closest to zero, the difficulty of the forecasters’ questions is key. Thus, a weather forecaster who scores 0.2 in a place with an unpredictable climate is more skilled than the one who scores the same in a place like Phoenix, Arizona, where the weather is extremely consistent.
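As a minimal sketch of this scoring rule (assuming the two-category, 0-to-2 form described above, with an invented track record for the lifetime average), the arithmetic behind the chimp’s 0.5 can be checked directly:

```python
# Two-category Brier score as the book uses it: 0 is perfect, 2 is the worst.
# Sum the squared error over both outcomes of each question.

def brier(forecast_p, happened):
    """forecast_p: probability assigned to the event; happened: True or False."""
    y = 1.0 if happened else 0.0
    return (forecast_p - y) ** 2 + ((1 - forecast_p) - (1 - y)) ** 2

print(brier(1.0, True))   # 0.0 -> a confident forecast that was exactly right
print(brier(0.5, True))   # 0.5 -> the dart-throwing chimp's perpetual 50/50
print(brier(0.0, True))   # 2.0 -> a forecast that was 100% inaccurate

# A "lifetime" score is the mean over all of a forecaster's resolved questions.
record = [(0.8, True), (0.3, False), (0.6, True)]           # invented record
print(sum(brier(p, y) for p, y in record) / len(record))    # about 0.19
```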
In the book, the authors show how superforecasters stay conscious of their Brier scores, using them as a measure to challenge themselves and to stay humble. They know that they are only as good as their last forecast and that too many disappointing Brier scores will see them slip from the ranks of superforecasters.
Aggregation is an important tool for superforecasters, who know that gathering as many perspectives on a problem as possible is essential to accuracy. Aggregation, which means taking the average of multiple forecasted probabilities, produces a single combined probability that tends to be more accurate than most individual forecasts. This is because, ideally, each forecast or forecaster will have considered a different aspect of the problem.
In a team of superforecasters who have been careful to make individual predictions rather than being swayed by groupthink, aggregation can often ensure optimal results. However, aggregation can produce inaccuracy when team members have varying degrees of forecasting skill, as weaker forecasts can distort the combined result. Therefore, aggregation must take the skill of individual forecasters into account.
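One way to picture skill-aware aggregation is a weighted average. The weighting scheme below is an invented illustration, not the Good Judgment Project’s actual algorithm:

```python
# Simple vs. skill-weighted aggregation. The inverse-Brier weighting here is
# an invented illustration, not the Good Judgment Project's actual method.

def simple_average(forecasts):
    return sum(forecasts) / len(forecasts)

def weighted_average(forecasts, past_brier):
    # Lower Brier score = stronger track record = larger weight.
    weights = [1.0 / max(b, 1e-9) for b in past_brier]
    return sum(w * p for w, p in zip(weights, forecasts)) / sum(weights)

forecasts = [0.60, 0.70, 0.20]   # three forecasters on the same question
past_brier = [0.15, 0.20, 0.60]  # the third has a much weaker record

print(f"{simple_average(forecasts):.2f}")                 # 0.50
print(f"{weighted_average(forecasts, past_brier):.2f}")   # 0.59, tilted toward the stronger two
```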
Regression to the mean is a statistical concept asserting that while an individual may give an outstanding performance once, their subsequent scores are likely to land somewhere between that outstanding number and the mean. This concept is useful when making forecasts because it increases the chances of accurately predicting performance over time.
Regression to the mean can also be applied to superforecasters’ performance. If a forecaster achieves an outstanding Brier score on one occasion, then statistically, their next score should land partway back toward the group’s average. However, superforecasters in the Good Judgment Project bucked this trend and generally showed consistent improvement. This supports Tetlock’s point that people can be trained to make consistently good forecasts, which in turn suggests that superforecasters can be a valuable resource for governments and organizations.
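The usual back-of-envelope formula makes the expectation precise: The predicted next score sits between the current score and the mean, positioned by the correlation between successive performances. All numbers below are invented for illustration (recall that lower Brier scores are better):

```python
# Back-of-envelope regression to the mean. The correlation r and all scores
# are invented for illustration.

def expected_next(score, group_mean, r):
    # r is the correlation between successive performances:
    # r = 0 means results are pure luck, r = 1 means pure skill.
    return group_mean + r * (score - group_mean)

group_mean = 0.37    # hypothetical average Brier score of the group
outstanding = 0.14   # one forecaster's unusually good result

print(f"{expected_next(outstanding, group_mean, 0.0):.3f}")   # 0.370 -> full regression
print(f"{expected_next(outstanding, group_mean, 0.5):.3f}")   # 0.255 -> halfway back
print(f"{expected_next(outstanding, group_mean, 1.0):.3f}")   # 0.140 -> no regression
```

Superforecasters regressing less than this formula predicts is what signals that their results reflect skill rather than luck.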
Taking the outside view of a situation means placing it within a wider statistical landscape rather than treating it as unique or placing too much weight on the details of its narrative. Thus, to estimate the likelihood that a particular family has a pet, taking the outside view would mean finding out what percentage of families with a similar demographic profile and type of residence have a pet.
Taking the outside view is a valuable tool for superforecasters, who employ it before taking the inside view. In the case of the potentially pet-owning family, a superforecaster would start with the average percentage of American families who own pets and then adjust according to the attributes of the family in question, such as the number of children, keeping each adjustment anchored to the statistical average. The authors maintain that considering the outside view first is imperative because it prevents bias. They show how Peggy Noonan, a former speechwriter for Republican President Ronald Reagan, mistakenly forecast that the Democrats were in trouble because George W. Bush’s approval rating had risen in the four years since he left office. Noonan neglected the outside view: A president’s rating typically rises after they leave office.
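The pet example can be written as a tiny calculation, base rate first, adjustments second. Every number here is invented for illustration:

```python
# "Outside view first, then adjust": start from the base rate, then nudge it
# for the family's specifics. All numbers are invented for illustration.

base_rate = 0.62   # hypothetical share of comparable families that own a pet

# Inside-view adjustments, applied on top of the base rate rather than
# replacing it, so the estimate stays anchored to the statistical average.
adjustments = {
    "has_children": +0.08,     # families with kids own pets somewhat more often
    "small_apartment": -0.10,  # limited space makes a pet less likely
}

estimate = base_rate + sum(adjustments.values())
estimate = min(max(estimate, 0.0), 1.0)   # keep it a valid probability
print(f"outside view: {base_rate:.0%}; after adjustments: {estimate:.0%}")
```

The design point is the ordering: Starting from the statistical average and adjusting keeps the narrative details from swamping the estimate, which is exactly what Noonan’s forecast failed to do.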
Radical indeterminacy is the idea that the smallest change in the chain of occurrences leading up to an event could have produced a profoundly different result. This favored philosophy of forecasting skeptic Nassim Taleb derives in part from Edward Lorenz’s theory of the butterfly effect, whereby the tiny air currents stirred by a butterfly flapping its wings in Brazil could eventually set off a tornado in Texas.
The authors agree that radical indeterminacy is real and “instills profound humility” in forecasters, who know that they should never fall into the trap of overconfidence (249). However, they do not think that it should intimidate forecasters out of predicting the future altogether. This is because the Good Judgment Project has shown that certain techniques and capacities for analysis and problem-solving can improve predictive accuracy.
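Lorenz’s point can be demonstrated numerically. The sketch below uses the standard Lorenz-63 equations with a crude Euler integration (a demonstration of the butterfly effect in general, not an example from the book): Two runs that start one part in a billion apart track each other at first, then diverge completely.

```python
# Numerical illustration of the butterfly effect using the standard Lorenz-63
# equations. Two trajectories that begin a billionth apart end up unrelated.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One crude Euler step of the Lorenz system; adequate for a demonstration.
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # the "butterfly": a one-in-a-billion nudge

for step in range(3001):
    if step % 1000 == 0:
        print(f"t = {step * 0.01:4.0f}  separation in x: {abs(a[0] - b[0]):.2e}")
    a = lorenz_step(*a)
    b = lorenz_step(*b)
```

The separation grows by orders of magnitude as the simulation runs, which is why forecasters must stay humble about distant horizons even as the Good Judgment Project shows that nearer-term accuracy can be trained.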