This is a summary review of Superforecasting containing key details about the book.
What is Superforecasting About?
Superforecasting by Philip E. Tetlock and Dan Gardner explores the art of making accurate predictions through a combination of analytical and intuitive thinking.
Superforecasting Summary Review
“Superforecasting: The Art and Science of Prediction” by Philip Tetlock and Dan Gardner is a seminal work that masterfully navigates the intricate world of forecasting. This book stands out as a profound exploration into the art of prediction, melding meticulous research with real-world applications and insightful analysis.
Tetlock and Gardner embark on a quest to demystify the science of forecasting. The book’s foundation lies in Tetlock’s groundbreaking research, which revealed a stark reality: the general inability of experts to predict future events with greater accuracy than random chance. This revelation, however, is just the starting point. The authors delve deeper, uncovering a unique group of individuals, termed ‘superforecasters,’ whose knack for accurate predictions defies conventional wisdom.
What distinguishes “Superforecasting” is its remarkable storytelling. The authors skillfully weave tales of ordinary people from varied backgrounds – including a Brooklyn filmmaker and a retired pipe installer – who possess an extraordinary ability to foresee global events. These narratives are not only engaging but also serve to illustrate the broader principles at play in effective forecasting.
A defining feature of the book is its accessibility. Complex concepts are distilled into comprehensible and actionable insights, making the book valuable for a wide spectrum of readers. Tetlock and Gardner argue that effective forecasting isn’t contingent on advanced technology or obscure methods but hinges on fundamental, universally applicable principles.
At its core, the book is an examination of the traits that typify superforecasters. Qualities such as open-mindedness, meticulousness, self-critical thinking, and a dedication to perpetual improvement are underscored. The authors advocate for a probabilistic approach to thinking, the importance of deconstructing complex queries, and the necessity of adapting forecasts as new information emerges.
“Superforecasting” delves deeply into the cognitive biases that impede effective forecasting. By examining pitfalls like confirmation bias and belief perseverance, Tetlock and Gardner offer readers a chance to understand and overcome the psychological obstacles in prediction-making.
In a particularly engaging chapter, the authors explore how forecasting principles apply to leadership and decision-making. The discussion on group dynamics and the dangers of groupthink is especially pertinent for leaders striving to cultivate effective decision-making environments.
Historical examples are another strength of the book. From the Bay of Pigs fiasco to the Cuban Missile Crisis, the authors illustrate the impact of successful and failed forecasts. The analysis of the CIA’s assessment of weapons of mass destruction in Iraq is a poignant reminder of the stakes involved in prediction.
The conclusion of the book emphasizes the significance of feedback and learning from errors. Forecasting is presented not just as a skill but as a continuously evolving practice, relevant in numerous realms including business, politics, and everyday decision-making.
In summary, “Superforecasting: The Art and Science of Prediction” is more than a guide to enhancing predictive skills; it is a comprehensive exploration of decision-making, team dynamics, and leadership. Tetlock and Gardner have created a work that is not only enlightening but also empowering, equipping readers with the tools to refine their foresight in an increasingly unpredictable world. This book is bound to be recognized as a classic in its field, providing enduring insights and practical wisdom for anyone interested in the art of prediction.
Who is the Author of Superforecasting?
Dan Gardner is the New York Times best-selling author of books about psychology and decision-making. His work has been called “an invaluable resource for anyone who aspires to think clearly” by The Guardian and “required reading for journalists, politicians, academics, and anyone who listens to them” by Harvard psychologist Steven Pinker.
Philip E. Tetlock is a bestselling author. He is a Canadian-American political scientist, and is currently the Annenberg University Professor at the University of Pennsylvania, where he is cross-appointed at the Wharton School and the School of Arts and Sciences. He was elected a Member of the American Philosophical Society in 2019.
How long is Superforecasting?
- Print length: 352 pages
- Audiobook: 9 hrs and 45 mins
What genre is Superforecasting?
Nonfiction, Business, Science
What are good quotes from Superforecasting?
“The more that is unknown, the greater the opportunity.”
“For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded.”
What are the main summary points of Superforecasting?
Here are some key summary points from the book:
- Prediction is a talent and a skill that can be developed. Anyone with enough dedication, interest, and domain expertise can improve their skill and accuracy.
- Our complex world means that small events can lead to large unforeseen consequences, making regular forecasting rather limited. This, however, does not mean that forecasting should be scrapped. Just because not everything is predictable doesn’t mean everything is unpredictable.
- Superforecasting is driven by curiosity, the desire to learn, the ability to gather information, and the willingness to change and update our beliefs.
- Superforecasters are less ideologically (or professionally) biased. They seek data from a wide range of sources to examine future trends. They are open-minded and have less fear of making mistakes.
- Superforecasters embrace probabilistic thinking and understand how statistical probability works.
- As forecasters, we want to measure the accuracy of our forecasts in order to improve our forecasting skills. We also want to adjust our forecasts as new information comes to light.
- Forecasters should avoid using vague language like “might”, “could” and “likely” because different people attach different meanings to these words; it’s far better to use numbers, particularly percentages.
- Seemingly impossible forecast problems can be tackled by breaking them down into smaller bite-size units to analyze.
- Every situation is unique so don’t judge a case too quickly. Approach it from the outside by finding the base rate first.
- It’s always wise to plan for adaptability and resilience.
- Many forecasters tend to give advice that is too certain because consumers have an inherent distaste for uncertainty.
- Declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.
- In forecasting, often, a small edge can make a big difference.
- The more that is unknown, the greater the opportunity.
- Consensus is not always good, and disagreement is not always bad. If you do happen to agree, don’t take that agreement—in itself—as proof that you are right. Never stop doubting.
- Beliefs are hypotheses to be tested, not treasures to be guarded.
- We as humans have the ability to take a series of past events and turn them into a linear narrative that makes the outcome seem all but inevitable.
- Using multiple lenses to view a subject can provide a more detailed picture and yield greater understanding and better forecasting.
- One of the keys for better forecasting is to embrace diverse thoughts, expose assumptions, catch mistakes, and correct biases.
- Forecasters who worked in teams made more accurate forecasts than individuals working alone. However, it’s key to include critical discussion and the presentation of alternative views.
- The goal of forecasting is not to see what’s coming. It’s to advance the interests of the forecaster and the forecaster’s community.
- Anchoring bias: “When we make estimates, we tend to start with some number and adjust. The number we start with is called the anchor. It is important because we typically underadjust, which means a bad anchor can easily produce a bad estimate.”
What are key takeaways from Superforecasting?
Takeaway #1. Know The Limitations of Forecasting
Our forecasts reflect our expectations for the future, but life in our complex world means that small events can lead to large unforeseen consequences, making regular forecasting rather limited.
Take the Arab Spring, for example: the uprisings began when a single Tunisian street vendor set himself on fire after being humiliated by police officers. Forecasters cannot predict events such as this because of chaos theory, also known as ‘the butterfly effect’. These limitations don’t mean that forecasting should be scrapped, though.
Takeaway #2. Measure and Update Your Forecasts
Weather forecasts are quite reliable when made a few days in advance because forecasters compare their predictions against the weather that actually occurred, which lets them improve both their forecasts and their understanding of how the weather works. Unfortunately, forecasters in other fields do not usually measure the accuracy of their forecasts, so to improve forecasting we have to get serious not only about comparing but about measuring and updating our findings.
You can also use the Brier score to see how accurate past forecasts were. Under this system, the lower the score, the more accurate the forecast: a perfect forecast scores 0, a random guess (a flat 50 percent) scores 0.5, and a completely wrong forecast receives the maximum score of 2.0. Interpreting a score depends on the question being asked: a Brier score of 0.2 might seem great, yet for an easy question the same forecast could be terrible.
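The scoring scheme above can be sketched in a few lines of Python. This is an illustrative two-category version of the Brier score on the 0-to-2 scale described here, not code from the book:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Two-category Brier score on the 0-2 scale.

    forecast: predicted probability that the event happens (0.0 to 1.0)
    outcome:  1 if the event happened, 0 if it did not
    """
    # Sum the squared errors over both categories:
    # "the event happens" and "the event doesn't happen".
    return (forecast - outcome) ** 2 + ((1 - forecast) - (1 - outcome)) ** 2

# A perfect forecast scores 0, a coin-flip guess scores 0.5,
# and total confidence in the wrong outcome scores the maximum 2.0.
print(brier_score(1.0, 1))  # 0.0
print(brier_score(0.5, 1))  # 0.5
print(brier_score(0.0, 1))  # 2.0
```

Averaging this score over many resolved questions gives a forecaster’s track record, which is how the comparisons in the book are made.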
Takeaway #3. Be Precise with Your Measurements
You might think that measuring forecasts is as simple as gathering the data and doing the calculations to judge their accuracy, but it’s not quite that simple: you also have to understand the meaning behind the original forecast.
For increased precision, forecasters should avoid using vague language like “might”, “could” and “likely” because different people attach different meanings to these words; it’s far better to use numbers, particularly percentages.
The false claim that Saddam Hussein had weapons of mass destruction was made by U.S. intelligence agencies such as the CIA and NSA. This is a case where percentages rather than words should have been used: had the forecast stated a 60% chance that Iraq had weapons of mass destruction, it would also have conveyed a 40% chance that it didn’t, arguably insufficient certainty to justify the U.S. invasion of Iraq.
Takeaway #4. Break Down Problems
How do you eat an elephant? One bite at a time! Seemingly impossible forecast problems can be tackled the same way, not by eating them but by applying Fermi-style thinking, which means breaking problems down into smaller bite-size units to analyze. Effective superforecasters separate the known from the unknown, getting to grips with the basics first before examining their assumptions.
Let’s look at the forecast of the cause of death of Yasser Arafat, leader of the Palestine Liberation Organization. Initially, his cause of death was unknown, but lethally high levels of polonium-210 were later found on his belongings. Conspiracy theories had long mentioned poison, and now it seemed a very likely cause of death. His body was exhumed and examined in France and Switzerland, and volunteer forecasters on the Good Judgment Project were asked to forecast whether scientists would find high levels of polonium in Arafat’s remains. One volunteer concluded there was a 60% chance, thanks to the Fermi-style approach he used to break down the information he had. Because polonium decays rapidly, the chance of it still being detectable eight years after Arafat’s death was widely assumed to be slim; the forecaster did his own research, however, and discovered that it could still be detected. He also took into consideration that Arafat had Palestinian enemies and that there could have been foul play during the postmortem, given Palestine’s incentive to blame Israel for Arafat’s death, and so arrived at his 60% conclusion.
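Fermi-style decomposition is easiest to see on Fermi’s own classic puzzle: how many piano tuners work in Chicago? Every number below is a rough, made-up guess, which is exactly the point of the technique; the product of several checkable guesses is usually closer to the truth than one unchecked guess:

```python
# Hypothetical Fermi-style estimate (all factors are rough guesses, not data).
population = 2_500_000            # people in the city
people_per_household = 2.5
households_with_piano = 0.05      # roughly 1 in 20 households
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

# Break the impossible question into small, answerable pieces...
pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year

# ...then recombine them into an order-of-magnitude answer.
tuners = tunings_needed / tuner_capacity
print(round(tuners))  # 50
```

The Arafat forecaster did the same thing with decay rates, detection thresholds, and motives: separate guesses, each one checkable on its own, combined into a single probability.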
Takeaway #5. Don’t Jump To Conclusions
Don’t judge a case too quickly, as every situation is unique. Approach it from the outside first by finding the initial figure, the base rate, which then serves as an anchor to adjust from.
Let’s say you want to predict the chances of a particular Italian family owning a pet. A regular forecaster would look at the specific facts about the family first, learning that they live in the U.S. in a modest house, that the father is a bookkeeper and the mother a part-time childcare provider, that they have one child, and that the grandmother lives in the same house.
Superforecasters don’t start with the inside facts, the details; they go to the outside view first, discovering what percentage (the base rate) of American households own a pet. Thanks to Google, they quickly learn the base rate is 62%. Only now does the superforecaster look at the inside view, taking the family dynamics into consideration, as this allows them to get more specific with their data and adjust the base rate accordingly. Knowing that the family is Italian, their next step might be to check the percentage of Italian families in the U.S. owning a pet.
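The outside-view-then-inside-view workflow can be sketched as follows. The 62% base rate comes from the example above; the adjustment multipliers are purely illustrative assumptions, not figures from the book:

```python
# Start from the outside view: the base rate anchors the estimate.
base_rate = 0.62  # share of U.S. households owning a pet (from the example)

# Then layer in inside-view details. Each multiplier below is a
# made-up illustration of how a detail might nudge the anchor.
adjustments = {
    "has a young child": 1.10,        # a pet becomes slightly more likely
    "grandmother in the home": 1.00,  # judged neutral
    "modest house, small yard": 0.90, # slightly less room for a pet
}

estimate = base_rate
for detail, multiplier in adjustments.items():
    estimate *= multiplier

print(f"{estimate:.0%}")  # anchored on 62%, adjusted by the details
```

The key property is that the final number stays tethered to the base rate; the details tilt it, they don’t replace it.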
Takeaway #6. Stay Up To Date
Once your initial forecast is made, it’s no good sitting back to wait and see if you were right; you’ve got to adjust your forecast as new information comes to light.
This is exactly what the forecaster who gave a 60% chance that polonium would be detected in Yasser Arafat’s remains did. By keeping an eye on the news surrounding the case (you can even set a Google Alert for this), he updated his forecast as new data arose. Long after Bill Flack made his original forecast, the official Swiss research team announced that it needed to do additional testing. Because Bill had done a lot of research on polonium, he knew the delay meant the Swiss team had found polonium and needed further tests to confirm its source. He adjusted his original forecast from 60% to 65% certainty that polonium would be found. The Swiss team did find polonium, and Bill’s final Brier score came out at an impressive 0.36.
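One disciplined way to make an update like Bill’s is Bayes’ rule: treat the news (the Swiss team’s call for extra testing) as evidence and ask how likely that evidence would be under each hypothesis. The likelihood numbers below are illustrative assumptions chosen to mirror the 60%-to-65% revision described above, not figures from the book:

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior probability via Bayes' rule for a yes/no question."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Prior: the original 60% forecast. Assumed likelihoods: the call for
# additional testing is somewhat more likely if polonium was found.
posterior = bayes_update(prior=0.60,
                         p_evidence_if_true=0.62,
                         p_evidence_if_false=0.50)
print(round(posterior, 2))  # 0.65
```

Superforecasters rarely plug in explicit numbers like this, but the habit of asking “how much more likely is this news if I’m right than if I’m wrong?” is the same, and it keeps updates small when the evidence is weak.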
This case study doesn’t mean that new information is always helpful, though; it can also mislead you. To update a forecast well, you must use your skill to tease out subtle details from extraneous information. At the same time, you shouldn’t be afraid to change your mind and update your forecast.
Takeaway #7. Working In Teams Can Be Beneficial
The term groupthink was coined by psychologist Irving Janis, who hypothesized that people working in small groups build team spirit by unconsciously creating shared illusions; because members keep agreeing with one another, critical thinking breaks down.
The research team on the Good Judgment Project decided to test whether teamwork would influence accuracy and concluded, after one year, that on average, people who worked in teams made forecasts that were 23% more accurate than forecasts made by individuals. In the second year, the team replaced regular forecasters with superforecasters and found they outperformed the regular group tremendously, but group dynamics suffered: everyone was overly polite, there was very little critical discussion, and no one was willing to present an alternative view.
To combat this problem, teams can work more effectively by using precision questioning, a method that encourages people to rethink their arguments by pinning down the finer details, e.g. by asking them to define a particular term.
Remember, by not conforming, your ideas can be a tremendous asset to the group.