Foxes, Hedgehogs and How to Predict the Future
Predicting the future. We’re keen on peering into the crystal ball at Khaos HQ, reporting on the latest trends affecting the Khaos Family, and we imagine you are too. Forecasting is baked into the business world: investment decisions are made daily based on what we think is going to happen.
Many believe anticipating the future is little more than guesswork. A spin of the roulette wheel where luck is all that counts. They say the failures of the polling industry and betting markets, exposed by the victories of Vote Leave and Donald Trump, confirm prediction is an art. Not a science.
They are wrong.
Through the Good Judgment Project, a research team dedicated to improved forecasting, political scientist Philip E. Tetlock made stunning discoveries. Foremost among them was that there exist people, so-called ‘superforecasters’, who perform consistently better than their peers, better even than the intelligence community armed with classified information. Superforecasters’ approaches were alike, and could be learned and refined. They weren’t all academics, nor were they geniuses; they represented a cross-section of society.
This means anyone, including you, can become a superforecaster.
But what does better prediction have to do with foxes and hedgehogs?
In 1953, philosopher Isaiah Berlin wrote the essay The Hedgehog and the Fox, dividing famous thinkers into two camps. Hedgehogs interpret the world in line with one or two standout ideas. Karl Marx and Adolf Hitler were hedgehogs: Marx saw events through the prism of class warfare, while Hitler’s prism was racial.
Foxes, on the other hand, believe the world is too complex to be described by just one or two ideas. They’re pragmatists, open to rapidly revising their positions as facts change. Berlin described William Shakespeare and Aristotle as foxes.
Which is better at forecasting?
Foxes. Tetlock discovered that almost all superforecasters are.
This implies prediction is improved by reading widely around a topic, by challenging conventional wisdom and by weighing up different points of view.
So, follow politicians with whom you disagree on Twitter, read columnists from newspapers you would never usually touch and ask difficult questions about which everyone seems to agree. When updating predictions as circumstances change, trade being labelled a hypocrite for greater accuracy.
It can be uncomfortable.
Humans are prone to confirmation bias, meaning they naturally search for information that reinforces existing beliefs. Just look at newspaper readership and party allegiance: only 6% of readers of the Guardian, a centre-left newspaper, vote Conservative, while for the centre-right Daily Mail the figure rises to 59%.
Then there’s the urge to fit in. Known as preference falsification, this is people’s tendency to echo the opinions of their social groups to avoid ostracism.
But Tetlock found that, when those adopting a ‘foxy’ mindset collaborated in small teams, their biases were screened out further, boosting forecast accuracy by 50%.
Being a superforecaster isn’t just about open-mindedness and bias avoidance. It’s also about improvement.
TV pundits are rarely called out because their assertions about the future are usually too vague to be judged right or wrong by any specific point. In contrast, Tetlock promotes definite forecasts that pass or fail by a certain time, with accuracy graded accordingly. It becomes obvious who is good at forecasting and who isn’t, allowing people to work on their methods where necessary.
An example would be the 2016 U.S. Presidential election. Forecasters would have rated Hillary Clinton’s chances on a scale from 0 to 1, where 0 meant certain defeat and 1 certain victory. The outcome would be known the day after election day, at which point, using Brier Scoring, their accuracy would be graded.
The formula is simple: take the forecast probability (between 0 and 1), subtract the outcome (0 for failure, 1 for success), and square the result.
Let’s say our forecaster believed Clinton would win with 80% certainty. Clinton lost, so the outcome is 0 and their Brier Score, or accuracy, would be:
(0.8 − 0)² = 0.64
If instead they believed Trump would win with 60% certainty, Trump’s victory makes the outcome 1, and their Brier Score would be:
(0.6 − 1)² = 0.16
The closer to zero, the better.
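The calculation is easy to automate. Here is a minimal Python sketch of Brier Scoring, using the Clinton and Trump figures from the examples above (the function name is our own):

```python
def brier_score(prediction: float, outcome: int) -> float:
    """Squared gap between a probability forecast and the actual 0/1 outcome.
    0 is a perfect forecast; 1 is the worst possible."""
    return (prediction - outcome) ** 2

# Forecast: Clinton wins with 80% certainty. She lost, so outcome = 0.
print(round(brier_score(0.8, 0), 2))  # 0.64

# Forecast: Trump wins with 60% certainty. He won, so outcome = 1.
print(round(brier_score(0.6, 1), 2))  # 0.16
```

The lower score for the Trump forecast reflects a prediction that leaned, however tentatively, towards what actually happened.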
Let’s summarise Tetlock’s findings so you and your business can better plan what’s coming down the road.
- Don’t believe the world can be explained by one or two ideas. Reality is more complex
- When forming judgements, consult a diverse range of sources
- As the facts change, change your mind
- Question that on which everyone seems to agree
- Be definite in your forecasts. They must pass or fail by a certain point. If you can narrow the range of outcomes, so much the better
- Grade forecast accuracy with the Brier formula
- Be open to changing your methods, testing many techniques, to get better
- Screen employees for superforecasting ability, defining superforecasters as those whose average Brier Scores sit well below the overall mean
- Organise these forecasters into small teams
- Train them. A one-off, 60-minute tutorial on basic statistics was found to improve accuracy by 10%.
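The screening step above can be sketched in the same spirit. Assuming you keep a record of each employee’s past forecasts and outcomes (the names and figures below are entirely hypothetical), a few lines of Python can flag those whose average Brier Score beats the overall mean:

```python
# Hypothetical track records: (forecast probability, actual 0/1 outcome) pairs.
track_records = {
    "alice": [(0.9, 1), (0.7, 1), (0.2, 0)],
    "bob":   [(0.5, 1), (0.5, 0), (0.5, 1)],
    "carol": [(0.3, 1), (0.8, 0), (0.6, 0)],
}

def mean_brier(record):
    """Average Brier Score across a forecaster's record (lower is better)."""
    return sum((p - o) ** 2 for p, o in record) / len(record)

averages = {name: mean_brier(rec) for name, rec in track_records.items()}
overall_mean = sum(averages.values()) / len(averages)

# Candidates for superforecaster teams: average score below the overall mean.
candidates = sorted(name for name, avg in averages.items() if avg < overall_mean)
print(candidates)
```

In a real business you would want a larger record per forecaster before drawing conclusions; three forecasts, as here, is far too few to separate skill from luck.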
We hope these tips and tricks help grow your business. Your growth is our growth. That’s why we’re so enthusiastic about Khaos Control, superior business management software proven to boost efficiency and save money.