What’s the point of forecasting? This is the question posed for me by Michael Story’s account of superforecasters’ predictions for the course of the pandemic.
Our central estimate is that the daily death toll will peak at 1,278, with an 80% confidence interval of 892 to 5,750.
So what? It’s not at all clear how this affects the debate about how tight the lockdown should be. That depends upon how the virus is transmitted; the trade-off between mental and physical health; how far the economic effects of the lockdown can be mitigated; and so on. Yes, the claim that the daily death toll could be 5,750 is alarming, and speaks to the need for a tight lockdown. But you don’t need any precise number to make that case. The mere chance of a high death toll does as well.
Similarly the forecasters’ 80% confidence interval that between six and 20 million people will be vaccinated by end-February is also irrelevant for policy. What matters is that the vaccine is delivered to as many people as possible. That’s a matter of logistics, not of forecasts.
If the forecasts are irrelevant for policy, what use are they?
Yes, if you’re spread-betting on the final death count, they might be useful. But few of us are doing this. And anybody who is faces the problem that superforecasters’ views should in theory get quickly embedded into prices. Which means that by now they don’t help us beat the market. We’re no better off.
Another potential use of forecasts is that they test hypotheses: a correct forecast should strengthen our confidence in the theory upon which it rests, whilst a wrong forecast should weaken it. We must therefore always ask of a forecast: was it right or wrong, and why, and what do we learn from this? This is what Jonathan Portes does here, and what I did here.
But again, it’s not obvious how the superforecasters help us in this respect. Let’s say we see low numbers of vaccinations, so the superforecasters are wrong. What hypothesis is then weakened? Sure, faith in the government’s logistical ability would be weakened – but we don’t need superforecasters to assess this: we need only compare progress in vaccination against the government’s own targets.
My reaction to this piece, then, is the same as that to very many forecasts, such as those for November’s US presidential election: there’s no point to them, as we’ll find out soon enough anyway.
In fact, for me, exercises such as this miss four more interesting issues in forecasting.
First, is there any predictability in human affairs, and if so what is its origin?
Take, for example, my chart. It shows that medium-term returns on UK equities have been highly predictable by a simple ratio of retail sales to the All-share index: when this ratio is high, it predicts decent returns over the following three years (as I say, a forecast should be a test of a hypothesis).
This measure works for two reasons. First, consumer spending is, partly, forward-looking: if we anticipate good times we’ll spend more than if we anticipate bad, and on average across millions of consumers forecast errors largely cancel out. Secondly, equity investors do not fully price in this wisdom of crowds, causing equities to be under-priced when retail sales are strong relative to share prices. (For more sophisticated versions of this theory, see Lettau and Ludvigson (pdf) and this paper from the Bank of England).
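To make this kind of predictor concrete, here is a minimal sketch in Python. The numbers are invented purely for illustration – they are not the actual UK data behind my chart – but the mechanics are the same: form the ratio of retail sales to the share index, and compare it with returns over the subsequent three years.

```python
# Toy sketch of a ratio-based return predictor.
# All figures below are invented for illustration only.

def ratio_signal(retail_sales, all_share):
    """Ratio of retail sales to the share index at each date."""
    return [rs / px for rs, px in zip(retail_sales, all_share)]

def forward_returns(all_share, horizon):
    """Price return over the next `horizon` periods (dividends ignored)."""
    return [all_share[t + horizon] / all_share[t] - 1
            for t in range(len(all_share) - horizon)]

def correlation(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented annual data: retail sales grow steadily while shares swing about.
retail = [100, 103, 106, 110, 113, 117, 120, 124, 128, 132]
shares = [90, 120, 140, 100, 95, 130, 160, 110, 105, 140]

signal = ratio_signal(retail, shares)
fwd = forward_returns(shares, 3)

# A positive correlation says: high ratio (cheap shares relative to
# retail sales) tends to precede good three-year returns.
print(round(correlation(signal[:len(fwd)], fwd), 2))
```

In this toy data the correlation comes out strongly positive, which is the shape of the claim: the signal is not a point forecast but a statement that shares are cheap or dear relative to consumers’ behaviour.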
What we have here, then, is evidence of predictability and a reason for it. What are the analogues of these in other domains?
Secondly, who are the best forecasters: foxes (who know many things) or hedgehogs (who know one thing)? Philip Tetlock argues that foxes do better. This is not true in all domains, however. In my example, the hedgehog who looked only at the retail sales to All-share ratio would have done better than foxes who tried to process all possible information about prospective market returns. Also, if you want to know the chances of a recession, a single glance at the yield curve does far, far better than economists’ foxier forecasts. What works in one context doesn’t work in another.
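The yield-curve rule really is a single glance. A hedged sketch of the hedgehog’s version – with invented yields, and reducing the rule to its crudest form, curve inversion as a recession warning – looks like this:

```python
# The hedgehog's recession rule in its crudest form: a warning fires
# when long rates fall below short rates (an inverted yield curve).
# The yields below are invented for illustration.

def recession_warning(ten_year_yield, three_month_yield):
    """True when the curve is inverted: long yield below short yield."""
    return ten_year_yield < three_month_yield

print(recession_warning(2.5, 1.0))  # upward-sloping curve: no warning
print(recession_warning(1.5, 2.0))  # inverted curve: warning
```

The point is not that this rule is infallible, but that it uses one number and no model – and has historically embarrassed far foxier forecasts.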
Thirdly (and perhaps relatedly): do forecasts change the environment or not? Most of those discussed by Tetlock do not: this is true of the Covid forecasts as well. Other forecasts, however, do. Optimistic forecasts for (say) Bitcoin or Tesla shares raise their prices today and hence reduce future returns. Investor sentiment – which is correlated with share price forecasts – can predict future returns.
In such cases, we need a different approach. We should ask not “what will happen?” but “what information (if any) are market participants ignoring?” To borrow a useful analogy from Meir Statman’s Finance for Normal People: superforecasting is a game you play against the environment, whereas investing is one you play against other investors. These are different things.
Fourthly, do we need forecasts at all? Sometimes we don’t. I’ve shown that investors who diversified very simply have done perfectly well without forecasting anything. In the same spirit, the stock-picker who simply bought momentum or defensive (pdf) stocks would over time have out-performed those who tried to forecast returns for individual stocks or the market.
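The momentum rule illustrates how little forecasting such strategies involve. A minimal sketch – with invented tickers and returns – ranks stocks by their trailing twelve-month performance and simply holds the winners:

```python
# No-forecast momentum rule: buy whatever has already risen most.
# Tickers and trailing returns are invented for illustration.

def momentum_picks(trailing_returns, n=2):
    """Return the n tickers with the highest trailing returns.

    No view about the future is taken: the rule looks only backwards.
    """
    ranked = sorted(trailing_returns, key=trailing_returns.get, reverse=True)
    return ranked[:n]

past_year = {"AAA": 0.30, "BBB": -0.10, "CCC": 0.15, "DDD": 0.05}
print(momentum_picks(past_year))  # -> ['AAA', 'CCC']
```

The stock-picker following this rule forecasts nothing at all – which is precisely why its historical success is awkward for the forecasting business.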
In other contexts, what matters is not a point forecast but rather the distribution of possibilities. The case for zero interest rates isn’t based on a particular forecast, but upon the balance of risks: there’s a risk of sustained high unemployment but less risk of soaring inflation.
The best policies should not rest upon a forecast. For example, a basic income can and should be justified on many grounds other than that we need one because automation will destroy jobs (maybe it will, maybe it won’t). And one reason why we need stronger counter-cyclical automatic stabilizers is that we cannot rely upon a predict-and-control approach to macro policy.
Now, I don’t say all this to attack superforecasting. Mr Story is bang right to say that bad forecasters should be driven out of the marketplace of ideas: that they are not is down to perverse incentives in the media. And a lot of advice on how to be a better forecaster is valuable as advice on how to think better.
Forecasts are perhaps the least interesting part of the superforecasting project.