The Top Two Criteria for Expert Judgment: Curiosity and . . . Curiosity

First a recap:

Philip Tetlock’s Expert Political Judgment was a groundbreaking look at whether political experts really are expert, as judged by their success at making predictions. His overall conclusion: they aren’t. But (lifted from a previous post):

…among the experts, “foxes” — those who in Nicholas Kristof’s words “are more cautious, more centrist, more likely to adjust their views, more pragmatic, more prone to self-doubt, more inclined to see complexity and nuance” — resoundingly beat out the “hedgehogs” — those who “have a focused worldview, an ideological leaning, strong convictions.”

This even while hedgehogs end up getting the biggest megaphones for their incorrect predictions.

But as Bryan Caplan pointed out quite cogently, there were two key flaws in Tetlock’s methodology (my words here, again from my previous post):

1. He only examines questions that are highly controversial among experts. (If 50% believe each way, 50% will inevitably be wrong.) Tetlock explicitly ignores the “dumb” questions that seem to the experts to have obvious answers, but which everyday folks might consider controversial.

2. He doesn’t compare the experts to the average person on the street. The only such comparison in the book is between experts and Berkeley undergrads — who are darned high on the elite/expert spectrum, in absolute terms. And even in that comparison, the experts win in a landslide. The undergrads aren’t even as good as chimps or dartboards.

Back to the present: Tetlock’s latest initiative, the Good Judgment Project, looks to address those shortcomings. The first-round results are in — reported in an email to Tyler Cowen — and they’re eye-opening. Their predictors:

collectively blew the lid off the performance expectations that IARPA had for the first year. Their original hope was that in Year 1 the best forecasting submissions might be able to outperform the unweighted average forecasts of the control group by 20%. When we created weighted-averaging algorithms that gave more weight to our most insightful and engaged forecasters, these algorithms beat that baseline by roughly 60% (exceeding IARPA’s expectations for Year 4).

And what, you may ask, are the characteristics of their successful expert predictors?

(1) an intense curiosity about the workings of the political-economic world; (2) an intense curiosity about the workings of the human mind; (3) cognitive crunching power (“fluid intelligence” and a capacity for “timely self correction”).

Again: the foxes kick the hedgehogs’ butts.

I always agreed with the commentary about George W and his ilk — that they have no real curiosity about how the world works, they just seek confirmation of their existing (and often simplistic) beliefs — but I never considered it much of a knock-down argument. These results — once we see them explained (the email is pretty thin stuff) — may change my beliefs about that.

Caveat: these “curiosity” criteria are uncannily good at describing Philip Tetlock, Bryan Caplan, Tyler Cowen, and me. I tend to look askance at findings that are self-congratulatory.

Justin Fox reports and ruminates on his experience as a fairly mediocre forecaster in the project (emphasis mine):

So what distinguishes a bad forecaster? In my case, two things: (1) a discomfort with expressing my level of confidence with the size of my bets — this is a real flaw, perhaps traceable to the fact that I had never played a game of poker until two weeks ago; and (2) an almost complete lack of interest in the events being forecast. I think I’m pretty curious about the workings of the political-economic world. I just wasn’t interested in whether the IMF would officially announce before 1 April 2012 that an agreement had been reached to lend Hungary an additional 15+ Billion Euros.

… Hedgehogs who are obsessively focused on a particular theory of how the world works aren’t very good at forecasting. But foxes who don’t care aren’t very good at it either. The best forecasters would appear to be foxes who really really want to win the game of forecasting. To quote Saffo again, the key is to “hold strong opinions weakly.” Don’t be stuck in your views; be willing to revise them quickly when new information comes in. But have bold views, or don’t bother trying to make forecasts.

Cross-posted at Asymptosis.