(Also, it feels weird to call an auditorium at GWU filled with 200+ data scientists listening to one of the leading statistical consultants in 3 of the last 4 presidential campaigns a "meetup." Also also, a longer post about last night's presentation is forthcoming. Also also also, three sentences in parentheses as an aside to this whole post; Strunk and White would be so disappointed.)
We got to talking about economic forecasting. I recall reading an IMF paper (that I cannot locate) that said something to the effect of "models don't make forecasts, economists do." That does not mean that economists should eschew models in favor of their gut. In fact, the broader point of the paper was about using a quantitative model to produce internally consistent forecasts.
The "economists make forecasts" message is actually an operationalization of a phenomenon that Tyler Cowen writes about in his book Average is Over. Cowen cites the interaction between humans and various predictive algorithms in freestyle chess as an allegory for how the most productive sector of the labor market will be operating 50 years hence. He claims (and I choose not to verify) that the best chess algorithms routinely trounce the best chess players, but the best human-computer teams trounce everyone. And interestingly, the best humans for these teams are often mediocre chess players in their own right, but they understand the advantages and limitations of both the algorithms and themselves.
I'm inclined to agree with Dr. Cowen; for the foreseeable future we (in data science and beyond) are best served by arming humans with our best computer models, but leaving the humans empowered to make the final judgment.
human + computer > computer > human
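To make that inequality concrete, here's a toy sketch of the human-in-the-loop idea: a simple model produces a baseline forecast, and a human analyst nudges it with outside knowledge the model doesn't have. Everything here is hypothetical, including the numbers and the function names; it's an illustration of the workflow, not anyone's actual forecasting method.

```python
def model_forecast(history):
    """Naive model: predict the mean of recent history."""
    return sum(history) / len(history)

def human_adjusted_forecast(history, adjustment):
    """Human keeps the model's output but applies a judgmental adjustment
    based on information outside the model (hypothetical example)."""
    return model_forecast(history) + adjustment

# Made-up quarterly growth rates (%), purely for illustration.
history = [2.0, 2.2, 1.9, 2.1]

baseline = model_forecast(history)                          # model alone
final = human_adjusted_forecast(history, adjustment=-0.3)   # analyst expects a slowdown

print(round(baseline, 2))  # 2.05
print(round(final, 2))     # 1.75
```

The model supplies the disciplined, internally consistent number; the human supplies the context. Whether the adjustment helps is, of course, exactly the judgment call the post is about.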
For now, at least, the above generally holds. But who knows what will ultimately happen. After all, if economic forecasting has taught me anything, it's that predicting the future is hard (like math).