Why You Must Never Trust Economists – Including Me!
By Shlomo Maital
Have you ever noticed that the word ‘economics’ contains two shorter words? First, ‘con’, to swindle or cheat; and second, ‘comic’, meaning laughable, humorous, funny.
Two respected economists, Joshua Angrist and Jörn-Steffen Pischke, recently published a lead article in a respected journal, noting how nobody believes empirical results derived using ‘econometrics’ (the application of statistics and mathematics to data analysis).* They point out that things are slowly changing. Economists are increasingly doing real experiments, or quasi-experiments (data analysis that simulates a true experiment, including a control group), and this is increasingly becoming a requirement for research to be published or to be believed. This ‘revolution’ is mainly happening, they note, in labor economics, public finance and development economics. It is NOT happening, they note, in macroeconomics or in industrial organization – the two fields where new research and thinking are most urgently needed, in view of the global macro crisis and the need to rethink regulation of markets. Imagine what could be learned, they note, if only we did credible research in those two fields. In macro – want to prove the key is money? Easy. Want to prove the opposite? Equally easy. And both approaches have won Nobel Prizes.
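The quasi-experimental logic described above can be sketched as a toy difference-in-differences calculation. This is a minimal illustration on simulated data (the numbers and groups are invented for this sketch, not taken from Angrist and Pischke): comparing the treated group’s change over time against a control group’s change nets out the shared trend, leaving an estimate of the treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated panel: outcomes for a control group and a treated group,
# measured before and after some policy change.
n = 500
control_before = rng.normal(10.0, 1.0, n)
control_after  = rng.normal(11.0, 1.0, n)   # shared time trend: +1
treated_before = rng.normal(10.5, 1.0, n)
treated_after  = rng.normal(13.5, 1.0, n)   # trend (+1) plus a true effect of +2

# Difference-in-differences: the treated group's change minus the
# control group's change removes the common trend.
did = ((treated_after.mean() - treated_before.mean())
       - (control_after.mean() - control_before.mean()))
print(f"estimated treatment effect: {did:.2f}")  # close to the true effect of 2
```

The control group is what turns a simple before/after comparison into something resembling an experiment: without it, the shared trend of +1 would be wrongly attributed to the treatment.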
I’m afraid the ‘con’ and ‘comic’ will not leave economics until we economists get out of our offices, dig our noses out of data tables, and get into the field to talk to real people and real businesses. I have a confession to make. I once co-authored a paper in a leading journal, Econometrica. It was an analysis of inflation expectations, based on a dataset supplied by a friend, Joe Livingston of the Philadelphia Inquirer, who ran regular surveys about inflation and kindly provided me with the data. It took seven full years for the paper to be accepted, running the gauntlet of referees’ criticisms. What we did to massage those poor data was inhuman. By the time our manipulations were done, well, nothing was left of the simple underlying question (“do you think prices will rise or fall?”). None of this was crooked or dishonest. It simply mangled the data – generalized least squares, Heckman corrections, heteroskedasticity adjustments, time-invariant blah blah… – beyond recognition.
Kudos to Angrist and Pischke for revealing the ‘con’ in economics. The latest fad in economics is to pursue Dan Ariely-like experiments to understand behavior. This is good. There is very little ‘con’ in behavioral economics, when you study real live people first-hand.
* Joshua D. Angrist and Jörn-Steffen Pischke, “The credibility revolution in empirical economics: How better research design is taking the ‘con’ out of econometrics,” Journal of Economic Perspectives, Spring 2010, pp. 3–30.


2 comments
June 24, 2012 at 3:05 pm
Bernardo Javalquinto
Quite remarkable…!
July 9, 2012 at 10:12 am
las artes
Since Edward Leamer’s memorable 1983 paper, “Let’s Take the Con out of Econometrics,” empirical microeconomics has experienced a credibility revolution. While Leamer’s suggested remedy, sensitivity analysis, has played a role in this, we argue that the primary engine driving improvement has been a focus on the quality of empirical research designs. The advantages of a good research design are perhaps most easily apparent in research using random assignment. We begin with an overview of Leamer’s 1983 critique and his proposed remedies. We then turn to the key factors we see contributing to improved empirical work, including the availability of more and better data, along with advances in theoretical econometric understanding, but especially the fact that research design has moved front and center in much of empirical micro. We offer a brief digression into macroeconomics and industrial organization, where progress — by our lights — is less dramatic, although there is work in both fields that we find encouraging. Finally, we discuss the view that the design pendulum has swung too far. Critics of design-driven studies argue that in pursuit of clean and credible research designs, researchers seek good answers instead of good questions. We briefly respond to this concern, which worries us little.