DARPA, the NSF and the social benefits of economics: a comment on Cowen and Tabarrok

Tyler Cowen and Alex Tabarrok have a new piece in which they ironically note that economists are surprisingly shy when it comes to applying their own tools to evaluate the efficiency of the NSF economics grant program. Their target is a companion JEP article in which Robert Moffitt defends the policy relevance of economics against the last round of attacks on NSF's social science budget, not least Senator Coburn's 2011 list of wasteful federal spending. Cowen and Tabarrok criticize Moffitt's use of the poster child for economics' relevance, Paul Milgrom's research, which, legend says, has brought $60 billion to the US government through rounds of FCC spectrum auctions. This is a typical case of crowding out, they argue, since private firms like Comcast, which saved $1.2 billion in the process, had huge incentives to fund that research program anyway. Likewise, they note, the quest to raise revenue in sponsored search auctions led Google engineers to rediscover the Vickrey-Clarke-Groves mechanism. NSF funding should thus be shifted to programs with high social benefits, for instance setting up a replication journal, supporting experimental projects with high fixed costs or – here they agree with Moffitt – deploying large publicly available datasets such as the Panel Study of Income Dynamics, which was specifically targeted by Coburn.

NSF grants are also biased toward research programs with a high probability of success, already well established and published in top journals, they add. Such "normal science" is hardly groundbreaking. NSF should rather emulate DARPA and fund "high risk, high gain, and far out basic research" (which could include heterodox economics). They also suggest shifting from funding grants ex ante to giving prizes ex post (DARPA's practice), because this creates competition between ideas. If a heterodox model provides better predictions than mainstream ones, then an NSF prize would signal its superiority.

The paper is challenging and, as always with the authors, unrivalled in its clever applications of economic tools. My problem is with:

1. their romanticized picture of DARPA

2. their lack of discussion of how the public “benefits” of economic research should be defined

1. Should the NSF emulate DARPA?

Cowen and Tabarrok’s suggestions are predicated on the impressive record of the Defense Advanced Research Projects Agency in promoting groundbreaking research, from hypersonic planes, driverless cars and the GPS to ARPANET and onion routing. This success is usually attributed to the philosophy and structure of the secretive defense institution founded in 1958 to “prevent technological surprise like the launch of Sputnik,” and later to “create technological surprise for US enemies.” Michael Belfiore, for instance, has described a “department of mad scientists” who profess to support “high risk, high gain, and far out basic research.” This is achieved through a flexible structure in which bureaucracy and red tape are avoided and the practical objectives of each project are carefully monitored. I have only skimmed Annie Jacobsen’s recent “uncensored” history of DARPA, but so far it does not seem to differ much from Belfiore’s idyllic picture. Yet digging into the history of specific projects yields a different picture. In his account of DARPA’s failed Strategic Computing program, Alex Roland explained that while the high-risk, high-gain tradition served as the default management scheme, the machine intelligence project was supervised by no fewer than eight different managers with diverging agendas. Robert Kahn, the project’s godfather, believed that research was contingent and unpredictable and wanted to navigate by technology push, whereas his colleague Robert Cooper insisted on demand pull and imagined an AI development plan oriented toward innovations he could sell. Some viewed expert systems as crucial and others dismissed them, which changed what applications could be drawn from the program.

Roland’s research exemplified the difficulty DARPA’s officials had in agreeing on a representation of the scientific, technological and innovative process that would yield maximum benefits. And benefits were to be evaluated in terms of defense strategy, which, the history of Cold War science has shown, was far easier than evaluating the benefits of social programs. From cost-benefit analysis to GDP, hedonic prices, contingent valuation, VSL or public economics, the expertise economists have developed is precisely about defining, quantifying and evaluating “benefits.” But the historical record also shows that each of these quantifications has been fraught with controversy, and that when it comes to defining the social benefits of their science as a whole, economists are not even struggling with quantification yet. For the last 70 years, they have been stuck negotiating with their patrons a definition of “social,” “public” or “policy” benefits consistent with the specific kind of knowledge they produce.

2. Fighting over “policy benefits”

Moffitt’s article is only the latest instantiation of a series of attempts to reconcile economists’ peculiar culture of mathematical modeling with external pressures to produce useful research, their quest for independence and their need for relevance. This required a redefinition of the terms “pure,” “applied,” “theoretical,” and “basic,” and Moffitt’s prose perfectly illustrates the difficulty and ambiguity of the endeavor:

The NSF Economics program provides support to basic research, although that term differs from its common usage in economics. Economists distinguish between “pure theory” and “applied theory,” between “pure econometrics” and “applied econometrics,” and between “microeconomic theory” and “applied microeconomics,” for example. But all these fields are basic in the sense used in government research funding, for even applied research in economics often does not concern specific programs (think of the vast literature on estimating the rate of return to education, for example, or the estimation of wage elasticities of labor supply). Nevertheless, much of the “basic” research funded by NSF has indeed concerned policy issues, which is not surprising since so much of the research in the discipline in general is policy-oriented and has become more so over time. Although most of that research has been empirical, there have been significant theoretical developments in policy areas like optimal taxation, market structure and antitrust, and school choice designs, to name only three.

For Moffitt, in other words, the nub of the funding struggle is that both theoretical and applied economics are considered “basic” by funding agencies because they are only indirectly relevant to specific policy programs. Trying to convince patrons to fund “basic” or “theoretical” research was an issue even before the NSF opened its social science division in 1960. At that time, economics’ major patron was the Ford Foundation, whose representatives insisted on funding policy-relevant research. Mathematically oriented economists like Jacob Marschak, Tjalling Koopmans, or Herbert Simon had a hard time convincing Thomas Carroll, head of the behavioral science division, that their mathematical models were relevant.

NSF’s economics funding remained minimal throughout the 1960s, and it climbed substantially only after the establishment of the Research Applied to National Needs (RANN) office in the early 1970s. Tiago Mata and Tom Scheiding explain that RANN funded research on social indicators, data and evaluation methods for welfare programs. It closed in 1977, however, after Herbert Simon issued a report emphasizing that the applied research funded was “highly variable in quality and, on the average, not impressive.” The NSF continued to fund research in econometric forecasting, game theory, experimentation and the development of longitudinal data sets, but in 1981, Reagan made plans to slash the NSF’s social science budget by 75%, forcing economists to spell out the social benefits of their work more clearly. Lobbying was intense and difficult. Kyu Sang Lee relates how the market organization working group, led by Stanley Reiter, singled out a recent experiment involving the Walker mechanism for allocating a public good as the most promising example of policy-relevant economic research. Lawrence Klein, Kenneth Arrow and Zvi Griliches were asked to testify before the House of Representatives. The first highlighted the benefits of his macroeconometric models for the information industry, the second explained that economic tools were badly needed at a time when rising inflation and decreasing productivity needed remedy, and the third explained that

…the motivation for such selective cuts [could only be due to] vindictiveness, ignorance and arrogance: Vindictiveness, because many of the more extreme new economic proposals have found little support among established scientists. Because they have not flocked to support them, they are perceived as being captives of liberal left-wing ideologues; Ignorance, because this is just not so. It is ironic and sad that whoever came up with these cuts does not even recognize that most of the recent ‘‘conservative’’ ideas in economics – the importance of ‘‘rational expectations’’ and the impotency of conventional macro-economic policy, the disincentive effects of various income-support programs, the magnitude of the regulatory burden, and the arguments for deregulation – all originated in, or were provided with quantitative backing by NSF supported studies. And arrogance, in the sense that those suggesting these cuts do not seem to want to know what good economic policy can or should be. They do not need more research, they know the answers.

Sound familiar? The ubiquity of Al Roth’s research on kidney matching in economists’ media statements, the proliferation of books such as Better Living Through Economics, Angus Deaton’s 2011 Letter from America, the 53 proposals by economists to rethink NSF’s future funding, and Moffitt’s article can all be interpreted as attempts to redefine the relationship between basic, applied and policy-relevant research and to provide a framework to assess the public benefits of economic research. They all exhibit tensions between publicizing benefits and maintaining objectivity, independence and prestige. Reconciling these contradictory goals has underwritten more than a century of terminological chicanery. In 1883, physicist Henry Rowland delivered a “Plea for Pure Science” in an attempt to divorce his research from the corrupting influence of money and materialism on “applied” physics. In an effort to promote both scientists’ autonomy and their ability to foster profitable industries and strategic military applications, Vannevar Bush introduced the term “basic science” into his 1945 Endless Frontier report. And this is how Moffitt ended up straddling pure, applied, basic, practical, theoretical and empirical science. Economists nevertheless might be able to cut through these debates over the “policy benefits” of their science by turning them into a battle of indicators, as they successfully did with the concepts of growth and inequality.

Bonus question that no paper on NSF econ funding addresses: how has the NBER succeeded in monitoring 15% of NSF econ grants, and what are the consequences for the shape of econ research?
