On Entering the Canon: Who Wins the Nobel Memorial Prize?

Meanwhile, my department – along with some others, I suspect – created a form for predicting the future Nobel Memorial Prize winner(s) in economics. In fact, it might be a good exercise for a sociologist of the economics profession (although a somewhat narrow one).
Who will get the Nobel? Many think of Olivier Blanchard (as Web of Science does) – and indeed, Blanchard might easily be confused with those who got the Nobel long ago, leaving us puzzled that he is still not among them. It will be an interesting twist if Paul Romer gets the prize (recall his recent devastating critique of macroeconomics, as well as previous ones). From the older generation, people like Martin Feldstein seem overlooked. All these scholars are known for their penchant for 'macroeconomic' issues – whatever this means today (and the last 'macroeconomic' Prize went to Thomas Sargent and Christopher Sims in 2011).
Anyway, the intrigue remains, since the result can often be unexpected (as was the case, say, with Elinor Ostrom, or could have been, for some, with Maurice Allais or James Buchanan). Current priorities, even though the Prize is given for work done long ago, might still play a role – thus juxtaposing past economic science with the ever-growing complexity of the present one. Multiple winners are also possible and frequent, and here we might get the most striking combinations, interweaving times, schools and sub-disciplines. One could, for example, pull together Elhanan Helpman, Avinash Dixit and Marc Melitz as theorists of international trade, Philippe Aghion, Peter Howitt and Robert Barro as economic growth giants (oops, no Romer and too much Harvard here), or William Easterly and Robert Townsend as classics in 'economic development'. A more interdisciplinary – but less probable – perspective would involve Harold Demsetz and Richard Posner as 'institutionalists' and, of course, people like H. Peyton Young, W. Brian Arthur or Ernst Fehr (to my mind, Fehr should at some point get the Prize by all conceivable standards), or even Samuel Bowles/Herbert Gintis (with Robert Axelrod as another disciplinary counterpart to Ostrom); many of them could also be paired with the previously nominated William Baumol or Israel Kirzner. I know much less about econometricians; perhaps those who know more could give their suggestions.
But this is still a game. What would interest me a lot (and what, as far as I know, is only partly touched upon in the new book by Avner Offer and Gabriel Söderberg – which should nonetheless be fascinating) is the internal mechanics and tensions of the selection process, the struggles behind the nominations and their evaluation, and the techniques of compiling Nobel press releases and 'scientific backgrounds'. We will never learn the 'true' story, but we should try to come closer to it, because this kind of historical/sociological inquiry might be at least as instructive for understanding the dynamics of economic knowledge as the exercise in prediction.

DARPA, the NSF and the social benefits of economics: a comment on Cowen and Tabarrok

Tyler Cowen and Alex Tabarrok have a new piece in which they ironically note that economists are surprisingly shy when it comes to applying their own tools to evaluate the efficiency of the NSF economics grant program. Their target is a companion JEP article in which Robert Moffitt defends the policy relevance of economics against the last round of attacks on the NSF's social science budget, not least Senator Coburn's 2011 list of wasteful federal spending. Cowen and Tabarrok criticize Moffitt's use of the econ relevance poster child, Paul Milgrom's research, which, legend says, has brought $60 billion to the US government through rounds of FCC spectrum auctions. This is a typical case of crowding out, they argue, since private firms like Comcast, which also saved $1.2 billion in the process, had huge incentives to fund that research program anyway. Likewise, they note, the quest to raise revenue in sponsored search auctions led Google engineers to rediscover the Vickrey-Clarke-Groves mechanism. NSF funding should thus be shifted to programs with high social benefits, for instance setting up a replication journal, supporting experimental projects with high fixed costs or – here they agree with Moffitt – deploying large publicly available datasets such as the Panel Study of Income Dynamics, which was specifically targeted by Coburn.

NSF grants are also biased toward research programs with a high probability of success, already well established and published in top journals, they add. Such "normal science" is hardly groundbreaking. The NSF should rather emulate DARPA and fund "high risk, high gain, and far out basic research" (which could include heterodox economics). They also suggest shifting from funding grants ex ante to awarding prizes ex post (a DARPA practice), because this creates competition between ideas. If a heterodox model provides better predictions than mainstream ones, then an NSF prize would signal its superiority.

The paper is challenging and, as always with these authors, unrivalled in its clever applications of economic tools. My problems are with:

1. their romanticized picture of DARPA

2. their lack of discussion of how the public “benefits” of economic research should be defined

1. Should the NSF emulate DARPA?

Cowen and Tabarrok's suggestions are predicated on the impressive record of the Defense Advanced Research Projects Agency in promoting groundbreaking research, from hypersonic planes, driverless cars and GPS to ARPANET and onion routing. This success is usually attributed to the philosophy and structure of the secretive defense institution, founded in 1958 to "prevent technological surprise like the launch of Sputnik," and later to "create technological surprise for US enemies." Michael Belfiore, for instance, has described a "department of mad scientists" who profess to support "high risk, high gain, and far out basic research." This is achieved through a flexible structure in which bureaucracy and red tape are avoided and the practical objectives of each project are carefully monitored. I have only skimmed Annie Jacobsen's recent "uncensored" history of DARPA, but so far it does not seem to differ much from Belfiore's idyllic picture. Yet digging into the history of specific projects yields a different story. In his account of DARPA's failed Strategic Computing program, Alex Roland explained that while the high-risk, high-gain tradition served as the default management scheme, the machine intelligence project was supervised by no fewer than eight different managers with diverging agendas. Robert Kahn, the project's godfather, believed that research was contingent and unpredictable and wanted to navigate by technology push, whereas his colleague Robert Cooper insisted on demand pull and imagined an AI development plan oriented toward innovations he could sell. Some viewed expert systems as crucial while others dismissed them, which changed what applications could be drawn from the program.

Roland's research exemplifies how difficult it was for DARPA's officials to agree on a representation of the scientific, technological and innovative process that would yield maximum benefits. And those benefits were to be evaluated in terms of defense strategy, which, as the history of Cold War science has shown, was far easier than evaluating the benefits of social programs. From cost-benefit analysis to GDP, hedonic prices, contingent valuation, VSL and public economics, the expertise economists have developed is precisely about defining, quantifying and evaluating "benefits." But the historical record also shows that each of these quantifications has been fraught with controversy, and that when it comes to defining the social benefits of their science as a whole, economists are not even struggling with quantification yet. For the last 70 years, they have been stuck negotiating with their patrons a definition of "social," "public" or "policy" benefits consistent with the specific kind of knowledge they produce.

2. Fighting over “policy benefits”

Moffitt's article is only the latest instance of a series of attempts to reconcile economists' peculiar culture of mathematical modeling with external pressures to produce useful research, and their quest for independence with their need for relevance. This has required redefining the terms "pure," "applied," "theoretical," and "basic," and Moffitt's prose perfectly illustrates the difficulty and ambiguity of the endeavor:

The NSF Economics program provides support to basic research, although that term differs from its common usage in economics. Economists distinguish between “pure theory” and “applied theory,” between “pure econometrics” and “applied econometrics,” and between “microeconomic theory” and “applied microeconomics,” for example. But all these fields are basic in the sense used in government research funding, for even applied research in economics often does not concern specific programs (think of the vast literature on estimating the rate of return to education, for example, or the estimation of wage elasticities of labor supply). Nevertheless, much of the “basic” research funded by NSF has indeed concerned policy issues, which is not surprising since so much of the research in the discipline in general is policy-oriented and has become more so over time. Although most of that research has been empirical, there have been significant theoretical developments in policy areas like optimal taxation, market structure and antitrust, and school choice designs, to name only three.

For Moffitt, in other words, the nub of the funding struggle is that both theoretical and applied economics are considered "basic" by funding agencies because they are only indirectly relevant to specific policy programs. Trying to convince patrons to fund "basic" or "theoretical" research was an issue even before the NSF opened its social science division in 1960. At that time, economics' major patron was the Ford Foundation, whose representatives insisted on funding policy-relevant research. Mathematically oriented economists like Jacob Marschak, Tjalling Koopmans, or Herbert Simon had a hard time convincing Thomas Carroll, head of the foundation's behavioral science division, that their mathematical models were relevant.

NSF funding for economics remained minimal throughout the 1960s, and it climbed substantially only after the establishment of the Research Applied to National Needs (RANN) office in the early 1970s. Tiago Mata and Tom Scheiding explain that RANN funded research on social indicators, data, and evaluation methods for welfare programs. It closed in 1977, however, after Herbert Simon issued a report emphasizing that the applied research funded was "highly variable in quality and, on the average, not impressive." The NSF continued to fund research in econometric forecasting, game theory, experimentation and the development of longitudinal data sets, but in 1981 Reagan made plans to slash the NSF's social science budget by 75%, forcing economists to spell out the social benefits of their work more clearly. Lobbying was intense and difficult. Kyu Sang Lee relates how the market organization working group, led by Stanley Reiter, singled out a recent experiment involving the Walker mechanism for allocating a public good as the most promising example of policy-relevant economic research. Lawrence Klein, Kenneth Arrow and Zvi Griliches were asked to testify before the House of Representatives. The first highlighted the benefits of his macroeconometric models for the information industry, the second explained that economic tools were badly needed at a time when rising inflation and decreasing productivity needed remedy, and the third explained that

…the motivation for such selective cuts [could only be due to] vindictiveness, ignorance and arrogance: Vindictiveness, because many of the more extreme new economic proposals have found little support among established scientists. Because they have not flocked to support them, they are perceived as being captives of liberal left-wing ideologues; Ignorance, because this is just not so. It is ironic and sad that whoever came up with these cuts does not even recognize that most of the recent ‘‘conservative’’ ideas in economics – the importance of ‘‘rational expectations’’ and the impotency of conventional macro-economic policy, the disincentive effects of various income-support programs, the magnitude of the regulatory burden, and the arguments for deregulation – all originated in, or were provided with quantitative backing by NSF supported studies. And arrogance, in the sense that those suggesting these cuts do not seem to want to know what good economic policy can or should be. They do not need more research, they know the answers.

Sounds familiar? The ubiquity of Al Roth's research on kidney matching in economists' media statements, the proliferation of books such as Better Living Through Economics, Angus Deaton's 2011 Letter from America, the 53 proposals by economists to rethink the NSF's future funding, and Moffitt's article can all be interpreted as attempts to redefine the relationship between basic, applied and policy-relevant research and to provide a framework to assess the public benefits of economic research. They all exhibit tensions between publicizing benefits and maintaining objectivity, independence and prestige. Reconciling these contradictory goals has underwritten more than a century of terminological chicanery. In 1883, physicist Henry Rowland delivered a "Plea for Pure Science" in an attempt to divorce his research from the corrupting influence of money and materialism on "applied" physics. In an effort to promote both scientists' autonomy and their ability to foster profitable industries and strategic military applications, Vannevar Bush introduced the term "basic science" in his 1945 Endless Frontier report. And this is how Moffitt ended up straddling pure, applied, basic, practical, theoretical and empirical science. Economists might nevertheless be able to cut through these debates over the "policy benefits" of their science by turning them into a battle of indicators, as they successfully did with the concepts of growth and inequality.

Bonus question that no paper on NSF econ funding addresses: how has the NBER succeeded in monitoring 15% of NSF econ grants, and what are the consequences for the shape of econ research?

The regulation of public numbers

On the night of the Brexit referendum, the "Economics in the Public Sphere" project at University College London hosted a panel discussion on the regulation of public numbers. We heard about lying with numbers, and how to choose numbers that move people. We heard about independent statisticians and shy regulators. We heard about the politics of numbers and measurement.

The speakers came from government, advocacy groups, and academia and were Mike Hughes (Royal Statistical Society); Ed Humpherson (Director General for Regulation, UK Statistics Authority); Diane Coyle (University of Manchester, author of GDP: A brief but affectionate history); Saamah Abdallah (New Economics Foundation, Programme manager on Wellbeing); Mary Morgan (London School of Economics, author of The World in the Model); Sheila Jasanoff (Harvard Kennedy School of Government, author (with Sang-Hyun Kim) of Dreamscapes of Modernity).

You can watch the whole event on YouTube, here. But as a teaser I post Mary's terrific contribution.

A few reads on (and for) Paul Romer, next World Bank chief economist

A few years ago, the World Bank sounded out Paul Romer to fill its chief economist position, and he was not interested. It seems that, after several lives as an academic, an entrepreneur and an urban thinker, he is now ready to become a "global intellectual leader."

As a growth economist, Romer has earned fame by making the production of knowledge endogenous. This story has been masterfully narrated by David Warsh, though commentators disagree on what exactly made Romer's 1990 paper a tour de force. For Warsh, it was solving the 200-year-old Adam Smith paradox. The Scottish economist had emphasized both the importance of specialization and the associated increasing returns to scale (the Pin Factory) and the importance of competition in producing wealth (the Invisible Hand). Yet increasing returns to scale push toward concentration and the gradual suppression of competition. Romer's contribution was not merely solving the puzzle, Warsh argues, but doing so formally. The notion that knowledge was not a standard private good and could yield increasing returns to scale had been around since Arrow, Johnson or Griliches at least. But models with increasing returns were technically difficult to solve, and, Warsh points out, the internal dynamics of the discipline require that new intuitions be formally incorporated into economic models. Romer's idea was to model knowledge as non-rival and only partially excludable. Yet, according to Joshua Gans, Romer's pathbreaking advance wasn't so much how he put knowledge in the production function as how he closed the model, with a market for intellectual property derived from the demand for new goods, and markets for skilled and unskilled labor.
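For readers who want to see the skeleton Warsh and Gans are describing, here is a minimal sketch in the stylized two-equation textbook rendering of endogenous growth – not Romer's full 1990 specification, and the notation and simplifications are mine:

```latex
% Stylized textbook rendering of endogenous growth a la Romer (1990).
% Final output uses capital K and production labor L_Y, augmented by the
% stock of ideas A; A is non-rival, so the same stock enters every firm's
% technology without being used up:
\begin{align}
  Y &= K^{\alpha}\left(A L_Y\right)^{1-\alpha} \\
% New ideas are produced by research labor L_A working with the entire
% existing stock of ideas -- this is what makes knowledge production
% endogenous rather than exogenous manna as in Solow:
  \dot{A} &= \delta L_A A, \qquad L_Y + L_A = L
\end{align}
% Along a balanced growth path, ideas (and output per capita) grow at
% g = \delta L_A: the allocation of labor between production and research,
% pinned down in Romer's full model by the market for designs, drives
% long-run growth.
```

Even this toy version makes Gans's point visible: the first equation is easy to write down; the hard part Romer solved was specifying the markets that determine how much labor ends up in research.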

Romer then left academia to engineer a teaching and grading platform called Aplia. Around 2008, he began to think more deeply about ways to foster development and came up with the idea of charter cities: wherever existing institutions and vested interests prevented the development of economic activities, he argued, new cities should be erected on land leased to foreign powers, so that their governance, institutions and sets of rules would be imported. As Sebastian Mallaby explains, the idea stemmed from the study of Hong Kong and was unsuccessfully applied to Madagascar. Charter cities were met with considerable resistance in intellectual circles (see his debate with Chris Blattman or Mallaby's characterization of the idea as "neo-medieval and neo-colonial"), but this did not deter Romer from setting up an urban institute at NYU. Last year, frustrated that economic scholarship on growth had not converged toward a consensus, he called out his former PhD advisor Robert Lucas for using mathematics to smuggle ideological assumptions into his analysis (a sin he called 'mathiness').


Lauchlin Currie, Paul Romer


Le Corbusier’s plan for Bogota

In the post explaining his new career move, Romer writes about his attempts to build "a new academic field of inquiry based on 'the city as a unit of analysis'" at NYU. In an interesting twist, he is joining an institution whose history shows that bringing together development and urban scholarship is nothing new. Back in the 1940s, neither promoting growth nor managing cities was understood in terms of non-rival goods, scales, spillovers, and the like. Yet both lines of thought lay at the heart of economists' original reflection on the Bank's strategy, as shown by the extensive research Michele Alacevich has conducted on the early years of the World Bank. In 1949, lacking the data to define a loan policy, the Bank sent Lauchlin Currie, former economic advisor to Roosevelt, to Colombia. Currie wanted a development plan that would stimulate the latent potential of specific sectors while simultaneously achieving social aims. This, he believed, could be done by supporting the labor-intensive housing sector in the sprawling city of Bogota. Developing a capital city district and reorganizing basic public services such as water and electricity delivery therefore lay at the heart of the Plan para Bogota he framed with Enrique Penalosa. In an attempt to gather data and expertise, Currie soon found himself working in close association with the architects José Luis Sert and Paul Wiener, who had produced a distinct urban plan blending Sert's 'organic city' approach with the functional guidelines previously defined by Le Corbusier. The collaboration was abruptly interrupted by a coup, but at the Bank the idea that investing in urban planning, and in particular housing, would foster economic and social development stuck around.

 


 

Le Corbusier, Sert and Wiener in Bogota (1950)


In these early years, though, being a bank lending money to individual profitable projects and being a development institution seemed irreconcilable. In the fight over the accurate way to define creditworthiness – through debt service ratios, or through the ability to use loans productively to boost long-term growth, a perspective favored by the economist Paul Rosenstein-Rodan – Wall Street-trained top managers sided with the Loan Department. The project of giving social loans to improve housing conditions and city infrastructures was rejected, and the Economic Department was disbanded in 1952. A few economic advisors to the president remained, but the position had no operational responsibility and was staffed with figures who left little imprint, such as Irving Friedman. For more than a decade, Alacevich relates, the World Bank was thus estranged from development economics, which was developed in a few university bodies such as MIT's Center for International Studies. Another hothouse for development ideas was the set of central and regional offices associated with the United Nations, where the likes of Gunnar Myrdal in Europe or Raúl Prebisch in South America forged theories centered on capital, investment, the big push and path dependency (already with a concern for institutions and learning-by-doing).


Robert McNamara and Hollis Chenery


The Bank's isolation ended in the mid-1960s. Faced with a stream of criticisms of the Bank's results, the newly appointed president George Woods stated that "Gene Black [the previous president] is afraid of economists. I am not." He reopened the Economic Department with the hope of turning the Bank into a "development agency" and of funding wider social projects. Woods' reorientation was met with resistance, but by 1969 the department was staffed with 120 researchers and had launched projects involving, notably, Albert Hirschman. His vision was fulfilled by his successor, Robert McNamara, who dramatically increased the number and variety of loans. A cornerstone of McNamara's overhaul was the recruitment of the development economist Hollis Chenery as advisor. Chenery adopted a long-term strategy based on the reduction of poverty, strengthened longstanding ties with the International Labor Organization and borrowed the basic needs approach from Paul Streeten's group. Essential to McNamara's reorientation, Alacevich notes, was also the establishment of an Urban Development Department and an Urban Poverty task group, again focused on housing. Within academia, some urban economists were also concerned with the ties between cities and development.



As documented by Sarah Babb, the strategy of development banks has often been aligned with the US government's ideology. The replacement of McNamara and Chenery by Alden Clausen and Anne Krueger in 1982 therefore shifted the Bank's philosophy toward a "Washington Consensus" consistent with Reagan's program. But Chenery's long and influential term had created some intellectual and institutional space for economists at the Bank. It stabilized the reliance upon economic and urban approaches to development, though it is not clear to me how much cross-fertilization there was between the two programs in the 70s. It was eventually two World Bank researchers whom Romer recruited in the 2000s to build his Marron Institute of Urban Management, and Alain Bertaud's and Shlomo Angel's résumés illustrate how differently economics and urban planning insights are woven together by economists and urbanists. Though Bertaud explicitly intends to bridge "the gap between urban economics and operational urban planning," both researchers were trained as architects and define themselves as urban planners. The effects of Romer's leadership on the Bank's strategy and on the fields of development, urban economics, and urban planning will thus be interesting to watch.

A virtual summer camp for historians with deadlines

The blog History of Economics Playground began in November 2007. We were brought together as the youth of the history of economics. In a field that honors the discreet and collected elder, we wanted to brave new ground. We wanted to be serious about being playful. For four years, we debated, gossiped and expressed our feelings about life in scholarship. In 2011, we left the playground and went blogging @INET.

Nine years later, we're hopefully still good-looking and enthusiastic, yet also swamped with deadlines. Some of us are writing habilitation theses, others are finishing a book, revising articles, preparing talks and lectures, or applying for grants. This summer, we will be reading the same books and papers, and our topics will often intersect. We will research various protagonists embedded in similar contexts, yet we will disagree a good deal about what shaped economics as we know it today. We are therefore reopening our old playground for a couple of months, turning it into a summer bootcamp for historians with deadlines.

Pooling together for the 2016 summer camp are Ivan Boldyrev, Beatrice Cherrier, Yann Giraud and Tiago Mata. Others will meet us by the beach at some point. If you're interested in joining, just send an email to any of us.

Time to get a tan.


@INET-BW: What’s history?

History keeps appearing and reappearing in the different discussions and presentations. But there's history and history. One reference to history, as Tiago observed in front of the hearth last night, is history as nostalgia: imagine Keynes walking around in this very room!, picture the Americans thrashing the Russians in a game of softball during BW '44!, "as a graduate student, I learned a lot from…." Second, there is the implicit no-history argument, in which economics is one big pile of research from which one may take different sources depending on the issue at hand. Much like philosophy, in which one can as easily apply Aristotle, Hobbes, or Sloterdijk to contemporary issues. History as history is yet to appear.

The Age of In-Between Economists

A weakness of our profession – and perhaps of humans generally – is that we want to classify. Thus we have categorized ourselves and each other as neoclassical, institutional, Keynesian, Marxist and Chicago economists. To some extent, these labels have always been problematic. Where to put John Kenneth Galbraith, Albert Rees, or Herbert Simon, for instance? Moreover, classifications always seem to fall apart when you look too closely.

Over the past years, however, things have become more complicated still with a new generation of economists who, consciously or not, constantly position themselves in between whatever labels and domains are out there. Herbert Gintis is a Marxist, a game theorist, a behavioral economist, or an institutional economist depending on the occasion. Robert Shiller is anything in between traditional finance, behavioral finance, and an applied type of finance research that is more concerned with solving problems in the here and now than with advancing a new theory. And then there is someone like Benjamin Friedman, whom I, as an economist, never know where to situate for my students.

Postmodernism may be long gone, but this is the age of the in-between economist. We, the economists, do a paper today that may be classified as belonging to this category, but tomorrow we'll do a paper that may be put into that one. And we don't really care what these categories are, as long as you classify us and our work, in their totality, as in-between.

Interestingly enough, though, we definitely are economists, not sociologists, politicians, writers, or whatever. We’re very much in-between, but we’re also very much economists.