2016 as a ten-letter word

Robert N. Proctor (Photo: Linda A. Cicero)

At the end of November, as has been the case every year since 2004, Oxford Dictionaries revealed their choice for the word of the year. For 2016, they settled on “post-truth”. This adjective, defined as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief”, echoed a number of events of the past few months, including the UK’s vote in favor of Brexit and, most infamously, the election of Donald J. Trump. Following the announcement, a few commenters were quick to observe that “post-truth” could be considered an emanation of postmodernism, the brainchild of post-1968 French philosophy and critical theory. Whether you buy this or not – I don’t – there’s no denying that “post-truth” has been everywhere in the press and on social networks. Yet, as a historian of science with little – if any – interest in questions of “truthfulness” and “falsity”, I would like to suggest another ten-letter word that describes more accurately what has been going on over the past few months – and, admittedly, over the past few decades as well, 2016 representing in my opinion some kind of turning point in its development. This word is “agnotology”.

For those of you who are not familiar with the concept elaborated by Robert N. Proctor in books such as The Cancer Wars: How Politics Shapes What We Know and Don’t Know About Cancer (1995) and Golden Holocaust – Origins of the Cigarette Catastrophe and the Case for Abolition (2012), suffice it to say that “agnotology” is the production and dissemination of ignorance – as well as the study of this phenomenon. Proctor’s argument, in a nutshell, is that knowledge is not created out of a vacuum we would call “ignorance” but, instead, that both knowledge and ignorance are social constructs, contingent on many social, political and individual factors. The production and dissemination of ignorance can therefore be studied using the tools traditionally attached to the history of science, making the distinction between science and non-science not so significant in the process. When we look at the history of how the cigarette industry intentionally spread doubts about the evidence linking smoking to cancer, we find that there were always a few scientists willing to back its claims. At first, I was skeptical about “agnotology” because I had read Naomi Oreskes and Erik Conway’s Merchants of Doubt, which does not use the term “agnotology” but tells a relatively similar story of ignorance dissemination, and I was unsatisfied with the way they tried to demarcate between the good, disinterested scientists fighting for truth and those who were paid by big corporations to spread false information – I had expressed my dissatisfaction on the INET version of this blog. However, I do not find the same problem with Proctor’s historical narrative, which is not so much interested in questions of demarcation as in the cultural and political context in which ignorance is produced and disseminated.
Accordingly, agnotology has been used in the history of economics by Phil Mirowski and Edward Nik-Khah in a way that may seem controversial at first but which I found, after some resistance, increasingly convincing. After all, agnotology deals with producing and distributing something, so economics should not be too far away when we think about it. Two French economists have recently tried to use Proctor’s work, without using the term “agnotology”, in order to build a case against the critics of mainstream economics, but in my opinion – which I have expressed in a forthcoming review of their book for a French STS journal – they fell short of understanding the complexity of the concept and, quite ironically, ended up generating a lot of agnotology over the current status of their discipline.

Donald Trump: the rise of agnotologic governmentality?

But so much for these issues of scientific demarcation – or lack thereof. “Agnotology” is an enlightening word to describe 2016 because it is effectively applicable to the political issues of the day. Ignorance production and dissemination is not something relevant only to scientific issues. It is, actually, a total social fact in the Maussian sense of the term, one that ties together cultural, psychological and political elements. With the election of Donald Trump, I even wonder whether we are entering an age of agnotologic governmentality, a way of governing that uses ignorance as a political device. In using the term governmentality, I explicitly refer to Foucault’s Birth of Biopolitics. Governmentality, in Foucault’s conception, should not be confused with “government”. A regime of governmentality is enforced, not just by the State, but at many different levels where knowledge and power are connected. Replace knowledge with ignorance and you have some idea of what a Foucauldian version of agnotology could look like. Since his election in November, Donald Trump has been a master at disseminating so much information, both in the press and on social networks, that it is almost impossible to assess what his administration will yield. But of course, this did not begin with his election: this is the way he has behaved since the very beginning of his campaign in the Republican primaries. At the time, it was summed up in one fantastically short tweet.

Now that “the comments section” has become the new President of the United States, we can argue that the kind of ignorance such comment sections typify will not only lead the most powerful nation but, as a result, will preside over the ways of the world at large. But ignorance is not a “top-down” phenomenon, and that is what makes it so stealthy, yet powerful. Ignorance is cultivated at every level of society and now, through more or less trustworthy internet news coverage, it is disseminated at ever higher speed. Even academics and self-proclaimed “intelligent” people such as you and I can be subject to it. If you have been a regular user of social networks over the past few months, I defy you to tell me that you have never fallen for clickbait, believing for at least a few minutes a piece of information that later proved to be either false or (mis)guided by a non-objective source. The bombing of Aleppo, for instance, has been the subject of so much news coverage that it is impossible to be sure that everything we were told was true. While there is no doubt that, on one hand, some information has been manipulated by pro-Syrian and Russian media, we are not so naive as to believe that there is no propaganda on the other side, too. Increasingly complex conflicts and social issues such as this one are not easy to grasp, and we can all be deceived. Effective propaganda knows how to exploit the capacity we all have to doubt. It is no surprise that agnotology is often related to neoliberalism. It is not so much, I think, that there is a mechanical relation between the two but, instead, that both are so squeezed into the recesses of our everyday lives that they are difficult to escape, unless we turn off our computers and start leading a more reclusive life – which may not be a bad idea after all.

Anyway, I am afraid I have conveyed that 2016 has been a very bad year, and this is similar to a lot of rants you have already read elsewhere. I should apologize for my lack of originality. But there is also a more positive message: as historians of science, we may be able to apply our critical toolbox to understanding how we got here and, hopefully, how we might get out of it.

Dr. Phil – or how I stopped worrying about economists and embraced neoliberalism.

At the latest History of Economics Society Meeting, a number of friends and colleagues (co-bloggers Béatrice Cherrier, Till Duppe and Floris Heukelom) and I participated in a roundtable devoted to “the practical challenges of writing recent history”, organized and chaired by E. Roy Weintraub. On this occasion, we all gave talks – mostly drawn from personal experience – that addressed how writing the history of recent economics differs from doing the history of older economics and the kinds of practical issues it requires us to consider. Most of our talks addressed, at some point or another, the relation to current economics: on the one hand, writing the history of recent economics resonates with current research in the field; on the other hand, economists can disagree – sometimes in print – with the kinds of accounts that historians construct about them. In sum, writing on recent economics can help you get noticed by economists, but sometimes there is attention you may just want to avoid. Then, at the end of what was an interesting, if somewhat polite, discussion, Philip Mirowski intervened, saying that our talks were, in his opinion, too focused on our relation with economists; that we have no reason to fear them, since they have no interest in history whatsoever; and that, at the same time, science studies scholars are mostly concerned with economics as a subject because they feel that the prevalence of economic imperatives in academia is a threat to the humanities departments in which they are located.

My feeling is that, even though Phil expressed his opinion in his own distinctively provocative way, he was right, and that by focusing too much on the relation between the history of economics and economics, we may not be fully wrong but still, at the very least, mistaken. For at least one part of the argument is true: economists, on the whole, are not interested in the history of their field and are not likely to be interested in it anytime soon. A bibliographic study I have undertaken over the past few years with my friend and colleague Pedro Duarte – forthcoming in the Journal of the History of Economic Thought – focusing on the historical pieces published in major economics journals, led us to quite clear conclusions:

The trends we observe … seem to illustrate … [the] increasing estrangement between economists, when writing to the profession at large in their general top journals, and HET. Not only have we shown that, in contrast to the 1970s, fewer HET papers have been published recently in most of the top journals we studied, but we also demonstrated that the papers that have been published are so diverse in the methods they use and the issues they address that it is very hard to see them as a coherent whole—not to mention as part of a unified subfield. In particular, the fact that most of these articles rely not on specific tools and methodologies, but, rather, on surveys and quite general statements may have contributed to the conflation of historical investigations and literature surveys. Therefore, practicing economists themselves have become the main narrators of their past, whereas historians are less and less seen as the expert community to be properly consulted when accounts of past economics are needed. … As a result, the issues that are central to the latest developments of the history of economics … and the new tools that historians are using to address them … have yet to make their way into the mainstream literature.

On the other hand, sociologists, historians, political scientists, and even management scholars are increasingly drawn to the history of recent economics. They do so because they feel that economics is an important part of today’s social, political and cultural environment and they want to understand it. Of course, there’s nothing new about this. Another friend and colleague of mine, Loïc Charles, has done work on 18th-century economics with practicing historians, showing how economic thinking was intertwined with much that was happening at the time: international trade (including, most notoriously, the slave trade), the colonization of the Americas, the French Revolution, etc. But what is specific to the recent – postwar – period is that economic thinking is not just mixed with other types of knowledge and practices but, increasingly, is THE knowledge used to ground and legitimize all other knowledge and practices. This recent move toward the economization of every aspect of our society is what researchers have come to designate as “neoliberalism”, and it is one of the main concepts that makes the study of postwar economics a possibly interdisciplinary venture, one that has a good chance of attracting readers and creating scholarship.

For years, I resisted this “neoliberal” narrative. I thought that neoliberalism was a conspiratorial construction, that it was hard to pretend that a small group of Austrian economists, even helped by some well-organized think tanks, could influence society at large so as to create a culture so ubiquitous that we are all shaped by it, whether we like it or not. But now the literature on neoliberalism has attained a critical mass, and I must say that, altogether, it provides a good analytical grid for what is happening in the world, even though there is much to criticize in each of these contributions. There is, of course, Foucault’s 1979 course at the Collège de France, which falls short on details but sets up the big picture; and in recent years, many other books have helped develop the neoliberal narrative: Wendy Brown’s philosophical account of how neoliberalism is detrimental to democracy, Bernard Harcourt’s assertion that neoliberalism is transforming all citizens into punishable subjects, Sonia Amadae’s claim that the neoliberal citizen and consumer is the strategic rational actor described in non-cooperative game theory, Elizabeth Popp Berman’s depiction of the economization of academic science, etc. And of course there are all of Phil Mirowski’s contributions to the subject: see here, there, and everywhere.*

So, is it convincing? Well, let’s take for instance Béatrice’s latest post. She talks about Paul Romer being appointed as chief economist of the World Bank. First, why should we be concerned about this? Why is it so special that there is a new chief economist, when we do not seem to have much to say about Dr. Jim Yong Kim, the American (Korean-born) physician who is the current President of the institution? Well, maybe it is because we feel that economic knowledge is going to be more important than medical knowledge when it comes to deciding how countries should be helped financially. That is something the neoliberal narrative tries to explain. And what was Romer doing before he got this new position? I quote Béatrice here: “Romer left academia to engineer a teaching and grading platform called Aplia.” Some neoliberalism scholars have argued that such platforms are instances of the neoliberal transformation of education. And what about Béatrice’s last point, on how “the replacement of McNamara and Chenery by Alden Clausen and Anne Krueger in 1982 shifted the Bank’s philosophy toward a ‘Washington Consensus‘ consistent with Reagan’s program”? That, too, is the subject of many contributions to the history of neoliberalism. In fact, we now have a neoliberal narrative for everything: even TV series are subjected to it.

So, should we embrace all of it? Not necessarily, of course. These accounts are often partial and in need of qualification. Nor am I claiming that every history of modern economics is underwritten by this neoliberal narrative; there are many other narratives to draw. But this is one strong reading of the current situation, and as such it needs to be addressed. It is also a fascinating laboratory for possible discussions between historians and sociologists of all the social sciences, as well as with cultural theorists and political scientists. This is why I expect that when Pedro, Joel Isaac, Verena Halsmayer and I hold the next HISRECO conference in Lucerne on April 21-22, 2017 (call for papers coming soon!!), the term “neoliberal” will pop up once again on several occasions.

*Not to mention the fact that even notorious neoliberal institutions have ended up acknowledging the label themselves.

Did Duke University blacklist Milton Friedman?

Great ideas are earned through hardship. It is a conviction that requires no argument, inscribed as it is into our collective consciousness. As I have been writing about and researching Milton Friedman’s popular writings, I was surprised by the (popular) claim that Friedman was for many years an outcast in the economics profession, the proof being that such a respectable place as Duke University refused to carry his books (the specific source was a celebration of Friedman’s life by Robert Samuelson in Newsweek).

Milton and Rose Friedman write in their autobiography Two Lucky People (page 341 of the 1999 edition) of a letter sent to them in 1989 by Mark Rollinson, who thirty years earlier had been a student at Duke University:

My years at Duke … were not happy ones. … To make matters worse, most of my fellow students and all of my professors held my views on several subjects in overt disdain.
One day after particularly severe ridicule in an economics class I went to the professor after the session and told him that I was quite certain that I was not stupid and I asked him if there were not at least some economists who shared my views. “Oh yes,” he said “as a matter of fact we’ve discussed you frequently here at the faculty level. You’re nearly a clone of some chap in Chicago named Milton Friedman. It’s truly amazing.”
Well, I went running over to the library with your name in hand, only to find that you were not in the name catalogue. On consulting with my professor later, he explained that Duke had a system of screening new material by the appropriate department and the Economics Department did not consider your work worthy of carrying.
Whereupon I went to the Dean of Men … and made an offer: put Friedman into the library or take Marx out; otherwise I would write a letter to the editor of every newspaper I could find.
They opted to add you and keep Marx.
When you received the Nobel Prize, I was prouder probably even than you, as you might imagine.

Continue reading “Did Duke University blacklist Milton Friedman?”

Measuring the “Shock”

You can tell that a concept has become fairly popular when it is used by educated laymen and laywomen in very different circles. Obviously, Naomi Klein’s idea of a “shock doctrine” has been all over the place since we learned about the tragic earthquake disaster – and its consequences – in Japan. This morning, I heard on French public radio a political analyst talking about fears that international institutions might apply the “shock doctrine” to Japan, a phrase, he noted, that “economists like to use frequently”. The same concerns were expressed by a Facebook friend of mine as early as Friday morning, who wondered whether the World Bank was going to impose on the Japanese people a “Chicago School-like shock therapy” (emphasis added). She is not an economist or a social scientist but a film editor and a street artist. Tiago, I think it is time to revive your “the Evil that economists do” paper!

PS: I refrained from using as an illustration one of those terrifying pictures of the earthquake or of desolate landscapes that have circulated all over the net. I feel uneasy about the ambiguity of their intense dissemination, as if people were both appalled and fascinated in an unhealthy way by the Japanese drama. In any case, you can still donate to one of the organizations working on relief and recovery in the region.

The End of Relativism?

The beginning of a new year is always an occasion to reflect on the recent past, as the posts of my fellow bloggers Benjamin, Clément and Béatrice [to whom the opposite Calvin & Hobbes comic strip is dedicated] have shown. Though their interrogations mainly concern the purposes and practices of historians, I would like to add another one, which may be a bit more ‘philosophical’ – pardon the grand word! What has struck me during the past year is the slow decline of what some thinkers call relativism.

Relativism (not to be confused with moral relativism), as I have argued here and there, is not the idea that everything is equal or that there is nothing demarcating the good from the bad, the true from the false. Instead, it is the observation that what we call truth, scientific facts or fair decisions is affected by the context in which we are located, and that these can be appraised differently in different communities or cultures. It is not surprising that relativism – a term sometimes used pejoratively by its detractors – has been associated with literary theorists such as Stanley Fish, because rhetoric is where it is used most conspicuously. My literary style will change greatly depending on the people I am addressing and, as a result, so will the meaning of what I am saying. For instance, while writing a scientific paper, I may call some previous contribution ‘misleading’ or ‘unfortunate’, while in front of fellow researchers I will call it a ‘piece of crap’, and back at home, in a sign of deep fatigue and irritation, I will paraphrase Lennon and call it ‘the shittiest pile of shit ever’. Talking about Samuelson in private correspondence, Stigler wrote to Friedman: “It may merely be prejudice, but I’m inclined to write him off as an economist” [in Hammond, Making Chicago Price Theory: Friedman-Stigler Correspondence, 1945-1957, p. 97]. This is certainly not something he would have said – in spite of his renowned acerbic wit – in print, and though Samuelson may have been conscious of such animosity, he certainly did not take it into account when he called Friedman “an able scholar” and “an old friend” [Samuelson, Economics From the Heart, p. xi]. There is nothing abnormal in this. Whatever our opinions are, we have different ways of communicating them to our interlocutors – from our closest friends to the scientific community and the public at large.

This, however, was seriously threatened in 2010, and I will mention only two events that struck me in this respect: the first is that a few people were legally fired from their jobs after speaking badly of their supervisors on Facebook; the other is the whole WikiLeaks affair. In the former case, it is quite striking that people who wrote a few negative words on their walls about their work environment – calling their boss an idiot, or their job crappy – were found guilty of serious professional misconduct, while we know that every day people spend much of their time at the workplace, near the coffee machine for instance, fearlessly disparaging colleagues and immediate superiors. Why is something that is considered normal in the workplace suddenly demonized when it is done outside of it? The WikiLeaks affair is quite similar, as it simply shows that when diplomats talk among themselves, they do not adopt the same discourse they use publicly. Is there anything shocking about that? I don’t believe so. You may have to deal in a friendly manner with a head of state you believe is an arrogant and disagreeable human being, especially if world peace is threatened. Similarly, you can perfectly well envision, with some allied country, the use of military force against a country with which you are simultaneously conducting amiable negotiations – just in case the negotiations fail, as Clausewitz would have understood. The fact that these seemingly inconsistent behaviors are suddenly judged negatively by law courts and by public opinion at large will push people to adopt the same discourse whomever they are talking to. Whether we are blogging, writing academic papers or chatting on our Facebook walls, should we adopt the same writing style? Some people obviously believe we should, and the huge informational database being constituted on the internet seems to put pressure on us to do so as well.

How much will our practices as historians [of economics] be affected by this? I believe history as we construct it is built upon the idea that things – ideas, objects, etc. – evolve and differ across periods of time and among different communities. If they do not, there is simply no story to be told. The denial of relativism is then the denial of historicity. Happy new year!

Julian Reiss: Evidence for Use

This post was sent by Julian Reiss (Erasmus University Rotterdam, see here), speaking to the subjects of a recent comments discussion.
————————

Analytical philosophers of science, especially those trained at an Anglo-American university, tend to ask questions that are abstract, narrow and pertain to somewhat idealized circumstances. They are abstract so that answers stand a chance of being general; they are narrow so that answers stand a chance of being precise; and they pertain to idealized circumstances so that answers stand a chance of being correct. ‘Evidence for use’ can be understood as a reaction against this way of doing philosophy.

For this way of doing philosophy comes at a cost: the more abstract, narrow and ideal a question is, the less likely it is to address an issue that has broader social relevance. Proponents of evidence for use urge instead the pragmatist vision of philosophers contributing to solving the pressing social issues of the civilization they are a part of. The idea of evidence for use, then, is that philosophers of science interested in theory and evidence should ask questions and frame answers in ways that have some societal significance.

The idea has its origins in Philip Kitcher’s work on ‘well-ordered science’ (most importantly in his 2003 OUP book Science, Truth and Democracy) and Nancy Cartwright’s recent work on evidence (see for instance her paper ‘Well-Ordered Science: Evidence for Use’, published in Philosophy of Science in 2006). A science is well-ordered to the extent that its research priorities are such that they would be endorsed in a democratic deliberation among well-informed participants committed to engagement with the needs and aspirations of others. In other words, Kitcher demands that science ask the right questions, and in the right ways. Cartwright’s concern is mainly with methodology: how do we devise methods so that the products of science help solve practical problems?

The recent movements of evidence-based medicine and evidence-based policy can illustrate what is at stake here. These movements demand that the causal claims on which we base our policies (such as decisions to approve a new drug or implement a new schooling program) be supported by high-quality evidence, which in their understanding means randomized controlled trials (RCTs). Indeed, RCTs can be shown to prove a causal claim, given certain assumptions. But there are two major problems: first, the assumptions required are exceedingly narrow, so that their satisfaction is unlikely except under ideal conditions; second, even when they are satisfied, the RCT proves a narrow ‘it works somewhere’ causal claim (in Cartwright’s words), whereas what we need to know is that ‘it works for us’. Because the correctness of the claim proved by an RCT depends crucially on the characteristics of the test population, the circumstances of the test and the specific ways of administering the treatment, results are unlikely to continue to hold in the circumstances we are ultimately interested in.

Evidence for use invites us to refocus from questions we can answer easily (such as ‘How do we design an experiment so we can be reasonably certain about its result?’) to questions that matter to society (such as ‘How do we design an empirical study so we can be reasonably certain that a policy based on it will be successful?’). For a recent special issue of the journal Synthese that takes up some of these themes, see here.