Every year, the United Nations Biological Weapons Convention holds a Meeting of Experts to share updates on developments relevant to the Convention. This year, several colleagues and I presented on work we have been doing as part of the ESRC/AHRC/DSTL-funded grant on The Formulation and Non-formulation of Security Concerns: Preventing the Destructive Application of the Life Sciences.

I presented on ways to “‘Take Care’ of Security in Synthetic Biology,” which is a piece of work I am currently developing with Emma Frow at the University of Edinburgh. The main argument is that states would benefit from building in mechanisms to consider the assumptions they make about objects of security concern and the validity and legitimacy of the mechanisms they use to govern them. See my slides, and please email me if you have any questions.

In a recent issue of Nature (pictured left), there was a special section on moving ‘Beyond Divisions’ in building the future of synthetic biology. While my colleagues and I support moving beyond many types of divisions, we thought the initial ‘Worldview’ piece by Volker ter Meulen required a concerted reply, as it missed the point of much of our work. 20 colleagues (listed below) and I sent the following Correspondence to Nature’s Editor, which was published in Issue 7504. A preprint version is below.


Synthetic biology: missing the point

Volker ter Meulen warns that if environmental groups and others exaggerate the risks of synthetic biology it could promote over-regulation, which he says happened for genetically modified organisms (See here). But the point of supporting synthetic biology is not about making sure that science can go wherever it wants: it is about making the type of society people want to live in.

In the United States, for example, the rapid and uncritical introduction of genetically modified organisms prevented debate on issues such as alternative innovation pathways, and the impact on biodiversity and pest resistance. Many believe that these issues would have been better addressed through earlier and broader public discussion of the uncertainties surrounding transgenic organisms (see, for example, S. Jasanoff, Designs on Nature; Princeton Univ. Press, 2005).

In our view, ter Meulen trivializes the role of social scientists in suggesting that they could help the synthetic-biology debate by finding better ways to communicate what scientists think. He also implies that public concern over such technologies and their governance reflects only a failure to understand the science of risk assessment — but this ‘deficit model’ of public concerns has long been discredited (see A. Irwin and B. Wynne Misunderstanding Science? Cambridge Univ. Press; 1996).

It is not unknown for scientists themselves to foster exaggeration and uncritical acceptance of claims, or to focus on anticipated benefits rather than on risks. This practice may be at the heart of wider public concerns about responsible innovation (see the report of the Synthetic Biology dialogue, for instance).


Sam Weiss Evans University of California, Berkeley, USA.
Sheila Jasanoff Harvard Kennedy School, Cambridge, Massachusetts, USA.
Jane Calvert University of Edinburgh, UK.
Jason Delborne North Carolina State University, Raleigh, USA.
Robert Doubleday University of Cambridge, UK.
Emma Frow University of Edinburgh, UK.
Silvio Funtowicz University of Bergen, Norway.
Brian Green Santa Clara University, California, USA.
David H. Guston Arizona State University, Tempe, USA.
Ben Hurlbut Arizona State University, Tempe, USA.
Alan Irwin Copenhagen Business School, Denmark.
Pierre-Benoit Joly INRA, IFRIS, Paris, France.
Jennifer Kuzma North Carolina State University, Raleigh, USA.
Megan Palmer Stanford University, California, USA.
Margaret Race SETI Institute, Mountain View, California, USA.
Jack Stilgoe University College London, UK.
Andy Stirling University of Sussex, UK.
James Wilsdon University of Sussex, UK.
David Winickoff University of California, Berkeley, USA.
Brian Wynne Lancaster University, UK.
Laurie Zoloth Northwestern University, Evanston, Illinois, USA.

My report with the Flemish Peace Institute on Multilateral Export Control List Modification Processes is now published. The first part of the introduction is below.

An export control system is one of a range of mechanisms that states can employ to govern the security concerns tied to goods and technology. It is a tool that states have used for as long as states have existed, in conjunction with sanctions, embargoes, interdictions, and intelligence activities. Put simply, export controls govern the transfer out of a state of objects and knowledge of potential security concern. Any export control system must contain a list of items to control, a way of controlling the export, and a method of enforcing compliance with the system. Of these various parts of the export control system, perhaps the most under-studied are the lists of items under control. How did these items get onto (or off of) the lists? How is an item on a list related to an object that is actually exported? Who has a say in what is listed or not?

This report addresses these questions by providing an analysis of the processes states go through to modify the lists they employ in their export control systems. The lists are often intricate, and while some are updated yearly, others go many years between modifications. While some items on the lists may be added by a single state or region, the majority of them derive from multilaterally agreed lists that have been in place for decades. Understanding how these lists change is a key part of both being involved in the process and being able to critique it. This document aids that understanding.

Download the Full Report on the Flemish Peace Institute website.

source: NSABB

On 21 February 2013, the US Government released a proposed policy for institutional oversight of dual-use research of concern. This proposed policy was then opened to public comments, which were due yesterday. As someone who has been looking at issues of dual-use research and technology for nearly a decade, and having recently turned my attention more specifically to dual-use research in the life sciences, I felt that I needed to comment. My comments, which you’re welcome to read if you like, centered on the point that this policy will not be very effective because it places too much of the obligation to raise security concerns in the hands of the scientists doing the research. As it stands, the policy will likely be seen by the academic community as a burden, but a necessary obligation to sustain an increasingly tenuous belief in the “social contract for science”. At its core, the contract is an unwritten understanding that the government should fund science with as little oversight as possible, as this will enable free enquiry that will inevitably lead to innovation useful to society as a whole.1

I argued that, rather than setting up the policy to fail in the same way that previous attempts at governing dual-use research have failed,2 we should at least try something different, something that involved a recognition that securing academic freedom and the nation (and the economy!) needed an active and continuing collaborative dialogue between these communities with radically different conceptions of what needs to be secured and what counts as adequate security.

I asked other academic colleagues with a close interest in this whether they were planning to submit comments, and was somewhat surprised that several of them responded that they were not. They cited reasons such as it being the middle of the semester, or their Washington contacts being too out of date for their voices to be heard. Their lack of engagement concerned me. For those of us who are engaged in, well, engaged research, I feel it is an obligation, when asked our opinion, to give it. We can criticize all we like, but when given the opportunity to have those critiques feed into the policy process, I think we should take it. I know that there are probably dozens if not hundreds of comments on this policy, that while the government is obligated to ‘consider’ them all there is no transparency on how this works, and that the likelihood my comments will be closely regarded is slim. But that’s not the point. To me, submitting these public comments is like voting: if you don’t vote, you can’t complain about the government you get. It’s not testifying in front of Congress, but it is being a responsible scholar.

  1. You can read a recent review of the problems with this vision of the relationship between science and the state in a new book out shortly called Responsible Innovation, particularly chapter
  2. e.g. the Fink Report

Now available in the latest issue of Minerva:

Export Controls and the Tensions Between Academic Freedom and National Security

Samuel A. W. Evans and Walter D. Valdivia (May 2012)


In the U.S.A., advocates of academic freedom—the ability to pursue research unencumbered by government controls—have long found sparring partners in government officials who regulate technology trade. From concern over classified research in the 1950s, to the expansion of export controls to cover trade in information in the 1970s, to current debates over emerging technologies and global innovation, the academic community and the government have each sought opportunities to demarcate the sphere of their respective authority and autonomy and assert themselves in that sphere. In this paper, we explore these opportunities, showing how the Social Contract for Science set the terms for the debate, and how the controversy turned to the proper interpretation of this compact. In particular, we analyze how the 1985 presidential directive excluding fundamental research from export controls created a boundary object that successfully demarcated science and the state, but only for a Cold War world that would soon come to an end. Significant changes have occurred since then in the governance structures of science and in the technical and political environment within which both universities and the state sit. Even though there have been significant and persistent calls for reassessing the Cold War demarcation, a new institutionalization of how to balance the concerns of national security and academic freedom is still only in its nascent stages. We explore the value of moving from a boundary object to a boundary organization, as represented in a proposed new governance body, the Science and Security Commission.

I have begun a new position at the University of California, Berkeley, within the new Center for Science, Technology, Medicine, & Society (CSTMS). The Center is a vibrant hub of activity and research on the social and political dimensions of science, technology and medicine. It draws on an array of perspectives, from Science & Technology Studies, History of Science, and the Medical Humanities, to help us understand the past and shape the present and future course of social and technical decisions.

As an Academic Coordinator, I am responsible for helping the Center create its identity, build its presence within the University and the broader community, and write grants for new research initiatives. It is an extremely exciting time to be in such a position, as there is a lot of momentum within Berkeley to expand research in this area. Do keep an eye on our website, as it will be changing significantly in the coming weeks!

I am also a Visiting Scholar at Berkeley, and will be maintaining my Harvard University affiliation as an Associate Research Fellow in the Program for Science, Technology, & Society at the Kennedy School of Government.  Under these titles, I will be publishing at least one article about how states have come together over the last 400 years to try to jointly control militarily significant technology.  At different points in that history, there were very different visions of what the international order was and should be, and those visions significantly influenced (and were influenced by) how states envisioned what counted as militarily significant technology.

My research interests are shifting slightly this year, as I move from an international focus on export controls and the ambiguity of classification to a focus more on the role of government regulations in the conduct of university research. I will be looking specifically at issues of deemed exports, where foreign nationals at US universities are given information that is considered militarily significant. The questions I am asking are: Who decides what counts as militarily significant technology within an emerging research environment? How are controls substantiated? How are they legitimized and justified, and to whom? Who has a say in the process of determining an acceptable level of government regulation?

Do get in touch if you would like to discuss any of these issues.


The Science, Power, and Politics reading group that I am running this year at Harvard looked this week at the role that maps play in creating ‘objective’ knowledge for a state. The primary reading for this was Benedict Anderson’s Imagined Communities, although James Scott makes the same point in Seeing Like a State. More recently, in the August issue of Social Studies of Science, Christine Leuenberger and Izhak Schnell discuss “The politics of maps: Constructing national territories in Israel.” The basic point is that maps allow a visualization of a territory, which in turn allows states to promote their control over that territory.


Nicaragua's President Daniel Ortega shows a map referring to the territorial dispute with Costa Rica during an address to the nation in Managua November 13, 2010. (Reuters)

While this point may have been true when map-making was largely in the hands of government officials (or companies closely aligned with governments), I was curious about what a company like Google might do with its powerful map website. Would this company create a map of the world that was open to all? A democratization of cartography? We have already seen one consequence of Google’s mapping with the recent Nicaraguan invasion of Costa Rica, which, I might add, is still ongoing. Whether Google Maps played a pivotal role, or whether it was just a rhetorical ploy of legitimation, is a moot point here. That it was invoked suggests that it was seen as a source of legitimation.

So is Google an independent entity, or does the United States maintain some degree of control over this powerful metaphor of state power?  I decided to find out by looking at a couple other areas on Google maps that the US has some interest in.

The first point of note for me with the map of Kabul was the amazing inaccuracy of the streets that were labelled. It was obvious that there was only a general correlation between the location and direction of the streets on the image versus on the overlaid map. The second point I noted was the lack of any detail on the streets. I zoomed out to all of Afghanistan to compare it with its neighbors, and found more interesting things.

On the Afghanistan/Pakistan border, we can see that Pakistan has more detail shown in its road structure, and names are shown in Arabic script as well as English. The difference in detail between the two countries becomes more obvious as one zooms in on the border, where we can see that the map in Pakistan closely aligns with the satellite imagery. On the Afghan side, however, the mapped road that crosses the border is a good mile off from the satellite image.

Was Google cooperating with the US government to purposefully distort the maps in Afghanistan for security reasons? To be clear, I am not averse to such regulation. I am just curious as to how states relate to this development of mass cartography. To test this hypothesis, I thought I’d look at two other areas of interest to the US. The first is Tehran:

Here we can see a stark difference from Kabul. The city as well as its streets are all labelled, many in Arabic script as well as English. There are also links to public transportation. The map lines up perfectly with the satellite image. What’s going on here? Is Google providing all this information because the US is not stopping it here? But what about Iran? Does its government want all of this detail of its capital available? Perhaps it does, but I thought I would look at a few other examples as well. The first is Lhasa, the site in Tibet of several protests by monks that have received publicity, most recently in 2008.

Here we can see what seems to be a combination of the first and second examples. At a distant level, we can see the name of the city; at a medium level, we can see more detail of the streets. But at a near level, all mapping is gone. Curiously, at a very near level, we see that some buildings have been shaded blue and some red. I have no explanation for this image shading, and a quick Google search did not turn up anything. We can guess about who had a say here over the level of detail available in the map, and I doubt it was the US.

The second example is of North Korea:

Perhaps it is no real surprise that there is absolutely no data on North Korea at all (other than its name), at any level.  But why not?  Surely it would be in the US interest to have maps of the roads there. Or even locations of the cities.

I think what these images (and my very quick analysis) tell us is that Google, far from being an independent entity, is still beholden to not one, but likely several governments in what information it can include on maps and what information it cannot.  Maps, it seems, are still a vital part of state-making, and one over which governments are not quick to relinquish control.

Conference season is now over, and I participated in quite a few this year. It began for me in June with three conferences/workshops in the UK. At Oxford, I attended the Oxford Intelligence Group‘s discussion of whether the UK needs an intelligence doctrine (notes available on their website). In London, UCL put on a workshop on current work building on that of the late Dame Mary Douglas.

I then attended the Science Democracy Network‘s joint meeting with the Royal Society at Kavli House, where I presented a paper on “Imaginaries of State Security”.  The main point of this paper was to outline three periods of history in (Western) international relations and how each period is characterized by a different set of assumptions about what should be considered militarily significant technology and how it should be controlled.

Back in the US, I presented a new case study on the BP oil spill that I am developing for next year’s Introduction to Technology and Society undergraduate course that I am helping run with Venky Narayanamurti.

My final conference was the Society for the Social Studies of Science meeting in Tokyo, where I presented a paper on “Technology control and imagined international orders,” which was a re-worked version of my SDN presentation.

Now back at Harvard, I plan on spending this semester getting several articles out and hopefully designing a couple workshops and grant applications.

It is my pleasure to finally announce that my thesis has been published on the Oxford Research Archive.

This is a redacted version of my thesis.  The redactions were made in line with requests from the British Government, and include primarily a description of the location of the Wassenaar Secretariat, the Arrangement’s information system, and the reproduction in Appendix G of the Guidelines for the Drafting of Lists.  The redactions will be valid for 30 years or until I can get permission from the Government to remove them, whichever is sooner.


International cooperation on export controls for technology is based on three assumptions: that it is possible to know against whom controls should be directed; to control the international transfer of technology; and to define the items to be controlled. These assumptions paint a very hierarchical framing of one of the central problems in export controls: dual-use technology. This hierarchical framing has been in continual contention with a competitive framing that views the problem as the marketability of technology. This thesis analyses historical and contemporary debates between these two framings of the problem of dual-use technology, focusing on the multilateral Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies. Using a framework of concepts from Science & Technology Studies and the theory of sociocultural viability, I analyse the Arrangement as a classification system, where political, economic, and social debates are codified in the lists of controlled items, which then structure future debates. How a technology is (not) defined, I argue, depends as much on the particular set of social relations in which the technology is enacted as on any tangible aspects the technology may have.

The hierarchical framing is currently hegemonic within Wassenaar, and I show how actors that express this framing use several strategies in resolving anomalies that arise concerning the classification of dual-use technology. These strategies have had mixed success, and I show how they have adequately resolved some cases (e.g. quantum cryptography), while other areas have proved much more difficult (e.g. focal plane arrays and computers). With the development of controls on intangible technology transfers, a third, egalitarian framing is arising, and I argue that initial steps have already been taken to incorporate this framing with the discourse on dual-use technology. However, the rise of this framing also calls into question the fundamental assumption of export controls that technology is excludable, and therefore definable.

To read the whole thesis (or just the parts that interest you!) head over to the Oxford Research Archive.

[UPDATE 9 July 2010: The ORA appears to be down right now.  I apologize for anyone trying to access my thesis.  I’ll let you know when it is back up.]

[UPDATE 12 July 2010: The ORA is back up]

I have modified the Oxford Maths LaTeX template to work for the social sciences. There are a lot of bells and whistles in this file, but I have tried to provide plenty of comments so that you can get up and running with minimal effort.

I would also recommend perusing the LaTeX resources on the Maths website for lots of LaTeX tutorials and information.
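For anyone who has not built a multi-file LaTeX document before, the structure such a template builds on looks roughly like the sketch below. To be clear, this is a generic report-class skeleton for illustration, not the Oxford template itself; the package choices and file names are my own assumptions, and the actual template bundles far more configuration.

```latex
% Generic multi-file thesis skeleton (illustrative only, NOT the
% Oxford template; package choices and file names are placeholders).
\documentclass[12pt,a4paper]{report}

\usepackage{graphicx}   % figures
\usepackage{natbib}     % author-year citations, common in the social sciences
\usepackage{setspace}   % line-spacing control
\onehalfspacing

\title{Thesis Title}
\author{Your Name}

\begin{document}

\maketitle
\begin{abstract}
  Abstract text goes here.
\end{abstract}
\tableofcontents

% Keeping each chapter in its own file speeds up editing, and
% \includeonly in the preamble lets you compile a single chapter.
\include{chapters/introduction}
\include{chapters/literature}
\include{chapters/conclusion}

\bibliographystyle{plainnat}
\bibliography{references}

\end{document}
```

The main design choice here is `\include` rather than `\input`: each chapter then starts on a new page and gets its own `.aux` file, which is what makes selective compilation with `\includeonly` possible.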

You can find my thesis template folder here: Oxford LaTeX thesis

If you don’t have LaTeX installed yet, head over to CTAN.

[Update 22 May 2011] As for editors, I highly recommend Textmate (for Mac)