Think of a question or technology of interest to neuroscience and there is an application with military or counterintelligence potential. Brain-machine interfacing can make drones (unmanned vehicles) more efficient; anti-sleep medication could prevent combat errors by men and women at war; calmative aerosols could defuse a tense hostage situation; and new imaging devices could improve detection of deception, while a boost of certain natural neurohormones could aid in interrogation.

Earlier this year I published an update of my 2006 book Mind Wars: Brain Research and National Defense, in which I had described the implications of neuroscience for military and counterintelligence technologies. The publication of the new version, Mind Wars: Brain Science and the Military in the 21st Century, was more than justified both by the favorable reception the first edition enjoyed (surprisingly, it is still the only book on the topic) and by the subsequent burst of developments at the intersection of brain science and national security.

Among neuroscientists there has been a lively discussion about scientists’ responsibility for the ultimate purposes to which their work is put. Of course, this quandary is not limited to neuroscientists, but spans a wide range of disciplines, highlighted by the recent lab-evolved strains of the H5N1 avian flu virus that were transmissible between ferrets. (See “Deliberating Over Danger,” The Scientist, April 2012.) An analogy to the early atomic physicists immediately comes to mind: the theoretical and experimental work of leading physics researchers led to the development of one of the most destructive weapons mankind has ever known. But unlike the bomb itself, which was developed for an explicit purpose under the pressures of national security, discoveries in neuroscience and other life science fields are plagued by a more challenging philosophical problem: how can we hold an individual scientist morally responsible for the ultimate applications of her or his work when such applications are often difficult to predict?

A classic example is Einstein’s 1905 paper on special relativity, which formed the theoretical basis for the subsequent work on the atomic bomb. Should Einstein therefore be held responsible for the devastation at Hiroshima and Nagasaki? Although the distinction between “basic” and “applied” science is fuzzy at the edges, drawing such a line in the sand may be necessary to determine direct ethical responsibility for the application of one’s work. (His later regrets notwithstanding, Einstein himself was prepared to support the bomb project for fear that Germany would get there first.)

Some within the neuroscience community have urged that their colleagues simply pledge not to work on projects that could be of interest to the national security establishment. But one practical problem with this proposal is that researchers don’t always know the precise source, let alone the ultimate purpose, of the funds associated with a request for proposals. In the 1950s, for example, the CIA created a front organization called the Society for the Study of Human Ecology that funded hallucinogen research and other “brainwashing” experiments. Researchers funded by this Society never knew the CIA was behind it.

And then there is the obvious fact that not all neuroscientists will agree that doing work for a national security agency poses ethical problems. Some may consider it an act of patriotism—for example, some scientists may view their work with such agencies as central to the fight against theocratic regimes—or simply of professional self-interest. Finally, for neuroscientists worried about the unpredictable uses to which their work might be put, the only practical recourse may be to refuse funding from certain government agencies altogether.

All this might seem like a lot of rationalization were it not for the fact that I am a philosopher and historian, not a neuroscientist, so in that sense I have no dog in this hunt. There is, however, one related effort to which I can see no reasonable objection: neuroscientists’ vigorous involvement in the formulation of international conventions to establish limits on the use of neuroscience-based innovations in international conflict. Some drugs are already covered by the treaties against chemical and biological weapons, and international human rights law generically prohibits many practices that exploit knowledge of the brain and central nervous system. But the convergence of neuroscience with cyber- and drone technology, for example, raises new questions that have not been previously contemplated.

Neuroscience is moving along swiftly, and experimental applications are proceeding at a dizzying pace. But it can take decades for sovereign states to reach binding multilateral agreements. Given the consequences of having been slow to implement rules governing the use of atomic weapons, future generations might well wonder: What are we waiting for now?

Jonathan D. Moreno is the David and Lyn Silfen University Professor at the University of Pennsylvania and author of Mind Wars: Brain Science and the Military in the 21st Century.
 
