Every scientist has done an experiment that produced an unexpected result. Sometimes this is a pleasant surprise, and leads to new research projects and discoveries. But most of the time, to a researcher on a quest for reproducibility, the unexpected is unwelcome, leading to wasted time and reagents as the scientist struggles to repeat the experiment yet again. Similarly, many scientists have struggled to reproduce the results of others, often failing because of missing or undocumented, yet essential, steps in protocols. Defining the precise moment when your experiment went wrong (or right) is very tricky. Anecdotes abound: the story is told of a student whose cell cultures regularly died for no apparent reason, until she discovered that the cleaner was swabbing the incubator with bleach once a week.

Today, there is an increasing requirement to provide very thorough supporting data when publishing a paper. And many people, scientists...

This means that small, unexpected, yet critical variations in conditions—as trivial, perhaps, as sunlight striking a photosensitive reaction, or someone leaving the door of an incubator open too long—go unnoticed and unrecorded. What if we could capture all those moments, all those variations; record every input and output of every experiment?

Better science through virtual monitoring

We propose creating an environment in the laboratory where everything is indeed captured. Not simply what goes directly into an experiment, but also people's movements, messages received, and conversations engaged in; the status of other equipment; temperatures in different places in the laboratory; light and noise levels: a huge amount of low-level information that machines are quite capable of monitoring.

Many types of sensors cost less than $20, and off-the-shelf components can be used to measure parameters such as acceleration, temperature, and the presence and amounts of volatile compounds in the air. We don't know, for any given experiment, which factors might be important for reproducibility, but we can, even now, create a computerized record of everything that happens in a lab. From such a record it is relatively straightforward to extract what is significant, whether that is something out of the ordinary or, conversely, something that happens repeatedly.
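To make this concrete, here is a minimal sketch, in Python, of the kind of low-level logging involved: it records timestamped temperature readings and flags values that deviate sharply from the recent mean. The function read_temperature_c() is a stand-in for whatever driver a real sensor board would expose; here it merely simulates an incubator.

```python
import random
import statistics
import time
from datetime import datetime, timezone

def read_temperature_c() -> float:
    """Stand-in for a real sensor driver: simulates an incubator
    near 37 C, with an occasional unexplained excursion."""
    excursion = 1.5 if random.random() < 0.02 else 0.0
    return random.gauss(37.0, 0.2) + excursion

def monitor(samples: int = 200, interval_s: float = 0.05, window: int = 20) -> None:
    """Log timestamped readings; flag any reading more than three
    standard deviations from the mean of the last `window` readings."""
    history: list[float] = []
    for _ in range(samples):
        reading = read_temperature_c()
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        print(f"{stamp}  {reading:.2f} C")  # in practice, write to the lab record
        recent = history[-window:]
        if len(recent) == window:
            mean, sd = statistics.mean(recent), statistics.stdev(recent)
            if sd > 0 and abs(reading - mean) > 3 * sd:
                print(f"{stamp}  ANOMALY: {reading:.2f} C vs recent mean {mean:.2f} C")
        history.append(reading)
        time.sleep(interval_s)

if __name__ == "__main__":
    monitor()
```

The same loop applies unchanged to light, noise, or accelerometer readings; the interesting engineering lies in deciding what counts as "significant" for a given experiment.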

Much of this will be recording, or looking for, errors and defects: in the purity of solvents, for example, or in calibration dates for different lab apparatuses. Balances and other measuring devices can be hooked up to computers; equipment and reagent bottles can be labeled with radio-frequency ID tags to track where they are (to check that they have been stored at the correct temperature, for example). One can even speak a chemical name and have the computer do basic voice recognition and transcription; the transcribed name, passed to a name-to-structure tool such as OPSIN, will identify the molecule and its potential reactions.
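As a sketch of the last step in that chain, the snippet below hands a transcribed name to OPSIN through py2opsin, a third-party Python wrapper that shells out to the OPSIN jar (so a Java runtime must be installed). The hard-coded name stands in for the output of an upstream speech-to-text step, which is assumed rather than shown.

```python
# pip install py2opsin   (wraps the OPSIN jar; requires a Java runtime)
from py2opsin import py2opsin

# In the scenario described above, this string would arrive from a
# speech-to-text step; it is hard-coded here for illustration.
spoken_name = "2,4,6-trinitrotoluene"

# py2opsin returns a SMILES string, or a falsy value if OPSIN
# cannot interpret the name.
smiles = py2opsin(spoken_name)
if smiles:
    print(f"{spoken_name} -> {smiles}")
else:
    print(f"OPSIN could not parse: {spoken_name}")
```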

The increasing use of automation in mainstream science dovetails with this idea. Small labs, in particular, are likely to have off-the-shelf products, such as labeled and tagged 96-well plates, that facilitate the recording of experimental details. Our lab has already run a six-month Rapid Innovation project sponsored by JISC (formerly the UK's Joint Information Systems Committee) to investigate ways of monitoring and collecting experimental data. We experimented with ways in which a Microsoft Kinect (a video camera coupled with an infrared depth sensor that connects to an Xbox 360 and monitors objects and movements in front of it in 3D) could be used to control equipment in a laboratory setting (to be published in a special issue of J. Cheminformatics).

Big Brother in the lab?

Equipping a laboratory with a variety of sensors to monitor activity would involve an investment. But just as production lines decrease the cost of individual components, so moving to an engineering scale would make this affordable. In the design of new laboratory buildings, sensors and recording equipment would be integral, driving the cost down still further. The productivity gained by not having to repeat experiments unnecessarily or struggle with technical issues, and by a greater assurance of reproducibility, will allow more science to be done.

There are other social issues to consider as well: wouldn't audio and video monitoring breach privacy rights? Close-up monitoring of a reaction vessel is unlikely to include video footage of the experimenter. Individual scientists will, of course, be associated with particular experiments, and their movements and interactions may be tagged as part of the experimental records, but this is simply an extension of what already happens with paper notebooks. A wider video view of activity within a laboratory would, however, capture personally identifiable and possibly sensitive information about an experimenter, and we do need to address these concerns. In our pilot project we did not deploy video monitoring, but we envisage that the video records of an experiment would be treated like any other experimental data; that is, their release would usually be under the control of the scientist.

Some researchers may be happy with working completely openly, whereas others might prefer all information to remain private until it is time to publish. One could also imagine that the more paranoid scientist might welcome video surveillance: “You pipetted from my RNase-free Tris!” There are potential applications for fraud investigations as well. Additionally, some scientific activities are done within the boundaries of contracts with partner organizations, which for intellectual property reasons may or may not wish to release any data obtained. A successful data management tool must provide fine-grained control of access to information, and place this control in the hands of the scientist in the lab.
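To illustrate what fine-grained control in the hands of the scientist might mean in practice, here is a minimal Python sketch of per-record access control. The record names, users, and two-level visibility policy are hypothetical illustrations, not a description of any existing tool.

```python
from dataclasses import dataclass, field
from enum import Enum

class Visibility(Enum):
    PRIVATE = "private"  # owner and named collaborators only
    OPEN = "open"        # released: anyone may read

@dataclass
class ExperimentRecord:
    record_id: str
    owner: str
    visibility: Visibility = Visibility.PRIVATE
    collaborators: set[str] = field(default_factory=set)

    def can_read(self, user: str) -> bool:
        """Collaborators may read private records; open records are public."""
        if self.visibility is Visibility.OPEN:
            return True
        return user == self.owner or user in self.collaborators

    def release(self, user: str) -> None:
        """Only the owning scientist may release a record."""
        if user != self.owner:
            raise PermissionError(f"{user} does not own {self.record_id}")
        self.visibility = Visibility.OPEN

# A video record stays private until its owner chooses to release it.
rec = ExperimentRecord("video-run-7", owner="alice")
rec.collaborators.add("bob")
assert rec.can_read("bob") and not rec.can_read("carol")
rec.release("alice")
assert rec.can_read("carol")
```

Richer policies, such as time-limited embargoes for contract work, would slot into the same structure; the essential design choice is that release is an action taken by the record's owner, not a default.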

Does the use of computers and sensors to help the scientist at the bench raise the specter of Big Brother? As long as control remains in the hands of the experimenter, our project and its eventual implementation simply offer a better way of recording what should already be recorded for the sake of good science. The possibilities for remote observation, or indeed control, of an experiment are also intriguing.

Adopting new technologies is always a challenge, given the independent nature of scientists. Many researchers still use paper notebooks, and don’t have computers at the bench (sometimes for good safety reasons!); what’s more, no academic credit is given for creating a new laboratory environment—it’s not easy to publish such advances in mainstream journals. However, if the use of electronic tools to monitor laboratory science proves advantageous to the work of those who embrace it, then the natural desire to achieve the best possible science will ensure the widespread adoption of new technologies over time.

Our trial project was successful, and is likely to be adopted and promoted by JISC. We do need to check real examples of usage for unforeseen issues, but given that we have all the necessary technology already, and on the assumption that further investment is forthcoming, we predict that the prototype of such a laboratory could be built within two years.

Peter Murray-Rust is at the Unilever Centre for Molecular Science Informatics, part of the chemistry department at the University of Cambridge. Brian Brooks, the director of BIB Consultancy, worked with the Murray-Rust lab on the trial project described in this article.
