In the late 1980s and early 1990s, Merck & Co. was at the height of an epic pharmaceutical boom. Annual sales doubled and profits tripled, most notably driven by sales of a congestive heart failure treatment that hit the billion-dollar mark just three years after its 1985 introduction. In 1993, Fortune magazine named Merck America’s “most admired” company—for the seventh year in a row. Despite the company’s unparalleled success, Merck was not immune to the common cognitive biases that can subtly influence everyday research decisions.

Merck employees, for example, were overly confident that they had the best way of bringing new products to market. “They believed so strongly in themselves and in their hunches about these drugs that they could get themselves to just totally pour themselves in and engage,” says Randy Case, who analyzed Merck’s management strategies for his 1993 doctoral dissertation in strategy.

But Merck was gambling in an arena with alarmingly bad odds. According to a recent study, since 2004 only about one in 10 new drugs moves beyond Phase I trials to receive FDA approval, and billions of dollars are regularly poured into products that will eventually fail. The company was unrealistically confident that it could beat the odds. That optimism, Case says, may have led Merck employees to make decisions that more risk-averse researchers would probably have avoided. “We’re all really loss averse, unbelievably so,” says Case, who now runs a management consulting firm called Case Management Group that specializes in organizational development. Nobody likes to lose, so “we’ve got to give our mind a way to deal with that if we’re going to take risk. Overoptimism is one way.”

This attitude is in stark contrast to the other company Case shadowed during his doctoral research—SmithKline Beecham (now GlaxoSmithKline). “They typically didn’t have a strong sense of confidence at all that what they were doing would succeed, or even that they were bringing first-rate expertise or management to bear on it,” he recalls. SmithKline’s more risk-averse approach to drug development was to distribute its resources across as many different projects and collaborators as possible—a strategy that has since been widely adopted by the industry.

Which business model ultimately proves more successful may vary from case to case, but Case’s research holds valuable lessons on the influence of human cognitive biases on decisions in biomedical research. In industry, such decisions can be both financially loaded and fraught with subjectivity—a precarious combination for any company, especially small biotechs. As in all areas of research, biases can alter the course of scientific discovery.

“Biases will affect how we assimilate information; biases affect actions, the things we choose to act upon; and biases affect our reactions to outcomes and often beliefs about the controllability of outcomes,” says Nigel Nicholson, a professor of organizational behavior at the London Business School. “It’s very easy to be a bad decision maker.”

Common biases

Fear of failure

“Risk is a bad word because the norm in science is that really innovative ideas are often wrong,” says Alan Leshner, CEO of the American Association for the Advancement of Science. But if researchers dropped projects at the first sign of trouble because they were afraid of taking risks, science might have missed some of its greatest discoveries. SmithKline’s blockbuster drug for gastric ulcers, Tagamet, for example, only succeeded after the research team pursued a series of unproven approaches, thanks to the fearlessness of its leader, James Black, who later became a Nobel laureate. “In science, failure is a very frequent phenomenon and there’s nothing pejorative about it,” Leshner says. “Failure’s part of the process.”

Not letting go

Overconfidence is not the solution to overcoming fear of failure, however. Though Merck’s self-assured attitude of the ‘80s had some large payoffs, its home-run frequency eventually waned, and the company’s blockbuster ideas panned out less and less often. Overconfidence can lead researchers to cling to scientific ideas even in the face of contrary evidence—the so-called confirmation bias—which results in wasted resources dedicated to dead-end projects. “The confirmation bias is one of the major enemies of science,” says Nicholson.

Calling it skill, when it’s really luck

People who fall victim to what is known as attributional bias wrongly assume that a streak of good luck (or misfortune) is caused by their actions. This kind of mindset can be “fatal,” says Nicholson. If, for example, a company or research group runs five new projects each year for three years without a significant achievement to show for it, does that mean its strategy is flawed? With an average success rate of just one in 10 for drug discovery projects, there is actually more than a 20 percent chance it was just a run of bad luck. Because there is so much chance involved in scientific research, “the attributional bias is one of the most dangerous,” Nicholson says. “[It’s wrong to think], ‘When you get a good result, it’s because you’re a good scientist. When you get a bad result, it’s because you’re a bad scientist.’” In the long run, it is the persevering scientist who is likely to advance, not the one riding a string of good luck.
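That 20 percent figure falls straight out of the numbers above: fifteen projects, each with a nine-in-10 chance of failing, all fail with probability 0.9 raised to the fifteenth power. A minimal check in Python:

```python
# Fifteen independent projects (five a year for three years), each with
# a 1-in-10 chance of success, all fail with probability 0.9 ** 15.
p_fail = 0.9              # per-project failure probability
n_projects = 5 * 3        # five projects a year for three years
p_all_fail = p_fail ** n_projects
print(f"{p_all_fail:.3f}")  # ~0.206, i.e., more than a 20 percent chance
```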

Uncertain about uncertainty

“Almost all the data used within drug discovery comes from a model of sorts, whether it’s from an in vitro model, a predictive computer model, or an animal model,” which saddles all the scientific data with a measure of uncertainty, says Edmund Champness, director and CSO of Optibrium, a company that offers software designed to help researchers digest complex data. Understanding the trustworthiness of data and how this should inform decision-making takes careful analysis—something people tend to struggle with. “Often this uncertainty is ignored and the data is filtered by selecting hard cutoffs…which could ultimately be quite misleading,” Champness says. For example, researchers rule out candidate drugs based on particular qualities, such as affinity for a target. However, if a compound’s affinity value is within the accepted range but its uncertainty is high, another candidate with a slightly less optimal affinity value but lower uncertainty may be a better choice.
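To make that concrete, here is a minimal sketch (not Optibrium’s method; it assumes normally distributed measurement error, and all names and numbers are invented) of ranking two candidates by the probability that their true affinity actually clears a cutoff:

```python
from statistics import NormalDist

# Illustrative only: compare two hypothetical candidates by the
# probability that their true binding affinity (pIC50) exceeds a
# cutoff, assuming normally distributed measurement error.
CUTOFF = 7.0  # minimum acceptable pIC50 (hypothetical)

def prob_passes(measured: float, uncertainty: float) -> float:
    """P(true value > CUTOFF) given a measurement and its std-dev uncertainty."""
    return 1.0 - NormalDist(mu=measured, sigma=uncertainty).cdf(CUTOFF)

# Candidate A: better nominal value, but from a noisy assay.
# Candidate B: slightly worse value, but a tight measurement.
print(f"A: {prob_passes(7.4, 1.0):.2f}")  # ~0.66
print(f"B: {prob_passes(7.2, 0.2):.2f}")  # ~0.84 -- the safer pick
```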

Tips for overcoming bias

Trim your variables

The best way to make good, unbiased decisions is to rely more heavily on the data. “The numbers matter,” Case says. Many scientific disciplines involve experiments with numerous variables, and statistical analyses can condense the data into a more manageable form. Applying a method known as principal components analysis to detailed morphological measurements of mastodon tusks, for example, paleontologist Kathlyn Smith at Georgia Southern University was able to combine 10 variables, such as tusk length and circumference, into just two components that explained most of the variation among individuals. “It really does reduce the complexity of having so many different variables,” she explains. “Instead of having 5 or 10 different plots for each of 21 mastodons, each one is going to have a score [for each of the two principal components] that’s going to be a combination of all the different measurements.”
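A minimal sketch of that kind of analysis using scikit-learn; the array shape mirrors Smith’s example (21 individuals by 10 measurements), but the data here are random placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Placeholder data: 21 individuals x 10 morphological measurements
# (e.g., tusk length, circumference); a real analysis would standardize
# measurements recorded on different scales first.
measurements = rng.normal(size=(21, 10))

pca = PCA(n_components=2)
scores = pca.fit_transform(measurements)  # one (PC1, PC2) score per individual

print(scores.shape)                   # (21, 2)
print(pca.explained_variance_ratio_)  # share of variation each component captures
```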

Make data easy on the eyes

When researchers struggle to digest spreadsheet upon spreadsheet of numbers, looking at the data in a more visual way can often help identify patterns. Computer programs such as Spotfire and Imaris can slice and dice the data and depict it visually in a number of different ways, allowing the user to manipulate certain variables and test assumptions. “It’s not just passively looking at the data, but actively exploring it,” says Andrew Chadwick of Tessella, a science technology company that advises biotech and pharma companies on how to make their research more efficient and effective. “I’m a strong believer in the power of the picture to help people see things they might have otherwise missed.”
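The same active-exploration idea can be sketched with ordinary open-source tools (a generic pandas/matplotlib illustration, not Spotfire or Imaris; the file and column names are hypothetical):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical screening data: plot one assay variable against another,
# colored by a third, to surface patterns a spreadsheet tends to hide.
df = pd.read_csv("screen_results.csv")  # assumed columns: potency, solubility, toxicity

fig, ax = plt.subplots()
sc = ax.scatter(df["potency"], df["solubility"], c=df["toxicity"], cmap="viridis")
ax.set_xlabel("potency")
ax.set_ylabel("solubility")
fig.colorbar(sc, label="toxicity")
plt.show()
```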

Put it to the computer

For extremely complex data, such as large-scale screens that measure drug compounds’ potency, solubility, and bioavailability, among other variables, computer software may be necessary to gain a full understanding of the results. Software programs such as Tripos’s Muse or Optibrium’s StarDrop are specifically aimed at helping researchers to make decisions during the drug discovery process, and even to identify some compounds that may have been overlooked. For example, while working with a client aiming to develop a drug for pain, drug discovery consultant Alan Naylor entered the structures of a handful of promising compounds into StarDrop. The program flagged a number of compounds with a high affinity for the hERG potassium channels in the heart, a binding propensity that is associated with potentially fatal arrhythmias in patients. With that information, Naylor could modify the structures in StarDrop into ones with a lower predicted risk of hERG complications. “It pointed us in the right direction,” says Naylor, who is on the scientific advisory board of Optibrium.
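Stripped of any real prediction model, the flag-then-redesign pattern looks something like this (a hypothetical sketch, not StarDrop’s actual interface; the threshold and scores are invented for illustration):

```python
# Hypothetical sketch (not StarDrop's API): flag compounds whose
# predicted hERG affinity (pIC50) suggests arrhythmia risk.
HERG_RISK_THRESHOLD = 6.0  # illustrative cutoff, not a validated value

compounds = {  # invented compound IDs and predicted hERG pIC50 values
    "cmpd-001": 6.8,
    "cmpd-002": 4.9,
    "cmpd-003": 6.2,
}

flagged = [cid for cid, pic50 in compounds.items()
           if pic50 >= HERG_RISK_THRESHOLD]
print(flagged)  # candidates to redesign for lower predicted hERG binding
```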

Diversify the decision makers

It can be difficult when others don’t agree with your ideas, but listening to constructive criticism is a healthy way to go about making complex decisions such as those involved in research. In industry, creating a leadership team diverse in background and experience can help overcome both the risk-averse and overconfident mindsets that can hamper good decision making. “The more homogeneous a group is, the more likely they are to reinforce each other’s bad decisions,” Nicholson says. “You need people who are able to challenge each other.” Indeed, management research has “shown again and again that groups with diverse people make better decisions than groups that are alike,” Case says.

Similarly, “the peer-review process is structured, at least in part, to address [confirmation bias],” says Mary Woolley, president and CEO of Research!America. Granting committees, for example, are composed of a diverse group of scientists and even some nonscientists, to help avoid funding projects that are simply trying to confirm the favorite hypothesis du jour. But even before you submit a proposal, you can get similar advice from your more immediate peers, she adds. “So people have a reality check, rather than just being there in the comfort of their own mind.”

Give it a go

Because of the lag time between a decision and its outcome, researchers don’t have the luxury of trial-and-error learning. Thus, scientists must learn from the successes and mistakes of others. Software programs that simulate the decision-making process can provide people with “training that accelerates their experience,” says Chadwick. Technology consultancy Tessella, for example, offers an interactive training module on its website that steps viewers through a series of real-life drug-screening scenarios, in which a user must choose the best strategy for a set of candidate compounds. The program provides immediate feedback on the consequences of each decision by estimating the impact of different research strategies on pipeline value, given the anticipated risks, costs, and potential payoffs of a particular project.
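The feedback such modules give rests on a familiar risk-adjusted calculation: each project’s expected value is its probability of success times its payoff, minus its cost. A minimal sketch with invented figures:

```python
# Minimal sketch of risk-adjusted pipeline value: for each project,
# expected value = P(success) * payoff - cost. All figures invented.
projects = [
    # (name, probability of success, payoff if successful, upfront cost)
    ("safe-followup",  0.50,  200e6, 60e6),
    ("novel-target",   0.10, 1500e6, 90e6),
    ("reformulation",  0.70,  100e6, 30e6),
]

for name, p, payoff, cost in projects:
    ev = p * payoff - cost
    print(f"{name}: expected value ${ev / 1e6:,.0f}M")

pipeline_value = sum(p * payoff - cost for _, p, payoff, cost in projects)
print(f"pipeline total: ${pipeline_value / 1e6:,.0f}M")
```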

Similarly, STRATPHARM from INSEAD, an international graduate business school and research institution, is a role-playing game that simulates the typical stages of drug development and marketing using industry-derived data, noting whether each project was a success or a failure and showing the associated earnings or losses. When both marketing and research colleagues play the game, it promotes discussion of real-world scenarios. What occurs is “a meeting of the minds between research and marketing people by looking at risk and potential outcomes over a spread of projects,” explains Chadwick, who took part in a STRATPHARM course when he was the head of R&D information systems at Boots Pharmaceuticals in the 1990s. “There was a faction [of Boots employees] wanting to invest mostly in safe but limited-potential [projects]. Others wanted to take greater risk. It was interesting how taking part in the game helped the two to understand each other’s perspectives better.”
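The safe-versus-risky tension Chadwick describes can also be made concrete with a quick simulation (invented numbers, not STRATPHARM’s model): two portfolios with the same expected payoff but very different spreads.

```python
import random

# Invented numbers, not STRATPHARM's model: compare a portfolio of safe,
# limited-potential projects against fewer high-risk, high-payoff ones.
def simulate(portfolio, trials=100_000):
    """Average total payoff of a portfolio of (P(success), payoff) projects."""
    total = 0.0
    for _ in range(trials):
        total += sum(payoff for p, payoff in portfolio if random.random() < p)
    return total / trials

safe  = [(0.6, 100)] * 5   # five projects: 60% odds, modest payoff
risky = [(0.1, 1500)] * 2  # two projects: 10% odds, blockbuster payoff

print(f"safe portfolio:  ~{simulate(safe):.0f}")   # expected ~300
print(f"risky portfolio: ~{simulate(risky):.0f}")  # expected ~300, far more variance
```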
