Pitch Perfect

Academic detailing has the potential to significantly improve clinical practice.

Jan 1, 2012
Josephine Johnston


In 2009, the US government made a major investment in the kind of research that pharmaceutical and medical device companies loathe. Along with tax relief, extended unemployment benefits, and money for roads, bridges and schools, the American Recovery and Reinvestment Act—otherwise known as the Stimulus Bill or the Recovery Act—directed $150 billion in new funds to health care, of which $1.1 billion was to be used for comparative effectiveness research. Because such research isn’t much use if clinicians don’t know about it, part of that money is being used for dissemination of those results, including by a practice known as “academic detailing.”

Although the precise definition of comparative effectiveness research is a matter of some debate, as its name suggests it compares the efficacy, safety, and sometimes the cost of available treatment options for a given medical condition. Old drugs are compared with new drugs, surgical treatments with nonsurgical options, wait-and-see approaches with radical interventions. This research is quite unlike the kinds of clinical trials that drug and device companies conduct or sponsor in order to obtain FDA approval for their products. For one thing, it compares a range of available treatment options, instead of just pitting the company’s drug or device against a placebo. In addition, comparative effectiveness research often uses a more diverse patient population in a broader range of clinical contexts to get closer to “real world” use.

Federal money has been used for this kind of research for some time. In 1997, the Agency for Healthcare Research and Quality (AHRQ) began providing funding mainly to academic institutions for the development of Evidence-based Practice Center reports, which would review the available literature on a wide range of clinical topics, beginning with the diagnosis of sleep apnea, rehabilitation for traumatic brain injury, and the pharmacological treatment of alcoholism. In 2003, the Medicare Modernization Act authorized AHRQ to extend this work to include the funding of new research, as well as dissemination of its results to a variety of audiences. The stimulus money was a considerable boost to these efforts—in 2009, the appropriation to AHRQ for comparative effectiveness research was $300 million, ten times what it had received for those activities the year before.

It’s easy to see why drug and device manufacturers are wary of comparative effectiveness research. These studies can sometimes conclude that older drugs, some available now as generics, are better than newer ones, or that a nonmedical therapy is better than implantation of an expensive device. But they are not the only ones concerned. Some politicians and commentators fear that the research results will be used by payers, including Medicare and Medicaid, to mandate particular treatments or to deny coverage. That fear is not completely unfounded: comparative effectiveness research is among the kinds of data that the UK’s National Institute for Health and Clinical Excellence (or “NICE”) relies on when drafting its clinical guidelines for the National Health Service—guidelines that significantly affect the care choices available to UK citizens.

Nevertheless, many clinicians, researchers, and policy makers believe that comparative effectiveness research is both sorely needed and likely to be instrumental in improving the effectiveness and value of health care and reducing unwarranted practice variation. Of course, just conducting the research isn’t enough—the results have to be disseminated to those who need them most. Like much scientific research, comparative effectiveness studies can make dry reading. And it just isn’t realistic to expect busy clinicians, particularly nonspecialists and those working outside academic medical centers, to keep on top of the latest developments. Some of the stimulus money, therefore, was earmarked for academic detailing.

The birth of academic detailing

Developed more than 30 years ago by physician Jerry Avorn and colleagues at Harvard Medical School, academic detailing takes a leaf from the pharmaceutical industry’s marketing playbook. For more than half a century, drug companies have spent a considerable amount of money marketing directly to physicians, including by sending sales representatives to visit doctors in their offices. In 1958, the industry estimated that its “detail men” made as many as 20 million calls. By 2005, approximately 100,000 individuals were employed as drug reps. Today, companies continue to use specially trained sales reps (although the numbers are reportedly down a little), but they also employ physicians and “medical science liaisons” (individuals with advanced scientific degrees) to meet with physicians about treatments.

These in-person calls have proved an extremely effective way of increasing sales, in part because they capitalize on the power of interpersonal relationships. One study conducted in the mid-1960s reported that “45 percent of the physicians indicated that a ‘good’ detail man was more like a friend than a salesman.” Data collected over the past two decades show that visits by sales reps have an immediate impact. One study of Danish primary care physicians reported that three visits by drug reps “markedly increased the market share of the promoted drug”—preference for the promoted drug rose from 15 percent before the first visit to 28 percent after the third. Yet, as the US Institute of Medicine noted in a 2009 report, “Sales representatives are, however, tasked with promoting their company’s products and not with providing a balanced assessment of the evidence for the use of different clinical options, including nonpharmacologic approaches.”

Back in 1979, impressed by the effectiveness of drug reps, Avorn wondered: “What if we could take the very sophisticated communications and behavior-change tools that the drug companies deploy so effectively, but instead use them to give doctors the latest and best facts about drugs’ comparative efficacy, safety, and cost-effectiveness?” He and his colleagues tested the idea of the “un-sales rep”—initially training a group of pharmacists in four states to visit physicians and educate them on several common prescribing topics. They showed that the idea worked in a randomized trial involving more than 400 doctors: 92 percent of those offered a face-to-face meeting accepted it, and those who received academic detailing significantly reduced their use of three target drug groups that Avorn and his colleagues had identified as overprescribed and/or not cost-effective: cerebral and peripheral vasodilators, an antibiotic, and a painkiller.

In the decades since Avorn and colleagues first tested and refined the idea, academic detailing programs have been established around the U.S. and beyond, many funded by state or national governments. With the recent stimulus money, AHRQ created the National Resource Center for Academic Detailing, whose goals include supporting the establishment and improvement of academic detailing programs around the country. The initial focus has been on diabetes care, the use of antipsychotic drugs in dementia, and pain control in arthritis. The education is being targeted mainly to primary care physicians.

Many of the criticisms that have been fired at comparative effectiveness research are also directed at academic detailing. In particular, critics, including representatives of the pharmaceutical industry, have argued that academic detailers are not “disinterested” because the programs aim to reduce costs, mainly by increasing the use of generic medications, potentially at the expense of patient care. Proponents point out that the content of these programs is controlled by the clinical faculty who run them, not by the public-health authorities that support them. These issues need to be viewed against a backdrop of widespread concern that the marketing and education efforts of pharmaceutical and device companies have contributed to inappropriate clinical practice, and of a growing acknowledgment that we simply cannot justify spending ever larger sums of money on medical care that we know too often varies in its quality.

Commercial entities still far outspend even the most ambitious academic detailing programs, and their reps talk to more physicians about more treatments for a far greater range of conditions than academic detailing programs are targeting. The problem for the companies is that academic detailers are providing the kind of analysis that is quite a bit closer to the calculus physicians are supposed to be undertaking each time they write a script, order a test, or recommend an intervention: which treatments are out there and what do we know about how effective and safe they are in various patient populations? It has never been the job of drug reps to do this. It is good news that professionals without a particular product to push have stepped into the breach.

Josephine Johnston is a research scholar at The Hastings Center, an independent bioethics research institute in Garrison, NY. Her previous Thought Experiment article, “America’s Stem Cell Mess,” appeared in the October 2010 issue of The Scientist.

Suggested Reading
Examples of academic detailing programs

Jerry Avorn and Michael Fisher, “‘Bench to behavior’: Translating comparative effectiveness research into improved clinical practice,” Health Affairs 2010, 29: 1891-1900, http://content.healthaffairs.org/content/29/10/1891.abstract.

Mike Mitka, “New physician education initiatives seek to remove the devil from the detailing,” JAMA 2011, 306: 1187-88.

Bernard Lo and Marilyn J. Field, eds., Conflict of Interest in Medical Research, Education, and Practice, Washington, DC: The National Academies Press, 2009, http://www.iom.edu/Reports/2009/Conflict-of-Interest-in-Medical-Research-Education-and-Practice.aspx.