Extreme Makeover: Clinical Trials Edition
Megatrials have changed the face of medical practice, and in the process clinical trials themselves are getting an extreme makeover. It’s not that “gold standard” randomized controlled trials (RCTs) have become tarnished, but they have certainly become pricey, and much more difficult to conduct. As CSWN noted a year ago, clinical trials cost too much, recruit too few patients, and are causing too many investigators to simply give up.
Here’s why: as of October 23, 2013, there were 31,267 studies listed in the US National Institutes of Health (NIH) database ClinicalTrials.gov. According to Christopher O’Connor, MD, director of the Duke Heart Center at Duke University and editor-in-chief of JACC: Heart Failure, about half of those trials will never finish because of recruitment problems, while many of those that do finish will be underpowered to answer their original question. Meanwhile, costs have soared as demands on the gathered data have grown.
Consider the rocky road of the Duke University–sponsored Surgical Treatment for Ischemic Heart Failure (STICH) trial, which was the largest cardiovascular surgical trial ever conducted, with final data from 127 sites in 26 countries. Sounds impressive, no? For the investigators, though, it was all uphill: the average enrollment was about two patients per site per year; only 22 sites joined the study in its first year; and over the course of the trial, 44 sites were deactivated due to no enrollment—at a cost of about $10,000 per site. Overall, 20% of principal investigators (PIs) failed to enroll a single patient and 30% of PIs under-enrolled.
What was the lesson from this study? In a clinical trial sense, the experience suggests it will not be possible to answer the questions that need to be answered in the coming years if patient enrollment mirrors that seen in the STICH trial. It’s not just this single trial, either: 38% of PIs who participated in clinical trials between 2000 and 2005 did not return to conduct another clinical trial. They quit, said Dr. O’Connor, due to the burden of conducting clinical research.
Considering the number of new molecules, new technologies, new therapies, and new strategies that need to be tested, Dr. O’Connor says the need “far exceeds our ability to conduct these [trials]” given the current RCT model that is too cumbersome, too expensive, and too time-consuming. “If you’re in the field of heart failure, like myself, you might be able to run five, maybe 10, trials in an academic lifetime, but that may be it.” How is that a workable plan of attack given the explosion in new agents, new devices, and “hundreds and hundreds of clinical questions in heart failure that could be asked but never get advanced further than a hypothesis because of the cumbersome nature of conducting clinical trials?”
The Sky is (Not) Falling
Alternatives to RCTs are available, of course, but these have stirred contentious debate. Back in the summer of 2000, a pair of studies suggested that observational studies give results similar to those of RCTs. This led Stuart J. Pocock, PhD, and Diana R. Elbourne, PhD, to warn in an accompanying editorial in the New England Journal of Medicine, “If these claims lead to more observational studies of therapeutic interventions and fewer randomized, controlled trials, we see considerable dangers to clinical research and even to the wellbeing of patients.”1
They noted the “one crucial deficiency” of observational studies: the design is not an experimental one. “Each patient’s treatment is deliberately chosen rather than randomly assigned,” they wrote, “so there is an unavoidable risk of selection bias and of systematic differences in outcomes that are not due to the treatment itself. Only randomized treatment assignment can provide a reliably unbiased estimate of treatment effects.”
Those looking for alternative approaches to RCTs countered with the notable limitations of randomized trials:
- Highly selected populations due to exclusion criteria
- Often conducted at a select group of specialized study centers
- Reliance on surrogate endpoints
- Long time from planning to completion
- Industry sponsorship—leaving only studies with economic interest likely to be funded
Maybe clinical trials are due for a retooling. At the recent European Society of Cardiology meeting in Amsterdam, investigators presented the results of a randomized registry controlled trial (RRCT): the Thrombus Aspiration in ST-Elevation Myocardial Infarction in Scandinavia (TASTE) trial.2 Barbara Casadei, MD, DPhil, University of Oxford, United Kingdom, said of TASTE: “What an imaginative way to use registry data instead of just auditing practice—an approach that may be better and less expensive.”
The TASTE investigators created this large-scale trial to answer an important clinical question and carried it out at remarkably low cost by building on the platform of an already-existing high-quality observational registry: the Swedish Coronary Angiography and Angioplasty Registry (SCAAR). They leveraged clinical information that was already being gathered for the registry (thus avoiding filling out long case report forms) in order to quickly identify potential participants.
Instead of reams of pages, investigators had to answer two questions per patient:
1) Did the patient consent verbally?
2) Are all inclusion criteria met and no exclusion criteria present?
With that, the TASTE trial enrolled 7,244 patients in little time at a cost of less than $50 per randomized participant. The total cost, about $350,000, was less than the amount of a typical modular R01 grant (for research initiated by an individual investigator) from the NIH. Compare that to the HIGH-COST Study, a hypothetical trial designed by Christopher B. Granger, MD, director of the Cardiac Care Unit at Duke University Medical Center, to illustrate the costs of research at a recent Institute of Medicine (IOM) workshop. The HIGH-COST Study’s design included 20,000 patients across 1,000 sites, 2-year follow-up with 24 site visits, 60 case report form pages, and a study award of $10,000 per patient. The total cost? Roughly $400 million.
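The scale of the cost gap can be checked with quick back-of-the-envelope arithmetic, using only the figures quoted above (the ~$400 million HIGH-COST total is the workshop’s illustrative estimate, so the ratio is approximate):

```python
# Per-patient cost comparison using the figures quoted in the article.

taste_total = 350_000          # approximate total cost of TASTE (USD)
taste_patients = 7_244         # patients randomized in TASTE
taste_per_patient = taste_total / taste_patients

high_cost_total = 400_000_000  # rough total for the hypothetical HIGH-COST Study
high_cost_patients = 20_000
high_cost_per_patient = high_cost_total / high_cost_patients

print(round(taste_per_patient))        # 48 -- under the $50 per patient cited
print(round(high_cost_per_patient))    # 20000 -- USD per patient
print(round(high_cost_per_patient / taste_per_patient))  # 414 -- roughly a 400-fold gap
```

In other words, the registry-based design cut per-patient costs by more than two orders of magnitude relative to the hypothetical conventional megatrial.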
Sweden has not cornered the market on RRCTs, although it does have several more already planned or underway. At TCT 2013 in San Francisco, results were presented for the first registry-based randomized trial in the United States: the Study of Access Site for Enhancement of PCI for Women (SAFE-PCI for Women) trial. This trial from the Duke Clinical Research Institute (DCRI) was an RRCT that used the infrastructure of the ACC’s National Cardiovascular Data Registry® (NCDR), specifically the CathPCI Registry®. Because the registry already contains so much basic data per patient, the DCRI team estimates site coordinators saw a 65% reduction in workload per patient. Time to ramp up and close out the trial was cut nearly in half.
Dr. O’Connor said, “The patients are already coming in for these procedures and they’re being entered into the registry, so randomization can be done through a central randomization mechanism. In this particular trial, we used web-based randomization using a Duke proprietary randomization engine, making it a very convenient way to do the randomization.”
Very economical, too: such trials typically pay $5,000 to $10,000 for site participation. SAFE-PCI for Women paid $750 per patient to cover the costs of some data collection and follow-up. Investigators randomized 1,787 women at 60 US sites on a total budget of about $5 million. Because RRCTs are more economical, investigators can enroll large numbers of patients from a real-world population.
Added Dr. O’Connor: “If you are asking a question about a new biologic or a therapeutic, you probably won’t be able to do it in this fashion because the regulatory requirements for documentation of safety and efficacy are so strict that the cursory information collected in registries would not be enough. However, if you’re testing new strategies for old drugs, generic drugs, or generic devices, this is a good way to approach this.”
But perhaps all that glitters still is not a gold standard. An editorial accompanying the NEJM publication of the TASTE trial results called the RRCT a “disruptive technology.”3 The coauthors, Michael S. Lauer, MD, director of National Heart, Lung, and Blood Institute’s (NHLBI) division of cardiovascular sciences, and Ralph B. D’Agostino, Sr., PhD, chairman of the mathematics and statistics department at Boston University and co-PI of the Framingham Heart Study, wrote that the randomized registry trial represents a “disruptive technology, a technology that transforms existing standards, procedures, and cost structures.”
They raised a number of fundamental questions that must be addressed if today’s clinical research enterprise is to be transformed by registry-based randomized trials, or other trials with highly efficient designs.
- Will registry data (or data coming from other digital sources, such as electronic health records) be of high enough quality?
- Will too many data fields be missing?
- How will we balance efficacy versus effectiveness?
- Can we transition single registries from efficacy to effectiveness, making it possible to assess external validity much more expeditiously than we do now?
- How will we approach concerns about privacy and informed consent (particularly in the context of trials that compare acceptable standards of care and use cluster-randomization methods)?
- Is blinding possible?
- How will we standardize and adjudicate certain outcomes?
- Can we assure representativeness, given that even within a registry there may be systematic differences between patients who are and are not eligible for randomization or between those who do or do not consent?
One way to tackle some of the issues is through a standardization of cardiovascular data for research, which was proposed recently. It’s part of the National Cardiovascular Research Infrastructure (NCRI) project that was initiated to create a model framework for standard data exchange in all clinical research, clinical registry, and patient care environments, including all electronic health records.
In 2009, the NCRI project was initiated by the DCRI and the ACCF. The four goals of NCRI are to:
1) replace the repetitive assembly and disassembly of short-lived clinical investigator networks with a stable and enduring operational infrastructure for clinical research;
2) standardize and harmonize cardiovascular data to achieve complete syntactic and semantic interoperability throughout the network;
3) coordinate and facilitate the transfer of selected, standardized cardiovascular data into existing and future national registries; and
4) develop an enduring library of content for education and training of clinical investigators and site personnel.
This approach certainly worked well with the SAFE-PCI study, according to William (Britt) Barham, the clinical trials project leader, who said NCRI can be used to identify sites that are a good fit for a particular trial by evaluating metrics on an institutional level. For example, unlike the STICH trial experience, SAFE-PCI had a high level of site participation, with successful patient recruitment at 60 of the 62 sites enrolled in the study. This, he said, is indicative of good site selection. “We want more sites actively participating and enrolling patients in these clinical studies,” said Mr. Barham. “This is not merely a software or database type of build; it’s a new way of thinking about conducting research in the United States.”
In another twist, the typical operational model for such studies holds that 80% of patients are enrolled at 20% of the sites. For this project, that metric was flipped, resulting in much more evenly distributed enrollment across sites: 90% of the sites enrolled at least one patient, and more than 70% of sites enrolled at least 10 patients. The study also included several sites that had never before participated in clinical research.4
“Remember the Instamatic”
An odd rallying cry, perhaps, but a year before he co-wrote the NEJM editorial, Dr. Lauer addressed critics of what the IOM calls Large Simple Trials, or LSTs (see SIDEBAR). Such trials use routinely collected data, but he argued that no one should disparage them as inferior in detail and quality to traditional RCTs.
At an IOM workshop on LSTs, he likened such a view to that of Eastman Kodak upon seeing the first digital camera images. The photos were substantially inferior to film images, which made it easy for Kodak to dismiss the importance of digital camera technology. Oddly enough, it was Steven Sasson, a Kodak engineer, who invented the digital camera. So, even though the technology was developed by Kodak, in a moment of hubris the company decided not to exploit its advantage and stuck instead to its film-based business model.
This was, after all, the height of the Kodak Instamatic, the ubiquitous family camera from the 1960s into the 1980s. For goodness sake, who needed a new product when Kodak had 85% of the camera market and 90% of the film market in 1976? Turns out they did. As digital became the predominant camera format, Kodak eventually produced digital cameras, but the effort was too little, too late to halt the company’s decline. Dr. Lauer said remember the Kodak Instamatic before dismissing the LST model at its current stage of development.
1. Pocock SJ, Elbourne DR. N Engl J Med. 2000;342:1907-9.
2. Fröbert O, Lagerqvist B, Olivecrona GK, et al. N Engl J Med. 2013;369:1587-97.
3. Lauer MS, D’Agostino RB Sr. N Engl J Med. 2013;369:1579-81.
4. Anderson H, Weintraub WS, Radford MJ, et al. J Am Coll Cardiol. 2013;61:1835-46.