July 11, 2018 at 5:00 am ET
Over the last 20 years, two keywords in drug development have been “speed” and “innovation.” Going forward, “the patient voice,” “data quality” and “evidence generation” must be added to that list.
There is a raging debate over the level of evidence expected before a treatment is first introduced to patients. Some argue for approval on less data followed by post-approval follow-up; others for more adaptive clinical trial designs and endpoint modification driven by patient-focused drug development and the use of real-world evidence. The transition in the regulatory framework is happening in front of our eyes.
The uniqueness of the debate is that it is not entirely over the clinical trial design or robustness of the clinical data, or other traditional criteria — it’s about failing to approve a drug that actually works. Is this a new regulatory standard? What does it mean? How is it defined and measured? In the past, regulators have been accused of embracing ambiguity to allow them flexibility to deny approvals — can we now expect similar regulatory ambiguity that results in unanticipated approvals?
This evolving approach to data, uncertainty and evidence generation is evident in the Food and Drug Administration’s actions relative to the Merck & Co. product Keytruda. In May 2017, the FDA issued a press release announcing that it had “… granted accelerated approval to a treatment for patients whose cancers have a specific genetic feature (biomarker). This is the first time the agency has approved a cancer treatment based on a common biomarker rather than the location in the body where the tumor originated.”
Is the FDA approving drugs too fast or not fast enough? Is it demanding too much data or not enough? There isn’t any dearth of commentary supporting either proposition. There is, however, no evidence to support the sound bite that the FDA is approving “everything,” or that every product that requests an expedited pathway receives it, or that “all” those that do receive an expedited pathway designation get approved, or that every product that does reach the market via an expedited approval is in some way more dangerous than other medicines.
In short – don’t believe the hype. Some particulars:
— An analysis of every product (364) requesting a Breakthrough Therapy designation from July 2012 to June 2016 shows that the Center for Drug Evaluation and Research granted 133 (37 percent) of those requests, denied 182 (50 percent), and the sponsor withdrew its request 49 times (13 percent) before the agency made a decision. Hardly a regulatory carte blanche.
— In 2013, the first full year of the Breakthrough designation, the FDA approved three new drugs under it, followed by 14 in 2014 and nine in 2015. Hardly an onslaught of new medicines.
— Among 22 drugs with 24 indications granted accelerated approval by the FDA from 2009-2013, efficacy was often confirmed in post-approval trials a minimum of three years after approval, although confirmatory trials and preapproval trials had similar design elements, including reliance on surrogate measures as outcomes.
Unsafe? Not effective? Dangerous? A new “wild west” FDA? No.
In “Balancing the Need for Access With the Imperative for Empirical Evidence of Benefit and Risk,” a recent editorial in the Journal of the American Medical Association, former FDA Commissioner Robert Califf wrote, “substantial progress in balancing safety with access to effective therapies will come from systemic changes in the ecosystem rather than incremental modifications made by imposing more severe demands on individual products.”
New ways of understanding and interpreting how data evolves over time upend the traditional frame of regulatory stasis and introduce the opportunity for a much more dynamic understanding of a medicine’s benefits and risks that extends far beyond what is understood at approval. Such a mindset opens up the opportunity to embrace a more 21st-century approach to the regulation of health care technologies.
New science, and the strategies and tactics to incorporate it into regulatory thinking, does not mean a free pass for bad science. What it does mean is that the FDA (and from the highest levels) is rightly embracing regulatory dimensionality, a combination of scrupulous review processes and pragmatism. Together with a recalibrated sense of regulatory velocity (speed + accuracy + public health need), it’s the agency’s next step toward a more entrepreneurial regulatory attitude — the FDA as innovation accelerator.
Policies and regulations that seek to limit risk are often shaped by the immediate fear of sensational events. This perspective is commonly called “the precautionary principle,” which in various forms asserts that unless innovators can demonstrate that a new technology is risk-free, it should not be allowed into the marketplace.
Moreover, under that principle, any product that could possibly be dangerous at any level should be strictly and severely regulated. In reality, this punishes patients and stymies innovation. No medicine is 100 percent safe.
Nobel Prize laureate Joshua Lederberg once observed that the failure of regulatory, legal and political institutions to integrate scientific advances into risk selection and assessment was the most important barrier to improved public health.
According to Lederberg, “the precedents affecting the long-term rationale of social policy will be set, not on the basis of well-debated principles, but on the accidents of the first advertised examples.”
When it comes to regulatory science and the broader issues facing the FDA, there will always be significant gradations of nuance and ambiguity. “Predictability” is a trail that regulator and regulated must blaze together, sometimes heroically and at other times with greater caution.
Peter J. Pitts, a former FDA associate commissioner, is president of the Center for Medicine in the Public Interest.