FDA Regulatory and Compliance Monthly Recap — August 2016

FDA issues draft guidance on use of real-world data in medical device decision-making

The Food and Drug Administration’s draft guidance provides an overview of how the agency will determine the quality and reliability of real-world data for use in regulatory decision-making for medical devices. It describes the factors the FDA will consider in assessing the data, as well as the instances in which an investigational device exemption may be required.

The FDA published draft guidance detailing how it assesses real-world data (RWD) to ascertain whether it may be sufficiently relevant and reliable to generate the types of real-world evidence (RWE) that can be used in regulatory decision-making for medical devices. Specifically, it outlines circumstances in which RWD can be used in different FDA contexts based on existing evidentiary standards for regulatory decision-making. It also explains when an investigational device exemption (IDE) may be needed to prospectively collect and use RWD to determine the safety and efficacy of a device.

The guidance defines RWD as data collected from sources beyond traditional clinical trials, including large simple trials, pragmatic clinical trials, prospective observational studies, case reports and electronic health records, among others. RWE is defined as evidence derived from the accumulation and analysis of RWD elements. The guidance acknowledges that RWD may be adequate in certain instances to help inform or enhance the FDA’s understanding of the benefit-risk profile of devices at various points in their life cycles and may provide new insight into the performance of these devices.

The FDA points out that not all RWD are collected and maintained in a manner that provides sufficient reliability, so the use of RWD for regulatory purposes will be considered based on criteria that assess the data’s relevance and reliability. The quality threshold will be determined based on the specific regulatory use of the evidence. The draft guidance states that the FDA doesn’t endorse one type of RWD over another, and it indicates that RWD sources should be selected based on their ability to address particular regulatory questions. The agency will assess RWD based on:

  • Relevance. The agency will examine whether the individual data elements are sufficient to meet a regulatory purpose and whether they are reliable, complete, consistent and accurate, and include all essential elements to assess the performance of a device in the applied regulatory context.
  • Reliability. To ensure reliability, sponsors must have in place a prospective protocol that outlines the data elements to be collected, data element definitions, and methods for aggregation and documentation, as well as the time frame for data utility and outcome assessments.
  • Data assurance — quality control. The FDA will evaluate the data quality assurance plan and procedures developed for the data source itself, taking into consideration factors such as data quality, adherence to source verification procedures, completeness, data consistency and the use of data quality audit programs, among others. (A hypothetical sketch of such checks follows this list.)
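
The guidance does not prescribe particular tooling, but the relevance and reliability factors above lend themselves to simple, documented data-quality checks. The sketch below is a hypothetical illustration, not part of the guidance: the column names, study window and registry structure are assumptions chosen only to show how completeness and consistency might be quantified for a device registry extract.

```python
# Hypothetical data-quality checks on a real-world data extract (e.g., a
# device registry).  All field names and values are illustrative
# assumptions, not terms from the FDA draft guidance.
import pandas as pd

records = pd.DataFrame({
    "patient_id":   [1, 2, 3, 4, 5],
    "device_model": ["A1", "A1", None, "A1", "A1"],
    "implant_date": ["2016-01-10", "2016-02-03", "2016-02-20", None, "2016-03-15"],
    "outcome_90d":  ["no event", "revision", "no event", "no event", None],
})

# Completeness: share of non-missing values for each essential data element.
completeness = records.notna().mean()

# Consistency: implant dates should parse as valid dates inside the study window.
dates = pd.to_datetime(records["implant_date"], errors="coerce")
in_window = dates.between("2016-01-01", "2016-12-31")

print("Completeness by field:")
print(completeness.round(2))
print(f"Implant dates valid and in window: {in_window.mean():.0%}")
```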

The draft guidance indicates that the collection of RWD may be subject to IDE regulations if it constitutes a clinical investigation. The FDA states that the collection of RWD that is initiated for the specific purpose of determining the safety and efficacy of a device may be considered a clinical investigation. For example, a registry designed to ascertain the safety and efficacy of an approved device for a patient group outside the approved indication may be considered an assessment subject to IDE regulations. When data collection is not meant to determine the safety and efficacy of a device with the goal of supporting a marketing application, it would generally not meet the definition of a clinical investigation, the guidance states. The FDA will make decisions on a case-by-case basis about whether an IDE is required.

FDA finalizes guidance on implementing adaptive designs for medical device trials

The finalized guidance incorporates feedback received on a draft version published in 2015 and outlines considerations for planning and implementing adaptive designs for trials of medical devices. The guidance spells out when such designs are acceptable and how they can be properly incorporated, and it provides details on adaptations for unblinded and blinded data.

The Food and Drug Administration finalized guidance offering sponsors and FDA staff a road map for planning and implementing adaptive designs for medical device clinical trials. The guidance, which incorporates feedback from several groups on a May 2015 draft guidance, defines adaptive designs as clinical studies that permit prospectively planned modifications based on accumulating study data without damaging the study’s integrity and validity. The guidance applies to studies for premarket medical device submissions, including premarket approval applications, premarket notifications, humanitarian device exemption applications and investigational device exemption (IDE) submissions. It does not, however, apply to clinical assessments of combination products or the co-development of a pharmaceutical product with an unapproved diagnostic test.

The guidance suggests that sponsors weigh the feasibility and appropriateness of an adaptive design before deciding whether to use one. Generally, adaptive designs are feasible if there are a small number of end points for which adaptation will take place, and if the timing of the primary outcome allows an adaptation to be implemented. To determine whether an adaptive design would be advantageous over a nonadaptive design, sponsors should consider realistic scenarios for a particular design and calculate its chance of success, average study size and other operating characteristics, comparing them with those of a nonadaptive design. The decision to select a nonadaptive design may be based on the sponsor’s confidence in the expected parameter values and willingness to risk a failed study.

The FDA outlines two primary principles for designing clinical studies, including adaptive design trials:

  • Controlling the chances of erroneous conclusions. Sponsors need to be vigilant in controlling the rate of false positive conclusions and the inflation of this error rate, which generally occurs due to “multiplicity” (illustrated in the sketch after this list). In adaptive trials, multiplicity can arise from multiple end points, multiple subgroups, multiple exposures or a combination of these features.
  • Minimizing operational bias. Operational and statistical biases need to be reduced, because the presence of bias can alter the findings and hamper a trial’s validity.
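
To make the multiplicity point concrete, the toy calculation below (an illustration of the statistical principle, not an example from the guidance) shows how the family-wise false-positive rate grows when several independent end points are each tested at a nominal 0.05 level without any adjustment.

```python
# Family-wise error rate for k independent hypothesis tests, each run at
# alpha = 0.05 with no multiplicity adjustment.  Purely illustrative.
alpha = 0.05
for k in (1, 2, 3, 5, 10):
    family_wise_error = 1 - (1 - alpha) ** k
    print(f"{k:2d} independent tests -> P(at least one false positive) = {family_wise_error:.3f}")
```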

The guidance provides input on adaptations for both blinded and unblinded trials. For comparative studies with blinded data, adaptations based on demographic characteristics or aggregate outcome results don’t pose challenges to error control or bias, whereas changes based on outcomes broken out by treatment group may cause issues. In cases when adaptations are not preplanned at the start of the study, the FDA expects sponsors to justify the scientific rationale, show they have not had access to any unblinded data and demonstrate that access has been rigorously safeguarded. For adaptations using unblinded data, the FDA encourages sponsors to consult with the agency before starting a trial. The most commonly used adaptations, according to the agency, are group sequential designs, sample size reassessments and group sequential sample size reassessments. The guidance also provides input on special considerations such as changes to pivotal trials that are not preplanned using blinded or unblinded data, adaptive designs for safety end points or for open-label randomized trials, observational comparative studies, and one-arm studies without a control.
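
As a rough illustration of how such a comparison of operating characteristics might be run, the simulation below contrasts a simple two-look group sequential design with a fixed (nonadaptive) design. It is a sketch under stated assumptions, not a method from the guidance: the effect size, sample size and normal-outcome model are arbitrary, and the interim and final critical values (approximately 2.797 and 1.977) are the commonly cited O’Brien-Fleming boundaries for two equally spaced looks.

```python
# Monte Carlo sketch comparing operating characteristics (power and average
# sample size) of a two-look group sequential design with a fixed design.
# All design parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_per_arm, effect, sd, n_sims = 200, 0.25, 1.0, 20_000

def z_stat(treat, control):
    """Two-sample z statistic assuming a known, common standard deviation."""
    n = len(treat)
    return (treat.mean() - control.mean()) / (sd * np.sqrt(2.0 / n))

fixed_hits, gs_hits, gs_n = 0, 0, []
for _ in range(n_sims):
    treat = rng.normal(effect, sd, n_per_arm)
    control = rng.normal(0.0, sd, n_per_arm)

    # Fixed design: a single analysis of all subjects.
    if z_stat(treat, control) >= 1.96:
        fixed_hits += 1

    # Group sequential design: interim look at half the subjects, with the
    # option to stop early for efficacy at the O'Brien-Fleming-type boundary.
    half = n_per_arm // 2
    if z_stat(treat[:half], control[:half]) >= 2.797:
        gs_hits += 1
        gs_n.append(2 * half)            # stopped early
    else:
        if z_stat(treat, control) >= 1.977:
            gs_hits += 1
        gs_n.append(2 * n_per_arm)       # continued to the final analysis

print(f"fixed design power              : {fixed_hits / n_sims:.3f}")
print(f"group sequential power          : {gs_hits / n_sims:.3f}")
print(f"group sequential average sample : {np.mean(gs_n):.0f} of {2 * n_per_arm}")
```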

The FDA recommends that sponsors communicate with the agency’s review and statistical staff during the planning phase to ensure they understand expectations for pivotal adaptive design studies. Sponsors are also asked to establish a risk-based monitoring plan focusing on specific aspects of adaptive studies that may not be present in nonadaptive designs. They should also provide the agency with evidence that protections are in place to make sure personnel are appropriately blinded during the conduct of an adaptive study. The guidance notes that submissions for an adaptive design study should clearly indicate that the study uses an adaptive design and should outline the proposed adaptations as well as key issues related to study monitoring and the role of the Data Monitoring Committee.

FDA publishes draft guidances to help device makers determine whether new 510(k) is needed for device modifications

The agency published two draft guidances outlining guiding principles for deciding whether a new 510(k) is needed for device changes or for modifications to device software. The guidances provide flowcharts outlining the logic to be used when making these decisions.

The Food and Drug Administration published two draft guidances detailing its policy for when medical device makers should submit a new 510(k) for modifications to a medical device or its software. The guidances preserve the basic format and content of the original 1997 guidance on 510(k) submissions for changes to existing devices, but with updates intended to offer more clarity and promote more consistent interpretation of the guidance.

The first guidance, Deciding When to Submit a 510(k) for a Change to an Existing Device, outlines several guiding principles for using the document when deciding whether to submit a new 510(k) for device changes:

  • Modifications designed to significantly affect the safety or effectiveness of a device. A new 510(k) is likely needed if modifications are intended to improve the safety or effectiveness of a device.
  • “Could significantly affect” assessment and the role of testing. A manufacturer should determine, through a risk-based assessment, whether a device change could significantly affect safety or effectiveness. If the assessment concludes that a new 510(k) isn’t needed, the decision should be confirmed through successful, routine verification and validation activities.
  • Unintended consequences of changes. Manufacturers should consider whether there may be any unintended consequences of or effects from the device modifications.
  • Evaluating simultaneous changes. When multiple changes are considered together, each change should be examined separately as well as in aggregate.
  • Appropriate comparative device and cumulative effect of changes. Device makers should conduct a risk-based assessment comparing the changed devices to their most recently cleared devices.
  • 510(k) submissions for modified devices. For a 510(k) submission for a device with multiple modifications, the submission should describe all changes that meet the threshold for requiring a 510(k), as well as any other modifications since the last-cleared 510(k).

The guidance divides types of changes into labeling changes; technology, engineering or performance changes; and materials changes. It provides flowcharts describing the logic scheme to be used when determining whether a new 510(k) is needed for each category of change; a simplified sketch of that decision logic follows the list below:

  • Labeling changes. Determinations as to whether a new 510(k) is needed should focus on change in indications for use, defined as the disease or condition the device will diagnose, treat, prevent, cure or mitigate.
  • Technology, engineering and performance changes. These changes include an array of design activities, from minor engineering modifications to a change in control of device function. Changes of this nature should be validated according to quality system requirements.
  • Materials changes. For changes to the materials from which a device is manufactured, device makers should assess any collateral changes, such as labeling changes or changes in specifications, that may necessitate a 510(k).
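
The sketch below is a deliberately simplified, hypothetical encoding of the high-level questions above; it is not the FDA’s flowchart, and the field names and rules are assumptions made only to show how a manufacturer might document this kind of decision.

```python
# Hypothetical, simplified 510(k) decision logic.  The attributes and rules
# are illustrative assumptions, not the FDA's actual flowcharts.
from dataclasses import dataclass

@dataclass
class DeviceChange:
    intended_to_affect_safety_or_effectiveness: bool   # change made to improve safety/effectiveness
    changes_indications_for_use: bool                   # labeling change to the disease/condition addressed
    could_significantly_affect_safety_or_effectiveness: bool  # outcome of the risk-based assessment

def new_510k_likely_needed(change: DeviceChange) -> bool:
    """Apply the sketch's simplified reading of the guiding principles."""
    if change.intended_to_affect_safety_or_effectiveness:
        return True
    if change.changes_indications_for_use:
        return True
    # Otherwise the risk-based assessment governs; a "no" answer should still
    # be confirmed through routine verification and validation activities.
    return change.could_significantly_affect_safety_or_effectiveness

# Example: a minor engineering tweak with no labeling impact that the risk
# assessment shows cannot significantly affect safety or effectiveness.
print(new_510k_likely_needed(DeviceChange(False, False, False)))  # -> False
```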

The second guidance, Deciding When to Submit a 510(k) for a Software Change to an Existing Device, follows the same guiding principles but applies to device software, defined as electronic instructions used to control the actions or output of a device, provide input to or output from a device, or provide the actions of a medical device. Unlike the guidance on modifications to existing devices, the guidance on software changes does not divide changes into categories and uses a single logic scheme to determine whether a new 510(k) is needed.

FDA proposes amendments to update, expand good laboratory practice regulations

The amendments would introduce a good laboratory practice (GLP) quality system to ensure that nonclinical laboratory studies used to support a Food and Drug Administration application or submission meet quality standards. They would also update the existing rules to reflect the current conduct of nonclinical laboratory studies.

The FDA proposed amendments to its GLP rule requiring the use of a full quality system approach — known as a GLP quality system — when a nonclinical laboratory study is meant to support an application or submission to the agency. Nonclinical laboratory studies, often referred to as preclinical studies when conducted prior to first-in-human trials, provide important safety and toxicity information for FDA-regulated products. The FDA defines the GLP quality system as “the organizational structure, responsibilities, procedures, processes and resources for implementing quality management in the conduct of nonclinical laboratory studies.”

Although current rules include several aspects of a quality system approach, certain components essential to a fully implemented GLP quality system are not required. The proposed amendments are designed to provide a framework for incorporating quality into the planning, conduct and reporting of nonclinical laboratory studies while ensuring the regulations reflect existing practices for the conduct of such studies. The amendments follow an advance notice of proposed rulemaking (ANPRM) issued in late 2010.

Under the proposed GLP quality system, the FDA would require sponsors to take on additional responsibilities for testing facility management and for maintaining standard operating procedures (SOPs). The amendments would also expand the definition of a testing facility to cover all nonclinical laboratory studies, whether carried out at a single site or at multiple sites. The proposal would also update existing rules to reflect the current conduct of nonclinical laboratory studies, as stakeholder comments on the ANPRM suggest the existing rules are outdated and stymie the efficient use of new technology.

As part of the amendments, the FDA proposed introducing and modifying definitions and terms as well as organizational and personnel roles and responsibilities. New or modified definitions include:

  • Facility-based inspection. A quality assurance unit (QAU) inspection covering general facilities and activities, such as installations, environmental monitoring and equipment maintenance.
  • Lead quality assurance unit. The QAU responsible for quality assurance in a multisite study.
  • Nonclinical laboratory study. The term would be updated to make clear that the conduct of these studies is not limited to a traditional laboratory environment, and to clearly indicate that the purpose for such studies may be to assess toxicity.
  • Process-based inspection. Inspecting repetitive, frequently performed procedures and processes.
  • QAU. The definition would be updated to ensure the unit is entirely separate and independent from the personnel engaged in the direction and conduct of the study.

The FDA estimates the proposed amendments would cost between $51.5 million and $69.3 million. One-time costs would include updating existing SOPs, crafting new SOPs and training staff, while other costs to industry may include additional reporting and record-keeping responsibilities. While the agency is unable to quantify the expected benefits, it expects the amendments to result in better quality and more reliable data to support applications and submissions.