FDA first published the draft version of the guidance in February 2011, a result of the FDA Amendments Act of 2007 (FDAAA), which contained provisions to expand the agency’s focus on postmarketing safety assessments in light of ongoing safety scandals.
One of FDA’s commitments under the act was to identify best practices for tracking and using those studies, and to clarify how industry would be expected to conduct them on FDA’s behalf.
Because patients in real-world settings are rarely treated in company-controlled facilities, tracking them requires working with electronic health data, which often comes from multiple sources. But how do you ensure that studies using electronic health data reach conclusions as rigorous as those from a placebo-controlled randomized clinical trial? That’s where FDA’s guidance comes in.
Industry reaction to the 2011 draft guidance was mixed, with most entities saying that they appreciated the intent of the guidance while taking issue with its provisions. For example, the International Society for Pharmacoepidemiology (ISPE) wrote in a comment submitted to FDA’s docket that there is current “inconsistency in the depth of general guidance on the use of databases for pharmacoepidemiologic studies.”
“The areas of design, database selection and analysis are covered to some extent, but not always fully, and other areas are covered in one sentence or not at all.” ISPE went on to recommend that areas of the guidance be expanded, and provided several suggestions for how to do so.

The Biotechnology Industry Organization (BIO), meanwhile, wrote in a lengthy set of comments that the document needs to be more explicit in some areas, making clear that the role of any pharmacoepidemiologic study is to determine whether the relationship between a product and a disease is one of correlation or causation. This association, BIO noted, is not always clear, and the organization wanted the guidance to reflect the ambiguity that sometimes results from such studies. BIO also asked that the guidance identify the instances in which FDA may provide interim feedback on a study, a point it said was lacking in the draft.
In addition, the Pharmaceutical Research and Manufacturers of America (PhRMA) said it wanted more details in the guidance on how FDA would use the data provided to it through pharmacoepidemiological studies.
“While the guidance is clear on the definitions and expectations regarding protocols of observational safety studies, there is no information on the process FDA will use to review and comment on these documents. PhRMA requests clarity regarding who will provide feedback on the proposed methodological approaches and how would potential disagreements be resolved,” the group wrote. PhRMA also said it would “appreciate clarity regarding the expected timelines for FDA’s feedback to sponsors and protocol approval.”
FDA’s final guidance seems to take some of those concerns into account, adding detail to the guidance in at least eight different areas, according to FDA: interpretation of findings; study time frame; identification and handling of confounding and the use of statistical techniques to address confounding; exposure ascertainment; study design; outcome definition and validation; pre-specified analysis plan; and the linkage and pooling of data from different data sources.
For example, FDA explains that sponsors should summarize key results of their analyses, including the primary and secondary analyses. The “precision” of the results should be described in the findings, including the confidence intervals and statistical significance of the study. FDA is quick to note that statistical significance alone does not “exclusively determine the clinical importance of the findings,” as the “significance of very small effect measures can be common.” FDA also recommends that sponsors pay attention to the potential for biases to enter into their results.
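FDA’s caution that very small effect measures can still be statistically significant is easy to see in large observational databases. The sketch below (hypothetical counts, not from the guidance) uses the standard log-transform method for a risk ratio confidence interval to show how a clinically tiny effect can nonetheless exclude the null at conventional significance levels:

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio and its 95% CI via the log-transform (Katz) method.

    a, b   -- event counts in the exposed / unexposed groups
    n1, n2 -- group sizes
    """
    rr = (a / n1) / (b / n2)
    # Standard error of log(RR) for two independent binomial samples
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical large-database scenario: RR of about 1.02 with
# millions of patients per arm.
rr, lower, upper = risk_ratio_ci(102_000, 5_000_000, 100_000, 5_000_000)
print(rr, lower, upper)
# The CI excludes 1.0 (statistically significant), yet an RR of 1.02
# may have little clinical importance -- the point FDA is making.
```

All counts here are invented for illustration; the takeaway is that precision (the width of the interval), not significance alone, is what the guidance asks sponsors to report and interpret.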
Regarding the study time frame, FDA recommends that it be defined before a study begins, to avoid potential biases, such as those introduced by shifts in prescribing habits in the general population.
Finally, FDA notes that pooled data from different sources, also known as linked data, may be used in a study, but the linkage must be explained in the protocol, including both the rationale for linking the data and the specific method by which they will be pooled. While FDA noted that linkage can provide a fuller picture of a drug’s epidemiological effects, it may also present problems if done incorrectly.
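To make the linkage point concrete, the toy sketch below (all field names and values are invented) joins two hypothetical extracts, a claims table and an EHR lab table, on a shared de-identified patient key. The choice of join semantics is exactly the kind of method detail FDA wants pre-specified in the protocol, since it changes which patients end up in the analytic cohort:

```python
# Hypothetical claims and EHR extracts, keyed by a de-identified patient ID.
claims = {101: {"exposed": True}, 102: {"exposed": False}, 103: {"exposed": True}}
ehr = {101: {"outcome": 1}, 102: {"outcome": 0}, 104: {"outcome": 1}}

# Inner-join semantics: keep only patients found in BOTH sources.
# Patients 103 and 104 are silently dropped -- one reason incorrect or
# unexplained linkage can bias pooled results.
linked = {
    pid: {**claims[pid], **ehr[pid]}
    for pid in claims.keys() & ehr.keys()
}
print(sorted(linked))  # the linked cohort is smaller than either source
```

A protocol following the guidance would state why these sources are linked, what key is used, and how unmatched records are handled, rather than leaving those choices to the analysis stage.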
REFERENCE: Alexander Gaffney, RF News Editor; RAPS; 16 May 2013