In a recent editorial, we highlighted information commonly absent from manuscripts that report intervention effectiveness (Gutman & Murphy, 2012). One critical element often missing from manuscripts is a discussion of intervention fidelity, which is the extent to which the intervention is delivered as it was intended (Gearing et al., 2011). To assess fidelity, researchers need to determine at the study design stage the methods they will use to assess and monitor the reliability and validity of the intervention (Bellg et al., 2004; Borrelli, 2011). Reporting fidelity methods in the published product is crucial: it allows readers to judge the quality of the study and replicate it, and it helps intervention developers understand how various factors may have influenced the study's outcome.
The Consolidated Standards of Reporting Trials (CONSORT) guidelines (Altman et al., 2001), the criteria now required by most medical journals for reporting clinical trials, have been expanded to include reporting of intervention fidelity in nonpharmacological trials (Boutron, Moher, Altman, Schulz, & Ravaud, 2008). The expanded criteria for reporting offer detailed guidance on how to report this additional information. Because fidelity methods are infrequently and inconsistently reported in journals, our intent in this editorial is to discuss the five basic components of intervention fidelity (Bellg et al., 2004; Borrelli, 2011; Gearing et al., 2011):
Intervention design
Training of providers
Intervention delivery
Receipt of intervention
Enactment of skills gained from the intervention.
Intervention Design
Aspects of trial design pertaining to intervention fidelity include the content and dose of the intervention and the use of any comparison groups (Bellg et al., 2004). Researchers should describe in detail the number, length, and frequency of intervention sessions. The researchers should articulate the underlying theoretical framework or clinical guidelines that provided the foundation for the intervention and specify the intervention’s “active ingredients” (Borrelli, 2011). In designing a trial, researchers should try to plan for potential setbacks and consider alternative strategies before the setbacks occur. One potential setback, for example, is provider dropout (Bellg et al., 2004; Gearing et al., 2011); a potential solution is to train extra providers at the outset of the study so that backup providers with the requisite skills are available.
Training of Providers
To ensure fidelity, it is necessary to ensure that multiple providers administer the same intervention in the same manner; for this reason, many researchers develop and standardize training procedures. This training may be done initially and throughout the study implementation to allow for turnover in providers and to keep providers from deviating from the standardized procedures over time (a phenomenon known as therapist drift; Bellg et al., 2004; Borrelli, 2011; Gearing et al., 2011). Before beginning intervention delivery, researchers should assess providers’ acquisition of skills by written test, direct observation, or a combination (Bellg et al., 2004). Often, providers who have particular credentials or experience are selected for a study; training plans may need to be adapted if providers have different levels of experience or education (Borrelli, 2011).
Intervention Delivery
Researchers should report any methods used to standardize the interventions; this information is especially important so that others may replicate the study (Boutron et al., 2008). Researchers may use written intervention manuals, which assist in ensuring fidelity by helping to control for provider differences, ensuring adherence to the intervention protocol, and maintaining the distinct features of the intervention and comparison treatments (Bellg et al., 2004). Written manuals also provide a concrete means to articulate the active ingredients of the intervention, ensuring consistency in how and when these ingredients are delivered.
Receipt of Intervention
The first three components of fidelity focus primarily on providers and how treatment is delivered. Fidelity methods also involve a fourth component: how participants received the intervention (e.g., whether they understood the intervention content and how relevant they thought the intervention was to daily life). Researchers can assess this component of fidelity by tracking attendance at sessions and administering measures such as pre- and postintervention assessments of knowledge gained. It also may be important to assess participant self-efficacy in implementing newly taught behaviors or strategies. Borrelli (2011) described several ways to enhance fidelity of intervention receipt.
Enactment of Skills Gained From the Intervention
The fifth basic component of fidelity pertains to how people apply the intervention content in daily life. Measuring enactment differs from assessing study outcomes because measurement of enactment occurs throughout the study and not just at an endpoint (Borrelli, 2011). An outside observer can assess enactment using a checklist or other objective measurement. For instance, if a goal of an intervention was medication management and treatment involved teaching the participant to organize a pillbox, the ability to organize the pillbox would be a skill involved in treatment receipt, but taking the medication appropriately would be enactment of the skill in daily life (Bellg et al., 2004).
Conclusion
Intervention fidelity is an important aspect of designing and implementing intervention effectiveness studies. Assessment of intervention fidelity not only is important for replication of the study but also provides crucial information to researchers for interpreting the effects of the intervention.