“Vitality shows in not only the ability to persist, but in the ability to start over.”
–F. Scott Fitzgerald
Avoiding unnecessary changes while integrating evidence-based practice (EBP) innovations to improve care is among the most important challenges in healthcare (Dixon-Woods, Amalberti, Goodman, Bergman, & Glasziou, 2011; IOM, 2015a). The EBP process now reaches a decision point: Is the change appropriate for practice? Before deciding to adopt a practice change, pilot testing is imperative. Consider pilot testing as parallel to a Phase 1 trial in research, designed to identify risks or unintended outcomes of the innovation and its system effects (Dixon-Woods et al., 2011). The pilot step allows the intervention and implementation plan to be tested and refined in context, addressing potential threats and thereby increasing the chance of successful adoption in practice before full-scale implementation occurs. Implementation success can be measured by assimilation (the process of moving from awareness to adoption, resulting in a practice becoming routine or institutionalized) and fidelity (the extent to which the practice change is carried out as intended) (Panzano, Sweeney, Seffrin, Massatti, & Knudsen, 2012).
The team reviews evaluative data from the pilot (see Tool 10.1) and decides whether the practice change improved outcomes as the evidence indicated and whether the implementation plan promoted adoption. Each component of the evaluation informs the decision about the effectiveness of the EBP change, use of the EBP protocol in the pilot area, and expansion of the rollout to other appropriate areas.
Outcome Data and Balancing Measures
Outcome data and balancing measures give the team direction when determining whether the EBP change is appropriate for adoption in practice. Patient and/or organizational quality and safety indicators make up the outcomes targeted for improvement (e.g., hospital-acquired infections, clinician retention). Balancing measures monitor for systemic effects and unintended consequences of the practice change (e.g., risk for falls when implementing an early ambulation intervention). Balancing measures are particularly important to consider when the evidence is not clear or a risk is evident. Outcome data and balancing measures collected in the pilot should be compared pre- and post-practice change and then benchmarked against outcomes reported in the literature and by other organizations. If the practice change is not carried out as intended (low fidelity), outcomes may fail to improve (e.g., if steps are skipped in an EBP central line dressing change protocol, bloodstream infection rates may not improve and could even increase). If the intervention fails to achieve the expected outcomes, the cause may be implementation failure rather than the EBP protocol or intervention itself (Panzano et al., 2012). Deciding whether outcomes met the intended goal for improvement is central to making a decision about ...
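The pre/post comparison described above can be sketched in code. This is a minimal illustration only; the infection and fall counts, exposure-day denominators, and rate definitions below are hypothetical and are not drawn from the text or from any real pilot data.

```python
# Illustrative sketch: comparing an outcome measure and a balancing
# measure before and after a pilot practice change (hypothetical data).

def rate_per_1000(events, exposure_days):
    """Event rate per 1,000 device-days or patient-days."""
    return events / exposure_days * 1000

# Hypothetical pilot-unit data: central line-associated bloodstream
# infections (outcome) and falls (balancing measure).
pre = {"clabsi": 6, "line_days": 2400, "falls": 9, "patient_days": 3000}
post = {"clabsi": 2, "line_days": 2500, "falls": 11, "patient_days": 3100}

clabsi_pre = rate_per_1000(pre["clabsi"], pre["line_days"])      # 2.5
clabsi_post = rate_per_1000(post["clabsi"], post["line_days"])   # 0.8
falls_pre = rate_per_1000(pre["falls"], pre["patient_days"])     # 3.0
falls_post = rate_per_1000(post["falls"], post["patient_days"])

print(f"CLABSI per 1,000 line-days: {clabsi_pre:.1f} -> {clabsi_post:.1f}")
print(f"Falls per 1,000 patient-days: {falls_pre:.1f} -> {falls_post:.1f}")
```

In practice the post-pilot rates would also be benchmarked against rates reported in the literature and by comparable organizations, and the balancing measure (falls, here) would be watched for any worsening that the outcome improvement alone would hide.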