INTRODUCTION

Evaluation is an essential component of the evidence-based practice (EBP) process, warranting dedicated effort within the design and pilot step (Iowa Model Collaborative, 2017). Just as EBP has a purpose and process that are similar to, yet distinct from, those of research and quality improvement (QI), EBP evaluation both resembles and differs from research and QI methods (Zhao & Granger, 2018). The EBP process is iterative, adaptive, and context-specific; the evaluation component must therefore align with that process (Parry et al., 2018). EBP evaluation must be scholarly yet flexible enough to adapt to the real-world setting and allow for continuous learning (Barry et al., 2018). This chapter introduces the Precision Implementation Approach®, describes the EBP Evaluation Framework (KABOB), and establishes a process for generating local data that drives planning for the EBP change and selection of implementation strategies. Choose among the numerous focused tools, resources, and tips provided for actionable guidance in operationalizing your EBP evaluation.

CREATE AN EVALUATION PLAN

Clinicians are often eager to start using practice changes found through critique and synthesis of the evidence. However, it is essential to create a comprehensive evaluation plan for the pilot and collect baseline data prior to the “go live” (Curtis et al., 2017; Puterman et al., 2013; Russell et al., 2011). Identify robust, timely, actionable data needed to drive decision-making for EBP improvement (Polancich et al., 2019). For the pilot, the team will need baseline (pre-pilot) data to design the localized EBP protocol and select implementation strategies that best target the needs of the local setting (Cullen, Hanrahan, et al., 2019; Waltz et al., 2019). After the pilot, the team will compare pre- and post-pilot data to identify whether the EBP change and implementation plan worked as intended (Barry et al., 2018) and to demonstrate impact.
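To make the pre/post comparison concrete, here is a minimal sketch of how a team might compare a baseline indicator rate against a post-pilot rate. The indicator name and audit counts are hypothetical, not taken from this chapter, and small pilot samples or trended run charts may make formal significance testing less useful than this sketch implies; treat it as one option under those assumptions:

```python
# Hypothetical pre/post comparison of a single compliance indicator,
# e.g., "protocol step documented per audit" (illustrative data only).
from math import erf, sqrt

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                        # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
    z = (p2 - p1) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Baseline (pre-pilot) vs. post-pilot audit counts: illustrative numbers only.
z, p = two_proportion_ztest(x1=42, n1=100, x2=63, n2=100)
print(f"pre 42/100 (42%) vs. post 63/100 (63%): z = {z:.2f}, p = {p:.4f}")
```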

Decide what to measure by first considering the quality aim (Zhao & Granger, 2018). The Institute of Medicine identifies six aims for healthcare quality: safe, effective, patient-centered, timely, efficient, and equitable care (Agency for Healthcare Research and Quality, 2018c; Institute of Medicine, 2001). These aims give structure for selecting outcome data that can be measured and interpreted consistently (Zhao & Granger, 2018). Most EBP improvements fall within the realm of safety and effectiveness.

Measures rarely need to include personal identifiers but must be pragmatic, acceptable, compatible, easy, and useful (Powell et al., 2017). Demographic data may not be relevant. Create a comprehensive evaluation plan that includes topic-specific data points (i.e., indicators) to measure both processes and outcomes. Process data measure the steps taken to make improvements; outcome measures capture the impact on quality or cost, and balancing measures capture unintended consequences. Use evaluation data to improve processes and reduce unnecessary harm, thereby maximizing the benefits of an EBP change (American Evaluation Association, 2018). Later in the EBP process, trended ...
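To illustrate the distinction between process, outcome, and balancing measures drawn above, the sketch below shows one way the indicators in an evaluation plan might be organized. Every indicator name, data source, and collection frequency here is hypothetical, not drawn from the chapter:

```python
# Illustrative structure for an evaluation plan's indicators, grouped by type.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    kind: str          # "process", "outcome", or "balancing"
    data_source: str   # where the data come from (audit, EHR report, survey)
    frequency: str     # how often the indicator is collected

evaluation_plan = [
    Indicator("Oral care documented per protocol", "process",
              "nursing documentation audit", "weekly during pilot"),
    Indicator("Non-ventilator hospital-acquired pneumonia rate", "outcome",
              "infection prevention surveillance", "monthly"),
    Indicator("Nursing time spent on oral care", "balancing",
              "staff survey", "pre- and post-pilot"),
]

for ind in evaluation_plan:
    print(f"[{ind.kind:>9}] {ind.name} | {ind.data_source}, {ind.frequency}")
```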
