
“It is often necessary to make a decision on the basis of information sufficient for action but insufficient to satisfy the intellect.”

–Immanuel Kant

Clinical practice guidelines (CPGs) are foundational to improving healthcare (Woolf, Schünemann, Eccles, Grimshaw, & Shekelle, 2012). The goal of landmark Institute of Medicine (IOM) reports (IOM, 2010a, 2011a, 2011b) is to improve the quality and consistency of both CPGs and systematic reviews in order to promote application of evidence in practice. Likewise, the goal of assembling, appraising, and synthesizing evidence is to improve the quality of healthcare delivered to patients. Yet the jump from critiquing research and other reports to making decisions or applying research findings in practice is not always straightforward, easy, or clear-cut (Aveyard & Bauld, 2011).

Evidence-based practice (EBP) teams must move beyond synthesis to application and not get stuck reading and evaluating the literature. Instead, teams should move forward and make a decision about using the available evidence to develop practice recommendations. Team members experienced in methods for weighing evidence can help to mentor novice members and increase the reliability of recommendations (Berkman, Lohr, Morgan, Tzy-Mey, & Morton, 2013; Murad et al., 2014; Stevens, 2009). Clinicians with content expertise are key participants in weighing the evidence because they have unique perspectives and knowledge about the risks, benefits, and costs of interventions, in addition to an understanding of patient values (Vaccaro et al., 2010).

Follow a well-developed analytical process for working through the literature: consider all relevant sources of evidence (e.g., looking at lower levels of evidence may be indicated when other evidence is lacking) and develop an EBP protocol for local use. Use a simple checklist to keep work on track (see Tool 6.1). At the outset, make explicit decisions about the specific questions to be answered and the key outcomes. As a group, determine the questions to be answered, the types of evidence that are relevant, and the criteria that will be used for making decisions (Woolf et al., 2012). While synthesizing information, begin to evaluate the quality of the evidence and the strength of the recommendations for practice. This provides an efficient summary for guiding practice decisions.


Rarely does the answer to a clinical question come from a single study (Aslam, Georgiev, Mehta, & Kumar, 2012). Clinical decisions should be based on the body of evidence, not on individual studies (Berkman et al., 2013; Murad et al., 2014). The quality of individual studies should be determined first, based on the study design rather than the level of evidence (i.e., there can be a high-quality observational study or a poor-quality meta-analysis). Then, separately, the overall strength of the evidence base can be determined (Jones, 2010). The research question should drive study design (see Table 6.1).