INTRODUCTION

The goal of assembling, appraising, and synthesizing evidence is to ensure that the best available evidence informs recommendations guiding healthcare delivery. Yet the jump from critiquing research and other reports to making decisions or applying research findings in practice is not always straightforward, easy, or clear-cut (Schloemer & Schröder-Bäck, 2018).

Evidence-based practice (EBP) teams must move beyond reading and evaluating the literature to synthesizing evidence for application. Make decisions using the best available evidence to design and pilot the practice change. Team members who have experience weighing evidence can mentor novice members to ensure the reliability of practice recommendations (Berkman et al., 2013; Murad et al., 2014; Stevens, 2009). Include clinicians with content expertise when weighing the evidence because they bring knowledge of current practice as well as an understanding of patient values and the practical risks, benefits, and costs of interventions (Vaccaro et al., 2010). Include end users and patient perspectives to determine their needs in relation to the evidence (Murad et al., 2017).

Follow a well-developed analytical process for working through the literature (see Chapter 5). While synthesizing information, begin to evaluate the strength and consistency of the evidence. Keep work on track with a checklist (see Tool 6.1). As a team, make explicit decisions about the questions that need to be answered in order to guide practice, the types of evidence that are relevant, the criteria that will be used for making decisions, and key outcomes of interest (Woolf et al., 2012). An evidence synthesis table provides an efficient summary to determine the overall strength of evidence to guide practice decisions (see Tool 5.4).

STRENGTH OF EVIDENCE

Use multiple forms of evidence and consider the context for making clinical or operational decisions (Jones & Steel, 2018; Knottnerus & Tugwell, 2019). Identify research designs that best match the purpose driving the decision (see Table 5.4). Determine the quality of studies based on the risk of bias associated with each design, and then the overall strength of the body of evidence. Avoid grading individual studies or using outdated levels-of-evidence hierarchies (see Chapter 5; Berkman et al., 2013; Jones & Steel, 2018; Murad et al., 2014).

Recognize that including all available evidence may not be feasible in clinical practice or operations (Jones & Steel, 2018). Use the best evidence that answers the clinical or operational question. When evidence is sparse or inconsistent, other sources of evidence, risks, and patient preferences may best determine the direction to take (Sylvester et al., 2017). In some cases, scientific principles or theory may guide next steps. Scientific principles may include consideration of peak effect, side effects, half-life, associated consequences, benefits of patient-centered care, or holistic care principles. ...