Systematic reviews aim to provide a comprehensive, unbiased synthesis of many relevant studies in a single document using rigorous and transparent methods. They attempt to uncover “all” of the evidence relevant to a question and to synthesize and summarize existing knowledge.

Given the explosion of knowledge and access to a diverse range of knowledge sources over the past decade, it is now almost impossible for individual clinicians or clinical teams to stay abreast of knowledge in a given field. Systematic reviews (also referred to as research syntheses), conducted by review groups with specialized skills, set out to retrieve international evidence and to synthesize the results of this search into evidence to inform practice and policy. They follow a structured research process that requires rigorous methods to ensure that the results are both reliable and meaningful to end users.

Systematic reviews and meta-analyses began to appear in a variety of health fields in the 1970s and 1980s (Bastian et al. 2010). In the 1990s, confusion arose between the terms ‘systematic review’ and ‘meta-analysis’, with the importance of using systematic approaches to reduce bias in reviews being distinguished as an issue separate from meta-analysis. Chalmers and Altman (1995) suggested that the term ‘meta-analysis’ be restricted to the process of statistical synthesis; that is, meta-analysis may or may not be part of a systematic review. Growing interest in systematic reviews led to the emergence of international, interdisciplinary groups of scholars promoting and expanding upon systematic reviews (such as JBI, Cochrane and The Campbell Collaboration). Today the methodology of systematic reviewing continues to evolve. JBI reviewers are encouraged to read the article by Aromataris and Pearson (2014), which provides an introductory overview of systematic reviews.

The quality of a systematic review depends heavily on the extent to which its methods minimize the risk of error and bias during the review process. Such rigorous methods distinguish systematic reviews from traditional literature reviews. As such, explicit and exhaustive reporting of the methods used in the synthesis is a necessity and a hallmark of any well-conducted systematic review. As a scientific enterprise, a systematic review will influence healthcare decisions and should be conducted with the same rigor expected of all research.

Currently, JBI has formal guidance for the following types of reviews:

  1. Systematic reviews of experiences or meaningfulness
  2. Systematic reviews of effectiveness
  3. Systematic reviews of text and opinion/policy
  4. Systematic reviews of prevalence and incidence
  5. Systematic reviews of costs of a certain intervention, process, or procedure
  6. Systematic reviews of etiology and risk
  7. Systematic reviews of mixed methods
  8. Systematic reviews of diagnostic test accuracy
  9. Umbrella reviews
  10. Scoping reviews

The following steps are generally accepted as required in a systematic review of any evidence type:

  1. Formulating a review question
  2. Defining inclusion and exclusion criteria
  3. Locating studies through searching
  4. Selecting studies for inclusion
  5. Assessing the quality of studies
  6. Extracting data
  7. Analyzing and synthesizing the relevant studies
  8. Presenting and interpreting the results, potentially including a process to establish certainty in the body of evidence (through systems such as GRADE)

An essential step in the early development of a systematic review is the creation of a review protocol. A protocol pre-defines the objectives and methods of the systematic review, which makes the process transparent and in turn allows the reader to see how the findings and recommendations were arrived at. It must be developed prior to conducting the systematic review, as it is important in restricting reporting bias. The protocol is a completely separate document to the systematic review report.

Evidence-based healthcare (EBHC) has been defined as “clinical decision-making that considers the feasibility, appropriateness, meaningfulness and effectiveness of healthcare practices” (Jordan et al. 2016, p. 5). Healthcare practices should be informed by the best available evidence, the context in which the care is delivered, the individual patient, and the professional judgment and expertise of the health professional (Jordan et al. 2016; Jordan et al. 2018). However, getting evidence into practice is not necessarily straightforward; one simple estimate of the time it takes from evidence being created to its use in practice is 17 years (Morris, Wooding & Grant 2011). This delay in uptake is often due to a variety of gaps in the movement of research from one stage to another (i.e. from pre-clinical research through to clinical trials). How to bridge the gap between evidence and the translation (or uptake) of research into clinical practice has been a point of ongoing debate over the years (Lang, Wyer & Haynes 2007; Pearson, Jordan & Munn 2012).


The field of knowledge translation (which we view as a broad term encompassing the movement of research findings and knowledge) has been established to address the facilitation of knowledge through the various phases of creation through to its use (Munn et al. 2018; Pearson, Jordan & Munn 2012). As eloquently stated by Woolf (2008), “translational research means different things to different people, but it seems important to almost everyone” (p. 211). For some, research translation refers to bench-to-human research, while for others focused on healthcare delivery and improving population health, translational research refers to the movement from human research to policy or practice (Milat & Li 2017; Munn et al. 2018).


In addition to differences in the interpretation, definition and application of translational research, there are commonly used terms that are applied interchangeably. Translational research, knowledge translation, knowledge to action, implementation science and knowledge transfer are all used to describe processes that address the gap between research knowledge and its application in treatment, policy and practice (Milat & Li 2017). The underlying premise of each of these terms is similar: reducing this gap, with the common goal of improving practice and outcomes.


Within JBI, while we acknowledge the importance of the entire translation science movement, we have a particular focus on implementing findings from systematic reviews, trustworthy clinical practice guidelines and other evidence-based resources into policy and practice. When we use the terms implementation science and evidence implementation, we are particularly focusing on strategies and methods to move the results and guidance in these evidence-based resources into policy, practice and action.