Prior to developing a protocol for your review, some preliminary investigation of the literature is recommended to determine whether studies are available on the topic of interest. If preliminary searching suggests that no studies are available on your review topic, your energies may be better directed towards a different endeavor than conducting an ‘empty’ review.

To avoid duplication, reviewers are advised to register their review title (see Section 1.2). Before registering a title, reviewers should also search major electronic databases to confirm that no systematic review has recently been published on the same topic. A search of the Cochrane Database, PubMed/MEDLINE, PROSPERO and DARE, as well as our online journal, the JBI Database of Systematic Reviews and Implementation Reports, will help to establish whether or not a recent review exists on the topic of interest. The results of this search should be mentioned in the background of both the systematic review protocol and the review. If a systematic review on the topic of interest has already been conducted, consider the following questions to establish whether continuing with the review topic will be strategic.

-   Was the identified review last updated more than three years ago?

-   Is it a high-quality, well-conducted systematic review?

-   Do the methods reflect the specific criteria of interest for your topic?

-   Is there a specific gap, in terms of population, intervention or outcome, that the identified review has not addressed?

If a systematic review (or protocol) already exists on your topic, think carefully before conducting your review. To reduce duplication and avoid wasting human resources, it may be best not to proceed. However, there may be important reasons to conduct your review regardless: your inclusion criteria may differ in terms of population, context, interventions or even study types, or you may plan to use different methods for searching, critical appraisal and synthesis. In these cases, duplication may be appropriate. The existing systematic review may also have flaws in its conduct or reporting that warrant a new review.

Authors may also wish to consider the technical resources available to them. The conduct of a systematic review is greatly facilitated by access to extensive library and electronic databases and the use of citation management software as well as software designed specifically to facilitate the conduct of a systematic review such as JBI SUMARI.

When preparing to undertake a systematic review, consideration needs to be given to the human as well as the technical resources needed to complete the review. To maintain the required rigorous standards and mitigate the risk of bias in the review process, JBI requires a minimum of two reviewers to conduct a systematic review. Authors should always consult the submission guidelines before submitting a manuscript to a journal. For example, the JBI Database of Systematic Reviews and Implementation Reports requires that at least one author has been trained in the JBI approach to systematic review, although it is ideal for all reviewers to have undergone training. The skills and expertise required will vary depending on the nature of the review being undertaken and the methodology utilized. It is therefore recommended that a JBI systematic review be conducted by a team comprising individuals who possess the skills and knowledge required to conduct the review to a standard acceptable for publication in an international scientific periodical.

Dependent upon the type of review being conducted, review teams should ideally consist of members with:

  • Knowledge of general JBI systematic review methodology, such as formulating a review question, defining inclusion criteria and critical appraisal.
  • Specialised information science or research librarian skills to develop and implement a comprehensive search strategy.
  • The specific methodological expertise required for the type of review being undertaken, for example knowledge of the statistical methods to be used, experience in qualitative synthesis, or experience with economic analyses for economic evaluations.
  • Knowledge of the topic area. Representation from clinical specialties and consumers is recommended where the review is being undertaken by systematic reviewers/methodologists rather than topic experts.
  • The ability to write a report in English to a publishable standard.

From the outset, the review team should consider expected contributions to the review project and eventual authorship. Some members of the review team may be better recognised in the acknowledgements of the published report rather than as authors. Conversely, part of the review team may be formally organised as a “Review Panel”, in which some of the individuals with the attributes listed above provide formal advice and oversight throughout the conduct of the review, for example by reviewing the draft protocol and final manuscript submissions or providing specific insight into the interpretation of data and the formulation of recommendations for practice and research. The names, contact details and areas of speciality of each member of the review panel should be included in both the protocol and the report.

Implementation science is defined as “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice and, hence, to improve the quality and effectiveness of health services and care” (p.1) (Eccles & Mittman 2006). This field of inquiry emerged out of a need to address the ongoing difficulties associated with getting research into practice (Nilsen 2015). It is well documented that existing barriers are a main contributor to the discrepancy between evidence-based recommendations and practice (Rainbird 2006; Pearson, Field & Jordan 2007). The findings of a systematic review identified three overarching domains related to barriers and facilitators of implementing evidence into practice: system, staff and intervention (Geerligs et al. 2018). System-level barriers and facilitators include environmental context (staff time, workload, workflow, space and staff turnover), culture (attitude to change, commitment, motivation, roles/trust and champions), communication processes and external requirements (reporting, standards and guidelines) (Geerligs et al. 2018). Staff-level barriers and facilitators include staff commitment and attitude, understanding and awareness, identification of individual roles, and skills, ability and confidence (Geerligs et al. 2018). Barriers and facilitators related to the intervention include the ease of integration (complexity, costs and required resources), validity of the evidence base, safety, legal and ethical concerns, and supportive components such as education and training, marketing and awareness (Geerligs et al. 2018).

Implementation science seeks to understand these barriers and facilitators, and to empower health professionals to utilize evidence-based approaches with the end goal of improving the quality and service of healthcare (Tabak et al. 2012). Implementation has been defined as “the methods to promote the systematic uptake of clinical research findings and other evidence-based practices into routine practice and hence improve the quality and effectiveness of healthcare policy and practice” (p.1)(Eccles & Mittman 2006).

A variety of theoretical approaches, models and frameworks are prescribed within this field, with the central aim of developing a better understanding and explanation of why and how implementation succeeds or fails (Atkins et al.; Ayanian & H. 2016; Brown & McCormack 2005; Gardner, Gardner & O'Connell 2014; Graham et al. 2006; Khalil 2017; Kitson, Harvey & McCormack 1998; Nilsen 2015; Prochaska & DiClemente 1983; Rogers 1995; Rycroft-Malone & Bucknall 2010; Rycroft-Malone et al. 2002). Table 1 below details some of the existing frameworks and models available to assist with the implementation of evidence into practice. The list is by no means comprehensive, but it does highlight the complexity involved in getting evidence into practice. A recent review examining the differences and similarities of research translation frameworks identified 41 frameworks and models; the four most published and cited were the Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM) framework, the Knowledge to Action (KTA) framework, knowledge translation continuum models, or “T” models, and the PARiHS framework. All identified frameworks described the gap that exists between research knowledge and its application in policy and practice, and all acknowledged the importance and difficulty of closing this gap (Milat & Li 2017). A plethora of published information is available on the different frameworks and models.


Table 1: Description of Implementation Theories, Models and Frameworks

Theory / Model / Framework: Description

Diffusion of innovation model
  • Knowledge phase: involves learning about the innovation to be implemented (such as a guideline, or best practice recommendation).
  • Persuasion phase: relies on opinion leaders with good knowledge, who are credible, approachable, can effectively influence practice and encourage others to take up new evidence in practice by personal example - facilitating individuals to form positive (or sometimes negative) attitudes to the innovation.
  • Decision phase: the point at which stakeholders determine whether the change is worthwhile or not worth pursuing.
  • Adoption or rejection phase: reflects the outcome of the decision phase and is the ultimate decider as to whether evidence is implemented in practice.

Health education theory model

  • Behavior change requires that knowledge and skills be addressed at the individual level.
  • The positive impact of health education theory is proportional to the degree of active learning.

JBI implementation framework

  • The model has three governing principles that guide a seven-step process.
  • The seven-step process is grounded in the audit/feedback/change/re-audit cycle and is of critical importance when attempting to promote sustainable change in health.
  • Further detail provided in section 2 of this handbook.

Knowledge to action (KTA) framework

  • Consists of two interconnected cycles (knowledge creation and action).
  • At the center of the model is knowledge creation, which includes the three phases of knowledge inquiry (primary research), synthesis (systematic reviews), and products/tools (guidelines, algorithms, etc.).
  • Surrounding knowledge creation is the action cycle, which consists of seven phases. These phases may occur sequentially or simultaneously (identify problem; adapt knowledge to local context; assess barriers to knowledge use; select, tailor and implement interventions; monitor knowledge use; evaluate outcomes; sustain knowledge use).

PARiHS model

Research implementation expressed as a function of the relationships among evidence, context and facilitation:

  • Evidence (research, clinical experience, patient experience)
  • Context (culture, leadership and evaluation)
  • Facilitation (purpose, role, skills and attitudes)

PDSA model

The model is cyclic comprising four stages:

  • Plan - the change to be tested or implemented
  • Do - carry out the test or change
  • Study – based on the measurable outcomes agreed before starting, collect data before and after the change, and reflect on the impact of the change and what was learned
  • Act - plan the next change cycle or full implementation

Pipeline model

  • Evidence enters the pipeline and flows through a variety of stages from awareness of the evidence to adherence by patients/clients. 
  • Between these are stages of acceptance of the evidence, applicability of the evidence and the ability to implement into the particular area of practice. 
  • Finally, there are stages of acting on the evidence, reaching agreement between practitioners and patients, and sustained adherence. It is only at this stage that patient outcomes will be affected.

RE-AIM

  • Reach (proportion of the target population that participated in the intervention).
  • Efficacy or effectiveness (success rate if implemented as in guidelines; defined as positive outcomes minus negative outcomes).
  • Adoption (proportion of settings, practices, and plans that will adopt intervention).
  • Implementation (extent to which intervention is implemented as intended in the real world).
  • Maintenance of intervention effects in individuals and settings over time.

Social theory model

  • Layers of culture and society at play in the work environment.

Theoretical domains framework

  • A theoretical framework that targets behavior change in health professionals and comprises 14 domains that encompass factors likely to influence healthcare professional behavior change: knowledge; skills; social/professional role and identity; beliefs about capabilities; optimism; beliefs about consequences; reinforcement; intentions; goals; memory, attention, and decision processes; environmental context and resources; social influences; emotion; and behavioral regulation.

The triple C model

  • Stage 1: Consultation
  • Stage 2: Collaboration
  • Stage 3: Consolidation

Translation research continuum or “T” models

  • Description and discovery.
  • From discovery to health application.
  • From health application to evidence guidelines.
  • From guidelines to health practice.
  • Evaluation of effectiveness and cost-effectiveness of such interventions in the real world and in diverse populations.

Trans-theoretical model

  • Pre-contemplation
  • Contemplation
  • Preparation
  • Action
  • Maintenance