Prior to developing a protocol for your review, some preliminary investigation of the literature is recommended to determine if studies are available on the topic of interest. If you have a strong feeling that there are no studies available on your review topic, your energies may be better directed towards a different endeavor than conducting an ‘empty’ review.
In order to avoid duplication, reviewers are advised to register their review title (see Section 1.2). Before registering a title, reviewers should also search major electronic databases to confirm that no systematic review on the same topic has recently been published. A search of the Cochrane Database of Systematic Reviews, PubMed/MEDLINE, PROSPERO and DARE, as well as our online journal, the JBI Database of Systematic Reviews and Implementation Reports, will help establish whether or not a recent review report exists on the topic of interest. The results of this search should be mentioned in the background of the systematic review protocol and review. If a systematic review on the topic of interest has already been conducted, consider the following questions to establish whether continuing with the review topic will be strategic.
- Is the date of last update longer than three years ago?
- Is it a high quality, well conducted systematic review?
- Do the methods reflect the specific criteria of interest for your topic?
- Is there a specific gap in terms of population, intervention or outcome that has not been addressed in the identified review?
If a systematic review (or protocol) already exists on your topic, think carefully before conducting your review. To reduce duplication and the waste of human resources, it may be best not to proceed. However, there may be important reasons why you should still conduct your review: your inclusion criteria may differ in terms of population, context, interventions and even study types, or you may plan to use different methods for searching, critical appraisal and synthesis. In these cases, duplication may be appropriate. The existing systematic review may also have flaws in its conduct or reporting that warrant a new review.
Authors may also wish to consider the technical resources available to them. The conduct of a systematic review is greatly facilitated by access to extensive library and electronic databases and the use of citation management software as well as software designed specifically to facilitate the conduct of a systematic review such as JBI SUMARI.
When preparing to undertake a systematic review, consideration needs to be given to the human as well as the technical resources needed to complete the review. To maintain the required standards of rigour and mitigate the risk of bias in the review process, JBI requires a minimum of two reviewers to conduct a systematic review. Authors should always consult the submission guidelines before submitting a manuscript to a journal. For example, the JBI Database of Systematic Reviews and Implementation Reports requires that at least one author has been trained in the JBI approach to systematic review, although it is ideal when all reviewers have undergone training. The skills and expertise required for a systematic review will vary depending on the nature of the review being undertaken and the methodology utilized. It is therefore recommended that a JBI systematic review be conducted by a team comprising individuals who possess the skills and knowledge required to conduct the review to a standard acceptable for publication in an international scientific periodical.
Dependent upon the type of review being conducted, review teams should ideally consist of members with:
- Knowledge of general JBI systematic review methodology such as formulating a review question, defining inclusion criteria and critical appraisal.
- Access to an information scientist or research librarian with the specialised skills to develop and implement a comprehensive search strategy.
- Specific methodological expertise required for the type of review being undertaken, for example knowledge of the statistical methods to be used, experience in qualitative synthesis, or experience with economic analyses for economic evaluations.
- Knowledge of the topic area. Representation from clinical specialties and consumers is recommended where the review is being undertaken by systematic reviewers/methodologists rather than topic experts.
- The ability to write a report in English to a publishable standard.
From the outset, the review team should consider expected contributions to the review project and eventual authorship. Some members of the review team may be better recognised in the acknowledgements of the published report rather than as authors. Conversely, part of the review team may be formally organised as a “Review Panel”, where some of the individuals with the attributes listed above provide formal advice and oversight throughout the conduct of the review, for example by reviewing the draft protocol and final manuscript submissions or by providing specific insight into the interpretation of data and the formulation of recommendations for practice and research. The names, contact details and areas of speciality of each member of the review panel should be included in both the protocol and the report.

Implementation science is defined as “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice and, hence, to improve the quality and effectiveness of health services and care” (p.1) (Eccles & Mittman 2006). This field of inquiry emerged out of a need to address the ongoing difficulties associated with getting research into practice (Nilsen 2015). It is well documented that existing barriers are a main contributor to the discrepancy between evidence-based recommendations and practice (Rainbird 2006; Pearson, Field & Jordan 2007). The findings of a systematic review identified three overarching domains of barriers and facilitators to implementing evidence into practice: system, staff and intervention (Geerligs et al. 2018). System-level barriers and facilitators include environmental context (staff time, workload, workflow, space and staff turnover), culture (attitude to change, commitment, motivation, roles/trust and champions), communication processes and external requirements (reporting, standards and guidelines) (Geerligs et al. 2018).
Staff-level barriers and facilitators include staff commitment and attitude, understanding and awareness, identification of individual roles, and skills, ability and confidence (Geerligs et al. 2018). Barriers and facilitators related to the intervention include the ease of integration (complexity, costs and required resources), validity of the evidence base, safety, legal and ethical concerns, and supportive components such as education and training, marketing and awareness (Geerligs et al. 2018).
Implementation science seeks to understand these barriers and facilitators, and to empower health professionals to utilise evidence-based approaches with the end goal of improving the quality and service of healthcare (Tabak et al. 2012). Implementation has been defined as “the methods to promote the systematic uptake of clinical research findings and other evidence-based practices into routine practice and hence improve the quality and effectiveness of healthcare policy and practice” (p.1) (Eccles & Mittman 2006).
A variety of theoretical approaches, models and frameworks are prescribed within this field, with the central aim of developing a better understanding and explanation of why and how implementation succeeds or fails (Atkins et al.; Ayanian & H. 2016; Brown & McCormack 2005; Gardner, Gardner & O'Connell 2014; Graham et al. 2006; Khalil 2017; Kitson, Harvey & McCormack 1998; Nilsen 2015; Prochaska & DiClemente 1983; Rogers 1995; Rycroft-Malone & Bucknall 2010; Rycroft-Malone et al. 2002). Table 1 below details some of the existing frameworks and models available to assist with the implementation of evidence into practice. The list is by no means comprehensive, but it does highlight the complexity involved in getting evidence into practice. A recent review examining the differences and similarities of research translation frameworks identified 41 frameworks and models, the four most published and cited being the Reach Effectiveness Adoption Implementation Maintenance (RE-AIM) framework, the Knowledge to Action (KTA) framework, knowledge translation continuum (“T”) models, and the PARiHS framework. All identified frameworks described the gap that exists between research knowledge and its application in policy and practice, and all acknowledged the importance and difficulty of closing this gap (Milat & Li 2017). A plethora of published information is available on the different frameworks and models.
Table 1: Description of Implementation Theories, Models and Frameworks

| Theory / Model / Framework | Description |
|---|---|
| Diffusion of innovation model | |
| Health education theory model | |
| JBI implementation framework | |
| Knowledge to action (KTA) framework | |
| PARiHS model | Research implementation expressed as a function of the relationships among evidence, context and facilitation |
| PDSA model | The model is cyclic, comprising four stages (Plan, Do, Study, Act) |
| Pipeline model | |
| RE-AIM | |
| Social theory model | |
| Theoretical domains framework | |
| The triple C model | |
| Translation research continuum or “T” models | |
| Trans-theoretical model | |