As the process relates to textual findings rather than numeric data, the need for methodological homogeneity – so important in the meta-analysis of the results of quantitative studies – is not a consideration.
This section of the report should include how the findings were synthesized. Where meta-aggregation is possible, textual findings should be pooled using JBI SUMARI. The units of extraction in this process are the specific conclusions stated by the author/speaker and the text that demonstrates the argument or basis for each conclusion. Conclusions are principal opinion statements embedded in the paper and are identified by the reviewer after examining the text; the conclusion is the claim or assertion of the author. It is for this reason that reviewers are required to read and re-read the paper closely to identify the conclusions to be entered into JBI SUMARI. Conclusions should be extracted as verbatim statements from the author.
The processes for categorization and formulating synthesized findings mirror that of the JBI SUMARI qualitative module. For a more detailed discussion of synthesis, reviewers are encouraged to read the section on data synthesis for qualitative studies.
Data synthesis should involve the aggregation or synthesis of findings to generate a set of statements that represent that aggregation, through assembling the findings rated according to their credibility and categorizing these findings on the basis of similarity in meaning. These categories should then be subjected to a meta-synthesis in order to produce a single comprehensive set of synthesized findings that can be used as a basis for evidence-based practice. Where textual pooling is not possible, the findings can be presented in narrative form.
Prior to carrying out data synthesis, reviewers first need to establish, and then document:
- their own rules for setting up categories
- how to assign conclusions (findings) to categories
- how to aggregate categories into synthesized findings.
In JBI SUMARI, a reviewer can add conclusions to a study after an extraction is completed on that paper.
The JBI approach to synthesizing the conclusions of textual or non-research studies requires reviewers to consider the credibility (logic, authenticity) of each report as a source of guidance for practice; identify and extract the conclusions from the papers included in the review; and aggregate these conclusions as synthesized findings.
The most complex problem in synthesizing textual data is agreeing on and communicating techniques to compare the conclusions of each publication. The JBI approach uses the SUMARI analytical module for the meta-synthesis of opinion and text. This process involves categorizing and re-categorizing the conclusions of two or more studies to develop synthesized findings. Reviewers should also document these decisions and their rationale in the systematic review report.
Many text- and opinion-based reports do not state conclusions explicitly. It is for this reason that reviewers are required to read and re-read each paper closely to identify the conclusions to be entered into JBI SUMARI.
Each conclusion/finding should be assigned a level of credibility, based on the congruency of the finding with the supporting data in the paper from which it was extracted. Textual evidence has three levels of credibility; the reviewer is therefore required to determine whether, when the conclusion is compared with its supporting argument, the conclusion represents evidence that is:
Unequivocal – relates to evidence beyond reasonable doubt; this may include conclusions that are matters of fact, directly reported/observed and not open to challenge.
Credible – relates to conclusions that are, albeit interpretations, plausible in light of the data and theoretical framework.
Not supported – relates to conclusions that are not supported by the data.
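As an illustrative sketch only (this is not part of JBI SUMARI, and every class, field, and example value below is invented for demonstration), the three credibility levels and their attachment to an extracted conclusion can be modelled as:

```python
from dataclasses import dataclass
from enum import Enum

class Credibility(Enum):
    """The three JBI credibility levels for textual evidence."""
    UNEQUIVOCAL = "unequivocal"      # beyond reasonable doubt; not open to challenge
    CREDIBLE = "credible"            # an interpretation, but plausible given the data
    NOT_SUPPORTED = "not_supported"  # not supported by the data

@dataclass
class Conclusion:
    text: str               # verbatim conclusion extracted from the author
    illustration: str       # text from the paper demonstrating the argument
    credibility: Credibility

# Hypothetical example of one extracted finding with its rating.
finding = Conclusion(
    text="Structured handover improves continuity of care.",
    illustration="The author argues from audit observations on three wards...",
    credibility=Credibility.CREDIBLE,
)
print(finding.credibility.value)  # credible
```

The point of the sketch is simply that every extracted conclusion carries exactly one of the three ratings alongside its verbatim text and supporting illustration; the rating itself remains an interpretive judgement made by the reviewer.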
In the systematic review report, the synthesis process may be set out in the following way.
Papers were pooled using JBI SUMARI. This involved a three-stage meta-aggregation process, followed by the development of recommendations:
1. Level 1 conclusions (the authors' conclusions) were extracted from full-text articles and each was rated according to its assessed credibility (unequivocal, credible or not supported).
2. Categories were developed and assigned (Level 2 conclusions) based on similarity of meaning of Level 1 conclusions.
3. A set of synthesized conclusions (Level 3 conclusions) was developed by subjecting the categories to meta-synthesis; these represent the meta-aggregation of the Level 1 and Level 2 conclusions.
4. Recommendations for practice and research were developed from the meta-syntheses and graded according to JBI Grades of Recommendation.
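The three aggregation stages above can be sketched as a simple data flow. This is illustrative only: all names and values are hypothetical, this is not the JBI SUMARI implementation, and in practice the grouping of conclusions by similarity of meaning is an interpretive decision made by reviewers, represented here as a pre-made assignment rather than an automated step.

```python
from collections import defaultdict

# Level 1: extracted conclusions, each with a credibility rating and a
# reviewer-assigned category (the categorization itself is a human judgement).
level1 = [
    {"conclusion": "A", "credibility": "unequivocal", "category": "communication"},
    {"conclusion": "B", "credibility": "credible",    "category": "communication"},
    {"conclusion": "C", "credibility": "credible",    "category": "training"},
]

# Level 2: group Level 1 conclusions into categories by similarity of meaning.
level2 = defaultdict(list)
for f in level1:
    level2[f["category"]].append(f["conclusion"])

# Level 3: subject the categories to meta-synthesis, producing one
# comprehensive set of synthesized findings (a hypothetical placeholder here).
level3 = {
    "synthesized_finding": "Single synthesized statement drawn from all categories",
    "categories": dict(level2),
}

print(sorted(level3["categories"]))  # ['communication', 'training']
```

The structure makes the audit trail explicit: each Level 3 synthesized finding can be traced back through its Level 2 categories to the rated Level 1 conclusions that support it, which is the documentation reviewers are asked to provide in the review report.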
- Have all of the conclusions been extracted from the included papers?
- Do all of the conclusions have illustrations?
- Do all of the conclusions have levels of credibility assigned to them?