3.3.11 Assessing methodological quality of qualitative studies
The assessment of methodological quality in qualitative studies has been a topic of significant debate over the years, particularly concerning its role in qualitative evidence synthesis (Carroll & Booth 2015; Hannes, Lockwood & Pearson 2010; Mays & Pope 2000; Porritt et al. 2014). The JBI Qualitative Methodology Group has consistently advocated for the evaluation of methodological quality in systematic reviews, especially as the findings of a meta-aggregative review are intended to inform policy or practice. In recent years, there has been growing consensus that assessing the methodological quality of qualitative studies in evidence synthesis is essential. However, debate continues over which criteria or checklists should be used to evaluate the methodological strengths and limitations of qualitative research, whether studies should be excluded following appraisal, and whether cut-off points or sum scales should be used (Carroll & Booth 2015; Garratt & Hodkinson 1998; Lockwood et al. 2023; Noyes et al. 2019).
The JBI Qualitative Methodology Group adopts the position that methodological quality assessment constitutes a crucial aspect of the systematic review process and recommends the use of a tool to assist with the assessment (Lockwood et al. 2015; Porritt et al. 2014). Over one hundred tools or checklists are available to assess the quality of qualitative studies (Munthe-Kaas et al. 2019). For JBI qualitative systematic reviews, the JBI Qualitative Critical Appraisal Tool is recommended (Lockwood et al. 2015). A detailed discussion on tools for meta-aggregative reviews is published elsewhere (Hannes et al. 2010).
JBI critical appraisal tool
The JBI critical appraisal tool is a well-established, validated tool (Hannes et al. 2010) that has been used in hundreds of qualitative systematic reviews since the early 2000s (Pearson 2004; Pearson et al. 2006). During the update of this chapter, the critical appraisal tool was reviewed by members of the Methodology Group. The questions in the tool remain the same; however, the accompanying explanations have been elaborated or modified slightly to provide greater clarity to review teams.
The JBI qualitative critical appraisal tool consists of ten questions, and detailed guidance is provided for each question. When a specific criterion is not met, the review team is advised to assess the entire paper to determine whether the study should still be included in the review, and to transparently report the rationale for including such studies. A downloadable version of the tool is available; the tool is also embedded in the supporting software, JBI SUMARI. The ten questions of the tool are discussed below.
Is there congruity between the philosophical or theoretical positioning of the study and the methodological approach that has been adopted?
This question relates to whether the philosophical or theoretical positioning and the chosen methodology are aligned. For example, a report may state that the study adopted a critical perspective and followed a participatory action research methodology. In this example, there is congruence between a critical view, which focuses on knowledge arising out of critique, action and reflection, and action research, which works with groups to reflect on issues or practices in order to create change; a ‘Yes’ category would therefore be selected.
Alternatively, a report may state that the study adopted an interpretive perspective and used survey methodology. Here there is incongruence between an interpretive view, which focuses on knowledge arising from studying what phenomena mean to individuals or groups, and surveys which pose standard questions to a defined study population. In this case, a ‘No’ category would be selected.
Some studies may report only that the study was ‘qualitative’ or used ‘qualitative methodology’; other studies may make no statement at all regarding philosophical orientation or methodology. In these cases, even where there is evidence in the report that the data were collected and analysed in a way that is congruent with a critical or interpretive perspective (e.g. semi-structured interviews with qualitative description), reviewers may assign ‘Unclear’ to this question, since insufficient information is available to judge congruity.
Is there congruity between the research methodology and the research question or objectives?
This question relates to whether the study methodology is appropriate for addressing the research question/objectives. For example, a report may state that the research question was to understand the meaning of pain in a group of people with rheumatoid arthritis and that a phenomenological approach was taken. Here, there is congruity between the question and the methodology; the ‘Yes’ category would therefore be assigned.
Conversely, a report may state that the research question was to establish the effects of counselling on the severity of pain experience and that an ethnographic approach was pursued. A question that aims to establish cause and effect cannot be addressed through an ethnographic approach, as ethnography seeks to understand cultural practices. Thus, this approach would be incongruent and a ‘No’ category would be assigned.
Is there congruity between the research methodology and the methods used to collect data?
This question relates to the alignment between the chosen methodology and the methods used to collect data. For example, a report may state that the study pursued a phenomenological approach and data were collected through phenomenological interviews. There is congruence between the methodology and data collection.
Alternatively, a report may state that the study pursued a phenomenological approach and data were collected through a postal questionnaire. There is incongruence between the methodology and data collection here, as phenomenology seeks to elicit rich descriptions of the experience of a phenomenon that would not normally be achieved through written responses to standardised questions.
Is there congruity between the research methodology and the representation and analysis of data?
This question relates to whether there is alignment between the chosen methodology and the analytical process and the way in which data were represented. For example, a report may state that the study pursued a phenomenological approach to explore people’s experience of grief by asking participants to describe their experiences of grief. If the text generated from asking these questions is searched to establish the meaning of grief to the participants and the meanings of all participants are included in the report findings, then this represents congruity and the ‘Yes’ category would be selected.
The same report may, however, focus only on those meanings that were common to all participants and discard divergent or single reported meanings. This would not be appropriate in a phenomenological study and would, therefore, be assigned ‘No’. ‘Unclear’ would be assigned to a report where the methodology and/or analytical process were not outlined, since insufficient information would be available to make a judgement.
Is there congruity between the research methodology and the interpretation of results?
Although certain qualitative approaches do not involve direct interpretation of the data, results are usually contextualised and interpreted as key findings. This question specifically relates to whether the interpretation of the results aligns with the chosen methodology. For example, a report may state that the study pursued a phenomenological approach to explore people’s experience of facial disfigurement and the results are used to inform practitioners about accommodating individual differences in care. There is congruence between the methodology and this approach to interpretation.
Alternatively, a report may state that the study pursued a phenomenological approach to explore people’s experience of facial disfigurement and the results are used to generate practice checklists for assessment. There is incongruence between the methodology and this approach to interpretation as phenomenology seeks to understand the meaning of a phenomenon for the study participants and cannot be generalised to total populations to a degree where standardised assessments are relevant across a population.
Is there a statement locating the researcher culturally or theoretically?
This question relates to the issue of researcher reflexivity and whether the identities and personal/professional perspectives of the research team have been explicitly acknowledged and addressed. In qualitative research, it is important to recognise that researchers’ social, cultural or professional identities and theoretical/disciplinary backgrounds will influence how the research topic is chosen, how the problem is framed, what questions are explored (or not explored) and how the data are interpreted. A reflexive account within the research report helps the reader to critically consider the dependability of the research findings by highlighting how the researchers’ values and background may have influenced the knowledge produced. An example of this in a report might include a description of how the research team’s commitment to equity, diversity and inclusion shaped their approach to participant selection and the framing of research questions, or a statement recognising the cultural background and lived experience of team members and how these factors shape their understanding, focus and interpretation in the research process.
Is the influence of the researcher on the research and vice versa, addressed?
This question relates to the practice of reflexivity within the research process. It recognises that knowledge is produced from interactions within the research team and from interactions between the researchers and the participants. The dependability of a study is optimised when there is an explicit discussion of these interpersonal interactions and how they may have influenced decision-making or other events and processes within the research. An example of this in a report might include a description of the process of ‘bracketing’, whereby researchers attempt to recognise and reflect upon their own presuppositions or when there is consideration of how potential power relationships (or differences) between researchers and study participants were identified and addressed (e.g. when the researchers are health professionals and are conducting research with patients).
Are participants and, where appropriate, their voices adequately represented?
This question relates to the way in which researchers have chosen to represent the views/experiences/interactions of the participants. Generally, reports should provide illustrations (e.g. quotations, diary extracts, blogs, photographs, observational memos) from the data to show the basis of their conclusions and to ensure that participants are represented in the report. Additionally, researchers can employ various techniques to ensure that the data are representative, such as prolonged engagement, persistent observation, triangulation, peer debriefing, negative case analysis, referential adequacy or member-checking.
The participants included in the study should be described (e.g. number of participants, age, gender, ethnicity and other characteristics, such as diagnosis), as relevant to the study aim.
Is the research ethical according to current criteria, and for recent studies, is there evidence of ethical approval by an appropriate body?
This question relates to the ethical conduct of research. In some jurisdictions and institutions, formal ethical approval may not be required for qualitative research. However, researchers must adhere to ethical principles in research. Researchers should state that, where required, institutional or other ethical approval processes have been followed. If ethical approval was not required, there should be evidence that ethical standards have been met (e.g. there is an explanation of the informed consent process).
Do the conclusions drawn in the research report flow from the analysis or interpretation of the data?
This question relates to whether the authors draw conclusions based on their presentation of the findings and the views or words of study participants, rather than their own views, which may not be based on the data. In appraising a paper, appraisers seek to satisfy themselves that the conclusions drawn by the researchers are based on the data collected, with the data being the text generated through observation, interviews, or other processes.
Key concepts related to quality in qualitative research
Assessing methodological quality focuses on the rigour and credibility of the research design and conduct. It is important to distinguish between the assessment of methodological quality and the assessment of reporting quality; these are distinct yet interconnected concepts within research evaluation, particularly in systematic reviews. Assessment of methodological quality refers to the evaluation of the rigour and soundness of the design and conduct of a research study. Assessment of reporting quality, by contrast, focuses on the clarity, transparency and completeness with which the research is documented and presented. Together, both assessments provide a comprehensive understanding of the strengths and limitations of a study.
Dependability, reflexivity, credibility and transferability are concepts commonly referred to when discussing the rigour and quality of qualitative studies (Lincoln & Guba 1985).
Dependability ‘implies trackable variability; that is, variability that can be ascribed to identified sources’ (Krefting 1991, p.216). Dependability can be established if the research process is logical (i.e. the methods are suitable to answer the research question and are in line with the chosen methodology), traceable and clearly documented (Munn et al. 2014).
Reflexivity involves researchers acknowledging their own subjectivity, biases and personal perspectives during the research process. It encourages self-awareness and transparency in recognising the potential influence of the researcher on the study. Reflexivity helps in understanding how the values and perspectives of the researcher may affect data collection, analysis and interpretation.
Transferability refers to the extent to which the findings of a qualitative study can be transferred to other contexts, settings or groups. Researchers enhance transferability by providing detailed and rich descriptions of the study's context, methods and participants, which allows others to assess the relevance of the findings to their own settings.
Credibility refers to the confidence in the plausibility and accuracy of the research findings. It involves ensuring that the results genuinely reflect the participants' experiences and perspectives.
The concepts above underpin the items in the JBI checklist for qualitative research and JBI’s approach to establishing confidence in qualitative review findings, ConQual, which is discussed in more detail later in this chapter (Munn et al. 2014).
Pilot-testing the tool
When assessing the quality of studies, it is recommended to first pilot-test the tool among the reviewers. The purpose of piloting is to ensure alignment between the reviewers when they interpret and apply the appraisal tool questions. Piloting will uncover areas of ambiguity in how the reviewers may be understanding the critical appraisal questions and help the team to clarify their stance in applying the questions to different kinds of qualitative studies.
Once pilot-testing is complete, two reviewers should independently assess the retrieved studies and then confer to reach a final agreed evaluation. Any disagreements are resolved through consensus or through discussion with a third reviewer well-versed in qualitative research.
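The comparison step above can be illustrated with a short sketch. The helper below is hypothetical (not part of the JBI tool or JBI SUMARI); it simply tallies agreement between two reviewers' independent ratings of one study against the ten criteria and flags the items needing consensus discussion.

```python
# Illustrative sketch only: comparing two reviewers' independent appraisals
# of a single study against the ten JBI criteria (here labelled Q1..Q10).
# The ratings and the compare_appraisals helper are hypothetical examples.

VALID = {"Yes", "No", "Unclear", "Not applicable"}

def compare_appraisals(reviewer_a, reviewer_b):
    """Return percent agreement and the criteria needing discussion."""
    assert reviewer_a.keys() == reviewer_b.keys()
    assert all(r in VALID for r in list(reviewer_a.values()) + list(reviewer_b.values()))
    disagreements = [q for q in reviewer_a if reviewer_a[q] != reviewer_b[q]]
    agreement = 100 * (len(reviewer_a) - len(disagreements)) / len(reviewer_a)
    return agreement, disagreements

# Example: reviewer B rates the two reflexivity items differently
a = {f"Q{i}": "Yes" for i in range(1, 11)}
b = dict(a, Q6="Unclear", Q7="No")
agreement, to_discuss = compare_appraisals(a, b)
print(f"{agreement:.0f}% agreement; discuss: {to_discuss}")
# prints: 80% agreement; discuss: ['Q6', 'Q7']
```

In practice, the flagged items would be taken to a consensus meeting or to a third reviewer, as described above; the sketch only makes the bookkeeping explicit.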
Inclusion or exclusion of studies in the review
As mentioned earlier, the decision to include or exclude studies based on appraisal of methodological quality continues to be a topic of debate. It is up to the review team to decide whether studies will be included or excluded based on the outcome of the quality assessment, and a clear and detailed justification for this decision is required. If teams decide to exclude studies, they are expected to provide a detailed rationale regarding which criteria were used and why particular criteria were selected as key markers of study quality. These decisions should be made prior to commencing the review and noted in the protocol.
Common challenges with assessing the quality of qualitative research
Identifying congruity between philosophical perspective, methodology and methods can be challenging for a novice qualitative researcher. For this reason, it is recommended to include a person with qualitative research knowledge and experience in the review team when undertaking any type of qualitative synthesis.
Identifying the philosophical perspective underlying the research methodology requires an understanding of the key philosophical assumptions that guide it. For example, grounded theory is often associated with a constructivist epistemology, while phenomenology is rooted in phenomenological philosophy. It is also important to understand the key tenets of each of the philosophical perspectives.
With this understanding, the reviewer can examine how the methodology aligns with the key tenets of the philosophical perspective, including its approach to knowledge generation, understanding of reality and the role of the researcher.
For example, phenomenology as a research methodology is guided by a philosophical perspective that emphasises individuals’ subjective experiences and the meaning they ascribe to those experiences. Phenomenology seeks to explore and describe the essence of human experience, often through in-depth interviews with participants. A paper that reports this would be considered congruent in its alignment of philosophical perspective, methodology and methods.
Let us now consider ethnography. The philosophical perspective of ethnography is rooted in interpretivism, which focuses on human behaviour and social phenomena from the perspective of those being studied. Ethnography seeks to provide an in-depth, holistic understanding of the culture or social group being studied and believes that culture and social phenomena are best understood in their natural context. You would expect the researcher to immerse themselves in the culture and social group of interest and collect data via open-ended or semi-structured interviews, field notes, photographs, etc.
Another research methodology is content analysis. This methodology is guided by a hermeneutic approach, which emphasises the researcher’s pre-understanding of the phenomenon under study when describing and explaining the views of those studied (Graneheim, Lindgren & Lundman 2017). The researcher’s pre-assumptions should therefore be stated, and data should be collected via semi-structured interviews, field notes, documents, etc.
A qualitative descriptive research methodology provides broad insights into a phenomenon and seeks to describe experiences and perceptions (Kim, Sefcik & Bradway 2017). It is not grounded in a specific philosophical perspective but has been aligned with constructionism, critical theories and pragmatism (Doyle et al. 2020). Qualitative descriptive studies aim to explore, describe, identify and understand the participants’ experience of a certain phenomenon (Doyle et al. 2020; Kim et al. 2017). The data collection methods often include open-ended interviews, focus groups, telephone interviews or online approaches (Doyle et al. 2020). A study using a qualitative descriptive methodology would employ exploratory, inductive approaches, with content and thematic analysis being the most common data analysis techniques used (Doyle et al. 2020; Kim et al. 2017).
Overall, determining the alignment between philosophical perspective, research methodology and methods requires a thorough understanding of all components and an ability to assess any incongruence or divergence between them.