
The Joanna Briggs Institute Critical Appraisal tools
for use in JBI Systematic Reviews

Checklist for
Case Control Studies

The Joanna Briggs Institute


The Joanna Briggs Institute (JBI) is an international, membership-based research and development organization within the Faculty of Health Sciences at the University of Adelaide. The Institute specializes in promoting and supporting evidence-based healthcare by providing access to resources for professionals in nursing, midwifery, medicine, and allied health. With over 80 collaborating centres and entities serving over 90 countries, the Institute is a recognized global leader in evidence-based healthcare.

JBI Systematic Reviews

The core of evidence synthesis is the systematic review of the literature on a particular intervention, condition or issue. A systematic review is essentially an analysis of the available literature (that is, the evidence) and a judgment of the effectiveness or otherwise of a practice, involving a series of complex steps. JBI takes a particular view on what counts as evidence and the methods utilized to synthesize those different types of evidence. In line with this broader view of evidence, the Institute has developed theories, methodologies and rigorous processes for the critical appraisal and synthesis of these diverse forms of evidence in order to aid clinical decision-making in health care. JBI guidance now exists for conducting reviews of effectiveness research, qualitative research, prevalence/incidence, etiology/risk, economic evaluations, text/opinion, diagnostic test accuracy, mixed methods, umbrella reviews and scoping reviews. Further information regarding JBI systematic reviews can be found in the JBI Reviewer’s Manual on our website.

JBI Critical Appraisal Tools

All systematic reviews incorporate a process of critique or appraisal of the research evidence. The purpose of this appraisal is to assess the methodological quality of a study and to determine the extent to which a study has addressed the possibility of bias in its design, conduct and analysis. All papers selected for inclusion in the systematic review (that is, those that meet the inclusion criteria described in the protocol) need to be subjected to rigorous appraisal by two critical appraisers. The results of this appraisal can then be used to inform synthesis and interpretation of the results of the study. JBI critical appraisal tools have been developed by JBI and collaborators and approved by the JBI Scientific Committee following extensive peer review. Although designed for use in systematic reviews, JBI critical appraisal tools can also be used when creating Critically Appraised Topics (CATs), in journal clubs and as an educational tool.

JBI Critical Appraisal Checklist for Case Control Studies

Reviewer: ______________ Date: ______________

Author: ______________ Year: ______ Record Number: ______

Answer each item: Yes / No / Unclear / Not applicable

1. Were the groups comparable other than the presence of disease in cases or the absence of disease in controls?
2. Were cases and controls matched appropriately?
3. Were the same criteria used for identification of cases and controls?
4. Was exposure measured in a standard, valid and reliable way?
5. Was exposure measured in the same way for cases and controls?
6. Were confounding factors identified?
7. Were strategies to deal with confounding factors stated?
8. Were outcomes assessed in a standard, valid and reliable way for cases and controls?
9. Was the exposure period of interest long enough to be meaningful?
10. Was appropriate statistical analysis used?

Overall appraisal: Include □ Exclude □ Seek further info □

Comments (Including reason for exclusion)

Explanation of case control studies critical appraisal

How to cite: Moola S, Munn Z, Tufanaru C, Aromataris E, Sears K, Sfetcu R, Currie M, Qureshi R, Mattis P, Lisy K, Mu P-F. Chapter 7: Systematic reviews of etiology and risk. In: Aromataris E, Munn Z (Editors). Joanna Briggs Institute Reviewer’s Manual. The Joanna Briggs Institute, 2017. Available from

Case Control Studies Critical Appraisal Tool

Answers: Yes, No, Unclear or Not applicable

  1. Were the groups comparable other than presence of disease in cases or absence of disease in controls?

The control group should be representative of the source population that produced the cases. This is usually achieved by individual matching, wherein controls are selected for each case on the basis of similarity with respect to certain characteristics other than the exposure of interest. Frequency or group matching is an alternative method. Selection bias may result if the groups are not comparable.

  2. Were cases and controls matched appropriately?

As in item 1, the study should include a clear definition of the source population. The sources from which cases and controls were recruited should be examined carefully. For example, cancer registries may be used to recruit participants in a study examining risk factors for lung cancer; such studies typify population-based case control studies. Study participants may be selected from the target population, the source population, or a pool of eligible participants (as in hospital-based case control studies).

  3. Were the same criteria used for identification of cases and controls?

It is useful to determine whether patients were included in the study on the basis of a specified diagnosis or definition, as this decreases the risk of bias. Matching on key characteristics is another useful approach, and studies that did not use specified diagnostic methods or definitions should provide evidence of matching by key characteristics. A case should be defined clearly. It is also important that controls fulfil all the eligibility criteria defined for the cases except those relating to diagnosis of the disease.