Systematic and Systematic-like Reviews

Step 5: Critical appraisal

Now that you’ve chosen the studies to include in your review, you need to check that they followed sound scientific principles and that their outcomes are valid.

Critical appraisal has been described as "the process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context" (Burls, 2009).

  • Has the research been conducted in a way that minimises bias? (Is it trustworthy?)

  • If so, what does the study show? (What is its value?)

  • What do the results mean for a particular context in which a decision is being made? (Is it relevant to your research question?)

Traditional literature reviews and scoping reviews require only a fairly general critical analysis, but for a systematic review each included study must be appraised. That sounds daunting, but fortunately a number of tools have been developed for different study types in different disciplines (mainly health) to guide you through the process of determining the validity and accuracy of the information in individual studies.

Quick appraisal

If you're not completing a full systematic review and need to quickly analyse the studies you've retrieved, here are some considerations:

  • Relevance – Compare the study to your search framework (such as PICO).
  • Results – Were statistical tests applied, and did they indicate the findings were statistically significant?
  • Applicability to your research question – Did the researchers' findings contribute to answering your original question? Is their study population similar to yours, or quite different?
  • Quality of study – There are many tools available for a rapid critical appraisal of study quality. For example, if you were reviewing controlled clinical trials you could use the RAMMbo appraisal method (Salisbury, Glasziou, & Del Mar, 2007):

R – Recruitment – Were the subjects chosen for the study representative of the target population? Were there enough subjects to make the study valid?

A – Allocation – Was the trial randomised?

M – Maintenance – Was the status of the control group and the study group maintained throughout the trial? Were they treated the same way apart from the intervention?

M – Measurement (blinding, objective measures) – Were the outcomes measured objectively and the subjects blinded to the intervention? Was bias eliminated as much as possible?

Salisbury, J., Glasziou, P., & Del Mar, C. (2007). Evidence-based practice workbook: Bridging the gap between health care research and practice (2nd ed.). Oxford: Blackwell/BMJ Books.

Levels of Evidence

Not all the studies you have retrieved in your searches will be of the same quality, and many researchers have developed tables to explain this. For example, systematic reviews and meta-analyses rank very highly because they have synthesised a wide range of evidence, while case studies rank low because their results stand in isolation from other evidence.

[Image: The 6S hierarchy of evidence pyramid, with systems at the top, then summaries, synopses of syntheses, syntheses, synopses of studies, and individual studies at the pyramid base.]

Here are some examples of ranking the levels of evidence:

Tools for critical appraisal

There are many useful tools to help you assess the value of various types of studies. Buccheri and Sharifi (2017) outlined the uses, strengths, and weaknesses of various appraisal tools for nursing research. Again, this is an area where the health sciences are particularly strong.

The "How to read a paper" series from The BMJ is also very handy, linking to papers that explain how to read and interpret different kinds of research.

Some freely available critical appraisal tools include: