JBI-Reviewers Manual-2011 HR PDF
Reviewers’ Manual
2011 Edition
Joanna Briggs Institute Reviewers’ Manual: 2011 edition
Copyright © The Joanna Briggs Institute 2011
The Joanna Briggs Institute
The University of Adelaide
South Australia 5005
AUSTRALIA
ABN: 80 230 154 545
Phone: +61 8 8303 4880
Fax: +61 8 8303 4881
Email: jbi@adelaide.edu.au
Web: www.joannabriggs.edu.au
Some of the images featured in this book contain photographs obtained from publicly available electronic
sources that list these materials as unrestricted images. The Joanna Briggs Institute is in no way associated
with these public sources and accepts all claims of free use of these images in good faith.
All trademarks, designs and logos remain the property of their respective owners.
Permissions to reproduce the author’s original material first published elsewhere have been obtained where
possible. Details of these original publications are included in the notes and glossary pages at the end of
this book.
Published by the Joanna Briggs Institute, 2011
Prepared for the Joanna Briggs Institute by the Synthesis Science Unit.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted
in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the
prior permission of The Joanna Briggs Institute. Requests and enquiries concerning reproduction and rights
should be addressed to the Publisher at the above address.
Author: The Joanna Briggs Institute
Title: Joanna Briggs Institute Reviewers’ Manual: 2011 edition
Publisher: The Joanna Briggs Institute
ISBN: 978-1-920684-09-9
Subjects: Systematic Review; Protocol; Effectiveness; Qualitative; Economic; and Methods
Foreword
The Joanna Briggs Institute (JBI) is now in its fifteenth year of operation and has grown into an
international not-for-profit research and development organisation within the Faculty of Health
Sciences at the University of Adelaide.
We collaborate internationally with over 70 entities across the world that subscribe to our
definition of what constitutes evidence and to our methodologies and methods of evidence
synthesis. The Institute and its collaborating entities promote and support the synthesis, transfer
and utilisation of evidence through identifying feasible, appropriate, meaningful and effective
healthcare practices to assist in the improvement of healthcare outcomes globally.
Our major role is the global translation of research evidence into practice. We work closely
with the Cochrane Collaboration and the Campbell Collaboration and encourage the conduct
of reviews of effects (involving the meta-analysis of the results of randomised controlled trials)
through Cochrane Review Groups.
Our strength is in the conduct of systematic reviews of the results of research that utilises other
approaches, particularly qualitative research, economic research and policy research. This
broad, inclusive approach to evidence is important when the association between health care
and social, cultural and economic factors is considered.
It is highly recommended that all reviewers, associate reviewers and potential reviewers read this
handbook in conjunction with the user guide for the relevant analytical modules of JBI SUMARI
and JBI-CReMS.
We highly value the contribution of reviewers to the international body of literature used to inform
clinical decision-making at the point of care. It is important that this work continues and is
distributed in a variety of formats to both those working in and using health systems across the
world. We hope that this work will contribute to improved global health outcomes.
Contents
SECTION 1 – Introductory Information ........................................... 7
Introduction:
Purpose of this Manual ......................................................... 7
Planning a JBI systematic review ............................................... 7
SECTION 2 – Qualitative Evidence .............................................. 17
Chapter One:
Qualitative Evidence and Evidence-Based Practice .............................. 17
Chapter Two:
Qualitative Protocol and Title Development .................................... 22
Chapter Three:
The Systematic Review and Synthesis of Qualitative Data ....................... 33
SECTION 3 – Quantitative Evidence ............................................. 45
Chapter Four:
Quantitative Evidence and Evidence-Based Practice ............................. 45
Chapter Five:
Quantitative Protocol and Title Development ................................... 49
Chapter Six:
The Systematic Review and Synthesis of Quantitative Data ...................... 62
SECTION 4 – Economic Evidence ................................................. 77
Chapter Seven:
Economic Evidence and Evidence-Based Practice ................................. 77
Chapter Eight:
Economic Protocol and Title Development ....................................... 81
Chapter Nine:
The Systematic Review and Synthesis of Economic Data .......................... 92
SECTION 5 – Text and Opinion Based Evidence .................................. 107
Chapter Ten:
Text and Opinion Based Evidence and Evidence Based Practice:
Protocol and Title Development for Reviews of Textual,
Non-research Evidence ........................................................ 107
Chapter Eleven:
The Systematic Review and Synthesis of Text and Opinion Data ................. 114
SECTION 6 – Publishing ....................................................... 127
Publication of JBI Reviews ................................................... 127
The Synthesis Science Unit ................................................... 129
Reviewer training and accreditation .......................................... 130
The role of Centres and of Evidence Synthesis Groups (ESGs) .................. 132
Companion Publications ....................................................... 134
Joanna Briggs Institute 5
Reviewers’ Manual 2011
REFERENCES ................................................................... 135
GLOSSARY ..................................................................... 138
APPENDICES ................................................................... 143
Appendix I – Title registration form ......................................... 144
Appendix II – Critical appraisal tool for Systematic reviews ................. 145
Appendix III – Data extraction tools for Systematic reviews .................. 146
Appendix IV – QARI critical appraisal tools .................................. 147
Appendix V – QARI data extraction tools ...................................... 148
Appendix V – QARI data extraction tools – Findings ........................... 149
Appendix VI – JBI Levels of evidence ......................................... 150
Appendix VII – MAStARI critical appraisal tools
  Randomised Control / Pseudo-randomised Trial ............................... 151
Appendix VII – MAStARI critical appraisal tools
  Comparable Cohort / Case Control Studies ................................... 152
Appendix VII – MAStARI critical appraisal tools
  Descriptive / Case Series Studies .......................................... 153
Appendix VIII – Discussion of MAStARI critical appraisal checklist items ..... 154
Appendix IX – MAStARI data extraction tools
  Extraction details .......................................................... 161
Appendix IX – MAStARI data extraction tools
  Dichotomous Data – Continuous Data ......................................... 162
Appendix X – ACTUARI critical appraisal tools ................................ 163
Appendix XI – Discussion of ACTUARI critical appraisal checklist items ....... 164
Appendix XII – ACTUARI data extraction tools ................................. 167
Appendix XII – ACTUARI data extraction tools
  Clinical effectiveness and economic results ................................ 168
Appendix XIII – NOTARI critical appraisal tools .............................. 169
Appendix XIV – NOTARI data extraction tools (Conclusions) .................... 170
Appendix XV – Some notes on searching for evidence ........................... 171
SECTION 1 – Introductory Information
Introduction:
Purpose of this Manual
The JBI Reviewers’ Manual is designed to provide authors with a comprehensive guide to
conducting JBI systematic reviews. It describes in detail the process of planning, undertaking
and writing up a systematic review of qualitative, quantitative, economic, text and opinion based
evidence. It also outlines JBI support mechanisms for those doing review work and opportunities
for publication and training. The JBI Reviewers’ Manual should be used in conjunction with the
SUMARI User Guide.
Planning a JBI systematic review
The JBI Synthesis Science Unit (SSU) accepts for peer review (and publication) the following
review types:
systematic reviews of primary research studies (quantitative, qualitative, health economic
evaluation);
comprehensive systematic reviews (a systematic review which considers 2 or more types
of evidence: quantitative, qualitative, health economic evaluation, textual evidence);
systematic reviews of text and opinion data;
overview of reviews (“umbrella reviews” or systematic reviews of systematic reviews); and
scoping reviews.
Reviewers are encouraged to register their review title. This enables other centres to identify
topics that are currently in development and avoids accidental duplication of topics. Once
registered, a title is valid for 6 months from the date of entry in the database. Should a protocol
not be completed within that timeframe for a nominated topic, the topic becomes de-registered
and available to any other JBI entity whose members may wish to conduct the review. A review
title becomes registered with JBI on completion of the title registration form (Appendix I).
The form should be downloaded from the website and, once complete, emailed to the
Synthesis Science Unit (SSU). Once titles become registered with JBI, they are listed
on the website:
http://www.joannabriggs.edu.au/Access%20Evidence/Systematic%20Review%20
Registered%20Titles
These and other points of consideration are detailed in the subsequent sections of this
handbook.
JBI Reviewers
Reviewers from the Joanna Briggs Collaboration who have undergone JBI Comprehensive
Systematic Review (CSR) training (or equivalent Cochrane or Campbell Collaboration systematic
review training) are eligible to submit a JBI systematic review. A reviewer can submit through the
following JBI entities:
Collaborating centres;
Affiliate centres;
Evidence Synthesis Groups (ESGs); or
through SSU directly as a Remote reviewer.
All Reviewers should have completed the JBI CSR training program or equivalent systematic
review training programs (Cochrane or Campbell) within the last 3 years, and been an active
contributor to the development of systematic reviews for JBI including reviews co-registered with
the Cochrane Collaboration. If this is not possible, at least the first or second reviewer should
have completed the JBI training program. JBI keeps a record of who has undergone JBI CSR
training.
Reviewers associated with a JBI entity should be listed as core staff members of that entity on
the JBI website. Students undertaking systematic reviews through a collaborating entity should
also be listed as core staff in the same way. There is no similar requirement for remote reviewers,
who should submit protocols directly to SSU.
For reviews that are co-registered with either the Cochrane Collaboration or the Campbell
Collaboration, review authors must register their title with respective review groups, prior to
submission with JBI. The submission to either Cochrane or Campbell must include the JBI
centre name and affiliation with JBI to be valid.
Currently, review authors are required to notify JBI of their intent to co-register a systematic
review; however, all peer reviewing is undertaken through either Cochrane Review Groups
(CRGs) or the Campbell Collaboration, as appropriate. Once the protocol has been approved,
reviewers are required to send a copy of their protocol to their nominated SSU contact. This also
applies to the subsequent systematic review report.
Review authors are required to list their JBI affiliation (i.e. Centre details) on the protocol for
copyright reasons.
The Reviewer’s affiliation with a JBI Centre/ESG must be stated on protocols and systematic
reviews in order for them to be considered Centre output.
There may however be technical difficulties associated with conducting an umbrella review,
such as:
Lack of high quality systematic reviews to include;
Interpretation may be difficult as the review is far removed from original primary studies
and original data may be misinterpreted; and
Lack of appropriate critical appraisal instruments.
Scoping Reviews
Scoping reviews can be useful as a preliminary assessment in determining the size and scope
of a body of literature on a topic, with the aim of identifying what research exists and where the
gaps are. 1
The JBI approach
JBI will consider scoping reviews for publication; however, SUMARI software does not currently
support the conduct of scoping reviews, and reviewers interested in conducting this type of
review should contact the SSU for guidance.
Review Panels
It is recommended that review panels are established on commencement of a new systematic
review, or on update of an existing systematic review. The review panel should consist of experts
in review methods (i.e. persons who have completed JBI or Cochrane systematic review training),
experts in the content area (i.e. nationally or internationally recognised experts in the field of
research and/or practice), together with a lay/consumer representative.
The type of knowledge needed for a particular review may vary according to the topic and
scope of the review. It is recommended that the review panel meet throughout the progress of
the project – either face-to-face or via teleconference, as appropriate. Suggested key stages of
panel input are:
prior to submission of the protocol to the JBI SSU;
prior to submission of the report in its first draft; and
prior to submission of the report in its final draft.
The names, contact details and areas of speciality of each member of the review panel should be
included in both the protocol and the report.
All Centres and ESGs are required to develop processes to determine priority areas for review.
Topics for systematic reviews conducted by JBI entities may be sourced from within the centre,
from the constituency that the centre represents, or topics may be specified by grant or tender
opportunities. Centres may use a variety of techniques to identify relevant needs from their
jurisdiction and to target their review program at specific areas of health.
A range of mnemonics is available to guide the structuring of systematic review questions, the most
common for quantitative reviews being PICO. The PICO mnemonic begins with identification of
the Population, the Intervention being investigated and its Comparator and ends with a specific
Outcome(s) of interest to the review. A specific mnemonic (PICo) for qualitative reviews has also
been developed, which identifies the key aspects of Population, phenomena of Interest, and
Context. A more generic mnemonic that can be used across quantitative and qualitative reviews
is the SPICE mnemonic, where the Setting, Perspective, Intervention, Comparison and (method
of) Evaluation are described.
The level of detail incorporated into each aspect of a mnemonic will vary, and consideration of
the following will assist reviewers to determine the appropriate level of detail for their review. The
population may be the primary focus of interest (for example, in reviews examining gender-based
phenomena such as smoking or alcohol use among women) and may further specify an age
group of interest or a particular exposure to a disease or intervention.
In quantitative reviews, the intervention(s) under consideration need to be transparently reported
and may be expressed as a broad statement such as “The Management of…”, or framed as
a statement of “intervention” and “outcome” of interest. Comparators may include placebos
and/or alternative treatments. In qualitative reviews, the interest relates to the experience of a
particular phenomenon (for example, men’s experience of healthy living).
Comparators (or controls) should be clearly described. It is important to know what the intervention
is being compared with. Examples include: usual care, placebo or alternative treatments.
In quantitative reviews, outcomes should be measurable and chosen for their relevance to the
review topic and research question. They allow interpretation of the validity and generalisability of
the review findings. Examples of outcomes include: morbidity, mortality, quality of life. Reviewers
should avoid the temptation of being too vague when determining review outcomes. In identifying
which outcomes will be specified, it is useful to consider the interests of the target audience of
the review findings, the impact that having a large number of outcomes may have on the scope
and progress of the review, the resources (including time) to be committed to the review and the
measurability of each specified outcome.
Does the planned JBI review have a clear, concise title that covers all of the PICO elements
of the review? Does the planned JBI review have a primary and secondary reviewer?
As mentioned previously, SUMARI includes the CReMS software, designed to assist reviewers
manage and document a review by incorporating the review protocol, search results and findings.
Reviewers are required to undertake systematic reviews using CReMS software and the SUMARI
user guide is a recommended reference for technical aspects of creating a JBI review.
JBI levels of evidence are discussed in a later section of this manual and can be found on the
JBI website, but are largely based on how the studies included in the review were conducted
and reported.
The hierarchy of study designs has led to a more sophisticated hierarchy or levels of evidence,
on the basis of the best available evidence.3 Several international organisations generate levels
of evidence and they are reasonably consistent. Each JBI systematic review will have levels of
evidence associated with its findings, based on the types of study design included in the review
that support each finding.
In quantitative research, study designs that include fewer controls (and therefore impose less
control over unknown factors or potential sources of bias) are considered to be lower quality of
evidence – hence a hierarchy of evidence is created on the basis of the amount of associated
bias and, therefore, certainty of an effect. Many JBI reviews will consider a range of designs for
inclusion, and a protocol should include a statement about the primary study designs of interest and
the range of studies that will be considered appropriate to the review objective and questions.
For quantitative research, study designs include:
Experimental e.g. randomised controlled trials (RCTs);
Quasi experimental e.g. non-randomised controlled trials;
Observational (Correlational) – e.g. cohort, case control studies;
Observational (Descriptive) – e.g. case series and case study; and
Expert opinion.
The JBI levels of evidence are discussed in more detail in a later section and can be found in
Appendix VI and at the web address:
http://www.joannabriggs.edu.au/About%20Us/About%20Us/JBI%20Approach/Levels%20of%20Evidence%20%20FAME
Briefly, they are as follows:
Level 1 (strongest evidence): Meta-analysis (with homogeneity) of experimental studies
(e.g. RCT with concealed randomisation) OR one or more large experimental studies
with narrow confidence intervals;
Level 2: One or more smaller RCTs with wider confidence intervals OR quasi-experimental
studies (without randomisation);
Level 3: a. Cohort studies (with control group);
         b. Case-controlled;
         c. Observational studies (without control group);
Level 4: Expert opinion, or physiology bench research, or consensus.
In this case quantitative evidence is ranked in terms of research findings most likely to provide
valid information on the effectiveness of a treatment/care option. Such hierarchies usually
have the systematic review with meta-analysis at the top, followed closely by RCTs. There are
several other hierarchies of evidence for assessing studies that provide evidence on diagnosis,
prevention and economic evaluations; 4 their focus remains quantitative. The major disadvantage
in this is that while some health topics may concentrate on treatment/management effectiveness,
their themes may possibly not be addressed by RCTs. For example, Kotaska suggests that
vaginal breech birth is too complex and multifaceted to be appropriately considered within trials
alone.5 Yet it has been reported that one RCT on breech birth changed practice.5 The reasons
for this are likely to be complicated and involve underlying professional beliefs as well as the
evidence. The emphasis, however, on trials as the apogee of the hierarchy of evidence may be
viewed as only encouraging an acceptance of this as the ‘gold standard’ in all circumstances,
rather than reflecting on whether a specific subject or topic is best considered from a different
perspective, using different research approaches. It must be acknowledged that quantitative
studies alone cannot explore or address all the complexities of the more social aspects of human
life. 6 For example, in midwifery this would include qualitative themes such as experience of birth,
parenthood, or topics regarding social support, transition to parenthood, uptake of antenatal
screening, education, or views on lifestyle such as smoking, etc. 7 These are more appropriately
explored through qualitative research approaches that seek to explore and understand the
dynamics of human nature: what makes people believe, think and act as they do. 8-10
Note: There is no widely accepted hierarchy of evidence for qualitative studies. Current
methodological opinion related to qualitative review does not require any distinction between
critical or interpretive studies; therefore the choice regarding types of studies is left to the
reviewer. The inclusion of studies from across paradigms or methodologies does not ignore
the philosophic traditions of the approach but aims to integrate the richness of the qualitative
traditions in order to capture the whole of a phenomenon of interest.
SECTION 2 – Qualitative Evidence
Chapter One:
Qualitative Evidence and Evidence-Based Practice
Qualitative evidence or qualitative data allows researchers to analyse human experience and
cultural and social phenomena. 11 Qualitative evidence has its origins in research methods from
the humanities and social sciences and seeks to analyse the complexity of human phenomena
in naturalistic settings and from a holistic perspective. 12 The term ‘qualitative’ refers to various
research methodologies including ethnography, phenomenology, action research, discourse
analysis and grounded theory. Research methods include interview, observation and interpretation
of written material. Researchers who use qualitative methodologies seek a deeper truth, aiming
to “study things in their natural setting, attempting to make sense of, or interpret, phenomena in
terms of the meanings people bring to them”. 13
In the healthcare or medical context, qualitative research:
“…seeks to understand and interpret personal experiences, behaviours, interactions, and
social contexts to explain the phenomena of interest, such as the attitudes, beliefs, and
perspectives of patients and clinicians; the interpersonal nature of caregiver and patient
relationships; the illness experience; or the impact of human suffering”. 14
Qualitative evidence is especially useful and applicable in areas where there is little pre-existing
knowledge, where it is difficult or inappropriate to generate a hypothesis and where issues are
complex and require more detailed exploration. 15 The strength of qualitative research lies in its
credibility (i.e. close proximity to the truth), using selected data collection strategies that “touch
the core of what is going on rather than just skimming the surface”. 16
Paradigm/Philosophy used to structure knowledge and understanding, with associated
methodologies and data collection methods:

Interpretivism (seeks to understand; sees knowledge as in the possession of the people):

Phenomenology – seeks to understand people’s individual subjective experiences and
interpretations of the world. Data collection methods: interviews; focus groups.

Ethnography – seeks to understand the social meaning of activities, rituals and events
in a culture. Data collection methods: observations; field work (observations, interviews);
interviews; field observations.

Grounded Theory – seeks to generate theory that is grounded in the real world; the data
itself defines the boundaries and directs development of theory. Data collection methods:
purposeful interviews; textual analysis.

Action research – involves researchers participating with the researched to effect change.
Data collection methods: participative group work; reflective journals (quantitative methods
can be used in addition to qualitative methods).

Critical enquiry (seeks to change):

Feminist research – seeks to create social change to benefit women. Data collection
methods: qualitative in-depth interviews; focus groups (quantitative methods can be used
in addition to qualitative methods).

Discourse Analysis – assumes that language socially and historically constructs how we
think about and experience ourselves, and our relationships with others. Data collection
methods: study of communications, written text and policies.
As such, the JBI approach contrasts with the meta-ethnography approach to qualitative evidence
synthesis which has a focus on interpretation rather than aggregation. Meta-ethnography was
conceived by Noblit and Hare 25 as a method of synthesis whereby interpretations could be
constructed from two or more qualitative studies. It draws on the findings and interpretations
of qualitative research using a purposive sampling method and the analysis is a process of
interactive construction of emic interpretations with the goal of producing new theoretical
understandings. 26
JBI recognises the usefulness of alternate interpretive approaches such as meta-ethnography,
as well as narrative synthesis and thematic synthesis. As an example, the usefulness of
meta-ethnography lies in its ability to generate theoretical understandings that may or may not be
suitable for testing empirically. Textual Narrative Synthesis is useful in drawing together different
types of evidence (e.g. qualitative, quantitative, economic), and Thematic Synthesis
is of use in drawing conclusions based on common elements across otherwise heterogeneous
studies. JBI considers, however, that these approaches do not seek to provide guidance for action
and aim only to ‘anticipate’ what might be involved in analogous situations and to understand
how things connect and interact.
Meta-aggregation is the preferred JBI approach for developing recommendations for action. The
JBI-QARI software is designed to facilitate meta-aggregation; however, it can also be used
successfully as a data management tool in meta-ethnography and other interpretive processes.
Chapter Two:
Qualitative Protocol and Title Development
Background
The Joanna Briggs Institute places significant emphasis on a comprehensive, clear and meaningful
background section to every systematic review. Given the international circulation of systematic
reviews, it is important to state variations in local understandings of clinical practice (including
‘usual practice’), health service management and client or patient experiences. The background
should describe and situate the phenomena of interest under review including the population and
context. Definitions can assist to provide clarity. Where complex or multifaceted phenomena are
being described, it may be important to detail the whole of the phenomenon for an international
readership.
The background should avoid making value-laden statements unless they are specific to papers
that illustrate the topic and/or need for a systematic review of the body of literature related to the
topic. For example: “Young women were found to take up cigarette smoking as an expression
of independence or a sign of self confidence”. This is what the review will determine. If this type
of statement is made it should be clear that it is not the reviewer’s conclusion but that of a third
party, such as “Smith indicates young women were found to take up cigarette smoking as an
expression of independence or a sign of self confidence”. Such statements in the background
need to be balanced by other points of view, emphasising the need to synthesise potentially
diverse bodies of literature.
The background should conclude with a statement indicating the reviewer has examined the
Cochrane Library, JBI Library of Systematic Reviews, CINAHL and other relevant databases and
not found any current or planned reviews on the same topic.
Questions to consider:
Does the background cover all the population, phenomenon of interest and
the context for the systematic review? Are operational definitions provided?
Do systematic reviews already exist on the topic? Why is this review important?
Inclusion criteria
Population/Types of participants
In the above example, the PICo mnemonic describes the population (young women). Specific
reference to population characteristics, either for inclusion or exclusion should be based on a
clear, scientific justification rather than based on unsubstantiated clinical, theoretical or personal
reasoning. The term population is used in the PICo but it is not intended to imply aspects of
population pertinent to quantitative reviews such as sampling methods, sample sizes or
homogeneity. Rather, characteristics such as exposure to a disease or intervention, or
interaction with health professionals, and the qualitative experiences or meanings individuals
associate with these, are examples of the types of population characteristics that may need
to be considered by the review.
Phenomena of Interest
Also in the above example, the phenomenon of interest is young women’s experiences in relation
to uptake and/or cessation of tobacco smoking. The level of detail ascribed to the phenomena
at this point in protocol development may vary with the nature or complexity of the topic. It may
be clarified, expanded or revised as the protocol develops.
Context
In a qualitative review, context will vary depending on the objective of the review, and the
specific questions constructed to meet the objective. Context may include but is not limited to
consideration of cultural factors such as geographic location, specific racial or gender based
interests, detail about the setting such as acute care, primary health care, or the community as
they relate to the experiences or meanings individuals or groups reported in studies.
Outcomes
As there is no clear international consensus on the construct of qualitative questions for
systematic review, no specific requirement for an outcome statement exists in qualitative reviews.
An outcome of interest may be stated (this may relate to, or describe the phenomena of interest),
or this section may reasonably be left out of the protocol.
Types of studies
This section should flow naturally from the criteria that have been established up to this point,
and particularly from the objective and questions the review seeks to address. There should
be a match between the review objectives and the designs of the studies sought for the
review, especially in terms of the methodology and the research methods used. There should be
a statement about the primary study type and the range of studies that will be used if the primary
study type is not found.
The CReMS software offers optional standardised text consisting of statements regarding the
types of studies considered for inclusion in a JBI qualitative review. The choice of set text will
depend on the methodological approach taken by the review:
Option 1: This review will consider studies that focus on qualitative data including, but not limited
to, designs such as phenomenology, grounded theory, ethnography, action research
and feminist research.
In the absence of research studies, other text such as opinion papers and reports
will be considered. If you wish to include opinion papers and reports select Textual
Evidence and the NOTARI analytical Module and then select to insert set text now.
Option 2: This review will consider interpretive studies that draw on the experiences of #?# with
#?# including, but not limited to, designs such as phenomenology, grounded theory,
ethnography, action research and feminist research.
In the absence of research studies, other text such as opinion papers and reports
will be considered. If you wish to include opinion papers and reports select Textual
Evidence and the NOTARI analytical Module and then select to insert set text now.
Joanna Briggs Institute 25
Reviewers’ Manual 2011
Option 3: This review will consider critical studies that explore #?# including, but not limited to,
designs such as action research and feminist research.
In the absence of research studies, other text such as opinion papers and reports
will be considered. If you wish to include opinion papers and reports select Textual
Evidence and the NOTARI analytical Module and then select to insert set text now.
As can be seen from the three set text options above, creating a protocol for an interpretive or
critical or generalist systematic review depends on the nature of the question being addressed.
Interpretive reviews might be conducted to aggregate evidence related to social interactions
that occur within health care, or seek to establish insights into social, emotional or experiential
phenomena and generate new theories. Critical reviews might be conducted to explore and
theorise about issues of power in relationships while a critical and interpretive review might be
conducted to bring both elements together.
Search strategy
Systematic reviews are international sources of evidence; particular nuances of local context
should be informed by and balanced against the best available international evidence. The
protocol should provide a detailed strategy that will be used to identify all relevant international
research within an agreed time frame. This should include databases that will be searched,
and the search terms that will be used. In addition to this, it should also specify what research
methods/methodologies will be considered for inclusion in the review (e.g. phenomenology,
ethnography). Quantitative systematic reviews will often include a hierarchy of studies that will be
considered; however, this is not the case for qualitative reviews. A qualitative review may consider
text and opinion in the absence of qualitative research.
If a review is to consider text and opinion in the absence of qualitative research studies, this
should be detailed in the protocol and the appropriate NOTARI1 tools appended.
Within systematic reviews the search strategy is described as a three-phase process, beginning
with the identification of initial key words followed by analysis of the text words contained in the
title and abstract, and of the index terms used in a bibliographic database to describe relevant
articles. The second phase is to construct database-specific searches for each database included
in the protocol, and the third phase is to review the reference lists of all studies that are retrieved
for appraisal to search for additional studies.
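As an illustration of the second phase, a database-specific search can be assembled by joining the terms within each concept with OR, then joining the concept groups with AND. The terms and field tags below are hypothetical examples constructed for this sketch, not a prescribed JBI search string:

```python
# Sketch of building a database-specific boolean search string.
# The index terms, text words and qualitative-filter terms below are
# invented examples; real strategies must be tailored to each database.

def build_query(index_terms, text_words, qualitative_filter):
    """OR terms within each concept group, then AND the groups together."""
    groups = [
        " OR ".join(index_terms),
        " OR ".join(text_words),
        " OR ".join(qualitative_filter),
    ]
    return " AND ".join(f"({g})" for g in groups)

query = build_query(
    index_terms=['MH "Smoking"', 'MH "Smoking Cessation"'],
    text_words=['"tobacco smoking"', "smok*"],
    qualitative_filter=["qualitative", "phenomenolog*", '"grounded theory"'],
)
print(query)
```

Because each database indexes key words differently, the same three concept groups would be re-expressed with that database's own thesaurus terms and field tags.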
The Joanna Briggs Institute is an international collaboration with an extensive network of centres,
ESGs and other entities worldwide. This creates networking and resource opportunities for
conducting reviews where literature of interest may not be in the primary language of the reviewers.
Many papers in languages other than English are abstracted in English, from which reviewers
may decide to retrieve the full paper and seek to collaborate with other JBI entities regarding
translation. It may also be useful to communicate with other JBI entities to identify databases not
readily available outside specific jurisdictions for more comprehensive searching. JBI entities that
do not have access to a range of electronic databases to facilitate searching of published and
grey literature are encouraged to contact the SSU regarding access through the University of
Adelaide's Barr Smith Library, through which an ever-increasing range of electronic resources is
available.
In addition to databases of published research, there are several online sources of grey or
unpublished literature that should be considered. Rather than compete with the published
literature, grey literature has the potential to complement and communicate findings to a wider
audience.
Systematic literature searching for qualitative evidence presents particular challenges. Some
databases lack detailed thesaurus terms either for qualitative research as a genre or for specific
qualitative methods. Additionally, changes in thesaurus terms mean that reviewers need to
be cognisant of the limitations of each database they use. Some early work has been
undertaken to examine searching, and suggests that a combination of thesaurus terms and
specific method terms be used to construct search strategies.27 The help of an experienced
research librarian/information scientist is recommended.
Assessment criteria
There are a variety of checklists and tools available to assess studies. The QARI checklist can be
found in Appendix IV. Most checklists use a series of criteria that can be scored as met,
not met, unclear or not applicable. The decision as to whether or not to include a study can be
made on the basis of meeting a pre-determined proportion of all criteria, or on certain criteria
being met. It is also possible to weight criteria differently. In JBI reviews the assessment
criteria are built into the analytical module QARI. Decisions about the scoring system and
any cut-off for inclusion should be made in advance, and be agreed upon by all participating
reviewers before critical appraisal commences.
Reviewers need to discuss whether a cut-off point will be established for each review, and if so,
whether it will be based on either key items in the appraisal scale or a tally of responses. Applying
a cut-off point on the basis of weighting or overall scores is a decision that needs to be detailed
in the protocol and should be based on sound reasoning. Applying a cut-off point based on a
number of items in the appraisal checklist that were answered Yes rather than No does not
guarantee that the papers with the greatest credibility will be included, and review authors
should consider carefully before taking this approach.
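The two styles of inclusion rule discussed above, a tally-based cut-off and a rule based on key criteria, can be sketched as follows. Neither rule is mandated by JBI; the checklist responses and thresholds are hypothetical examples for illustration:

```python
# Illustrative sketch of two pre-specified inclusion rules a review team
# might agree on before appraisal begins. The appraisal responses and
# thresholds are invented, not taken from the QARI instrument.

def include_by_tally(responses, cutoff):
    """Include a study if at least `cutoff` criteria were answered 'yes'."""
    return sum(1 for r in responses.values() if r == "yes") >= cutoff

def include_by_key_items(responses, key_items):
    """Include a study only if every pre-agreed key criterion is met."""
    return all(responses.get(item) == "yes" for item in key_items)

appraisal = {"Q1": "yes", "Q2": "yes", "Q3": "unclear", "Q4": "yes", "Q5": "no"}

print(include_by_tally(appraisal, cutoff=3))          # tally rule
print(include_by_key_items(appraisal, ["Q1", "Q3"]))  # key-item rule
```

As the example shows, the two rules can disagree on the same set of responses, which is why the chosen rule and any cut-off must be detailed in the protocol in advance.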
It is JBI policy that all study types must be critically appraised using the standard critical appraisal
instruments for specific study designs, built into the analytical modules of the SUMARI software.
The protocol must therefore describe how the primary studies will be assessed and detail any
exclusion criteria. The appropriate JBI critical appraisal instruments should also be included as
appendices to the protocol.
Optional standardised set text is provided to help the reviewer. It is editable and states:
Qualitative papers selected for retrieval will be assessed by two independent reviewers for
methodological validity prior to inclusion in the review using standardised critical appraisal
instruments from the Joanna Briggs Institute Qualitative Assessment and Review Instrument
(JBI-QARI). Any disagreements that arise between the reviewers will be resolved through
discussion, or with a third reviewer.
Reviewers may wish to add to or edit the set text; however, the QARI critical appraisal tool is required
for all JBI entities conducting reviews of qualitative evidence through JBI. There are 10 criteria
for appraisal in the QARI module. These relate not to validity or bias, as in the process-orientated
methods used in reviews of effects, but to establishing the nature and appropriateness of the
methodological approach, the specific methods, and the representation of the voices or meanings
of study participants.
The QARI critical appraisal tool is in Appendix IV and has been designed with the intention that
there will be at least two reviewers (a primary and a secondary) independently conducting the
critical appraisal. Each reviewer is blinded to the other's assessment; once both reviewers
have completed their appraisal, the primary reviewer compares the two appraisals and
makes a decision on whether or not to include a study. The two reviewers should discuss cases
where there is a lack of consensus in terms of whether a study should be included, or how it
should be rated; it is appropriate to seek assistance from a third reviewer as required.
There is an ongoing international debate around the role of critical appraisal of qualitative research
for the purposes of systematically reviewing a body of literature.28 The debate extends from
whether appraisal has any role at all in qualitative reviews, through whether criteria should be
explicit or implicit statements of guidance, to how the ratings should be used. Given that similar
questions continue to arise with quantitative research, this is likely to remain a subject of
discussion and debate for qualitative reviews in the long term. The JBI approach rests both on
the need for standardisation of process to facilitate quality monitoring and on the view that
evidence used to inform health care practice should be subject to critique of its quality.
Furthermore, this critique should inform the decisions reviewers make regarding which studies
to include or exclude, and the criteria on which those decisions are made should be transparent.
Data extraction
Data extraction refers to the process of sourcing and recording relevant results from the original
(or primary) research studies that will be included in the systematic review. It is important that
both reviewers use a standard extraction tool that they have practised using and then consistently
apply. Doing so will facilitate accurate and reliable data entry into the QARI software for analysis.
The QARI data extraction tool is in Appendix V. In qualitative reviews, the data consists of
statements and text of interest to the review as published in primary studies.
It is necessary to extract data from the primary research regarding the participants, the
phenomenon of interest and the results. It is JBI policy that data extraction for all study types must
be carried out using the standard data extraction instruments for specific study designs, built into
the analytical modules of the SUMARI software. The protocol must therefore describe how data
will be extracted and include the appropriate JBI data extraction instruments in appendices to
the protocol.
As with critical appraisal, optional set text is provided to assist the reviewer. The set text is
editable and indicates the types of content considered necessary to the write-up of a systematic
review. It states:
Qualitative data will be extracted from papers included in the review using the standardised
data extraction tool from JBI-QARI. The data extracted will include specific details about the
phenomena of interest, populations, study methods and outcomes of significance to the
review question and specific objectives.
The data extraction instrument should be read through by both reviewers and each criterion
discussed in the context of each particular review. Without discussion, reviewers are likely to
interpret the questions slightly differently, creating more work later in the review. Unlike
the JBI data extraction process for quantitative experimental and economic reviews (which is
conducted independently by two reviewers) the data extraction process of qualitative reviews
using QARI may involve both or either of the reviewers. The general consensus is that one
reviewer may extract the data but that it is useful for the second reviewer to read and discuss the
extraction. The aim is not to minimise risk of error (unlike the quantitative and economic reviews)
but rather to gain a shared understanding to facilitate effective progression through synthesis and
write-up of the review.
As a systematic review seeks to summarise a body of international research literature, the data
extraction needs to include information to inform readers, not just of the key findings, but also of
the methodological framework, research methods used and other important contextual factors.
The data extraction template for a JBI qualitative review incorporates methodology, method,
phenomena of interest, setting, geographical location, culture, participants, method of data
analysis used in primary study, the author’s conclusions and comments the reviewer might wish
to record about the paper at that point in time.
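A record covering these extraction template fields might be sketched as the following structure. The field names and example values are illustrative assumptions; the actual extraction form is built into the QARI module:

```python
# Sketch of a standardised extraction record for a qualitative review.
# Field names mirror the template elements described in the text; the
# example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class ExtractionRecord:
    methodology: str             # e.g. phenomenology, ethnography
    method: str                  # how the data was collected
    phenomena_of_interest: str
    setting: str
    geographical_context: str
    cultural_context: str
    participants: str
    data_analysis: str           # analysis technique used in the primary study
    authors_conclusions: str
    reviewer_comments: str = ""  # reviewer's notes at the time of extraction

record = ExtractionRecord(
    methodology="phenomenology",
    method="face-to-face semi-structured interviews",
    phenomena_of_interest="young women's experiences of smoking cessation",
    setting="community health centre",
    geographical_context="rural Australia",
    cultural_context="adolescent and young adult women",
    participants="12 women aged 16-24",
    data_analysis="thematic analysis",
    authors_conclusions="peer influence shaped both uptake and cessation",
)
```

Using one agreed structure of this kind is what allows two reviewers to extract consistently and compare their work.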
Strategies to minimise the risk of error when extracting data from studies include:
utilising a standardised data extraction form;
pilot testing the extraction form prior to commencement of the review;
training and assessing data extractors; and
having two people extract data from each study.
Unlike experimental studies, where reviewers can, if necessary, contact authors of publications
and seek assistance in providing raw data, in qualitative studies this is not generally required as
the reviewer only works with the findings reported by the author in each study.
Once the detail about the nature of the publication, its setting, methodologies, methods and other
relevant data has been extracted, the focus turns to identifying particular findings. Considered
first order or level one analysis, specific findings (and illustrations from the text that demonstrate
their origins) are identified and entered in the analytical module QARI.
The units of extraction in this process are specific findings and illustrations from the text that
demonstrate the origins of those findings. Note that in QARI a finding is defined as:
A conclusion reached by the researcher(s) and often presented as themes or metaphors.
To identify findings, reviewers read the paper carefully, re-read it closely at least a second time,
and identify the findings and enter them into QARI. Each finding is extracted and
textual data that illustrates or supports the finding is also extracted. In this approach, the reviewer
is searching the paper to locate data in the form of direct quotes or observations or statements
that lead to the finding being reported in each primary study. This is a form of confirmation of
authenticity that demonstrates how clearly and/or directly each identified finding can be
attributed to participants or observations in the primary research.
The level of congruency between findings and supporting data from the primary studies is graded
to communicate the degree to which the interpretation of the researcher is credible. There are
three levels of credibility in the analytical module QARI as described below:
Unequivocal - relates to evidence beyond reasonable doubt, which may include findings that
are matters of fact, directly reported/observed and not open to challenge;
Credible - findings that are, albeit interpretations, plausible in light of the data and theoretical
framework. They can be logically inferred from the data; because the findings are interpretive,
they can be challenged; and
Unsupported - when neither of the above levels applies, most notably when findings are not
supported by the data.
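How an extracted finding, its supporting illustration and the assigned level of credibility fit together can be sketched as a simple structure. The enum values mirror the three QARI credibility levels described above; everything else (names, the example finding and quote) is a hypothetical illustration, not the QARI data model:

```python
# Sketch of recording a finding, its illustration and its credibility level.
from dataclasses import dataclass
from enum import Enum

class Credibility(Enum):
    UNEQUIVOCAL = "unequivocal"  # directly reported/observed, not open to challenge
    CREDIBLE = "credible"        # plausible interpretation, logically inferred
    UNSUPPORTED = "unsupported"  # not supported by the data

@dataclass
class Finding:
    text: str           # the author's conclusion, often a theme or metaphor
    illustration: str   # direct quote or observation supporting the finding
    credibility: Credibility

finding = Finding(
    text="Smoking was framed as a shared social ritual",
    illustration='"We only ever smoked together, never alone." (participant 4)',
    credibility=Credibility.UNEQUIVOCAL,
)
```

Pairing every finding with its illustration in this way is what makes the credibility grading auditable by a second reviewer.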
Have the QARI critical appraisal and data extraction tools been attached to the protocol?
Have the authors agreed on how to apply the levels of credibility?
Data synthesis
The grading of findings (unequivocal, credible or unsupported) is conducted concurrently with
the identification and extraction of findings into the analytical module QARI.
It is important to combine the studies in an appropriate manner using methods appropriate to
the specific type and nature of data that has been extracted. Within the protocol, the methods
by which studies will be combined should be described in as much detail as is reasonably
possible.
The optional set text in CReMS provides a framework that reviewers can extend or edit, and
clarifies the synthesis involved in meta-aggregation through the analytical module QARI:
Qualitative research findings will, where possible, be pooled using JBI-QARI. This will involve
the aggregation or synthesis of findings to generate a set of statements that represent
that aggregation, through assembling the findings (Level 1 findings) rated according to
their quality, and categorising these findings on the basis of similarity in meaning (Level 2
findings). These categories are then subjected to a meta-synthesis in order to produce a
single comprehensive set of synthesised findings (Level 3 findings) that can be used as a
basis for evidence-based practice. Where textual pooling is not possible the findings will be
presented in narrative form.
One of the challenges in synthesising textual data is agreeing on and communicating techniques
to compare the findings of each study between reviewers. The QARI approach to the meta-
aggregation of qualitative research findings involves categorising and re-categorising the findings
of two or more studies to develop synthesised findings. Although not considered a formal
requirement of the protocol, reviewers who have not previously conducted a qualitative
synthesis are encouraged to discuss and pilot two or more studies through the process. This will
enable the primary and secondary reviewers (plus any associate reviewers) to be clear on how
they will assign findings to categories, and how the categories will be aggregated into
synthesised findings.
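The three-level structure of meta-aggregation (findings, categories, synthesised findings) can be sketched as follows. The findings, category names and synthesised statement are invented examples; in practice the grouping is a judgement made and agreed by the reviewers, not a computation:

```python
# Minimal sketch of meta-aggregation: Level 1 findings are grouped into
# Level 2 categories by similarity of meaning, and the categories are
# combined into a Level 3 synthesised finding. All content is invented.

level1_findings = {
    "F1": "Peer pressure prompted uptake",
    "F2": "Friends' approval sustained smoking",
    "F3": "Quitting followed a health scare",
    "F4": "Illness in the family motivated cessation",
}

# Level 2: reviewer-assigned categories (findings with similar meaning)
categories = {
    "Social influence": ["F1", "F2"],
    "Health-related triggers": ["F3", "F4"],
}

# Level 3: categories aggregated into a single synthesised finding
synthesised = {
    "Uptake and cessation are shaped by social ties and health events":
        list(categories),
}

for statement, contributing in synthesised.items():
    print(statement, "<-", contributing)
```

The sketch also makes the point in the text concrete: the synthesis works on reported findings (processed data), never on the primary data behind them.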
Differing research methods, such as phenomenology, ethnography or grounded theory, can
be mixed in a single synthesis of qualitative studies because the synthesis is of findings and
not data. This is a critical assumption of the QARI process. QARI meta-aggregation does not
involve a reconsideration and synthesis of primary data; it is restricted to the combination of
findings (that is, processed data). Contrary to Noblit and Hare's original views regarding meta-
ethnography, JBI considers it unnecessary to restrict meta-synthesis to studies undertaken using
the same methodology.25
The process of meta-aggregation is illustrated in Figure 1.
[Figure 1: the meta-aggregation process. Step 1: findings are extracted from the included
studies (in the illustration, 32 findings from 15 phenomenological studies, 41 findings from
7 ethnographies and 38 findings from 11 discourse analyses); the findings are then categorised
and synthesised to inform recommendations for practice.]
Narrative Summary
Although the focus of this section has been on describing and explaining the approach to meta-
aggregation, the protocol should also describe a process for developing a narrative summary
to anticipate the possibility that aggregative synthesis is not possible. Narrative summary
should draw upon the data extraction, with an emphasis on the textual summation of study
characteristics as well as data relevant to the specified phenomena of interest.
Conflict of Interest
A statement should be included in every review protocol being submitted to JBI that either
declares the absence of any conflict of interest, or describes a specified or potential conflict of
interest. Reviewers are encouraged to refer to the JBI’s policy on commercial funding of review
activity.
Acknowledgements
The source of financial grants and other funding must be acknowledged, including the reviewer’s
commercial links and affiliations. The contribution of colleagues or Institutions should also be
acknowledged.
References
Protocols are required to use Vancouver style referencing. References should be numbered with
superscript Arabic numerals in the order in which they appear in the text. Full reference details
should be listed in numerical order in the reference section. (This is
automatically performed in CReMS.)
More information about the Vancouver style is detailed in the International Committee of
Medical Journal Editors’ revised ‘Uniform Requirements for Manuscripts Submitted to
Biomedical Journals: Writing and Editing for Biomedical Publication’, and can be found at
http://www.ICMJE.org/
Appendices
Appendices should be placed at the end of the protocol and be numbered with Roman numerals
in the order in which they appear in text. At a minimum this will include critical appraisal and data
extraction tools. (This is automatically performed in CReMS.)
Are conflicts of interest and acknowledgements declared in the protocol, appendices
attached, and references in Vancouver style?
Once a protocol has been approved, it is published on the JBI website. Protocols can be found at:
http://www.joannabriggs.edu.au/Access%20Evidence/Systematic%20Review%20Protocols
Chapter Three:
The Systematic Review and Synthesis
of Qualitative Data
Please refer to the JBI website for specific presentation requirements for systematic review
reports http://www.joannabriggs.edu.au
All JBI systematic reviews are based on approved, peer-reviewed systematic review protocols.
Deviations from approved protocols are rare and should be clearly justified in the report. JBI
considers peer review of systematic review protocols as an essential part of a process to enhance
the quality and transparency of systematic reviews.
JBI systematic reviews should use Australian spelling and authors should therefore follow the latest
edition of the Macquarie Dictionary. All measurements must be given in Systeme International
d’Unites (SI) units. Abbreviations should be used sparingly; use only where they ease the reader’s
task by reducing repetition of long, technical terms. Initially use the word in full, followed by the
abbreviation in parentheses. Thereafter use the abbreviation only. Drugs should be referred to
by their generic names. If proprietary drugs have been used in the study, refer to these by their
generic name, mentioning the proprietary name, and the name and location of the manufacturer,
in parentheses.
Review authors:
The names, contact details and the JBI affiliation of each reviewer.
Executive Summary:
This section is a summary of the review in 500 words or less stating the purpose, basic
procedures, main findings and principal conclusions of the review. The executive summary
should not contain abbreviations or references.
Objectives:
The review objectives should be stated in full, as detailed in the protocol section.
Inclusion criteria:
Types of participants
The report should provide details about the types of participants included in the review.
Useful details include: age range, condition/diagnosis or health care issue, administration
of medication. Details of where the studies were conducted (e.g. rural/urban setting and
country) should also be included. Again, the decisions about the types of participants should
have been explained in the background.
Phenomena of interest
This section presents all of the phenomena examined, as detailed in the protocol.
Types of studies
As per the protocol section, the types of studies that were considered for the review are
described. There should be a statement about the target study type and whether or not this
type was found. The types of study identified by the search and those included should be
detailed in the report.
Search Strategy
A brief description of the search strategy should be included. This section details search
activity (e.g. databases searched, initial search terms and any restrictions) for the review, as
predetermined in the protocol.
Data Collection
This section includes a brief description of the types of data collected and the instrument
used to extract data.
Data Synthesis
This section includes a brief description of how the data was synthesised, including whether
there is a meta-synthesis and/or a narrative summary.
Conclusions
This section includes a brief description of the findings and conclusions of the review.
Inclusion Criteria
As detailed in the protocol, the inclusion criteria used to determine eligibility for the review
should be stated. For a qualitative review these include Population, Phenomena of Interest
and Context, as per the PICo mnemonic.
Types of studies
This section should flow from the background. There should be a statement about the target
study type (e.g. interpretive or critical) and the range of studies that were used.
Types of participants
There should be details about the types of individuals targeted, including characteristics (e.g.
age range), condition/diagnosis or health care issue (e.g. administration of medication in a rural
area) and the setting(s) in which these individuals are managed. Again, the decisions about
the types of participants should have been justified in the Background.
Search strategy
Documenting a search strategy
This section should include an overview of the search strategy used to identify articles considered
by the review. The search strategy needs to be comprehensively reported. Commonly, electronic
databases are used to search for papers; many such databases have indexing systems or
thesauri, which allow users to construct complex search strategies and save them as text files.
The documentation of search strategies is a key element of the scientific validity of a systematic
review. It enables readers to look at and evaluate the steps taken, decisions made and to consider
the comprehensiveness and exhaustiveness of the search strategy for each included database.
Each electronic database is likely to use a different system for indexing key words within their
search engines. Hence the search strategy will be tailored to the nuances of each particular
database. These variations are important and need to be captured and included in the systematic
review report. Additionally, if a comprehensive systematic review is being conducted through
SUMARI, the search strategies for each database for each approach are recorded and reported
via CReMS. Commonly, these are added as appendices.
QARI provides a series of standardised fields related to critical appraisal, which focus on
the methods of the review and the assessment of methodological quality.
Critical appraisal
This section of the review includes the results of critical appraisal with the QARI instrument. As
discussed in the section on protocol development, it is JBI policy that qualitative studies should
be critically appraised using the QARI critical appraisal instrument. The primary and secondary
reviewer should discuss each item of appraisal for each study design included in their review.
In particular, discussions should focus on what is considered acceptable to the needs of the
review in terms of the specific study characteristics. The reviewers should be clear on what
constitutes an acceptable level of information to allocate a positive appraisal as opposed to a
negative one, or a response of "unclear". This discussion should take place before independently
conducting the appraisal. The QARI critical appraisal tool should be attached to the review.
The methodological quality of included papers is discussed in the results section of the review.
Has the QARI critical appraisal tool been appended to the review? Have the results of critical
appraisal been discussed? Were there any differences of opinion between the reviewers and,
if so, how were these resolved?
Data extraction
This section of the review includes details of the types of data extracted for inclusion in the
review. Data extraction begins with recording of the methodology (such as phenomenology,
ethnography or action research), identifying the setting and describing the characteristics of the
participants. When data extraction of the study background detail is complete, the extraction
becomes highly specific to the nature of the data of interest and the question being asked in the
review. In SUMARI, elements of data extraction are undertaken through the analytical modules
and the data extracted is automatically transferred to CReMS. For qualitative reviews, synthesis
is conducted in the QARI analytical module, and the final report is generated in CReMS.
Methodology
A methodology usually covers the theoretical underpinnings of the research. In a review, it is
useful to add further detail such as the particular perspective or approach of the author/s such
as “Critical” or “Feminist” ethnography.
Method
The method is the way that the data was collected; multiple methods of data collection may be
used in a single paper, and these should all be stated. Be sure to specify how the method was
used. If, for example, it was an interview, what type of interview was it? Consider whether open
or closed questions were used, and whether it was conducted face-to-face or by telephone.
Phenomena of Interest
Phenomena of interest are the focus of a QARI review, whereas in a quantitative review,
interventions are the focus. An intervention is a planned change made to the research situation
by the researcher as part of the research project. As qualitative research does not rely on having
an intervention (as they are traditionally thought of in quantitative research), the focus is called
phenomenon/phenomena of interest, which refers to the experience, event or process that is
occurring, for example: response to pain or coping with breast cancer.
Setting
This term is used to describe where the research was conducted - the specific location, for
example: at home; in a nursing home; in a hospital; in a dementia-specific ward in a sub-acute
hospital. However, some research will have no setting at all, for example discourse analysis.
Geographical Context
The Geographical Context is the location of the research. It is useful to be as specific as
possible in describing the location, by including not just the country, but whether it was a rural or
metropolitan setting, as this may impact upon the research.
Cultural Context
Cultural Context seeks to describe the cultural features in the study setting such as, but not limited
to: time period (e.g. 16th century); ethnic groupings (e.g. indigenous people); age groupings (e.g.
older people living in the community); or socio-economic groups (e.g. high socio-economic
status). When entering information, be as specific as possible. This data should identify cultural features
such as employment, lifestyle, ethnicity, age, gender, socio-economic class, location and time.
Participants
Information entered in this field should be related to the inclusion and exclusion criteria of the
research, and include (but not be limited to) descriptions of age, gender, number of included
subjects, ethnicity, level of functionality, and cultural background. Included in this section should
be definitions of terms used to group people that may be ambiguous or unclear, for example, if
the paper includes role definitions.
Data Analysis
This section of the report should include the techniques used to analyse the data; a non-exhaustive
list of examples is provided below:
Named software programs;
Contextual analysis;
Comparative analysis;
Thematic analysis;
Discourse analysis; and
Content analysis.
Author's Conclusions
This is the conclusion reached by the study author.
Reviewer's Conclusions
This is the conclusion reached by the reviewer.
Has the QARI data extraction tool been appended to the review? Have all of the extracted
findings been discussed and assigned levels of credibility in the review?
Data extraction in the analytical module QARI is also step one in data synthesis. It may be useful
to read this section on extraction, then the corresponding section on data synthesis to gain an
overview of how the two processes are interrelated.
Qualitative research findings cannot be synthesised using quantitative techniques and although
it is possible to mirror the systematic process used in quantitative reviews, reviewers need to
exercise their judgement when extracting the findings of qualitative studies, particularly as the
nature of a “finding” for practice is poorly understood.
Joanna Briggs Institute 39
Reviewers’ Manual 2011
Reports of qualitative studies frequently present study findings in the form of themes, metaphors
or categories. In QARI the units of extraction are specific findings (reported by the author(s) of the
paper, often presented as themes, categories or metaphors) and illustrations from the text that
demonstrate the origins of the findings.
In QARI a finding is therefore defined as a conclusion reached by the researcher(s) that is often
presented as a theme or metaphor.
Once a reviewer has collected all the individual Findings, with illustrations, the Findings can be
collated to form user-defined categories. To do this, the reviewer needs to read all of the findings
and identify similarities that can then be used to create categories of one or more findings.
As the process relates to textual findings rather than numeric data, the need for methodological
homogeneity – so important in the meta-analysis of the results of quantitative studies – is
not a consideration. The meta-aggregation of findings of qualitative studies can legitimately
aggregate findings from studies that have used radically different, competing and antagonistic
methodological claims and assumptions, within a qualitative paradigm. Meta-aggregation in
QARI does not distinguish between methodologies or theoretical standpoints and adopts a
pluralist position that values viewing phenomena from different perspectives. Qualitative meta-
aggregation evaluates and aggregates qualitative research findings on the basis of them being
the result of rigorous research processes.
Data synthesis
This section of the report should include how the findings were synthesised. Where meta-
aggregation is possible, qualitative research findings should be pooled using QARI. This should
involve the aggregation or synthesis of findings to generate a set of statements that represent that
aggregation, through assembling the findings rated according to their credibility, and categorising
these findings on the basis of similarity in meaning. These categories should then be subjected to
a meta-synthesis in order to produce a single comprehensive set of synthesised findings that can
be used as a basis for evidence-based practice. Where textual pooling is not possible the findings
can be presented in narrative form. The optional set text in CReMS describes the process by
which these options are implemented in the protocol development section as follows:
Qualitative research findings will, where possible be pooled using JBI-QARI. This will involve
the aggregation or synthesis of findings to generate a set of statements that represent
that aggregation, through assembling the findings (Level 1 findings) rated according to
their quality, and categorising these findings on the basis of similarity in meaning (Level 2
findings). These categories are then subjected to a meta-synthesis in order to produce a
single comprehensive set of synthesised findings (Level 3 findings) that can be used as a
basis for evidence-based practice. Where textual pooling is not possible the findings will be
presented in narrative form.
Prior to carrying out data synthesis, reviewers need to establish, and then document:
their own rules for setting up categories;
how to assign findings to categories; and
how to aggregate categories into synthesised findings.
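The Level 1 to Level 2 to Level 3 flow described above can be sketched as a simple data structure. This is an illustrative sketch only: the findings, categories and credibility ratings below are invented, and in practice both categorisation and synthesis are interpretive judgements made by reviewers within QARI, not computations.

```python
from collections import defaultdict

# Level 1 findings extracted from included papers, each with a reviewer-assigned
# credibility rating and category. All content here is hypothetical.
level1_findings = [
    {"finding": "Patients feared becoming a burden", "credibility": "Unequivocal",
     "category": "Emotional impact"},
    {"finding": "Participants described anxiety at diagnosis", "credibility": "Credible",
     "category": "Emotional impact"},
    {"finding": "Families valued plain-language information", "credibility": "Unequivocal",
     "category": "Information needs"},
]

# Level 2: group findings into categories on the basis of similarity in meaning.
categories = defaultdict(list)
for f in level1_findings:
    categories[f["category"]].append((f["finding"], f["credibility"]))

# Level 3: reviewers read each group of categorised findings and draft a
# synthesised finding (a statement that adequately represents the data).
# The statements below are placeholders; this step is interpretive.
synthesised_findings = {
    "Emotional impact": "Living with the condition carries a significant emotional toll.",
    "Information needs": "Patients and families need clear, accessible information.",
}

for category, members in categories.items():
    print(category, "->", len(members), "finding(s)")
```

The grouping step mirrors what a reviewer does by hand: read every extracted finding, then collect those with similar meaning under a shared category before drafting the synthesised statement.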
The JBI-QARI approach to synthesising the findings of qualitative studies requires reviewers to
consider the validity of each study report as a source of guidance for practice; identify and extract
the findings from papers included in the review; and to aggregate these findings as synthesised
findings. To reiterate:
Findings are conclusions reached and reported by the author of the paper, often in the form
of themes, categories or metaphors.
Findings as explicitly stated in the paper are extracted and textual data that illustrates or supports
the findings are also extracted and inserted with a page reference. Many qualitative reports only
develop themes and do not report findings explicitly. In such cases, the reviewer/s may need to
develop a finding statement from the text.
Each finding should be assigned a level of credibility, based on the congruency of the finding with
supporting data from the study from which the finding was taken. Qualitative evidence has three
levels of credibility:
Unequivocal - relates to evidence beyond reasonable doubt, which may include findings that are
matters of fact, directly reported/observed and not open to challenge.
Credible - relates to those findings that are, albeit interpretations, plausible in light of the data
and theoretical framework. They can be logically inferred from the data. Because the findings are
interpretive they can be challenged.
Unsupported - relates to findings that are not supported by the data.
When all findings and illustrative data have been identified, the reviewer needs to read all of the
findings and identify similarities that can then be used to create categories of more than one
finding.
Categorisation is the first step in aggregating study findings and moves from a focus on individual
studies to consideration of all findings for all studies included in the review. Categorisation is
based on similarity in meaning as determined by the reviewers. Once categories have been
established, they are read and re-read in light of the findings, their illustrations and in discussion
between reviewers to establish synthesised findings. QARI sorts the data into a meta-synthesis
table or “QARI-View”, when allocation of categories to synthesised findings (a set of statements
that adequately represent the data) is completed. These statements can be used as a basis for
evidence-based practice.
Have all of the findings been extracted from the included studies? Do all of the findings have
illustrations? Do all of the findings have levels of credibility assigned to them?
Results
Description of studies
This section should include the type and number of papers identified by the search and the
numbers of studies that were included and excluded from the review. A flowchart should be
used to illustrate this, such as that shown in Figure 2.29
[Flowchart: Potentially relevant papers identified by literature search (n = 738); papers excluded
after evaluation of abstract (n = 682); papers retrieved for detailed examination (n = 55); papers
excluded after review of full paper (n = 22); papers assessed for methodological quality (n = 33);
trials included in systematic review (n = 18), comprising multifaceted interventions (n = 10) and
targeted interventions (n = 8).]
Figure 2. Illustration of how numbers of studies and their management can be reported.29
The results section should be framed in such a way that as a minimum, the following fields are
described or given consideration by the reviewers in preparing their systematic review report:
Studies: Numbers of studies identified, Numbers of retrieved studies, Numbers of studies
matching preferred study methodology (e.g. grounded theory, action research), Numbers and
designs of other types of studies, Numbers of appraised studies, Numbers of excluded studies
and overview of reasons for exclusion, Numbers of included studies.
The description of studies may also incorporate details of included studies. This additional detail
may include the assessment of methodological quality, characteristics of the participants and the
phenomenon/phenomena studied.
With detail on the number and type of studies reported, the results section then focuses
on providing a detailed description of the results of the review. Where a systematic review
has several foci, the results should be presented in a logical, structured way, relevant
to the specific questions. The role of tables and appendices should not be overlooked.
Adding extensive detail on studies in the results section may “crowd” the findings, making them
less accessible to readers, hence the use of tables, graphs and in-text reference to specific
appendices is encouraged.
Methodological Quality
The discussion of the overall methodological quality of included studies should be included in
this section.
Review findings
There is no standardised international approach to structuring how the findings of qualitative
reviews are reported. The audience for the review should be considered when structuring and
writing up the findings. QARI-view graphs represent a specific item of analysis that can be
incorporated in to the results section of a review. However, the results are more than the QARI-
view graphs, and whether it is structured based on the phenomenon of interest, or some other
structure, the content of this section needs to present the results with clarity using the available
tools (QARI-view graphs, tables, figures) supported by textual descriptions.
Given there is no clear international standard or agreement on the structure or key components
of this section of a review report, and the level of variation evident in published systematic
reviews, the parameters described in this section should be considered guidance rather than a
prescription.
This section must be organised in a meaningful way based on the objectives of the review and
the criteria for considering studies. The reviewer should comment on the appropriateness of the
QARI-view graph.
Discussion
This section should provide a detailed discussion of issues arising from the conduct of the
review, as well as a discussion of the findings of the review and of the significance of the
review findings in relation to practice and research. Areas that may be addressed include:
A summary of the major findings of the review;
Issues related to the quality of the research within the area of interest (such as poor
indexing);
Other issues of relevance;
Implications for practice and research, including recommendations for the future; and
Potential limitations of the systematic review (such as a narrow timeframe or other
restrictions).
The discussion does not bring in new literature or findings that have not been reported in the
results section but does seek to establish a line of argument based on the findings regarding the
phenomenon of interest, or its impact on the objectives identified in the protocol.
Conclusions
Implications for practice
Where evidence is of a sufficient level, appropriate recommendations should be made.
The implications must be based on the documented results, not the reviewer’s opinion.
Recommendations must be clear, concise and unambiguous.
Developing recommendations
The Joanna Briggs Institute develops and publishes recommendations for practice with each
systematic review, wherever possible. Across the different types of evidence and approaches
to systematic reviews, a common approach is the construct of recommendations for practice,
which can be summed up as the requirement for recommendations to be phrased as declamatory
statements. Recommendations are drawn from the results of reviews and given a level of
evidence (see below) based on the nature of the research used to inform the development of
the recommendation. Recommendations are a reflection of the literature and do not include any
nuances of preference or interpretation that reviewers or review panels may otherwise infer.
Assigning levels of evidence
The Joanna Briggs Institute and its entities assign a level of evidence to all recommendations
drawn in JBI Systematic Reviews. The reviewers (in conjunction with their review panel) should
draft and revise recommendations for practice and research, and include a level of evidence
congruent with the research design that led to the recommendation. The JBI Levels of Evidence
are in Appendix VI.
The level of evidence relates to individual papers included in the systematic review. The levels
of evidence for clinical and economic effectiveness reflect current international standards and
expectations. However, as JBI takes a broader conceptual view of evidence, as reflected in the
capacity to conduct reviews on the feasibility, appropriateness or meaningfulness of health care
or health care experiences, the JBI levels of evidence incorporate particular criteria related to the
appraisal of included studies, with the overall aim of assessing the trustworthiness of the evidence.
Section 3 - Quantitative Evidence
Chapter Four:
Quantitative Evidence
and Evidence-Based Practice
Correlational studies
A correlational study aims to summarise associations between variables but is unable to make
direct inferences about cause and effect as there are too many unknown factors that could
potentially influence the data. This type of study design is often useful where it is unethical to
deliberately expose participants to harm. The most commonly used correlational study designs
are cohort and case-control studies.
Cohort study
A cohort study is a type of longitudinal study that is commonly used to study exposure-disease
associations. A cohort is a group of participants who share a particular characteristic such as
an exposure to a drug or being born in the same year for example. Cohort studies can either
be prospective or retrospective. Prospective cohort studies collect data after the cohort has
been identified and before the appearance of the disease/condition of interest. The appearance
of the disease/condition is then counted as an event (e.g. new case of cancer). In theory, all
of the individuals within the cohort have the same chance of developing the event of interest
over time.
A major advantage of this study design is that data is collected on the same participants over
time, reducing inter-participant variability. However this type of study is expensive to conduct
and can take a long time to generate useful data. Retrospective cohort studies are much less
expensive to conduct as they utilise already collected data, often in the form of medical records.
Effectively in a retrospective cohort design, the exposure, latent period and development of the
disease/condition have already occurred – the records of the cohort are audited backwards
in time to identify particular risk factors for a disease/condition. A disadvantage of this study
design is that the data was collected for purposes other than research so information relevant
to the study may not have been recorded. Statistically, the prospective cohort study should be
summarised by calculating relative risk and retrospective cohort studies should be summarised
by calculating odds ratio.
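The summary statistics just mentioned can be illustrated from a standard 2x2 table. The sketch below computes a relative risk (appropriate for a prospective cohort) and an odds ratio (appropriate for a retrospective design); the cell counts are invented for illustration.

```python
# 2x2 table (hypothetical counts):
#                 event     no event
# exposed           a=20       b=80
# unexposed         c=10       d=90
a, b, c, d = 20, 80, 10, 90

# Relative risk: ratio of the event rate in the exposed group to the
# event rate in the unexposed group (prospective cohort studies).
relative_risk = (a / (a + b)) / (c / (c + d))

# Odds ratio: ratio of the odds of the event in the exposed group to the
# odds in the unexposed group (retrospective designs such as case-control).
odds_ratio = (a * d) / (b * c)

print(f"RR = {relative_risk:.2f}, OR = {odds_ratio:.2f}")  # RR = 2.00, OR = 2.25
```

Note that the odds ratio (2.25) exceeds the relative risk (2.00) here; the two only approximate each other when the event of interest is rare.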
Case-control study
The case control study also uses a retrospective study design – examining data that has already
been collected, such as medical records. “Cases” are those participants who have a particular
disease/condition and the “Controls” are matched participants who do not. The records of each
are examined and compared to identify characteristics that differ and may be associated with the
disease/condition of interest. One recognised disadvantage of this study design is that is does
not provide any indication of the absolute risk associated with the disease of interest.
Descriptive Studies
Descriptive studies aim to provide basic information such as the prevalence of a disease within
a population and generally do not aim to determine relationships between variables. This type of
study design is prone to biases such as selection and confounding bias due to the absence of a
comparison or control. Case reports and case series are types of descriptive studies.
Expert Opinion
JBI regards the results of well-designed research studies grounded in any methodological
position as providing more credible evidence than anecdotes or personal opinion; however, in
situations where no research evidence exists, expert opinion can be seen to represent the ”best
available” evidence.
Chapter Five:
Quantitative Protocol and Title Development
Objectives
The objectives of the review should provide a clear statement of the questions being addressed
with reference to participants, interventions, comparators and outcomes. Clear objectives and
specificity in the review questions assist in focusing the protocol, allow the protocol to be more
effectively indexed, and provide a structure for the development of the full review report. The
review objectives should be stated in full. Conventionally, a statement of the overall objective is
made and elements of the review are then listed as review questions.
Review Objective
“To systematically review the evidence to determine the best available evidence related to the
post harvest management of Split Thickness Skin Graft donor sites.”
This broad statement can then be clarified by using focussed review questions.
Review Questions
Among adults in the acute postoperative phase (5 days) following skin grafting, what dressings
used in the management of the STSG donor site are most effective:
in reducing time to healing;
in reducing rates of infection; and
in reducing pain levels and promoting comfort?
What interventions/dressings are most effective in managing delayed healing/infection in the
split skin graft donor site?
What interventions are most effective in managing the healed split skin donor site?
Does the review have a concise, informative title?
Are the review objectives and questions clearly stated?
Background
The Joanna Briggs Institute places significant emphasis on a comprehensive, clear and meaningful
background section to every systematic review. The background should be of sufficient length to
discuss all of the elements of the review (approximately 1000 words) and describe the issue under
review including the target population, intervention(s) and outcome(s) that are documented in the
literature. The background should provide sufficient detail to justify the conduct of the review
and the choice of interventions and outcomes. Where complex or multifaceted interventions are
being described, it may be important to detail the whole of the intervention for an international
readership. Any topic-specific jargon or terms and specific operational definitions should also
be explained. In describing the background literature value statements about the effectiveness
of interventions should be avoided. The background should avoid making statements about
effectiveness unless they are specific to papers that illustrate the need for a systematic review
of the body of literature related to the topic (for example: "the use of acupuncture is effective
in increasing smoking cessation rates in hospitalised patients", if this is what the review aims to
examine). If value statements are made, it should be clear that it is not the reviewer's conclusion
but that of a third party, such as “Smith indicates that acupuncture is effective in increasing
smoking cessation rates in hospitalised patients”. Such statements in the background need
to be balanced by other points of view, emphasising the need for the synthesis of potentially
diverse bodies of literature. It is the responsibility of the reviewers to ensure that their review is
not a duplicate of an existing review. If systematic reviews already exist on the topic, then the
background should explain how this systematic review will be different (e.g. different population,
type of outcomes measured).
Questions to consider:
Does the background cover all the important elements (PICO) of the systematic review?
Are operational definitions provided? Do systematic reviews already exist on the topic; if so,
how is this one different? Why is this review important?
Inclusion criteria for considering studies for a JBI quantitative systematic review
The PICO model aims to focus the systematic review and is used to define the properties of
studies to be considered for inclusion in the review. PICO is used to construct a clear and
meaningful question when searching for quantitative evidence.
P = Population. What are the most important characteristics of the population? (e.g., age,
disease/condition, gender). In the earlier example, the PICO mnemonic describes the population
(adults) within a specific setting (acute care) within a specific time frame (5 days). There are no
subgroups or exclusions described; hence all patients meeting the described criteria would be
included in the analysis for each outcome. Specific reference to population characteristics, either
for inclusion or exclusion should be based on a clear, scientific justification rather than based on
unsubstantiated clinical, theoretical or personal reasoning.
I = Intervention, factor or exposure. What is the intervention of interest? (e.g., drug, therapy
or exposure). In the earlier example, there is no single intervention of interest, rather the term
“dressings” is used to indicate that the review will consider all wound dressing products.
Where possible, the intervention should be described in detail, particularly if it is multifaceted.
Consideration should also be given to whether there is risk of exposure to the intervention in
comparator groups in the included primary studies.
C = Comparison. What is the intervention being compared with? (e.g., placebo, standard care,
another therapy or no treatment). The protocol should detail what the intervention of interest is
being compared with. This can be as focused as one comparison, e.g. comparing "dressing X
with dressing Y", or as broad as "what dressings" from the example above. This level of detail
is important in determining study selection once searching is complete. For JBI reviews of
effectiveness, the comparator is the one element of the PICO mnemonic that can be either left
out of the question/s, or posited as a generalised statement. Systematic reviews of effectiveness
based on the inclusive definition of evidence adopted by JBI often seek to answer broader
questions about multifaceted interventions.
O = Outcome(s). How is effectiveness of the intervention measured? (e.g., reduction in mortality
or morbidity, improved memory or reduced pain). The protocol should list all the outcome
measures being considered. The relevance of each outcome to the review objective should be
apparent from the background section. Outcomes should be measurable and appropriate to
the review objective. Outcomes might be classified as being of primary or secondary interest in
relation to the review objective. It is useful to list outcomes and identify them as either primary or
secondary, short-term or absolute and discuss which ones will be included.
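The PICO elements of a protocol can be captured as a simple structured record. The sketch below uses the split thickness skin graft example from earlier in this chapter; the field names, the comparator wording and the primary/secondary split are our own assumptions for illustration, not a CReMS format.

```python
# Hypothetical structured representation of the PICO elements for the
# STSG donor-site review used as an example in this chapter.
pico = {
    "population": "adults in the acute postoperative phase (5 days) following skin grafting",
    "intervention": "any wound dressing product used on the STSG donor site",
    "comparator": "any other dressing, standard care, or no treatment",
    "outcomes": {
        # Which outcomes are primary vs secondary is an illustrative assumption.
        "primary": ["time to healing", "rate of infection"],
        "secondary": ["pain levels", "patient comfort"],
    },
}

# Listing outcomes and identifying them as primary or secondary, as recommended:
for priority, outcomes in pico["outcomes"].items():
    print(priority, "->", ", ".join(outcomes))
```

Writing the elements out this explicitly at protocol stage makes study selection easier later, since each retrieved paper can be checked field by field against the record.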
Types of studies
Generally, JBI systematic reviews consider primary research studies and the main research
designs used in primary studies to examine effectiveness are discussed in the previous chapter.
Where appropriate however, a systematic review can draw on other systematic reviews as a
source of evidence. These types of review are often called “reviews of reviews” or “umbrella
reviews” 2, 43 and they are useful to summarise existing systematic reviews, especially in areas
where much research is undertaken. However, as the majority of JBI quantitative reviews are
those that draw on primary studies of effectiveness, these types of studies will be the focus of
the remainder of this section. Any reviewers interested in undertaking an umbrella review through
JBI, are urged to contact the SSU.
As previously mentioned, if there are existing systematic reviews on the topic, the purpose of
conducting another, and how it differs from those, should be clearly explained in the background
section. The appropriate JBI instruments should be used for critical appraisal and data extraction
and all details should be transparent. If a systematic review plans to consider primary research
studies and secondary research (such as other systematic reviews), the differences in research
methodologies should be taken into account. As with different study designs, it is inappropriate
to combine findings of systematic reviews directly with those of primary studies, due to the
differences in research methods and underlying statistical assumptions used by those research
methods, therefore data from systematic reviews and primary studies should be analysed
separately within the review.
This section of the protocol should flow naturally from the review objective and questions. The
review question will determine the methodological approach and therefore the most appropriate
study designs to include in the review. Many JBI reviews will consider a hierarchy of study designs
for inclusion. If this is to be the case, there should be a statement about the primary study design
of interest and the range of studies that will be considered if primary studies with that design
are not found. In reviews of effectiveness, it is common to begin with a statement that RCTs
will be sought, but in the absence of RCTs other experimental study designs will be included.
Other study designs may be listed in hierarchical form, giving preference to those designs which
aim to minimise risk of bias (e.g. have some form of randomisation or control, or blinding), and
end with those most at risk of bias (e.g. descriptive studies with no randomisation, control or
blinding), or which are most appropriate to the nature of the question. In addition to risk of bias,
study selection may be based on the scope of the question. The hierarchy of study designs is
reasonably consistent internationally, with widespread acceptance that RCTs provide the most
robust evidence of effectiveness.
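The hierarchical statement described above (prefer RCTs, then fall back through designs with progressively higher risk of bias) can be sketched as an ordered list consulted at study selection. The ordering below is a common one offered for illustration, not a prescribed JBI list; the actual hierarchy should come from the review protocol.

```python
# Illustrative hierarchy of quantitative study designs, ordered from lowest
# to highest risk of bias. The exact list and order are assumptions here.
design_hierarchy = [
    "randomised controlled trial",
    "quasi-experimental (non-randomised controlled) study",
    "prospective cohort study",
    "retrospective cohort study",
    "case-control study",
    "descriptive study (case series / case report)",
]

def best_available(designs_found):
    """Return the highest-ranked design present among retrieved studies."""
    for design in design_hierarchy:
        if design in designs_found:
            return design
    return None

# If no RCTs were found, the review falls back to the next design down:
found = {"case-control study", "prospective cohort study"}
print(best_available(found))  # -> prospective cohort study
```

This mirrors the protocol statement "RCTs will be sought, but in the absence of RCTs other experimental study designs will be included": the search is not restricted, but selection walks down the hierarchy only as far as it must.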
In the systematic review report, JBI levels of evidence that describe effectiveness should be used
alongside any recommendations and these levels are based upon the included study designs
that provided evidence for those recommendations. This is discussed further in subsequent
sections. As different study designs use different approaches and assumptions, it is important to
use the critical appraisal tool appropriate to the study design when determining methodological
quality of a study for inclusion in a review. The types of studies that can be included in a JBI
quantitative review are standardised in CReMS, dependent on study design, and consist of the
following statements.
Type of Studies
1. Experimental (e.g. RCT, quasi-experimental)
This review will consider any experimental study design including randomised controlled
trials, non-randomised controlled trials, quasi-experimental, before and after studies,
#modify text as appropriate# for inclusion.
2. Observational (e.g. Cohort/Case control)
This review will consider analytical epidemiological study designs including prospective
and retrospective cohort studies; case control studies and analytical cross sectional
studies #modify text as appropriate# for inclusion.
Does the type of studies to be considered for inclusion in the review match
with the review objective/questions?
Search strategy
The aim of a systematic review is to identify all relevant international research on a given topic.
This is achieved by utilising a well-designed search strategy across a breadth of resources.
There is insufficient evidence to suggest a particular number of databases or whether particular
databases provide sufficient topic coverage; literature searching should therefore be based on
the principle of inclusiveness - with the widest reasonable range of databases included that are
considered appropriate to the focus of the review. If possible, authors should always seek the
advice of a research librarian in the construction of a search strategy.
The protocol should provide a detailed strategy including the search terms to be used and
the resources (e.g. electronic databases and specific journals, websites, experts etc.) to be
searched. Within systematic reviews, the search strategy is often described as a three-phase
process beginning with the identification of initial key words followed by analysis of the text words
contained in the title and abstract, and of the index terms used to describe relevant articles. The
second phase is to construct database-specific searches for each database included in the protocol,
and the third phase is to review the reference lists of all studies that are retrieved for appraisal to
search for additional studies. The text describing searching has been standardised in CReMS:
The search strategy aims to find both published and unpublished studies. A three-step search
strategy will be utilised in this review. An initial limited search of MEDLINE and CINAHL will be
undertaken followed by analysis of the text words contained in the title and abstract, and of
the index terms used to describe the article. A second search using all identified keywords and
index terms will then be undertaken across all included databases. Thirdly, the reference list
of all identified reports and articles will be searched for additional studies. Studies published
in #insert language(s)# will be considered for inclusion in this review. Studies published #insert
dates# will be considered for inclusion in this review.
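The database-specific search constructed in step two can be thought of as combining synonym groups for each concept with OR, then joining the groups with AND. The sketch below builds such a string; the keyword groups are invented, and a real search should use each database's own syntax and index terms (e.g. MeSH headings), ideally with a research librarian's input.

```python
# Hypothetical keyword groups identified in step one (terms are illustrative).
keyword_groups = {
    "population": ["adult*", "patient*"],
    "intervention": ["dressing*", '"wound care"'],
    "outcome": ["healing", "infection", "pain"],
}

# Step two: synonyms within a group are OR-ed together; the concept
# groups are then AND-ed to form the database-specific search string.
query = " AND ".join(
    "(" + " OR ".join(terms) + ")" for terms in keyword_groups.values()
)
print(query)
# -> (adult* OR patient*) AND (dressing* OR "wound care") AND (healing OR infection OR pain)
```

Documenting the final string for each database, as the protocol requires, is what makes the search reproducible by other reviewers.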
The search strategy should also describe any limitations to the scope of searching in terms of
dates, resources accessed or languages; each of these may vary depending on the nature of the
topic being reviewed, or the resources available. Limiting by date may be used where the focus of
the review is on a more recent intervention or innovation or if there has been a previously published
systematic review on the topic and the current review is an update. However, date limiting may
exclude potentially relevant studies and should thus be used with caution; the decision preferably
being endorsed by topic experts and justified in the protocol. Similarly, restricting study inclusion
on the basis of language will have an impact on the comprehensiveness and completeness of the
review findings. Where possible, reviewers should seek collaborative agreements with other JBI
entities to ensure that minimal language restrictions are placed on the identification and inclusion
of primary studies. Examples of search terms and databases that may be useful can be found
in Appendix XV.
The Joanna Briggs Institute is an international collaboration with an extensive network of
collaborating centres, Evidence Synthesis Groups (ESGs) and other entities around the world.
This creates networking and resource opportunities for conducting reviews where literature of
interest may not be in the primary language of the reviewers. Many papers in languages other
than English are abstracted in English, from which reviewers may decide to retrieve the full paper
and seek to collaborate with other JBI entities regarding translation. It may also be useful to
communicate with other JBI entities to identify databases not readily available outside specific
jurisdictions for more comprehensive searching.
The comprehensiveness of searching and documenting the databases searched is a core
component of the systematic review’s credibility. In addition to databases of published research,
there are several online sources of Grey or unpublished literature that should be considered.
Grey (or Gray) literature is also known as Deep or Hidden Web material and refers to papers that
have not been commercially published. It includes theses and dissertations, reports, blogs,
technical notes, non-independent research and other documents produced and published by
government agencies, academic institutions and other groups that are not distributed or indexed
by commercial publishers. Rather than compete with the published literature, Grey literature has
the potential to complement and communicate findings to a wider audience, as well as to reduce
publication bias.
Joanna Briggs Institute 55
Reviewers’ Manual 2011
Importantly, the group of databases searched should be tailored to the particular review topic.
JBI entities that do not have access to a range of electronic databases to facilitate searching of
published and unpublished literature are encouraged to contact the SSU liaison officer regarding
access to the University of Adelaide’s Barr Smith Library to enable them to access an increased
range of electronic resources.
Does the search strategy detail the initial search terms and databases to be searched?
Are any restrictions clearly explained?
Assessment criteria
The basis for inclusion (and exclusion) of studies in a systematic review needs to be transparent
and clearly documented in the protocol. A systematic review aims to synthesise the best available
evidence; therefore the review should aim to include the highest quality of evidence possible.
Methodological quality is assessed by critical appraisal using validated tools. A variety of
checklists and tools are available to assess the validity of studies by identifying potential sources
of bias; the JBI checklists are organised by study design. The appropriate MAStARI critical
appraisal tools should be used for JBI quantitative reviews (Appendix VII). These checklists use a
series of criteria that can be scored as met, not met, unclear or, if deemed appropriate, not
applicable (N/A) to that particular study. The decision as to whether or not to include a study can
be made based on meeting a pre-determined proportion of all criteria, or on certain criteria being
met. It is also possible to weight the different criteria differently, for example blinding of assessors
(to prevent detection bias) may be considered to be twice as important as blinding the caregivers
(to prevent performance bias).
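The scoring-and-cut-off logic described above can be sketched in code. This is a hypothetical illustration only (the criterion names, weights and 0.5 cut-off are invented for the example, not taken from a JBI checklist); in practice the scoring system is built into MAStARI and agreed by the reviewers in advance:

```python
# Hypothetical appraisal scoring sketch: each criterion is scored
# 'met', 'not met' or 'unclear', and criteria may carry different
# weights (e.g. assessor blinding weighted twice as heavily as
# caregiver blinding, as in the example above).
weights = {
    "random allocation": 1.0,
    "assessor blinding": 2.0,   # guards against detection bias
    "caregiver blinding": 1.0,  # guards against performance bias
    "complete follow-up": 1.0,
}
appraisal = {
    "random allocation": "met",
    "assessor blinding": "met",
    "caregiver blinding": "not met",
    "complete follow-up": "unclear",
}

# Only criteria scored 'met' contribute their weight to the total.
score = sum(w for c, w in weights.items() if appraisal[c] == "met")
share = score / sum(weights.values())
include = share >= 0.5  # pre-determined cut-off, agreed before appraisal
```

The key point the code makes concrete is that both the weights and the cut-off are fixed before any study is appraised, so inclusion decisions cannot be adjusted after the fact.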
It is important that critical appraisal tools are appropriate for the design of the study so that
the questions of those tools are relevant to that study design.
The decisions about the scoring system and the cut-off for inclusion of a study in the review
should be made in advance, and be agreed upon by all participating reviewers before critical
appraisal commences. It is JBI policy that all study types must be critically appraised using the
standard critical appraisal instruments for specific study designs, built into the analytical modules
of the SUMARI software. The protocol must therefore describe how the methodological quality/
validity of primary studies will be assessed; any exclusion criteria based on quality considerations;
and include the appropriate JBI critical appraisal instruments in appendices to the protocol. The
optional standardised set text in CReMS states:
Quantitative papers selected for retrieval will be assessed by two independent reviewers for
methodological validity prior to inclusion in the review using standardised critical appraisal
instruments from the Joanna Briggs Institute Meta Analysis of Statistics Assessment and
Review Instrument. Any disagreements that arise between the reviewers will be resolved
through discussion, or with a third reviewer.
MAStARI optional set text can be extended by reviewers who wish to add or edit information.
However, the assessment tools included in the analytical module MAStARI are required for all JBI
entities conducting reviews through JBI.
The main objective of critical appraisal is to assess the methodological quality of a study and to
determine the extent to which a study has addressed the possibility of bias in its design, conduct
and analysis. If a study has not excluded the possibility of bias, then its results are questionable
and could well be invalid. Therefore, part of the systematic review process is to evaluate how
well the potential for bias has been excluded from a study, with the aim of only including high
quality studies in the resulting systematic review. A secondary benefit of critical appraisal is to
take the opportunity to ensure each retrieved study has included the population, intervention and
outcomes of interest specified in the review.
The most robust study design for an effectiveness study in terms of excluding bias is the
double-blinded, placebo-controlled randomised controlled trial (RCT). Some have argued that systematic reviews
on the effects of interventions should be limited to RCTs, since these are protected from internal
bias by design, and should exclude non-randomised studies, since the effect sizes in these are
almost invariably affected by confounders. 31
Nevertheless, there are four main forms of bias that can affect even this study design. These
types of bias (as well as others) are the focus of checklist items on the JBI critical appraisal tools.
Main types of bias are: selection bias, performance bias, attrition bias and detection bias:
Selection bias refers chiefly to whether or not the assignment of participants to either
treatment or control groups (e.g. in a comparison of only two groups) has been made so
that all potential participants have an equal chance of being assigned to either group, and
that the assignment of participants is concealed from the researchers, at least until the
treatment has been allocated.
Performance bias refers to any systematic differences in the intervention administered
to participants which may arise if either the researcher, participant or both, are aware of
what treatment (or control) has been assigned.
Detection bias occurs if an assessor evaluates an outcome differently for patients
depending on whether they are in the control or treatment group.
Attrition bias refers to differences between control and treatment groups in terms of
patients dropping out of a study, or not being followed up as diligently.
Critical appraisal tools are included in MAStARI and can be completed electronically for RCTs,
quasi-experimental, case-control/cohort studies and descriptive/case series studies. A separate
checklist should be used for each type of study design considered for inclusion in the review
and each should be appended to the protocol (this occurs automatically in CReMS). MAStARI
has been designed with the intention that there will be at least two reviewers (a primary and a
secondary) independently conducting the critical appraisal. Both reviewers are initially blinded to
the appraisal of the other reviewer. Once both reviewers have completed their appraisal, the primary
reviewer then compares the two appraisals. The two reviewers should discuss cases where there
is a lack of consensus in terms of whether a study should be included; it is appropriate to seek
assistance from a third reviewer as required.
Are the critical appraisal tools appropriate to the study designs? Are copies of the critical
appraisal tools appended to the protocol? Has the primary reviewer assigned a secondary
reviewer to the review?
Data extraction
Data extraction refers to the process of identifying and recording relevant details from either primary
or secondary research studies that will be included in the systematic review. A standardised
extraction tool is used to minimise the risk of error when extracting data and to ensure that the
same data is recorded for each included study (Appendix IX). Other error-minimising strategies
include ensuring that both reviewers have practised using the extraction tool and can apply
it consistently. It is also recommended that reviewers extract data independently before
conferring. These strategies aim to facilitate accurate and reliable data entry into MAStARI for
analysis.
Details regarding the participants, the intervention, the outcome measures and the results are to
be extracted from included studies. It is JBI policy that data extraction for all study types must be
carried out using the standard data extraction instruments for specific study designs, built into
the analytical modules of the SUMARI software. The protocol must therefore describe how data
will be extracted and include the appropriate JBI data extraction instruments as appendices to
the protocol.
Set text is included to guide the reviewer as to what should be included in each section of the
protocol, and to ensure standardisation across JBI reviews. However, this text is editable and
reviewers should tailor the text to suit their particular review.
The editable set text for data extraction illustrates what is considered necessary for the write-up
of a systematic review; it states:
Quantitative data will be extracted from papers included in the review using the standardised
data extraction tool from JBI-MAStARI. The data extracted will include specific details about
the interventions, populations, study methods and outcomes of significance to the review
question and specific objectives.
Studies may include several outcomes; however the review should focus on extracting information
related to the research questions and outcomes of interest. Information that may impact upon the
generalisability of the review findings such as study method, setting and population characteristics
should also be extracted and reported. Population characteristics include factors such as age,
past medical history, co-morbidities, complications or other potential confounders.
The data extracted will vary depending on the review question; however it will generally either be
dichotomous or continuous in nature. Dichotomous data will include the number of participants
with the exposure/intervention (n) and the total sample (N) for both control and treatment groups.
Classically, this is stated as n/N; therefore there will be two columns of data for each outcome
of interest.
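The n/N recording convention described above can be illustrated with a small sketch. The study names and figures below are entirely hypothetical; the point is simply the shape of the extracted data, with one n/N pair per group per outcome:

```python
# Hypothetical extracted dichotomous outcome, recorded as n/N per group:
# n = participants with the outcome of interest, N = total in that group.
studies = [
    # (study id, treatment (n, N), control (n, N))
    ("Smith 2008", (12, 60), (20, 58)),
    ("Jones 2010", (8, 45), (15, 44)),
]
for study, (tn, t_total), (cn, c_total) in studies:
    print(f"{study}: treatment {tn}/{t_total}, control {cn}/{c_total}")
```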
For continuous data, the mean and standard deviation (SD), plus sample size, are extracted for
each specified outcome for both the control and intervention (or exposure) group. Typically, the
standard deviation is expressed as:

SD = √( Σ(xᵢ − x̄)² / (N − 1) )

The standard error (SE) may also be reported in addition to the SD. However, if only the SE is
reported, the SD can be calculated as long as the sample size (N) is known, using the equation:

SD = SE × √N
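The SE-to-SD conversion is a one-line calculation; a minimal sketch (the function name and example figures are illustrative, not from the manual):

```python
import math

def sd_from_se(se: float, n: int) -> float:
    """Recover the standard deviation from a reported standard error.

    Since SE = SD / sqrt(N), it follows that SD = SE * sqrt(N).
    """
    return se * math.sqrt(n)

# A study reports SE = 1.2 for a group of N = 36 participants:
sd = sd_from_se(1.2, 36)  # SD = 1.2 * 6 = 7.2
```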
In some cases it may not be possible to extract all necessary raw data from an included study
for a systematic review, as sometimes only aggregated data are reported, or perhaps data from
two different patient populations have been combined in the data analysis, and your review is
focussed on only one of the patient populations. In these circumstances, the standard approach
is to make contact with the authors of the publication and seek their assistance in providing the
raw data. Most researchers are obliging when it comes to these requests providing that records
are still available. If the study authors do not respond or if the data is unavailable, this should be
noted in the report and the data presented in narrative summary.
In addition to the data, conclusions that study authors have drawn based on the data are also
extracted. It is useful to identify the study authors’ conclusions and establish whether there is
agreement with conclusions made by the reviewer authors.
What outcomes are anticipated? How have they been measured?
What type of data is anticipated e.g. continuous or dichotomous? Has the
MAStARI data extraction tool been appended to the protocol?
Data synthesis
The protocol should also detail how the data will be combined and reported. A synthesis can
either be descriptive (narrative summary) or statistical (meta-analysis). A meta-analysis of data
is desirable as it provides a statistical summary estimate of the effectiveness (called the effect
size) of one intervention/treatment versus another, for a given population. By combining the
results of primary research studies, a meta-analysis increases the precision of the estimate, and
provides a greater chance of detecting a real effect as statistically significant. The overall goal of
meta-analysis in JBI systematic reviews is to combine the results of previous studies to arrive
at summary conclusions about a body of research. It is used to calculate a summary estimate
of effect size, to explore the reasons for differences in effects between and among studies, and
to identify heterogeneity in the effects of the intervention (or differences in the risk) in different
subgroups. 44
In JBI systematic reviews the results of similar individual studies can be combined in the meta-
analysis to determine the overall effect of a particular form of health care intervention (the
treatment) compared to another standard or control intervention for a specified patient population
and outcome. 4
If there is large variation in either the intervention or the included population, then the summary
estimate is unlikely to be valid. When systematic reviews contain very diverse primary studies
a meta-analysis might be useful to answer an overall question but the use of meta-analysis to
describe the size of an effect may not be meaningful if the interventions are so diverse that an
effect estimate cannot be interpreted in any specific context. 2
Studies to be included in JBI systematic reviews with meta-analysis should be similar to each
other so that generalisation of results is valid. To determine if this is the case, a reviewer should
examine whether the interventions being given to the ‘treatment’ group in each study are similar
enough to allow meta-analysis, and that the control groups in each study are similar enough to
warrant combination in meta-analysis. 4, 31
The main areas where data from included studies should be comparable can be categorised
as: clinical, methodological and statistical. The following questions should be considered when
deciding whether or not to combine data in a meta-analysis: 30, 31, 45
Clinical – are the patient characteristics similar (such as age, diagnoses, co-morbidities,
treatments)?
Methodological – do the studies use the same study design and measure the same
outcomes?
Statistical – were outcomes measured in the same way, at the same time points, using
comparable scales?
These questions can be very difficult to answer and often involve subjective decision-making.
Involvement of experienced systematic reviewers and/or researchers with a good understanding
of the clinical question being investigated should help in situations where judgement is required.
Such situations should be clearly described and discussed in the systematic review report.
Borenstein et al 31 and Barza et al 30 also provide good reference material.
Another question to ask is whether it is sensible to statistically combine the results. For example,
a systematic review may have a number of included studies that suggest a negative effect of a
therapy and a number that suggest a positive effect, therefore a meta-analysis may conclude that
overall there is no effect of the therapy. In this situation it may not be useful to combine the data
in meta-analysis, and presenting the results in a narrative summary may be more appropriate. 31
However, presentation of the results as a table or as a graphic (such as a forest plot) may still be
useful in conveying the result to the reader.
Statistical pooling of study data provides a summary estimate, using transparent rules specified
in advance. 31 This allows an overall effect of a treatment/intervention to be determined. Whilst
the ultimate aim of a quantitative systematic review is to combine study data in meta-analysis,
this is not always appropriate or possible. Data from two or more separate studies are required
to generate a synthesis.
It is important to combine the studies in an appropriate manner using methods appropriate to the
specific type and nature of data that has been extracted. In the protocol, the methods by which
studies will be combined should be described in as much detail as is reasonably possible.
As the optional MAStARI set text below indicates, this may require describing the approaches
for both dichotomous and continuous data if either or both types of data are anticipated. The set
text may be extended to describe:
which test of statistical heterogeneity is to be used (such as Chi-square);
at which point statistical heterogeneity is considered significant; and
whether fixed or random effects models will be utilised, and which specific methods
of meta-analysis may be used for the anticipated types of data (i.e. continuous or
dichotomous).
The set text inserted into the CReMS protocol will depend on the study design(s) that have been
selected for inclusion in the review.
Data Synthesis
1. Experimental (e.g. RCT, quasi-experimental)
Quantitative papers will, where possible, be pooled in statistical meta-analysis using
JBI-MAStARI. All results will be subject to double data entry. Effect sizes expressed as odds
ratio (for categorical data) and weighted mean differences (for continuous data) and their 95%
confidence intervals will be calculated for analysis [modify text as appropriate]. Heterogeneity
will be assessed statistically using the standard Chi-square. Where statistical pooling is not
possible the findings will be presented in narrative form including tables and figures to aid in
data presentation where appropriate.
2. Observational (e.g. Cohort/Case control)
Quantitative papers will, where possible, be pooled in statistical meta-analysis using
JBI-MAStARI. All results will be subject to double data entry. Effect sizes expressed as
relative risk for cohort studies and odds ratio for case control studies (for categorical data)
[modify text as appropriate] and weighted mean differences (for continuous data) and their
95% confidence intervals will be calculated for analysis [modify text as appropriate]. A random
effects model will be used and heterogeneity will be assessed statistically using the standard
Chi-square. Where statistical pooling is not possible the findings will be presented in narrative
form including tables and figures to aid in data presentation where appropriate.
3. Descriptive (e.g. Case Series Studies)
Findings from descriptive studies will, where possible, be synthesised and presented in
a tabular summary with the aid of narrative and figures where appropriate [modify text as
appropriate]. If more than one study design was selected, the set text will change appropriately
to reflect this broader inclusion.
Where possible, study results should be pooled in statistical meta-analysis using either MAStARI
or Review Manager (for reviews conducted through a Cochrane Review Group). All numeric
outcome data must be double entered to prevent data entry errors. Where statistical pooling is
not possible the findings should be presented in narrative summary, although figures and tables
are still encouraged.
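As a minimal illustration of the effect measures named in the set text above, an odds ratio and its 95% confidence interval can be computed from a 2×2 table. This is a sketch using the standard log (Woolf) method, with hypothetical counts; MAStARI and RevMan perform these calculations internally:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a 95% confidence interval from a 2x2 table,
    computed on the log scale (Woolf method).
    a = events in treatment, b = non-events in treatment,
    c = events in control,   d = non-events in control."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical study: 12/60 events in treatment vs 20/58 in control,
# so a=12, b=48, c=20, d=38.
or_, lo, hi = odds_ratio_ci(12, 48, 20, 38)
```

If the 95% confidence interval crosses 1, the odds ratio is not statistically significant at the conventional level.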
Narrative Summary
Where meta-analysis is not possible, the results should be synthesised in words and presented
as a narrative summary. Elements should include raw data as presented in the included studies
(e.g. weighted mean differences, standard deviations etc.) as well as information that puts the
data in context – such as patient descriptions, study characteristics, and so on. Tables and
figures are encouraged to aid presentation of the results.
Are the methods for data synthesis clearly described? How will heterogeneity be assessed in
the included studies? How will data be presented if not combined in meta-analysis?
Conflict of Interest
A statement should be included in every review protocol being submitted to JBI which either
declares the absence of any conflicts of interest, or which describes a specified or potential
conflict of interest. Reviewers are encouraged to refer to the JBI policy on commercial funding of
review activity for what could constitute a conflict of interest.
Acknowledgments
The source of financial grants and other funding must be acknowledged, including a declaration
of the authors’ industrial links and affiliations. The contribution of colleagues or institutions
should also be acknowledged. Personal thanks and thanks to anonymous reviewers are not
appropriate.
References
Protocols are required to use Vancouver referencing. References should be cited using superscript
Arabic numerals in the order in which they appear, with full details listed in numerical order in the
reference section. An example is shown below.
In text:
The fixed effect model assumes that there is one true effect underlying the studies in the
analysis and that all differences in the data are due to sampling error or chance and that there
is no heterogeneity between the studies1.
In reference section:
1. Borenstein M, Hedges LV, Higgins JPT and Rothstein HR (2009) Introduction to
meta-analysis. Wiley; chapters 10-12, p. 61-75.
More information about the Vancouver style is detailed in the International Committee of
Medical Journal Editors’ revised ‘Uniform Requirements for Manuscripts Submitted to
Biomedical Journals: Writing and Editing for Biomedical Publication’, and can be found at
http://www.ICMJE.org/
Appendices
Appendices should be placed at the end of the paper, numbered in Roman numerals and
referred to in the text. At a minimum, this section should include the JBI critical appraisal and
data extraction tools (this occurs automatically when using CReMS).
Does the protocol have any conflicts of interests and acknowledgments declared, appendices
attached, and references in Vancouver style?
Once a protocol has been approved, it is published on the JBI website. Protocols can be found at:
http://www.joannabriggs.edu.au/Access%20Evidence/Systematic%20Review%20Protocols
Chapter Six:
The Systematic Review and Synthesis
of Quantitative Data
Please refer to the JBI website for specific presentation requirements for systematic review
reports http://www.joannabriggs.edu.au
All JBI systematic reviews are based on approved, peer-reviewed systematic review protocols
(as discussed in Chapter 5). Deviations from approved protocols are rare and should be clearly
justified in the report. JBI advocates for approved, peer-reviewed systematic review protocols as
an essential part of a process to enhance the quality and transparency of systematic reviews.
JBI systematic reviews should use Australian spelling and authors should therefore follow the latest
edition of the Macquarie Dictionary. All measurements must be given in Systeme International
d’Unites (SI) units. Abbreviations should be used sparingly; use only where they ease the reader’s
task by reducing repetition of long, technical terms. Initially use the word in full, followed by the
abbreviation in parentheses. Thereafter use the abbreviation only. Drugs should be referred to
by their generic names. If proprietary drugs have been used in the study, refer to these by their
generic name, mentioning the proprietary name, and the name and location of the manufacturer,
in parentheses.
Review authors:
The names, contact details and the JBI affiliation should be listed for each reviewer (which occurs
automatically when using CReMS)
Executive Summary:
This is generally the final section of the report to be written and should be a summary of the
review in 500 words or less stating the purpose, basic procedures, main findings and principal
conclusions of the review. The executive summary should not contain abbreviations or references.
The following headings should be included in the executive summary:
Background:
This section should briefly describe the issue under review including the target population,
interventions and outcomes that are documented in the literature. The background should be
an overview of the main issues. It should provide sufficient detail to justify why the review was
conducted and the choice of the various elements such as the interventions and outcomes.
Objectives:
The review objectives should be stated in full, as detailed in the protocol section.
Inclusion criteria:
Types of participants
The report should provide details about the types of participants included in the review.
Useful details include: age range, condition/diagnosis or health care issue, administration
of medication. Details of where the studies were conducted (e.g. rural/urban setting and
country) should also be included. Decisions about the types of participants should have been
explained in the background.
Types of interventions
This section should present all the interventions examined, as detailed in the protocol.
Types of outcome measures
There should be a list of the outcome measures considered, as detailed in the protocol.
Types of studies
As per the protocol section, the types of studies that were considered for the review should
be included. There should be a statement about the target study type and whether or not
this type was found. The types of study identified by the search and those included
should be detailed in the report.
Search strategy
A brief description of the search strategy should be included. This section should detail search
activity (e.g. databases searched, initial search terms and any restrictions) for the review, as
predetermined in the protocol.
Data extraction
This section should include a brief description of the types of data collected and the instrument
used to extract data.
Data synthesis
This section should include a brief description of how the data was synthesised – either as a
meta-analysis or as a narrative summary.
Conclusions
This section should include a brief description of the findings and conclusions of the review.
Implications for research
This section should include a brief description of how the findings of the review may lead to
further research in the area – such as gaps identified in the body of knowledge.
Implications for practice
This section should include a brief description of how the findings and conclusions of the
review may be applied in practice, as well as any implications that the findings may have on
current practice.
Following the executive summary, the report should include the following sections:
Background
As discussed in the protocol section, JBI places significant emphasis on a comprehensive,
clear and meaningful background section to every systematic review particularly given the
international circulation of systematic reviews, variation in local understandings of clinical
practice, health service management and client or patient experiences.
Review Objectives/review questions
As discussed previously in the protocol section, the objective(s) of the review should be
clearly stated.
Inclusion criteria
As detailed in the protocol, the inclusion criteria used to determine consideration for
inclusion should be stated. For a quantitative review these include: Population, Intervention,
Comparator and Outcomes, as per the PICO mnemonic.
Search strategy
This section should include an overview of the search strategy used to identify articles
considered by the review. The documentation of search strategies is a key element of the
scientific validity of a systematic review. It enables readers to look at and evaluate the steps
taken, decisions made and consider the comprehensiveness and exhaustiveness of the
search strategy for each included database.
Each electronic database is likely to use a different system for indexing key words within
their search engines. Hence, the search strategy will be tailored to each particular database.
These variations are important and need to be captured and included in the systematic
review report. Additionally, if a comprehensive systematic review is being conducted through
JBI-CReMS, the search strategies for each database for each approach are recorded and
reported via CReMS. Commonly, these are added as appendices.
Has the MAStARI critical appraisal tool(s) been appended to the review?
Have the results of critical appraisal been discussed? Were there any differences
of opinion between the reviewers? How were any differences resolved?
Data extraction
This section of the report should include details of the types of data extracted from the included
studies, as predetermined in protocol. If no data was available for particular outcomes, that
should also be discussed. The included studies may include several outcomes; however the
review should focus on extracting information related to the research questions and outcomes
of interest. Information that may impact upon the generalisability of the review findings such as
study method, setting and population characteristics should also be extracted and reported. This
is so that the data can be put into context. Population characteristics include factors such as
age, past medical history, co-morbidities, complications or other potential confounders. MAStARI
aims to reduce errors in data extraction by using two independent reviewers and a standardised
data extraction instrument.
Data synthesis
This section should describe how the extracted data was synthesised. If the data was
heterogeneous and is presented as a narrative summary, potential sources of heterogeneity
should be discussed (e.g. clinical, methodological or statistical), as well as the basis on which it
was determined inappropriate to combine the data statistically (such as differences in populations,
study designs, or by Chi-square test). Where meta-analysis was used, the statistical methods and
the software used (MAStARI or RevMan) should be described.
Heterogeneity
When used in relation to meta-analysis, the term ‘heterogeneity’ refers to the amount of variation
in the characteristics of included studies. For example, if three studies are to be included in a
meta-analysis, do each of the included studies have similar sample demographics, and assess
the same intervention? (Note that the method by which the outcome is measured does not need
to be identical). While some variation between studies will always occur due to chance alone,
heterogeneity is said to occur if there are significant differences between studies, and under
these circumstances meta-analysis is not valid and should not be undertaken. But how does one
tell whether or not differences are significant?
Visual inspection of the meta-analysis output (e.g. a forest plot) is the first stage of assessing
heterogeneity.
Figure 3 is an example forest plot which shows the results of individual studies and thus indicates
the magnitude of any effect between the treatment and control groups. Do the individual studies
show a similar direction and magnitude of effect – i.e. are the rectangular symbols at similar
positions on the X-axis?
A formal statistical test of the similarity of studies is provided by the test of homogeneity. 46 This
test calculates a probability (P value) from a Chi-square statistic calculated using estimates of
the individual study weight, effect size and the overall effect size. Note, however, that this test
suffers from a lack of power – and will often fail to detect a significant difference when a difference
actually exists – especially when there are relatively few studies included in the meta-analysis.
Because of this low power, some review authors use a significance level of P < 0.1, rather than
the conventional 0.05 value, in order to protect against the possibility of falsely stating that there
is no heterogeneity present. 47 When combining the results from a series of observational
studies, this lower threshold is often the default significance level, owing to the increased
heterogeneity inherent in this study design.
Review Results
Description of studies
The type and number of papers identified by the search strategy and the number of papers that
were included and excluded should be stated. The description should be accompanied by a
flowchart such as that shown in Figure 4, with the following stages of identifying and retrieving
studies for inclusion:
Numbers of studies identified;
Numbers of studies retrieved for detailed examination;
Numbers of studies excluded on the basis of title and abstract;
Numbers of full text articles retrieved;
Numbers of studies excluded on the basis of full text;
Numbers of appraised studies;
Numbers of studies excluded following critical appraisal, with an overview of
reasons for exclusion; and
Numbers of included studies.
Details of all full text articles that were retrieved for critical appraisal should be given. There should
be separate appendices for details of included and excluded studies. For excluded studies, details
should also be given as to why they were excluded. (Note: all of this is automatically documented in
CReMS as reviewers add information in MAStARI, and is uploaded to <Report Builder>.)
Figure 4. Example study selection flowchart:
Potentially relevant papers identified by literature search (n = 738)
Papers excluded after evaluation of abstract (n = 682)
Papers retrieved for detailed examination (n = 55)
Papers excluded after review of full paper (n = 22)
Papers assessed for methodological quality (n = 33)
Trials included in systematic review (n = 18): multifaceted interventions (n = 10); targeted interventions (n = 8)
The description of studies may also incorporate details of included studies. This additional detail
may include the assessment of methodological quality, characteristics of the participants and
types of interventions.
With detail on the number and type of studies reported, the results section then focuses on
providing a detailed description of the results of the review. Where a systematic review has
several foci, the results should be presented in a logical, structured way, relevant to the specific
questions. The role of tables and appendices should not be overlooked. Adding extensive detail
on studies in the results section may “crowd” the findings, making them less accessible to readers;
hence the use of tables, graphs and in-text references to specific appendices is encouraged.
Review Results
This section should be organised in a meaningful way based on the objectives of the review
and the criteria for considering studies. There is no standardised international approach to how
review findings are structured or how the findings of reviews ought to be reported. It would be
logical however, to present findings in the same order as the review questions and/or review
objectives. The audience for the review should be considered when structuring and presenting
the review findings.
With detail on the studies reported, the results section then focuses on providing a detailed
description of the results of the review. For clarity and consistency of presentation, JBI recommends
that the reviewer, in discussion with their review panel, give consideration to whether the specific
review question be used to structure the results section, or whether the findings can be reported
under the outcomes specified in the protocol. For reviews of effectiveness, reporting based on
outcomes identified in the protocol is a common method for establishing clear structure to the
results. Some reviews have taken the approach of reporting RCT-based data for all outcomes of
interest, then repeating the structure for non-RCT papers.
Where a systematic review seeks to address multiple questions, the results may be structured in
such a way that particular outcomes are reported according to the specific questions.
Given there is no clear international standard or agreement on the structure or key components
of this section of a review report, and the level of variation evident in published systematic
reviews, the advice here is general in nature. In general, findings are discussed textually and then
supported with meta-graphs, tables, figures as appropriate.
The focus should be on presenting information in a clear and concise manner. Any large or
complex diagrams/tables/figures should be included as appendices so as not to break the flow
of the text.
Meta-view graphs represent a specific item of analysis that can be incorporated into the results
section of a review. However, the results are more than the meta-view graphs, and whether this
section is structured based on the intervention of interest, or some other structure, the content
of this section needs to present the results with clarity.
For dichotomous data using a fixed effects model, there are three options:
Mantel-Haenszel Relative Risk (RR);
Mantel-Haenszel Odds Ratio (OR); or
Peto Odds Ratio (OR).
There are two options for dichotomous data using a random effects model:
the DerSimonian and Laird Odds Ratio (OR); or
the DerSimonian and Laird Relative Risk (RR).
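The Mantel-Haenszel pooled estimates listed above can be sketched with hypothetical 2x2 tables. The study counts below are assumed for illustration only; this shows the point estimates, not the MAStARI implementation or the confidence interval calculations:

```python
# Each study: (a, b, c, d) = events/non-events in treatment, events/non-events in control
studies = [
    (12, 88, 20, 80),    # hypothetical study 1
    (8, 42, 14, 36),     # hypothetical study 2
    (30, 170, 44, 156),  # hypothetical study 3
]

# Mantel-Haenszel pooled odds ratio: sum(a*d/n) / sum(b*c/n)
or_num = sum(a * d / (a + b + c + d) for a, b, c, d in studies)
or_den = sum(b * c / (a + b + c + d) for a, b, c, d in studies)
mh_or = or_num / or_den

# Mantel-Haenszel pooled relative risk: sum(a*(c+d)/n) / sum(c*(a+b)/n)
rr_num = sum(a * (c + d) / (a + b + c + d) for a, b, c, d in studies)
rr_den = sum(c * (a + b) / (a + b + c + d) for a, b, c, d in studies)
mh_rr = rr_num / rr_den
```

With these figures both pooled estimates fall below 1 (the treatment appears protective), and, as is typical for event rates of this size, the odds ratio lies further from 1 than the relative risk.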
In terms of confidence intervals, the default setting of MAStARI is to calculate 95% confidence
intervals; however this can be adjusted to either 90% or 99% as required. In the current version of
the software, the preferred meta-view field defaults to ‘Forest plot’ as currently no other options
are available.
Once all of the appropriate settings have been selected, the forest plot summarising the results
of the individual studies and their combined meta-analysis can be generated. The forest plot can
be saved as a jpeg (.jpg) file using the ‘Save graph to disk’ button, and specifying an appropriate
name and location for the file, enabling it to be embedded in a systematic review report or other
document. Using the ‘Send to report’ button will automatically transfer your forest plot to
your review results in CReMS.
In MAStARI, if you have not previously conducted data extraction on your outcome of interest,
create a new outcome. Include a title for the outcome, a description of the outcome, the units or
scale that the outcome is measured in, and whether the data is dichotomous (i.e. can only take
two possible entities, for example yes/no, dead/alive, disease cured/not cured) or continuous (i.e.
measured on a continuum or scale using a number, for example body mass in kg, blood pressure
in mm Hg, number of infections per year). Note the title of the outcome and its description for
future reference. All relevant outcomes can be added at this time, and will appear in a drop
down list for selection when adding interventions and data, or outcomes can be added one at
a time. Complete data entry for each outcome before commencing extraction of
subsequent outcomes.
Are appropriate statistical methods used? If in doubt, seek specialist help.
Discussion
The aim of this section is to summarise and discuss the main findings, including the strength of the
evidence, for each main outcome. It should address issues arising from the conduct of the review,
including limitations, and issues arising from the findings of the review (such as search limitations).
The discussion does not bring in new literature or information that has not been reported in the
results section. The discussion does seek to establish a line of argument based on the findings
regarding the effectiveness of an intervention, or its impact on the outcomes identified in the
protocol. The application and relevance of the findings to relevant stakeholders (e.g. healthcare
providers, patients and policy makers) should also be discussed in this section. 49, 50
Conclusions
The conclusion section of a systematic review should provide a general interpretation of the
findings in the context of other evidence and provide a detailed discussion of issues arising from
the findings of the review and demonstrate the significance of the review findings to practice and
research. Areas that may be addressed include:
A summary of the major findings of the review;
Issues related to the quality of the research within the area of interest;
Other issues of relevance;
Implications for practice and research, including recommendations for the future; and
Potential limitations of the systematic review.
References
The references should be appropriate in content and volume and include background references
and studies from the initial search. The format must be in Vancouver style, as previously discussed
in the Protocol section.
Appendices
The appendices should include:
critical appraisal form(s);
data extraction form(s);
table of included studies; and
table of excluded studies with justification for exclusion.
These appendices are automatically generated in CReMS.
Are all appendices correctly numbered and attached to the report?
The level of evidence relates to individual papers included in the systematic review.
Recommendations made in systematic review reports each need a level of evidence assigned
using the scale illustrated above, reflecting current international standards and expectations. However, as
JBI takes a broader conceptual view of evidence, as reflected in the capacity to conduct reviews
on the feasibility or meaningfulness of health care or experiences, the JBI levels of evidence
incorporate particular criteria related to the appraisal of included studies from paradigms other
than the quantitative paradigm.
Developing recommendations
The Joanna Briggs Institute develops and publishes recommendations for practice with each
systematic review. Across the different types of evidence and approaches to systematic reviews,
a common approach to the construct of recommendations for practice has been developed,
which can be summed up as the requirement for recommendations to be phrased as declamatory
statements. Recommendations are drawn from the results of reviews and given a level of evidence
based on the nature of the research used to inform the development of the recommendation.
Recommendations are a reflection of the literature and do not include any nuances of preference
or interpretation that reviewers or review panels may otherwise infer.
Has the correct JBI level of evidence been assigned to each recommendation made
in the systematic review?
Joanna Briggs Institute 77
Reviewers’ Manual 2011
SECTION 4
Economic Evidence
Chapter Seven:
Economic Evidence and Evidence-Based Practice
Economic evidence, like the quantitative evidence discussed in the preceding section of
this manual, also deals with numerical data. As its name suggests, however, this type of research
introduces another important dimension to the evidence used to inform decisions made across
healthcare: dollar value. A health economic evaluation compares both the health
effects and the costs of two or more alternative health interventions 51. To do this, the study
designs encountered are often similar to those for ‘quantitative’ evidence already described (Section
3), with the added inclusion of cost measurement. Studies that incorporate sometimes complex
modelling of data are also frequently encountered when addressing economic evidence.
In any society, the resources available (including dollars!) have alternative uses. In order to make
the best decisions about alternative courses of action evidence is needed on the health benefits
and also on the types and amount of resource use for these courses of action. Health economic
evaluations are particularly useful to inform health policy decisions attempting to achieve equality
in health care provision to all members of society and are commonly used to justify the existence
and development of health services, new health technologies and also, clinical guideline
development 52.
The generalisability of economic data has been widely debated by health economists. Problems
arising from factors such as differences in time of measurement, epidemiology of disease,
resource availability and currencies to name a few can all impact on the transferability of economic
evidence from one place to another.
Consideration of economic evidence and the different methods available to evaluate this form of
evidence relies on understanding some basic principles of health economics. The remainder of
this chapter will introduce some of the main differences in methods of economic evaluation and
then consider issues inherent to all of these methods used to evaluate economics in healthcare
such as the range of different costs and benefits which may be incurred across healthcare and
differences in how they are measured; differences in perspective on these costs, whether from
the patient, physician, hospital or society as a whole and different categorisation of costs.
Cost-minimisation analysis
Cost-minimisation analysis (CMA) is only considered to be a partial analysis, as the outcomes of the
interventions or programs being compared are assumed to be equivalent and only differences in the
costs of the interventions are investigated. The preferred option is the cheapest. Clearly, the strength
of any CMA relies on the assumption that outcomes are indeed equivalent. For example, it would
not be appropriate to compare different classes of medications using cost-minimisation analysis
if there are noted differences in outcomes.
Cost-effectiveness analysis
Studies which compare not just the costs of different interventions or programs, but also their
outcomes or effects, often employ cost-effectiveness analysis (CEA). This is similar in principle to
a CBA; however, the defining feature is that in a CEA the outcome is measured as you would
expect for any study of effectiveness (e.g. mmHg, cholesterol levels), whilst in a CBA the outcome
is measured in monetary terms (see below) 53 54. In a cost-effectiveness study, results are presented
as a ratio of incremental cost to incremental effect, or in other words, the relative costs to achieve
a given unit of effect 55. One disadvantage of CEA is that programs with different types of outcomes
cannot be compared.
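The incremental ratio described above can be illustrated with hypothetical figures (all costs and effect values below are assumed for the example):

```python
# Hypothetical comparison of a new intervention against standard care
cost_new, cost_std = 12000.0, 9000.0   # cost per patient (dollars)
effect_new, effect_std = 4.1, 3.5      # e.g. symptom-free years per patient

# Incremental cost-effectiveness ratio (ICER):
# extra dollars spent per extra unit of effect gained
icer = (cost_new - cost_std) / (effect_new - effect_std)
```

Here the new intervention costs an extra $3,000 per patient and yields 0.6 extra symptom-free years, giving an ICER of $5,000 per symptom-free year gained.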
Cost-utility analysis
Studies investigating cost utility can often be identified by the outcome the study or analysis
reports: quality-adjusted life years, or QALYs. Whilst costs are still measured in monetary units,
the QALY measure is the product of two dimensions of life: quality and length 54.
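The product of quality and length described above can be sketched as follows; the utility weights and durations are hypothetical:

```python
# A QALY weights each year of life by a quality-of-life utility
# (0 = death, 1 = full health). Hypothetical patient profile:
# (years spent in a health state, utility of that state)
profile = [(2, 0.9), (3, 0.7), (1, 0.4)]

# Total QALYs = sum of (length of life x quality of life) over the states
qalys = sum(years * utility for years, utility in profile)
```

Six years of life in declining health here amount to 4.3 QALYs, which a cost-utility analysis would then relate to the cost of achieving them.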
Cost-benefit analysis
As mentioned above, the distinguishing feature of a cost-benefit study or analysis is that both
the costs of the intervention and its outcomes are measured in dollars. In a CBA all costs and benefits
are measured in monetary terms and then combined into a summary measure, for example the
Net Present Value (NPV) or the Benefit-Cost Ratio (BCR). A limitation of this type of study is
the difficulty of measuring the value of all health outcomes, for example life, in dollars! Table 2
compares the four basic types of economic evaluation studies.
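The NPV and BCR summary measures mentioned above can be sketched as follows; the yearly cost and benefit streams and the 5% discount rate are assumed for illustration (discounting itself is discussed later in this section):

```python
# Hypothetical yearly costs and monetised benefits of a programme (years 0, 1, 2)
costs = [10000.0, 2000.0, 2000.0]
benefits = [0.0, 8000.0, 9000.0]
rate = 0.05  # assumed annual discount rate

# Discount each year's flow back to its present value
pv_costs = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
pv_benefits = sum(b / (1 + rate) ** t for t, b in enumerate(benefits))

npv = pv_benefits - pv_costs   # Net Present Value: benefits minus costs
bcr = pv_benefits / pv_costs   # Benefit-Cost Ratio: benefits per dollar of cost
```

A positive NPV (or equivalently a BCR above 1) indicates that, in present-value terms, the programme's monetised benefits exceed its costs.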
There are four basic types of economic evaluation studies:
Cost-minimisation analysis (CMA);
Cost-effectiveness analysis (CEA);
Cost-utility analysis (CUA);
Cost-benefit analysis (CBA).
Table 2. A summary of the different types of economic evaluation, together with the costs
measured and specific advantages and disadvantages associated with each type.
Cost-Minimisation Analysis (CMA)
Costs: measured in monetary units (e.g. dollars)
Benefits/consequences: not measured
Comments: CMA is not a form of full economic evaluation; the assumption is that the benefits/consequences are the same, so the preferred option is the cheapest.

Cost-Effectiveness Analysis (CEA)
Costs: measured in monetary units (e.g. dollars)
Benefits/consequences: measured in natural units (e.g. mmHg, cholesterol levels, symptom-free days, years of life saved)
Comments: results are expressed, for example, as dollars per case averted or dollars per injury averted; different incremental summary economic measures are reported (e.g. incremental cost-effectiveness ratio, ICER).
Perspective
Irrespective of the type or method of economic evaluation study there are some economic
principles that must be considered. One important consideration is perspective: put simply, the
benefits and costs of using an intervention in health care depend upon whose perspective is
taken. Economic studies will state their perspective to make it clear whose, or which, costs are being
considered. Different perspectives include those of patients, physicians, hospitals, insurance
companies or even that of society (by combining all healthcare perspectives), just to name a few!
The choice of perspective will influence the types of costs and outcome measures considered
relevant for inclusion in the economic study.
Costs
The measure of cost may seem simple at first, but in health care analyses it is an important and
often multi-dimensional concept. It includes identification of costs (which costs are included
or not, and why), measurement of the factors that result in the costs (expressed in the natural
units used for measurement), and valuation of each unit from the chosen perspective.51 Another
important consideration is how costs are categorised.
Economic studies use a range of costs hence it is important to be able to distinguish between the
different types of costs that are used. Costs are typically categorised as “direct medical”, “direct
non-medical”, and “indirect costs”. Direct medical costs are those incurred by the health service,
such as physician time, drugs, medical devices and the like. Direct non-medical costs include
things like administration, child care, travel costs and utilities whilst indirect costs would include
for example the time off work a patient has had to take to visit the doctor or whilst ill.
Another category of costs comprises those labelled “intangible”, such as pain and suffering or
anxiety; these costs are often quantified by measures of “willingness-to-pay”. Further cost categories
encountered in the literature may include health care sector costs, patient and family costs,
productivity costs and more! Costs presented in economic studies can also be referred to simply
as variable or fixed. These terms are more commonly used in financial circles; variable costs
are simply those that vary depending on the number of cases
treated, such as drugs administered. Fixed costs, on the other hand, do not fluctuate and are
unlikely to vary in the short-medium term irrespective of the number of cases e.g. the cost of a
building. Semi-fixed costs have components of both and would tend to increase only when there
is a large increase in the number of cases treated.
When comparing costs and benefits, another key principle in economics is discounting.
Discounting is necessary for the direct comparison of costs and benefits occurring in different
periods of time. It must be considered in economic studies because of the underlying economic principle
that society places greater value on benefits gained immediately, rather than at some future
time. To reflect this preference, costs and benefits gained in the future are discounted when they
are being compared with the present. The rationale for the choice of the discount rate should
be provided.
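The principle above can be shown in a short sketch; the 5% discount rate and the amounts are assumed for illustration only:

```python
def present_value(amount, years, rate):
    """Discount a cost or benefit occurring `years` in the future
    back to its value today at the given annual discount rate."""
    return amount / (1 + rate) ** years

rate = 0.05  # assumed annual discount rate

# A $1000 benefit received now versus the same benefit in 10 years
now = present_value(1000.0, 0, rate)
later = present_value(1000.0, 10, rate)
```

At a 5% rate, a $1,000 benefit received in 10 years is worth roughly $614 today, reflecting society's preference for benefits gained immediately.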
Chapter Eight:
Economic Protocol and Title Development
JBI economic evaluation reviews are conducted through the ACTUARI module of the SUMARI
software. Before reviewers are able to use CReMS or any of the SUMARI modules, they need
to register through the JBI website and obtain a username and password. This process is free
of charge.
The ACTUARI module is designed to manage, appraise, extract and analyse economic data as
part of a systematic review of evidence. ACTUARI has been designed as a web-based database
and incorporates a critical appraisal scale; data extraction forms; and a data analysis function.
The ACTUARI software is one analytical module of the SUMARI software. SUMARI is the Joanna
Briggs Institute’s software for the systematic review of literature.
Systematic reviews are often conducted to address information needs for a particular constituency
or jurisdiction, yet the final review and subsequent guidance is disseminated internationally.
Therefore, the request for JBI reviewers is to develop protocols for systematic review appropriate
to an international audience.
A search of at least the Cochrane Library, Joanna Briggs Institute Library of Systematic Reviews,
MEDLINE and NHS EED databases will assist in establishing whether or not a recent systematic
review report exists on the economic evaluation topic of interest.
If a systematic review on the topic of interest has already been conducted, consider the following
questions to establish if continuing with the review topic will be strategic:
Was the last update more than 3 years ago?
Do the methods reflect the specific inclusion and exclusion criteria of interest for
your topic?
Is there a specific gap in terms of population or intervention or outcomes that has not
been addressed in the identified review?
These questions may not be the deciding factor in continuing with a review topic, but they do
present some contextual factors that need considering before embarking on the systematic review
process.
Once a topic has been selected, and the decision to conduct a systematic review verified by the
lack of existing systematic reviews within the topic area, the systematic review title should be
registered with JBI. This is done by sending a draft of the review protocol to the SSU for peer
review.
A protocol for a review of economic evaluation evidence should be developed as for a review of
effectiveness evidence. The protocol should establish in advance the methods that will be used
throughout the systematic review process.
Decisions about the review question, inclusion criteria, search strategy, study selection, data
extraction, quality assessment, data synthesis and reporting should be addressed. Specifying
the methods in advance reduces the risk of introducing bias into the review.
JBI systematic reviews of economic evidence are required to use the ACTUARI software.
Do systematic reviews already exist on the topic of interest? How is the current
review different?
Objectives
The objectives of the review should provide a clear statement of the questions being addressed
with reference to participants, interventions, comparators and outcomes. Clear objectives and
specificity in the review questions assist in focusing the protocol, allow the protocol to be more
effectively indexed, and provide a structure for the development of the full review report. The
review objectives should be stated in full. Conventionally, a statement of the overall objective is
made and elements of the review are then listed as review questions.
For example:56
“To perform a systematic review of economic evaluations of self-monitoring of blood glucose
in patients with type 2 diabetes mellitus.”
This broad statement can then be clarified by using focussed review questions. For example:56
The objectives of this review were to:
- systematically review the cost-effectiveness of self-monitoring of blood glucose in the
treatment of type 2 diabetes mellitus
- where possible, determine the cost-effectiveness of self-monitoring of blood glucose in
differing treatment subgroups
- inform practice and policy regarding the cost-effective use of self-monitoring of blood
glucose in type 2 diabetes mellitus
The review question can be framed in terms of the Population, Intervention(s), Comparator(s)
and Outcomes of the studies that will be included in the review. These elements of the review
question together with study design will be used in order to determine the specific inclusion
criteria for the review.
There is a range of mnemonics available to guide the structuring of systematic review questions,
the most common for JBI reviews being PICO. The PICO mnemonic begins with identification of
the Population, the Intervention being investigated and the Comparator and ends with a specific
Outcome of interest to the review. Use of mnemonics can assist in clarifying the structure of
review titles and questions, but is not a requirement of JBI systematic reviews.
In addition to clarifying the focus of a systematic review topic through the development of a
review question, it is recommended that reviewers establish whether or not a systematic review
has already been conducted to answer their specific review questions, and whether there is a
body of literature available for their review questions.
Does the review have a concise, informative title? Are the review objectives and questions
clearly stated?
Background
The Joanna Briggs Institute places significant emphasis on a comprehensive, clear and meaningful
background section to every systematic review. The background should communicate the
contextual factors and conceptual issues relevant to the review. It should explain why the review
is required and provide the rationale underpinning the inclusion/exclusion criteria and the review
question.
The background should also describe the issue under review including the target population,
interventions and outcomes that are documented in the literature. The background should
provide sufficient detail on each of the elements to justify the conduct of the review and the
choice of various elements such as interventions and outcomes. Where complex or multifaceted
interventions are being described, it may be important to detail the whole of the intervention for
an international readership.
It is often just as important to justify why elements are not included in the review. In describing
the background literature, value statements about the effects, impact or value of interventions
should be avoided. The background section of the review protocol should provide statements
based on relevant literature and should give clear and explicit references.
The background should avoid making statements about cost-effectiveness (or cost-benefit or
cost-utility) unless they are specific to papers that illustrate the need for a systematic review of
the body of literature related to the topic. For example the background section should avoid a
statement like “Use of specialised wound clinics in community centres is cost-effective compared
to hospital based treatment”. This is what the review will determine. If this type of statement is
made it should be clear that it is not the reviewer’s conclusion but that of a third party, such as “The
study by Smith et al., 2010 indicates that use of specialised wound clinics in community centres
is cost-effective compared to hospital based treatment”. Such statements in the background
need to be balanced by other viewpoints, emphasising the need for the synthesis of potentially
diverse bodies of literature.
A statement should also be provided that clarifies whether or not a systematic review
has previously been conducted and/or a rationale for performing another review should one
already exist.
Questions to consider:
Does the background cover all the important elements (PICO) of the systematic review? Are
operational definitions provided? Do systematic reviews already exist on the topic; if so, how
is this one different? Why is this review important?
Inclusion criteria
The inclusion criteria should be set out in the protocol to ensure that the boundaries of the
review question are clearly defined. All elements should be specified in detail. Complex issues
may require detailed consideration of terms. Reviewers need to be clear about definitions used.
Conceptual and operational definitions will usually be helpful.
The inclusion criteria should capture all studies of interest. If the criteria are too narrowly defined
there is a risk of missing potentially relevant studies. If the criteria are too broad, the review
may contain information which is hard to compare and synthesise. Inclusion criteria need to be
practical to apply.
The PICO model aims to focus the systematic review and is used to define the properties of
studies to be considered for inclusion in the review. PICO is used to construct a clear and
meaningful question when searching for quantitative evidence.
P = Population (type of participants)
When expanding the title and objectives/questions through the criteria for inclusion, reviewers
will need to consider whether the whole population of people with a specific condition should be
included, or if the population will be limited to specific subsets. Specific reference to population
characteristics (participants’ gender, age, disease severity, co-morbidities, socio-economic status,
ethnicity, geographical area) either for inclusion or exclusion should be based on a clear, scientific
justification rather than based on unsubstantiated clinical, theoretical or personal reasoning.
The included population should be relevant to the population to which the review findings will
be applied. Explicit inclusion criteria should be defined in terms of the disease or condition of
interest. If the inclusion criteria are broad it may be useful to investigate subgroups of participants.
Where analysis of participant subgroups is planned this should be specified in the protocol. For
example: 56
“The population of interest for this review consisted of adult patients diagnosed with type 2
diabetes mellitus. Those patients with type 1 diabetes mellitus were excluded from the review
on the basis that SMBG is recommended as standard practice for all type 1 diabetes mellitus
patients. Where the data permitted, relevant subgroups of interest were also explored, such
as co morbidities (e.g. presence of heart disease or hypertension) and the treatment regime
of the patient i.e. diet and exercise, oral anti-diabetic agents (OADs) and insulin treated
patients).”
It is useful to list outcomes and identify them as primary or secondary, short-term or long-term,
relative or absolute.
In terms of costing data, the outcome may be described in relation to the type of review.
Therefore the outcomes may be described in relation to cost-minimisation analysis, cost-
effectiveness analysis, cost-benefit analysis or cost-utility analysis (these being the economic
models incorporated in the analytical module ACTUARI). For example: 56
“The main outcome measures were in terms of cost-effectiveness and cost-utility i.e. cost
per life year saved or cost per quality adjusted life year saved (QALY) which is determined
not only by the quantity but quality of additional life years. Studies that use other methods to
formally combine cost and outcome data e.g. an average cost-effectiveness ratio, were also
included.”
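To make the cost-per-QALY arithmetic concrete, a cost-utility result of this kind is typically expressed as an incremental ratio: the extra cost of the intervention divided by the extra QALYs it yields over the comparator. The sketch below is illustrative only; the function name and all figures are invented for the example and are not drawn from any included study.

```python
def icer(cost_intervention, cost_comparator, qaly_intervention, qaly_comparator):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    delta_cost = cost_intervention - cost_comparator
    delta_qaly = qaly_intervention - qaly_comparator
    if delta_qaly == 0:
        raise ValueError("No QALY difference: the ICER is undefined")
    return delta_cost / delta_qaly

# Hypothetical figures: the intervention costs 12,000 vs 8,000 for the
# comparator, and yields 5.0 vs 4.0 QALYs.
print(icer(12000, 8000, 5.0, 4.0))  # → 4000.0 (cost per QALY gained)
```

A negative ratio with a QALY gain would indicate the intervention is both cheaper and more effective, i.e. dominant.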
Types of Studies
This section should flow naturally from the criteria that have been established to this point, and
particularly from the objective and questions the review seeks to address. For JBI reviews of
health economic evaluation evidence, there are specific study designs of interest to specific
economic questions. These include:
Cost-Minimisation studies: intended to identify the least costly intervention where multiple
interventions have demonstrated similar benefit
Cost-Effectiveness studies: where interventions achieve similar outcomes but have unknown
or potentially different resource implications
Cost-Utility studies: seek to establish benefit as measured by quantity and quality of life
(QALYs)
Cost-Benefit studies: seek to identify a specific monetary ratio (gain/loss or cost/benefit) for
an intervention
The reviewers should specify if they will include in the systematic review only one specific study
design (for example, only cost-minimisation studies) or two (cost-effectiveness and cost-utility)
or more than two study design types. The reviewers should also clarify the types of studies
they will include in the systematic review: comparative prospective economic evaluation studies,
comparative retrospective economic evaluation studies, health economic evaluation modelling
studies. For economic evaluation modelling studies the reviewers should specify the types of
modelling studies they will include in the systematic review.
Search Strategy
Systematic reviews are international sources of evidence; particular nuances of local context
should be informed by and balanced against the best available international evidence.
The protocol should provide a detailed search strategy that will be used to identify all relevant
international research within an agreed time frame. This should include databases that will be
searched, and the search terms that will be used. In addition, it should also specify which
types of study design for economic evaluation studies (for example, cost-effectiveness analysis
(CEA)) will be considered for inclusion in the review.
Within JBI systematic reviews, the search strategy is described as a three phase process that
begins with identifying initial key words followed by analysis of the text words contained in the
title and abstract, and of the index terms used to describe relevant articles. The second phase
is to construct database-specific searches for each database included in the protocol, and the
third phase is to review the reference lists of all studies that are retrieved for appraisal to search
for additional studies.
The text describing searching has been standardised in JBI CReMS as follows:
The search strategy aims to find both published and unpublished studies. A three-step search
strategy will be utilised in this review. An initial limited search of MEDLINE and CINAHL will be
undertaken followed by analysis of the text words contained in the title and abstract, and of
the index terms used to describe the article. A second search using all identified keywords and
index terms will then be undertaken across all included databases. Thirdly, the reference list
of all identified reports and articles will be searched for additional studies. Studies published
in #insert language(s)# will be considered for inclusion in this review. Studies published #insert
dates# will be considered for inclusion in this review.
The standardised text is editable and includes fields for reviewers to specify content relevant
to their available resources.
Reviewers are required to state the databases to be searched, the initial key words that will be
used to develop full search strategies and if including unpublished studies what sources will be
accessed. The search strategy should also describe any limitations to the scope of searching
in terms of dates, resources accessed or languages; each of these may vary depending on the
nature of the topic being reviewed, or the resources available to each reviewer.
Limiting by date may be used where the focus of the review is on a more recent intervention or
innovation. However, date limiting may exclude seminal early studies in the field and should thus
be used with caution; the decision should preferably be endorsed by topic experts and justified
in the protocol.
The validity of systematic reviews relies in part on access to an extensive range of electronic
databases for literature searching. There is inadequate evidence to suggest a particular number
of databases, or even to specify if any particular databases should be included. Therefore,
literature searching should be based on the principle of inclusiveness, with the widest reasonable
range of databases included that are considered appropriate to the focus of the review.
The comprehensiveness of searching and the documentation of the databases searched is a core
component of the systematic review’s credibility. In addition to databases of published research,
there are several online sources of grey, or unpublished literature that should be considered.
Grey literature is a term that refers to papers, reports, technical notes or other documents
produced and published by governmental agencies, academic institutions and other groups that
are not distributed or indexed by commercial publishers. Many of these documents are difficult
to locate and obtain. Rather than competing with the published literature, grey literature has the
potential to complement it and communicate findings to a wider audience.
88 Section 4
Economic Evidence
The Joanna Briggs Institute is an international collaboration with an extensive network of centres
and other entities around the world. This creates networking and resource opportunities for
conducting reviews where literature of interest may not be in the primary language of the
reviewers. Many papers in languages other than English are abstracted in English, from which
reviewers may decide to retrieve the full paper and seek to collaborate with other JBI entities
regarding translation.
It may also be useful to communicate with other JBI entities to identify databases not readily
available outside specific jurisdictions for more comprehensive searching.
JBI entities that do not have access to a range of electronic databases to facilitate searching of
published and grey literature are encouraged to contact JBI, which enables them to access an
increased range of resources.
Obtaining the input of an experienced librarian to develop the search strategy is
recommended.
Details of the numbers of titles identified by the search are to be reported in the systematic
review report so it is important to keep track of search results.
Assessment criteria
The systematic review protocol should provide details of the method of study appraisal to be
used. Details of how the study appraisal is to be used in the review process should be specified.
The protocol should specify the process of appraisal of study quality, the number of reviewers
involved and how disagreements will be resolved. The protocol should specify any exclusion
criteria based on quality considerations.
It is JBI policy that all study types must be critically appraised using the standard critical
appraisal instruments for specific study designs, built into the analytical modules of the SUMARI
software.
As with other types of reviews, the JBI approach to reviews of economic evidence incorporates
a standardised approach to critical appraisal, using the ACTUARI software. The protocol must
therefore describe how the validity of primary studies will be assessed. The systematic review
protocol of economic evidence must include a copy of the ACTUARI critical appraisal checklist
(Appendix X) as an appendix. The checklist is a series of criteria that can be scored as being met,
not met or unclear.
The standardised set text in CReMS states:
Economic papers selected for retrieval will be assessed by two independent reviewers for
methodological validity prior to inclusion in the review using standardised critical appraisal
instruments from the Joanna Briggs Institute Analysis of Cost, Technology and Utilisation
Assessment and Review Instrument (JBI-ACTUARI). Any disagreements that arise between
the reviewers will be resolved through discussion, or with a third reviewer.
The ACTUARI set text can be extended by reviewers who wish to add or edit information.
The assessment tools included in the analytical module ACTUARI are required for all JBI entities
conducting reviews through JBI. A separate checklist should be used for each type of study
design considered for inclusion in the review (when appropriate) and each should be appended
to the protocol (this occurs automatically in CReMS). ACTUARI has been designed with the
intention that there will be at least two reviewers (a primary and a secondary) independently
conducting the critical appraisal. Both reviewers are initially blinded to the appraisal of the other
reviewer. Once both reviewers have completed their appraisal, the primary reviewer then compares
the two appraisals. The two reviewers should discuss cases where there is a lack of consensus
in terms of whether a study should be included; it is appropriate to seek assistance from a third
reviewer as required. A discussion of each checklist item can be found in Appendix XI, which
provides clarification of the objective of each of those items.
The main objective of critical appraisal is to assess a study’s quality and determine the extent to
which a study has excluded the possibility of bias in its design, conduct and analysis.
If a study has not excluded the possibility of bias, then its results are questionable and could well
be invalid. Therefore, part of the systematic review process is to evaluate how well the potential
for bias has been excluded from a study, with the aim of only including high quality studies in the
resulting systematic review.
Are copies of the critical appraisal tools appended to the protocol? Has the primary reviewer
assigned a secondary reviewer to the review?
Data extraction
The systematic review protocol should outline the information that will be extracted from studies
identified for inclusion in the review. The protocol should state the procedure for data extraction
including the number of researchers who will extract the data and how discrepancies will be
resolved. The protocol should specify whether authors of primary studies will be contacted to
provide missing or additional data.
As with other types of reviews, the JBI approach to reviews of economic evidence incorporates
a standardised approach and tool for data extraction from the ACTUARI software. The standardised
data extraction form can be found in Appendix XII.
The JBI systematic review protocol of economic evidence must include the JBI data extraction
form for economic evaluation studies as an appendix. The set text for the data extraction section
of the protocol for systematic reviews of economic evidence in CReMS is the following:
Economic data will be extracted from papers included in the review using the standardised
data extraction tool from JBI-ACTUARI. The data extracted will include specific details about
the interventions, populations, cost, currency, study methods and outcomes of significance
to the review question and specific objectives.
In addition to the standardised text from CReMS, reviewers should consider describing how
papers will be extracted, and how differences between reviewers will be resolved.
What outcomes are anticipated? How have they been measured? What type of data is
anticipated? Has the ACTUARI data extraction tool been appended to the protocol?
Data synthesis
The protocol should describe the methods of data synthesis. In CReMS, the standardised text
gives an overview of synthesis as follows:
“Economic findings will, where possible, be synthesised and presented in a tabular summary.
Where this is not possible, findings will be presented in narrative form.”
However, reviewers should seek to address the synthesis of clinical as well as cost effectiveness
data in economic reviews which incorporate both. Additional statements can be added to
CReMS and may include descriptions of how data will be presented, including a description of
the estimate of effect to be used and the stated percentage for the confidence interval.
Specific reference to continuous and dichotomous data synthesis methods is useful.
Synthesis of economic effectiveness data does not follow the same pattern as synthesis of
clinical effectiveness data. While clinical data is synthesised and given a weighting, economic
data is more commonly subject to one or more of three options for synthesis. Economic results
can be described in this section of the protocol as being subject to:
narrative summary
sorting in tables by comparisons or outcomes (as deemed appropriate by reviewers)
tabulation in a permutation matrix
In the ACTUARI analytical module, this is described as a dominance rating; each outcome
of interest is allocated a position in a grid (which extends from A to I) depending on whether
the intervention should be preferred over its comparator. CReMS does not specify these three
methods of managing the results. Reviewers, however, are encouraged to describe them in their
protocol as a cascade of options, which will in part depend on the quantity, quality and nature
of the economic papers they identify. The permutation matrix has three possible outcomes and
these are determined by the reviewer’s rating of the costs of an intervention of interest balanced
against the health outcomes:
Strong dominance is considered appropriate for decisions clearly in favour of either the
treatment or control intervention from both the clinical and economic effectiveness points
of view.
Weak dominance is utilised where the data support either clinical or economic
effectiveness, but not both positions.
Non-dominance is allocated where the intervention of interest is less effective or more
costly.
The decision or dominance matrix illustrates the data, making visualisation and interpretation by
readers clearer and easier.
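The three dominance categories above can be read as a decision rule over two judgements: whether the data favour the intervention clinically, and whether they favour it on cost. A minimal sketch, assuming a simplified two-input form (the function and its inputs are illustrative, not part of the ACTUARI software):

```python
def dominance_rating(clinically_favoured, cost_favoured):
    """Classify a comparison between an intervention and its comparator.

    clinically_favoured / cost_favoured: True if the data favour the
    intervention on that dimension, False otherwise.
    """
    if clinically_favoured and cost_favoured:
        return "strong dominance"  # favoured on both dimensions
    if clinically_favoured or cost_favoured:
        return "weak dominance"    # favoured on one dimension, not both
    return "non-dominance"         # less effective or more costly

print(dominance_rating(True, True))   # → strong dominance
print(dominance_rating(True, False))  # → weak dominance
```

Note that the manual also allows strong dominance to favour the control intervention; the two-value simplification here covers only the intervention-favouring case.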
Are the methods for data synthesis clearly described? How will data be presented?
Conflict of interests
A statement should be included in every systematic review protocol being submitted to JBI which
either declares the absence of any conflict of interest, or which describes a specified conflict of
interest. Reviewers are encouraged to refer to the Institute’s policy on commercial funding of
systematic review activity.
Acknowledgments
The source of financial grants and other funding must be acknowledged, including a declaration
of the authors’ commercial links and affiliations. The contribution of colleagues or institutions
should also be acknowledged.
References
Protocols are required to use Vancouver style referencing. References should be cited in text
with superscript Arabic numerals, numbered in the order in which they appear. Full reference
details should be listed in numerical order in the reference section.
More information about the Vancouver style is detailed in the International Committee of
Medical Journal Editors’ revised ‘Uniform Requirements for Manuscripts Submitted to
Biomedical Journals: Writing and Editing for Biomedical Publication’, and can be found at
http://www.ICMJE.org/
Appendices
Appendices should be placed at the end of the paper, numbered in Roman numerals and
referred to in the text. At a minimum, this section should include the JBI critical appraisal and
data extraction tools (this occurs automatically when using CReMS).
Does the protocol have any conflicts of interest and acknowledgments declared, appendices
attached, and references in Vancouver style?
Chapter Nine:
The Systematic Review and Synthesis
of Economic Data
Please refer also to the JBI website for specific presentation requirements for systematic review
reports http://www.joannabriggs.edu.au.
All JBI systematic reviews are based on approved, peer-reviewed systematic review protocols,
as discussed in Chapter 8. Deviations from approved protocols are rare and should be clearly
justified in the report. JBI advocates approved, peer-reviewed systematic review protocols as
an essential part of a process to enhance the quality and transparency of systematic reviews.
JBI systematic reviews should use Australian spelling and authors should therefore follow the latest
edition of the Macquarie Dictionary. All measurements must be given in Systeme International
d’Unites (SI) units. Abbreviations should be used sparingly; use only where they ease the reader’s
task by reducing repetition of long, technical terms. Initially use the word in full, followed by the
abbreviation in parentheses. Thereafter use the abbreviation only. Drugs should be referred to
by their generic names. If proprietary drugs have been used in the study, refer to these by their
generic name, mentioning the proprietary name, and the name and location of the manufacturer,
in parentheses.
Review Authors
The names, contact details and the JBI affiliation should be listed for each reviewer (which occurs
automatically when using CReMS).
Executive Summary
This is generally the final section of the report to be written and should be a summary of the
review in 500 words or less stating the purpose, basic procedures, main findings and principal
conclusions of the review. The executive summary should not contain abbreviations or references.
Inclusion Criteria
Types of participants
The report should provide details about the types of participants included in the review. Useful
details include: age range, condition/diagnosis or health care issue, and administration of
medication. Details of where the studies were conducted (e.g. rural/urban setting and country)
should also be included. Again, the decisions about the types of participants should have been
explained in the background.
Types of interventions
This section should present all the interventions examined, as detailed in the protocol.
Types of studies
As per the protocol section, the types of studies that were considered for the review should be
included. There should be a statement about the target study type and whether or not this type
was found. The types of study identified by the search, and those included, should be detailed
in the report.
Search strategy
A brief description of the search strategy should be included. This section should detail search
activity (e.g. databases searched, initial search terms and any restrictions) for the review, as
predetermined in the protocol.
Data collection
This section should include a brief description of the types of data collected and the instrument
used to extract data.
Data synthesis
This section should include a brief description of how the data were synthesised, whether as a
meta-analysis or as a narrative summary.
Conclusions
This section should include a brief description of the findings and conclusions of the review.
Following the executive summary, the report should include the following sections:
Background
As discussed in the protocol section, the Joanna Briggs Institute places significant emphasis on a
comprehensive, clear and meaningful background section to every systematic review, particularly
given the international circulation of systematic reviews and variation in local understandings of
clinical practice, health service management and client or patient experiences. This section should
be an overview of the main issues and include definitions or explanations of any technical terms
used in the review.
Inclusion criteria
As detailed in the protocol, the inclusion criteria used to determine eligibility for the review
should be stated.
Search strategy
This section should include an overview of the search strategy used to identify articles considered
by the review. The documentation of search strategies is a key element of the scientific validity
of an economic systematic review. It enables readers to look at and evaluate the steps taken,
decisions made and consider the comprehensiveness and exhaustiveness of the search strategy
for each included database.
Each electronic database is likely to use a different system for indexing key words within its
search engine. Hence, the search strategy will be tailored to each particular database. These
variations are important and need to be captured and included in the systematic review report.
Additionally, if a comprehensive systematic review is being conducted through CReMS, the
search strategies for each database for each approach are recorded and reported via CReMS.
Commonly, these are added as appendices.
Were there any deviations from the search strategy detailed in the approved protocol? Any
details, together with an explanation, should be included in the search strategy section of the
review report.
Critical appraisal
This section of the review should include the details of critical appraisal of included studies using
the ACTUARI checklist.
The main objective of critical appraisal is to assess the methodological quality of a study and
determine the extent to which a study has excluded the possibility of bias in its design, conduct
and analysis. If a study has not excluded the possibility of bias, then its results are questionable
and could well be invalid. Therefore, part of the systematic review process is to evaluate how
well the potential for bias has been excluded from a study, with the aim of only including high
quality studies in the resulting systematic review. A secondary, although no less strategic, benefit
of critical appraisal is the opportunity to ensure each retrieved study has included the
population, intervention and outcomes of interest specified in the review.
It is JBI policy that economic reviews submitted to JBI should use the ACTUARI critical appraisal
checklist, as discussed in chapter 8. The checklist uses a series of criteria that can be scored as
being met, not met or unclear and can be found in Appendix X. The decision as to whether or
not to include a study can be made based on meeting a pre-determined proportion of all criteria,
or on certain criteria being met. It is also possible to weight the different criteria differently. These
decisions about the scoring system and the cut-off for inclusion should be made in advance, and
be agreed upon by all participating reviewers before critical appraisal commences.
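As an illustration of such a pre-determined scoring system, the sketch below records each criterion as met, not met or unclear, and includes a study only when an agreed proportion of criteria are met. The 70% cut-off and unweighted scoring are invented for the example; the actual rules must be agreed by the reviewers before appraisal commences.

```python
def include_study(appraisal, cutoff=0.7):
    """appraisal: list of 'met' / 'not met' / 'unclear' ratings, one per
    checklist criterion. Include the study when the proportion of
    criteria rated 'met' reaches the agreed cut-off (hypothetical 0.7)."""
    met = sum(1 for rating in appraisal if rating == "met")
    return met / len(appraisal) >= cutoff

# 5 of 6 criteria met (≈0.83) passes a 0.7 cut-off:
print(include_study(["met"] * 5 + ["unclear"]))  # → True
```

A weighted variant would simply multiply each rating by its agreed weight before summing, provided those weights were also fixed in advance.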
There are specific guidelines for various economic evaluation studies/methods 51 including
models, retrospective studies and prospective studies. There are guidelines focusing specifically
on decision-making models and Markov analyses for health economic evaluations.
Has the ACTUARI critical appraisal tool been appended to the review? Have the results
of critical appraisal been discussed? Were there any differences of opinion between the
reviewers? How were any differences resolved?
Data extraction
The ACTUARI data extraction tool lists a range of fields which describe the study: economic
evaluation method, interventions, comparator, setting, geographical context, participants, source
of effectiveness data, author’s conclusion, reviewer’s comments and a field for whether the
extraction details are ‘complete’. The standardised ACTUARI data extraction form can be found
in Appendix XII. More details about the extraction details fields are provided below:
Setting
Specify the practice setting (outpatient care, inpatient care, home care, community care etc) and
the level of healthcare (primary care, secondary care, tertiary care).
Geographical Context
The Geographical field relates to the region (city, state, country) in which the study took place.
Participants/Population
Important details for types of participants are: specific disease/conditions, stage of the disease,
severity of the disease, co-morbidities, age, gender, ethnicity, previous treatments received,
condition, explicit standardised criteria for diagnosis, setting (for example, hospital, community,
outpatient), who should make the diagnosis of the specific disease, and other important
characteristics of participants (for example, a different response to treatment). Summarise any inclusion/
exclusion criteria reported by the authors. Where studies include economic models the study
population may be hypothetical but defined by the authors.
Source of Effectiveness
There are four options for sources of effectiveness data available in ACTUARI. They refer to the
original location of the information from which the effectiveness of the intervention compared to the
comparator was derived: Single Study (same participants); Single Study (different participants);
Multiple Studies (meta-analysis); Multiple Studies (no meta-analysis). Selection of a particular
type of source document determines which data extraction fields become available in ACTUARI
in the next phase of extraction.
Author’s Conclusion
Summarise the main findings of the study from the author’s perspective.
Reviewer’s Comments
Summarise your interpretation of the study and its significance. Once this data has been extracted
and entered, ACTUARI takes users to a second data extraction page specific to the methods
described under “Source of effectiveness data”. There are two primary sections in this last step in
data extraction. The first relates to the clinical effectiveness component of the study, the second
to the data on economic effectiveness.
Clinical Effectiveness
This section relates to evidence on the clinical effectiveness of the intervention versus the
comparator, or control group. The five fields in this section are designed for numbers and free
text relating to the study design, for instance: randomised controlled study, cohort study, case
control, interrupted time series; the study date (in years); sample size (in numbers, combining
both treatment and comparator groups if relevant); type of analysis used (e.g. intention to treat
analysis, logistic regression etc.); and the clinical outcome results (survival, survival at 1 year,
survival at 5 years, stroke avoided, fracture avoided, pain intensity, frequency of vomiting,
frequency of pain etc). If either single study method was chosen, data extraction includes the
date of publication for the study. If either multiple study option was chosen, the extraction field
requests the date range that was searched; note that this is not the date range of included studies,
but the date range for the search strategy used to identify all studies prior to appraisal.
Economic effectiveness
There are ten fields in the economic effectiveness results section. The first relates to the date
(year) when the economic data were collected; the next relates to any linkages between data
collected on effectiveness and cost – for example, were the data collected on the same or
different participants? The third field requires a list of the measurements (or units) of benefits
that were used in the economic evaluation - were benefits measured in only dollar terms, or in
terms of health outcomes? The fourth, fifth and sixth fields relate to costs examined in the study:
direct costs of the intervention/program being evaluated, indirect costs and the currency used to
measure the costs. The seventh field relates to the results of any sensitivity analysis conducted as
part of the study (a sensitivity analysis would be conducted to determine whether the economic
model and its conclusions are robust to changes in the underlying assumptions of the model).
The eighth field relates to listing the estimated benefits of using the intervention instead of the
comparator. The ninth field requires a summary of the cost results findings, and the tenth is a
summary of the synthesis of the costs and results.
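The ten fields just described can be pictured as a single record per study; the field names below paraphrase the manual’s descriptions and are not ACTUARI’s actual labels.

```python
from dataclasses import dataclass

@dataclass
class EconomicEffectivenessExtraction:
    """Paraphrased sketch of the ten ACTUARI economic effectiveness
    fields; names are illustrative, not ACTUARI's own labels."""
    year_data_collected: int         # 1. when the economic data were collected
    effectiveness_cost_linkage: str  # 2. same or different participants?
    benefit_measures: str            # 3. dollar terms, health outcomes, ...
    direct_costs: str                # 4. direct costs of the intervention
    indirect_costs: str              # 5. indirect costs
    currency: str                    # 6. currency used to measure costs
    sensitivity_analysis: str        # 7. results of any sensitivity analysis
    estimated_benefits: str          # 8. benefits of intervention vs comparator
    cost_results_summary: str        # 9. summary of the cost results
    cost_results_synthesis: str      # 10. synthesis of the costs and results

record = EconomicEffectivenessExtraction(
    2010, "same participants", "QALYs", "drug acquisition",
    "productivity losses", "GBP", "robust to assumptions",
    "0.5 QALYs gained", "lower total cost", "intervention dominant")
print(record.currency)  # → GBP
```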
Each of the comparisons between intervention and comparator can only be classed as one of
nine options (Figure 5 A – I). For example, an intervention that was shown to be more effective
and less expensive would be scored as ‘G’, whereas an intervention that was less effective and
of equal cost would be scored as ‘F’.
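The manual confirms only two cells of this grid: ‘G’ (more effective, less expensive) and ‘F’ (less effective, equal cost). The sketch below fills in a complete three-by-three layout that is consistent with those two cells, but the remaining assignments are an assumption, not taken from ACTUARI.

```python
# Assumed layout: rows = cost vs comparator (higher, equal, lower),
# columns = effectiveness vs comparator (more, same, less). Only cells
# 'G' and 'F' are confirmed by the manual; the rest is a reconstruction
# that is merely consistent with those two cells.
GRID = {
    ("more", "higher"): "A", ("same", "higher"): "B", ("less", "higher"): "C",
    ("more", "equal"):  "D", ("same", "equal"):  "E", ("less", "equal"):  "F",
    ("more", "lower"):  "G", ("same", "lower"):  "H", ("less", "lower"):  "I",
}

def grid_cell(effectiveness, cost):
    """effectiveness: 'more' | 'same' | 'less'; cost: 'lower' | 'equal' | 'higher'."""
    return GRID[(effectiveness, cost)]

print(grid_cell("more", "lower"))  # → G  (more effective, less expensive)
print(grid_cell("less", "equal"))  # → F  (less effective, equal cost)
```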
Example 1:
“For studies comparing docetaxel with paclitaxel, the range of cost–utility ratios for QALYs
gained was £1990–£5233. The low estimate was for the UK20 and the high value was for the
USA.56 Two studies did not present an incremental analysis. One showed docetaxel to be the
dominant strategy over paclitaxel, while the other found vinorelbine to be dominant over either
taxane.55,59”
Example 2:
“In the three studies comparing docetaxel to vinorelbine, the one UK study showed the cost of
docetaxel per QALY gained was £14,050.20 Although the efficacy rates used were not the result
of a direct-comparison clinical study, the economic evaluation was otherwise of a relatively high
quality.”
Example 3:
“Two of the three UK economic evaluations of taxanes in advanced breast cancer compared
docetaxel to paclitaxel and found a range of incremental cost per QALY gained of £1990– £2431.
One also compared docetaxel with vinorelbine and found the incremental cost per QALY gained
to be £14,050.”
Example 5 of a template for Tables of Results (Sources of data for Economic models), with
columns: Study (Year); Effectiveness Data; Utilities Data; Model Probabilities Data; Short-term
costs data; Long-term costs data.
Example 6 of a template for Tables of Results (CUA, cost per DALY averted), with columns:
Description of Intervention; Average cost per DALY averted (AUD$), no age weight,
undiscounted; Average cost per DALY averted (AUD$), discounted; Average cost per DALY
averted (AUD$), age weighted, discounted.
Decision Matrix
The decision matrix has three possible outcomes and these are determined by the reviewer’s
rating of the costs of an intervention of interest balanced against the health outcomes:
Strong dominance is considered appropriate for decisions clearly in favour of either the
treatment or control intervention from both the clinical effectiveness and costs points of
view.
Weak dominance is considered where the data support either clinical effectiveness or
costs, but not both positions.
Non-dominance is considered where the intervention of interest is less effective or more
costly.
The decision or dominance matrix illustrates the data, making visualisation and interpretation by
readers clearer and easier.
From the data extraction, particularly the outcome specific data per included paper, reviewers are
able to generate a matrix as shown in Figure 6, which lists the comparison of interest, the score
from the three by three matrix for each study (‘the dominance rating’) and the study citation.
Review Results
The results section of a systematic review report has three subsections: Description of studies,
Methodological quality, and Review Findings. In the Description of studies subsection the types
and numbers of papers identified, and the number of papers that were included and excluded,
should be stated. A flow diagram is recommended. The Methodological quality subsection
should be a summary of the overall quality of the literature identified. The Review Findings
subsection must be organised in a meaningful way based on the objectives of the review and
the criteria for considering studies.
There is no standardised international approach to structuring how the results of systematic
reviews of economic evaluation evidence should be reported. The method of synthesis described
in the protocol will have some bearing on the structure of the results report. Additionally, the
audience for the review should be considered when structuring and writing up the review
results.
Graphs represent a specific item of analysis that can be incorporated into the results section of
a review. However, the results are more than graphs: whether the section is structured by the
intervention of interest or in some other way, it needs to present the results with clarity, using
the available tools (tables, figures, matrices) supported by textual description.
There is no clear international standard or agreement on the structure or key components of the
Review Findings section of a systematic review report. Furthermore, given the level of variation
evident in published systematic reviews, the issues described in this section should be considered
guidance rather than prescription.
The review results section of the systematic review report should provide the following
information on identified, retrieved and included studies: the number of studies identified;
the number of studies retrieved; the number of studies matching a specified type of study design
(i.e. cost-minimisation, cost-effectiveness, cost-utility, cost-benefit); the number of studies
appraised; the number of studies excluded, with an overview of the reasons for exclusion; and
the number of studies included.
102 Section 4
Economic Evidence
The findings of the search are commonly written in narrative style and illustrated with a flow
diagram, as shown in Figure 7 below.
Where a systematic review seeks to address multiple questions, the results may be structured
in such a way that particular outcomes are presented under the specific questions. The role
of tables and appendices should not be overlooked. Adding extensive detail on studies in the
results section may “crowd” the findings, making them less accessible to readers, hence the use
of tables, graphs and in-text reference to specific appendices is encouraged.
Discussion
The aim of this section is to summarise and discuss the main findings, including the strength
of the evidence for each main outcome. It should address issues arising from the conduct of
the review (such as search limitations) as well as issues arising from the findings of the review.
The discussion does not bring in new literature or information that has not been reported in the
results section. Rather, it seeks to establish a line of argument based on the findings regarding
the effectiveness of an intervention, or its impact on the outcomes identified in the protocol.
The application and relevance of the findings to relevant stakeholders (e.g. healthcare providers,
patients and policy makers) should also be discussed in this section. 49, 50
Conclusions
The conclusion section of a systematic review should provide a general interpretation of the
findings in the context of other evidence, provide a detailed discussion of issues arising from
the findings of the review, and demonstrate the significance of the review findings to practice
and research. Areas that may be addressed include:
A summary of the major findings of the review;
Issues related to the quality of the research within the area of interest;
Other issues of relevance;
Implications for practice and research, including recommendations for the future; and
Potential limitations of the systematic review.
References
The references should be appropriate in content and volume and include background references
and studies from the initial search. The format must be in Vancouver style, as previously discussed
in the Protocol section.
Appendices
The appendices should include:
critical appraisal form(s);
data extraction form(s);
table of included studies; and
table of excluded studies with justification for exclusion.
These appendices are automatically generated in CReMS.
Conflicts of Interest
A statement should be included in every review protocol being submitted to JBI which either
declares the absence of any conflict of interest, or which describes a specified or potential
conflict of interest. Reviewers are encouraged to refer to the JBI’s policy on commercial funding
of review activity.
Acknowledgements
The source of financial grants and other funding must be acknowledged, including a frank
declaration of the reviewer’s commercial links and affiliations. The contribution of colleagues or
Institutions should also be acknowledged.
Section 5:
Text and Opinion Based Evidence
Chapter Ten:
Text and Opinion Based Evidence and Evidence
Based Practice: Protocol and Title Development
for Reviews of Textual, Non-research evidence
Expert opinion has a role to play in evidence based health care, as it can be used to either
complement empirical evidence or, in the absence of research studies, stand alone as the best
available evidence. While rightly claimed not to be a product of ‘good’ science, expert opinion
is empirically derived and mediated through the cognitive processes of practitioners who have
typically been trained in scientific method. This is not to say that the superior quality of evidence
derived from rigorous research is to be denied; rather, that in its absence, it is not appropriate to
discount expert opinion as ‘non-evidence’. 4
Opinion-based evidence refers to expert opinions, comments, assumptions or assertions that
appear in various journals, magazines, monographs and reports. 4, 58-60 An important feature
of using opinion in evidence based practice “is to be explicit when opinion is used so that
readers understand the basis for the recommendations and can make their own judgment about
validity”. 60
Protocol title
While a number of mnemonics have been discussed in the sections on quantitative and qualitative
protocol development, and can be used for opinion and text, one additional mnemonic may be
better suited to the nature of opinion-based systematic reviews. The mnemonic SPICE
incorporates the Setting, Perspective, Intervention, Comparison and Evaluation. It uses the
more generalised term evaluation rather than outcome, which may be more useful for textual
evidence because it avoids the quantitative implication that outcomes are associated with
causal evidence, particularly randomised controlled trials. However, not all elements necessarily
apply to every text or opinion-based review, and the use of mnemonics should be considered a
guide rather than a policy.
Background
The background should describe and situate the elements of the review, regardless of whether
a particular mnemonic is used or not. The background should provide sufficient detail on each
of the mnemonic elements to justify the conduct of the review and the choice of the various
elements of the review.
The Joanna Briggs Institute places significant emphasis on an extensive, comprehensive,
clear and meaningful background section to every systematic review. Given the international
circulation of systematic reviews, variations in local understandings of clinical practice, health
service management and client or patient experiences need to be clearly stated. It is often
equally important to justify why particular elements are not included.
Review Objectives/Questions
The objectives guide and direct the development of the specific review criteria. Clarity in the
objectives and specificity in the review questions assists in developing a protocol, facilitates
more effective searching, and provides a structure for the development of the full review report.
The review objectives must be stated in full. Conventionally, a statement of the overall objective
is made and elements of the review are then listed as review questions. With reviews of text and
opinion, consideration needs to be given to the phrasing of objectives and specific questions as
causal relationships are not established through evidence of this nature, hence cause and effect
type questions should be avoided.
Questions to consider:
Does the background cover the population, phenomenon of interest and context for
the systematic review? Are operational definitions provided? Do systematic reviews already
exist on the topic? Why is this review important? Are the review objectives/questions
clearly defined?
Inclusion Criteria
Population/Type of participants
Describe the population, giving attention to whether specific characteristics of interest, such as
age, gender, level of education or professional qualification are important to the question. These
specific characteristics should be stated. Specific reference to population characteristics, either
for inclusion or exclusion should be based on a clear justification rather than personal reasoning.
The term population is used, but this does not imply that aspects of population pertinent to
quantitative reviews, such as sampling methods, sample sizes or homogeneity, are either
significant or appropriate in a review of text and opinion.
Intervention/phenomena of interest
Is there a specific intervention or phenomenon of interest? As with other types of reviews,
interventions may be broad areas of practice management, or specific, singular interventions.
However, reviews of text or opinion may also reflect an interest in opinions around power, politics
or other aspects of health care other than direct interventions, in which case, these should be
described in detail.
Comparator
The use of a comparator is not required for a review of text and opinion based literature. In
circumstances where it is considered appropriate, as with the intervention, its nature and
characteristics should be described.
Outcome
As with the comparator, a specific outcome statement is not required. In circumstances where
it is considered appropriate, as with the intervention, its nature and characteristics should be
described.
Search strategy
This section should flow naturally from the criteria that have been established to this point, and
particularly from the objective and questions the review seeks to address. As reviews of opinion
do not draw on published research as the principal designs of interest, the reference is to types
of “papers” or “publications” rather than types of “studies”.
As with all types of systematic reviews conducted through JBI, the search strategy does need to
reflect current international standards for best practice in literature searching. CReMS includes
the following editable statement on searching:
The search strategy aims to find both published and unpublished studies. A three-step search
strategy will be utilised in this review. An initial limited search of MEDLINE and CINAHL will be
undertaken followed by analysis of the text words contained in the title and abstract, and of
the index terms used to describe the article. A second search using all identified keywords and
index terms will then be undertaken across all included databases.
Thirdly, the reference list of all identified reports and articles will be searched for additional
studies. Studies published in #insert language(s)# will be considered for inclusion in this
review. Studies published #insert dates# will be considered for inclusion in this review.
The databases to be searched include:
#insert text#
The search for unpublished studies will include:
#insert text#
Initial keywords to be used will be:
#insert text#
The protocol should also include a list of databases to be searched. If unpublished papers are
to be included, the specific strategies to identify them are also described, and lists of key words
per database are also recorded.
Data extraction
This section of the protocol should detail what data is to be extracted and the tool that will be
used for extracting that data. JBI reviewers of textual data are required to use the NOTARI data
extraction tool which can be found in Appendix XIV. Data extraction serves the same purpose
across evidence types - as in the previous modules that considered quantitative, qualitative and
economic evidence, extraction aims to facilitate the accurate retrieval of important data that
can be identified from many papers and summarised into a single document. An extraction is a
summary of the main details of the publication and should be conducted after carefully reading
the publication. Data extraction incorporates several fields relating to the type of text, its authors
and participants, then the content of the paper in the form of conclusions.
The specific fields and types of text to extract are as follows:
−− Types of Text
The type of opinion that is being appraised, for example, an expert opinion, a guideline, a
Best Practice Information Sheet.
−− Those Represented
To whom the paper refers or relates.
−− Stated Allegiance/Position
A short statement summarising the main thrust of the publication.
−− Setting
Setting is the specific location where the opinion was written, for example, a nursing
home, a hospital or a dementia specific ward in a sub-acute hospital. Some papers will
have no setting at all.
−− Geographical Context
The Geographical context is the location of the author(s) - be as specific as possible, for
example Poland, Austria, or rural New Zealand.
−− Cultural Context
The Cultural context is the cultural features in the publication setting, such as, but not
limited to, time period (16th century); ethnic groupings (indigenous Australians); age
groupings (e.g. older people living in the community); or socio-economic groups (e.g.
working class). When entering information it is important to be as specific as possible.
This data should identify cultural features such as time period, employment, lifestyle,
ethnicity, age, gender, and socio-economic class or context.
−− Logic of Argument
An assessment of the clarity of the argument’s presentation and logic. Is other evidence
provided to support assumptions and conclusions?
−− Author’s Conclusion
The main finding(s) of the publication.
−− Reviewer’s Comments
A summary of the strengths and weaknesses of the paper.
Textual data extraction involves transferring conclusions from the original publication using an
approach agreed upon and standardised for the specific review. Thus, an agreed format is
essential to minimise error, provide an historical record of decisions made about the data in
terms of the review, and to become the data set for categorisation and synthesis. Specifically, the
reviewer is seeking to extract the Conclusions drawn by the author or speaker and the argument
that supports the conclusion. The supporting argument is usually a quotation from the source
document and is cited by page number with the Conclusion if using NOTARI. Many text and
opinion based reports only develop themes and do not report conclusions explicitly.
It is for this reason that reviewers are required to read and re-read each paper closely to identify
the conclusions to be generated into NOTARI.
The editable set text in NOTARI states:
Textual data will be extracted from papers included in the review using the standardised
data extraction tool from JBI-NOTARI. The data extracted will include specific details about
the phenomena of interest, populations, study methods and outcomes of significance to the
review question and specific objectives.
Data synthesis
This section of the protocol should include details of how the extracted data will be synthesised.
The aim of meta-aggregation is: firstly, to assemble conclusions; secondly, to group these
conclusions into categories based on similarity in meaning; and thirdly, to aggregate these
categories to generate a set of statements that adequately represents that aggregation. These
statements are referred to as synthesised findings, and they can be used as a basis for
evidence-based practice.
In order to facilitate this process, as with ensuring a common understanding of the appraisal
criteria and how they will be applied, reviewers need to discuss synthesis and work to common
understandings on the assignment of categories, and assignment to synthesised findings.
NOTARI describes a particular approach to the synthesis of textual papers. As with meta-
aggregation in QARI, synthesis in NOTARI is a three-step analytical process undertaken within
the module:
Textual papers will, where possible be pooled using JBI-NOTARI. This will involve the
aggregation or synthesis of conclusions to generate a set of statements that represent that
aggregation, through assembling and categorising these conclusions on the basis of similarity
in meaning. These categories are then subjected to a meta-synthesis in order to produce a
single comprehensive set of synthesised findings that can be used as a basis for evidence-
based practice. Where textual pooling is not possible the conclusions will be presented in
narrative form.
The aim of synthesis is for the reviewer to establish synthesised findings by bringing together key
conclusions drawn from all of the included papers. Conclusions are the principal opinion
statements embedded in the paper, identified by the reviewer(s) after examining the text of the
paper. It is for this reason that reviewers are required to read and re-read the paper closely to
identify the conclusions to be generated into NOTARI.
Once all information on a review is collected (see section on extraction) in the form of extractions
and conclusions, the conclusions can be allocated by the reviewer on the basis of similarity to
user defined “Categories”. Categories are groups of conclusions that reflect similar relationships
between similar phenomena, variables or circumstances that may inform practice.
Categorising is the first step in aggregating conclusions and moves from a focus on individual
papers to the conclusions as a whole. To do this, the reviewer needs to read all of the conclusions
from all the papers to identify categories.
To synthesise the categories, the reviewer needs to consider the full list of categories and identify
categories of sufficient similarity in meaning to generate synthesised findings.
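The aggregation steps above (assemble conclusions, group them into categories by similarity in meaning, then combine similar categories into synthesised findings) can be roughly illustrated with a sketch. This is a hypothetical illustration only: the paper labels, conclusion texts, category names and the synthesised finding are all invented, and in a real review the grouping reflects the reviewers' judgement, not a lookup table.

```python
# Hypothetical sketch of meta-aggregation for textual conclusions.

# Step 1: assemble the conclusions extracted from the included papers
conclusions = [
    ("Paper 1", "Staff need dementia-specific training"),
    ("Paper 2", "Education improves carer confidence"),
    ("Paper 3", "Families should be involved in care planning"),
]

# Step 2: the reviewer assigns each conclusion to a category
# on the basis of similarity in meaning (invented assignments)
category_of = {
    "Staff need dementia-specific training": "Education and training",
    "Education improves carer confidence": "Education and training",
    "Families should be involved in care planning": "Partnership with families",
}

categories = {}
for paper, text in conclusions:
    categories.setdefault(category_of[text], []).append((paper, text))

# Step 3: categories of similar meaning are aggregated into a synthesised finding
synthesised_finding = "Preparing staff and involving families underpins good care"
print(f"Synthesised finding: {synthesised_finding}")
for name, members in categories.items():
    print(f"  Category '{name}' ({len(members)} conclusion(s))")
```

The point of the sketch is only the shape of the data: many conclusions, fewer categories, and a single statement per group of similar categories.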
A synthesis is defined as a group of categorised conclusions that allows for the generation of
recommendations for practice. This process is illustrated in Figure 8.
Conflict of Interest
A statement should be included in every review protocol being submitted to JBI which either
declares the absence of any conflict of interest, or which describes a specified or potential
conflict of interest. Reviewers are encouraged to refer to the JBI’s policy on commercial funding
of review activity.
Acknowledgements
The source of financial grants and other funding must be acknowledged, including a frank
declaration of the reviewer’s commercial links and affiliations. The contribution of colleagues or
institutions should also be acknowledged.
References
Protocols are required to use Vancouver style referencing. References should be numbered
with superscript Arabic numerals in the order in which they appear in the text. Full reference
details should be listed in numerical order in the reference section.
More information about the Vancouver style is detailed in the International Committee of Medical
Journal Editors’ revised ‘Uniform Requirements for Manuscripts Submitted to Biomedical
Journals: Writing and Editing for Biomedical Publication’, and can be found at
http://www.ICMJE.org/
Appendices
Appendices should be placed at the end of the protocol and be numbered with Roman numerals
in the order in which they appear in text. At a minimum this will include critical appraisal and data
extraction tools.
Does the protocol have any conflicts of interest and acknowledgments declared, appendices
attached, and references in Vancouver style?
Once a protocol has been approved, it is published on the JBI website. Protocols can be found
at: http://www.joannabriggs.edu.au/Access%20Evidence/Systematic%20Review%20Protocols
Chapter Eleven:
The Systematic Review and Synthesis
of Text and Opinion Data
Please refer to the JBI website for specific presentation requirements for systematic review
reports http://www.joannabriggs.edu.au.
All JBI systematic reviews are based on approved, peer-reviewed systematic review protocols.
Deviations from approved protocols are rare and should be clearly justified in the report. JBI
advocates for approved, peer-reviewed systematic review protocols as an essential part of the
process to enhance the quality and transparency of systematic reviews.
JBI systematic reviews should use Australian spelling and authors should therefore follow the latest
edition of the Macquarie Dictionary. All measurements must be given in Systeme International
d’Unites (SI) units. Abbreviations should be used sparingly; use only where they ease the reader’s
task by reducing repetition of long, technical terms. Initially use the word in full, followed by the
abbreviation in parentheses. Thereafter use the abbreviation only. Drugs should be referred to
by their generic names. If proprietary drugs have been used in the study, refer to these by their
generic name, mentioning the proprietary name, and the name and location of the manufacturer,
in parentheses.
Review authors:
The names, contact details and JBI affiliation should be listed for each reviewer.
Executive Summary:
This section should be a summary of the review in 500 words or fewer stating the purpose,
basic procedures, main findings and principal conclusions of the study. The executive summary
should not contain abbreviations or references. The following headings should be included in the
Executive Summary:
−− Background:
This section should briefly describe the issue under review including the target population,
interventions and outcomes that are documented in the literature. The background should
be an overview of the main issues. It should provide sufficient detail to justify why the
review was conducted and the choice of the various elements such as the interventions
and outcomes.
−− Objectives:
The review objectives should be stated in full, as detailed in the protocol section.
Inclusion criteria:
−− Types of participants
The report should provide details about the types of participants included in the review.
Useful details include: age range, condition/diagnosis or health care issue, administration
of medication. Details of where the studies were conducted (e.g. rural/urban setting and
country) should also be included. Again the decisions about the types of participants
should have been explained in the background.
−− Types of interventions
This section should present all the interventions examined, as detailed in the protocol.
−− Types of outcome measures
There should be a list of the outcome measures considered, as detailed in the protocol.
−− Types of publications
As per the protocol section, the types of publications that were considered for the review
should be included. There should be a statement about the target publication type and
whether or not this type was found. The types of publication identified by the search
and those included should be detailed in the report.
−− Search strategy
A brief description of the search strategy should be included. This section should detail
search activity (e.g. databases searched, initial search terms and any restrictions) for the
review, as predetermined in the protocol.
−− Data collection
This section should include a brief description of the types of data collected and the
instrument used to extract data.
−− Data synthesis
This section should include a brief description of how the data were synthesised, whether as
a meta-synthesis or as a narrative summary.
−− Conclusions
This section should include a brief description of the findings and conclusions of the
review.
−− Implications for practice
This section should include a brief description of how the findings and conclusions of the
review may be applied in practice, as well as any implications that the findings may have
on current practice.
−− Implications for research
This section should include a brief description of how the findings of the review may lead
to further research in the area – such as gaps identified in the body of knowledge.
Following the Executive Summary, the report should include the following sections:
Background
As discussed in the protocol section, The Joanna Briggs Institute places significant emphasis on a
comprehensive, clear and meaningful background section to every systematic review particularly
given the international circulation of systematic reviews, variation in local understandings of
clinical practice, health service management and client or patient experiences.
Review Objectives/Questions
As discussed previously in the protocol section, the objective(s) of the review should be clearly
stated. Conventionally a statement of the overall objective should be made and elements of the
review then listed as review questions.
Inclusion Criteria
As detailed in the protocol, the inclusion criteria used to determine eligibility for inclusion
should be stated. For a review of text and opinion the SPICE mnemonic (Setting, Perspective,
Intervention, Comparison and Evaluation) may be helpful.
Types of Participants
There should be details about the type of individuals targeted, including characteristics (e.g. age
range), condition/diagnosis or health care issue (e.g. administration of medication in rural areas)
and the setting(s) in which the individuals are being managed. Again, the decisions about the
types of participants should have been justified in the background.
Search Strategy
Developing a search strategy for Opinion and Text-based evidence
There is a range of databases relevant to finding expert opinion based literature. Examples
include CINAHL, PubMed, the CRD databases from the NHS Centre for Reviews and
Dissemination at the University of York, PsycINFO, the National Guideline Clearinghouse and
the Cochrane Library.
The following text works through the critical appraisal checklist items.
1. Is the source of opinion clearly identified? Is there a named author?
While unnamed editorial pieces in journals, newspapers or magazines are given broader licence
for comment, authorship should be identifiable.
2. Does the source of opinion have standing in the field of expertise?
The qualifications, current appointment and current affiliations with specific groups need to be
stated in the publication and the reviewer needs to be satisfied that the author(s) has some
standing within the field.
3. Are the interests of patients/clients the central focus of the opinion?
This question seeks to establish whether the paper’s focus is on achieving the best health
outcomes or on advantaging a particular professional or other group. If the review topic is related to
a clinical intervention, or aspect of health care delivery, a focus on health outcomes will be
pertinent to the review. However, if for example the review is focused on addressing an issue
of inter-professional behaviour or power relations, a focus on the relevant groups is desired
and applicable. Therefore this question should be answered in context with the purpose of
the review. The aim of this question is to establish the author’s purpose in writing the paper by
considering the intended audience.
4. Is the opinion’s basis in logic/experience clearly argued?
In order to establish the clarity or otherwise of the rationale or basis for the opinion, give
consideration to the direction of the main lines of argument. Questions to pose of each textual
paper include: What are the main points in the conclusions or recommendations? What
arguments does the author use to support the main points? Is the argument logical? Have
important terms been clearly defined? Do the arguments support the main points?
5. Is the argument that has been developed analytical? Is the opinion the result of an analytical
process drawing on experience or the literature?
Does the argument present as an analytical construct of a line of debate or does it appear that
ad hoc reasoning was employed?
6. Is there reference to the extant literature/evidence, and is any incongruence with it
logically defended?
If there is reference to the extant literature, is it a non-biased, inclusive representation, or is
it a non-critical description of content specifically supportive of the line of argument being
put forward? These considerations will highlight the robustness of how cited literature was
managed.
Data extraction
This section of the review should include details of the types of data extracted for inclusion
in the review. Data extraction begins with recording the type of text. Once data extraction of
the background details is complete, the extraction becomes highly specific to the nature of
the data of interest and the question being asked in the review. In SUMARI, elements of data
extraction are undertaken through the analytical modules and the data extracted is automatically
transferred to CReMS. For reviews of text and opinion, synthesis is conducted in the NOTARI
analytical module, and the final report is generated in CReMS.
6. Cultural Context
The Cultural Context refers to the cultural features in the publication setting, such as, but
not limited to: time period (16th century); ethnic groupings (indigenous nationalities); age
groupings (e.g. older people living in the community); or socio-economic groups (e.g. working
class). When entering information be as specific as possible. This data should identify cultural
features such as employment, lifestyle, ethnicity, age, gender, socio-economic class, and
time period.
7. Logic of Argument
An assessment of the clarity of the argument’s presentation and logic. Is other evidence
provided to support assumptions and conclusions?
8. Data Analysis
This section of the report should include any techniques that may have been used to analyse
the data – e.g. named software program.
9. Author’s Conclusion
Use this field to describe the main finding of the publication.
10. Reviewer’s Comments
Use this field to summarise the strengths and weaknesses of the paper.
The results section then focuses on providing a detailed description of the results of the review.
For clarity and consistency of presentation, JBI recommends that the reviewers, in discussion
with their review panel, give consideration to whether the findings can be reported under the
outcomes specified in the protocol.
Where a systematic review seeks to address multiple questions, the results may be structured in
such a way that particular outcomes are presented under specific questions.
The role of tables and appendices should not be overlooked. Adding extensive detail on studies
in the results section may “crowd” the findings, making them less accessible to readers, hence
the use of tables, graphs and in-text reference to specific appendices is encouraged. Additionally,
and significantly, the report structure should give consideration to the needs of the target journal;
for JBI systematic reviews, the preferred journal is the International Journal of Evidence-Based
Health Care, and details about this journal are available online.
Has the NOTARI data extraction tool been appended to the review? Have all of the extracted
findings been discussed and assigned levels of credibility in the review?
Data Analysis
As the process relates to textual findings rather than numeric data, the need for methodological
homogeneity – so important in the meta-analysis of the results of quantitative studies – is not a
consideration. The meta-aggregation of findings of qualitative studies can legitimately aggregate
findings from studies that have used radically different, competing and antagonistic methodological
claims and assumptions, within a qualitative paradigm. Meta-aggregation in NOTARI does not
distinguish between methodologies or theoretical standpoints and adopts a pluralist position that
values viewing phenomena from different perspectives.
Joanna Briggs Institute 121
Reviewers’ Manual 2011
Data Synthesis
This section of the report should include how the findings were synthesised. Where meta-aggregation is possible, textual findings should be pooled using NOTARI; however, if necessary, the reviewer may use interpretive techniques to summarise the findings of individual papers.
The processes for categorisation and formulating synthesised findings mirror that of QARI. For
a more detailed discussion of synthesis reviewers are encouraged to read the section on data
synthesis for qualitative studies.
Data synthesis should involve the aggregation or synthesis of findings to generate a set of
statements that represent that aggregation, through assembling the findings rated according
to their credibility, and categorising these findings on the basis of similarity in meaning.
These categories should then be subjected to a meta-synthesis in order to produce a single
comprehensive set of synthesised findings that can be used as a basis for evidence-based
practice. Where textual pooling is not possible the findings can be presented in narrative form.
The editable NOTARI set text states:
Textual papers will, where possible, be pooled using JBI-NOTARI. This will involve the
aggregation or synthesis of conclusions to generate a set of statements that represent that
aggregation, through assembling and categorising these conclusions on the basis of similarity
in meaning. These categories are then subjected to a meta-synthesis in order to produce a
single comprehensive set of synthesised findings that can be used as a basis for evidence-
based practice. Where textual pooling is not possible the conclusions will be presented in
narrative form.
The set text in CReMS describes the process by which these options are implemented in the
protocol development section as follows:
Prior to carrying out data synthesis, reviewers first need to establish, and then document:
−− their own rules for setting up categories;
−− how to assign conclusions (findings) to categories; and
−− how to aggregate categories into synthesised findings.
Conclusions are principal findings reached by the reviewer(s) after examining the results of data analysis (for example, themes or metaphors), consisting of a statement that relates two or more phenomena, variables or circumstances in a way that may inform practice. A reviewer can add conclusions to a study after an extraction has been completed on that paper.
The JBI approach to synthesising the conclusions of textual or non-research studies requires
reviewers to consider the validity of each report as a source of guidance for practice; identify and
extract the conclusions from papers included in the review; and to aggregate these conclusions
as synthesised findings. To reiterate:
Findings are conclusions reached and reported by the author of the paper, often in the form
of themes, categories or metaphors.
122 Section 5
Text & Opinion Based Evidence
The most complex problem in synthesising textual data is agreeing on and communicating
techniques to compare the conclusions of each publication. The JBI approach uses the NOTARI
analytical module for the meta-synthesis of opinion and text. This process involves categorising
and re-categorising the conclusions of two or more studies to develop synthesised findings. In
order to pursue this, reviewers, before carrying out data synthesis, need to establish their own
rules on:
−− how to assign conclusions to categories, and
−− how to aggregate categories into synthesised findings.
Reviewers should also document these decisions and their rationale in the systematic
review report.
Many text and opinion-based reports only develop themes and do not report conclusions explicitly. For this reason, reviewers are required to read and re-read each paper closely to identify the conclusions to be entered into NOTARI.
Each conclusion/finding should be assigned a level of credibility, based on the congruency of the
finding with supporting data from the paper where the finding was found. Textual evidence has
three levels of credibility:
Unequivocal – relates to evidence beyond reasonable doubt, which may include findings that are matters of fact, directly reported/observed and not open to challenge.
Credible – relates to findings that are, albeit interpretations, plausible in light of the data and theoretical framework. They can be logically inferred from the data; because the findings are interpretive, they can be challenged.
Unsupported – applies when the findings are not supported by the data.
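The three credibility levels above can be thought of as labels attached to each extracted finding and its supporting illustration. A minimal sketch of that record-keeping follows; the names `Credibility` and `Finding` are hypothetical illustrations, not part of NOTARI or any JBI software:

```python
from dataclasses import dataclass
from enum import Enum

class Credibility(Enum):
    """The three NOTARI levels of credibility for textual findings."""
    UNEQUIVOCAL = "unequivocal"  # matter of fact, directly reported, not open to challenge
    CREDIBLE = "credible"        # plausible interpretation; logically inferred, can be challenged
    UNSUPPORTED = "unsupported"  # not supported by the data

@dataclass
class Finding:
    """A conclusion extracted from a paper, with its supporting illustration."""
    text: str           # the finding/conclusion as stated
    illustration: str   # supporting data quoted from the paper
    credibility: Credibility

# Example: an interpretive finding backed by quoted data is rated Credible.
f = Finding(
    text="Staff value protected time for reflective practice",
    illustration="Quotation from the included paper supporting the finding",
    credibility=Credibility.CREDIBLE,
)
print(f.credibility.name)  # CREDIBLE
```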
When all conclusions and supporting illustrative data have been identified, the reviewer needs to
read all of the conclusions and identify similarities that can then be used to create categories of
more than one finding.
Categorisation is the first step in aggregating conclusions and moves from a focus on individual
papers to consideration of all conclusions for all papers included in the review. Categorisation
is based on similarity in meaning as determined by the reviewers. Once categories have been
established, they are read and re-read in light of the findings, their illustrations and in discussion
between reviewers to establish synthesised findings. When the allocation of categories to synthesised findings (a set of statements that adequately represent the data) is completed, NOTARI sorts the data into a meta-synthesis table, or “NOTARI-View”. These statements can be used as a basis for evidence-based practice.
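The two aggregation steps described above (findings grouped into categories by similarity in meaning, and categories aggregated into synthesised findings) amount to a simple two-level grouping. The sketch below is illustrative only; the dictionary names and example findings are hypothetical and do not reflect NOTARI's internal representation:

```python
# Findings extracted from included papers, grouped under reviewer-assigned
# categories. Category membership is a reviewer judgement of similarity in meaning.
findings_by_category = {
    "Need for information": [
        "Patients wanted written material before discharge",
        "Families asked for follow-up contact details",
    ],
    "Continuity of care": [
        "Patients valued seeing the same clinician",
    ],
}

# Categories are then aggregated into synthesised findings (the meta-synthesis),
# each a single statement representing the categories beneath it.
synthesised_findings = {
    "Discharge processes should address information needs and continuity": [
        "Need for information",
        "Continuity of care",
    ],
}

# Every category feeding a synthesised finding must exist in the category table,
# so the chain finding -> category -> synthesised finding is traceable.
for statement, categories in synthesised_findings.items():
    assert all(c in findings_by_category for c in categories)
    total = sum(len(findings_by_category[c]) for c in categories)
    print(f"{statement}: {total} findings")
```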
Have all of the conclusions been extracted from the included papers?
Do all of the conclusions have illustrations?
Do all of the conclusions have levels of credibility assigned to them?
Results
Description of publications
This section should include the type and number of papers identified by the search, and the numbers of studies that were included in and excluded from the review. A flowchart, such as that shown in Figure 9, should be provided.
[Figure 9: flowchart of trials included in the systematic review (n = 18), divided into multifaceted interventions (n = 10) and targeted interventions (n = 8).]
Review Findings
There is no standardised international approach to structuring how the findings of systematic reviews
of textual or non-research evidence should be reported. The audience for the review should
be considered when structuring and writing up the findings. NOTARI-view graphs represent a specific item of analysis that can be incorporated into the results section of a review. However, the results are more than the NOTARI-view graphs: whether the section is structured around the intervention of interest or in some other way, its content needs to present the results with clarity, using the available tools (NOTARI-view graphs, tables, figures) supported by textual descriptions.
Given that there is no clear international standard or agreement on the structure or key components of this section of a review report, and the level of variation evident in published systematic reviews, the parameters described in this section should be considered guidance rather than prescription.
Discussion
This section should provide a detailed discussion of issues arising from the conduct of the review, as well as a discussion of the findings of the review, and should demonstrate the significance of the review findings in relation to practice and research. Areas that may be addressed include:
−− A summary of the major findings of the review
−− Issues related to the quality of the research within the area of interest (such as poor
indexing)
−− Other issues of relevance
−− Implications for practice and research, including recommendations for the future
−− Potential limitations of the systematic review (such as a narrow search timeframe or other
restrictions)
The discussion does not bring in new literature or findings that have not been reported in the
results section but does seek to establish a line of argument based on the findings regarding the
phenomenon of interest, or its impact on the outcomes identified in the protocol.
Conclusions
Implications for practice
Where evidence is of a sufficient level, appropriate recommendations should be made. The
implications must be based on the documented results, not reviewer opinion. Recommendations
must be clear, concise and unambiguous.
Developing recommendations
The Joanna Briggs Institute develops and publishes recommendations for practice with each
systematic review, wherever possible. Across the different types of evidence and approaches
to systematic reviews, a common approach is the construction of recommendations for practice, which can be summed up as the requirement that recommendations be phrased as declamatory statements.
Conflict of Interest
A statement should be included in every review protocol being submitted to JBI which either
declares the absence of any conflict of interest, or which describes a specified or potential
conflict of interest. Reviewers are encouraged to refer to the JBI’s policy on commercial funding
of review activity.
Acknowledgements
The source of financial grants and other funding must be acknowledged, including a frank declaration of the reviewers’ commercial links and affiliations. The contribution of colleagues or institutions should also be acknowledged.
References
Protocols are required to use Vancouver-style referencing. References should be numbered with superscript Arabic numerals in the order in which they appear in the text, and full reference details should be listed in numerical order in the reference section.
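Vancouver numbering, in which each source is numbered by the order of its first appearance in the text, can be sketched as follows. This is an illustrative helper only (the function name and citation keys are hypothetical, not part of any JBI tool):

```python
def vancouver_numbers(citation_keys):
    """Assign sequential numbers to sources in order of first appearance.

    citation_keys: the sequence of source identifiers as cited in the text,
    possibly with repeats. Returns a dict mapping each source to its number.
    """
    numbering = {}
    for key in citation_keys:
        if key not in numbering:           # a repeat citation keeps its first number
            numbering[key] = len(numbering) + 1
    return numbering

# A source cited again later retains its original number.
print(vancouver_numbers(["Smith2009", "Jones2010", "Smith2009", "Brown2008"]))
# {'Smith2009': 1, 'Jones2010': 2, 'Brown2008': 3}
```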
More information about the Vancouver style is detailed in the International Committee of
Medical Journal Editors’ revised ‘Uniform Requirements for Manuscripts Submitted to
Biomedical Journals: Writing and Editing for Biomedical Publication’, and can be found at
http://www.ICMJE.org/
Appendices
The appendices should include:
−− critical appraisal form(s);
−− data extraction form(s);
−− table of included studies; and
−− table of excluded studies with justification for exclusion.
These appendices are automatically generated in CReMS.
SECTION 6
Publishing
The process for publishing a review that has been conducted using the JBI approach to the
systematic review of literature involves an internal quality improvement process through the
submission (and subsequent approval) of a protocol and a review to the Synthesis Science Unit
(SSU). The review is then uploaded to the JBI Library of Systematic Reviews. The JBI Library
of Systematic Reviews publishes systematic reviews undertaken by the Joanna Briggs Institute
and its international collaborating centres and groups. However, there is a facility within the Library for the publication of systematic reviews undertaken by authors unaffiliated with the Joanna Briggs Institute.
Centres undertaking systematic reviews as their core focus are required to submit their systematic review reports to the JBI Library of Systematic Reviews in order for them to be considered as output for their Centre. This output is used to determine the Centre’s status and funding eligibility on an annual basis.
Please note: Only systematic reviews that have had their protocols approved by the SSU prior to
review submission are eligible to be published in the JBI Library of Systematic Reviews. Those
who have not submitted a protocol to the SSU will be invited to submit their review to the
International Journal of Evidence-Based Healthcare (see below).
Editor-in-Chief
Emeritus Professor Derek Frewin AO
Faculty of Health Sciences, The University of Adelaide and the Cardiac Clinic, Royal Adelaide
Hospital, Adelaide, South Australia, Australia
Objectives
The objectives of the SSU, in relation to protocols and systematic reviews developed by JBI, the
Collaboration or ESGs are to:
i. Support Collaborating Centres and Evidence Synthesis Groups to develop high quality
Systematic Review Protocols and Systematic Review Reports;
ii. Develop an annual “Best Practice Information Sheet” booklet.
Specifically, the SSU supports quality improvement and increased output of systematic reviews
by accepting responsibility for all internal peer reviews of systematic review protocols. It provides
constructive feedback to reviewers, and assists reviewers in improving protocols, search
strategies and reporting. Furthermore, the SSU encourages reviewers to complete systematic
reviews in a timely fashion by monitoring the progress of registered systematic reviews and
maintaining contact with reviewers.
On receipt of new protocols submitted to the Institute for approval (prior to uploading to the
database of systematic review protocols), members of the SSU will rigorously review the protocol
and give supportive, constructive feedback to authors and establish a supportive peer review
process to enable authors to develop a high quality protocol. Unit staff will also offer additional
assistance – such as re-designing search strategies, conducting searches and assisting with
the retrieval of documents – to groups with limited access to electronic databases and the
international literature. Following submission, this feedback is provided to authors within a two-week timeframe.
Once a protocol is approved and uploaded to the database of protocols, Unit staff will work with
reviewers to facilitate completion of the review and will monitor progress towards completion on a three-monthly basis. When a completed report is submitted to the unit, SSU research fellows
rigorously review the report and give supportive, constructive feedback to authors and establish
a supportive process to enable authors to finalise and subsequently publish the report. Following
submission of the report, feedback is provided to authors within a four-week time frame.
Essentially, the goal of the Unit is to increase both the quality and output of systematic reviews
by providing support, advice and assistance, rather than acting as critics requiring reviewers to
interpret and act upon critique.
Reviewers who have completed the JBI CSRTP and the four day Train-the-Trainer Program may
be granted a license to deliver the JBI CSRTP. Reviewers who are interested in the Train-the-
Trainer Program must be affiliated with a Collaborating Centre.
Companion Publications
While a core function of JBI, the JBC and ESGs is to develop and produce systematic reviews,
the intended result of this review activity is to improve global health by providing practitioners
with the best available evidence concerning the feasibility, appropriateness, meaningfulness and
effectiveness of health care practice, interventions and experiences. To maximise exposure to best practice, systematic reviews produced through JBI, or through entities known for the production of high-quality reviews, are re-written as Best Practice Information Sheets. Each Best Practice
Information Sheet is accompanied by a Technical Report, which is also written by the Synthesis
Science Unit. Further information on these documents is provided below.
Technical Reports
A technical report is developed alongside the Best Practice Information Sheet to detail the development process between the systematic review and the guideline for health professionals.
Technical reports contain all details of reviewers and review panel members, as well as all
references used. Technical reports are produced by the SSU.
References
1. Grant M, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information and Libraries Journal. 2009;26:91-108.
2. Higgins J, Green S. Cochrane Handbook for Systematic Reviews of Interventions Version 5.0.2. Oxford: The Cochrane Collaboration; 2009 [cited October 2010]; Available from: www.cochrane-handbook.org.
3. Guyatt G, Sackett D, Sinclair J, Hayward R, Cook D, Cook R, et al. Users’ Guides to the Medical Literature IX. A Method for Grading Health Care Recommendations. JAMA. 1995;274(22):1800-4.
4. The Joanna Briggs Institute. The Joanna Briggs Institute Reviewers Manual 2008 Edition. Adelaide: The Joanna Briggs Institute; 2008.
5. Kotaska A. Inappropriate use of randomised trials to evaluate complex phenomena: case study of vaginal breech delivery. BMJ. 2004;329(7473):1039-42.
6. Muir Gray J. Evidence based policy making. British Medical Journal. 2004;329(7374):988.
7. Proctor S, Renfew M. Linking research and practice in midwifery. Aslam R, editor. Edinburgh: Bailliere Tindall; 2000.
8. Pearson A. The JBI Approach to Evidence-Based Practice. Adelaide: The Joanna Briggs Institute; 2008 [cited 2010 Nov 30]; Available from: www.joannabriggs.edu.au/pdf/about/Approach.pdf
9. Pearson A, editor. Evidence based practice in nursing and healthcare: Assimilating research, experience and expertise. Oxford: Blackwell; 2007.
10. Pope C. Qualitative research in health care. Mays N, editor. London: BMJ Publishing Group; 1999.
11. Jordan Z, Donnelly P, Pittman P. A short history of a BIG idea: The Joanna Briggs Institute 1996-2006. Melbourne: The Joanna Briggs Institute; 2006.
12. Ailinger R. Contributions of qualitative evidence to evidence based practice in nursing. Revista Latino-americana de Enfermagem. 2003;11(3):275-9.
13. Denzin N, Lincoln Y. Handbook of qualitative research. 3rd ed. Thousand Oaks, CA: Sage Publications; 2005.
14. Wong S, Wilczynski N, Haynes R. Developing optimal search strategies for detecting clinically relevant qualitative studies in Medline. Stud Health Technol Inform. 2004;107(1):311-6.
15. Black N. Why we need qualitative research. Journal of Epidemiological Community Health. 1994;48:425-6.
16. Greenhalgh T, Taylor R. Papers that go beyond numbers (qualitative research). BMJ. 1997 Sep 20;315(7110):740-3.
17. Centre for Reviews and Dissemination. Guidelines for Undertaking Reviews in Health Care. York: University of York; 2009.
18. Barbour RS. The case for combining qualitative and quantitative approaches in health services research. J Health Serv Res Policy. 1999 Jan;4(1):39-43.
19. Forman J, Creswell JW, Damschroder L, Kowalski CP, Krein SL. Qualitative research methods: key features and insights gained from use in infection prevention research. Am J Infect Control. 2008 Dec;36(10):764-71.
20. Moffatt S, White M, Mackintosh J, Howel D. Using quantitative and qualitative data in health services research - what happens when mixed method findings conflict? [ISRCTN61522618]. BMC Health Serv Res. 2006;6:28.
21. Thorne, Jensen, Kearney, Noblit, Sandelowski. 2004.
22. Sandelowski M, Docherty S, Emden C. Focus on qualitative methods. Qualitative metasynthesis: issues and techniques. Research in Nursing and Health. 1997;20:365-71.
23. Hannes K, Lockwood C. Pragmatism as the philosophical foundation for the JBI meta-aggregative approach to qualitative evidence synthesis. Journal of Advanced Nursing. In press.
24. Pearson A. Balancing the evidence: incorporating the synthesis of qualitative data into systematic reviews. JBI Report. 2004;2(2):45-65.
25. Noblit G, Hare R. Meta-Ethnography: Synthesizing Qualitative Studies. Qualitative Research Methods. Newbury Park: Sage Publications; 1988.
26. Doye. 2003.
27. Shaw R, Booth A, Sutton A, Miller T, Smith J, Young B, et al. Finding qualitative research: an evaluation of search strategies. BMC Medical Research Methodology. 2004;4(5):1-5.
28. Hannes K, Lockwood C. Pragmatism as the philosophical foundation for the Joanna Briggs Meta-aggregative Approach to Qualitative Evidence Synthesis. Journal of Advanced Nursing. 2010 (in press).
29. Halcomb E, Moujalli S, Griffiths R, Davidson P. Effectiveness of general practice nurse interventions in cardiac risk factor reduction among adults. International Journal of Evidence Based Health Care. 2007;5(3):299-5.
30. Barza M, Trikalinos TA, Lau J. Statistical Considerations in Meta-analysis. Infectious Disease Clinics of North America. 2009;23(2):195-210.
31. Borenstein M, Hedges L, Higgins J, Rothstein H. Introduction to Meta-analysis. Chichester: John Wiley & Sons; 2009.
32. Greenhalgh T. How to read a paper. Statistics for the non-statistician. II: “Significant” relations and their pitfalls. BMJ. 1997 Aug 16;315(7105):422-5.
33. Greenhalgh T. How to read a paper. Statistics for the non-statistician. I: Different types of data need different statistical tests. BMJ. 1997 Aug 9;315(7104):364-6.
34. Bastian H. Personal views: Learning from evidence based mistakes. British Medical Journal. 2004;329(7473):1053.
35. Moher D. The inclusion of reports of randomised trials published in languages other than English in systematic reviews. Health Technology Assessment. 2003;7(41):1-90.
36. Pearson A, Field J, Jordan Z. Evidence based practice in nursing and health care: Assimilating research, experience and expertise. Oxford: Blackwell Publishing; 2007.
37. Trohler U. The 18th century British Origins of a critical approach. Edinburgh: Royal College of Physicians; 2000.
38. Miller S, Fredericks M. The Nature of “Evidence” in Qualitative Research Methods. International Journal of Qualitative Evidence. 2003;2(1):39-51.
39. Pearson A, Wiechula R, Court A, Lockwood C. A re-consideration of what constitutes “evidence” in the healthcare professions. Nursing Science Quarterly. 2007 Jan;20(1):85-8.
40. Shadish W, Cook T, Campbell T. Experimental and quasi-experimental designs for generalised causal inference. Boston: Houghton Mifflin Company; 2001.
41. Altman D, Bland J. Treatment allocation in controlled trials: why randomise? British Medical Journal. 1999;318:1209.
42. Schulz K, Grimes D. Blinding in randomised trials: hiding who got what. Lancet. 2002;359:696-700.
43. Ioannidis JP. Integration of evidence from multiple meta-analyses: a primer on umbrella reviews, treatment networks and multiple treatments meta-analyses. CMAJ. 2009 Oct 13;181(8):488-93.
44. Petitti D. Meta-analysis, Decision Analysis, and Cost-Effectiveness Analysis: Methods for Quantitative Synthesis in Medicine. 2nd ed. New York: Oxford University Press; 2000.
45. Deeks J, Higgins J, Altman D. Analysing and presenting results: section 8. Higgins J, Green S, editors. Chichester: John Wiley and Sons Ltd; 2006.
46. Higgins J, Thompson S, Deeks J, Altman D. Statistical heterogeneity in systematic reviews of clinical trials: a critical appraisal of guidelines and practice. Journal of Health Services Research and Policy. 2002;7(1):51-61.
47. Hardy R, Thompson S. Detecting and describing heterogeneity in meta-analysis. Statistics in Medicine. 1998;17(8):841-56.
48. Deeks J, Higgins J, Altman D. Statistical methods for examining heterogeneity and combining results from several studies in meta-analysis. Egger M, Davey-Smith G, Altman D, editors. London: BMJ Publishing Group; 2001.
49. Liberati A, Altman D, Tetzlaff J, Mulrow C, Gotzsche P, Ioannidis J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Medicine. 2009;6(7):e1000100.
50. Moher D, Liberati A, Tetzlaff J, Altman D. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine. 2009;6(7):e1000097.
51. Annemans L. Health economics for non-economists. An introduction to the concepts, methods and pitfalls of health economic evaluations. Gent: Academia Press; 2008.
52. Elliott R, Payne K. Essentials of Economic Evaluation in Healthcare. London: Pharmaceutical Press; 2005.
Glossary
Action Research
a method of collaborative research which seeks to create self-critical communities as a basis for change
Association
a term to describe a relationship between two factors. Often used where there is no clear causal effect of one variable upon the other
Benefit-Cost Ratio
a ratio commonly used to describe the conclusion of a Cost–Benefit study. It is the ratio of the present value of benefits to the present value of costs
Category/categories
terms used to describe a group of findings that can be grouped together on the basis of similarity of meaning. This is the first step in aggregating study findings in the JBI meta-aggregation approach to meta-synthesis
Causation
a term to describe a relationship between two factors where changes in one factor lead to measurable changes in the other
Comprehensive systematic review
a JBI comprehensive review is a systematic review that incorporates more than one type of evidence, e.g. both qualitative and quantitative evidence
Continuous
data that can be measured on a scale that can take any value within a given range, such as height, weight or blood pressure
Control
in general, refers to a group which is not receiving the new intervention, receiving the placebo or receiving standard healthcare, and is being used to compare the effectiveness of a treatment
Convenience sampling
a method for recruiting participants to a study. A convenience sample refers to a group who are being studied because they are conveniently accessible in some way. A convenience sample, for example, might be all the people at a certain hospital, or attending a particular support group. A convenience sample may be unrepresentative, as they are not a random sample of the whole population
Correlation
the strength and direction of a relationship between variables
Cost-benefit analysis
(CBA) is an analytic tool for estimating the net social benefit of a programme or intervention as the incremental benefit of the program less the incremental costs, with all benefits and costs measured in monetary units (e.g., dollars)
Cost-effectiveness analysis
(CEA) is an analytic tool in which costs and effects of a program and at least one alternative are calculated and presented in a ratio of incremental costs to incremental effect
Cost-effectiveness ratio
the incremental cost of obtaining a unit of health effect (such as dollars per year, or life expectancy) from a given health intervention, when compared with an alternative
Cost-minimisation analysis
(CMA) is an analytic tool used to compare the net costs of programs that achieve the same outcome
Costs
in economic evaluation studies, refer to the value of resources that have a cost as a result of being used in the provision of an intervention
Cost-utility analysis
(CUA) is an economic evaluation study in which costs are measured in monetary units and consequences are typically measured as quality-adjusted life-years (QALYs)
Critical appraisal
the process of comparing potentially relevant studies to pre-defined criteria in order to assess methodological quality. Usually checklists are used, with items designed to address specific forms of bias dependent on study design
Critical Research Paradigm
a qualitative research paradigm that aims to not only describe and understand, but also asks what is happening and explores change and emancipation. Action research, Feminist research and Discourse Analysis are methodologies associated with this paradigm
Dichotomous
data that can be divided into discrete categories, such as male/female or yes/no
Direct costs
represent the value of all goods, services, and other resources that are consumed in the provision of an intervention or in dealing with the side effects or other current and future consequences linked to it
Appendices
Appendix I – Title registration form ... 144
Appendix II – Critical appraisal tool for Systematic reviews ... 145
Appendix III – Data extraction tools for Systematic reviews ... 146
Appendix IV – QARI critical appraisal tools ... 147
Appendix V – QARI data extraction tools ... 148
Appendix V – QARI data extraction tools – Findings ... 149
Appendix VI – JBI Levels of evidence ... 150
Appendix VII – MAStARI critical appraisal tools: Randomised Control / Pseudo-randomised Trial ... 151
Appendix VII – MAStARI critical appraisal tools: Comparable Cohort / Case Control Studies ... 152
Appendix VII – MAStARI critical appraisal tools: Descriptive / Case Series Studies ... 153
Appendix VIII – Discussion of MAStARI critical appraisal checklist items ... 154
Appendix IX – MAStARI data extraction tools: Extraction details ... 161
Appendix IX – MAStARI data extraction tools: Dichotomous Data – Continuous Data ... 162
Appendix X – ACTUARI critical appraisal tools ... 163
Appendix XI – Discussion of ACTUARI critical appraisal checklist items ... 164
Appendix XII – ACTUARI data extraction tools ... 167
Appendix XII – ACTUARI data extraction tools: Clinical effectiveness and economic results ... 168
Appendix XIII – NOTARI critical appraisal tools ... 169
Appendix XIV – NOTARI data extraction tools (Conclusions) ... 170
Appendix XV – Some notes on searching for evidence ... 171
As discussed in the section on protocol development, it is JBI policy that all study types must be critically appraised using the critical appraisal instruments for specific study designs incorporated into the analytical modules of the SUMARI software. The primary and secondary reviewer should discuss each item of appraisal for each study design included in their review.
In particular, discussions should focus on what is considered acceptable to the needs of the
review in terms of the specific study characteristics such as randomisation or blinding in RCTs.
The reviewers should be clear on what constitutes acceptable levels of information to allocate
a positive appraisal compared with a negative, or response of “unclear”. This discussion should
take place before independently conducting the appraisal.
1. Was the assignment to treatment groups truly random?
Whichever approach to randomisation was used, it should be described in sufficient detail to enable reviewers to determine whether the method was adequate to minimise selection bias. Authors of primary studies face competing interests in describing their methods: the need to be descriptive at times conflicts with the need to fit within word limits. However, brevity in the methods often leaves reviewers unable to determine the actual method of randomisation. Generalist phrases such as “random”, “random allocation” or “randomisation” do not provide sufficient detail for a reviewer to conclude that randomisation was “truly random”, and it is then up to the reviewer to determine how to rank such papers. This should be raised in initial discussion between the primary and secondary reviewers before they commence their independent critical appraisal.
2. Were participants blinded to treatment allocation?
Blinding of participants is considered optimal because patients who know which arm of a study they have been allocated to may inadvertently influence the study, whether by developing anxiety or, conversely, by being overly optimistic and attempting to “please” the researchers. This can mean under- or over-reporting of outcomes such as pain or analgesic usage; lack of blinding may also increase loss to follow-up, depending on the nature of the intervention being investigated.
3. Was allocation to treatment groups concealed from the allocator?
Allocation is the process by which individuals (or groups, if stratified allocation was used) are entered into one of the study arms following randomisation. The Cochrane Systematic Review handbook states: “When assessing a potential participant’s eligibility for a trial, those who are recruiting participants … should remain unaware of the next assignment in the sequence until after the decision about eligibility has been made. Then, after assignment has been revealed, they should not be able to alter the assignment or the decision about eligibility. The ideal is for the process to be impervious to any influence by the individuals making the allocation.”2
Concealment of group allocation from the allocator is intended to reduce the risk of selection bias. Selection bias is a risk where the allocator may influence the specific treatment arm to which an individual is allocated; optimally, therefore, trials will report that the allocator was unaware of which group any study participant was randomised to, and had no subsequent influence on any changes in allocation.
4. Were the outcomes of people who withdrew described and included in the analysis?
Commonly, intention-to-treat (ITT) analysis is utilised, in which losses to follow-up are included in the analysis. ITT analysis may reduce bias due to changes in the characteristics between control and treatment groups that can occur if people drop out, or if there is a significant level of mortality in one particular group. The Cochrane Systematic Review handbook identifies two related criteria for ITT analysis, although it is equally clear that how these criteria are applied remains an issue of debate:
−− Trial participants should be analysed in the groups to which they were randomised
regardless of which (or how much) treatment they actually received, and regardless of
other protocol irregularities, such as ineligibility
−− All participants should be included regardless of whether their outcomes were actually
collected. 2
Descriptive/Case-series
1. Was the study based on a random or pseudo-random sample?
Recruitment is the calling or advertising strategy for gaining interest in the study, and is not the
same as sampling. Studies may report random sampling from a population, and the methods
section should report how sampling was performed.
Extraction details
Dichotomous Data
Continuous Data
3. Are all important and relevant costs and outcomes for each alternative identified?
Questions that will assist you in addressing this criterion 55:
−− Was the range wide enough for the research question at hand?
−− Did it cover all relevant viewpoints (e.g., those of the community or society, patients and
third-party payers)?
−− Were capital costs as well as operating costs included?
−− Was justification provided for the ranges of values (for key parameters) used in the
sensitivity analysis?
−− Were the study results sensitive to changes in the values (within the assumed range)?
11. Are the results generalisable to the setting of interest in the review?
−− Factors limiting the transferability of economic data are: demographic factors;
epidemiology of the disease; availability of health care resources; variations in clinical
practice; incentives to health care professionals; incentives to institutions; relative prices;
relative costs; population values.
Appendix XIV – NOTARI data extraction tools (Conclusions)
There is insufficient evidence to suggest that searching a particular number of databases, or even particular databases, will identify all of the evidence on a given topic; JBI therefore recommends that a search should be as broad and as inclusive as possible. The following section offers some suggestions for search terms and databases that may be helpful in constructing a search strategy.
Search filters are pre-tested strategies that identify articles based on criteria such as specified words in the title, abstract and keywords. They can be used to restrict the number of articles retrieved from the vast amounts of literature indexed in the major medical databases. Search filters have strengths and weaknesses:
(i) Strengths: they are easy to implement and can be pre-stored or developed as an interface
(ii) Limitations: they are database-specific, platform-specific and time-specific; not all are empirically tested and therefore reproducible; and they assume that articles are appropriately indexed by authors and databases.
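As an illustration only, the keyword-matching logic that a simple title/abstract filter applies can be sketched in a few lines of Python. The records and filter terms below are invented for the example; real filters are validated, database-specific strategies, not ad hoc keyword lists.

```python
# Sketch of how a search filter matches predefined words against titles
# and abstracts. The records and filter terms are invented for the
# example; real filters are database- and platform-specific.

FILTER_TERMS = {"randomized", "randomised", "placebo", "trial"}

def matches_filter(record):
    """Return True if any filter term appears in the title or abstract."""
    text = (record["title"] + " " + record["abstract"]).lower()
    return any(term in text for term in FILTER_TERMS)

records = [
    {"title": "A randomised trial of X", "abstract": "Placebo-controlled."},
    {"title": "A narrative history of Y", "abstract": "No methods reported."},
]

# Only the first record contains any of the filter terms.
selected = [r for r in records if matches_filter(r)]
```

The sketch also makes the stated limitation concrete: a record whose authors never used any of the predefined words is silently missed, however relevant it may be.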
MEDLINE (Medical Literature Analysis and Retrieval System Online) is the U.S. National Library of
Medicine’s main bibliographic database with references to journal articles in biomedicine and the
life sciences. This is the main component of PubMed, which provides access to MEDLINE and
some other resources, including articles published in MEDLINE journals which are beyond the
scope of MEDLINE, such as general chemistry articles. Approximately 5,200 journals published
in the United States and more than 80 other countries have been selected and are currently
indexed for MEDLINE. A distinctive feature of MEDLINE is that the records are indexed with
NLM’s controlled vocabulary, the Medical Subject Headings (MeSH®).
In addition to MEDLINE citations, PubMed also contains:
−− In-process citations which provide a record for an article before it is indexed with MeSH
and added to MEDLINE or converted to out-of-scope status.
−− Citations that precede the date that a journal was selected for MEDLINE indexing (when
supplied electronically by the publisher).
−− Some OLDMEDLINE citations that have not yet been updated with current vocabulary
and converted to MEDLINE status.
−− Citations to articles that are out-of-scope (e.g., covering plate tectonics or astrophysics)
from certain MEDLINE journals, primarily general science and general chemistry journals,
for which the life sciences articles are indexed with MeSH for MEDLINE.
−− Some life science journals that submit full text to PubMed Central® and may not yet have
been recommended for inclusion in MEDLINE although they have undergone a review
by NLM, and some physics journals that were part of a prototype PubMed in the early to
mid-1990s.
−− Citations to author manuscripts of articles published by NIH-funded researchers.
One of the ways users can limit their retrieval to MEDLINE citations in PubMed is by selecting
MEDLINE from the Subsets menu on the Limits screen.
NLM distributes all but approximately 2% of all citations in PubMed to those who formally lease
MEDLINE from NLM.
MEDLINE® is the U.S. National Library of Medicine’s® (NLM) premier bibliographic database
that contains approximately 18 million references to journal articles in life sciences with a
concentration on biomedicine.
Ovid is a subscription search system that includes MEDLINE as well as a number of other databases. PubMed is provided free of charge by the National Library of Medicine and includes MEDLINE, as well as Pre-MEDLINE and selected online publications provided directly by publishers.
The US Interagency on Gray Literature Working Group (1995) defined grey literature (or ‘greylit’
as it is sometimes referred to in the information management business) as: “foreign or domestic
open source material that usually is available through specialised channels and may not enter
normal channels or system of publication, distribution, bibliographical control or acquisition by
booksellers or subscription agents”. 61
Furthermore, grey literature has been defined as:
That which is produced on all levels of government, academics, business and industry in print
and electronic formats, but which is not controlled by commercial publishers moves the field
of grey literature beyond established borders into new frontiers, where lines of demarcation
between conventional/non-conventional and published/unpublished literature cease to
obstruct further development and expansion. At the same time, this new definition challenges
commercial publishers to rethink their position on grey literature. 4
When building a search strategy for grey literature, it is important to select terms specifically for
each source. In using mainstream databases, or Google-type searches (including GoogleScholar),
it is best to draw from a list of keywords and variations developed prior to starting the search. To
be consistent and systematic throughout the process, using the same keywords and strategy
is recommended. It is important to create a strategy, compile a list of keywords, wildcard
combinations and identify organisations that produce grey literature. If controlled vocabularies
are used, record the index terms, qualifiers, keywords, truncation, and wildcards.
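The steps above, a keyword list, wildcard/truncation variants, and a consistent combination strategy, can be sketched as follows. The keywords and the `*` truncation symbol are illustrative assumptions: truncation and wildcard syntax vary by database, so check each source's conventions before reusing a strategy.

```python
# Sketch of assembling a reusable Boolean search string from a keyword
# list plus recorded truncation variants. The keywords are invented for
# the example, and the "*" truncation symbol is an assumption: databases
# differ in their wildcard syntax.

keywords = ["acupuncture", "detoxification", "substance abuse"]
truncated = ["acupunct*", "detox*"]  # wildcard variants recorded alongside

def build_query(terms):
    """OR the terms together, quoting any multi-word phrase."""
    parts = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(parts) + ")"

query = build_query(keywords + truncated)
print(query)
# (acupuncture OR detoxification OR "substance abuse" OR acupunct* OR detox*)
```

Keeping the list in one place, as here, is what makes it possible to apply the same keywords and strategy consistently across every source searched.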
Searching the medical grey literature can be time-consuming because there is no ‘one-stop shopping’ database or search engine that indexes materials in the way that, for example, CINAHL does for nursing and allied health or MEDLINE does for the biomedical sciences. The Mednar database indexes qualitative grey literature articles and may be useful:
http://mednar.com/mednar/
as may the Qualitative times website:
http://www.qualitativeresearch.uga.edu/QualPage/
It should be remembered that your access to bibliographic databases may depend on the subscriptions taken by your library service, and the search interface may also vary depending on the database vendor (for example Ovid, EBSCO, ProQuest, etc.) or whether you access MEDLINE via the free PubMed interface.
The following search engines are very useful for finding health-based scientific literature:
www.scirus.com
www.metacrawler.com
www.disref.com.au/
www.hon.ch/Medhunt/Medhunt.html
http://sumsearch.uthscsa.edu/cgi-bin/SUMSearch.exe/
www.intute.ac.uk/healthandlifesciences/omnilost.html
www.mdchoice.com/index.asp
www.science.gov/
http://www.eHealthcareBot.com/
http://medworld.stanford.edu/medbot/
http://omnimedicalsearch.com/
http://www.ingentaconnect.com/
http://www.medical-zone.com/
Scirus (www.scirus.com), for example, is a science-specific search engine with access to over
410 million science-related web pages (as of February 2011), and it indexes sites that other search
engines do not. Its medical sites include ArXiv.org, Biomed Central, Cogprints, DiVa, LexisNexis,
and PsyDok. PsyDok is a disciplinary Open Access repository for psychological documents.
PsyDok is operated by Saarland University and State Library (SULB), which also hosts the special
subject collection psychology and the virtual library psychology. PsyDok is a free, full-text e-print
archive of published, peer-reviewed journal post-prints plus pre-publications, reports, manuals,
grey literature, books, journals, proceedings, dissertations and similar document types.
Joanna Briggs Institute 177
Reviewers’ Manual 2011
Search the World Wide Web for higher-level (usually government-affiliated) funding bodies, for instance Australia’s NHMRC (National Health and Medical Research Council) or MSAC (Medical Services Advisory Committee), for pointers to reports such as clinical trials or reviews from funded research programmes.
Be aware that there are health information gateways or portals on the Internet containing links
to well organised websites containing primary research documents, clinical guidelines, other
sources and further links. For example:
World Health Organisation,
http://www.who.int/library/
National Institute on Alcohol Abuse and Alcoholism,
http://www.niaaa.nih.gov/
Canadian Health Network,
http://www.canadian-health-network.ca/customtools/homee.html
Health Insite,
http://www.healthinsite.gov.au/
MedlinePlus,
http://www.nlm.nih.gov/medlineplus
National Guidelines Clearinghouse,
http://www.guideline.gov/index.asp
National Electronic Library for Health (UK),
http://www.nelh.nhs.uk/
Partners in Information Access for the Public Health Workforce,
http://phpartners.org/guide.html
Clinical guidelines sites
Identify universities, colleges, institutes and collaborative research centres (CRCs), nationally and internationally, that have profiles or even specialisations in your area of interest, and check their library websites – they should provide a range of relevant resources and web links already listed. For example, theses or dissertations are generally included on universities’ library pages because these have to be catalogued by library technicians according to subject heading, author, title, etc. University library pages will also have links to other universities’ theses collections, for example:
Dissertation Abstracts
Theses Canada Portal
Networked Digital Library of Theses and Dissertations (NDLTD)
Index to Theses
Search academic libraries’ Online Public Access Catalogues (OPACs), which are excellent sources of grey literature: these catalogues provide access to local and regional materials, are sources for bibliographic verification, and index dissertations and government and technical reports, particularly if the authors are affiliated with the parent organisation or agency as scholars or researchers.
Authors, if in academic positions, sometimes have their own web pages. Find home pages for
specific researchers - either by navigating through their institution’s home page or by Internet.
Contact others working in the same/similar area to see if they already have reference lists they
are prepared to share or names of others working in the same/related fields, for example contact
authors of Cochrane protocols that are not yet completed. This is especially useful for clinicians
because they know who works in their specific area of interest.
Identify any conference series in the area of interest. You will find these in academic or national
libraries due to the legal deposit rule.
Many national libraries collect grey literature created in their countries under legal deposit
requirements. Their catalogues are usually available on the Internet. Some also contain holdings
of other libraries of that country, as in the Australian National Library’s Libraries Australia: http://librariesaustralia.nla.gov.au/apps/kss
If you want to conduct an international search, be aware of the existence of WorldCat, a service which aims to link the catalogues of all major libraries under one umbrella: http://www.worldcat.org/
The media often reports recent medical or clinical trials so check newspaper sites on the Internet.
Take note (if you can) of who conducted the trial, where, when, the methodology used, and
nature of experimental group or groups so you can locate the original source.
Set up ‘auto alerts’ if possible on key databases so that you can learn about new relevant
material as it becomes available.
Join a relevant web discussion group/list and post questions and areas of interest; your contacts
may identify leads for you to follow.
Grey literature is increasingly referenced in journal articles, so reference lists should be checked
via hand-searching. Hand searching is recommended for systematic reviews because of the
hazards associated with missed studies. Hand searching is also a method of finding recent
publications not yet indexed by or cited by other researchers.
For our topic, here are a few organisations that are relevant to your search:
ETOH - Alcohol and Alcohol Problems Science Database, referred to as ETOH,
http://etoh.niaaa.nih.gov/Databases.htm
National Institute on Alcohol Abuse and Alcoholism (NIAAA), http://www.niaaa.nih.gov/
National Institute on Drug Abuse (NIDA), http://www.nida.nih.gov/
Canadian Centre on Substance Abuse (CCSA),
http://www.ccsa.ca/CCSA/EN/TopNav/Home/
National Center for Complementary and Alternative Medicine (NCCAM),
http://nccam.nih.gov/health/acupuncture/
National Acupuncture Detoxification Association (NADA), http://www.acudetox.com
Step Three – Finding and Searching Specialised Databases for Grey Literature
Contacting relevant organisations noted in your mainstream database search is a good way
to assess what resources exist in the form of special databases, library catalogues, etc. Some
websites have resources providing a ‘jumping-off’ point for your search deeper into the World
Wide Web. Finding the web sites in Step Two and ‘digging deeper’ into them will enable you
to discover the documents they have, and their links to more precise sites with databases that
specialise in acupuncture issues. Examples of these are as follows:
HTA Database, http://144.32.150.197/scripts/WEBC.EXE/NHSCRD/start
The Traditional Chinese Drug Database (TCDBASE), http://www.cintcm.com/index.htm
Drug Database (Alcohol and other Drugs Council of Australia),
http://203.48.73.10/liberty3/gateway/gateway.exe?application=Liberty3&displayform=opac/main
Canadian Centre for Substance Abuse,
http://www.ccsa.ca/CCSA/EN/Addiction_Databases/LibraryCollectionForm.htm
Combined Health Information Database (CHID), http://chid.nih.gov/search/
Grey literature differs from other published literature in that it is:
−− not formally part of ‘traditional publishing models’ – producers, to name a few, include research groups, non-profit organisations, universities and government departments
−− in many cases high-quality research still waiting to be published and/or indexed
−− not widely disseminated, but nonetheless important in that an infrastructure does exist to disseminate this material and make it visible.
Some organisations create their own reports, studies of trials, guidelines, etc. Specialised strategies are still needed to facilitate identification and retrieval.
Librarians try to adopt pro-active approaches to finding this material, though web-based
searching, self-archiving and open access are helping to facilitate access. If you have access to
a library service, your librarian should be able to assist you in your quest for uncovering the grey
literature in your area of interest.
Intute is a free online service providing access to the very best web resources for education and
research. All material is evaluated and selected by a network of subject specialists to create the
Intute database.
http://www.intute.ac.uk/
This database includes resources pre-vetted by subject specialists in the areas of health, science, technology, social sciences, and arts/humanities. Intute’s search options are flexible: browse by MeSH or by keywords. Because resources are selected and evaluated in advance, much irrelevant material has already been filtered out.
With millions of resources available on the Internet, it is difficult to find relevant and appropriate
material even if you have good search skills and use advanced search engines.
Issues of trust, quality, and search skills are very real and significant concerns - particularly in
a learning context. Academics, teachers, students and researchers are faced with a complex
environment, with different routes into numerous different resources, different user interfaces,
search mechanisms and authentication processes.
The Intute database makes it possible to discover the best and most relevant resources in one
easily accessible place. You can explore and discover trusted information, assured that it has
been evaluated by specialists for its quality and relevance.
http://mednar.com/mednar/
Mednar is a federated (non-indexing) search engine designed for professional medical researchers to quickly access information from a multitude of credible sources. Researchers can take advantage of Mednar’s many tools to narrow searches and drill down into topics; it de-duplicates, ranks and clusters results, and can help you discover new information sources. Instead of crawling and indexing static content like Google or many meta-search engines, Mednar searches multiple databases in real time, querying select, high-quality databases simultaneously. It utilises the native search tools available at each of the 47 related sites/databases; if you follow the search links, you will find a search box at each of the sources.
http://worldwidescience.org/index.html
Another Deep Web search mechanism, WorldWideScience.org is a global science gateway
connecting you to national and international scientific databases and portals. WorldWideScience.
org accelerates scientific discovery and progress by providing one-stop searching of global science
sources. The WorldWideScience Alliance, a multilateral partnership, consists of participating
member countries and provides the governance structure for WorldWideScience.org.
It is very good for a global perspective, including OpenSIGLE and Chinese, Indian, African, Korean and other sources; the database interface has only been in existence since June 2007.
Thesis/Dissertations
ProQuest Dissertations & Theses Database (PQDT)
With more than 2.3 million entries, the ProQuest Dissertations & Theses (PQDT) database is
the most comprehensive collection of dissertations and theses in the world. Graduate students
customarily consult the database to make sure their proposed thesis or dissertation topics have
not already been written about. Students, faculty, and other researchers search it for titles related
to their scholarly interests. Of the millions of graduate works listed, over 1.9 million are available in full-text format. PQDT is a subscription database, so consult your library for availability.
Dissertation Abstracts Online (DIALOG) is a definitive subject, title, and author guide to virtually
every American dissertation accepted at an accredited institution since 1861. Selected Masters
theses have been included since 1962. In addition, since 1988, the database includes citations
for dissertations from 50 British universities that have been collected by and filmed at The British
Document Supply Centre. Beginning with DAIC Volume 49, Number 2 (Spring 1988), citations
and abstracts from Section C, Worldwide Dissertations (formerly European Dissertations), have
been included in the file.
Abstracts are included for doctoral records from July 1980 (Dissertation Abstracts International,
Volume 41, Number 1) to the present. Abstracts are included for masters theses from Spring
1988 (Masters Abstracts, Volume 26, Number 1) to the present.
Individual, degree-granting institutions submit copies of dissertations and theses completed
to University Microfilms International (UMI). Citations for these dissertations are included in the
database and in University Microfilms International print publications: Dissertation Abstracts
International (DAI), American Doctoral Dissertations (ADD), Comprehensive Dissertation Index
(CDI), and Masters Abstracts International (MAI). A list of cooperating institutions can be found
in the preface to any volume of Comprehensive Dissertation Index, Dissertation Abstracts
International, or Masters Abstracts International.
Qualitative Databases
British Nursing Index: From the partnership of Bournemouth University, Poole Hospital NHS Trust, Salisbury Hospital NHS Trust and the Royal College of Nursing comes the most extensive and up-to-date UK nursing and midwifery index. The database provides references to journal articles from all the major British nursing and midwifery titles and other English language titles, with unrivalled currency. BNI is an essential resource for nurses, midwives, health visitors and community staff.
Academic Search™ Premier (Ebscohost) Academic Search™ Premier contains indexing
and abstracts for more than 8,300 journals, with full text for more than 4,500 of those titles. PDF
backfiles to 1975 or further are available for well over one hundred journals, and searchable cited
references are provided for more than 1,000 titles. The database contains unmatched full text
coverage in biology, chemistry, engineering, physics, psychology, religion & theology, etc.
HealthSource®:Nursing/Academic Edition (Ebscohost) This resource provides nearly 550
scholarly full text journals focusing on many medical disciplines. Coverage of nursing and allied
health is particularly strong, including full text from Creative Nursing, Issues in Comprehensive
Pediatric Nursing, Issues in Mental Health Nursing, Journal of Advanced Nursing, Journal of
Child & Adolescent Psychiatric Nursing, Journal of Clinical Nursing, Journal of Community Health
Nursing, Journal of Nursing Management, Nursing Ethics, Nursing Forum, Nursing Inquiry, and
many more.
In addition, this database includes the Lexi-PAL Drug Guide which covers 1,300 generic drug
patient education sheets with more than 4,700 brand names.
Sociological Abstracts (formerly SocioFile; ProQuest CSA) abstracts and indexes the international literature in sociology and related disciplines in the social and behavioural sciences. The database provides abstracts of journal articles and citations to book reviews drawn from more than 1,800 serial publications, and also provides abstracts of books, book chapters, dissertations, and conference papers. Records published by Sociological Abstracts in print during the database’s first 11 years, 1952-1962, have been added to the database as of November 2005, extending the depth of the backfile of this authoritative resource.
Many records from key journals in sociology, added to the database since 2002, also include
the references cited in the bibliography of the source article. Each individual reference may also
have links to an abstract and/or to other papers that cite that reference; these links increase the
possibility of finding more potentially relevant articles. These references are linked both within
Sociological Abstracts and across other social science databases available on CSA Illumina.
Academic OneFile (Gale) is the premier source for peer-reviewed, full-text
articles from the world’s leading journals and reference sources. With extensive coverage of
the physical sciences, technology, medicine, social sciences, the arts, theology, literature and
other subjects, Academic OneFile is both authoritative and comprehensive. With millions of
articles available in both PDF and HTML full-text with no restrictions, researchers are able to find
accurate information quickly.
In addition to all of the traditional services available through InfoTrac, Gale is proud to announce
a number of new services offered through collaboration with Scientific/ISI. Mutual subscribers
of Academic OneFile and Scientific’s Web of Science® and Journal Citation Reports® will be
provided seamless access to cited references, digital object identifier (DOI) links, and additional
article-level metadata, as well as access to current and historical information on a selected
journal’s impact factor. Further, Scientific customers will be able to access the full-text of an article
right from their InfoTrac subscription. This close collaboration will allow for fully integrated and
seamless access to the best in academic, full-text content and the indexing around it. Academic
OneFile also includes a linking arrangement with JSTOR for archival access to a number of
periodicals, as well as full OpenURL compliance for e-journal and subscription access.
Scopus
Scopus is the largest abstract and citation database of research literature and quality web
sources. It’s designed to find the information scientists need. Quick, easy and comprehensive,
Scopus provides superior support of the literature research process. Updated daily, Scopus
offers:
Over 16,000 peer-reviewed journals from more than 4,000 publishers
−− over 1200 Open Access journals
−− 520 conference proceedings
−− 650 trade publications
−− 315 book series
36 million records
Results from 431 million scientific web pages
23 million patent records from 5 patent offices
“Articles-in-Press” from over 3,000 journals
Seamless links to full-text articles and other library resources
Innovative tools that give an at-a-glance overview of search results and refine them to the
most relevant hits
Alerts to keep you up-to-date on new articles matching your search query, or by
favourite author
Scopus is the easiest way to get to relevant content fast. Tools to sort, refine and quickly
identify results help you focus on the outcome of your work. You can spend less time mastering
databases and more time on research.
Grounded Theory - A qualitative method developed by Glaser and Strauss to unite theory
construction and data analysis.
Multimethod Studies - Studies which combine quantitative and qualitative methods.
Structured Categories - A method where qualitative behaviours and events occurring within the
observational setting are arranged systematically or quantitatively.
Transferability - Potential to extend the findings of a qualitative research study to comparable
social situations after evaluation of similarities and differences between the comparison and
study group(s).
Unstructured Categories or Variable - A qualitative or quantitative entity within the population
under study that can vary or take on different values and can be classified into two or more
categories.
Phenomenology - Method of study to discover and understand the meaning of human life
experiences.
Reviewers may use the following methodological index terms (but NOT limit themselves to these) as either subject headings or text words (or a combination of both) that appear in citations’ titles or abstracts. Use advanced, basic, exact-phrase or field-restriction (e.g. publication or theory/research type) search strategies according to the database.
−− ethnographic research
−− phenomenological research
−− ethnonursing research or ethno-nursing research
−− purposive sample
−− observational method
−− content analysis or thematic analysis
−− constant comparative method
−− mixed methods
−− author citations, e.g. Glaser & Strauss; Denkin & Lincoln; Heidegger, Husserl, etc.
−− perceptions or attitudes or user views or viewpoint or perspective
−− ethnographic or micro-ethnographic or mini-ethnographic
−− field studies
−− hermeneutics
−− theoretical sample
−− discourse analysis
−− focus groups/
−− ethnography or ethnological research
−− psychology
−− focus group or focus groups
−− descriptions
−− themes
−− emotions or opinions or attitudes
−− scenarios or contexts
−− hermeneutic or hermeneutics
Cochrane Library
The search interface for this collection permits the user to search all 8 databases individually or all together using a single strategy. CENTRAL – The Cochrane Central Register of Controlled Trials (Clinical Trials) – filters controlled clinical trials from the major healthcare databases (MEDLINE, EMBASE, CRD, etc.) and other sources (including unpublished reports). Most of the studies are RCTs, making CENTRAL an excellent starting point for evidence of effectiveness in the absence of a systematic review.
Search terms for CENTRAL:
−− clinical trial [pt]
−− randomized [tiab]*
−− placebo [tiab]
−− dt [sh]*
−− randomly [tiab]
−− trial [tiab]
−− groups [tiab]
−− animals [mh]
−− humans [mh]
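As a minimal sketch, the terms above can be assembled into a single PubMed-style filter string. The combination shown (ORing the trial terms, then excluding animal-only records) follows the widely used pattern for sensitivity-maximising RCT filters; verify the exact form against the current Cochrane Handbook before relying on it.

```python
# Assemble a PubMed-style RCT filter from the CENTRAL terms above.
trial_terms = [
    "clinical trial [pt]",
    "randomized [tiab]",
    "placebo [tiab]",
    "dt [sh]",
    "randomly [tiab]",
    "trial [tiab]",
    "groups [tiab]",
]

def build_filter(terms):
    """OR the trial terms together, then exclude records indexed as
    animal studies that are not also indexed as human studies."""
    or_block = " OR ".join(terms)
    return f"({or_block}) NOT (animals [mh] NOT humans [mh])"

query = build_filter(trial_terms)
print(query)
```

The resulting string can be pasted directly into a database search box that accepts this field-tag syntax.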
Joanna Briggs Institute
Reviewers’ Manual 2011
CINAHL (Ebsco)
There is no specific limiter for Randomised Controlled Trials in CINAHL. The best search strategy
is to search for your topic by using the CINAHL Headings Clinical Trial and Clinical Trial Registry
(see their scope notes). Clinical Trial, which is used for experimental trial/trials, explodes to the
following list of subheadings:
−− Double-Blind Studies
−− Intervention Trials
−− Preventive Trials
−− Single-Blind Studies
−− Therapeutic Trials
PsycINFO (Ovid)
As with CINAHL, there is no specific heading for Randomised Controlled Trials in the PsycINFO
thesaurus. The closest Subject Heading is Clinical Trials, used since 2004; the scope note reads:
“Systematic, planned studies to evaluate the safety and efficacy of drugs, devices, or diagnostic
or therapeutic practices. Used only when the methodology is the focus of discussion”. PsycINFO
picks up both English (U.K.) and U.S. spellings; terms can be searched without any limits and
then combined into sets.
TRIP database
Search – as Phrase (within single quotation marks)
– ‘randomised controlled trial’
– rct
– rct*
‘clinical trial’ – consider this term as well, because it frequently appears in document titles
together with ‘randomised controlled trial’ or ‘RCT’.
EMBASE (Ovid)
As with CINAHL and PsycINFO, there is no specific heading for Randomised Controlled Trials in
EMBASE. The best heading to use is Clinical Study (14,540 citations), which can be narrowed by
selecting ‘More Fields’ (for example, title as ‘ti:’), and/or ‘Limits’ and/or ‘More Limits’ as required,
very similar to MEDLINE and PsycINFO via Ovid. Clinical Study is used for clinical data and medical
trials. Associated subheadings that may contain RCT data are the following:
−− Case report
−− Case study
−− Hospital based case control study
−− Case control study
−− Intervention study
−− Major clinical study
Boolean Searching
Combine any of the terms with Boolean OR, for example “predict.tw OR guide.tw”, as a
Boolean AND strategy invariably compromises sensitivity. Alternatively, combining selected
terms above with the researcher’s own text words (e.g. ‘diabetes’) using Boolean AND, or
restricting to journal subsets, may achieve high sensitivity or specificity while reducing the
volume of literature retrieved.
Text word searching - Where indexing terms do not contribute to an optimised search strategy,
typing in text words relevant to RCTs and clinical trials is best. Precision may be improved by
applying the AND/AND NOT Boolean operators, or by adding clinical content terms or journal
subsets with Boolean AND.
Search terms
−− exp randomized controlled trial/
−− (random$ or placebo$).ti,ab,sh.
−− ((singl$ or double$ or triple$ or treble$) and (blind$ or mask$)).tw,sh
−− controlled clinical trial$.tw,sh
−− (human$ not animal$).sh,hw.
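The Ovid-style truncation symbol ($) in the terms above matches any word beginning with the given stem. A rough sketch of how such truncation could be emulated locally with a regular expression, for example when screening a downloaded set of titles (this is an illustration only, not a replacement for the database’s own syntax):

```python
import re

def truncation_to_regex(term):
    """Convert an Ovid-style truncated term such as 'random$' into a
    regex matching any word beginning with that stem."""
    return re.compile(r"\b" + re.escape(term.rstrip("$")) + r"\w*",
                      re.IGNORECASE)

pattern = truncation_to_regex("random$")
titles = [
    "A randomized trial of early mobilisation",
    "Randomly allocated cohorts in practice",
    "A narrative review of qualitative findings",
]
# Keep only titles containing a word with the stem 'random'.
hits = [t for t in titles if pattern.search(t)]
print(hits)  # matches the first two titles only
```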
Bandolier
Oxford-based Bandolier finds information about evidence of effectiveness from PubMed,
Cochrane Library and other web-based sources each month concerning: systematic reviews,
meta-analyses, randomised trials, and high quality observational studies. Large epidemiological
studies may be included if they shed important light on a topic. Use the ‘Advanced Search’
capability to find RCTs.
PsiTri
A free clinical trial-based database, with links to the Cochrane Collaboration, on treatments and
interventions for a wide range of mental health-related conditions. The trial data, which are
extracted from the references reporting on a specific trial, include information regarding: health
condition, interventions/treatment, participants, research methods, blinding, and outcomes (i.e.
how the effect of the interventions was measured).
Medline
−− Randomized controlled trials/
−− Randomized controlled trial.pt.
−− Random allocation/
−− Double blind method/
−− Single blind method/
−− Clinical trial.pt.
−− Exp clinical trials/
−− Or/1-7
−− (clinic$ adj trial$1).tw.
Embase
−− Clinical trial/
−− Randomized controlled trial/
−− Randomization/
−− Single blind procedure/
−− Double blind procedure/
−− Crossover procedure/
−− Placebo/
−− Randomi?ed controlled trial$.tw.
−− Rct.tw.
−− Random allocation.tw.
−− Randomly allocated.tw.
−− Allocated randomly.tw.
−− (allocated adj2 random).tw.
−− Single blind$.tw.
−− Double blind$.tw.
−− ((treble or triple) adj blind$).tw.
−− Placebo$.tw.
−− Prospective study/
−− Or/1-18
−− Case study/
−− Case report.tw.
−− Abstract report/ or letter/
−− Or/20-22
−− 19 not 23
Cinahl
−− Exp clinical trials/
−− Clinical trial.pt.
−− (clinic$ adj trial$1).tw.
−− ((singl$ or doubl$ or trebl$ or tripl$) adj (blind$3 or mask$3)).tw.
−− Randomi?ed control$ trial$.tw.
−− Random assignment/
−− Random$ allocat$.tw.
−− Placebo$.tw.
−− Placebos/
−− Quantitative studies/
−− Allocat$ random$.tw.
−− Or/1-11
http://www.otseeker.com/
OTseeker is a database that contains abstracts of systematic reviews and randomised controlled
trials relevant to occupational therapy. Trials have been critically appraised and rated to assist you
to evaluate their validity and interpretability. These ratings will help you to judge the quality and
usefulness of trials for informing clinical interventions. In one database, OTseeker provides you
with fast and easy access to trials from a wide range of sources. We are unable to display the
abstract of a trial or systematic review until the journal that it is published in, or the publisher of the
journal, grants us copyright permission to do so. As OTseeker was only launched in 2003, there
are many journals and publishers that we are yet to contact to request copyright permission.
Therefore, the number of trials and systematic reviews for which we are able to display the
abstracts will increase over time as we establish agreements with more journals and publishers.
Subject Coverage
Subject coverage includes:
Hospital Management
Hospital Administration
Marketing
Human Resources
Computer Technology
Facilities Management
Insurance
Econlit (Ebscohost)
EconLit, the American Economic Association’s electronic database, is the world’s foremost
source of references to economic literature. EconLit adheres to the high quality standards long
recognized by subscribers to the Journal of Economic Literature (JEL) and is a reliable source
of citations and abstracts to economic research dating back to 1969. It provides links to full
text articles in all fields of economics, including capital markets, country studies, econometrics,
economic forecasting, environmental economics, government regulations, labor economics,
monetary theory, urban economics and much more.
EconLit uses the JEL classification system and controlled vocabulary of keywords to index six
types of records: journal articles, books, collective volume articles, dissertations, working papers,
and full text book reviews from the Journal of Economic Literature. Examples of publications
indexed in EconLit include: Accounting Review, Advances in Macroeconomics, African Finance
Journal, American Economist, British Journal of Industrial Relations, Business Economics,
Canadian Journal of Development Studies, Harvard Business Review, Journal of Applied
Business Research, Marketing Science, Policy, Small Business Economics, Technology Analysis
and Strategic Management, etc. EconLit records include abstracts of books, journal articles,
and working papers published by the Cambridge University Press. These sources bring the total
records available in the database to more than 1,010,900.
Searchable Fields
The default fields for unqualified keyword searches consist of the following: Title, Author, Book
Author, Reviewer, Editor, Author Affiliation, Publisher Information, Geographic Descriptors,
Festschrift, Named Person, Source Information, Subject Descriptors, Descriptor Classification
Codes, Keywords, Availability Note and the Abstract Summary.
*Note: The EBSCOhost Near Operator (N) used in proximity searching interferes with unqualified
keyword searching on a Descriptor Classification Code beginning with an “N”. In this instance,
use the CC (Descriptor Classification Code) search tag to avoid inconclusive search results.
Example Search: CC N110
BioMedCentral
Opinion and text-based evidence within research articles can be found using the ‘Advanced’
search strategy (with filter options as needed) over any time period. Example keyword searches
are as follows:
‘expert’ [title] and ‘opinion’ [title]
‘expert opinion’ [title – exact phrase]
‘editorial’ [title] and ‘opinion’ [title]
‘opinion’ [title] and ‘evidence’ [title, abstract and text]
‘editorial opinion’ [title – exact phrase]
‘medical’ [title] and ‘experts’ [title]
‘clinical’ [title] and ‘knowledge’ [title]
‘opinion-based’ [title, abstract and text]
‘opinions’ [title]
‘expert opinion’ [title, abstract and text]
‘testimony’ [title, abstract and text]
‘comment’ [title]
‘opinion-based’ [title, abstract and text] and ‘evidence’ [title, abstract and text]
Also use Boolean search strategy for any combination of the above terms.
Cochrane Library
There are several ways to use Cochrane Library to find opinion or expert-related evidence.
(a) MeSH Searching - Cochrane Library has the same MeSH identifiers as MEDLINE and the
CRD databases, so use them to find expert opinion-type evidence in Cochrane.
(b) Exact phrase searching – use double quotation marks around terms in ‘Search’ box [option
to use is Title, Abstract or Keywords].
“opinion-based”
“expert testimony”
“medical expert”
“personal opinion”
“clinical opinion”
“medical opinion”
“editorial comment”
“commentary”
(c) Advanced (Boolean) Searching - central boxes permit you to specify individual search terms
or phrases; right-hand boxes are for selecting the field (author, keywords, all text); left-hand
boxes are for Boolean operators. Example results of Boolean searching with the Title, Abstract
and Text option:
expert AND opinion
opinion AND based AND evidence
opinion-based AND evidence
expert-based AND evidence
expert AND opinion AND evidence
expert AND testimony
editorial AND comment AND evidence
editorial AND opinion AND evidence
editorial AND commentary AND evidence
(d) Searching by Restriction - Use the Restrict Search by Product section to limit the search to
a specific Cochrane Library database or databases.
PubMed
The search strategy for citations involves two kinds of searching: text word and MeSH.
(a) Examples of keyword/phrase searching
Typing in ‘expert opinion’ is a very broad search and will locate a large number of hits, so it
needs to be refined. Use the ‘Limits’ screen to filter according to need, for example: title/abstract;
humans; English language; full text; date range 2001-2011 (‘published in the last 10 years’).
(b) MeSH searching
The relevant Subject Headings are:
i. Expert Testimony – use for: expert opinion; expert opinions; opinion, expert
ii. Comment [Publication Type] - use for commentary, editorial comment, viewpoint
iii. Editorial [Publication Type] – scope note: ‘the opinions, beliefs, and policy of the editor
or publisher of a journal…on matters of medical or scientific significance to the medical
community or society at large’.
In PubMed, Subject Headings can be searched in conjunction with subheadings. For example,
Expert Testimony has the following: ‘economics’, ‘ethics’, ‘history’, ‘legislation and jurisprudence’,
‘methods’, ‘standards’, ‘statistics and numerical data’, ‘trends’, ‘utilisation’.
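As a sketch, a MeSH heading/subheading combination like those above can be assembled into a query URL for NCBI’s documented E-utilities esearch endpoint (the helper function and the particular limits chosen are illustrative assumptions; no request is actually sent here):

```python
from urllib.parse import urlencode

# NCBI's public E-utilities search endpoint for PubMed.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_query_url(term, **limits):
    """Build an esearch URL for a PubMed term with optional limits
    (e.g. a publication-date range)."""
    params = {"db": "pubmed", "term": term}
    params.update(limits)
    return ESEARCH + "?" + urlencode(params)

# MeSH heading 'Expert Testimony' with the 'methods' subheading,
# limited to the 2001-2011 date range discussed in the text.
url = pubmed_query_url('"Expert Testimony/methods"[Mesh]',
                       datetype="pdat", mindate="2001", maxdate="2011")
print(url)
```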
Regardless of the specific review approach adopted (e.g. qualitative or quantitative), the search
strategy needs to be comprehensively reported. Electronic databases are commonly used to
search for papers; many such databases have indexing systems or thesauri that allow users to
construct complex search strategies and save them as text files. The documentation of search
strategies is a key element of the scientific validity of a systematic review: it enables readers to
examine and evaluate the steps taken and decisions made, and to consider the comprehensiveness
and exhaustiveness of the search strategy for each included database.
Managing references
Bibliographic programs such as Endnote can be extremely helpful in keeping track of database
searches and are compatible with CReMS software. Further guidance can be sought from the
SUMARI user guide. A research librarian or information scientist is also an extremely useful
resource when conducting the search.
When conducting a JBI systematic review using CReMS, references can be imported into
CReMS from bibliographic software such as Endnote, either one at a time, or in groups. To
import references in groups, the references need to be exported from the reference manager
software (such as Endnote) as a text file. Endnote contains a series of fields for a range of
publication types.
The current version of CReMS requires that the “journal” category of publication be chosen, and
that every field be complete. Before exporting a text file from Endnote, ensure that the “author/
date” format has been selected.
Once exported, the results can be imported into CReMS; any references not successfully
imported will be listed in a dialogue box. These can then be added manually to CReMS. In
CReMS, studies can be allocated to the different analytical modules; each study can be
allocated to multiple modules. Papers that are not included studies but are used to develop the
background or to support the discussion can be imported or added to CReMS and allocated
the setting “reference”.
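Because CReMS requires the “journal” publication category and every field to be complete, it can help to check an exported reference file before import. A rough sketch follows; the tab-delimited layout and field order here are assumptions for illustration only, so consult the SUMARI user guide for the format CReMS actually expects:

```python
def parse_export(text):
    """Parse a (hypothetical) tab-delimited reference export into
    records, reporting any rows with missing fields so they can be
    completed before import."""
    fields = ["author", "year", "title", "journal"]
    records, incomplete = [], []
    for line_no, line in enumerate(text.strip().splitlines(), start=1):
        values = line.split("\t")
        if len(values) < len(fields) or not all(values):
            incomplete.append(line_no)   # flag for manual completion
        else:
            records.append(dict(zip(fields, values)))
    return records, incomplete

# Second row is missing its title, so it is flagged rather than imported.
sample = "Smith J\t2009\tA trial of X\tBMJ\nJones K\t2010\t\tLancet"
records, incomplete = parse_export(sample)
print(len(records), incomplete)  # 1 [2]
```

Flagged rows correspond to the references CReMS would list in its dialogue box for manual addition.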