A guide to programme evaluation in quality improvement

Evidence and Evaluation for Improvement Team (EEvIT), Improvement Hub


What do we cover in this guide?

We describe an introductory six-step approach to planning and designing programme evaluation in quality improvement, developed for use in the Improvement Hub. This approach outlines the range of evaluation questions that we consider and the methods for answering these in the context of quality improvement.

For more detailed guidance about individual methods and tools we have included links at each step, including those contained within the Rainbow Framework from Better Evaluation.

How do we define programme evaluation?

Programme evaluation allows us to better understand and capture the value of quality improvement work and inform decisions about scale and spread. It involves a process of collecting, analysing and synthesising information to answer priority questions about a programme according to different evaluation criteria (appropriateness, equity, process/implementation, efficiency, impact and sustainability).

A priority evaluation question might focus on impact – to what extent has a programme worked as expected? A broader and deeper question would consider whether and how a programme is the most impactful and equitable thing to do. Answering these different questions requires an understanding of how evaluation is planned and when and how different evaluation methods are best used.¹

Contact: info@ihub.scot | @ihubscot

1. Parry GJ, Carson-Stevens A, Luff DF, McPherson ME, Goldmann DA. Recommendations for evaluation of health care improvement initiatives. Acad Pediatr. 2013;13(6 Suppl):S23-S30.
Six-step approach

Step 1: Planning – How to decide the type and timing of evaluation, who to involve and what will be required.

Step 2: Framing – How to clarify what is being evaluated about a programme (using theory of change) and how the evaluation should be structured.

Step 3: Focusing – How priority questions are set that the evaluation will be designed to answer.

Step 4: Designing – How the methods of collecting and analysing information are developed.

Step 5: Collecting and analysing – How information is collected and interpreted.

Step 6: Synthesising and reporting – How the evaluation questions are answered and the findings shared.
Step 1: Planning: deciding when best to evaluate, how and who to involve

Key Points
• Consider the 'evaluability' of your programme when starting to plan – how will the programme design and availability of data affect how evaluation is conducted?
• Decide who will be managing the evaluation and when stakeholders will need to be involved
• Alternatives to impact evaluation should be considered when decisions about impact have already been made using other evidence, or where data or resources are inadequate to support this type of evaluation

Planning sets out the purpose and scope of the evaluation and starts to identify what will be required to achieve this. Making an assessment of the evaluability of a programme can support the development of an effective plan. Evaluability relates to whether and how a programme is likely to be evaluated in a way that will be useful and reliable.

There are a number of different aspects of evaluability to consider:
• whether the programme can be described clearly enough, including a theory of change, to provide a basis for evaluation
• whether impact is plausible and measurable as a focus for evaluation
• what resource, expertise and data are likely to be available within the timeframe for evaluation that would influence the feasibility of using a particular evaluation approach

For a programme in the earlier stages of development and implementation, capturing how well the programme is being implemented and how this could be improved on (using real-time/formative methods) would be appropriate. Impact evaluation could then be designed to take place at the end of the programme.

In addition to evaluability, planning should also clarify how the process will be managed and how and when stakeholders will be involved in making decisions about the evaluation (a reference or steering group may be appropriate). A planning checklist can help to identify the key tasks and decisions before evaluation starts.
Step 1: Planning: deciding when best to evaluate, how and
who to involve

What tools and resources can help?

There are a range of resources available from Better


Evaluation about how to plan and manage an
evaluation.
Evaluability assessment checklists can help structure
and clarify the process of planning when and how
evaluation should be carried out.
A step-by-step guide for engaging stakeholders in planning is available from the Robert Wood Johnson Foundation.
Step 2: Framing: what is being evaluated about a programme (using theory)

Key Points
• Think through how change will occur from the activities that are planned; this will provide the focus for evaluating whether this happened (impact) and how (implementation/process)
• Establish detail around your outcomes with regard to what will be meaningful and realistic to evaluate
• Consider how differential outcomes are being understood

Once there is a plan in place, the next step is to develop a framework for the evaluation. This will set out the overall approach or strategy for the evaluation and the programme theory that this will be based on.

The programme theory articulates what is expected to improve over time (short, medium and long term outcomes), across what levels (personal, organisation, system), as a result of successful implementation and delivery. How change will take place, where and for whom should also be differentiated. This is sometimes referred to as a combination of execution theory and change theory, which can be articulated together in a logic model. See more guidance on programme theory here.

Whether the needs of different groups are met for equitable outcomes is an important focus. Whether equity is a primary concern for a QI programme or a more general one, the evaluation should consider how differential needs and outcomes will be assessed.

As well as defining what is known or expected about the programme, it is also important for the evaluation to consider what will be emergent and uncertain. In complex systems change, many different factors – including adoption in different contexts and varying needs across the population – influence success. This requires a broader and less linear understanding of change. Structuring an evaluation to pay attention to the multiple factors influencing success can be supported using existing theoretical frameworks. See more guidance here.
Step 2: Framing: what is being evaluated about a programme
(using theory)

What tools and resources can help?

There are a number of QI tools that can help with


clarifying and developing a programme theory including
those in the QI essentials toolkit.
There are many resources that can help with defining
the theory of change for a programme and what
outcomes are meaningful including the personal
outcomes approach.
Step 3: Focusing: the questions the evaluation will be designed to answer

Key Points
• Establish key evaluation questions based on the criteria considered to be a priority and relevant for the intended use of the evaluation (appropriateness, equity, process, impact, efficiency and sustainability)

The next step is to focus the evaluation in more detail by agreeing the questions that the evaluation will be designed to answer. There are a range of overarching or key questions that could be prioritised across different evaluation criteria (appropriateness, equity, process, impact, efficiency and sustainability).

For an evaluation to be successful, it is important to prioritise these questions based on a shared understanding of the evaluation's use and purpose and what appropriate data is likely to be available or collectable within the evaluation timescale.

In addition to impact, other key question areas to consider include:
• whether the needs of groups being targeted have been met (appropriateness and equity)
• how impact was or was not possible in a particular setting, to inform continuous improvement, scale and spread (process), and what costs have been avoided (efficiency)
• whether there is capacity for impact to be sustained (sustainability)

Example set of evaluation questions
1. To what extent are medium to long term outcomes improving as a result of the programme? (impact)
2. To what extent has capacity and capability been developed that will ensure sustainability of improved outcomes in the medium to long term? (impact and sustainability)
3. To what extent is the programme engaging staff and users in planned activities and how well are these activities working (such as training, coaching etc.)? (process/short term outcomes)
4. What success factors can be identified as explaining how the programme is working in a particular setting? (process)
5. What range of outcomes (intended and unintended) has the programme contributed to in the medium to long term and how is this meeting the needs of different groups? (impact, equity, appropriateness)
6. To what extent have costs been avoided as a result of the programme? (efficiency)
Step 3: Focusing: the questions the evaluation will be
designed to answer

What tools and resources can help?

Better Evaluation describe how key evaluation questions


are developed and outline a range of other resources
that can be used.
Step 4: Designing: developing the methods that will be used to answer the evaluation questions

Key Points
• The evaluation design will depend on available data, the ability to get new data and the level of resource
• There are a number of approaches to evaluation that can inform your design – each with different benefits depending on what you are looking to find out

The evaluation design determines how information will be collected and analysed to answer the evaluation questions. The exact design will depend on any constraints on accessing and analysing existing data or collecting new data. The goal is to gather the most reliable information about the programme, within the time and resource constraints, that will answer the key evaluation questions.

For an improvement programme a mixed design is likely to be used, since there is often a focus on different evaluation criteria at the same time, such as impact and process. As well as being theory-based to at least some extent, one of the following designs or a combination of these can be used:

Simple impact
A simple impact evaluation design focuses on understanding and describing whether there has been improvement or impact by comparing the programme to itself over time, using a before and after design or time series analysis from a baseline (a brief sketch follows after these design descriptions).

Causal impact
A causal impact evaluation design compares what was observed as a result of the programme with an estimate of what would have happened without it, using a control group. For more guidance about causal impact evaluation see the case study based on an evaluation conducted by the EEvIT.

Process and case based
A process evaluation design is used to understand whether a programme was implemented as planned and how the process of change resulted in improvement or not, and in what circumstances. Related to a process design is the use of a case-based approach to describe and/or compare particular instances of change or improvement as part of a programme. This is particularly useful when change is taking place across multiple sites or teams.
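To illustrate what a simple impact design might look like in practice, the sketch below plots a monthly measure against its baseline median and marks when the change was introduced. The file and column names (monthly_admissions.csv, month, admissions) are hypothetical placeholders rather than part of this guide's method; adapt them to your own measurement plan.

    # Minimal sketch of a simple impact (before/after) comparison: a run chart of a
    # monthly measure against its baseline median. File and column names are hypothetical.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("monthly_admissions.csv", parse_dates=["month"])  # columns: month, admissions
    intervention_start = pd.Timestamp("2023-04-01")  # when the change was introduced

    baseline = df[df["month"] < intervention_start]
    baseline_median = baseline["admissions"].median()

    fig, ax = plt.subplots()
    ax.plot(df["month"], df["admissions"], marker="o", label="Monthly admissions")
    ax.axhline(baseline_median, linestyle="--", label=f"Baseline median ({baseline_median:.0f})")
    ax.axvline(intervention_start, color="grey", label="Change introduced")
    ax.set_xlabel("Month")
    ax.set_ylabel("Count")
    ax.legend()
    plt.show()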
Step 4: Designing: developing the methods that will be used
to answer the evaluation questions

What tools and resources can help?

There are a range of resources available from Better


Evaluation on the design of theory-based and impact
evaluations.
Step 5: Collecting and analysing: how information will be retrieved and used

Key Points
• Data collection depends on the design being used, including the frequency of collection and the level of comparison being used (simple vs causal)
• Think about how to align data collection and analysis with how components of the programme are expected to be delivered

The data collection approach will depend on what information is necessary to describe and compare in order to satisfy the requirements of the evaluation design – simple or causal and/or process. Selecting appropriate measures or indicators of what is expected to improve over time (outcomes), and through what process, is the starting point. The following are key principles to consider:
• when to start collecting new data or retrieving existing data, including having a suitable baseline and knowing when specific outcomes are expected to be measurable – formative or real-time evaluation will mean collecting information as early as possible
• the frequency of collection required, such as before and after vs repeated measurement over time (time series), and any sampling approach being used (a simple run-chart sketch follows at the end of this step)
• how qualitative narrative information will be collected to answer how and why questions that will include different perspectives
• whether causal impact is being assessed, which would require comparison with a matched control group

The availability of data is an important consideration for selecting measures/indicators – is there routine data already being collected, such as the number of hospital admissions? A data sharing agreement may be required to ensure access to routinely collected data held elsewhere.

A brief example of what collection and analysis would focus on for a collaborative learning programme:
• in the short term – how well participating staff have been engaged and responded to programme activities, and what immediate benefit this is having
• in the medium term – whether targeted practice is improving through tests of change at different sites
• in the long term – understanding the impact/overall difference, such as organisational capacity and capability and quality of care (proxy measures of quality may be used, such as emergency hospital admissions)
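Where repeated measurement over time is used, simple run chart rules can help decide whether a shift from baseline looks non-random. The sketch below applies one commonly used rule – a run of six or more consecutive points on the same side of the baseline median – to invented values; it is an illustrative sketch only, not data from any programme.

    # Minimal sketch of a common run-chart rule: six or more consecutive points on the
    # same side of the baseline median suggest a non-random shift. Data are illustrative.
    from statistics import median

    baseline = [42, 39, 45, 41, 44, 40, 43, 38]            # measurements before the change
    follow_up = [37, 35, 36, 34, 33, 35, 32, 34, 31, 30]   # repeated measurements afterwards

    baseline_median = median(baseline)
    longest_run = run = 0
    previous_side = None
    for value in follow_up:
        if value == baseline_median:        # points on the median break the run
            run, previous_side = 0, None
            continue
        side = "below" if value < baseline_median else "above"
        run = run + 1 if side == previous_side else 1
        previous_side = side
        longest_run = max(longest_run, run)

    print(f"Baseline median: {baseline_median}")
    print(f"Longest run on one side of the median: {longest_run}")
    if longest_run >= 6:
        print("Possible shift - interpret alongside qualitative information.")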
Step 5: Collecting and analysing: how information will be
retrieved and used

What tools and resources can help?

• Measurement tools can be used to prioritise what information will be collected and how, according to best practice, including a data collection checklist.
Step 5: Collecting and analysing: specific approaches to collecting data

Key Points
• Data collection depends on the design being used, including the frequency of collection and the level of comparison being used (simple vs causal)
• Think about how to align data collection and analysis with how the programme is being implemented and when and where outcomes are expected to occur

There are a wide variety of existing tools available to support collection and analysis. Comparing pre-defined criteria on a scale through a questionnaire or survey tool is a common before and after measurement approach (a brief sketch of this kind of comparison follows at the end of this section). Validated tools are recommended, but it may also be important to tailor any tool to the local context.

It is unlikely that the use of predefined criteria in this way will provide the balance of detail and depth required to fully answer key evaluation questions. Narrative qualitative data is critical for understanding differential and holistic perspectives of whether and how impact has occurred.
• Free text responses are a useful way of easily collecting perspectives or views from people, whereas in-depth/semi-structured interviews or focus groups can be used to explore in detail the process and impact of a programme from those delivering and receiving improved services.
• Observation provides a flexible way of assessing a process or situation that is under change by documenting what is seen and heard.
• Videoed or written patient stories (from a selection of patients, not all) can be used to prompt reflection and discussion as part of a formative evaluation approach or for capturing impact from the perspective of patients.
• Reflective information capture as part of a learning log or regular team discussion can be a flexible way of evaluating a programme during implementation. See here for an example.

Using two or more methods to be able to triangulate the data enhances the credibility of the findings and the interpretations that can be made. Case studies can be a useful way of capturing learning and impact across a common unit such as an organisation, service or team.
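As a simple illustration of the before and after survey approach mentioned above, the sketch below compares paired ratings on a 1–5 scale for a hypothetical question (for example, confidence in applying QI methods). The values are invented for illustration and are not drawn from any programme.

    # Minimal sketch of a before-and-after comparison of paired survey ratings (1-5 scale).
    # Values are illustrative only.
    from statistics import mean

    before = [2, 3, 2, 4, 3, 2, 3, 3]   # ratings collected at baseline
    after = [4, 4, 3, 5, 4, 3, 4, 4]    # ratings from the same respondents afterwards

    changes = [a - b for b, a in zip(before, after)]
    print(f"Mean rating before: {mean(before):.2f}")
    print(f"Mean rating after: {mean(after):.2f}")
    print(f"Mean change per respondent: {mean(changes):.2f}")
    print(f"Respondents reporting an increase: {sum(c > 0 for c in changes)} of {len(changes)}")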
Step 5: Collecting and analysing: specific approaches to
collecting data

What tools and resources can help?

Guidance on how to measure patient experience from


the Health Foundation is a useful starting point for
exploring different approaches to capturing narrative
from people receiving services.
Step 6: Synthesising and reporting: bringing information together to answer the evaluation questions

Key Points
• Think about how different sources of data would be best brought together in a valid and clear way to answer the evaluation questions – could this involve comparison of data for different cases, or key measures visualised over time for the programme as a whole?
• Communicating the findings of evaluation can involve reaching different audiences – how could visual summaries and virtual dialogue be used to communicate key messages?

Synthesising and reporting what the evaluation finds should be timed according to how the evaluation is being used. There may be different audiences being reached and different components of the evaluation that will be meaningful to share. Synthesising and reporting could take place at regular intervals during implementation (formative/real-time) or at the end of the programme to inform its continuation and spread (summative).

Synthesising and making final interpretations about impact and learning involves making comparisons of change over time, for different measures or indicators, relevant to different groups and across different local contexts. There are specific considerations in terms of how data would be checked for validity, interpreted and displayed to produce a consolidated account of the evaluation findings.

Qualitative narrative information can tell the story of impact and what happened during the programme to influence this. When compared across different individuals/teams/organisations, this information can be a useful approach for identifying common lessons such as enablers or barriers.

How evaluation findings are shared should be tailored to the intended use of the evaluation (identified as part of the evaluation plan or framework). This could be in the form of a written report bringing together measurement data over time with explanatory qualitative data, but could also include other forms:
• individual case studies and/or patient stories to communicate the findings for a particular setting or team
• visual summaries of findings including lessons learned
• workshops or virtual learning events
Step 6: Synthesising and reporting: bringing information
together to answer the evaluation questions

What tools and resources can help?

Detailed guidance about how to ensure a good standard


of reporting specific to quality improvement can be
found in the Standards for Quality Improvement
Reporting Excellence (SQUIRE).
Evaluation planning list

Start to identify what type of evaluation is appropriate and feasible using evaluability
• Consider the purpose of evaluation and to what extent evaluability should be assessed in order to clarify what approach would be feasible, credible and usable.

Involve stakeholders and end users
• Identify stakeholders and those that have an interest in the evaluation and plan how they will be supported to be involved in the evaluation at different stages. Their engagement supports the evaluation throughout the entire process.

Establish management and decision making processes
• Clarify how decisions will be made – should an evaluation steering or reference group be established?

Clarify who will conduct the evaluation and the resources required
• Skills and expertise of people internal and external to an organisation may be required, and clear roles and responsibilities should be developed.
• Depending on the design and methods, both internal resources (e.g. staff time) and external resources (e.g. participants' time to attend meetings to provide feedback) should be considered.

Document management processes and agreements
• Develop any formal documents needed, such as Terms of Reference.
Developing a framework for evaluation
An evaluation framework is a written document that describes the overall approach or strategy that will structure and guide the evaluation, including how the programme theory that the evaluation will focus on is defined. It includes the scope and purpose of the evaluation being conducted, what the evaluation will focus on, including key evaluation questions that relate to the programme theory, and how these will be answered using specific sources of information and data.

Table 1 summarises the evaluation framework developed for a national programme supporting the rollout of Near Me video consultation, where a focus on organisational learning was prioritised along with programme evaluation. The sources of existing and new data gathered and interpreted to answer the key evaluation questions are outlined. Measures or indicators of process and impact would be specified as the measurement or data collection plan is developed.

Table 1. Evaluation framework – key questions and how these will be answered

Evaluation focus: Organisational learning
What questions are being addressed?
• What are teams noticing and experiencing in terms of what has been successful and what has been more challenging?
• What are teams learning from having worked in a new way to support improvement that should inform future improvement work?
How will these be answered using existing and new data?
• Reflective qualitative data captured using a combination of semi-structured interviews, structured interviews and open-ended survey questions
• Documentary analysis
• Shadowing of implementation

Evaluation focus: Programme impact evaluation
What questions are being addressed?
• To what extent is there improvement in Near Me use and in what circumstances?
• What is being learned about the spread of Near Me use in practice, including successes and barriers?
How will these be answered using existing and new data?
• Measures/indicators of impact relating to the spread of Near Me use, including the increase in Near Me calls from baseline
• Documentary analysis
• Shadowing of implementation
Defining programme theory and outcomes
Defining when (and where/among whom) outcomes are expected to occur, and how these would be measured, is important when evaluating the impact of quality improvement. The logic model below illustrates how this sequence of outcomes can be articulated.

Outcomes or impact at an organisational or system level commonly relate to what has improved in terms of efficiency and quality indicators, such as service utilisation and satisfaction with care. At a personal outcome level, impact relates to what matters to service users to be able to live well in the context of their lives.

Logic model (illustrative):

Content theory – the changes that will be made to improve outcomes (and adapt to a local setting)
Execution theory – what the programme will do

• Input and activities – What is being delivered? What resources are required? For example, training and support for staff to be able to deliver improved care processes (practice such as care bundles and QI skills).
• Short term outcomes – What is being learned and understood in order to be able to test changes, and to what extent? For example, an increase in knowledge and skills for testing and implementing improved care processes.
• Medium term outcomes – What changes in processes and behaviours are expected? For example, improvement in processes for delivering care according to protocol or bundled practices (involves adaptation to local settings).
• Long term outcomes – What changes in patient and organisational outcomes are expected? For example, improvement in outcomes at different levels, such as for patients (experience of care), staff and the system (avoidable hospital admissions).
Using wider theory for evaluation

There are different levels of theory that can be used to develop a programme's theory of change and to structure how an evaluation should be carried out. The Kirkpatrick model is widely used for articulating the outcomes expected from training across four levels: reaction, learning, behaviour and results:
• Level 1: Reaction
• Level 2: Learning
• Level 3: Behavioural change
• Level 4: Organisational improvement

A limitation of the Kirkpatrick model is that it does not include other factors that will influence improvement in knowledge, such as individual human factors (motivation) and organisational factors (culture).

Programme theory should also take account of the factors that are not within the direct control of the programme but may nevertheless influence whether there is improvement. For instance, factors relating to organisational culture may influence the extent to which participants are able to engage with the programme and put learning into practice. The use of broader frameworks such as the Model for Understanding Success in Quality can help clarify what other factors would be appropriate for an evaluation to assess.

There can also be factors that relate to the individual characteristics of participants, such as the level of motivation to learn or prior experience of quality improvement. These factors are sometimes referred to as moderators. There might be ways that the programme can seek to control these factors as part of the programme design, such as selecting participants based on evidence of their prior experience and demonstration of motivation.
Impact evaluation example

Changes in emergency admissions, 28-day re-admissions and length of stay following the introduction of a new model of care provided a focus for measuring the expected impact of a programme evaluated here. Time series analysis illustrates that, for one practice implementing the model, there had been a sustained downward shift in the rate of emergency hospital admissions for patients over 65 years.

These results have to be interpreted with caution due to the issue of 'regression to the mean'. In the context of evaluating health care initiatives, the health status or hospital use of patients can improve over time because, on average, people would be expected to get better – especially if patients being referred to a new service or model had a greater need for healthcare at the start (for example a recent crisis or emergency hospital admission).

A more robust way of assessing impact when this is expected is to compare with a matched control group: a group of patients selected to be as similar as possible to those receiving the new model, such as in terms of age, gender, prior health conditions, access to health care services and prior use of hospital services.
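To make the matched comparison idea concrete, the sketch below pairs each patient referred to the new model with the most similar unexposed patient on two characteristics and then compares post-period admissions. The file and column names (patients.csv, exposed, age, prior_admissions, post_admissions) are hypothetical, and a real analysis would match on more characteristics and typically use dedicated methods such as propensity scores.

    # Minimal sketch of nearest-neighbour matching on two characteristics, followed by a
    # comparison of post-period admissions. File and column names are hypothetical.
    import pandas as pd

    patients = pd.read_csv("patients.csv")
    exposed = patients[patients["exposed"]]          # patients referred to the new model (boolean column)
    pool = patients[~patients["exposed"]].copy()     # candidate comparison patients

    matches = []
    for _, person in exposed.iterrows():
        # nearest neighbour on age and prior admission count, matched without replacement
        distance = (pool["age"] - person["age"]).abs() + (pool["prior_admissions"] - person["prior_admissions"]).abs()
        best = distance.idxmin()
        matches.append(pool.loc[best])
        pool = pool.drop(best)

    control = pd.DataFrame(matches)
    print("Mean post-period admissions (new model):", exposed["post_admissions"].mean())
    print("Mean post-period admissions (matched control):", control["post_admissions"].mean())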
Reflective questions
Reflective questions can be used to collect information about progress, explore the process of emergent
change and identify learning. Reflections can be from an individual perspective collected through a
reflective log or survey. They can also be from a team perspective prompted through team huddles or
discussion.

Example reflective questions:
• What has happened?
• How are you feeling?
• What are you noticing that is working well?
• What are you noticing that is working less well? How should this be different?
