
Measuring Safety Performance

Guidelines for Service Providers


Executive Summary
The objective of this paper is to provide guidelines for the definition and implementation of a set of safety performance indicators as part of your safety management system.
This document proposes an approach to safety performance measurement aimed at increasing your company's potential for effective safety management, one that considers both systemic and operational issues.
Effective safety performance measurement will be decisive in driving your safety management system towards excellence.
Throughout this document:
- any reference to the term 'service provider' is intended to cover providers of aviation products and services;
- any reference to 'operations' is intended to mean your core activities being regulated through aviation safety regulations; and
- any reference to 'regulator' is used in the broad sense, to cover all State functions and responsibilities as relevant for the management of aviation safety.

Terms and definitions used throughout this document consider definitions contained in International
Civil Aviation Organization (ICAO) Annex 19 Edition 1 and the Safety Management International
Collaboration Group (SM ICG) Safety Management Terminology paper.

July 16, 2013


Table of Contents
1. The concept
   1.1. What is safety performance?
   1.2. Why measure safety performance?
   1.3. How to measure: types of safety performance indicators
2. Safety performance measurement process
   2.1. Prerequisites for effective safety performance measurement
   2.2. Process for defining and reviewing safety performance indicators
3. SPI examples
   3.1. Indicators for systemic issues
   3.2. Indicators for operational issues
   3.3. Indicators to monitor external factors
Reference documents


1. The concept
1.1. What is safety performance?

ICAO Annex 19 defines safety as 'the state in which risks associated with aviation activities, related to, or in direct support of the operation of aircraft, are reduced and controlled to an acceptable level' and safety performance as 'a service provider's safety achievement as defined by its safety performance targets and safety performance indicators.' These definitions provide a good indication of the complexity related to measuring safety performance. In many areas safety metrics tend to focus on serious incidents and accidents, as these are easy to measure and often receive more attention. In terms of safety management, the focus on such negative events should be considered with some caution, because:
- in systems such as aviation with a low number of high consequence negative outcomes, the low frequency of such outcomes may give the wrong impression that your system is safe;
- the information is available too late to act on it;
- counting final outcomes will not reveal any of the systemic factors, hazards or latent conditions that have a potential to result in high consequence negative outcomes under the same conditions; and
- where the resilience of a system has been undermined, such outcomes are more likely to occur by chance; these outcomes may therefore draw unwarranted attention and use scarce resources when they are not predictive of later events.

The issue is further complicated because the aviation system is a highly dynamic, complex system with many different players, interactions, dependencies and parameters that may have a bearing on final safety outcomes. Therefore, in most cases it is impossible to establish a linear relationship between specific parameters or safety actions and the final, aggregate safety outcome. Hence, the absolute measurement of safety is itself unachievable. Whilst there are many models of what makes up the level of safety (and conversely the level of exposure to risk), indicators will always constitute imperfect markers of these levels.
Safety is more than the absence of risk; it requires specific systemic enablers of safety to be maintained at all times to cope with the known risks, to be well prepared to cope with those risks that are not yet known, and to address the natural erosion of risk controls over time. Thus, from the perspective of your company there cannot be any direct measures of safety. Measures should in particular focus on those features of your system that are intended to ensure safe outcomes: those elements that constitute organizational enablers of safe outcomes and the specific safety controls and barriers for any risks identified. Measures also need to address how external factors may influence these enabling elements, risk controls and barriers, or how these controls and barriers influence each other. This approach is aligned with current industry practice in the area of quality management as promoted, for example, by the International Organization for Standardization (ISO) 9000 series standards; when the resulting output cannot be directly measured, the underlying systems and processes need to be validated instead.


The principles above are valid both from a regulator's perspective and from the perspective of an individual service provider; in all cases the dynamic nature of the systemic, operational and external components of safety performance should be considered.

Figure 1: Components of safety performance

1.2. Why measure safety performance?

ICAO Safety Management System (SMS) standards and recommended practices promote
the development and maintenance of means to verify the safety performance of your
organization and to validate the effectiveness of safety risk controls.
The analysis and assessment of how your company functions to deliver its activities should
form the basis for defining your safety policy, the related safety objectives and the
corresponding safety performance indicators and targets.
SMS requires a systemic approach as with any other element of business management
(e.g., quality, finance), and in this respect safety performance measurement provides an
element that is essential for management and effective control: 'feedback.'


- Feedback will allow management to validate the analysis and assessment of how well your organization functions in terms of safety and to make adjustments as required (Plan-Do-Check-Act).
- Feedback to your management will guide decision-making and resource allocation.
- Feedback to all staff will ensure that everyone is informed of your company's safety achievements. This will help to create commitment and contribute to fostering your company's safety culture.

Figure 2: The measurement cycle

Effective safety performance measurement will support the identification of opportunities for improvement not only related to safety, but also to efficiency and capacity.
The management of safety relies on the capabilities of your organization to systematically anticipate, monitor, and further develop your organizational performance to ensure safe outcomes of your activities. Effective safety management requires a thorough understanding and sound management of your system and processes. This cannot be achieved without some form of measurement. Rather than randomly selecting outcomes that are easy to measure, you should select safety performance indicators that consider the type of feedback needed to ensure your company's capabilities for safety management can be properly evaluated and improved. This implies that you will need to measure performance at all levels of your organization by adopting a broad set of indicators involving key aspects of your system and operations, allowing you to measure those key aspects in different ways.


1.3. How to measure: types of safety performance indicators

ICAO defines a safety performance indicator as 'a data-based safety parameter used for monitoring and assessing performance' and a safety performance target as 'the planned or intended objective for safety performance indicator(s) over a given period.'
Safety performance indicators (SPIs) can be classified in accordance with specific features, and different classifications are commonly used in different areas. The types of indicators described in this document have been defined following a review of such commonly used classifications and definitions to identify commonalities. An explanation is provided where relevant on the use of each. You may adopt any terms for your specific safety performance indicators as you see fit; the information below is provided to complement the conceptual information required for effective safety performance measurement.

Lagging indicator
Metrics that measure safety events that have already occurred, including those unwanted safety events you are trying to prevent (SM ICG).
Lagging indicators are measures of safety occurrences, in particular the negative
outcomes that the organization is aiming to prevent. Lagging indicators are mainly
used for aggregate, long-term trending, either at a high level or for specific
occurrence types or locations. Because they measure safety outcomes, they can be
used to assess the effectiveness of safety measures, actions, or initiatives and are a
way of validating the safety performance of the system. Also, trends in these
indicators can be analyzed to determine if latent conditions exist in present systems
that should be addressed.
Two types of lagging indicators are generally defined:
1. Indicators for high severity negative outcomes, such as accidents or serious incidents. The low frequency of high severity negative outcomes means that aggregation (e.g., at industry segment level or regional level) may produce more meaningful analyses.
Example: number of runway excursions/1000 landings.
2. Indicators for lower level system failures and safety events that did not manifest themselves in serious incidents or accidents (including system failures and procedural deviations), but where safety analysis indicates there is the potential for them to lead to a serious incident or accident when combined with other safety events or conditions. Such indicators are sometimes referred to as precursor event indicators [1].

[1] This term should be used with caution: before defining one event or condition as a precursor to a more serious event or condition (e.g., incidents as precursors to accidents), it must be ensured that there is a demonstrable correlation between the two. Such correlation underlies the concept of measurement validity. The factors that cause the incidents defined as 'precursors' must be common between those incidents and the probability of accidents they are assumed to predict.


Indicators for lower level system failures and safety events are primarily used to monitor specific safety issues and to measure the effectiveness of safety controls or barriers put in place for mitigating the risk associated with these hazards.
Example: number of unstabilized approaches/1000 landings (see the rate sketch below).
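To make the normalization arithmetic behind such rate-based SPIs concrete, here is a minimal Python sketch; the counts, landing totals, and function name are illustrative assumptions, not data or code from this paper.

```python
def rate_per_1000(event_count: int, landings: int) -> float:
    """Normalize an occurrence count to a rate per 1,000 landings (illustrative helper)."""
    if landings <= 0:
        raise ValueError("landings must be positive")
    return event_count / landings * 1000

# Hypothetical monthly data: (unstabilized approaches, landings)
monthly = {"Jan": (12, 8400), "Feb": (9, 7900), "Mar": (15, 8800)}

for month, (events, landings) in monthly.items():
    print(f"{month}: {rate_per_1000(events, landings):.2f} per 1000 landings")
```

Normalizing by exposure (landings, flight hours, take-offs) is what makes periods with different traffic volumes comparable.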

Leading indicator
Metrics that provide information on the current situation that may affect future
performance (SM ICG).
Leading indicators should measure both things that have the potential to become or contribute to a negative outcome in the future (negative indicators) and things that contribute to safety (positive indicators). From a safety management perspective, it is important to provide sufficient focus on monitoring positive indicators to enable strengthening of those positive factors that make up your company's safety management capability.
Leading indicators, which are particularly relevant from a management perspective, may be used to influence safety management priorities and the determination of actions for safety improvement. You may use this type of indicator to proactively develop (drive) your company's safety management capabilities, in particular during initial implementation of SMS. This may entail the setting of performance targets.
Example: the percentage of changes to Standard Operating Procedures that have been subject to hazard identification and safety risk management.
Leading indicators may also be used to inform your management about the dynamics of your system and how it copes with any changes, including changes in its operating environment. The focus will be either on anticipating emerging weaknesses and vulnerabilities to determine the need for action, or on monitoring the extent to which certain activities required for safety are being performed. For these monitoring indicators, alert levels can be defined (a sketch of one common way to derive them follows below).
Example: the extent to which work is carried out in accordance with Standard Operating Procedures.
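Where alert levels are used, one commonly cited approach (described, for example, in ICAO Doc 9859) derives them from the mean and standard deviation of a preceding period. The sketch below assumes that method; the data and function names are illustrative, not taken from this paper.

```python
from statistics import mean, stdev

def alert_levels(previous_period: list[float]) -> dict[str, float]:
    """Alert lines at the prior period's mean plus 1, 2, and 3 standard deviations."""
    m, s = mean(previous_period), stdev(previous_period)
    return {"mean": m, "alert_1sd": m + s, "alert_2sd": m + 2 * s, "alert_3sd": m + 3 * s}

# Hypothetical: last year's monthly rates of SOP deviations per 100 tasks
last_year = [2.1, 1.8, 2.4, 2.0, 2.2, 1.9, 2.3, 2.5, 2.0, 1.7, 2.2, 2.1]
levels = alert_levels(last_year)
current = 2.9
breached = [name for name, value in levels.items() if name != "mean" and current > value]
print(levels, "breached:", breached)
```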

The concept of leading and lagging indicators has existed in domains outside of aviation for
a number of years. In particular, economists use them as a means to measure the health
of an economy.
Safety performance measurement should ideally consider a combination of leading and lagging indicators. The main focus should be to measure and to act upon the presence of those systemic and operational attributes that enable effective safety management within your company, while using lagging indicators to ensure that this safety management is effective. Lagging indicators, particularly indicators for lower level system failures, are useful to validate the effectiveness of specific safety actions and risk barriers or to support the analysis of information derived from your leading indicators.


2. Safety performance measurement process


2.1. Prerequisites for effective safety performance measurement
In essence, your safety performance is determined by your capability to implement and maintain those organizational elements required to ensure safe outcomes. The purpose of your SMS is to build up, maintain, and continually improve this capability. As a prerequisite for effective safety management, your organization needs to perform a system analysis to generate an accurate and reliable description of your organizational structures, policies, procedures, processes, staff, equipment, and facilities. This analysis should have a particular focus on the interactions between system components and external factors. This will provide you with a model of how your system elements and activities interact to produce the expected safety outcomes, allowing you to identify the strengths and weaknesses of your system. The system description and related model of how your activities lead to the expected outcomes will inform you on what to measure to drive safety performance and what to monitor to keep an eye on all of those elements that may affect your organization's safety performance [2].
Guidance on system description and hazard identification for design and manufacturing organizations may be found, for example, in the Federal Aviation Administration (FAA) Aircraft Certification Service (AIR) SMS Pilot Project Guide [3]. Most of the elements developed in this guidance document can be adapted for other sectors. Although designed for regulators, the SM ICG SMS Evaluation Tool [4] may be useful in assessing the completeness and adequacy of your SMS. Your internal audit system and regulator audits and inspections may also identify areas of concern or safety critical tasks.
If your organization has a quality management system, such as those defined in ISO
9001/AS9100 or equivalent standards, the existing system and process description is a
starting point for your system analysis, but you should ensure that your system and process
description properly addresses aviation safety risks as well as business risks.
Following completion of the system description, including analysis and assessment, your
company should have gained or confirmed its understanding of where it stands with regard
to safety. Through this exercise you should have identified:
At the systemic level:
- whether the elements that constitute enablers of effective safety management are
present, suitable, and effective;
- the elements that are still missing for effective safety management;
- whether the elements are sufficiently integrated with each other and with the core
management and operational processes of your organization; and
- the weaknesses and vulnerabilities in your organization.
At the operational level:
- the main risks in operations that need to be addressed (the things that may cause
your next accident).

[2] See also ICAO Doc 9859 Edition 3, 7.4 System Description.
[3] http://www.faa.gov/about/initiatives/sms/pilot_projects/guidance/media/DM_SMS_PilotProjectGuide.pdf
[4] http://www.skybrary.aero/index.php/SM_ICG_SMS_Evaluation_Tool


This will form the basis for reviewing the adequacy of your safety policy, defining or
adapting your safety objectives, and deriving your safety performance indicators.

2.2. Process for defining and reviewing safety performance indicators


As with anything that relates to effective safety management, defining and using safety performance indicators must be a dynamic process. A step-by-step process for developing your own set of safety performance indicators is proposed, which follows the Plan-Do-Check-Act logic for continual improvement. This should help you to involve and get buy-in from all staff concerned.

Figure 3: Process steps

Step 1: Designate responsibilities


It is critical to the success of the SPI project, as to the SMS journey in general, that your management are fully committed to implementing SPIs as a fundamental part of your company's safety management approach. Rather than just supporting a system of SPIs, management must define aspects of your organization that require measurement and management, and then must commit to a systematic approach to managing those elements, in accordance with your safety policy and defined safety objectives.
The first step for establishing SPIs will be for management to designate personnel with
responsibilities for initiating the effective promotion and coordination of the introduction of
the SPIs. This will require responsibility for ensuring effective communication and generally
overseeing the implementation, with due consideration of your existing organizational setup
in relation to safety management. These personnel (hereafter referred to as SPI team)
should ideally include, and certainly have access to, personnel with appropriate experience and knowledge of safety and/or quality management principles and data analysis. They
should also have experience applying this knowledge and these skills in the context of your
policies, programs, operational procedures and practices. Process owners must be directly
involved even if specialists are used to supply measurement expertise or to
support/facilitate the SPI development process. Also, it is essential that process owners take
ownership of safety performance measurement for their processes. The SPI team (or
individual with designated responsibilities, depending on the size and complexity of your
organization) must clearly be shown to be in either a support or advisory role to
management and process owners.
Management should be kept informed of progress on a regular basis and should take an active role in steering the process of implementing SPIs. For larger organizations it may be useful to develop an analysis of the costs and benefits of the SPI development project, with particular focus on the positive effects on your company's management information system that will lead to improved resource allocation.
Finally, the SPI team should set a reasonable timetable, including milestones, to ensure
adequate progress in developing the SPIs.
Step 2: Review safety policy and objectives; identify key issues and main focus
At this step, the SPI team should identify the scope and focus of measurement considering the results of the system analysis (cf. 2.1), paying particular attention to the completeness and adequacy of your SMS.
To define indicators for specific operational safety issues, the bow-tie methodology [5] or similar tools can be used to determine the safety actions and risk barriers that would be most suitable for the definition of operational SPIs. A thorough hazard identification will be required as part of your system analysis to provide a good understanding of threats to safety in your operations (a minimal data-model sketch follows at the end of this step).

The SPI team may also review typical indicators used within your industry segment and
assess them to determine whether they are pertinent to your organization. For example,
measuring the number of internal reports may not be meaningful if your system analysis
reveals that there are no easily accessible means to report or there are concerns about
confidentiality.
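As a purely hypothetical illustration of how a bow-tie can be captured so that each barrier carries its own candidate SPI (none of these field names or example entries come from the methodology reference), consider:

```python
from dataclasses import dataclass, field

@dataclass
class Barrier:
    name: str
    spi: str  # the indicator chosen to monitor this control (illustrative field)

@dataclass
class BowTie:
    hazard: str
    top_event: str
    threats: dict[str, list[Barrier]] = field(default_factory=dict)       # preventive side
    consequences: dict[str, list[Barrier]] = field(default_factory=dict)  # recovery side

approach = BowTie(
    hazard="Approach and landing operations",
    top_event="Unstabilized approach continued to landing",
    threats={"High-energy approach": [
        Barrier("Stabilized approach criteria in SOPs",
                "% of approaches meeting stabilization gates at 1000 ft")]},
    consequences={"Runway excursion": [
        Barrier("Go-around policy",
                "go-around rate following unstabilized approaches")]},
)
print(approach.top_event)
```

Mapping one candidate SPI per barrier keeps operational indicators traceable to the risk controls they are meant to validate.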
Step 3: Determine data needs
To be meaningful, measures of performance must be based on reliable and valid data, both
qualitative and quantitative. Therefore the SPI team should identify all pertinent data and
information that is available within your company and determine what additional
information is needed. It should also consider information available through the internal
audit/compliance monitoring system.
Regardless of the type of data, quality is one of the most important elements in ensuring that the data can be integrated and used properly for analysis purposes. Data quality principles and practices should be applied throughout the processes, from data capture and integration to analysis. Guidance about required data attributes and data management can be found in the SM ICG Risk Based Decision Making Principles document [6].
[5] http://www.skybrary.aero/index.php/Bow_Tie_Risk_Management_Methodology
[6] http://www.skybrary.aero/index.php/Risk_Based_Decision_Making_Principles
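As a minimal illustration of applying such data quality principles at the point of capture (the rules and field names below are assumptions for the sketch, not requirements from the SM ICG guidance):

```python
from datetime import date

def validate_record(record: dict) -> list[str]:
    """Return a list of data quality problems; an empty list means the record is usable."""
    problems = []
    for required in ("date", "location", "event_type"):  # assumed minimum fields
        if not record.get(required):
            problems.append(f"missing field: {required}")
    if isinstance(record.get("date"), date) and record["date"] > date.today():
        problems.append("date lies in the future")
    if record.get("landings") is not None and record["landings"] < 0:
        problems.append("negative exposure count")
    return problems

print(validate_record({"date": date(2013, 7, 1), "location": "", "event_type": "unstabilized approach"}))
```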


You may be tempted to identify things that lend themselves to being measured instead of identifying what you should measure. This is likely to result in identifying SPIs that are most obvious and easy to measure rather than SPIs that are most valuable for effective safety management. Therefore, at this step of the process, it is important to focus on what changes your organization wants to drive and what aspects it needs to monitor. You should also consider that, to be effective at assessing system safety, a broad set of indicators involving key aspects of your system and operations should be developed; this will reduce the possibility of having a narrow and therefore potentially flawed view of your company's safety performance.
Also, it may be necessary to measure the same system in several ways in order to gain a more precise idea of the actual level of safety performance. For example, only assessing your company's safety culture without measuring operational parameters will merely provide a very partial indication of safety performance.
In the area of hazard identification and risk management in operations (core processes),
availability of data will depend in part on the maturity of your internal safety reporting
schemes. Aggregate data for your industry segment may also be considered, particularly
when your SMS has not yet generated sufficient data. Other information, such as number
of flights, fleet size, and financial turnover, may contribute to a better understanding of the
context of operations. Continuous availability of data should be ensured to generate
relevant and timely indicators. Delays in compiling data for the generation of indicators are
likely to delay any safety actions that may be required.
Step 4: Define indicator specifications
Once the scope and focus of your SPIs have been determined and available data/information reviewed, the specifics need to be defined. Each SPI should be accompanied by sufficient information (or metadata) to enable any user to determine both the source and quality of the information, and to place the indicator in the context necessary to interpret and manage it effectively [7].
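One way to keep an SPI together with its metadata is a small specification record. The fields below are an illustrative reading of the attributes mentioned above (source, currency, accuracy), not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpiSpecification:
    name: str
    description: str
    indicator_type: str          # e.g., "lagging" or "leading"
    unit: str                    # e.g., "events per 1000 landings"
    data_source: str             # where the underlying data comes from
    currency: str                # how fresh the data is, e.g., "monthly extract"
    accuracy_notes: str          # known limitations of the data
    target: Optional[float] = None
    alert_level: Optional[float] = None

runway_excursion_spi = SpiSpecification(
    name="Runway excursions",
    description="Runway excursions normalized by landing volume",
    indicator_type="lagging",
    unit="events per 1000 landings",
    data_source="mandatory occurrence reports",
    currency="updated monthly",
    accuracy_notes="relies on complete reporting of excursion events",
)
```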
Whenever possible, indicators should be quantitative, as this facilitates comparison and trend detection. Quantitative metrics should be precise enough to allow highlighting trends in safety performance over time or deviations from expected safety outcomes or targets.
For qualitative SPIs, it is important to minimize subjectivity. This may be achieved through
an evaluation by members of staff not directly involved in the definition of SPIs.
Depending on the size of your company and the complexity of your activities, a hierarchical
framework for your SPIs could be defined to reflect the different processes and sub-systems
within your organizational structure. While some indicators for assessing systemic issues
may be common to different processes and subsystems, indicators for assessing operational
issues will need to be specific. This underlines the importance of having performed an
accurate system analysis identifying all system components and sub-systems as a
prerequisite for implementing SMS (cf. 2.1).

[7] For an example, see http://aviationsafetywiki.org/index.php/Reporting_metadata_specification. Metadata should include information on data sources, currency, accuracy, and any other pertinent details.


Aspects of good SPIs include [8]:
- The indicator is:
  - valid and reliable,
  - sensitive to changes in what it is measuring, and
  - not susceptible to bias in calculation or interpretation.
- Capturing the data is cost effective.
- The indicator is:
  - broadly applicable across company operations, and ideally throughout the larger aviation sector, and
  - easily and accurately communicated.
Step 5: Collect data and report results
Once you have defined your SPIs, you must decide how you will collect the data and report
the results. Data collection approaches (i.e., data sources, how data will be compiled, and
what the reports will look like), as well as roles and responsibilities for collection and
reporting, should be specified and documented. Data collection procedures should also
consider the frequency with which data should be collected and the results reported for each
SPI. Some of these issues will have been addressed when deciding on the SPIs in steps 3
and 4.
The presentation format of the indicator results should take into account the target audience. For example, if you track several indicators addressing the same key issue, it may be useful to identify a subset of the most critical indicators to be given greater emphasis for reporting to top management. The presentation of indicator results should facilitate understanding of any deviations and identification of any important trends (e.g., scoreboards with traffic lights, histograms, linear graphs); a sketch of a simple traffic-light classification follows below.
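A traffic-light scoreboard can be produced with a simple classification rule; the thresholds, status names, and example indicators below are illustrative assumptions:

```python
def traffic_light(value: float, target: float, alert: float) -> str:
    """Classify an SPI result; assumes 'lower is better' indicators."""
    if value <= target:
        return "green"
    if value < alert:
        return "amber"
    return "red"

# Hypothetical indicators: (name, current value, target, alert level)
spis = [("unstabilized approaches per 1000 landings", 3.1, 2.5, 4.0),
        ("overdue corrective actions", 6, 5, 10)]
for name, value, target, alert in spis:
    print(f"{name}: {value} -> {traffic_light(value, target, alert)}")
```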
Step 6: Analyze results and act on findings from SPI monitoring
This is the most relevant step in terms of safety management, as the ultimate goal of implementing SPIs is to maintain and improve your company's safety performance over time. There is no point in collecting information if the results are not used. Remember that SPIs are indicators of safety performance, not direct measures of safety. The information collected through different SPIs needs to be carefully analyzed, and SPIs collected for different issues need to be put in perspective and the results interpreted, so as to gain an overall picture of the organization's safety performance. The results obtained through an individual indicator may be insignificant if taken in isolation, but may be important when considered in combination with other indicators (see the trend sketch below for one simple way to read a series of results).
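One simple, assumption-laden way to read a series of results rather than a single value is to fit a least-squares slope over recent periods (the data and function name here are illustrative):

```python
def trend_slope(values: list[float]) -> float:
    """Least-squares slope over equally spaced periods; needs at least two values.
    A positive slope means a worsening trend for negative-outcome SPIs."""
    n = len(values)
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

print(trend_slope([2.1, 2.3, 2.2, 2.6, 2.8, 3.0]))  # positive slope: adverse trend
```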
Inconsistencies between SPIs may be an indication of an inaccurate system description or
problems with the SPIs themselves. For example, you may encounter situations where
leading and lagging indicators associated with the same safety issue provide contradictory
results or where a positive trend in systemic indicators goes with a negative trend in
operational indicators.
If you find that the metrics are not defined well enough to capture safety critical information, the SPIs should be reviewed. Any inconsistencies in the overall picture represent a potential opportunity for learning and for adjusting not only the SPIs (see Step 7) but your SMS itself.
[8] Indicators of safety culture: selection and utilization of leading safety performance indicators, Reiman and Pietikäinen, VTT Technical Research Centre of Finland, 2010:07.
Indicators should not simply be seen as a metric, with actions being taken to get a good score rather than to improve safety performance. It is important that results obtained through the collection, analysis, and interpretation of SPIs are conveyed to your management for decision and action. Ideally, these results should be presented at regular meetings (e.g., management reviews, safety review board meetings) to determine what actions are required to address deficiencies or to further improve the system. It is important that such actions do not focus on certain indicators in isolation, but on optimizing your organization's overall safety performance.
As part of your safety communication and promotion, all staff should be informed of the
results obtained through the collection, analysis, and interpretation of SPIs.
Step 7: Evaluate SPIs and make changes as appropriate
The system analysis of your organization, along with the set of SPIs and their specifications, including the metrics and any defined targets, should be periodically reviewed and evaluated to consider:
- the value of experience gained,
- new safety issues identified,
- changes in the nature of risk,
- changes in the safety policy, objectives, and priorities identified,
- changes in applicable regulations, and
- organizational changes, etc.

The frequency of the review cycle should be defined. Periodic reviews will help to ensure that the indicators are well defined and that they provide the information needed to drive and monitor safety performance. Periodic reviews will also help identify when specific drive indicators are no longer needed (e.g., if the intended positive changes have been achieved) and allow adjustment of SPIs so that they always focus on the most important issues in terms of safety. Nevertheless, too frequent reviews should be avoided, as they may not allow a stable system to be established.
After the first two to three cycles, you should have collected enough data and gained sufficient experience to be able to determine which are your key SPIs: those that are most valuable and most effective to monitor and to drive safety performance. At this stage you may be able to derive targets for these key SPIs by extrapolating the data collected during previous cycles (a minimal sketch of such an extrapolation follows below). Any such extrapolation needs to consider the dynamics of your organization. You might also compare your SPIs with those implemented by other organizations within your industry segment, but you should never simply copy another organization's SPIs without checking that they are meaningful for your organization.
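Extrapolating previous cycles to derive a target can be as simple as projecting the fitted linear trend one cycle ahead and tightening it by a modest improvement factor; everything below (the data and the 5% factor) is an illustrative assumption, not a recommended value:

```python
def extrapolated_target(history: list[float], improvement: float = 0.05) -> float:
    """Project the linear trend one period ahead, then tighten by an improvement factor."""
    n = len(history)
    x_mean = (n - 1) / 2
    y_mean = sum(history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(history)) / \
            sum((x - x_mean) ** 2 for x in range(n))
    projection = y_mean + slope * (n - x_mean)  # value expected at the next period
    return projection * (1 - improvement)

# Hypothetical rates over four previous cycles
print(round(extrapolated_target([3.4, 3.1, 3.0, 2.8]), 2))
```

Such a projection only holds if the dynamics of your organization stay comparable across cycles, which is exactly the caveat raised above.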


3. SPI examples
Below is a non-exhaustive list of example indicators intended to assist your organization in selecting its own set of safety performance indicators, following the process described in 2.2. Before adopting any of these as your own SPIs, you should determine whether the particular indicator is relevant to your specific organization, considering the maturity of your SMS and the specific features you would like to improve or that need attention.

3.1. Indicators for systemic issues

(Each entry names the focus of measurement, followed by the related metrics.)

Area: Compliance
- Focus: internal audits/compliance monitoring, all non-compliances. Metrics: total number per audit planning cycle and trend; % of findings analyzed for their safety significance.
- Focus: internal audits/compliance monitoring, significant non-compliances. Metrics: number of significant findings versus total number of findings; number of repeat findings within the audit planning cycle.
- Focus: internal audits/compliance monitoring, responsiveness to corrective action requests. Metrics: average lead time for completing corrective actions per audit planning cycle and trend.
- Focus: external audits/compliance monitoring, all non-compliances. Metrics: total number per oversight planning cycle and trend; % of findings analyzed for their safety significance.
- Focus: external audits, significant non-compliances. Metrics: number of significant findings versus total number of findings.
- Focus: external audits, responsiveness to corrective action requests. Metrics: average lead time for completing corrective actions per oversight planning cycle and trend.
- Focus: consistency of results between internal and external audits/compliance monitoring. Metrics: number of significant findings only revealed through external audits.

Area: SMS effectiveness
- Focus: strategic management. Metrics: the degree to which safety is considered in the organization's official plans and strategy documents; the frequency with which the organization's official plans and strategy documents are reviewed with regard to safety.
- Focus: management commitment. Metrics: number of management walk-arounds per month/quarter/year; number of management meetings dedicated to safety per month/quarter/year; length of term and turnover rate of key safety personnel; number of cases where the reasons for departure of key personnel have been analyzed.
- Focus: supervision. Metrics: number of cases where supervisors provided positive feedback on safety-conscious behavior of your staff per month/quarter/year.
- Focus: reporting. Metrics: number of reports received per month/quarter/year and trend; % of reports for which feedback to the reporter was provided within 10 working days; % of reports followed by an independent safety review.
- Focus: hazard identification. Metrics: number of accident/serious incident scenarios analyzed to support Safety Risk Management (SRM) per month/quarter/year; number of new hazards identified through the internal reporting system per month/quarter/year and trend; findings from external audits concerning hazards that have not previously been perceived by personnel/management; number of safety reports received from staff per month/quarter/year and trend.
- Focus: risk controls. Metrics: number of new risk controls validated per month/quarter/year; % of overall budget allocated to new risk controls.
- Focus: HR management & competence development. Metrics: % of staff for which a competence profile has been established; % of staff who have had safety management training; frequency of reviewing competence profiles; frequency of reviewing the scope, content, and quality of training programs; number of changes made to training programs following feedback from staff per month/quarter/year; number of changes made to training programs following analysis of internal safety reports per month/quarter/year.
- Focus: management of change. Metrics: number of organizational changes for which a formal safety risk assessment has been performed per month/quarter/year and trend; number of changes to Standard Operating Procedures (SOPs) for which a formal safety risk assessment has been performed per month/quarter/year and trend; number of technical changes (e.g., new equipment, new facilities, new hardware) for which a formal safety risk assessment has been performed per month/quarter/year and trend; number of risk controls implemented for changes per month/quarter/year and trend; % of changes (organizational/SOP/technical, etc.) that have been subject to risk assessment.
- Focus: management of contractors. Metrics: % of contractors whose safety performance has been assessed; frequency of assessing the safety performance of contractors; % of contractors integrated with your company's safety reporting scheme; % of contractors for which safety training has been provided; % of contractors that have implemented training control procedures; % of contractors that have a feedback system on safety issues in place with their customer; number of safety reports received from contractors per month/quarter/year and trend; number of safety actions initiated following assessment of safety performance or safety reports received per month/quarter/year and trend.
- Focus: emergency response planning (ERP). Metrics: number of emergency drills per year; frequency of reviewing the ERP; number of trainings on the ERP per month/quarter/year; % of staff trained on the ERP within a quarter/year; number of meetings with main partners and contractors to coordinate the ERP per month/quarter/year.
- Focus: safety promotion. Metrics (per month/quarter/year): number of safety communications published; number of trainings performed; number of safety briefings performed.
- Focus: safety culture. Metrics: the extent to which personnel consider safety as a value that guides their everyday work (e.g., on a scale from 1 = low to 5 = high); the extent to which personnel consider that safety is highly valued by their management; the extent to which human performance principles are applied; the extent to which personnel take initiatives in improving organizational practices or report problems to management; the extent to which safety-conscious behavior is supported; the extent to which staff and management are aware of the risks your operations imply for themselves and for others.

3.2. Indicators for operational issues

(Each entry names the high severity outcome to be prevented, followed by the related metrics.)

Area: Air operators (see also Air Traffic Management/Air Navigation Services for additional indicators)
- Outcome: traffic collision. Metrics: number of Traffic Collision Avoidance System (TCAS) resolution advisories per 1000 flight hours (FH).
- Outcome: runway excursion. Metrics: number of unstabilized approaches per 1000 landings.
- Outcome: ground collision. Metrics: number of runway incursions per 1000 take-offs.
- Outcome: controlled flight into terrain. Metrics: number of Ground Proximity Warning System (GPWS) and Enhanced Ground Proximity Warning System (EGPWS) warnings per 100 take-offs.
- Outcome: accident/incident related to poor flight preparation. Metrics: number of cases where flight preparation had to be done in less than the normally allocated time; number of short fuel events per 100 flights; number of fuel calculation errors per 100 flights.
- Outcome: accident/incident related to fatigue. Metrics: number of extensions to flight duty periods per month/quarter/year and trends.
- Outcome: accident/incident related to ground handling. Metrics: number of incidents with ground handlers per month/quarter/year and trends; number of mass and balance errors per ground handler per month/quarter/year and trends; number of dysfunctions per ground handler per month/quarter/year and trends.

Area: Maintenance organizations
- Outcome: maintenance related accident/incidents. Metrics: Pilot Reports (PIREPs) per 100 take-offs; deferred items per month and aircraft; In Flight Shut Downs (IFSD) per 1000 FH; In Flight Turn Backs (IFTB) and deviations per 100 take-offs; number of service difficulty reports filed with the Civil Aviation Authority; dispatch reliability (number of delays of more than 15 minutes due to technical issues per 100 take-offs, number of cancellations per 100 scheduled flights due to technical issues, rejected take-offs per 100 take-offs due to technical issues).
- Outcome: maintenance planning/rostering related accident/incidents. Metrics: % of work orders for which a detailed planning has been made; maintenance engineer fatigue / maintenance error: % of work orders with a difference > 10% between the expected lead time and the actual processing time, and % of work orders with a difference > 10% between the estimated work force and the actual needs.
- Outcome: maintenance related accident/incidents. Metrics: maintenance error: % of work orders that required re-work; number of duplicate inspections that identified a maintenance error; number of investigations performed following components removed from service significantly before the expected life limit was reached.
- Outcome: maintenance data related accident/incidents. Metrics: number of safety reports related to ambiguous maintenance data.

Area: Air Traffic Management/Air Navigation Services
- Outcome: traffic collision. Metrics: number of level busts per exposure; number of TCAS required actions (RA) (with and without loss of separation) per exposure; number of minimum separation infringements per exposure; number of inappropriate separations (airspace in which separation minima are not applicable) per exposure; number of aircraft deviations from air traffic control (ATC) clearance per exposure; number of airspace infringements per exposure; number of aircraft deviations from air traffic management (ATM) procedures per exposure.
- Outcome: traffic collision / controlled flight into terrain. Metrics: number of inappropriate or absent ATC assistance to aircraft in distress.
- Outcome: controlled flight into terrain. Metrics: number of near Controlled Flight Into Terrain (CFIT) events per exposure.
- Outcome: runway excursion. Metrics: number of inappropriate ATC instructions (no instruction, wrong information, action communicated too late, etc.).
- Outcome: runway incursion. Metrics: % of runway incursions where no avoiding action was necessary; % of runway incursions where avoiding action was necessary.

Area: Airports
- Outcome: post-accident/incident fire. Metrics: Fire Extinguishing Services (ICAO Airport Fire Fighting Categories) decrease in value (number of decrease-hours / number of airport annual operating hours); number of radio/phone failures per 100 operations; number of fire rescue vehicle failures per 100 operations.
- Outcome: runway incursion. Metrics: runway incursions per 1000 operations; signage: number of failures or defects found during routine inspection, number of defects reported, and average lead time for repair/replacement (per month/quarter/year and trends); runway lights: number of failures or defects found during routine inspection, number of defects reported, and average lead time for repair/replacement (per month/quarter/year and trends).
- Outcome: collision with vehicle on ground / ground equipment. Metrics: notified platform safety rules violations per 1000 operations.
- Outcome: ground collision with wildlife. Metrics: number of ground collisions with wildlife; number of inspections of fences and other protective devices per month/quarter/year.
- Outcome: FOD (Foreign Object Damage). Metrics: number of FOD found during routine inspections; number of FOD found outside of inspections and after reports.
- Outcome: bird-strike In Flight Shut Down (IFSD). Metrics: number of IFSD per 10000 FH following bird-strike.

Area: Flight training organizations
- Outcome: accident/incident related to poor training. Metrics: number of trainees per instructor; number of changes in instructor per training; number of major changes to the training program (per month/quarter/year and trends).
- Outcome: accident/incident related to poor training/complacency during examinations. Metrics: number of significant deviations from average pass rates.

Area: Design organizations
- Outcome: design related accident/incidents (during the design phase). Metrics: number of design changes requested due to design errors per program and per period; number of rejected compliance demonstrations per program and per period.
- Outcome: design planning related accident/incident. Metrics: % of technical reports with a difference > 10% between the expected lead time and the actual processing time; % of technical reports with a difference > 10% between the estimated work force and the actual needs.
- Outcome: design related accident/incidents (post certification). Metrics: number of service difficulty/safety reports due to design errors per program and per period; number of safety reports related to ambiguous design data; number of design changes classified incorrectly (minor/major) per period.

Area: Manufacturing organizations
- Outcome: manufacturing related accident/incidents. Metrics: number of service difficulty/safety reports due to manufacturing errors per program and per period.
- Outcome: manufacturing process related accident/incidents. Metrics: % of work orders that required re-work; number of investigations performed following work orders that required re-work; % of duplicate inspections that identified a manufacturing error; number of cases where final delivery was delayed due to significant non-compliances; number of investigations performed following delayed delivery.
- Outcome: manufacturing data related accident/incidents. Metrics: number of safety reports related to ambiguous manufacturing data.
- Outcome: manufacturing planning related accident/incidents. Metrics: production personnel fatigue / production error: % of work orders with a difference > 10% between the estimated work force and the actual needs; % of work orders with a difference > 10% between the expected lead time and the actual processing time.

3.3. Indicators to monitor external factors

(Each entry names the monitoring focus, followed by the related metrics.)

Area: Regulations
- Focus: new regulations. Metrics: number of new regulatory requirements that will affect your organization within the next 12 months.
- Focus: amendments to regulations. Metrics: number of amended regulatory requirements that will affect your organization within the next 6 months.
- Focus: evolution towards performance-based regulations. Metrics: number of objective-based rules for which you have defined your own means of compliance.

Area: Technology
- Focus: new technologies relevant to your core business (hardware). Metrics: % of total investment that is spent on new technologies.
- Focus: new technologies relevant to your core business (software). Metrics: % of total investment that is spent on new technologies.
- Focus: new technologies relevant to your core business. Metrics: rate of obsolescence of existing qualifications.
- Focus: new technologies installed in aircraft. Metrics: number of aircraft modifications / Supplemental Type Certificates (STCs) that require a change to your company's rating; number of new modifications / STCs that require new qualifications.

Area: Competition
- Focus: financial turnover. Metrics: evolution in your turnover.
- Focus: staff turnover. Metrics: average time to fill a vacant post; number of staff leaving to work for a competitor.
- Focus: market opportunities. Metrics: evolution in the number of requests for quotation from new customers; ratio of requests for quotation from new customers that are followed by a firm order.
- Focus: competitors. Metrics: evolution in the number of your direct competitors.

Reference documents
1. Leading indicators of system safety: monitoring and driving the organizational safety potential, Teemu Reiman, Elina Pietikäinen, Safety Science 50 (2012).
2. Leading Performance Indicators: Guidance for effective use, 'Step Change in Safety', http://www.stepchangeinsafety.net/knowledgecentre/publications/publication.cfm/publicationid/26
3. ICAO Document 9859, Safety Management Manual, Third edition (unedited advance version), http://www2.icao.int/en/ism/Guidance%20Materials/SMM_3rd_Ed_Advance_R4_19Oct12_clean.pdf
4. Organization for Economic Cooperation and Development (OECD) Guidance on Developing Safety Performance Indicators, Series on Chemical Accidents No. 18, Second edition, 2008, http://www.oecd.org/chemicalsafety/risk-management/41269639.pdf
5. Identifying and Using Precursors: A gateway to gate-to-gate safety enhancement, http://www.skybrary.aero/bookshelf/books/1442.pdf and http://www.skybrary.aero/bookshelf/books/1443.pdf

This paper was prepared by the Safety Management International Collaboration Group (SM ICG). The purpose of the SM ICG is to promote a common understanding of Safety Management System (SMS)/State Safety Program (SSP) principles and requirements, facilitating their application across the international aviation community.
The current core membership of the SM ICG includes the Aviation Safety and Security Agency (AESA) of Spain, the National Civil Aviation Agency (ANAC) of Brazil, the Civil Aviation Authority of the Netherlands (CAA NL), the Civil Aviation Authority of New Zealand, the Civil Aviation Safety Authority (CASA) of Australia, the Direction Générale de l'Aviation Civile (DGAC) in France, the European Aviation Safety Agency (EASA), the Federal Office of Civil Aviation (FOCA) of Switzerland, the Japan Civil Aviation Bureau (JCAB), the United States Federal Aviation Administration (FAA) Aviation Safety Organization, Transport Canada Civil Aviation (TCCA), and the Civil Aviation Authority of the United Kingdom (UK CAA). Additionally, the International Civil Aviation Organization (ICAO) is an observer to this group.
Members of the SM ICG:
- Collaborate on common SMS/SSP topics of interest
- Share lessons learned
- Encourage the progression of a harmonized SMS
- Share products with the aviation community
- Collaborate with international organizations such as ICAO and civil aviation authorities that have implemented or are implementing SMS
For further information regarding the SM ICG please contact:

Regine Hamelijnck, EASA, +49 221 8999 1000, regine.hamelijnck@easa.europa.eu
Jacqueline Booth, TCCA, (613) 952-7974, jacqueline.booth@tc.gc.ca
Amer M. Younossi, FAA, Aviation Safety, (202) 267-5164, Amer.M.Younossi@faa.gov
Carlos Eduardo Pellegrino, ANAC, +55 213 5015 147, carlos.pellegrino@anac.gov.br
Ian Banks, CASA, +61 2 6217 1513, ian.banks@casa.gov.au

SM ICG products can be found on SKYbrary at:
http://www.skybrary.aero/index.php/Safety_Management_International_Collaboration_Group_(SM_ICG)
