98.726 Safety-Methods-Database PDF
Version 0.9
7 December 2010
Maintained by NLR
This document gives an overview of Techniques, Methods, Databases, or Models that can be used
during a Safety Assessment. This is a living document. Additions are welcome.
Part 2: Statistics
This part, which starts on page 171, gathers some statistics on the number of occurrences of elements in
the table of Safety Methods, e.g. number of occurrences of ‘aviation’ as a Domain, number of
occurrences of ‘Identify hazards’ as a Safety assessment stage.
Part 3: References
This part, which starts on page 180, gives the full list of references used.
Document control sheet
Version | Date | Main changes | Number of methods in database
0.9 | 7 December 2010 | Description and classification of many methods improved. 69 new methods added. 66 methods added without number but with reference to other methods. 15 methods removed with reference to other methods. For 32 methods, number and description removed, with reference to other methods. Update of statistics. Verification and update of all URLs in list of references and many references added. Introduction of a new classification type (in column Purpose), which collects Design (D) techniques, which are aimed at designing rather than analysing with respect to safety. | 726 methods (plus 150 links or alternative names to methods)
0.8 | 31 January 2008 | Descriptions of 19 new methods added plus 3 alternative names to already listed methods. New classification type introduced (in column Purpose), which collects (O) Organisation techniques. This class now includes about 20 methods, most of which were previously classified as (H) Human performance analysis techniques; five were previously (R) Risk assessment techniques; two were (M) hazard Mitigating techniques. | 701 methods (plus 53 links or alternative names to methods)
0.7 | 20 February 2007 | Descriptions of 31 new methods added. Alternative names or links to 49 methods included as separate entries in the table, with a link to the original method and without additional details provided. Details for one method removed and replaced by a link to the same method by an alternative name. Minor details for many other methods updated. | 682 methods (plus 50 links or alternative names to methods)
0.6 | 28 November 2006 | One method added. Update of statistics and minor details of other methods. | 652 methods
0.5 | 28 August 2006 | One method added. Update of statistics and minor details of other methods. | 651 methods
0.4 | 27 April 2006 | 24 methods added from various sources. Textual changes and updates of other methods. Insert of statistics on database attributes. | 650 methods
0.3 | 31 March 2005 | Update, supported by the project CAATS [CAATS SKE II]. Ninety-nine methods added, mainly from references [GAIN ATM, 2003] and [GAIN AFSA, 2003]. Textual changes and updates of all methods. | 626 methods
0.2 | 26 November 2004 | Update, supported by the project CAATS [CAATS SKE II]. Seven methods added, and for all methods an assessment provided of the applicable Safety Assessment Stages. | 527 methods
0.1 | 24 September 2004 | Initiation of database, with 520 methods gathered during the EEC funded and supported project [Review SAM Techniques, 2004]. | 520 methods
Part 1: Overview of Safety Methods
(For explanation of table headers, see first page of this document.)
Id | Method name | Format | Purpose | Year | Aim/Description | Remarks | Safety assessment stage (1-8) | Domains | Application (Hw | Sw | Hu | Pr | Or) | References
1. @RISK | Format: T | Purpose: R | Year: 1991
Aim/Description: @RISK uses the techniques of Monte Carlo simulation for Bias and Uncertainty assessment in a spreadsheet-based model. Four steps: (1) Developing a Model, by defining a problem or situation in Excel spreadsheet format; (2) Identifying Uncertainty, in variables in Excel spreadsheets, specifying their possible values with probability distributions, and identifying the uncertain spreadsheet results that are to be analyzed; (3) Analyzing the Model with Simulation, to determine the range and probabilities of all possible outcomes for the results of the worksheet; and (4) Making a Decision, based on the results provided and personal preferences.
Remarks: Developed by Palisade. @RISK evolved from PRISM (not the same as the PRISM elsewhere in this database), released by Palisade in 1984, which also allowed users to quantify risk using Monte Carlo simulation. See also Monte Carlo Simulation.
Safety assessment stage: 5. Domains: many. Application: x. References: [GAIN ATM, 2003]; [GAIN AFSA, 2003]; [FAA HFW].
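The four spreadsheet steps above translate directly to any Monte Carlo setting. A minimal Python sketch, using a hypothetical two-input profit model (the model, distributions and numbers are illustrative assumptions, not part of @RISK):

```python
import random

# Step 1 - Model: a spreadsheet-like formula (hypothetical toy example).
def profit(revenue, cost):
    return revenue - cost

# Step 2 - Uncertainty: probability distributions for the uncertain inputs.
def sample_inputs(rng):
    revenue = rng.normalvariate(1000.0, 100.0)  # assumed normal input
    cost = rng.uniform(600.0, 900.0)            # assumed uniform input
    return revenue, cost

# Step 3 - Simulation: propagate the input uncertainty through the model.
rng = random.Random(42)
outcomes = sorted(profit(*sample_inputs(rng)) for _ in range(10_000))

# Step 4 - Decision: summarise the resulting outcome distribution.
mean = sum(outcomes) / len(outcomes)
p05 = outcomes[len(outcomes) // 20]    # ~5th percentile
p95 = outcomes[-len(outcomes) // 20]   # ~95th percentile
print(f"mean profit {mean:.0f}, 90% interval [{p05:.0f}, {p95:.0f}]")
```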
3D-SART (3D-Situation Awareness Rating Technique): See SART. Applicable to aircrew.

2. ABRM (Analytic Blunder Risk Model) | Format: T | Purpose: R | Year: 1985
Aim/Description: ABRM is a computational model to evaluate the probability of a collision, given a particular blunder (controller error, pilot error, equipment malfunction) between one aircraft involved in the error (the "blunderer") and another aircraft (the "evader"). ABRM considers both the probability of a collision assuming no intervention, and the probability of timely intervention by pilots or controllers. It uses empirical probability distributions for reaction times and a closed-form probability equation to compute the probability that a collision will occur. This permits it to consider combinations of events with small probabilities efficiently and accurately.
Remarks: ABRM is programmed in Excel (with macros). Developed by Ken Geisinger (FAA) in 1985.
Safety assessment stage: 5. Domains: ATM. Application: x. References: [GAIN ATM, 2003]; [Geisinger85].

3. Absorbing boundary model | Format: M | Purpose: R | Year: 1964
Aim/Description: Collision risk model. Reich-based collision risk models assume that after a collision, both aircraft keep on flying; this one does not. A collision is counted if a process state (usually given by a differential equation) hits the boundary of a collision area. After this, the process state is "absorbed", i.e. does not change any more.
Remarks: Mainly of theoretical use only, since it requires a parabolic partial differential equation to have a unique solution.
Safety assessment stage: 5. Domains: ATM. Application: x. References: [Bakker&Blom93]; [MUFTIS3.2-II].
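As a toy illustration of the absorbing-boundary idea (a one-dimensional random walk standing in for the process state; all parameters here are illustrative assumptions, not the Reich-based model or its partial differential equation):

```python
import random

def collision_probability(n_runs=10_000, steps=200, start=5.0,
                          boundary=0.0, sigma=0.5, seed=1):
    """Fraction of sample paths that hit the collision boundary.

    The separation process starts at `start` and takes Gaussian steps;
    a path that reaches `boundary` is counted once and then absorbed,
    i.e. it stops evolving (unlike Reich-type models, where the state
    would keep moving after the collision).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        s = start
        for _ in range(steps):
            s += rng.normalvariate(0.0, sigma)
            if s <= boundary:   # boundary hit: count the collision ...
                hits += 1
                break           # ... and absorb the state
    return hits / n_runs

print(collision_probability())
```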
4. Accident Analysis | Format: G | Year: 1992 or older
Aim/Description: The purpose of the Accident Analysis is to evaluate the effect of scenarios that develop into credible and incredible accidents. Those that do not develop into credible accidents are documented and recorded to verify their consideration and validate the results.
Remarks: Many methods and techniques are applied, e.g. PHA, Subsystem HA.
Safety assessment stage: 3, 4, 5. Domains: nuclear. Application: x x x x x. References: [FAA AC431]; [FAA00]; [ΣΣ93, ΣΣ97].

Accident-Concentration Analysis: See Black Spot Analysis.
5. ACT (Activity Catalog Tool) | Format: T | Purpose: R | Year: 1993
Aim/Description: ACT provides instant, real-time statistical analysis of an observed sequence, including such measures as frequency of occurrence, duration of activity, time between occurrences and probabilities of transitions between activities. ACT automatically creates a data-log file that provides a detailed description of all observations, as well as a further important statistical description of the concurrence of events and activities. To allow for multiple observers and/or multiple observations of a given video tape, data-log files can be merged and/or appended using simple post-processing functions.
Remarks: ACT was designed by two human factors experts (L. Segal and A. Andre, co-founders of Interface Analysis Associates (IAA)), who designed this tool for use in their broad fields of work: from analysing pilot performance in the cockpit, through the analysis of computer workstations, to the evaluation of consumer products and graphical user interfaces. At present, ACT is being used in over 250 industries, research institutions, universities and usability labs around the world.
Safety assessment stage: 2, 3. Domains: aviation. Application: x. References: [FAA HFW]; [ACT web].

6. Action Information Requirements | Format: T | Purpose: H | Year: 1986 or older
Aim/Description: Helps in defining the specific actions necessary to perform a function and, in turn, the specific information elements that must be provided to perform the action. It breaks up the referenced function requirement into useful groupings of action requirements and information requirements.
Remarks: The procedure for developing or completing action/information requirements forms is much more informal than that for most analysis methods.
Safety assessment stage: 2. Domains: defence. Application: x x x. References: [MIL-HDBK]; [HEAT overview].

7. Activity Sampling | Format: T | Purpose: H | Year: 1950
Aim/Description: Method of data collection which provides information about the proportion of time that is spent on different activities. By sampling an operator's behaviour at intervals, a picture of the type and frequency of activities making up a task can be developed.
Remarks: Cannot be used for cognitive activities.
Safety assessment stage: 5. Domains: logistics. Application: x. References: [KirwanAinsworth92]; [FAA HFW].

8. ACT-R (Adaptive Control of Thought - Rational) | Format: T | Purpose: H | Year: 1993
Aim/Description: Simulates human cognition, using Fitts's (1964) three-step skill acquisition model of how people organise knowledge and produce intelligent behaviour. ACT-R aims to define the basic and irreducible cognitive and perceptual operations that enable the human mind. In theory, each task that humans can perform should consist of a series of these discrete operations. The three steps of this model are (1) the conversion of declarative input, (2) knowledge compilation and procedurisation, and (3) the result of both procedurisation and compilation. Procedure: Researchers create models by writing them in ACT-R, thus adopting ACT-R's way of viewing human cognition. Researchers write their own assumptions in the model and test the model by comparing its results to results of people actually performing the task.
Remarks: The original ACT was developed by J.R. Anderson in 1982. In 1993, Anderson presented ACT-R. There exist several university research groups on ACT-R. Typical for ACT-R is that it allows researchers to collect quantitative measures that can be compared with the quantitative results of people doing the same tasks. See also MoFL.
Safety assessment stage: 2, 4. Domains: education and many other. Application: x x. References: [FAA HFW]; [Anderson82]; [Anderson93]; [Fitts64]; [Koubek97]; Wikipedia; many other refs at http://act-r.psy.cmu.edu/publications/.
9. ACWA (Applied Cognitive Work Analysis) | Format: T | Purpose: H | Year: 2001
Aim/Description: ACWA systematically transforms the analysis of the cognitive demands of a domain into supporting visualisations and decision-aiding concepts. The first three (analysis) steps in this process relate to the analysis of the work domain: 1. Use a Functional Abstraction Network model to capture the essential domain concepts and relationships that define the problem-space confronting the domain practitioners; 2. Overlay Cognitive Work Requirements on the functional model as a way of identifying the cognitive demands / tasks / decisions that arise in the domain and require support; 3. Identify the Information/Relationship Requirements for successful execution of these cognitive work requirements. Subsequently, there are two design steps: 1. Specifying the Representation Design Requirements (RDR) to define the shaping and processing for how the information/relationships should be represented to practitioner(s); and 2. Developing Presentation Design Concepts (PDC) to explore techniques to implement the RDRs. PDCs provide the syntax and dynamics of presentation forms, in order to produce the information transfer to the practitioner(s).
Safety assessment stage: 2, 6. Domains: nuclear, defence and many other. Application: x. References: [Elm, 2004]; [Gualtieri, 2005].

10. Adaptive User Model | Format: G | Purpose: H | Year: 1985
Aim/Description: Captures the human's preference structure by observing the information available to the human as well as the decisions made by the human on the basis of that information.
Remarks: Link with THERP.
Safety assessment stage: 4. Domains: computer, medical, education. Application: x. References: [FAA HFW]; [Freedy85].

Adaptive Voting: See N out of M vote.

11. ADMIRA (Analytical Dynamic Methodology for Integrated Risk Assessment) | Format: T | Purpose: R | Year: 1991
Aim/Description: ADMIRA is based on a Decision Tree approach. It utilises event conditional probabilities, which allows for the development of event trajectories without the requirement for detailed Boolean evaluation. In this way, ADMIRA allows for the dynamic evaluation of systems, as opposed to the conventionally available static approaches. Through a systematic design interrogation procedure it develops a complete series of logically linked event scenarios, which allows for the direct evaluation of the scenario probabilities and their associated consequences. Due to its interactive nature, ADMIRA makes possible the real-time updating of the model of the plant/system under examination.
Safety assessment stage: 4, 5. Domains: nuclear. Application: x. References: [Senni et al, 1991].
12. ADREP (Accident Data REPorting system) | Format: D | Year: 1975
Aim/Description: The ICAO ADREP database is based on the accident/incident data report supplied to the ICAO organisation. The database includes worldwide accident/incident data of aircraft (fixed wing and helicopter) heavier than 5,700 kg since 1970.
Remarks: The ADREP system was established by the 1974 ICAO Accident Investigation and Prevention Divisional Meeting. The States participating in the meeting considered it essential that a world accident data system be established and that ICAO be the custodian of the system. The States undertook to report their accidents to the system. The original ADREP system was developed in 1975 by an expert made available to ICAO by Australia.
Safety assessment stage: 8. Domains: aviation. Application: x x. References: [ATSB, 2004].

13. ADSA (Accident Dynamic Sequence Analysis) | Format: I | Purpose: H | Year: 1994
Aim/Description: Cognitive simulation which builds on CREWSIM. Designed to identify a range of diagnosis and decision-making error modes such as fallacy, the taking of procedural short-cuts, and delayed response. Performance Shaping Factors (PSF) in the model are linked to particular Psychological Error Mechanisms (PEMs), e.g. the PSF time pressure leading to the PEM of taking a short-cut. With this, the simulation approaches become (apparently) more able to generate realistic cognitive External Error Modes (EEMs) that have been observed to occur in real events and incidents.
Safety assessment stage: 3, 4. Domains: nuclear. Application: x x. References: [Kirwan95]; [Kirwan98-1].

14. AEA (Action Error Analysis) | Format: T | Purpose: H | Year: 1981
Aim/Description: Action Error Analysis analyses interactions between machine and humans. It is used to study the consequences of potential human errors in task execution related to directing automated functions. Very similar to FMEA, but applied to the steps in human procedures rather than to hardware components or parts.
Remarks: Any automated interface between a human and an automated process can be evaluated, such as pilot/cockpit controls, controller/display, or maintainer/equipment interactions.
Safety assessment stage: 3, 5. Domains: ATC. Application: x x x. References: [FAA00]; [Leveson95]; [MUFTIS3.2-I]; [ΣΣ93, ΣΣ97].

15. AEMA (Action Error Mode Analysis) | Format: T | Purpose: H | Year: 1994 or older
Aim/Description: Resembles Human HAZOP. Human errors for each task are identified using guidewords such as "omitted", "too late", etc. Abnormal system states are identified in order to consider consequences of carrying out the task steps during abnormal system states. Consequences of erroneous actions and abnormal system states are identified, as well as possibilities for recovery.
Safety assessment stage: 3, 6. Domains: offshore. Application: x. References: [Vinnem00].

16. AERO (Aeronautical Events Reports Organizer) | Format: D | Year: 2003 or older
Aim/Description: Aim is to organise and manage incidents and irregularities in a reporting system, to provide graphs and reports, and to share information with other users. AERO is a FileMaker database developed to support the management of the safety department of aviation operators. AERO was created to enhance communication between the safety department and all employees, reduce paper handling, and produce reports. The Data Sharing program allows all AERO Certified Users to benefit from the experience of the other users. AERO users review their monthly events and decide which ones to share with the rest of the companies using AERO.
Remarks: Safety Report Management and Analysis System.
Safety assessment stage: 8. Domains: aviation. Application: x x x. References: [GAIN AFSA, 2003]; http://www.aerocan.com.
17. AET Method (Arbeitswissenschaftliches Erhebungsverfahren zur Tätigkeitsanalyse Methode) (Job Task Analysis) | Format: T | Purpose: H | Year: 1978
Aim/Description: Job evaluation with a regard for stress and strain considerations. Assesses the relevant aspects of the work object, resources, tasks and requirements as well as the working environment. Focus is on components and combinations of a one-person job. AET is structured in three parts: tasks, conditions for carrying out these tasks, and the resulting demands upon the worker.
Remarks: Developed by K. Landau and W. Rohmert, TU Darmstadt (Germany).
Safety assessment stage: 2, 3. Domains: ergonomics. Application: x. References: [FAA HFW]; [Rohmert83].

Affinity Diagrams: See Card Sorting.

AGS (Analysis Ground Station): See Flight Data Monitoring Analysis and Visualisation.

18. AHP (Analytic Hierarchy Process) | Format: T | Purpose: H | Year: 1970
Aim/Description: Decision-making theory designed to reflect the way people actually think. Aims to quantify allocation decisions. The decision is first structured as a value tree, then each of the attributes is compared in terms of importance in a pairwise rating process. When entering the ratings the decision-makers can enter numerical ratios. The program then calculates a normalised eigenvector assigning importance or preference weights to each attribute. Each alternative is then compared on the separate attributes. This results in another eigenvector describing how well each alternative satisfies each attribute. These two sets of eigenvectors are then combined into a single vector that orders alternatives in terms of preference.
Remarks: AHP was developed in the 1970s by Dr. Thomas Saaty, while he was a professor at the Wharton School of Business. Software support available (e.g. Expert Choice (EC)).
Safety assessment stage: 2, 4, 5. Domains: nuclear, defence and many other. Application: x. References: [FAA HFW]; [Lehto97]; [MaurinoLuxhøj, 2002]; [AHP tutorial]; Wikipedia.
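The normalised-eigenvector step can be sketched as follows; the pairwise comparison matrix is a hypothetical three-attribute example on Saaty's 1-9 ratio scale, and power iteration is used here as one simple way to obtain the principal eigenvector (the entry above does not prescribe a particular solver):

```python
# Hypothetical pairwise comparisons: A[i][j] is how much more important
# attribute i is judged to be than attribute j (ratios on Saaty's scale).
A = [[1.0,   3.0,   5.0],
     [1/3.0, 1.0,   2.0],
     [1/5.0, 1/2.0, 1.0]]

def ahp_weights(a, iters=50):
    """Approximate the principal eigenvector by power iteration,
    normalised so the weights sum to 1."""
    n = len(a)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(a[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

weights = ahp_weights(A)
print([round(x, 3) for x in weights])  # roughly [0.65, 0.23, 0.12]
```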
19. AIDS (Accident Incident Data System) | Format: D | Year: 1978
Aim/Description: The FAA AIDS database contains incident data records for all categories of civil aviation in the US. Incidents are events that do not meet the aircraft damage or personal injury thresholds contained in the National Transportation Safety Board (NTSB) definition of an accident. The information contained in AIDS is gathered from several sources including incident reports on FAA Form 8020-5. The data are presented in a report format divided into the following categories: Location Information, Aircraft Information, Operator Information, Narrative, Findings, Weather/Environmental Information, and Pilot Information and other data fields.
Remarks: The FAA AIDS database contains incidents that occurred between 1978 and the present.
Safety assessment stage: 8. Domains: aviation. Application: x x. References: [AIDS].

20. AIPA (Accident Initiation and Progression Analysis) | Format: T | Purpose: H | Year: 1975
Aim/Description: Models the impact of human errors. Uses event trees and fault trees to define the explicit human interactions that can change the course of a given accident sequence and to define the time allowed for corrective action in that sequence. A time-dependent operator response model relates the time available for correct or corrective action in an accident sequence to the probability of successful operator action. A time-dependent repair model accounts for the likelihood of recovery actions for a sequence, with these recovery actions being highly dependent on the system failure modes.
Safety assessment stage: 4. Domains: nuclear. Application: x. References: [Fleming, 1975].
21. Air Safety Database | Format: D | Year: 1998
Aim/Description: This database consists of accident data from a large number of sources including, for instance, official international reporting systems (e.g. ICAO ADREP), Accident Investigation Agencies, and insurance companies. These sources provide data for virtually all reported ATM-related accidents. The database also contains exposure data (e.g. number of flights) and arrival and departure data of commercial aircraft at airports worldwide.
Remarks: Maintained at NLR. Currently, the database includes almost 500,000 records of incidents, serious incidents and accidents.
Safety assessment stage: 3, 8. Domains: aviation, ATM. Application: x x x x x. References: [VanEs01].

22. Air Traffic Control Training Tools | Format: T | Purpose: T | Year: 1980
Aim/Description: Air Traffic Control Training Tools provide human-in-the-loop simulation environments for air traffic control operators. Examples of tools are:
• ARTT (Aviation Research and Training Tools) (Adacel, 2002): aviation research and training, simulating Tower, Radar, Driver, and Coms. Provides visual display on computer screen or large screen displays.
• AT Coach (UFA Inc., 1995): products supporting standalone training, ATC Automation system based training and testing, airspace modelling, and voice recognition based simulation control. There are two simulation systems: the AT Coach Standalone Simulation and the AT Coach Embedded Simulator.
• AWSIM (Warrior Preparation Center, early 1980s): real-time, interactive, entity-level air simulation system. Provides capability for training, mission rehearsal, doctrine and procedures development, experimentation and operational plans assessment.
Safety assessment stage: 2, 7. Domains: ATC, defence. Application: x. References: [GAIN ATM, 2003]; [FAA HFW]; [MaraTech].

AirFASE (Aircraft Flight Analysis & Safety Explorer): See Flight Data Monitoring Analysis and Visualisation.

23. Air-MIDAS (Air- Man-Machine Integrated Design and Analysis System) | Format: I | Purpose: H | Year: about 1998
Aim/Description: Predictive model of human operator performance (flight crew and ATC) to evaluate the impact of automation developments in flight management and air traffic control. The model is used to predict the performance of flight crews and ATC operators interacting with automated systems in a dynamic airspace environment. The purpose of the modelling is to support evaluation and design of automated aids for flight management and airspace management and to predict required changes in both domains.
Remarks: Air-MIDAS was developed by members of the HAIL (Human Automation Integration Laboratory) at SJSU (San Jose State University). It is currently being used for the examination of advanced air traffic management concepts in projects sponsored by NASA ARC (Ames Research Center) and Eurocontrol.
Safety assessment stage: 4, 5. Domains: ATM, aviation. Application: x x x x. References: [Air-MIDAS web]; [GoreCorker, 2000]; [HAIL].
24. AIRS (Aircrew Incident Reporting System) | Format: D | Purpose: H | Year: 1996
Aim/Description: AIRS is a confidential human factors reporting system that provides airlines with the necessary tools to set up an in-house human performance analysis system. It was established to obtain feedback from operators on how well Airbus aircraft operate; to identify the significant operational and technical human performance events that occur within the fleet; to develop a better understanding of how the events occur; to develop and implement design changes, if appropriate; and to inform other operators of the "lessons learned" from the events. AIRS aims to provide an answer to "what" happened as well as to "why" a certain incident and event occurred. The analysis is essentially based on a causal factor analysis, structured around the incorporated taxonomy. The taxonomy is similar to the SHEL model and includes environmental, informational, personal, and organisational factors that may have had an influence on crew actions.
Remarks: AIRS is part of the Airbus Flight Operations Monitoring package. Over 20 airlines are using the system and several more are considering it. Based on BASIS software.
Safety assessment stage: 3, 7, 8. Domains: aviation. Application: x x x. References: [AIRS example]; [GAIN AFSA, 2003]; [Benoist].

25. AIRS (Area Information Records System) | Format: D | Year: 1967
Aim/Description: The AIRS is a group of integrated, regional systems for the storage, analysis, and retrieval of information by public safety and justice agencies through the efficient and effective use of electronic data processing.
Remarks: Developed by Environmental Systems Corporation.
Safety assessment stage: 7, 8. Domains: police. Application: x. References: [AIRS].

26. Analysable Programs | Format: G | Purpose: D | Year: 1987 or older
Aim/Description: Aim is to design a program in a way that program analysis is easily feasible. The program behaviour must be testable completely on the basis of the analysis.
Remarks: Necessary if the verification process makes use of statistical program analysis techniques. Complementary to program analysis and program proving. Tools available. Software design & development phase.
Safety assessment stage: 6. Domains: computer. Application: x. References: [Bishop90]; [EN 50128]; [Rakowsky].

27. Analysis of field data | Format: T | Purpose: R | Year: 1984 or older
Aim/Description: In-service reliability and performance data is analysed to determine the observed reliability figures and the impacts of failures. It feeds back into redesign of the current system and the estimation processes for new, but similar, systems. Scoped to the analysis of performance data of technical equipment.
Remarks: Variants are Stochastic analysis of field data and Statistical analysis of field data. See also Field study.
Safety assessment stage: 6, 8. Domains: many. Application: x. References: [Groot&Baecher, 1993].

Animation: See Prototype Development or Prototyping or Animation.

28. AoA (Analysis of Alternatives) | Format: T | Purpose: Dh | Year: 1975
Aim/Description: Alternatives for a particular system or procedure are analysed, including the no-action alternative. The AoA attempts to arrive at the best value for a set of proposals received from the private sector or other sources.
Remarks: AoA is the new name for Cost and Operational Effectiveness Analysis (COEA) or Production Readiness Analysis.
Safety assessment stage: 6. Domains: nuclear, defence, road. Application: x x. References: [MIL-HDBK]; Wikipedia.

29. APHAZ (Aircraft Proximity HAZards) | Format: D | Year: 1989
Aim/Description: APHAZ reporting was introduced by the UK CAA in 1989. In these reports air traffic controllers describe conflicts between aircraft, mostly in terminal manoeuvring areas.
Remarks: One should note that the APHAZ reporting rate seemed to increase significantly after the introduction of the Safety Monitoring Function.
Safety assessment stage: 8. Domains: ATM. Application: x x x. References: [CAA9095].
30. APJ (Absolute Probability Judgement) | Format: T | Purpose: H | Year: 1981 or older
Aim/Description: Estimates human error probabilities. For this, experts are asked their judgement on the likelihood of specific human errors, and the information is collated mathematically for inter-judge consistency. Two forms: Group APJ and Single expert APJ. For the former, there are four major methods: Aggregated individual method, Delphi method, Nominal group technique, and Consensus group method. Not restricted to human error only.
Remarks: Can be used together with PC. Another name for APJ is Direct Numerical Estimation. See also SLIM. See also Delphi method.
Safety assessment stage: 5. Domains: offshore, nuclear, rail. Application: x x. References: [Humphreys88]; [Kirwan94]; [MUFTIS3.2-I]; [SeaverStillwell, 1983]; Wikipedia.
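For the aggregated individual method, independent estimates can be combined mathematically; a geometric mean is a common choice for probabilities spread over orders of magnitude. A minimal sketch with hypothetical expert estimates (the entry above does not prescribe this particular formula):

```python
import math

# Hypothetical expert estimates of one human error probability (HEP).
estimates = [1e-3, 5e-3, 2e-3, 1e-2]

def aggregate_hep(values):
    """Geometric mean of independent expert judgements."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

hep = aggregate_hep(estimates)
print(f"group HEP estimate: {hep:.1e}")  # about 3.2e-3
```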
APMS (Aviation Performance Measuring System): See Flight Data Monitoring Analysis and Visualisation.

31. APRECIH (Analyse PREliminaire des Conséquences de l'Infiabilité Humaine) | Format: T | Purpose: H | Year: 1999
Aim/Description: Preliminary Analysis of Consequences of Human Unreliability. Focuses on the consequence assessment of human behavioural deviations independently of the probabilities of the occurrence of human errors. APRECIH classifies scenarios of unreliability using a three-dimensional cognitive model that includes: acquisition-based unreliability, problem solving-based unreliability and action-based unreliability. It consists of four consecutive steps: 1) Functional analysis of the human-machine system; 2) Procedural and contextual analysis; 3) Identification of task characteristics; 4) (Qualitative) Consequence analysis.
Remarks: Design phase.
Safety assessment stage: 3, 4, 5. Domains: rail. Application: x. References: [PROMAI5]; [Vanderhaegen&Telle98].

32. AQD (Aviation Quality Database) | Format: D | Year: 1998
Aim/Description: AQD is a comprehensive and integrated set of tools to support Safety Management and Quality Assurance. Provides tools for data gathering, analysis and planning for effective risk management. AQD can be used in applications ranging from a single-user database to operations with corporate databases over wide-area networks. AQD gathers Incident, Accident and Occurrence Reports together with internal and external quality and safety audits for joint analysis. It also offers tools for creating internal audit programs, assisting with audits for all airline departments, tracking corrective and preventive actions, integrating external audit requirements and analysing and reporting trends in quality indicators.
Remarks: In [RAW2004], AQD is referred to as one of the big three Safety Event and Reporting Tools, along with BASIS and AVSiS. Ref. [GAIN GST03] refers to AQD as a clone of ASMS and states that AQD and ASMS are compatible in the sense that external organisations are able to gather their own occurrence data, track their own audit corrective actions, analyse the data and report their safety performance to CAA via an electronic interface. In practice, AQD is only used by larger organisations. Version 5 was released in 2005.
Safety assessment stage: 8. Domains: aviation. Application: x x x x. References: [GAIN AFSA, 2003]; [Glyde04]; [RAW2004]; [GAIN GST03].

Architectural Design Analysis: See SADA (Safety Architectural Design Analysis).
33. ARP 4761 and ARP I Dh 1994 Guidelines and methods for conducting safety ARP 4754 is the higher level 2 3 4 5 6 7 avionics x x • [ARP 4754]
4754 Ds assessment on civil airborne systems and equipment, document dealing with general aircraft • [ARP 4761]
(Aerospace including hardware as well as software. The certification. ARP 4761 gives a • [Klompstra&Everdij9
Recommended Practice methodology consists of the steps Functional Hazard more detailed definition of the 7]
documents 4761 and Assessment (FHA), Preliminary System Safety safety process. It is a refinement • [Lawrence99]
4754) Assessment (PSSA), System Safety Assessment and extension of the JAR-25 and • Wikipedia
(SSA). In addition, CCA is performed throughout the was developed by the Society of
other steps. CCA, FHA, PSSA and SSA are described Automotive Engineers (SAE). In
separately in this database list. principle, the guidelines in the
ARP documents are written for
electronic systems, but may also
be considered for other aircraft
systems.
34. Artificial Intelligence T M 1995 Aim is to react to possible hazards in a very flexible Software architecture phase. 6 computer x • [EN 50128]
Fault Correction or way by introducing a mix (combination) of process • [Rakowsky]
older models and some kind of on-line safety and reliability
analysis.
Artificial Neural See Neural Networks
Networks
35. ART-SCENE T Dh 2002 ART-SCENE is a process with Web-enabled tool ART-SCENE was developed by 6 ATM x x • [ART-SCENE web]
(Analysing Ds support that organisations can use to generate and City University's Centre for HCI • [ART-SCENE slides]
Requirements Trade- walk through scenarios, and thus discover the Design in London. Its origins
offs - Scenario complete and correct requirements for new computer were in the EU-funded
Evaluation) systems. It enhances current Rational Unified Framework IV 21903 'CREWS'
Processes and Use Case approaches to systems long-term research project. Since
development. then ART-SCENE has been
evaluated and extended in the UK
EPSRC-funded SIMP project and
bi-lateral projects, primarily with
Eurocontrol and the UK's
National Air Traffic Services.
See also CREWS approach.
ARTT See Air Traffic Control Training
(Aviation Research and Tools
Training Tools)
36. ASAT T R 2000 ASAT is a Monte Carlo simulation tool to estimate Developed by ATSI (Air Traffic 2 5 6 aviation x x x • [FAA-AFS-420-86]
(Airspace Simulation ? e.g. probability of mid-air collision during terminal en Simulation, Inc.) ATM • [Lankford03]
and Analysis for TERPS route phase. Uses statistical input for Aircraft (flight
(Terminal En-route dynamics, propulsion/performance, wake turbulence,
Radar Procedures)) on board avionics), Geographical/Geodetic (digital
terrain elevation data, obstacles), Environmental
(standards atmosphere, non-standards atmosphere,
measured wind and temperature gradients data),
Navigation ground systems, Surveillance (PRM, ASR-
9, ARSR, TCAS, ADS-B), Human factors (pilot,
ATC). ASAT can provide answers either in a
deterministic or a probabilistic way.
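The Monte Carlo principle behind a tool like ASAT can be sketched in a few lines. This is not ASAT's actual model: the parallel-track geometry, the Gaussian track errors, and the 1 nm loss-of-separation threshold are all invented for illustration.

```python
import random

def estimate_conflict_probability(n_runs=100_000, nominal_spacing_nm=5.0,
                                  track_error_sd_nm=2.0,
                                  loss_threshold_nm=1.0, seed=42):
    """Crude Monte Carlo sketch: sample lateral track errors for two
    aircraft on parallel routes and count how often their actual spacing
    drops below a threshold. All numbers are invented; ASAT itself uses
    detailed flight-dynamics, navigation and surveillance models."""
    rng = random.Random(seed)
    conflicts = 0
    for _ in range(n_runs):
        pos_a = rng.gauss(0.0, track_error_sd_nm)
        pos_b = rng.gauss(nominal_spacing_nm, track_error_sd_nm)
        if abs(pos_b - pos_a) < loss_threshold_nm:
            conflicts += 1
    return conflicts / n_runs
```

The estimate converges as the number of runs grows, which is why such tools report probabilistic as well as deterministic answers.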
Id | Method name | Format | Purpose | Year | Aim/Description | Remarks | Safety assessment stage (1 2 3 4 5 6 7 8) | Domains | Application (Hw Sw Hu Pr Or) | References
37. ASCOT T O 1992 ASCOT provides organisational self-assessment of Qualitative. Developed by IAEA 7 8 nuclear x • [Kennedy&Kirwan98]
(Assessment of Safety safety culture. A review of safety culture involves (International Atomic Energy
Culture in Organisations consideration of all organisations which influence it, Agency).
Team) including the operating organisation, the regulator and
any supporting organisations. For each of these
organisations, there are guide questions which should
be asked during a review of safety culture and key
indicators of an effective safety culture which are used
to assess the responses to these questions.
38. ASEP T H 1987 Abbreviated and slightly modified version of THERP. Nuclear specific tool, developed 5 nuclear x • [HIFA Data]
(Accident Sequence ASEP comprises pre-accident screening with nominal by A.D. Swain. ASEP provides a • [Kirwan94]
Evaluation Programme) human reliability analysis, and post-accident screening shorter route to human reliability • [Kirwan&Kennedy&
and nominal human reliability analysis facilities. analysis than THERP by Hamblen]
Consists of four procedures: Pre-accident tasks, Post- requiring less training to use the • [Straeter00]
accident tasks, Screening human reliability analysis, tool, less expertise for screening • [Straeter01]
Nominal human reliability analysis. estimates, and less time to
complete the analysis.
Is often used as screening method
to identify human actions that
have to be assessed in more detail
using THERP. However, is more
conservative.
39. ASHRAM T H 2000 ASHRAM allows aviation researchers to analyze ASHRAM is a second-generation 8 aviation x • [Fitzgerald, 2007]
(Aviation Safety aviation accidents and incidents that involve human human reliability analysis
Human Reliability errors in ways that account for the operational context, developed by the Nuclear
Analysis Method) crew expectations, training, airframe-related human- Regulatory Commission’s Sandia
system interfaces, crew resource management, and National Laboratories. Based on
generic human-error mechanisms. It examines the ATHEANA, but adapted for
airframe and airspace situational factors, pilot aviation purposes.
performance-shaping factors, and error mechanisms
identified by cognitive psychology to explain and
model the overt and covert events leading up to an
unsafe act. The ASHRAM cognitive model uses three
cognitive functions: environmental perception,
reasoning and decision-making, and action.
40. ASIAS D 2007 The primary objective of ASIAS is to provide a Created by FAA. 3 8 aviation x x x x x • [ASIAS portal]
(Aviation Safety national resource for use in discovering common, ASIAS gathers data from over 73 • [Randolph, 2009]
Information Analysis systemic safety problems that span multiple airlines, U.S. commercial operators.
and Sharing) fleets and regions of the global air transportation Its focus is currently on the
system. ASIAS leverages internal FAA data, de- integration of commercial
identified airline safety data and other government and aviation data, but future plans
publicly available data sources. It fuses these data include the expansion of ASIAS
sources in order to proactively identify trends in the to other sectors of the air
National Airspace System (NAS) and to assess the transportation system.
impact of changes in the aviation operating Former name is NASDAC
environment. Safety information discovered through Database (National Aviation
ASIAS analytic activities is used across the industry to Safety Data Analysis Center
drive improvements and support Safety Management Database).
Systems.
41. ASMS D 1991 ASMS is a relational database that links information Purpose: to provide the New 8 aviation x x x • [GAIN ATM, 2003]
(Aviation Safety on aviation document holders with safety failures Zealand aviation community with • [GAIN GST03]
Monitoring System) (occurrences and non-compliances) and tracks safety information as determined
corrective actions. It is fully integrated with CAA’s from accidents and incidents. It is
management information system and contains tools for also used to track corrective
creating and maintaining a database, customising and actions against non-compliances
creating occurrence reports, tracking safety that are detected during proactive
investigations, analysing data, and tracking corrective surveillance. It was
actions. Risk management is facilitated through the use commissioned in 1991.
of severity and likelihood codes. Automated Ref. [GAIN GST03] refers to
Occurrence Report forms provide assistance in AQD as a clone of ASMS and
entering data and provide an audit trail of changes states that AQD and ASMS are
made. Investigation reports support full multimedia, compatible in the sense that
including pictures. external organisations are able to
gather their own occurrence data,
track their own audit corrective
actions, analyse the data and
report their safety performance to
CAA via an electronic interface.
42. ASMT T R 2000 ASMT provides an automatic monitoring facility for ASMT was developed by the 7 aviation x • [GAIN ATM, 2003]
(Automatic Safety safety related occurrences based on operational data. It Eurocontrol Experimental Centre ATM
Monitoring Tool) detects and categorises each occurrence for assessment (EEC), in co-operation with the
by trained operational experts. The tool will help Maastricht Upper Airspace
determine causes and assist in the evolution of local Centre, for pilot operational use
procedures, airspace design, equipment and in 2000. It is also being used as
techniques. ASMT collects proximity-related part of the real time ATM
occurrences. It will begin collecting ACAS simulation facilities at the EEC.
occurrences through Mode-S stations, altitude
deviations, runway incursions, airspace penetrations,
and route deviations.
43. ASP T R 1979 ASP is a program containing several models for risk Established by the NRC (Nuclear 4 5 nuclear x x • [HRA Washington]
(Accident Sequence assessment. It identifies nuclear power plant events Regulatory Commission) in 1979 • [NRC-status99]
Precursor) that are considered precursors to accidents with the in response to the Risk • [NSC-ANSTO]
potential for severe core damage and uses risk Assessment Review Group
assessment methodologies to determine the report. In 1994, INEEL (Idaho
quantitative significance of the events. ASP models National Engineering and
contain event trees that model the plant response to a Environmental Laboratory)
selected set of initiating events. When a precursor to started the development for US
be analysed involves one of these initiating events, an NRC of a Human Reliability
initiating event assessment is performed. Analysis methodology as part of
ASP.
44. ASRM T R 1999 The ASRM is a decision support system aimed at ASRM was originally developed 4 5 aviation x • [Luxhøj, 2002]
(Aviation Safety Risk predicting the impacts of new safety technologies/ for use by US Naval Aviation, • [Cranfield, 2005]
Model) interventions upon aviation accident rate. First the but has since been used more • [Luxhøj, 2005]
interactions of causal factors are modelled. Next, widely within the aviation • [LuxhøjCoit, 2005]
Bayesian probability and decision theory are used to industry. It makes use of HFACS. • [LuxhøjOztekin,
quantify the accident causal models and to evaluate ASRM is being enhanced and 2005]
the possible impacts of new interventions. Each such further developed by the NASA
model is a BBN, and the models are combined into a Aviation Safety Program Office
Hierarchical BBN, i.e. a HBN. The entire process is to evaluate the projected impact
largely based on expert judgments. ASRM uncertainty upon system risk reduction of
and sensitivity analyses are supported by a tool named multiple new technology
BN-USA (Bayesian Network-Uncertainty and insertions/ interventions into the
Sensitivity Analyses). National Airspace System.
45. ASRS D 1975 The ASRS receives, processes and analyses The ASRS was established in 3 8 aviation x x x x • [ASRS web]
(Aviation Safety voluntarily submitted incident reports from pilots, air 1975 under a memorandum of ATM • [GAIN ATM, 2003]
Reporting System) traffic controllers, and others. Reports submitted to agreement between FAA and • [FAA HFW]
ASRS describe both unsafe occurrences and hazardous NASA. • Wikipedia
situations. ASRS’s particular concern is the quality of Datamining tool: QUORUM
human performance in the aviation system. Perilog
Individuals involved in aviation operations (pilots,
crew members, ground personnel, etc.) can submit
reports to the ASRS when they are involved in or
observe a situation that they believe compromised
safety. These reports are voluntary and submitted at
the discretion of the individual. Teams of experienced
pilots and air traffic controllers analyse each report
and identify any aviation hazards.
46. Assertions and T Ds 1976 Software Testing technique. Aim is to produce code Applicable if no complete test or 7 computer x • [Bishop90]
plausibility checks or whose intermediate results are continuously checked analysis is feasible. Related to • Wikipedia
older during execution. In case of incorrect results a safety self-testing and capability
measure is taken. checking. Tools available. See
also Software Testing.
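A minimal sketch of the technique: inputs and the intermediate result are checked during execution, and an implausible value triggers a safety measure (here, an exception) instead of propagating silently. The function and the 150 m/s plausibility bound are invented examples.

```python
def average_speed_mps(distance_m, time_s):
    """Plausibility-checked computation, as the entry describes: results
    are continuously checked while the code runs, and an incorrect or
    implausible value leads to a safety measure being taken."""
    assert distance_m >= 0.0, "distance must be non-negative"
    assert time_s > 0.0, "time must be positive"
    speed = distance_m / time_s
    # Plausibility check on the result (invented bound for illustration).
    if not 0.0 <= speed <= 150.0:
        raise ValueError(f"implausible speed: {speed} m/s")
    return speed
```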
47. Assessment Framework T H 2002 This is a translation matrix between the process iCMM stands for integrated 5 6 x • [FAA HFW]
for Human Factors improvement approach advocated by iCMM and that Capability Maturity Model, • [FAA TM]
Process Improvement advocated by the FAA Human Factors Job Aid. sometimes referred to as • http://www.hf.faa.gov/
Content Outline: 1. HF Program Management; 2. Capability Maturity Model docs/508/docs/Transla
Identification of HF Risks and Requirements; 3. Integration (CMMI). tionMatix.pdf
Conduct HF Mitigation and Integration; 4. Conduct • Wikipedia
HF Verification, Validation, & Evaluation; 5. Other;
6. HF Process and Improvement
AT Coach See Air Traffic Control Training
Tools
48. ATCS PMD D 1999 This database aims at selecting appropriate Provides a compilation of 2 7 ATC x x • [FAA HFW]
(Air Traffic Control performance measures that can be used for evaluation techniques that have been proven • [ATCSPMD]
Specialist Performance of FAA NAS (National Airspace System) operations effective for use in human factor • [Hadley99]
Measurement Database) concepts, procedures, and new equipment. This research related to air traffic
database is intended to facilitate measurement of the control.
impact of new concepts on controller performance. Developed by FAA in 1999.
Using standard database techniques, a researcher can
search the database to select measures appropriate to
the experimental questions under study. With the
selection of a particular measure(s), the database also
provides citations for the primary source of the
measure and additional references for further
information. Having a set of measures with
standardised parameters will increase the reliability of
results across experiments, and enable comparisons of
results across evaluations.
49. ATHEANA T H 1996 Aim is to analyse operational experience and Developed by NRC (Nuclear 8 nuclear x • [Kirwan98-1]
(A Technique for understand the contextual causes of errors, and then to Regulatory Commission). • Wikipedia
Human Error ANAlysis) identify significant errors not typically included in Currently the method relies on
PSAs for nuclear power plants, e.g. errors of operational experience and expert
commission. Key human failure events and associated judgement. It is the intention of
procedures etc. are identified from the PSA, and the authors to produce guidance
unsafe acts are then identified that could affect or material on the technical basis of
cause these events. Associated error-forcing the model. Such material could
conditions are then identified that could explain why reduce the reliance on expert
such unsafe acts could occur. The important point is judgement and increase the
that these forcing conditions are based on the system auditability of the technique.
being assessed, i.e. the real context that is the focus of Goes beyond THERP in its
the assessment. capability to account for and
predict human errors, by
examining cognitive processes.
See also ASHRAM.
50. ATLAS I H 1996 ATLAS is a performance modelling software package Developed by Human 2 8 ATM, x • [Hamilton, 2000]
designed to support Human Factors Integration studies Engineering Limited (UK). rail, • [FAA HFW]
from an early stage in system development. It can be Supports a variety of offshore,
applied to predict and assess operator performance in conventional task analysis defence
critical operating scenarios. It combines a graphically- methods (including hierarchical
based task analysis with a database, aiming at task analysis (HTA), timeline
maximizing the value of task analysis data. The analysis (TLA) and tabular task
analysis data structure was based on GOMS. The task analysis (TTA)) and incorporates
data can be viewed and exported in various ways. more than 60 human
performance, workload, and
human reliability algorithms.
Atmospheric Dispersion See Dispersion Modelling or
Modelling Atmospheric Dispersion
Modelling
51. ATSAT T H 1995 Function is to categorise pilot/controller voice ATSAT uses the phraseology 2 3 ATM x • [FAA HFW]
(Aviation Topics Speech communication to identify communication problems standard contained in FAA Order • [Prinzo95]
Acts Taxonomy) and develop time based performance metrics. ATSAT 7119.65 Handbook of Air Traffic • [Prinzo02]
supports the encoding and hierarchical arrangement of Control. • [GAIN ATM, 2003]
operator and task performance. The encoded messages
can be entered in a spreadsheet and be imported for
statistical analysis.
52. ATWIT T H 1985 ATWIT aims to measure mental workload in “real- The ATWIT tool has been 7 ATM x • [FAA HFW]
(Air Traffic Workload time” by presenting auditory and visual cues that developed and has been in use at • [Stein85]
Input Technique) prompt a controller to press one of seven buttons on the FAA Technical Center.
the workload assessment keypad (WAK) within a
specified amount of time to indicate the amount of
mental workload experienced at that moment.
53. Avalanche/Stress T Ds 1995 Software Testing technique. Helps to demonstrate See also Software Testing. 7 computer x • [EN 50128]
Testing or robustness to overload. There are a variety of test • [Jones&Bloomfield&
older conditions that can be applied. Under these test Froome&Bishop01]
conditions, the time behaviour of the test object is • [Rakowsky]
evaluated. The influence of load changes is observed. • Wikipedia
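The load-increase idea can be sketched as follows. The component and the load levels are invented; real avalanche/stress testing would drive the actual system near and beyond its specified limits and evaluate the recorded time behaviour.

```python
import time

def process_batch(items):
    # Invented component under test: a simple work loop.
    return [x * x for x in items]

def stress_test(component, max_load=10_000, factor=10):
    """Avalanche/stress sketch: drive the component with geometrically
    increasing load and record how processing time grows, so its time
    behaviour under load changes can be evaluated afterwards."""
    results = []
    load = factor
    while load <= max_load:
        start = time.perf_counter()
        component(list(range(load)))
        elapsed = time.perf_counter() - start
        results.append((load, elapsed))
        load *= factor
    return results
```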
Aviation Safety Data See Data Mining
Mining Workbench
54. Avoidance of G D 1987 To minimise the chance of error by making the system See also Measurement of 6 computer x x • [Bishop90]
Complexity as simple as possible. Complexity.
55. AVSiS D 2003 AVSiS is a safety event logging, management and AVSiS was developed by AvSoft 8 aviation x x x • [GAIN AFSA, 2003]
(Aviation Safety analysis tool. Events are divided into two groups: and runs on Windows PCs (95, • [RAW2004]
Information System) happenings (which are noteworthy but not actual 98, NT, 2000 or XP).
incidents), and incidents. Most events recorded will be In [RAW2004], AVSiS is
incidents. The Flight Safety Officer (FSO) on receipt referred to as one of the big three
of an event report consolidates the information into the Safety Event and Reporting
AVSiS system. Reports may be received and Tools, along with BASIS and
consolidated electronically or entered manually. AQD.
AVSiS presents easy to follow forms, with standard AVSiS Version 2.0p was released
pick lists (for example, event type, phase of flight, in 2003.
etc.) and text fields to enable detailed descriptions as
required. The FSO may then request follow up reports
from either internal or external departments (where the
cause is assigned to an internal department, the FSO
may also assign human factor(s)). Event severity is
assessed and recorded on two scales, severity and
likelihood. Once all the information about the event
has been obtained, the FSO may record
recommendations for actions to rectify any safety
system weaknesses identified. As with requested
reports, AVSiS enables the FSO to record
recommendations made and whether or not they have
been accepted and then implemented.
AWSIM See Air Traffic Control Training
(Air Warfare Simulation) Tools
56. Back-to-back testing T Ds 1986 Software Testing technique. To detect test failures by Useful if two or more programs 7 computer x • [Bishop90]
or comparing the output of two or more programs are to be produced as part of the
older implemented to the same specification. Also known as normal development process.
Comparison Testing. See also Software Testing.
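A minimal sketch of back-to-back (comparison) testing, assuming two independently written implementations of the same sorting specification; any output mismatch signals a fault in at least one of them. The implementations here are invented examples.

```python
def sort_reference(xs):
    # First implementation: the language's built-in sort.
    return sorted(xs)

def sort_under_test(xs):
    # Second, independently written implementation of the same
    # specification (insertion sort), as back-to-back testing assumes.
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def back_to_back(test_inputs):
    """Run both implementations on the same inputs; any discrepancy is a
    detected test failure to be investigated."""
    return [xs for xs in test_inputs
            if sort_reference(xs) != sort_under_test(xs)]
```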
57. Backward Recovery T Ds 1995, Back-up to a previous state that was known to be Software architecture phase. 6 computer x • [EN 50128]
probably correct; then no (or little) knowledge of the error is rail • [Rakowsky]
older needed. The Backward Recovery approach tends to be • [SSCS]
more generally applicable than the forward recovery
approach - errors are often unpredictable, as are their
effects.
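The checkpoint-and-roll-back idea can be sketched as follows; the state and operations are invented. As the entry notes, only the existence of a known-correct earlier state is assumed, not any knowledge of the error or its effects.

```python
import copy

class RecoverableState:
    """Minimal backward-recovery sketch: save a known-good state before an
    operation and restore it if the operation fails."""
    def __init__(self, state):
        self.state = state
        self._checkpoint = None

    def attempt(self, operation):
        self._checkpoint = copy.deepcopy(self.state)  # save known-good state
        try:
            operation(self.state)
            return True
        except Exception:
            self.state = self._checkpoint  # backward recovery: roll back
            return False

def faulty_withdrawal(state):
    # Invented operation that corrupts the state before an error is detected.
    state["balance"] -= 500
    raise RuntimeError("postcondition violated: balance went negative")

def deposit(state):
    state["balance"] += 10
```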
58. Barrier Analysis T M 1985 Barrier analysis is a structured way to consider the Similar to ETBA (Energy Trace 3 6 chemical x • [FAA00]
events related to a system failure. It suggests that an and Barrier Analysis). Barrier nuclear • [KirwanAinsworth92]
incident is likely preceded by an uncontrolled transfer analysis is a qualitative tool for road • [ΣΣ93, ΣΣ97]
of energy and therefore for an incident to occur there systems analysis, safety reviews, rail • [FAA HFW]
needs to be: 1. A person present 2. A source of energy and accident analysis. Combines • Wikipedia
3. A failed barrier between the two. Barriers are with MORT.
developed and integrated into a system or work
process to protect personnel and equipment from
unwanted energy flows. Is implemented by identifying
energy flow(s) that may be hazardous and then
identifying or developing the barriers that must be in
place to prevent the energy flow from damaging equipment, and/or causing
system damage, and/or injury. Can also be used to
identify unimaginable hazards.
59. BASIS D 1992 Database based on voluntary reporting. BASIS Air Supporting tools available, e.g. 3 8 aviation x x x x x • [GAIN AFSA, 2003]
(British Airways Safety Safety Reporting is used to process and analyse flight BASIS Flight Data Tools, • [RAW2004]
Information System) crew generated reports of any safety related incident. purpose of which is to gather and
It has been regularly updated since its inception and analyze digital data derived from
has become the world’s most popular aviation safety onboard flight data recorders in
management tool (according to British Airways). The support of an airline’s Flight Data
following modules are available: Air Safety Reporting Monitoring (FDM) Programme -
(ASR); Safety Information Exchange (SIE); Ground known in the U.S. as Flight
and Cabin Safety modules. Operations Quality Assurance
(FOQA). The following modules
are available: Flight Data Traces
(FDT); Flight Data Events (FDE);
Flight Data Measurements
(FDM); Flight Data Simulation
(FDS); Flight Data Home (FDH).
In [RAW2004], BASIS is
referred to as one of the big three
Safety Event and Reporting
Tools, along with AQD and
AVSiS.
Bayes Networks See BBN (Bayesian Belief
Networks)
Bayesian Networks See BBN (Bayesian Belief
Networks)
60. BBN M 1950 BBN (also known as Bayesian networks, Bayes Bayesian belief networks are 4 5 ATM x x x x x • [Adusei-Poku, 2005]
(Bayesian Belief networks, Probabilistic cause-effect models and based on the work of the aviation • [Belief networks]
Networks) Causal probabilistic networks), are probabilistic mathematician and theologian medical • [BBN04]
networks derived from Bayes theorem, which allows Rev. Thomas Bayes (1702-1761), diagnosis • [GAIN ATM, 2003]
the inference of a future event based on prior who worked with conditional finance • [Bayesian web]
evidence. A BBN consists of a graphical structure, probability theory in the late computer • [Pearl, 1985]
encoding a domain's variables, the qualitative 1700s to discover a basic law of
• [FAA HFW]
relationships between them, and a quantitative part, probability, which was then
• Wikipedia
encoding probabilities over the variables. A BBN can called Bayes’ rule: p(A | B) = (
be extended to include decisions as well as value or p(A) * p(B | A) ) / p(B). The term
utility functions, which describe the preferences of the Bayesian came in use around
decision-maker. BBN provide a method to represent 1950. The term "Bayesian
relationships between propositions or variables, even networks" was coined by Judea
if the relationships involve uncertainty, Pearl (1985).
unpredictability or imprecision. By adding decision Tools available, e.g. SERENE
variables (things that can be controlled), and utility (SafEty and Risk Evaluation
variables (things we want to optimise) to the using bayesian NEts), see [GAIN
relationships of a belief network, a decision network ATM, 2003]; HUGIN. See also
(also known as an influence diagram) is formed. This http://powerlips.ece.utexas.edu/~j
can be used to find optimal decisions, control systems, oonoo/Bayes_Net/bayes.html.
or plans. See also ASRM, BBN, DBN,
HBN.
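The quoted Bayes' rule can be applied directly. The numbers below (a prior hazard probability and alert likelihoods) are invented for illustration; a full BBN generalises this single-node update to a whole graph of conditional probability tables.

```python
def bayes_posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Bayes' rule as quoted in the entry: p(A|B) = p(A)*p(B|A) / p(B),
    with p(B) expanded by the law of total probability."""
    p_b = p_a * p_b_given_a + (1.0 - p_a) * p_b_given_not_a
    return p_a * p_b_given_a / p_b

# Invented numbers: prior hazard probability 1%, an alert that fires with
# probability 0.9 given the hazard and 0.05 otherwise (false alarm).
posterior = bayes_posterior(0.01, 0.9, 0.05)
```

Even with a fairly reliable alert, the low prior keeps the posterior hazard probability modest (about 0.15 here), which is the kind of inference a BBN propagates through its whole network.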
61. BDD M 1959 Represents a Boolean function, by means of a rooted, Introduced by C.Y. Lee, and 4 computer x x • Wikipedia
(Binary Decision directed, acyclic, binary graph, which consists of further developed by others. In
Diagram) decision nodes and two terminal nodes called 0- literature the term BDD generally
terminal and 1-terminal. Such a BDD is called refers to ROBDD (Reduced
'ordered' if different variables appear in the same order Ordered BDD), the advantage
on all paths from the root. It is called 'reduced' if the which is that it is unique for a
graph is reduced according to two rules: 1) Merge any particular functionality.
isomorphic subgraphs. 2) Eliminate any node whose See also DTA and Decision
two children are isomorphic. Tables.
Bedford Workload Scale See Rating Scales
Behaviorally Based See Rating Scales
Performance Rating
Scale
62. Behaviour Graphs T Dh about Behaviour graphs are combined control and 4 defence x • [HEAT overview]
1985 information flow graphs for describing system
behaviour within the requirements driven development
systems engineering methodology. The graphs show
system behaviour explicitly as a function of time. The
data flow is shown on the horizontal axis, and time on
the vertical axis. The graphs are used for function
analysis at the system level, and for scenario
modelling. Behaviour graphs model the
communication and coordination between components
and operators in a system. Thus details of the system
architecture must be available. The input and output
item sequences between the system and its
environment must also be available.
63. Beta-factor method T R 1967 Is used to quantify common cause effects identified by See also Multiple Greek Letters 5 aircraft x • [Charpentier00]
Zonal Analysis. The beta-factor represents the method. • [Mauri, 2000]
conditional probability of being a common-mode • [MUFTIS3.2-I]
failure when a component failure occurs. • [Pozsgai&Neher&Ber
tsche02]
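A minimal numeric sketch of the beta-factor split for a redundant two-channel system; the failure rate, beta value, mission time, and the rare-event approximation are all invented for illustration.

```python
def beta_factor_split(lambda_total, beta):
    """Split a component's total failure rate into an independent part and
    a common-cause part; beta is the conditional probability that a
    failure is a common-mode failure, as in the entry."""
    return (1.0 - beta) * lambda_total, beta * lambda_total

def two_channel_failure_prob(lambda_total, beta, mission_time_h):
    """Rare-event sketch for a redundant two-channel system over one
    mission: either both channels fail independently, or a single
    common-cause event fails both at once."""
    lam_ind, lam_ccf = beta_factor_split(lambda_total, beta)
    p_both_independent = (lam_ind * mission_time_h) ** 2
    p_common_cause = lam_ccf * mission_time_h
    return p_both_independent + p_common_cause
```

With these invented numbers the common-cause term dominates, which is exactly why redundancy claims must account for beta.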
64. Bias and Uncertainty T R 2002 Aim is to get insight into the assumptions adopted 5 ATM x x x x • [Everdij&Blom02]
assessment during a model-based accident risk assessment, and on • [FT handbook02]
their effect on the assessment result. Technique • [Henley&Kumamoto
assesses all model assumptions and parameter values 92]
on their effect on accident risk, and combines the • [Kumamoto&Henley
results to get an estimate of realistic risk and a 95% 96]
credibility interval for realistic risk. • [Nurdin02]
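The idea of propagating assumption uncertainty to a point estimate with a 95% credibility interval can be sketched with Monte Carlo sampling. The toy risk model and the parameter distributions below are invented and are not the method of the cited references.

```python
import random

def toy_risk_model(event_rate_per_h, p_mitigation_fails):
    # Invented risk model: accident rate = event rate x mitigation failure.
    return event_rate_per_h * p_mitigation_fails

def risk_with_credibility_interval(n_samples=20_000, seed=1):
    """Sample the uncertain parameters from assumed distributions,
    propagate each sample through the risk model, and report the median
    together with an empirical 95% credibility interval."""
    rng = random.Random(seed)
    outcomes = sorted(
        toy_risk_model(rng.lognormvariate(-9.0, 0.5), rng.betavariate(2, 50))
        for _ in range(n_samples)
    )
    median = outcomes[n_samples // 2]
    lower = outcomes[int(0.025 * n_samples)]
    upper = outcomes[int(0.975 * n_samples)]
    return median, (lower, upper)
```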
65. Black Spot Analysis T M 1980 Black Spot analysis is a strategic framework for long Also referred to as Accident- 6 road x • [Kjellen, 2000]
or term systematic and cost efficient occupational injury Concentration Analysis • [QWHSS, 2005]
older prevention. The philosophy underlying Black Spot
analysis is to allocate resources where they will lead to
the greatest reduction in the most severe injuries.
Based principally on workers’ compensation statutory
claims data, Black Spot analysis will identify priority
areas for action to prevent severe occupational trauma
66. Boundary value analysis T Ds 1992, Software Testing technique. Boundary value analysis Boundary-value testing of 7 computer x • [EN 50128]
probably is a software testing technique in which tests are individual software components • [Jones&Bloomfield&
older designed to include representatives of boundary or entire software systems is an Froome&Bishop01]
values, which are values on the edge of an equivalence accepted technique in the • [Rakowsky]
partition or at the smallest value on either side of an software industry. See also • [Sparkman92]
edge. The values could be either input or output ranges Software Testing. • Wikipedia
of a software component. Since these boundaries are
common locations for errors that result in software
faults they are frequently exercised in test cases.
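A minimal sketch: for a valid range [lo, hi], the classic boundary set is the two edges of the equivalence partition plus the nearest values just inside and just outside them. The percentage validator is an invented component under test.

```python
def boundary_values(lo, hi):
    """Classic boundary-value set for an integer range [lo, hi]: each edge
    of the equivalence partition plus the adjacent value on either side."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def is_valid_percentage(x):
    # Invented component under test: accepts integers in [0, 100].
    return 0 <= x <= 100

def run_boundary_tests():
    # Expected verdict at each boundary value of the [0, 100] partition;
    # off-by-one faults in the component would surface exactly here.
    expected = {-1: False, 0: True, 1: True, 99: True, 100: True, 101: False}
    return all(is_valid_percentage(x) == verdict
               for x, verdict in expected.items())
```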
67. Bow-Tie Analysis T M 1998 Aim is to enhance communication between safety The Bow-Tie Diagram has 6 ATM x x x • [Edwards99]
or experts (who construct a Bow-Tie diagram) and evolved over the past decades chemical • [Zuijderduijn99]
older operational experts (who identify hazard mitigating from the Cause Consequence medical • [Bishop90]
measures using the Bow-Tie diagram). The knot of the Diagram of the 1970s and the • [Blom&Everdij&Daa
Bow-Tie represents a releasing event or a hazard. The Barrier Diagram of the mid ms99]
left-hand side wing shows threats and Pro-active 1980s. It has been most often • [DNV-HSE01]
measures, which improve the chances to avoid used in chemical and petro- • [EHQ-PSSA]
entering the hazard; the right-hand side wing shows chemical industries. The • [EN 50128]
consequences and Re-active measures to improve the approach has been popularised at
• [Rademakers&al92]
chances to escape from the hazard prior to its EU Safety Case Conference,
• [Trbojevic&Carr99]
escalation. 1999, as a structured approach for
risk analysis within safety cases • [Villemeur91-1]
where quantification is not • [Petrolekas&Haritopo
possible or desirable. ulos01]
• [FAA HFW]
68. BPA T Dh 1979 Bent Pin Analysis evaluates the effects should Any connector has the potential 3 aircraft x • [FAA AC431]
(Bent Pin Analysis) connectors short as a result of bent pins and mating or for bent pins to occur. Connector • [FAA00]
demating of connectors. shorts can cause system • [ΣΣ93, ΣΣ97]
malfunctions, anomalous
operations, and other risks.
Combines with and is similar to
CFMA. Applicable during
maintenance operations.
69. Brainstorming G A group of experts sit together and produce ideas. See also Table Top Analysis. 3 6 many x x x x x • [FAA HFW]
Several approaches are known, e.g. at one side of the • [Rakowsky]
spectrum the experts write down ideas privately, and • Wikipedia
then gather these ideas, and at the other side of the
spectrum, the experts openly generate ideas in a group.
BREAM See CREAM (Cognitive
(Bridge Reliability And Reliability and Error Analysis
Error Analysis Method) Method)
Brio Intelligence 6 See Data Mining
70. Brown-Gibson model M 1972 Addresses multi-objective decision making. The Developed in 1972 by P. Brown 5 x • [Feridun et al, 2005]
model integrates both objective and subjective and D. Gibson. • [MaurinoLuxhøj,
measures (weights) for decision risk factors to obtain Link with AHP and PC. 2002]
preference measures for each alternative identified. • Wikipedia
Makes repeated use of Paired Comparisons (PC).
71. Bug-counting model T Ds 1990 Model that aims to estimate the number of remaining Not considered very reliable, but 3 computer x • [Bishop90]
or errors in a software product, and hence the minimum can be used for general opinion
older time to correct these bugs. and for comparison of software
modules. See also Musa Models.
See also Jelinski-Moranda
models.
72. C3TRACE T O 2003 C3TRACE provides an environment that can be used 8 defence x x • [Kilduff et al, 2005]
(Command, Control, to evaluate the effects of different personnel • [FAA HFW]
and Communication- configurations and information technology on human
Techniques for Reliable performance as well as on overall system
Assessment of Concept performance. This tool provides the capability to
Execution) represent any organisation, the people assigned to that
organisation, the tasks and functions they will
perform, and a communications pattern within and
outside the organisation, all as a function of
information flow and information quality.
73. CADA T H 1988 CADA is a technique for systematic examination of Apparently not in current use or 5 nuclear x • [Kirwan98-1]
(Critical Action and decision-making tasks. It utilizes checklists to classify else used rarely. Developed from
Decision Approach) and examine decision errors and to assess their Murphy diagrams and SRK.
likelihood. Psychologically-based tool. Model-based
incident analysis / HRA.
74. CADORS D 1996 CADORS is a national data reporting system that is In 2001, CADORS consisted of 3 8 aviation x x x • [GAIN ATM, 2003]
(Civil Aviation Daily used to collect timely information concerning 36,000 safety reports of aviation ATM
Occurrence Reporting operational occurrences within the Canadian National occurrences.
System) Civil Air Transportation System and is used in the
early identification of potential aviation hazards and
system deficiencies. Under the Aeronautics Act, there
is a mandatory requirement for ATS certificate holders
to report items listed in the CADORS Manual.
CADORS reports are collected from a number of
sources. The main information provider is NAV
CANADA, which supplies close to 80% of all reports.
Other information providers include, Transportation
Safety Board, airports, police forces, public, etc.
CADORS captures a wide scope of safety related
events including ATC operating irregularities;
communication, navigation, surveillance, and other air
traffic systems failures; controlled airspace violations;
etc. Included in the collection are occurrences related
to aircraft, aerodromes, security (e.g. bomb threats,
strike actions) and environment (e.g. fuel spills)
75. CAE Diagrams T R 1999 In contrast with Fault Trees (which are, typically, used Developed by Chris Johnson. 4 medical x x • [Johnson, 1999]
(Conclusion, Analysis, to map out a timeline of events leading up to an Link with GSN. CAE is different
Evidence Diagrams) accident), CAE diagrams represent the analytic to GSN, it uses different shapes
framework that is constructed from the evidence about and approach, but similar
those events. They provide a road-map of the evidence concepts. It is an approach to
and analysis of an accident and encourage analysts to ‘graphical argumentation’.
consider the evidence that supports particular lines of
argument. CAE diagrams are graphs. The roots
represent individual conclusions from an accident
report. These are supported by lines of analysis that are
connected to items of evidence. Each item of evidence either
weakens or strengthens a line of analysis. Lines of
analysis may also strengthen or weaken the conclusion
at the root of the tree.
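The graph structure described above can be sketched as a small data model. The class names, the example conclusion, and the +1/-1 support weighting are illustrative assumptions, not part of Johnson's method:

```python
# Minimal illustrative sketch of a CAE (Conclusion, Analysis, Evidence) graph.
# The +1/-1 weighting and all example text are assumptions for illustration.

class Evidence:
    def __init__(self, text, supports):
        self.text = text
        self.supports = supports  # +1 strengthens, -1 weakens a line of analysis

class Analysis:
    def __init__(self, text, evidence):
        self.text = text
        self.evidence = evidence

    def net_support(self):
        # Each item of evidence strengthens or weakens this line of analysis.
        return sum(e.supports for e in self.evidence)

class Conclusion:
    def __init__(self, text, analyses):
        self.text = text          # root of the CAE graph
        self.analyses = analyses

    def supported(self):
        # Lines of analysis in turn strengthen or weaken the conclusion.
        return sum(a.net_support() for a in self.analyses) > 0

concl = Conclusion("Warning system failed", [
    Analysis("Alarm log shows no alert",
             [Evidence("Log extract 14:02", +1),
              Evidence("Witness heard alarm", -1)]),
    Analysis("Sensor was miscalibrated",
             [Evidence("Maintenance record", +1)]),
])
print(concl.supported())  # True: the evidence on balance supports the conclusion
```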
76. CAHR D H 1992 The Database-System CAHR is a tool for analysing Qualitative and quantitative. The 5 nuclear x x x • [HRA Washington]
(Connectionism - operational disturbances, which are caused by term Connectionism was coined • [Straeter&al99]
Assessment of Human 1998 inadequate human actions or organisational factors. It by modelling human cognition on
Reliability) was implemented using Microsoft ACCESS. CAHR the basis of artificial intelligence
contains a generic knowledge base for the event models. It refers to the idea that
analysis that is extendable by the description of further human performance is affected by
events. The knowledge-base contains information the interrelation of multiple
about the system-state and the tasks as well as for conditions and factors rather than
error opportunities and influencing factors singular ones that may be treated in
(Performance Shaping Factors). isolation. Developed 1992-1998.
77. CAHR-VA D H 2007 This is CAHR tailored to air traffic management. Uses MUAC (Maastricht Upper 5 ATM x x x • [Blanchard, 2006]
(Connectionism Area Control) incident database. • [Leva et al, 2006]
Assessment of Human • [Kirwan, 2007]
Reliability - Virtual • [Trucco, 2006]
Advisor)
78. CAIR D 1988 CAIR aims to gather data that would not be reported CAIR was instituted by the 8 aviation x x x • [GAIN ATM, 2003]
(Confidential Aviation under a mandatory system. It covers flight crews, Australian Transport Safety ATM
Incident Reporting) maintenance workers, and even passengers, as well as Bureau (ATSB) in 1988 as a
air traffic service officers. The program is designed to supplement to their mandatory
capture information, no matter how minor the reporting system, the Air Safety
incident. While confidentiality is maintained, the Incident Report (ASIR). The
report must not be anonymous or contain unverifiable program’s focus is on systems,
information. The ATSB supplement in the ‘Flight procedures and equipment, rather
Safety Australia’ magazine is the primary method of than on individuals.
publishing a report and obtaining feedback on CAIR
issues. Publication of selected CAIR reports on the
Internet is planned. CAIR already covers all aspects of
Australian civil aviation. Air safety investigations are
performed by ATSB independent of the Civil Aviation
Safety Authority (CASA) (the regulator) and
AirServices Australia (the air traffic service provider).
The ATSB has no power to implement its
recommendations.
79. CAMEO/TAT I H 1991 Simulation approach acting as a task analysis tool, This approach is relatively rare in 2 3 nuclear x x • [Fujita94]
(Cognitive Action primarily for evaluating task design, but also for Human Error Identification, • [Kirwan98-1]
Modelling of Erring potential use in Human Reliability Assessment. It where more usually either an
Operator/Task Analysis allows designers to ensure that operators can carry out ‘average’ operator is considered,
Tool ) tasks. Performance Shaping Factors used in the or a conservatively worse than
approach include task load, complexity, time pressure, average one is conceptualised.
opportunistic change of task order, multiple task
environments, negative feedback from previously
made decisions or actions, operator’s policies and
traits, etc.
80. CARA T H 2007 This is HEART tailored to the air traffic controller. Uses the CORE-DATA human 5 ATM x • [Kirwan, 2007]
(Controller Action error database • [KirwanGibson]
Reliability Assessment)
81. Card Sorting T R 1960 Card Sorting is a technique for discovering the latent The affinity diagram was devised 2 5 x x x • [Affinity Diagram]
structure in an unsorted list of statements or ideas. The by Jiro Kawakita in the 1960s and • [Cluster Analysis]
investigator writes each statement on a small index is sometimes referred to as the • [Anderberg, 1973]
card and requests six or more subject matter experts to Kawakita Jiro (KJ) Method. • Wikipedia
individually sort these cards into groups or clusters.
The results of the individual sorts are then combined
and if necessary analyzed statistically.
Related techniques are:
• Affinity Diagrams, which is a brainstorming method
that helps to first generate, then organize ideas or
concerns into logical groupings. It is used to sort
large amounts of complex, or not easily organized
data. Existing items and/or new items identified by
individuals are written on sticky notes which are
sorted into categories as a workshop activity. Can
incorporate the representation of the flow of time, in
order to describe the conditions under which a task is
performed.
• Cluster analysis is a collection of statistical methods
that is used to organize observed data into
meaningful structures or clusters. The measure of the
relationship between any two items is that pair's
similarity score. Cluster analysis programs can
display output in the form of tree diagrams, in which
the relationship between each pair of cards is
represented graphically by the distance between the
origin and the branching of the lines leading to the
two clusters. Cluster members share certain
properties and thus the resultant classification will
provide some insight into a research topic.
• Content analysis (1969) is a research tool that uses a
set of categorisation procedures for making valid and
replicable inferences from data to their context. It is
analogous to Card Sorting. Researchers quantify and
analyze the presence, meanings and relationships of
words and concepts, then make inferences about the
messages within the texts. CA is usually carried out
as part of an analysis of a large body of data such as
user suggestions. Content analysis is conducted in
five steps: 1. Coding. 2. Categorizing. 3. Classifying.
4. Comparing. 5. Concluding.
• P Sort is a sorting technique where the expert is
asked to sort a limited number of domain concepts
into a fixed number of categories.
• Q Sort is a process whereby a subject models his or
her point of view by rank-ordering items into 'piles'
along a continuum defined by a certain instruction.
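The combination step described above (merging the experts' individual sorts and scoring pair similarity) can be sketched as follows; the card names and the use of a simple co-occurrence count as the similarity score are invented for illustration:

```python
# Illustrative sketch of combining individual card sorts into pairwise
# similarity scores, as in Card Sorting / cluster analysis.
from itertools import combinations
from collections import Counter

sorts = [  # each expert's grouping of the same set of cards (invented)
    [{"altimeter", "airspeed"}, {"radio", "transponder"}],
    [{"altimeter", "airspeed", "transponder"}, {"radio"}],
    [{"altimeter", "airspeed"}, {"radio", "transponder"}],
]

similarity = Counter()
for groups in sorts:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            similarity[(a, b)] += 1  # this expert sorted the pair together

# Pairs sorted together most often form the strongest clusters.
print(similarity[("airspeed", "altimeter")])  # 3: grouped by all three experts
print(similarity[("radio", "transponder")])   # 2
```

A cluster-analysis program would take such a pairwise similarity matrix as input and build the tree diagram described above.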
82. CASE T T 1998 CASE is a training system that models the complete Developed by AMS (Alenia 2 ATM x x x x • [GAIN ATM, 2003]
(Controlled Airspace airspace system from gate-to-gate. The CASE Marconi Systems)
Synthetic Environment) simulator is capable of recording every single event
that occurs within the scenario that has been defined.
In addition to modelling the performance/profiles of
any number of aircraft and ground vehicles, CASE is
also able to evaluate and analyse events such as
congestion, sector loading, the number of times a
separation threshold has been violated, the number of
aircraft controlled by each control station, etc. The
core elements are: 1) a Central Processing Suite, 2) up
to thirty-five Pilot, Controller (and Supervisor)
Operator Workstations, 3) an Exercise Preparation
System, and 4) Voice and data communications
networks.
83. CASS T O 2003 The CASS questionnaire-based survey was designed Developed at the University of 8 aviation x • [Gibbons et al, 2006]
(Commercial Aviation to measure five organisational indicators of safety Illinois. Ref. [vonThaden, 2006] aircraft • [vonThaden, 2006]
Safety Survey) culture within an airline: Organisational Commitment addresses a translation to a • [Wiegman et al, 2003]
to Safety; Managerial Involvement in Safety; Chinese context. CASS exists in
Employee Empowerment; Accountability System; two versions: one for flight
Reporting System. operations personnel (pilots, chief
pilots, and operations
management) and one for
aviation maintenance personnel
(technicians, inspectors, lead
technicians, supervisors, and
maintenance management).
84. CAT T H 1992 CAT is a computerized GOMS technique for soliciting Developed by Dr. Kent Williams 2 x • [FAA HFW]
(Cognitive Analysis information from experts. CAT allows the user to in 1992.
Tool) describe his or her knowledge in an area of expertise Link with GOMS.
by listing the goals, subgoals, and one or more
methods for achieving these goals, along with
selection rules. These production rules form the basis
of GOMS models that can be used to generate detailed
predictions of task execution time using a proposed
interface. Cognitive aspects of the task may be derived
from this method, but the technique itself does not
guarantee it.
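A GOMS model of the kind CAT produces predicts task execution time by summing operator times along the selected method. The sketch below uses commonly cited keystroke-level operator estimates, but the methods, selection rule and values are assumptions for illustration:

```python
# Illustrative sketch of a GOMS-style prediction from production rules of the
# kind elicited by CAT. Operator times follow common keystroke-level estimates
# (K = keystroke, P = point, H = home hands, M = mental preparation) but
# should be treated as assumed values.

operator_times = {"K": 0.28, "P": 1.10, "H": 0.40, "M": 1.35}  # seconds

methods = {
    "select-menu-item": ["M", "H", "P", "K"],  # think, home, point, click
    "type-shortcut":    ["M", "K", "K"],       # think, two keystrokes
}

def predict_time(method):
    return sum(operator_times[op] for op in methods[method])

# A selection rule would pick the faster method for a practiced user.
print(round(predict_time("select-menu-item"), 2))  # 3.13
print(round(predict_time("type-shortcut"), 2))     # 1.91
```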
85. CATS D 2005 A causal model represents the causes of commercial CATS arose from the need for a 8 aviation x • [Ale et al, 2006]
(Causal model for Air air transport accidents and the safeguards that are in thorough understanding of the • [Ale et al, 2008]
Transport safety) place to prevent them. The primary process is further causal factors underlying the risks
subdivided into several flight phases: take-off, en- implied by the air transport, so
route and approach and landing. Events occurring implied by air transport, so
between an aircraft’s landing and its next flight are not be made as effectively as
modeled. The numerical estimates derived in the possible. It was developed for the
model apply to western-built aircraft, heavier than Netherlands Ministry of
5700 kg, maximum take-off weight. The model Transport and Water
apportions the probability per flight of an accident Management by a consortium
over the various scenarios and causes that can lead to including Delft University of
the top event. The CATS model architecture includes Technology, National Aerospace
Event Sequence Diagrams (ESDs), Fault Trees (FTs) Laboratory NLR, White Queen
and Bayesian Belief Nets (BBNs). ESDs represent the Safety Strategies, the National
main event sequences that might occur in a typical Institute for Physical Safety
flight operation and the potential deviations from (NIVF), Det Norske Veritas
normal. FTs resolve the events in an ESD into base (DNV) and JPSC. The model
events. The base events relating to human error are currently consists of 1365 nodes,
further resolved into causal events, which relate to the 532 functional nodes,
base events via probabilistic influence, as captured in representing ESDs and FTs, and
a BBN. 833 probabilistic nodes.
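The ESD/FT/BBN layering can be illustrated with a deliberately tiny numerical sketch; the event structure and all probabilities below are invented and bear no relation to the actual CATS model:

```python
# Schematic sketch of the CATS layering: an ESD sequence yields the accident
# probability, an FT resolves its initiating event into base events, and a
# (here trivial) probabilistic influence conditions a human-error base event
# on a performance-shaping factor, as a BBN would. All numbers are invented.

def or_gate(*ps):
    # FT OR-gate for independent base events: 1 - prod(1 - p_i)
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

# Human-error base event conditioned on a performance-shaping factor.
p_fatigued = 0.2
p_error = p_fatigued * 0.05 + (1 - p_fatigued) * 0.01

# FT: the initiating event arises from a technical fault OR a crew error.
p_initiating = or_gate(0.002, p_error)

# ESD: the accident occurs if the initiating event arises AND recovery fails.
p_recovery_fails = 0.1
p_accident_per_flight = p_initiating * p_recovery_fails
print(round(p_accident_per_flight, 6))
```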
86. Causal Networks M 1940 Graph of random quantities, which can be in different The idea of using networks to 4 manufacturing x x • [Loeve&Moek&Arsenis96]
or older states. The nodes are connected by directed arcs which represent interdependencies of logistics • [Wolfram02]
model that one node has influence on another node. events seems to have developed medical
with the systematisation of medical
manufacturing in the early 1900s
and has been popular since at
least the 1940s. Early
applications included switching
circuits, logistics planning,
decision analysis and general
flow charting. In the last few
decades causal networks have
been widely used in system
specification methods such as
Petri nets, as well as in schemes
for medical and other diagnosis.
Since at least the 1960s, causal
networks have also been
discussed as representations of
connections between events in
spacetime, particularly in
quantum mechanics.
Causal probabilistic See BBN (Bayesian Belief
networks Networks)
87. Cause and Effect T R 1943 The Cause And Effect Diagram is a Logic Diagram Also called the Ishikawa diagram 4 management x • [FAA00]
Diagram with a significant variation. It provides more structure (after its creator, Kaoru Ishikawa • Wikipedia
than the Logic Diagram through the branches that give of Japan, who pioneered quality
it one of its alternate names, the fishbone diagram. management processes in the
The user can tailor the basic “bones” based upon Kawasaki shipyards, and in the
special characteristics of the operation being analyzed. process became one of the
Either a positive or negative outcome block is founding fathers of modern
designated at the right side of the diagram. Using the management), or the Fishbone
structure of the diagram, the user completes the Diagram (due to its shape).
diagram by adding causal factors in either the “M” or
“P” structure. Using branches off the basic entries,
additional hazards can be added.
88. CbC T Ds 1992 In contrast to ‘construction by correction’ (i.e., build Developed by Praxis Critical 5 6 computer x • [Amey, 2006]
(Correctness-by- about and debug), CbC seeks to produce a product that is Systems. Correctness-by- security • [Leveson, 1995]
Construction) initially correct. Aim of CbC is to employ constructive Construction is one of the few • [IEC 61508-7]
means that preclude defects. It is a process for secure SDLC processes that • [CbC lecture]
developing high integrity software, aiming at incorporate formal methods into
removing defects at the earliest stages. The process many development activities.
almost always uses formal methods to specify Requirements are specified using
behavioral, security and safety properties of the Z, and verified. Code is checked
software. The seven key principles of Correctness-by- by verification software, and is
Construction are: Expect requirements to change; written in Spark, a subset of Ada
Know why you're testing (debug + verification); which can be statically assured.
Eliminate errors before testing; Write software that is
easy to verify; Develop incrementally; Some aspects
of software development are just plain hard; Software
is not useful by itself.
89. CBFTA T R 2007 CBFTA is a tool for updating reliability values of a CBFTA is for use during the 7 off-shore x • [ShalevTiran, 2007]
(Condition-Based Fault specific system and for calculating the residual life systems operational phase,
Tree Analysis) according to the system’s monitored conditions. It including maintenance, not just
starts with a known FTA. Condition monitoring during design.
methods applied to systems are used to determine
updated failure rate values of sensitive components,
which are then applied to the FTA. CBFTA
recalculates periodically the top event failure rate, thus
determining the probability of system failure and the
probability of successful system operation.
90. CBR T M 2003 The CBR is constructed using ASRMs dealing with ASRM is Aviation Safety Risk 3 6 aviation x x • [Luxhøj, 2005]
(Case-Based Reasoning) Loss of Control, Engine Failure, Runway Incursion, Model. • [LuxhøjOztekin,
Controlled Flight Into Terrain and Maintenance- 2005]
related accidents. Through a series of dialog boxes, the
user answers questions regarding the causal factors
involved in an aircraft accident. The CBR then
searches the case base for the most closely matching
cases and reports a rank ordering. These cases may be
retrieved as “solution possibilities” with suggested
technology and procedural mitigations.
91. CCA T R 1987 Common Cause Analysis will identify common Common causes are present in 3 aircraft x x • [ARP 4754]
(Common Cause failures or common events that eliminate redundancy almost any system where there is space • [EN 50128]
Analysis) in a system, operation, or procedure. Is used to any commonality, such as human nuclear • [FAA AC431]
identify sources of common cause failures and effects interface, common task, and • [FAA00]
of components on their neighbours. Is subdivided into common designs, anything that • [Mauri, 2000]
three areas of study: Zonal Analysis, Particular Risks has a redundancy, from a part, • [MUFTIS3.2-I]
Assessment, and Common Mode Analysis. component, sub-system or
• [Rakowsky]
system. Related to Root Cause
• [ΣΣ93, ΣΣ97]
Analysis.
CCA is a term mainly used within • [SAE2001]
the aerospace industry. In the • [DS-00-56]
nuclear industry, CCA is referred • [Mauri, 2000]
to as Dependent Failure Analysis. • [Lawrence99]
According to [Mauri, 2000], • [Sparkman92]
common cause failures and • [Mosley91]
cascade failures are specific types
of dependent failures; common
mode failures are specific types
of common cause failures.
CCCMT See CCMT (Cell-to-Cell
(Continuous Cell-to-Cell Mapping Technique).
Mapping Technique)
92. CCD T R 1971 Aim is to model, in diagrammatical form, the Developed at Risø laboratories 4 5 aircraft x x • [Bishop90]
(Cause Consequence sequence of events that can develop in a system as a (Denmark) in the 1970’s to aid in nuclear • [EN 50128]
Diagrams) consequence of combinations of basic events. Cause- the reliability analysis of nuclear • [FAA00]
or Consequence Analysis combines bottom-up and top- power plants in Scandinavian • [Leveson95]
CCA down analysis techniques of event trees and fault countries. For assessment of • [MUFTIS3.2-I]
(Cause Consequence trees. The result is the development of potential hardware systems; more difficult • [Rakowsky]
Analysis) accident scenarios. to use in software systems. CCA • [Ridley&Andrews01]
is a good tool when complex
• [ΣΣ93, ΣΣ97]
system risks are evaluated.
Related to ETA, FTA and
Common Cause Analysis. Tools
available. No task analysis
allowed. Sometimes referred to as
Bow-Tie Analysis.
93. CCMT T R 1987 CCMT is a numerical technique for the global analysis It is particularly useful if the 4 nuclear x • [Hsu, 1987]
(Cell-to-Cell Mapping of non-linear dynamic systems. It models system system has a strange attractor.
Technique) evolution in terms of probability of transitions within a A variation of CCMT is CCCMT
user-specified time interval (e.g., data-sampling (Continuous Cell-to-Cell
interval) between sets of user-defined parameter/state Mapping Technique) where the
variable magnitude intervals (cells). The cell-to-cell Solution method is ODE
transition probabilities are obtained from the given solution method is ODE
linear or nonlinear plant model. CCMT uses Matrix solvers rather than Matrix
solvers as solution method. solvers.
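The cell-to-cell mapping idea can be sketched numerically as below; the logistic map stands in for the plant model, and the cell count and sampling scheme are illustrative choices, not part of the method's definition:

```python
# Minimal sketch of cell-to-cell mapping: partition the state space into
# cells, estimate cell-to-cell transition probabilities by sampling the
# plant model (here a logistic map, chosen only as a stand-in nonlinear
# system), then evolve the cell probability vector over time steps.

N = 10                          # number of cells partitioning [0, 1]
samples_per_cell = 100

def model(x):                   # stand-in nonlinear dynamic model
    return 3.7 * x * (1 - x)

def cell_of(x):
    return min(int(x * N), N - 1)

# Transition matrix T[i][j] = P(next cell = j | current cell = i)
T = [[0.0] * N for _ in range(N)]
for i in range(N):
    for k in range(samples_per_cell):
        x = (i + (k + 0.5) / samples_per_cell) / N   # sample inside cell i
        T[i][cell_of(model(x))] += 1.0 / samples_per_cell

p = [1.0 / N] * N               # initially uniform over cells
for _ in range(50):             # evolve the probability distribution
    p = [sum(p[i] * T[i][j] for i in range(N)) for j in range(N)]

print(abs(sum(p) - 1.0) < 1e-9)  # probability is conserved
```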
94. CCS T Ds 1980 CCS is an algebra for specifying and reasoning about Introduced by Robin Milner. 2 computer x • [Bishop90]
(Calculus of concurrent systems. As an algebra, CCS provides a set Formal Method. Descriptive tool • [CCS]
Communicating of terms, operators and axioms that can be used to in cases where a system must • [EN 50128]
Systems) write and manipulate algebraic expressions. The consist of more than one process. • [Rakowsky]
expressions define the elements of a concurrent Software requirements • Wikipedia
system and the manipulations of these expressions specification phase and design &
reveal how the system behaves. CCS is useful for development phase.
evaluating the qualitative correctness of properties of a
system such as deadlock or livelock.
95. CDA T Ds 1996 Code data analysis concentrates on data structure and 3 avionics x • [NASA-GB-1740.13-
(Code Data Analysis) or usage in the coded software. Data analysis focuses on 96]
older how data items are defined and organised. This is • [Rakowsky]
accomplished by comparing the usage and value of all
data items in the code with the descriptions provided
in the design materials.
96. CDM T H 1989 The CDM is a semi-structured interview technique CDM is a variant of CIT, 8 many x x • [Klein et al, 1989]
(Critical Decision developed to obtain information about decisions made extended to include probes that • [FAA HFW]
Method) by practitioners when performing their tasks. A elicit aspects of expertise such as
subject-matter expert is asked to recount a particularly the basis for making perceptual
challenging or critical incident in which his/her skills discriminations, conceptual
were needed. The operator is asked to provide a discriminations, typicality
general description of the incident followed by a more judgments, and critical cues.
detailed account of the sequence of events. The Output can be represented in
interviewer and the operator then establish a timeline various ways, e.g. through
and identify the critical points in the incident. The narrative accounts, or in the form
interviewer then uses a number of probes to elicit of a cognitive requirements table
more detailed information about the problem solving that lists the specific cognitive
processes at each of the critical points in the incident. demands of the task, as well as
The interviewer probes to identify decision points, contextual information needed to
shifts in situation assessment, critical cues leading to a develop relevant training or
specific assessment, cognitive strategies, and potential system design recommendations.
errors.
CEFA See Flight Data Monitoring
(Cockpit Emulator for Analysis and Visualisation
Flight Analysis)
97. CELLO method T H 1999 CELLO is similar to heuristic or expert evaluation Is largely derived from the 2 6 ergonomics x x • [FAA HFW]
or except it is collaborative in that multiple experts, expert-based heuristic method
older guided by a defined list of design criteria, work promoted by Jakob Nielsen.
together to evaluate the system in question. The CELLO can be used throughout
criteria may be principles, heuristics or the lifecycle but it is most useful
recommendations which define good practice in when applied early in the
design and are likely to lead to high quality in use. The development cycle as a check that
criteria represent compiled knowledge derived from the user and usability
psychology and ergonomics theory, experimental requirements for the system in
results, practical experience and organisational or question are being observed.
personal belief. At the conclusion of the inspection an See also Heuristic Evaluation.
evaluation report is created that details how specific
functions or features of the system contravene the
inspection criteria and may provide recommendations
as to how the design should be changed in order to
meet a criterion or criteria. The results of the
inspection are reported in a standard form related to
the criteria used and the objectives of the inspection.
The usual severity grading used is: 1. Show stopper. 2.
Inconvenient. 3. Annoyance.
98. Certificated Hardware G D 1990 Aim is to assure that all hardware components that are In some fields (e.g. military, 6 avionics x • [Bishop90]
Components or used will not reveal inherent weaknesses after their space, avionics) mandatory. defence
older use within the system by screening and segregating the Tools available. space
positively certified components.
99. Certificated Software G D 1990 Aim is to minimise the development of new software Additional validation and 6 computer x • [Bishop90]
Components or through the use of existing components of known level verification may be necessary.
older of confidence or quality. Tools available.
100. Certificated Tools G D 1990 Tools are necessary to help developers in the different Software design & development 7 computer x • [Bishop90]
or or phases of software development. Certification ensures phase. • [EN 50128]
Certified Tools and older that some level of confidence can be assumed • [Rakowsky]
Certified Translators regarding the correctness of software.
101. CES I H 1987 Human performance assessment. Dynamic. Was 4 5 nuclear x • [Kirwan98-1]
(Cognitive Environment developed for simulating how nuclear power plant • [MUFTIS3.2-I]
Simulation) personnel form intentions to act in emergencies.
CES can be used to provide an objective means of
distinguishing which event scenarios are likely to be
straightforward to diagnose and which scenarios are
likely to be cognitively challenging, requiring longer
to diagnose and which can lead to human error. Can
also be used to predict human errors by estimating the
mismatch between cognitive resources and demands
of the particular problem-solving task.
102. CESA T H 2001 Aims at identifying potential Error of Commission Developed at Paul Scherrer 2 3 4 nuclear x • [Dang et al, 2002]
(Commission Errors (EOC) situations, based on a catalogue of key actions Institute, Switzerland. • [Reer, 2008]
Search and Assessment) required in the responses to the plant events. • [HRA Washington]
Step 1: Select and catalog possible actions, on the
basis of emergency operating procedures and related
practices. The result is a plausible set of intervention
options. Step 2: Identify system failures (or
degradations) that may result from these actions, and
prioritize these. Step 3: Identify the scenarios in which
an EOC event may occur. Event sequences that have
similar performance conditions are grouped, and each
group is defined as a scenario with the opportunity of
the EOC event in question. The combination of an
EOC event with a group of similar event sequences
defines an EOC split fraction, i.e. an operator action
that contributes to a system failure in a specific
scenario. For each EOC split fraction, the procedural
decision points and the scenario conditions
corresponding to the branching criteria are analyzed,
in order to identify the EOC paths.
103. CFA T H 1998 Cognitive Function Analysis (CFA) is a methodology 2 x • [FAA HFW]
(Cognitive Function that enables a design team to understand better the
Analysis) right balance between cognitive functions that need to
be allocated to human(s) and cognitive functions that
can be transferred to machine(s). Cognitive functions
are described by eliciting the following inputs: task
requirements; users' background (skills and
knowledge to cope with the complexity of the artifact
to be controlled); users' own goals (intentional
actions); and external events (reactive actions).
104. CFMA T Dh 1979 Cable Failure Matrix Analysis identifies the risks Should cables become damaged, 3 aircraft x • [FAA AC431]
(Cable Failure Matrix associated with any failure condition related to cable system malfunctions can occur. • [FAA00]
Analysis) design, routing, protection, and securing. The CFMA Less than adequate design of • [ΣΣ93, ΣΣ97]
is a shorthand method used to concisely represent the cables can result in faults, failures
possible combinations of failures that can occur within and anomalies, which can result
a cable assembly. in contributory hazards and
accidents. Similar to Bent Pin
analysis.
105. CGHDS M 1998 Interacting collection of dynamical (mathematical) 4 x x x x x • [Branicky&Borkar&
(Controlled General systems, each evolving on continuous valued state Mitter98]
Hybrid Dynamical spaces, and each controlled by continuous controls.
System) Considers switching as a general case of impulses; the
general term is jump. Each jump goes to a new
dynamical system.
Chain of Multiple See Domino Theory
Events
106. Change Analysis T R 1965? Change Analysis examines the effects of Cause-Consequence analysis is 2 3 5 management x x x • [FAA AC431]
modifications from a starting point or baseline. It is a also used during accident/ • [FAA00]
technique designed to identify hazards that arise from incident investigation. • [ORM]
planned or unplanned change. Four steps: 1) review • [ΣΣ93, ΣΣ97]
previous operation / current practice; 2) Review • [FAA HFW]
operational analysis of planned operation; 3) For each • Wikipedia
step / phase of the operation, identify differences
(“changes”) between the two; 4) Determine impact on
risk of the operation. The change analysis
systematically hypothesises worst-case effects from
each modification from the baseline.
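Step 3 of the procedure (identifying differences between the baseline and the planned operation) can be sketched as a simple comparison; the operation steps listed below are invented for illustration:

```python
# Minimal sketch of the Change Analysis comparison step: identify the
# differences ("changes") between the baseline operation and the planned
# operation, step by step. The operations listed are invented.

baseline = ["pre-flight briefing", "manual fuel check", "taxi", "takeoff"]
planned  = ["pre-flight briefing", "automated fuel check", "taxi", "takeoff",
            "new noise-abatement climb"]

changes = []
for step in planned:
    if step not in baseline:
        changes.append(("added/modified", step))
for step in baseline:
    if step not in planned:
        changes.append(("removed", step))

# Each change would then be assessed for its worst-case impact on risk.
for kind, step in changes:
    print(kind, "->", step)
```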
Characterisation See Trend Analysis
Analysis
107. CHASE T O 1987 CHASE is a general management health and safety Qualitative. Developed by 7 8 health x • [Kennedy&Kirwan98]
(Complete Health And audit method for general industry. There are two HASTAM Ltd., UK. Designed transport • [Kuusisto, 2001]
Safety Evaluation) versions: CHASE-I is for small and medium sized for both monitoring by line
organisations, CHASE-II is for large organisations managers and auditing by safety
(100+ employees). CHASE is comprised of sections (4 professionals.
in CHASE-I; 12 in CHASE-II) which include a
number of short questions. Answering Yes gives 2-6
points depending on the activity assessed; answering
No gives zero points. The scores on the sub-sets of
safety performance areas are weighted and then
translated into an overall index rating.
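The scoring scheme can be sketched as follows; the questions, point values, and section weights are invented, and the weighting details of the real CHASE method differ:

```python
# Schematic sketch of a CHASE-style audit score: yes/no questions with
# activity-dependent point values, weighted per section and combined into
# an overall index. All questions, points and weights are invented.

sections = {
    "Management commitment": {
        "weight": 0.4,
        "items": [("Written safety policy?", 6, True),
                  ("Policy reviewed annually?", 4, False)],
    },
    "Workplace inspection": {
        "weight": 0.6,
        "items": [("Monthly inspections done?", 4, True),
                  ("Findings tracked to closure?", 2, True)],
    },
}

def section_score(items):
    # Fraction of available points achieved; "No" answers score zero.
    achieved = sum(points for _, points, yes in items if yes)
    possible = sum(points for _, points, _ in items)
    return achieved / possible

index = sum(s["weight"] * section_score(s["items"]) for s in sections.values())
print(round(index * 100))  # overall index as a percentage: 84
```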
CHAZOP (Computer See SHARD (Software Hazard
HAZOP) Analysis and Resolution in
Design)
108. Check List Analysis G 1974 Checklist Analysis is a comparison to criteria, or a Checklist Analysis can be used in 3 chemical x x x x x • [EN 50128]
or device to be used as a memory jogger. The analyst any type of safety analysis, safety • [FAA00]
Checklist Analysis uses a list to identify items such as hazards, design or review, inspection, survey, or • [Leveson95]
operational deficiencies. Checklists enable a observation. Combines with • [ΣΣ93, ΣΣ97]
systematic, step by step process. They can provide What-if analysis. See also
formal documentation, instruction, and guidance. Ergonomics Checklists.
China Lake Situational See Rating Scales
Awareness Rating Scale
109. CHIRP D H 1982 The aim of CHIRP is to contribute to the enhancement CHIRP has been in operation 8 aviation x x x • [CHIRP web]
(Confidential Human of flight safety in the UK commercial and general since 1982 and is currently ATM • For other systems like
Factor Incident aviation industries, by providing a totally independent available to flight crew members, aircraft this, see [EUCARE
Reporting Programme) confidential (not anonymous) reporting system for all air traffic control officers, web]
individuals employed in or associated with the licensed aircraft maintenance • [GAIN ATM, 2003]
industries. Reporters’ identities are kept confidential. engineers, cabin crew and the GA
Important information gained through reports, after (General Aviation) community.
being de-identified, is made available as widely as
possible. CHIRP provides a means by which
individuals are able to raise issues of concern without
being identified to their peer group, management, or
the Regulatory Authority. Anonymous reports are not
normally acted upon, as they cannot be validated.
110. CIA T Ds 1996 Code interface analysis verifies the compatibility of 3 avionics x • [FAA00]
(Code Interface or internal and external interfaces of a software • [NASA-GB-1740.13-
Analysis) older component. A software component is composed of a 96]
number of code segments working together to perform • [Rakowsky]
required tasks. These code segments must
communicate with each other, with hardware, other
software components, and human operators to
accomplish their tasks. Check that parameters are
properly passed across interfaces. CIA is intended to
verify that the interfaces have been implemented
properly.
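The parameter-passing check described for Code Interface Analysis can be sketched as a small illustration; the interface table, function name, and message strings below are invented for the example and are not part of any cited method.

```python
# Toy sketch of one code-interface check: verify that arguments passed across a
# component boundary match the declared interface (parameter names and types).
# The interface specification format here is an assumption for illustration.

INTERFACE = {"set_valve": {"valve_id": int, "position_pct": float}}

def check_call(function, kwargs):
    """Return a list of interface problems for one cross-component call."""
    spec = INTERFACE.get(function)
    if spec is None:
        return ["unknown interface: " + function]
    problems = []
    for name, expected in spec.items():
        if name not in kwargs:
            problems.append("missing parameter: " + name)
        elif not isinstance(kwargs[name], expected):
            problems.append("wrong type for " + name)
    return problems

print(check_call("set_valve", {"valve_id": 3, "position_pct": 75.0}))  # []
print(check_call("set_valve", {"valve_id": 3, "position_pct": "75"}))  # ['wrong type for position_pct']
```

A real CIA would of course work on the implemented source code and its documented interfaces rather than a hand-written table.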
111. CIT T M 1954 This is a method of identifying errors and unsafe This technique can be universally 7 8 aviation x x • [Flanagan, 1954]
(Critical Incident conditions that contribute to both potential and actual applied in any operational nuclear • [FAA00]
Technique) accidents or incidents within a given population by environment. Generally, the health • [Infopolis2]
means of a stratified random sample of participant- technique is most useful in the • [Kirwan94]
observers selected from within the population. early stages of a larger task or • [KirwanAinsworth92]
Operational personnel can collect information on activity. • [ΣΣ93, ΣΣ97]
potential or past errors or unsafe conditions. Hazard
• [MIL-HDBK 46855A]
controls are then developed to minimise the potential
• [FAA HFW]
error or unsafe condition.
• Wikipedia
112. CLA T Ds 1996 Code Logic Analysis evaluates the sequence of 3 avionics x • [FAA00]
(Code Logic Analysis) or operations represented by the coding program and will computer • [NASA-GB-1740.13-
older detect logic errors in the coded software. This analysis 96]
is conducted by performing logic reconstruction, • [Rakowsky]
equation reconstruction and memory coding.
Clocked Logic See Dynamic Logic
113. ClusterGroup T H 2002 ClusterGroup uses cluster analysis techniques to Up to 80% reduction in the 5 aviation x • [Luxhøj, 2002]
facilitate the prioritisation of the importance of aviation
safety risk factors by groups of experts. Aims to gain reported possible, yet results are 2002]
an understanding of the rationale behind decisions said to compare favorably with
made in situations involving risk. It uses various more traditional methods, such as
clustering algorithms to aggregate similar opinions of the AHP.
groups of experts into "majority" and "minority"
clusters. The underlying methodology eliminates the
necessity of performing numerous pairwise
comparisons.
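The majority/minority clustering idea can be illustrated with a minimal sketch; note that ClusterGroup itself uses its own clustering algorithms [Luxhøj, 2002], while the two-cluster split and the rating data below are invented assumptions.

```python
# Illustrative sketch only: splitting one-dimensional expert risk ratings into a
# "majority" and a "minority" cluster with a simple two-centre k-means.

def two_cluster_1d(ratings, iters=20):
    """Partition ratings around two centres seeded at the min and max."""
    lo, hi = min(ratings), max(ratings)
    for _ in range(iters):
        a = [r for r in ratings if abs(r - lo) <= abs(r - hi)]
        b = [r for r in ratings if abs(r - lo) > abs(r - hi)]
        if not a or not b:
            break
        lo, hi = sum(a) / len(a), sum(b) / len(b)  # recompute cluster means
    return a, b

def majority_opinion(ratings):
    """Aggregate opinion of the larger ("majority") cluster."""
    a, b = two_cluster_1d(ratings)
    majority = a if len(a) >= len(b) else b
    return sum(majority) / len(majority)

# Seven experts rate a risk factor on a 1-9 scale; one outlier dissents.
print(majority_opinion([7, 8, 7, 6, 8, 7, 2]))  # mean of the majority cluster
```

This shows how aggregating within clusters avoids the numerous pairwise comparisons that methods such as the AHP require.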
114. CMA T H 1981 Determines human reliability. Is aimed specifically at Is sometimes followed after an 5 nuclear x • [Kirwan94]
(Confusion Matrix two of the diagnostic error-forms, namely FSMA. • [Kirwan98-1]
Analysis) misdiagnoses and premature diagnoses. A confusion • [MUFTIS3.2-I]
matrix is an array showing relationships between true • [GAIN ATM, 2003]
and predicted classes. Typically the variables are an • [FAA HFW]
observation and a prediction. Each row in the • [CM]
confusion matrix represents an observed class, each
• [Potash81]
column represents a predicted class, and each cell
• [Volpe98]
counts the number of samples in the intersection of
those two classes. Probabilities can be derived
experimentally or using expert judgments.
115. CMA T R 1987 CMA provides evidence that the failures assumed to CMA is the third step in a 3 aircraft x x • [ARP 4761]
(Common Mode be independent are truly independent in the actual Common Cause Analysis (CCA). • [Mauri, 2000]
Analysis) implementation. It covers the effect of design, Particular Risks Assessment is
manufacturing and maintenance errors and the effects the second, and provides input to
of common component errors. A common mode the CMA.
failure has the potential to fail more than one safety
function and to possibly cause an initiating event or
other abnormal event simultaneously. The analysis is
complex due to the large number of common mode
failures that may be related to the different common
mode types such as design, operation, manufacturing,
installation and others.
116. CMFA T R 1979 Aim is to identify potential failures in redundant The technique is not well 3 nuclear x x • [Bishop90]
(Common Mode Failure about systems or redundant sub-systems that would developed but is necessary to computer • Wikipedia
Analysis) undermine the benefits of redundancy because of the apply, because without
appearance of the same failures in the redundant parts consideration of common mode
at the same time. failures, the reliability of
redundant systems would be
over-estimated. Related methods:
ETA, CCA, FMEA.
CMN-GOMS See GOMS
(Card, Moran and
Newell GOMS)
117. COCOM T H 1993 COCOM models human performance as a set of Developed by Erik Hollnagel. 4 ATM x • [Hollnagel93]
(COntextual COntrol control modes - strategic (based on long-term • [Kirwan98-1]
Model) planning), tactical (based on procedures), • [COCOM web]
opportunistic (based on present context), and • [HollnagelNaboLau,
scrambled (random) - and proposes a model of how 2003]
transitions between these control modes occur. This • Wikipedia
model of control mode transition consists of a number
of factors, including the human operator's estimate of
the outcome of the action (success or failure), the time
remaining to accomplish the action (adequate or
inadequate), and the number of simultaneous goals of
the human operator at that time.
118. CODA T H 1997 Method for analysing human-related occurrences (i.e., Quantification may be done with 3 5 nuclear x • [Reer97]
(Conclusions from incorrect human responses) from event cases expert judgement or THERP. • [Straeter&al99]
Occurrences by retrospectively. The CODA method uses an open list
Descriptions of Actions) of guidelines based on insights from previous
retrospective analyses. It is recommended in this
method to compile a short story that includes all
unusual occurrences and their essential context
without excessive technical details. Then the analysis
should envisage major occurrences first. For their
description, the method presents a list of criteria which
are easy to obtain and which have been proved to be
useful for causal analysis. For their causal analysis,
various guidelines are provided. They are mainly of
holistic, comparative and generalising nature. It is
demonstrated by various event cases that CODA is
able to identify cognitive tendencies (CTs) as typical
attitudes or habits in human decision-making.
119. Code Analysis T Ds 1995 Code analysis verifies that the coded program 7 avionics x • [FAA00]
about? correctly implements the verified design and does not computer • [NASA-GB-1740.13-
violate safety requirements. The techniques used in the 96]
performance of code analysis mirror those used in • [Rakowsky]
design analysis. • Wikipedia
120. Code Coverage T Ds 1963 This is a check if all lines in the software code are Code coverage techniques were 7 avionics x • NLR expert
used when running the program. Unused lines can be amongst the first techniques computer • Wikipedia
removed. Alternatively, code coverage describes the invented for systematic software
degree to which the source code of a program has been testing. The first published
tested. It is a form of testing that inspects the code reference was by Miller and
directly and is therefore a form of ‘white box testing’. Maloney in Communications of
In time, the use of code coverage has been extended to the ACM in 1963.
the field of digital hardware. See also Unused Code Analysis.
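The statement-coverage idea can be demonstrated with the standard `sys.settrace` hook; this is only a rough sketch (real projects would use a dedicated tool such as coverage.py), and the function under test is invented.

```python
# Rough sketch of line (statement) coverage: record which lines of a function
# actually execute, so unexecuted lines can be flagged for review or removal.
import sys

def measure_line_coverage(func, *args):
    """Run func(*args) and return the set of line numbers executed in it."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno)
        return tracer  # keep tracing inside this frame

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(x):               # function under test
    if x > 0:
        return "positive"      # never reached when x <= 0
    return "non-positive"

lines = measure_line_coverage(classify, -1)
print(sorted(lines))  # the 'return "positive"' line is absent for this input
```

Comparing the sets for different inputs shows which branches a test suite has exercised and which code is dead for the inputs tried.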
121. Code Inspection G D Checklists are developed during formal inspections to See also Design and Coding 6 avionics x • [EN 50128]
Checklists facilitate inspection of the code to demonstrate Standards. See also Formal computer • [FAA00]
conformance to the coding standards. Bug databases Inspections. See also Software • [NASA-GB-1740.13-
are also good sources to use when creating checklists Testing. Appendix H of [NASA- 96]
for code inspections. GB-1740.13-96] provides a • [Rakowsky]
collection of checklists.
Co-discovery See Think Aloud Protocol
COEA See AoA (Analysis of
(Cost and Operational Alternatives)
Effectiveness Analysis)
122. COGENT T H 1993 Extension of the THERP event tree modelling system, It requires significant analytical 4 many x • [COGENT web]
(COGnitive EveNt Tree) dealing particularly with cognitive errors, although the judgement. At present, it appears • [Kirwan98-1]
approach appears to deal with other errors as well. The to be a relatively simple step
aim is to bring current more cognitively-based forward in modelling
approaches into the Human Error Identification (representation), rather than in
process. This has led to a hybrid taxonomy with terms Human Error Identification.
such as Skill-based slip, rule-based lapse, and
knowledge-based lapses or mistakes. The approach
thereafter is for the analyst to develop cognitive event
trees.
123. COGNET T H 1992 COGNET is a framework for creating and exercising Development of COGNET was 2 5 navy x • [GAIN ATM, 2003]
(Cognition as a Network models of human operators engaged in primarily led by Dr. W. Zachary, CHI • [Zachary96]
of Tasks) cognitive (as opposed to psychomotor) tasks. Its Systems. • [FAA HFW]
purpose is to develop user models for intelligent The basis for the management of
interfaces. It has been used to model surrogate multiple, competing tasks in
operators (and opponents) in submarine warfare COGNET is a pandemonium
simulations. The most important assumption behind metaphor of cognitive processes
COGNET is that humans perform multiple tasks in composed of ''shrieking demons",
parallel. These tasks compete for the human's proposed by Selfridge (1959). In
attention, but ultimately combine to solve an overall this metaphor, a task competing
information-processing problem. COGNET is based for attention is a demon whose
on a theory of weak task concurrence, in which there shrieks vary in loudness
are at any one time several tasks in various states of depending on the problem
completion, though only one of these tasks is context. The louder a demon
executing. That is, COGNET assumes serial shrieks, the more likely it is to get
processing with rapid attention switching, which gives attention. At any given time, the
the overall appearance of true parallelism. demon shrieking loudest is the
focus of attention and is
permitted to execute.
Cognitive Walkthrough See Inspections and
Walkthroughs
124. COMET T H 1991 Modified event trees that deal with errors of Relation with SNEAK and ETA. 4 nuclear x • [Kirwan98-1]
(COMmission Event commission and cascading errors whose source is
Trees) either erroneous intention or a latent error. COMETs
are developed e.g., using SNEAK, and are basically
event trees, their results feeding into fault trees. The
main significance of this approach appears to be as a
means of integrating errors of commission into PSA
and quantifying them. It does not help too much in
terms of actually identifying errors of commission.
125. Comparison Risk T R 1995 Is used during the design of new plants (and Method was originally developed 5 offshore x • [Kjellen, 2000]
Analysis modifications) in order to predict the occupational to meet the Norwegian risk-
accident-frequency rate for the plant during operation. analysis regulations for the
Results are expressed as relative changes in the offshore industry.
accident-frequency rate in relation to the experienced
rate of a reference plant that has been in operation for
some years. Method follows four steps: 1)
Establishment of a database for the reference
installation; 2) Grouping of the accidents with respect
to area and activity; 3) Establishment of a simulated
database for the analysis object; 4) Estimation of the
injury-frequency rate for the analysis object.
126. Complexity Models T Ds 1976 Aim is to predict the reliability of programs from Can be used at the design, coding 3 computer x • [Bishop90]
about properties of the software itself rather than from its quality of software by the early
development or test history. identification of over-complex
identification of over-complex
modules and by indicating the
level of testing required for
different modules. Tools
available.
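One well-known complexity model is McCabe's cyclomatic complexity (number of decision points plus one). The AST-based counter below is a simplification offered as an illustration, not a full implementation; production tools handle many more node types.

```python
# Hedged illustration of a software complexity model: count decision points in
# Python source with the standard ast module and report cyclomatic complexity.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                  ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    """Simplified cyclomatic complexity: decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(n, DECISION_NODES) for n in ast.walk(tree))
    return decisions + 1

src = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(src))  # 3: two if-decisions + 1
```

Modules whose score exceeds a project threshold would then be flagged as over-complex and given extra testing attention, as the remark column suggests.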
127. Computer Modelling M R 1978 Involves the use of computer programs to represent Four well-known variants are 4 5 many x x x x • [KirwanAinsworth92]
and Simulation or e.g. operators and/or system activities or features. often referred to as Real-Time • Wikipedia
older Human performance data that have been previously Simulation (simulator clock runs
collected, or estimates of task components, error with speed according to real
probabilities, etc., are entered into the computer clocks), Fast Time Simulation
program. The program either can then simulate (simulator clock does not run
graphically the environment and workspace or can with speed according to real
dynamically run the task in real or fast time as a way clocks, and can even make
of estimating complete cycle times and error jumps), Discrete Event
likelihoods, etc. Simulation, and Monte Carlo
simulation.
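The fast-time / Monte Carlo simulation idea can be sketched briefly: previously collected task-time estimates and error probabilities are sampled repeatedly to estimate complete cycle times and error likelihoods. All task data below are invented.

```python
# Illustrative Monte Carlo sketch: sample task durations and per-task error
# probabilities many times to estimate the mean cycle time and the chance of
# at least one error per cycle. The task list is a made-up example.
import random

TASKS = [  # (name, mean duration in s, std dev, per-task error probability)
    ("read checklist", 30.0, 5.0, 0.01),
    ("set switches", 20.0, 4.0, 0.02),
    ("confirm state", 10.0, 2.0, 0.005),
]

def simulate(runs=10_000, seed=42):
    rng = random.Random(seed)
    times, error_runs = [], 0
    for _ in range(runs):
        t, failed = 0.0, False
        for _name, mean, sd, p_err in TASKS:
            t += max(0.0, rng.gauss(mean, sd))      # sampled task duration
            failed = failed or (rng.random() < p_err)
        times.append(t)
        error_runs += failed
    return sum(times) / runs, error_runs / runs

mean_cycle, p_error = simulate()
print(round(mean_cycle, 1))  # close to 60 s, the sum of the task means
```

Because the simulator clock is purely notional, thousands of "cycles" run in well under a second, which is exactly the appeal of fast-time over real-time simulation.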
128. Conceptual Graph T H 1999 Conceptual graph analysis is a method of visually 4 x • [Jonassen et al, 1999]
Analysis depicting internal knowledge structures during a • [FAA HFW]
cognitive task analysis. These graphs consist of nodes • Wikipedia
connected via arcs. The nodes contain either single
concepts or single statements. Constructing a
conceptual graph is similar to concept mapping, but it
includes a formal and detailed collection of nodes,
relations, and questions. The nodes can include more
than just concepts. Nodes can be goals, actions, or
events. There are specific relations for each type of
node, and a set of formal, probing questions is
developed for each node type.
129. Configuration G D 1950 Configuration management is a field of management Tools available. Configuration 6 computer x x • [Bishop90]
Management that focuses on establishing and maintaining management was first developed • Wikipedia
consistency of a system's or product's performance and by the United States Air Force for
its functional and physical attributes with its the Department of Defense in the
requirements, design, and operational information 1950s as a technical management
throughout its life. Aim is to ensure the consistency of discipline of hardware.
groups of development deliverables as those
deliverables change. It applies to both hardware and
software development.
130. Confined Space Safety T R 1992 The purpose of this analysis technique is to provide a Any confined areas where there 3 chemical x • [FAA00]
systematic examination of confined space risks. A may be a hazardous atmosphere, • [ΣΣ93, ΣΣ97]
confined space is defined to be an area that has both toxic fume, or gas, the lack of • Wikipedia
(1) insufficient ventilation to remove dangerous air oxygen, could present risks.
contamination and/or oxygen deficiency, and (2) Confined Space Safety is
restricted access or egress. applicable to tank farms, fuel
storage areas, manholes,
transformer vaults, confined
electrical spaces, race-ways.
Consequence Tree See ETA (Event Tree Analysis)
Method
131. Contextual Inquiry T H 1996 Contextual Inquiry is both a form of field study and a See also Plant walkdowns/ 2 x • [FAA HFW]
data analysis technique. The guiding principle is that surveys. See also Field Study. • Wikipedia
users are to be studied in their normal working
context. Users are studied as they execute ordinary
work assignments. Experimenters following the
contextual inquiry method observe users working and
record both how the users work and the experimenters'
interaction with the users. This recording can be hand-
written notes or, if possible, through the use of video
or audiotape recordings. The aim is to gather as much
data as possible from the interviews for later analysis.
132. Contingency Analysis T M 1972? Contingency Analysis is a method of minimising risk Contingency Analysis can be 3 6 many x x x • [FAA00]
in the event of an emergency. Potential accidents are conducted for any system, • [ΣΣ93, ΣΣ97]
identified and the adequacies of emergency measures procedure, task or operation
are evaluated. Contingency Analysis lists the potential where there is the potential for
accident scenario and the steps taken to minimise the harm. It is an excellent formal
situation. training and reference tool.
133. Control Flow Checks T Ds 1981 Control flow analysis is a static code analysis Not necessary if the basic 2 computer x x • [Bishop90]
or technique for determining the control flow of a hardware is fully proven or self- • [EN 50128]
Control Flow Analysis program. For many languages, the control flow of a checking. Otherwise, it is • [Rakowsky]
program is explicit in a program's source code. As a valuable technique for systems • Wikipedia
result, control-flow analysis usually refers
to a static analysis technique for determining the there is no hardware redundancy
receiver(s) of function or method calls in computer or no software diversity in the
programs written in a higher-order programming program or support tools. Tools
language. For both functional programming languages available.
and object-oriented programming languages, the term
CFA refers to an algorithm that computes control
flow. Aim is to detect computer mal-operation by
detecting deviations from the intended control flow.
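The runtime side of a control flow check (detecting deviations from the intended control flow) can be sketched as a monitor that compares observed checkpoint IDs against the intended control-flow graph; the checkpoint names and graph below are invented.

```python
# Simplified sketch of a runtime control-flow check: each program block reports
# a checkpoint ID, and a monitor verifies that the observed sequence follows
# the intended control-flow graph. IDs and graph are illustrative assumptions.

INTENDED_FLOW = {  # checkpoint -> checkpoints allowed to follow it
    "init": {"read_input"},
    "read_input": {"compute", "shutdown"},
    "compute": {"write_output"},
    "write_output": {"read_input", "shutdown"},
}

def check_flow(trace, start="init"):
    """Return True iff the observed checkpoint trace follows the intended flow."""
    if not trace or trace[0] != start:
        return False
    for prev, cur in zip(trace, trace[1:]):
        if cur not in INTENDED_FLOW.get(prev, set()):
            return False  # deviation: computer mal-operation suspected
    return True

print(check_flow(["init", "read_input", "compute", "write_output", "shutdown"]))  # True
print(check_flow(["init", "compute", "write_output", "shutdown"]))               # False
```

On detecting a deviation, a system of the kind described in the remarks column would drive the equipment to its safe state.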
Cooper Harper Rating See Rating Scales
Scale
Cooperative Evaluation See Think Aloud Protocol
134. CORE T Ds 1979 Aim is to ensure that all the requirements are Developed for British Aerospace 3 computer x • [Bishop90]
(Controlled identified and expressed. Intended to bridge the gap in the late 1970s to address the space • [EN 50128]
Requirements between the customer/end user and the analyst. Is need for improved requirements • [Rakowsky]
Expression) designed for requirements expression rather than expression and analysis. Despite
specification. Seven steps: 1) Viewpoint identification its age, CORE is still used today
(e.g. through brainstorming); 2) Viewpoint on many projects within the
structuring; 3) Tabular collection (Table with source, aerospace sector. Is frequently
input, output, action, destination); 4) Data structuring used with MASCOT. Tools
(data dictionary); 5,6) Single viewpoint modelling and available.
combined viewpoint modelling (model viewpoints as
action diagrams, similar as in SADT); 7) Constraint
analysis.
135. CORE-DATA D 1992 Database on human errors and incidents, for human Originally collated from nuclear 5 ATM x • [Kirwan&Basra&Tayl
(Computerised Human from reliability support. Currently contains about 1500 data power industry, recently extended nuclear or.doc]
Error Database for points. to other sectors, such as offshore offshore • [Kirwan&Basra&Tayl
Human Reliability lifeboat evacuation, manufacturing or.ppt]
Support) manufacturing, offshore drilling,
permit-to-work, electricity electricity Hamblen]
transmission, nuclear power plant
emergency scenarios, calculator
errors, and a small number of
ATM-related human error
probabilities have been
developed. Initially developed at
the University of Birmingham,
UK.
136. COSIMO I H 1992 A parallel to CES in that it is a simulation of the 2 5 nuclear x • [Kirwan95]
(Cognitive Simulation human operator and his/her thought processes, using a • [Kirwan98-1]
Model) computerised blackboard architecture. The simulated • [MUFTIS3.2-I]
operator comprises a set of properties and attributes
associated with particular incident scenarios, and
‘packets’ of process knowledge and heuristics rules of
thumb. When diagnosing, each scenario and its
associated attributes are contrasted to ‘similarity-
match’ to the symptom set being displayed to the
‘operator’, and the simulated operator will either
determine unequivocally which scenario matches the
symptoms, or, if there is ambiguity, will ‘frequency-
gamble’. Once hypotheses are formulated, they are
evaluated according to a confidence threshold, and
may be accepted or rejected.
137. CPA T R 1957 Critical Path Analysis identifies critical paths in a This technique is applied in 2 3 managem x x • [FAA AC431]
(Critical Path Analysis) Program Evaluation graphical network. Simply it is a support of large system safety ent • [FAA00]
or CPM graph consisting of symbology and nomenclature programs, when extensive system chemical • [KirwanAinsworth92]
(Critical Path Method) defining tasks and activities. The critical path in a safety-related tasks are required. • [ΣΣ93, ΣΣ97]
network is the longest time path between the Combines with PERT. Tools • Wikipedia
beginning and end events. available.
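The core computation of CPA (the longest time path through the task network) is small enough to sketch directly; the task network below is invented for the example.

```python
# Minimal critical-path computation: the critical path is the longest-duration
# path from the beginning to the end of the task network, as in CPA/CPM/PERT.

def critical_path(durations, predecessors):
    """durations: task -> time; predecessors: task -> list of prerequisite tasks."""
    earliest, best_pred = {}, {}
    remaining = dict(durations)
    while remaining:
        progressed = False
        for task in list(remaining):
            preds = predecessors.get(task, [])
            if all(p in earliest for p in preds):
                start = max((earliest[p] for p in preds), default=0.0)
                earliest[task] = start + remaining.pop(task)
                best_pred[task] = max(preds, key=lambda p: earliest[p]) if preds else None
                progressed = True
        if not progressed:
            raise ValueError("cycle in task network")
    finish = max(earliest, key=earliest.get)
    path, length = [finish], earliest[finish]
    while best_pred[path[-1]] is not None:   # walk back along the longest path
        path.append(best_pred[path[-1]])
    return list(reversed(path)), length

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
predecessors = {"C": ["A", "B"], "D": ["C"]}
path, length = critical_path(durations, predecessors)
print(path, length)  # ['A', 'C', 'D'] 8
```

Tasks off the critical path (here B) have slack; delaying any task on the path delays the whole programme, which is why CPA is used to focus system safety effort.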
138. CPIT T H 2000 The CPIT process focuses on a cognitive approach to The CPIT approach, developed 8 aviation x • [GAIN AFSA, 2003]
(Cabin Procedural or understand how and why the event occurred, not who by Boeing, has benefited from
Investigation Tool) later was responsible. CPIT depends on an investigative lessons learned by its sister
philosophy, which acknowledges that professional program, Procedural Event
cabin crews very rarely fail to comply with a Analysis Tool (PEAT), which
procedure intentionally, especially if it is likely to Boeing has provided to airlines
result in an increased safety risk. It also requires the since 1999.
airline to explicitly adopt a non-jeopardy approach to CPIT is a stand-alone service, but
incident investigation. CPIT contains more than 100 is normally offered with PEAT
analysis elements that enable the user to conduct an training.
in-depth investigation, summarise findings and
integrate them across various events. The CPIT data
organisation enables operators to track their progress
in addressing the issues revealed by the analyses.
CPIT is made up of two components: the interview
process and contributing analysis. It provides an in-
depth structured analytic process that consists of a
sequence of steps that identify key contributing factors
to cabin crew errors and the development of effective
recommendations aimed at the elimination of similar
errors in the future.
CPM See CPA (Critical Path Analysis)
(Critical Path Method) or CPM (Critical Path Method)
139. CPM-GOMS I H 1988 CPM-GOMS builds on previous GOMS models by CPM-GOMS is a variation of the 2 x x • [FAA HFW]
(Cognitive-Perceptual- assuming that perceptual, cognitive and motor GOMS technique in human • Wikipedia
Motor GOMS) operators can be performed in parallel. Where other computer interaction. CPM stands
GOMS techniques assume that humans do one thing at for two things: Cognitive,
a time, CPM-GOMS assumes as many operations as Perceptual, and Motor and the
possible will happen at any given time subject to project planning technique
constraints of the cognitive, perceptual, and motor Critical Path Method (from which
processes. Models are developed using PERT charts it borrows some elements).
and execution time is derived from the critical path. CPM-GOMS was developed in
CPM-GOMS generally estimates unit-task serial
executions to be faster than the other version of student of Allen Newell. Unlike
GOMS. This happens because the model assumes that the other GOMS variations,
the users are expert and are executing the operations CPM-GOMS does not assume
as fast as the Model Human Processor can perform. that the user's interaction is a
serial process, and hence can
model multitasking behavior that
can be exhibited by experienced
users. The technique is also based
directly on the model human
processor - a simplified model of
human responses. See also Apex,
CAT, CTA, GOMS, KLM-
GOMS, NGOMSL.
140. CPQRA T R 1989 Quantitative risk assessment within chemical process Processes of all types. Is related 3 4 5 chemical x x • [ΣΣ93, ΣΣ97]
(Chemical Process industry. Stands for the process of hazard to Probabilistic Risk Assessment manufacturing
Quantitative Risk identification, followed by numerical evaluation of (PRA) used in the nuclear
Analysis) incident consequences and frequencies, and their industry.
combination into an overall measure of risk when
applied to the chemical process industry. Ordinarily
applied to episodic events.
141. CRC T M 1980? Control Rating Code is a generally applicable system Control Rating Code can be 6 defence x • [FAA AC431]
(Control Rating Code safety-based procedure used to produce consistent applied when there are many • [FAA00]
Method) safety effectiveness ratings of candidate actions hazard control options available. • [ΣΣ93, ΣΣ97]
intended to control hazards found during analysis or The technique can be applied
accident analysis. Its purpose is to control toward any safe operating
recommendation quality, apply accepted safety procedure, or design hazard
principles, and prioritise hazard controls.
142. CREAM I H 1998 Cognitive modelling approach. Applies cognitive Developed by Erik Hollnagel. 4 nuclear x • [Kirwan98-1]
(Cognitive Reliability systems engineering to provide a more thoroughly Related to SHERPA, SRK and • [CREAM web]
and Error Analysis argued and theory supported approach to reliability COCOM. A version of traffic • [FAA HFW]
Method) studies. The approach can be applied retrospectively safety has been implemented • Wikipedia
or prospectively, although further development is (DREAM - Driver Reliability
required for the latter. The ‘meat’ of CREAM is the And Error Analysis Method).
distinction between phenotypes (failure modes) and Later, a version was developed
genotypes (possible causes or explanations). for use in maritime accident
analysis (BREAM - B for the
ship's Bridge).
143. CREATE T H 1990 Human error reliability assessment. Describes how 5 nuclear • [Woods et al, 1992]
(Cognitive Reliability or Cognitive Environment Simulation (CES) can be used
Assessment Technique) older to provide input to human reliability analyses (HRA)
in probabilistic risk assessment (PRA) studies.
144. CREWPRO I H 1994 Cognitive simulation which builds on CREWSIM. 3 4 nuclear? x x • [Kirwan95]
(CREW PROblem Intends to be able to model communication and • [Kirwan98-1]
solving simulation) confidence in other crew members. These represent
ambitious but significant enhancements of the external
validity or realism of modelling.
145. CREWS approach T O 1998 The ESPRIT CREWS approach focuses more on goal CREWS has been developed as 6 x • [Rolland et al. 1998]
definition and the linking of goals to stakeholders’ part of ESPRIT, a European • [CREWS]
actual needs by linking goals and scenarios. It uses a Strategic Program on Research in
bi-directional coupling allowing movement from goals Information Technology and ran
to scenarios and vice-versa. The complete solution is from 1983 to 1998. It was
in two parts: when a goal is discovered, a scenario can succeeded by the Information
be authored for it and once a scenario has been Society Technologies (IST)
authored, it is analysed to yield goals. By exploiting programme in 1999.
the goal-scenario relationship in the reverse direction, See also ART-SCENE.
i.e. from scenario to goals, the approach pro-actively
guides the requirements elicitation process. In this
process, goal discovery and scenario authoring are
complementary steps and goals are incrementally
discovered by repeating the goal-discovery, scenario-
authoring cycle. The steps are: 1. Initial Goal
Identification; 2. Goal Analysis; 3. Scenario
Authoring; 4. Goal Elicitation through Scenario
Analysis. Steps 2 - 4 are repeated until all goals have
been elicited.
146. CREWSIM I H 1993 Simulation model that models the response of an Has been particularly developed 3 4 nuclear x x • [Kirwan98-1]
(CREW SIMulation) operating team in a dynamically evolving scenario. to date to focus on a particular
The model simulates operator interactions within a nuclear power plant scenario.
three-person crew, as well as the cognitive processes
of the crewmembers, and the crew-plant dynamic
interaction. Although the model has a knowledge base
as other simulations do (e.g. COSIMO and CES),
CREWSIM differs by using a set of prioritised lists
that reflect the priorities of different concerns. Some
other interesting aspects are 1) attentional resources
control is simulated, such that diagnosis will be
suspended while the operator is communicating or
carrying out some other task. 2) the model’s usage
focuses particularly on transitions between procedures,
and hence is looking in particular for premature,
delayed, and inappropriate transfer within the
emergency procedures system. 3) several error
mechanisms are treated by the model: memory lapse;
jumping to conclusions; communication failures;
incorrect rules; and improper prioritisation.
147. CRIOP T R 1990 CRIOP is a structured method for assessing offshore Developed by the Norwegian Oil 7 8 offshore x x • [CRIOP History]
(CRisis Intervention in control rooms. The main focus is to uncover potential & Gas industry. It underwent a • [Kjellen, 2000]
Offshore Production) weaknesses in accident/incident response. CRIOP significant revision in 2004. • [SAFETEC web]
assesses the interface between operators and technical
systems within the control room. The assessment is
comprised of two main parts: (1) a design assessment
in the form of a checklist; and (2) a scenario based
assessment intended to assess the adequacy of
response to critical situations.
148. Critical Task Analysis T H 1987 A Critical Task Analysis aims to describe the results Developed by FAA. 2 defence x • [FAA HFW]
or of analyses of critical tasks performed to provide a Critical tasks are elemental navy • [HEAT overview]
older basis for evaluation of the design of the system, actions required to perform the
equipment, or facility, verifying that human task. ‘Critical’ is usually defined
engineering technical risks have been minimised and as being necessary for mission
solutions are in hand. success.
149. Criticality Analysis T R 1972? The purpose of the Criticality Analysis is to rank each The technique is applicable to all 5 aircraft x x x • [FAA00]
or failure mode identified in a Failure Modes and Effect systems, processes, procedures, • [ΣΣ93, ΣΣ97]
Criticality Matrix Analysis. Once critical failures are identified they can and their elements. Combines
be equated to hazards and risks. Designs can then be with FMEA to become FMECA.
applied to eliminate the critical failure, thereby See also Nuclear Criticality
eliminating the hazard and associated accident risk. Analysis.
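The ranking step can be sketched with a common severity-times-likelihood scheme; note that the category scales, scores, and failure modes below are invented illustrations, not the scheme of any particular standard.

```python
# Sketch of ranking FMEA failure modes by criticality. A common scheme scores
# each mode as severity x likelihood; the categories and data here are made up.

SEVERITY = {"catastrophic": 4, "hazardous": 3, "major": 2, "minor": 1}
LIKELIHOOD = {"frequent": 5, "probable": 4, "occasional": 3, "remote": 2,
              "improbable": 1}

def rank_failure_modes(modes):
    """modes: list of (name, severity category, likelihood category).
    Returns names ordered from most to least critical."""
    scored = [(SEVERITY[s] * LIKELIHOOD[l], name) for name, s, l in modes]
    return [name for score, name in sorted(scored, reverse=True)]

modes = [
    ("hydraulic leak", "major", "probable"),
    ("engine flameout", "catastrophic", "remote"),
    ("indicator bulb out", "minor", "frequent"),
]
print(rank_failure_modes(modes))  # most critical modes first
```

The highest-ranked modes are the ones equated to hazards and targeted for design elimination, as the description says.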
Criticality Matrix See Criticality Analysis or
Criticality Matrix
150. CRM (Collision Risk Model (ICAO)) (T, R, 1964)
Aim/Description: Collision risk model, adopted by ICAO. Also named Reich Collision risk model. Estimates the level of risk of a mid-air collision between two aircraft. Based on 7 assumptions, two of which are rather restrictive. Calculates collision risk from traffic factors, aircraft parameters and navigational performance.
Remarks: Mainly applies to largely strategic procedures only. No dynamic role for ATCos and pilots; basic logic is "navigational errors -> mid-air collisions".
Safety assessment stage: 5. Domains: ATM. References: [Bakker&Blom93]; [Brooker02]; [MUFTIS3.2-II]; [Reich64].
151. CRM (Crew Resource Management) (I, T, about 1998)
Aim/Description: CRM examines the implications of human factors and limitations, and the effect they have on performance. It introduces the concept of the 'Error Chain', the application of which can lead to recognition of incipient error situations, and develops tools for error intervention and avoidance.
Remarks: Sometimes CRM is referred to as Cockpit Resource Management.
Safety assessment stage: 6. Domains: aviation, ATM and many others. References: [TRM web]; Wikipedia.
152. CRT (Current Reality Tree) (M, 1984)
Aim/Description: A CRT is a statement of an underlying core problem and the symptoms that arise from it. It maps out a sequence of cause and effect from the core problem to the symptoms. Most of the symptoms will arise from the one core problem or a core conflict. Remove the core problem and we may well be able to remove each of the symptoms as well. Operationally one works backwards from the apparent undesirable effects or symptoms to uncover or discover the underlying core cause.
Remarks: Developed by Eliahu M. Goldratt in his theory of constraints, which guides an investigator to identify and relate all root causes using a cause-effect tree whose elements are bound by rules of logic (Categories of Legitimate Reservation).
Safety assessment stage: 4. References: Wikipedia.
153. CSA (Comparative Safety Assessment) (T, M, 2000 or older)
Aim/Description: Each safety hazard is investigated in the context of investment alternatives. The result is a ranking of alternative solutions by reduction in safety risk or other benefits. Steps are to:
• Define the alternative solutions under study in system engineering terms (mission, human, machine, media and management);
• Develop a set of hierarchical functions that each solution must perform;
• Develop a Preliminary Hazard List (PHL) for each alternative solution;
• List and evaluate the risk of each hazard for the viable alternative solutions;
• Evaluate the risk;
• Document the assumptions and justifications for how the severity and probability of each hazard condition was determined.
Remarks: The input hazards for CSA are identified in an Operational Safety Assessment (OSA, see ED-78A), which is conducted during Mission Analysis in accordance with the NAS Modernisation System Safety Management Plan (SSMP).
Safety assessment stages: 5, 6. Domains: aviation, health. References: [FAA00] (App B); [FAA tools].
154. CSE (Cognitive Systems Engineering) (T, H, 1983)
Aim/Description: In CSE the focus is not on human cognition as an internal function or as a mental process, but rather on human activity or "cognition at work", i.e. on how cognition is necessary to effectively accomplish the tasks by which specific objectives related to either work or non-work activities can be achieved. CSE proposes that composite operational systems can be looked at as joint cognitive systems. Structurally they may comprise the individual people, the organisation (both formal and informal), the high-level technology artefacts (AI, automation, intelligent tutoring systems, computer-based visualisation) and the low-level technology artefacts (displays, alarms, procedures, paper notes, training programs) that are intended to support human practitioners. Functionally, however, they can be seen as a single system consisting of multiple interacting agents (human and technical).
Remarks: CSE emerged as a research direction in the early 1980s, and has since grown into a recognized discipline. CSE addresses the problems of people working with complex systems.
Safety assessment stage: 4. References: [CSE web]; [Gualtieri, 2005]; [HollnagelWoods83]; [HollnagelWoods, 2005].
155. CSP (Communicating Sequential Processes) (T, Ds, 1978; update in 1985)
Aim/Description: Technique for the specification of concurrent software systems, i.e. systems of communicating processes operating concurrently. Allows one to describe systems as a number of components (processes) which operate independently and communicate with each other over well-defined channels.
Remarks: Descriptive tool in cases where a system must consist of more than one process. Related to CCS. The restriction that the component processes must be sequential was removed between 1978 and 1985, but the name was already established. Applies to the software requirements specification phase and the design & development phase.
Safety assessment stage: 3. Domains: computer. References: [Bishop90]; [EN 50128]; [Rakowsky].
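The CSP idea of independent processes interacting only via well-defined channels can be sketched as follows. This is an assumed illustration of the concept (not Hoare's CSP notation itself), using a thread per process and a size-1 queue to approximate a channel rendezvous:

```python
import threading
import queue

def producer(ch: queue.Queue) -> None:
    for item in range(3):
        ch.put(item)          # send over the channel
    ch.put(None)              # sentinel: channel closed

def consumer(ch: queue.Queue, out: list) -> None:
    # receive until the sentinel; no state is shared except the channel
    while (item := ch.get()) is not None:
        out.append(item * 2)

channel: queue.Queue = queue.Queue(maxsize=1)  # size-1 buffer approximates rendezvous
results: list = []
t1 = threading.Thread(target=producer, args=(channel,))
t2 = threading.Thread(target=consumer, args=(channel, results))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)  # [0, 2, 4]
```

The point of the formalism is that each process is described in isolation and the only coupling is the channel protocol.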
156. CSSA (Cryogenic Systems Safety Analysis) (T, R, 1982)
Aim/Description: The purpose is to specifically examine cryogenic systems from a safety standpoint, in order to eliminate or mitigate the hazardous effects of potentially hazardous materials at extremely low temperatures.
Remarks: Use with PHA or SSHA. Cryogenic is a term applied to low-temperature substances and apparatus.
Safety assessment stages: 3, 6. Domains: chemical. References: [FAA AC431]; [ΣΣ93, ΣΣ97].
157. CSSM (Continuous Safety Sampling Methodology) (T, M, 1997)
Aim/Description: This is a form of hazard analysis that uses observation and sampling techniques to determine and maintain a pre-set level of the operator's physical safety within constraints of cost, time, and operational effectiveness. Sampling is performed to observe the occurrence of conditions that may become hazardous in a given system and could result in an accident or occupational disease. The collected data are then used to generate a control chart. Based on the pattern of the control chart, a system "under control" is not disturbed, whereas a system "out of control" is investigated for potential conditions becoming hazardous. Appropriate steps are then taken to eliminate or control these conditions to maintain a desired safe system.
Safety assessment stages: 3, 4, 6. Domains: manufacturing. References: [HIFA Data]; [FAA HFW].
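The control-chart step above can be made concrete with a small sketch. The p-chart with 3-sigma limits used here is an assumed, standard choice (the method itself does not prescribe specific limits), and the observation counts are hypothetical:

```python
import math

def p_chart_limits(counts, n):
    """Return (centre, lcl, ucl) for hazardous-condition counts out of n observations each."""
    p_bar = sum(counts) / (len(counts) * n)          # overall hazardous fraction
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)       # binomial standard error
    return p_bar, max(0.0, p_bar - 3 * sigma), p_bar + 3 * sigma

daily_counts = [2, 3, 1, 2, 12, 2, 3]                # 5th sampling day looks anomalous
centre, lcl, ucl = p_chart_limits(daily_counts, n=100)

# Flag "out of control" days for investigation, as the method describes
out_of_control = [i for i, c in enumerate(daily_counts) if not lcl <= c / 100 <= ucl]
print(out_of_control)  # [4]
```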
158. CTA (Cognitive Task Analysis) (T, H, 1994 or older)
Aim/Description: CTA thoroughly describes some aspect of human operation and cognitive processing within a work domain. CTA is used to design human-system interaction and displays, assess job requirements, and develop training, or evaluate teamwork. A CTA is an analysis of the knowledge and skills required for proper performance of a particular task. The framework consists of three elements: (a) an analysis of the task that has to be carried out to accomplish particular goals; (b) an analysis of the knowledge and skills required to accomplish these tasks; and (c) an analysis of the cognitive (thought) processes of experienced and less experienced persons.
Remarks: [MIL-HDBK] describes three examples for conducting CTA: 1) the Precursor, Action, Results, Interpretation method (PARI); 2) Conceptual Graph Analysis (CGA); 3) Critical Decision Method. Tools: GOMS, DNA Method (Decompose, Network, and Assess Method).
Safety assessment stage: 2. Domains: defence. References: [Davison]; [MIL-HDBK]; [Mislevy&al98]; [Klein04]; [Kieras88]; [Johnson92]; [SchaaftalSchraagen2000]; [FAA HFW].
159. CTC (Comparison-To-Criteria) (T, R, 1993)
Aim/Description: The purpose of CTC is to provide a formal and structured format that identifies safety requirements. Any deviations between the existing design requirements and those required are identified in a systematic manner, and the effect of such deviations on the safety of the process or facility is evaluated. The deviations with respect to system upsets are those caused by operational, external, and natural events. Operational events include, among others, individual component failures, human error interactions with the system (to include operation, maintenance, and testing), and support system failures. For systems that do not meet current design requirements, an upgrade is not done automatically until an assessment of their importance to safety is made.
Remarks: Comparison-To-Criteria is a listing of safety criteria that could be pertinent to any FAA system. This technique can be considered in a Requirements Cross-Check Analysis. Applicable safety-related requirements, such as those of OSHA (Occupational Safety and Health Administration), NFPA (National Fire Protection Association) and ANSI (American National Standards Institute), are reviewed against an existing system or facility.
Safety assessment stages: 6, 7. Domains: nuclear. References: [FAA00]; [McClure&Restrepo99]; [ΣΣ93, ΣΣ97].
160. CTD (Cognitive Task Design) (T, D, 1995 or older)
Aim/Description: The aim of Cognitive Task Design is to focus on the consequences that artefacts have for how they are used, and how this use changes the way we think about them and work with them, on the individual as well as the organisational level. The ambition is to ensure that Cognitive Task Design is an explicit part of the design activity, rather than something that is done fortuitously and in an unsystematic manner.
Remarks: In some references, e.g. [Sutcliffe, 2003], CTD is presented as a generic term rather than a specific technique. CTD has the same roots as Cognitive Task Analysis (CTA), but the focus is on macro-cognition rather than micro-cognition, i.e. the requisite variety of the joint system rather than the knowledge, thought processes, and goal structures of the humans in the system. CTD goes beyond CTA, as the emphasis is on the potential (future) rather than the actual (past and present) performance.
Safety assessment stage: 4. Domains: many. References: [CTD web]; [Hollnagel, 2003]; [Sutcliffe, 2003]; [WordenSchneider, 1995].
CTM (Cause Tree Method): See FTA (Fault Tree Analysis).
161. CWA (Cognitive Work Analysis) (T, H, 1975)
Aim/Description: CWA analyzes the work people do, the tasks they perform, the decisions they make, their information behavior, and the context in which they perform their work, for the purpose of systems design. It offers a mechanism to transfer results from an in-depth analysis of human-information work interaction directly to design requirements. CWA focuses on identifying the constraints that shape behavior rather than trying to predict behavior itself. It consists of five layers of analysis: 1. Work Domain: the functional structure of the work domain in which behavior takes place. 2. Control Tasks: the generic tasks that are to be accomplished. 3. Strategies: the set of strategies that can be used to carry out those tasks. 4. Social-Organisational: the organisation structure. 5. Worker Competencies: the competencies required of operators to deal with these demands.
Remarks: Cognitive Work Analysis was developed in the 1970s at the Risø National Laboratory in Denmark, to facilitate human-centered design. It complements traditional task analysis by adding the capability of designing for the unanticipated, by describing the constraints on behavior rather than behavior per se.
Safety assessment stage: 2. Domains: many. References: [FAA HFW]; [CWA portal].
162. DAD (Decision Action Diagram) (T, H, 1950)
Aim/Description: Aim is to show how to navigate a system, based on decisions and actions. Actions are drawn as rectangles, decisions as diamonds, and possible decision outcomes are labelled on arrows from decision diamonds. Decisions can be phrased as yes/no or as multiple choice questions.
Remarks: Developed by Dunlap & Associates in the 1950s. Similar in appearance and logic to the mechanical handling diagrams which are used in mechanical HAZOPs. Also known as Information Flow Charts or Decision-Action-Information Diagrams. Also similar to functional flow diagrams, except that decision points are added.
Safety assessment stage: 2. Domains: defence, nuclear. References: [HEAT overview]; [Kennedy&Kirwan98]; [Kirwan94]; [KirwanAinsworth92]; [MIL-HDBK]; [Silva&al99]; [FAA HFW].
163. Data Flow Analysis (T, Ds, 1973)
Aim/Description: Data flow analysis is a static analysis technique that is performed both at procedure level and as part of the system-wide analysis, which is one aspect of integration testing. It identifies data flow anomalies in the program, e.g. the use of uninitialised variables. It gathers information about the possible set of values calculated at various points in a computer program. A program's control flow graph is used to determine those parts of a program to which a particular value assigned to a variable might propagate. The information gathered is often used by compilers when optimizing a program.
Remarks: A simple way to perform dataflow analysis of programs is to set up dataflow equations for each node of the control flow graph and solve them by repeatedly calculating the output from the input locally at each node until the whole system stabilizes, i.e. it reaches a fixpoint.
Safety assessment stage: 3. Domains: computer. References: [EN 50128]; [Rakowsky]; [SPARK web]; Wikipedia.
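The fixpoint scheme described in the remarks can be sketched on a toy control flow graph. This is an assumed example: a forward "definitely assigned" analysis that flags the kind of uninitialised-variable anomaly mentioned above (variable x is assigned on only one branch before the join):

```python
cfg = {                       # node -> successors
    "entry": ["a", "b"],
    "a": ["join"],            # branch that assigns x
    "b": ["join"],            # branch that does not
    "join": [],
}
assigns = {"entry": set(), "a": {"x"}, "b": set(), "join": set()}
uses = {"join": {"x"}}

preds = {n: [m for m in cfg if n in cfg[m]] for n in cfg}
ALL = {"x"}
out = {n: set(ALL) for n in cfg}      # optimistic start for a "must" analysis
out["entry"] = set()

changed = True
while changed:                        # iterate the dataflow equations to a fixpoint
    changed = False
    for n in cfg:
        inn = set.intersection(*(out[p] for p in preds[n])) if preds[n] else set()
        new = inn | assigns[n]        # OUT[n] = IN[n] ∪ ASSIGN[n]
        if new != out[n]:
            out[n], changed = new, True

def in_set(n):
    return set.intersection(*(out[p] for p in preds[n])) if preds[n] else set()

warnings = [(n, v) for n in cfg for v in uses.get(n, ()) if v not in in_set(n)]
print(warnings)  # [('join', 'x')]
```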
164. Data Mining (D, 1750)
Aim/Description: Data Mining is defined as the systematic and automated searching of a database in order to extract information and patterns. Data mining commonly involves four classes of tasks:
• Clustering: discovering groups and structures in the data that are "similar", without using known structures in the data.
• Classification: generalizing known structure to apply to new data. Common algorithms include decision tree learning, nearest neighbor, naive Bayesian classification, neural networks and support vector machines.
• Regression: attempting to find a function which models the data with the least error.
• Association rule learning: searching for relationships between variables.
Remarks: Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s). Some tools are:
• Aviation Safety Data Mining Workbench (MITRE, 2001): three data mining techniques for application to aviation safety data;
• Brio Intelligence 6 (Brio Software Japan (1999); Hyperion (2003));
• IMS (Inductive Monitoring System) (NASA Ames, 2003): health monitoring;
• QUORUM Perilog (NASA, 1995): four data mining techniques; supports FAA's Aviation Safety Reporting System (ASRS).
Other tools related to Data Mining are: Mariana, ReADS (Recurring Anomaly Detection System). See also SequenceMiner.
Safety assessment stages: 3, 5, 7, 8. Domains: many. References: [Fayyad et al, 1996]; Wikipedia; [GAIN AFSA, 2003]; [Halim et al, 2007]; [GAIN ATM, 2003]; [Nazeri03]; [FAA HFW]; [ASDMW application].
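One of the classification algorithms named above, nearest neighbor, can be sketched in a few lines. The feature vectors and labels below are hypothetical, chosen only to illustrate the idea of generalizing known structure to new data:

```python
def nearest_neighbour(train, query):
    """train: list of (features, label) pairs; returns label of the closest point."""
    def dist2(a, b):
        # squared Euclidean distance; monotone in distance, so fine for argmin
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda fl: dist2(fl[0], query))[1]

# Hypothetical incident-report features already labelled by an analyst
train = [((0.1, 0.2), "nominal"),
         ((0.9, 0.8), "anomalous"),
         ((0.2, 0.1), "nominal")]

label = nearest_neighbour(train, (0.85, 0.9))   # closest to (0.9, 0.8)
print(label)  # anomalous
```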
Data Recording and Analysis: See Self-Reporting Logs.
165. Data Security (G, D, 1975 or older)
Aim/Description: Data security is the means of ensuring that data is kept safe from corruption and that access to it is suitably controlled. Thus data security helps to ensure privacy. It also helps in protecting personal data. Aim is to guard against external and internal threats which can either accidentally or deliberately endanger the objectives of design and may lead to unsafe operation.
Remarks: Tools available.
Safety assessment stage: 6. Domains: computer. References: [Bishop90]; Wikipedia.
166. DBN (Dynamic Bayesian Network) (M, 1997 or older)
Aim/Description: Dynamic Bayesian Networks (or Dynamic Bayesian Belief Networks) are a method for studying state-transition systems with stochastic behaviour. A DBN is a Bayesian network that represents sequences of variables. These sequences are often time-series (for example, in speech recognition) or sequences of symbols (for example, protein sequences). DBNs comprise a large number of probabilistic graphical models, which can be used as a graphical representation of dynamic systems. With this, they provide a unified probabilistic framework for integrating multi-modalities.
Remarks: A Dynamic Bayesian Network extends the static Bayesian Belief Network (BBN) by modelling changes of stochastic variables over time. The hidden Markov model and the Kalman filter can be considered the most simple dynamic Bayesian networks.
Safety assessment stages: 4, 5. Domains: many. References: [GoranssonKoski, 2002]; [Murphy, 2002]; Wikipedia.
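The remark that the hidden Markov model is the simplest DBN can be made concrete with the forward recursion over one discrete hidden variable. The two-state system and all probabilities below are toy numbers assumed for illustration:

```python
states = ["ok", "degraded"]
trans = {"ok": {"ok": 0.9, "degraded": 0.1},          # P(state_t | state_t-1)
         "degraded": {"ok": 0.2, "degraded": 0.8}}
emit = {"ok": {"quiet": 0.8, "alarm": 0.2},           # P(observation | state)
        "degraded": {"quiet": 0.3, "alarm": 0.7}}
prior = {"ok": 0.95, "degraded": 0.05}

def forward(observations):
    """Return P(state | observations so far), updated one time slice at a time."""
    belief = dict(prior)
    for obs in observations:
        unnorm = {s: emit[s][obs] * sum(belief[p] * trans[p][s] for p in states)
                  for s in states}
        total = sum(unnorm.values())
        belief = {s: v / total for s, v in unnorm.items()}
    return belief

belief = forward(["alarm", "alarm"])   # two alarms in a row shift the belief
print(round(belief["degraded"], 2))    # 0.65
```

Each loop iteration is exactly one "time slice" of the DBN: predict with the transition model, then condition on the new observation.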
167. DCPN (Dynamically Coloured Petri Nets) (M, 1997)
Aim/Description: Extension of Petri Nets to include dynamic colours, i.e. variables attached to Petri net tokens that can take on real values and that can change through time according to the solutions of differential equations. The transitions of tokens happen according to two kinds of events: 1) at a time generated by a Poisson point process; 2) at a time when the values of the tokens reach a boundary.
Remarks: DCPN can be mapped to and from Piecewise Deterministic Markov Processes. An extension of DCPN, named Stochastically and Dynamically Coloured Petri Net (SDCPN), includes Brownian motion terms in the token colour differential equations. SDCPN are the modelling format used for the TOPAZ methodology, in which they are used to model aircraft behaviour through time, influenced by nominal and non-nominal human behaviour, technical system behaviour, weather, etc.
Safety assessment stages: 4, 5. Domains: ATM. References: [Everdij&Blom&Klompstra97]; [Everdij&Blom03]; [Everdij&Blom04]; [Everdij&Blom05]; [Everdij&al04]; [Everdij&Blom06]; [Everdij&al06]; [Everdij, 2010].
168. DD (Dependence Diagrams) (T, R, 1994 or older)
Aim/Description: Structured, deductive, top-down analysis that identifies the conditions, failures, and events that would cause each defined failure condition. Graphical method of identifying the logical relationship between each particular failure condition and the primary element or component failures, other events, or combinations of these that can cause the failure condition. Similar to FTA, except that a Fault Tree Analysis is failure-oriented and is conducted from the perspective of which failures must occur to cause a defined failure condition, whereas a Dependence Diagram Analysis is success-oriented and is conducted from the perspective of which failures must not occur to preclude a defined failure condition.
Remarks: In some references stated to be equivalent to Reliability Block Diagrams (RBD).
Safety assessment stage: 4. Domains: aircraft. References: [ARP 4761]; [FAA memo02].
169. DDET (Discrete Dynamic Event Tree) (M, 1988)
Aim/Description: DDET is a simulation method implemented by forward branching event trees; the branch points are restricted to discrete times only. The knowledge of the physical system under study is contained in a numerical simulation, written by the analyst. The components of the system are modeled in terms of discrete states. All possible branches of the system evolution are tracked. The events (branches) can only happen at predefined discrete time intervals. It is assumed that if the appropriate time step is chosen, DDETs would investigate all possible scenarios. The systematic branching would easily lead to such a huge number of sequences that the management of the output Event Tree becomes awkward; measures have been taken to eliminate this explosion.
Remarks: DDET is an extension of the classical event trees, removing the binary logic restriction. The construction of the DDET can be computerized. In order to better manage the multiple scenarios generated by the DDET, methods such as DYLAM and DETAM were developed.
Safety assessment stage: 4. Domains: nuclear. References: [Amendola, 1988]; [Hu, 2005].
Decision Analysis: See DTA (Decision Tree Analysis) or Decision Analysis. See also Risk-Based Decision Analysis.
170. Decision Matrix (T, M, 1982)
Aim/Description: Used to form an initial allocation hypothesis. The "goodness" in response to some performance demand is scaled from unsatisfactory (U) to excellent for both the human (h) and automation (a). Demands that fall into a Uah region indicate the need for system redesign; those falling into Uh or Ua regions are biased toward static allocation design perspectives favouring the machine or human, respectively; and demands in the Pa, Ph, and Pha regions (where both human and machine can perform the function reasonably well) will offer the most design options, including the potential for dynamic function allocation.
Safety assessment stage: 6. Domains: nuclear. References: [FAA HFW]; [Price82]; [Sharit97]; Wikipedia.
171. Decision Tables (T, Dh, 1962)
Aim/Description: Is based on the logic that a set of premises logically entails a conclusion if every interpretation that satisfies the premises also satisfies the conclusion. Logical entailment is checked by comparing tables of all possible interpretations.
Remarks: Widely used. Can be seen as a rigorous generalisation of FMEA. Also referred to as Truth Tables and several other names.
Safety assessment stages: 3, 4. Domains: computer. References: [MorenoVerhelleVanthienen, 2000]; [EN 50128]; [Genesereth05]; [HEAT overview]; [MUFTIS3.2-I]; [Rakowsky]; [Sparkman92]; Wikipedia.
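The entailment check described above can be sketched directly: enumerate every interpretation of the variables and verify that each row satisfying all premises also satisfies the conclusion. This is an assumed minimal example, with modus ponens as the test case:

```python
from itertools import product

def entails(premises, conclusion, variables):
    """True iff every interpretation satisfying all premises satisfies the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))       # one row of the truth table
        if all(p(env) for p in premises) and not conclusion(env):
            return False                         # counterexample row found
    return True

# Premises: a -> b, and a.  Conclusion: b  (modus ponens).
premises = [lambda e: (not e["a"]) or e["b"],    # a -> b
            lambda e: e["a"]]
print(entails(premises, lambda e: e["b"], ["a", "b"]))  # True
```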
Decision-Action-Information Diagram: See DAD (Decision Action Diagram).
172. Defensive Programming (G, D, 1988 or older)
Aim/Description: Defensive programming is an approach to improve software and source code, in terms of: general quality (reducing the number of software bugs and problems); making the source code comprehensible (the source code should be readable and understandable, so it is approved in a code audit); and making the software behave in a predictable manner despite unexpected inputs or user actions. Aim is to produce programs which detect anomalous control flow, data flow or data values during their execution and react to these in a predetermined and acceptable manner.
Remarks: Useful where there is insufficient confidence in the environment or the software. Tools available. Software architecture phase.
Safety assessment stages: 3, 6. Domains: computer. References: [Bishop90]; [EN 50128]; Wikipedia.
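The defensive style described above can be sketched with one hypothetical function: inputs are validated against an explicit operating envelope, and anomalous values trigger a predetermined reaction (an exception) instead of propagating:

```python
def braking_distance(speed_mps: float, decel_mps2: float) -> float:
    """Distance (m) to stop from speed_mps at constant decel; rejects anomalous inputs."""
    # Defensive checks: detect anomalous data values and react in a predefined way
    if not isinstance(speed_mps, (int, float)) or not isinstance(decel_mps2, (int, float)):
        raise TypeError("numeric inputs required")
    if speed_mps < 0 or not 0 < decel_mps2 <= 20:
        raise ValueError("input outside the validated operating envelope")
    return speed_mps ** 2 / (2 * decel_mps2)

print(braking_distance(20.0, 5.0))  # 40.0
```

The envelope bounds (0 < decel <= 20 m/s²) are illustrative assumptions; the point is that every entry condition is checked explicitly rather than trusted.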
173. Delphi Knowledge Elicitation Method or Delphi Method (T, R, 1959)
Aim/Description: The Delphi method allows experts to deal systematically with a complex problem or task. The technique comprises a series of questionnaires, sent either by mail or via computerised systems to a pre-selected group of geographically dispersed experts. These questionnaires are designed to elicit and develop individual responses to the problems posed and to enable the experts to refine their views as the group's work progresses in accordance with the assigned task. The group interaction in Delphi is anonymous; comments, forecasts, and the like are not identified as to their originator but are presented to the group in such a way as to suppress any identification.
Remarks: Developed by Olaf Helmer, Norman Dalkey, and Nicholas Rescher to forecast the impact of technology on warfare. The name "Delphi" derives from the Oracle of Delphi (Greece). The main point behind the Delphi method is to overcome the disadvantages of conventional committee action. Anonymity, controlled feedback, and statistical response characterise Delphi.
Safety assessment stages: 3, 5. Domains: aviation, ATM, defence. References: [Delphi]; [Rakowsky]; Wikipedia.
174. Delta-X Monte Carlo Method (T, R, 2007)
Aim/Description: Delta-X deals with quantifying the error made when low-probability cut sets of large fault trees are truncated. Truncation errors are defined by the difference between the actual structure function of a fault tree and the equivalent binary function of the union of all the identified minimal cut sets. For the assessment, Monte Carlo simulation and Importance Sampling are used to evaluate the binary functions related to the truncation errors.
Remarks: See also HPLV, Monte Carlo Simulation, and Importance Sampling.
Safety assessment stage: 5. References: [ChoiCho, 2007].
Dependent failure analysis: See CCA (Common Cause Analysis).
175. DES (Discrete Event Simulation) (M, about 1982)
Aim/Description: An event calendar is constructed which indicates what events are scheduled to occur and when. The simulation executes the first event on the calendar, which may lead to a state change, and next updates the calendar. Can be seen as a special case of Monte Carlo Simulation.
Remarks: See also Computer Modelling and Simulation.
Safety assessment stages: 4, 5. Domains: many. References: [MUFTIS3.2-I]; Wikipedia.
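The event-calendar loop just described can be sketched in a few lines. This is an assumed toy model (three arrivals, each departing a fixed 2.0 time units later); the calendar is a time-ordered priority queue:

```python
import heapq

calendar = []                         # the event calendar (a priority queue)
state = {"in_system": 0}
log = []

def schedule(time, name, action):
    heapq.heappush(calendar, (time, name, action))

def arrival(t):
    state["in_system"] += 1
    schedule(t + 2.0, "departure", departure)   # each arrival departs 2.0 later

def departure(t):
    state["in_system"] -= 1

for k in range(3):                    # three arrivals at t = 0, 1, 2
    schedule(float(k), "arrival", arrival)

while calendar:                       # execute first event, update calendar, repeat
    t, name, action = heapq.heappop(calendar)
    action(t)                         # may change state and schedule new events
    log.append((t, name, state["in_system"]))

print(log[-1])  # (4.0, 'departure', 0)
```

A stochastic DES would draw the inter-arrival and service times from probability distributions, which is where the connection to Monte Carlo Simulation comes in.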
176. Design and Coding Standards (G, D)
Aim/Description: A Coding Standard aims to avoid potential problems with a programming language before the design is actually implemented in code. The standard can indicate what software constructs, library functions, and other language-specific information must or must not be used. As such, it produces, in practice, a "safe" subset of the programming language. Coding standards may be developed by the software designer, based on the software and hardware system to be used, or may be general standards for a "safer" version of a particular language.
Remarks: Software design and development phase. See also Code Inspection Checklists. See also Safe Language Subsets or Safe Subsets of Programming Languages.
Safety assessment stage: 6. Domains: computer. References: [EN 50128]; [Rakowsky]; Wikipedia.
177. Design Constraint Analysis (T, Ds, 1996 or older)
Aim/Description: Evaluates restrictions imposed by requirements, the real world and environmental limitations, as well as by the design solution. The design materials should describe all known or anticipated restrictions on a software component.
Remarks: A constraint is a design target that must be met for the design to be successful. (In contrast, an objective is a design target where more (or less) is better.)
Safety assessment stage: 6. Domains: avionics, computer. References: [FAA00]; [NASA-GB-1740.13-96]; [Rakowsky].
178. Design Data Analysis (T, Ds, 1996 or older)
Aim/Description: Design data analysis evaluates the description and intended use of each data item in the software design. Data analysis ensures that the structure and intended use of data will not violate a safety requirement. A technique used in performing design data analysis is to compare description-to-use of each data item in the design logic. Interrupts and their effect on data must receive special attention in safety-critical areas. Analysis should verify that interrupts and interrupt handling routines do not alter critical data items used by other routines. The integrity of each data item should be evaluated with respect to its environment and host. Shared memory and dynamic memory allocation can affect data integrity. Data items should also be protected from being overwritten by unauthorized applications.
Safety assessment stage: 6. Domains: avionics, computer. References: [FAA00]; [NASA-GB-1740.13-96]; [Rakowsky].
179. Design for Testability (Hardware) (G, D, 1969)
Aim/Description: DFT is a name for design techniques that add certain testability features to a microelectronic hardware product design. The premise of the added features is that they make it easier to develop and apply manufacturing tests for the designed hardware. The purpose of manufacturing tests is to validate that the product hardware contains no defects that could otherwise adversely affect the product's correct functioning. Aim is to enable all hardware components to be fully tested both on and off line.
Remarks: Used wherever fault tolerance and redundancy is applied. Tools available.
Safety assessment stage: 6. Domains: computer. References: [Bishop90]; Wikipedia.
180. Design for Testability (Software) (G, D, 1980 or older)
Aim/Description: Design for testability aims to include ways that internals of a component can be adequately tested to verify that they are working properly. An example is to limit the number and size of parameters passed to routines.
Remarks: Tools available.
Safety assessment stage: 6. Domains: computer. References: [Bishop90]; [NASA-GB-1740.13-96]; Wikipedia.
181. Design Interface Analysis (T, Ds, 1996 or older)
Aim/Description: Verifies the proper design of a software component's interfaces with other components of the system. This analysis will verify that the software component's interfaces, and the control and data linkages between interfacing components, have been properly designed.
Safety assessment stage: 3. Domains: avionics. References: [FAA00]; [NASA-GB-1740.13-96].
182. DESIREE (Distributed Environment for Simulation, Rapid Engineering and Experimentation) (T, H, 2001)
Aim/Description: DESIREE is a simulation platform for Air Traffic Control (ATC) ground systems. Its principal use is for rapid prototyping and human factors experimentation. It provides realistic Terminal and En-route ATC simulations simultaneously. The DESIREE user interface is programmable. It uses an internal messaging scheme, which allows data to be recorded for later analysis and also permits the use of scripted events. It emulates multiple en-route and terminal sectors with automatic handoff and transfer of control features.
Remarks: DESIREE was developed by, and is wholly owned and operated by, the FAA Research, Development, and Human Factors Laboratory (RDHFL).
Safety assessment stage: 7. Domains: ATC. References: [Zingale et al, 2008]; [FAA HFW].
183. DETAM (Dynamic Event Tree Analysis Method) (I, R, 1991)
Aim/Description: DETAM is a generalisation of DYLAM to allow scenario branching based on stochastic variations in operator state. It treats time-dependent evolution of plant hardware states, process variable values, and operator states over the course of a scenario. A dynamic event tree is an event tree in which branchings are allowed at different points in time. The approach is defined by: (a) branching set, (b) set of variables defining the system state, (c) branching rules, (d) sequence expansion rule and (e) quantification tools. The branching set refers to the set of variables that determine the space of possible branches at any node in the tree. Branching rules refer to rules used to determine when a branching should take place (a constant time step). The sequence expansion rules are used to limit the number of sequences.
Remarks: This approach can be used to represent operator behaviours, model the consequences of operator actions and also serve as a framework for the analyst to employ a causal model for errors of commission. Thus it allows the testing of emergency procedures and identifies where and how changes can be made to improve their effectiveness.
Safety assessment stages: 4, 5. Domains: nuclear, chemical. References: [MUFTIS3.2-I].
184. Development Standards (G, D, 1990 or older)
Aim/Description: To enhance software quality by using standard approaches to the software development process.
Remarks: Essential for safety-critical systems. Necessary for implementation in a quality assurance program. Tools available. See also Design and Coding Standards.
Safety assessment stage: 6. Domains: computer. References: [Bishop90].
185. DFD (Data Flow Diagrams) (T, D, 1989 or older)
Aim/Description: Data flow diagrams illustrate how data is processed by a system in terms of inputs and outputs. Different nodes and arrows exist: Processes, Datastores, Dataflows, External entities. DFDs can be drawn in several nested layers.
Remarks: The purpose and value of the data flow diagram is primarily data discovery, not process mapping. Several tools exist.
Safety assessment stage: 2. Domains: computer. References: [AIS-DFD]; [EN 50128]; [Rakowsky]; [Smartdraw]; Wikipedia.
186. DFM (Dynamic Flowgraph Methodology) (I, Ds, 1990)
Aim/Description: An integrated, methodical approach to modelling and analysing the behaviour of software-driven embedded systems for the purpose of dependability assessment and verification. DFM has two fundamental goals: 1) to identify how events can occur in a system; 2) to identify an appropriate testing strategy based on an analysis of system functional behaviour. To achieve these goals, DFM employs a modelling framework in which models expressing the logic of the system being analysed are developed in terms of causal relationships between physical variables and temporal characteristics of the execution of software modules.
Remarks: Combines the benefits of conventional SFTA and Petri nets.
Safety assessment stage: 4. Domains: aircraft, avionics, nuclear. References: [FAA00]; [NASA-GB-1740.13-96]; [Rakowsky].
187. DFMM or DFM (Double Failure Matrix Method) (T, R, 1981)
Aim/Description: Inductive approach that considers the effects of double failures. All possible failures are placed on the vertical and the horizontal axis of a matrix, and all combinations are considered and put into severity classes.
Remarks: Its use is feasible only for relatively noncomplex systems.
Safety assessment stages: 4, 5. Domains: nuclear. References: [FT handbook02]; [MUFTIS3.2-I].
188. DFS Safety Assessment Methodology (I, M, 2001)
Aim/Description: Methodology that consists of three major phases: FHA (Functional Hazard Assessment); PSSA (Preliminary System Safety Assessment); SSA (System Safety Assessment). During the FHA, a system's functional structure is analysed, and all relevant hazards are identified and assessed according to the severity and conditional probability of their effects. Safety Objectives are defined (quantitative values for the maximum acceptable frequency of a hazard), based on the maximum acceptable frequency of each of the identified effects and the conditional probabilities of the causal links between the hazards and the effects. The PSSA is carried out in order to create Safety Requirements suitable to reach the Safety Objectives. Safety Requirements are concrete specifications for the architecture, the implementation or the operation of the future system. They are derived from the Safety Criticality of a system component, which in turn can be derived from a conditional probabilities assessment. The SSA is performed after the system has been developed and before it goes operational, and aims at providing Safety Evidence, i.e. at ensuring that the system is free from unacceptable risks. Besides verifying that all Safety Requirements have been met, the Hazard Analysis performed during FHA and PSSA is refined to reflect new insights and perceptions. All hazards found are classified according to the frequency (or rate) of their occurrence, the actual frequency and the severity of their effects. For every non-acceptable risk found, suitable additional measures or safeguards have to be taken to mitigate that risk.
Remarks: Developed by DFS (Deutsche Flugsicherung). Main modelling technique used is Bayesian Belief Networks.
Safety assessment stages: 2, 3, 4, 5, 6, 7. Domains: ATM. References: [DFS Method Handbook, 2004].
Diary Method See Self-Reporting Logs
Digital Logic See Dynamic Logic
189. Digraphs. Format: M. Year: 1992.
Aim/Description: A Digraph or Directed Graph consists of vertices (or 'nodes') connected by directed arcs (arrows). It differs from an ordinary or undirected graph in that the latter is defined in terms of unordered pairs of vertices, which are usually called edges. Sometimes a digraph is called a simple digraph to distinguish it from a directed multigraph, in which the arcs constitute a multiset, rather than a set, of ordered pairs of vertices. Also, in a simple digraph loops are disallowed. (A loop is an arc that pairs a vertex to itself.) On the other hand, some texts allow loops, multiple arcs, or both in a digraph.
Safety assessment stages: 4. Domains: aircraft. Application: x. References: [FAA AC431]; [ΣΣ93, ΣΣ97]; Wikipedia.
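The simple-digraph conditions described above (arcs form a set of ordered pairs, with no loops and no multiple arcs) are easy to check mechanically. A minimal sketch in Python; the function name and example arcs are illustrative, not part of the database entry:

```python
def is_simple_digraph(arcs):
    """Check the two simple-digraph conditions described above:
    no loops (no arc pairing a vertex with itself) and no
    multiple arcs (the arcs form a set, not a multiset)."""
    no_loops = all(u != v for (u, v) in arcs)
    no_multi = len(arcs) == len(set(arcs))
    return no_loops and no_multi

arcs = [("A", "B"), ("B", "C"), ("A", "C")]   # directed arcs as ordered pairs
print(is_simple_digraph(arcs))                 # True
print(is_simple_digraph(arcs + [("C", "C")]))  # False: ("C", "C") is a loop
```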
Direct Numerical Estimation: See APJ (Absolute Probability Judgement).
190. Dispersion Modelling or Atmospheric Dispersion Modelling. Format: T. Purpose: R. Year: 1930.
Aim/Description: Dispersion modelling is the mathematical simulation of how air pollutants disperse in the ambient atmosphere. It is performed with computer programs that solve the mathematical equations and algorithms which simulate the pollutant dispersion. The dispersion models are used to estimate or to predict the downwind concentration of air pollutants or toxins emitted from sources such as industrial plants, vehicular traffic or accidental chemical releases.
Remarks: Quantitative tool for environmental and system safety engineering. Used in chemical process plants; can determine seriousness of a chemical release.
Safety assessment stages: 2. Domains: chemical. Application: x. References: [Dispersion]; Wikipedia.
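The entry names no particular model. As one common illustration, the classical Gaussian plume equation estimates the steady-state downwind concentration from a continuous point source; the function name and example values below are assumptions for the sketch, not from the database:

```python
import math

def gaussian_plume(Q, u, sigma_y, sigma_z, y, z, H):
    """Steady-state Gaussian plume concentration (g/m^3) at crosswind
    distance y (m) and height z (m), for emission rate Q (g/s), wind
    speed u (m/s), dispersion coefficients sigma_y and sigma_z (m),
    and effective stack height H (m). Ground reflection included."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: 100 g/s release, 5 m/s wind, sigma_y = 80 m, sigma_z = 40 m,
# evaluated on the plume centreline (y = 0) at ground level (z = 0)
# for a 50 m effective stack height:
c = gaussian_plume(Q=100, u=5, sigma_y=80, sigma_z=40, y=0, z=0, H=50)
print(f"{c:.2e} g/m^3")
```

The dispersion coefficients would in practice come from stability-class correlations (e.g. Pasquill-Gifford), which this sketch does not model.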
191. Diverse Programming. Format: T. Purpose: D. Year: 1977.
Aim/Description: Diverse Programming (also referred to as N-version programming) involves a variety of routines satisfying the same specification being written in isolation from one another. When a result is sought, voting takes place and the routine giving the most satisfactory answer wins. Aim is to detect and mask residual software design faults during execution of a program, in order to prevent safety-critical failures of the system and to continue operation for high reliability.
Remarks: Software architecture phase. Useful for safety-relevant fault-compensating systems.
Safety assessment stages: 3. Domains: computer, nuclear. Application: x. References: [Bishop90]; [EN 50128]; [Rakowsky]; [SSCS]; [Storey96]; Wikipedia.
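The voting scheme can be sketched as follows. The three square-root variants stand in for independently written routines, and the deliberate fault in the third is invented to show how a single faulty channel is out-voted:

```python
from collections import Counter

def variant_a(x):
    """Version 1: library exponentiation."""
    return round(x ** 0.5, 6)

def variant_b(x):
    """Version 2: independently written Newton iteration."""
    g = x / 2.0 or 1.0
    for _ in range(50):
        g = (g + x / g) / 2.0
    return round(g, 6)

def variant_c(x):
    """Version 3: contains a residual design fault for x == 0."""
    return round(x ** 0.5, 6) if x > 0 else -1.0

def vote(x, versions):
    """Run all versions and return the majority answer; a single
    faulty channel is out-voted, masking the residual fault."""
    results = Counter(v(x) for v in versions)
    answer, count = results.most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError("no majority - versions disagree")
    return answer

print(vote(9.0, [variant_a, variant_b, variant_c]))  # 3.0
print(vote(0.0, [variant_a, variant_b, variant_c]))  # 0.0: variant_c's fault is masked
```

Real N-version systems also need an agreed comparison tolerance for floating-point results; exact equality after rounding is used here only to keep the sketch short.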
192. DLA (Design Logic Analysis). Format: T. Purpose: Ds. Year: 1996 or older.
Aim/Description: DLA evaluates the equations, algorithms and control logic of the software design. Logic analysis examines the safety-critical areas of a software component. Each function performed by the software component is examined. If it responds to, or has the potential to violate, one of the safety requirements, it should be considered critical and undergo logic analysis.
Safety assessment stages: 3. Domains: avionics. Application: x. References: [FAA00]; [NASA-GB-1740.13-96]; [Rakowsky].

193. DMEA (Damage Mode and Effects Analysis). Format: T. Purpose: R. Year: 1977.
Aim/Description: Damage Modes and Effects Analysis evaluates the damage potential as a result of an accident caused by hazards and related failures. It provides early criteria for survivability and vulnerability assessments. The DMEA provides data related to damage caused by specified threat mechanisms and the effects on system operation and mission-essential functions.
Remarks: Risks can be minimised and their associated hazards eliminated by evaluating damage progression and severity. Related to and combines with FMEA.
Safety assessment stages: 3, 5. Domains: aircraft, defence. Application: x. References: [FAA AC431]; [FAA00]; [ΣΣ93, ΣΣ97].

194. DO-178B (RTCA/EUROCAE ED-12B DO-178B). Format: I. Purpose: Ds. Year: 1981.
Aim/Description: International standard on software considerations in airborne systems and equipment certification. Describes issues like systems aspects relating to software development, software lifecycle, software planning, etc., until aircraft and engine certification. The Design Assurance Level (DAL) is determined from the safety assessment process and hazard analysis by examining the effects of a failure condition in the system. The failure conditions are categorised by their effects on the aircraft, crew, and passengers. The categories are: Catastrophic; Hazardous; Major; Minor; No Effect. This software level determines the number of objectives to be satisfied (eventually with independence). The phrase "with independence" refers to a separation of responsibilities where the objectivity of the verification and validation processes is ensured by virtue of their "independence" from the software development team.
Remarks: Jointly developed by RTCA, Inc. and EUROCAE. First version was released in 1981. Last revision is dated 1992. Relates to civil aircraft and represents agreement between Europe and US.
Safety assessment stages: 2, 3, 5, 6. Domains: aircraft, avionics. Application: x. References: [DO178B]; [Storey96]; Wikipedia.
195. DODT (Design Option Decision Trees). Format: T. Purpose: Dh. Year: 1971.
Aim/Description: DODT are a means of formally reviewing design options for the human factors implications of design choices. The technique requires a comprehensive understanding of the human factors issues and costs associated with the class of system being developed, together with information on possible technological choices. This requires the equivalent of a "state-of-the-art" review of human factors issues for the particular class of system. The analysis produces a tree of design decisions which have significant human factors costs, and detailed descriptions of the human engineering issues associated with each decision.
Remarks: The original DODT approach was developed by Askren & Korkan (1971) to locate points in the design process for the input of human factors data.
Safety assessment stages: 4, 5. Domains: aircraft, defence. Application: x x. References: [HEAT overview].

196. Domino Theory. Format: T. Purpose: M. Year: 1932.
Aim/Description: According to this theory, there are five factors in the accident sequence: 1) the social environment and ancestry; 2) the fault of the person; 3) the unsafe act and/or mechanical or physical hazard; 4) the accident; 5) the injury. These five factors are arranged in a domino fashion such that the fall of the first domino results in the fall of the entire row. This illustrates that each factor leads to the next, with the end result being the injury. It also illustrates that if one of the factors (dominos) is removed, the sequence is unable to progress and the injury will not occur.
Remarks: Developed by H.W. Heinrich in 1932. Also referred to as Chain of Multiple Events.
Safety assessment stages: 6. Application: x x x. References: [Kjellen, 2000]; [Storbakken, 2002]; Wikipedia.

197. DORA (Dynamic Operational Risk Assessment). Format: T. Purpose: R. Year: 2009.
Aim/Description: DORA aims at operational risk analysis in oil/gas and chemical industries, guiding the process design and further optimisation. The probabilistic modelling part of DORA integrates stochastic modelling and process dynamics modelling to evaluate operational risk. The stochastic system-state trajectory is modelled according to the abnormal behaviour or failure of each component. For each of the possible system-state trajectories, a process dynamics evaluation is carried out to check whether process variables, e.g. level, flow rate, temperature, pressure, or chemical concentration, remain in their desirable regions. Component testing/inspection intervals and repair times are critical parameters to define the system-state configuration, and play an important role for evaluating the probability of operational failure.
Safety assessment stages: 4, 5. Domains: offshore, chemical. Application: x x. References: [Yanga&Mannan, 2010].

DREAM (Driver Reliability And Error Analysis Method): See CREAM (Cognitive Reliability and Error Analysis Method).
198. DREAMS (Dynamic Reliability technique for Error Assessment in Man-machine Systems). Format: I. Purpose: H. Year: 1995.
Aim/Description: DREAMS is a DYLAM-related technique for human reliability analysis, which identifies the origin of human errors in the dynamic interaction of the operator and the plant control system. The human behaviour depends on the working environment in which the operator acts ("external world"), and on the "internal world", i.e. his psychological conditions, which are related to stress, emotional factors, fixations, and lack of intrinsic knowledge. As a logical consequence of the dynamic interaction of the human with the plant under control, either the error tendency or the ability to recover from a critical situation may be enhanced. Output is an overall probability measure of plant safety related to human erroneous actions.
Remarks: Developed by Pietro Cacciabue and others.
Safety assessment stages: 3, 4, 5. Domains: nuclear. Application: x. References: [MUFTIS3.2-I].

199. DSA (Deactivation Safety Analysis). Format: T. Purpose: M. Year: 1997 or older.
Aim/Description: This analysis identifies safety and health (S&H) concerns associated with facilities that are decommissioned/closed. The S&H practices are applicable to all deactivation activities, particularly those involving systems or facilities that have used, been used for, or have contained hazardous or toxic materials. The deactivation process involves placing the system or facility into a safe and stable condition that can be economically monitored over an extended period of time while awaiting final disposition for reuse or disposal. The deactivation methodology emphasises specification of end-points for cleanup and stabilisation based upon whether the system or facility will be deactivated for reuse or in preparation for disposal.
Remarks: Deactivation may include removal of hazardous materials, chemical contamination, spill cleanup.
Safety assessment stages: 3, 7. Domains: chemical. Application: x. References: [FAA AC431]; [FAA00]; [ΣΣ93, ΣΣ97].
200. DTA (Decision Tree Analysis) or Decision Analysis. Format: T. Purpose: R. Year: 1997.
Aim/Description: A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. Decision Trees are tools for helping one to choose between several courses of action. They provide a structure within which one can lay out options and investigate the possible outcomes of choosing those options. They also help to form a balanced picture of the risks and rewards associated with each possible course of action.
Remarks: Looks similar to Fault Trees, including the quantification part of FTA. A decision tree can be represented more compactly as an influence diagram, focusing attention on the issues and relationships between events.
Safety assessment stages: 4, 5. Domains: nuclear. Application: x x. References: [MindTools-DTA]; [Straeter01]; [FAA HFW]; Wikipedia.
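The quantification of such a tree amounts to folding expected values bottom-up. A sketch under an invented example; the node layout and payoffs are illustrative, not from the source:

```python
def evaluate(node):
    """Fold a decision tree bottom-up: chance nodes return the
    probability-weighted expected value, decision nodes return the
    best (maximum-value) option, and leaves return their payoff."""
    kind = node["kind"]
    if kind == "leaf":
        return node["value"]
    if kind == "chance":
        return sum(p * evaluate(child) for p, child in node["branches"])
    if kind == "decision":
        return max(evaluate(child) for _, child in node["branches"])
    raise ValueError(f"unknown node kind: {kind}")

# Choose between a safe option (a sure 40) and a risky one
# (70% chance of 100, 30% chance of -50):
tree = {"kind": "decision", "branches": [
    ("safe",  {"kind": "leaf", "value": 40}),
    ("risky", {"kind": "chance", "branches": [
        (0.7, {"kind": "leaf", "value": 100}),
        (0.3, {"kind": "leaf", "value": -50}),
    ]}),
]}
# The risky branch's expected value (0.7*100 + 0.3*(-50) = 55) beats the safe 40.
print(evaluate(tree))
```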
201. DYLAM (Dynamic Logical Analytical Methodology). Format: I. Purpose: R. Year: 1985.
Aim/Description: Implementation of the concept of Dynamic Event Tree Analysis. A physical model for the system is constructed which predicts the response of system process variables to changes in component status. Next, the undesired system states are defined in terms of process variable levels. At the end of the first time interval all possible combinations of component states are identified and their likelihoods are calculated. These states are then used as boundary conditions for the next round of process variable updating. This is continued until an absorbing state is reached.
Safety assessment stages: 4, 5. Domains: nuclear, chemical. Application: x x x. References: [Cacciabue&Amendola&Cojazzi86]; [Cacciabue&Carpignano&Vivalda92]; [Cojazzi&Cacciabue92]; [Kirwan98-1]; [MUFTIS3.2-I].

202. Dynamic Event Tree Analysis. Format: I. Purpose: R. Year: 1985.
Aim/Description: Couples the probabilistic and physical behaviour of a dynamic process, for more detailed reliability analysis. Presents a tree-based representation of an accident scenario.
Remarks: See also DYLAM. See also DETAM.
Safety assessment stages: 4. Domains: nuclear, chemical. Application: x x x. References: [MUFTIS3.2-I].
203. Dynamic Logic. Format: T. Purpose: D. Year: 1973 or older.
Aim/Description: Dynamic logic uses a clock signal in its implementation of combinational logic circuits, i.e. logic circuits in which the output is a function of only the current input. The usual use of a clock signal is to synchronize transitions in sequential logic circuits; for most implementations of combinational logic, a clock signal is not even needed. Aim is to provide self-supervision by the use of a continuously changing signal.
Remarks: Desirable in redundant systems as a means of distinguishing faulty channels. Sometimes referred to as Digital Logic or Clocked Logic.
Safety assessment stages: 3. Domains: computer. Application: x. References: [Bishop90]; Wikipedia.

204. Dynamic Reconfiguration. Format: T. Purpose: D. Year: 1971 or older.
Aim/Description: Dynamic Reconfiguration (DR) is a software mechanism that allows resources to be attached (logically added) to or detached (logically removed) from the operating environment control without incurring any system downtime. Aim is to maintain system functionality despite an internal fault.
Remarks: Valuable where high fault tolerance and high availability are both required, but costly and difficult to validate. Software architecture phase.
Safety assessment stages: 6. Domains: computer. Application: x x. References: [Bishop90]; [EN 50128]; [Rakowsky]; [Vargas, 1999].

Dynamic Workload Scale: See Rating Scales.

205. EATMP SAM (European Air Traffic Management Programme Safety Assessment Methodology). Format: I. Purpose: R. Year: 2000.
Aim/Description: Safety Assessment Methodology supported by EATMP. Consists of three steps: FHA (Functional Hazard Assessment), PSSA (Preliminary System Safety Assessment) and SSA (System Safety Assessment), which run parallel to all development stages of a lifecycle of an Air Navigation System. Each step consists of several substeps.
Remarks: Developed by a SAM Task Force chaired by Eurocontrol. Is used widely throughout Europe by Air Navigation Service Providers. The steps FHA, PSSA and SSA are described separately in this database. Version 2.1 was released in 2008.
Safety assessment stages: 1, 2, 3, 4, 5, 6, 7. Domains: ATM. Application: x x x x. References: [EHQ-SAM]; [Review of SAM techniques, 2004].

206. ECCAIRS (European Co-Ordination Centre for Aviation Incident Reporting Systems). Format: D. Year: 2004.
Aim/Description: ECCAIRS is a European Union initiative to harmonise the reporting of aviation occurrences by Member States so that the Member States can pool and share data on a peer-to-peer basis. The proposed data sharing has not yet been implemented. Each Member State will enforce the procedures for collecting and processing the reports. The reports will be placed in an electronic database together with safety-relevant information derived from confidential reporting. An electronic network will allow any CAA or AAIB in the EU to have access to the integrated information. It will facilitate independent analyses, and plans include having tools for trend and other analyses built in.
Remarks: Developed by the JRC in Italy. In use in ICAO since 1 January 2004.
Safety assessment stages: 8. Domains: aviation. Application: x x x x. References: [GAIN ATM, 2003]; [JRC ECCAIRS].

207. ECOM (Extended Control Model). Format: T. Purpose: H. Year: 2003.
Aim/Description: ECOM acknowledges that the performance of the joint system can be described as involving different but simultaneous layers of control (or concurrent control loops). Some of these are of a closed-loop type or reactive, some are of an open-loop type or proactive, and some are mixed. Additionally, it is acknowledged that the overall level of control can vary, and this variability is an essential factor with regard to the efficiency and reliability of performance. Four layers are defined: Tracking, Regulating, Monitoring, Targeting. The ECOM describes the performance of the joint system by means of four interacting and simultaneous control loops, one for each layer.
Remarks: Link with COCOM, which can be seen as an elaboration of the basic cyclical model with emphasis on the different control modes, i.e. how control can be lost and regained, and how the control modes affect the quality of performance. ECOM adds a modelling layer. The degree of control can still be considered relative to the levels of the ECOM.
Safety assessment stages: 4. Domains: ATM. Application: x. References: [ECOM web]; [HollnagelNaboLau, 2003]; [Engstrom, 2006].
208. ED-78A (RTCA/EUROCAE ED-78A DO-264). Format: I. Purpose: Dh. Year: 2000.
Aim/Description: Guidelines for approval of the provision and use of Air Traffic Services supported by data communications. It provides a Safety assessment methodology with steps OSED (Operational Service and Environment Definition), OHA (Operational Hazard Analysis), and ASOR (Allocation of Safety Objectives and Requirements), together also named Operational Safety Assessment (OSA).
Remarks: EUROCAE ED-78A is equivalent to RTCA DO-264; the guidance was developed by a joint group: EUROCAE WG53 / RTCA SC189. OSA is a requirement development tool based on the assessment of hazard severity. The OSA is normally completed during the Mission Analysis (MA) phase. Development of the OSA should begin as soon as possible in the MA process.
Safety assessment stages: 2, 3, 5, 6. Domains: aviation. Application: x x. References: [FAA00] chap. 4; [FAA tools].

209. EDAM (Effects-Based Decision Analysis Methodology). Format: I. Purpose: M. Year: 2005.
Aim/Description: EDAM is a hybrid approach that aims at the requirements analysis and design of revolutionary command and control systems and domains. It uses knowledge elicitation and representation techniques from several current cognitive engineering methodologies, such as GDTA and CTA. The techniques were chosen to allow for decision analysis in the absence of an existing similar system or domain. EDAM focuses on the likely system or domain constraints and the decisions required within that structure, independent of technology, existing or planned. It is intended to be used throughout the design and development of a prototype. Information gathered with EDAM can also be used throughout a project to evaluate human performance in the proposed system.
Safety assessment stages: 2, 6. Domains: defence. Application: x x. References: [Ockerman et al, 2005].

210. Egoless programming. Format: T. Purpose: D. Year: 1971.
Aim/Description: A way of software programming that does not create an environment in which programmers consider the code as their own property, but are willing to share.
Remarks: Developed by Jerry Weinberg in his book The Psychology of Computer Programming.
Safety assessment stages: 6. Domains: computer. Application: x. References: NLR expert; [Weinberg, 1971]; Wikipedia.

211. Electromagnetic Protection. Format: G. Purpose: D. Year: 1990 or older.
Aim/Description: Aim is to minimise the effects of electromagnetic interference (EMI) on the system by using defensive methods and strategies.
Remarks: Tools available.
Safety assessment stages: 6. Domains: electricity. Application: x. References: [Bishop90].

212. EMC (Electromagnetic Compatibility Analysis and Testing). Format: T. Purpose: R. Year: 1989.
Aim/Description: The analysis is conducted to minimise/prevent accidental or unauthorised operation of safety-critical functions within a system. The output of radio frequency (RF) emitters can be coupled into and interfere with electrical systems which process or monitor critical safety functions. Electrical disturbances may also be generated within an electrical system from transients accompanying the sudden operation of electrical devices. Design precautions must be taken to prevent electromagnetic interference (EMI) and electrical disturbances. Human exposure to electromagnetic radiation is also a concern.
Remarks: Adverse electromagnetic environmental effects can occur when there is any electromagnetic field. Electrical disturbances may also be generated within an electrical system from transients accompanying the sudden operations of solenoids, switches, choppers, and other electrical devices, Radar, Radio Transmission, transformers.
Safety assessment stages: 3, 6. Domains: avionics, defence. Application: x. References: [FAA AC431]; [FAA00]; [ΣΣ93, ΣΣ97]; Wikipedia.

213. Emergency Exercises. Format: G. Purpose: R.
Aim/Description: Practising the events in an emergency, e.g. leaving the building in case of a fire alarm.
Safety assessment stages: 7, 8. Domains: nuclear. Application: x x. References: [NEA01].
214. EMS (Event Measurement System). Format: D. Year: 1998.
Aim/Description: EMS is designed to ease the large-scale implementation of flight-data analysis in support of Flight Operational Quality Assurance (FOQA) Programs and Advanced Qualification Programs (AQP). The EMS is a configurable and adaptable Windows 2000-based flight data analysis system. It is capable of managing large bodies of flight data, and can expand with fleet size and changing analysis needs. As the operations grow, EMS has the capacity to continue to extract maximum analysed value from the flight data.
Remarks: Developed by Austin Digital.
Safety assessment stages: 7. Domains: aviation. Application: x x. References: [GAIN AFSA, 2003].

215. Energy Analysis. Format: T. Purpose: R. Year: 1972 or older.
Aim/Description: The energy analysis is a means of conducting a system safety evaluation of a system that looks at the "energetics" of the system.
Remarks: The technique can be applied to all systems which contain, make use of, or store energy in any form or forms (e.g. potential or kinetic mechanical energy, electrical energy, ionising or non-ionising radiation, chemical, and thermal). This technique is usually conducted in conjunction with Barrier Analysis.
Safety assessment stages: 3. Domains: chemical, electricity. Application: x. References: [FAA00]; [ΣΣ93, ΣΣ97].

216. Energy Trace Checklist. Format: T. Purpose: R. Year: 1972 or older.
Aim/Description: The analysis aids in the identification of hazards associated with energetics within a system, by use of a specifically designed checklist. The use of a checklist can provide a systematic way of collecting information on many similar exposures.
Remarks: Similar to ETBA (Energy Trace and Barrier Analysis), to Energy Analysis and to Barrier Analysis. The analysis could be used when conducting evaluations and surveys for hazard identification associated with all forms of energy.
Safety assessment stages: 3. Domains: chemical, electricity. Application: x. References: [FAA00]; [ΣΣ93, ΣΣ97].

217. Environment Analysis. Format: G. Year: 1997.
Aim/Description: Describes the environment in which the activities or basic tasks are performed, with the purpose to identify environment-specific factors impacting the task(s).
Safety assessment stages: 2. Domains: chemical. Application: x x. References: [FAA HFW]; [Wickens97].

218. EOCA (Error of Commission Analysis). Format: T. Purpose: H. Year: 1995.
Aim/Description: HAZOP-based approach whereby experienced operators consider procedures in detail, and what actions could occur other than those desired. Particular task formats, error mode keywords, and PSF (Performance Shaping Factors) are utilised to structure the assessment process and to prompt the assessors. Identified significant errors are then utilised in the PSA fault and/or event trees.
Safety assessment stages: 3. Domains: nuclear. Application: x x x x x. References: [Kirwan94]; [Kirwan98-1].
219. E-OCVM (European Operational Concept Validation Methodology). Format: I. Purpose: M. Year: 2007.
Aim/Description: E-OCVM includes three aspects of validation that, when viewed together, help provide structure to an iterative and incremental approach to concept development and concept validation: (1) The Concept Lifecycle Model facilitates the setting of appropriate validation objectives and the choice of evaluation techniques, shows how concept validation interfaces with product development, and indicates where requirements should be determined; (2) The Structured Planning Framework facilitates programme planning and transparency of the whole process; (3) The Case-Based Approach integrates many evaluation exercise results into key 'cases' (safety case, business case, environment case, human factors case) that address stakeholder issues about air traffic management (ATM) performance and behaviours. These three aspects fit together to form a process. This process is focused on developing a concept towards an application while demonstrating to key stakeholders how to achieve an end system that is fit for purpose.
Remarks: Developed by Eurocontrol, building on several European Validation project results.
Safety assessment stages: 1-8. Domains: ATM. Application: x x x x x. References: [E-OCVM].

220. EPIC (Executive Process Interactive Control). Format: T. Purpose: H. Year: 1997.
Aim/Description: EPIC is a cognitive architecture model of human information processing that accounts for the detailed timing of human perceptual, cognitive, and motor activity. EPIC provides a framework for constructing and synthesising models of human-system interaction that are accurate and detailed enough to be useful for practical design purposes. Human performance in a task is simulated by programming the cognitive processor with production rules organised as methods for accomplishing task goals. The EPIC model then is run in interaction with a simulation of the external system and performs the same task as the human operator would. The model generates events (e.g. eye movements, key strokes, vocal utterances) whose timing is accurately predictive of human performance.
Remarks: Developed by David E. Kieras and David E. Meyer at the University of Michigan.
Safety assessment stages: 2. Domains: ergonomics. Application: x. References: [Kieras&Meyer, 1997]; [FAA HFW].

221. EPOQUES. Format: I. Purpose: M. Year: 2002.
Aim/Description: EPOQUES is a collection of methods and tools to treat safety occurrences at air traffic service providers. Participatory design and iterative prototyping are being used to define a set of investigative tools. Two complementary methods are being conducted in parallel. One is to study the existing work practices so that the initial prototype is grounded in current everyday use. The second is to involve participants and designers to work together to iterate, refine, and extend the design, using rapid prototyping and collective brainstorming.
Remarks: Developed at CENA, France.
Safety assessment stages: 6. Domains: ATM. Application: x x x. References: [GAIN ATM, 2003]; [Gaspard02].
222. Equivalence Partitioning and Input Partition Testing. Format: T. Purpose: Ds. Year: 1995 or older.
Aim/Description: Software Testing technique. Aim is to test the software adequately using a minimum of test data. The test data is obtained by selecting the partitions of the input domain required to exercise the software. This testing strategy is based on the equivalence relation of the inputs, which determines a partition of the input domain. Test cases are selected with the aim of covering all the partitions previously identified. At least one test case is taken from each equivalence class.
Remarks: See also Software testing. See also Partitioning.
Safety assessment stages: 7. Domains: computer. Application: x. References: [EN 50128]; [ISO/IEC 15443]; [Rakowsky]; Wikipedia.
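As a sketch of the strategy, suppose a hypothetical routine validates a percentage in 0-100 (the function, bounds, and class names below are invented for illustration): the input domain partitions into out-of-range and in-range classes, and one representative per class exercises the software:

```python
def classify_percentage(x):
    """Hypothetical routine under test: validate a percentage score."""
    if not 0 <= x <= 100:
        raise ValueError("out of range")
    return "pass" if x >= 50 else "fail"

# Partition the input domain into equivalence classes and take at
# least one representative test case from each class:
partitions = {
    "below range": -5,    # expect ValueError
    "valid fail":  25,    # expect "fail"
    "valid pass":  75,    # expect "pass"
    "above range": 101,   # expect ValueError
}
for name, value in partitions.items():
    try:
        print(name, "->", classify_percentage(value))
    except ValueError as e:
        print(name, "->", e)
```

Four test cases thus cover the whole input domain, rather than testing many interchangeable values inside the same class.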
223. ER (External Risk). Format: M. Purpose: R. Year: 1995.
Aim/Description: Determination of the third-party risk, in terms of individual risk (IR) and societal risk (SR), induced by air traffic on individuals in the surroundings of airports. Quantification of the likelihood to die due to an aircraft accident. Comprises local accident ratio determination, accident location distribution and accident consequences, the latter taking into account consecutive effects such as interaction with dangerous plants and alike. Both traffic level and infrastructure layout form individual scenarios for which IR and SR figures can be computed and graphically displayed. Intended to support procedure design, to increase and direct the stakeholders' situational awareness to bottlenecks, and to judge new concepts.
Remarks: Tool used mainly by airport operators.
Safety assessment stages: 5. Domains: airport. Application: x. References: [GfL web]; [GfL 2001]; [TUD05].

224. ERA (Environmental Risk Analysis). Format: T. Purpose: R. Year: 1975.
Aim/Description: The analysis is conducted to assess the risk of environmental non-compliance that may result in hazards and associated risks.
Remarks: The analysis is conducted for any system that uses, produces or transports toxic hazardous materials that could cause harm to people and the environment.
Safety assessment stages: 3, 5. Domains: chemical, rail, road. Application: x. References: [FAA00]; [ΣΣ93, ΣΣ97]; [Lerche&Paleologos, 2001].

225. Ergonomics Checklists. Format: G. Year: 1992 or older.
Aim/Description: These are checklists which an analyst can use to ascertain whether particular ergonomics criteria are being met within a task, or whether the facilities that are provided for that task are adequate.
Remarks: See also Checklist Analysis.
Safety assessment stages: 7. Domains: nuclear, ergonomics. Application: x. References: [KirwanAinsworth92].
226. Error Detection and Correction. Format: M. Purpose: R. Year: 1975 or older.
Aim/Description: Aim is to detect and correct errors in sensitive information. Describes how to transmit bits over a possibly noisy communication channel. This channel may introduce a variety of errors, such as inverted bits and lost bits.
Remarks: May be useful in systems where availability and response times are critical factors. Software architecture phase.
Safety assessment stages: 6. Domains: computer. Application: x. References: [Bishop90]; [EN 50128]; [Rakowsky]; Wikipedia.
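A minimal sketch of the idea, using a 3-repetition code that corrects any single inverted bit per transmitted triple (real systems would use more efficient codes such as Hamming or Reed-Solomon; the function names are illustrative):

```python
def encode(bits):
    """Repeat each bit three times before transmission."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority-vote each triple: a single inverted bit in any
    triple is detected and corrected."""
    return [int(sum(received[i:i + 3]) >= 2)
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1          # the noisy channel inverts one bit
print(decode(sent))   # recovers [1, 0, 1, 1]
```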
227. Error Guessing. Format: T. Purpose: M. Year: 1995 or older.
Aim/Description: Error Guessing is the process of using intuition and past experience to fill in gaps in the test data set. There are no rules to follow. The tester must review the test records with an eye towards recognising missing conditions. Two familiar examples of error-prone situations are division by zero and calculating the square root of a negative number. Either of these will result in system errors and garbled output.
Safety assessment stages: 3. Domains: computer. Application: x. References: [EN 50128]; [Rakowsky]; Wikipedia.

228. Error Message Guidelines. Format: G. Purpose: D. Year: 1992.
Aim/Description: The rules are: Be as specific and precise as possible; Be positive: avoid condemnation; Be constructive: tell the user what needs to be done; Be consistent in grammar, terminology, and abbreviations; Use user-centred phrasing; Use consistent display format; Test the usability of error messages; Try to reduce or eliminate the need for error messages.
Safety assessment stages: 6. Domains: computer. Application: x x. References: [EMG]; [FAA HFW]; [Liu97]; [Schneiderman92].
229. Error Seeding. Format: T. Purpose: Ds. Year: 1989 or older.
Aim/Description: Technique that can be used to evaluate the ability of language processors to detect and report errors in source programs. The essence of the technique is to have a program which accepts correct programs ("target programs") as input, and subjects them to random variations, hence producing as output corrupted programs which can be used to assess the ability of a processor to detect errors which have been 'seeded' in them.
Remarks: See also Software testing.
Safety assessment stages: 7. Domains: computer. Application: x. References: [EN 50128]; [Meek&Siu89]; [Rakowsky].
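The seeding step can be sketched as a random variation of a correct target program; the mutation rule here (deleting one character) is invented for illustration, and the fixed seed only makes the corrupted corpus reproducible:

```python
import random

def seed_error(target_source, rng):
    """Corrupt a correct target program by deleting one random
    non-space character; a language processor under evaluation
    should detect and report the resulting error."""
    positions = [i for i, c in enumerate(target_source) if not c.isspace()]
    i = rng.choice(positions)
    return target_source[:i] + target_source[i + 1:]

rng = random.Random(0)  # fixed seed for a reproducible corrupted corpus
correct = "def area(r):\n    return 3.14159 * r * r\n"
corrupted = seed_error(correct, rng)
print(corrupted != correct)  # True: exactly one character was removed
```

Each corrupted program is then fed to the processor under test; a processor that compiles it without complaint has missed a seeded error.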
230. ESAT I H 1992 Artificial intelligence concepts are used to describe the Method established in the 2 5 aviation x • [Straeter01]
(Expertensystem zur human tasks. Quantification of PSF (Performance aviation field (e.g. design of
Aufgaben-Taxonomie Shaping Factors) for any task. Determination of a cockpit displays).
(Expert-System for Task dependability class (from 1-10) by ratings of default
Taxonomy)) PSFs. The functional context between HEP and
dependability class is partly given by expert
judgement (based on generic cognition of human
performance) and partly by measurement of
performance.
231. ESCAPE T T 1996 ESCAPE is an ATC real-time simulator platform. It ESCAPE has been developed by 7 ATC x x • [GAIN ATM, 2003]
(Eurocontrol Simulation uses the Raptor 2500 FPS display technology, using EEC and was launched in 1996.
Capability Platform for LCD flat panel displays, each with a 170-degree
Experimentation) viewing angle. ESCAPE has the capability to simulate
a host of different en route scenarios.
232. ESD T R 1999 An event-sequence diagram is a schematic Mathematically formulated by 4 aviation x • [MUFTIS3.2-I]
(Event Sequence representation of the sequence of events leading up Swaminathan and Smidts in space • [Swaminathan&Smidt
Diagrams) until failure. In other words, it is a flow chart with a 1999. nuclear s, 1999]
number of paths showing the ‘big picture’ of what
happened - a holistic view. It is a variation of Cause
Consequence Diagrams and a generalisation of ETA: it is not
restricted to the representation of event sequences, and
repairable systems can also be modelled.
233. ESSAI I T 2000 The ESSAI project developed a training approach for 6 aviation x x x • [ESSAI web]
(Enhanced Safety - problems that occur in cockpits when pilots are
through Situation 2002 confronted with extreme situations (a Crisis) for which
Awareness Integration they do not have appropriate procedures. These
in training) extreme situations may be the result of a rare chain of
events, but may also occur because of lack of Situation
Awareness of the crew. The project plans to develop
training tools and techniques and their implementation
in training programmes.
234. ETA T R 1980 An Event Tree models the sequence of events that Former name is CTM 4 5 aircraft x x x • [Leveson95]
(Event Tree Analysis) results from a single initiating event and thereby (Consequence Tree Method). ATM • [MUFTIS3.2-I]
describes how serious consequences can occur. Can be Useful in conjunction with fault nuclear • [Rakowsky] claims
used for developing counter measures to reduce the tree analysis as an alternative to offshore this one does handle
consequences. The tool can be used to organise, cause-consequence diagrams. software
characterise, and quantify potential accidents in a Mainly for technical systems; • [ΣΣ93, ΣΣ97]
methodical manner. The analysis is accomplished by human error may also be • [Baybutt89]
selecting initiating events, both desired and undesired, modelled. Tools available, e.g. • [DNV-HSE01]
and developing their consequences through consideration Fault Tree+, RISKMAN, see
• [Rademakers&al92]
of system/ component failure-and-success alternatives. [GAIN AFSA, 2003].
• [Rakowsky]
A variation developed for error of
commission analysis is GASET • [Reason90]
(Generic Accident Sequence • [Siu94]
Event Tree). • [Smith9697]
• [Storey96]
• [Terpstra84]
• [Villemeur91-1]
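The quantification step of an event tree can be sketched as follows: each path from the initiating event through the success/failure branches of the barriers gets the product of its branch probabilities. The initiating-event frequency and barrier failure probabilities below are hypothetical values chosen for illustration.

```python
from itertools import product

def event_tree(initiator_freq, barriers):
    """Enumerate every branch combination of a simple event tree.
    `barriers` maps each barrier name to its failure probability;
    the initiating-event frequency is multiplied along each path."""
    names = list(barriers)
    sequences = {}
    for fails_pattern in product((False, True), repeat=len(names)):
        freq = initiator_freq
        labels = []
        for name, fails in zip(names, fails_pattern):
            p_fail = barriers[name]
            freq *= p_fail if fails else (1.0 - p_fail)
            labels.append(f"{name}:{'fail' if fails else 'ok'}")
        sequences[" / ".join(labels)] = freq
    return sequences

# Hypothetical initiating event (1e-3 per hour) with two barriers
seqs = event_tree(1e-3, {"detection": 0.1, "suppression": 0.2})
worst = seqs["detection:fail / suppression:fail"]
```

The sequence frequencies sum to the initiating-event frequency, since the branches partition the outcome space.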
235. ETBA T R 1973 The analysis can produce a consistent, detailed ETBA is similar to Energy 3 5 6 chemical x • [FAA AC431]
(Energy Trace and understanding of the sources and nature of energy Analysis and to Barrier Analysis. electricity • [FAA00]
Barrier Analysis) flows that can or did produce accidental harm. The The technique can be applied to • [ΣΣ93, ΣΣ97]
ETBA method is a system safety-based analysis all systems, which contain, make
process developed to aid in the methodical discovery use of, or which store energy in
and definition of hazards and risks of loss in systems any form or forms, (e.g. potential,
by producing a consistent, detailed understanding of kinetic mechanical energy,
the sources and nature of energy flows that can or did electrical energy, ionising or non-
produce accidental harm. Outputs support estimation ionising radiation, chemical, and
of risk levels, and the identification and assessment of thermal.) Developed as part of
specific options for eliminating or controlling risk. MORT.
These analyses are routinely started in conjunction
with the System Hazard Analysis and may be initiated
when critical changes or modifications are made.
236. ETTO G H 2002 ETTO is a principle stating that both normal 4 x • [Hollnagel-ETTO]
(Efficiency- performance and failures are emergent phenomena, • [Hollnagel, 2004]
Thoroughness Trade- hence that neither can be attributed to or explained by
Off) specific components or parts. For the humans in the
system this means in particular that the reason why
they sometimes fail, in the sense that the outcome of
their actions differs from what was intended or
required, is due to the variability of the context and
conditions rather than to the failures of actions. The
adaptability and flexibility of human work is the
reason for its efficiency. At the same time it is also the
reason for the failures that occur, although it is never
the cause of the failures. Herein lies the paradox of
optimal performance at the individual level. If
anything is unreasonable, it is the requirement to be
both efficient and thorough at the same time – or
rather to be thorough when with hindsight it was
wrong to be efficient.
237. Event and Causal Factor T R 1995 Event and Causal Factor Charting utilises a block The technique is effective for 4 nuclear x • [FAA00]
Charting or diagram to depict cause and effect. It provides a means solving complicated problems. chemical • [ΣΣ93, ΣΣ97]
older to organise the data, provides a summary of what is
known and unknown about the event, and results in a
detailed sequence of facts and activities. Elements in
the charts are: Condition (Ovals), Event (Blocks),
Accident, Primary event line, Primary events and
conditions, Secondary event lines, Secondary events
and conditions, Causal factors, Items of note.
Execution Flow Check See Memorizing Executed Cases.
Expert Evaluation See Heuristic Evaluation
238. Expert Judgement G Generic term for using human expert judgement for Expert judgement is often used, 5 many x x x x • [Ayyub01]
providing qualitative or quantitative information in especially where statistical data is • [Humphreys88]
safety assessments. Several expert judgement scarce, but needs to be treated • [Kirwan94]
techniques exist, such as APJ or PC. with special care. There are well- • [Kirwan&Kennedy&
proven protocols for maximising Hamblen]
and testing its validity. • [Nijstad01]
• [Williams85]
[Basra&Kirwan98]
[Foot94]
[MUFTIS3.2-I]
[FAA HFW]
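As a minimal illustration of quantitative pooling, the sketch below combines individual expert probability estimates with an unweighted geometric mean, one of the common aggregation rules used in techniques such as APJ. The three estimates are hypothetical.

```python
from math import prod

def aggregate_geometric(estimates):
    """Pool expert probability estimates with an unweighted geometric
    mean; weighted variants raise each estimate to its expert's
    weight before taking the product."""
    return prod(estimates) ** (1.0 / len(estimates))

# Three hypothetical expert estimates of the same error probability
pooled = aggregate_geometric([1e-3, 5e-3, 2e-3])
```

The geometric mean is preferred over the arithmetic mean for probabilities spanning orders of magnitude, since it averages in the log domain.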
239. Explosives Safety T R 1970 This method enables the safety professional to identify See also SAFER. See also 3 5 chemical x • [FAA AC431]
Analysis and evaluate explosive hazards associated with Process Hazard Analysis. • [FAA00]
facilities or operations. The purpose is to provide an • [ΣΣ93, ΣΣ97]
assessment of the hazards and potential explosive • [DDESB, 2000]
effects of the storage, handling or operations with
various types of explosives from gram to ton
quantities and to determine the damage potential.
Explosives Safety Analysis can be used to identify
hazards and risks related to any explosive potential,
e.g. fuel storage, compressed gases, transformers,
batteries.
240. Ex-Post Facto Analysis G 1980 This analysis is employed to study whether a causal relationship 3 4 many x x • [Kjellen, 2000]
or may exist. Statistics on accidents are compared with
older similar statistics for accident-free situations. The aim
is to identify factors which are more common in the
accident material than would be expected by
pure chance. In the next step, physical, physiological
and psychological theories are brought in to explain
the actual causal relationships.
241. External Events T R 1992 The purpose of External Events Analysis is to focus 3 5 nuclear x • [FAA00]
Analysis or attention on those adverse events that are outside of chemical • [Region I LEPC]
older the system under study. It is to further hypothesise the • [ΣΣ93, ΣΣ97]
range of events that may have an effect on the system • [DOE 1023-95]
being examined. The occurrence of an external event • [NEA98]
such as an earthquake is evaluated and its effects on
structures, systems, and components in a facility are
analysed.
242. FACE I H 1999 Framework for analysing errors of commission. The 3 4 5 nuclear x • [HRA Washington]
(Framework for framework consists of five generic phases: I) Target • [Straeter01]
Analysing Commission selection, II) identification of potential commission
Errors) opportunities, III) screening commission
opportunities, IV) modelling important commission
opportunities, V) probability assessment.
243. FACET I R 2000 FACET is an air traffic management (ATM) Developed at NASA Ames 2 5 ATM x x x x x • [GAIN ATM, 2003]
(Future ATM Concepts or modelling and simulation capability. Its purpose is to Research Center. • [Bilimoria00]
Evaluation Tool) older provide an environment for the development and
evaluation of advanced ATM concepts. FACET can
model system-wide airspace operations over the entire
US. It uses flight plan data to determine aircraft
routing. As options, the routes can be automatically
modified to direct routes or wind-optimal routes.
FACET then uses aircraft performance characteristics,
winds aloft, and kinematic equations to compute flight
trajectories. It then computes sector loading and
airspace complexity. As an option, FACET can
compute and simulate advanced concepts such as:
aircraft self-separation and National Playbook
rerouting. FACET also models the en-route impact of
ground delay programs and miles-in-trail restrictions.
244. Factor Analysis T R 1900 The purpose of factor analysis is to discover simple Factor analysis was invented 100 5 medical x • [Darlington]
patterns in the pattern of relationships among the years ago by psychologist Charles • [Rakowsky]
variables. In particular, it seeks to discover if the Spearman, who hypothesized that • Wikipedia
observed variables can be explained largely or entirely the enormous variety of tests of
in terms of a much smaller number of variables called mental ability--measures of
factors. mathematical skill, vocabulary,
other verbal skills, artistic skills,
logical reasoning ability, etc.--
could all be explained by one
underlying "factor" of general
intelligence.
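A minimal principal-factor sketch in pure Python: simulated test scores are driven by one latent factor (Spearman's "g" in the example above), and power iteration on the sample correlation matrix recovers a single dominant factor with uniformly positive loadings. The data-generation parameters are illustrative.

```python
import random

def correlation_matrix(data):
    """Pearson correlation matrix; rows = observations, cols = variables."""
    n, k = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(k)]
    sds = [(sum((row[j] - means[j]) ** 2 for row in data) / n) ** 0.5
           for j in range(k)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data)
             / (n * sds[i] * sds[j]) for j in range(k)] for i in range(k)]

def first_factor_loadings(corr, iters=200):
    """Loadings on the dominant factor, by power iteration on the
    correlation matrix (a principal-factor sketch)."""
    k = len(corr)
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(corr[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    eigval = sum(v[i] * sum(corr[i][j] * v[j] for j in range(k))
                 for i in range(k))
    return v, eigval

# Simulated test scores driven by one latent factor plus noise
rng = random.Random(0)
data = [[g + rng.gauss(0, 0.5) for _ in range(4)]
        for g in (rng.gauss(0, 1) for _ in range(500))]
corr = correlation_matrix(data)
loadings, eigval = first_factor_loadings(corr)
```

With noise standard deviation 0.5, the pairwise correlations are near 0.8, so the dominant eigenvalue accounts for most of the four variables' shared variance.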
245. Fail safety T D 1987 A fail-safe design is a design that enables a system to Useful for systems where there 6 computer x x • [Bishop90]
or continue operation, possibly at a reduced level (also are safe plant states. Also referred • Wikipedia
older known as graceful degradation), rather than failing to as fault tolerance. See also
completely, when some part of the system fails. That Memorizing Executed Cases. See
is, the system as a whole is not stopped due to also Vital Coded Processor.
problems either in the hardware or the software. Aim
is to design a system such that failures will drive the
system to a safe state.
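A minimal sketch of the fail-safe idea, using a hypothetical heater controller: any fault while computing the command drives the actuator to its safe (off) state, rather than leaving it at the last commanded output.

```python
class HeaterController:
    """Illustrative fail-safe controller (hypothetical example)."""
    SAFE_OUTPUT = 0.0

    def __init__(self, sensor):
        self.sensor = sensor          # callable returning a temperature
        self.output = self.SAFE_OUTPUT

    def step(self, setpoint):
        try:
            error = setpoint - self.sensor()
            # proportional command, clipped to the actuator range [0, 1]
            self.output = max(0.0, min(1.0, 0.1 * error))
        except Exception:
            # Fail-safe: on any fault, command the safe state
            self.output = self.SAFE_OUTPUT
        return self.output

def dead_sensor():
    raise RuntimeError("sensor failure")

good = HeaterController(lambda: 20.0)
bad = HeaterController(dead_sensor)
```

The design choice is that the safe state is known a priori (heater off); fail-safety in this sense only applies to systems where such a safe plant state exists, as the Remarks column notes.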
246. Failure Assertion T Ds 1995 Detects residual software design faults during Software architecture phase. 7 computer x • [EN 50128]
Programming or execution of a program, in order to prevent safety See also Software Testing. • [Heisel, 2007]
older critical failures of the system and to continue • [Rakowsky]
operation for high reliability. Follows the idea of
checking a pre-condition (before a sequence of
statements is executed, the initial conditions are
checked for validity) and a post-condition (results are
checked after the execution of a sequence of
statements). If either the pre-condition or the post-
condition is not fulfilled, the processing reports the
error.
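The pre-/post-condition idea can be sketched as a Python wrapper. The square-root example and the AssertionFailure type are illustrative, not part of the method's literature.

```python
class AssertionFailure(Exception):
    """Raised when a pre- or post-condition is violated at run time."""

def assertions(pre, post):
    """Failure-assertion wrapper: check the pre-condition before the
    statements execute, and the post-condition on their result."""
    def wrap(fn):
        def checked(*args):
            if not pre(*args):
                raise AssertionFailure(f"pre-condition failed: {args}")
            result = fn(*args)
            if not post(result, *args):
                raise AssertionFailure(f"post-condition failed: {result}")
            return result
        return checked
    return wrap

@assertions(pre=lambda x: x >= 0,
            post=lambda r, x: abs(r * r - x) < 1e-9)
def sqrt_newton(x):
    """Newton iteration; the assertions catch residual design faults
    (e.g. a wrong update rule) at run time."""
    r = x or 1.0
    for _ in range(60):
        r = 0.5 * (r + x / r)
    return r
```

If either condition fails, processing reports the error instead of silently propagating a wrong result, matching the description above.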
247. Failure Tracking T Dh 1983 Failure tracking is used to compile and store data upon Desirable for safety-related 6 computer x x • [Bishop90]
Ds or which benchmarking can be performed. Failure applications. Tools available.
older tracking ensures the collection of quality data that
reflects the system as a whole. Aim is to minimise the
consequences of detected failures in the hardware and
software.
248. Fallible machine Human T H 1990 A model of human information processing that 3 chemical x • [Fields01]
Error accounts for a variety of empirical findings. The • [Reason90]
important feature of the model is that items in a long
term “Knowledge base” (such as task knowledge) are
“activated” and recalled into working memory by
processes that depend on the current contents of the
working memory and sensory inputs. Items that are
recalled will ultimately be used in making decisions
that result in motor outputs. Central to the operation of
this ‘machine’ are the processes by which long term
memory items are ‘activated’ in a way that allows
them to be selected for use. According to the model,
two processes govern the activation of long term
memory items: similarity matching and frequency
gambling. Briefly stated, similarity matching means
that items are activated on the basis of how closely
they match environmental and task dependent cues,
and frequency gambling means that items receive
greater activation if they have been activated more
frequently in the past.
249. FANOMOS D 1982 FANOMOS aims to monitor and control the impact of Developed by NLR. Experience 7 airport x • [FANOMOS]
(Flight track and aircraft noise on built-up areas around airports. It has with FANOMOS includes the • [KleinObbink&Smit,
Aircraft Noise the following main functions: Flight track monitoring of flight tracks and/or 2004]
Monitoring System) reconstruction, Monitoring violation of prescribed noise in the vicinity of
flight routes, Correlation between noise measurements Amsterdam Airport Schiphol,
and flights, Correlation between complaint data and Rotterdam Airport,
flights, Calculation and monitoring of actual noise Maastricht/Aachen Airport,
exposure, Statistical processing of flight data. Manchester, Zürich and all major
FANOMOS is also used to collect statistical data of airports in Germany (this is
aircraft trajectories for safety studies. It includes a referred to as STANLY_Track).
database of aircraft trajectories in the Netherlands FANOMOS software has been
since 2001. made available for integration in
the Global Environment System
(GEMS) offered by Lochard Pty.,
Melbourne, Australia. Lochard
GEMS systems, including
FANOMOS software, are
installed world-wide at over 25
airports.
FAR-25 See JAR-25 (Joint Aviation
(Federal Aviation Requirements Advisory Material
Requirements -25) Joint (AMJ) 25.1309)
250. FAST T Dh 1965 Visually displays the interrelationships between all FAST was first introduced in 2 managem x • [HIFA Data]
(Functional Analysis functions that must be performed to achieve the basic 1965 by the Society of American ent • [KirwanAinsworth92]
System Technique) function. The goal is to provide an understanding of Value Engineers. • [FAA HFW]
how the system works and how cost-effective This tool is used in the early • [Sharit97]
modifications can be incorporated. Steps are: 1 Define stages of design to investigate • [DeMarle92]
all of the functions using the verb-noun pairs. Write system functions in a hierarchical • [Roussot03]
these noun-verb pairs on cards or sticky notes so that format and to analyse and
they can be easily manipulated; 2 Select the two-word structure problems (e.g., in
pairs that best describe the basic function; 3 Use the allocation of function). It asks
basic function to create a branching tree structure from ‘how’ a sub-task links to tasks
the cards described in 2 above; 4 Verify the logic higher up the task hierarchy, and
structure; 5 Delineate the limits of responsibility of the ‘why’ the super-ordinate tasks are
study so that the analysis does not go on to functions dependent on the sub-tasks.
outside of the scope.
251. FAST Method T R 2005 The FAST Method is aimed at identifying future Steps 1 and 2 and small part of 1 2 3 aviation x x x x • [FAST method, 2005]
(Future Aviation Safety hazards that have not yet appeared because the step 3 are Customer ATM
Team Method) changes within the aviation system that may produce responsibility; the main part of
these hazards have not yet taken place. The method step 3 and the main part of step
process flow consists of 12 steps: 1) Be responsible 12 are FAST responsibility; the
for implementation of global aviation system changes; other steps are Expert Team
recognise your need for systematic prediction of responsibility.
hazards associated with changes and to design those
hazards out of the system or avoid or mitigate the
hazard; 2) Clearly define scope of expert team study;
3) Assemble an expert team; 4), 5) and 6)
Communicate with FAST and Customer to understand
the complete task; to understand pertinent Areas of
Change (AoC); to determine key interactions; 7)
Refine the visions of the future; 8) Compile the
hazards; 9) Determine the watch items; 10) Compile
recommendations; 11) Inform FAST regarding results;
12) Inform customers regarding results.
Fast-Time Simulation See Computer modelling and
simulation
252. Fault Injection T Ds 1970 Faults are injected into the code to see how the See also Software Testing. 7 computer x • [FaultInjection]
s software reacts. When executed, a fault may cause an • [Voas97a]
error, which is an invalid state within a system • [Voas97b]
boundary. An error may cause further errors within the • Wikipedia
system boundary, therefore each new error acts as a
fault, or it may propagate to the system boundary and
be observable.
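A minimal fault-injection harness, with a hypothetical additive checksum as the system under test: each injected bit flip of an input is checked for propagation to an observable output error.

```python
def checksum(values):
    """System under test (illustrative): additive 8-bit checksum."""
    return sum(values) % 256

def run_with_fault(values, index, bit):
    """Fault-injection harness: flip one bit of one input, run the
    system, and return the (possibly corrupted) output."""
    faulty = list(values)
    faulty[index] ^= 1 << bit
    return checksum(faulty)

golden = checksum([10, 20, 30])          # fault-free reference output
# Count how many injected faults propagate to an observable output error
observable = sum(
    run_with_fault([10, 20, 30], i, b) != golden
    for i in range(3)
    for b in range(8)
)
```

For this toy system every single-bit fault propagates, since flipping bit b changes the sum by ±2^b, which is never 0 modulo 256; in realistic software, faults that do not propagate reveal masking or dead code.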
253. Fault Isolation T Ds 1985 The method is used to determine and locate faults in Determines faults in any large- 3 x x • [FAA00]
Methodology large-scale ground based systems. Examples of scale ground based system that is • [Rakowsky]
specific methods applied are: Half-Step Search, computer controlled. • [ΣΣ93, ΣΣ97]
Sequential Removal/ Replacement, Mass replacement, Sometimes referred to as Fault • Wikipedia
Lambda Search, and Point of Maximum Signal Detection and Isolation (FDI).
Concentration. See also FDD.
254. Fault Schedule and T R The purpose of a fault schedule is to identify hazards 3 6 nuclear x • [Kirwan&Kennedy&
Bounding Faults to operators and to propose engineered, administrative Hamblen]
and contingency controls to result in acceptable risks.
255. FDD T Ds 1995 Fault detection is the process of checking a system for Software architecture phase. 3 computer x x • [EN 50128]
(Fault Detection and or erroneous states caused by a fault. A fault is evaluated See also Safety Bag. • [Rakowsky]
Diagnosis scheme) older by means of a classification into non-hazard and • [Schram&Verbruggen
hazard classes that are represented by fuzzy sets. 98]
Through the use of diagnostic programs, the software • [Sparkman92]
checks itself and hardware for incorrect results.
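The fuzzy classification step described above can be sketched as follows; the residual thresholds (1.0 and 3.0) and the piecewise-linear membership shape are assumptions chosen for illustration, not values from the method's literature.

```python
def fuzzy_memberships(residual):
    """Classify the magnitude of a fault-detection residual into
    'non-hazard'/'hazard' fuzzy sets with a piecewise-linear
    membership function (illustrative thresholds)."""
    lo, hi = 1.0, 3.0
    if residual <= lo:
        hazard = 0.0
    elif residual >= hi:
        hazard = 1.0
    else:
        hazard = (residual - lo) / (hi - lo)
    return {"non-hazard": 1.0 - hazard, "hazard": hazard}
```

A diagnostic program would compute the residual from redundant measurements or self-checks and use the membership degrees to decide whether to flag the state as hazardous.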
FDI See Fault Isolation Methodology
(Fault Detection and
Isolation)
256. FFC T T 1999 Full-scale high fidelity interactive Air Traffic Control Opened December 13, 1999 at 3 7 ATC x x • [FFC guide 2004]
(Future Flight Central) Tower simulator that aims to use human-in-the-loop NASA Ames Research Center, • [FFC web]
simulation to study improvements to airport safety and Moffett Field, California. • [GAIN ATM, 2003]
capacity. It is designed to allow virtual reality tests of Its design and development was a • [SAP15]
new tower procedures, airport designs and joint NASA and FAA supported
technologies. project. NASA maintains and
operates the simulator.
257. FFD T Dh 1971 Block diagram that illustrates the relationship between Is called the most popular 2 defence x • [HEAT overview]
(Functional Flow or different functions. It is constructed by identifying the systems method for the • [KirwanAinsworth92]
Diagram) older functions to be performed in the system, and then determination of functional • [MIL-HDBK]
arranging these as a sequence of rectangular blocks, requirements. FFDs are • [FAA HFW]
which represent the interrelationships between the sometimes called Function Flow • Wikipedia
functions. AND and OR gates are used to represent Block Diagrams.
necessary sequences of functions or alternative Tool: FAST
courses of action.
258. FHA T Dh 2000 The FHA according to EATMP SAM analyses the The FHA according to EATMP 1 3 4 ATM x • [EHQ-SAM]
(Functional Hazard potential consequences on safety resulting from the SAM is a refinement and • [Review of SAM
Assessment) loss or degradation of system functions. Using service extension of the FHA according techniques, 2004]
according to EATMP experience, engineering and operational judgement, to JAR-25 and of the FHA
SAM the severity of each hazard effect is determined according to ARP 4761, but its
qualitatively and is placed in a class 1, 2, 3, 4 or 5 scope is extended to Air
(with class 1 referring to the most severe effect, and class Navigation Systems, covering
5 referring to no effect). Safety Objectives determine AIS (Aeronautical Information
the maximum tolerable probability of occurrence of a Services), SAR (Search and
hazard, in order to achieve a tolerable risk level. Five Rescue) and ATM (Air Traffic
substeps are identified: 1) FHA initiation; 2) FHA Management).
planning; 3) Safety objectives specification; 4a) FHA
validation; 4b) FHA verification; 4c) FHA assurance
process; 5) FHA completion. Most of these steps
consist of subtasks.
259. FHA T Dh 1992 In FHA according to JAR-25, all system functions are FHA was developed by the 3 aircraft x • [JAR 25.1309]
(Functional Hazard or systematically examined in order to identify potential aerospace industry to bridge • [Klompstra&Everdij9
Assessment) older failure conditions which the system can cause or between hardware and software, 7]
according to JAR-25 contribute to; not only if it malfunctions, but also in its since functions are generally • [Mauri, 2000]
normal response to unusual or abnormal external identified without specific • [MUFTIS3.2-I]
factors. Each failure condition is classified according implementations.
to its severity. If the system is not complex and similar
to systems used on other aeroplanes, this identification
and classification may be derived from design and
installation appraisals and the service experience of
the comparable, previously-approved, systems. If the
system is complex it is necessary to systematically
postulate the effects on the safety of the aeroplane and
its occupants resulting from any possible failure,
considered both individually and in combination with
other failures or events.
260. FHA T Dh 1994 FHA according to ARP 4761 examines aircraft and This FHA is a refinement and 3 aircraft x x • [ARP 4754]
(Functional Hazard Ds system functions to identify potential functional extension of the FHA according avionics • [ARP 4761]
Assessment) failures and classifies the hazards associated with to JAR-25. It covers software as • [Klompstra&Everdij9
according to ARP 4761 specific failure conditions. The FHA is developed well as hardware. 7]
early in the development process and is updated as • [Lawrence99]
new functions or failure conditions are identified. • [Mauri, 2000]
FHA is applied at two different levels: an aircraft level • Wikipedia
and a system level. The former is a qualitative
assessment of the basic known aircraft functions, the
latter examines each system which integrates multiple
aircraft functions. An aircraft level FHA, which is a
high level FHA, is applied during an activity to
determine functional failure consequences and
applications; i.e. to determine the classification of the
failure conditions associated with each function. This
classification is based on hazard severity. A system
level FHA is applied during an activity in which
functions are allocated to systems and people; this
stage consists of establishing the appropriate grouping
of aircraft functions and the allocation of the related
requirements to people or systems. The allocation
should define inputs, processes performed and outputs.
From the function allocations and the associated
failure consequences, further specific system
requirements necessary to achieve the safety
objectives are determined. The output is a set of
requirements for each human activity and aircraft
system together with associated interfaces.
261. FHA T R 1965 A system safety technique that is an offshoot from Any electrical, electronics, 3 aircraft x • [FAA AC431]
(Fault Hazard Analysis) about FMEA. It is similar to FMEA; however, failures that avionics, or hardware system, avionics • [FAA00]
could present hazards are evaluated. Hazards and sub-system can be analysed to electricity • [FT handbook02]
failures are not the same. Hazards are the potential for identify failures, malfunctions, • [Leveson95]
harm, they are unsafe acts or conditions. When a anomalies, and faults, that can • [ΣΣ93, ΣΣ97]
failure results in an unsafe condition it is considered a result in hazards. Hazard analysis • [GAIN AFSA, 2003]
hazard. Many hazards contribute to a particular risk. during system definition and
The Fault Hazard Analysis of a subsystem is an development phase. Emphasis on
engineering analysis that answers a series of the cause. Inductive. FHA is very
questions: What can fail? How it can fail? How similar to PHA and is a subset of
frequently will it fail? What are the effects of the FMEA.
failure? How important, from a safety viewpoint, are
the effects of the failure?
262. Field Study G A systematic observation of events as they occur in Alternative names: Systematic 3 many x • [FAA HFW]
their natural environment with the purpose of identifying observation; Naturalistic • Wikipedia
structural and process characteristics of a system, to observation. See also Plant
identify ways to maintain system performance, to walkdowns/ surveys. See also
improve the system or to correct the system. Contextual Inquiry. See also
Analysis of field data.
263. Finite State semi- M These are Markov processes having a finite state 4 x x • [Markov process]
Markov processes space, that also allow non-exponential distributions.
264. Fire Hazards Analysis G R Fire Hazards Analysis is applied to evaluate the risks Any fire risk can be evaluated. 3 5 rail x x • [FAA AC431]
associated with fire exposures. There are several fire- • [FAA00]
hazard analysis techniques, e.g. load analysis, hazard • [Peacock&al01]
inventory, fire spread, scenario method. Subtechniques • [ΣΣ93, ΣΣ97]
are: Preliminary Fire Hazard Analysis, Barrier
Analysis, Fuel Load Analysis, National Fire Protection
Association Decision Tree Analysis.
265. FIs I Ds 1976 The inspection process involves the following steps - One of the best methodologies 3 aircraft x • [EN 50128]
(Fagan Inspections) 1) Identify Deliverable To Inspect 2) Choose available to evaluate the quality avionics • [NASA-GB-1740.13-
Moderator and Author 3) Run Deliverable Through of code modules and program 96]
Code Validator 4) Identify Concerns (Create sets. • [Rakowsky]
Inspection Checklist) 5) Choose Reviewers and Scribe Named after Michael Fagan who • Wikipedia
6) Hold Initial Briefing Meeting 7) Perform the is credited with being the
Inspection Itself 8) Hold the Inspection Meeting 9) inventor of formal software
Generate Issues Report 10) Follow-up on Issues And inspections.
the following people - a) Author b) Moderator c) See also Code Inspection
Reviewer d) Scribe. Checklists.
Fishbone Diagram See Cause and Effect Diagram
266. Fitts Lists T H 1951 These lists summarise the advantages of humans and Static allocation of functions. 2 6 ATC x x • [FAA HFW]
machines with regards to a variety of functions. They Erroneous Usage: Sole basis for defence • [Fitts51]
list characteristics of tasks that humans are most suited allocating functions. Broader • [HEAT overview]
for and characteristics of tasks that machines are most organisational and cultural issues • Wikipedia
suited for. as well as psychological and
financial issues are not taken into
account in these lists. The
function may need to be allocated
to the human because of these.
The performance data that the
lists are based on may not
generalise. Named after Paul M.
Fitts, who developed a model of
human movement, Fitts's law.
267. Five Star System T O 1988 The Five Star Health and Safety Management System Qualitative. Adopted by British 8 health x • [HE, 2005]
Audit is an independent evaluation of an Safety Council. • [Kennedy&Kirwan98]
organisation’s health and safety management system.
Its aim is to give an independent perspective to
support systems and reassure companies that they are
working towards best practice and to resolve poor
practice. The audit is based upon a Business
Excellence Model and aims to cover eight areas of the
management systems: Best practice, Continuous
improvement, Safety organisation, Management
control systems, Fire control systems, Measurement
and control systems, Workplace implementation,
Verification.
268. FLASH T Dh 2000 FLASH enables the assessment of a hierarchically 3 4 5 6 health x x • [Mauri, 2000]
(Failure Logic Analysis Ds described system by identifying potential functional
for System Hierarchies) failures of the system at the application level and then
to systematically determine the causes of those
failures in progressively lower levels of the design.
The result of the assessment is a consistent collection
of safety analyses (a hierarchy of tables) which
provides a picture of how low-level failures are
stopped at intermediate levels of the design, or
propagate and give rise to hazardous malfunctions.
269. Flight Data Monitoring D R 1990 These tools assist in the routine analysis of flight data Other examples of tools for visual 3 7 8 aviation x x x x • [GAIN AFSA, 2003]
Analysis and from generated during line operations, to reveal situations display of data are Brio • [AirFASE web]
Visualisation Tools that require corrective action, enable corrective action Intelligence 6 (Brio Software • [AGS example]
before problems occur, and identify operational trends. Japan, 1999), Spotfire (TIBCO • [AGS slides]
FDM tools capture flight data, transform these into an Spotfire, Inc) and Starlight • [CEFA example]
appropriate format for analysis, and visualise them to (Battelle Memorial Institute). • [APMS example]
assist analysis. Examples of tools are:
• [APMS guide]
• AirFASE (Airbus and Teledyne Controls, 2004) - • [Statler2004]
measurement, analysis and reporting dealing with in-
flight operational performance of commercial aircraft
• AGS (Analysis Ground Station) (SAGEM, 1992) -
provide report generation from automatic and
manual data selection, import/export functions,
advanced analysis, and database management
• APMS (Aviation Performance Measuring System)
(NASA Ames, 1993) - flight-data analyses and
interpretation; enables airline carriers to analyse the
flight data in order to identify safety trends and
increase flight reliability
• CEFA (Cockpit emulator for Flight Analysis) (CEFA
Aviation) - restores universal flight synthesis
extracted from flight data. Emulates a view of the
cockpit, and a 3D outside view of the aircraft moving
in flight environment.
• FlightAnalyst (SimAuthor, Inc.) - analyse routine
and special events, categorical events, exceedances,
negative safety trends, and other flight training,
operational or tactical issues
• FlightTracer (Stransim Aeronautics Corporation) -
3D-visualisation tool for flight investigations,
training, and monitoring programs
• FlightViz (SimAuthor, Inc.) - facilitates analysts to
visually recreate a flight in 3D, using actual aircraft
or simulator flight data
• FltMaster (Sight, Sound & Motion) - 3D animation
and flight data replay using a suite of visualisation
tools able to accept data from simulations, manned-
motion simulators, and recorded flight data
• LOMS (Line Operations Monitoring System)
(Airbus) – creates database of flight data recorded in
the digital flight data recorder media, compares flight
data, identifies exceedances, and monitors events to
propose: menu-driven reporting, identification of risk
scenario, and trend analysis.
• RAPS & Insight (Recovery, Analysis, &
Presentation System & Insight) (Flightscape, 1990) -
ground data replay and analysis station including
flight animation as well as aircraft accident and
incident investigations
• SAFE (Software Analysis for Flight Exceedance)
(Veesem Raytech Aerospace) - analyse FDR data of
flights, to indicate adverse trends as they develop, so
that they can be monitored and preventive action taken
before a chronic breakdown of vital systems occurs.
67
Id Method name For- Pur- Year Aim/Description Remarks Safety assessment stage Domains Application References
mat pose 1 2 3 4 5 6 7 8 H S H P O
w w u r r
FlightAnalyst See Flight Data Monitoring
Analysis and Visualisation
FlightTracer See Flight Data Monitoring
Analysis and Visualisation
FlightViz See Flight Data Monitoring
Analysis and Visualisation
270. Flow Analysis T Dh 1982 The analysis evaluates confined or unconfined flow of The technique is applicable to all 3 chemical x x • [Bishop90]
or fluids or energy, intentional or unintentional, from one systems which transport or which electricity • [FAA00]
older component/sub-system/ system to another. Also used control the flow of fluids or computer • [ΣΣ93, ΣΣ97]
to detect poor and potentially incorrect program energy. Complementary to • Wikipedia
structures. Two types: Control FA and Data FA inspection methods. Useful
especially if there is suitable tool
support. Tools available.
FltMaster See Flight Data Monitoring
Analysis and Visualisation
271. FMEA T Dh 1949 FMEA is a reliability analysis that is a bottom up Any electrical, electronics, 3 nuclear x • [Bishop90]
(Failure Mode and approach to evaluate failures within a system. It avionics, or hardware system, chemical • [Cichocki&Gorski]
Effect Analysis) provides check and balance of completeness of overall sub-system can be analysed to space • [FAA00]
safety assessment. It systematically analyses the identify failures and failure windturbine • [KirwanAinsworth92]
components of the target system with respect to modes. Useful in system • [Leveson95]
certain attributes relevant to safety assessment. reliability analyses. Tools rail • [MUFTIS3.2-I]
available. Not suitable for • [ΣΣ93, ΣΣ97]
humans and software. Sometimes
• [Storey96]
referred to as SFMEA (Systems
• [GAIN AFSA, 2003]
Failure Mode and Effect
Analysis). See also AEA, CMFA, • Wikipedia
Decision Tables, DMEA, FHA,
FMECA, FMES, GFCM,
HESRA, HF PFMEA, PHA,
PRA, SEEA, SHERPA, SFMEA,
SPFA.
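The bottom-up, completeness-checking character of FMEA can be sketched in a few lines of Python; the cooling-loop system, its components, failure modes and effects are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    failure_mode: str
    local_effect: str
    system_effect: str
    detection: str

# Illustrative FMEA worksheet rows for a hypothetical cooling loop.
worksheet = [
    FailureMode("pump", "fails to start", "no flow", "loss of cooling", "flow sensor"),
    FailureMode("pump", "runs degraded", "low flow", "reduced cooling margin", "flow sensor"),
    FailureMode("valve", "stuck closed", "flow blocked", "loss of cooling", "position switch"),
]

# Check-and-balance on completeness: every component in the design
# must appear in at least one analysed failure mode.
components = {"pump", "valve"}
analysed = {row.component for row in worksheet}
assert components == analysed

for row in worksheet:
    print(f"{row.component}: {row.failure_mode} -> {row.system_effect}")
```

A real worksheet would also record failure causes, compensating provisions and recommended actions per row.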
272. FMECA T Dh 1967 Is FMEA completed with a measure for criticality (i.e. Useful for safety critical 3 5 aircraft x • [Bishop90]
(Failure Mode Effect probability of occurrence and gravity of hardware systems where chemical • [FAA00]
and Criticality Analysis) consequences) of each failure mode. Aim is to rank reliability data of the components offshore • [Leveson95]
the criticality of components that could result in is available. Less relevant windturbine • [MUFTIS3.2-I]
injury, damage or system degradation through single- technique now that HAZOP is • [ΣΣ93, ΣΣ97]
point failures in order to identify those components developed. See also Criticality rail • [Pentti&Atte02]
that might need special attention and control measures Analysis.
• [DNV-HSE01]
during design or operation.
• [Hoegen97]
• [Kumamoto&Henley
96]
• [Matra-HSIA99]
• [Page&al92]
• [Parker&al91],
• [Rademakers&al92]
• [Richardson92]
• [SAE2001]
• [Storey96]
• [Villemeur91-1]
• [GAIN AFSA, 2003]
• Wikipedia
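The criticality ranking that FMECA adds to FMEA can be illustrated with a minimal Python sketch; the failure modes, occurrence probabilities and severity classes below are invented, not taken from any published data:

```python
# Each failure mode gets a probability of occurrence and a severity
# class (1 = minor .. 4 = catastrophic); criticality is taken here as
# their product, and modes are ranked so the most critical single-point
# failures surface first.
failure_modes = {
    "relay contacts weld closed": (1e-5, 4),
    "sensor drifts out of range": (1e-3, 2),
    "connector corrodes open":    (1e-4, 3),
}

criticality = {fm: p * sev for fm, (p, sev) in failure_modes.items()}
ranked = sorted(criticality, key=criticality.get, reverse=True)
for fm in ranked:
    print(f"{criticality[fm]:.1e}  {fm}")
```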
273. FMES T Dh 1994 Groups failure modes with like effects. FMES failure 5 aircraft x • [ARP 4761]
(Failure Modes and or rate is sum of failure rates coming from each FMEA.
Effects Summary) older Is used as an aid to quantify primary FTA events.
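The summing of FMEA failure rates into grouped effects can be shown with a short Python sketch; the failure modes, effects and per-flight-hour rates are invented for the example:

```python
# FMES groups failure modes with like effects; the rate of each grouped
# effect is the sum of the contributing FMEA failure-mode rates, and the
# grouped rates can then feed the primary events of a fault tree.
fmea_rows = [
    ("generator 1 fails",       "loss of AC bus 1", 2.0e-5),
    ("bus tie contactor opens", "loss of AC bus 1", 5.0e-6),
    ("generator 2 fails",       "loss of AC bus 2", 2.0e-5),
]

fmes = {}
for _, effect, rate in fmea_rows:
    fmes[effect] = fmes.get(effect, 0.0) + rate

for effect, rate in fmes.items():
    print(f"{effect}: {rate:.1e} per flight hour")
```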
274. FORAS T R 2004 FORAS gives a quantitative assessment of accident / 5 6 aviation x x x • [NRLMMD, 2006]
(Flight Operations Risk incident risk for a flight operation, broken down into a • [Cranfield, 2005]
Assessment System) variety of subgroups: by fleet, region, route, or
individual flight. This assessment is performed using a
mathematical model which synthesizes a variety of
inputs, including information on crew, weather,
management policy and procedures, airports, traffic
flow, aircraft, and dispatch operations. The system
will identify those elements that contribute most
significantly to the calculated risk, and will be able in
some cases to suggest possible interventions.
275. Formal Inspections T Ds 1996 A safety checklist, based on safety requirements, is 7 aircraft x • [NASA-GB-1740.13-
or created to follow when reviewing the requirements. avionics 96]
older After inspection, the safety representative reviews the
official findings of the inspection and translates any
that require safety follow-up on to a worksheet.
276. Formal Methods M Formal Methods refer to techniques and tools based on Generation of code is the ultimate 4 6 avionics x • [DO178B]
mathematical modelling and formal logic that are used output of formal methods. In a computer • [EN 50128]
to specify and verify requirements and designs for pure formal methods system, • [FAA00]
computer systems and software. analysis of code is not required. • [NASA-GB-1740.13-
In practice, however, attempts are 96]
often made to apply formal • [Rakowsky]
methods to existing code after the • [Storey96]
fact. • Wikipedia
277. Formal Proof T Ds 1969 A number of assertions are stated at various locations Software verification and testing 6 computer x • [EN 50128]
or in the program and they are used as pre and post phase. • [Rakowsky]
older conditions to various paths in the program. The proof • Wikipedia
consists of showing that the program transfers the
preconditions into the post conditions according to a
set of logical rules and that the program terminates.
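The assertion style described here can be imitated at run time in Python: pre- and postconditions are stated at program points, and the proof obligation is that the code transfers one into the other and terminates. A formal proof would discharge these obligations statically; the sketch below merely checks them dynamically:

```python
def integer_sqrt(n: int) -> int:
    assert n >= 0                              # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:              # invariant: r*r <= n
        r += 1                                 # terminates: r grows toward n
    assert r * r <= n < (r + 1) * (r + 1)      # postcondition
    return r

print(integer_sqrt(10))
```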
278. Formally Designed G D 1988 Aim of formally designed hardware is to prove that Best applied in context where all 6 rail x • [Bishop90]
Hardware or the hardware design meets its specification. A formal components are formally proven. computer • Wikipedia
older specification is a mathematical description of the Can be used in combination with
hardware that may be used to develop an N out of M voting. Tools
implementation. It describes what the system should available.
do, not (necessarily) how the system should do it.
Provably correct refinement steps can be used to
transform a specification into a design, and ultimately
into an actual implementation, that is correct by
construction.
279. Forward Recovery T Ds 1995 The aim of forward recovery is to apply corrections to Software architecture phase. 6 computer x • [EN 50128]
or a damaged state in a ‘bottom-up’ fashion, starting at • [Rakowsky]
older the lowest levels and working up to a failure within the • [SSCS]
broader system. For this approach to work, some
understanding of errors that have occurred is needed.
If errors are very well understood, the Forward
Recovery approach can give rise to efficient and
effective solutions.
280. FPC T Dh 1921 A Flow Process Chart is a graph with arrows and six Similarities with Operations 2 defence x x x • [FAA00]
(Flow Process Chart) types of nodes: Operation, Move, Delay, Store, Analysis in [FAA00]. • [HEAT overview]
Inspect process, and Decision. It allows a closer FPCs were a precursor to • [MIL-HDBK]
examination of the overall process charts for material Operational Sequence Diagram. • [KirwanAinsworth92]
and/or worker flow and includes transportation, • Wikipedia
storage and delays.
281. FPTN T Ds 1993 Hierarchical graphical notation that represents system Originated from HAZOP 4 computer x • [Mauri, 2000]
(Failure Propagation and failure behaviour. Is linked to design notation and is
Transformation both an inductive and deductive analysis. FPTN makes
Notation) consistency checks and is designed to be used at all
stages of the life cycle. FPTN represents a system as a
set of interconnected modules; these might represent
anything from a complete system to a few lines of
program code. The connections between these
modules are failure modes, which propagate between
them.
282. FRAM T R 2004 FRAM is a qualitative accident model that describes Developed by Erik Hollnagel. 4 ATM x x • [HollnagelGoteman,
(Functional Resonance how functions of (sub)systems may under FRAM is based on the premise 2004]
Accident Model) unfavourable conditions resonate and create situations that performance variability, • [Hollnagel, 2004]
that are running out of control (incidents / accidents). internal variability and external • [Hollnagel, 2006]
It can be used in the search for function (process) variability are normal, in the
variations and conditions that influence each other and sense that performance is never
then may resonate in the case of risk analysis, or have stable in a complex system such
resonated in the case of accident analysis. The model as aviation. Performance variability
syntax consists of multiple hexagons that are coupled. is required to be sufficiently
Each hexagon represents an activity or function. The flexible in a complex
corners of each hexagon are labelled (T): Time environment and it is desired to
available: This can be a constraint but can also be allow learning from high and low
considered as a special kind of resource; (C): Control, performance events.
i.e. that which supervises or adjusts a function. Can be
plans, procedures, guidelines or other functions; (O):
Output, i.e. that which is produced by function.
Constitute links to subsequent functions; (R):
Resource, i.e. that which is needed or consumed by
function to process input (e.g., matter, energy,
hardware, software, manpower); (P): Precondition, i.e.
system conditions that must be fulfilled before a
function can be carried out; and (I): Input, i.e. that
which is used or transformed to produce the output.
Constitutes the link to previous functions.
283. Front-End Analysis I T 1993 Comprises four analyses: (1) Performance analysis: Also referred to as Training 3 5 road x x x • [FEA web]
Determine if it is a training/ incentive/ organisational Systems Requirements Analysis. • [IDKB]
problem. I.e., identify who has the performance
problem (management/ workers, faculty/learners), the
cause of the problem, and appropriate solutions. (2)
Environmental analysis: Accommodate organisational
climate, physical factors, and socio-cultural climate to
determine how these factors affect the problem. (3)
Learner analysis: Identify learner/ trainee/ employee
characteristics and individual differences that may
impact on learning / performance, such as prior
knowledge, personality variables, aptitude variables,
and cognitive styles. (4) Needs assessment: Determine
if an instructional need exists by using some
combination of methods and techniques.
284. FSM M 1962 An FSM is a behavior model composed of a finite A simple yet powerful technique 4 computer x x • [Bishop90]
(Finite State Machines) number of states, transitions between those states, and for event driven systems. Tools biology • [EN 50128]
actions, similar to a flow graph in which one can available. Similar to State • [HEAT overview]
inspect the way logic runs when certain conditions are Transition Diagrams. Sometimes • [Rakowsky]
met. Aim is to model and analyse the control structure referred to as (Finite State) • Wikipedia
of a purely discrete state system. Automaton.
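A minimal event-driven FSM in Python, in the sense described above; the door model and its events are invented for illustration:

```python
# States, an event-driven transition table, and a step function.
transitions = {
    ("closed", "open_cmd"):   "open",
    ("open",   "close_cmd"):  "closed",
    ("closed", "lock_cmd"):   "locked",
    ("locked", "unlock_cmd"): "closed",
}

def step(state, event):
    # Undefined (state, event) pairs leave the state unchanged -- the
    # transition table can be inspected exhaustively for such gaps,
    # which is the analytical appeal of FSMs.
    return transitions.get((state, event), state)

state = "closed"
for event in ["open_cmd", "close_cmd", "lock_cmd", "open_cmd"]:
    state = step(state, event)
print(state)
```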
285. FSMA T R 1994 A Fault-Symptom Matrix is a matrix with vertically Linked to Confusion Matrix 3 5 nuclear x • [Kirwan94]
(Fault-Symptom Matrix or the faults of a system and horizontally the possible Approach. • [Qiu&al]
Analysis) older symptoms. The cells contain probabilities of
occurrence.
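A Fault-Symptom Matrix can be held in a small table; the faults, symptoms and cell probabilities below are invented for the example:

```python
# Rows are faults, columns are symptoms; a cell holds the probability
# that the fault produces the symptom.
faults = ["pump cavitation", "sensor failure"]
symptoms = ["low flow", "noisy reading"]
matrix = [
    [0.9, 0.2],   # pump cavitation
    [0.1, 0.8],   # sensor failure
]

def most_likely_fault(symptom):
    # Read the symptom's column and return the fault with the highest
    # probability of producing it.
    j = symptoms.index(symptom)
    column = [matrix[i][j] for i in range(len(faults))]
    return faults[column.index(max(column))]

print(most_likely_fault("noisy reading"))
```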
286. FSSA T R 1992 System safety analysis techniques are applied to Facilities are analysed to identify 3 nuclear x x x • [FAA AC431]
(Facilities System or facilities and its operations. Safety analyses, within the hazards and potential accidents chemical • [FAA00]
Safety Analysis) older FSSA, document the safety bases for and associated with the facility and • [ΣΣ93, ΣΣ97]
commitments to the control of subsequent operations. systems, components, equipment,
This includes staffing and qualification of operating or structures.
crews; the development, testing, validation, and
inservice refinement of procedures and personnel
training materials; and the safety analysis of the
person-machine interface for operations and
maintenance. In safety analyses for new facilities and
safety-significant modifications to existing facilities,
considerations of reliable operations, surveillance, and
maintenance and the associated human factors safety
analysis are developed in parallel and integrated with
hardware safety design and analysis. Once a facility or
operation is in service, the responsible contractor and
safety oversight activities use the report.
287. FTA T R 1961 A Fault Tree Analysis is a graphical design technique Former name is CTM (Cause 4 5 aircraft x x x • [EN 50128]
(Fault Tree Analysis) that could provide an alternative to block diagrams. It Tree Method). Any complex ATM • [FAA00]
is a top-down, deductive approach structured in terms procedure, task, system, can be nuclear • [FT Handbook02]
of events. Starting at an event that would be the analysed deductively. Useful for offshore • [GAIN ATM, 2003]
immediate cause of a hazard (the top event), analysis system safety analysis and windturbine • [GAIN AFSA, 2003]
is carried out along a tree path. Combinations of HAZOPs. Tools available, e.g. • [Leveson95]
causes are described with logical operators (And, Or, Fault Tree+, FaultrEASE,
• [Mauri, 2000]
etc). Faults are modelled in terms of failures, RISKMAN, see [GAIN ATM,
• [MUFTIS3.2-I]
anomalies, malfunctions, and human errors. 2003] and [GAIN AFSA, 2003]
for some descriptions. Developed • [ΣΣ93, ΣΣ97]
in 1961 by Bell Telephone • [Storey96]
Laboratories for US ICBM • [Henley&Kumamoto
(Intercontinental Ballistic Missile 92]
system) program; guide published • [DNV-HSE01]
in 1981. The logical operations • [Howat02]
are covered within IEC • [Kumamoto&Henley
(International Electrotechnical 96]
Commission ) 1025 international • [Villemeur91-1]
standard. For software it can be • Wikipedia
used during the software
architecture phase. Can
incorporate human errors.
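Under the usual assumption of independent basic events, AND gates multiply probabilities and OR gates combine them as 1 − Π(1 − p). A minimal Python sketch; the event names and probabilities are invented for illustration:

```python
from functools import reduce

def AND(*ps):
    # All inputs must fail: product of probabilities.
    return reduce(lambda a, b: a * b, ps)

def OR(*ps):
    # Any input failing suffices: complement of none failing.
    return 1 - reduce(lambda a, b: a * (1 - b), ps, 1)

p_pump_a = 1e-3   # basic event: pump A fails
p_pump_b = 1e-3   # basic event: pump B fails
p_power = 1e-4    # basic event: common power supply fails

# Top event "loss of cooling": both pumps fail, OR the power fails.
p_top = OR(AND(p_pump_a, p_pump_b), p_power)
print(f"{p_top:.2e}")
```

The common-cause power event dominating the redundant pump pair is the kind of insight the tree structure makes visible.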
Function Allocation Evaluation Matrix: See Function Allocation Trades and See Decision Matrix
288. Function Allocation T M 1986 Working in conjunction with project subsystem Several techniques are proposed 2 4 defence x x x • [HEAT overview]
Trades or designers and using functional flows and other human to work out the details in this • [MIL-HDBK]
older error methods, plus past experience with similar method.
systems, the practitioner makes a preliminary Also referred to as Function
allocation of the actions, decisions, or functions shown Allocation Evaluation Matrix
in the previously used charts and diagrams to
operators, equipment or software.
289. Fuzzy Logic M 1965 Fuzzy logic is a superset of conventional (Boolean) It was introduced by Dr. Lotfi 4 computer x • [EN 50128]
logic that has been extended to handle the concept of Zadeh of University of • [FuzzyLogic]
partial truth: truth values between “completely true” California, Berkeley, in the • [Rakowsky]
and “completely false”. 1960’s as a means to model the • Wikipedia
uncertainty of natural language.
Software design & development
phase.
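Partial truth is carried by membership functions mapping crisp values into [0, 1]; a minimal Python sketch using the standard Zadeh connectives (the 'tall' example and its breakpoints are invented):

```python
def mu_tall(height_cm):
    # Membership function for the fuzzy set 'tall': 0 below 160 cm,
    # 1 above 190 cm, linear in between.
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30

# Zadeh connectives: AND = min, OR = max, NOT = 1 - x.
print(mu_tall(175))
print(min(mu_tall(175), 1 - mu_tall(175)))  # "tall AND not tall" is partially true
```

Note the contrast with Boolean logic, where "tall AND not tall" would be identically false.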
290. Gain Scheduling T Dh Gain scheduling is an approach to control of non- Popular methodology. See also 6 aviation x • [Schram&Verbruggen
linear systems that uses a family of linear controllers, FDD. 98]
each of which provides satisfactory control for a • Wikipedia
different operating point of the system. One or more
observable variables, called the scheduling variables,
are used to determine what operating region the
system is currently in and to enable the appropriate
linear controller. Aim is to achieve fault tolerance by
storing pre-computed gain parameters. It requires an
accurate FDD (Fault Detection and Diagnosis scheme)
system that monitors the status of the system.
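The mechanism — a scheduling variable selecting one of a family of pre-computed linear controller gain sets — can be sketched as a lookup table in Python; the airspeed breakpoints and gain values are invented for the example:

```python
# Pre-computed gain sets per operating region, keyed by the lower
# airspeed threshold of each region.
SCHEDULE = [
    (0.0,   {"kp": 2.0, "ki": 0.5}),   # low-speed region
    (120.0, {"kp": 1.2, "ki": 0.3}),   # cruise region
    (200.0, {"kp": 0.8, "ki": 0.2}),   # high-speed region
]

def gains_for(airspeed):
    # The scheduling variable (airspeed) selects the last region whose
    # threshold has been passed.
    active = SCHEDULE[0][1]
    for threshold, gains in SCHEDULE:
        if airspeed >= threshold:
            active = gains
    return active

print(gains_for(150.0))
```

A real scheme would typically interpolate between gain sets rather than switch, and would rely on the FDD system mentioned above to validate the scheduling variable.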
291. Gantt Charts T H 1915 Graphically illustrates time courses of functions and Developed by Henry Laurence 2 many x x • [FAA HFW]
tasks. The functions and tasks may be used in flow- Gantt (1861-1919). • [Gantt03]
charting methods to address potential workload • Wikipedia
problems that may have implications for function
allocation. May be applied to functions that are
temporal by definition (e.g., scheduling).
292. Gas Model M R 1971 Analytical accident risk model to determine This simple representation may 5 ATM x • [Alexander, 1971]
probability of collision between aircraft or to assess air be only suited to an uncontrolled • [MUFTIS1.2]
traffic controller workload. Based on the physical part of airspace occupied by
model of gas molecules in a heated chamber to pleasure fliers who may indeed
estimate the number of conflicts between aircraft be flying in random directions.
occupying some part of airspace. The model assumes
that aircraft are uniformly and independently
distributed within an area, i.e. a horizontal plane, or a
volume. It is further assumed that aircraft travel in
straight lines in directions which are independently
and uniformly distributed between 0 and 360° and
with speeds that are independent of the direction of
travel and are drawn, independently for each aircraft,
from a probability distribution.
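The model's assumptions (aircraft uniformly and independently placed, headings uniform on [0, 360°)) lend themselves to a quick Monte Carlo check of the expected number of proximate pairs in a snapshot of airspace; all parameters below are invented for illustration:

```python
import math
import random

random.seed(1)
SIDE_NM, N_AIRCRAFT, SEP_NM, TRIALS = 100.0, 10, 5.0, 2000

def conflicts_in_snapshot():
    # Place aircraft uniformly and independently in a square of
    # airspace and count pairs closer than the separation threshold.
    pts = [(random.uniform(0, SIDE_NM), random.uniform(0, SIDE_NM))
           for _ in range(N_AIRCRAFT)]
    return sum(math.dist(pts[i], pts[j]) < SEP_NM
               for i in range(N_AIRCRAFT) for j in range(i + 1, N_AIRCRAFT))

mean_conflicts = sum(conflicts_in_snapshot() for _ in range(TRIALS)) / TRIALS
print(round(mean_conflicts, 3))
```

The analytical gas model derives this kind of quantity (and the resulting conflict rate, via the relative-speed distribution) in closed form rather than by simulation.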
GASET See ETA (Event Tree Analysis)
(Generic Accident
Sequence Event Tree)
293. GBRAM T Ds 1995 GBRAM defines a top-down analysis method that refines 2 6 x x • [Anton, 1996]
(Goal-Based goals and attributes them to agents, starting from inputs • [Anton, 1997]
Requirements Analysis such as corporate mission statements, policy statements,
Method) interview transcripts etc. The accompanying tool GBRAT
and is designed to support goal-based requirements analysis:
GBRAT it provides procedural support for the identification,
(Goal-Based elaboration, refinement and organisation of goals to
Requirements Analysis specify the requirements for software-based information
Tool) systems, and employs interactive Web browser technology
to support the collaborative nature of requirements
engineering.
294. GDTA T H 1993 GDTA is a cognitive task analysis technique that is Developed by Mica R. Endsley. 2 ATC x • [FAA HFW]
(Goal-Directed Task concerned with the situation awareness requirements defence • [Endsley, 1993]
Analysis) necessary to complete a task. It focuses on the basic • [Bolstad et al, 2002]
goals for each team role (which may change
dynamically), the major decisions that should be made
to accomplish these goals, and the SA requirements
for each decision. GDTA attempts to determine what
operators would ideally like to know to meet each
goal. Structured interviews, observations of operators
performing their tasks, as well as detailed analysis of
documentation on users’ tasks are used to complete
the analysis process. GDTA aims to reveal
information needs for complex decision making in
environments such as air traffic control.
295. GEMS T H 1987 GEMS is an error classification model that is designed Proposed by James Reason. 5 nuclear x • [Kirwan94]
(Generic Error to provide insight as to why an operator may move Rarely used as tool on its own. • [Kirwan98-1]
Modelling System) between skill-based or automatic rule based behaviour
and rule or knowledge-based diagnosis. Errors are
categorised as slips/lapses and mistakes. The result of
GEMS is a taxonomy of error types that can be used to
identify cognitive determinants in error sensitive
environments. GEMS relies on the analyst either
having insight to the tasks under scrutiny or the
collaboration of a subject matter expert, and an
appreciation of the psychological determinants of
error.
296. Generalised Gas Model M R Analytical model. Based on the gas model, but the 5 ATM x • [MUFTIS1.2]
aircraft do not always fly in random directions. Aim is
to determine probability of collision between aircraft
or to assess air traffic controller workload.
297. Generalised Reich T R 1993 Generalisation of Reich collision risk model (CRM). 5 ATM x • [Bakker&Blom93]
Collision Risk Model For the determination of collision risk between • [Blom&Bakker02]
aircraft. Does not need two restrictive assumptions • [MUFTIS3.2-II]
that Reich’s CRM needs. Used within TOPAZ.
298. GFCM T Dh 1991 Extension and generalisation of FMEA. A FMECA is Qualitative and quantitative. 3 4 5 aircraft x x • [MUFTIS3.2-I]
(Gathered Fault or made for all components of the system. Next, failure electricity
Combination Method) older modes (or their combinations), which have the same
effect are gathered in a tree.
299. GO Charts T Dh 1975 Is used for reliability analysis of complex systems Useful for a qualitative analysis 4 education x • [Bishop90]
(Graphics Oriented (including components with two or more failure during the design stage. Related biomedical
Charts) modes), mainly during the design stage. techniques: FTA, Markov
analysis. Tools available.
300. Goal Obstruction T M 2000 A goal defines a set of desired behaviors, where a 3 6 x x • [Lamsweerde &
Analysis behavior is a temporal sequence of states. Goal Letier, 2000]
obstruction yields sufficient obstacles for the goal not • [Letier, 2001]
to be reachable; the negation of such obstacles yields • http://lamswww.epfl.ch/reference/goal
necessary preconditions for the goal to be achieved.
301. GOMS I H 1983 GOMS is a task modelling method to describe how GOMS is mainly used in 2 defence x x • [HIFA Data]
(Goals, Operators, operators interact with their systems. Goals and sub- addressing human-computer • [KirwanAinsworth92]
Methods and Selection goals are described in a hierarchy. Operations describe interaction and considers only • [Card83]
rules) the perceptual, motor and cognitive acts required to sequential tasks. The original • [Eberts97]
complete the tasks. The methods describe the version of GOMS is referred to as • [Hochstein02]
procedures expected to complete the tasks. The CMN-GOMS, which takes the • [FAA HFW]
selection rules predict which method will be selected name after its creators Stuart • Wikipedia
by the operator in completing the task in a given Card, Thomas P. Moran and
environment. Allen Newell who first described
GOMS in their 1983 book The
Psychology of Human Computer
Interaction. See also Apex, CAT,
CPM-GOMS, CTA, KLM-
GOMS, NGOMSL.
302. Graceful Degradation T Ds 1978 Aim is to maintain the more critical system functions Useful for systems with no fail- 6 computer x x • [Bishop90]
? available despite failures, by dropping the less critical safe state. Sometimes referred to • [EN 50128]
functions. as Fault Tolerance. • [Rakowsky]
• Wikipedia
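Keeping the more critical functions alive by shedding the less critical ones can be sketched as priority-ordered admission against remaining capacity; the function names, loads and capacities are invented for the example:

```python
FUNCTIONS = [  # (name, load), most critical first
    ("flight control", 4),
    ("navigation", 3),
    ("datalink", 2),
    ("entertainment", 2),
]

def active_functions(capacity):
    # Admit functions in criticality order until the remaining
    # capacity is exhausted; anything not admitted is shed.
    active, used = [], 0
    for name, load in FUNCTIONS:
        if used + load <= capacity:
            active.append(name)
            used += load
    return active

print(active_functions(11))  # full capacity: everything runs
print(active_functions(9))   # after a partial failure: entertainment shed
```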
Graphic Mission Profile See Mission Profile
GRMS See RMA (Rate Monotonic
(Generalised Rate Scheduling)
Monotonic Scheduling)
303. GSN T R 1997 GSN shows how goals are broken into sub-goals, and Tools available. Developed by 8 avionics x x x x x • [Kelly, 1998]
(Goal Structuring eventually supported by evidence (solutions) whilst Tim Kelly and John McDermid defence • [Pygott&al99]
Notation) making clear the strategies adopted, the rationale for (University of York). rail • [Wilson&al96]
the approach (assumptions, justifications) and the
context in which goals are stated. GSN explicitly
represents the individual elements of a safety
argument (requirements, claims, evidence and context)
and the relationships that exist between these elements
(i.e. how individual requirements are supported by
specific claims, how claims are supported by evidence
and the assumed context that is defined for the
argument).
304. HACCP T R 1960 HACCP aims at identifying, evaluating and Developed by NASA in the 3 5 6 health x • [McGonicle]
(Hazard Analysis and controlling safety hazards in a food process, at the 1960's to help prevent food space • Wikipedia
Critical Control Points) earliest possible point in the food chain. It is used to poisoning in astronauts. A critical
develop and maintain a system, which minimises the control point is defined as any
risk of contaminants. It identifies who is to be point or procedure in a specific
protected, from what, and how. Risks are identified food system where loss of control
and a corrective or preventative risk management may result in an unacceptable
option is selected and implemented to control the risk health risk. Whereas a control
within the limits of acceptable risk standards. Steps point is a point where loss of
are: 1. Identify hazards; 2. Determine the critical control may result in failure to
control points; 3. Determine the critical limits for each meet (non-critical) quality
control point; 4. Monitor the critical limits; 5. Identify specifications.
corrective action procedures (corrective action Food safety risk can be divided
requests or CARs); 6. Establish records and control into the following three
sheets; 7. Verify the HACCP plan categories: Microbiological
Risks, Chemical Risks, and
Physical Risks.
Hardware/Software See HSIA (Hardware/Software
Safety Analysis Interaction Analysis)
Hart & Bortolussi See Rating Scales
Rating Scale
Hart & Hauser Rating See Rating Scales
Scale
305. HATLEY T D 1987 The Hatley notation uses visual notations for 2 computer x • [Williams91]
modelling systems. Belongs to a class of graphical
languages that may be called “embedded behaviour
pattern” languages because it embeds a mechanism for
describing patterns of behaviour within a flow
diagram notation. Behaviour patterns describe
different qualitative behaviours or modes, together
with the events that cause changes in mode, for the
entity being modelled. The flow notation models the
movement of information through the system together
with processes that use or change this information.
Combining these two modelling capabilities makes it
possible to model control of processes. A process may,
for example, be turned on or off when a change in
mode occurs.
Haworth-Newman See Rating Scales
Avionics Display
Readability Scale
306. Hazard Analysis G R Includes generic and specialty techniques to identify Multi-use technique to identify 3 many x x x • [FAA00]
hazards. Generally, it is a formal or informal study, hazards within any system, sub- • [ΣΣ93, ΣΣ97]
evaluation, or analysis to identify hazards. system, operation, task or • Wikipedia
procedure.
307. Hazard Coverage Based T R 1998 Safety modelling that checks after each modelling 4 ATM x x x x x • [Everdij&Blom&Bak
Modelling iteration if and how all identified hazards have been ker02]
modelled. The following modelling iteration will
focus on the main hazards that have not been modelled
yet. The last iteration ends with an assessment of the
effect of non-coverage of the remaining hazards.
308. Hazard Indices T Dh 1995 Hazard indices measure loss potential due to fire, Originally developed primarily 3 chemical x • [Leveson95]
or explosion, and chemical reactivity hazards in the for insurance purposes and to aid
older process industries. Can be useful in general hazard in the selection of fire protection
identification, in assessing hazard level for certain methods.
well-understood hazards, in the selection of hazard
reduction design features for the hazards reflected in
the index, and in auditing an existing plant.
309. Hazard Risk Assessment G Aim is to perform a system hazard risk assessment to See also Hazard Analysis 3 many x • [FAA00]
identify and prioritise those safety critical computer • [NASA-GB-1740.13-
software components that warrant further analysis 96]
beyond the architectural design level. • [Rakowsky]
310. HAZid T H 1993 Modification of HAZOP especially to be used for 3 ATC x • [MUFTIS3.2-I]
(Hazard Identification) or identification of human failures. It has an additional • Wikipedia
older first column with some guidewords to lead the
keywords.
311. HAZOP T M 1974 Group review using structured brainstorming using Began with chemical industry in 3 6 ATM x x x • [Kirwan-sages]
(Hazard and Operability keywords such as NONE, REVERSE, LESS, LATER the 1960s. Analysis covers all chemical • [KirwanAinsworth92]
study) THAN, PART OF, MORE. Aim is to discover stages of project life cycle. In rail • [Kirwan98-1]
potential hazards, operability problems and potential practice, the name HAZOP is computer • [Leveson95]
deviations from intended operation conditions. Also sometimes (ab)used for any nuclear • [MUFTIS3.2-I]
establishes likelihood and consequence of event. “brainstorming with experts to fill • [Reese&Leveson97]
Hazardous events on the system should be identified a table with hazards and their • [ΣΣ93, ΣΣ97]
with other technique. effects”.
• [Storey96]
Many variations or extensions of
• [CAA-RMC93-1]
HAZOP have been developed,
see e.g. AEMA, EOCA, FPTN, • [CAA-RMC93-2]
HAZid, Human HAZOP, HzM, • [Foot94]
MHD, PHEA, PHECA, SHARD • [Kennedy&Kirwan98
(or CHAZOP), SCHAZOP, ]
SUSI, WSA. • [Kletz74]
• [Villemeur91-1]
• Wikipedia
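The guideword mechanics can be sketched by crossing guidewords with process parameters to generate candidate deviations for the review team to discuss; the guidewords follow the list above, while the process parameters are invented for illustration:

```python
GUIDEWORDS = ["NONE", "REVERSE", "LESS", "MORE", "LATER THAN", "PART OF"]
PARAMETERS = ["flow", "pressure", "temperature"]

# Each guideword/parameter pair is a prompted deviation from the design
# intent (e.g. "NONE flow" = no flow at all, "REVERSE flow" = flow in
# the wrong direction); the team then judges causes and consequences.
deviations = [f"{gw} {param}" for param in PARAMETERS for gw in GUIDEWORDS]
for d in deviations[:6]:
    print(d)
```

The value of HAZOP lies in the structured group discussion each prompt triggers, not in the mechanical enumeration itself.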
312. HBN M 2002 HBN is an extension of BBN and consists of two 4 5 aviation x x x x x • [FlachGyftodimos,
(Hierarchical Bayesian parts. The structural part contains the variables of the 2002]
Network) network and describes the ‘part-of relationships’ and • [GyftodimosFlach,
the probabilistic dependencies between them. The 2002]
part-of relationships in a structural part may be • [Kardes, 2005]
illustrated either as nested nodes or as a tree hierarchy.
The second part of a HBN, the probabilistic part,
contains the conditional probability tables that
quantify the links introduced at the structural part.
313. HCA I M 1991 Design and development concept. Can be used to 7 nuclear x • [Kirwan&al97]
(Human Centred study whether explicit information on the actions of • [Kirwan_HCA]
Automation) the plant automation system improves operator • [Skjerve HCA]
performance when handling plant disturbances caused
by malfunctions in the automation system.
314. HCR T H 1984 Method for determining probabilities for human errors Developed in nuclear industry. 5 nuclear x • [Humphreys88]
(Human Cognitive after trouble has occurred in the time window Not considered as very accurate. • [Kirwan94]
Reliability model) considered. Probability of erroneous action is • [MUFTIS3.2-I]
considered to be a function of a normalised time • Wikipedia
period, which represents the ratio between the total
available time and the time required to perform the
correct action. Different time-reliability curves are
drawn for skill-based, rule-based and knowledge-
based performance.
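The shape of an HCR time-reliability curve can be sketched as a Weibull function of normalised time, with separate coefficients per performance type; the coefficient values below are illustrative placeholders, not the published calibration.

```python
import math

# Sketch of an HCR-style time-reliability curve: the probability of
# non-response is modelled as a Weibull function of normalised time
# t / T_half, with different coefficients per performance type.
COEFFS = {  # (gamma, eta, beta) -- illustrative values only
    "skill":     (0.7, 0.407, 1.2),
    "rule":      (0.6, 0.601, 0.9),
    "knowledge": (0.5, 0.791, 0.8),
}

def p_non_response(t, t_half, kind):
    """Probability of failing to act correctly within time t."""
    gamma, eta, beta = COEFFS[kind]
    x = (t / t_half - gamma) / eta
    if x <= 0:
        return 1.0  # not enough time has passed for a response
    return math.exp(-x ** beta)

# More available time gives a lower probability of non-response.
print(p_non_response(60, 30, "skill") > p_non_response(120, 30, "skill"))
```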
315. HEA G H Method to evaluate the human interface and error Human Error Analysis is 3 5 many x x x • [FAA AC431]
(Human Error Analysis) potential within the human/system and to determine
human-error- related hazards. Many techniques can be human/machine interface. • [HEA practice]
applied in this human factors evaluation. Contributory • [HEA-theory]
hazards are the result of unsafe acts such as errors in • [ΣΣ93, ΣΣ97]
design, procedures, and tasks. This analysis is used to
identify the systems and the procedures of a process
where the probability of human error is of concern.
The concept is to define and organise the data
collection effort such that it accounts for all the
information that is directly or indirectly related to an
identified or suspected problem area. This analysis
recognises that there are, for practical purposes, two
parallel paradigms operating simultaneously in any
human/machine interactive system: one comprising
the human performance and the other, the machine
performance. The focus of this method is to isolate
and identify, in an operational context, human
performance errors that contribute to output anomalies
and to provide information that will help quantify their
consequences.
316. HEART T H 1985 Quantifies human errors in operator tasks. Considers Popular technique. 5 nuclear x • [Humphreys88]
(Human Error particular ergonomic and other task and environmental See also CARA, NARA, NE- chemical • [Kennedy]
Assessment and factors that can negatively affect performance. The HEART. defence • [Kirwan94]
Reduction Technique) extent to which each factor independently affects • [MUFTIS3.2-I]
performance is quantified, and the human error • [Williams88]
probability is then calculated as a function of the • [CAA-RMC93-1]
product of those factors identified for a particular task.
• [CAA-RMC93-2]
• [Foot94]
• [Kirwan&Kennedy&
Hamblen]
• [Kirwan96-I]
• [Kirwan&al97-II]
• [Kirwan97-III]
• [FAA HFW]
• [GAIN ATM, 2003]
• Wikipedia
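The HEART quantification described above can be sketched as follows; the nominal HEP and the error-producing-condition (EPC) multipliers and assessed proportions of affect are illustrative placeholders, not values from Williams' tables.

```python
# Sketch of the HEART calculation: a generic-task nominal human error
# probability (HEP) is multiplied, for each error-producing condition
# (EPC), by ((EPC multiplier - 1) * assessed proportion of affect + 1).

def heart_hep(nominal_hep, epcs):
    """epcs: list of (multiplier, assessed_proportion_of_affect) pairs."""
    hep = nominal_hep
    for multiplier, proportion in epcs:
        hep *= (multiplier - 1.0) * proportion + 1.0
    return min(hep, 1.0)  # a probability cannot exceed 1

# Example: nominal HEP 0.003, two EPCs (e.g. time shortage, distraction).
hep = heart_hep(0.003, [(11, 0.4), (4, 0.5)])
print(round(hep, 5))  # 0.003 * 5.0 * 2.5 = 0.0375
```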
317. HECA T H 1999 HECA aims to identify the potentially critical Based on FMECA. 3 4 5 x • [Yu et al, 1999]
(Human Error Criticality problems caused by human error in the human • [Das et al, 2000]
Analysis) operation system. It performs task analysis on the
basis of operation procedure, analyzes the human error
probability (HEP) for each human operation step, and
assesses its error effects on the whole system. The
results of the analysis show the interrelationship
between critical human tasks, critical human error
modes, and human reliability information of the
human operation system.
318. HEDAD D 1992 HEDAD-M (Maintainer) provides a source of data to Developed by FAA. 5 defence x x • [FAA HFW]
(Human Engineering or evaluate the extent to which equipment having an • [HEDADM]
Design Approach older interface with maintainers meets human performance • [HEDADO]
Document) requirements and human engineering criteria.
HEDAD-O (Operator) provides a source of data to
evaluate the extent to which equipment having an
interface with operators meets human performance
requirements and human engineering criteria.
319. HEDGE T H 1983 HEDGE is a comprehensive T&E (Test and Developed by Carlow Associates. 6 defence x • [FAA HFW]
(Human factors or Evaluation) procedural manual that can be used as a • [MIL-HDBK]
Engineering Data Guide older T&E method. It provides the HE practitioner with
for Evaluation) explanations of methods and sample checklists for
evaluating system design and performance. The
purpose of the information in HEDGE is to expand
test capabilities in considering the human element. It
will provide a strategy for viewing an item which is
undergoing testing from the standpoint of the soldier
who must ultimately operate, maintain, or otherwise
utilise it.
320. HEIST T H 1994 HEIST can be used to identify external error modes by HEIST was developed by Barry 3 x • [Kirwan94]
(Human Error In using tables that contain various error prompt Kirwan as a component of • [Stanton et al, 2006]
Systems Tool) questions. There are eight tables in total, under the HERA.
headings of Activation/Detection; Observation/Data
collection; Identification of system state;
Interpretation; Evaluation; Goal selection/Task
definition; Procedure selection and Procedure
execution. The analyst applies each table to each task
step from an HTA and determines whether any errors
are credible. For each credible error, the analyst then
records the system cause or psychological error
mechanism and error reduction guidelines (which are
all provided in the HEIST tables) and also the error
consequence.
321. HEMECA T H 1989 A FMECA-type approach to Human Error Analysis. It 2 3 5 ergonomi x • [Kirwan98-1]
(Human Error Mode, uses a HTA (Hierarchical Task Analysis) followed by cs
Effect and Criticality error identification and error reduction. The PSF
Analysis) (Performance Shaping Factors) used by the analyst are
primarily man-machine interface related, e.g.
workplace layout, information presentation, etc.
Typically, an FMEA approach identifies many errors,
primarily through detailed consideration of these PSF
in the context of the system design, in relation to the
capabilities and limitations of the operator, based on
Ergonomics knowledge. Only those errors that are
considered to be probable within the lifetime of the
plant are considered further.
322. HERA I H 2000 HERA-JANUS is a method of human error HERA is TRACEr for European 3 8 ATM x • [Isaac et al, 2003]
or identification developed by Eurocontrol for the use. JANUS is named for the • [Isaac&al99]
HERA-JANUS retrospective diagnosis during ATM system Roman two-headed god of gates • [Isaac&Pounds01]
(Human Error in ATM development. It places the air traffic incident in its and doorways. HERA was provides pros and
Technique) ATM context by identifying the ATC behaviour, the renamed HERA-JANUS cons compared to
equipment used and the ATC function being following harmonisation HFACS
performed. It identifies the root causes of human activities with the FAA. • [Kirwan98-2]
errors in aviation accidents/ incidents and associated See also HEIST. • [Shorrock01]
contextual factors by selecting appropriate ‘error • [FAA HFW]
types’ from the literature, and shaping their usage • [GAIN ATM, 2003]
within a conceptual framework. This conceptual
framework includes factors to describe the error, such
as error modes and mechanisms and factors to
describe the context, e.g. when did the error occur,
who was involved, where did it occur, what tasks were
being performed?
323. HESC T H 2000 HESC describes the contractor’s intended use of May be used by the procuring 2 defence x • [FAA HFW]
(Human Engineering or mock-ups and simulators in support of human activity to assist and assess
Simulation Concept) older engineering analysis, design support, and test and simulation approaches when there
evaluation. It contains the format and content is a need to resolve potential
preparation instructions for the HESC resulting from human performance problems,
applicable tasks delineated in the statement of work. particularly where government
facilities, models, data or
participants are required.
324. HESRA T H about HESRA is a human error analysis approach that is Developed by HCR LLC (Human 3 5 ATC x • [HCR-HESRA, 2005]
(Human Error and 2003 based on a FMEA. While FMEA focuses on Centric Research). In 2005 medical • [Hewitt, 2006]
Safety Risk Analysis) component failures, HESRA focuses on tasks, steps,
and the associated human errors that can occur for needs of the FAA, for analysis of
each task and step. Errors are rated, using ordinal risk of human error in ATC
scales, in terms of likelihood of occurrence, severity of maintenance tasks.
the consequences of the error, and the likelihood of
detecting and mitigating the error. These ratings are
used to calculate a Hazard Index (HI) and a Risk
Priority Number (RPN).
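The rating step above can be sketched with the conventional FMEA-style products; the exact HESRA scales and formulas are not given in the source, so the Hazard Index as likelihood times severity, and the RPN as the three-way product, are assumptions for illustration.

```python
# Illustrative HESRA-style rating of a task step, assuming conventional
# FMEA products: a Hazard Index (HI) from likelihood and severity, and
# a Risk Priority Number (RPN) that also includes detectability.

def rate_step(likelihood, severity, detectability):
    """Ordinal ratings, e.g. 1 (best) .. 5 (worst)."""
    hazard_index = likelihood * severity
    rpn = likelihood * severity * detectability
    return hazard_index, rpn

hi, rpn = rate_step(likelihood=4, severity=5, detectability=2)
print(hi, rpn)  # 20 40
```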
325. HET T H 2003 HET is designed specifically as a diagnostic tool for HET can be applied to each 3 6 aviation x • [Li et al, 2009]
(Human Error the identification of design-induced error on the flight bottom level task step in a HTA. • [Stanton et al, 2006]
Template) deck. It is a checklist style approach to error prediction The HET error taxonomy consists
that comes in the form of an error proforma containing of 12 basic error modes that were
twelve error modes. The technique requires the analyst selected based upon a study of
to indicate which of the HET error modes are credible actual pilot error incidence and
(if any) for each task step, based upon their existing error modes used in
judgement. For each credible error the analyst 1) contemporary HEI methods.
provides a description of the error; 2) Determines the Has been benchmarked against
outcome or consequence associated with the error; and SHERPA, HAZOP, and HEIST,
3) Estimates the likelihood of the error (low, medium and was found to outperform
or high) and the criticality of the error (low, medium these when comparing predicted
or high). If the error is given a high rating for both errors to actual errors reported
likelihood and criticality, the aspect of the interface during an approach and landing
involved in the task step is then rated as a ‘fail’, task in a modern, highly
meaning that it is not suitable for certification. automated commercial aircraft.
326. Heuristic Evaluation T H 1994 Usability heuristic evaluation is an inspection method Developed by Jakob Nielsen. 6 computer x • [HIFA Data]
for finding the usability problems in a human- Heuristic evaluation is the most • Wikipedia
computer interface design so that they can be attended popular of the usability methods,
to as part of an iterative design process. Heuristic particularly as it is quick, easy
evaluation involves having a small set of evaluators and cheap.
examine the interface and judge its compliance with See also CELLO method.
recognised usability principles (the “heuristics”).
327. HF PFMEA T R 2002 HF PFMEA provides a systematic means for assessing Alternative name is Relex Human 3 5 6 x • [FAA HFW]
(Human Factors Process or human errors in any process. It is based on Process Factors Risk Analysis.
Failure Mode and older Failure Mode and Effects Analysis (PFMEA). The
Effects Analysis) analysis includes six steps: 1) Breakdown the process
into discrete steps 2) Determine potential human
errors 3) Identify positive and negative contributing
factors 4) Define barriers and controls 5) Assess the
error 6) Employ risk reduction strategies
328. HFACS I H 1997 Human factors taxonomy. HFACS examines instances It is based on James Reason's 3 8 aviation x x • [Isaac&Pounds01]
(Human Factors or of human error as part of a complex productive system Swiss cheese model of human navy provides pros and
Analysis and older that includes management and organisational error in complex systems. cons compared to
Classification System) vulnerabilities. HFACS distinguishes between the Originally developed for the US HERA
"active failures" of unsafe acts, and "latent failures" of navy for investigation of military • [FAA HFW]
preconditions for unsafe acts, unsafe supervision, and aviation incidents. Is currently • [Shappell00]
organisational influences. being used by FAA to investigate • [Wiegman00]
civil aviation incidents. • [GAIN AFSA, 2003]
• [GAIN ATM, 2003]
• Wikipedia
329. HFAM T H 1993 HFAM is comprised of 20 groups of factors that are Management-level factors fall 3 5 x • [Pennycook&Embrey,
(Human Factors subdivided into three broad categories: 1) management into various categories, including 1993]
Analysis Methodology) level factors; 2) operational level generic factors; 3) 1) those that can be specifically
operational level job specific factors. HFAM first linked to operational-level
invokes a screening process to identify the major areas factors; 2) those that are
vulnerable to human error; the generic and appropriate indicators of the quality of safety
job-specific factors are then applied to these areas. culture and therefore can affect
The problems that are identified ultimately reflect the potential for both errors and
failures at the management control level. violations; 3) those that reflect
Corresponding management-level factors would then communication of information
be evaluated to identify the nature of the management- throughout the organisation, incl
based error (latent errors). the capability for learning lessons
from operational experience
based on various forms of
feedback channels.
330. HF-Assessment Method T H 2003 HF-Assessment Method can be used for systematically Was developed by HFS (Human 2 5 chemical x • [HFS, 2003]
(Human Factors reviewing both the process of how Human Factors Factors Solutions) for the PSA to
Assessment Method) have been integrated into the design and operation of allow them to assess how well
control rooms and for evaluating the results of this operating companies comply with
process. The method can be used under the the Health, Safety and
development of new control rooms, modifications, Environment (HSE) Regulations.
upgrades or evaluation of existing control rooms. It The tool is for use by the
consists of seven revision checklists: One checklist of Norwegian Petroleum Directorate
Questions and references that cover minimum (NPD) and the petroleum
requirements to documentation; One checklist of industry.
Questions and references that cover minimum
requirements to all phases; and Five checklists of
Questions and references that cover minimum
requirements to each phase.
331. HHA T R 1988 The method is used to identify health hazards and The technique is applicable to all 3 chemical x x • [FAA00]
(Health Hazard or risks associated with any system, sub-system, systems which transport, handle, nuclear • [FAA tools]
Assessment) older operation, task or procedure. The method evaluates transfer, use, or dispose of defence • [ΣΣ93, ΣΣ97]
routine, planned, or unplanned use and releases of hazardous materials or physical • Wikipedia
hazardous materials or physical agents. agents.
High-Fidelity See Prototyping
Prototyping
332. HITLINE I R 1994 Incorporates operator errors of commission in Tool available. 4 5 nuclear x • [Macwan&Mosley94]
(Human Interaction probabilistic assessments. It is based on a cognitive • [MUFTIS3.2-I]
Timeline) model for operator errors of omission and
commission. The result of the methodology is similar
to a human event tree, with as initiating event an error
of commission. The generic events that determine the
branch splittings are called performance influencing
factors. The quantification part is performed using
mapping tables.
HLMSC See MSC (Message Sequence
(High Level Message Chart).
Sequence Chart)
333. HMEA T Dh 1997 Method of establishing and comparing potential 5 aircraft x • [FAA00]
(Hazard Mode Effects or effects of hazards with applicable design criteria. • [ΣΣ93, ΣΣ97]
Analysis) older Introductory technique.
334. HOL T D 1991 Formal Method. Refers to a particular logic notation Software requirements 2 computer x • [EN 50128]
(Higher Order Logic) or and its machine support system. The logic notation is specification phase and design & • [Melham&Norrish01]
older mostly taken from Church’s Simple Theory of Types. development phase. • [Rakowsky]
Higher order logic proofs are sequences of function • Wikipedia
calls. HOL consists of 1) two theories, called ‘min’
and ‘bool’; 2) eight primitive inference rules, and 3)
three rules of definition.
335. HOS T H 1989 HOS is a computer simulation for modelling the Originally conceived in 1970 by 4 navy x • [FAA HFW]
(Human Operator effects of human performance on system performance. Robert Wherry Jr. (Navy at Point • [HOS user’s guide,
Simulator) Can be used to estimate effects of human performance Magu), but has undergone a 1989]
on a system before development/ modification. series of major upgrades. Version
IV became available in 1989.
336. How-How Diagram T M 1973 A How-How Diagram is a Tree Diagram where each Also referred to as Relevance 4 x • [IE, How-How]
child is determined by asking 'how' the parent can be Tree. • [Futures Group,
achieved. It is thus useful for creating solutions to See also Why-Why diagram. 1994]
problems. Steps: 1) Place the solution to the left side • [Switalski, 2003]
of a paper; 2) Identify the initial steps needed to
implement the solution and write them in the
appropriate blanks to the right of the solution; 3)
Consider each step individually, breaking it down into
its detailed constituent stages by repeatedly asking
how it might be achieved; 4) The process continues
until each step has been drawn out to its logical
limit; 5) Examine the complete diagram for recurring
elements, which tend to indicate the most crucial
stages in the process of implementation.
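The steps above can be sketched as a simple tree walk, where each child node answers 'how' its parent can be achieved; the example solution and its sub-steps are hypothetical.

```python
# A How-How Diagram as a nested dict: each child answers 'how?' for
# its parent. The solution and steps below are invented for illustration.

how_how = {
    "Reduce runway incursions": {
        "Improve signage": {
            "Survey current signs": {},
            "Install lighted signs": {},
        },
        "Train drivers": {
            "Develop course": {},
            "Schedule refreshers": {},
        },
    },
}

def render(tree, depth=0, lines=None):
    """Flatten the tree into indented lines, depth-first."""
    lines = [] if lines is None else lines
    for node, children in tree.items():
        lines.append("  " * depth + node)
        render(children, depth + 1, lines)
    return lines

for line in render(how_how):
    print(line)
```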
337. HPED D 1997 Database of events related to human performance that 3 nuclear x • [NUREG CR6753]
(Human Performance or can be used to identify safety significant events in
Events Database) older which human performance was a major contributor to
risk.
338. HPIP I H 1994 HPIP aims at investigation of events that involve HPIP was developed for the US 2 3 4 5 nuclear x • [FAA HFW]
(Human Performance human performance issues at nuclear facilities. It is a Nuclear Regulatory Commission
Investigation Process) suite of six tools: 1) Events and Causal Factors and the safety management
Charting: - Helps to plan an accident Investigation. 2) factors in the Management
SORTM - A guide to HPIP Modules used to assist Oversight & Risk Tree (MORT)
investigation planning and fact collection. 3) Barrier of the US Department of Energy.
Analysis – To identify human performance difficulties
for root cause analysis 4) HPIP Modules - Identifies
important trends or programmatic system weaknesses.
5) Change Analysis – Allows understanding of the
event and ensures complete investigation and accuracy
of perceptions. 6) Critical Human Actions Profile
(CHAP) - Similar to change analysis, CHAP provides
an understanding of the event and ensures complete
investigation and accuracy of perceptions
339. HPLV T H 1990 HPLVs are used as dependency ‘bounding Relation with Fault Trees. JHEDI 5 nuclear x • [Kirwan94]
(Human Performance probabilities’ for human error outsets. They represent applies HPLV to fault trees.
Limiting Values) a quantitative statement of the analyst’s uncertainty as See also Bias and Uncertainty
to whether all significant human error events have assessment. See also Uncertainty
been adequately modelled in the fault tree. Special Analysis.
attention to (in)dependence of human errors. It is
important to note that HPLVs are not HEPs (Human
Error Probabilities); they can only be used to limit
already modelled HEPs once direct dependence has
been considered.
340. HPRA G H Consists of an analysis of the factors that determine Among published HPRA 4 5 ATM x • [MIL-HDBK]
(Human Performance how reliably a person will perform within a system or methods are THERP, REHMS-D, nuclear
Reliability Analysis) process. General analytical methods include SLIM-MAUD, MAPPS. transport
probability compounding, simulation, stochastic biomedica
methods, expert judgement methods, and design l
synthesis methods. defence
341. HRA G H 1952 The purpose of the Human Reliability Analysis is to The analysis is appropriate where 3 5 many x • [FAA00]
(Human Reliability assess factors that may impact human reliability in the reliable human performance is • [Pyy, 2000]
Analysis) operation of the system. necessary for the success of the • [NEA98]
human-machine systems. • [ΣΣ93, ΣΣ97]
• Wikipedia
342. HRAET T H 1983 Tool used for THERP. Is a simpler form of event tree, Can also be used for maintenance 4 5 nuclear x • [KirwanAinsworth92]
(Human Reliability usually with diagonal line representing success, and errors. • [Kirwan&Kennedy&
Analysis Event Tree) individual branches leading diagonally off the success Hamblen]
diagonal representing failure at each point in the task • [MUFTIS3.2-I]
step.
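The success-diagonal structure described above implies a simple combination rule: with no recovery modelled, the task fails if any step branches off the diagonal. A minimal sketch, with illustrative HEPs:

```python
# HRAET-style calculation: each task step branches off the success
# diagonal with its human error probability (HEP); the task fails if
# any step fails (no recovery paths modelled). HEPs are illustrative.

def hraet_failure_probability(step_heps):
    p_success = 1.0
    for hep in step_heps:
        p_success *= (1.0 - hep)   # stay on the success diagonal
    return 1.0 - p_success

p_fail = hraet_failure_probability([0.003, 0.01, 0.001])
print(round(p_fail, 6))  # ~0.013957
```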
343. HRMS T H 1990 The HRMS is a fully computerized system dealing Apparently not in current use or 8 nuclear x • [DiBenedetto02]
(Human Reliability with all aspects of the process. It is a quantification else used rarely. JHEDI is a • [Kirwan94]
Management System) module based on actual data, which is completed by derivative of HRMS and provides • [Kirwan98-1]
the author’s own judgments on the data extrapolation a faster screening technique. • [Seignette02]
to the new scenario/tasks. A PSF (Performance • Wikipedia
Shaping Factors)-based sensitivity analysis can be
carried out in order to provide error-reduction
techniques, thus reducing the error likelihood.
344. HSIA T Dh 1991 The objective of HSIA is to systematically examine HSIA is obligatory on ESA 3 6 computer x x • [Hoegen97]
(Hardware/Software Ds or the hardware/ software interface of a design to ensure (European Space Agency) space • [Parker&al91]
Interaction Analysis) older that hardware failure modes are being taken into programmes and is performed for • [Rakowsky]
account in the software requirements. Further, it is to all functions interfacing the • [SW, 2004]
ensure that the hardware characteristics of the design spacecraft and / or other units.
will not cause the software to over-stress the The HSIA is generally initiated
hardware, or adversely change failure severity when once the development of the
hardware failures occur. HSIA is conducted through hardware is already finished and
checklists, according to which an answer shall be the development of the software
produced for each identified failure case. Checklists is not started (or it is at the very
are specific to each analysis and have to take into beginning of the process).
account the specific requirements of the system under See also Interface Testing. See
analysis. The analysis findings are resolved by also Interface Analysis, See also
changing the hardware and/or software requirements, LISA.
or by seeking ESA approval for the retention of the
existing design.
345. HSMP M 1990 Combines deterministic stochastic evolution with Used in e.g. TOPAZ. Numerical 4 ATM x x x • [Blom90]
(Hybrid-State Markov or switching of mode processes. The Hybrid Markov evaluation requires elaborated • [MUFTIS3.2-I]
Processes) older state consists of two components, an n-dimensional mathematical techniques such as
real-valued component, and a discrete valued Monte Carlo simulation.
component. The HSMP is represented as a solution of
a stochastic differential or difference equation on a
hybrid state space, driven by Brownian motion and
point processes. The evolution of the probability
density on the hybrid state space is the solution of a
partial integro-differential equation.
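As the remark notes, numerical evaluation typically uses Monte Carlo simulation. A toy sketch of one sample path, with an Euler-Maruyama step for the continuous component and a point-process mode switch for the discrete one; all dynamics and rates are invented for illustration.

```python
import math
import random

# Toy Monte Carlo simulation of a hybrid-state Markov process: a
# continuous state x driven by Brownian motion, whose drift depends on
# a discrete mode that switches as a point process.

def simulate(T=10.0, dt=0.01, seed=0):
    rng = random.Random(seed)
    x, mode = 0.0, 0            # continuous and discrete components
    drift = {0: 1.0, 1: -1.0}   # mode-dependent drift
    rate = 0.5                  # mode-switch intensity (per unit time)
    sigma = 0.2                 # diffusion coefficient
    for _ in range(int(T / dt)):
        if rng.random() < rate * dt:       # point-process mode switch
            mode = 1 - mode
        # Euler-Maruyama step for dx = drift dt + sigma dW
        x += drift[mode] * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
    return x, mode

x_end, mode_end = simulate()
print(mode_end in (0, 1))
```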
346. HSYS T H 1990 HSYS provides a systematic process for analyzing HSYS was developed at the 4 offshore x • [Harbour&Hill, 1990]
(Human System Human-System interactions in complex operational Idaho National Engineering • [FAA HFW]
Interactions) settings. It focuses on system interactions from the Laboratory (INEL).
human's perspective and is built around a linear model Similar to MORT.
(based on Fault Tree principles) of human
performance, termed the Input-Action model.
According to the model, all human actions involve, to
varying degrees, five sequential steps: Input Detection,
Input Understanding, Action Selection, Action
Planning, and Action Execution. These five steps form
branches of the hierarchical tree and have aided in
both prospective and retrospective analysis. Based on
the Input-Action model, a series of flow charts
supported by detailed “topical modules,” have been
developed to analyze each of the five main
components in depth.
347. HTA T H 1971 HTA is a method of task analysis that describes tasks 2 ATC x • [KirwanAinsworth92]
(Hierarchical Task in terms of operations that people do to satisfy goals nuclear • [Kirwan94]
Analysis) and the conditions under which the operations are chemical • [Stanton&Wilson00]
performed. The focus is on the actions of the user with • [Shepherd01]
the product. This top down decomposition method • [Kirwan&al97]
looks at how a task is split into subtasks and the order
in which the subtasks are performed. The task is
described in terms of a hierarchy of plans of action.
348. HTLA T H 1987 Investigates workload and crew co-ordination, focuses on See also VTLA. See also 3 4 nuclear x x • [Kirwan&Kennedy&
(Horizontal Timeline or task sequencing and overall timing. Is constructed Timeline Analysis. offshore Hamblen]
Analysis) older from the information in the VTLA (Vertical Timeline • [Kirwan94]
Analysis) to determine the likely time required to • [Task Time]
complete the task. Usually a graphical format is used,
with sub-tasks on the y-axis and time proceeding on
the x-axis. The HTLA shows firstly whether the tasks
will be achieved in time, and also where certain tasks
will be critical, and where bottlenecks can occur. It
also highlights where tasks must occur in parallel,
identifying crucial areas of co-ordination and
teamwork.
349. HTRR T R 1985 Method of documenting and tracking hazards and HTRR applies mainly to 8 aviation x x • [FAA00]
(Hazard Tracking and verifying their controls after the hazards have been hardware and software-related • [FAA tools]
Risk Resolution) identified by analysis or incident. The purpose is to hazards. However, it should be • [FAA SSMP]
ensure a closed loop process of managing safety possible to extend the method to • [MIL-STD 882B]
hazards and risks. Each program must implement a also include human and • [NEC02]
Hazard Tracking System (HTS) to accomplish HTRR. procedures related hazards, by
feeding these hazards from
suitable hazard identification
techniques.
350. Human Error Data D 1995 Aim is to collect data on human error, in order to An example of a Human Error 5 nuclear x • [Kirwan&Basra&Taylor.doc]
Collection support credibility and validation of human reliability Data Collection initiative is
analysis and quantification techniques. CORE-DATA. See also CARA. • [Kirwan96-I]
• [Kirwan&al97-II]
• [Kirwan97-III]
• [Kirwan&Basra&Taylor.ppt]
• [Kirwan&Kennedy&Hamblen]
351. Human Error Recovery G 1997 Way of modelling that acknowledges that pilots 4 aviation x • [Amalberti&Wioland9
typically introduce and correct errors prior to those ATM 7]
errors becoming critical. The error correction • [Leiden et al, 2001]
frequency is decreasing under stress. The Stages in
error recovery are: Error, Detection, Identification,
Correction, Resumption.
352. Human Factors Analysis G H Human Factors Analysis represents an entire Human Factors Analysis is 3 5 6 many x x x • [FAA AC431]
discipline that considers the human engineering appropriate for all situations where • [FAA00]
aspects of design. There are many methods and the human interfaces with the • [ΣΣ93, ΣΣ97]
techniques to formally and informally consider the system and human-related
human engineering interface of the system. There are hazards and risks are present. The
specialty considerations such as ergonomics, bio- human is considered a main sub-
mechanics, anthropometrics. The Human Factors system.
concept is the allocation of functions, tasks, and
resources among humans and machines. The most
effective application of the human factors perspective
presupposes an active involvement in all phases of
system development from design to training, operation
and, ultimately, the most overlooked element,
disposal. Its focus ranges from overall system
considerations (including operational management) to
the interaction of a single individual at the lowest
operational level. However, it is most commonly
applied and implemented, from a systems engineering
perspective, to the system being designed and as part
of the SHA.
Human Factors G H The Human Factors Assessment is a process that is 2 3 4 5 6 aviation x • [FAA HFW]
Assessments in integrated with other processes and provides essential • [FAA HFED]
Investment Analysis components to the products of the Investment
Analysis (IA). Three of these human factors
components are: a) the human-system performance
contribution to program benefits, b) an assessment of
the human-system performance risks, and c) the
estimated costs associated with mitigating human
factors risks and with conducting the engineering
program support. The human factors components
related to benefits, risks, and costs are integrated with
other program components in the IA products and
documentation.
354. Human Factors Case T H 2002 A Human Factors Case is a framework for human Developed by Eurocontrol. 2 3 5 6 ATM x • [HFC]
factors integration, similar to a Safety Case for Safety • [Barbarino01]
Management. The approach has been developed to • [Barbarino02]
provide a comprehensive and integrated approach that
the human factors aspects are taken into account in
order to ensure that the system can safely deliver
desired performance. Human Factors issues are
classified according to six categories: 1. Working
Environment; 2. Organisation and Staffing; 3.
Training and Development; 4. Procedures, Roles and
Responsibilities; 5. Teams and Communication; 6.
Human and System. Subsequently, an Action Plan is
made to address these issues.
355. Human Factors in the I H This tool provides information about Human Factors Developed by FAA. 2 6 ATC x • [FAA HFW]
Design and Evaluation related issues that should be raised and addressed
of Air Traffic Control during system design and system evaluation. The tool
Systems consists of 2 parts; 1. A handbook describes how
different Human Factors areas apply to (ATC) Air
Traffic Control. This should help the HF practitioner
identify relevant HF issues for the system design and
evaluation process. 2. An application package allows
the construction of checklists to support the system
selection and evaluation process.
356. Human HAZOP T R 1988 Extension of the HAZOP technique to the field of 3 4 6 chemical x x • [Cagno&Acron&Man
or procedures performed by humans. More nuclear cini01]
Human Error HAZOP comprehensive error identification, including the • [KirwanAinsworth92]
(Human (Error) Hazard understanding of the causes of error, in order to • [Kirwan94]
and Operability study) achieve more robust error reduction.
357. Hybrid Automata M 1993 A Hybrid Automaton is a mathematical model for See also Finite State Machines. 4 nuclear x • [Alur93]
describing systems in which computational processes Timed Hybrid Automata also chemical • [Lygeros&Pappas&Sa
interact with physical processes. The behavior of a include the notion of time. stry98]
hybrid automaton consists of discrete state transitions • [Schuppen98]
and continuous evolution. The latter are usually • [Sipser97]
represented by differential equations. • [Tomlin&Lygeros&Sa
stry98]
• [Weinberg&Lynch&D
elisle96]
• Wikipedia
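The interplay of continuous evolution and discrete transitions can be sketched with a classic textbook example, a bouncing ball. This is an illustrative sketch only; the parameter values and the Euler integration are assumptions, not taken from the database entry.

```python
# A hybrid automaton combines continuous evolution (here free fall,
# dv/dt = -g, integrated with small Euler steps) with discrete state
# transitions (the bounce: a guard condition plus a reset map).

def simulate_bouncing_ball(h0=10.0, g=9.81, damping=0.8, dt=1e-3, t_end=5.0):
    """Simulate until t_end; return final height, velocity and bounce count."""
    h, v, t = h0, 0.0, 0.0
    bounces = 0
    while t < t_end:
        v -= g * dt                # continuous evolution: the differential
        h += v * dt                # equations of the current mode
        if h <= 0.0 and v < 0.0:   # guard: ball reaches the ground moving down
            h = 0.0                # reset map: the discrete transition fires,
            v = -damping * v       # reversing and damping the velocity
            bounces += 1
        t += dt
    return h, v, bounces

h, v, bounces = simulate_bouncing_ball()
print(bounces)  # the ball bounces twice within the first 5 s
```

A timed hybrid automaton, as mentioned in the Remarks column, would in addition carry explicit clock variables alongside `h` and `v`.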
Hyperion Intelligence New name of Brio Intelligence
358. HzM T R 2001 HzM maintains the HAZOP approach, but breaks Combined use with HEART, 3 4 6 chemical x x • [Cagno&Acron&Man
(Multilevel HAZOP) down the analysis in two directions: vertical THERP and Event trees possible. cini01]
(hierarchical breakdown of each procedure in an
ordered sequence of steps) and horizontal (each step is
further broken down into the three logical levels
operator, control system and plant/ process). This
allows recording how deviations may emerge in
different logical levels and establishing specific
preventive/ protective measures for each.
359. i* Model Analysis T D 1994 i* is an approach originally developed to model 2 ATM x x • [MaidenKamdarBush,
information systems composed of heterogeneous 2005]
actors with different, often-competing goals that • [Yu, 1994]
nonetheless depend on each other to undertake their • Wikipedia
tasks and achieve these goals. The i* approach
supports the development of 2 types of system model.
The first is the Strategic Dependency (SD) model,
which provides a network of dependency relationships
among actors. The opportunities available to these
actors can be explored by matching the depender (the
actor who “wants”) and the dependee (who “has the
ability”). The dependency link indicates that one actor
depends on another for something that is essential to
the former actor for attaining a goal. The second type
of i* model is the Strategic Rationale (SR) model,
which provides an intentional description of processes
in terms of process elements and the relationships or
rationales linking them. A process element is included
in the SR model only if it is considered important
enough to affect the achievement of some goal. The
SR model has four main types of nodes: goals, tasks,
resources and softgoals.
360. IA G Ds Prior to modification or enhancement being performed Software maintenance phase. 3 computer x • [EN 50128]
(Impact Analysis) on the software, an analysis is undertaken to identify Sometimes referred to as Change • [Rakowsky]
the impact of the modification or enhancement on the Impact Analysis. • Wikipedia
software and also identify the affected software
systems and modules. Two forms are traceability IA
and dependency IA. In traceability IA, links between
requirements, specifications, design elements, and
tests are captured, and these relationships can be
analysed to determine the scope of an initiating
change. In dependency IA, linkages between parts,
variables, logic, modules etc. are assessed to
determine the consequences of an initiating change.
Dependency IA occurs at a more detailed level than
traceability IA.
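Dependency IA can be sketched as a reachability question over a dependency graph: everything that (transitively) depends on the changed element is potentially affected. The module names and edges below are hypothetical, purely for illustration.

```python
# Illustrative sketch of dependency Impact Analysis: given "module A
# depends on module B" edges, find everything potentially affected by
# changing one module, by breadth-first search over the reversed edges.

from collections import deque

def impacted_by(change, depends_on):
    """Return the set of modules that transitively depend on `change`.

    `depends_on` maps each module to the modules it uses; a change to a
    used module may ripple back to every user."""
    users = {}                       # reverse edges: used -> its users
    for mod, uses in depends_on.items():
        for u in uses:
            users.setdefault(u, set()).add(mod)
    impacted, queue = set(), deque([change])
    while queue:
        m = queue.popleft()
        for user in users.get(m, ()):
            if user not in impacted:
                impacted.add(user)
                queue.append(user)
    return impacted

deps = {
    "gui": {"logic"},
    "logic": {"db", "utils"},
    "report": {"db"},
    "db": set(),
    "utils": set(),
}
# A change to "db" ripples to its direct and indirect users:
print(sorted(impacted_by("db", deps)))   # ['gui', 'logic', 'report']
```

Traceability IA works the same way, except that the edges link requirements, specifications, design elements and tests rather than code modules.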
361. IAEA TECDOC 727 I R 1993 Aim is to classify and prioritise risks due to major 3 5 chemical x x • [Babibec&Bernatik&
industrial accidents. The method is the tool to identify rail Pavelka99]
and categorise various hazardous activities and road
hazardous substances. Includes hazard analysis and
quantified risk assessment. The categorisation of the
effect classes is by means of maximum distance of
effect, and affected area.
IDA See STAHR (Socio-Technical
(Influence Diagram Assessment of Human
Approach) Reliability)
362. IDDA T R 1994 IDDA develops the sequences of events from the point IDDA is based on an enhanced 4 5 chemical x • [Demichela and
(Integrated Dynamic or of view both of the logical construction, and of the form of dynamic event tree. Piccinini, 2003]
Decision Analysis) older probabilistic coherence. The system description has • [Galvagni et al, 1994]
the form of a binary chart, where the real logical and • [Piccinini et al, 1996]
chronological sequence of the events is described; the
direction of each branch is characterised by a
probability of occurrence that can be modified by the
boundary conditions, and in particular by the same
development of the events themselves (probabilities
conditioned by the events dynamic). At the end of the
analysis, the full set of the possible alternatives in
which the system could evolve is obtained. These
alternatives represent a “partition” since they are
mutually exclusive; they are all and the sole possible
alternatives, thus allowing the method to guarantee the
completeness and the coherence of the analysis.
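The binary-chart idea — mutually exclusive sequences whose branch probabilities are conditioned by the preceding events — can be sketched as below. The events and all probability values are invented for illustration and have nothing to do with any actual IDDA study.

```python
# Illustrative sketch of a binary event chart: enumerate every branch
# of a sequence of binary events, letting each branch probability
# depend on the history so far ("probabilities conditioned by the
# events dynamics"). The resulting branches form a partition.

from itertools import product

def branch_prob(event, outcome, history):
    """Conditional probability of `outcome` for `event` given `history`."""
    base = {"pump_fails": 0.1, "valve_sticks": 0.05, "alarm_missed": 0.01}
    p = base[event]
    # dynamic conditioning: a stuck valve makes the alarm
    # more likely to be missed (hypothetical dependency)
    if event == "alarm_missed" and history.get("valve_sticks"):
        p = 0.2
    return p if outcome else 1.0 - p

def enumerate_sequences(events):
    """Yield (history, probability) for every branch of the chart."""
    for outcomes in product([True, False], repeat=len(events)):
        history, prob = {}, 1.0
        for event, outcome in zip(events, outcomes):
            prob *= branch_prob(event, outcome, history)
            history[event] = outcome
        yield history, prob

seqs = list(enumerate_sequences(["pump_fails", "valve_sticks", "alarm_missed"]))
total = sum(p for _, p in seqs)
print(len(seqs), round(total, 10))   # 8 mutually exclusive branches; total = 1.0
```

The branch probabilities summing to exactly one mirrors the “partition” property the entry describes: the alternatives are all, and the sole, possible evolutions of the system.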
363. IDEF I Dh 1981 Method of system modelling that enables Currently, IDEF comprises a 2 defence x • [HEAT overview]
(Integrated Computer- understanding of system functions and their suite of methods named IDEF0, • [MIL-HDBK]
Aided Manufacturing relationships. Using the decomposition methods of IDEF1, etc. • Wikipedia
Definition) structural analysis, the IDEF methodology defines a IDEF0 is the military equivalent
system in terms of its functions and its input, outputs, to SADT.
controls and mechanisms.
364. ILCI Loss Causation T M 1985 The ILCI model focuses on development of Developed by Mr. Frank E. Bird, 6 offshore x x x • [Kjellen, 2000]
Model performance standards and enforcement of standards Jr. of ILCI in the USA, and • [Storbakken, 2002]
(International Loss to ensure that employees are performing their work in based on an earlier model
Control Institute Loss a safe manner. The ILCI model is based on a sequence developed by H.W. Heinrich.
Causation Model) of events that leads up to an eventual loss. The events In 1991, DNV (Det Norske
in sequential order are, Lack of Control, Basic Causes, Veritas) bought ILCI rights.
Immediate Causes, Incident/Contact, and Loss. Each
event has a role in continuing the loss process to its
conclusion, the Loss.
365. IMAS T H 1986 Aims to model cognitive behaviour aspects of Developed by David E. Embrey. 2 chemical x • [Kirwan98-1]
(Influence Modelling performance, in terms of relationships between offshore
and Assessment System) knowledge items relating to symptoms of events (for
diagnostic reliability assessment).
366. Importance Sampling M R 1980 Technique to enable more frequent generation of rare Developed by Shanmugam and 5 many x x x x x • [MUFTIS3.2-I]
events in Monte Carlo Simulation. Rare events are Balaban. • [Shanmugam&Balaba
sampled more often, and this is later compensated for. Combine with simulations. n, 1980]
• Wikipedia
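A minimal sketch of the idea, assuming a standard-normal tail probability as the rare event. The threshold, the shift of the proposal distribution, and the sample size are all illustrative choices.

```python
# Importance sampling for a rare event: estimate P(X > 4) for
# X ~ N(0, 1). Plain Monte Carlo almost never sees the event; sampling
# from a proposal shifted into the tail sees it often, and each sample
# is reweighted by the likelihood ratio to compensate.

import math
import random

def rare_event_prob(threshold=4.0, shift=4.0, n=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # sample from the proposal N(shift, 1), where the event is common
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            # likelihood ratio  phi(x) / phi(x - shift)
            total += math.exp(-x * x / 2.0 + (x - shift) ** 2 / 2.0)
    return total / n

est = rare_event_prob()
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # true tail probability
print(est, exact)  # both around 3.2e-5
```

With plain Monte Carlo, 100 000 samples would on average contain about three occurrences of this event; the reweighted estimator uses roughly half the samples and is far less noisy.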
IMS See Data Mining
(Inductive Monitoring
System)
367. InCAS T R 2000 InCAS is a PC-based interactive simulator for Developed by Eurocontrol. 8 aviation x • [GAIN ATM, 2003]
(Interactive Collision or replaying and analysing Airborne Collision Avoidance ATM
Avoidance Simulator) older System (ACAS) during close encounters between
aircraft. It is designed for case-by-case incident
analysis by investigators. InCAS reads radar data and
provides an interface to examine these data in detail,
removing any anomalies that may be present. The
cleaned data are used to simulate trajectories for each
aircraft at one-second intervals and these data are fed
into a full version of the logic in the Traffic Alert and
Collision Avoidance System, TCAS II (versions
6.04A or 7).
368. In-Depth Accident G R Aim is to investigate a severe accident or near- 8 offshore x x • [Kjellen, 2000]
Investigation accident on-the-scene. Follows eight generic steps: 1)
Securing the scene; 2) Appointing an investigation
commission; 3) Introductory meeting, planning the
commission’s work; 4) Collection of information; 5)
Evaluations and organising of information; 6)
Preparing the commission’s report; 7) Follow-up
meeting; 8) Follow-up.
Influence Diagram See BBN (Bayesian Belief
Networks). See RIF diagram
(Risk Influencing Factor
Diagram). Also called Relevance
Diagram.
Information Flow Chart See DAD (Decision Action
Diagram)
369. Information Hiding, T D 1972 Aim is to increase the reliability and maintainability of Developed by David Parnas. 3 6 computer x • [Bishop90]
Information software. Encapsulation (also information hiding) Useful for all types of software • [EN 50128]
Encapsulation consists of separating the external aspects of an object, system. Closely related to object- • [Rakowsky]
which are accessible to other objects, from the internal oriented programming and • Wikipedia
implementation details of the object, which are hidden design. Tools available.
from other objects. If an internal state is encapsulated
it cannot be accessed directly, and its representation is
invisible from outside the object.
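A minimal sketch of encapsulation, using a hypothetical counter class: callers see only the external interface, so the internal representation can change without breaking them.

```python
# Information hiding: the internal state of the counter is not part of
# the interface; the only external aspects are increment() and the
# read-only `value` view, so the representation stays invisible.

class SafeCounter:
    def __init__(self):
        self._count = 0          # internal implementation detail

    def increment(self):         # external aspect: the only state change
        self._count += 1

    @property
    def value(self):             # read-only view of the hidden state
        return self._count

c = SafeCounter()
c.increment()
c.increment()
print(c.value)  # 2
```

Because `value` has no setter, attempts to assign to it raise an error: the encapsulated state cannot be accessed or modified directly from outside the object.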
370. Input-output (block) G 1974 The technique involves first selecting the system, task 2 chemical? x x x • [KirwanAinsworth92]
diagrams or step of interest and then identifying all the inputs • Wikipedia
and outputs which are necessary to complete this task
or step. The inputs are listed along an incoming arc to
a block representing the system, task or step of
interest, and the outputs are listed along an outgoing
arc.
371. Inspections and G 1976 Aim is to detect errors in some product of the Effective method of finding 7 computer x x • [Bishop90]
Walkthroughs or development process as soon and as economically as errors throughout the software • [Inspections]
older possible. An inspection is the most formal type of development process. In a • Wikipedia
group review. Roles (producer, moderator, reader and Cognitive Walkthrough, a group
reviewer, and recorder) are well defined, and the of evaluators step through tasks,
inspection process is prescribed and systematic. evaluating at each step how
During the meeting, participants use a checklist to difficult it is for the user to
review the product one portion at a time. Issues and identify and operate the system
defects are recorded, and a product disposition is element and how clearly the
determined. When the product needs rework, another system provides feedback to that
inspection might be needed to verify the changes. In a action. Cognitive walkthroughs
walkthrough, the producer describes the product and take into consideration the user's
asks for comments from the participants. These thought processes that contribute
gatherings generally serve to inform participants about to decision making, such as
the product rather than correct it. memory load and ability to
reason. See also Walk-Through
Task Analysis.
372. Integrated Process for I H 1995 This tool provides a step-by-step systematic approach Developed by Transportation 2 3 4 aviation x • [GAIN AFSA, 2003]
Investigating Human in the investigation of human factors. The tool can be Safety Board of Canada.
Factors applied to accidents or incidents. The process consists The process is an integration and
of seven steps 1) collect occurrence data, 2) determine adaptation of a number of human
occurrence sequence, 3) identify unsafe actions factors frameworks - SHEL
(decisions) and unsafe conditions, and then for each (Hawkins, 1987) and Reason's
unsafe act (decision) 4) identify the error type or (1990) Accident Causation and
adaptation, 5) identify the failure mode, 6) identify generic error-modelling system
behavioural antecedents, and 7) identify potential (GEMS) frameworks, as well as
safety problems. Rasmussen's Taxonomy of Error
(1987).
373. INTENT T H 1991 Is aimed at enabling the incorporation of decision- 3 4 5 nuclear x • [Kirwan98-1]
based errors into PSA, i.e. errors involving mistaken
intentions, which appears to include cognitive errors
and rule violations, as well as EOCs. Four categories
of error of intention are identified: action
consequence; crew response set; attitudes leading to
circumvention; and resource dependencies. A set of 20
errors of intention (and associated PSF (Performance
Shaping Factor)) are derived, and quantified using
seven experts. The methodological flow for INTENT
involves six stages: Compiling errors of intention,
quantifying errors of intention, determining human
error probabilities (HEP) upper and lower bounds,
determining PSF and associated weights, determining
composite PSF, and determining site-specific HEPs
for intention.
374. Interface Analysis, T Dh 1995 The analysis is used to identify hazards due to Interface Analysis is applicable to 3 space x x • [FAA00]
or interface incompatibilities. The methodology entails all systems. All interfaces should • [Leveson95]
older seeking those physical and functional incompatibilities be investigated; machine- • [Rakowsky]
between adjacent, interconnected, or interacting software, environment- human, • [ΣΣ93, ΣΣ97]
elements of a system, which, if allowed to persist environment-machine, human-
under all conditions of operation, would generate human, machine-machine, etc.
risks. See also Interface Testing. See
also HSIA. See also LISA. See
also SHEL.
375. Interface Surveys G 1977 Interface surveys are a group of information collection 2 nuclear x • [KirwanAinsworth92]
methods that can be used to gather information about
specific physical aspects of the person-machine
interface at which tasks are carried out. Examples of
these techniques are Control/Display Analysis;
Labelling Surveys; Coding Consistency Surveys;
Operator modifications surveys; Sightline surveys;
Environmental Surveys.
376. Interface Testing T Ds Interface testing is essentially focused testing. It needs Software design & development 7 computer x • [EN 50128]
reasonably precise knowledge of the interface phase. • [Jones&Bloomfield&
specification. It has three aspects: 1) Usability testing See also HSIA. See also Software Froome&Bishop01]
(to discover problems that users have); 2) Correctness Testing. See also Interface • [Rakowsky]
testing (to test whether the product does what it is Analysis. • [Rowe99]
supposed to do); 3) Portability testing (to make a
program run across platforms).
377. INTEROPS I H 1991 Cognitive performance simulation, which uses the The INTEROPS model allows the 2 4 5 nuclear x • [Kirwan98-1]
(INTEgrated Reactor SAINT simulation methodology. Has three following to be simulated: • [Kirwan95]
OPerator System) independent models: a nuclear power plant model; a forgetting, tunnel-vision;
network model of operator tasks; and a knowledge confirmation bias; and mistakes.
base, the operator model being distributed between the
latter two. The model is a single operator model. It
diagnoses by observance of plant parameters, and
subsequent hypothesis generation and testing of the
hypothesis. The approach uses Markovian modelling
to allow opportunistic monitoring of plant parameters.
The model also simulates various errors and PSF
(Performance Shaping Factor). Cognitive workload is
also modelled, in terms of the contemporary
information processing theory of concurrent task
management. Also, INTEROPS can utilise a confusion
matrix approach to make diagnostic choices.
378. Interview G 1950 Method of asking participants what they think about a 1 2 3 4 5 6 7 8 many x x x x x • [FAA HFW]
or topic in question, including follow-up questions for • [Hendrick97]
older clarification. Interviews yield rich qualitative data and • [KirwanAinsworth92]
can be performed over the telephone or in person. • [Salvendy97]
Interviews can be held in different ways: • Wikipedia
• Unstructured Interviews: Very open interviews.
Usually an outline, used as a guide, with a limited set
of broad questions to ask
• Semi-Structured Interviews: A more structured set of
open-ended questions is designed before the
interview is conducted. Follow up questions used for
clarification are included.
• Stratified Semistructured Interviews: Representative
subgroups of an organisation are identified and
randomly sampled individuals are interviewed for
each subgroup (e.g., employees, managers, directors,
etc.). This often increases the accuracy of the
intended conclusions because the sampling error is
being reduced. Interviewers have predetermined sets
of issues that will be asked about, but follow up
questions depend on the responses of the
interviewees.
• Structured Interviews: The interviewer has a
standard set of questions that are asked of all
candidates. Is more commonly used for the general
collection of task-based information. The structuring
offers the opportunity for more systematic collection
of data.
• Exit Interview: Open-ended questions asked after a
study, simulation, experiment, etc. Purpose is to
gather information about the perceived effectiveness
of the intervention. To make a global assessment
about the intervention, treatment, etc.
• Laddering is an interview and diagramming
technique that is used to draw out the connections
users make between different constructs of a task, the
consequences of those attributes, and the human
values linked with those consequences. The
researcher begins with a statement and then directs
the expert through the task hierarchy through a series
of questions.
• Teachback is a process of knowledge elicitation in
which the subject matter expert (SME) describes a
concept to the human factors researcher. The
researcher then explains the concept back to the
SME until the SME is satisfied that the researcher
has grasped the concept.
379. Invariant Assertions T Ds 1967 Aim is to detect whether a computer system has To be used on non-time critical 7 computer x • [Bishop90]
or deviated from its intended function. An invariant safety related systems. Related to • [Keidar&Khazan00]
older assertion of an automaton A is defined as any property formal specification methods and
that is true in every single reachable state of A. fault containment techniques.
Invariants are typically proved by induction on the See also Software Testing.
number of steps in an execution leading to the state in
question. While proving an inductive step, only
critical actions are considered, which affect the state
variables appearing in the invariant.
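Besides proof by induction, the same idea can be applied at run time: a property that must hold in every reachable state is checked at each step, so a deviation from the intended function is detected immediately. Euclid's algorithm and its invariant here are purely illustrative.

```python
# Invariant assertion: every state reachable by the loop must satisfy
# gcd(x, y) == gcd(a, b). If any transition violated this, the assert
# would flag the deviation from the intended function at once.

import math

def gcd_with_invariant(a, b):
    """gcd(a, b) for positive integers, checking the loop invariant
    at every reachable state."""
    x, y = a, b
    while y != 0:
        assert math.gcd(x, y) == math.gcd(a, b)   # the invariant
        x, y = y, x % y                            # state transition
    return x

print(gcd_with_invariant(48, 18))  # 6
```

In a proof, one would show the invariant holds initially and is preserved by the transition `x, y = y, x % y`; the runtime check above is the executable counterpart of that inductive step.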
380. IO D 2002 IO is a web-based information-sharing tool used to Uses Fault Trees. Developed at 8 space x x • [GAIN AFSA, 2003]
(Investigation support mishap investigations in real-time as well as NASA Ames Research Center in • [IO example]
Organizer) providing an analysis capability to optimise the 2002.
investigation activities, report generation, and generic
mishap investigation research. The tool functions as a
document/data/image repository, a project database,
and an “organisational memory” system. Investigation
Organizer permits relationships between data to be
explicitly identified and tracked using a cross-linkage
mechanism, which enables rapid access to interrelated
information. The tool supports multiple accident
models to help give investigators multiple perspectives
into an incident.
381. IPME I H 2000 IPME is a Unix-based integrated environment of Relation with Micro-SAINT. 2 4 5 navy x • [IPME web]
(Integrated Performance ? simulation and modelling tools for answering defence • [Winkler03]
Modelling Environment) questions about systems that rely on human • [FAA HFW]
performance to succeed. IPME provides: 1) A realistic
representation of humans in complex environments; 2)
Interoperability with other models and external
simulations; 3) Enhanced usability through a user
friendly graphical user interface. IPME provides i) a
full-featured discrete event simulation environment
built on the Micro Saint modelling software; ii) added
functionality to enhance the modelling of the human
component of the system; iii) a number of features that
make it easier to integrate IPME models with other
simulations on a real-time basis including TCP/IP
sockets and, in the near future, tools for developing
simulations that adhere to the Higher Level
Architecture (HLA) simulation protocols that are
becoming standard throughout the world.
382. IRP I R 2006 Intends to provide an integrated risk picture for the The current risk IRP [IRP, 2005] 2 3 4 5 8 ATM x x • [IRP, 2005]
(Integrated Risk Picture) current and an adopted (2015) ATM concept using accumulates overall risk from • [IRP, 2006]
fault tree analysis [IRP, 2006]. Starting point is a fault five kinds of accident risk
tree for the current situation (see next column of this categories (CFIT, Taxiway
table). The predictive IRP for the adopted 2015 ATM collision, Mid-air collision,
concept uses a 4-stage approach: Stage 1: Identify the Runway collision, Wake
future ATM situation, i.e. identify the ATM changes turbulence). For each category
that might be implemented in Europe in the period up to there is a fault tree that represents
2020. Use HAZOPs and ongoing safety assessments the specific causal factors. And
for the different future ATM components to identify below each fault tree there is an
which aspects will positively influence safety, and influence model which is used to
which aspects will negatively influence safety represent more diffuse factors
(hazards). Stage 2: Make a functional model including such as quality of safety
the main actors, the information flow between them, management, human
and interdependencies, for the future situation, using performance, etc. Quantification
SADT (Structured Analysis and Design Technique). is done by mixture of historical
Stage 3: Use this and the current risk fault tree to data and expert judgement.
evaluate the future situation. Stage 4: Refine and
quantify the future IRP by assessing correlated
modification factors for the values in the IRP fault tree
and the IRP influence model, thus modelling positive
interactions, negative interactions, and migration of
risk.
383. ISA D 1987 ISA is intended to facilitate the interactive collection ISA is claimed to contribute to 8 chemical x x x x • [HEAT overview]
(Intelligent Safety of data on accidents and near misses. It is a method of consistency in reporting accidents health • [Livingston, 2001]
Assistant) applying MORT methods for registration of incidents and incidents, and to the transport
at work in order to ensure consistent data collection development of causal
and the generation of diagnostic messages about hypotheses. Developed by
critical or failing safety management factors Koorneef and Hale (Delft
underlying a single accident, near miss or Safety University of Technology,
Management System (SMS) failure event. Netherlands).
Ishikawa diagram See Cause and Effect Diagram
384. ISIM I M 1998 ISIM aims to provide a standardised and ISIM was developed by the 8 aviation x x x x • [GAIN AFSA, 2003]
(Integrated Safety comprehensive methodology to support the Transportation Safety Board of transport • [Ayeko02]
Investigation investigation/analysis of multi-modal occurrences in Canada (TSB) in 1998. TSB
Methodology) the transportation sector. ISIM integrates the plans to automate parts of the
identification of safety deficiencies, with the analysis methodology and tie it more
and validation of those deficiencies. The prime closely to their TSB’s database
components of ISIM are: occurrence assessment; data systems.
collection; events and factors diagramming; use of the
TSB’s existing integrated investigation process to
uncover the underlying factors (safety deficiencies);
risk assessment; defence/barrier analysis; risk control
options; and safety communications.
385. ISRS T O 1978 Safety culture audit tool that uses performance Qualitative. ISRS first edition 8 many x • [ISRS brochure]
(International Safety indicators, which are organised into groups. The was developed in 1978 by Frank • [Kennedy&Kirwan98]
Rating System) scores on the sub-sets of safety performance areas are Bird, following his research into
weighted and then translated into an overall index the causation of 1.75 million
rating. ISRS consists of 15 key processes, embedded accidents.
in a continual improvement loop. Each process
contains subprocesses and questions. The questions
are scored and commented, through interviews with
process owners.
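The weighting-and-index idea can be sketched as below. The process names, weights, scores, and rating bands are entirely hypothetical and are not the actual ISRS key processes.

```python
# Illustrative sketch of an ISRS-style index: question scores are
# aggregated per process, process scores are weighted, and the
# weighted average is translated into an overall index rating.

def overall_index(process_scores, weights, bands):
    """`process_scores` and `weights` are dicts keyed by process name;
    `bands` is a list of (threshold, level), sorted descending."""
    total_w = sum(weights.values())
    score = sum(process_scores[p] * weights[p] for p in process_scores) / total_w
    for threshold, level in bands:
        if score >= threshold:
            return score, level
    return score, 0

scores = {"leadership": 80, "risk_evaluation": 60, "training": 70}
weights = {"leadership": 3, "risk_evaluation": 2, "training": 1}
bands = [(80, 8), (70, 6), (60, 4), (0, 1)]
score, level = overall_index(scores, weights, bands)
print(round(score, 1), level)  # 71.7 6
```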
JANUS See HERA-JANUS
386. JAR-25 I Dh 1974 JAR-25 provides Joint Aviation Requirements for First issue was published in 1974. 2 3 4 5 6 7 aircraft x • [JAR 25.1309]
(Joint Aviation large (turbine-powered) airplanes. Its Advisory In 1983, the first aircraft was • [Klompstra&Everdij9
Requirements Advisory Material Joint (AMJ 25.1309) includes a safety certified to JAR-25. AMJ 7]
Material Joint (AMJ) assessment methodology for large airplanes that runs 25.1309 is used as basis for
25.1309) in parallel with the large aeroplane lifecycle stages. several other safety assessment
The steps are: 1) Define the system and its interfaces, methodologies, e.g. ARP 4761,
and identify the functions which the system is to EATMP SAM. ARP 4754 is
perform. Determine whether or not the system is called up in AMJ 25.1309.
complex, similar to systems used on other aeroplanes, Following the establishment of
and conventional; 2) Identify and classify the the European Aviation Safety
significant failure conditions. This identification and Agency in September 2003 and
classification may be done by conducting a Functional the adoption of EASA
Hazard Assessment. The procedure depends on Implementing Rules (IR),
whether or not the system is complex; 3) Choose the Certification Specifications (CS),
means to be used to determine compliance with JAR- and Acceptable Means of
25.1309 b., c. and d. The depth and scope of the Compliance and Guidance
analysis depend on the types of function performed by Material (AMC), the JAA
the system, on the severity of systems failure Committee made the decision that
conditions, and on whether or not the system is in future the JAA would publish
complex; 4) Implement the design and produce the amendments to the airworthiness
data which are agreed with the Certification Authority JARs by incorporation of
as being acceptable to show compliance. reference to EASA IR, AMC and
CS.
USA version of JAR-25 is FAR-
25, which was issued before JAR-
25 as a derivation of older
regulations, and does not limit to
turbine-powered aeroplanes.
387. Jelinski-Moranda T Ds 1972 This is a model that estimates the number of Not considered very reliable, but 7 computer x • [Bishop90]
models remaining errors in a software product, which is can be used for general opinion
considered a measure for the minimum time to correct and for comparison of software
these bugs. modules. See also Musa Models.
See also Bug-counting model.
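A sketch of the model under its usual assumptions: after the i-th correction the failure rate is phi·(N − i), where N is the unknown initial number of faults. The inter-failure times below are made up, and the grid search over candidate N is an illustrative way to maximise the likelihood, not a prescribed procedure.

```python
# Jelinski-Moranda sketch: given inter-failure times t_1..t_n, the
# i-th time is exponential with rate phi*(N - i + 1). For each
# candidate N, the MLE of phi is closed-form; we keep the (N, phi)
# pair with the highest log-likelihood.

import math

def jm_estimate(times, n_max=200):
    """Return (N_hat, phi_hat) maximising the JM log-likelihood."""
    n = len(times)
    best = (None, None, -math.inf)
    for N in range(n, n_max + 1):
        denom = sum((N - i) * t for i, t in enumerate(times))
        phi = n / denom                       # closed-form MLE given N
        loglik = sum(math.log(phi * (N - i)) - phi * (N - i) * t
                     for i, t in enumerate(times))
        if loglik > best[2]:
            best = (N, phi, loglik)
    return best[0], best[1]

# hypothetical inter-failure times that lengthen as faults are removed
times = [3, 5, 7, 11, 16, 25, 40, 70, 120, 250]
N_hat, phi_hat = jm_estimate(times)
print(N_hat, N_hat - len(times))  # estimated remaining errors = N_hat - n
```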
JHA See Job Safety Analysis. See also
(Job Hazard Analysis) AET (Job Task Analysis)
388. JHEDI I H 1990 JHEDI is derived from the Human Reliability See also HRMS. 8 nuclear x • [HIFA Data]
(Justification of Human Management System (HRMS) and is a quick form of • [Kirwan94]
Error Data Information) human reliability analysis that requires little training • [Kirwan98-1]
to apply. The tool consists of a scenario description, • [PROMAI5]
task analysis, human error identification, a
quantification process, and performance shaping
factors and assumptions. JHEDI is a moderate,
flexible and auditable tool for use in human reliability
analysis. Some expert knowledge of the system under
scrutiny is required.
Job Process Chart See OSD (Operational Sequence
Diagram)
389. Job Safety Analysis T M about This technique is used to assess the various ways a Job Safety Analysis can be 2 3 6 x x • [FAA00]
1960 task may be performed so that the most efficient and applied to evaluate any job, task, • [ΣΣ93, ΣΣ97]
appropriate way to do a task is selected. Each job is human function, or operation. • [FAA HFW]
broken down into tasks, or steps, and hazards Also referred to as Job Hazard • Wikipedia
associated with each task or step are identified. Analysis (JHA).
Controls are then defined to decrease the risk
associated with the particular hazards.
Job Task Analysis See AET. See also Job Safety
Analysis.
390. Journaled Sessions T Ds 1993 A journaled session is a way to evaluate the usability See also Self-Reporting Logs 5 computer x • [Nielsen, 1993]
or of software remotely. Users are provided with a disk • [FAA HFW]
older or CD containing the prototype interface and are asked
to perform a variety of tasks. The software itself
captures information relative to the users’ actions
(keystrokes, mouseclicks). The software usually has
dialog boxes that allow the user to input comments as
well. Upon completion of the tasks the software is
then returned for subsequent evaluation.
391. JSD I D 1983 JSD is a system development method for developing Developed by Michael A. 2 computer x • [Bishop90]
(Jackson System information systems with a strong time dimension Jackson and John Cameron. • [EN 50128]
Development) from requirements through code. JSD simulates events Considered for real-time systems • [Rakowsky]
dynamically as they occur in the real world. Systems where concurrency can be • Wikipedia
developed using JSD are always real-time systems. allowed and where great
JSD is an object-based system of development, where formality is not called for.
the behaviour of objects is captured in an entity Similarities with MASCOT.
structure diagram. It consists of three main phases: the Tools available. Software
modelling phase; the network phase; and the requirements specification phase
implementation phase. JSD uses two types of and design & development phase.
diagrams to model a system, these are Entity Structure
Diagrams and Network Diagrams. When used to
describe the actions of a system or of an entity, JSD
Diagrams can provide a modelling viewpoint that has
elements of both functional and behavioural
viewpoints. JSD diagrams provide an abstract form of
sequencing description, for example much more
abstract than pseudocode.
392. KAOS T D 1990 KAOS is a goal-oriented software requirements Designed by the University of 2 6 x • [Dardenne, 1993]
(Knowledge Acquisition capturing approach which consists of a formal Oregon and the University of • Wikipedia
in autOmated framework based on temporal logic and AI (artificial Louvain (Belgium).
Specification) intelligence) refinement techniques where all terms Alternatively, KAOS stands for
such as goal and state are consistently and rigorously Keep All Objects Satisfied.
defined. The main emphasis of KAOS is on the formal See also i*.
proof that the requirements match the goals that were
defined for the envisioned system. KAOS defines a
goal taxonomy having 2 dimensions: Goal patterns
(Achieve, Cease, Maintain, Avoid, Optimize); Goal
categories. Goal categories form a hierarchy. At the
root of the hierarchy are system goals and private
goals. System goals have the following sub-categories:
Satisfaction goal, information goal, robustness goal,
consistency goal, safety and privacy goal.
Columns: Id | Method name | Format | Purpose | Year | Aim/Description | Remarks | Safety assessment stage (1-8) | Domains | Application (Hw, Sw, Hu, Pr, Or) | References
393. KLM (Keystroke Level Model), or KLM-GOMS (Keystroke-Level Model GOMS) [Format: T; Purpose: H; Year: 1983]
Aim/Description: KLM is an 11-step method that can be used to estimate the time it takes to complete simple data input tasks using a computer and mouse. It can be used to find more efficient or better ways to complete a task, by analyzing the steps required in the process and rearranging or eliminating unneeded steps. A calculation of the execution time for a task can be made by defining the operators that will be involved in a task, assigning time values to those operators, and summing up the times. Different systems can be compared based on this time difference. Uses: obtain time predictions to compare systems or predict improvements. Input: observable behaviour such as button pushes and mouse movements. Components: six operators: K – keystroke or mouse movement; P – pointing to a target; D – moving the mouse to draw line segments; H – moving hands from mouse to keyboard; M – mental preparation; R – system response time.
Remarks: KLM is a simplified version of GOMS in that it focuses on very low level tasks. It is usually applied in situations that require minimal amounts of work and interaction with a computer interface or software design. See also Apex, CAT, CPM-GOMS, CTA, GOMS, NGOMSL.
Safety assessment stage: 2 7. Application: x. References: [FAA HFW]; [Eberts97]; [Hochstein02]; [Card83]; Wikipedia.
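The "define operators, assign times, sum the times" calculation above can be sketched in a few lines. The operator times are the commonly cited estimates associated with [Card83]; both the values and the example task are illustrative assumptions, not part of the database entry:

```python
# Commonly cited KLM operator times in seconds (after [Card83]);
# exact values vary per source and user population (assumed here).
KLM_TIMES = {
    "K": 0.28,  # keystroke or mouse-button press
    "P": 1.10,  # pointing to a target with the mouse
    "H": 0.40,  # moving hands between mouse and keyboard
    "M": 1.35,  # mental preparation
}

def klm_execution_time(operators):
    """Sum the time values of a sequence of KLM operators."""
    return sum(KLM_TIMES[op] for op in operators)

# Hypothetical task: mentally prepare, point at a field, click,
# move hand to keyboard, type a 4-letter word.
task = ["M", "P", "K", "H"] + ["K"] * 4
print(round(klm_execution_time(task), 2))  # prints 4.25
```

Comparing two interfaces then reduces to comparing the summed times of their respective operator sequences.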
394. KTT (Kinetic Tree Theory) [Format: T; Purpose: R; Year: 1970]
Aim/Description: Mathematical technique used to quantify the top effect of fault trees, allowing for evaluation of instantaneous reliability or availability. Complete information is obtained from the existence probability, the failure rate, and the failure intensity of any failure (top, mode or primary) in a fault tree. When these three characteristics are determined, subsequent probabilistic information, both pointwise and cumulative, is obtained for all time for this failure. The addition and multiplication laws of probability are used to evaluate the system unavailability from the minimal cut sets of the system.
Remarks: Used for FTA.
Safety assessment stage: 5. Domains: aircraft, nuclear, offshore, wind turbine. Application: x. References: [MUFTIS3.2-I]; [Vesely70].
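The cut-set step mentioned above (multiplication law within a cut set, addition law across cut sets) can be illustrated numerically. The fault tree and basic-event probabilities below are hypothetical:

```python
from functools import reduce

def cut_set_unavailability(cut, q):
    """Multiplication law: a minimal cut set fails only when all of
    its basic events occur (events assumed independent)."""
    return reduce(lambda acc, event: acc * q[event], cut, 1.0)

def system_unavailability(cut_sets, q):
    """Addition law across cut sets, here in the standard form
    1 - prod(1 - Q_cut), which over-counts nothing when the cut
    sets share no basic events (as in the example below)."""
    prod = 1.0
    for cut in cut_sets:
        prod *= 1.0 - cut_set_unavailability(cut, q)
    return 1.0 - prod

# Hypothetical fault tree: TOP = (A and B) or C
q = {"A": 1e-2, "B": 2e-2, "C": 1e-4}
Q_top = system_unavailability([{"A", "B"}, {"C"}], q)
```

For cut sets that do share events, the same laws apply via inclusion-exclusion; the rare-event approximation (summing the cut-set probabilities) is the usual upper bound.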
395. Laser Safety Analysis [Format: T; Purpose: Dh; Year: 1960]
Aim/Description: This analysis enables the evaluation of the use of lasers from a safety view. The purpose is to provide a means to assess the hazards of non-ionising radiation. As such, its intent is also to identify associated hazards and the types of controls available and required for laser hazards. Lasers are usually labeled with a safety class number, which identifies how dangerous the laser is, ranging from ‘inherently safe’ to ‘can burn skin’.
Remarks: Laser = Light Amplification by Stimulated Emission of Radiation. Theoretic foundations for the laser were established by Albert Einstein in 1917. The term LASER was coined by Gordon Gould in 1958. The first functional laser was constructed in 1960 by Theodore H. Maiman. Laser Safety Analysis is appropriate for any laser operation, i.e. construction, experimentation, and testing.
Safety assessment stage: 3 6. Domains: medical, defence. Application: x. References: [FAA AC431]; [FAA00]; [ΣΣ93, ΣΣ97]; Wikipedia.
396. Library of Trusted, Verified Modules and Components [Format: D; Purpose: D]
Aim/Description: Well designed and structured PESs (Programmable Electronic Systems) are made up of a number of hardware and software components and modules which are clearly distinct and which interact with each other in clearly defined ways. Aim is to avoid the need for software modules and hardware component designs to be extensively revalidated or redesigned for each new application. Also to advantage designs which have not been formally or rigorously validated but for which considerable operational history is available.
Remarks: Software design & development phase.
Safety assessment stage: 8. Domains: computer. Application: x. References: [EN 50128]; [Rakowsky].
Likert Scale See Rating Scales
397. Link Analysis (1) [Format: T; Purpose: H; Year: 1959]
Aim/Description: Is used to identify relationships between an individual and some part of the system. A link between two parts of the system will occur when a person shifts his focus of attention, or physically moves, between two parts of the system. A link between components represents a relationship between those components. The relationship may be shown by the thickness of the link.
Remarks: Typical applications include equipment layout for offices and control rooms, and the layout of display and control systems.
Safety assessment stage: 2. Domains: nuclear, offshore. Application: x x. References: [KirwanAinsworth92]; [Kirwan94]; [HEAT overview]; [Luczak97]; [Wickens99].
398. Link Analysis (2) [Format: M]
Aim/Description: This is a collection of mathematical algorithms and visualisation techniques aimed at the identification and convenient visualisation of links between objects and their values.
Remarks: Tools available. Can be used in conjunction with Timeline Analysis to help determine travel times, etc.
Safety assessment stage: 2. Domains: defence. Application: x. References: [MIL-HDBK].
399. LISA (Low-level Interaction Safety Analysis) [Format: T; Purpose: R; Year: 1999 or older]
Aim/Description: LISA was developed to study the way in which an operating system manages system resources, both in normal operation and in the presence of hardware failures. Instead of analysing the system functionality, LISA focuses on the interactions between the software and the hardware on which it runs. A set of physical resources and timing events is identified, and a set of projected failure modes of these resources is considered. The aim of the analysis is to use a combination of inductive and deductive steps to produce arguments of acceptability demonstrating either that no plausible cause can be found for a projected failure, or that its consequences would always lead to an acceptable system state. Step 1: Agree principles for acceptability; Step 2: Assemble source material; Step 3: Analyse timing events; Step 4: Analyse physical resources.
Remarks: Developed by University of York. See also Interface testing. See also HSIA. See also Interface Analysis.
Safety assessment stage: 4 5. Domains: avionics. Application: x x. References: [Pumfrey, 1999].
400. Littlewood [Format: M; Purpose: Ds; Year: 1957]
Aim/Description: Mathematical model that tends to provide the current failure rate of a program, and hence the minimum time required to reach a certain reliability.
Remarks: Not considered very reliable, but can be used for general opinion and for comparison of software modules.
Safety assessment stage: 5. Domains: computer. Application: x. References: [Bishop90].
401. Littlewood-Verrall [Format: M; Purpose: Ds; Year: 1957]
Aim/Description: A Bayesian approach to software reliability measurement. Software reliability is viewed as a measure of strength of belief that a program will operate successfully. This contrasts with the classical view of reliability as the outcome of an experiment to determine the number of times a program would operate successfully out of, say, 100 executions. Almost all published models assume that failures occur randomly during the operation of the program. However, while most postulate simply that the value of the hazard rate is a function of the number of faults remaining, Littlewood and Verrall modelled it as a random variable. One of the parameters of the distribution of this random variable is assumed to vary with the number of failures experienced. The values of the parameters of each functional form that produce the best fit for that form are determined. Then the functional forms are compared (at the optimum values of the parameters) and the best fitting form is selected.
Remarks: Not considered very reliable, but can be used for general opinion and for comparison of software modules.
Safety assessment stage: 5. Domains: computer. Application: x. References: [Bishop90]; [Narkhede02].
LOMS (Line Operations Monitoring System) — See Flight Data Monitoring Analysis and Visualisation.
402. LOPA (Layer of Protection Analysis) [Format: T; Purpose: M; Year: 2001]
Aim/Description: A tabular representation of both the risk factors and the risk mitigating factors is used to determine a safety integrity level (SIL). LOPA starts by quantifying the consequences and likelihood of a hazardous event in the absence of any forms of protection or risk mitigation measures: the underlying process risk is defined. Potential risk reduction measures are then systematically analysed and their impact on the process risk quantified to determine a mitigated risk. The mitigated risk is compared with risk targets, which then determines a risk reduction factor to be provided. The risk reduction factor translates directly into a SIL. A detailed LOPA procedure is required to define categories for hazardous event consequences, and guideline risk reduction factors for typical protection layers. Calibration of the LOPA procedure is needed to ensure that defined risk acceptability criteria are met.
Remarks: Developed by the American Institute of Chemical Engineers Centre for Chemical Process Safety (CCPS) in response to the requirements of ISA S84.01 and formally published in 2001 under the title ‘Layer of Protection Analysis, Simplified Process Risk Assessment’. Reference [ACM, 2006] lists some advantages and disadvantages.
Safety assessment stage: 5 6. Domains: chemical. Application: x. References: [Gulland04]; [ACM, 2006].
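A minimal sketch of the LOPA arithmetic described above, assuming independent protection layers and the low-demand risk-reduction bands conventionally associated with IEC 61508; the frequencies, PFD values, target and band edges are illustrative assumptions:

```python
def lopa_required_sil(f_initiating, layer_pfds, f_target):
    """Mitigated event frequency after independent protection layers,
    plus the risk reduction factor (RRF) still to be provided and the
    SIL band it maps to (assumed low-demand bands per IEC 61508)."""
    f_mitigated = f_initiating
    for pfd in layer_pfds:      # each layer fails with probability PFD
        f_mitigated *= pfd
    rrf = max(f_mitigated / f_target, 1.0)
    if rrf <= 10:
        sil = 0                 # no dedicated safety function needed
    elif rrf <= 100:
        sil = 1                 # required PFD in [1e-2, 1e-1)
    elif rrf <= 1000:
        sil = 2                 # required PFD in [1e-3, 1e-2)
    elif rrf <= 10000:
        sil = 3
    else:
        sil = 4
    return f_mitigated, rrf, sil

# Hypothetical: event 0.5/yr, BPCS layer PFD 0.1, relief layer PFD 0.01,
# tolerable frequency 1e-5/yr.
f_mit, rrf, sil = lopa_required_sil(0.5, [0.1, 0.01], 1e-5)
```

Here the residual RRF of 50 falls in the SIL 1 band, i.e. the added safety instrumented function must provide a PFD of at most 1/50.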
403. LOS (Level of Safety) [Format: M; Purpose: R; Year: 2000]
Aim/Description: Assessment of a Level of Safety for a dedicated block of airspace, expressed as the probability to encounter aircraft conflicts. LOS is a tool to quantify the potential hazards for persons or goods involved in aviation. Traffic behaviour, traffic level and infrastructure layout form individual scenarios for which a LOS figure can be computed. Intended to support procedure design, to increase and direct the stakeholder’s situational awareness to bottlenecks, and to judge new concepts.
Remarks: Current investigations focus on the TMA (Terminal Manoeuvring Area).
Safety assessment stage: 4 5. Domains: ATM. Application: x x. References: [GfL web]; [GfL 2001]; [TUD05].
404. LOTOS (Language for Temporal Ordering Specification) [Format: I; Purpose: D; Year: 1987]
Aim/Description: Formal Method. A means for describing and reasoning about the behaviour of systems of concurrent, communicating processes. Is based on CCS (Calculus of Communicating Systems) with additional features from the related algebras CSP (Communicating Sequential Processes) and Circuit Analysis (CIRCAL). Describes the order in which events occur.
Remarks: Software requirements specification phase and design & development phase.
Safety assessment stage: 2. Domains: computer. Application: x. References: [EN 50128]; [Rakowsky]; Wikipedia.
Low-Fidelity Prototyping — See Prototyping.
405. MAIM (Merseyside Accident Information Model) [Format: T; Purpose: M; Year: 1989]
Aim/Description: MAIM is a method of recording information on accidents. It attempts to capture all relevant information in a structured form so that sets of similar accidents can be compared to reveal common causes. The concept is to identify the first unexpected event, perceived by the injured person, and to trace all subsequent events which lead to injury. Events are short sentences which can produce a brief report. MAIM records event verbs and objects in the environment. In addition, it records personal information and other components which may be relevant to the accidents. MAIM can be represented in a diagram.
Safety assessment stage: 8. Application: x x x. References: [Kjellen, 2000]; [Liverpool, 2004]; [MAIM web].
406. MANAGER (MANagement Assessment Guidelines in the Evaluation of Risk) [Format: I; Purpose: O; Year: 1990]
Aim/Description: Safety management assessment audit tool linked to a Quantitative Risk Assessment-type of approach. The tool consists of approximately 114 questions, divided into 12 areas such as Written procedures, Safety policy, Formal safety studies, Organisational factors, etc.
Remarks: MANAGER was the first technique to consider linking up ratings on its audit questions with PSA results.
Safety assessment stage: 8. Domains: nuclear. Application: x. References: [Kennedy&Kirwan98]; [Kirwan94].
Mapping Tool — See ZA or ZSA (Zonal (Safety) Analysis).
407. MAPPS (Maintenance Personnel Performance Simulations) [Format: I; Purpose: H; Year: 1984]
Aim/Description: Computer-based, stochastic, task-oriented model of human performance. It is a tool for analysing maintenance activities in nuclear power plants, including the influence from environmental, motivational, task and organisational variables. Its function is to simulate a number of human ‘components’ of the system, e.g. the maintenance mechanic, the instrument and control technician, together with any interactions (communications, instructions) between these people and the control-room operator.
Safety assessment stage: 4 5. Domains: nuclear. Application: x x x. References: [Kirwan94]; [MIL-HDBK]; [THEMES01].
408. Markov Chains, or Markov Modelling [Format: M; Year: 1906]
Aim/Description: Equal to SSG where the transitions to the next stage only depend on the present state. Only for this type of SSG, quantification is possible. Can be used to evaluate the reliability or safety or availability of a system.
Remarks: Named after Russian mathematician A.A. Markov (1856-1922). Useful for dependability evaluation of redundant hardware. A standard method in these cases. Combines with FMEA, FTA, CCD. Tools available.
Safety assessment stage: 4 5. Domains: many. Application: x x. References: [Bishop90]; [EN 50128]; [FT handbook02]; [MUFTIS3.2-I]; [NASA-GB-1740.13-96]; [Rakowsky]; [Sparkman92]; [Storey96]; Wikipedia.
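As a minimal illustration of Markov modelling for dependability, a two-state (up/down) discrete-time chain for a single repairable unit; the per-step failure and repair probabilities are assumed values, and the steady state can be checked against the analytic availability r/(f+r):

```python
def steady_state(p_fail, p_repair, steps=1000):
    """Two-state discrete-time Markov chain for a repairable unit.

    Iterates the state-probability vector (up, down); transitions
    depend only on the present state, per the Markov property."""
    up, down = 1.0, 0.0                      # start with the unit up
    for _ in range(steps):
        up, down = (up * (1 - p_fail) + down * p_repair,
                    up * p_fail + down * (1 - p_repair))
    return up, down

# Assumed: failure prob 0.01 and repair prob 0.10 per time step.
up, down = steady_state(p_fail=0.01, p_repair=0.10)
# Analytic steady-state availability: 0.10 / (0.01 + 0.10)
```

Redundant configurations are handled the same way with a larger state space (e.g. both-up / one-up / both-down) and a correspondingly larger transition matrix.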
409. Markov Latent Effects Tool [Format: T; Purpose: R; Year: 1999]
Aim/Description: The Markov Latent Effects Tool aims at the quantification of safety effects of organisational and operational factors that can be measured through “inspection” or surveillance. The tool uses a mathematical model for assessing the effects of organisational and operational factors on safety. For example, organisational system operation might depend on factors such as accident/incident statistics, maintenance personnel/operator competence and experience, scheduling pressures, and safety culture of the organisation. Because many of the potential metrics on such individual parameters could be difficult (and generally uncertain) to determine, the method includes guidance for this. Also, there may be ill-defined interrelations among the contributors, and this is also addressed through “dependence” metrics.
Remarks: The Markov Latent Effects Model is based on a concept wherein the causes for inadvertent operational actions are traced back through latent effects to the possible reasons undesirable events may have occurred. The Markov Latent Effects Model differs substantially from Markov processes, where events do not depend explicitly on past history, and Markov chains of arbitrary order, where dependence on past history is completely probabilistic.
Safety assessment stage: 5. Domains: aviation. Application: x x. References: [GAIN AFSA, 2003]; [Cooper01]; [FAA HFW].
410. MASCOT (Modular Approach to Software Construction, Operation and Test) [Format: T; Purpose: D; Year: 1975]
Aim/Description: A method for software design aimed at real-time embedded systems, from the Royal Signals and Research Establishment, UK. It is not a full method in the current sense of design methodology. It has a notation and a clear mapping between the design and physical components. MASCOT III copes better with large systems than did earlier versions, through better support for the use of sub-systems.
Remarks: MASCOT originated within the UK defence industry in the 1970s. The MASCOT III standard was published in its final form in 1987. Considered for real-time systems where concurrency has to and can be used. Related to JSD. Tools available. Software requirements specification phase and design & development phase.
Safety assessment stage: 2 6. Domains: defence, computer. Application: x. References: [Bishop90]; [EN 50128]; [MASCOT]; [Rakowsky]; Wikipedia.
411. Materials Compatibility Analysis [Format: T; Purpose: Dh; Year: 1988 or older]
Aim/Description: Materials Compatibility Analysis provides an assessment of materials utilised within a particular design. Any potential degradation that can occur due to material incompatibility is evaluated. System Safety is concerned with any physical degradation due to material incompatibility that can result in contributory hazards or failures that can cause mishaps to occur. Material compatibility is critical to the safe operation of a system and personnel safety. The result of a material misapplication can be catastrophic.
Remarks: Materials Compatibility Analysis is universally appropriate throughout most systems. Proper material compatibility analysis requires knowledge of the type, concentration and temperature of fluid(s) being handled and the valve body and seal material.
Safety assessment stage: 3 5. Domains: chemical. Application: x. References: [FAA AC431]; [FAA00]; [ΣΣ93, ΣΣ97].
412. Maximum Credible Accident / Worst Case [Format: T; Purpose: R; Year: 1972 or older]
Aim/Description: The technique is to determine the upper bounds on a potential environment without regard to the probability of occurrence of the particular potential accident.
Remarks: Similar to Scenario Analysis, this technique is used to conduct a System Hazard Analysis. The technique is universally appropriate.
Safety assessment stage: 5. Domains: aircraft, aviation. Application: x. References: [FAA00]; [ΣΣ93, ΣΣ97].
413. MCDET (Monte Carlo Dynamic Event Tree) [Format: M; Purpose: R; Year: 2006]
Aim/Description: MCDET couples DDET with Monte Carlo Simulation to investigate in a more efficient way (by acceleration of simulation) the whole tree of events.
Safety assessment stage: 4. Domains: nuclear. Application: x. References: [Kloos&Peschke, 2006]; [Hofer et al, 2001].
414. MDTA (Misdiagnosis Tree Analysis) [Format: T; Purpose: R; Year: 2005]
Aim/Description: The MDTA process starts with a given scenario defined in terms of an initiating event. To identify diagnosis failures, a misdiagnosis tree is compiled with the procedural decision criteria as headers and the final diagnoses as end states. In connection with each decision criterion presented in the header, the analyst is guided to consider three types of contributors to diagnosis failures. • Plant dynamics: mismatch between the values of the plant parameters and the decision criteria of the diagnostic rule of the emergency operating procedure due to dynamic characteristics. • Operator error: errors during information gathering or rule interpretation. • Instrumentation failure: problems in the information system.
Remarks: Developed at Korean Atomic Energy Research Institute.
Safety assessment stage: 3 4 5. Domains: nuclear. Application: x x. References: [Kim et al, 2005]; [Reer, 2008].
415. Measurement of Complexity [Format: G]
Aim/Description: As a goal, software complexity should be minimised to reduce the likelihood of errors. Complex software is also more likely to be unstable, or to suffer from unpredictable behaviour. Modularity is a useful technique to reduce complexity. Complexity can be measured via McCabe’s metrics and similar techniques.
Remarks: See also Avoidance of Complexity.
Safety assessment stage: 7. Domains: computer. Application: x. References: [FAA00]; [NASA-GB-1740.13-96]; [Rakowsky].
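McCabe's cyclomatic complexity, the best known of the metrics mentioned above, is v(G) = E − N + 2P for a control-flow graph with E edges, N nodes and P connected components. A sketch, with a small constructed graph (one if/else plus one loop) as the example:

```python
def cyclomatic_complexity(cfg, connected_components=1):
    """McCabe's metric v(G) = E - N + 2P for a control-flow graph
    given as a dict {node: [successor, ...]}."""
    n = len(cfg)
    e = sum(len(succs) for succs in cfg.values())
    return e - n + 2 * connected_components

# Hypothetical CFG of a function containing one if/else and one loop:
cfg = {
    "entry": ["if"],
    "if":    ["then", "else"],   # first decision point
    "then":  ["merge"],
    "else":  ["merge"],
    "merge": ["loop"],
    "loop":  ["body", "exit"],   # second decision point
    "body":  ["loop"],
    "exit":  [],
}
```

With N = 8 and E = 9 this gives v(G) = 3, matching the rule of thumb "number of decision points plus one".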
416. MEDA (Maintenance Error Decision Aid) [Format: I; Purpose: M; Year: 1995]
Aim/Description: MEDA is a widely used attempt to systematise evaluation of events, problems and potential problems by using a repeatable, structured evaluation program. It is a structured investigation process used to determine the factors that contribute to errors committed by maintenance technicians and inspectors. MEDA is also used to help develop corrective actions to avoid or reduce the likelihood of similar errors. Most of these corrective actions will be directed towards the airline maintenance system, not the individual technician or inspector. The MEDA process involves five basic steps: Event, Decision, Investigation, Prevention Strategies, and Feedback.
Remarks: MEDA was developed by Boeing as part of the Boeing Safety Management System (BSMS). Link to PEAT, REDA and CPIT.
Safety assessment stage: 8. Domains: aircraft. Application: x x. References: [Bongard01]; [Escobar01]; [HIFA Data]; [MEDA]; [FAA HFW].
417. Memorizing Executed Cases [Format: T; Purpose: Ds; Year: 1987 or older]
Aim/Description: Aim is to force the software to fail-safe if it executes an unlicensed path. During licensing, a record is made of all relevant details of each program execution. During normal operation each program execution is compared with the set of licensed executions. If it differs, a safety action is taken.
Remarks: Little performance data available. Related to testing and fail-safe design. Software architecture phase. See also Fail Safety. See also Vital Coded Processor. Also referred to as Execution Flow Check. Related to Watchdog Timers.
Safety assessment stage: 6. Domains: computer. Application: x. References: [Bishop90]; [EN 50128]; [Rakowsky].
418. MERMOS (Méthode d’Evaluation de la Réalisations des Missions Opérateur pour la Sureté) [Format: I; Purpose: H; Year: 1998]
Aim/Description: MERMOS is a HRA method that deals with important underlying concepts of HRAs. The basic theoretical object of the MERMOS method is what is termed Human Factor Missions, which refer to a set of macro-actions the crew has to carry out in order to maintain or restore safety functions. Four major steps are involved in the MERMOS method. 1) Identify the safety functions that are affected, the possible functional responses, the associated operation objectives, and determine whether specific means are to be used. 2) Break down the safety requirement corresponding to the HF mission. 3) Bridge the gap between theoretical concepts and real data by creating as many failure scenarios as possible. 4) Ensure the consistency of the results and integrate them into PSA event trees.
Remarks: Developed by Electricité de France, since early 1998. See also MONACOS.
Safety assessment stage: 4. Domains: electricity, nuclear. Application: x. References: [HRA Washington]; [Jeffcott&Johnson]; [Straeter&al99]; [THEMES01]; [Wiegman00].
419. MES (Multilinear Events Sequencing) [Format: I; Purpose: R; Year: 1975]
Aim/Description: MES is an integrated system of concepts and procedures to investigate a wide range of occurrences, before or after they happen. It treats incidents as processes, and produces descriptions of the actions and interactions required to produce observed process outcomes. The descriptions are developed as matrix-based event flow charts showing the coupling among the interactions, with links where sequential, if-then and necessary-and-sufficient logic requirements are satisfied. The investigations focus on behaviours of people and objects, demonstrating what they did to influence the course of events, and then defining candidate changes to reduce future risks.
Remarks: The first version of MES was developed in 1975 by Starline Software. See also STEP.
Safety assessment stage: 8. Application: x x x x. References: [GAIN AFSA, 2003]; [MES guide]; [Benner75]; [MES tech]; [FAA HFW].
420. MHD (Mechanical Handling Diagram) [Format: T; Purpose: R; Year: 1998 or older]
Aim/Description: Mechanical HAZOP.
Safety assessment stage: 3 6. Domains: chemical. Application: x. References: [Kennedy&Kirwan98].
Micro-SAINT (Micro-Systems Analysis by Integrated Networks of Tasks) — See SAINT (Systems Analysis of Integrated Networks).
421. MIDAS (Man-Machine Integrated Design and Analysis System) [Format: I; Purpose: H; Year: 1986]
Aim/Description: MIDAS is an integrated suite of software components to aid analysts in applying human factors principles and human performance models to the design of complex human systems; in particular, the conceptual phase of rotorcraft crewstation development and identification of crew training requirements. MIDAS focuses on visualisation, contains different models of workload and situation awareness within its structure, and contains an augmented programming language called the Operator Procedure Language (OPL) incorporated into its programming code.
Remarks: Developed by Jim Hartzell, Barry Smith and Kevin Corker in 1986, although the original software has been changed since. Under the name Air-MIDAS, it has also been augmented to cover air traffic controllers by the HAIL (Human Automation Integration Laboratory) at SJSU (San Jose State University) in a collaborative effort with NASA ARC (Ames Research Center).
Safety assessment stage: 1 2 4 5. Domains: aircraft, aviation, nuclear, space. Application: x x. References: [HAIL]; [GAIN ATM, 2003]; [FAA HFW].
422. Mission Analysis [Format: T; Purpose: D; Year: 1971 or older]
Aim/Description: Is used to define what tasks the total system (hardware, software, and lifeware) must perform. The mission or operational requirements are a composite of requirements starting at a general level and progressing to a specific level. Has two components:
• Mission Profile. Provides a graphic, 2D representation of a mission segment. Represents the events or situations that maintainers or operators could confront in a new system. Mission profiles are mostly applicable in the conceptual phase. Mission profiles are highly effective for gross analysis. Relative complexity is simple.
• Mission Scenario. Is a detailed narrative description of the sequence of actions and events associated with the execution of a particular mission; a description of each distinct event occurring during the mission. The events should be described from the human’s perspective as s/he interacts with the system. The scenarios should describe operator actions and system capabilities needed to complete the mission. The detail of the narrative will depend on its purpose. It is useful for describing essential system functions that might otherwise be overlooked, such as failure modes and emergency procedures.
Remarks: Two methods, Mission Profile and Mission Scenarios, are especially useful for mission analysis. Alternative name for Mission Profile is Graphic Mission Profile. Alternative name for Mission Scenario is Narrative Mission Description. The information from the mission scenario can be used for Functional Flow Diagrams (FFD), Decision/Action Diagrams (DAD), and Action/Information Requirements for the system.
Safety assessment stage: 2. Domains: defence, space. Application: x x x. References: [HEAT overview]; [MIL-HDBK]; [FAA HFW].
Mission Profile See Mission Analysis.
Mission Scenarios See Mission Analysis.
423. MLD (Master Logic Diagram) [Format: T; Purpose: R; Year: 1983]
Aim/Description: Deductive approach similar to fault tree. Four levels: the first level is the top event; the second level is formed by the losses of functions leading to this top event; the third level contains the system failures leading to the loss of functions; the fourth level contains the initiators.
Remarks: See also MPLD.
Safety assessment stage: 4. Domains: space, nuclear. Application: x. References: [Mauri, 2000]; [Statematelatos].
424. MMSA (Man-Machine System Analysis) [Format: T; Purpose: H; Year: 1983]
Aim/Description: The MMSA steps are: 1) Definition: analysis of different types of human actions; 2) Screening: identify the different types of human interactions that are significant to the operation and safety of the plant; 3) Qualitative analysis: detailed description of the important human interactions and definition of the key influences; 4) Representation: modelling of human interactions in logic structures; 5) Impact integration: exploration of the impact of significant human actions; 6) Quantification: assignment of probabilities of interactions; 7) Documentation: making the analysis traceable, understandable and reproducible.
Remarks: The MMSA steps can be arranged as a subset of the SHARP process.
Safety assessment stage: 2 3 5. Domains: nuclear. Application: x x. References: [Straeter01].
425. Modelling [Format: G]
Aim/Description: A model is anything used in any way to represent anything else. A few examples are computer models, mathematical models, scientific models, logical models. There are many forms of modelling techniques that are used in system engineering. Failures, events, flows, functions, energy forms, random variables, hardware configuration, accident sequences, operational tasks: all can be modelled.
Remarks: Modelling is appropriate for any system or system safety analysis. See also Computer Modelling and simulation. See also Performance Modelling.
Safety assessment stage: 4. Domains: many. Application: x x x x x. References: [FAA00]; [ΣΣ93, ΣΣ97]; Wikipedia.
426. MoFL (Modell der Fluglotsenleistungen; Model of air traffic controller performance) [Format: I; Purpose: H; Year: 1997]
Aim/Description: MoFL is a model of the cognitive performance of experienced air traffic controllers in en-route control. The model focuses on information acquisition and representation of the traffic situation. MoFL's architecture comprises five modules derived from previous research: data selection, anticipation, conflict resolution, updates, and control. The implementation of the model MoFL is based on a production system in the programming language ACT-R (Adaptive Control of Thought - Rational).
Remarks: See also ACT-R.
Safety assessment stage: 4. Domains: ATC. Application: x. References: [Leuchter&al97]; [Leuchter, 2009]; [Niessen&Eyferth01]; [Niessen&Leuchter&Eyferth98].
427. MONACOS [Format: I; Purpose: H; Year: 1999]
Aim/Description: MONACOS is a method of retrospective analysis of actual accidents and incidents. Based on MERMOS.
Safety assessment stage: 4. Domains: nuclear. Application: x. References: [HRA Washington].
428. Monte Carlo Simulation [Format: M; Year: 1777]
Aim/Description: A pattern of system responses to an initiating event is built up by repeated sampling. State transition times are generated by direct modelling of the behaviours of system components (including operators) and their interactions.
Remarks: The method has been used for centuries. The name stems from WW II.
Safety assessment stage: 4 5. Domains: many. Application: x x x x x. References: [EN 50128]; [MUFTIS3.2-I]; [Rakowsky]; [Sparkman92]; [GAIN ATM, 2003]; [GAIN AFSA, 2003]; Wikipedia.
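A minimal sketch of the repeated-sampling idea above: estimating the probability that a hypothetical 2-out-of-3 redundant system fails within its mission time, with component failure times sampled from an assumed exponential distribution (rate and mission time are illustrative):

```python
import random

def mc_failure_prob(lam, mission_time, n_units=3, k_fail=2,
                    trials=100_000, seed=42):
    """Monte Carlo estimate of the probability that at least k_fail
    of n_units identical components (exponential failure times with
    rate lam) fail within the mission time."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        n_failed = sum(rng.expovariate(lam) <= mission_time
                       for _ in range(n_units))
        if n_failed >= k_fail:
            failures += 1
    return failures / trials

# Assumed rate 1e-3 per hour, 1000-hour mission.
p_est = mc_failure_prob(lam=1e-3, mission_time=1000.0)
```

For this simple case the estimate can be checked against the closed form 3p²(1−p) + p³ with p = 1 − e^(−λT) ≈ 0.632; the value of the method lies in systems too complex for such closed forms.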
429. MORS (Mandatory Occurrence Reporting Scheme) [Format: D; Year: 1972]
Aim/Description: Primary purpose is to secure free and uninhibited reporting, and dissemination of the substance of the reports, where necessary, in the interest of flight safety. It covers operators, manufacturers, maintenance, repair and overhaul, air traffic control services, and aerodrome operators. Only certain kinds of incidents, namely those that are “endangering” or “potentially endangering”, are subject to mandatory reporting; others are not. Reporting of “day-to-day defects/incidents, etc.” is discouraged. These are left to the CAA’s Occurrence Reporting Scheme.
Remarks: MORS was established by the United Kingdom Civil Aviation Authority (CAA) following a fatal accident in 1972.
Safety assessment stage: 8. Domains: aviation, ATM, aircraft. Application: x x x. References: [GAIN ATM, 2003].
430. MORT (Management Oversight and Risk Tree Analysis) [Format: I; Purpose: R; Year: 1972]
Aim/Description: MORT technique is used to systematically analyse an accident in order to examine and determine detailed information about the process and accident contributors. To manage risks in an organisation, using a systemic approach, in order to increase reliability, assess risks, control losses and allocate resources effectively. Is a standard fault tree augmented by an analysis of managerial functions, human behaviour, and environmental factors.
Remarks: Originally developed in 1972 for the US nuclear industry. This is an accident investigation technique that can be applied to analyse any accident. Useful in project planning, functional specification of a target (sub)system, accident/incident analysis and safety programme evaluation. Tools available. See also SMORT, Barrier Analysis, ETBA, HPIP, HSYS, ISA, STEP.
Safety assessment stage: 8. Domains: nuclear. Application: x x x x. References: [Bishop90]; [FAA00]; [KirwanAinsworth92]; [Kirwan94]; [Leveson95]; [ΣΣ93, ΣΣ97]; [FAA HFW].
431. MPLD (Master Plan Logic Diagram) [Format: T; Purpose: R; Year: 1987]
Aim/Description: Outgrowth model of MLD (Master Logic Diagram), to represent all the physical interrelationships among various plant systems and subsystems in a simple logic diagram.
Safety assessment stage: 4. Application: x. References: [Mauri, 2000].
432. MSC (Message Sequence Chart) [Format: T; Purpose: D; Year: 1992]
Aim/Description: MSC is a graphical way of describing asynchronous communication between processes. A chart does not describe the total system behaviour, but is rather a single execution trace. For this reason an extension to MSCs, called High Level MSCs, has also been proposed; HLMSCs allow for the combination of traces into a hierarchical model. MSCs have been used extensively in telecommunication systems design, and in particular with the formal Specification and Description Language (SDL). They are used at various stages of system development including requirement and interface specification, simulation, validation, test case specification and documentation. HLMSCs have greatly increased the descriptive capabilities of MSCs as they allow for modular specifications.
Remarks: MSC specifications have found their way into many software engineering methodologies and CASE tools, in particular in the area of telecommunications and concurrent real-time systems. MSC specifications often represent early life-cycle requirements and high-level design specifications.
Safety assessment stage: 2. Domains: computer. Application: x. References: [MSC]; Wikipedia.
433. Multiple Agent Based Modelling [Format: G; Year: 2001]
Aim/Description: Way of modelling where agents are identified as entities which have situational awareness. After the identification of the agents of the operation, the modelling process zooms in and models the agents in more detail, after which the interconnections between agents are modelled.
Remarks: The agents can be modelled with techniques such as DCPN (Dynamically Coloured Petri Nets) or i*.
Safety assessment stage: 4. Domains: ATM. Application: x x x x. References: [Stroeve&al01]; [Blom&Stroeve04].
434. Multiple Greek Letters T R 1983 Is used to quantify common cause effects identified by 5 x • [Charpentier00]
method Zonal Analysis. It involves the possible influences of • [Mauri, 2000]
one component on the other components of the same • [MUFTIS3.2-I]
common cause group. Slight generalisation of Beta-
factor method when the number of components
involved is greater than two.
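As an illustration of the parameterisation behind entry 434, the Multiple Greek Letter split of a component's total failure rate can be sketched as follows. This is a minimal sketch, not taken from the cited references; the function name and example numbers are illustrative.

```python
from math import comb

def mgl_rates(total_rate, greeks):
    """Multiple Greek Letter method: split a component's total failure
    rate into rates for failures involving exactly k components of a
    common cause group of size m = len(greeks) + 1.

    greeks = (beta,) reduces to the Beta-factor method (m = 2);
    greeks = (beta, gamma) extends it to m = 3, and so on.
    """
    m = len(greeks) + 1
    # rho_1 = 1, rho_2 = beta, rho_3 = gamma, ..., rho_{m+1} = 0
    rho = [1.0] + list(greeks) + [0.0]
    rates = []
    for k in range(1, m + 1):
        shared = 1.0
        for i in range(k):          # product of rho_1 .. rho_k
            shared *= rho[i]
        # rate of one specific common cause failure of exactly k components
        rates.append(shared * (1.0 - rho[k]) * total_rate / comb(m - 1, k - 1))
    return rates

# Example: 3-component group, total rate 1e-3 per hour, beta = 0.1, gamma = 0.5
lam1, lam2, lam3 = mgl_rates(1e-3, (0.1, 0.5))
```

With only beta supplied the result coincides with the Beta-factor split, matching the remark above that MGL is a slight generalisation of the Beta-factor method.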
435. Multiple Resources T H 1992 The Multiple Resources Theory offers predictions of Proposed by C.D. Wickens 4 ATM x • [Wickens92]
patterns of interference between competing tasks
during periods of time-sharing. The theory has made
the global assumption that interference is minimised
when different resources are demanded. This
assumption has been empirically validated in
experiments. There are 4 dimensions to resources: (1)
Stages – Perceptual/central processing vs. response
selection/execution (2) Input modalities - Auditory vs.
visual (3) Processing codes - Spatial vs. verbal (4)
Responses - Vocal vs. manual
Id | Method name | Format | Purpose | Year | Aim/Description | Remarks | Safety assessment stage (1-8) | Domains | Application (Hw, Sw, Hu, Pr, Or) | References
436. Murphy Diagrams T H 1981
Aim/Description: Method starts from a description of an accident (or significant error sequence); an attempt is then made to identify all the individual sources of error which occurred, using a standard set of eight Murphy diagrams (event-tree-like diagrams) to describe these errors. These Murphy diagrams define, at a general level, all the likely errors associated with decision processes.
Remarks: Apparently not in current use, or else used rarely. The name is based on the axiom of Murphy's law (dated around 1952), which states that 'if anything can go wrong, it will'.
Safety assessment stage: 3; Domains: electricity; Application: x; References: [KirwanAinsworth92], [Kirwan94], [Kirwan98-1].
437. Musa models M Ds 1990 or older
Aim/Description: A mathematical model that estimates the number of remaining errors in a software product, as a measure for the minimum time needed to correct these bugs. It uses execution time instead of calendar time. Assumptions are that there are no failures at the beginning of testing, and that the failure rate decreases exponentially with the expected number of failures experienced.
Remarks: Also known as "Execution Time Model". Developed by John D. Musa. Not considered very reliable, but can be used for general opinion and for comparison of software modules. A variant is the Musa-Okumoto model, which is a logarithmic Poisson model. See also Bug-Counting Model. See also Jelinski-Moranda models.
Safety assessment stage: 7; Domains: computer; Application: x; References: [Bishop90], [Davis, 2007], [KruegerLai].
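The relation between execution time and expected failures in Musa's basic execution time model can be sketched as follows. This is an illustrative sketch, not code from the cited references; parameter names are ours.

```python
import math

def musa_basic(lambda0, nu0, tau):
    """Musa basic execution time model (illustrative sketch).
    lambda0: initial failure intensity (failures per CPU-hour)
    nu0:     total failures expected over unlimited execution time
    tau:     cumulative execution (CPU) time spent testing so far
    Returns (expected failures observed so far, expected remaining)."""
    mu = nu0 * (1.0 - math.exp(-lambda0 * tau / nu0))
    return mu, nu0 - mu

def cpu_time_to_reach(lambda0, nu0, target_intensity):
    """CPU time needed until the failure intensity decays from lambda0
    down to target_intensity (intensity falls exponentially with tau)."""
    return (nu0 / lambda0) * math.log(lambda0 / target_intensity)
```

At tau = 0 no failures have been observed, consistent with the model's assumption of no failures at the start of testing; the second function answers the "minimum time to correct the remaining bugs" question in execution-time terms.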
438. N out of M vote T Dh 1981?
Aim/Description: Voting is a fundamental operation when distributed systems involve replicated components (e.g. after Diverse Programming). It involves a voter that chooses between several replicated outputs and sends its choice back to the user. The aim of an N out of M vote is to reduce the frequency and duration of system failure, and to allow continued operation during test and repair. For example, a 2 out of 3 voting scheme means that if one of three components fails, the other two will keep the system operational.
Remarks: Essential for systems where any break in service has serious consequences. 'N out of M' is usually denoted by 'NooM', e.g. as in 1oo2 or 2oo3. A variant is Adaptive Voting, which aims to avoid that fault masking ability deteriorates as more copies fail (i.e. faulty modules outvoting the good modules).
Safety assessment stage: 6; Domains: computer; Application: x; References: [Bishop90].
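A minimal NooM voter can be sketched as follows (an illustrative sketch only; real voters must also handle timing, value tolerance and voter failure):

```python
from collections import Counter

def noom_vote(channel_outputs, n):
    """'N out of M' voter: accept the value produced by at least n of the
    m replicated channels; None signals that no value reached the
    threshold (vote failure)."""
    value, count = Counter(channel_outputs).most_common(1)[0]
    return value if count >= n else None

# 2oo3: one faulty channel is outvoted by the two good ones
result = noom_vote([42, 42, 7], 2)
```

In the 2oo3 example the faulty channel's output (7) is masked; with all three channels disagreeing the voter returns None instead of guessing.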
439. NAIMS D 1985 NAIMS is a Federal Aviation Administration program 8 aviation x x x • [GAIN ATM, 2003]
(National Airspace to collect, maintain and analyse aviation statistical ATM • [FAA HFW]
Information Monitoring information based on reports of accidents and
System) incidents in the US national airspace system. NAIMS
produces a monthly report available to the public,
supplies data to NASDAC, and responds to public
inquiries for safety information. Reported incidents
are: 1. near mid-air collisions (NMACs); 2.
operational errors (OEs); 3. operational deviations
(ODs); 4. pilot deviations (PDs); 5. vehicle/
pedestrian deviations (VPDs); 6. surface incidents
(SIs); 7. runway incursions (RIs); 8. flight assists
(FAs). The NAIMS monthly report monitors trends in
and apportionment of each of these indicators. For
example, operational error rates (OEs per 100,000
operations) are shown for each ATC facility. The
original forms are maintained for five years. A
database containing an electronic copy of each form is
maintained indefinitely.
440. Naked man / Naked person T R 1963 or older
Aim/Description: This technique evaluates a system by looking at the bare system (controls) needed for operation, without any external features added, in order to determine the need/value of controls to decrease risk.
Remarks: The technique is universally appropriate.
Safety assessment stage: 3; Application: x; References: [FAA00], [ΣΣ93, ΣΣ97].
441. NARA T H 2004 Enhanced and updated version of HEART specific to Developed by Corporate Risk 5 nuclear x • [Kirwan, 2004]
(Nuclear Action the nuclear industry. Associates (CRA) and
Reliability Assessment) commissioned by the Nuclear
Industry Management Committee
(IMC) and British Energy.
442. NARIM I R 1998 NARIM aims at examining airspace concepts NARIM is developed jointly by 2 4 5 6 aviation x x x x x • [Dorado-Usero et al,
(National Airspace associated with future advances to the National the FAA Investment Analysis and ATM 2004]
Resource Investment Airspace System (NAS). It consists of three Operations Research Directorate • [Sherali et al, 2002]
Model) interrelated parts: 1) Operational modelling analyzes and NASA Interagency • [FAA HFW]
the movement of aircraft through the NAS to Integrated Product Team (IPT)
determine the impacts that new concepts, implemented for Air Traffic Management
through procedures and/or hardware, will have on the (ATM).
overall NAS performance. 2) Architectural/Technical
modelling provides a means of assessing how
procedural/ system changes affect the hardware/
software components of the NAS infrastructure (both
FAA and users). 3) Investment analysis modelling
provides a methodology to cost effectively trade
between alternatives for a system, trade requirements
within a system and across system and procedural
investment alternatives, trade between services to be
provided/included into the NAS, balance risk, and
assess the investment decision as part of a total
research portfolio.
Narrative Mission See Mission Scenarios
Description
443. NARSIM T M 1994 NARSIM is an air traffic research simulator. Its aim is NARSIM has been developed by 2 6 7 8 ATC x x x • [GAIN ATM, 2003]
(NLR’s Air Traffic from to evaluate new operational procedures, new controller National Aerospace Laboratory
Control Research assistance tools, and new human/machine interfaces. NLR and is integrated with
Simulator) There are six AT consoles and up to 12 pseudo pilot NLR’s Tower Research
positions, each of which can control up to 15 aircraft. Simulator (TRS).
The AT consoles and pseudo pilots are connected by a
voice communication net. The computers driving each
station are connected to the main NARSIM computer.
The NARSIM software simulates most important
aspects of a real air traffic control system, including
realistic radar information. It has the capability to use
actual recorded radar data, computer-generated data,
pseudo pilot generated data, or combinations of the
three.
NASA TLX See Rating Scales
(NASA Task Load
Index)
NASDAC Database Former name of ASIAS (Aviation
(National Aviation Safety Information Analysis and
Safety Data Analysis Sharing)
Center Database)
Naturalistic Observation See Field Study
NDE See NDI (Non-Destructive
(Non-destructive Inspection technique)
Evaluation)
444. NDI (Non-Destructive Inspection technique) G 1914-1918 war
Aim/Description: Generic term rather than a specific technique. NDI can be defined as inspection using methods that in no way affect the subsequent use or serviceability of the material, structure or component being inspected. An NDI method explores a particular physical property of a material or component in an effort to detect changes in that property which may indicate the presence of some fault. Visual inspection is the most commonly used NDI technique.
Remarks: NDI is commonly referred to as Non-destructive Testing (NDT), which is historically the original term used; NDI is the more commonly used term in the manufacturing environment, where the testing of the suitability of materials to be used is often undertaken non-destructively. The "non-destructive" description was adopted to differentiate it from the various "destructive" mechanical tests already in use. The term Non-destructive Evaluation (NDE) is also used, most particularly in the sphere of R&D work in the laboratory.
Safety assessment stage: 3; Domains: many; Application: x; References: [Hollamby97], [Wassell92].
NDT See NDI (Non-Destructive
(Non-Destructive Inspection technique)
Testing)
445. Needs Assessment T H 1987 The needs assessment decision aid tool is designed to Developed at Georgia Tech 6 x • [FAA HFW]
Decision Aid help decide among three methods of gathering Research Institute • [Patton87]
additional information about the user needs. The three • [NADA]
options for collecting information are questionnaire,
interview, and focus group. The tool includes a list of
questions whose answers should assist you in selecting
the preferred method of collecting the needs
assessment data.
446. NE-HEART T H 1999 Extended HEART approach, which adds several new 5 nuclear x • [Kirwan&Kennedy&
(Nuclear Electric or generic error probabilities specific to Nuclear Power electricity Hamblen]
Human Error older Plant tasks and systems.
Assessment and
Reduction Technique)
447. Network Logic Analysis T Dh 1972 or older
Aim/Description: Network Logic Analysis is a method to examine a system in terms of a Boolean mathematical representation, in order to gain insight into a system that might not ordinarily be achieved. Steps are: describe system operation as a network of logic elements and develop Boolean expressions for proper system functions; analyse the network and/or expressions to identify elements of system vulnerability to mishap.
Remarks: The technique is universally appropriate to complex systems that can be represented in bi-modal elemental form.
Safety assessment stage: 2; Application: x x; References: [FAA00], [ΣΣ93, ΣΣ97].
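The two steps above (express the system as a Boolean function of its elements, then analyse that expression for vulnerability) can be sketched as follows. The example system function is hypothetical, not taken from the references.

```python
def system_up(power, pump_a, pump_b):
    """Hypothetical Boolean system function: the plant runs if power is
    available AND at least one of the two redundant pumps works."""
    return power and (pump_a or pump_b)

def single_point_failures(system_fn, n_components):
    """Flag components whose lone failure defeats an otherwise healthy
    system -- one basic vulnerability question this analysis asks."""
    vulnerable = []
    for i in range(n_components):
        state = [True] * n_components
        state[i] = False            # fail only component i
        if not system_fn(*state):
            vulnerable.append(i)
    return vulnerable

# Only the power supply (component 0) is a single point of failure here
weak_spots = single_point_failures(system_up, 3)
```

Losing either pump alone leaves the expression true, so only component 0 is flagged; a full analysis would also enumerate multi-failure combinations.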
448. Neural Networks M 1890
Aim/Description: Neural networks are collections of mathematical models that emulate some of the observed properties of biological nervous systems and draw on the analogies of adaptive biological learning. The key element of the paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements that are analogous to neurones and are tied together with weighted connections that are analogous to synapses.
Remarks: The concept of neural networks started in the late 1800s as an effort to describe how the human mind performed. It was inspired by the way the densely interconnected, parallel structure of the mammalian brain processes information. In [May97], neural networks are used to model human operator performance in computer models of complex man-machine systems. Sometimes referred to as Artificial Neural Network (ANN).
Safety assessment stage: 4; Domains: aviation and many other; Application: x x; References: [May97], [FAA HFW], Wikipedia.
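The "processing element with weighted connections" described above can be sketched as a single artificial neurone. A purely illustrative sketch; weights and activation choice are ours.

```python
import math

def neuron(inputs, weights, bias):
    """One processing element: weighted sum of inputs (the 'synapses')
    squashed through a sigmoid activation into the range (0, 1)."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """A small fully connected layer built from such neurones."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]
```

A network stacks such layers and adapts the weights from data, which is the "adaptive biological learning" analogy in the description.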
449. NFR (Non-Functional Requirements approach) T Ds 1992
Aim/Description: NFR is based on the notion of softgoals rather than (hard) goals. A softgoal is 'satisficed' rather than 'achieved'. Goal satisficing is based on the notion that goals are never totally achieved or not achieved. Goals are satisficed when there is sufficient positive and little negative evidence for this claim, and they are unsatisficeable when there is sufficient negative evidence and little positive support for their satisficeability.
Remarks: The NFR approach evolved into the Goal-oriented Requirement Language (GRL). GRL is part of the ITU-T URN standard draft, which also incorporates Use Case Maps (UCM).
Safety assessment stage: 6; Domains: computer; Application: x; References: [ChungNixon, 1995], [Mylopoulos, 1999], [Mylopoulos, 1992], Wikipedia.
450. NGOMSL T H 1988 NGOMSL builds on CMN-GOMS by providing a Natural GOMS Language 2 x x • [Kieras, 1996]
(Natural GOMS natural-language notion for representing GOMS technique was developed by • [FAA HFW]
Language) models, as well as a procedure for constructing the David Kieras in 1988. • Wikipedia
models. Under NGOMSL, methods are represented in See also Apex, CAT, CPM-
terms of an underlying cognitive theory known as GOMS, CTA, GOMS, KLM-
Cognitive Complexity Theory (CCT), which addresses GOMS.
a criticism that GOMS does not have a strong basis in
cognitive psychology. This cognitive theory allows
NGOMSL to incorporate internal operators such as
manipulating working memory information or setting
up subgoals. Because of this, NGOMSL can also be
used to estimate the time required to learn how to
achieve tasks.
451. NOMAC I O 1994 NOMAC is an analysis framework that assesses the Qualitative. 3 7 8 nuclear x • [Kennedy&Kirwan98]
(Nuclear Organisation safety culture health of the organisation by looking for
and Management the presence or absence of indicators of safety
Analysis Concept) performance.
452. NOSS (Normal Operations Safety Survey) D 2003
Aim/Description: NOSS is a methodology for the collection of safety data during normal air traffic control (ATC) operations. By conducting a series of targeted observations of ATC operations over a specific period of time, and the subsequent analysis of the data thus obtained, the organisation is provided with an overview of the most pertinent threats, errors and undesired states that air traffic controllers must manage on a daily basis. One feature of NOSS is that it identifies threats, errors and undesired states that are specific to an organisation's particular operational context, as well as how those threats, errors and undesired states are managed by air traffic controllers during normal operations. The information thus obtained will enhance the organisation's ability to proactively make changes in its safety process without having to experience an incident or accident.
Remarks: A normal ATC operation is defined as an operation during the course of which no accident, incident or event takes place of which the reporting and/or investigation are required under existing legislation or regulations. Training and check shifts are considered to be outside the scope of normal operations.
Safety assessment stage: 8; Domains: ATC; Application: x x; References: [NOSS].
453. NOTECH T H 1998 Technique for assessing non-technical skills of crews. Developed in Europe for JAA. 5 aviation x • [Verheijen02]
(Non Technical Skill) Focuses on the individual (pass or fail). Has 4 main JAA intends to use NOTECH as • [Flin, 1998]
categories divided into a number of elements: evaluation tool in the same way • [Avermaete, 1998]
Cooperation; Leadership and managerial skills; as they evaluate technical skills.
Situation awareness; Decision making.
454. NSCCA T Ds 1976 The NSCCA provides a technique that verifies and At present applies to military 3 7 nuclear x x • [FAA AC431]
(Nuclear Safety Cross- validates software designs associated with nuclear nuclear weapon systems. defence • [Rakowsky]
Check Analysis) systems. The NSCCA is also a reliability hazard • [ΣΣ93, ΣΣ97]
assessment method that is traceable to requirements-
based testing.
455. Nuclear Criticality T M 1983 Nuclear criticality safety is dedicated to the prevention All facilities that handle fissile 6 nuclear x x • [ΣΣ93, ΣΣ97]
Safety of an inadvertent, self-sustaining nuclear chain material. See also Criticality • [O’Neal et al, 1984]
reaction, and with mitigating the consequences of a Analysis or Criticality Matrix. • [Lipner&Ravets,
nuclear criticality accident. A nuclear criticality 1979]
accident occurs from operations that involve fissile • Wikipedia
material and results in a release of radiation. The
probability of such accident is minimised by analyzing
normal and abnormal fissile material operations and
providing requirements on the processing of fissile
materials.
456. Nuclear Explosives G M 1997 A nuclear explosive is an explosive device that derives Nuclear or similar high risk 3 5 nuclear x x • [ΣΣ93, ΣΣ97]
Process Hazard Analysis or its energy from nuclear reactions. Aim of Nuclear activities. See also Process
older Explosives Process Hazard Analysis is to identify high Hazard Analysis.
consequence (nuclear) activities to reduce possibility
of nuclear explosive accident.
457. Nuclear Safety Analysis G M 1980 The purpose is to establish requirements for All nuclear facilities and 6 nuclear x x x • [FAA AC431]
or contractors responsible for the design, construction, operations. DOE (Department of • [ΣΣ93, ΣΣ97]
older operation, decontamination, or decommissioning of Energy) and NRC (Nuclear • Wikipedia
nuclear facilities or equipment to develop safety Regulatory Commission) have
analyses that establish and evaluate the adequacy of rigid requirements.
the safety bases of the facility/equipment. The DOE
requires that the safety bases analysed include
management, design, construction, operation, and
engineering characteristics necessary to protect the
public, workers, and the environment from the safety
and health hazards posed by the nuclear facility or
non-facility nuclear operations. The Nuclear Safety
Analysis Report (NSAR) documents the results of the
analysis.
N-version Programming See Diverse Programming
458. O&SHA T R 1982 The analysis is performed to identify and evaluate The analysis is appropriate for all 3 5 6 aviation x x x x • [FAA AC431]
(Operating and Support or hazards/risks associated with the environment, operational and support efforts. • [FAA00]
Hazard Analysis) older personnel, procedures, and equipment involved Goes beyond a JSA. • [FAA tools]
throughout the operation of a system. This analysis Alternative name is OHA • [ΣΣ93, ΣΣ97]
identifies and evaluates: a) Activities which occur (Operating Hazard Analysis).
under hazardous conditions, their time periods, and the
actions required to minimise risk during these
activities/time periods; b) Changes needed in
functional or design requirements for system
hardware/software, facilities, tooling, or S&TE
(Support and Test Equipment) to eliminate hazards or
reduce associated risk; c) Requirements for safety
devices and equipment, including personnel safety and
life support and rescue equipment; d) Warnings,
cautions, and special emergency procedures; e)
Requirements for PHS&T (packaging, handling,
storage and transportation) and the maintenance and
disposal of hazardous materials; f) Requirements for
safety training and personnel certification.
459. OARU Model T R 1980 Model for analysis of accidents. In this model a Developed by U. Kjellén. 8 x • [Kjellen, 2000]
(Occupational Accident distinction is made between three phases in the • [Engkvist, 1999]
Research Unit Model) accident process: two pre-injury phases (the initial and
concluding phases), followed by the injury phase, i.e.
the pathogenic outcome of physical damage in a
person. The initial phase starts when there are
deviations from the planned or normal process. The
concluding phase is characterised by loss of control
and the ungoverned flow of energy. The injury phase
starts when energies meet the human body and cause
physical harm.
460. OATS T H 1982 Deals with operator errors during accident or 3 4 5 nuclear x • [KirwanAinsworth92]
(Operator Action Trees) abnormal conditions and is designed to provide error • [MUFTIS3.2-I]
types and associated probabilities. The method • [GAIN ATM, 2003]
employs a logic tree, the basic operator action tree, • [FAA HFW]
that identifies the possible postaccident operator
failure modes. Three error types are identified: 1)
failure to perceive that an event has occurred; 2)
failure to diagnose the nature of event and to identify
necessary remedies; 3) failure to implement those
responses correctly and in timely manner. Next, these
errors are quantified using time-reliability curves.
461. OBJ T D 1976 OBJ (not an acronym) is an algebraic Specification Introduced by Joseph Goguen in 6 computer x • [Bishop90]
Language to provide a precise system specification 1976. Powerful yet natural formal • [EN 50128]
with user feed-back and system validation prior to specification language for both • [Rakowsky]
implementation. large- and small-scale systems • Wikipedia
developments. Tools available.
Software requirements
specification phase and design &
development phase.
462. ObjectGEODE I Ds 2001 ObjectGeode is a toolset dedicated to analysis, design, Real-time and distributed 7 avionics x • [Telelogic
or verification and validation through simulation, code applications. Such applications computer, Objectgeode]
older generation and testing of real-time and distributed are used in many fields such as defence,
applications. It supports a coherent integration of telecommunications, aerospace, medical
complementary object-oriented and real-time defence, automotive, process systems
approaches based on the UML, SDL and MSC control or medical systems.
standards languages. ObjectGeode provides graphical
editors, a powerful simulator, a C code generator
targeting popular real-time OS and network protocols,
and a design-level debugger. Complete traceability is
ensured from Requirement to code.
463. Object-oriented Design G D 1966 Uses "objects" – data structures consisting of data Useful as one possible option for 6 computer x • [Bishop90]
and Programming or fields and methods together with their interactions – to the design of safety-related • [EN 50128]
older design applications and computer programs. systems. Also for construction of • [Rakowsky]
Programming techniques may include features such as prototypes. Related to JSD and • Wikipedia
data abstraction, encapsulation, modularity, OBJ. Tools available. Software
polymorphism, and inheritance. Aim is to reduce the design & development phase.
development and maintenance costs and enhance
reliability, through the production of more
maintainable and re-usable software.
464. Observational G 1990 General class of techniques whose objective is to 7 computer x • [KirwanAinsworth92]
Techniques obtain data by directly observing the activity or • [FAA HFW]
behaviour under study. Examples of these techniques • Wikipedia
are direct visual observation, continuous direct
observation, sampled direct observation, remote
observation via closed-circuit television or video
recording, participant observation, time-lapse
photography.
465. Occupational Health T R 1971 Is carried out to identify occupational health-related Occupational health and safety is 3 6 defence x x x • [DS-00-56]
Hazard Analysis hazards and to recommend measures to be included in a cross-disciplinary area • [ΣΣ93, ΣΣ97]
the system, such as provision of ventilation, barriers, concerned with protecting the • Wikipedia
protective clothing, etc., to reduce the associated risk safety, health and welfare of
to a tolerable level. Is carried out by means of audit people engaged in work or
and checklists. employment. The goal of the
programs is to foster a safe work
environment. OSHA
(Occupational Safety and Health
Administration) have been
regulating occupational safety
and health since 1971.
See also Systematic Occupational
Safety Analysis.
466. Ofan T H 1995 Modelling framework describing human interaction Developed by Asaf Degani. Ofan 2 3 4 aviation x x • [Andre&Degani96]
with systems that have modes. Is based on the is Hebrew for a set of road • [Degani, 1996]
Statecharts and Operator Function Models (OFM). In perpetuating wheels, referring to • [Degani&Kirlik,
Ofan, five concurrently active modules are used to an event in one wheel affecting 1995]
describe the human-machine environment, namely the the adjacent wheel and so on, in • [Smith&al98]
Environment, the Human Functions/Tasks, the perpetuum.
Controls, the Machine, and the Displays. Applying the
Ofan framework allows the identification of potential
mismatches between what the user assumes the
application will do and what the application actually
does. The Ofan framework attempts to separate out the
components of the whole environment.
467. OFM T H 1987 Describes task-analytic structure of operator behaviour The power of OFM is based upon 2 aviation x • [Botting&Johnson98]
(Operation Function in complex systems. The OFM is focused on the several important observations: • [Vakil00]
Model) interaction between an operator and automation in a the event-driven nature of • [FAA HFW]
highly proceduralised environment, such as aviation. automation, the proceduralised
The OFM is a structured approach to specify the nature of high risk tasks, and the
operator tasks and procedures in a task analysis fact that many of the transitions
framework made up of modes and transitions. Using and decisions made during
graphical notation, OFM attempts to graph the high system operation are discrete in
level goals into simpler behaviours to allow the nature.
supervision of the automation.
OHA Alternative name for O&SHA
(Operating Hazard (Operating and Support Hazard
Analysis) Analysis).
468. OMAR T H 1993 OMAR is a modelling and simulation tool developed 2 4 defence x • [Deutsch et al, 1993]
(Operator Model for the Air Force that can generate high fidelity • [FAA HFW]
Architecture) computer models of human behavior, as well as state-
of-the-art intelligent agents for use in synthetic
environments, distributed simulations, and information
systems. OMAR aims at supporting the development
of knowledge-based simulations of human
performance, with a focus on the cognitive skills of
the human operator. It models situated-cognition,
where a human dynamically shifts between goals
based upon events occurring in the environment.
469. OMOLA T Dh 1989 Object-oriented language tool for modelling of Developed by A. Andersson 4 5 thermal- x • [Andersson93]
(Object Oriented continuous time and discrete event dynamical systems. (Lund Institute of Technology, power- • [OmolaWeb]
Modelling Language) Sweden). OmSim is an plant
environment for modelling and
simulation based on OMOLA.
474. ORM I R 1991 ORM is a decision-making tool to systematically help The ORM concept grew out of 7 defence x x x x x • [AFP90-902]
(Operational Risk or identify operational risks and benefits and determine ideas originally developed to • [FAA00]
Management) older the best courses of action for any given situation. In improve safety in the • [ORM web]
contrast to an Operational and Support Hazard development of new weapons, • Wikipedia
Analysis (O&SHA), which is performed during aircraft and space vehicles, and
development, ORM is performed during operational nuclear power. The US Army
use. This risk management process, as other safety risk adopted Risk Management in
management processes is designed to minimize risks 1991 to reduce training and
in order to reduce mishaps, preserve assets, and combat losses.
safeguard the health and welfare. Four principles
govern all actions associated with ORM. a) Accept No
Unnecessary Risk; b) Make Risk Decisions at the
Appropriate Level; c) Accept Risk When Benefits
Outweigh the Costs; d) Integrate ORM into Planning
at all Levels. The ORM process comprises six steps,
each of which is equally important. 1) Identify the
Hazard; 2) Assess the Risk; 3) Analyze Risk Control
Measures; 4) Make Control Decisions; 5) Implement
Risk Controls; 6) Supervise and Review.
475. ORR T R 1997 An ORR is a structured method for determining that a DOE (Department of Energy) 7 nuclear x x x • [DOE-3006]
(Operational Readiness or project, process, facility or software application is requirement. Systematic approach • [Dryden-ORR]
Review) older ready to be operated or occupied (e.g. a new Air to any complex facility. The • [ΣΣ93, ΣΣ97]
Traffic Control Centre; a new tower; a new display details of the ORR will be • [Enterprise-ORR]
system, etc.). The ORR is used to provide a dependent on the application. • [NNSA-ORR]
communication and quality check between
Development, Production, and Executive Management
as development is in the final stages and production
implementation is in progress. This process should
help management evaluate and make a decision to
proceed to the next phase, or hold until risk and
exposure can be reduced or eliminated. This review
process can also be used to evaluate post operational
readiness for continuing support and will also provide
information to make necessary system/procedural
modifications, and error and omissions corrections.
476. OSD T H 1960 An operational sequence is any sequence of control Operational Sequence Diagrams 2 4 defence x • [HEAT overview]
(Operational Sequence movements and/or information collecting activities, are extended forms of Flow • [KirwanAinsworth92]
Diagram) which are executed in order to accomplish a task. Such Process Charts. Is useful for the • [MIL-HDBK]
sequences can be represented graphically in a variety analysis of highly complex • [FAA HFW]
of ways, known collectively as operational sequence systems requiring many time
diagrams. Examples are the Basic OSD, the Temporal critical information-decision-
OSD, the Partitioned OSD, the Spatial OSD, Job action functions between several
Process Charts. operators and equipment items.
In [HEAT overview] referred to
as SAT Diagram (Sequence and
Timing Diagram) or Task
Allocation Charts.
477. OSTI T O 1986 Analysis framework that assesses the safety culture Qualitative. 8 management x • [Kennedy&Kirwan98]
(Operant Supervisory health of the organisation by looking for the presence
Taxonomy Index) or absence of indicators of safety performance.
478. OWAS Method T H 1978 Movement and posture analysis. Limit physiological Developed in the Finnish steel 5 6 health x • [FAA HFW]
(Ovako Working costs and prevent disease. Goal: Work protection to industry between 1974-78 and • [Luczak97]
posture Analysis System prevent occupational diseases; Approach: Evaluation later enhanced by the Finnish • [Laurig89]
Method) and combination of data matrixes of postures and Centre for Occupational Safety. • [Stoffert85]
movements, analysis of frequencies, and derives
necessities and measures for design; Describes:
Working postures and movements; Frequency in a
task structure; Assignment of tasks into the work
segment; Necessity of design interventions;
Distribution of movements over the body; Weights
handled and forces exerted.
479. Pareto Chart — Format: T; Purpose: R; Year: 1906.
Aim/Description: The Pareto chart is a specialized version of a histogram that ranks the categories in the chart from most frequent to least frequent. A Pareto Chart is useful for non-numeric data, such as "cause", "type", or "classification". This tool helps to prioritize where action and process changes should be focused. If one is trying to take action based upon causes of accidents or events, it is generally most helpful to focus efforts on the most frequent causes. Going after an "easy" yet infrequent cause will probably not reap benefits.
Remarks: Named after Vilfredo Federico Damaso Pareto (1848-1923), who developed this chart as part of an analysis of economics data. He determined that a large portion of the economy was controlled by a small portion of the people within the economy. The "Pareto Principle" (later generalised by Joseph M. Juran) states that 80% of the problems come from 20% of the causes.
Safety assessment stage: 5 6; Domains: finance; References: [FAA HFW], Wikipedia.
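The ranking step described above is easy to mechanise. A minimal sketch in Python; the accident-cause categories and counts are illustrative, not taken from this database:

```python
from collections import Counter

def pareto_table(observations):
    """Rank categories from most to least frequent and add cumulative %."""
    counts = Counter(observations)
    total = sum(counts.values())
    table, cumulative = [], 0
    for category, n in counts.most_common():
        cumulative += n
        table.append((category, n, round(100.0 * cumulative / total, 1)))
    return table

# Hypothetical occurrence reports, flattened to one cause label each.
events = (["procedure not followed"] * 8 + ["fatigue"] * 5 +
          ["communication"] * 4 + ["equipment"] * 2 + ["weather"] * 1)
for row in pareto_table(events):
    print(row)   # e.g. ('procedure not followed', 8, 40.0) first
```

The cumulative percentage column is what makes the chart useful for prioritisation: one reads off how few categories cover, say, 80% of the events.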
480. PARI method (Precursor, Action, Result, and Interpretation Method) — Format: T; Purpose: M; Year: 1995.
Aim/Description: In the PARI method, subject-matter experts are consulted to identify which issues to probe, and to aid in eliciting cognitive information from other subject-matter experts. For example, subject-matter experts may be asked to generate lists of potential equipment malfunctions and then engage in group discussions to reach agreement regarding a set of malfunction categories. Experts then design representative scenarios illustrating each category of malfunction. These scenarios are used to elicit information from a different set of subject-matter experts regarding how they would approach the situation presented in each scenario. Each expert is asked focused questions to identify actions or solution steps and the reasons (precursors) for those actions. The expert is then asked to interpret the system’s response to his/her actions. The knowledge gathered in the interviews is represented using flow charts, annotated equipment schematics, and tree structures.
Remarks: The PARI method is particularly useful in the development of training programs.
Safety assessment stage: 3 4 6; Domains: defence; References: [Hall et al, 1995], [FAA HFW].
481. Particular Risk Analysis — Format: T; Purpose: R; Year: 1987.
Aim/Description: Common cause analysis related technique. Defined as those events or influences outside the system itself. For example, fire, leaking fluids, tire burst, High Intensity Radiated Fields (HIRF), exposure, lightning, uncontained failure of high energy rotating fields, etc. Each risk should be the subject of a specific study to examine and document the simultaneous or cascading effects, or influences, that may violate independence.
Remarks: Is the second activity in a Common Cause Analysis; Zonal Analysis being the first and Common Mode Analysis being the third.
Safety assessment stage: 3; Domains: chemical; References: [Mauri, 2000].
482. Partitioning — Format: T; Purpose: D.
Aim/Description: Technique for providing isolation between functionally independent software components to contain and/or isolate faults and potentially reduce the effort of the software verification process. If protection by partitioning is provided, the software level for each partitioned component may be determined using the most severe failure condition category associated with that component.
Remarks: See also Equivalence Partitioning and Input Partition Testing.
Safety assessment stage: 6; Domains: avionics, computer; References: [DO178B], [Skutt01].
483. Parts Count method — Format: T; Purpose: Dh; Year: 1981.
Aim/Description: Crude way of approximating the reliability of a system by counting active parts. Inductive approach.
Remarks: It assumes that every subsystem failure can lead to total system failure.
Safety assessment stage: 5; Domains: nuclear; References: [FT handbook02], [MUFTIS3.2-I].
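Under the series-system assumption in the Remarks (any part failure fails the system) and constant failure rates, the approximation reduces to summing part failure rates. A minimal sketch; the rates below are illustrative, not from any handbook:

```python
import math

def parts_count(failure_rates_per_hour, mission_hours):
    """Parts Count approximation: a series system of active parts with
    constant (exponential) failure rates; any part failure fails the system."""
    lambda_system = sum(failure_rates_per_hour)          # system failure rate
    reliability = math.exp(-lambda_system * mission_hours)
    return lambda_system, reliability

# Three active parts with illustrative failure rates (per hour).
lam, rel = parts_count([2e-6, 5e-6, 1e-6], mission_hours=1000)
print(lam, rel)   # 8e-6 per hour; R(1000 h) = exp(-0.008) ≈ 0.992
```

This is deliberately conservative: it ignores redundancy, which is why the entry calls it a crude approximation.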
484. PAS (Pseudo Aircraft Systems) — Format: T; Purpose: T; Year: 1990 or older.
Aim/Description: PAS is an air traffic control (ATC) simulator with a high-fidelity piloting system designed to simulate the flight dynamics of aircraft in controlled airspace. Realistic air traffic scenarios can be created for advanced automated ATC system testing and controller training. With PAS, researchers can examine air traffic flow in real time. PAS gives researchers the ability to provide air traffic control instructions to simulated aircraft, and receive verbal feedback from PAS operators (“pseudo-pilots”) on a simulated radio network and visual feedback through a simulated radar display. PAS consists of three major software components: Simulation Manager, Pilot Manager, and one or more Pilot Stations. They combine to provide dynamic real-time simulations, robust piloting capabilities, and realistic aircraft modelling.
Remarks: Supported by NASA Ames Research Center.
Safety assessment stage: 7; Domains: ATC; References: [GAIN ATM, 2003], [PAS web].
485. PC (Paired Comparisons) — Format: T; Purpose: H; Year: 1927.
Aim/Description: Estimates human error probabilities by asking experts which of a pair of error descriptions is more probable. Result is a ranked list of human errors and their probabilities. The relative likelihoods of human error are converted to absolute human error probabilities assuming a logarithmic calibration equation and two empirically known error probabilities.
Remarks: Developed by L.L. Thurstone in 1927. Does not restrict to human error only. Can be used together with APJ. Sometimes referred to as Pairwise comparison.
Safety assessment stage: 5; Domains: transport, nuclear; References: [Humphreys88], [Kirwan94], [MUFTIS3.2-I], [Hunns, 1982], Wikipedia.
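The conversion step can be sketched as follows: assuming a logarithmic calibration line log10(HEP) = a·s + b fitted through two anchor tasks whose error probabilities are empirically known, every paired-comparison scale value maps to an absolute probability. The scale values and anchor probabilities below are hypothetical:

```python
import math

def calibrate(scale_values, known):
    """Fit log10(HEP) = a*s + b through two anchor tasks with known HEPs,
    then convert every paired-comparison scale value to a probability.
    `known` maps exactly two task indices to known error probabilities."""
    (i, p_i), (j, p_j) = known.items()
    a = (math.log10(p_i) - math.log10(p_j)) / (scale_values[i] - scale_values[j])
    b = math.log10(p_i) - a * scale_values[i]
    return [10 ** (a * s + b) for s in scale_values]

# Hypothetical scale values (higher = error judged more likely);
# tasks 0 and 3 have empirically known HEPs of 1e-1 and 1e-3.
heps = calibrate([4.0, 3.0, 2.0, 0.0], {0: 1e-1, 3: 1e-3})
print(heps)   # intermediate tasks interpolate on the log scale
```

Note how the two anchors pin the line and the remaining estimates interpolate (or extrapolate) logarithmically, which is the essence of the calibration equation mentioned in the entry.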
486. PDARS (Performance Data Analysis and Reporting System) — Format: D; Year: 1998.
Aim/Description: Aim of PDARS is to provide performance measurement metrics for the Federal Aviation Administration (FAA) at the national, as well as field level (individual en route and terminal facilities). PDARS collects and processes operational data (including aircraft tracks) and provides information to the users relevant to the air traffic system performance on a daily basis. ‘TAP clients’ are maintained at each facility site to continuously collect selective radar data; the data is processed and daily reports are generated; daily data is then sent to a central site for storage, where the user can retrieve historical data as well as conduct trend analysis.
Remarks: See also GRADE, SIMMOD Pro. Work on PDARS started in 1997. A first lab prototype, supporting off-line data processing, was demonstrated in 1998. The first live radar data tap was brought on line at the Southern California TRACON (SCT) in 1999.
Safety assessment stage: 7; Domains: aviation; References: [SAP15], [GAIN ATM, 2003].
487. PDP (Piecewise Deterministic Markov Process) — Format: M; Year: 1984.
Aim/Description: A PDP is a process on a hybrid state space, i.e. a combination of discrete and continuous. The continuous state process flows according to an ordinary differential equation. At certain moments in time it jumps to another value. The time of jump is determined either by a Poisson point process, or when the continuous state hits the boundary of an area.
Remarks: Through the existence of equivalence relations between PDP and DCPN (Dynamically Coloured Petri Nets), the development of a PDP for complex operations can be supported by Petri nets.
Safety assessment stage: 4; Domains: ATM; References: [Davis84], [Everdij&Blom03], [Everdij&Blom05].
488. PEAT (Procedural Event Analysis Tool) — Format: I; Purpose: M; Year: 1999.
Aim/Description: PEAT is a structured, cognitively based analytic tool designed to help airline safety officers investigate and analyse serious incidents involving flight-crew procedural deviations. The objective is to help airlines develop effective remedial measures to prevent the occurrence of future similar errors. The PEAT process relies on a non-punitive approach to identify key contributing factors to crew decisions. Using this process, the airline safety officer would be able to provide recommendations aimed at controlling the effect of contributing factors. PEAT includes database storage, analysis, and reporting capabilities.
Remarks: Boeing made PEAT available to the airline industry in 1999. The PEAT program has benefited from lessons learned by its sister program, Maintenance Error Decision Aid (MEDA), which Boeing has provided to operators since 1995.
Safety assessment stage: 8; Domains: aviation; References: [HIFA Data], [GAIN AFSA, 2003], [FAA HFW].
489. Performance Modelling — Format: G; Purpose: R; Year: 1961 or older.
Aim/Description: Aim is to ensure that the working capacity of the system is sufficient to meet the specified requirements. The requirements specification includes throughput and response requirements for specific functions, perhaps combined with constraints on the use of total system resources. The proposed system design is compared against the stated requirements by 1) defining a model of the system processes, and their interactions; 2) identifying the use of resources by each process; 3) identifying the distribution of demands placed upon the system under average and worst-case conditions; 4) computing the mean and worst-case throughput and response times for the individual system functions.
Remarks: Valuable provided modelling limitations are recognised. Tools available. See also Computer Modelling and simulation. See also Modelling.
Safety assessment stage: 5; Domains: computer; References: [Bishop90], [EN 50128], [Rakowsky].
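Step 4 of the comparison above can be sketched with a single-server queueing approximation. The M/M/1 model used here is my assumption for illustration, not prescribed by the entry, and the load figures are invented:

```python
def mm1_performance(arrival_rate, service_time):
    """M/M/1 approximation: utilisation, mean response time, mean jobs in
    system. Raises if the offered demand exceeds the server capacity."""
    utilisation = arrival_rate * service_time
    if utilisation >= 1.0:
        raise ValueError("system saturated: demand exceeds capacity")
    response_time = service_time / (1.0 - utilisation)    # mean time in system
    jobs_in_system = utilisation / (1.0 - utilisation)
    return utilisation, response_time, jobs_in_system

# Average-load case: 40 requests/s with 20 ms of service demand each.
u, r, n = mm1_performance(arrival_rate=40.0, service_time=0.020)
print(u, r, n)   # ≈ 0.8 utilisation, 0.1 s mean response, 4 jobs in system
```

Running the same function with the worst-case demand distribution (higher arrival rate or service time) gives the worst-case response figures the technique asks for.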
490. Performance Requirements Analysis — Format: T; Purpose: Ds; Year: 1995 or older.
Aim/Description: Aim is to establish that the performance requirements of a software system have been satisfied. An analysis is performed of both the system and the software requirements specifications to identify all general and specific, explicit and implicit performance requirements. Each of these performance requirements is examined in turn to determine: 1) the success criteria to be obtained; 2) whether a measure against the success criteria can be obtained; 3) the potential accuracy of such measurements; 4) the project stages at which the measurements can be estimated; 5) the project stages at which measurements can be made. The practicability of each performance requirement is then analysed in order to obtain a list of performance requirements, success criteria and potential measurements.
Safety assessment stage: 5; Domains: computer; References: [EN 50128], [Rakowsky].
491. PERT (Program Evaluation and Review Technique) — Format: T; Purpose: M; Year: 1957.
Aim/Description: A PERT shows all the tasks, a network that logically connects the tasks, time estimates for each task, and the time-critical part.
Remarks: Developed by the US Navy in the 1950s. It is commonly used in conjunction with the critical path method (CPM).
Safety assessment stage: 2; Domains: navy, and many others; References: Internet, Wikipedia.
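The time-critical part of the network falls out of a forward pass over the task graph. A minimal sketch; the five-task network and its durations are hypothetical:

```python
def critical_path(durations, predecessors):
    """Forward pass of PERT/CPM: earliest finish time per task and the
    overall project duration (the length of the critical path)."""
    earliest_finish = {}

    def finish(task):
        if task not in earliest_finish:
            # A task can start once all its predecessors have finished.
            start = max((finish(p) for p in predecessors.get(task, [])), default=0)
            earliest_finish[task] = start + durations[task]
        return earliest_finish[task]

    for t in durations:
        finish(t)
    return earliest_finish, max(earliest_finish.values())

durations = {"A": 3, "B": 2, "C": 4, "D": 2, "E": 1}       # days
predecessors = {"C": ["A", "B"], "D": ["C"], "E": ["C"]}
ef, total = critical_path(durations, predecessors)
print(ef, total)   # A-C-D is the longest chain: 9 days
```

A backward pass over the same network would additionally yield the slack per task, which is how CPM identifies non-critical activities.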
492. Petri Nets — Format: M; Year: 1962.
Aim/Description: A Petri Net is a bi-partite graph of Places and Transitions, connected by Arcs. A token inside a place denotes that the corresponding discrete state is the current one. Petri Nets can be used to model system components, or sub-systems, at a wide range of abstraction levels; e.g. conceptual, top-down, detail design, or actual implementations of hardware, software or combinations. The best known Petri net is named Place/Transition net (P/T net). This basic Petri Net models discrete state space systems only, and no random inputs. Numerous extensions exist through which other states and stochastic inputs can be modelled. Some notable extensions are Time (transitions fire not immediately but after waiting some time; this time may be constant or stochastic), Colour (tokens have a colour or value, which may be constant or even changing through time), different types of arcs, different types of transitions or places. The Petri Net formalism allows one to specify, in a compositional way, an unambiguous mathematical model of a complex system. For different Petri Net extensions, one-to-one mappings with mathematical formalisms are known, by means of which the advantages of both Petri Nets and these mathematical formalisms can be combined.
Remarks: Petri nets were first developed by C.A. Petri in 1962. P/T nets are a special case of SSG. Plenty of tools available, also free. A useful advantage of Petri nets is the compositional specification power. GSPN (Generalised Stochastic Petri Nets) have been used to model an ATC technical support system. SPN (Synchronised Petri Network) has been used for modelling Human Operator tasks. Petri net extensions that have been developed and used in safety assessments for complex air traffic operations are DCPN and SDCPN.
Safety assessment stage: 4 5; Domains: many; References: huge amount of literature available, see for an overview e.g. [PetriNets World]; [Abed&Angue94], [Bishop90], [EN 50128], [FAA AC431], [FAA00], [KirwanAinsworth92], [MUFTIS3.2-I], [ΣΣ93, ΣΣ97], [Everdij&Blom&Klompstra97], [Everdij&Blom03], [Everdij&Blom04], [Everdij&Blom05], [Everdij&al04], [FAA HFW], Wikipedia.
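The token-game semantics of a basic P/T net takes only a few lines to state explicitly. A minimal sketch (the fail/repair example net is illustrative, not from the literature cited above):

```python
class PetriNet:
    """Minimal Place/Transition net: the marking maps places to token
    counts; firing a transition consumes one token from each input place
    and produces one token in each output place."""

    def __init__(self, transitions, marking):
        self.transitions = transitions   # name -> (input places, output places)
        self.marking = dict(marking)     # place -> token count

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Illustrative two-state component: it can fail and be repaired.
net = PetriNet({"fail": (["up"], ["down"]), "repair": (["down"], ["up"])},
               {"up": 1, "down": 0})
net.fire("fail")
print(net.marking)   # {'up': 0, 'down': 1}
```

The extensions listed in the entry (time, colour, stochastic firing) all build on exactly this enabled/fire mechanism.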
493. PHA (Preliminary Hazard Analysis) — Format: T; Purpose: R; Year: 1966.
Aim/Description: Identification of unwanted consequences for people as a result of malfunctioning of the system. Aim is to determine, during system concept or early development, the hazards that could be present in the operational system in order to establish courses of action. Sometimes it consists of PHI and HAZOP and/or FMEA. The PHA is an extension of a Preliminary Hazard List. As the design matures, the PHA evolves into a system of sub-system hazard analysis.
Remarks: PHA was introduced in 1966 after the US Department of Defense requested safety studies to be performed at all stages of product development. PHA is considered for specification of systems which are not similar to those already in operation and from which much experience has been gained. Design and development phase. Use with FTA, FMEA, HAZOP. Initial effort in hazard analysis during system design phase. Emphasis on the hazard and its effects. Inductive and deductive.
Safety assessment stage: 3; Domains: aircraft, rail, defence; References: [Bishop90], [FAA AC431], [FAA00], [FAA tools], [Mauri, 2000], [MUFTIS3.2-I], [ΣΣ93, ΣΣ97].
494. PHASER (Probabilistic Hybrid Analytical System Evaluation Routine) — Format: T; Purpose: R; Year: 1996.
Aim/Description: Software tool that has the capability of incorporating subjective expert judgement into probabilistic safety analysis (PSA) along with conventional data inputs. The basic concepts involve scale factors and confidence factors that are associated with the stochastic variability and subjective uncertainty (which are common adjuncts used in PSA), and the safety risk extremes that are crucial to safety assessment. These are all utilised to illustrate methodology for incorporating dependence among analysis variables in generating PSA results, and for importance and sensitivity measures associated with the results that help point out where any major sources of safety concern arise and where any major sources of uncertainty reside, respectively.
Remarks: Implemented at Sandia National Laboratories, USA. Describes the potential for failure and helps in weighing cost/benefit analysis. Applies to modelling where inputs lack precise definition or have dependence.
Safety assessment stage: 3 5; Domains: nuclear; References: [Cooper96], [ΣΣ93, ΣΣ97].
495. PHEA (Predictive Human Error Analysis technique) — Format: T; Purpose: M; Year: 1993.
Aim/Description: Simplified version of the earlier SHERPA. Comprises an error checklist. Focuses on particular task types depending on the industry concerned. Steps are: 1) Identify task steps where errors may result in accidents; 2) Specify the nature of the error; 3) Identify possible recovery; 4) Recommend preventative measures. Errors of several types are analysed: Planning Errors, Action Errors, Checking Errors, Retrieval Errors, Information Communication Errors, Selection Errors.
Remarks: Equivalent to Human HAZOP.
Safety assessment stage: 3 6; Domains: chemical; References: [Kirwan98-1].
496. PHECA (Potential Human Error Causes Analysis) — Format: T; Purpose: H; Year: 1988.
Aim/Description: PHECA is a computerised system based on the identification of error causes, which interact with performance shaping factors. It has a wider application than just error identification (e.g. potential error reduction strategies). Like HAZOP it uses guidewords to identify hazards.
Remarks: Apparently not in current use or else used rarely.
Safety assessment stage: 3 6; References: [Kirwan98-1], [PROMAI5].
497. PHI (Preliminary Hazard Identification) — Format: G; Year: 1991 or older.
Aim/Description: Reduced version of PHA, only containing a column with hazards. The results are recorded in the Preliminary Hazard List (PHL). Is sometimes considered a generic term rather than a specific technique.
Remarks: Performed in the early stages of the lifecycle.
Safety assessment stage: 3; Domains: aircraft; References: [MUFTIS3.2-I], [Storey96].
498. PHL (Preliminary Hazard List) — Format: T; Purpose: R; Year: 1989 or older.
Aim/Description: Is an initial analysis effort within system safety. Lists of initial hazards or potential accidents are identified during concept development. The PHL may also identify hazards that require special safety design emphasis or hazardous areas where in-depth safety analyses are needed, as well as the scope of those analyses. At a minimum, the PHL should identify: the hazard; when identified (phase of system life cycle); how identified (analysis, malfunction, failure) and by whom; severity and probability of occurrence; probable/actual cause(s); proposed elimination/mitigation techniques; status (open-action pending / closed-eliminated / mitigated); process of elimination/mitigation; oversight/approval authority.
Remarks: The technique is universally appropriate. Usually the results are fed into a PHA.
Safety assessment stage: 3; Domains: aircraft; References: [FAA AC431], [FAA00], [ΣΣ93, ΣΣ97].
499. PHRA (Probabilistic Human Reliability Analysis) — Format: T; Purpose: H; Year: 1990.
Aim/Description: Time-related method. A distinction is made between routine operation and operation after the event. Error probabilities are calculated for identified classes of routine operation with the help of simple evaluation instructions. Simulator experiments can be performed to evaluate the reliability of human actions after trouble has materialised. Various time-reliability curves for trouble situations of varying complexity are determined from the experiments. Error probabilities are determined from the time-reliability curves.
Remarks: Update of HCR (Human Cognitive Reliability), which retains the advantages of HCR while attempting to eliminate its disadvantages.
Safety assessment stage: 5; Domains: electricity; References: [Straeter00], [Straeter01].
500. Plant walkdowns/surveys — Format: T; Purpose: R.
Aim/Description: Site-based systematic surveys, developed for rapid identification of hazards, effects and controls.
Remarks: Alternative name: Site Visits.
Safety assessment stage: 3 6; Domains: chemical; References: [Risktec].
501. PMA (Phased Mission Analysis) — Format: T; Purpose: R; Year: 1984.
Aim/Description: Mathematical technique used to quantify the top effect of fault trees, accounting for different phases of a task, and allowing repairable components under certain conditions.
Safety assessment stage: 5; References: [MUFTIS3.2-I].
PMTS (Predetermined Motion Time System): see PTS (Predetermined Time Standards).
POMS (Profile of Mood States): see Rating Scales.
502. PPAS (Professional Performance Analysis System) — Format: T; Purpose: H; Year: 1977.
Aim/Description: Main purpose is providing remedies to minimize pilot error and optimize pilot performance. The five interactive factors of the model include knowledge, skills, attitudes, systems environment, and obstacles. Four analysis steps: 1) Describe the process, function, task, error, or low performance, in order to see if the pilot was aware of risks, threats and consequences of their actions and if there was stimulus that degraded this awareness. 2) Assess the impact of the error on this particular accident or incident by determining whether removal would have prevented the accident. 3) Assess the visibility of the error to the crew members. 4) Analyze a detailed flow chart to see if the crew had adequate knowledge to cope with the errors and anomalies that occurred. Other questions are explored to determine deficiencies. Recommendations are given for each of the situations where a problem was perceived.
Remarks: Four levels of learning are examined. These include unconsciously incompetent (the crew is unaware that they don’t know something), consciously incompetent (the crew is aware that they don’t know something), consciously competent (the crew has knowledge and skill but must apply great effort to accomplish it), and unconsciously competent (the crew has overlearned the knowledge or skill and can apply it without conscious thought).
Safety assessment stage: 5 6 8; Domains: aviation; References: [Besco, 2005], [Wiegman et al, 2000].
503. PRA (Probabilistic Risk Assessment based on FTA/ETA) — Format: I; Purpose: R; Year: 1965.
Aim/Description: Quantified analysis of low probability, high severity events. Evaluates the risks involved in the operation of a safety critical system. The risk assessment forms the basis of design decisions. It is a systematic, logical, comprehensive discipline that uses tools like FMEA, FTA, Event Tree Analysis (ETA), Event Sequence Diagrams (ESD), Master Logic Diagrams (MLD), Reliability Block Diagrams (RBD), etc. to quantify risk.
Remarks: Initially nuclear power industry, now any system with catastrophic accident potential. Useful before major design decisions. Not reasonable for the minor system aspects. Tools available, e.g. WinNUPRA, see [GAIN AFSA, 2003]. Alternative name is Probabilistic Hazard Analysis.
Safety assessment stage: 3 4 5; Domains: aviation, nuclear, chemical, defence, computer; References: [Bishop90], [FAA00], [Kirwan94], [MUFTIS3.2-I], [ΣΣ93, ΣΣ97], [Statematelatos], [GAIN AFSA, 2003], [Storey96], Wikipedia.
504. PRASM (Predictive Risk Assessment and Safety Management) — Format: I; Purpose: M; Year: 2000.
Aim/Description: Methodology for incorporating human and organisational factors in the risk evaluation and safety management in industrial systems. The methodology includes the cost-benefit analysis of the risk control measures and options to enable elaborating a rational risk control strategy for implementing more effective safety related undertakings in different time horizons.
Safety assessment stage: 4 5 6 8; Domains: nuclear?; References: [Kosmowski00].
505. PREDICT (PRocedure to Review and Evaluate Dependency In Complex Technologies) — Format: T; Purpose: R; Year: 1992.
Aim/Description: Is targeted at the relatively unpredictable or bizarre event sequences that characterise events, in that such events are incredible or not predictable until accidents give us 20:20 hindsight. The method utilises a group to identify errors, and is thus HAZOP-based, with keyword systems, followed by three categories of assumption-testing keywords. The technique essentially allows the analyst to test the assumptions underpinning the design and safety cases for plants. The method allows inserting a keyword randomly to enable the analyst to consider more ‘lateral’ possible causal connections.
Remarks: PREDICT differs from HAZOP in that it directs the analysis both inside and outside the process and places greater emphasis on identifying ways in which latent failures may reveal themselves.
Safety assessment stage: 3 4 6; References: [Kirwan98-1].
506. PRIMA (Process RIsk Management Audit) — Format: T; Purpose: O; Year: 1996.
Aim/Description: Safety management assessment linked to a Quantitative Risk Assessment type of approach. The PRIMA modelling approach provides insight into the management factors influencing the accident risk, but does not permit this insight to be translated into a detailed quantitative influence.
Safety assessment stage: 8; Domains: aviation; References: [Kennedy&Kirwan98], [Roelen&al00].
507. PRISM (Professional Rating of Implemented Safety Management) — Format: T; Purpose: O; Year: 1993.
Aim/Description: Safety culture audit tool that uses performance indicators organised into groups. The scores on the sub-sets of safety performance areas are weighted and then translated into an overall index rating.
Remarks: Qualitative. By AEA Technology.
Safety assessment stage: 8; Domains: chemical; References: [Kennedy&Kirwan98].
508. PRMA (Procedure Response Matrix Approach) — Format: T; Purpose: R; Year: 1994.
Aim/Description: Aim is to identify errors of commission (EOC), which are more closely linked to cognitive errors (global and local misdiagnoses), and slip-based EOCs during emergencies. PRMA to some extent represents a more sophisticated and detailed investigation than the FSMA, though one that is more resource-intensive. The approach has several major stages: develop a PRM for all initiating events that produce significantly different plant responses; for each PRM review the decision points in the procedural pathway; identify potential incorrect decisions resulting from misinterpretation or failure of the plant to provide the appropriate information, or due to a procedural omission (lapse).
Remarks: Related to SHERPA and SCHEMA and TEACHER-SIERRA. The approach has strong affinities with FSMA, which has faults on one axis of its matrix and symptoms on the other one. The technique is useful for considering how system status indications and procedures will affect performance in abnormal or emergency events, such as a nuclear power plant emergency scenario requiring diagnosis and recovery actions using emergency procedures. As such, it can be used to evaluate alarm system design adequacy, for example.
Safety assessment stage: 3 5; Domains: nuclear; References: [Kirwan98-1].
Probabilistic cause-effect models: see BBN (Bayesian Belief Networks).
Probabilistic Hazard Analysis: see PRA (Probabilistic Risk Assessment based on FTA/ETA).
509. Probabilistic testing — Format: T; Purpose: Ds; Year: 1995 or older.
Aim/Description: Software Testing technique. Probabilistic considerations are based either on a probabilistic test or on operating experience. Usually the number of test cases or observed operating cases is very large. Usually, automatic aids are used which concern the details of test data provision and test output supervision.
Remarks: Software verification and testing phase and validation phase. See also Tests based on Random data. See also Software Testing.
Safety assessment stage: 7; Domains: computer; References: [EN 50128], [Jones&Bloomfield&Froome&Bishop01], [Rakowsky].
Procedure Analysis: see Operator Task Analysis.
Process Charts: see FPC (Flow Process Chart).
510. Process Hazard Analysis — Format: G; Purpose: M; Year: 1989 or older.
Aim/Description: Is a means of identifying and analysing the significance of potential hazards associated with the processing or handling of certain highly hazardous chemicals. It is directed toward analyzing potential causes and consequences of fires, explosions, releases of toxic or flammable chemicals and major spills of hazardous chemicals, and it focuses on equipment, instrumentation, utilities, human actions, and external factors that might impact the process.
Remarks: Requirement of 29 CFR (Code of Federal Regulations) 1910.119 for the chemical process industry. A variety of techniques can be used to conduct a Process Hazard Analysis. See also Nuclear Explosives Process Hazard Analysis.
Safety assessment stage: 3 5; Domains: chemical; References: [FAA AC431], [ΣΣ93, ΣΣ97], Wikipedia.
511. Process simulation — Format: G; Purpose: Ds.
Aim/Description: Aim is to test the function of a software system, together with its interface to the outside world, without allowing it to modify the real world in any way. The simulation may be software only or a combination of software and hardware. This is essentially testing in a simulated operational situation. Provides a realistic operational profile; can be valuable for continuously operating systems (e.g. process control).
Remarks: Hard to accumulate sufficient tests to get a high degree of confidence in reliability. See also Computer Modelling and simulation.
Safety assessment stage: 2 5; Domains: rail; References: [EN 50128], [Rakowsky], Wikipedia.
512. PROCRU (Procedure-oriented Crew Model) — Format: T; Purpose: H; Year: 1980.
Aim/Description: Control-theoretic model that permits systematic investigation of questions concerning the impact of procedural and system design changes on the performance and safety of commercial aircraft operations in the approach-to-landing phase of a flight. It is a closed-loop system model incorporating submodels for the aircraft, the approach and landing aids provided by ATC, three crew members, and an air traffic controller.
Safety assessment stage: 2 5; Domains: ATM, aviation; References: [CBSSE90, p30], [MUFTIS3.2-I].
Production Readiness Analysis: see AoA (Analysis of Alternatives).
513. Production System Hazard Analysis — Format: T; Purpose: R; Year: 1985 or older.
Aim/Description: Production System Hazard Analysis is used to identify hazards that may be introduced during the production phase of system development which could impair safety, and to identify their means of control. The interface between the product and the production process is examined.
Remarks: The technique is appropriate during development and production of complex systems and complex subsystems.
Safety assessment stage: 3; Domains: aircraft; References: [FAA00], [ΣΣ93, ΣΣ97].
Program Proving: see Formal Proof.
Pro-SWAT (Projective Subjective Workload Assessment Technique): see SWAT (Subjective Workload Assessment Technique).
514. Protected airspace models — Format: G; Purpose: R; Year: 1996 or older.
Aim/Description: In attempting to estimate the number of conflicts, a volume of protected airspace is generally defined around each flying aircraft. Typically this volume has the shape of a right cylinder with radius r and height h. A conflict is then defined as the overlap of any two of these cylinders in airspace. The size of the volume defines the type of conflict, e.g. violation of ATC separation standards, near miss, actual collision.
Safety assessment stage: 5; Domains: ATM; References: [MUFTIS1.2].
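The overlap criterion above reduces to two distance checks: two cylinders of radius r and height h, centred on the aircraft, overlap when the horizontal separation is below 2r and the vertical separation is below h. A minimal sketch; the unit choices (NM horizontally, ft vertically) are illustrative:

```python
import math

def in_conflict(ac1, ac2, r, h):
    """ac = (x, y, z): horizontal position in NM, altitude z in ft.
    True iff the two protected cylinders overlap."""
    dx, dy, dz = ac1[0] - ac2[0], ac1[1] - ac2[1], ac1[2] - ac2[2]
    horizontal = math.hypot(dx, dy)
    return horizontal < 2 * r and abs(dz) < h

# With r = 2.5 NM and h = 1000 ft, overlap corresponds to loss of the
# usual 5 NM / 1000 ft ATC separation.
print(in_conflict((0, 0, 30000), (3, 0, 30500), r=2.5, h=1000))  # True
print(in_conflict((0, 0, 30000), (6, 0, 30500), r=2.5, h=1000))  # False
```

Smaller r and h in the same function would instead classify near misses or actual collisions, as the entry notes.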
515. Prototyping — Format: G; Year: 1982 or older.
Aim/Description: Prototyping, or Prototype Development, provides a Modelling/Simulation analysis of the constructed early pre-production products, so that the developer may inspect and test an early version. Aim is to check the feasibility of implementing the system against the given constraints, and to communicate the interpretation of the system to the customer, in order to locate misunderstandings.
Remarks: This technique is appropriate during the early phases of pre-production and test. Valuable if the system requirements are uncertain or the requirements need strict validation. Related to performance simulation. Tools available. Variations are High-fidelity Prototyping, Low-fidelity Prototyping, Rapid Prototyping, Video Prototyping, Wizard of OZ Technique, Scale Model, Storyboarding, Animation.
Safety assessment stage: 7; Domains: many; References: [Bishop90], [FAA00], [ΣΣ93, ΣΣ97], Wikipedia.
PSA (Probabilistic Safety Assessment): see PRA (Probabilistic Risk Assessment based on FTA/ETA).
516. PSSA (Preliminary System Safety Assessment) according to ARP 4761 — Format: T; Purpose: Dh Ds; Year: 1994.
Aim/Description: The PSSA according to ARP 4761 establishes specific system and item safety requirements and provides preliminary indication that the anticipated system architecture can meet those safety requirements. The PSSA is updated throughout the system development process. A PSSA is used to ensure completeness of the failure conditions list from the FHA and complete the safety requirements. It is also used to demonstrate how the system will meet the qualitative and quantitative requirements for the various failure conditions identified.
Remarks: This PSSA is a refinement and extension of JAR-25 steps (though JAR-25 does not use the term PSSA). It covers both hardware and software.
Safety assessment stage: 4 5 6; Domains: aircraft, avionics; References: [ARP 4754], [ARP 4761], [Klompstra&Everdij97], [Lawrence99].
517. PSSA (Preliminary System Safety Assessment) according to EATMP SAM — Format: T; Purpose: R; Year: 2002.
Aim/Description: The PSSA according to EATMP SAM determines that the proposed system architecture is expected to achieve the safety objectives. PSSA examines the proposed system architecture and determines how faults of system elements and/or external events could cause or contribute to the hazards and their effects identified in the FHA. Next, it supports the selection and validation of mitigation means that can be devised to eliminate, reduce or control the hazards and their end effects. System Safety Requirements are derived from Safety Objectives; they specify the potential means identified to prevent or to reduce hazards and their end effects to an acceptable level in combination with specific possible constraints or measures. Five substeps are identified: 1) PSSA initiation; 2) PSSA planning; 3) Safety requirements specification; 4a) PSSA validation; 4b) PSSA verification; 4c) PSSA assurance process; 5) PSSA completion. Most of these steps consist of subtasks.
Remarks: This PSSA is a refinement and extension of JAR-25 steps and of the PSSA according to ARP 4761, but its scope is extended to Air Navigation Systems, covering AIS (Aeronautical Information Services), SAR (Search and Rescue) and ATM (Air Traffic Management).
Safety assessment stage: 1 4 5 6; Domains: ATM; References: [EHQ-SAM], [Review of SAM techniques, 2004].
518. PTS (Predetermined Time Standards) [Format: T; Purpose: H; Year: 1986 or older]
Aim/Description: PTSs are internationally recognised time standards used for work measurement. They are employed to estimate performance times for tasks that can be decomposed into smaller units for which execution times can be determined or estimated. The time necessary to accomplish these fundamental motions should be constants.
Remarks: Also referred to as Predetermined Motion Time System (PMTS).
Safety assessment stage: 5. Domains: defence. Application: x.
References: [MIL-HDBK]; Wikipedia
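The elemental-time arithmetic behind a PTS can be sketched as below; the motion names and TMU values are invented for illustration and are not taken from any published PTS table (1 TMU = 0.036 s is the MTM convention):

```python
# Illustrative sketch of a Predetermined Time Standard calculation:
# a task is decomposed into fundamental motions, each with a fixed
# (hypothetical) time value in TMU (Time Measurement Units).
TMU_SECONDS = 0.036  # 1 TMU = 0.036 s in MTM-style systems

# Hypothetical elemental times; real PTS tables are published standards.
motion_times_tmu = {"reach": 8.7, "grasp": 2.0, "move": 9.2,
                    "position": 16.2, "release": 2.0}

def task_time_seconds(motions):
    """Estimated task time: the sum of the constant elemental motion times."""
    return sum(motion_times_tmu[m] for m in motions) * TMU_SECONDS

# A pick-and-place task decomposed into fundamental motions:
task = ["reach", "grasp", "move", "position", "release"]
print(round(task_time_seconds(task), 3))
```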
519. PUMA (Performance and Usability Modelling in ATM) [Format: I; Purpose: H; Year: about 1995]
Aim/Description: PUMA is a toolset designed to enable the prediction and description of controller workload for ATC scenarios. It is capable of assessing the effect on controller workload of various computer assistance tools. PUMA uses observational task analysis to try to capture all the relevant information about cognitive activities in a task, usually based on video analysis of someone (i.e. an ATCO) performing the task. Each task or activity is then classified by a PUMA analyst and its impact on workload calculated as a function of its usage of cognitive resources, and as a function of other activities' (competing) resource requirements. Some tasks or activities will conflict more with each other as they are demanding the same cognitive resources, as defined in a 'conflict matrix' within PUMA. Central to the PUMA methodology is a workload prediction algorithm, which calculates how different task types will impact on workload alone, and together. This algorithm is based on the Wickens (1992) multiple resource theory. The output is a prediction of MWL (Mental Workload) as it changes throughout the overall task.
Remarks: The PUMA Toolset was developed for NATS by Roke Manor Research Limited. PUMA has been applied to a number of future operational concepts, providing useful information in terms of their likely workload impacts, and potential improvements in the designs of future tools for the ATCO. The motivation for using PUMA stems from the fact that real time simulation is resource intensive, requiring a lot of manpower to plan, prepare for, conduct, analyse and report each trial. It is therefore useful to apply the PUMA 'coarse filter' to new operational concepts before expensive real time simulation. This allows the more promising and the less promising options to be identified, before proceeding with the better options, to full simulation.
Safety assessment stage: 2 5. Domains: ATC. Application: x.
References: [Kirwan&al97]; [GAIN ATM, 2003]; [FAA HFW]
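A toy sketch of a conflict-matrix workload computation in the spirit of the multiple-resource approach described above; the resources, demand values and conflict weights are all invented, and this is not the proprietary PUMA algorithm:

```python
import itertools

# Invented resource channels and per-task demands (0..1); not PUMA's own.
demand = {
    "monitor_radar": {"visual": 0.6, "cognitive": 0.4},
    "rt_call":       {"auditory": 0.5, "cognitive": 0.3, "motor": 0.1},
}

# conflict[(r1, r2)] = extra workload when concurrent tasks use r1 and r2.
conflict = {("visual", "visual"): 0.8, ("cognitive", "cognitive"): 1.0,
            ("auditory", "cognitive"): 0.4}

def workload(active_tasks):
    """Single-task demands plus penalties for concurrent resource conflicts."""
    total = sum(v for t in active_tasks for v in demand[t].values())
    for t1, t2 in itertools.combinations(active_tasks, 2):
        for r1, d1 in demand[t1].items():
            for r2, d2 in demand[t2].items():
                key = tuple(sorted((r1, r2)))
                total += conflict.get(key, 0.0) * d1 * d2
    return total

print(round(workload(["monitor_radar"]), 2))             # one task alone
print(round(workload(["monitor_radar", "rt_call"]), 2))  # higher when tasks compete
```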
520. Pure Hazard Brainstorming [Format: T; Purpose: R; Year: 1996]
Aim/Description: Hazard identification through “pure” brainstorming with experts, generally along scenarios. Allows identification of many hazards that are unimaginable for some other approaches. Rule 1: no analysis during the session and no solving of hazards; Rule 2: criticism is forbidden; Rule 3: use a small group; Rule 4: brainstormers should not be involved in the operation’s development; need to play devil’s advocates; current expertise is better than past experience; Rule 5: moderator should watch the basic rules; should make the brainstorm as productive as possible; needs to steer the hazard identification subtly; write short notes on flip-over or via beamer; Rule 6: short sessions and many coffee breaks and... bottles of wine for the most creative hazard; the last hazard; and inspiration, if necessary...
Remarks: Also referred to as Scenario-based Hazard brainstorming or TOPAZ-based hazard brainstorming.
Safety assessment stage: 3. Domains: ATM. Application: x x x x x.
References: [DeJong04]
Q Sort: See Card Sorting
521. QCT (Quantified Causal Tree) [Format: T; Purpose: R; Year: 1996 or older]
Aim/Description: Bayesian method to determine probability of top event from the probabilities of the basic events of a causal tree.
Safety assessment stage: 5. Domains: aviation. Application: x.
References: [Loeve&Moek&Arsenis96]
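For a causal tree with independent basic events, the top-event probability can be computed gate by gate, as in this minimal sketch (the tree shape and probabilities are invented for illustration):

```python
# Sketch: top-event probability from the basic events of a causal tree,
# assuming independent basic events.
def prob(node, basic):
    if isinstance(node, str):                 # leaf: a basic event
        return basic[node]
    gate, children = node[0], node[1:]
    ps = [prob(c, basic) for c in children]
    if gate == "AND":                         # all children must occur
        result = 1.0
        for p in ps:
            result *= p
        return result
    if gate == "OR":                          # at least one child occurs
        result = 1.0
        for p in ps:
            result *= (1.0 - p)
        return 1.0 - result
    raise ValueError(gate)

# Invented example: sensor failure combined with a missed or failed alert.
basic = {"sensor_fails": 0.01, "crew_misses_cue": 0.1, "alert_fails": 0.05}
tree = ("AND", "sensor_fails", ("OR", "crew_misses_cue", "alert_fails"))
print(prob(tree, basic))
```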
522. QRAS (Quantitative Risk Assessment System) [Format: T; Purpose: R; Year: 1998]
Aim/Description: QRAS is a PC-based software tool for conducting a Probabilistic Risk Assessment (PRA) on a system. The tool helps in modelling deviations from the system’s nominal functions, the timing and likelihood of such deviations, potential consequences, and scenarios leading from initial deviations to such consequences.
Remarks: Tools available, e.g. WinNUPRA, see [GAIN AFSA, 2003]. Developed by University of Maryland and by NASA for space missions.
Safety assessment stage: 4 5. Domains: space. Application: x.
References: [GAIN ATM, 2003]; [GAIN AFSA, 2003]
523. Quality Assurance [Format: G; Year: 1930 and older]
Aim/Description: Quality Assurance (QA) refers to a program for the systematic monitoring and evaluation of the various aspects of a project, service, or facility to ensure that standards of quality are being met. Two key principles characterise QA: "fit for purpose" (the product should be suitable for the intended purpose) and "right first time" (mistakes should be eliminated). Aim is to ensure that pre-determined quality control activities are carried out throughout development.
Remarks: Tools available. Very old approach; it may even date back to the time of construction of the Egyptian pyramids (2500 BC).
Safety assessment stage: 8. Domains: computer. Application: x x.
References: [Bishop90]; Wikipedia
524. Questionnaires [Format: G; Year: 1975 or older]
Aim/Description: Questionnaires are sets of predetermined questions arranged on a form and typically answered in a fixed sequence. The questionnaire is the basic tool for obtaining subjective data (provided the questions are unbiased). Questionnaires provide a structured means of collecting information from system users. They usually consist of specific questions about the performance of the system and human interface.
Remarks: Of all the subjective methods, the questionnaire is the most frequently used and is invaluable in the expedient collection of human error data.
Safety assessment stage: 8. Domains: many. Application: x x x x x.
References: [KirwanAinsworth92]; [FAA HFW]; [MIL HDBK]; Wikipedia
QUORUM Perilog: See Data Mining
525. Radiological Hazard Safety Analysis [Format: T; Purpose: R; Year: 1997 or older]
Aim/Description: Structured approach to characterisation and categorisation of radiological hazards.
Remarks: Broadly applicable to all facilities engaged in managing radioactive materials.
Safety assessment stage: 3. Domains: nuclear, chemical. Application: x.
References: [ΣΣ93, ΣΣ97]
526. RADS (Radar Analysis Debriefing System) [Format: T; Purpose: T; Year: 2003]
Aim/Description: RADS is a PC-based, real-time tool for playback of radar and voice in a highly intuitive, three-dimensional format. It can be used for analysis of incidents and/or training and is adaptable to any ATC environment.
Remarks: Developed by NAV Canada. RADS is based on Flightscape’s Recovery, Analysis and Presentation System (RAPS).
Safety assessment stage: 8. Domains: ATC. Application: x x x.
References: [GAIN ATM, 2003]
527. RAIT (Railway Accident Investigation Tool) [Format: I; Purpose: O; Year: 1993]
Aim/Description: Tool developed to investigate accidents by identifying contributions to and aggregate Railway Problem Factors, i.e. representative of significant organisational and managerial root causes of railway infrastructure accidents. RAIT starts with the accident outcome and then traces back to the active and latent failures that originated higher up within the organisation.
Remarks: Developed for use at British Rail by James Reason and others. Also used as basis for training courses. MAIT (Marine Accident Investigation Tool) is a derived version for Marine safety.
Safety assessment stage: 8. Domains: rail. Application: x x x x x.
References: [PROMAI5]; [RAIT slides]; [Reason et al, 1994]
528. RAMS Plus (Reorganized ATC Mathematical Simulator) [Format: T; Purpose: R; Year: 1995]
Aim/Description: RAMS Plus is a PC-based simulation tool that allows the users to create a model of an air traffic control system, ATC procedures, 4D performance of over 300 aircraft, 4D conflict detection and rule-based conflict resolution, and controller actions based on the current demand. It includes controller workload assignment based on dynamic system conditions, TMA runway sequencing and holding stack operations, airspace routing, free flight and Reduced Vertical Separation Minima zones, stochastic traffic generation, and graphical animation. The tool produces a detailed list of events created in text form for analysis.
Remarks: The original RAMS is a fast-time simulation tool developed by the Eurocontrol Experimental Center (EEC) at Bretigny (France) and CACI Inc. in 1993. RAMS official release 2.0 was carried out in November 1995. RAMS Plus is developed, supported, and distributed exclusively by ISA Software.
Safety assessment stage: 7. Domains: ATM. Application: x x x.
References: [GAIN ATM, 2003]; [FAA HFW]
Rapid Prototyping: See Prototyping
529. Rapid Risk Ranking [Format: T; Purpose: R]
Aim/Description: Rapid qualitative judgements of the expected frequency and consequences of the identified hazards enable trivial hazards to be screened out, such that the subsequent quantitative work focuses on the significant hazards only.
Remarks: See also Relative Ranking.
Safety assessment stage: 5. Domains: chemical. Application: x.
References: [EQE Web]
RAPS and Insight (Recovery, Analysis, & Presentation System & Insight): See Flight Data Monitoring Analysis and Visualisation
530. RAS (Requirements Allocation Sheets) [Format: T; Purpose: H; Year: 1971 or older]
Aim/Description: Requirements allocation sheets are used to translate functions into performance and design requirements. The functional analysis (usually a Function Flow Diagram) is used as a basis for the data entered on the sheets. RAS are normally prepared for each function block. In some cases, closely related functions may be analysed using the same RAS. Design requirements are identified in terms of the purpose of the function, parameters of the design, design constraints, and requirements for reliability, human performance, accuracy, safety, operability, and maintainability. Thus the RAS bridges the systems engineering activities of function analysis and synthesis. The format of an RAS is not fixed. Each RAS documents the performance requirements and the design requirements for a specific system function.
Remarks: The RAS is most useful during concept development and design definition. It must be preceded by a functional analysis and some system design synthesis. It provides the basis of detailed task analyses, performance prediction, and interface and workspace design. It is less useful during the latter stages of design and development.
Safety assessment stage: 4 6. Domains: defence. Application: x x.
References: [HEAT overview]
531. RASRAM (Reduced Aircraft Separation Risk Assessment Model) [Format: T; Purpose: R; Year: 1997]
Aim/Description: RASRAM is used for quantitative assessment of the increase in risk of aircraft operations due to reduced separation requirements, and/or reduced risk due to new surveillance or navigational technology. It is a PC-based tool that is based on a large database of aircraft data, incorporating aircraft and air traffic controller data. The overall organisation of RASRAM is a fault-tree analysis of the major failure modes in specific operational scenarios. The approach includes time-budget analyses of dynamic interactions among multiple participants in a scenario, each with defined roles, responsibilities, information sources, and performance functions. Examples are response times for pilots and air traffic controllers. The methodology works directly with the functional form of probability distributions, rather than relying on Monte Carlo simulation techniques. The probability of a Near Mid-Air Collision (NMAC) is computed, and from this, the probability of a collision, using a factor of collisions/NMAC. Probability distributions of lateral miss distance and simultaneous runway occupancy are also computed.
Remarks: RASRAM was developed by Rannoch Corporation.
Safety assessment stage: 3 4 5. Domains: ATM. Application: x x.
References: [GAIN ATM, 2003]; [Sheperd97]
532. Rating Scales [Format: T; Purpose: H; Year: from 1930]
Aim/Description: A Rating Scale is a set of categories designed to elicit information about a quantitative or a qualitative attribute. Generally, it couples a qualitative description of a criterion to a numerical measure. Various specific Rating Scales can be identified [FAA HFW], e.g.
• Bedford Workload Scale (Workload)
• Behaviorally Based Performance Rating Scale
• China Lake Situational Awareness Rating Scale
• Cooper Harper Rating Scale (Workload)
• Dynamic Workload Scale (Workload)
• Hart & Bortolussi Rating Scale (Workload)
• Hart & Hauser Rating Scale (Workload)
• Haworth-Newman Avionics Display Readability Scale (Investigation of displays)
• Likert Scale (Agreement)
• NASA TLX (NASA Task Load Index)
• POMS (Profile of Mood States)
• SA/BARS (Situation Awareness Behavioural Rating Scales)
• SARS (Situation Awareness Rating Scales)
• Semantic Differential Scales (Perception, Attitude/Agreement)
• SUS (System Usability Scale) (User satisfaction with software)
• Thurstone Scale (Attitude/Agreement)
Remarks: The Dynamic Workload Scale is used in aircraft certification, e.g. by Airbus. See also SART.
Safety assessment stage: 5. Domains: many. Application: x.
References: [FAA HFW]; Wikipedia
533. RBD (Reliability Block Diagrams) [Format: T; Purpose: R; Year: 1972]
Aim/Description: Technique related to FTA where one is looking for a success path instead of failure path. Aim is to model, in a diagrammatical form, the set of events that must take place and conditions which must be fulfilled for a successful operation of a system or task.
Remarks: Alternative name: SDM (Success Diagram Method). Useful for the analysis of systems with relatively straightforward logic, but inferior to fault tree analysis for more complex systems. In some references referred to as Dependence Diagrams (DD). RBD is also sometimes referred to as equivalent to a Fault Tree without repeated events. Tools available, but tools for FTA may also be useful.
Safety assessment stage: 4. Domains: aircraft. Application: x.
References: [Bishop90]; [EN 50128]; [FT handbook02]; [MUFTIS3.2-I]; [Sparkman92]; Wikipedia
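A minimal sketch of evaluating a success path with independent blocks: series blocks multiply reliabilities, parallel (redundant) blocks multiply unreliabilities. The example system is invented:

```python
# Sketch of evaluating a Reliability Block Diagram with independent blocks:
# SERIES = all blocks must work (the success path passes through each);
# PARALLEL = at least one redundant branch must work.
def reliability(block):
    kind, parts = block
    if kind == "UNIT":
        return parts                       # parts is the unit reliability
    rs = [reliability(p) for p in parts]
    if kind == "SERIES":
        out = 1.0
        for r in rs:
            out *= r
        return out
    if kind == "PARALLEL":
        out = 1.0
        for r in rs:
            out *= (1.0 - r)
        return 1.0 - out
    raise ValueError(kind)

# Hypothetical system: a sensor in series with two redundant processors.
system = ("SERIES", [("UNIT", 0.99),
                     ("PARALLEL", [("UNIT", 0.95), ("UNIT", 0.95)])])
print(round(reliability(system), 6))
```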
534. RCA (Root Cause Analysis) [Format: T; Purpose: R; Year: 1981 or older]
Aim/Description: This method identifies causal factors to accident or near-miss incidents. The technique goes beyond the direct causes to identify fundamental reasons for the fault or failure; it asks why things happen, instead of treating the symptoms. It is a systematic process of gathering and ordering all relevant data about counter-quality within an organisation; then identifying the internal causes that have generated or allowed the problem; then analysing for decision-makers the comparative benefits and cost-effectiveness of all available prevention options. To accomplish this, the analysis methodology provides visibility of all causes, an understanding of the nature of the causal systems they form, a way to measure and compare the causal systems, an understanding of the principles that govern those causal systems, and a visibility of all internal opportunities for the organisation to control the systems.
Remarks: Root causes are the underlying contributing causes for observed deficiencies that should be documented in the findings of an investigation. Several training courses, tools and supporting packages are (commercially) available.
Safety assessment stage: 8. Domains: aviation, ATM, health and many other. Application: x x x x.
References: [FAA00]; several Internet sources; [ΣΣ93, ΣΣ97]; Wikipedia
535. RCM (Reliability Centered Maintenance) [Format: T; Purpose: Dh; Year: 1990]
Aim/Description: RCM is the concept of developing a maintenance scheme based on the reliability of the various components of the system or product in question. RCM can improve the efficiency of the system undergoing maintenance, and all other products or processes that interact with that system, allowing one to anticipate the times when the system is down for maintenance, and scheduling other activities or processes accordingly. RCM can help to inform the safety of all aspects of maintenance operations, including determining what maintenance intervals to adopt to maximise safety, and what combinations of concurrent maintenance of equipment sub-systems are risky. It optimises preventive maintenance programmes in three phases: 1) ranking the components and evaluation of failure mode criticality; 2) identification of degradation mechanisms at work; 3) for each critical failure, determine most efficient reliability-based and cost-based maintenance task.
Safety assessment stage: 2 3 5 6. Domains: aircraft, electricity, defence, manufacturing, nuclear. Application: x x.
References: [Cotaina&al00]; [Moubray00]; [Rausand&Vatn98]; [NASA-RCM]; [Relex-RCM]; Wikipedia
Real-Time Simulation: See Computer modelling and simulation
536. Real-time Yourdon [Format: I; Purpose: D; Year: 1985]
Aim/Description: Complete software development method consisting of specification and design techniques oriented towards the development of real-time systems. The development scheme underlying the technique assumes a three phase evolution of a system being developed: 1) building an ‘essential model’ that describes the behaviour required by the system; 2) building an implementation model which describes the structures and mechanisms that, when implemented, embody the required behaviour; 3) actually building the system in hardware and software.
Remarks: Worth considering for real-time systems without a level of criticality that demands more formal approaches. Related to SADT. Tools available. Software requirements specification phase and design & development phase.
Safety assessment stage: 6. Domains: computer. Application: x.
References: [Bishop90]; [EN 50128]; [Rakowsky]
537. REASON 5 [Format: T; Purpose: O; Year: 2002]
Aim/Description: Reason 5 is a knowledge management tool that has a number of components designed to help an organisation identify, communicate and solve issues: a quick risk assessment tool that directs activity when issues arise; a guided investigation tool that gauges itself to the time prudent to spend on the issue (based upon the risk assessment); 15-30 minute mode of investigation (REASON Frontline); 2-8 hour mode of investigation (REASON Express); 1 day plus mode of investigation (REASON Pro); REASON Lesson Learned System; REASON Situational Profiles for every employee; Corrective Action Writing; Corrective Action Tracking.
Remarks: REASON 6 was released in 2004.
Safety assessment stage: 8. Domains: medical. Application: x x x.
References: [GAIN AFSA, 2003]; [FAA HFW]
Reason’s model: See Swiss Cheese Model.
538. Recovery blocks, or Recovery Block Programming [Format: T; Purpose: D; Year: 1975 or ?]
Aim/Description: Aim is to increase the likelihood of the program performing its intended function. A number of routines are written (in isolation) using different approaches. In addition, an Acceptance Test is provided and the first routine to satisfy the acceptance test is selected.
Remarks: Effective in situations without strict temporal constraints. Software architecture phase.
Safety assessment stage: 6. Domains: computer. Application: x.
References: [Bishop90]; [EN 50128]; [Rakowsky]; [Sparkman92]; [SSCS]
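The pattern can be sketched as follows; the routines and the acceptance test are invented toy examples:

```python
# Sketch of the recovery-block pattern: independently written alternates
# are tried in order, and the first result that passes the acceptance
# test is used.
def primary_sqrt(x):
    return x ** 0.5            # preferred routine

def alternate_sqrt(x, iterations=20):
    guess = x / 2.0 or 1.0     # independently coded Newton iteration
    for _ in range(iterations):
        guess = 0.5 * (guess + x / guess)
    return guess

def acceptance_test(x, result):
    return abs(result * result - x) < 1e-6

def recovery_block(x, routines=(primary_sqrt, alternate_sqrt)):
    for routine in routines:
        try:
            result = routine(x)
            if acceptance_test(x, result):
                return result   # first routine to pass the test is selected
        except Exception:
            continue            # a failed alternate: fall through to the next
    raise RuntimeError("all alternates failed the acceptance test")

print(recovery_block(2.0))
```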
539. RECUPERARE [Format: T; Purpose: R; Year: 2000]
Aim/Description: Model based on systematic analysis of events including Human Reliability in Nuclear Plants. Model puts emphasis on the recovery process during events and uses a classification for the default-recovery links and delays for detection, diagnosis and actions. The method aims at the following objectives: 1) Identify the main mechanisms and parameters which characterise events occurring in the French PWRs (Pressurised Water Reactors) during one year; 2) Provide a way of classifying deficiencies and associated recoveries; 3) Provide a way of classifying events according to previous parameters; 4) Record these data in a base to make trend analyses.
Remarks: Developed by IRSN (Institute for Radiological Protection and Nuclear Safety) for operating experience feedback analysis. For the time being, IRSN emphasises the difficulty in connecting performance indicators to safety.
Safety assessment stage: 3 5 7. Domains: nuclear. Application: x.
References: [Matahri02]; [Matahri03]; [Straeter01]
540. REDA (Ramp Error Decision Aid) [Format: T; Purpose: M; Year: 1999?]
Aim/Description: The REDA process focuses on a cognitive approach to understand how and why the event occurred, not who was responsible. REDA contains many analysis elements that enable the user to conduct an in-depth investigation, summarise findings and integrate them across various events. The REDA data organisation enables operators to track their progress in addressing the issues revealed by the analyses. REDA is made up of two components: the interview process and contributing factors analysis. It consists of a sequence of steps that identify key contributing factors to ramp crew errors and the development of effective recommendations aimed at the elimination of similar errors in the future.
Remarks: Developed by Boeing. REDA is based on MEDA.
Safety assessment stage: 8. Domains: aviation. Application: x x.
References: [GAIN AFSA, 2003]; [Reda example]; [Balk&Bossenbroek, 2010]; [REDA User Guide]
541. Redundancy for Fault Detection [Format: T; Purpose: D; Year: 1980?]
Aim/Description: By employing redundancy, checks may be made for differences between units to determine sub-system failures.
Remarks: Useful in safety computer applications.
Safety assessment stage: 6. Domains: computer. Application: x.
References: [Bishop90]
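A minimal sketch of fault detection by comparison of redundant units, here a triplicated unit with majority voting (the unit outputs are invented):

```python
from collections import Counter

# Sketch: with three redundant units, a majority vote both masks a single
# faulty output and identifies the disagreeing (suspect) unit.
def vote(outputs):
    """Return (majority value, indices of units that disagree with it)."""
    majority, _ = Counter(outputs).most_common(1)[0]
    suspects = [i for i, v in enumerate(outputs) if v != majority]
    return majority, suspects

value, suspects = vote([42, 42, 41])   # unit 2 has drifted
print(value, suspects)
```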
542. Refined Reich collision risk model [Format: T; Purpose: R; Year: 1993]
Aim/Description: Refinement of Reich collision risk model (CRM) to evaluate risk of collision between aircraft. Replaces the two restrictive Reich assumptions by one less restrictive one.
Safety assessment stage: 5. Domains: ATM. Application: x.
References: [Bakker&Blom93]; [Mizumachi&Ohmura77]; [MUFTIS3.2-II]
543. REHMS-D (Reliable Human Machine System Developer) [Format: I; Purpose: M; Year: about 1999]
Aim/Description: REHMS-D uses a six-stage system engineering process, a cognitive model of the human, and operational sequence diagrams (OSD) to assist the designer in developing human-machine interfaces subject to top-level reliability or yield requirements. Through its system engineering process, REHMS-D guides the designer through the understanding of customer requirements, the definition of the system, the allocation of human functions, the basic design of human functions, the assignment of job aids, and the design of tests to verify that the human functions meet the allocated reliability requirements. REHMS-D can be used for both the synthesis of new systems and the analysis of existing systems.
Remarks: REHMS-D is called a major advance in system and reliability engineering that has broad application to systems and processes. It can be used to synthesise or analyse radar and sonar systems, control rooms and control systems, communications systems, geographic information systems, manufacturing processes, maintenance processes, biomedical systems, transportation systems, and other systems and processes that involve human-computer interfaces.
Safety assessment stage: 2 6. Domains: defence, manufacturing, transport. Application: x.
References: [MIL-HDBK]; [FAA HFW]; [LaSala, 2003]
Reich model: See CRM (Collision Risk Model (ICAO)).
544. Relative Ranking [Format: T; Purpose: Dh; Year: 1992 or older]
Aim/Description: Rank hazardous attributes (risk) of process. Hazards can be ranked based on e.g. frequency of occurrence or on severity of consequences, etc. The ranking may lead to prioritisation of mitigating measures.
Remarks: Any system wherein a ranking approach exists or can be constructed. See also PC (Paired Comparisons). See also Rapid Risk Ranking.
Safety assessment stage: 5. Domains: nuclear. Application: x.
References: [ΣΣ93, ΣΣ97]
Relevance Diagram: Equal to Influence Diagram
Relevance Tree: See How-How Diagram
Relex Human Factors Risk Analysis: See HF PFMEA
545. Reliability Growth Models [Format: T; Purpose: Ds; Year: 1972]
Aim/Description: Aim is to predict the current software failure rate and hence the operational reliability. After a software component has been modified or developed, it enters a testing phase for a specified time. Failures will occur during this period, and software reliability can be calculated from various measures such as number of failures and execution time to failure. Software reliability is then plotted over time to determine any trends. The software is modified to correct the failures and is tested again until the desired reliability objective is achieved.
Remarks: Some problems have been reported during application. Tools available. See Musa model for an example.
Safety assessment stage: 5. Domains: computer. Application: x.
References: [Bishop90]; [Sparkman92]
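Since the Remarks point to the Musa model as an example, here is a sketch of its basic execution-time form, in which failure intensity decays exponentially with executed test time; the parameter values are invented for illustration:

```python
import math

# Musa basic execution-time model:
#   lambda(tau) = lambda0 * exp(-lambda0 * tau / nu0)
# where lambda0 is the initial failure intensity and nu0 the total
# expected number of failures. Parameter values below are invented.
def failure_intensity(tau, lambda0=10.0, nu0=100.0):
    return lambda0 * math.exp(-lambda0 * tau / nu0)

def expected_failures(tau, lambda0=10.0, nu0=100.0):
    return nu0 * (1.0 - math.exp(-lambda0 * tau / nu0))

for tau in (0.0, 10.0, 30.0):   # CPU-hours of test execution
    print(round(failure_intensity(tau), 3), round(expected_failures(tau), 1))
```

As testing proceeds and faults are corrected, the predicted intensity falls, which is the "reliability growth" being plotted over time.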
546. REPA (Risk and Emergency Preparedness Analysis) [Format: I; Purpose: R; Year: 1993]
Aim/Description: Aim is to get a total overview of the risks involved for concept selection and to check compliance with acceptance criteria. REPA consists of two parts: risk analysis, and emergency-preparedness analysis. The risk analysis involves four activities: 1) System description; 2) Identification of hazards and listing of initial events; 3) Accident modelling, consequence evaluation and assessment of probabilities; 4) Evaluation of risk and comparison with risk-acceptance criteria. The emergency-preparedness analysis identifies dimensioning accidental events, i.e. major accidents which generate the most severe accidental loads that the safety barriers must be able to withstand.
Safety assessment stage: 2 3 4 5. Application: x x.
References: [Kjellen, 2000]
547. Requirements Criticality Analysis [Format: T; Purpose: Ds; Year: 1996 or older]
Aim/Description: Criticality analysis identifies program requirements that have safety implications. A method of applying criticality analysis is to analyse the hazards of the software/hardware system and identify those that could present catastrophic or critical hazards. This approach evaluates each program requirement in terms of the safety objectives derived for the software component.
Safety assessment stage: 3. Domains: avionics. Application: x.
References: [FAA00]; [NASA-GB-1740.13-96]
548. Re-try Fault Recovery [Format: T; Purpose: M; Year: 1990 or older]
Aim/Description: Aim is to attempt functional recovery from a detected fault condition by re-try mechanisms, i.e. re-executing the same code or by re-booting. There are three general categories of methods used to recover to a previous state: (1) checkpointing, (2) audit trails, and (3) recovery cache.
Remarks: Should be used with care and always with full consideration of the effect on time-critical events, and the effect of lost data during re-boot. Combine with software time-out checks or watchdog timers. Software architecture phase.
Safety assessment stage: 6. Domains: computer. Application: x.
References: [Bishop90]; [EN 50128]; [Rakowsky]; [Sparkman92]
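A sketch of re-try recovery from a checkpoint, with an invented transient fault injected to show the re-execution path:

```python
# Sketch of re-try fault recovery with checkpointing: on a detected fault
# the routine is re-executed from the last saved good state rather than
# from scratch (the transient-fault simulation is invented).
def retry_with_checkpoint(step, items, max_retries=3):
    checkpoint = {"done": 0, "total": 0}          # last known good state
    while checkpoint["done"] < len(items):
        try:
            total = step(items[checkpoint["done"]], checkpoint["total"])
            checkpoint = {"done": checkpoint["done"] + 1, "total": total}
            max_retries = 3                       # reset budget after success
        except RuntimeError:
            max_retries -= 1                      # re-try from the checkpoint
            if max_retries == 0:
                raise
    return checkpoint["total"]

fault_budget = [True]                             # one transient fault to inject
def add(item, total):
    if fault_budget and item == 3:
        fault_budget.pop()
        raise RuntimeError("transient fault")
    return total + item

print(retry_with_checkpoint(add, [1, 2, 3, 4]))
```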
549. Return to Manual Operation [Format: T; Purpose: D; Year: 1990 or older]
Aim/Description: Aim is to provide the operator or supervisor the information and the means to perform the function of the failed automatic control system.
Remarks: Useful provided it is used with care.
Safety assessment stage: 6. Domains: computer. Application: x.
References: [Bishop90]
550. RFA (Repetitive Failure Analysis) [Format: T; Purpose: R; Year: 1991 or older]
Aim/Description: Aim is to model recurring events that prevent the system from performing its function. It provides a systematic approach to address, evaluate and correct repetitive failures.
Remarks: Currently used in nuclear industry. Potential for transfer to other fields.
Safety assessment stage: 4 6. Domains: nuclear. Application: x.
References: [ΣΣ93, ΣΣ97]
RIA (Risk Influence Analysis): See RIF diagram (Risk Influencing Factor Diagram)
551. RIF diagram (Risk Influencing Factor Diagram), or RIA (Risk Influence Analysis) [Format: T; Purpose: R; Year: 1998]
Aim/Description: RIFs are classified according to Operational RIFs, Organisational RIFs and Regulatory related RIFs. The Operational RIFs are divided into technical, human and external factors. The RIFs are next arranged in an (Accident) Frequency Influence Diagram and an (Accident) Consequence Influence Diagram. All RIFs are characterized by their status (present state) and their effect (influence) on other RIFs. Arrows indicate the influence between one RIF and another, usually at the next upward level.
Remarks: Alternative to fault trees and event trees. A RIF is a set of relatively stable conditions influencing the risk. It is not an event, and it is not a state that fluctuates over time. RIFs are thus conditions that may be influenced or improved by specific actions.
Safety assessment stage: 5. Domains: aviation, space, rail, offshore. Application: x x.
References: [Vinnem00]; [Hokstad et al, 1999]; [Albrechtsen&Hokstad, 2003]
552. Risk Classification Schemes [Format: T; Purpose: R]
Aim/Description: These are matrices that relate the severity of risk or hazard to its maximum tolerated probability.
Remarks: These exist for different domains and different types of systems, see the references for a collection. See also Safety Targets Setting.
Safety assessment stage: 1. Domains: many. Application: x.
References: [Storey96]
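Such a scheme can be sketched as a lookup; the severity and probability classes and the tolerability limits below are invented for illustration, not taken from [Storey96] or any particular standard:

```python
# Sketch of a risk classification scheme: a matrix relating hazard
# severity to the maximum tolerated probability class (all class names
# and limits here are invented).
max_tolerated = {          # severity -> most probable class still tolerable
    "catastrophic": "extremely improbable",
    "hazardous":    "extremely remote",
    "major":        "remote",
    "minor":        "probable",
}
# probability classes ordered from most to least likely
order = ["probable", "remote", "extremely remote", "extremely improbable"]

def acceptable(severity, probability):
    """A hazard is acceptable if it is no more likely than the limit."""
    return order.index(probability) >= order.index(max_tolerated[severity])

print(acceptable("catastrophic", "remote"))   # too likely for the severity
print(acceptable("minor", "probable"))
```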
553. Risk Graph Method [Format: T; Purpose: R; Year: 1998]
Aim/Description: Risk Graphs are a diagrammatic representation of risk factors and are used to determine the safety integrity level (SIL). SIL correlation is based on four factors: consequence (C), frequency and exposure time (F), possibility of avoiding the hazardous event (P), and probability of the unwanted occurrence (W). The four factors are evaluated from the point of view of a theoretical person being in the incident impact zone. The likelihood and consequence are determined by considering the independent protection layers during the assessment. Once these factors are determined, the risk graph is utilized to determine the minimum risk reduction level and associated SIL.
Remarks: Developed by IEC 61508. Reference [ACM, 2006] lists some advantages and disadvantages.
Safety assessment stage: 5. Domains: ATM, chemical. Application: x.
References: [Gulland04]; [IEC 61508]; [ACM, 2006]; [Summers98]
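A risk-graph lookup can be sketched as below; the C/F/P/W branch labels follow the IEC 61508 style, but the SIL table itself is an invented illustration, not the normative calibration:

```python
# Sketch of a risk-graph lookup: the path through the four factors
# C (consequence), F (exposure), P (avoidance possibility) and
# W (demand probability) selects a required SIL. The table is invented.
SIL_TABLE = {
    # (C, F, P): required SIL for each demand-rate column W3, W2, W1
    ("C1", "-",  "-" ): {"W3": "a", "W2": "-", "W1": "-"},
    ("C2", "F1", "P1"): {"W3": 1,   "W2": "a", "W1": "-"},
    ("C2", "F2", "P2"): {"W3": 2,   "W2": 1,   "W1": "a"},
    ("C3", "F2", "P1"): {"W3": 3,   "W2": 2,   "W1": 1},
    ("C4", "F2", "P2"): {"W3": 4,   "W2": 3,   "W1": 2},
}

def required_sil(c, f, p, w):
    return SIL_TABLE[(c, f, p)][w]

# Serious consequence, frequent exposure, avoidance hardly possible,
# relatively high demand rate:
print(required_sil("C3", "F2", "P1", "W3"))
```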
554. Risk-Based Decision Analysis [Format: I; Purpose: M; Year: 1993 or older]
Aim/Description: Risk-Based Decision Analysis is an efficient approach to making rational and defensible decisions in complex situations. It can be regarded as a generic term, or as an integrated approach, covering decision analysis tools, such as decision trees, influence diagrams, Monte Carlo analysis, Bayesian update analysis, and simulation modelling. The concepts involved in decision analysis are particularly significant in regard to activities where information relative to a specific state of an activity may be insufficient and/or inadequate.
Remarks: The technique is universally appropriate to complex systems.
Safety assessment stage: 4 5. Domains: nuclear, health and many other. Application: x.
References: [ARES-RBDA]; [FAA00]; [ΣΣ93, ΣΣ97]
555. RITA (Replay Interface of TCAS (Traffic Collision Avoidance System) Alerts) [Format: T; Purpose: T; Year: 1995]
Aim/Description: RITA 2 is an experience feedback tool for the training of air traffic controllers and to reinforce the training of flight crews. It shows on the same screen what both pilots and controllers could see and a transcript of what was said. Although individual use is possible, RITA is best used by instructors in briefing sessions with small groups of pilots and controllers. Its display is divided into three main parts: 1) a visual display simulating the radar display provided by radar recordings, 2) a visual display of the pilot’s view on either an Instantaneous Vertical Speed Indicator (IVSI) or an Electronic Flight Instrument System (EFIS), with the associated aural alarms, 3) display of the transcript of voice communication between controller(s) and pilots.
Remarks: RITA was initially developed by Centre d’Etudes de la Navigation Aérienne (CENA) in 1995 for the ACAS (Aircraft Collision Avoidance System) training of French controllers. RITA 2 is a new PC-based European version whose main objectives are to include TCAS II Version 7 events and to implement modern radar and TCAS displays. A library of TCAS alert events is being assembled, selected based on their relevance to training needs.
Safety assessment stage: 8. Domains: aviation, ATM. Application: x.
References: [GAIN ATM, 2003]
556. RMA (Rate Monotonic Analysis) [Format: T; Purpose: Ds; Year: 1973]
Aim/Description: It ensures that time critical activities will be properly verified. RMA is a collection of quantitative methods and algorithms that allows engineers to specify, understand, analyse, and predict the timing behaviour of real-time software systems, thus improving their dependability and evolvability. RMA can be used by real-time system designers, testers, maintainers, and troubleshooters, as it provides 1) mechanisms for predicting real-time performance; 2) structuring guidelines to help ensure performance predictability; 3) insight for uncovering subtle performance problems in real-time systems. This body of theory and methods is also referred to as generalised rate monotonic scheduling (GRMS).
Remarks: Is a useful analysis technique for software. Also referred to as Rate Monotonic Scheduling.
Safety assessment stage: 5. Domains: avionics, computer. Application: x.
References: [FAA00]; [NASA-GB-1740.13-96]; [Rakowsky]; [RMA Sha 1991]; Wikipedia
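The core schedulability test of rate monotonic theory, the Liu & Layland (1973) utilisation bound, can be sketched as follows; the task set is invented for illustration:

```python
# Classic rate-monotonic schedulability test: n periodic tasks are
# schedulable under rate-monotonic priorities if total utilisation
# U <= n * (2^(1/n) - 1).
def utilisation(tasks):
    """tasks: list of (execution_time, period) pairs."""
    return sum(c / t for c, t in tasks)

def rma_bound(n):
    return n * (2 ** (1.0 / n) - 1)

tasks = [(1, 4), (1, 5), (2, 10)]       # invented (C, T) in milliseconds
u, bound = utilisation(tasks), rma_bound(len(tasks))
print(round(u, 3), round(bound, 3), u <= bound)
```

Note the bound is sufficient but not necessary: a task set above the bound may still be schedulable, which exact response-time analysis can decide.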
ROBDD (Reduced Ordered Binary Decision Diagram): See BDD (Binary Decision Diagram).
557. RSM (Requirements State Machines) [Format: T; Purpose: R; Year: 1996 or older]
Aim/Description: An RSM is a model or depiction of a system or subsystem, showing states and the transitions between states. Its goal is to identify and describe all possible states and their transitions.
Remarks: Are sometimes referred to as Finite State Machines (FSM).
Safety assessment stage: 2. Domains: aircraft. Application: x x.
References: [FAA00]; [NASA-GB-1740.13-96]; [Rakowsky]
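An RSM can be sketched as an explicit transition table, which makes unspecified state/event pairs visible for completeness review; the states and events are an invented toy example:

```python
# Sketch of a Requirements State Machine: states and allowed transitions
# are enumerated, so every possible state and trigger can be reviewed.
TRANSITIONS = {
    ("idle",   "arm"):     "armed",
    ("armed",  "trigger"): "active",
    ("armed",  "disarm"):  "idle",
    ("active", "reset"):   "idle",
}

def step(state, event):
    """Return the next state, or raise if the transition is unspecified."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"unspecified transition: {event!r} in {state!r}")

state = "idle"
for event in ["arm", "trigger", "reset"]:
    state = step(state, event)
print(state)   # back to "idle"
```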
558. Rule violation techniques [Format: G; Purpose: D]
Aim/Description: These are techniques that try to avoid violations of rules, e.g. by designing the system such that the violation is prohibited, or such that an alert follows after the violation.
Remarks: See also TOPPE.
Safety assessment stage: 6. Domains: offshore, computer. Application: x x.
References: [HSEC02]
SA/BAR See Rating Scales
(Situational Awareness
Behaviorally Anchored
Rating Scale)
559. SACRI T H 1995 Adaptation of SAGAT to evaluate nuclear power plant SACRI was developed as the 5 nuclear x • [Hogg et al, 1995]
(Situation Awareness operator’s situational awareness and uses the freeze result of a study investigating the • [Collier et al, 1995]
Control Room technique to administer control room based situational use of SAGAT in process control
Inventory) awareness queries. rooms. The freeze technique
involves the freezing of the
exercise at random times, during
which the subjects respond to
questions.
560. SADA T Ds 1996 Analysis performed on the high-level design to verify 7 computer x • [FAA00]
(Safety Architectural or the correct incorporation of safety requirements and to • [NASA-STD-8719]
Design Analysis) older analyse the Safety-Critical Computer Software • [Rakowsky]
Components (SCCSCs). It uses input from the
Architectural Design, the results of the Software
Safety Requirements Analysis (SSRA), and the system
hazard analyses. The SADA examines these inputs to:
a) Identify as SCCSCs those software components that
implement the software safety requirements identified
by the SSRA. Those software components that are
found to affect the output of SCCSCs shall also be
identified as SCCSCs; b) Ensure the correctness and
completeness of the architectural design as related to
the software safety requirements and safety-related
design recommendations; c) Provide safety-related
recommendations for the detailed design; d) Ensure
test coverage of the software safety requirements and
provide recommendations for test procedures. The
output of the SADA is used as input to follow-on
software safety analyses.
561. SADT T Dh 1973 SADT aims to model and identify, in a Developed by Douglas T. Ross 2 computer x x • [Bishop90]
(Structured Analysis and diagrammatical form using information flows, the and SofTech, Inc. • [EN 50128]
Design Technique) decision making processes and the management tasks Good analysis tool for existing • [HEAT overview]
associated with a complex system. A type of systems, and can also be used in • [Rakowsky]
structured analysis methodology, SADT is a the design specification of • Wikipedia
framework in which the nouns and verbs of any systems. Software requirements
language can be embedded for the representation of a specification phase and design &
hierarchical presentation of an information system. It development phase.
is composed of a graphic language and a method for The military equivalent to SADT
using it. A SADT model is an organised sequence of is IDEF.
diagrams, each with supporting text. SADT also
defines the personnel roles in a software project.
SAFE See Flight Data Monitoring
(Software Analysis for Analysis and Visualisation
Flight Exceedance)
562. Safe Language Subsets T D 1990 Aim is to reduce the probability of introducing Software design & development 6 computer x • [Bishop90]
or or programming faults and increase the probability of phase. Tools available. See also • [EN 50128]
Safe Subsets of older detecting any remaining faults. A language is Design and Coding Standards. • [FAA00]
Programming considered suitable for use in a safety-critical • [NASA-GB-1740.13-
Languages application if it has a precise definition, is logically 96]
coherent, and has a manageable size and complexity. • [Rakowsky]
With the “safe subset” approach, a language definition
is restricted to a subset; only the subset is used in the
programming. The reasons are: 1) some features are
defined in an ambiguous manner; 2) some features are
excessively complex. The language is examined to
identify programming constructs that are either error-
prone or difficult to analyse, for example, using static
analysis methods. A language subset is then defined
which excludes these constructs.
563. SAFER T R 2000 SAFER was developed to provide a more Developed for DDESB 5 defence x • [DDESB, 2000]
(Safety Assessment For comprehensive assessment of the overall risk of (Department of Defense
Explosives Risk) explosives operations. It calculates risk in terms of the Explosives Safety Board), and for
statistical expectation for loss of life from an Defence application only.
explosives event. Three components are multiplied to See also Explosives Safety
estimate annual maximum probability of fatality, P(f), Analysis. See also Process
and the expected fatalities, E(f): (1) the probability of Hazard Analysis.
an explosives event, P(e), (2) the probability of a
fatality given an event, P(f/e), and (3) the average
exposure of an individual, E(p). SAFER calculates
risk using the following basic equations: P(f) = P(e) ×
P(f/e) × E(p) to determine individual risk; E(f) =
Σ(P(e) × P(f/e) × E(p)) to determine group risk. Risk
exceeding individual and group risk limits constitutes
a violation of the risk acceptance criteria.
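The SAFER equations above translate directly into code; the sketch below is a plain transcription with invented input values, not DDESB data:

```python
# Direct transcription of the SAFER equations: P(f) = P(e) x P(f/e) x E(p)
# for individual risk, and E(f) as the sum over exposed groups for group
# risk. All numeric inputs below are hypothetical.

def individual_risk(p_event, p_fatality_given_event, exposure):
    """P(f): annual maximum probability of fatality for one individual."""
    return p_event * p_fatality_given_event * exposure

def group_risk(groups):
    """E(f): expected fatalities, summed over (P(e), P(f/e), E(p)) triples."""
    return sum(individual_risk(pe, pfe, ep) for pe, pfe, ep in groups)

p_f = individual_risk(1e-3, 0.5, 0.2)                     # 1e-4
e_f = group_risk([(1e-3, 0.5, 0.2), (1e-3, 0.1, 1.0)])    # 2e-4
```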
564. Safety Bag T M 1969? Aim is to protect against residual
specification and implementation faults in software that adversely
affect safety. In this technique, an external monitor, called a safety
bag, is implemented on an independent computer using a different
specification. The primary function of the safety bag is to ensure that
the main system performs safe - but not necessarily correct -
operations. The safety bag continually monitors the main system to
prevent it from entering an unsafe state. If a hazardous state does
occur, the system is brought back to a safe state by either the safety
bag or the main system. Remarks: May be considered for fail-safe
systems, provided there is adequate confidence in the dependability of
the safety bag itself. Tools are not applicable. Software architecture
phase. The Safety Bag is a form of Fault Detection and Diagnosis (FDD).
Stages: 3 6. Domain: nuclear. References: [Bishop90], [EN 50128],
[Sparkman92].
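As a hedged illustration of the technique, a safety bag reduces to an independent check that vetoes commands leading to an unsafe state; the command format and the limit below are invented:

```python
# Minimal safety-bag sketch: an independent monitor, written against its
# own simpler specification, vetoes unsafe commands from the main system.
# The command format and the limit are hypothetical.

SAFE_SPEED_LIMIT = 100.0  # assumed safety envelope for the example

def safety_bag(command):
    """Forward a command only if it keeps the system in a safe state;
    otherwise force a transition back to a known safe state."""
    if 0.0 <= command.get("speed", 0.0) <= SAFE_SPEED_LIMIT:
        return command
    return {"speed": 0.0, "vetoed": True}  # safe, though not 'correct'
```

In a real deployment the monitor would run on independent hardware, as the entry notes; the point of the sketch is only the veto logic.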
565. Safety Monitoring T Ds Safety monitoring is a means of protecting against 7 aircraft x x • [DO178B]
specific failure conditions by directly monitoring a computer • Wikipedia
function for failures that would contribute to the health
failure condition. Monitoring functions may be
implemented in hardware, software, or a combination
of hardware and software. Through the use of
monitoring technique, the software level of the
monitored function may be reduced to the level
associated with the loss of its related system function.
566. Safety Review G R A Safety Review assesses a system, identifies facility Periodic inspections of a system, 7 aviation x • [FAA00]
or conditions, or evaluates operator procedures for operation, procedure, or process aircraft • [Storey96]
Safety Audit hazards in design, the operations, or the associated are a valuable way to determine computer • [ΣΣ93, ΣΣ97]
maintenance. their safety integrity. A Safety
Review might be conducted after
a significant or catastrophic event
has occurred.
567. Safety Screening T O 2006 Collection of four methods of screening Air Traffic The four methods have been 2 5 ATM x x x • [Straeter, 2006]
Techniques Management (ATM) system changes, built upon the proposed by four groups of
rationale of the “Safety Fundamentals”, in order to experts, from Eurocontrol, NLR
make a preliminary assessment of their safety (National Aerospace Laboratory),
implications, and also to enable systematic DNV (Det Norske Veritas), TÜV
consideration of safety issues within ATM strategy (Technischer Überwachungs-
development. The objectives of the methods are: • To verein).
anticipate safety issues at an early stage in ATM
concept development, including both positive and
negative effects on safety. • To prioritise ATM
changes for more detailed safety assessment studies. •
To enable systematic consideration of safety issues
within ATM strategy development.
568. Safety targets setting T R 2001 Setting requirements for the level of safety that is See also Risk Classification 1 ATM and x x • [SPF-safety01]
or tolerated. Schemes. many
older other
569. SAFMAC I O 2006 Framework for the development of a validated Developed by NLR, supported by 1 2 3 4 5 6 7 8 aviation x • [Everdij et al, 2006]
(SAFety validation operational concept for a major change in air transport Dutch regulatory and Oversight ATM • [EverdijBlom, 2007]
framework for MAjor operations. Consists of two complementary authorities, the Dutch ANSP, and • [Everdij et al, 2009]
Changes) components. The first is a framework of four Eurocontrol.
synchronised processes: 1) Joint goal setting by all
stakeholders involved; 2) Development of operational
concept; 3) Allocation of tasks and information flows
to individual stakeholders; 4) Validation. The second
SAFMAC component is a list of 32 safety validation
quality indicators to characterise which aspects should
be addressed by a safety validation for a major change
in air transport operations.
570. SAFSIM I R 2002 SAFSIM is a process and a toolbox of measures. The Launched by EEC (Eurocontrol 3 5 ATC x x • [SAFSIM guidance]
(Simulations for Safety process involves either the measurement of the safety Experimental Centre) in 2002. • [Scaife00]
Insights) of controller performance when faced with specific • [Gordon04]
safety-related events (e.g. hazards) in a real-time • [Shorrock05]
human-in-the-loop simulation, or else general safety • [Gizdavu02]
monitoring using less intrusive procedures to see if • [SAP15]
any safety-relevant information arises during a real
time simulation.
571. SAGAT (Situation Awareness Global Assessment Technique) T H 1995.
SAGAT is a specialised questionnaire for querying subjects about their
knowledge of the environment. This knowledge can be at several levels of
cognition, from the most basic of facts to complicated predictions of
future states. It is administered within the context of high fidelity
and medium fidelity part-task simulations, and requires freezing the
simulation at random times. Remarks: Most known uses of SAGAT have been
in the context of fighter aircraft although its application within the
ATM domain has also been investigated. SAGAT is a method that provides
an objective measure of situation awareness (SA) during a simulated
operation. It is not intended for use during an actual operation.
Stage: 7. Domains: aviation, ATM, defence. References: [Endsley97],
[HIFA Data], [MIL-HDBK], [FAA HFW], [GAIN ATM, 2003].
572. SAINT I H 1977 SAINT is a general purpose network modelling and Micro-SAINT (1985) is a 2 avionics x x • [CBSSE90, p40]
(Systems Analysis by simulation technique that can be used in the design commercial version of SAINT. It navy • [HEAT overview]
Integrated Networks of and development of complex human-machine systems. is easier to use than SAINT but • [Kirwan94]
Tasks) Using a Monte Carlo approach, SAINT provides the has fewer features. It is a discrete- • [Kirwan98-1]
conceptual framework and the means for modelling event task network modelling tool • [THEMES01]
systems whose processes can be described by discrete that can be used to analyse and • [GAIN ATM, 2003]
and continuous functions/tasks, and interactions improve any system that can be
between them. It provides a mechanism for combining described by a flow diagram. It
human performance models and dynamic system can be used to answer questions
behaviours in a single modelling structure. about the costs of alternative
training, about how crew
workload levels or reaction times
affect system performance, and
about the allocation of functions
between people and machines.
573. SALIANT T H 1998 SALIANT involves the use of a theoretically based list Developed by the US Naval Air 4 5 navy x • [Muniz et al, 1998]
(Situational Awareness of behaviours to assess team behavior. It is an Warfare Centre. • [Smith et al, 2007]
Linked Indicators inferential technique that requires experts to rate • [FAA HFW]
Adapted to Novel situation awareness (SA) based upon implicit evidence
Tasks) from observable correlates. SALIANT comprises 5
phases: Phase 1: Delineation of behaviours
theoretically linked to team SA. Phase 2: Development
of scenario events to provide opportunities to
demonstrate team SA behaviours. Phase 3:
Identification of specific, observable responses. Phase
4: Development of script. Phase 5: Development of
structured observation form.
SAM See EATMP SAM
(Safety Assessment
Methodology)
574. SAME I M 2008 SAME describes the broad framework on to which the SAME is developed by 1 2 3 4 5 6 7 ATM x x x x • [SAME PT1, 2008]
(Safety Assessment EATMP SAM-defined processes, and the associated EUROCONTROL. • [Fowler et al., 2009]
Made Easier) safety, human-factors and system-engineering
methods, tools and techniques, are mapped in order to
explain their purpose and interrelationships. Where
EATMP SAM focuses on the negative contribution to
risk, SAME also considers the positive contribution of
the concept under investigation to aviation safety. It
does this by proposing a ‘broader approach to safety
assessment’, consisting of complementary success and
failure approaches: The success approach seeks to
show that an ATM system will be acceptably safe in
the absence of failure; The failure approach seeks to
show that an ATM system will still be acceptably safe,
taking into account the possibility of (infrequent)
failure. In SAME, the safety assessment is driven by a
safety argument structured according to system
assurance objectives and activities.
575. SAMPLE (Situation Awareness Model for Pilot-in-the-Loop Evaluation)
T H 1996. SAMPLE models the situation awareness and actions of operators
(individuals or crews) of complex human-machine systems. Recent variants
have been applied to the study of the effects of individual differences
and environmental stressors on cognitive performance. SAMPLE assumes
that the actions of an operator are guided by highly structured standard
procedures and driven by detected events and assessed situations. Some
variants assume a multitasking environment. In all cases, the operator
(or crew) is concerned primarily with performing situation assessment,
continuous control and communication, and discrete procedure execution.
Remarks: Developed by Charles River Analytics. It has been applied to
e.g. combat aviation, commercial aviation and air traffic control,
battlefield command and control, and Military Operations on Urban
Terrain (MOUT). Stages: 4 5. Domains: aviation, ATC, defence.
References: [GAIN ATM, 2003], [FAA HFW].
576. SARD T M 2008 SARD defines a process and a set of ‘transition The SARD process has been 6 ATM x x x x x • [CAATS II D13]
(Strategic Assessment of criteria’ for the analysis of ATM R&D (air traffic successfully applied and further
ATM Research and management research and development) results per improved through application to
Development results) operational concept from a strategic view point. The two ATM operational concepts.
process assesses the maturity of operational concepts, In principle it can be used for any
in terms of the phases of the Concept Lifecycle Model ATM improvement under
of E-OCVM, and provides recommendations for next development.
steps.
577. SART (Situation Awareness Rating Technique) T H 1989. SART is a
multi-dimensional rating scale for operators to report their perceived
situational awareness. It examines the key areas of SA: understanding,
supply and demand. These areas are further broken down into the 14
dimensions ([Uhlarik02] mentions 10 dimensions). From the ratings given
on each of the dimensions situational awareness is calculated by using
the equation SA = U - (D - S), where U is summed understanding, D is
summed demand and S is summed supply. Remarks: Developed by R.M. Taylor
in 1989. SART is simple, quick and easy to apply. It has been applied to
several complex domains, including air traffic control. 3D-SART is a
narrowed-down version of SART, applicable to aircrew, and covering only
3 dimensions: (a) Demands on Attentional Resources - a combination of
Instability of Situation, Complexity of Situation, and Variability of
Situation; (b) Supply of Attentional Resources - a combination of
Arousal of Situation, Concentration of Attention, Division of Attention,
and Spare Mental Capacity; and (c) Understanding of Situation - a
combination of Information Quantity, Information Quality, and
Familiarity. See also Rating Scales. Stage: 5. Domains: ATC, defence.
References: [MIL-HDBK], [Uhlarik&Comerford02], [FAA HFW], [Taylor90],
[GAIN ATM, 2003].
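The SART combination rule SA = U - (D - S) is trivially computable; the ratings below are invented:

```python
# The SART combination rule SA = U - (D - S), applied to hypothetical
# dimension ratings for the three key areas.

def sart_score(understanding, demand, supply):
    """U, D, S are the summed understanding, demand and supply ratings."""
    u, d, s = sum(understanding), sum(demand), sum(supply)
    return u - (d - s)

score = sart_score(understanding=[5, 6, 4], demand=[3, 4], supply=[2, 3])
```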
578. SA-SWORD T H 1989 SA-SWORD is a Situation Awareness adaptation of See also Paired Comparisons. 5 aviation x • [Vidulich et al, 1991]
(Situational Awareness SWORD, which measures workload of different tasks • [Snow&French, 2002]
Subjective Workload as a series of relative subjective judgments compared
Dominance) to each other. SWORD has three steps: 1. Collect
subjective between-tasks comparative ratings using a
structured evaluation form after the subject has
finished all the tasks; 2. Construct a judgment matrix
based on the subjective ratings; 3. Calculate the
relative ratings for each task.
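Step 3 of SWORD (calculating relative ratings from the judgment matrix) can be solved in several ways; the sketch below uses a geometric-mean approximation, a common method for pairwise comparison matrices, though the original procedure may differ in detail. The matrix values are invented:

```python
# Hedged sketch of SWORD step 3: derive relative ratings from a pairwise
# judgment matrix. The geometric-mean method used here is one common way
# to solve such matrices; the matrix values are hypothetical.
import math

def relative_ratings(matrix):
    """matrix[i][j] > 1 means task i dominated task j in the comparison."""
    n = len(matrix)
    geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Task 0 dominates task 1 (rating 2) and task 2 (rating 4), and so on.
ratings = relative_ratings([[1, 2, 4],
                            [1/2, 1, 2],
                            [1/4, 1/2, 1]])
```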
SAT Diagram See OSD (Operational Sequence
(Sequence and Timing Diagram)
Diagram)
579. SATORI D 1993 Incident reporting system. Goal is to gain a better 8 ATM x x • [Pounds03]
(Systematic Air Traffic understanding of the interaction between the various • [FAA HFW]
Operations Research elements of displayed information, verbal interactions,
Initiative) and the control actions taken by air traffic control
specialists. SATORI enables its users to re-create
segments of operational traffic in a format similar to
what was displayed to the ATCS, for example,
showing relative location and separation, speeds, and
headings of aircraft. Among other things, SATORI
can display full and limited data blocks, beacon
targets, and conflict alerts. Video and audio are
synchronized, and the air traffic situation can be
displayed in four dimensions.
580. SAVANT T H 2002 SAVANT is a combination of SAGAT and SPAM. SAVANT was developed by the 7 aviation x • [FAA HFW]
(Situation Awareness The SAVANT measure is an attempt to retain and FAA Technical Center in New • [Willems02]
Verification and combine the advantages of both techniques. The Jersey, USA
Analysis Tool) specific advantages to be retained from SAGAT are:
(1) queries are anchored in the airspace (i.e. the
location of aircraft on the sector map); (2) the
controller enters responses directly into the system.
From SPAM the specific advantages to be retained
are: (1) no interruption of the simulation, (2) no
extensive use of memory, (3) queries of relational
information instead of verbatim information.
581. SCDM I R 2003 The Safety Case Development Manual gives an Version 2.2 (dated 2006) is a 1 8 ATM x • [SCDM, 2006]
(Safety Case overview of a methodology being proposed for the complete rewrite of Version 1.3
Development Manual) construction and development of Safety Cases. which was published in 2003,
The manual includes the concept of a Safety Case as taking into consideration user
presenting the entirety of argument and evidence needs and recent experience with
needed to satisfy oneself and the regulator with respect Safety Case developments.
to safety. It does not provide guidance on the
generation or documentation of the evidence itself.
582. Scenario Analysis T R 1979 or older. Scenario Analysis identifies
and corrects hazardous situations by postulating accident scenarios
where credible and physically logical. Scenario analysis relies on
asking "what if" at key phases of flight and listing the appropriate
responses. Steps are: 1) Hypothesize the scenario; 2) Identify the
associated hazards; 3) Estimate the credible worst case harm that can
occur; 4) Estimate the likelihood of the hypothesized scenario occurring
at the level of harm (severity). Remarks: Scenarios provide a conduit
for brainstorming or to test a theory where actual implementation could
have catastrophic results. Where system features are novel and,
consequently, no historical data is available for guidance or
comparison, a Scenario Analysis may provide insight. Stages: 3 5.
Domain: many. References: [FAA00], [ΣΣ93, ΣΣ97], Wikipedia.
583. Scenario Process Tool T R 2000 The Scenario Process tool is a time-tested procedure to 3 defence x x x x • [FAA00]
or identify hazards by visualizing them. It is designed to
older capture the intuitive and experiential expertise of
personnel involved in planning or executing an
operation, in a structured manner. It is especially
useful in connecting individual hazards into situations
that might actually occur. It is also used to visualize
the worst credible outcome of one or more related
hazards.
Scenario-based hazard See Pure Hazard Brainstorming
brainstorming
584. SCHAZOP T R 1996 HAZOP adapted for safety management assessment. 3 6 chemical? x • [Kennedy&Kirwan98]
(Safety Culture Hazard By application of ‘safety management’ guidewords to
and Operability) a representation of the system, it identifies: Areas
where the safety management process is vulnerable to
failures; the potential consequences of the safety
management failure; the potential failure mechanisms
associated with the safety management failure; the
factors which influence the likelihood of the safety
management failures manifesting themselves; error
recovery and reduction measures.
585. SCHEMA I H 1992 Integrated framework of techniques for human factors Originated from SHERPA. 5 chemical x • [Kirwan98-1]
(System for Critical assessment. The method has been implemented as a • [MUFTIS3.2-I]
Human Error computer program called Theta (Top-down Human
Management and Error and Task Analysis). Includes techniques like
Assessment OR HTA, SLIM. It has a flow chart format following the
Systematic Critical SHERPA method.
Human Error
Management Approach)
586. SDA T Ds 1996 Safeware hazard analysis technique that incorporates 3 computer x • [Reese&Leveson97]
(Software Deviation the beneficial features of HAZOP (e.g. guidewords,
Analysis) deviations, exploratory analysis, systems engineering
strategy) into an automated procedure that is capable
of handling the complexity and logical nature of
computer software.
587. SDA T H 1999 SDA follows from TLA and notes the dependency 2 3 6 nuclear x • [Kirwan&Kennedy&
(Sequence Dependency or between different task elements. It can also estimate ? Hamblen]
Analysis) older the qualitative uncertainty in time estimates for each ?
sub-task, and the timing data source used. SDA is
useful in identifying tasks whose reliability is critical,
and therefore tasks that require a high quality of
human factors design. SDA can therefore lead to error
reduction recommendations (often via the TTA and
Ergonomics Review) that will have a general effect on
human reliability across a scenario or several
scenarios. SDA also helps to identify the longest time
likely for the task sequence, and where it may perhaps
be best to gain more accurate time estimates to ensure
the TLA is accurate.
588. SDAT T D 1990 SDAT supports nearly all airspace and traffic data SDAT concept start was in 1985; 5 ATM x x • [GAIN ATM, 2003]
(Sector Design Analysis sources used within the FAA and overlays the traffic it came in full operation in 1990. • [FAA HFW]
Tool) data on the airspace environment. The user is able to Developed by Washington
select from menus the portions of the data to display Consulting Group.
and how the data are displayed. SDAT permits the
user to postulate changes in the airspace and/or traffic
data to compare the analysis results to those with the
original. SDAT analysis tools include measures of
traffic loadings within control sectors or within a
given radius of a specified fix. SDAT also performs a
calculation of the expected number of ATC aircraft
separations per hour in each airspace sector. This
allows the user to see in advance how a proposed
change could impact controller task load, particularly
separation assurance task load, and possibly prevent
errors resulting from excessive demands on the
controllers’ attention.
SDCPN See DCPN (Dynamically
(Stochastically and Coloured Petri Nets)
Dynamically Coloured
Petri Nets)
589. SDFG (Synchronous Data Flow Graphs) T D 1988 or older. An SDFG is a
graph with 'actors' as vertices and 'channels' as edges. Actors
represent basic parts of an application which need to be executed.
Channels represent data dependencies between actors. Streaming
applications essentially continue their execution indefinitely.
Therefore, one of the key properties of an SDFG which models such an
application is liveness, i.e., whether all actors can run infinitely
often. Remarks: SDFGs are a data flow model of computation that is
traditionally used in the domain of Digital Signal Processing platforms.
A possible approach for the implementation of concurrent real-time
control systems. Tools available. Related to Petri Nets. Stage: 2.
Domain: computer. References: [Bishop90], [Ghamarian, 2008],
[Pullaguntla, 2008].
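A standard prerequisite for the liveness property mentioned above is consistency: a repetition vector must exist that balances token production and consumption on every channel. A minimal sketch for a connected graph, with invented actors and rates:

```python
# Hedged sketch: computing a repetition vector for a (connected) SDFG.
# Channels are (producer, consumer, production rate, consumption rate);
# actor names and rates are hypothetical.
import math
from fractions import Fraction

def repetition_vector(channels):
    """Relative firing counts balancing every channel, or None if the
    graph is inconsistent (rates cannot be balanced)."""
    rate = {channels[0][0]: Fraction(1)}
    changed = True
    while changed:
        changed = False
        for src, dst, prod, cons in channels:
            if src in rate and dst not in rate:
                rate[dst] = rate[src] * prod / cons
                changed = True
            elif dst in rate and src not in rate:
                rate[src] = rate[dst] * cons / prod
                changed = True
            elif src in rate and dst in rate:
                if rate[src] * prod != rate[dst] * cons:
                    return None  # inconsistent: tokens accumulate or starve
    scale = 1
    for f in rate.values():
        scale = scale * f.denominator // math.gcd(scale, f.denominator)
    return {a: int(f * scale) for a, f in rate.items()}

rv = repetition_vector([("A", "B", 2, 3), ("B", "C", 1, 2)])  # {A:3, B:2, C:1}
```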
590. SDHA T R 2005 The SDHA is used to understand the dynamics of an Developed by Raheja and 4 aviation x • [Oztekin, 2007]
(Scenario-Driven accident. The first step involves the generation of Allocco (2005), building on an • [Luxhoj, 2009]
Hazard Analysis) possible scenarios. This includes scenario description, approach by Hammer (1972).
initial contributors, subsequent contributors, life-cycle
phase, possible effect, system state and exposure,
recommendations, precautions and controls. Next,
hazards are classified and communicated. Hazard
“counts” are obtained, which lead to implicit
proportions or percentages of the hazard system and
subsystem sources as derived from the scenarios.
591. SDL I D 1976 Aims to be a standard language for the specification Based on Extended FSM, similar 2 computer x x • [Bishop90]
(Specification and and design of telecommunication switching systems. to SOM. Tools available. telecom • [EN 50128]
Description Language) SDL is an object-oriented, formal language defined by Software requirements • Wikipedia
The International Telecommunications Union– specification phase and design &
Telecommunications Standardisation Sector (ITU–T) development phase.
as recommendation Z.100. The language is intended
for the specification of complex, event-driven, real-
time, and interactive applications involving many
concurrent activities that communicate using discrete
signals.
SDM See RBD (Reliability Block
(Success Diagram Diagrams)
Method)
592. SEAMAID I H 1996 Cognitive simulations. Has similar functionality to In 1998 it has been applied to 4 5 nuclear x • [Fumizawa00]
(Simulation-based CAMEO-TAT. SEAMAID was being developed to model a team of the operators in a • [Kirwan98-1]
Evaluation and Analysis simulate the behaviour of operators, Human System complicated situation, after which • [Nakagawa]
support system for Interface (HSI) and plant behaviour. a validation of SEAMAID has
MAn-machine Interface been carried out. In 1999, several
Design) HSI design configurations were
examined to compare the
workload that were the key
factors of human error.
593. Secondary Task T H 1986 Secondary task monitoring is a method of measuring 7 aviation x • [FAA HFW]
Monitoring or mental workload in which the operator is required to • [MIL HDBK]
older perform two tasks concurrently—the primary task of
interest and another (related or unrelated) task. The
operator’s performance on the secondary task is used
to estimate primary task workload. The method of
secondary task monitoring is an important tool to help
the human error practitioner assess mental workload
so that especially stressful tasks can be identified and
redesigned or re-allocated.
594. SEEA T Ds 1973 Qualitative Design tool. Similar to SFMEA (Software Software architecture phase. 3 computer x • [Fragola&Spahn,
(Software Error Effects FMEA). 1973]
Analysis) • [EN 50128]
• [Lutz&Woodhouse96]
• [Rakowsky]
595. Seismic Analysis T M 1927 Seismic Analysis is a structural analysis technique that Physical structures and 6 nuclear x • [ΣΣ93, ΣΣ97]
involves the calculation of the response of a building equipment. • Wikipedia
(or nonbuilding) structure to earthquakes. Aim is to
ensure structures and equipment resist failure in
seismic event.
596. Self testing and T Ds 1978 Software Testing technique. Aim is to verify on-line Essential on a normally dormant 6 computer x x • [Bishop90]
Capability testing or that the system maintains its capability to act in the primary safety system. See also
older correct and specified manner. Software Testing.
597. Self-Reporting Logs T Ds 1998 Self-reporting logs are paper-and-pencil journals in Alternative name: Diary Method. 2 computer x x • [FAA HFW]
or which users are requested to log their actions and See also Journaled Sessions.
older observations while interacting with a product.
598. SEM T O 1997 SEM is an assessment and development tool for Development of the tool has been 5 6 mining x x • [Kjellen, 2000]
(Safety Element improvement of the safety, health and environment carried out through a structured • [Alteren&Hovden,
Method) (SHE) management, tailored for application in the group problem solving process. 1997]
Norwegian mining industry. The method identifies the The participants were resource
current SHE performance and the desired future of the persons representing different
organisation. The tool also gives aid to find parties in the industry.
improvement measures. SEM emphasises consensus
decisions through internal group discussions. The
method is designed as a matrix, where the columns
represent five phases of development. The rows define
the safety elements considered. The content is divided
in six main elements that ought to be considered by
the organisation; Goals/ambitions, Management,
Feedback systems/learning, Safety culture,
Documentation and Result Indicators.
Semantic Differential See Rating Scales
Scales
599. Semi-Markov Chains M 1969 Markov chains that also allow non-exponential Tools available (e.g. ASSIST: 4 5 many x x • [Butler&Johnson95]
transitions. Abstract Semi-Markov • [MUFTIS3.2-I]
Specification Interface to the • [NASA-Assist01]
SURE Tool). • Wikipedia
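The defining difference from ordinary Markov chains, holding times that need not be exponential, is easiest to see in simulation; a two-state sketch with invented distributions:

```python
# Hedged sketch: simulating a two-state semi-Markov process. The "up"
# state has an exponential holding time (as in an ordinary Markov chain),
# while "down" has a uniform one, which a pure Markov chain cannot model.
import random

def simulate(horizon, seed=0):
    rng = random.Random(seed)
    t, state = 0.0, "up"
    time_in = {"up": 0.0, "down": 0.0}
    while t < horizon:
        if state == "up":
            stay = rng.expovariate(1.0)   # exponential sojourn
            nxt = "down"
        else:
            stay = rng.uniform(0.5, 1.5)  # non-exponential sojourn
            nxt = "up"
        time_in[state] += min(stay, horizon - t)  # clip at the horizon
        t += stay
        state = nxt
    return time_in

occupancy = simulate(1000.0)  # long-run time spent in each state
```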
600. Sensitivity Analysis T R Sensitivity Analysis is the study of how the variation Many techniques exist to 5 many x x x • Wikipedia
(uncertainty) in the output of a mathematical model determine the sensitivity of the
can be apportioned, qualitatively or quantitatively, to output with respect to variation in
different sources of variation in the input of the model. the input, such as linearisation,
It is a technique for systematically changing sampling, variance based
parameters in a model to determine the effects of such methods, Monte Carlo methods.
changes. See also What-If Analysis. See
also Bias and Uncertainty
Assessment. See also Uncertainty
Analysis.
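The simplest of the techniques listed, one-at-a-time perturbation, can be sketched as follows; the model is a made-up placeholder:

```python
# Minimal one-at-a-time sensitivity sketch: perturb each input of a model
# in turn and record the change in the output. The model is a placeholder.

def oat_sensitivity(model, baseline, rel_delta=0.01):
    base = model(**baseline)
    effects = {}
    for name, value in baseline.items():
        nudged = dict(baseline, **{name: value * (1 + rel_delta)})
        effects[name] = model(**nudged) - base
    return effects

def toy_model(a, b):   # hypothetical model: output = 3a + b**2
    return 3 * a + b ** 2

effects = oat_sensitivity(toy_model, {"a": 2.0, "b": 4.0})
# at this baseline the output reacts more strongly to b than to a
```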
601. Sentinel — D, 2005.
Aim/Description: Sentinel monitors airline safety data, enabling users to pinpoint potential areas of concern. Incident reports are filed in a data repository for trending and analysis. Sentinel analyses this accumulated information and helps detect patterns and trends which are significant or may become significant. The results can be transmitted in real time to safety specialists within an organisation and can be shared with other Sentinel users around the world. Aim is to support adopting preventative strategies and target resources.
Remarks: Developed by Mercator (the IT division of Emirates Airline) by updating WinBASIS and BASIS. It is in use at over 100 airlines and aviation companies.
Safety assessment stage: 7, 8. Domains: aviation. Application: x x x x x.
References: www.mercator.com.
602. sequenceMiner — D, 2006.
Aim/Description: Approach to model the behaviour of discrete sensors in an aircraft during flights in order to discover atypical behavior of possible operational significance, e.g. anomalies in discrete flight data. sequenceMiner analyzes large repositories of discrete sequences and identifies operationally significant events. The focus is on the primary sensors that record pilot actions. Each flight is analyzed as a sequence of events, taking into account both the frequency of occurrence of switches and the order in which switches change values. It clusters flight data sequences using the normalized longest common subsequence (nLCS) as the similarity measure, and uses algorithms based on a Bayesian model of sequence clustering to detect anomalies inside sequences. In addition, it provides explanations as to why these particular sequences are anomalous. The sequenceMiner algorithm operates by first finding groups of similar flight sequences, and then finding those sequences that are least similar to any of the groups. It uses the normalized longest common subsequence as the similarity measure, and ideas from bioinformatics such as Multiple Sequence Alignment to determine the degree to which a given sequence is anomalous.
Remarks: sequenceMiner was developed with funding from the NASA Aviation Safety Program. The approach is stated to be general and not restricted to a domain, hence can be applied in other fields where anomaly detection and event mining would be useful.
Safety assessment stage: 7, 8. Domains: avionics. Application: x x x.
References: [Budalakoti et al, 2006].
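The nLCS similarity measure named above can be sketched in a few lines. Normalising the LCS length by sqrt(len(a)*len(b)) is one common choice; check [Budalakoti et al, 2006] for the exact form used by sequenceMiner. The switch-event sequences are invented:

```python
from math import sqrt

def lcs_len(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def nlcs(a, b):
    """Normalised longest common subsequence similarity in [0, 1]."""
    if not a or not b:
        return 0.0
    return lcs_len(a, b) / sqrt(len(a) * len(b))

# Two switch-event sequences: same order of shared events, one extra event.
s1 = ["flaps_down", "gear_down", "autopilot_off"]
s2 = ["flaps_down", "gear_down", "spoilers_armed", "autopilot_off"]
print(round(nlcs(s1, s2), 3))  # 0.866
```

A sequence whose nLCS to every cluster representative is low would be flagged as anomalous.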
603. SEU (Subjective Expected Utility) — T, R, 1954.
Aim/Description: SEU aims to transform concepts like safety, quality of life, and aesthetic value into a form that can be used for cost/benefit analyses. The theory of SEU combines two subjective concepts: first, a personal utility function, and second, a personal probability distribution (based on Bayesian probability theory). The likelihood of an event (which is subject to human influence) occurring (the expectancy variable) is seen as the subjective probability that the outcome will occur if a behavior is undertaken. The value variable (the subjectively determined utility of the goal) is multiplied by the expectancy. The product is the subjective expected utility.
Remarks: Promoted by L.J. Savage in 1954.
Safety assessment stage: 5. Application: x.
References: [Savage, 1954], [FAA HFW], Wikipedia.
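The expectancy-times-value rule above reduces to a probability-weighted sum over outcomes. A minimal sketch, with all probabilities and utilities invented for illustration:

```python
# SEU sketch: the subjective expected utility of an act is the sum of
# subjective_probability * subjective_utility over its possible outcomes.

def seu(outcomes):
    """outcomes: list of (subjective_probability, subjective_utility)."""
    return sum(p * u for p, u in outcomes)

# Compare two acts for a safety investment decision (invented numbers):
install_barrier = [(0.95, 10.0), (0.05, -200.0)]   # barrier works / accident anyway
do_nothing      = [(0.80, 12.0), (0.20, -200.0)]   # no accident / accident

print(round(seu(install_barrier), 2))  # -0.5
print(round(seu(do_nothing), 2))       # -30.4
```

In a cost/benefit analysis the act with the higher SEU (here, installing the barrier) would be preferred.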
604. Severity Distribution Analysis — T, R, 1982.
Aim/Description: Is used in estimations of the probability of severe accidents at a workplace and in comparing different workplaces with respect to the expected severity of the accidents. It is based on the accidents for a specified period of time and follows a step-wise procedure: 1) Arrange the accidents by consequence in an ascending order; 2) Divide the highest registered consequence value into intervals such that each interval has approximately the same size on a logarithmic scale; 3) Tally the number of accidents in each interval and the cumulative number; 4) Calculate the cumulative percentage of accidents for each interval and use log-normal paper to plot the results.
Safety assessment stage: 8. Domains: offshore. Application: x x.
References: [Kjellen, 2000].
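The four steps above can be sketched directly; the consequence data (e.g. lost work days per accident) and the number of intervals are invented for illustration:

```python
import math

consequences = [2, 1, 5, 3, 2, 14, 8, 90, 30, 400]

# 1) arrange the accidents by consequence in ascending order
data = sorted(consequences)

# 2) intervals of roughly equal width on a logarithmic scale
n_bins = 4
top = math.log10(max(data))
edges = [10 ** (top * k / n_bins) for k in range(n_bins)] + [float(max(data))]

# 3) tally accidents per interval (lowest values go in the first bin)
counts = [sum(lo < c <= hi for c in data) for lo, hi in zip(edges, edges[1:])]
counts[0] += sum(c <= edges[0] for c in data)

# 4) cumulative percentage per interval (to be plotted on log-normal paper)
total, cum, cum_pct = len(data), 0, []
for c in counts:
    cum += c
    cum_pct.append(100.0 * cum / total)

print(counts)   # [4, 3, 1, 2]
print(cum_pct)  # [40.0, 70.0, 80.0, 100.0]
```

The final step, plotting the cumulative percentages on log-normal paper, is left to a plotting library.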
SFMEA (Systems Failure Mode and Effect Analysis): See FMEA (Failure Mode and Effect Analysis).
605. SFMEA (Software Failure Modes and Effects Analysis) — T, Ds, 1979.
Aim/Description: This technique identifies software related design deficiencies through analysis of process flow-charting. It also identifies areas for verification/validation and test evaluation. It can be used to analyse control, sequencing, timing monitoring, and the ability to take a system from an unsafe to a safe condition. This should include identifying effects of hardware failures and human error on software operation. It uses inductive reasoning to determine the effect on the system of a component (includes software instructions) failing in a particular failure mode. SFMEA was based on FMEA and has a similar structure.
Remarks: Software is embedded in vital and critical systems of current as well as future aircraft, facilities, and equipment. SFMEA can be used for any software process; however, application to software-controlled hardware systems is the predominant application.
Safety assessment stage: 3. Domains: avionics. Application: x x.
References: [FAA00], [Lutz&Woodhouse96], [ΣΣ93, ΣΣ97], [Pentti&Atte02], [Ippolito&Wallace95], [Reifer, 1979].
606. SFTA (Software Fault Tree Analysis) — T, Ds, 1983.
Aim/Description: This technique is employed to identify the root cause(s) of a "top" undesired event, and to assure adequate protection of safety-critical functions by inhibits, interlocks, and/or hardware. Based on Fault Tree Analysis.
Remarks: Any software process at any level of development or change can be analysed deductively. However, the predominant application is software-controlled hardware systems.
Safety assessment stage: 4, 5. Domains: avionics, computer. Application: x x.
References: [FAA00], [Leveson95], [NASA-GB-1740.13-96], [ΣΣ93, ΣΣ97].
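The fault-tree structure underlying SFTA is a tree of AND/OR gates over basic events, evaluated deductively from the top event down. A minimal sketch, with an invented tree (the event names are illustrative, not from any cited standard):

```python
# Minimal fault-tree evaluation: a node is either a basic-event name or a
# ('AND' | 'OR', [children]) gate; evaluate() asks whether a given set of
# failed basic events triggers the top event.

def evaluate(node, failed):
    if isinstance(node, str):            # leaf: basic event
        return node in failed
    gate, children = node
    results = [evaluate(c, failed) for c in children]
    return all(results) if gate == "AND" else any(results)

# Top event (illustrative): unintended actuator movement requires a faulty
# command signal AND loss of the protective interlock (either bypass or
# hardware failure).
tree = ("AND", ["cmd_signal_fault",
                ("OR", ["interlock_bypassed", "interlock_hw_failure"])])

print(evaluate(tree, {"cmd_signal_fault", "interlock_bypassed"}))  # True
print(evaluate(tree, {"cmd_signal_fault"}))                        # False
```

The second query shows the value of the interlock: the command fault alone cannot produce the top event.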
607. SHA (System Hazard Analysis) — T, Dh, 1993 or older.
Aim/Description: System Hazard Analysis purpose is to concentrate and assimilate the results of the Sub-System Hazard Analysis (SSHA) into a single analysis, to ensure the hazards and their controls or monitors are evaluated at a system level and handled as intended. SHA builds on preliminary hazard analysis (PHA) as a foundation. SHA considers the system as a whole and identifies how system operation, interfaces and interactions between subsystems, interfaces and interactions between the system and operators, and component failures and normal (correct) behaviour could contribute to system hazards. The SHA refines the high-level design constraints generated during PHA. Conformance of the system design to the design constraints is also validated. Through SHA, safety design constraints are traced to individual components based on the functional decomposition and allocation.
Remarks: Any closed-loop hazard identification and tracking system for an entire program, or group of subsystems, can be analysed. Identifies system design features and interface considerations between system elements that create hazards. Inductive.
Safety assessment stage: 3, 4. Domains: aircraft, medical. Application: x.
References: [FAA00], [FAA tools], [SEC-SHA], [ΣΣ93, ΣΣ97].
608. SHARD (Software Hazard Analysis and Resolution in Design) — T, Ds, 1994.
Aim/Description: Adaptation of HAZOP to the high-level design of computer-based systems. Provides a structured approach to the identification of potentially hazardous behaviour in software systems. SHARD uses a set of guidewords to prompt the consideration of possible failure modes. Based on software failure classification research, five guidewords are used in the SHARD method: omission, commission, early, late and value failure. These guidewords are applied systematically to functions and/or flows in a software design. Use of SHARD facilitates the systematic identification of software contributions to system level hazards and the definition of associated software safety requirements.
Remarks: Developed by DCSC (Dependable Computing Systems Centre). An early version was referred to as CHAZOP (Computer HAZOP).
Safety assessment stage: 3, 6. Domains: computer. Application: x.
References: [McDermid01], [McDermid&Pumfrey], [Mauri, 2000].
609. SHARP (Systematic Human Action Reliability Procedure) — T, H, 1984.
Aim/Description: Helps practitioners pick the right Human Reliability Analysis method to use for a specific action/situation. It employs a 4-phase procedure: 1) Identification of potential human errors (using detailed description of operator tasks and errors, and techniques like FMEA); 2) Selecting significant errors (e.g. based on likelihood and whether they lead directly to an undesirable event); 3) Detailed analysis of significant errors (likelihood analysis); 4) Integration into a system model (studying the dependence between human errors and system errors and the dependence of human errors on other errors). SHARP suggests a number of techniques to be used.
Safety assessment stage: 3, 4, 5. Domains: electricity. Application: x.
References: [MUFTIS3.2-I], [Wright&Fields&Harrison94].
610. SHEL or SHELL model — T, M, 1972.
Aim/Description: In the SHELL model, S=Software (procedures, symbology, etc.); H=Hardware (machine); E=Environment (operational and ambient); L=Liveware (human element). The model has the form of a plus-sign (+), consisting of 5 blocks, each with one letter of SHELL in it, with one of the 'L's in the middle. A connection between blocks indicates an interconnection between the two elements. The match or mismatch of the blocks (interconnection) is just as important as the characteristics described by the blocks themselves.
Remarks: Developed by Prof. Dr. E. Edwards of Birmingham University in 1972. Modified in about 1979 by Capt. Frank Hawkins of KLM.
Safety assessment stage: 2. Domains: aviation. Application: x x x.
References: [Edwards, 1972], [Edwards, 1988], [FAA00], [Hawkins93], [ICAO Doc 9806].
611. SHERPA (Systematic Human Error Reduction and Prediction Approach) — T, M, 1986.
Aim/Description: Focuses on particular task types depending on the industry concerned. Root of TRACEr, HERA I, HERA II. The description of activities developed using HTA is taken task-by-task and scrutinised to determine what can go wrong. Each task is classified into one of 5 basic types (i.e. checking, selection, action, information communication and information retrieval) and a taxonomy of error types is applied. The immediate consequences for system performance are recorded. For each error type, an assessment of likelihood and criticality is made. Finally, potential recovery tasks and remedial strategies are identified.
Remarks: Related to SCHEMA and PHEA. Equivalent to FMEA used in reliability technology. It also works like a human HAZOP.
Safety assessment stage: 3, 5, 6. Domains: nuclear. Application: x.
References: [Kirwan94], [Kirwan98-1], [FAA HFW].
612. Shock method — T, R, 1991 or older.
Aim/Description: Is used to quantify common cause effects identified by Zonal Analysis.
Safety assessment stage: 5. Domains: aircraft. Application: x.
References: [MUFTIS3.2-I].
613. Signal Flow Graphs — T, Dh, 1966.
Aim/Description: Identifies the important variables and how they relate within the system. The analysis is conducted by selecting a system output variable and then identifying all the variables that could influence this. The network presents the system variables as nodes connected by flows.
Remarks: Related to State Transition Diagrams.
Safety assessment stage: 2. Domains: electricity, defence. Application: x.
References: [KirwanAinsworth92], [HEAT overview], [FAA HFW].
614. SIMMOD Pro — I, R, 1997.
Aim/Description: The Airport and Airspace Simulation Model, SIMMOD, is an FAA-validated model used by airport planners and operators, airlines and air traffic authorities. Simmod PRO! adds advanced modelling capabilities, incorporating rules-based dynamic decision making. The rules-based decision making can be used to model complex interactions between ATM systems, disruptive events and human resources and activities (e.g., controllers, pilots, etc.).
Remarks: Developed by ATAC.
Safety assessment stage: 4. Domains: ATM. Application: x x.
References: [SAP15].
Simulators/mock-ups: See Computer Modelling and Simulation. See Prototype Development or Prototyping or Animation.
Site Visits: See Plant walkdowns/surveys.
615. Situational Awareness Error Evolution — T, H, about 2001.
Aim/Description: The Situation Awareness of an operator may be erroneous for several reasons, e.g., wrong perception of relevant information, wrong interpretation of perceived information, wrong prediction of a future state, and propagation of error due to agent communication. A situation awareness error can evolve and expand as it is picked up by other humans ('Chinese whispering'). This error propagation can be modelled and analysed using e.g. stochastic hybrid models.
Remarks: Chinese whispers is a game in which the first player whispers a phrase or sentence to the next player. Each player successively whispers what that player believes he or she heard to the next. The last player announces the statement to the entire group. Errors typically accumulate in the retellings, so the statement announced by the last player differs significantly, and often amusingly, from the one uttered by the first.
Safety assessment stage: 4. Domains: ATM. Application: x x.
References: [DiBenedetto et al, 2005], [Stroeve&Blom&Park03], Wikipedia.
616. SLIM (Success Likelihood Index Method) — T, H, 1984.
Aim/Description: Estimates human error probabilities. Two modules: MAUD (Multi-Attribute Utility Decomposition, used to analyse a set of tasks for which human error probabilities are required) and SARAH (Systematic Approach to the Reliability Assessment of Humans, used to transform success likelihoods into human error probabilities (HEP)).
Remarks: Developed by D.E. Embrey, P. Humphreys, E.A. Rosa, B. Kirwan, and K. Rea, Brookhaven National Laboratory, July 1984. Similar to APJ. Can be reserved for difficult HEP assessments that HEART and THERP are not designed for.
Safety assessment stage: 5. Domains: nuclear, chemical. Application: x.
References: [Humphreys88], [Kirwan&Kennedy&Hamblen], [Kirwan94], [MUFTIS3.2-I], [GAIN ATM, 2003], Wikipedia.
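The quantification step that SARAH performs is often described as a log-linear calibration: a Success Likelihood Index (SLI) is formed as a weighted sum of performance-shaping-factor ratings, and log10(HEP) = a*SLI + b, with a and b fitted to two anchor tasks of known HEP. A sketch under that common formulation (all weights, ratings and anchor HEPs are invented):

```python
from math import log10

def sli(weights, ratings):
    """Success Likelihood Index: weighted sum of PSF ratings (0..1)."""
    return sum(w * r for w, r in zip(weights, ratings))

weights = [0.4, 0.35, 0.25]        # e.g. time pressure, training, interface

# Anchor tasks: (PSF ratings, known HEP) - used to calibrate a and b.
anchors = [([0.9, 0.8, 0.9], 1e-4), ([0.2, 0.3, 0.1], 1e-1)]

s1, s2 = (sli(weights, r) for r, _ in anchors)
h1, h2 = (log10(h) for _, h in anchors)
a = (h1 - h2) / (s1 - s2)          # solve log10(HEP) = a*SLI + b
b = h1 - a * s1

def hep(ratings):
    """Human error probability for a task with the given PSF ratings."""
    return 10 ** (a * sli(weights, ratings) + b)

print(f"{hep([0.6, 0.5, 0.5]):.2e}")   # falls between the two anchors
```

A task with middling ratings lands between the anchor HEPs, as expected from the monotone log-linear mapping.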
617. SLM (Step Ladder Model) — T, H, 1986.
Aim/Description: SLM is an information-processing model that assumes an expected sequence of mental operations in the course of decision making. The 8 steps are Activation, Observe, Identify, Interpret, Evaluate, Define Task, Formulate Procedure, and Execute. Errors can occur when operators skip intermediate steps to decrease mental demand. Three types of decisions are conceptualised (skill, rule, knowledge-based model): "Skill-based" decisions proceed directly from detection to execution with few intermediate mental steps. "Rule-based" decisions require a mental representation of the system state (e.g. the air traffic situation), and the selection of an appropriate procedure based on that recognition. "Knowledge-based" decisions proceed through causal reasoning.
Remarks: Developed by Rasmussen. Considers cognitive elements, not only behavioural patterns.
Safety assessment stage: 2, 3, 4. Application: x.
References: [FAA HFW], [Rasmussen86], [Weitzman00], [GAIN ATM, 2003].
618. SMHA (State Machine Hazard Analysis) — T, Ds, 1987.
Aim/Description: Used to identify software-related hazards. A state machine is a model of the states of a system and the transitions between them. Software and other component behaviour is modelled at a high level of abstraction, and faults and failures are modelled at the interfaces between software and hardware.
Remarks: Often used in computer science. For complex systems, there is a large number of states involved. Related to Petri nets. The procedure can be performed early in the system and software development process.
Safety assessment stage: 3, 4. Domains: avionics. Application: x.
References: [Leveson95], [Houmb02].
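A basic question in a state-machine hazard analysis is whether any hazardous state is reachable from the initial state. A minimal sketch using breadth-first search over an invented transition relation (the states are illustrative, not from any cited system):

```python
from collections import deque

# Toy weapon-arming state machine: note that "fault" has an outgoing
# transition into "firing" but no inbound transition, so it is a modelled
# hazardous entry path that is unreachable in normal operation.
TRANSITIONS = {
    "idle":      ["arming"],
    "arming":    ["armed", "idle"],
    "armed":     ["firing", "disarming"],
    "disarming": ["idle"],
    "firing":    ["idle"],
    "fault":     ["firing"],
}
HAZARDOUS = {"fault"}

def reachable(start):
    """Return the set of states reachable from `start` (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for nxt in TRANSITIONS.get(s, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable("idle")))
print(bool(reachable("idle") & HAZARDOUS))  # False: hazard unreachable
```

Real SMHA works on much richer models (and tools handle the state explosion), but the reachability question has this shape.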
619. SMORT (Safety Management Organisation Review Technique) — T, R, 1987.
Aim/Description: SMORT is a simplified modification of MORT. This technique is structured by means of analysis levels with associated checklists, while MORT is based on a comprehensive tree structure. The SMORT analysis includes data collection based on the checklists and their associated questions, in addition to evaluation of results. The information can be collected from interviews, studies of documents and investigations. It can be used to perform detailed investigation of accidents and near misses. It also serves as a method for safety audits and planning of safety measures.
Remarks: Developed by U. Kjellén et al. (Norway).
Safety assessment stage: 8. Domains: offshore. Application: x.
References: [Kjellen, 2000], [NEMBS, 2002].
620. SNEAK (Sneak Circuit Analysis) — T, R, 1967 / 1991.
Aim/Description: Sneak Circuit Analysis identifies unintended paths or control sequences that may result in undesired events or inappropriately timed events. Sneak Analysis starts with the development of a stepwise flow chart of the task sequence. Clue application is next carried out using the computerised system. A number of the questions will require a relatively detailed human factors analysis of the installation if they are to be answered. For each question, there is back-up information expanding on what constitutes an acceptable system configuration in human factors terms. Sneak paths are then identified by considering the logical possibilities for flows in the system. Barriers that are present must be considered at this point.
Remarks: Applicable to those components that are safety critical. This technique is applicable to control and energy-delivery circuits of all kinds, whether electronic/electrical, pneumatic, or hydraulic. Tools available. Originally developed (Boeing) to look at unintended connections in wiring systems. Later (1991) adapted considerably to consider errors of commission in HRA. Highly resource-intensive.
Safety assessment stage: 2, 3, 4. Domains: aircraft, nuclear, computer, electricity. Application: x x x.
References: [Bishop90], [EN 50128], [FAA AC431], [FAA00], [Kirwan95], [Kirwan98-1], [Rakowsky], [ΣΣ93, ΣΣ97], [Sparkman92].
621. SOAM (Systematic Occurrence Analysis Methodology) — T, O, 2007.
Aim/Description: Aim is to broaden the focus of an investigation from human involvement issues, also known as "active failures of operational personnel" under Reason's original model, to include analysis of the latent conditions deeper within the organisation that set the context for the event. The SOAM process follows six steps: 1) Review gathered data; 2) Identify barriers; 3) Identify human involvement; 4) Identify contextual conditions; 5) Identify organisational factors; 6) Prepare SOAM chart.
Remarks: Reason's original Swiss Cheese model has been adapted in accordance with a "just culture" philosophy. 'Unsafe acts' are referred to as Human Involvement; 'Psychological precursors of unsafe acts' as Contextual conditions; 'Fallible decisions' as Organisational and system factors. Data gathering is according to the SHEL model.
Safety assessment stage: 7. Domains: ATM. Application: x x x x.
References: [Licu, 2007].
622. SOCRATES (Socio-Organisational Contribution to Risk Assessment and the Technical Evaluation of Systems) — I, O, 1998.
Aim/Description: Analysis of organisational factors. Is intended to aid conceptualising the role that organisational factors play in shaping plant performance and how they influence risk.
Remarks: Developed by Idaho National Engineering and Environmental Laboratory (INEEL). According to [Oien et al, 2005], US NRC terminated the project and no final report exists.
Safety assessment stage: 3, 5. Domains: nuclear. Application: x x.
References: [HRA Washington], [NEA99], [Oien et al, 2010].
623. SOFIA (Sequentially Outlining and Follow-up Integrated Analysis) — T, R, 2001.
Aim/Description: SOFIA is an analytical and graphical method supporting the process of ATM safety occurrence investigation to distinguish between the causes of an occurrence. It is for use during factual information gathering, event reconstruction, event analysis and issuing recommendations. It refers to the three layers in the Swiss Cheese model: unsafe acts, local workplace factors and organisational factors. The method uses event/condition building blocks to describe the causal chain leading to an occurrence. Building blocks are associated with a unique actor at a particular moment in time. Actor(s) can be any representative player in the occurrence, including persons but also technical systems or any attribute that is important and is dynamic in the course of a particular occurrence, like separation.
Remarks: Developed in EUROCONTROL in collaboration with the Bulgarian Air Traffic Services Authority. Link with TOKAI and HERA.
Safety assessment stage: 8. Domains: ATM. Application: x x x.
References: [Blajev, 2003].
624. Software configuration management — G, D.
Aim/Description: Requires the recording of the production of every version of every significant deliverable and of every relationship between different versions of the different deliverables. The resulting records allow the developer to determine the effect on other deliverables of a change to one deliverable.
Remarks: Technique used throughout development. In short it is "To look after what you've got so far".
Safety assessment stage: 6. Domains: computer. Application: x.
References: [EN 50128], [Jones&Bloomfield&Froome&Bishop01], [Rakowsky], [SCM biblio], Wikipedia.
625. Software Testing — T, Ds, 1976 or older.
Aim/Description: Software Testing provides an objective, independent view of the software to allow the business to appreciate and understand the risks at implementation of the software. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs. Several methods of testing exist, e.g.:
• Assertions and plausibility checks.
• Avalanche/Stress Testing.
• Back-to-back Testing.
• Boundary value analysis.
• Equivalence Partitioning and Input Partition Testing.
• Probabilistic testing. Self testing and Capability testing.
• Tests based on Random Data.
• Tests based on Realistic data.
• Tests based on Software structure.
• Tests based on the Specification.
Remarks: See also Code Analysis, Code Coverage, Code Inspection Checklists, Code Logic Analysis, Complexity Models, Control Flow Checks, Interface Testing, Test Adequacy Measures.
Safety assessment stage: 7. Domains: computer. Application: x.
References: [Bishop90], [EN 50128], [ISO/IEC 15443], [Jones&Bloomfield&Froome&Bishop01], [Rakowsky], Wikipedia.
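Boundary value analysis, one of the listed techniques, deserves a concrete illustration: test exactly at, just below, and just above each boundary of the input domain. The function under test is invented for the example:

```python
import unittest

def altitude_ok(ft):
    """Accept altitudes in the closed interval [0, 41000] feet (toy example)."""
    return 0 <= ft <= 41000

class BoundaryValueTests(unittest.TestCase):
    def test_boundaries(self):
        # probe each boundary from both sides, plus the boundary itself
        cases = [(-1, False), (0, True), (1, True),
                 (40999, True), (41000, True), (41001, False)]
        for ft, expected in cases:
            self.assertEqual(altitude_ok(ft), expected, msg=f"ft={ft}")

unittest.main(argv=["bva"], exit=False, verbosity=0)
```

Off-by-one defects (e.g. writing `<` instead of `<=`) are exactly the class of bug this technique is designed to catch.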
626. Software Time-out Checks — T, Ds, 1980 or older.
Aim/Description: Aim is to provide time limits for software running non-deterministic tasks.
Remarks: Useful to provide determinism on non-deterministic tasks in safety computer systems. Related to error-recovery and time-out checks.
Safety assessment stage: 6. Domains: computer. Application: x.
References: [Bishop90].
627. SOM (Systems Development by an Object-oriented Methodology) — I, D, 1987 or older.
Aim/Description: SOM is a development language and methodology covering the development of systems consisting of software and hardware from requirements to implementation, with special emphasis on real-time systems.
Remarks: Based on Extended FSM, related to SBC, CCS, SDL, SADT. Tools available.
Safety assessment stage: 2, 6. Domains: computer. Application: x x.
References: [Bishop90].
628. SPAM (Situation-Present Assessment Method) — T, H, 1998.
Aim/Description: SPAM is a method of measuring situation awareness (SA). In contrast to SAGAT, the SPAM method uses response latency as the primary dependent variable and does not require a memory component. It acknowledges that SA may sometimes involve simply knowing where in the environment to find some information, rather than remembering what that information is exactly.
Safety assessment stage: 5. Domains: ATC. Application: x.
References: [HIFA Data], [Durso95], [FAA HFW].
629. SPAR HRA (Simplified Plant Analysis Risk Human Reliability Assessment) — T, H, 2001 or older.
Aim/Description: Quick, easy-to-use, screening-level (i.e. not full scope) HRA technique. Significant revision of ASP (Accident Sequence Precursor). Supports ASP analysis of operating events at Nuclear Power Plants. Incorporates the advantages of other human reliability assessment methods (e.g. IPE, HPED, INTENT).
Remarks: Qualitative and quantitative. Was developed for the US NRC's (Nuclear Regulatory Commission) Simplified Plant Analysis Risk (SPAR) program.
Safety assessment stage: 5. Domains: nuclear. Application: x.
References: [HRA Washington].
630. SPC (Statistical Process Control) — T, Dh, 1920s.
Aim/Description: Aim is to understand and control variations in a process. Four general steps: 1) Describe the distribution of a process; 2) Estimate the limits within which the process operates under 'normal' conditions; 3) Determine if the process is 'stable': sample the output of the process and compare to the limits. Decide: a) 'process appears to be OK; leave it alone', or b) 'there is reason to believe something has changed' and look for the source of that change; 4) Continuous process improvement.
Remarks: Pioneered by Walter A. Shewhart in the early 1920s. Applicable to any process where sufficient data can be obtained. Many training courses available.
Safety assessment stage: 6. Domains: computer. Application: x.
References: [Leavengood98], [ΣΣ93, ΣΣ97], Wikipedia.
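Steps 2 and 3 above are commonly realised as a Shewhart individuals chart: control limits at the baseline mean plus/minus three standard deviations, against which new samples are compared. The daily defect counts below are invented for illustration:

```python
from statistics import mean, stdev

# Step 1-2: describe the baseline process and estimate its control limits.
baseline = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4, 6, 5]

centre = mean(baseline)
sigma = stdev(baseline)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma   # upper/lower control limits

# Step 3: compare new samples to the limits and flag instability.
new_samples = [5, 4, 9, 3]
out_of_control = [x for x in new_samples if not lcl <= x <= ucl]

print(round(centre, 2), round(ucl, 2))  # 4.5 7.5
print(out_of_control)                   # [9] - investigate this sample
```

A flagged point (decision "b" in step 3) is a trigger to look for the source of the change, not proof of a defect in itself.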
631. SPEAR (System for Predictive Error Analysis and Reduction) — T, H, 1993.
Aim/Description: SPEAR uses an error taxonomy consisting of action, checking, retrieval, transmission, selection and planning errors, and operates on a HTA of the task under analysis. The analyst considers a series of performance-shaping factors for each bottom-level task step and determines whether or not any credible errors could occur. For each credible error, a description of it, its consequences and any error reduction measures are provided.
Remarks: Taxonomic approach to Human Error Identification (HEI) similar to SHERPA. SPEAR was developed by the Centre for Chemical Process Safety (CCPS) for use in the American processing industry's HRA programme.
Safety assessment stage: 5. Domains: chemical. Application: x.
References: [Baber et al, 2005], [Stanton et al, 2005].
632. Specification Analysis — G, 1990 or older.
Aim/Description: Specification Analysis evaluates the completeness, correctness, consistency and testability of software requirements. Well-defined requirements are strong standards by which to evaluate a software component. Specification analysis evaluates requirements individually and as an integrated set.
Safety assessment stage: 7. Domains: avionics. Application: x.
References: [NASA-GB-1740.13-96].
633. SpecTRM (Specification Tools and Requirements Methodology) — I, Ds, 2002.
Aim/Description: SpecTRM helps system and software engineers develop specifications for large, complex safety-critical systems. It enables engineers to find errors early in development so that they can be fixed with the lowest cost and impact on the system design. It also traces both the requirements and design rationale (including safety constraints) throughout the system design and documentation, allowing engineers to build required system properties into the design from the beginning. SpecTRM provides support for manual inspection, formal analysis, simulation, and testing, while facilitating communication and the coordinated design of components and interfaces.
Remarks: Developed by Nancy Leveson. Is based on the principle that critical properties must be designed into a system from the start. As a result, it integrates safety analysis, functional decomposition and allocation, and human factors from the beginning of the system development process.
Safety assessment stage: 2, 3, 6. Domains: aircraft, avionics, defence, medical. Application: x.
References: [Leveson02].
634. SPFA (Single-Point Failure Analysis) — T, Dh, 1980.
Aim/Description: This technique is to identify those failures that would produce a catastrophic event in terms of injury or monetary loss if they were to occur by themselves. The SPFA is performed by examining the system, element by element, and identifying those discrete elements or interfaces whose malfunction or failure, taken individually, would induce system failure.
Remarks: This approach is applicable to hardware systems, software systems, and formalised human operator systems. It is sometimes referred to as another standard name for FMEA.
Safety assessment stage: 3. Domains: space. Application: x x x.
References: [FAA AC431], [FAA00], [ΣΣ93, ΣΣ97].
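The element-by-element scan described above can be sketched against a simple series/parallel success model: fail each element in turn and ask whether the system still functions. The block structure is invented for illustration:

```python
# Single-point failure scan over a toy system: it works if the sensor
# works AND at least one of two redundant pumps works AND the controller
# works. An element is a single point of failure (SPF) if failing it
# alone brings the system down.

def system_up(failed):
    pumps_ok = ("pump_a" not in failed) or ("pump_b" not in failed)
    return ("sensor" not in failed) and pumps_ok and ("controller" not in failed)

elements = ["sensor", "pump_a", "pump_b", "controller"]
single_points = [e for e in elements if not system_up({e})]
print(single_points)  # ['sensor', 'controller'] - the redundant pumps are not SPFs
```

The scan immediately shows where redundancy pays off: neither pump is a single point of failure, while the unduplicated sensor and controller both are.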
635. Spotfire — D.
Aim/Description: Spotfire is a tool for visual display of data in many dimensions, using 3-d projections and various sizes, shapes, and colours. This allows the user to spot multi-dimensional relationships that might not be detectable through looking at raw numbers or more limited presentations. Spotfire's visualisation technology allows examining data relationships. It has a series of built-in heuristics and algorithms to aid the user in discovering alternative views of data.
Remarks: Developed by Spotfire, Inc. (now TIBCO Spotfire).
Safety assessment stage: 2, 7. Application: x.
References: [GAIN ATM, 2003], [GAIN AFSA, 2003], [FAA HFW].
636. SRG CAP 760 (Safety Regulation Group CAA Publication 760) — I, R, 2006.
Aim/Description: Risk assessment and mitigation process definition. It follows seven steps: 1) System description; 2) Hazard and consequence identification; 3) Estimation of the severity of the consequences of the hazard occurring; 4) Estimation/assessment of the likelihood of the hazard consequences occurring; 5) Evaluation of the risk; 6) Risk mitigation and safety requirements; 7) Claims, arguments and evidence that the safety requirements have been met, and documenting this in a safety case.
Remarks: Developed by UK CAA's Safety Regulation Group (SRG).
Safety assessment stage: 2, 3, 4, 5, 6, 7. Domains: ATM. Application: x x x.
References: [CAP 760].
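Steps 3 to 5 of such a process are often realised as a severity/likelihood risk matrix. A minimal sketch; the category names and the tolerability mapping are invented for illustration and are not the CAP 760 classification scheme:

```python
# Toy risk matrix for steps 3-5: classify severity and likelihood, then
# map the combination to a tolerability class (step 6, mitigation, would
# then be applied to anything not 'acceptable').

SEVERITY = ["minor", "major", "hazardous", "catastrophic"]      # step 3
LIKELIHOOD = ["extremely improbable", "remote", "probable"]     # step 4

def risk_class(severity, likelihood):                           # step 5
    score = SEVERITY.index(severity) + LIKELIHOOD.index(likelihood)
    if score >= 4:
        return "unacceptable"
    if score >= 2:
        return "tolerable with mitigation"
    return "acceptable"

print(risk_class("catastrophic", "remote"))         # unacceptable
print(risk_class("major", "remote"))                # tolerable with mitigation
print(risk_class("minor", "extremely improbable"))  # acceptable
```

The additive score is only one possible mapping; a real scheme defines the matrix cell by cell in its regulatory material.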
637. SRK (Skill, Rule and Knowledge-based behaviour model) — T, H, 1981.
Aim/Description: Psychologically-based model, assuming three levels: 1) Skill-based level: a query of an agent is accepted and, by searching the knowledge-base, proper immediate action is selected. 2) Rule-based level: a query of an agent is accepted and a case data base is consulted to determine the action. 3) Knowledge-based level: a query is accepted and the agent uses its knowledge base to interact with the other agent and identify the actual needs. After this problem identification level, the proper action is determined by consulting other agents.
Remarks: Rarely used as a model on its own.
Safety assessment stage: 2. Domains: many. Application: x.
References: [Kirwan98-1], [SRK].
638. SRS-HRA (Savannah River Site Human Reliability Analysis) — D, 1994.
Aim/Description: Data-based approach based on data collected from four existing SRS databases (based on incidents, logs, etc.): fuel processing; fuel fabrication; waste management; and reactors. The approach is contextual and taxonomy-based. Uses a checklist and is relatively easy to use.
Remarks: Related to JHEDI. The name comes from the Savannah River Site, which is a nuclear reservation in South Carolina, USA, established in 1950 to produce special radioactive isotopes for national security purposes.
Safety assessment stage: 3, 5. Domains: nuclear. Application: x.
References: [Kirwan98-1], Wikipedia.
639. SSA (System Safety Assessment) according to ARP 4761 — T, Dh/Ds, 1994.
Aim/Description: The SSA according to ARP 4761 collects, analyses, and documents verification that the system, as implemented, meets the system safety requirements established by the FHA and the PSSA. It is a systematic, comprehensive evaluation of the implemented system functions to show that relevant safety requirements are met.
Remarks: This SSA is a refinement and extension of JAR-25 steps (though JAR-25 does not use the term SSA). It covers both hardware and software.
Safety assessment stage: 7. Domains: aircraft, avionics. Application: x x.
References: [ARP 4754], [ARP 4761], [Klompstra&Everdij97], [Lawrence99].
640. SSA (System Safety Assessment) according to EATMP SAM — T, R, 2004.
Aim/Description: The SSA according to EATMP SAM collects arguments, evidence and assurance to ensure that each system element as implemented meets its safety requirements and that the system as implemented meets its safety objectives throughout its lifetime. It demonstrates that all risks have been eliminated or minimised as far as reasonably practicable in order to be acceptable, and subsequently monitors the safety performance of the system in service. The safety objectives are compared with the current performances to confirm that they continue to be achieved by the system. Five substeps are identified: 1) SSA initiation; 2) SSA planning; 3a) Safety evidence collection during implementation and integration (including training); 3b) Safety evidence collection during transfer to operations; 3c) Safety evidence collection during operations and maintenance; 3d) Safety evidence collection during system changes (people, procedures, equipment); 3e) Safety evidence collection during decommissioning; 4a) SSA validation; 4b) SSA verification; 4c) SSA assurance process; 5) SSA completion. Most of these steps consist of subtasks.
Remarks: This SSA is a refinement and extension of JAR-25 steps and the SSA according to ARP 4761, but its scope is extended to Air Navigation Systems, covering AIS (Aeronautical Information Services), SAR (Search and Rescue) and ATM (Air Traffic Management).
Safety assessment stage: 1, 7. Domains: ATM. Application: x x x x.
References: [EHQ-SAM], [Review of SAM techniques, 2004].
641. SSCA (Software Sneak Circuit Analysis) — T, Ds, 1976 or older.
Aim/Description: SSCA is designed to discover program logic that could cause undesired program outputs or inhibits, or incorrect sequencing/timing.
Remarks: The technique is universally appropriate to any software program. See also SNEAK.
Safety assessment stage: 3. Domains: computer. Application: x.
References: [FAA00], [ΣΣ93, ΣΣ97].
Id Method name Format Purpose Year Aim/Description Remarks Safety assessment stage (1 2 3 4 5 6 7 8) Domains Application (Hw Sw Hu Pr Or) References
642. SSG M 1991 Models all discrete states of a system and associates to 2 4 many x x • [MUFTIS3.2-I]
(State Space Graphs (or or each discrete state a level of severity of consequences • Wikipedia
Discrete State Space older on the service delivered. Petri Nets may be used
Graphs)) during the modelling.
643. SSHA T R 1972 The SSHA is performed to identify and document This protocol is appropriate to 3 aircraft x x • [FAA AC431]
(Subsystem Hazard or hazards associated with the design of subsystems subsystems only. • [FAA00]
Analysis) older including component failure modes, critical human • [FAA tools]
error inputs, and hazards resulting from functional • [ΣΣ93, ΣΣ97]
relationships between components and assemblies
within the subsystems as well as their external
interfaces. It includes software whose performance,
degradation, functional failure or inadvertent
functioning could result in a hazard. It also includes a
determination of the modes of failure including
reasonable human errors, single point failures and the
effects on safety when failures occur within subsystem
components and assemblies.
644. SSRFA T Ds 1996 Safety requirements are flowed down into the system 6 7 avionics x • [NASA-GB-1740.13-
(Software Safety or design specifications. Tools and methods for 96]
Requirements older requirements flowdown analyses include checklists
Flowdown Analysis) and cross references. A checklist of required hazard
controls and their corresponding safety requirements
should be created and maintained.
645. SST T O 2010 The Safety Scanning Tool (SST) is built upon the Developed for Eurocontrol SRC 1 2 8 aviation x x x x x • [SCAN TF, 2010]
(Safety Scanning Tool) rationale of the “Safety Fundamentals” and upon basic (Safety Regulatory Commission) ATM • [SCAN TF, 2010a]
safety regulatory principles with relevance for safety by University of Kassel, National • [SCAN TF, 2010b]
regulatory activities. This tool contains questions that Aerospace Laboratory NLR and
are traceable to existing safety regulations; the Helios Ltd.
questions are therefore relevant and applicable in a Further developed from Safety
regulatory licensing process. Application of the SST Screening Techniques, with more
also results in addressing key issues that need to be focus on Regulatory Safety
part of a consistent safety argument, if this argument issues.
should provide the basis for regulatory acceptance of
an operational change. The tool is an electronic wizard
implemented in MS Excel. The rationale of and
guidance on the tool and the interpretation of its
results is available in a number of supporting
documents.
646. STAHR T H 1985 Determines human reliability by the combined Supporting tool commercially 4 5 nuclear x • [Humphreys88]
(Socio-Technical influences of factors, which influences are in turn available. Developed in the field offshore • [KirwanAinsworth92]
Assessment of Human affected by other lower level influences. The effect of of decision analysis. Former • [Kirwan94]
Reliability) each identified influence is evaluated quantitatively, name is Influence Diagram • [MUFTIS3.2-I]
with the resulting values used to calculate human error Approach (IDA, 1980). Is not
probability estimates. considered very accurate.
647. STAMP I O 2002 STAMP aims to integrate all aspects of risk, including STAMP was developed by Nancy 3 5 6 8 aviation x x x x • [Leveson2004]
(Systems-Theoretic organisational and social aspects. It can be used as a Leveson and first presented at ATM • [Leveson2006]
Accident Model and foundation for new and improved approaches to MIT Internal Symposium in May
Processes) accident investigation and analysis, hazard analysis 2002
and accident prevention, risk assessment and risk
management, and for devising risk metrics and
performance monitoring. Emphasis is on the use of
visualisation and building shared mental models of
complex system behaviour among those responsible
for managing risk. Systems are viewed in STAMP as
interrelated components that are kept in a state of
dynamic equilibrium by feedback loops of information
and control. Each level of control over safety arises
from (1) component failures, (2) dysfunctional
interactions among components, or (3) unhandled
environmental disturbances at a lower level. The basic
concepts in STAMP are constraints, control loops and
process models, and levels of control. Particular
attention is focused on the role of constraints in safety
management. Accidents can be understood in terms of
why the controls that were in place did not prevent or
detect maladaptive changes.
648. STAR T R 2007 STAR is a version of an IRP that is aimed to predict STAR interpolates between an 5 ATM x x • [Perrin, 2007]
(Safety Target how risks will be affected as Operational Integrated Risk Picture (IRP) for
Achievement Roadmap) Improvements (ATM Changes) are implemented and 2005 and an IRP for 2020.
traffic grows, and to help optimize the implementation
strategy from the safety point of view.
649. Starlight D Starlight is an R&D platform developed for the Developed at Battelle Memorial 7 x • [GAIN ATM, 2003]
intelligence community. Starlight uses visual Institute. • [FAA HFW]
metaphors to depict the contents of large datasets. It is
an information system that couples advanced
information modelling and management functionality
with a visualisation-oriented user interface. This
makes relationships that exist among the items visible,
enabling new forms of information access,
exploitation and control.
State Transition See Finite State Machines
Diagrams
650. STEADES D 2001 STEADES is a database containing de-identified STEADES was an initiative of 8 aviation x x x • [STEADES]
(Safety Trend incident reports with over 350,000 records. It provides the IATA Safety Committee. The
Evaluation, Analysis & a forum for the analysis, trending, and general inquiry data is gathered from airlines.
Data Exchange System) of the leading indicators of industry safety in order to
develop a comprehensive list of prevention strategies.
It can be used for global safety trending, customised
analysis projects, ad-hoc mini-analysis requests.
Results can be provided to members: -) Daily, through
the safety data management & analysis (SDMA)
website and ad-hoc mini-analysis requests; -)
Monthly: with the STEADES safety bulletin,
providing a regular pulse of accident information by
email; -) Quarterly: with the STEADES safety trend
analysis report, highlighting the latest safety features
found in the incident data; -) Yearly: with the IATA
safety report, featuring in-depth synopsis of the
previous year's accidents, including analysis of
contributing factors.
651. STEP T R 1978 This method is used to define systems; analyse system In accident investigation, a 8 x x • [FAA00]
or STEPP or operations to discover, assess, and find problems; find sequential time of events may • [ΣΣ93, ΣΣ97]
(Sequentially-Timed older and assess options to eliminate or control problems; give critical insight into
Events Plot or monitor future performance; and investigate accidents. documenting and determining
Sequential Times Event It is an events-analysis-based approach in which causes of an accident. The
Plotting Procedure) events are plotted sequentially (and in parallel, if technique is universally
appropriate) to show the cascading effect as each appropriate. In [FAA00], STEP is
event impacts on others. It is built on the management also named Multilinear Event
system embodied in the Management Oversight and Sequence Tool (MES).
Risk Tree (MORT) and system safety technology.
652. Stochastic Differential M 1990 These are differential equations on hybrid state space Relation with some Petri nets also 4 ATM x x x x x • [Blom90]
Equations on Hybrid with stochastic elements. The stochastic elements may established. These Petri nets can • [Blom03]
State Space model noise variations in processes, or the occurrence be used to make a compositional • [Krystul&Blom04]
of random events. The advantage of using this esoteric specification of the operation • [Krystul&Blom05]
modelling formalism is the availability of powerful considered, which fits the esoteric
stochastic analysis tools. but powerful stochastic • Wikipedia
differential equation models.
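As an illustrative sketch (not part of the entry above), a simple SDE on a hybrid state space can be simulated with the Euler–Maruyama scheme: a continuous state X driven by Brownian noise, plus a discrete mode that switches at random times. All parameter values and the specific drift/diffusion terms below are assumptions chosen for illustration.

```python
import math
import random

def simulate_hybrid_sde(a=1.0, b=0.2, switch_rate=0.5,
                        dt=0.01, n_steps=1000, seed=42):
    """Euler-Maruyama simulation of dX = -a(m)*X dt + b dW on a hybrid
    state space: X is continuous, m is a discrete mode that switches at
    random (Poisson-like) times and doubles the mean-reverting drift."""
    rng = random.Random(seed)
    t, x, mode = 0.0, 1.0, 0
    path = [(t, x, mode)]
    for _ in range(n_steps):
        if rng.random() < switch_rate * dt:   # random mode-switch event
            mode = 1 - mode
        drift = -a * (2.0 if mode else 1.0) * x
        dw = rng.gauss(0.0, math.sqrt(dt))    # Brownian increment
        x += drift * dt + b * dw
        t += dt
        path.append((t, x, mode))
    return path

path = simulate_hybrid_sde()
print(len(path), round(path[-1][1], 3))
```

The random mode switch is the "occurrence of random events" mentioned in the description; the Gaussian increment models the "noise variations in processes".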
653. Stress Reduction G D Aim is to ensure that under all normal operational 6 computer x x • [Bishop90]
circumstances both hardware components and
software activity are operated well below their
maximum stress levels.
Stress Testing See Avalanche/Stress Testing
654. Strongly Typed G D 1983 The term strong typing is used to describe those Tools available. Software design 6 computer x • [Bishop90]
Programming or situations where programming languages specify one & development phase. • [EN 50128]
Languages older or more restrictions on how operations involving • [Rakowsky]
values having different data types can be intermixed. • Wikipedia
Strong typing implies that the programming language
places severe restrictions on the intermixing that is
permitted to occur, preventing the compiling or
running of source code which uses data in what is
considered to be an invalid way. Aim is to reduce the
probability of faults by using a language that permits a
high level of checking by the compiler.
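As a small illustration of the restriction described above: Python is strongly (though dynamically) typed, so intermixing incompatible data types raises a TypeError rather than being silently coerced. The `safe_add` helper is a hypothetical example, not from the database.

```python
def safe_add(x: int, y: int) -> int:
    """Refuse invalid intermixing of data types explicitly -- the kind
    of restriction a strongly typed compiler enforces statically."""
    if not (isinstance(x, int) and isinstance(y, int)):
        raise TypeError(
            f"expected int, got {type(x).__name__}/{type(y).__name__}")
    return x + y

print(safe_add(2, 3))      # 5
try:
    safe_add(2, "3")       # invalid mixing of int and str
except TypeError as e:
    print("rejected:", e)
```

In a weakly typed language the same expression might silently yield "23" or 5; the strong-typing aim is exactly to prevent such source code from compiling or running.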
655. Structural Safety T R 1979 Is used to validate mechanical structures. Inadequate The approach is appropriate to 3 6 aircraft x • [FAA AC431]
Analysis or structural assessment results in increased risk due to structural design; i.e., airframe. • [FAA00]
older the potential for latent design problems causing • [ΣΣ93, ΣΣ97]
structural failures, i.e., contributory hazards. Structural
design is examined via mathematical analysis to
satisfy two conditions: 1) Equilibrium of forces, and
2) Compatibility of displacements. The structure
considered as a whole must be in equilibrium under
the action of the applied loads and reactions; and, for
any loading, the displacements of all the members of
the structure due to their respective stress-strain
relationships must be consistent with respect to each
other.
656. Structure Based Testing T Ds 1995 Software Testing technique. Based on an analysis of See also Software Testing. 7 computer x • [EN 50128]
or the program, a set of input data is chosen such that a • [Rakowsky]
older large fraction of selected program elements are
exercised. The program elements exercised can vary
depending upon level of rigour required.
657. Structure Diagrams T D 1995 Notation which complements Data Flow Diagrams. See also UML. 2 computer x • [EN 50128]
or They describe the programming system and a • [Rakowsky]
older hierarchy of parts and display this graphically, as a • Wikipedia
tree, with the following symbols: 1) rectangle
annotated with the name of the unit; 2) an arrow
connecting these rectangles; 3) A circled arrow,
annotated with the name of data passed to and from
elements in the structure chart. Structure Diagrams
document how elements of a data flow diagram can be
implemented as a hierarchy of program units.
658. Structured Programming G D 1967 Aim is to design and implement the program in a way Tools available. Software design 6 computer x • [Bishop90]
that makes the analysis of the program practical. This & development phase. • [EN 50128]
analysis should be capable of discovering all • [Rakowsky]
significant program behaviour. The program should • Wikipedia
contain the minimum of structural complexity.
Complicated branching should be avoided. Loop
constraints and branching should be simply related to
input parameters. The program should be divided into
appropriately small modules, and the interaction of
these modules should be explicit.
659. Structuring the System T D 1989 Aim is to reduce the complexity of safety critical Info from HAZOP, FTA, FMEA 6 computer x • [Bishop90]
according to Criticality software. can be used.
660. SUMI T Ds 1993 This generic usability tool comprises a validated 50- SUMI was developed by the 5 computer x • [Kirakowski, 1996]
(Software Usability item paper-based questionnaire in which respondents Human Factors Research Group • [SUMI background]
Measurement Inventory) score each item on a three-point scale (i.e., agree, (HFRG), University College, • [FAA HFW]
undecided, disagree). SUMI measures software quality Cork.
from the end user's point of view. The questionnaire is
designed to measure scales of: 1) Affect - the
respondent's emotional feelings towards the software
(e.g., warm, happy). 2) Efficiency - the sense of the
degree to which the software enables the task to be
completed in a timely, effective and economical
fashion. 3) Learnability - the feeling that it is relatively
straightforward to become familiar with the software.
4) Helpfulness - the perception that the software
communicates in a helpful way to assist in the
resolution of difficulties. 5) Control - the feeling that
the software responds to user inputs in a consistent
way and that its workings can easily be internalized.
Surveys See Interface Surveys
See Plant walkdowns/surveys
SUS See Rating Scales
(System Usability Scale)
661. SUSI T R 1993 HAZOP has been modified to handle Human- 2 3 6 health x x x • [Chudleigh&Clare94]
(Safety Analysis of User or computer interaction. The approach adopted in the transport • [Falla97]
System Interaction) older SUSI methodology is a natural extension of standard • [Stobart&Clare94]
hazard analysis procedures. The principal
development has been in the creation of an appropriate
representation of user system interaction. A major
advantage of this process is that the dataflow
representation gives an overview of the complete
system. The representation of the system as processes
and data/control flows is understood by individuals
with no software design training, such as operators
and users. The review process can lead to detailed
insights into potential flaws in the procedures and
processes. Designers with different viewpoints are
able to use a common representation and believe that
it increases their understanding of the total system.
662. SWAT T H 1981 SWAT is a technique to assess the workload placed on Although SWAT has been widely 7 ATC x • [GAIN ATM, 2003]
(Subjective Workload operators of complex human-machine systems. SWAT used, it has two main problems: it • [HEAT overview]
Assessment Technique) is designed to be easy to use, low cost, non-intrusive, is not very sensitive for low • [FAA HFW]
valid, and sensitive to workload variations. It has been mental workloads and it requires
applied to several complex domains, including air a time-consuming card sorting
traffic control. SWAT is composed of subjective pretask procedure.
operator ratings for three orthogonal dimensions of SWAT can also be applied to
workload: time load, mental effort load, and predict operator workload prior to
psychological stress load. For time load, the question a system being built; in such
is how much spare time the operator has. applications it is referred to as
For mental effort load, the question is how much Pro-SWAT (Projective SWAT).
mental effort or concentration is required. For
psychological stress load, the question is about
confusion, risk, frustration, and anxiety. Each
dimension is represented on a three-point scale with
verbal descriptors for each point. Individual
assessments are scaled and conjoint analysis is carried
out on the results to convert them to a single metric of
workload. There are 27 possible combinations; the
user can decide how to rank order these values.
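The 27 rating combinations and the rank-ordering step can be sketched as follows. The linear rescaling of rank to a 0–100 score stands in for the conjoint analysis that real SWAT applies, and the sum-based ranking is an illustrative stand-in for a subject's card sort; neither is the official procedure.

```python
from itertools import product

# Three SWAT dimensions, each on a 3-point scale (1 = low, 3 = high):
# time load, mental effort load, psychological stress load.
combos = list(product((1, 2, 3), repeat=3))   # all 27 combinations

def swat_score(rating, ranking):
    """Map a (time, effort, stress) rating to a 0-100 workload score.
    'ranking' is the rank order of all 27 combinations; a linear
    rescaling of rank replaces SWAT's conjoint analysis here."""
    rank = ranking.index(rating)              # 0 = lowest workload
    return 100.0 * rank / (len(ranking) - 1)

# Illustrative ranking: sort combinations by their summed ratings.
ranking = sorted(combos, key=sum)
print(len(combos))                        # 27
print(swat_score((1, 1, 1), ranking))     # lowest workload -> 0.0
print(swat_score((3, 3, 3), ranking))     # highest workload -> 100.0
```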
663. SWHA T Ds 1984 The purpose of this technique is to identify, evaluate, This practice is universally 3 5 6 computer x • [FAA AC431]
(Software Hazard or and eliminate or mitigate software hazards by means appropriate to software systems. • [FAA00]
Analysis) older of a structured analytical approach that is integrated • [ΣΣ93, ΣΣ97]
into the software development process. The SWHA
identifies hazardous conditions incident to safety
critical operator information and command and control
functions identified by the PHA, SHA, SSHA and
other efforts. It is performed on safety critical
software-controlled functions to identify software
errors/paths that could cause unwanted hazardous
conditions. The SWHA can be divided into two stages,
preliminary and follow-on.
664. SWIFT T Dh 1992 SWIFT is a systematic team-oriented technique for SWIFT may be used simply to 3 6 chemical x x • [DNV-HSE01]
(Structured What-IF hazard identification in chemical process plants. It identify hazards for subsequent
Technique) addresses systems and procedures at a high level. quantitative evaluation, or
SWIFT considers deviations from normal operations alternatively to provide a
identified by brainstorming, with questions beginning qualitative evaluation of the
“What if…?” or “How could…?”. The brainstorming hazards and to recommend
is supported by checklists to help avoid overlooking further safeguards where
hazards. SWIFT relies on expert input from the team appropriate.
to identify and evaluate hazards. There is no single As its name suggests SWIFT will
standard approach to SWIFT; it can be modified to generate answers more quickly
suit each individual application. An example protocol than HAZOP but is less
is: 1. Identify design boundaries. 2. Define the design thorough in looking at the detail.
intent and normal operating conditions. 3. Choose a Developed by DNV.
question category. 4. Identify a deviation from design
intent by applying a system of guidewords/ questions.
5. Identify possible causes for, and consequences of,
the deviation. A deviation can be considered
"meaningful" if it has a credible cause and can result
in harmful consequences. 6. For a meaningful
deviation, identify safeguards and decide what action,
if any, is necessary. 7. Record the discussion and
action. Steps 4 to 7 are repeated until all the
guidewords/questions have been exhausted and the
team is satisfied that all meaningful deviations have
been considered. The team then goes back to Step 3
and repeats the process for the next question category.
When all question categories have been exhausted, the
team then goes back to Step 1 and repeats the process
for the next phase/case.
665. Swiss Cheese Model T R 1990 James Reason’s Swiss Cheese model presents human James Reason’s model of 3 4 6 8 many x x • [GAIN AFSA, 2003]
error as a consequence rather than a cause, and should accident causation is intended as • [Reason90]
be the starting point for further investigation rather an approach toward • [Swiss Cheese]
than the end of the search for incident or accident understanding incidents and
causes. Reason’s key points can be best described as accidents and their underlying or
follows: 1) Hazards, errors and other threats to aircraft contributing factors. Its value,
operations happen all the time, but accidents do not— therefore, lies primarily in the
because most safety threats are caught and corrected orientation or attitude towards
by a variety of defenses; 2) The aviation environment investigations it has inspired.
has multiple or redundant layers of protection— The model is usually depicted as
designed to prevent mistakes or system failures from a series of slices of cheese with
cascading into accidents; 3) Each layer of protection holes. Arrows going through a
has flaws. As flaws develop in a layer, the risk for an hole in one slice may be stopped
accident begins to increase; 4) Accidents occur only by the next slice having no hole
when sufficient layers of protection are penetrated. at that point.
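The layers-with-holes picture admits a simple quantitative reading: if each defensive layer independently fails to stop a threat with some small probability, an accident requires every layer to be penetrated. The independence assumption and the probability values below are illustrative only; Reason's model explicitly allows correlated "holes" such as shared organisational weaknesses.

```python
from functools import reduce

def accident_probability(hole_probs):
    """Probability that a single threat penetrates every defensive
    layer, assuming the layers fail independently (a simplification
    of Reason's model, which allows correlated holes)."""
    return reduce(lambda acc, p: acc * p, hole_probs, 1.0)

# Four layers of protection, each with an illustrative 1-in-100
# chance of a 'hole' lining up with the threat trajectory.
layers = [0.01, 0.01, 0.01, 0.01]
print(accident_probability(layers))   # ~1e-08
```

This shows why "accidents occur only when sufficient layers of protection are penetrated": each additional intact layer multiplies the accident probability down by orders of magnitude.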
666. SYBORG I H 1996 A cognitive simulation approach which is the first to There is ongoing work to 2 3 nuclear x • [Kirwan98-1]
(System for the try to deal with emotional aspects of performance. It determine how emotions interact • [Lenne et al, 2004]
Behaviour of the aims to predict what emotions personnel will with each other and with error
Operating Group) experience when dealing with difficult nuclear power forms. SYBORG is stated to be
plant events, and aims to determine how these “the first approach that, in the
emotions will affect attention, thought, action, and future, may be able to identify
utterances. The emotions considered include fear, idiosyncratic errors, or errors
anxiety, tension, surprise, etc. caused by extreme stress in a
situation.”
667. Symbolic Execution T Ds 1976 Aim is to show the agreement between the source Useful for safety critical software 7 computer x • [Bishop90]
code and the specification. The program is executed providing the number of paths is • [EN 50128]
substituting the left hand side by the right hand side in small and there is good tool • [Rakowsky]
all assignments. Conditional branches and loops are support. Tools available. • Wikipedia
translated into Boolean expressions. The final result is See also Validation and
a symbolic expression for each program variable. This Verification.
can be checked against the expected expression.
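The substitution step described above can be sketched for straight-line code: each right-hand side has previously assigned variables replaced by their current symbolic expressions, leaving every variable as a closed-form expression in the program's inputs. Real symbolic executors additionally translate branches and loops into Boolean path conditions; this minimal sketch handles assignments only.

```python
import re

def symbolic_execute(assignments):
    """Symbolically execute a list of straight-line assignments of the
    form 'var = expression', substituting the left-hand side by the
    right-hand side throughout subsequent expressions."""
    env = {}
    for stmt in assignments:
        var, expr = (s.strip() for s in stmt.split("="))
        def sub(m):
            name = m.group(0)
            # Replace known variables by their symbolic expressions.
            return f"({env[name]})" if name in env else name
        env[var] = re.sub(r"[A-Za-z_]\w*", sub, expr)
    return env

program = ["x = a + b", "y = x * 2", "z = y - a"]
env = symbolic_execute(program)
print(env["z"])   # ((a + b) * 2) - a
```

The final symbolic expression for each variable can then be checked against the expression expected from the specification.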
Systematic Inspection See Formal Inspections. And see
Inspections and walkthroughs.
And see Safety Review or Safety
Audit.
Systematic Observation See Field Study
Systematic Occupational See Occupational Health Hazard
Safety Analysis Analysis.
668. T/LA T R 1980 This technique is a system safety analysis-based Any airport, airline and other 8 aviation x x x • [FAA AC431]
(Time/ Loss Analysis or process to semi-quantitatively analyse, measure and aircraft operators should have an • [FAA00]
for Emergency older evaluate planned or actual loss outcomes resulting emergency contingency plan to • [ΣΣ93, ΣΣ97]
Response Evaluation ) from the action of equipment, procedures and handle unexpected events. The
personnel during emergencies or accidents. T/LA T/LA approach defines organised
procedures produce objective, graphic time/loss curves data needed to assess the
showing expected versus actual loss growth during objectives, progress, and outcome
emergencies or mishaps. The expected versus actual of an emergency response; to
loss data is used to describe the change in the outcome identify response problems; to
produced by intervention actions at successive states find and assess options to
of the emergency response. Although it is a system eliminate or reduce response
level analysis, due to lack of design definition and problems and risks; to monitor
maturity, it is not usually initiated until after the SSHA future performance; and to
has begun and uses the SSHA data before it is investigate accidents.
integrated into the SHA.
669. Table-top analysis G 1974 A group of experts who have an understanding of a See also Brainstorm. 2 3 many x x • [KirwanAinsworth92]
specific aspect of a system, meet together as a • [FAA HFW]
discussion group to define or assess particular aspects
of a task. The discussions must be directed around
some basic framework.
670. TAFEI T H 1991 Task analysis method based on State Space Diagrams, Related to State Space Diagrams. 2 3 x x • [Kirwan98-1]
(Task Analysis For describing user interactions with equipment in terms
Error Identification) of transition (input-output) boxes (non-Markovian:
qualitative in nature). For a particular task the network
of transition boxes is developed, and then examined to
determine what illegal transitions could take place,
such as skipping over task elements, sequence errors,
etc., though in theory EOCs (errors of commission)
could be developed from such networks.
671. TALENT I H 1988 An assessment framework which also contains a TALENT was applied for an 3 4 5 nuclear x • [Kirwan98-1]
(Task Analysis-Linked strong task analysis bias, utilising Task Analysis or evaluation of the US Peach
EvaluatioN Technique) Sequential Task Analysis, Timeline Analysis, and Bottom nuclear power plant. It
Link Analysis for each task sequence. Then, tasks are has not been used substantially
identified for inclusion in the fault and event trees, recently.
through a collaborative effort between the behavioural
scientists and the safety assessors. PSF (Performance
Shaping Factor) are then identified for each task, and
then the tasks are quantified using either THERP or
SLIM.
672. Talk-Through Task T M 1986 Similar to Walk-Through, but is undertaken more 2 3 many x x • [KirwanAinsworth92]
Analysis remotely from the normal task location, so that the
tasks are verbalised rather than demonstrated.
673. TapRooT T R 1990 The TapRooT is a suite of tools for accident/incident Developed at System 8 aviation x x x • [FAA HFW]
investigation. It systematically leads an investigator Improvements Inc. • [GAIN ATM, 2003]
through the techniques/steps used to perform an in- Although it was not specifically • [GAIN AFSA, 2003]
depth accident investigation or incident analysis. designed for aviation, TapRooT • [Hutchins95]
TapRooT focuses on uncovering the root causes of has been applied to airline safety.
accident/incident and helps in proactively improving
performance.
Task Allocation Charts See OSD (Operational Sequence
Diagram)
Task Analysis See AET, CAMEO/TAT, Critical
Path Method, Critical Task
Analysis, CTA, Decision Tables,
FPC, GDTA, HECA, HTA, OSD,
Operator Task Analysis, PERT,
TAFEI, TALENT, Talk-Through
Task Analysis, Team CTA, TTA,
TTM, Walk-Through Task
Analysis
674. Task Decomposition G 1953 Task decomposition is a structured way of expanding 2 many x • [KirwanAinsworth92]
the information from a task description into a series of • [FAA HFW]
more detailed statements about particular issues which
are of interest to the analyst.
675. Task Description T H 1986 Method supported by several different methods 2 defence x • [MIL-HDBK]
Analysis or designed to record and analyse how the human is
older involved in a system. It is a systematic process in
which tasks are described in terms of the perceptual,
cognitive, and manual behaviour required of an
operator, maintainer or support person.
Teachback See Interview
676. TEACHER/ SIERRA I R 1993 Alternative HRA framework more aimed at lower 2 3 5 6 chemical x • [Kirwan98-1]
(Technique for consequence accidents than PSA traditionally aims at.
Evaluating and It has a number of components. The first is SIERRA.
Assessing the This states that humans have basic error tendencies
Contribution of Human that are influenced by PIFs (Performance Influencing
Error to Risk [which Factors). TEACHER focuses on defining a task
uses the] Systems inventory, then determining the prioritisation of
Induced Error critical tasks according to their risk potential, leading
Approach) to a rating on a risk exposure index for each task.
Following the screening analysis a HTA and PHEA
analysis are carried out, following which, those errors
with significant consequence potential are analysed
with respect to a set of PIF audit questions, to develop
remedies for the error. Each PIF audit question allows
the analyst to rate the task according to, e.g., the extent
to which procedures are defined and developed by
using task analysis, on a seven-point semantic
differential, anchored at each end-point. Risk
reduction is then determined by the analyst.
677. Team CTA T T 2000 Team CTA considers the team as an intelligent entity Was developed on the notion that 2 8 x x • [Klein, 2000]
(Team Cognitive Task that can be studied to aid team task design, team current methods of task analysis • [Klinger, 2003]
Analysis) composition, team training. The model emphasizes the fail to capture team • [Salmon et al, 2004]
importance of communication, and shared situational characteristics such as • [FAA HFW]
awareness and focuses on “action teams”. It can be interdependence and co-
used to diagnose and offer suggestions for treatment of operation. Applying a method of
existing problems in teamwork as well as to help analysis designed for individuals
design training materials for new team members by to teams is not sufficient for true
outlining the knowledge and skills required for team understanding of how a team
membership. works.
678. Telelogic Tau I D 2001 Telelogic Tau provides specialised tool sets for every Software tools that cover all 3 6 computer x • [Telelogic Tau]
or phase of a project: 1) Telelogic Tau UML Suite for phases of the development • Wikipedia
older requirement capture and analysis; 2) Telelogic Tau process: analysis, design,
SDL Suite for design and implementation, and 3) implementation and testing.
Telelogic Tau TTCN Suite for comprehensive testing.
In addition, a) SCADE Suite (sold to Esterel)
facilitates the capture of unambiguous software
specifications. It allows detecting corner bugs in the
early stages of the development and reduces the
coding and testing efforts. b) Telelogic Tau Logiscope
Detects Coding Errors in C, C++, Ada and Java,
Identifies and Locates Error-Prone Modules and
Provides Code Coverage Analysis.
679. Temporal Logic T Dh 1986 Direct expression of safety and operational Useful as descriptive and 2 7 computer x x • [Bishop90]
Ds or requirements and formal demonstration that these demonstrative technique for small • [EN 50128]
older properties are preserved in the subsequent systems or small parts of large • [Rakowsky]
development steps. Formal Method. It extends First systems. Computer based tools • Wikipedia
Order Logic (which contains no concept of time) by are necessary for large systems.
adding model operators. These operators can be used Related methods are Petri nets,
to qualify assertions about the system. Temporal finite state machines. Software
formulas are interpreted on sequences of states requirements specification phase
(behaviours). Quantified time intervals and constraints and design & development phase.
are not handled explicitly in temporal logic. Absolute
timing has to be handled by creating additional time
states as part of the state definition.
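The interpretation of temporal formulas on sequences of states can be sketched with a minimal evaluator for the operators G (always) and F (eventually). Evaluating over a finite trace is an assumption for illustration; temporal logic is normally interpreted over infinite behaviours.

```python
def always(pred, trace):
    """G pred: pred holds in every state of the (finite) trace."""
    return all(pred(s) for s in trace)

def eventually(pred, trace):
    """F pred: pred holds in some state of the trace."""
    return any(pred(s) for s in trace)

def responds(p, q, trace):
    """G (p -> F q): every p-state is followed (weakly) by a q-state."""
    return all(
        eventually(q, trace[i:]) for i, s in enumerate(trace) if p(s)
    )

# States as dicts; the property: a request must eventually be granted.
trace = [{"req": True,  "grant": False},
         {"req": False, "grant": False},
         {"req": False, "grant": True}]
print(responds(lambda s: s["req"], lambda s: s["grant"], trace))  # True
print(always(lambda s: not s["req"] or s["grant"], trace))        # False
```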
680. TESEO T H 1980 Assesses probability of operator failure. Used more as Not considered very accurate. 5 chemical x • [Humphreys88]
(Tecnica Empirica a tool of comparison between different designs of the nuclear • [MUFTIS3.2-I]
Stima Errori Operatori man-machine system than for obtaining absolute • [GAIN ATM, 2003]
(Empirical technique to probabilities. Human Error Probability (HEP) is the • Wikipedia
estimate operator product of five values: (1) complexity of action,
errors)) requiring close attention or not. (2) time available to
carry out the activity. (3) experience and training of
the operator. (4) operator's emotional state, according
to the gravity of the situation. (5) man-machine and
environment interface.
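The TESEO computation itself is a product of the five factors. A minimal sketch, with illustrative factor values that are NOT taken from the official TESEO tables:

```python
def teseo_hep(k1, k2, k3, k4, k5):
    """TESEO Human Error Probability: the product of five factors
    (activity complexity, available time, operator experience and
    training, emotional state, man-machine/environment interface).
    Capped at 1.0, since a probability cannot exceed one."""
    return min(1.0, k1 * k2 * k3 * k4 * k5)

# Illustrative (non-official) factor values:
hep = teseo_hep(
    k1=0.01,  # simple, routine activity
    k2=0.5,   # ample time available
    k3=1.0,   # average experience and training
    k4=2.0,   # moderately stressful situation
    k5=1.0,   # adequate interface
)
print(hep)   # 0.01
```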
681. Test Adequacy T Ds 1972 Aim is to determine the level of testing applied using See also Software Testing. 7 computer x • [Bishop90]
Measures or quantifiable measures.
older
Id  Method name  Format  Purpose  Year  Aim/Description  Remarks  Safety assessment stage (1-8)  Domains  Application (Hw Sw Hu Pr Or)  References
682. Test Coverage G Ds For small pieces of code it is sometimes possible to See also Code Coverage. 5 avionics x • [DO178B]
achieve 100% test coverage. However due to the • [FAA00]
enormous number of permutations of states in a • Wikipedia
computer program execution, it is often not possible to
achieve 100% test coverage, given the time it would
take to exercise all possible states. Several techniques
exist to reach optimum test coverage. There is a body
of theory that attempts to calculate the probability that
a system with a certain failure probability will pass a
given number of tests. Monte Carlo simulation may
also be useful.
683. Tests based on Random T Ds 1984 Software Testing technique. Aim is to cover test cases Useful if there is some automated 7 computer x • [Bishop90]
Data or not covered by systematic methods. To minimise the means of detecting anomalous or
older effort of test data generation. incorrect behaviour.
See also Software Testing.
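A minimal sketch of the idea, assuming an automated check that detects anomalous behaviour (here a simple range property); the unit under test is a hypothetical example.

```python
import random

# Sketch of testing with random data: generate many random inputs and
# check each output against an automated property, as the remark
# suggests ("automated means of detecting anomalous behaviour").

def clamp(x, lo, hi):                 # hypothetical unit under test
    return max(lo, min(hi, x))

def run_random_tests(n=1000, seed=42):
    rng = random.Random(seed)
    for _ in range(n):
        lo = rng.uniform(-100, 100)
        hi = lo + rng.uniform(0, 100)
        x = rng.uniform(-200, 200)
        y = clamp(x, lo, hi)
        assert lo <= y <= hi          # output must land in range
        if lo <= x <= hi:
            assert y == x             # in-range values pass unchanged
    return n

print(run_random_tests())   # 1000
```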
684. Tests based on Realistic T Ds 1976 Software Testing technique. Aim is to detect faults Not particularly effective or 7 computer x • [Bishop90]
data or likely to occur under realistic operating conditions. appropriate at the early stages of
older software development. Useful for
system testing and acceptance
testing.
See also Software Testing.
685. Tests based on Software T Ds 1976 Software Testing technique. Aim is to apply tests that Essential part of an overall test 7 computer x • [Bishop90]
structure or exercise certain subsets of the program structure. strategy for critical systems.
older Tools available.
See also Software Testing.
686. Tests based on the T Ds Software Testing technique. Aim is to check whether Essential part of an overall test 7 computer x • [Bishop90]
Specification there are any faults in the program that cause strategy.
deviations from the specified behaviour of the See also Software Testing.
software.
687. THA T R 1997 A THA lays out all possible threat environments that a Weapons systems. Mandatory 3 5 defence x • [ΣΣ93, ΣΣ97]
(Threat Hazard or weapon could possibly be exposed to during its requirement of MIL STD 2105B. • [AQ, 2003]
Analysis) older lifecycle and is the baseline for establishing the
parameters for the safety and environmental test
program. These tests and analyses are performed
to verify the ruggedness and soundness of the design
to withstand or protect the weapon against these
environments.
688. THEA T H 1997 THEA is a technique designed for use by interactive THEA aims to inform human- 2 3 4 5 6 aviation x • [Fields, 1997]
(Technique for Human system designers and engineers to help anticipate computer interface design at an • [Pocock, 2001]
Error Analysis) interaction failures. These may become problematic early stage of development.
once designs become operational. The technique
employs a cognitive error analysis based on an
underlying model of human information processing. It
is a highly structured approach, intended for use early
in the development lifecycle as design concepts and
requirements concerned with safety and usability – as
well as functionality – are emerging. THEA employs a
systematic method of asking questions and exploring
interactive system designs based on how a device
functions in a scenario. Steps are: 1. Detailed System
Description; 2. Usage Scenarios; 3. Structure the
scenarios (e.g. HTA); 4. Error Identification and Error
Consequence; 5. Underlying model of “human error”;
6. Suggestions for new requirements & Implications
for design.
689. THERP T H 1981 Aim is to predict human error probabilities and Developed by Swain & Guttman. 5 nuclear x • [FAA00]
(Technique for Human evaluate degradation of a man-machine system likely Longest surviving HRA (Human defence • [Humphreys88]
Error Rate Prediction ) to be caused by human error, equipment functioning, Reliability Analysis) technique. offshore • [Kirwan94]
operational procedures and practices, etc. This Developed in 1960-1970; • [Kirwan98-1]
technique provides a quantitative measure of human released in 1981. This technique • [MUFTIS3.2-I]
operator error in a process. is the standard method for the • [ΣΣ93, ΣΣ97]
quantifying of human error in
• [FAA HFW]
industry.
• [Swain83]
• [GAIN ATM, 2003]
• Wikipedia
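The quantitative core of THERP-style analysis, combining per-step Human Error Probabilities along an event tree, can be sketched as follows. This simplified version ignores recovery paths and dependence between steps; the step names and HEP values are illustrative assumptions, not Swain & Guttman's tables.

```python
# Sketch: a task decomposed into sequential steps, each assigned a
# Human Error Probability, combined assuming independence.

def task_failure_prob(step_heps):
    """P(at least one unrecovered error) over independent sequential steps."""
    p_success = 1.0
    for hep in step_heps:
        p_success *= (1.0 - hep)
    return 1.0 - p_success

steps = {"read procedure": 0.003,
         "select control": 0.001,
         "operate control": 0.003}
print(round(task_failure_prob(steps.values()), 5))   # 0.00699
```

Real THERP quantification additionally applies performance shaping factors and dependence models, which this sketch omits.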
690. Think-Aloud Protocol T R 1984 Think aloud protocol is a technique applied in user The theoretical framework for 2 x • [FAA HFW]
testing where users are asked to vocalise their think-aloud protocol experiments • [Nielsen97]
thoughts, feelings and opinions whilst interacting with is provided mainly by the work of • [Thinkaloud]
a site as they perform a task. While the focus in user Ericsson and Simon (1984, 1993). • [Bernardini]
testing is primarily on how effectively a user performs Two variations are Co-discovery, • More refs: see
the required tasks (and not on how users believe they in which two participants jointly http://www.arches.uga.edu/~scwong/edit8350/task1/task1.htm
are performing), verbalisations are useful in attempt to perform tasks together
understanding mistakes that are made and getting while being observed in a
ideas for what the causes might be and how the realistic work environment, and • Wikipedia
interface could be improved to avoid those problems. Cooperative Evaluation.
Threshold Analysis See Trend Analysis
Thurstone Scale See Rating Scales
691. Timeline Analysis T H 1959 Analytical technique for the derivation of human Timeline Analysis has been used 2 4 5 nuclear x x • [FAS_TAS]
performance requirements which attends to both the for years by the defence and offshore • [HEAT overview]
functional and temporal loading for any given intelligence communities, defence • [KirwanAinsworth92]
combination of tasks. Timeline Analysis examines the primarily for predicting foreign • [Kirwan94]
precise sequence of events in a scenario. Visualises government actions and • [MIL-HDBK]
events in time and geographically. responses to world events. Tools • [Mucks&Lesse01]
available. See also HTLA and
• [FAA HFW]
VTLA.
• [Luczak97]
• [Wickens99]
• [Parks89]
692. Timing, Throughput and T Ds 1996 Timing and sizing analysis for safety critical functions 7 avionics x • [NASA-GB-1740.13-
Sizing Analysis or evaluates software requirements that relate to 96]
older execution time and memory allocation. It focuses on
program constraints. Typical constraint requirements
are maximum execution time and maximum memory
usage.
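A toy sketch of checking the two constraint requirements named above against a hypothetical function; Python's `time` and `tracemalloc` modules stand in for the worst-case analysis the method would actually require, and the limits are assumed values.

```python
import time
import tracemalloc

MAX_SECONDS = 0.1        # assumed maximum execution time constraint
MAX_BYTES = 1_000_000    # assumed maximum memory usage constraint

def function_under_test():          # hypothetical safety-critical function
    return sum(i * i for i in range(10_000))

tracemalloc.start()
t0 = time.perf_counter()
function_under_test()
elapsed = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"time ok: {elapsed <= MAX_SECONDS}, memory ok: {peak <= MAX_BYTES}")
```

Note that a single measured run is optimistic compared with true worst-case execution-time and sizing analysis, which must bound all paths.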
693. TKS T H 1991 There are two different parts of a TKS, a goal TKS was developed as a 2 4 x x • [Johnson&Johnson,
(Task-Knowledge structure and a taxonomic structure. The goal structure theoretical approach to analyzing 1991]
Structures) represents the sequencing of task activities, and the and modelling tasks with the • [FAA HFW]
taxonomic structure models extensive knowledge purpose of design generation.
about objects, their relationships and their behaviors.
There is an explicit assumption that knowledge
modeled within a TKS is not of equal status, some is
more important to successful task performance than
others. The status of individual knowledge
components must be modeled in order that systematic
assumptions about usability can be made.
TLX See NASA TLX (NASA Task
(Task Load Index) Load Index)
See Rating Scales
694. TOKAI D 2000 TOKAI is a database management system containing TOKAI was designed to support 8 ATM x x x • [GAIN ATM, 2003]
(Tool Kit for embedded tools that permit: 1) Air traffic management the Eurocontrol member states in • [TOKAI web]
Occurrence Reporting staff to report occurrences, 2) local investigators to implementing a reporting system
and Analysis) investigate, analyse and assess occurrences, to develop compliant with Eurocontrol
safety recommendations, 3) safety departments to Safety Regulatory Requirements
investigate, exchange data and develop statistics on (ESARR 2). It assures that reports
groups of occurrences, 4) regulators to develop submitted by the various
remedial policies. TOKAI’s occurrence notification providers are of uniform quality
form is the ATS Occurrence Reporting Form and format to allow aggregated
developed by Eurocontrol. The data gathered is based data to remain meaningful.
on the Eurocontrol ATM occurrence data taxonomy called
HEIDI.
695. TOPAZ I R 1993 Scenario and Monte Carlo simulation-based accident Developed by NLR from 1993 1 2 3 4 5 6 8 ATM x x x x x • [Blom&Bakker93]
(Traffic Organisation – risk assessment of an ATM operation, which addresses onwards. Several updates have • [Blom&al98,01]
and Perturbation 2004 all types of safety issues, including organisational, been developed. • [Blom&Daams&Nijh
AnalyZer) (updates) environmental, human-related and other hazards, and The methodology combines many uis00]
any of their combinations. In step 0 the objective of individual techniques, some of ms&Nijhuis01]
the study is determined, as well as the safety context, which do not have specific ms&Nijhuis01]
the scope and the level of detail of the assessment. The names. The methodology is • [Blom&Stroeve&Ever
actual safety assessment starts by determining the supported by a tool set and a dij&Park02]
operation that is assessed (step 1). Next, hazards database with hazards from • [Daams&Blom&Nijh
associated with the operation are identified (step 2), previous studies, previous uis00]
and clustered into conflict scenarios (step 3). Using submodels, simulation • [DeJong&al01]
severity and frequency assessments (steps 4 and 5), environments, etc.
• [DeJong04]
the risk associated with each conflict scenario is
• [Everdij&Blom02]
classified (step 6). For each conflict scenario with a
(possibly) unacceptable risk, safety bottlenecks are • [Kos&al00]
identified (step 7), which can help operational concept • [MUFTIS3.2-II]
developers to find improvements for the operation. • [GAIN ATM, 2003]
Should such an improvement be made, a new cycle of • [FAA HFW]
the safety assessment should be performed to
investigate whether all risks have decreased to a
negligible or tolerable level. A complementary step is
to use monitoring data to verify the assumptions
adopted.
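The Monte Carlo element of such an assessment can be sketched in miniature. The toy two-barrier scenario model below is our illustration only and is vastly simpler than the stochastic models the actual TOPAZ tool set uses.

```python
import random

# Sketch: simulate a conflict scenario many times, count runs ending
# in the severe outcome, and estimate its probability with a standard
# error, as in the severity/frequency assessment steps above.

def simulate_once(rng):
    """Toy scenario: severe outcome iff two independent barriers both fail."""
    barrier1_fails = rng.random() < 0.01
    barrier2_fails = rng.random() < 0.02
    return barrier1_fails and barrier2_fails

def estimate_risk(n_runs=200_000, seed=1):
    rng = random.Random(seed)
    hits = sum(simulate_once(rng) for _ in range(n_runs))
    p = hits / n_runs
    stderr = (p * (1 - p) / n_runs) ** 0.5
    return p, stderr

p, se = estimate_risk()
print(f"estimated probability {p:.2e} +/- {se:.1e}")   # around 2e-4
```

For the very rare events ATM risk assessment targets, plain sampling like this is too slow, which is one reason dedicated rare-event simulation techniques are combined in the methodology.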
696. TOPAZ hazard database D 1999 Database of hazards gathered using dedicated Technique used for TOPAZ- 3 ATM x x x x x • [TOPAZ hazard
TOPAZ-based hazard brainstorms for various ATM based hazard brainstorms is Pure database]
operations. hazard brainstorming, or
Scenario-based hazard
brainstorm.
TOPAZ-based hazard See Pure Hazard Brainstorming
brainstorming
697. TOPPE T O 1991 A procedure validation and team performance 3 7 nuclear x • [Kirwan95]
(Team Operations evaluation technique. It uses judges to evaluate team • [Kirwan98-1]
Performance and performance when carrying out emergency
Procedure Evaluation) procedures. It is therefore not designed as a Human
Error Identification tool. However, it can identify
procedural errors (omissions, wrong procedural
transitions etc.), and team leadership or co-ordination
problems. As such, an approach could be developed to
determine credible procedural and co-ordination errors
of these types, based on observation of emergency
exercises which all nuclear power plant utilities are
required to carry out.
698. TOR T M 1973 The focus of TOR analysis is on system failures, TOR analysis was initially 3 6 insurance x x x • [FAA HFW]
(Technique Of seeking to identify management failures rather than developed by Weaver (1973) as a
Operations Review) ‘blaming’ employees involved. TOR analysis is training tool to assist with the
presented in a work sheet format. It is a group prevention of incidents. It has
technique requiring participants to progress through subsequently found application as
the work sheet answering yes or no to a series of an investigatory technique for the
questions. A condition of TOR analysis is that the identification of root causes
group reaches a consensus on the answers to the associated with incidents and
questions. There are four basic steps in the TOR accidents.
analysis process: 1) Establish the facts. 2) Trace the
root causes. 3) Eliminate insignificant causes. 4)
Identify realistic actions.
699. TRACEr I M 1999 Aim is to predict human errors that can occur in ATM Human factors in ATM; Reduced 3 6 8 ATM x • [HIFA Data]
(Technique for the systems, and to derive error reduction measures for scope version of TRACEr is • [Shorrock01]
Retrospective Analysis ATM. The design process is aided by predicting what named TRACEr lite (2001). • [Shorrock&Kirwan98]
of Cognitive Errors in errors could occur, thus helping to focus design effort. HERA is TRACEr for European • [Shorrock&Kirwan99]
Air Traffic It is designed to be used by ATM system designers use. • [TRACEr lite_xls]
Management) and other operational personnel. The tool helps to • [Scaife01]
identify and classify the ‘mental’ aspects of the error,
• [GAIN ATM, 2003]
the recovery opportunities, and the general context of
the error, including those factors that aggravated the
situation, or made the situation more prone to error.
Training Systems See Front-End Analysis
Requirements Analysis
700. Translator Proven in G D A translator is used whose correct performance has Software design & development 6 computer x • [EN 50128]
Use been demonstrated in many projects already. phase. • [Rakowsky]
Translators without operating experience or with any
serious known errors are prohibited. If the translator
has shown small deficiencies, the related language
constructs are noted down and carefully avoided
during a safety related project.
701. Trend Analysis G This method can assist in identifying trends, outliers, Can be used for safety, 7 aviation x x • [GAIN AFSA, 2003]
and signal changes in performance. The method is maintenance, and manufacturing nuclear • Wikipedia
widely used particularly for analysis of events, human production applications.
performance, equipment failure/ reliability/
maintainability, process systems performance, etc. The
method is used to first characterise data, trend it over
time to establish a baseline, and then by expert
judgement or statistical inference establish thresholds
or control points that when exceeded indicate a
significant change in the performance of what is being
monitored.
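The baseline-and-threshold step described above can be sketched with conventional 3-sigma control limits; the 3-sigma convention and the data are assumptions for illustration.

```python
# Sketch: characterise baseline data, derive control limits, then flag
# new points whose exceedance signals a change in performance.

def control_limits(baseline):
    n = len(baseline)
    mean = sum(baseline) / n
    var = sum((x - mean) ** 2 for x in baseline) / (n - 1)
    sd = var ** 0.5
    return mean - 3 * sd, mean + 3 * sd

def flag_exceedances(baseline, new_points):
    lo, hi = control_limits(baseline)
    return [x for x in new_points if not (lo <= x <= hi)]

baseline = [12, 11, 13, 12, 12, 14, 11, 12, 13, 12]   # e.g. monthly event counts
print(flag_exceedances(baseline, [13, 12, 19]))        # [19]
```

In practice the thresholds may also be set by expert judgement, as the entry notes, rather than purely statistically.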
702. TRIPAC M R 1992 Typical third party risk assessment method. It consists Developed by National 4 5 airport x • [Pikaar00]
(Third party Risk and of three submodels: an accident probability model, an Aerospace Laboratory NLR. • [OPAL2003]
analysis Package for 1999 accident location probability model and an accident The method is applied in many
aircraft ACcidents consequence model. The results of these submodels airports risk studies. With the
around airports) are combined to calculate individual risk levels (local experience gained over the years
probability of death), which are usually presented as in application of the method, and
risk contours on a topographical map, and societal risk due to the availability of
(probability of death of a group of N or more people). improved historical data, the risk
models were updated in 1999,
consisting of adaption of model
parameters and conceptual
changes of the external risk
models.
703. TRIPOD I M 1994 Tripod is an incident investigation and analysis Two tools developed by the 3 6 7 8 offshore x x • [Aberdeen, 2003]
method that identifies latent failures in the Royal Dutch/Shell Group to • [Groeneweg]
organisation of safety critical operations and comes up measure safety and investigate • [Kennedy&Kirwan98]
with recommendations for improvement. accidents based on the Tripod • [PROMAI5]
It aims to identify and correct the latent failures that theory are Tripod-BETA and • [Tripod Beta]
are behind operational disturbances, like production Tripod-DELTA. Tripod-BETA is • [Tripod Solutions]
incidents, environmental incidents, IT incidents, a methodology for conducting an
financial incidents, by focusing on the organisation accident analysis in parallel with
rather than the individual. (as opposed to at the end of) the
investigation; highlighting
avenues of investigation leading
to latent failures and assigning
general failure type categories to
latent failures. Tripod-DELTA is
a methodology for identifying
weaknesses in the Safety
Management System; providing a
pro-active tool for planning
Safety management actions;
getting workforce involvement in
the identification of weaknesses
and planning of corrective
actions; and development of root
cause thinking to promote a
learning organisation.
TRM See CRM (Crew Resource
(Team Resource Management)
Management)
704. TSA T M 1979 Test Safety Analysis is used to ensure a safe A lessons-learned approach for 6 space x • [FAA AC431]
(Test Safety Analysis) or environment during the conduct of systems and any new systems ‘or potentially • [FAA00]
older prototype testing. It also provides safety lessons to be hazardous subsystems’ is • [ΣΣ93, ΣΣ97]
incorporated into the design, as applicable. Each test is provided. This approach is
evaluated to identify hazardous materials or especially applicable to the
operations. Each proposed test needs to be analyzed development of new systems, and
by safety personnel to identify hazards inherent in the particularly in the engineering/
test and to ensure that hazard control measures are development phase.
incorporated into test procedures. It is during the
process of test safety analysis that safety personnel
have an opportunity to identify other data that may be
useful to safety and can be produced by the test with
little or no additional cost or schedule impact.
705. TTA T M 1989 Aim is to specify the context in which important task Is useful for dynamic situations 2 nuclear? x • [Kirwan94]
(Tabular Task Analysis) or steps take place and to identify aspects that may be which involve a considerable • [Vinnem00]
older improved. The TTA usually follows on from a amount of decision-making.
Hierarchical Task Analysis (HTA) and is columnar in
format. It takes each particular task-step or operation
and considers specific aspects, such as Who is doing
the operation, What displays are being used.
TTM See Decision Tables
(Truth Table Method)
706. UML (Unified T D 1997 UML is the industry-standard language for specifying, 2 computer x • [UML]
Modelling Language) visualising, constructing, and documenting the • Wikipedia
artefacts of software systems. It simplifies the
complex process of software design, making a
“blueprint” for construction.
707. Uncertainty Analysis G Uncertainty Analysis addresses, quantitatively and All analyses should address 5 many x x x x • [FAA00]
qualitatively, those factors that cause the results of an uncertainty explicitly. See also • [ΣΣ93, ΣΣ97]
assessment to be uncertain. Bias and Uncertainty Assessment. • Wikipedia
See also Sensitivity Analysis.
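The quantitative side of uncertainty analysis is often done by Monte Carlo propagation; below is a sketch with a hypothetical model and assumed parameter distributions.

```python
import random

# Sketch: sample the uncertain inputs of an assessment model many
# times and report an uncertainty interval on the result.

def model(failure_rate, exposure_hours):
    return failure_rate * exposure_hours             # expected events

def uncertainty_interval(n=50_000, seed=7):
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        rate = rng.lognormvariate(-11.0, 0.7)        # uncertain rate per hour
        hours = rng.uniform(900, 1100)               # uncertain exposure
        results.append(model(rate, hours))
    results.sort()
    return results[int(0.05 * n)], results[int(0.50 * n)], results[int(0.95 * n)]

p5, p50, p95 = uncertainty_interval()
print(f"5th/50th/95th percentiles: {p5:.2e} / {p50:.2e} / {p95:.2e}")
```

Reporting percentiles rather than a single point estimate is one way of addressing uncertainty explicitly, as the remark recommends.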
708. Unused Code Analysis T Ds 1996 A common real world coding error is generation of See also Code Coverage. 3 avionics x • [NASA-GB-1740.13-
or code that is logically excluded from execution; i.e., 96]
older preconditions for the execution of this code will never • [Rakowsky]
be satisfied. There is no particular technique for
identifying unused code; however, unused code is
often identified during the course of performing other
types of code analysis. It can be found during unit
testing with COTS coverage analyser tools.
Usability Heuristic See Heuristic Evaluation
Evaluation
709. User Analysis G Aims to describe the user population in order to Some user factors to consider 2 x • [FAA HFW]
identify user specific factors impacting the task(s). include knowledge, skills, • Wikipedia
limitations, experience, age,
height, size, weight, strength,
maturity, and many other
considerations
710. VDM T D 1972 Systematic specification and implementation of The origins of VDM specification 2 6 computer x • [Bishop90]
(Vienna Development sequential programs. Formal Method. Mathematically language lie in the IBM • [EN 50128]
Method) based specification technique and a technique for Laboratory in Vienna where the • Wikipedia
refining implementations in a way that allows proof of first version of the language was
their correctness with respect to the specification. The called the Vienna Definition
specification language is model-based in that the Language (VDL).
system state is modelled in terms of set-theoretic Recommended especially for the
structures, on which defined invariants (predicates) specification of sequential
and operations on that state are modelled by programs. Established technique,
specifying their pre- and post-conditions in terms of the
system state. Operations can be proved to preserve the Closely related to Z. Tools
system invariants. available. Software requirements
specification phase and design &
development phase.
711. Verbal Protocols T M 1972 Verbal protocols are verbalisations made by a person 2 nuclear x x • [KirwanAinsworth92]
or while they are carrying out a task, in the form of a chemical • Wikipedia
older commentary about their actions and their immediate
perceptions of the reasons behind them.
712. Verification and G 1982 Verification: to build the product right (which refers to Essential for safety-related 7 8 many x x • [Bishop90]
Validation or product specifications); systems. Tools available. Several • Wikipedia
older Validation: to build the right product (which refers to frameworks for validation and/or
user’s needs). verification exist, e.g. E-OCVM
or SAFMAC, but numerous
safety methods in this database
are applicable to V&V activities.
Video Prototyping See Prototyping
713. Vital Coded Processor T Ds 1989 Aim is to be fail-safe against computer processing Overcomes most of the 6 computer x x • [Bishop90]
faults in the software development environment and insecurities associated with
the computer hardware. In this technique, three types microprocessor-based
of errors – operation, operator and operand errors – technology. Useful on relatively
can be detected by redundant code with static simple applications that have a
signatures. safe state.
See also Fail Safety. See also
Memorizing Executed Cases.
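One ingredient of such schemes, detecting operation and operand errors through redundant coding, can be sketched with a simple AN code (an illustrative stand-in of our choosing, not the exact vital coded processor scheme).

```python
# Sketch: every value x is stored as A*x; correct arithmetic on coded
# values preserves divisibility by A, so any result that is not a
# multiple of A reveals a processing fault and triggers the safe state.

A = 58659          # code constant (value chosen arbitrarily here)

def encode(x):
    return A * x

def checked_add(cx, cy):
    result = cx + cy
    if result % A != 0:            # signature check detects corruption
        raise RuntimeError("processing fault detected: go to safe state")
    return result

a, b = encode(20), encode(22)
print(checked_add(a, b) // A)      # 42

corrupted = encode(20) + 1         # e.g. a bit flip in memory
try:
    checked_add(corrupted, b)
except RuntimeError as err:
    print(err)
```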
714. VTLA T H 1987 Investigates workload and crew co-ordination, focuses See also HTLA. See also 2 4 5 nuclear x x • [Kirwan&Kennedy&
(Vertical Timeline or on crew activities and personnel. A series of columns Timeline Analysis. offshore Hamblen]
Analysis) older are used: task; sub-task (action) description; time the • [Kirwan94]
sub-task begins; time the sub-task ends; and a column • [Task Time]
each for the operators involved in the whole
task/scenario, indicating in each row which operators
are involved in the sub-task. If an operator moves
from their usual location this is noted under the
column for that operator at the time it happens. The
VTLA helps to identify where team co-ordination will
be particularly required, and also where workload may
be unevenly spread, and where human resources may
be insufficient. The VTLA can also discriminate
between actions and monitoring, and can show
potential actions given other plant failures or system
recoveries. Lastly, key system/transient events can be
indicated on the x-axis.
715. Walk-Through Task T M 1986 This technique is a systematic analysis that can be This technique is applicable to 3 7 x x • [FAA00]
Analysis used to determine and correct root causes of maintenance. See also Inspections • [EN 50128]
unplanned occurrences related to maintenance. and Walkthroughs. • [KirwanAinsworth92]
• [Kirwan94]
• [ΣΣ93, ΣΣ97]
Walkthroughs See Inspections and
Walkthroughs
716. Watchdog timers T D 1977 Watchdog timers are hardware devices with the Useful on all safety critical and 6 computer x x • [Bishop90]
or capability to reset (reboot) the system should the real-time control systems. Related • Internet
older watchdog not be periodically reset by software. The to software time-out checks. • Wikipedia
computer has to “say hello” from time to time to the
watchdog hardware to let it know that it is still alive. If
it fails to do that then it will get a hardware reset. Aim
is to provide a non-software related reliable hardware
checking method of the software operation.
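The behaviour described above can be sketched in software; in a real system the timer and the reset action are dedicated hardware, and the class and callback names here are our own.

```python
import threading
import time

# Sketch of a watchdog: if the application fails to "kick" the
# watchdog within the timeout, the reset action fires.

class Watchdog:
    def __init__(self, timeout, on_expire):
        self.timeout, self.on_expire = timeout, on_expire
        self._timer = None
        self.kick()                               # arm on creation

    def kick(self):
        """The periodic 'hello' from the software: restart the countdown."""
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_expire)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        self._timer.cancel()

expired = []
dog = Watchdog(0.5, on_expire=lambda: expired.append("reset!"))
for _ in range(3):            # healthy program keeps kicking in time
    time.sleep(0.1)
    dog.kick()
time.sleep(1.0)               # program "hangs" and stops kicking
print(expired)                # ['reset!']
```

A hardware watchdog remains effective even when the software is wedged, which a purely software timer cannot guarantee; that is the point of the hardware requirement above.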
717. WBA M R 1998 Why-Because Analysis (WBA) is a rigorous technique WBA's primary application is in 4 aviation x x • [WBA Homepage]
(Why-Because for causally analysing the behaviour of complex the analysis of accidents, mainly rail • Wikipedia
Analysis) technical and socio-technical systems. WBA is based to transportation systems (air, rail marine
on a rigorous notion of causal factor. Whether one and sea). It is also used in the computer
event or state is a necessary causal factor in the Ontological Analysis method for
occurrence of another is determined by applying the safety requirements analysis
Counterfactual Test. During analysis, a Why-Because during system development.
Graph (WB-Graph or WBG) is built showing the
(necessary) causal connections between all events and
states of the behaviour being analysed. The completed
WBG is the main output of WBA. It is a directed
acyclic graph where the nodes of the graph are factors.
Directed edges denote cause-effect relations between
the factors.
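A WB-Graph is a directed acyclic graph running from causal factors to effects; below is a sketch with a hypothetical runway-overrun example, including a check that the graph is indeed acyclic.

```python
from collections import deque

# Sketch of a Why-Because Graph: nodes are factors, directed edges
# denote (necessary) cause-effect relations. The accident factors here
# are a made-up example, not drawn from any real investigation.

wb_graph = {                          # factor -> effects it contributed to
    "unstable approach": ["late touchdown"],
    "late touchdown": ["overrun"],
    "wet runway": ["overrun"],
    "overrun": [],
}

def is_acyclic(graph):
    """Kahn's algorithm: the graph is a DAG iff every node can be popped."""
    indeg = {n: 0 for n in graph}
    for effects in graph.values():
        for e in effects:
            indeg[e] += 1
    queue = deque(n for n, d in indeg.items() if d == 0)
    seen = 0
    while queue:
        n = queue.popleft()
        seen += 1
        for e in graph[n]:
            indeg[e] -= 1
            if indeg[e] == 0:
                queue.append(e)
    return seen == len(graph)

print(is_acyclic(wb_graph))           # True
```

Whether each edge belongs in the graph is decided by the Counterfactual Test mentioned above, which is an analyst's judgement rather than something the data structure can check.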
718. What-If Analysis T R 1992 What-If Analysis methodology identifies hazards, The technique is universally 3 many x x x • [FAA00]
or hazardous situations, or specific accident events that appropriate. See also Sensitivity • [ΣΣ93, ΣΣ97]
older could produce an undesirable consequence. It is a Analysis. • [FAA HFW]
simple method of applying logic in a deterministic • Wikipedia
manner.
719. Why-Why Diagram T R 1994 A Why-Why Diagram is a Tree Diagram where each Similar in use to a Cause and 4 x • [Kjellen, 2000]
or child statement is determined simply by asking 'why' Effect Diagram, and techniques • [IE, Why-Why]
older the parent occurs. Four steps: 1) State the problem / may be borrowed from Cause • [Switalski, 2003]
situation on the left side of paper; 2) Create a decision And Effect Diagram usage.
tree of causes to the right side of the problem, by See also How-How diagram.
asking a) a succession of Why’s (why is this
happening; why is it a problem); b) a succession of
why’s for each of the possible causes; 3) Continue the
process until each strand is teased out as far as
possible; 4) Analyse the Why-Why diagram to
identify main issues and to restate the problem in
terms of its root cause.
WinBASIS See BASIS (British Airways
Safety Information System)
720. WinCrew T H 1998 WinCrew is used for constructing system performance 4 x • [Mitchell, 2000]
or models for existing or conceptual systems when a • [FAA HFW]
older central issue is whether the humans and machines will
be able to handle the workload. It also can be used to
predict operator workload for a crew given a design
concept. Additionally, WinCrew can simulate how
humans dynamically alter their behavior under high
workload conditions, including the dropping of tasks
based on task priority, task time, and accuracy
degradation.
721. Wind/ Tornado Analysis T R 1988 Analysis of hazards resulting from all types of winds. All structures and buildings. 3 5 nuclear x • [ΣΣ93, ΣΣ97]
or
older
Wizard of Oz See Prototyping
Technique
722. Workload Analysis T H 1986 Provides an appraisal of the extent of operator or crew 2 6 defence x x • [MIL-HDBK]
or task loading, based on the sequential accumulation of
older task times. Method permits an evaluation of the
capability of the operator or crew to perform all
assigned tasks in the time allotted by mission
constraints. As capability is confirmed, hardware
design requirements can be more precisely designated.
If limitations are exposed, alternate function
allocations and operator or crew task assignments are
considered and implemented.
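The sequential accumulation of task times against allotted time can be sketched as follows; the task names, durations, and the 80% loading threshold are illustrative assumptions.

```python
# Sketch of a workload appraisal: sum the task times in a mission
# phase and compare against the time allotted. A loading ratio above
# some threshold flags a likely overload and a case for re-allocation.

def loading_ratio(task_times, allotted):
    return sum(task_times.values()) / allotted

phase_tasks = {"monitor radar": 18.0,   # seconds, hypothetical values
               "radio call": 12.0,
               "update log": 15.0}
ratio = loading_ratio(phase_tasks, allotted=50.0)
print(f"loading = {ratio:.0%}")          # loading = 90%
if ratio > 0.8:
    print("consider re-allocating tasks among the crew")
```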
Worst Case See Maximum Credible
Accident/ Worst Case
723. WPAM I O 1994 Safety management assessment linked to PSA-type of WPAM may double-count the 2 3 4 5 nuclear x x • [Kennedy&Kirwan98]
(Work Process Analysis approach. The first part (WPAM-I) is qualitative; dependence of the organisational
Model) basically a task analysis is performed on the work factors, if the HEPs used have
process to which the tasks involved, actions and the already taken into the account the
defences in the task, and their failure modes are underlying factors, which may at
investigated. Next, the organisational factors matrix is times be implicitly modelled.
defined for each key work process. The organisational
factors influencing each task in the given work process
are then ranked according to their importance.
WPAM-II is next used to modify minimal cut set
frequencies to include organisational dependencies
among the PSA parameters, i.e. candidate parameter
group. The next step in the WPAM-II is
quantification. SLIM is used to find new frequencies
for each minimal cut set.
724. WSA T H 1981 Systematic investigation of working methods, Related to Barrier Analysis, but 3 5 6 manufacturing x x • [KirwanAinsworth92]
(Work Safety Analysis) machines and working environments in order to find looks more in detail at each step ergonomics • [Leveson95]
out direct accident potentials. Similar to HAZOP, but of the task to see what hazards
the search process is applied to work steps. could occur, and to provide a
rough quantitative calculation of
their relative risks, and hence
what barriers are needed.
725. Z T D 1977 Specification language notation for sequential systems Z is short for Zermelo. Formal 6 rail x • [Bishop90]
or and a design technique that allows the developer to Method. Powerful specification • [Cichocki&Gorski]
Z notation proceed from a Z specification to executable notation for large systems. • [EN 50128]
algorithms in a way that allows proof of their Commercial training available. • Wikipedia
correctness with respect to the specification. Related to VDM. Tools available.
Software requirements
specification phase and design &
development phase.
726. ZA or ZSA (Zonal (Safety) Analysis) | Format: T | Purpose: Dh | Year: 1987 | Safety assessment stage: 3 | Domain: aircraft
Aim/Description: Used to identify sources of common cause failures and effects of components on their neighbours. Zonal Analysis is an analysis of the physical disposition of the system and its components in its installed or operating domain. It should be used to determine: a) the consequences of effects of interactions with adjacent systems in the same domain; b) the safety of the installation and its compliance with relevant standards and guidelines; c) areas where maintenance errors affecting the installation may cause or contribute to a hazard; d) the identification of sources of common cause failure, e.g. environmental factors; e) transportation and storage effects.
Remarks: In [FAA00] ZSA is named Zonal Mapping Tool. See also Beta-factor method, CCA (Common Cause Analysis), Multiple Greek Letters method, Particular Risk Analysis, Shock method.
References: [ARP 4761], [DS-00-56], [Mauri, 2000], [FAA00], [MUFTIS3.2-I]
Part 2: Statistics
This part provides statistics on the following details for each of the 726 methods as collected in part 1:
• Format; specifies whether the method is a (D) Database, data analysis tool or data mining tool, a (G)
Generic term, a (M) Mathematical model, an (I) Integrated method of more than one technique, or a (T)
specific Technique.
• Purpose; specifies whether the method is a (R) Risk assessment technique, a (H) Human performance
analysis technique, a (M) hazard Mitigating technique, an (O) Organisation technique, a (T) Training
technique, a (Dh) hardware Dependability technique, a (Ds) software Dependability technique, or a (D)
Design technique.
• Year, i.e. year of development of the method. If uncertain, then words like ‘about’ or ‘or older’ are
added.
• Safety assessment stage, which lists the stages of a generic safety assessment process, proposed in
[SAP 15], during which the method can be of use. These stages are: 1) Scope the assessment; 2)
Learning the nominal operation; 3) Identify hazards; 4) Combine hazards into risk framework; 5)
Evaluate risk; 6) Identify potential mitigating measures to reduce risk; 7) Safety monitoring and
verification; 8) Learning from safety feedback.
• Domains, i.e. the domains of application the method has been used in, such as nuclear, chemical, ATM
(air traffic management), aviation, aircraft development, computer processes.
• Application, i.e. is the method applicable to hardware, software, human, procedures, or to organisation.
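The classification columns above can be sketched as a simple record. This is purely illustrative — the Safety Methods Database is a document, not software — and the field names are assumptions; the example values are taken from entry 726 (ZSA), except the application flags, which are invented since the aspect columns did not survive extraction:

```python
# Illustrative sketch only: field names are assumptions; example values are
# taken from entry 726 (ZSA), except the application flags, which are invented.
from dataclasses import dataclass, field

@dataclass
class MethodRecord:
    name: str          # method name and acronym
    fmt: str           # Format: "D", "G", "M", "I" or "T"
    purpose: str       # Purpose: "R", "H", "M", "O", "T", "Dh", "Ds" or "D"
    year: str          # year of development; may be "about 1987" or "1987 or older"
    stages: set = field(default_factory=set)        # subset of stages 1..8 of [SAP 15]
    domains: tuple = ()                             # domains of application
    applications: set = field(default_factory=set)  # subset of {"Hw","Sw","Hu","Pr","Or"}

zsa = MethodRecord(
    name="ZSA (Zonal Safety Analysis)",
    fmt="T", purpose="Dh", year="1987",
    stages={3}, domains=("aircraft",), applications={"Hw"},
)
```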
A. Statistics on Format of method (column ‘Format’)
The Safety Methods Database provides information on format of method, by the following classes:
• (G) Generic term rather than a particular technique
• (D) Database, data analysis or data mining tool
• (T) specific Technique
• (I) Integrated method of more than one technique
• (M) Mathematical model
The number of methods in each of these classes is provided by the table and figure below.
[Figure: distribution of the 726 methods over the five format classes; from the extraction only the count for Specific technique (473 methods) is recoverable.]
B. Statistics on Primary Purpose of method (column ‘Purpose’)
The Primary Purpose class specifies whether the method is a
• (R) Risk assessment technique
• (Dh) hardware Dependability technique
• (Ds) software Dependability technique
• (H) Human performance analysis technique
• (O) Organisation technique
• (T) Training technique
• (M) hazard Mitigating technique
• (D) Design technique
For some methods, the primary purpose is not specified. These methods are mainly of format Generic
term, Database or data mining tool, or Mathematical model (see previous statistic); such types
of methods generally serve multiple purposes.
C. Statistics on year of development of method (column ‘Year’)
The Safety Methods Database also indicates the method’s year of development. For 45 of the 726 methods
collected, this information was not available. For some other methods, only an estimated year could be
identified, and for others only a ‘latest’ year is available, i.e. the method existed in that year, but it is
possible that it was developed earlier than that. Statistics on number of methods developed in each period of
time are provided in the figure below.
The oldest methods are Data mining (1750), Monte Carlo simulation (1777), Neural networks (1890),
Factor analysis (1900), Markov chains (1906) and Pareto charts (1906).
2000-2010: 95
1990-1999: 266
1980-1989: 155
1970-1979: 96
1960-1969: 32
1950-1959: 18
1940-1949: 3
1930-1939: 4
1920-1929: 4
1910-1919: 2
1900-1909: 3
1800-1899: 1
1700-1799: 2
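The tally above can be reproduced with a simple binning routine; a minimal sketch, in which the year list holds only the six oldest methods named in the text, for illustration:

```python
# A sketch of the decade binning behind the histogram above; the year list
# holds only the six oldest methods named in the text, for illustration.
from collections import Counter

def decade_bin(year: int) -> str:
    start = (year // 10) * 10
    return f"{start}-{start + 9}"

years = [1750, 1777, 1890, 1900, 1906, 1906]
bins = Counter(decade_bin(y) for y in years)
# bins["1900-1909"] == 3  (Factor analysis, Markov chains, Pareto charts)
```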
D. Statistics on coverage of eight stages of generic safety assessment process (column ‘Safety
assessment stage’)
The Safety Methods Database also indicates in which stages of the generic safety assessment process the
method can be of use; these stages are explained in [SAP 15]. A summary distribution of methods among
the eight stages is given below.
The following chart and table show how many methods address multiple stages.
Number of methods per number of stages covered:

# stages covered    Number of methods in this class
8                   3
7                   3
6                   4
5                   3
4                   18
3                   55
2                   184
1                   456
Total               726

Number of methods of use during each individual stage:

Stage:              1    2    3    4    5    6    7    8
Number of methods:  15   147  221  168  229  172  95   98

[The original table further breaks each class down by the specific combination of stages covered; the stage-column alignment of those rows was lost in the extraction.]
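The aggregation behind this table is a straightforward tally over per-method stage sets; a sketch, using three methods whose stage sets are as listed in Part 1:

```python
# Sketch of the tally behind the table above: per method, the set of stages it
# addresses. The three stage sets below are as listed in Part 1.
from collections import Counter

stages_per_method = {
    "WSA": {3, 5, 6},   # entry 724
    "Z":   {6},         # entry 725
    "ZSA": {3},         # entry 726
}

# How many methods cover k stages (rows of the table above):
per_k = Counter(len(s) for s in stages_per_method.values())
# How many methods are of use during each individual stage (column totals):
per_stage = Counter(s for stages in stages_per_method.values() for s in stages)
```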
E. Statistics on coverage of domains of application (column ‘Domains’)
The Safety Methods Database covers many domains of application, such as aviation, nuclear industry,
chemical industry, telecommunications, etc. For each method the database indicates in which domains of
application it has been used to date. Note that for some methods the domain of application is unclear (e.g.
some methods are generic models, with no particular application in mind). The histogram below shows for
different domains how many of the 726 methods collected have been applied in that domain. These domains
have been grouped as follows:
Aviation/ATM: 182
Computer: 144
Nuclear: 136
Chemical: 75
Aircraft/avionics: 81
Defence: 68
"Many": 57
Rail/road/water transport: 32
Health: 32
Energy (non-nuclear): 21
Space: 16
Other: 21
F. Statistics on coverage of concept aspects (columns ‘Hw’, ‘Sw’, ‘Hu’, ‘Pr’, ‘Or’)
Finally, another detail provided for each method listed in the Safety Methods Database is whether it is
aimed at assessing Hardware aspects, Software aspects, Human aspects, Procedures, or Organisation. Some
statistics on these results are given below.
[Figure: number of methods aimed at each of the five concept aspects. The counts shown in the figure are 92, 201, 248, 356 and 362; their assignment to Hardware, Software, Human, Procedures and Organisation is not recoverable from the extraction.]
Note that one method may cover several of these concept aspects, so some methods are counted more than
once. The following table shows how many methods cover which of these aspects.
A large share of the methods (420 methods, or 58%) cover one concept aspect only.
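The "one aspect only" figure requires a per-method tally rather than the per-aspect counts, since the latter count a multi-aspect method more than once; a sketch with invented aspect sets:

```python
# Counting "one aspect only" requires a per-method tally, since the per-aspect
# counts count a multi-aspect method more than once. Aspect sets are invented.
aspects_per_method = {
    "WSA": {"Hu", "Pr"},
    "Z":   {"Sw"},
    "ZSA": {"Hw"},
}

one_aspect_only = sum(1 for a in aspects_per_method.values() if len(a) == 1)
share = one_aspect_only / len(aspects_per_method)
```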
Part 3: References
[Andow89] P. Andow, Estimation of event frequencies: system reliability, component reliability data, fault tree analysis. In
R.A. Cox, editor, Mathematics in major accident risk assessment, pp. 59-70, Oxford, 1989.
[Andre&Degani96] A. Andre, A. Degani, Do you know what mode you are in? An analysis of mode error in everyday things,
Proceedings of 2nd Conference on Automation Technology and Human, pp. 1-11, March 1996
[Anton, 1996] Annie I. Antón, Eugene Liang, Roy A. Rodenstein, A Web-Based Requirements Analysis Tool, In Fifth
Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET-ICE '96), pages 238-243,
Stanford, California, USA, June 1996, http://www.csc.ncsu.edu/faculty/anton/pubs/gbrat.pdf
[Anton, 1997] A. I. Anton, “Goal Identification and Refinement in the Specification of Software-Based Information Systems,”
Ph.D. Dissertation, Georgia Institute of Technology, Atlanta GA, 1997.
[APMS example] T. Chidester In Conjunction with GAIN Working Group B, Analytical Methods and Tools, Example Application
of The Aviation Performance Measuring System (APMS), September 2004,
http://technology.arc.nasa.gov/SOY2006/SOY_MorningReport/Publications/1%20-%20Report%20-
%20Example%20Application%20of%20APMS.pdf
[APMS guide] APMS 3.0 Flight Analyst Guide, August 25, 2004,
http://apms.arc.nasa.gov/publications/APMS_3.0_FlightAnalystGuide.pdf
[Apthorpe01] R. Apthorpe, A probabilistic approach to estimating computer system reliability, 4 June 2001,
http://www.usenix.org/events/lisa2001/tech/apthorpe/apthorpe.pdf
[AQ, 2003] Ammunition Quarterly, Vol. 9 No. 3 October 2003,
http://www.marcorsyscom.usmc.mil/am/ammunition/Corporate_Center/Ammunition_Quarterly/Vol%209%20No
%203.pdf
[ARES-RBDA] Risk-based decision analysis according to ARES corporation,
http://www.arescorporation.com/services.aspx?style=1&pict_id=186&menu_id=157&id=1068
[ARP 4754] SAE ARP 4754, Certification considerations for highly-integrated or complex aircraft systems, Systems
Integration Requirements Task Group AS-1C, Avionics Systems Division (ASD), Society of Automotive
Engineers, Inc. (SAE), September 1995.
[ARP 4761] SAE ARP 4761, Guidelines and methods for conducting the safety assessment process on civil airborne systems
and equipment, S-18 Committee, Society of Automotive Engineers, Inc. (SAE), March 1994.
[ART-SCENE slides] N. Maiden, Centre for HCI Design, slides on ART-SCENE: Modelling Complex Design Trade-off Spaces with
i*, http://www.science.unitn.it/tropos/tropos-workshop/slides/maiden.ppt
[ART-SCENE web] http://www.ercim.org/publication/Ercim_News/enw58/maiden.html
[ASIAS portal] FAA ASIAS portal, http://www.asias.faa.gov/portal/page/portal/ASIAS_PAGES/ASIAS_HOME. See also
http://www.cast-safety.org/pdf/asias_factsheet.pdf
[ASRS web] ASRS web site, http://asrs.arc.nasa.gov/
[ATCSPMD] http://hf.tc.faa.gov/atcpmdb/default.htm
[ATSB, 2004] ICAO Universal Safety Oversight Audit Programme, Audit Report Of The Australian Transport Safety Bureau
(ATSB) Of Australia, Canberra, 31 May to 4 June 2004,
http://www.atsb.gov.au/publications/2004/pdf/ICAO_audit.pdf
[Avermaete, 1998] J. Van Avermaete, Non-technical skill evaluation in JAR-FCL, NLR, November 1998.
[Ayeko02] M. Ayeko, Integrated Safety Investigation Methodology (ISIM) -Investigating for Risk Mitigation, Workshop on
the Investigation and Reporting of Incidents and Accidents (IRIA 2002), Editor: C.W. Johnson, pp. 115-126,
http://www.dcs.gla.ac.uk/~johnson/iria2002/IRIA_2002.pdf
[Ayyub01] B.M. Ayyub, Elicitation of expert opinions for uncertainty and risks, CRC Press, Boca Raton, Florida, 2001.
[Baber et al, 2005] Chris Baber, Daniel P. Jenkins, Guy H. Walker, Human Factors Methods : A Practical Guide for Engineering
And Design, Ashgate Publishing, 2005
[Babinec&Bernatik&Pavelka99] F. Babinec, A. Bernatik, M. Vit, T. Pavelka, Risk sources in industrial region, November 1999,
http://mahbsrv.jrc.it/proceedings/greece-nov-1999/D6-BAB-BERNATIC-z.pdf
[Bahr, 1997] N.J. Bahr, System Safety Engineering and Risk Assessment: A Practical Approach. Taylor & Francis, 1997
[Bakker&Blom93] G.J. Bakker, H.A.P. Blom, Air traffic collision risk modelling, 32nd IEEE Conference on Decision and Control,
Vol 2, Institute of Electrical and Electronics Engineers, New York, Dec 1993, pp. 1464-1469.
[Balk&Bossenbroek, 2010] A.D. Balk, J.W. Bossenbroek, Aircraft Ground Handling And Human Factors - A comparative study of the
perceptions by ramp staff and management, National Aerospace Laboratory Report NLR-CR-2010-125, April
2010, http://www.easa.europa.eu/essi/documents/HFreportfinal_000.pdf
[Barbarino01] M. Barbarino, EATMP Human Resources R&D, 2nd ATM R&D Symposium, 18-20 June 2001, Toulouse.
[Barbarino02] M. Barbarino, EATMP Human Factors, ATM 2000+ Strategy Update Workshop, 5-7 March 2002
[Basnyat, 2006] Sandra Basnyat , Nick Chozos , Chris Johnson and Philippe Palanque, Incident and Accident Investigation
Techniques to Inform Model-Based Design of Safety-Critical Interactive Systems, Lecture Notes in Computer
Science, Volume 3941/2006, Springer Berlin / Heidelberg,
http://liihs.irit.fr/basnyat/papers/BasnyatChozosJohnsonPalanqueDSVIS05.pdf
[Basra&Kirwan98] G. Basra and B. Kirwan, Collection of offshore human error probability data, Reliability Engineering and System
Safety, Vol 61, pp. 77-93, 1998
[Baybutt89] P. Baybutt, Uncertainty in risk analysis, Mathematics in major accident risk assessment. In R.A. Cox, editor,
Mathematics in major accident risk assessment, pp. 247-261, Oxford, 1989.
[Bayesian web] http://en.wikipedia.org/wiki/Bayesian_probability
[BBN04] About Bayesian Belief Networks, for BNet.Builder Version 1.0, Charles River Analytics, Inc., Last updated
September 9, 2004, http://www.cra.com/pdf/BNetBuilderBackground.pdf
[Belief networks] http://www.norsys.com/belief.html
[Benner75] L. Benner, Accident Investigations: Multilinear Events Sequencing Methods, Journal of Safety Research, June
1975, Vol. 7, No. 2
[Benoist] Experience feedback in the Air Transport,
http://www.eurosafe-forum.org/files/pe_358_24_1_panel_lecture_benoist.pdf
[Bernardini] S. Bernardini (1999). "Using think-aloud protocols to investigate the translation process: Methodological
aspects". RCEAL Working papers in English and applied linguistics 6, edited by John, N. Williams. Cambridge:
University of Cambridge. 179-199.
[Besco, 2005] Robert O. Besco, Removing Pilot Errors Beyond Reason! Turning Probable Causes Into Plausible Solutions,
36th Annual International Seminar, ISASI Proceedings, 2005, pages 16 - 21
181
[Bilimoria00] K. Bilimoria, B. Sridhar, G. Chatterji, K. Sheth, and S. Grabbe, FACET: Future ATM Concepts Evaluation
Tool, Free Flight DAG-TM Workshop, NASA Ames Research Center, 24 May 2000,
http://www.asc.nasa.gov/aatt/wspdfs/Billimoria_FACET.pdf
[Bishop90] Dependability of critical computer systems - Part 3: Techniques Directory; Guidelines produced by the European
Workshop on Industrial Computer Systems Technical Committee 7 (EWICS TC7). London Elsevier Applied
Science 1990 (249 pages), P.G. Bishop (editor), Elsevier, 1990
[Blajev, 2003] Tzvetomir Blajev, Eurocontrol, Guidelines for Investigation of Safety Occurrences in ATM, Version 1.0, March
2003, http://www.eurocontrol.int/safety/gallery/content/public/library/guidelines%20for%20investigation.pdf
[Blanchard, 2006] H. Blanchard, Human reliability data calibration for European air traffic management. For Eurocontrol
Experimental Centre, HEL/EEC/051335/RT03, 5 May 2006, Issue 01
[Blom&al98,01] H.A.P. Blom, G.J. Bakker, P.J.G. Blanker, J. Daams, M.H.C. Everdij, and M.B. Klompstra, Accident risk
assessment for advanced ATM, 2nd USA/Europe Air Traffic Management R&D Seminar, FAA/Eurocontrol,
1998, also in Eds G.L. Donohue, A.G. Zellweger, Air Transportation Systems Engineering, AIAA, pp. 463-480,
2001.
[Blom&Bakker02] H.A.P. Blom and G.J. Bakker, Conflict Probability and Incrossing Probability in Air Traffic Management, Proc.
Conference on Decision and Control 2002, pp. 2421-2426, December 2002.
[Blom&Bakker93] H.A.P. Blom, G.J. Bakker, A macroscopic assessment of the target safety gain for different en route airspace
structures within SUATMS, Working paper for the ATLAS study of the commission of European Communities,
NLR report CR 93364 L, 1993.
[Blom&Bar-Shalom88] H.A.P. Blom and Y. Bar-Shalom, The Interacting Multiple Model Algorithm for Systems with Markovian
Switching Coefficients, IEEE Trans. on Automatic Control, Vol. 33, No. 8, 1988, pp. 780-783.
[Blom&Daams&Nijhuis00] H.A.P. Blom, J. Daams, H.B. Nijhuis, Human cognition modelling in ATM safety assessment, 3rd USA/Europe
Air Traffic management R&D seminar, Napoli, 13-16 June 2000, also in Eds G.L. Donohue, A.G. Zellweger, Air
Transportation Systems Engineering, AIAA, pp. 481-511, 2001.
[Blom&Everdij&Daams99] H.A.P. Blom, M.H.C. Everdij, J. Daams, ARIBA Final Report Part II: Safety Cases for a new ATM operation,
NLR report TR-99587, Amsterdam, 1999, http://www.aribaproject.org/rapport6/part2/index.htm
[Blom&Stroeve&Daams&Nijhuis01] H.A.P. Blom, S. Stroeve, J. Daams and H.B. Nijhuis, Human cognition performance model based evaluation of
air traffic safety, 4th International Workshop on Human Error, Safety and Systems Development, 11-12 June
2001, Linköping, Sweden
[Blom&Stroeve&Everdij&Park02] H.A.P. Blom, S.H. Stroeve, M.H.C. Everdij, M.N.J. van der Park, Human cognition performance model based
evaluation of safe spacing in air traffic, ICAS 2002 Congress
[Blom&Stroeve04] H.A.P. Blom and S.H. Stroeve, Multi-Agent Situation Awareness Error Evolution in Air Traffic, International
Conference on Probabilistic Safety Assessment and Management (PSAM 7), June 14-18, 2004, Berlin, Germany.
[Blom03] H.A.P. Blom, Stochastic hybrid processes with hybrid jumps, IFAC Conference on Analysis and Design of
Hybrid Systems (ADHS03), April 2003, HYBRIDGE deliverable R2.3, http://www.nlr.nl/public/hosted-
sites/hybridge/
[Blom90] H.A.P. Blom, Bayesian estimation for decision-directed stochastic control, Ph.D. dissertation, Delft University of
Technology, 1990.
[Bolstad et al, 2002] Cheryl A. Bolstad, Jennifer M. Riley, Debra G. Jones, Mica R. Endsley, Using goal directed task analysis with
army brigade officer teams, Human Factors and Ergonomics Society 47th Annual Meeting, September 30th –
October 4th, 2002, Baltimore, MD.
http://www.satechnologies.com/Papers/pdf/Bolstad%20et%20al%20(2002)%20HFES%20GDTA.pdf
[Bongard01] J.A. Bongard, Maintenance Error Management through MEDA, 15th Annual Symposium - Human Factors in
Maintenance and Inspection 27-29 March 2001, London, UK,
http://www.hf.faa.gov/docs/508/docs/bongard15.pdf
[Botting&Johnson98] R.M. Botting, C.W. Johnson, A formal and structured approach to the use of task analysis in accident modelling,
International Journal Human-Computer studies, Vol 49, pp. 223-244, 1998
[Branicky&Borkar&Mitter98] M.S. Branicky, V.S. Borkar, S.K. Mitter, A unified framework for Hybrid Control: model and optimal control
theory, IEEE Transactions on Automatic Control, Vol 43, No 1, pp. 31-45, Jan 1998,
http://www.vuse.vanderbilt.edu/~biswas/Courses/cs367/papers/branicky-control.pdf
[Brooker02] P. Brooker, Future Air Traffic Management: Quantitative en route safety assessment, The Journal of Navigation,
2002, Vol 55, pp. 197-211, The Royal Institute of Navigation
[Budalakoti et al, 2006] Suratna Budalakoti, Ashok N. Srivastava, and Ram Akella, “Discovering Atypical Flights in Sequences of
Discrete Flight Parameters”, presented at the IEEE Aerospace Conference in March 2006.
[Butler&Johnson95] R.W. Butler and S.C. Johnson, Techniques for modeling the reliability of fault-tolerant systems with Markov
state-space approach, NASA Reference publication 1348, September 1995
[CAA9095] CAA, Aircraft Proximity Hazard (APHAZ reports), CAA, Volumes 1-8, 1990-1995
[CAA-RMC93-1] Hazard analysis of an en-route sector, Volume 1 (main report), Civil Aviation Authority, RMC Report R93-
81(S), October 1993.
[CAA-RMC93-2] Hazard analysis of an en-route sector, Volume 2, Civil Aviation Authority, RMC Report R93-81(S), October
1993.
[CAATS II D13] Jelmer Scholte, Bas van Doorn, Alberto Pasquini, CAATS II (Cooperative Approach To Air Traffic Services II),
D13: Good Practices For Safety Assessment In R&D Projects, PART 2: Appendices, October 2009,
http://www.eurocontrol.int/valfor/gallery/content/public/docs/CAATSII-D13-2.pdf
[Cacciabue&Amendola&Cojazzi86] P.C. Cacciabue, A. Amendola, G. Cojazzi, Dynamic logical analytical methodology versus fault tree: the case of
the auxiliary feedwater system of a nuclear power plant, Nuclear Technology, Vol. 74, pp. 195-208, 1986.
[Cacciabue&Carpignano&Vivalda92] P.C. Cacciabue, A. Carpignano, C. Vivalda, Expanding the scope of DYLAM methodology to study the dynamic
reliability of complex systems: the case of chemical and volume control in nuclear power plants, Reliability
Engineering and System Safety, Vol. 36, pp. 127-136, 1992.
[Cacciabue98] P.C. Cacciabue, Modelling and human behaviour in system control, Advances in industrial control, Springer,
1998
[Cagno&Acron&Mancini01] E. Cagno, F. Acron, M. Mancini, Multilevel HAZOP for risk analysis in plant commissioning, ESREL 2001
[CAP 760] Safety Regulation Group (SRG) CAP 760, Guidance on the Conduct of Hazard Identification, Risk Assessment
and the Production of Safety Cases, For Aerodrome Operators and Air Traffic Service Providers, 13 January
2006, http://www.caa.co.uk/docs/33/CAP760.PDF
[Card83] Card, S. K., Moran, T. P., and Newell, A. L. (1983). The psychology of human computer interaction. Hillsdale,
NJ: Erlbaum.
[CbC lecture] IS 2620: Developing Secure Systems, Jan 16, 2007, Secure Software Development Models/Methods, Lecture 1,
http://www.sis.pitt.edu/~jjoshi/courses/IS2620/Spring07/Lecture1.pdf
[CBSSE90, p30] Commission on Behavioral and Social Sciences and Education, Quantitative Modeling of Human Performance in
Complex, Dynamic Systems, 1990, page 30, http://books.nap.edu/books/030904135X/html/30.html
[CBSSE90, p40] Commission on Behavioral and Social Sciences and Education, Quantitative Modeling of Human Performance in
Complex, Dynamic Systems, 1990, page 40, http://books.nap.edu/books/030904135X/html/40.html#pagetop
[CCS] http://ei.cs.vt.edu/~cs5204/fall99/ccs.html
[CEFA example] D. Mineo In Conjunction with GAIN Working Group B, Analytical Methods and Tools, Example Application of
Cockpit Emulator for Flight Analysis (CEFA), September 2004,
http://www.flightsafety.org/gain/CEFA_application.pdf
[Charpentier00] P. Charpentier, Annex 5: Tools for Software fault avoidance, Task 3: Common mode faults in safety systems,
Final Report of WP 1.2, European Project STSARCES (Standards for Safety Related Complex Electronic
Systems), Contract SMT 4CT97-2191, February 2000, http://www.safetynet.de/EC-
Projects/stsarces/WP12d_Annex5_software_task3.PDF
[CHIRP web] The CHIRP Charitable Trust Home Page, http://www.chirp.co.uk/
[ChoiCho, 2007] Jong Soo Choi and Nam Zin Cho, A practical method for accurate quantification of large fault trees, Reliability
Engineering and System Safety Vol 92, pp. 971-982, 2007
[Chudleigh&Clare94] M.F. Chudleigh and J.N. Clare, The benefits of SUSI: safety analysis of user system interaction, Arthur D. Little,
Cambridge Consultants, 1994
[ChungNixon, 1995] Lawrence Chung, Brian A. Nixon, Dealing with Non-Functional Requirements: Three Experimental Studies of a
Process-Oriented Approach, Proc. 17th ICSE, Seattle, WA, U.S.A., Apr. 1995, pp. 25-37.
http://citeseer.ist.psu.edu/cache/papers/cs/7312/http:zSzzSzwww.utdallas.eduzSz~chungzSzftpzSzICSE95.pdf/chung95dealing.pdf
[Cichocki&Gorski] T. Cichocki, J. Górski, Safety assessment of computerised railway signalling equipment supported by formal
techniques, Proc. of FMERail Workshop #5, Toulouse (France), September, 22-24, 1999
[Cluster Analysis] Cluster Analysis, http://www-users.cs.umn.edu/~kumar/dmbook/ch8.pdf
[CM] http://www2.cs.uregina.ca/~hamilton/courses/831/notes/confusion_matrix/confusion_matrix.html
[COCOM web] http://www.ida.liu.se/~eriho/COCOM_M.htm
[COGENT web] COGENT home page, http://cogent.psyc.bbk.ac.uk/
[Cojazzi&Cacciabue92] G. Cojazzi, P.C. Cacciabue, The DYLAM approach for the reliability analysis of dynamic systems. In Aldemir,
T., N.O. Siu, A. Mosleh, P.C. Cacciabue, and B.G. Göktepe, editors, Reliability and Safety Assessment of
dynamic process systems, volume 120 of Series F: Computer and Systems Sciences, pp. 8-23. Springer-Verlag,
1994.
[Collier et al, 1995] Collier, S.G., Folleso, K., 1995. SACRI: A measure of situation awareness for nuclear power control rooms. In
D. J. Garland and M. R. Endsley (Eds.), Experimental analysis and measurement of situation awareness, pp. 115-
122, Daytona Beach, FL: Embry-Riddle University Press.
[Cooper01] J. Arlin Cooper, The Markov Latent Effects Approach to Safety Assessment and Decision-Making, SAND2001-
2229, Unlimited Release, September 2001, http://www.prod.sandia.gov/cgi-bin/techlib/access-
control.pl/2001/012229.pdf
[Cooper96] J.A. Cooper, PHASER 2.10 methodology for dependence, importance, and sensitivity: The role of scale factors,
confidence factors, and extremes, Sandia National Labs., Dept. of System Studies, Albuquerque, NM USA, Sept.
1996, http://www.osti.gov/bridge/servlets/purl/392821-cFygGe/webviewable/392821.pdf
[Corker00] K.M. Corker, Cognitive models and control: human and system dynamics in advanced airspace operations, Eds:
N. Sanders, R. Amalberti, Cognitive engineering in the aviation domain, Lawrence Erlbaum Ass., pp. 13-42,
2000
[Cotaina&al00] N. Cotaina, F. Matos, J. Chabrol, D. Djeapragache, P. Prete, J. Carretero, F. García, M. Pérez, J.M. Peña, J.M.
Pérez, Study of existing Reliability Centered Maintenance (RCM) approaches used in different industries,
Universidad Politécnica de Madrid, Facultad de informática, TR Number FIM/110.1/DATSI/00, 2000,
http://www.datsi.fi.upm.es/~rail/bibliography/documents/RAIL-soa-FIMREPORT-00.pdf
[CPIT example] Mike Moodi & Steven Kimball, In Conjunction with GAIN Working Group B, Analytical Methods and Tools,
Example Application of Cabin Procedural Investigation Tool (CPIT), September 2004,
http://www.flightsafety.org/gain/CPIT_application.pdf
[CPQRA] S. Bonvicini, V. Cozzani, G. Spadoni, Chemical process safety and quantitative risk analysis,
http://www.dicma.unibo.it/NR/rdonlyres/25D17F38-DEF0-4473-B95E-
73D9B86A8B35/56101/SICUREZZA1.pdf
[Cranfield, 2005] Cranfield University Department of Air Transport International General Aviation and Corporate Aviation Risk
Assessment (IGA-CARA) Project, Cranfield, Final Report, Issue 1.1, June 2005,
http://www.airsafety.aero/assets/uploads/files/assi_IGA_CARA_report_web.pdf
[CREAM web] http://www.ida.liu.se/~eriho/CREAM_M.htm
[CREWS] http://crinfo.univ-paris1.fr/CREWS/Corps.htm
[CRIOP History] http://www.criop.sintef.no/CRIOP%20in%20short/The%20CRIOP%20history.htm
[CSE web] http://www.ida.liu.se/~eriho/CSE_M.htm
[CTD web] http://www.ida.liu.se/~eriho/CTD_M.htm
[CWA portal] http://projects.ischool.washington.edu/chii/portal/index.html
[D5 Main Document] M.H.C. Everdij, Review of techniques to support the EATMP Safety Assessment Methodology, Main Document,
Safety methods Survey Final report D5, 31 March 2003.
[D5 Technical Annex] M.H.C. Everdij, Review of techniques to support the EATMP Safety Assessment Methodology, Technical
Annex, Safety methods Survey Final report D5, 31 March 2003.
[Daams&Blom&Nijhuis00] J. Daams, H.A.P. Blom, and H.B. Nijhuis, Modelling Human Reliability in Air Traffic Management, PSAM5 -
Probabilistic Safety Assessment and Management, S. Kondo, and K. Furata (Eds.), Vol. 2/4, Universal Academy
Press, Inc., Tokyo, Japan, 2000, pp. 1193-1200.
[Dang et al, 2002] W.N. Dang, B. Reer, S. Hirschberg. Analyzing errors of commission: identification and a first assessment for a
Swiss plant. In: Proceedings of the OECD NEA workshop, Building the new HRA: errors of commission—from
research to application, Rockville, MD, USA, May 7–9, 2001. NEA/CSNI/R(2002)3. Le Seine St. Germain,
France: OECD, Nuclear Energy Agency; 2002. p. 105–16.
[Dardenne, 1993] A. Dardenne, A. van Lamsweerde and S. Fickas, “Goal Directed Requirements Acquisition,” Science of
Computer Programming, vol. 20, pp. 3–50, Apr. 1993.
[Darlington] R.B. Darlington, Factor Analysis, http://comp9.psych.cornell.edu/Darlington/factor.htm
[Das et al, 2000] Das N.; Yu F.-J.; Hwang S.-L.; Huang Y.-H.; Lee J.-S., Application of human error criticality analysis for
improving the initiator assembly process, International Journal of Industrial Ergonomics, Volume 26, Number 1,
July 2000, pp. 87-99, Elsevier
[Davis, 2007] Guy Davis, SENG 635: Software Reliability and Testing Tutorial Part #2, February 2007,
http://www.guydavis.ca/seng/seng635/tutorial2.doc
[Davis84] M.H.A. Davis, Piecewise Deterministic Markov Processes: A general class of non-diffusion stochastic models,
Journal Royal Statistical Society (B), Vol 46, pp. 353-388, 1984
[Davison] H. Davison, Cognitive task analysis: Current research, slides, http://web.mit.edu/16.459/www/CTA2.pdf
[DDESB, 2000] Department of Defense Explosives Safety Board (DDESB), Risk-Based Explosives Safety Analysis, Technical
paper No 14, 2000, http://uxoinfo.com/blogcfc/client/enclosures/ddesbtechPaper14.pdf
[DEFSTAN00-56] Hazard analysis and safety classification of the computer and programmable electronic system elements of
defence equipment, Int. Defence standard 00-56/1, April 1991.
[Degani&Kirlik, 1995] Asaf Degani, Alex Kirlik, Modes In Human-Automation Interaction: Initial Observations About A Modeling
Approach, Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC).
Vancouver, Canada, October 22-25, 1995. http://ti.arc.nasa.gov/m/profile/adegani/Modes%20in%20Human-
Automation%20Interaction.pdf
[Degani, 1996] Asaf Degani, Modeling Human-Machine Systems: On Modes, Error, And Patterns Of Interaction, School of
Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 1996
[DeJong&al01] H.H. De Jong, R.S. Tump, H.A.P. Blom, B.A. van Doorn, A.K. Karwal, E.A. Bloem, Qualitative Safety
Assessment of a RIASS based operation at Schiphol airport including a quantitative model, Crossing departures
on 01L/19R under good visibility conditions, NLR memorandum LL-2001-017, May 2001
[DeJong04] H.H. De Jong, Guidelines for the identification of hazards: How to make unimaginable hazards imaginable,
Contract Report NLR-CR-2004-094, National Aerospace Laboratory NLR, 2004.
[Delphi] Web article on Delphi Method, http://www.iit.edu/~it/delphi.html
[DeMarle92] DeMarle, D. J., & Shillito, M. L. (1992). Value engineering. In G. Salvendy, (Ed.), Handbook of Industrial
Engineering (2nd ed.). New York John Wiley.
[Demichela and Piccinini, 2003] M. Demichela and N. Piccinini, Integrated Dynamic Decision Analysis: a method for PSA in dynamic process
system, Proceedings sixth Italian Conference on Chemical and process engineering, 8-11 June, 2003, Pisa, Italy,
http://www.aidic.it/CISAP3/webpapers/87Demichela.pdf
[Deutsch et al, 1993] S.E. Deutsch, M.J. Adams, G.A. Abrett, N.L. Cramer, and C.E. Feehrer (1993). RDT&E Support: OMAR
Software Technical Specification, AL/HR-TP-1993-0027. Wright- Patterson AFB, OH.
[DFS Method Handbook, 2004] R. Wiegandt, Safety Assessment Handbook, DFS Deutsche Flugsicherung, Version 2.0, 15 December 2004 (not
public).
[DiBenedetto et al, 2005] Maria D. Di Benedetto, Stefano Di Gennaro, Alessandro D’Innocenzo, Critical Observability for a Class of
Stochastic Hybrid Systems and Application to Air Traffic Management, HYBRIDGE WP7: Error Evolution
Control, May 2005, http://www2.nlr.nl/public/hosted-sites/hybridge/documents/D7.5%2030May05.pdf
[DiBenedetto02] M.D. Di Benedetto and G. Pola, Inventory of Error Evolution Control Problems in Air Traffic Management,
HYBRIDGE D7.1 report, 4 November 2002
[Dispersion] http://www.ene.gov.on.ca/envision/env_reg/er/documents/2004/air%20standards/PA04E0009.pdf
[Dix98] Dix, A. J., Finlay, J. E., Abowd, G. D., Beale, R. (1998). Human-Computer Interaction (2nd ed.). New York:
Prentice Hall.
[DNV-HSE01] Det Norske Veritas, for the Health and Safety Executive, Marine risk assessment, Offshore technology Report
2001/063, http://www.hse.gov.uk/research/otopdf/2001/oto01063.pdf
[DO178B] RTCA DO178B, Software considerations in airborne systems and equipment certification, 1 December 1992
[DOD DCS] Department of Defence Design Criteria Standard, Noise Limits, MIL-STD-1474D, 12 February 1997,
http://www.hf.faa.gov/docs/508/docs/milstd1474doc.pdf
[DOE 1023-95] Department Of Energy (DOE) Standard, Natural Phenomena Hazards Assessment Criteria, DOE-STD-1023-95,
April 2002, http://www.hss.doe.gov/nuclearsafety/ns/techstds/standard/std1023/std102395_reaf.pdf
[DOE-3006] Department Of Energy (DOE) Standard, Planning and Conduct of Operational Readiness Reviews (ORR), DOE-
STD-3006-2000, June 2000, http://www.hss.energy.gov/NuclearSafety/ns/orr/archive/DOE_STD_3006_2000.pdf
[Dorado-Usero et al, 2004] Manuel Miguel Dorado-Usero, Jose Miguel de Pablo Guerrero, Albert Schwartz, William J. Hughes, Karlin Roth,
Frederic Medioni, Didier Pavet, FAA/Eurocontrol Cooperative R&D Action Plan 5 and Action Plan 9,
“Capability Assessment of Various Fast-Time Simulation Models and Tools with Analysis Concerns”, October,
2004, http://www.tc.faa.gov/acb300/ap5_workshops/documents/AP9_MS_TIM_Paper_Final_101504.pdf
[DOT-FTA00] U.S. Department of Transportation, Federal Transit Administration, Hazard analysis guidelines for transit
projects, U.S. Department of Transportation, Research and Special Programs Administration, Final Report,
January 2000, http://transit-safety.volpe.dot.gov/Publications/Safety/Hazard/HAGuidelines.pdf
[Dryden-ORR] NASA, Dryden Centerwide Procedure, Code SH, Facility Operational Readiness Review (ORR), DCP-S-031,
http://www.dfrc.nasa.gov/Business/DMS/PDF/DCP-S-031.pdf
[DS-00-56] Defence Standard 00-56, Safety Management Requirements for defence systems containing programmable
electronics, 21 September 1999
[Durso95] Durso, F.T., Truitt, T.R., Hackworth, C.A., Crutchfield, J.M., Nikolic, D., Moertl, P.M., Ohrt, D. & Manning,
C.A. (1995). Expertise and Chess: a Pilot Study Comparing Situation Awareness Methodologies. In: D.J. Garland
& M. Endsley (Eds), Experimental Analysis and Measurement of Situation Awareness. Embry-Riddle
Aeronautical University Press.
[EATMS-CSD] EATMS Concept and Scope Document (CSD), EATCHIP doc: FCO.ET1.ST02.DEL01, Edition 1.0, 15
September 1995
[Eberts97] Eberts, R. (1997). Cognitive Modeling. In G. Salvendy (Ed.), Handbook of Human Factors and Ergonomics (2nd
ed.). New York: John Wiley.
[ECOM web] http://www.ida.liu.se/~eriho/ECOM_M.htm
[Edwards, 1972] E. Edwards (1972). Man and machine: Systems for safety. In: Proceedings of British Airline Pilots Association
Technical Symposium. British Airline Pilots Association, London, pp. 21-36
[Edwards, 1988] Edwards, E. (1988) “Introductory Overview” in E.L. Wiener & D.C. Nagel (Eds) Human factors in aviation. San
Diego, CA: Academic Press.
[Edwards99] C.J. Edwards, Developing a safety case with an aircraft operator, Proc Second Annual Two-Day Conference on
Aviation Safety Management, May 1999
[EHQ-MOD97] Eurocontrol, Model of the cognitive aspects of air traffic control, Brussels, 1997.
[EHQ-PSSA] PSSA part of [EHQ-SAM]
[EHQ-SAM] Air Navigation System Safety Assessment Methodology, SAF.ET1.ST03.1000-MAN-01, including Safety
Awareness Document edition 0.5 (30 April 1999), Functional Hazard Assessment edition 1.0 (28 March 2000),
Preliminary System Safety Assessment edition 0.2 (8 August 2002) and System Safety Assessment edition 0.1
(14 August 2002)
[EHQ-TASK98] Eurocontrol, Integrated Task and Job Analysis of air traffic controllers, Phase 1, Development of methods,
Brussels, 1998.
[Elm, 2004] W.C. Elm, S.S. Potter, J.W. Gualtieri, E.M. Roth, J.R. Easter, (2004). Applied cognitive work analysis: A
pragmatic methodology for designing revolutionary cognitive affordances. In E. Hollnagel (Ed) Handbook for
Cognitive Task Design. London: Lawrence Erlbaum Associates, Inc.
[EMG] http://www.useit.com/alertbox/20010624.html
[EN 50128] CENELEC (Comité Européen de Normalisation Electrotechnique), European standard Pr EN 50128: Railway
applications, Software for railway control and protection systems, January 1996; From the internet: Annex B:
Bibliography of techniques, http://www.dsi.unifi.it/~fantechi/INFIND/50128a2.ps
[Endsley, 1993] M.R. Endsley (1993). A survey of situation awareness requirements in air-to-air combat fighters. International
Journal of Aviation Psychology, 3(2), 157- 168.
[Endsley95] M.R. Endsley, Towards a theory of situation awareness in dynamic systems, Human Factors, Vol. 37, 1995, pp.
32-64.
[Endsley97] M.R. Endsley, Situation Awareness, Automation & Free Flight, 1997, http://www.atmseminar.org/past-
seminars/1st-seminar-saclay-france-june-1997/papers/paper_019
[Engkvist, 1999] Inga-Lill Engkvist, Accidents leading to over-exertion back injuries among nursing personnel, Department of
Public Health Sciences Division of Rehabilitation Medicine, Karolinska Institutet, Stockholm, Programme for
Ergonomics, National Institute for Working Life, Stockholm, 1999,
http://gupea.ub.gu.se/dspace/bitstream/2077/4210/1/ah1999_20.pdf
[Engstrom, 2006] J. Engström, AIDI, Adaptive Integrated Driver-Vehicle Interface Interaction Plan, August 2006, Document
D4.0.1
[Enterprise-ORR] Cotran Technologies, Enterprise Application Software Systems - Operational Readiness Review (ORR)
Procedures & Checklists, http://www.cotrantech.com/reference/ORR/orr_toc.htm
[E-OCVM] http://www.eurocontrol.int/valfor/public/standard_page/OCVMSupport.html
[ESARR 4] Eurocontrol Safety Regulatory Requirement (ESARR), ESARR 4, Risk assessment and mitigation in ATM,
Edition 1.0, 5 April 2001, http://www.eurocontrol.be/src/index.html (SRC publications - ESARR related).
[Escobar01] J. Escobar, Maintenance Error Decision Aid (MEDA), A process to help reduce maintenance errors, April 2001,
http://www.amtonline.com/publication/article.jsp?pubId=1&id=1086
[ESSAI web] ESSAI web page, http://www.nlr.nl/hosting/www.essai.net/introduction.htm
[EUCARE web] European Confidential Aviation Safety Reporting Network webpage, http://www.eucare.de/
[Everdij et al, 2006] M.H.C. Everdij, H.A.P. Blom, J.W. Nollet, M.A. Kraan, Need for novel approach in aviation safety validation,
Second Eurocontrol Safety R&D Seminar, October 2006, Barcelona, Spain.
http://www.eurocontrol.int/eec/gallery/content/public/documents/conferences/2006_Barcelona/Everdij_Nollet_N
eed_for_novel_approach_to_aviation_safety_validation_19_10_2006.pdf
[Everdij et al, 2009] M.H.C. Everdij, H.A.P. Blom, J.J. Scholte, J.W. Nollet, M.A. Kraan, ‘Developing a framework for safety
validation of multi-stakeholder changes in air transport operations’, Safety Science, Volume 47, Issue 3, March
2009, pages 405-420. doi:10.1016/j.ssci.2008.07.021. Also NLR-TP-2008-425
[Everdij&al04] M.H.C. Everdij, M.B. Klompstra, H.A.P. Blom, B. Klein Obbink, Compositional specification of a multi-agent
system by Dynamically Coloured Petri Nets, HYBRIDGE Deliverable D9.2, November 2004,
http://www.nlr.nl/public/hosted-sites/hybridge/
[Everdij&al06] M.H.C. Everdij, M.B. Klompstra, H.A.P. Blom, B. Klein Obbink, Compositional specification of a multi-agent
system by stochastically and dynamically coloured Petri nets, H.A.P. Blom, J. Lygeros (eds.), Stochastic hybrid
systems: Theory and safety critical applications, Springer, 2006, pp. 325-350. Also NLR-TP-2006-688.
[Everdij&Blom&Bakker02] M.H.C. Everdij, H.A.P. Blom, and G.J. Bakker, Accident risk assessment for airborne separation assurance,
Advanced Workshop on Air Traffic Management (ATM 2002), 22-26 September 2002, Capri, Italy,
http://radarlab.disp.uniroma2.it/FilePDF/B.Bakker.pdf
[Everdij&Blom&Kirwan, 2006] M.H.C. Everdij, H.A.P. Blom and B. Kirwan, Development Of A Structured Database Of Safety Methods,
PSAM 8 Conference, New Orleans, 14-19 May 2006
[Everdij&Blom&Klompstra97] M.H.C. Everdij, H.A.P. Blom, M.B. Klompstra, Dynamically Coloured Petri Nets for Air Traffic Management
Purposes, Proceedings 8th IFAC Symposium on transportation systems, Chania, Greece, pp. 184-189, NLR report
TP 97493, National Aerospace Laboratory NLR, Amsterdam, 1997
[Everdij&Blom02] M.H.C. Everdij and H.A.P. Blom, Bias and Uncertainty in accident risk assessment, TOSCA-II WP4 final report,
2 April 2002, NLR TR-2002-137, TOSCA/NLR/WPR/04/05/10
[Everdij&Blom03] M.H.C. Everdij and H.A.P. Blom, Petri nets and Hybrid-state Markov processes in a power-hierarchy of
dependability models, Proc. IFAC conference on analysis and design of hybrid systems, Saint Malo, Brittany,
France, 16-18 June 2003, pp. 355-360
[Everdij&Blom04] M.H.C. Everdij and H.A.P. Blom, Modelling hybrid state Markov processes through Dynamically and
Stochastically Coloured Petri Nets, National Aerospace Laboratory NLR, HYBRIDGE Project Deliverable PD11,
September 2004, http://www.nlr.nl/public/hosted-sites/hybridge/
[Everdij&Blom05] M.H.C. Everdij and H.A.P. Blom, Piecewise Deterministic Markov Processes represented by Dynamically
Coloured Petri Nets, Stochastics, p. 1-29, February 2005.
[Everdij&Blom06] M.H.C. Everdij, H.A.P. Blom, Hybrid Petri nets with diffusion that have into-mappings with generalized
stochastic hybrid processes, H.A.P. Blom, J. Lygeros (eds.), Stochastic hybrid systems: Theory and safety critical
applications, Springer, 2006, pp. 31-63. Also NLR-TP-2006-689.
[Everdij, 2010] M.H.C. Everdij, ‘Compositional modelling using Petri nets with the analysis power of stochastic hybrid
processes’, PhD Thesis, June 2010, http://doc.utwente.nl/71752/ or
http://ifly.nlr.nl/documents/P10.7%20PhD%20Thesis%20Everdij%2011%20June%202010.pdf
[EverdijBlom, 2007] M.H.C. Everdij and H.A.P. Blom, Study of the quality of safety assessment methodology in air transport,
Proceedings 25th International System Safety Conference, Engineering a Safer World, Hosted by the System
Safety Society, Baltimore, Maryland USA, 13-17 August 2007, Editors: Ann G. Boyer and Norman J. Gauthier,
pages 25-35, 2007
[FAA AC431] FAA Advisory Circular 431-35.2, Reusable launch and reentry vehicle System Safety Process, July 20, 2005,
http://www.skybrary.aero/bookshelf/books/350.pdf
[FAA HFAJA] FAA Human Factors Research and Engineering Division, Human Factors Acquisition Job Aid, DOT/FAA/AR
03/69, http://www.hf.faa.gov/docs/508/docs/jobaid.pdf
[FAA HFED] FAA Human Factors and Engineering Division, AAR 100, Human Factors Assessments in Investment Analysis:
Definition and Process Summary for Cost, Risk, and Benefit Ver 1.0, January 28, 2003,
http://www.hf.faa.gov/docs/508/docs/HFA_IA_Assessment_16.pdf
[FAA HFW] FAA Human Factors Workbench, http://www.hf.faa.gov/Portal/toolsbyamsprocess.aspx
[FAA memo02] FAA Memorandum, Policy no. ANE-2002-35.15-RO, Draft, November 2002,
http://www.ihsaviation.com/memos/PM-ANE-2002-35.15-RO.pdf
[FAA SSMP] US Department of Transportation, Federal Aviation Administration, NAS Modernization, System Safety
Management Program, FAA Acquisition Management System, ADS-100-SSE-1, Rev 3.0, 1 May 2001, FAA
Acquisition System Toolset web page, http://fast.faa.gov/toolsets/SafMgmt/section5.htm#5.2.10
[FAA TM] http://www.hf.faa.gov/docs/508/docs/TranslationMatix.pdf
[FAA tools] FAA Acquisition System Toolset web page, http://fast.faa.gov/toolsets/SafMgmt/section5.htm#5.2.10
[FAA00] FAA System Safety Handbook, December 2000, Updated May 21, 2008,
http://www.faa.gov/library/manuals/aviation/risk_management/ss_handbook/
[FAA-AFS-420-86] Federal Aviation Administration, Risk analysis of rejected landing procedure for land and hold short operations at
Chicago O'Hare International Airport Runways 14R and 27L, DOT-FAA-AFS-420-86, October 2000
[Falla97] M. Falla, Results and Achievements from the DTI/EPSRC R&D Programme in Safety Critical Systems,
Advances in Safety Critical Systems, June 1997, http://www.comp.lancs.ac.uk/computing/resources/scs/
[FANOMOS] FANOMOS section at NLR website, http://www.nlr.nl/smartsite.dws?ch=def&id=10872
[FAS_TAS] FAS intelligence resource program: TAS webpage, http://www.fas.org/irp/program/process/tas.htm
[FAST Method, 2005] JSSI-FAST, The FAST approach to discovering aviation futures and their hazards, Future Safety Team (FAST), a
working group of the JAA Safety Strategy Initiative (JSSI), Draft, 5 October 2005
[FaultInjection] Web page on Fault Injection, http://www.cerc.utexas.edu/~jaa/ftc/fault-injection.html
[Fayyad et al, 1996] Fayyad, Usama; Gregory Piatetsky-Shapiro, and Padhraic Smyth (1996). “From Data Mining to Knowledge
Discovery in Databases”, American Association for Artificial Intelligence, pp. 37-54
[FEA web] Front-End Analysis web page, http://classweb.gmu.edu/ndabbagh/Resources/Resources2/FrontEnd.htm
[Feridun et al, 2005] M. Feridun, O. Korhan, A. Ozakca, Multi-attribute decision making: An application of the Brown-Gibson model
of weighted evaluation, Journal of Applied Sciences vol 5, no 5, pp. 850-852, 2005,
http://adsabs.harvard.edu/abs/2005JApSc...5..850F
[FFC guide 2004] FutureFlight Central Customer Guide (July 6, 2004)
[FFC web] www.ffc.arc.nasa.gov
[Fields, 1997] Fields, B., Harrison, M. & Wright, P. (1997) THEA: Human Error Analysis for Requirements Definition.
Technical Report YCS 294. Department of Computer Science, University of York, York YO10 5DD, UK.
[Fields01] R.E. Fields, Analysis of erroneous actions in the design of critical systems, Submitted for the degree of Doctor of
Philosophy, University of York, Human Computer Interaction Group, Department of Computer Science, January
2001, http://www.cs.york.ac.uk/ftpdir/reports/2001/YCST/09/YCST-2001-09.pdf
[Fitts51] Fitts, P. M., (Ed.). (1951). Human Engineering for an effective air-navigation and traffic-control system.
Columbus Ohio: Ohio State University Research Foundation.
[Fitts64] Fitts, P. M. (1964). Perceptual-motor skill learning. In A. W. Melton (Ed.), Categories of Human Learning. New
York: Academic Press.
[Fitzgerald, 2007] Ronald E. Fitzgerald, Can Human Error in Aviation Be Reduced Using ASHRAM, Proceedings 25th
International System Safety Conference, Baltimore, Maryland, USA, 13-17 August 2007
[FlachGyftodimos, 2002] Peter A. Flach and Elias Gyftodimos, Hierarchical Bayesian Networks: A Probabilistic Reasoning Model for
Structured Domains, In Proceedings of ICML-2002 Workshop on development of representations, pp. 23-30,
2002
[Flanagan, 1954] J.C. Flanagan (1954) The Critical Incident Technique. Psychological Bulletin, 51.4, 327-359
[Fleming, 1995] Fleming, K.N., "A Reliability Model for Common Mode Failure in Redundant Safety Systems," Proceedings of
the Sixth Annual Pittsburgh Conference on Modeling and Simulation, April 23-25, 1975 (General Atomic Report
GA-A13284).
[Flin et al, 1998] R. Flin, K. Goeters, H. Hörmann, and L. Martin. A Generic Structure of Non-Technical Skills for Training and
Assessment, September 1998.
[Foot94] P.B. Foot, A review of the results of a trial hazard analysis of airspace sectors 24 and 26S, Civil Aviation
Authority CS report 9427, April 1994.
[Fota93] O.N. Fota, Étude de faisabilité d’analyse globale de la sécurité d’un CCR à l’aide de l’EPS (Evaluation
Probabiliste de la Sécurité) [Feasibility study of a global safety analysis of an ATC centre using probabilistic
safety assessment (EPS)], Sofréavia, CENA/R93-022, 1993.
[Fowler et al., 2009] D. Fowler, E. Perrin, R. Pierce, A systems-engineering approach to assessing the safety of the SESAR
Operational Concept 2020 Foresight, Eighth USA/Europe Air Traffic Management Research and Development
Seminar, 2009
[Fragola&Spahn, 1973] J.R. Fragola and J.F. Spahn (1973), "The Software Error Effects Analysis; A Qualitative Design Tool," In
Proceedings of the 1973 IEEE Symposium on Computer Software Reliability, IEEE, New York, pp. 90-93.
[Freedy85] Freedy, A., Madni, A., & Samet, M. (1985). Adaptive user models: Methodology and applications in man-
computer systems. In W. B. Rouse (Ed.), Advances in Man-Machine Systems Research: Volume 2. Greenwich
and London: JAI Press.
[FT handbook02] W. Vesely et al, Fault Tree Handbook with Aerospace Applications, NASA office of safety and mission
assurance, Version 1.1, August 2002, http://www.hq.nasa.gov/office/codeq/doctree/fthb.pdf
[Fujita94] Fujita, Y., Sakuda, H. and Yanagisawa, I. (1994). Human reliability analysis using simulated human model. In
PSAM-II Proceedings, San Diego, California, March 20-25, pp. 060-13 - 060-18.
[Fumizawa00] Motoo Fumizawa, Takashi Nakagawa, Wei Wu, Hidekazu Yoshikawa, Development Of Simulation-Based
Evaluation System For Iterative Design Of HMI In Nuclear Power Plant - Application for Reducing Workload
using HMI with CRT, International Topical Meeting on Nuclear Plant Instrumentation, Controls, and Human-
Machine Interface Technologies (NPIC&HMIT 2000), Washington, DC, November, 2000
[Futures Group, 1994] The Futures Group, Relevance Tree And Morphological Analysis, 1994, http://www.agri-
peri.ir/AKHBAR/cd1/FORESIGHT%20METHODOLOGY%20&%20FORECASTING/FORESIGHT%20MET
HODOLOGY/related%20articles/books/Future%20Research%20Methodology/12-tree.pdf
[FuzzyLogic] Web page on Fuzzy Logic, http://www-2.cs.cmu.edu/Groups/AI/html/faqs/ai/fuzzy/part1/faq.html
[GAIN AFSA, 2003] GAIN Working Group B, Analytical Methods and Tools, Guide to methods and tools for Airline flight safety
analysis, Second edition, June 2003, http://flightsafety.org/files/analytical_methods_and_tools.pdf
[GAIN ATM, 2003] GAIN Working Group B, Analytical Methods and Tools, Guide to methods and tools for safety analysis in air
traffic management, First edition, June 2003, http://flightsafety.org/files/methods_tools_safety_analysis.pdf
[GAIN GST03] GAIN Government Support Team, Updated list of major current or planned government aviation safety
information collecting programs, Sept 2004, http://www.flightsafety.org/gain/info_collection_programs.pdf
[Galvagni et al, 1994] R. Galvagni, I. Ciarambino, N. Piccinini, Malfunctioning evaluation of pressure regulating installation by
Integrated Dynamic Decision Analysis, PSAM II, San Diego, March 20-25, 1994
[Gantt03] Gantt Charts, Henry Laurence Gantt’s legacy to management is the Gantt Chart, Retrieved August 29, 2003 from
http://www.ganttchart.com/History.html
[Garrick88] B.J. Garrick, The approach to risk analysis in three industries: nuclear power, space systems and chemical
process, Reliability engineering and system safety, Vol. 23, pp. 195-205, 1988.
[Gaspard02] H. Gaspard-Boulinc, Y. Jestin, L. Fleury, EPOQUES: Proposing Tools and Methods to treat Air Traffic
Management Safety Occurrences, Workshop on the Investigation and Reporting of Incidents and Accidents
(IRIA 2002) Editor: Chris Johnson, pp. 82-88, 2002, http://www.dcs.gla.ac.uk/~johnson/iria2002/IRIA_2002.pdf
[Geisinger85] Geisinger, K.E. (1985), “Airspace Conflict Equations”, Transportation Science, Operations Research Society of
America, Vol.19, No. 2, May 1985
[Genesereth05] M. Genesereth, Truth Table Method and Propositional Proofs, Computational logic, Lecture 3, Autumn 2005,
http://logic.stanford.edu/classes/cs157/2005fall/lectures/lecture03.pdf
[GfL 2001] GfL – Gesellschaft für Luftverkehrsforschung, Berlin, and Arcadis Trischler & Partner GmbH, Darmstadt,
Risikoanalyse für den Flughafen Basel-Mülhausen – Kurzfassung – [Risk analysis for Basel-Mulhouse Airport –
summary], June 2001.
[GfL web] Gesellschaft für Luftverkehrsforschung, GfL www.gfl-consult.de
[Ghamarian, 2008] Amir Hossein Ghamarian, Timing Analysis of Synchronous Data Flow Graphs, PhD Thesis Eindhoven
University of Technology, 4 July 2008, http://alexandria.tue.nl/extra2/200811099.pdf
[Gibbons et al, 2006] Alyssa Mitchell Gibbons, Terry L. von Thaden, Douglas A. Wiegmann, Development and Initial Validation of a
Survey for Assessing Safety Culture Within Commercial Flight Operations,
The International Journal of Aviation Psychology, 16(2), 215–238, 2006,
http://www.leaonline.com/doi/pdf/10.1207/s15327108ijap1602_6
[Gizdavu02] Adrian Gizdavu, EEC Report N°374/2000, Spata 2000 Real-time Simulation,
http://www.eurocontrol.int/eec/public/standard_page/2002_report_374.html
[Glyde04] Sue Glyde, In Conjunction with: GAIN Working Group B, Analytical Methods and Tools, Example Application
of Aviation Quality Database (AQD), September 2004, http://www.flightsafety.org/gain/AQD_application.pdf
[GoranssonKoski, 2002] Linus Göransson and Timo Koski, Using a Dynamic Bayesian Network to Learn Genetic Interactions, 2002,
http://www.math.kth.se/~tjtkoski/dynbayesian.pdf
[Gordon04] Rachael Gordon, Steven T. Shorrock, Simone Pozzi, Alessandro Boschiero (2004) Using human error analysis to
help to focus safety analysis in ATM simulations: ASAS Separation. Paper presented at the Human Factors and
Ergonomics Society 2004 Conference, Cairns, Australia, 22nd - 25th August, 2004.
[GoreCorker, 2000] B.F. Gore and K.M. Corker, Value of human performance cognitive predictions: A free flight integration
application, Proc. IEA 2000/HFES 2000 Congress,
http://www.engr.sjsu.edu/hfe/hail/airmidas/HFES_Symp_2000_01504.pdf
[Groeneweg] J. Groeneweg, Controlling the Controllable: preventing business upsets, Fifth Edition, ISBN 90-6695-140-0
[Groot&Baecher, 1993] DeGroot, D.J. and G.B. Baecher. (1993), Estimating autocovariance of in-situ soil properties. Journ. Geotechnical
Engnrg., 119(1):147-166.
[Gualtieri, 2005] James W. Gualtieri, Samantha Szymczak and William C. Elm, Cognitive systems engineering-based design:
Alchemy or engineering, Cognitive Systems Engineering Center, ManTech International, 2005
[Gulland04] W.G. Gulland, Methods of Determining Safety Integrity Level (SIL) Requirements - Pros and Cons, Springer-
Verlag, Proceedings of the Safety-Critical Systems Symposium, February 2004, http://www.4-
sightconsulting.co.uk/Current_Papers/Determining_SILs/determining_sils.html
[GyftodimosFlach, 2002] Elias Gyftodimos and Peter A. Flach, Hierarchical Bayesian Networks: A Probabilistic Reasoning Model for
Structured Domains, Machine Learning group, Computer Science department, University of Bristol, UK, 2002,
http://www.cs.bris.ac.uk/Publications/Papers/1000650.pdf
[Hadley99] Hadley, G. A., Guttman, J. A., & Stringer, P. G. (1999). Air traffic control specialist performance measurement
database (DOT/FAA/CT-TN99/17). Atlantic City International Airport: Federal Aviation Administration William
J. Hughes Technical Center, http://hf.tc.faa.gov/atcpmdb/default.htm
[HAIL] Human Automation Integration Laboratory, (HAIL), http://www.engr.sjsu.edu/hfe/hail/software.htm
[Halim et al, 2007] Enayet B. Halim, Harigopal Raghavan, Sirish L. Shah and Frank Vanderham, Application of Inductive
Monitoring System (IMS) for monitoring industrial processes, NSERC-Matrikon-Suncor-iCORE IRC Seminar, 9
December 2007, http://www.ualberta.ca/~slshah/files/nserc_irc2007/Talk2_IRC%20Part-2%20IMS-EH.pdf
[Hall et al, 1995] E.M. Hall, S.P. Gott, R.A. Pokorny (1995) A procedural guide to cognitive task analysis: The PARI methodology
(AL/HR-TR-1995-0108). Brooks Air Force Base, TX: Air Force Material Command.
[Hamilton, 2000] I. Hamilton (2000) Cognitive task analysis using ATLAS; in, J.M. Schraagen, S.F. Chipman and V.L. Shalin,
Cognitive Task Analysis, Lawrence Erlbaum Associates, 215-236.
[Harbour&Hill, 1990] Jerry L. Harbour and Susan G. Hill, Using HSYS in the analysis of Human-System interactions: Examples from
the offshore petroleum industry, Human Factors and Ergonomics Society Annual Meeting Proceedings, Test and
Evaluation, pp. 1190-1194(5), Human Factors and Ergonomics Society, 1990
[Hart88] Hart, S.G., & Staveland, L.E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and
theoretical research. In P.A. Hancock and N. Meshkati (Eds.) Human mental workload (pp.139-183).
Amsterdam: North-Holland.
[Hawkins93] F. Hawkins, “Human Factors in Flight”, second edition, edited by Harry W. Orlady, 1993, Ashgate Publishing
Company.
[HCR-HESRA, 2005] Human Centric Research (HCR), Task 3 Report, Application of Draft Human Error and Safety Risk Analysis
(HESRA) to the Voice Switching Control System (VSCS), 30 September 2005,
http://www.hf.faa.gov/Portal/techrptdetails.aspx?id=1710
[HE, 2005] Human Engineering, A review of safety culture and safety climate literature for the development of the safety
culture inspection toolkit, Prepared by Human Engineering for the Health and Safety Executive 2005, Research
Report 367, 2005
[HEA practice] Human Error Analysis, Error Mode Analysis – Single assessor method, Emergency Shutdown Workshop,
“hea-practice.ppt”
[HEAT overview] Overview Of Human Engineering Analysis Techniques,
[http://www.manningaffordability.com/s&tweb/PUBS/Man_Mach/part1.html is a dead link; potential
alternative?: http://books.google.nl/books?id=gyUs2Nh5LscC&pg=RA1-PA230&lpg=RA1-
PA230&dq=Overview+%22Human+Engineering+Analysis+Techniques%22&source=bl&ots=JJBq9_IDk2&sig
=Q5uRGIrtZhGdfQnH5HvVe6_zQn0&hl=nl&ei=ruoLSsHHJ4PO-
AaU18SnBg&sa=X&oi=book_result&ct=result&resnum=6#PRA1-PA237,M1 ]
[HEA-theory] Human error Analysis, Theory and Concepts, Techniques and Practice, Cognitive Error Analysis,
“hea.theory.ppt”
[HEDADM] http://www.hf.faa.gov/docs/DID_003.htm
[HEDADO] http://www.hf.faa.gov/docs/DID_002.htm
[Heisel, 2007] Maritta Heisel, Entwicklung Sicherer Software [Development of Secure Software], 2007,
http://swe.uni-duisburg-essen.de/de/education/ss2007/ess/folien/ess-part5-print4p.pdf
[Hendrick97] Hendrick, H. W. (1997). Organizational design and macroergonomics. In G. Salvendy (Ed.). Handbook of
Human Factors and Ergonomics (2nd ed.). New York: John Wiley.
[Henley&Kumamoto92] E.J. Henley and H. Kumamoto, Probabilistic Risk Assessment; Reliability engineering, design, and analysis,
IEEE Press, 1992
[HEPP] http://www.hf.faa.gov/docs/DID_001.htm
[Hewitt, 2006] Glen Hewitt, FAA Human Factors Research and Engineering Group, Human Error and Safety Risk Analysis: A
Proactive Tool for Assessing the Risk of Human Error, Presentation for Eurocontrol Safety R&D Seminar
25 October 2006, Barcelona, Spain,
http://www.eurocontrol.int/eec/gallery/content/public/documents/conferences/2006_Barcelona/Hewitt_HESRA_
Briefing_V3.pdf
[HFC] The Human Factors Case: Guidance for HF Integration, Edition No 1, 27-08-2004,
http://www.eurocontrol.be/eec/gallery/content/public/documents/EEC_human_factors_documents/Guidance_for_
HF_integration.pdf
[HFS, 2003] Norwegian Petroleum Directorate, Developed by Human Factors Solutions – 2003, HF-Assessment Method for
control rooms. 2003, http://www.hfs.no/wp-content/uploads/2010/04/EnglishHFAM.pdf
[HIFA Data] Eurocontrol EATMP HIFA data, http://www.eurocontrol.int/hifa/public/standard_page/Hifa_HifaData.html
[Hignett&McAtamney, 2000] Sue Hignett and Lynn McAtamney, Rapid entire body assessment (REBA), Applied
Ergonomics, Vol. 31, pp. 201-205, 2000.
[Hiirsalmi, 2000] Mikko Hiirsalmi, VTT information research report TTE1-2000-29, MODUS-Project, Case Study WasteWater
Method feasibility Study: Bayesian Networks, Version 1.1-1, 16.10.2000,
http://www.vtt.fi/inf/julkaisut/muut/2000/rr2k29-c-ww-feasib.pdf
[Hochstein02] L. Hochstein, GOMS, October 2002, http://www.cs.umd.edu/class/fall2002/cmsc838s/tichi/printer/goms.html
[Hoegen97] M. Von Hoegen, Product assurance requirements for FIRST/Planck scientific instruments, PT-RQ-04410 (Issue 1),
September 1997, ESA/ESTEC, Noordwijk, The Netherlands,
http://www.sron.nl/www/code/lea/Hifi/User/ccadm/0035.pdf
[Hofer et al, 2001] E. Hofer, M. Kloos, B. Krzykacz-Hausmann, J. Peschke, M. Sonnenkalb, Methodenentwicklung zur simulativen
Behandlung der Stochastik in probabilistischen Sicherheitsanalysen der Stufe 2, Abschlussbericht [Development of
methods for the simulative treatment of stochastics in Level 2 probabilistic safety analyses, final report],
GRS-A-2997, Gesellschaft für Anlagen- und Reaktorsicherheit, Germany (2001).
[Hogg et al, 1995] Hogg, D.N., Folleso, K., Strand-Volden, F., & Torralba, B., 1995. Development of a situation awareness measure
to evaluate advanced alarm systems in nuclear power plant control rooms. Ergonomics, Vol 38 (11), pp 2394-
2413.
[Hokstad et al, 1999] Per Hokstad, Erik Jersin, Geir Klingenberg Hansen, Jon Sneltvedt, Terje Sten. Helicopter Safety Study 2,
Volume I: Main report, SINTEF Industrial Management, Report STF38 A99423, December 1999,
http://www.sintef.no/upload/Teknologi_og_samfunn/Sikkerhet%20og%20p%C3%A5litelighet/Rapporter/STF38
%20A99423.pdf
[Hollamby97] D. Hollamby, Non Destructive Inspection, School of Aerospace and Mechanical Engineering, University of New
South Wales, AMEC 4018, Course Notes, July 1997
[Hollnagel, 2002] E. Hollnagel, (2002), Understanding accidents: From root causes to performance variability. In J. J. Persensky, B.
Hallbert, & H. Blackman (Eds.), New Century, New Trends: Proceedings of the 2002 IEEE Seventh Conference
on Human Factors in Power Plants (p. 1-6). New York: Institute of Electrical and Electronic Engineers.
[Hollnagel, 2003] Hollnagel, Erik (Eds.) (2003): Handbook of Cognitive Task Design. Mahwah, New Jersey, Lawrence Erlbaum
Associates
[Hollnagel, 2004] E. Hollnagel, Barriers and accident prevention, Ashgate Publishing, 2004
[Hollnagel, 2006] E. Hollnagel, Capturing an Uncertain Future: The Functional Resonance Accident Model, Eurocontrol Safety
R&D Seminar, 25 October 2006, Barcelona, Spain,
http://www.eurocontrol.int/eec/gallery/content/public/documents/conferences/2006_Barcelona/Hollnagel(FRAM
_Barcelona_Arial).pdf
[Hollnagel,Woods, 2005] E. Hollnagel and D.D. Woods, Joint cognitive systems: foundations of cognitive systems engineering, CRC
Press, Taylor and Francis Group, 2005
[Hollnagel93] E. Hollnagel, Human Reliability analysis, context and control. Academic Press, London, 1993.
[Hollnagel-ETTO] http://www.ida.liu.se/%7Eeriho/ETTO_M.htm
[HollnagelGoteman, 2004] E. Hollnagel and Ö. Goteman (2004), The functional resonance accident model, University of Linköping
[HollnagelNaboLau, 2003] E. Hollnagel, A. Nåbo, and I.V. Lau, (2003). A systemic model for driver-in-control. In Proceedings of the
Second International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design.
Park City Utah, July 21-24.
[HollnagelWoods83] Hollnagel, E. & Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International
Journal of Man-Machine Studies, 18, 583-600.
[HollnagelWoodsLeveson, 2006] E. Hollnagel, D.D. Woods, N. Leveson (Eds), Resilience engineering: concepts and precepts, Ashgate
Publishing Limited, 2006
[Holloway89] N.J. Holloway, Pilot study methods based on generic failure rate estimates, In: R.A. Cox (editor), Mathematics
in major accident risk assessment, pp. 71-93, Oxford, 1989.
[HOS user’s guide, 1989] HOS user’s guide, http://www.dtic.mil/cgi-
bin/GetTRDoc?AD=ADA212007&Location=U2&doc=GetTRDoc.pdf
[Houmb02] S.H. Houmb, Stochastic models and mobile e-commerce: Are stochastic models usable in the analysis of risk in
mobile e-commerce?, University College of Østfold, 15 February 2002
[Howat02] C.S. Howat, Hazard identification and Evaluation; Introduction to Fault Tree Analysis in Risk assessment, Plant
and Environmental Safety, 2002
[HRA Washington01] Draft proceedings HRA Washington workshop, Building the new HRA - Errors of Commission - from research
to application, Headquarters US NRC, Rockville, Maryland, USA, 7-9 May 2001, “Draft proceedings HRA
Washington workshop.zip”
[HSEC02] Health Safety and Engineering Consultants Ltd, Techniques for addressing rule violations in the offshore
industries, Offshore Technology report 2000/096, 2002,
http://www.hse.gov.uk/research/otopdf/2000/oto00096.pdf
[Hsu, 1987] C.S. Hsu, Cell-to-Cell Mapping: A Method of Global Analysis for Nonlinear Systems, Springer, New York,
1987.
[Hu, 2005] Yunwei Hu, A guided simulation methodology for dynamic probabilistic risk assessment of complex systems,
PhD Thesis, University of Maryland, 2005, http://www.lib.umd.edu/drum/bitstream/1903/2472/1/umi-umd-
2344.pdf
[Humphreys88] P. Humphreys, Human reliability assessors guide, Safety and Reliability Directorate UKAEA (SRD) Report No
TRS 88/95Q, October 1988.
[Hunns, 1982] D.M. Hunns, The method of paired comparisons. In: A.E. Green, Editor, High risk safety technology, Wiley,
Chichester (1982).
[Hutchins95] Hutchins, S.G., Westra, D.P. (1995). Patterns of Errors Shown by Experienced Navy Combat Information Center
Teams. Designing for the Global Village. Proceedings of the Human Factors and Ergonomics Society 39th
Annual Meeting, San Diego, California, October 9-13, 1995.
[ICAO Doc 9806] ICAO (2002). Human factors guidelines for safety audits manual. Doc 9806 AN/763.
[IDKB] IDKB, Instructional Design Knowledge Base, Perform a Front-End analysis,
http://classweb.gmu.edu/ndabbagh/Resources/Resources2/FrontEnd.htm
[IE, How-How] The Improvement Encyclopedia, How-How Diagram, http://syque.com/improvement/How-
How%20Diagram.htm
[IE, Why-Why] The Improvement Encyclopedia, Why-Why Diagram, http://syque.com/improvement/Why-
Why%20Diagram.htm
[IEC 61508] International Standard International Electrotechnical Commission, IEC 61508-5, Functional safety of
electrical/electronic/ programmable electronic safety-related systems – Part 5: Examples of methods for the
determination of safety integrity levels, First edition, 1998-12,
http://www.exida.com/articles/iec61508_overview.pdf
[IEC 61508-6] IEC 61508, “Functional Safety of Electrical/Electronic/Programmable Electronic Safety Related Systems”, Part
6: “System Aspects”, April 1998, http://www.panrun.com/download/ca/IEC61508/IEC61508-Part6.pdf
[Infopolis2] Infopolis 2 Consortium, Ergonomics Methods and Tools, http://www.ul.ie/~infopolis/methods/incident.html
[Inspections] Reviews, Inspections, and Walkthroughs, http://www.cs.txstate.edu/~rp31/slidesSQ/03-
Inspections&Cleanroom.pdf
[IO example] T. Panontin and R. Carvalho In Conjunction with: GAIN Working Group B, Analytical Methods and Tools,
Example Application of Investigation Organizer, September 2004,
http://www.flightsafety.org/gain/IO_application.pdf
[IPME web] IPME web page, Micro Analysis & Design, http://www.maad.com/index.pl/ipme
[Ippolito&Wallace95] L.M. Ippolito, D.R. Wallace, A Study on Hazard Analysis in High Integrity Software Standards and Guidelines,
National Institute of Standards and Technology, January 1995,
http://hissa.nist.gov/HHRFdata/Artifacts/ITLdoc/5589/hazard.html#33_SEC
[IRP, 2005] Eurocontrol, 2004 Baseline Integrated Risk Picture for Air Traffic Management in Europe, EEC Note No. 15/05,
May 2005,
http://www.eurocontrol.be/eec/gallery/content/public/document/eec/report/2005/013_2004_Baseline_Integrated_
Risk_Picture_Europe.pdf
[IRP, 2006] John Spouge and Eric Perrin, Main report for the: 2005/2012 Integrated Risk Picture for air traffic management
in Europe, EEC Note No. 05/06, Project C1.076/EEC/NB/05, April 2006,
http://www.eurocontrol.be/eec/gallery/content/public/document/eec/report/2006/009_2005-
2012_Integrated_Risk_Picture_ATM_Europe.pdf
[Isaac et al, 2003] A. Isaac, S.T. Shorrock, R. Kennedy, B. Kirwan, H. Andersen and T. Bove, The Human Error in ATM Technique
(HERA-JANUS), February 2003,
http://www.eurocontrol.int/humanfactors/gallery/content/public/docs/DELIVERABLES/HF30%20(HRS-HSP-
002-REP-03)%20Released-withsig.pdf
[Isaac&al99] A. Isaac, S.T. Shorrock, R. Kennedy, B. Kirwan, H. Anderson, T. Bove, The Human Error in ATM (HERA)
technique, 20 June 1999, “hera.doc”
[Isaac&Pounds01] A. Isaac and J. Pounds, Development of an FAA-Eurocontrol Technique for the Analysis of Human Error in
ATM, 4th USA/Europe ATM R&D Seminar, Santa Fe, 3-7 December 2001,
http://www.hf.faa.gov/docs/508/docs/cami/0212.pdf
[ISO/IEC 15443] ISO/IEC, Information technology - Security techniques - A framework for IT security assurance – Part 2:
Assurance methods, ISO/IEC 15443-2 PDTR1, 2 Oct 2002
[ISRS brochure] ISRS Brochure, http://viewer.zmags.com/publication/e94fa62a#/e94fa62a/2
[JAR 25.1309] Joint Aviation Requirements JAR - 25, Large Aeroplanes, Change 14, 27 May 1994, and Amendment 25/96/1 of
19 April 1996, including AMJ 25-1309: System design and analysis, Advisory Material Joint, Change 14, 1994.
[Jeffcott&Johnson] M. Jeffcott, C. Johnson, The use of a formalised risk model in NHS information system development,
http://www.dcs.gla.ac.uk/~johnson/papers/NHS_paper_CTW.pdf
[Jensen, 2002] F. Jensen, U.B. Kjaerulff, M. Lang, A.L. Madsen, HUGIN - The tool for Bayesian networks and Influence
diagrams, Proceedings of the First European Workshop on Probabilistic Graphical Models, pp. 212-221, 2002
[Johnson&Johnson, 1991] Hilary Johnson and Peter Johnson, Task Knowledge Structures: Psychological basis and integration into system
design. Acta Psychologica, 78 pp 3-26. http://www.cs.bath.ac.uk/~hci/papers/ActaPsychologica.pdf
[Johnson, 1999] Chris Johnson, A First Step Towards the Integration of Accident Reports and Constructive Design Documents, In
Proceedings of SAFECOMP'99, 1999, http://www.dcs.gla.ac.uk/~johnson/papers/literate_reports/literate.html
[Johnson, 2003] C.W. Johnson, Failure in Safety-Critical Systems: A Handbook of Accident and Incident Reporting, University of
Glasgow Press, Glasgow, Scotland, October 2003.
http://www.dcs.gla.ac.uk/~johnson/book/C_Johnson_Accident_Book.pdf
[Johnson92] Johnson, P. (1992). Human-Computer Interaction: Psychology, Task Analysis and Software Engineering.
Maidenhead: McGraw-Hill.
[Jonassen et al, 1999] Jonassen, D., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for instructional
design. Mahwah, New Jersey: Lawrence Erlbaum Associates
[Jones&Bloomfield&Froome&Bishop01] C. Jones, R.E. Bloomfield, P.K.D. Froome, P.G. Bishop, Methods for assessing the safety
integrity of safety-related software of uncertain pedigree (SOUP), Adelard, Health and Safety Executive, Contract
Research Report 337/2001, http://www.hse.gov.uk/research/crr_pdf/2001/crr01337.pdf
[JRC ECCAIRS] JRC, ECCAIRS, European Coordination Centre for Accident and Incident Reporting Systems
http://eccairsportal.jrc.ec.europa.eu/
http://ipsc.jrc.ec.europa.eu/showdoc.php?doc=promotional_material/JRC37751_ECCAIRS.pdf&mime=applicati
on/pdf
[Kardes, 2005] E. Kardes and James T. Luxhøj, “A Hierarchical Probabilistic Approach for Risk Assessments of an Aviation
Safety Product Portfolio,” Air Traffic Control Quarterly, Vol. 13, No. 3 (2005), pp. 279-308.
[Keeney76] Keeney, R. L., & Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New
York: John Wiley.
[Keidar&Khazan00] I. Keidar and R. Khazan, A virtually synchronous group multicast algorithm for WANs: formal approach,
http://www.ee.technion.ac.il/~idish/ftp/vs-sicomp.pdf , Extended paper version of ‘A Client-Server Approach to
Virtually Synchronous Group Multicast: Specifications and Algorithms’, 20th International Conference on
Distributed Computing Systems (ICDCS 2000), April 2000, pages 344-355.
[Kelly, 1998] T.P. Kelly, Arguing Safety – A systematic approach to managing safety cases, PhD Thesis, University of York,
September 1998, http://www-users.cs.york.ac.uk/tpk/tpkthesis.pdf
[Kennedy slides] R. Kennedy, Human Error assessment – HAZOP studies, “hazop.ppt”
[Kennedy&Kirwan98] R. Kennedy and B. Kirwan, Development of a hazard and operability-based method for identifying safety
management vulnerabilities in high risk systems, Safety Science 30 (1998) 249-274
[Kennedy] R. Kennedy, Human error assessment and reduction technique (HEART), “heart.ppt”
[Kieras&Meyer, 1997] Kieras, D. E., & Meyer, D. E. (1997) An overview of the EPIC architecture for cognition and performance with
application to human-computer interaction. Human-Computer Interaction, 12(4), 391-438
[Kieras, 1996] Kieras, David (1996). "A Guide to GOMS Model Usability Evaluation using NGOMSL".
http://www.idemployee.id.tue.nl/g.w.m.rauterberg/lecturenotes/GOMS96guide.pdf
[Kieras88] Kieras, D. (1988). Towards a practical GOMS model methodology for user interface design. In Handbook of
Human-Computer Interaction (Helander M. ed.), pp. 135-158. Amsterdam: North-Holland.
[Kilduff et al, 2005] Patricia W. Kilduff, Jennifer C. Swoboda, and B. Diane Barnette, Command, Control, and Communications:
Techniques for the Reliable Assessment of Concept Execution (C3TRACE) Modeling Environment: The Tool,
Army Research Laboratory, June 2005, ARL-MR-0617, http://www.arl.army.mil/arlreports/2005/ARL-MR-
0617.pdf
[Kim et al, 2005] J. Kim, W. Jung and J. Park, A systematic approach to analysing errors of commission from diagnosis failure in
accident progression, Reliab Eng System Saf 89 (2005), pp. 137–150.
[Kirakowski, 1996] J. Kirakowski (1996). The software usability measurement inventory: background and usage. In P. Jordan, B.
Thomas and B. Weedmeester (eds), Usability Evaluation in Industry. London: Taylor & Francis, 169-178.
[Kirkwood76] Kirkwood, C. W. (1976). Pareto optimality and equity in social decision analysis. IEEE Transactions on Systems,
Man, and Cybernetics, 9(2), 89-91.
[Kirwan&al97] B. Kirwan, A. Evans, L. Donohoe, A. Kilner, T. Lamoureux, T. Atkinson, and H. MacKendrick, Human Factors
in the ATM System Design Life Cycle, FAA/Eurocontrol ATM R&D Seminar, 16 - 20 June, 1997, Paris, France,
http://www.atmseminar.org/past-seminars/1st-seminar-saclay-france-june-1997/papers/paper_007
[Kirwan&al97-II] B. Kirwan, R. Kennedy, S. Taylor-Adams, B. Lambert, The validation of three human reliability quantification
techniques – THERP, HEART and JHEDI: Part II – Results of validation exercise, Applied Ergonomics, Vol 28,
No 1, pp. 17-25, 1997, http://www.class.uidaho.edu/psy562/Readings/Kirwin%20(1997)%20A%20II.pdf
[Kirwan&Basra&Taylor.doc] B. Kirwan, G. Basra and S.E. Taylor-Adams, CORE-DATA: A computerised Human Error Database for Human
reliability support, Industrial Ergonomics Group, University of Birmingham, UK, “IEEE2.doc”
[Kirwan&Basra&Taylor.ppt] B. Kirwan, G. Basra and S.E. Taylor-Adams, CORE-DATA: A computerised Human Error Database for Human
reliability support, Industrial Ergonomics Group, University of Birmingham, UK, “core-data.ppt”
[Kirwan&Kennedy&Hamblen] B. Kirwan, R. Kennedy and D. Hamblen, Human reliability assessment in probabilistic safety assessment -
guidelines on best practice for existing gas-cooled reactors, “Magnox-IBC-final.doc”
[Kirwan, 2004] Kirwan, B., Gibson, H., Kennedy, R., Edmunds, J., Cooksley, G., and Umbers, I. (2004) Nuclear Action
Reliability Assessment (NARA): A data-based HRA tool. In Probabilistic Safety Assessment and Management
2004, Spitzer, C., Schmocker, U., and Dang, V.N. (Eds.), London, Springer, pp. 1206 – 1211.
[Kirwan, 2007] B. Kirwan, Technical Basis for a Human Reliability Assessment Capability for Air Traffic Safety Management,
EEC Note No. 06/07, Eurocontrol Experimental Centre, Project: HRA, September 2007,
http://www.eurocontrol.int/eec/public/standard_page/DOC_Report_2007_006.html
[Kirwan, 2008] Barry Kirwan, W. Huw Gibson and Brian Hickling, Human error data collection as a precursor to the
development of a human reliability assessment capability in air traffic management. Reliability Engineering &
System Safety Volume 93, Issue 2, February 2008, Pages 217-233,
http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V4T-4MS9RCB-
2&_user=2073121&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000056083&_version=1&_urlV
ersion=0&_userid=2073121&md5=63c764878fe1f93df44288bdbd33eeb5#secx5
[Kirwan_HCA] B. Kirwan, Developing human informed automation in Air Traffic Management, “HCApaper2.doc”
[Kirwan00] B. Kirwan, SHAPE human error interviews: Malmo and Stockholm, 14-16 November 2000-11-28, “SHAPE
Human Error Interviews 1.doc”
[Kirwan94] B. Kirwan, A guide to practical human reliability assessment, Taylor and Francis, 1994
[Kirwan95] B. Kirwan, Current trends in human error analysis technique development, Contemporary Ergonomics 1995, S.A.
Robertson (Ed), Taylor and Francis, 1995.
[Kirwan96-I] B. Kirwan, The validation of three human reliability quantification techniques – THERP, HEART and JHEDI:
Part I – technique descriptions and validation issues, Applied Ergonomics, Vol 27, No 6, pp. 359-373, 1996,
http://www.class.uidaho.edu/psy562/Readings/Kirwan%20(1996).pdf
[Kirwan97-III] B. Kirwan, The validation of three human reliability quantification techniques – THERP, HEART and JHEDI:
Part III – Practical aspects of the usage of the techniques, Applied Ergonomics, Vol 28, No 1, pp. 27-39, 1997,
http://www.class.uidaho.edu/psy562/Readings/Kirwin%20(1997)%20A%20III.pdf
[Kirwan98-1] B. Kirwan, Human error identification techniques for risk assessment of high risk systems – Part 1: Review and
evaluation of techniques, Applied Ergonomics, Vol 29, No 3, pp. 157-177, 1998, “HEAJNL6.doc”,
http://www.class.uidaho.edu/psy562/Readings/Kirwan%20(1998)%20A%201.pdf
[Kirwan98-2] B. Kirwan, Human error identification techniques for risk assessment of high risk systems – Part 2: Towards a
framework approach, Applied Ergonomics, Vol 29, No 5, pp. 299-318, 1998,
http://www.class.uidaho.edu/psy562/Readings/Kirwan%20(1998)%20A%202.pdf
[KirwanAinsworth92] A guide to task analysis, edited by B. Kirwan and L.K. Ainsworth, Taylor and Francis, 1992
[KirwanGibson] Barry Kirwan, Huw Gibson, CARA: A Human Reliability Assessment Tool for Air Traffic Safety Management –
Technical Basis and Preliminary Architecture,
http://www.eurocontrol.int/eec/gallery/content/public/documents/EEC_safety_documents/CARA-SCS.doc
[Kirwan-sages] B. Kirwan, “bk-sages-template.doc”
[Klein et al, 1989] G.A. Klein, R. Calderwood, and D. Macgregor (1989, May/June). Critical Decision Method for Eliciting
Knowledge. IEEE Transactions on Systems, Man, and Cybernetics, Vol. 19, No. 3.
[Klein, 2000] G. Klein (2000). Cognitive Task Analysis of Teams. In J.M. Schraagen, S.F. Chipman, V.L. Shalin (Eds).
Cognitive Task Analysis, pp. 417-431. Lawrence Erlbaum Associates
[Klein04] Klein, G. (2004), “Cognitive Task Analyses in the ATC Environment: Putting CTA to Work”. Presentation given
on May 19, 2004 to the FAA Human Factors Research and Engineering Division.,
http://www2.hf.faa.gov/workbenchtools/default.aspx?rPage=Tooldetails&toolID=8
[KleinObbink&Smit, 2004] B. Klein Obbink, H.H. Smit, Obstakeldichtheid Schiphol; Studie naar afwegingscriteria voor ontheffingen
[Obstacle density at Schiphol; study of assessment criteria for exemptions], National Aerospace Laboratory NLR,
2004, NLR-CR-2004-483 (In Dutch)
[Kletz74] T. Kletz, HAZOP and HAZAN – Notes on the identification and assessment of hazards, Rugby: Institution of
Chemical Engineers, 1974.
[Klinger, 2003] D. Klinger, (2003). Handbook of team cognitive task analysis. Fairborn, OH: Klein Associates, Inc.
[Klompstra&Everdij97] M.B. Klompstra, and M.H.C. Everdij, Evaluation of JAR and EATCHIP safety assessment methodologies, NLR
report CR 97678 L, Amsterdam, 1997.
[Kloos&Peschke, 2006] M. Kloos, J. Peschke, MCDET - A Probabilistic Dynamics Method Combining Monte Carlo Simulation with the
Discrete Dynamic Event Tree Approach, Nuclear Science and Engineering, 153, 137-156 (2006).
[Kopardekar02] Parimal Kopardekar and Glen Hewitt, Human Factors Program Cost Estimation- Potential Approaches; A
Concept Paper, Titan Systems Corporation and FAA, 23 March 2002
[Kos&al00] J. Kos, H.A.P. Blom, L.J.P. Speijker, M.B. Klompstra, and G.J. Bakker, Probabilistic wake vortex induced
accident risk assessment, 3rd USA/Europe Air Traffic Management R&D Seminar, FAA/Eurocontrol, 2000,
http://www.nlr.nl/smartsite.dws?id=4299
[Kosmowski00] K.T. Kosmowski, Risk analysis and management on socio-technical systems, SafetyNet meeting, Athens, Greece,
7-10 June 2000, http://www.safetynet.de/Publications/articles/Kosmowski.PDF
[Koubek97] Koubek, J. K., Benysh, S. A. H., & Tang, E. (1997). Learning. In G. Salvendy (Ed.), Handbook of Human
Factors and Ergonomics (2nd ed.). New York: John Wiley.
[Krsacok04] Krsacok, S. J., & Moroney, W. F. (n.d.). Adverb intensifiers for use in ratings of acceptability, adequacy, and
relative goodness. Retrieved January 2, 2004, from University of Dayton, William F. Moroney’s website:
http://academic.udayton.edu/williammoroney/adverb_intensifiers_for_use_in_r.htm
[KruegerLai] Noah Krueger and Eugene Lai, Software Reliability,
http://www.ics.uci.edu/~muccini/ics122/LectureNotes/Reliability.ppt#256
[Krystul&Blom04] J. Krystul and H.A.P. Blom, Monte Carlo simulation of rare events in hybrid systems, 1 July 2004, HYBRIDGE
Project Deliverable PD13, http://www.nlr.nl/public/hosted-sites/hybridge/
[Krystul&Blom05] Jaroslav Krystul and Henk A.P. Blom, Sequential Monte Carlo simulation of rare event probability in stochastic
hybrid systems, 16th IFAC World Congress, 2005, HYBRIDGE Deliverable R8.4,
http://www.nlr.nl/public/hosted-sites/hybridge/
[Kumamoto&Henley96] H. Kumamoto and E.J. Henley, Probabilistic risk assessment and management for engineers and scientists, IEEE,
New York, NY, 1996.
[Kuusisto, 2001] Arto Kuusisto, Safety management systems – Audit tools and reliability of auditing, PhD Thesis, 2001, Tampere
University of Technology, Finland, http://www.vtt.fi/inf/pdf/publications/2000/P428.pdf
[Lamsweerde & Letier, 2000] Axel van Lamsweerde and Emmanuel Letier, Handling Obstacles in Goal-Oriented Requirements Engineering,
IEEE Transactions on Software Engineering, Special Issue on Exception Handling, 2000,
http://www.info.ucl.ac.be/Research/Publication/2000/TSE-Obstacles.pdf
[Lankford03] D.N. Lankford and S. Ladecky, FAA, Flight operations, simulation and analysis branch, Airspace Simulation and
Analysis for TERPS (Terminal Procedures), 12 November 2003, http://wwwe.onecert.fr/projets/WakeNet2-
Europe/fichiers/programmeLondon2003/Lankford_Ladecky_ASAT_FAA.pdf
[LaSala, 2003] Kenneth P. LaSala, Human Reliability Fundamentals and Issues, RAMS 2003 Conference Tutorial,
ftp://ftp.estec.esa.nl/pub3/tos-qq/qq/RAMS2003ConferenceTutorial/Tutorials/1Ddoc.pdf
[Laurig89] Laurig, W., & Rombach, V. (1989). Expert systems in ergonomics: requirements and an approach. Ergonomics,
32(7), 795-811.
[Lawrence99] B.M. Lawrence, Managing safety through the Aircraft lifecycle – An aircraft manufacturer’s perspective, Proc
Second Annual Two-Day Conference on Aviation Safety Management, May 1999
[Leavengood98] S. Leavengood, Techniques for Improving Process and Product Quality in the Wood Products Industry: An
Overview of Statistical Process Control, A Microsoft Powerpoint Presentation, 16 May, 1998
[Lehto97] Lehto, M. R., (1997). Decision Making. In G. Salvendy, (Ed.), Handbook of Human Factors and Ergonomics (2nd
ed.). Chapter 37, New York: John Wiley
[Leiden et al, 2001] Kenneth Leiden, K. Ronald Laughery, John Keller, Jon French, Walter Warwick, and Scott D. Wood, A Review
of Human Performance Models for the Prediction of Human Error, September 30, 2001, http://human-
factors.arc.nasa.gov/ihi/hcsl/publications/HumanErrorModels.pdf
[Lenne et al, 2004] Michael Lenné, Michael Regan, Tom Triggs, Narelle Haworth, Review Of Recent Research In Applied
Experimental Psychology: Implications For Countermeasure Development In Road Safety, July, 2004,
http://www.monash.edu.au/muarc/reports/muarc223.pdf
[Lerche&Paleologos, 2001] Ian Lerche, Evan K. Paleologos, Environmental Risk Analysis, McGraw Hill Professional Engineering, 2001
[Letier, 2001] Emmanuel Letier, Reasoning about Agents in Goal-Oriented Requirements Engineering, Catholic University of
Leuven, (Belgium), Faculté des Sciences Appliquées Département d’Ingénierie Informatique, PhD Thesis, 22
May 2001
[Leuchter&al97] S. Leuchter, C. Niessen, K. Eyferth, and T. Bierwagen, Modelling Mental Processes of Experienced Operators
during Control of a Dynamic Man Machine System, In: B.B. Borys, G. Johannsen, C. Wittenberg & G. Stätz
(eds.): Proceedings of the XVI. European Annual Conference on Human Decision Making and Manual Control,
pp. 268–276. Dec. 9-11, 1997, University of Kassel, Germany
[Leuchter, 2009] Sandro Leuchter, Software Engineering Methoden für die Bedienermodellierung in dynamischen Mensch-
Maschine-Systemen [Software engineering methods for operator modelling in dynamic human-machine systems],
Fakultät für Verkehrs- und Maschinensysteme der Technischen Universität Berlin, PhD Thesis, 25 February 2009
[Leva et al, 2006] Maria Chiara Leva, Massimiliano De Ambroggi, Daniela Grippa, Randall De Garis, Paolo Trucco, Oliver
Straeter, Dynamic Safety Modelling for Future ATM Concepts, Eurocontrol, edition 0.5, September 2006
http://www.eurocontrol.int/safety/gallery/content/public/Eurocontrol%20DRM%20Final.pdf
[Leveson02] N.G. Leveson, An approach to designing safe embedded software, A Sangiovanni-Vincentelli and J. Sifakis
(Eds): EMSOFT 2002, LNCS 2491, pp. 15-29, 2002, Springer-Verlag Berlin Heidelberg, 2002
[Leveson2004] N.G. Leveson, A new accident model for engineering safer systems, Safety Science, Vol. 42 (2004), pp. 237-270.
[Leveson2006] N.G. Leveson, N. Dulac, D. Zipkin, J. Cutcher-Gershenfeld, J. Carroll, B. Barrett, Engineering resilience into
safety-critical systems. In: Resilience engineering, E. Hollnagel D.D. woods N. Leveson (Eds), Ashgate
publishing, 2006, pp. 95-123
[Leveson95] N.G. Leveson, Safeware, system safety and computers, a guide to preventing accidents and losses caused by
technology, Addison-Wesley, 1995
[Li et al, 2009] Wen-Chin Li, Don Harris, Yueh-Ling Hsu and Lon-Wen Li, The Application of Human Error Template (HET)
for Redesigning Standard Operational Procedures in Aviation Operations. In: Engineering Psychology and
Cognitive Ergonomics, Lecture Notes in Computer Science, 2009, Volume 5639/2009, 547-553, DOI:
10.1007/978-3-642-02728-4_58
[Licu, 2007] T. Licu, F. Cioran, B. Hayward, A. Lowe, Eurocontrol - Systemic Occurrence Analysis Methodology (SOAM) -
A “Reason”-based organisational methodology for analysing incidents and accidents, Reliability Engineering and
System Safety Vol 92, pp. 1162-1169, 2007
http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V4T-4N1SJPR-
1&_user=2073121&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000056083&_version=1&_urlV
ersion=0&_userid=2073121&md5=20bd02c8c14a12e1c30c5b015943ee51
[Lipner&Ravets, 1979] M.H. Lipner, and J.M. Ravets (1979), Nuclear Criticality Analyses for the Spent Fuel Handling and Packaging
Program Demonstration, Westinghouse Advanced Energy Systems Division, WAES-TME-292
[Liu97] Liu, Yili, (1997). Software-user interface design. In G. Salvendy, (Ed.), Handbook of Human Factors and
Ergonomics (2nd ed.). New York: John Wiley, p.1699.
[Liverpool, 2004] The University of Liverpool, MAIM (Merseyside Accident Information Model) - What is MAIM, 11 November
2004, http://www.liv.ac.uk/~qq16/maim/background.htm and
http://www.liv.ac.uk/~qq16/maim/bibliography.htm
[Livingston, 2001] A.D. Livingston, G. Jackson & K. Priestley , W.S. Atkins Consultants Ltd, Health and Safety Executive, Root
causes analysis: Literature review, Contract Research Report 325/2001, 2001,
http://www.hse.gov.uk/research/crr_pdf/2001/crr01325.pdf
[Loeve&Moek&Arsenis96] J.A. Loeve, G. Moek, S.P. Arsenis, Systematic Safety - Study on the feasibility of a structured approach using a
quantified causal tree, WP2: Statistical work, NLR CR 96317 L, 1996
[Luczak97] Luczak, H. (1997). Task analysis. In G. Salvendy (Ed.). Handbook of Human Factors and Ergonomics (2nd ed.).
New York: John Wiley.
[Lutz&Woodhouse96] R.R. Lutz and R.M. Woodhouse, Experience report: Contributions of SFMEA to requirements analysis, ICRE 96,
April 15-18, 1996, Colorado Springs, CO, http://www.cs.iastate.edu/~rlutz/publications/icre96.ps
[Luxhøj, 2002] James T. Luxhøj, Summary of FAA Research Accomplishments 1993-2002, December 2002
http://www.tc.faa.gov/logistics/grants/pdf/2000/00-G-006.pdf
[Luxhøj, 2005] James T. Luxhøj , Summary of NASA Research Accomplishments 2001-2005, December 2005
http://www.rci.rutgers.edu/~carda/luxhoj_NASA_research.pdf
[Luxhoj, 2009] James T. Luxhøj, Safety Risk Analysis of Unmanned Aircraft Systems Integration Into the National Airspace
System: Phase 1, DOT/FAA/AR-09/12, September 2009, http://www.tc.faa.gov/its/worldpac/techrpt/ar0912.pdf
[LuxhøjCoit, 2005] James T. Luxhøj and David W. Coit. Modeling Low Probability/High Consequence Events: An Aviation Safety
Risk Model, 2005. http://www.ise.rutgers.edu/research/working_paper/paper%2005-018.pdf
[LuxhøjOztekin, 2005] James T. Luxhøj and Ahmet Oztekin, A Case-Based Reasoning (CBR) Approach for Accident Scenario
Knowledge Management, 36th Annual International Seminar, ISASI Proceedings,
http://www.isasi.org/docs/Proceedings_2005.pdf pages 39 - 49
[Lygeros&Pappas&Sastry98] J. Lygeros, G.J. Pappas, S. Sastry, An approach to the verification of the Center-TRACON automation system,
Proceedings 1st International Workshop Hybrid Systems: Computation and Control, 1998, pp. 289-304.
[Macwan&Mosley94] A. Macwan, A. Mosley, A methodology for modelling operator errors of commission in probabilistic risk
assessment, Reliability Engineering and System Safety, Vol. 45, pp. 139-157, 1994.
[MaidenKamdarBush, 2005] Neil Maiden, Namit Kamdar, David Bush, Analysing i* System Models for Dependability Properties: The
Überlingen Accident, http://hcid.soi.city.ac.uk/research/Rescuedocs/RESFQ06CameraReady.pdf
[MAIM web] MAIM Merseyside Accident Information Model, MAIM Home page, Bibliography and a list of research that led
to the development of MAIM, http://www.liv.ac.uk/~qq16/maim/bibliography.htm
[Malhotra96] Y. Malhotra, Organizational Learning and Learning Organizations: An Overview, 1996,
http://www.brint.com/papers/orglrng.htm
[Mana02] P. Mana, EATMP Safety Management Software Task Force, slides for FAA National Software Conference, May
2002
[Manning01] C.A. Manning, S.H. Mills, C. Fox, E. Pfleiderer, H.J. Mogilka, (2001), Investigating the validity of performance
and objective workload evaluation research (POWER). DOT/FAA/AM-01/10, Office of Aerospace Medicine,
Washington, DC 20591 USA, July 2001
[MaraTech] MaraTech Engineering Services Inc., System/Software Engineering Services,
http://www.mtesi.com/experience_engineering.htm
[Markov process] http://www-net.cs.umass.edu/pe2002/notes/markov2.pdf
[Matahri02] N. Matahri, G. Baumont, C. Holbe, The RECUPERARE incident analysis model, including technical, human and
organizational factors
[Matahri03] N. Matahri, RECUPERARE, A model of event including human reliability in nuclear power plants. Model
developed by IRSN, Poster for Eurosafe forum, 2003
[Matra-HSIA99] Matra Marconi Space, PID-ANNEX (draft), Documentation requirements description, 11 March 1999,
http://www.irf.se/rpg/aspera3/PDF/Doc_Req_Descr_990313.PDF
[Mauri, 2000] G. Mauri, Integrating safety analysis techniques supporting identification of common cause failures, PhD thesis,
University of York, Department of Computer Science, September 2000,
http://www.cs.york.ac.uk/ftpdir/reports/2001/YCST/02/YCST-2001-02.pdf
[MaurinoLuxhøj, 2002] Michele A. Maurino and James T. Luxhøj, Analysis of a group decision support system (GDSS) for aviation
safety risk evaluation, The Rutgers Scholar, An electronic bulletin of undergraduate research, Volume 4, 2002,
http://rutgersscholar.rutgers.edu/volume04/maurluxh/maurluxh.htm
[May97] A. May, Neural network models of human operator performance, The Aeronautical Journal, pp. 155-158. April
1997
[McClure&Restrepo99] P. J. McClure and L.F. Restrepo, Preliminary Design Hazard Analyses (PDHA) for the Capabilities Maintenance
and Improvement Project (CMIP) and Integration of Hazard Analysis Activities at Los Alamos National
Laboratory, 1999
[McDermid&Pumfrey] J.A. McDermid, D.J. Pumfrey, Software safety: why is there no consensus, http://www-
users.cs.york.ac.uk/~djp/publications/ISSC_21_final_with_refs.pdf
[McDermid01] J.A. McDermid: Software safety; where is the evidence, Sixth Australian workshop on industrial experience with
safety critical systems and software (SCS01), Brisbane, Conferences in Research and Practice in information
technology, Vol 3, P. Lindsay (Ed), 2001, ftp://ftp.cs.york.ac.uk/pub/hise/Software%20safety%20-
%20wheres%20the%20evidence.pdf
[McGonigle] Joseph Mc Gonigle, Biosafety in the Marketplace: Regulated Product Introduction as Applied Risk Management
http://www.gmo-safety.eu/pdf/biosafenet/McGonigle.pdf
[MEDA] Boeing website on MEDA, http://www.boeing.com/commercial/aeromagazine/aero_03/m/m01/story.html
[Meek&Siu89] B. Meek and K.K. Siu, The effectiveness of error seeding, ACM Sigplan Notices, Vol 24 No 6, June 1989, pp 81-
89
[Melham&Norrish01] T. Melham and M. Norrish, Overview of Higher Order Logic Primitive Basis, University Glasgow, 2001,
http://www.cl.cam.ac.uk/users/mn200/hol-training/
[MES guide] L. Benner, 10 MES Investigation Guides (Internet Edition), Starline Software, Ltd., Oakton, VA, 1994,
Revised 1998, 2000, http://www.starlinesw.com/product/Guides/MESGuide00.html
[MES tech] http://starlinesw.com/product/Guides/MESGuide00.html
[Michell, 2000] D.K. Mitchell, Mental Workload and ARL Workload Modeling Tools. Army Research Laboratory, Report No
ARL-TN-161 (April 2000).
[MIL-HDBK] MIL-HDBK-46855A, Department of Defense Handbook, Human Engineering Program Process and Procedures,
17 May 1999, http://www.hf.faa.gov/docs/46855ndx.pdf
[Mills02] Mills, S. H., Pfleiderer, E. M., and Manning, C. A. (2002), “POWER: Objective activity and taskload assessment
in en route air traffic control”; DOT/FAA/AM-02/2, Office of Aerospace Medicine, Washington, DC 20591 USA
[MIL-STD 882B] Military Standard, System Safety Program Requirements, MIL-STD 882B, March 1984, http://www.system-
safety.org/Documents/MIL-STD-882B.pdf
[MindTools-DTA] MindTools webpage, Decision Trees, http://www.mindtools.com/dectree.html
[Minutes 10 Sept] M.H.C. Everdij, Minutes 10 September meeting Safety Methods Survey project
[Minutes SMS] M.H.C. Everdij, Minutes of 9 July 2002 kick-off meeting Safety Methods Survey project, 16 July 2002, Final.
[Mislevy&al98] R.J. Mislevy, L.S. Steinberg, F.J. Breyer, R.G. Almond, L. Johnson, A Cognitive task analysis, with implications
for designing a simulation-based performance assessment, CSE Technical Report 487, August 1998
[Mitello&Hutton, 1998] L.G. Militello and R.J. Hutton, Applied cognitive task analysis (ACTA): a practitioner's toolkit for understanding
cognitive task demands. Ergonomics. 1998 Nov;41(11):1618-41.
[Mizumachi&Ohmura77] M. Mizumachi and T. Ohmura, Electronics and communications in Japan, Vol 60-B, pp. 86-93, 1977.
[Moek84] G. Moek, “Methoden voor risicobepaling en risico evaluatie” [Methods for risk determination and risk
evaluation], NLR Memorandum MP 84019 U, 1984. (In Dutch)
[MorenoVerhelleVanthienen, 2000] A.M. Moreno Garcia, M. Verhelle, and J. Vanthienen, An Overview of Decision Table literature,
Fifth International Conference on Artificial Intelligence and Emerging Technologies in Accounting, Finance and
Tax, organized by the Research Group on Artificial Intelligence in Accounting (GIACA), November 2-3, 2000,
Huelva, Spain, 1982-2000, http://www.econ.kuleuven.ac.be/prologa/download/overview82-2000.pdf
[Moriarty83] R. Moriarty, System safety engineering and management, Wiley Interscience, 1983.
[Mosley91] A. Mosley, Common Cause Failures, an analysis methodology and examples, Reliability Engineering & System
Safety, Volume 34, Issue 3, 1991, Pages 249-292
[Moubray00] J. Moubray, Reliability-Centered Maintenance, 1999, 2000
[MSC] http://www.sdl-forum.org/MSC/index.htm
[Mucks&Lesse01] H.J. Mucks, L.A. Jesse, Web-enabled Timeline analysis system (WebTAS)
[MUFTIS1.2] J.M. Gouweleeuw, A.J. Hughes, J.L. Mann, A.R. Odoni, K. Zografos, MUFTIS workpackage report 1.2 Final
report on Global MSR studies Part 2: Review of available techniques/facilities, NLR TR 96406 L, 1996
[MUFTIS3.2-I] M.H.C. Everdij, M.B. Klompstra, H.A.P. Blom, O.N. Fota, MUFTIS work package report 3.2, final report on
safety model, Part I: Evaluation of hazard analysis techniques for application to en-route ATM, NLR TR 96196
L, 1996
[MUFTIS3.2-II] M.H.C. Everdij, M.B. Klompstra and H.A.P. Blom, MUFTIS workpackage report 3.2 Final report on Safety
Model Part II: Development of mathematical techniques for ATM safety analysis, NLR TR 96197 L, 1996
[Muniz et al, 1998] Muniz, E.J., Stout, R.J., Bowers, C.A. and Salas, E. ‘A methodology for measuring team Situational
Awareness: Situational Awareness Linked Indicators Adapted to Novel Tasks (SALIANT)’.
Proceedings of Symposium on “Collaborative Crew Performance in Complex Operational Systems”,
UK, 20-22 April 1998.
[Murch87] Murch, G. M. (1987). Color graphics: Blessing or ballyhoo? In Baecker, R. M., and Buxton, W. A. S., (Eds.),
Readings in human-computer interaction: A multidisciplinary approach (pp. 333-341). San Mateo, CA: Morgan
Kaufmann.
[Murphy, 2002] K. Murphy. Dynamic Bayesian Networks: Representation, Inference and Learning. PhD thesis, University of
California, Berkeley; Computer Science Division, 2002.
[Mylopoulos, 1992] J. Mylopoulos, L. Chung, and B. Nixon, “Representing and Using Non-Functional Requirements: A Process-
Oriented Approach”, IEEE Transactions on Software Engineering, Special Issue on Knowledge Representation
and Reasoning in Software Development, 18(6), June 1992, pp. 483-497.
[Mylopoulos, 1999] J. Mylopoulos, L. Chung and E. Yu, “From Object-Oriented to Goal-Oriented,” Communications of the ACM,
vol. 42. No. 1, Jan. 1999.
[NADA] http://www.ceismc.gatech.edu/MM_tools/NADA.html
[Nakagawa] Takashi Nakagawa, Satoko Matsuo, Hidekazu Yoshikawa, WeiWu, Akiyuki Kameda and Motoo Fumizawa,
Development of Effective Tool for Iterative Design of Human Machine Interfaces in Nuclear Power Plant,
http://www.iapsam.org/PSAM5/pre/tec_pro_fri.html
[NARA, 2004] Rail-Specific HRA Tool for Driving Tasks, T270, Phase 1 Report, Rail Safety and Standards Board, 5013727,
http://rssb.co.uk/pdf/reports/Research/T270%20Rail-
specific%20HRA%20tool%20for%20driving%20tasks%20Phase%201%20report.pdf
[Narkhede02] D.D. Narkhede, Credit Seminar on Bayesian Model for Software Reliability, Reliability Engineering, Indian
Institute of Technology, Bombay, 2002
[NASA-Assist01] NASA, Assist web page, 2001, http://shemesh.larc.nasa.gov/people/rwb/assist.html
[NASA-GB-1740.13-96] NASA-GB-1740.13-96, NASA Guidebook for Safety Critical Software - Analysis and Development, NASA
Lewis Research Center, Office of Safety and Mission Assurance. Superseded by NASA-GB-8719.13, National
Aeronautics and Space Administration, March 31, 2004, http://www.hq.nasa.gov/office/codeq/doctree/871913.pdf
[NASA-RCM] NASA Reliability Centered Maintenance Guide for Facilities and Collateral Equipment, February 2000,
http://www.wbdg.org/ccb/NASA/GUIDES/rcm.pdf
[NASA-STD-8719] NASA-STD-8719.13A, Software Safety NASA Technical Standard, 15 September, 1997,
http://satc.gsfc.nasa.gov/assure/nss8719_13.html
[Nazeri03] Z. Nazeri, Application of Aviation Safety Data Mining Workbench at American Airlines, Proof-of-Concept
Demonstration of Data and Text Mining, November 2003, MITRE Product MP 03W 0000238,
http://www.mitre.org/work/tech_papers/tech_papers_03/nazeri_data/nazeri_data.pdf
[NEA01] Nuclear Energy Agency, Experience from international nuclear emergency exercises, The INEX 2 Series, 2001,
http://www.nea.fr/html/rp/reports/2001/nea3138-INEX2.pdf
[NEA98] Nuclear Energy Agency, Committee on the safety of nuclear installations, Critical operator actions: human
reliability modelling and data issues, 18 February 1998, http://www.nea.fr/html/nsd/docs/1998/csni-r98-1.pdf
[NEA99] Nuclear Energy Agency, Identification and assessment of organisational factors related to the safety of NPPs,
Contributions from Participants and Member Countries, September 1999,
http://www.nea.fr/html/nsd/docs/1999/csni-r99-21-vol2.pdf
[NEC02] The New England Chapter of the System Safety Society, System Safety: A Science and Technology Primer,
April 2002, http://www.system-safety.org/resources/SS_primer_4_02.pdf
[NEMBS, 2002] N.E.M Business Solutions, Risk Analysis Methodologies,
http://www.cip.ukcentre.com/risk.htm#2.6%20%20%20%20Safety%20Management%20Organization%20Revie
w
[Nielsen, 1993] Jakob Nielsen (1993) Usability Engineering. Morgan Kaufman Publishers, Inc.
[Nielsen97] Nielsen, J. (1997). Usability testing. In G. Salvendy (Ed.), Handbook of Human Factors and Ergonomics (2nd
ed.). New York: John Wiley.
[Niessen&Eyferth01] C. Niessen, K. Eyferth, A Model of the Air Traffic Controller’s Picture. Safety Science, Vol. 37, pp. 187-202,
2001.
[Niessen&Leuchter&Eyferth98] C. Niessen, S. Leuchter, K. Eyferth, A psychological model of air traffic control and its implementation. In: F.E.
Ritter & R.M. Young (eds), Proceedings of the second European conference on cognitive modelling (ECCM-98),
Nottingham: University Press, pp. 104-111, 1998.
[Nijstad01] B.A. Nijstad, How the group affects the mind: effects of communication in idea generating groups, PhD Thesis
Interuniversity Center for Social Science Theory and Methodology (ICS) of Utrecht University, The Netherlands,
2001
[NNSA-ORR] National Nuclear Security Administration (NNSA) homepage, http://nnsa.energy.gov/
[Nordman, 2002] L.H. Nordmann and J.T. Luxhøj, “Application of a Performance Measure Reduction Technique to Categorical
Safety Data” Reliability Engineering and System Safety, Vol. 75, No. 1 (2002), pp. 59-71.
http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V4T-44MWP1F-
6&_user=2073121&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000056083&_version=1&_urlV
ersion=0&_userid=2073121&md5=bde0cb11b7adbc433903458b1a844313
[NOSS] Skybrary – NOSS, http://www.skybrary.aero/index.php/Normal_Operations_Safety_Survey_(NOSS)
[NRC-status99] Nuclear Regulatory Commission, Status report on Accident Sequence Precursor program and related initiatives,
20 December 1999, http://www.nrc.gov/reading-rm/doc-collections/commission/secys/1999/secy1999-289/1999-
289scy.html
[NRLMMD, 2006] Naval Research Laboratory Marine Meteorology Division, FORAS, 2006, http://www.nrlmry.navy.mil/foras/
[NSC-ANSTO] Report on the ANSTO application for a licence to construct a replacement research reactor, Addressing Seismic
Analysis and Seismic Design Accident Analysis Spent Fuel and Radioactive Wastes, February 2002,
http://www.arpansa.gov.au/pubs/rrrp/nsc150302.pdf
[Nurdin02] H. Nurdin, Mathematical modelling of bias and uncertainty in accident risk assessment, MSc Thesis, Twente
University, The Netherlands, June 2002, http://www.nlr.nl/public/hosted-sites/hybridge/
[NUREG CR6753] US Nuclear Regulatory Commission NUREG, Review of findings for human error contribution to risk in
operating events, August 2001, http://www.nrc.gov/reading-rm/doc-collections/nuregs/contract/cr6753/
[O’Neal et al, 1984] W.C. O’Neal, D.W. Gregg, J.N. Hockman, E.W. Russell, and W. Stein, Preclosure Analysis of Conceptual
Waste Package Designs for a Nuclear Waste Repository in Tuff, November 1984,
http://www.osti.gov/bridge/purl.cover.jsp;jsessionid=9B5C3A7D3F93E9D30A454CB1EE7FE0B9?purl=/59344-
iwfYDw/
[Ockerman et al, 2005] Jennifer Ockerman, Jennifer A.B. McKneely, Nathan Koterba, A Hybrid Approach to Cognitive Engineering:
Supporting Development of a Revolutionary Warfighter-Centered Command and Control System Associated
conference topic: Decision-making and Cognitive Analysis, 10th International Command and Control Research
and Technology Symposium (ICCRTS), June 2005,
http://www.dodccrp.org/events/10th_ICCRTS/CD/papers/050.pdf
[Oien et al, 2010] K. Øien, I.B. Utne, I.A. Herrera, Building Safety indicators: Part 1 – Theoretical foundation, Safety Science,
2010, http://198.81.200.2/science?_ob=MImg&_imagekey=B6VF9-509Y465-1-
F&_cdi=6005&_user=4861547&_pii=S0925753510001335&_orig=browse&_coverDate=06%2F16%2F2010&_
sk=999999999&view=c&wchp=dGLzVlb-
zSkWA&_valck=1&md5=b7e3697d913e3620abeb51b1a106f4fe&ie=/sdarticle.pdf
[OL glossary] University of Mannheim Glossary, Organisational Learning entry, 10 November 1997, http://www.sfb504.uni-
mannheim.de/glossary/orglearn.htm
[OmolaWeb] Omola and Omsim webpage, http://www.control.lth.se/~cace/omsim.html
[OPAL2003] OPAL (Optimisation Platform for Airports, including Landside), WP3: Building of OPAL, Task 3.1:
Implementation of model base, Implementation of model enhancements and interfaces, 17 July 2003
[ORM web] http://www.safetycenter.navy.mil/orm/default.htm
[ORM] Operational Risk Management User Training, slides
http://safetycenter.navy.mil/presentations/aviation/ormusertraining.ppt#256,1,OPERATIONAL RISK
MANAGEMENT
[Oztekin, 2007] A. Oztekin, J.T. Luxhøj, M. Allocco, General Framework for Risk-Based System Safety Analysis of the
Introduction of Emergent Aeronautical Operations into the National Airspace System, Proceedings 25th
International System Safety Conference, Baltimore, Maryland, USA, 13-17 August 2007
[Page&al92] M.A. Page, D.E. Gilette, J. Hodgkinson, J.D. Preston, Quantifying the pilot’s contribution to flight safety, FSF
45th IASS & IFA 22nd international conference, pp. 95-110, Long Beach, California, 1992.
[Parker&al91] R.G. Parker, N.H.W. Stobbs, D.Sterling, A.Azarian, T. Boucon, Working paper for a preliminary study of expert
systems for reliability, availability, maintainability and safety (RAMS), Workpackage 5000 final report, 19 July
1991
[Parks89] Parks, D. L., & Boucek, G. P., Jr. (1989). Workload prediction, diagnosis, and continuing challenges. In G. R.
McMillan, D. Beevis, E. Salas, M. H. Strub, R. Sutton, & L Van Breda (Eds.), Application of human
performance models to system design (pp. 47-64). New York: Plenum Press.
[Parry92] G.W. Parry, Critique of current practice in the treatment of human interactions in probabilistic safety
assessments. In Aldemir, T., N.O. Siu, A. Mosleh, P.C. Cacciabue, and B.G. Göktepe, editors, Reliability and
Safety Assessment of dynamic process systems, volume 120 of Series F: Computer and Systems Sciences, pp.
156-165. Springer Verlag, 1994.
[PAS web] http://www.aviationsystemsdivision.arc.nasa.gov/research/foundations/pas.shtml
[Patton87] Patton, M. Q. (1987). How to use qualitative methods in evaluation. Newbury Park, CA: Sage.
[Peacock&al01] R.D. Peacock, R.W. Bukowski, P.A. Reneke, and J.D. Averill, S.H. Markos, Development of a fire hazard
assessment method to evaluate the fire safety of passenger trains, Building and Fire Research Laboratory,
National Institute of Standards and Technology, Gaithersburg, MD 20899, USA; Volpe National Transportation
Systems Center, U.S. Department of Transportation, Cambridge, MA 02142, USA. Reprinted from Fire and
Materials 2001, 7th International Conference and Exhibition Proceedings, Interscience Communications Limited,
January 22-24, 2001, San Antonio, TX, pp. 67-78, 2001, http://fire.nist.gov/bfrlpubs/fire01/PDF/f01160.pdf
[Pearl, 1985] Judea Pearl (1985). "Bayesian Networks: A Model of Self-Activated Memory for Evidential Reasoning". In
Proceedings of the 7th Conference of the Cognitive Science Society, University of California, Irvine, CA, pp.
329-334, August 15-17.
[Pennycook&Embrey, 1993] W.A. Pennycook and D.E. Embrey, An operating approach to error analysis, in Proceedings of the first Biennial
Canadian Conference on Process Safety and Loss Management, Edmonton, Alberta, Canada, 1993
[Pentti&Atte02] H. Pentti, H. Atte, Failure Mode and Effects Analysis of software-based automation systems, VTT Industrial
Systems, STUK-YTO-TR 190, August 2002, www.stuk.fi/julkaisut/tr/stuk-yto-tr190.pdf
[Perrin, 2007] Eric Perrin, Barry Kirwan, Ronald L. Stroup, James Daum, Development of a Safety Target Achievement
Roadmap (STAR) for ATM in Europe using the Integrated Risk Picture (IRP), Proceedings 25th International
System Safety Conference, Baltimore, Maryland, USA, 13-17 August 2007
[PetriNets World] Welcome to the Petri Nets world, http://www.informatik.uni-hamburg.de/TGI/PetriNets/
[Petrolekas&Haritopoulos01] P. D. Petrolekas and P. Haritopoulos, A Risk Management Approach For SEVESO Sites, ABS Group and Shell
Gas, Greece, 2001, http://www.microrisk2001.gr/Petrolekas.doc
[Piccinini et al, 1996] N. Piccinini et al, Application of Integrated Dynamic Decision Analysis to a gas treatment facility, Chemputers
IV, Houston, March 11-13, 1996
[Pikaar00] A.J. Pikaar, M.A. Piers and B. Ale, External risk around airports A model update, the 5th International
Conference on Probabilistic Safety Assessment and Management, Osaka, Japan, November 27 - December 1,
2000, NLR-TP-2000-400, National Aerospace Laboratory NLR, Amsterdam
[Pocock, 2001] Steven Pocock, Michael Harrison, Peter Wright & Paul Johnson, THEA: A Technique for Human Error
Assessment Early in Design, http://homepages.cs.ncl.ac.uk/michael.harrison/papers/int01pub4.pdf
[Polat96] M.H. Polat, A Comprehensive Reference List on Organisational Learning and Related Literatures (with special
focus on Team Learning), Version: 1.0 – 2, 25 March, 1996, University of Wollongong, Australia
[Potash81] Potash, L. M., Stewart, M., Dietz, P. E., Lewis, D. M. and Dougherty, E. M. (1981), “Experience in Integrating
the Operator Contributions in the PRA of Actual Operating Plants” in Proceedings of the ANS/ENS Topical
Meeting on Probabilistic Risk Assessment, Port Chester, N. Y., American Nuclear Society: La Grange Park, Ill.
[Pounds03] J. Pounds, FAA Strategies for Reducing Operational Error Causal Factors, Civil Aerospace Medical Institute,
Federal Aviation Administration Air Traffic Investigations Division, DOT/FAA/AM-03/19, November 2003,
http://libraryonline.erau.edu/online-full-text/faa-aviation-medicine-reports/AM03-19.pdf
[Pozsgai&Neher&Bertsche02] P. Pozsgai, W. Neher, B. Bertsche, Models to Consider Dependence in Reliability Calculation for Systems
Consisting of Mechanical Components, 2002, http://www.math.ntnu.no/mmr2002/papers/contrib/Pozsgai.pdf
[Price82] Price, H. E., Maisano, R. E., & VanCott, H. P. (1982). The allocation of function in man-machine systems: A
perspective and literature review (NUREG/CR-2623). Oak Ridge, TN: Oak Ridge National Laboratory.
[Prinzo02] Prinzo, O.V. (2002), Automatic Dependent Surveillance-Broadcast Cockpit Display of Traffic Information:
Innovations in Pilot-Managed Departures, http://www.hf.faa.gov/docs/508/docs/cami/0205.pdf
[Prinzo95] Prinzo, O.V., Britton, T.W., and Hendrix, A.M. (1995), Development of a coding form for approach control/pilot
voice communications, N95-28540
[PROMAI5] Human factors contribution to quantitative methods survey, Progress in Maintenance and Management of
Railway Infrastructure, Contribution to Report to Council of Decision Makers – 01/12/01, 2001,
“PROMAI5.doc”, http://www.promain.org/images/ “human.Factors.zip”
[Pullaguntla, 2008] Rama Krishna Pullaguntla, Rotation Scheduling On Synchronous Data Flow Graphs, Master’s Thesis, Graduate
Faculty of The University of Akron, August, 2008, http://etd.ohiolink.edu/send-
pdf.cgi/Pullaguntla%20Rama%20Krishna.pdf?acc_num=akron1217097704
[Pumfrey, 1999] David John Pumfrey, The Principled Design of Computer System Safety Analyses, PhD Thesis, University of
York, Department of Computer Science, September 1999
[Pygott&al99] C. Pygott, R. Furze, I. Thompson and C. Kelly, Safety Case Assessment Approach for ATM, ARIBA WP5 final
report, 1999, http://www.aribaproject.org/rapport5/frame.htm
[Pyy, 2000] Pekka Pyy, Human Reliability Analysis Methods for Probabilistic Safety Assessment, PhD Thesis, Lappeenranta
University of technology, Finland, 2000
[Qiu&al] S. Qiu, A.M. Agogino, S. Song, J. Wu, S. Sitarama, A fusion of Bayesian and fuzzy analysis for print faults
diagnosis, http://best.me.berkeley.edu/~aagogino/papers/ISCA-Fusion.pdf
[QWHSS, 2005] Queensland Workplace and Health Safety Strategy, Manufacturing Industry Action Plan 2004-2007, January
2005, http://www.dir.qld.gov.au/pdf/whs/manufacturing_action.pdf
[Rademakers&al92] L.W.M.M. Rademakers, B.M. Blok, B.A. Van den Horn, J.N.T. Jehee, A.J. Seebregts, R.W. Van Otterlo,
Reliability analysis methods for wind turbines, task 1 of the project: Probabilistic safety assessment for wind
turbines, Netherlands energy research foundation, ECN Memorandum, 1992.
[RAIT slides] Slides on RAIT, http://faculty.erau.edu/dohertys/325/325_last.ppt
[Rakowsky] U.K. Rakowsky, Collection of Safety and Reliability Engineering Methods, http://www.rakowsky.eu/home.html -
“Collection”
[Randolph, 2009] Warren S. Randolph, ASIAS Overview, JPDO Environment Working Group, Operations Standing Committee,
July 29, 2009,
http://ironman.ae.gatech.edu/~cdaworkshop/ws09/Randolph_20090729_ewg_ops_sc_nasa_ames.pdf
[Rasmussen86] Rasmussen, J. (1986), Information Processing and Human-machine Interaction: An Approach to Cognitive
Engineering. North-Holland: New York.
[Rausand&Vatn98] M. Rausand and J. Vatn, Reliability Centered Maintenance. In C. G. Soares, editor, Risk and Reliability in
Marine Technology. Balkema, Holland, 1998,
http://www.sintef.no/static/tl/projects/promain/Experiences_and_references/Introduction_to_RCM.pdf
[RAW2004] Craig Stovall, Slides for 2004 Risk Analysis Workshop, “How to collect and Analyse Data”
[Reason et al, 1994] J. Reason, R. Free, S. Havard, M. Benson and P. Van Oijen, Railway Accident Investigation Tool (RAIT): a step
by step guide for new users, Department of Psychology, University of Manchester (1994).
[Reason90] Reason, J.T., Human error, Cambridge University press, 1990.
[REBA] http://www.ergonomiesite.be/arbeid/risicoanalyse/REBA.pdf
http://www.unclear-medicine.co.uk/pdf/reba_handout.pdf
[REDA example] W.L. Rankin and S. Sogg In Conjunction with: GAIN Working Group B, Analytical Methods and Tools,
Example Application of Ramp Error Decision Aid (REDA), September 2004,
http://www.flightsafety.org/gain/REDA_application.pdf
[REDA User Guide] Boeing, REDA User’s Guide, http://www.hf.faa.gov/hfmaint/Portals/1/HF_site_REDA_Users_Guide_V-3.doc
[Reer, 2008] Bernhard Reer, Review of advances in human reliability analysis of errors of commission, Part 1: EOC
identification, Reliability Engineering & System Safety Volume 93, Issue 8, August 2008, Pages 1091-1104
[Reer97] B. Reer, Conclusions from Occurrences by Descriptions of Actions (CODA), Abstract of Meeting Paper, Society
for Risk Analysis – Europe, 1997 Annual Meeting,
http://www.riskworld.com/Abstract/1997/Europe97/eu7ab220.htm
[Reese&Leveson97] J.D. Reese and N.G. Leveson, Software Deviation Analysis: A “Safeware” Technique, AIChe 31st Annual Loss
Prevention Symposium, Houston, TX March 1997, http://sunnyday.mit.edu/papers/cels97.pdf
[Region I LEPC] Region I LEPC, California Accidental Release Prevention Program (CalARP), Implementation guidance
document, January 1999, http://www.acusafe.com/Laws-Regs/US-State/CalARP-Implementation-Guidance-
LEPC-Region-1.pdf
[Reich64] P.G. Reich, A theory of safe separation standards for Air Traffic Control, Technical report 64041, Royal Aircraft
Establishment, U.K., 1964.
[Reifer, 1979] D.J. Reifer (1979), "Software Failure Modes and Effects Analysis," IEEE Transactions on Reliability R-28, 3,
247-249.
[Relex-RCM] Relex software website on Reliability Centered Maintenance, http://www.reliability-centered-maintenance.com/
[Richardson92] J.E. Richardson, The design safety process, FSF 45th IASS & IFA 22nd international conference, pp. 95-110, Long
Beach, California, 1992.
[Ridley&Andrews01] L.M. Ridley and J.D. Andrews, Application of the Cause-Consequence Diagram Method to Static Systems,
Department of Mathematical Sciences, Loughborough University, Loughborough, Leicestershire, 2001,
http://magpie.lboro.ac.uk/dspace/bitstream/2134/695/1/01-22.pdf
[Risktec] http://www.risktec.co.uk/GetBlob.aspx?TableName=HomePageItems&ColumnName=PDF&RecordID=73dd491
a-55b3-4e8a-b64b-06dd2896c8d9
[RMA Sha 1991] L. Sha, M. Klein, J. Goodenough, Rate Monotonic analysis for real-time systems, Technical report CMU/SEI-91-
TR-006, March 1991, http://www.sei.cmu.edu/publications/documents/91.reports/91.tr.006.html
[Roberts&al81] N.H. Roberts, W.E. Vesely, D.F. Haasl, F.F. Goldberg, Fault tree handbook, U.S. Nuclear Regulatory
Commission, NUREG-0492-1981.
[Roelen&al00] A.L.C. Roelen (NLR), L.J. Bellamy (SAVE), A.R. Hale (DUT), R.J. Molemaker (NEI), M.M. van Paassen
(DUT), A causal model for the assessment of third party risk around airports; Feasibility of the development of a
causal model for the assessment of third party risk around airports, Main Report, April 2000,
http://www2.vlieghinder.nl/naslagdocs/CDrom/REGELS_SCHIPHOL/2.3_TNL/5.3.2.3_A_causal_model_for_th
e_assessment_of_third_party.pdf
[Rohmert83] Rohmert, W., & Landau, K. (1983). A new technique for job analysis. London: Taylor & Francis.
[Rolland et al. 1998] C. Rolland, C. Souveyet, C. Ben Achour, “Guiding goal modeling using scenarios,” IEEE Trans. Software Eng.,
vol. 24, pp. 1055–1071, Dec. 1998.
[Rouse97] Rouse, W. B., & Boff, K. R. (1997). Assessing cost/benefit of human factors. In G. Salvendy (Ed.), Handbook of
Human Factors and Ergonomics (2nd ed.). New York: John Wiley.
[Roussot03] Roussot, J-M. Task analysis. Retrieved August 28, 2003
[Rowe99] L.A. Rowe, Interface testing, Slides, April 1999,
http://bmrc.berkeley.edu/courseware/cs160/spring99/Lectures/17b-InterfaceTesting/sld001.htm
[SAE2001] S. Amberkar, B.J. Czerny, J.G. D’Ambrosio, J.D. Demerly and B.T. Murray, A Comprehensive Hazard Analysis
Technique for Safety-Critical Automotive Systems, SAE technical paper series, 2001-01-0674, 2001
[SAFETEC web] http://www.safetec-group.com/index.php?c=127&kat=HAZOP+-+HAZID+-+CRIOP
[SAFSIM guidance] Kermarquer, Y. and Antonini, A. 2004, Interim SAFSIM Guidance, Eurocontrol
[Salmon et al, 2004] Paul Salmon, Neville Stanton, Chris Baber, Guy Walker, Damian Green, Human Factors Design and Evaluation
Methods Review, 2004,
http://www.hfidtc.com/pdf/reports/Human%20Factors%20Design%20%20Evaluation%20Methods%20Review.p
df
[Salvendy97] Salvendy, G., & Carayon, P. (1997). Data collection and evaluation of outcome measures. In G. Salvendy (Ed.).
Handbook of Human Factors and Ergonomics (2nd ed.). New York: John Wiley.
[SAME PT1, 2008] Eurocontrol, Safety Assessment Made Easier, Part 1 – Safety Principles and an introduction to Safety Assessment
Ed. 0.92, 11 July 08
[SAP15] FAA/EUROCONTROL, ATM Safety Techniques and Toolbox, Safety Action Plan-15, Version 2.0, October
2007,
http://www.eurocontrol.int/eec/gallery/content/public/document/eec/report/2007/023_Safety_techniques_and_too
lbox.pdf
[Savage, 1954] Leonard J. Savage. 1954. The Foundations of Statistics. New York, Wiley.
[Savage54] Leonard J. Savage, The Foundations of Statistics, Wiley, New York, 1954.
[Scaife00] Scaife, R., Fearnside, P., Shorrock, S.T., and Kirwan, B. (2000) Reduction of separation minima outside
controlled airspace. Aviation Safety Management conference, Copthorne Tara Hotel, London, 22-23 May.
[Scaife01] Scaife, R., Smith, E. and Shorrock, S.T. (2001). The Practical Application of Error Analysis and Safety
Modelling in Air Traffic Management. IBC Conference on Human Error, London, February 2001.
[SCAN TF, 2010] SCAN Task Force, Safety Fundamentals for Safety scanning, Edition 1.1, 11 March 2010, O. Straeter, H.
Korteweg.
[SCAN TF, 2010a] SCAN Task Force, Safety Scanning Tool, Excel-based Tool, 11 March 2010, A. Burrage, O. Straeter, M.H.C.
Everdij.
[SCAN TF, 2010b] SCAN Task Force, Guidance on Interpreting and Using the Safety scanning results, Edition 1.0, 11 March 2010,
O. Straeter, G. Athanassiou, H. Korteweg, M.H.C. Everdij.
[SCDM, 2006] Eurocontrol, Safety Case Development Manual, DAP/SSH/091, Edition 2.2, 13 November 2006, Released Issue
[SchaaftalSchraagen2000] A. Schaaftal and J.M. Schraagen (2000). In J.M. Schraagen, S.F. Chapman, & V.L. Shalin (Eds.), Cognitive
Task Analysis. Mahwah, NJ: Lawrence Erlbaum.
[Schneiderman92] Shneiderman, B. (1992). Designing the user interface: Strategies for effective human-computer interaction (2nd
ed.). Reading, MA: Addison-Wesley.
[Schram&Verbruggen98] G. Schram, H.B. Verbruggen, A fuzzy logic approach to fault-tolerant control, Journal A, Vol 39, No 3, pp. 14-
21, 1998
[Schuppen98] J.H. van Schuppen, A sufficient condition for controllability of a class of hybrid systems, Proceedings 1st
International Workshop Hybrid Systems: Computation and Control, 1998, pp. 374-383.
[SCM biblio] Bibliography on Software Configuration Management, http://liinwww.ira.uka.de/bibliography/SE/scm.html
[Seamster&al93] T.L. Seamster, R.E. Redding, J.R. Cannon, J.M. Ryder, J.A. Purcell, Cognitive Task Analysis of Expertise in Air
Traffic Control. The International Journal of Aviation Psychology, 3, 257-283, 1993.
[Seamster&al97] T.L. Seamster, R.E. Redding and G.L. Kaempf, Applied cognitive task analysis in aviation, 1997.
[SeaverStillwell, 1983] Seaver DA, Stillwell WG. Procedures for using expert judgement to estimate human error probabilities in nuclear
power plant operations. NUREG/CR-2743, Washington, DC 20555, 1983.
[SEC-SHA] Safeware Engineering Corporation, System Hazard Analysis, http://www.safeware-
eng.com/Safety%20White%20Papers/System%20Hazard%20Analysis.htm
[Seignette02] R. Seignette, RINA, Formal safety assessment of bulk carriers, International collaborative study, Work Package
9b, Detailed task inventory, Report No: GM-R0342-0108-1400, 2002
[Senni et al, 1991] S. Senni, M.G. Semenza, R. Galvani, ADMIRA – An analytical dynamic methodology for integrated risk
assessment. Probabilistic Safety Assessment and Management, G. Apostolakis (Ed), pp. 407-412, New York,
Elsevier, 1991
[SEU] http://en.wikipedia.org/wiki/Subjective_expected_utility
[ShalevTiran, 2007] D.M. Shalev and Joseph Tiran, Condition-based fault tree analysis (CBFTA): a new method for improved fault
tree analysis (FTA), reliability and safety calculations, Reliability Engineering and System Safety Vol 92, pp.
1231-1241, 2007
[Shanmugam&Balaban, 1980] K.S. Shanmugam and P. Balaban, “A Modified Monte-Carlo Simulation Technique for the evaluation of Error
Rate in Digital Communication Systems,” IEEE Trans. on Communications, Vol. 28, pp. 1916-1924, Nov. 1980.
[Shappell00] Shappell, S. A. and Wiegmann, D. A. (2000), The Human Factors Analysis and Classification System (HFACS).
Report Number DOT/FAA/AM-00/7, Federal Aviation Administration: Washington, DC,
http://www.nifc.gov/safety/reports/humanfactors_class&anly.pdf
[Sharit97] Sharit, J. (1997). Allocation of functions. In G. Salvendy, (Ed.), Handbook of Human Factors and Ergonomics
(2nd ed.). New York: John Wiley.
[Sharma, 2005] Varun Sharma, Development of a Composite Program Assessment Score (CPAS) for Advanced Technology
Portfolio Prioritization, Thesis Proposal Presentation, December 16, 2005. Thesis Co-Advisors: Dr. James T.
Luxhøj and Dr. David W. Coit, http://www.rci.rutgers.edu/~carda/CPAS.pdf
[Sheperd97] Roger Shepherd, Rick Cassell, Rajeev Thapa, Derrick Lee, A Reduced Aircraft Separation Risk
Assessment Model, 1997, American Institute of Aeronautics and Astronautics, Inc.,
http://www.aiaa.org/content.cfm?pageid=406&gTable=mtgpaper&gID=14939
[Sherali et al, 2002] Hanif D. Sherali, J. Cole Smith, Antonio A. Trani, An Airspace Planning Model for Selecting Flight-plans Under
Workload, Safety, and Equity Considerations, Transportation Science, Vol. 36, No. 4, November 2002 pp. 378–
397,
http://www.atsl.cee.vt.edu/Publications/2002_An_Airspace_Planning_Model_for_Selecting_Flight_Plans_Under
_Workload_Safety_and_Equity_Considerations.pdf
[Sherry&al00] L.M. Sherry, M. Feary, P. Polson and E. Palmer, Autopilot tutor: building and maintaining autopilot skills, In
Proceedings Int. Conf. on Human Computer Interaction –AERO, Toulouse, France, 2000
[Sherry&al01] L.M. Sherry et al., In: Int J. of Human Factors and Aerospace Safety, 2001.
[Shorrock&Kirwan98] S. Shorrock and B. Kirwan, The development of TRACEr: Technique for the retrospective analysis of cognitive
errors in Air Traffic Management, Powerpoint Slides, Human Factors Unit, NATS, Presented at the Second
International Conference on Engineering Psychology and Cognitive Ergonomics, 1998, “tracer7.ppt”
[Shorrock&Kirwan99] S. Shorrock and B. Kirwan, The development of TRACEr: a technique for the retrospective analysis of cognitive
errors in ATM, Ed: D. Harris, Engineering psychology and cognitive ergonomics, Volume 3, Transportation
systems, medical ergonomics and training, Ashgate, 1999, pp. 163-171.
[Shorrock01] S.T. Shorrock, Error classification for Safety Management: Finding the right approach, DNV Ltd, 2001, “error-
classification.doc”
[Shorrock05] Shorrock, S. Kirwan, B. and Smith, E. (2005: in press) Performance Prediction in Air Traffic Management:
Applying Human Error Analysis Approaches to New Concepts. In Kirwan, B., Rodgers, M., and Schaefer, D.
(Eds) Human Factors Impacts in Air Traffic Management. Ashgate, Aldershot, UK
[Silva&al99] J.S. Silva, K.S. Barber, T. Graser, P. Grisham, S. Jernigan, L. Mantock, The knowledge-based integrated design
and development environment (KIDDE) integrating a formal KA process and requirements representation with a
JAD/RAD development approach, 1999
[Sipser97] M. Sipser, Introduction to the theory of computation, PWS publishing company, Boston, 1997.
[Siu94] N. Siu, Risk assessment for dynamic systems: An overview, Reliability Engineering and System Safety, Vol. 43,
pp. 43-73, 1994.
[Skjerve HCA] Ann Britt Skjerve, Human Centred Automation - Issues related to design of automatic systems from a human
factors perspective,
http://www.ia.hiof.no/grensesnittdesign/forelsening/HumanCenteredAutomation.ppt#345,2,Content
[Skutt01] T. Skutt, Software Partitioning Technologies, Smiths Aerospace, 2001,
http://www.dtic.mil/ndia/2001technology/skutt.pdf
[Smartdraw] Smartdraw web page, How to draw data flow diagrams,
http://www.smartdraw.com/resources/centers/software/dfd.htm ; see also
http://www.pitt.edu/~laudato/DATAFLOW/index.htm
[Smith et al, 2007] Ebb Smith, Jonathan Borgvall, Patrick Lif. Team and Collective Performance Measurement, RTO-TR-HFM-121-
Part-II, 2007, http://ftp.rta.nato.int/public//PubFullText/RTO/TR/RTO-TR-HFM-121-PART-II///TR-HFM-121-
Part-II-07.pdf
[Smith&al98] S. Smith, D. Duke, T. Marsh, M. Harrison and P. Wright, Modelling Interaction in Virtual Environments,
Proceedings of 5th UK-VRSIG, Exeter, UK 1998
[Smith9697] E. Smith, Hazard analysis of route separation standards for Eurocontrol, DNV Technica, 1996 and 1997
[Snow&French, 2002] Michael P. Snow and Guy A. French, Effects of primary flight symbology on workload and Situation awareness
in a head-up synthetic vision display, Proceedings 21st Digital Avionics Systems Conference, Volume: 2, pp.
11C5-1 - 11C5-10, 2002
[Sollenberger97] Sollenberger, R. L., Stein, E. S., & Gromelski, S. (1997). The development and evaluation of a behaviorally
based rating form for assessing air traffic controller performance (DOT/FAA/CT-TN96/16). Atlantic City, NJ:
DOT/FAA Technical Center.
[SPARK web] SPARK web page, http://www.cse.secs.oakland.edu/edslabs/about/spark.asp
[Sparkman92] D. Sparkman, Techniques, Processes, and Measures for Software Safety and Reliability, Version 3.0, 30 May
1992
[SPF-safety01] NATS/Eurocontrol, Strategic Performance Analysis and Forecast Service, SPF_SAFETY report, Issue 2.0, 27
July 2001, Ref. SCS/SPAF/FIM/DOC/00/12
[SRK] http://www.enel.ucalgary.ca/People/far/res-e/theme_old01.html
[SSCS] Software for Safety Critical Systems, Fault Tolerant Systems, Lecture 12,
www.cs.strath.ac.uk/teaching/ug/classes/52.422/fault.tolerance.doc
[Stanton et al, 2005] N.A. Stanton, P.M. Salmon, G.H. Walker, “Human factors methods – a practical guide for engineering and
design”, Ashgate Publishing, 2005, Chapter 6, Human Error Identification Methods
[Stanton et al, 2006] N.A. Stanton, D. Harris, P.M. Salmon, J.M. Demagalski, A. Marshall, M.S. Young, S.W.A. Dekker and T.
Waldmann, Predicting design induced pilot error using HET (human error template) – A new formal human error
identification method for flight decks, The Aeronautical Journal, February 2006, Paper No. 3026, pp. 107-115,
http://www.raes.org.uk/pdfs/3026.pdf
[Stanton&Wilson00] N.A. Stanton, J.A. Wilson, Human factors: Step change improvements in effectiveness and safety, Drilling
Contractor, Jan/Feb 2000, http://www.iadc.org/dcpi/dc-janfeb00/j-step%20change%20psych.pdf
[Statematelatos] M.G. Stamatelatos, Risk assessment and management, tools and applications, slides,
http://www.ece.mtu.edu/faculty/rmkieckh/aero/NASA-RA-tools-sli.PDF
[Statler2004] Statler, I., et al, (2004). Identification of atypical flight patterns. Patent Application. NASA Ames Research
Center.
[STEADES] http://www.iata.org/ps/intelligence_statistics/steades/index.htm
[Stein85] Stein, E.S. (1985). Air traffic controller workload: An examination of workload probe. (Report No.
DOT/FAA/CT-TN84/24). Atlantic City, NJ: Federal Aviation Administration Technical Center.
[Stobart&Clare94] R. Stobart, J. Clare, SUSI methodology evaluating driver error and system hazard, 27th International Symposium
on Advanced Transportation pp. 1-8, Oct 1994
[Stoffert85] Stoffert, G. (1985). Analyse und Einstufung von Körperhaltungen bei der Arbeit nach der OWAS-Methode
[Analysis and classification of working postures using the OWAS method]. Zeitschrift für Arbeitswissenschaft,
39(1), 31-38.
[Storbakken, 2002] R. Storbakken, An Incident Investigation Procedure For Use In Industry, A Research Paper Submitted in Partial
Fulfillment of the Requirements for the Masters of Science Degree in Risk Control, The Graduate School
University of Wisconsin-Stout Menomonie, WI 54751, 2002,
http://www.uwstout.edu/lib/thesis/2002/2002storbakkenr.pdf
[Storey96] N. Storey, Safety-Critical Computer Systems, Addison-Wesley, Edinburgh Gate, Harlow, England, 1996
[Storyboard] http://www.ucc.ie/hfrg/projects/respect/urmethods/storyb.htm
[Straeter&al99] O. Straeter, B. Reer, V. Dang, S. Hirschberg, Methods, case studies, and prospects for an integrated approach for
analyzing errors of commission, Safety and Reliability, Proceedings of the ESREL99 – The Tenth European
Conference on Safety and Reliability, Munich-Garching, Germany, 13-17 September 1999, G.I. Schuëller and P.
Kafka (Eds), A.A. Balkema, Rotterdam/Brookfield, 1999, “EOC-Esrel99.pdf” or “Esrel99-Str-ua.pdf”
[Straeter, 2006] O. Sträter et al, Safety Screening Technique, Final Draft, Edition 0.5, 1 March 2006
[Straeter00] O. Straeter, Evaluation of human reliability on the basis of operational experience, Dissertation, Gesellschaft für
Anlagen und Reaktorsicherheit (GRS), August 2000
[Straeter01] O. Straeter, The quantification process for human interventions, In: Kafka, P. (ed) PSA RID - Probabilistic Safety
Assessment in Risk Informed Decision making. EURO-Course, 4-9 March 2001, GRS Germany, “L6_Paper.PDF”
[Stroeve&al01] S.H. Stroeve, H.A.P. Blom, M.B. Klompstra, G.J. Bakker, M.N.J. van der Park, Accident risk assessment model
for active runway crossing procedure, NLR-TR-2001-527, National Aerospace Laboratory NLR, 2001
[Stroeve&Blom&Park03] S.H. Stroeve, H.A.P. Blom, M. Van der Park, Multi-agent situation awareness error evolution in accident risk
modelling, 5th FAA/Eurocontrol ATM R&D seminar, 23-27 June 2003
[SUMI background] SUMI background reading, http://sumi.ucc.ie/sumipapp.html
[Summers98] A.E. Summers, Techniques for Assigning A Target Safety Integrity Level, ISA Transactions 37 (1998) 95-104,
http://www.iceweb.com.au/sis/target_sis.htm
[Sutcliffe, 2003] A.G. Sutcliffe, Mapping the Design Space for Socio-Cognitive Task Design, In E. Hollnagel (Ed.), Handbook of
cognitive task design (pp. 549-575). Mahwah NJ: Lawrence Erlbaum Associates, 2003
[SW, 2004] SW Dependability and Safety assessment techniques, Slides ESTEC Workshop October 2004,
ftp://ftp.estec.esa.nl/pub3/tos-qq/qqs/Workshop_October_2004/SwDependability.pdf
[Swain83] Swain, A. D., & Guttman, H. E. (1983). Handbook of human reliability analysis with emphasis on nuclear power
plant applications. NUREG/CR-1278 (Washington D.C.).
[Swaminathan&Smidts, 1999] S. Swaminathan and C. Smidts, The Event Sequence Diagram framework for dynamic Probabilistic Risk
Assessment, Reliability Engineering & System Safety, Volume 63, Issue 1, January 1999, Pages 73-90
[Swiss Cheese] http://www.hf.faa.gov/Webtraining/TeamPerform/TeamCRM009.htm
[Switalski, 2003] Laura Barbero Switalski, Evaluating and Organizing Thinking Tools in Relationship to the CPS Framework,
State University of New York - Buffalo State College, International Centre for Studies in Creativity, May 2003,
http://www.buffalostate.edu/orgs/cbir/Readingroom/theses/Switalbp.pdf
[Task Time] Powerpoint slides on Timeline analysis, “Task-time.ppt”
[Taylor90] Taylor, R.M. (1990). Situational Awareness Rating Technique (SART): The development of a tool for aircrew
systems design. In: AGARD Conference Proceedings No 478, Situational Awareness in Aerospace Operations.
Aerospace Medical Panel Symposium, Copenhagen, 2nd-6th October 1989.
[Telelogic Objectgeode] Telelogic Objectgeode webpage, http://www.telelogic.com/products/
http://www.spacetools.com/tools4/space/213.htm
[Telelogic Tau] Telelogic Tau webpage, http://www.telelogic.com/products/tau/
[Terpstra84] K. Terpstra, Phased mission analysis of maintained systems. A study in reliability and risk analysis, Netherlands
energy research foundation, ECN Memorandum, 1984.
[THEMES01] THEMES WP4, Deliverable D4.1, Report on updated list of methods and critical description, D’Appolonia
S.p.A, June 2001
[Thinkaloud] http://www.theusabilitycompany.com/resources/glossary/think-aloud-protocol.html#b
[TOKAI web] http://www.eurocontrol.be/src/public/standard_page/esarr2_tokai.html
[Tomlin&Lygeros&Sastry98] C. Tomlin, J. Lygeros, S. Sastry, Synthesising controllers for nonlinear hybrid systems, Proceedings 1st
International Workshop Hybrid Systems: Computation and Control, 1998, 360-373.
[Toola93] A. Toola, The safety of process automation, Automatica, Vol. 29, No. 2, pp. 541-548, 1993.
[TOPAZ hazard database] TOPAZ ATM hazard database, Database maintained within NLR’s TOPAZ Information Management System
(TIMS) containing hazards identified during ATM safety assessments (contact klompstra@nlr.nl)
[TRACEr lite_xls] Excel files “TRACEr lite Excel Predict v0.1 Protected!.xls” and “TRACEr lite v0[1].1 Protected.xls”
[Trbojevic&Carr99] V.M. Trbojevic and B.J. Carr, Risk based safety management system for navigation in ports, Port
Technology International, 11, pp. 187-192, 2001
[Tripod Beta] Tripod Solutions international webpage on Tripod Beta incident analysis,
http://www.tripodsolutions.net/productitem.aspx?ID=035326b7-7404-4d22-9760-11dfa53ddb3a
[Tripod Solutions] Tripod Solutions international webpage on incident investigation and analysis, www.tripodsolutions.net
[TRM web] Web page on Crew Resource Management, http://www.globalairtraining.com/business_trm.htm
[Trucco, 2006] Paolo Trucco, Maria C. Leva, Oliver Sträter (2006) Human Error Prediction in ATM via Cognitive Simulation:
Preliminary Study. Proceedings of the 8th International Conference on Probabilistic Safety Assessment and
Management May 14-18, 2006, New Orleans, Louisiana, USA, paper PSAM-0268
[TUD05] Safety research and safety assessment methods at the TU Dresden, The safety assessment techniques “External
Risk” and “LOS”, TU Dresden, Technical Contribution to CAATS (Co-operative Approach to Air Traffic
Services), 5 October 2005
[Uhlarik&Comerford02] J. Uhlarik and D. Comerford, A review of situation awareness literature relevant to pilot surveillance functions,
Department of Psychology, Kansas State University, March 2002,
http://www.hf.faa.gov/docs/508/docs/cami/0203.pdf
[UML] http://www.rational.com/uml/index.jsp?SMSESSION=NO
[Vakil00] S.S. Vakil, Analysis of Complexity Evolution Management and Human Performance Issues in Commercial
Aircraft Automation Systems, Submitted to the Department of Aeronautics and Astronautics in Partial
Fulfillment of the Requirements for the Degree of Doctor of Philosophy at the Massachusetts Institute of
Technology, May 19, 2000
[Vanderhaegen&Telle98] F. Vanderhaegen and B. Telle, APRECIH: vers une méthode d’analyse des conséquences de l’infiabilité
humaine [APRECIH: towards a method for analysing the consequences of human unreliability],
Compte-Rendu de la Réunion S3 du 19 mai 1998, http://www.univ-lille1.fr/s3/fr/cr-19-5-98.htm
[VanEs01] G.W.H. Van Es, A Review of Civil Aviation Accidents - Air Traffic Management Related Accidents: 1980-1999,
4th International Air Traffic Management R&D Seminar, New Mexico, December 3rd-7th, 2001
[Vargas, 1999] Enrique Vargas, Dynamic Reconfiguration, Enterprise Engineering, Sun BluePrints™ OnLine - April 1999,
http://www.sun.com/blueprints/0499/reconfig.pdf
[Verheijen02] Frans M. Verheijen, Flight Training and Pilot Employment, MSc Thesis, Air Transport Management, City
University, London, United Kingdom, August 2002,
http://www.airwork.nl/kennisbank/Flight_Training_and_Pilot_Employment.pdf
[Vesely70] W.E. Vesely, A time dependent methodology for fault tree evaluation, Nuclear engineering and design, Vol. 13,
pp. 337-360, 1970.
[Vidulich et al, 1991] Vidulich, M. A., Ward, F. G., & Schueren, J. (1991). Using the subjective workload dominance (SWORD)
technique for projective workload assessment. Human Factors, 33(6), 677-692.
[Villemeur91-1] A. Villemeur, Reliability, availability, maintainability and safety assessment, Volume 1: Methods and
Techniques, John Wiley and Sons, Inc., 1991.
[Vinnem00] J.E. Vinnem, R&D into operational safety aspects of FPSO/Shuttle Tanker collision hazard, SINTEF, 2000
[Voas97a] J. Voas, G. McGraw, L. Kassab, & L. Voas. Fault-injection: A Crystal Ball for Software Quality, IEEE
Computer, June 1997, Volume 30, Number 6, pp. 29-36, http://www.cigital.com/papers/download/crystal.ps
[Voas97b] J. Voas, F. Charron, G. McGraw, K. Miller, & M. Friedman. Predicting How Badly "Good" Software can
Behave, IEEE Software, July 1997, Volume 14, Number 4, pp. 73-83,
http://www.cigital.com/papers/download/ieee-gem.ps
[Volpe98] VOLPE National Transportation Systems Center (1998). Evaluation of Retroreflective Markings to Increase Rail
Car Conspicuity. Cambridge, MA 02142-1093.
[vonThaden, 2006] Terry L. von Thaden, Yongjuan Li, Li Feng, Jiang Li, Dong Lei, Validating The Commercial Aviation Safety
Survey In The Chinese Context, Technical Report HFD-06-09 Prepared for Federal Aviation Administration
Atlantic City International Airport, NJ, Contract DTFA 01-G-015, December 2006,
http://www.humanfactors.uiuc.edu/Reports&PapersPDFs/TechReport/06-09.pdf
[Wassell92] A.B. Wassell, Safety and reliability in the air, 16th Croxson Memorial Lecture, Cranfield, pp. 315-318, Dec 1992
[WBA Homepage] Why-Because Analysis Homepage, http://www.rvs.uni-bielefeld.de/research/WBA/
[Weinberg&Lynch&Delisle96] H.B. Weinberg, N. Lynch, N. Delisle, Verification of automated vehicle protection systems, Hybrid Systems III,
Verification and control, R. Alur et al. (eds.), Springer, 1996, pp. 101-113
[Weinberg, 1971] Gerald M. Weinberg, The psychology of computer programming, Computer science series, Van Nostrand
Reinhold, 1971
[Weitzman00] Weitzman, D. O. (2000), “Runway Incursions and Critical Controller Decision Making in Tower Operations,”
Journal of Air Traffic Control, 42(2), pp 26-31.
[Wickens92] C.D. Wickens, Engineering psychology and human performance, Merrill, 1992
[Wickens97] Wickens, C. D., Gordon, S. E., Liu, Y., (1997). An Introduction to Human Factors Engineering. New York:
Longman.
[Wickens99] Wickens, C. D., & Hollands, J. G., (1999). Engineering psychology and human performance (3rd ed.). New
Jersey: Prentice Hall.
[Wiegman et al, 2000] Douglas A. Wiegmann, Aaron M. Rich and Scott A. Shappell, Human Error and Accident Causation Theories,
Frameworks and Analytical Techniques: An Annotated Bibliography, Technical Report ARL-00-12/FAA-00-7
September 2000, Prepared for Federal Aviation Administration Oklahoma City, OK ,Contract DTFA 99-G-006,
http://www.humanfactors.uiuc.edu/Reports&PapersPDFs/TechReport/00-12.pdf
[Wiegman et al, 2003] Douglas A. Wiegmann, Terry L. von Thaden, Alyssa A. Mitchell, Gunjan Sharma, and Hui Zhang, Development
and Initial Validation of a Safety Culture Survey for Commercial Aviation, Technical Report AHFD-03-3/FAA-
03-1, February 2003, Prepared for Federal Aviation Administration Atlantic City International Airport, NJ,
Contract DTFA 01-G-015, Aviation Human Factors Division Institute of Aviation,
http://www.humanfactors.uiuc.edu/Reports&PapersPDFs/TechReport/03-03.pdf
[Wiegman00] Wiegmann, D. A. Shappell, S. A., Cristina, F. and Pape, A. (2000), “A human factors analysis of aviation
accident data: An empirical evaluation of the HFACS framework,” Aviation Space and Environmental Medicine,
71, 328-339.
[Willems02] Willems, B., & Heiney, M. (2002). Decision Support Automation Research in the En Route Air Traffic Control
Environment (DOT/FAA/CT-TN02/07). Atlantic City International Airport: Federal Aviation Administration
William J. Hughes Technical Center.
[Williams85] J.C. Williams, Validation of human reliability assessment techniques, Reliability Engineering, Vol. 11, pp. 149-
162, 1985.
[Williams88] J.C. Williams, A data-based method for assessing and reducing human error to improve operational performance,
4th IEEE conference on Human factors in Nuclear Power plants, Monterey, California, pp. 436-450, 6-9 June
1988.
[Williams91] L.G. Williams, Formal Methods in the Development of Safety Critical Software Systems, Work performed under
the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-
7405-Eng-48, November 1991
[Wilson&al96] S.P. Wilson, J.A. McDermid, C.H. Pygott, D.J. Tombs, Assessing complex computer based systems using the
goal structuring notation, pp. 1-8, 1996
[Winkler03] Anna M. Fowles-Winkler, Modelling With The Integrated Performance Modelling Environment (IPME),
Proceedings 15th European Simulation Symposium, 2003, Alexander Verbraeck, Vlatka Hlupic (Eds.)
http://www.scs-europe.net/services/ess2003/PDF/TOOLS05.pdf
[Wolfram02] S. Wolfram, A New Kind of Science, Notes for Chapter 9: Fundamental Physics, Section: Time and Causal
Networks, Page 1032, http://www.wolframscience.com/reference/notes/1032f
[Woods et al, 1992] David D. Woods, Harry E. Pople Jr. and Emilie M. Roth, Cognitive environment simulation: a tool for modeling
intention formation for human reliability analysis, Nuclear Engineering and Design, Volume 134, Issues 2-3, 2
May 1992, Pages 371-380
[WordenSchneider, 1995] M. Worden and W. Schneider. Cognitive task design for fMRI, International Journal of Imaging Science &
Technology; 6, 253-270, 1995.
[Wright&Fields&Harrison94] P. Wright, B. Fields and M. Harrison, Deriving human error tolerance requirements from tasks, Proceedings
ICRE’94 - IEEE International Conference on Requirements Engineering, Colorado 1994,
http://www.cs.mdx.ac.uk/staffpages/bobf/papers/ICRE94.pdf
[Yanga&Mannan, 2010] Xiaole Yang and M. Sam Mannan, The development and application of dynamic operational risk assessment in
oil/gas and chemical process industry, Reliability Engineering & System Safety, Volume 95, Issue 7, July 2010,
Pages 806-815
[Yu et al, 1999] Fan-Jang Yu, Sheue-Ling Hwang and Yu-Hao Huang, Task Analysis for Industrial Work Process from Aspects
of Human Reliability and System Safety, Risk Analysis, Volume 19, Number 3, 401-415, DOI:
10.1023/A:1007044527558
[Yu, 1994] Yu E. & Mylopoulos J.M., 1994, ‘Understanding “Why” in Software Process Modelling, Analysis and
Design’, Proceedings, 16th International Conference on Software Engineering, IEEE Computer Society
Press, 159-168.
[Zachary96] Zachary, W., Le Mentec, J. C., and Ryder, J. (1996), Interface agents in complex systems. In Human Interaction
with Complex Systems: Conceptual Principles and Design Practices, (C. N. Ntuen and E. H. Park, eds.), Kluwer
Academic Publishers
[Zingale et al, 2008] Carolina M. Zingale, Todd R. Truitt, D. M. McAnulty, Human-in-the-Loop Evaluation of an Integrated
Arrival/Departure Air Traffic Control Service for Major Metropolitan Airspaces, FAA report DOT/FAA/TC-
08/04, March 2008, http://www.tc.faa.gov/its/worldpac/techrpt/tc084.pdf
[Zuijderduijn99] C. Zuijderduijn, Risk management by Shell refinery/chemicals at Pernis, The Netherlands; Implementation of
SEVESO-II based on build up experiences, using a Hazards & Effects Management Process, 1999,
http://mahbsrv.jrc.it/Proceedings/Greece-Nov-1999/B4-ZUIJDERDUIJN-SHELL-z.pdf