Model Design and Planning
Chapter 4: Defining Sensitivity and Flexibility Requirements
Introduction:
This chapter discusses what is perhaps the single most important area to consider when planning and designing models: ensuring that one clearly defines (early in the process) the nature of the sensitivity analysis that will be used in decision support, and uses this as the fundamental driver of model design.
As the model is being built, sensitivity analysis can be used:
• to test it for the absence of logical errors
• to ensure that more complex formulae are implemented correctly
Once the model is built, sensitivity analysis can be used in the traditional sense, i.e. to better understand the range of possible variation around a point forecast.
There are typically many ways of breaking down an item into subcomponents.
The use of sensitivity analysis thinking (SAT) will help:
• to clarify which approach is appropriate, especially relating to the choice of variables that
are used for inputs and intermediate calculations, and the level of detail that makes sense
(since one can run sensitivity analysis only on a model input).
• to ensure that the forward calculations correctly reflect dependencies between the items
(general dependencies or specific common drivers of variability), since sensitivity analysis
will be truly valid only if such dependencies are captured.
Example:
The aim is to calculate the labour cost associated with a project to renovate a house. In the first instance,
a backward thought process is applied to consider possible ways of breaking down the total cost into
components.
Figure 4.1 represents the initial method used, based on a hypothesis that the items shown are the underlying drivers of the total.
Figure 4.2 shows an example of a modified model, in which the backward path has been extended to include an hourly labour rate, and the forward calculation path is based on using new underlying base figures (derived so that the new totals for each are the same as the original values).
In addition, one may desire to be able to vary the figures using a percentage variation (as an alternative, or in addition, to varying absolute figures). Figure 4.3 shows an example of how this may be implemented.
In a more general case, there may be several underlying factors (or different categories of labour), with some individual items driven by one of these, and other items by another. Figure 4.4 shows an example of this.
In general when items fall into categories, it may be preferable to build a model which is not structurally constrained
by the categories; in other words, one in which the items can be entered in any order (rather than having to be entered
by category). This is simple to do by using functions such as INDEX, MATCH and SUMIFS. Figure 4.5 shows an
example.
Another important case is that of models with a time axis where an important question is
whether the assumptions used for the forecast (e.g. for the growth rate in revenues) should
be individual to each time period, or common to several time periods:
• A separate assumption in each period can be cumbersome and inhibit sensitivity analysis.
• A single assumption that applies to all future periods may be too crude (and unrealistic),
resulting in an excessively high sensitivity of the output to the input value.
A compromise approach, in which there are several growth rates, each applied to several
periods, is often the most appropriate. This can also be considered as a “parameter
reduction”, i.e. the number of inputs is reduced to a more manageable level, whilst aiming
to retain sufficient accuracy.
Figure 4.6 shows an example, in which there is a single assumption for revenue growth that applies to each of years 1–3, another that applies to each of years 4–5, and another that applies to each of years 6–10.
Time Granularity
Where models have a time component (such as each column representing a time period), it
is important to consider the granularity of the time axis (such as whether a column is to
represent a day, a month, a quarter or a year, and so on).
It is generally better to build the model so that the granularity of the time axis is at least as
detailed as that required for the purposes of development of the formulae and results
analysis.
For example,
• If one may wish to delay some cash flows by a month, then a monthly model should be
considered.
• If the refinancing conditions for a bank or project loan are to be verified quarterly, then a
model which forecasts whether such conditions will be met should generally be built to
be at least quarterly.
The benefits of increasing granularity include:
• Models with a very granular time axis can be used to give the relevant figures for longer periods
(by summation).
• By contrast, it is harder to validly allocate aggregate figures (e.g. for a period of a year) into their components (such as monthly figures), since the effect of growth or other factors would generally lead to non-equal values in the component periods.
The appropriate level of granularity may be one that uses the detailed information explicitly (see Figure 4.7, which is also contained in the example file referred to earlier).
Sensitising Absolute Values or Variations from Base Cases
At the model design stage, it is useful to consider explicitly whether sensitivity analysis will be
performed on an absolute or on a variation (change) basis.
• In the first approach, the value of a model’s output is shown as an input takes each of a pre-defined set of values.
• In the second approach, the output is shown for a set of input values corresponding to a
variation from the base case.
• Where the variation approach uses a percentage variation, the base case position remains fixed (at 0% variation) even if other assumption values are updated (such as the base unit labour cost).
When using the variation approach (whether in absolute or percentage terms), the variation is an additional model input, which must be used together with the original absolute input figure within the calculations.
The percentage-variation approach has particular appeal, as it may correspond closely to
how many decision-makers think.
Uses of Scenarios
• Scenarios are used most typically where it is desired to vary three or more input values simultaneously.
• Another use is to reflect possible dependencies between two or more inputs. This can be useful where the relationship between the variables is not well understood and cannot be represented with simple formulae.
For example, it may be difficult to express the volume of a product that might be sold
for every possible value of the price, but market research could be used to establish this
at several price points, with each volume-price combination forming a possible
scenario.
• The use of scenarios will often affect the best way to lay out the model, so that items of a similar type (e.g. optimisation versus uncertain variables) are grouped together if possible, or are perhaps formatted differently.
• The logic within the model may need to be adapted, potentially quite significantly.
For example, if it is desired to find the price at which to sell a product in order to
maximise revenues, one will need to capture (in the logic of the model) the
mechanism by which volume decreases as price is increased.
Increasing Model Validity Using Formulae
Model assumptions are typically numerical values (sometimes text fields also act as inputs); the model’s calculations should update correctly if these values are altered (e.g. to conduct a sensitivity analysis).
Contextual assumptions are those which limit the validity of a model, and so cannot be validly changed within the existing model.