Loss Distribution
LECTURE-NOTES
August 3, 2022
1 Loss Distribution
Objectives:
- Describe the properties of the statistical distributions which are suitable for modelling individual and aggregate losses.
- Derive moments and moment generating functions (where defined) of loss distributions, including the gamma, exponential, Pareto, generalised Pareto, normal, lognormal, Weibull and Burr distributions and mixture distributions.
- Apply the principles of statistical inference to select suitable loss distributions for sets of claims.
- Estimate the parameters of a failure time or loss distribution when the data is complete, or when it is incomplete, using maximum likelihood and the method of moments.
1.1 Introduction
For a variety of reasons, general insurance companies must study their claims history and employ mathematical methodologies. These include the following:
- Reserving (i.e. assessing how much money should be set aside to cover the cost of claims)
- Testing for solvency (i.e. assessing the company's financial position)
The total number of claims filed in a given time period is a crucial figure for an insurance company's good management. All of the models examined here assume that the occurrence of a claim and the amount of a claim may be analysed separately. As a result, claim occurrence is described by a simple model for events in time, and the claim amount is drawn from a distribution characterising the claim amount.
In this chapter we will look at loss distributions, which are a mathematical method of modelling individual claims. We will introduce some new statistical distributions and see how these can be fitted to observed claims data. We can then test for goodness of fit, and use the fitted loss distributions to estimate probabilities. These distributions help insurance companies model either the number of claims or the amount of individual claims (the cost associated with each claim). Every insurance company is very much concerned with the total number of claims it receives in a given period (say one year). Knowing the approximate number of claims and the associated (aggregate) amounts in a given period helps the insurance company plan well and ensure the smooth running of the company. There are two major concerns when we look at the claims:
- the number of claims (i.e. the claim frequency);
- the claim amount (i.e. the amount associated with each claim).
However, in this course we are more concerned with the severity of the claims, that is the claim amount, and the model or distribution that can be used to determine the claim amounts. There are various statistical models that can be used to understand the distribution of these losses. The most important task is to find the model or distribution that best fits the data. Basically we assume that the claims follow a particular known distribution. The major challenge in all this is knowing exactly which distribution the claims come from.
Below are the commonly used probability distributions, together with their properties, that can be used to determine the number of claims and the amount associated with each individual claim.
1.2 Poisson Distribution
When a random variable X has a Poisson distribution with parameter λ > 0, its probability function is given by
P(X = x) = e^{−λ} λ^x / x!,   x = 0, 1, 2, . . .
The moment generating function is
M_X(t) = Σ_{x=0}^{∞} e^{tx} P(X = x) = Σ_{x=0}^{∞} e^{tx} e^{−λ} λ^x / x! = e^{−λ} Σ_{x=0}^{∞} (e^t λ)^x / x! = exp[λ(e^t − 1)]
The moments of X can be found from the moment generating function. For example,
M′_X(t) = λe^t M_X(t)
and
M″_X(t) = λe^t M_X(t) + (λe^t)^2 M_X(t),
from which it follows that E[X] = λ and E[X^2] = λ^2 + λ, so that Var[X] = λ. We use the notation P(λ) to denote a Poisson distribution with parameter λ.
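A quick numerical sketch of these results is given below, using scipy (assumed available) and an illustrative value λ = 3.5 that is not taken from the notes:

# Numerical check of the Poisson mean, variance and MGF derived above.
import numpy as np
from scipy.stats import poisson

lam = 3.5                                   # illustrative parameter value
mean, var = poisson.stats(lam, moments="mv")
print(mean, var)                            # both equal lam

# Check M_X(t) = exp(lam*(e^t - 1)) against the direct expectation E[e^{tX}]
t = 0.2
x = np.arange(0, 200)                       # truncate the infinite sum
mgf_direct = np.sum(np.exp(t * x) * poisson.pmf(x, lam))
mgf_formula = np.exp(lam * (np.exp(t) - 1.0))
print(mgf_direct, mgf_formula)              # agree to numerical precision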
Example 1.2.1. Derive the mean, variance and the moment generating function of the Poisson distribution.
Solution:
Mean:
E[X] = Σ_{x=0}^{∞} x p(x)
Using
e^a = Σ_{y=0}^{∞} a^y / y!    (1)
and since
P(x) = e^{−λ} λ^x / x!,
we have
E[X] = Σ_{x=0}^{∞} x e^{−λ} λ^x / x! = e^{−λ} Σ_{x=0}^{∞} x λ^x / x!
Since the x = 0 term of x λ^x / x! is 0, we may start the sum at x = 1, so that
E[X] = e^{−λ} Σ_{x=1}^{∞} x λ^x / x! = λ e^{−λ} Σ_{x=1}^{∞} λ^{x−1} / (x − 1)!
Using equation (1) we get
E[X] = λ e^{−λ} e^{λ} = λ
Variance:
Var[X] = E[X^2] − (E[X])^2
We need E[X^2] = Σ_{x=0}^{∞} x^2 p(x), but it is easier to work with the factorial moment
E[X(X − 1)] = Σ_{x=0}^{∞} x(x − 1) p(x) = Σ_{x=0}^{∞} x(x − 1) e^{−λ} λ^x / x!
Since the x = 0 and x = 1 terms vanish, we start the sum at x = 2:
E[X(X − 1)] = e^{−λ} Σ_{x=2}^{∞} x(x − 1) λ^x / x! = λ^2 e^{−λ} Σ_{x=2}^{∞} x(x − 1) λ^{x−2} / [x(x − 1)(x − 2)!]
E[X(X − 1)] = λ^2 e^{−λ} Σ_{x=2}^{∞} λ^{x−2} / (x − 2)! = λ^2 e^{−λ} Σ_{y=0}^{∞} λ^y / y!,   where y = x − 2.
By equation (1) we have
E[X(X − 1)] = λ^2 e^{−λ} e^{λ} = λ^2
From E[X(X − 1)] = λ^2 we have
E[X^2 − X] = λ^2
E[X^2] − E[X] = λ^2
E[X^2] = λ^2 + E[X] = λ^2 + λ
So that
Var[X] = λ^2 + λ − λ^2 = λ
Therefore
Var[X] = λ
MGF:
M_X(t) = E[e^{tX}] = Σ_{x=0}^{∞} e^{tx} p(x) = Σ_{x=0}^{∞} e^{tx} e^{−λ} λ^x / x! = e^{−λ} Σ_{x=0}^{∞} (e^t λ)^x / x!
Using equation (1) we have
M_X(t) = e^{−λ} e^{λ e^t}
M_X(t) = e^{λ(e^t − 1)}
NB: The MGF can also be used to find the mean and the variance.
Example 1.2.2. An insurance company uses a Poisson distribution to model the cost of repairing insured vehicles that are involved in accidents. Find the maximum likelihood estimate of the mean cost, given that the total cost of repairing a sample of 1000 vehicles was K22,000.
Solution:
Given that Σ x_i = 22,000 and n = 1000, and using the maximum likelihood method,
L(λ|x) = Π_{i=1}^{n} p(x_i|λ) = Π_{i=1}^{n} e^{−λ} λ^{x_i} / x_i! = e^{−nλ} λ^{Σ x_i} / Π x_i!
Taking the natural log on both sides we have
ln L(λ|x) = −nλ + (Σ x_i) ln λ − Σ ln(x_i!)
Differentiating with respect to λ,
∂ ln L(λ|x) / ∂λ = −n + (Σ x_i)/λ
Setting the derivative equal to zero gives
−n + (Σ x_i)/λ = 0
Solving for λ we get
λ̂ = (1/n) Σ x_i = X̄
Thus
λ̂ = 22000/1000 = 22
Therefore the maximum likelihood estimate of the mean cost is λ̂ = 22.
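As a sketch (scipy assumed), the same estimate can be obtained by numerically maximising the Poisson log-likelihood; the constant Σ ln(x_i!) is dropped since it does not involve λ:

# Numerically maximise the Poisson log-likelihood from Example 1.2.2.
import numpy as np
from scipy.optimize import minimize_scalar

n, sum_x = 1000, 22_000

def neg_log_lik(lam):
    # -[ -n*lambda + (sum x_i) * ln(lambda) ], constant term omitted
    return -(-n * lam + sum_x * np.log(lam))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), method="bounded")
print(res.x)    # approximately 22.0, i.e. the sample mean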
1.3 Gamma Distribution
Suppose a random variable X ∼ Gamma(α, β); then its probability density function is given by
f(x) = (β^α / Γ(α)) x^{α−1} e^{−βx},   x > 0,  α, β > 0,
where Γ(α) is the gamma function, defined as
Γ(α) = ∫_0^∞ x^{α−1} e^{−x} dx
When α is a positive integer, the cumulative distribution function is
F(x) = 1 − e^{−βx} Σ_{j=0}^{α−1} (βx)^j / j!,   x ≥ 0.
The moments and moment generating function of the gamma distribution can be found by noting that ∫_0^∞ f(x) dx = 1 yields
∫_0^∞ x^{α−1} e^{−βx} dx = Γ(α) / β^α    (2)
so that
E[X^n] = Γ(α + n) / (β^n Γ(α))
For n = 1,
E[X] = Γ(α + 1) / (β Γ(α)) = α Γ(α) / (β Γ(α))
E[X] = α/β   as the mean,
and for n = 2,
E[X^2] = Γ(α + 2) / (β^2 Γ(α)) = α(α + 1) Γ(α) / (β^2 Γ(α))
E[X^2] = α(α + 1) / β^2
So that
Var[X] = E[X^2] − (E[X])^2
Var[X] = α(α + 1)/β^2 − (α/β)^2
Var[X] = α/β^2   as the variance.
We can find the moment generating function in a similar fashion:
M_X(t) = ∫_0^∞ e^{tx} (β^α / Γ(α)) x^{α−1} e^{−βx} dx
M_X(t) = (β^α / Γ(α)) ∫_0^∞ x^{α−1} e^{−(β−t)x} dx    (3)
M_X(t) = (β^α / Γ(α)) · Γ(α) / (β − t)^α
M_X(t) = β^α / (β − t)^α
M_X(t) = (β / (β − t))^α    (4)
NB: in equation (2), β > 0. Hence, in order to apply equation (2) to equation (3), we require that β − t > 0, so the moment generating function exists when t < β.
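The results above can be checked numerically; a minimal sketch using scipy (assumed available) and the illustrative values α = 2.5, β = 0.4 follows:

# Numerical check of the gamma mean, variance and MGF.
import numpy as np
from scipy.stats import gamma

alpha, beta = 2.5, 0.4                        # illustrative values only
dist = gamma(a=alpha, scale=1.0 / beta)       # scipy uses scale = 1/beta

print(dist.mean(), alpha / beta)              # both 6.25
print(dist.var(), alpha / beta**2)            # both 15.625

# M_X(t) = (beta/(beta - t))^alpha for t < beta, checked by Monte Carlo
t = 0.1
sample = dist.rvs(size=1_000_000, random_state=0)
print(np.exp(t * sample).mean(), (beta / (beta - t)) ** alpha)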
Example 1.3.1. Based on an analysis of past claims, an insurance company believes that individual claims in a particular category for the coming year will follow a gamma distribution with parameters α = 0.44 and β = 0.0008. Estimate the probability that a claim will exceed K25,000.
Solution:
Since X ∼ Gamma(α, β), we can use the approximation X ≈ N(α/β, α/β^2). Given α = 0.44 and β = 0.0008,
µ = α/β = 0.44/0.0008 = 550
σ^2 = α/β^2 = 0.44/(0.0008)^2 = 687500
We require P(X > 25000):
P((X − µ)/σ > (25000 − 550)/√687500)
P(Z > 29.49) = 1 − P(Z < 29.49)
= 1 − Φ(29.49)
= 1 − 1 = 0,   since Φ(29.49) ≈ 1.
Therefore, under this normal approximation, the estimated probability that a claim will exceed K25,000 is approximately 0; we conclude that claims exceeding K25,000 are extremely unlikely.
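For a gamma distribution with α this small the normal approximation is crude, since the distribution is heavily skewed; the sketch below (scipy assumed) compares the approximate and exact tail probabilities:

# Compare the normal approximation of Example 1.3.1 with the exact gamma tail.
from scipy.stats import gamma, norm

alpha, beta = 0.44, 0.0008
mu, var = alpha / beta, alpha / beta**2

p_normal = norm.sf(25_000, loc=mu, scale=var**0.5)      # approximation in the example
p_exact = gamma(a=alpha, scale=1.0 / beta).sf(25_000)   # exact tail probability

print(p_normal)   # essentially 0
print(p_exact)    # tiny, but many orders of magnitude larger than p_normal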
1.4 Exponential Distribution
The exponential distribution is a special case of the gamma distribution: it is simply a gamma distribution with α = 1. Hence, the exponential distribution with parameter β > 0 is obtained from the gamma distribution as follows. Since
f(x) = (β^α / Γ(α)) x^{α−1} e^{−βx},   x > 0,  α, β > 0,
letting α = 1 gives
f(x) = (β^1 / Γ(1)) x^{1−1} e^{−βx},   x > 0.
Since Γ(1) = 0! = 1, then
f(x) = β e^{−βx},   for x > 0,
as the probability density function, with cumulative distribution function
F(x) = 1 − e^{−βx},   x ≥ 0.
The nth moment of the distribution is
E[X^n] = n! / β^n    (5)
In a similar way, from equation (4) with α = 1,
M_X(t) = β / (β − t),   t < β,
is the moment generating function. From E[X] = α/β and Var[X] = α/β^2 with α = 1 we get the mean and the variance of the exponential distribution:
E[X] = 1/β
and
Var[X] = 1/β^2
respectively.
Example 1.4.1. The random variable X has an exponential distribution with mean 1/β. It is found that M_X(−β^2) = 0.2. Find β.
Solution:
Since X has an exponential distribution with mean 1/β, we have
M_X(t) = E[e^{tX}] = ∫_0^∞ e^{tx} f(x) dx = ∫_0^∞ e^{tx} β e^{−βx} dx = β ∫_0^∞ e^{−(β−t)x} dx
M_X(t) = β [ −e^{−(β−t)x} / (β − t) ]_0^∞ = β (0 + 1/(β − t)) = β / (β − t)
Then
M_X(−β^2) = β / (β + β^2) = 1 / (1 + β) = 0.2
⟹ 1 + β = 5
Therefore β = 4.
Example 1.4.2. ZISC General Insurance uses an exponential distribution to model the cost of repairing insured houses that are damaged by heavy rains. Find the maximum likelihood estimate of the mean cost, given that the average cost of repairing a sample of 50 houses was $2,200.
Solution:
The likelihood is
L(x|β) = Π_{i=1}^{n} β e^{−βx_i} = β^n e^{−β Σ x_i},
so that
ln L(x|β) = n ln β − β Σ x_i.
Differentiating with respect to β and setting the derivative equal to zero,
∂ ln L(x|β)/∂β = n/β − Σ x_i = 0
β̂ = n / Σ x_i = 1 / ((1/n) Σ x_i)
β̂ = 1/x̄
Since n = 50 and x̄ = 2,200, we have
β̂ = 1/2200
Therefore the maximum likelihood estimate is β̂ = 1/2200, corresponding to an estimated mean cost of 1/β̂ = $2,200.
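A minimal sketch of this estimate is shown below; the individual costs are not given, so a hypothetical sample with mean roughly $2,200 is simulated purely for illustration (numpy/scipy assumed):

# Exponential MLE for the mean repair cost (Example 1.4.2), on simulated data.
import numpy as np
from scipy.stats import expon

rng = np.random.default_rng(0)
costs = rng.exponential(scale=2_200, size=50)   # hypothetical repair costs

beta_hat = 1.0 / costs.mean()                   # MLE: beta_hat = 1 / x_bar

# Cross-check with scipy's fitted scale (scale = 1/beta), location fixed at 0
loc, scale = expon.fit(costs, floc=0)
print(beta_hat, 1.0 / scale)                    # identical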
1.5 Pareto Distribution
Suppose a random variable X has a Pareto distribution with parameters α > 0 and β > 0; then its density function is given by
f(x) = α β^α / (β + x)^{α+1},   x > 0.
By integrating this density function we find that the cumulative distribution function is
F(x) = 1 − (β / (β + x))^α,   x ≥ 0.
NB: for the generalised (three-parameter) Pareto distribution the density function is given by
f(x) = Γ(α + k) β^α x^{k−1} / (Γ(α) Γ(k) (β + x)^{α+k}),   x > 0,  α, β, k > 0.
Whenever moments of the distribution exist they can be found from
E[X^n] = ∫_0^∞ x^n f(x) dx
using integration by parts. Alternatively, since the integral of the density function over (0, ∞) equals 1, we have the identity
∫_0^∞ dx / (β + x)^{α+1} = 1 / (α β^α),
which holds provided that α > 0. The mean and variance of the Pareto distribution are given by
E[X] = β / (α − 1)   (for α > 1)   and   Var[X] = α β^2 / ((α − 1)^2 (α − 2))   (for α > 2)
respectively.
The Pareto distribution is often an appropriate model for the claim size distribution, particularly where exceptionally large claims may occur.
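As a sketch (scipy assumed), these formulas can be checked against scipy's "lomax" distribution, which has exactly this density; the parameter values α = 3, β = 200 are illustrative only:

# Check the Pareto mean and variance formulas against scipy's lomax distribution.
from scipy.stats import lomax

alpha, beta = 3.0, 200.0                       # illustrative values only
dist = lomax(c=alpha, scale=beta)              # pdf: alpha*beta^alpha/(beta+x)^(alpha+1)

print(dist.mean(), beta / (alpha - 1))                                # 100.0
print(dist.var(), alpha * beta**2 / ((alpha - 1)**2 * (alpha - 2)))   # 30000.0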
Example 1.5.1. A random sample of claims with n = 20, from a distribution believed to be Pareto with parameters α and β, gives values such that
Σ x = 1508    Σ x^2 = 257212
Estimate α and β using the method of moments.
Solution:
Matching the first moment,
E[X] = β / (α − 1) = (1/n) Σ x_i = 1508/20 = 75.4,
so that β = 75.4(α − 1).    (6)
Also, matching the second moment,
E[X^2] = 2β^2 / ((α − 1)(α − 2)) = (1/n) Σ x_i^2 = 257212/20 = 12860.6
⟹ 2β^2 / ((α − 1)(α − 2)) = 12860.6    (7)
Substituting β = 75.4(α − 1) into (7) gives
2(75.4)^2 (α − 1) / (α − 2) = 12860.6,
which solves to α̂ ≈ 9.63, and hence β̂ = 75.4(α̂ − 1) ≈ 650.7.
NB: the sample variance could be matched to Var[X] = αβ^2 / ((α − 1)^2 (α − 2)) instead of using E[X^2].
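A small numerical sketch of this calculation (scipy assumed) is given below:

# Method-of-moments estimates for the Pareto parameters in Example 1.5.1.
from scipy.optimize import brentq

n, sum_x, sum_x2 = 20, 1508.0, 257212.0
m1, m2 = sum_x / n, sum_x2 / n                 # 75.4 and 12860.6

# With beta = m1*(alpha - 1), the second-moment equation becomes
# 2*m1^2*(alpha - 1)/(alpha - 2) = m2
f = lambda a: 2 * m1**2 * (a - 1) / (a - 2) - m2
alpha_hat = brentq(f, 2.01, 100.0)             # root-find for alpha > 2
beta_hat = m1 * (alpha_hat - 1)
print(alpha_hat, beta_hat)                     # roughly 9.63 and 650.7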
1.6 Normal Distribution
When a random variable X has a normal distribution with parameters µ and σ^2, its density function is given by
f(x) = (1 / (σ√(2π))) e^{−(x−µ)^2 / (2σ^2)},   −∞ < x < +∞.
We use the notation N(µ, σ^2) to denote a normal distribution with parameters µ and σ^2. The standard normal distribution has parameters 0 and 1, and its distribution function is denoted Φ, where
Φ(z) = ∫_{−∞}^{z} (1 / √(2π)) e^{−t^2/2} dt.
A key relationship is that if X ∼ N(µ, σ^2) and Z = (X − µ)/σ, then Z ∼ N(0, 1).
The moment generating function of the normal distribution is
M_X(t) = e^{µt + σ^2 t^2 / 2}
Example 1.6.1. Derive the formula for the MGF of the standard normal distribution.
Solution:
M_X(t) = E[e^{tX}] = ∫_{−∞}^{+∞} e^{tx} (1/√(2π)) e^{−x^2/2} dx = ∫_{−∞}^{+∞} (1/√(2π)) e^{−(x^2 − 2tx)/2} dx
Completing the square in the exponent,
M_X(t) = ∫_{−∞}^{+∞} (1/√(2π)) e^{−(x^2 − 2tx + t^2 − t^2)/2} dx = e^{t^2/2} ∫_{−∞}^{+∞} (1/√(2π)) e^{−(x−t)^2/2} dx.
The remaining integral is just the area under the PDF of a N(t, 1) distribution, which equals 1. Hence the MGF of the standard normal distribution is
M_X(t) = e^{t^2/2} (1) = e^{t^2/2}
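As a quick sketch, this MGF can be checked by Monte Carlo at an arbitrary point, here t = 0.7 (an illustrative choice); numpy assumed:

# Monte Carlo check of M(t) = exp(t^2/2) for the standard normal.
import numpy as np

rng = np.random.default_rng(3)
t = 0.7
z = rng.standard_normal(1_000_000)
print(np.exp(t * z).mean(), np.exp(0.5 * t**2))   # agree to a few decimal places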
1.7 Lognormal Distribution
Suppose a random variable X has a lognormal distribution with parameters µ and σ, where −∞ < µ < +∞ and σ > 0; then its density function is given by
f(x) = (1 / (xσ√(2π))) e^{−(log x − µ)^2 / (2σ^2)},   x > 0.
The distribution function (CDF) is obtained by integrating the density function as follows:
F(x) = ∫_0^x (1 / (yσ√(2π))) e^{−(log y − µ)^2 / (2σ^2)} dy
Substituting z = log y (so that dz = dy/y) gives
F(x) = ∫_{−∞}^{log x} (1 / (σ√(2π))) e^{−(z − µ)^2 / (2σ^2)} dz
F(x) = Φ((log x − µ)/σ).
Hence probabilities under the lognormal distribution can be calculated from the standard normal distribution function. We use the notation LN(µ, σ^2) to denote a lognormal distribution with parameters µ and σ.
From the preceding argument it follows that if X ∼ LN(µ, σ^2) then log X ∼ N(µ, σ^2). This relationship between the normal and lognormal distributions is extremely useful, particularly in deriving moments. If X ∼ LN(µ, σ^2) and Y = log X, then
E[X^n] = E[e^{nY}] = M_Y(n) = e^{µn + σ^2 n^2 / 2}
The lognormal distribution is very useful in modelling claim sizes. For large σ its tail is (semi-)heavy, heavier than that of the exponential distribution. For small σ the lognormal resembles a normal distribution, although this is not always desirable.
Example 1.7.1. If the individual claim amount X follows a lognormal distribution with PDF
f(x) = (1 / (xσ√(2π))) exp[−(1/2)((log x − µ)/σ)^2],   x > 0,
derive a formula for the average claim amount.
Solution:
E[X] = ∫_0^∞ x f(x) dx = ∫_0^∞ x (1 / (xσ√(2π))) exp[−(1/2)((log x − µ)/σ)^2] dx
Letting z = (log x − µ)/σ, so that dz = dx/(xσ) and x = e^{µ + σz}, gives
E[X] = ∫_{−∞}^{+∞} e^{µ + σz} (1/√(2π)) e^{−z^2/2} dz
E[X] = e^{µ} ∫_{−∞}^{+∞} (1/√(2π)) e^{−(z^2 − 2σz)/2} dz
Completing the square to evaluate this integral, we have
E[X] = e^{µ} ∫_{−∞}^{+∞} (1/√(2π)) e^{−(z^2 − 2σz + σ^2 − σ^2)/2} dz
E[X] = e^{µ + σ^2/2} ∫_{−∞}^{+∞} (1/√(2π)) e^{−(z − σ)^2/2} dz
The function in the last integral is the PDF of a random variable with a N(σ, 1) distribution, so the integral equals 1. Hence
E[X] = e^{µ + σ^2/2} (1) = e^{µ + σ^2/2}
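A brief numerical check of this formula (scipy assumed, with illustrative values µ = 7, σ = 0.8 that are not from the notes):

# Check E[X] = exp(mu + sigma^2/2) for the lognormal distribution.
import numpy as np
from scipy.stats import lognorm

mu, sigma = 7.0, 0.8
dist = lognorm(s=sigma, scale=np.exp(mu))     # scipy: s = sigma, scale = exp(mu)

print(dist.mean())                            # numerical mean
print(np.exp(mu + 0.5 * sigma**2))            # formula derived above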
1.8 Weibull Distribution
Suppose a random variable X has a Weibull distribution with parameters β > 0 and γ > 0; then its density function, distribution function and moments are given by
f(x) = γβ x^{γ−1} e^{−βx^γ},   x > 0,
F(x) = 1 − e^{−βx^γ},   x > 0,
E[X^n] = β^{−n/γ} Γ(1 + n/γ).
When both parameters of the Weibull distribution are unknown, the maximum likelihood and method of moments estimators can only be evaluated numerically.
Example 1.8.1. A random sample of 100 claim amounts x_1, x_2, . . . , x_100 is observed from a Weibull distribution with parameter γ = 2, where β is unknown. For these data, Σ x_i = 487,926, Σ x_i^2 = 976,444,000 and the sample median is 4,500.
(a) Determine the maximum likelihood estimator for β. Hence estimate the value of β.
(b) Estimate the value of β using the method of moments.
(c) Calculate the method of percentiles estimate of β.
Solution:
(a) Since
f(x) = γβ x^{γ−1} e^{−βx^γ},   x > 0,
and γ = 2, we have
f(x) = 2βx e^{−βx^2},   x > 0.
The likelihood function is
L(x|β) = Π_{i=1}^{n} f(x_i|β) = Π_{i=1}^{n} 2βx_i e^{−βx_i^2} = 2^n β^n (Π x_i) e^{−β Σ x_i^2}
Taking the natural log we have
ln L(x|β) = n ln 2 + n ln β + Σ ln x_i − β Σ x_i^2
Differentiating with respect to β,
∂ ln L(x|β)/∂β = n/β − Σ x_i^2
Equating to 0 and solving for β,
β̂ = n / Σ x_i^2
Since n = 100 and Σ x_i^2 = 976,444,000, then
β̂ = 100/976444000 = 1.024 × 10^{−7}
(b) Since
E[X] = β^{−1/γ} Γ(1 + 1/γ)
and γ = 2, setting E[X] equal to the sample mean gives
β^{−1/2} Γ(1 + 1/2) = (1/n) Σ x_i = 487926/100 = 4879.26
Using Γ(1.5) = (1/2) Γ(1/2) = √π / 2, then
β^{−1/2} (√π / 2) = 4879.26
Solving for β we get
β̂ = (4879.26 / ((1/2)√π))^{−2} = 3.299 × 10^{−8}
Therefore β̂ = 3.299 × 10^{−8}.
(c) For the median M, which satisfies F(M) = 0.5, the CDF gives
F(M) = 1 − e^{−βM^2} = 0.5
Setting M equal to the sample median of 4,500,
1 − e^{−β(4500)^2} = 0.5
e^{−β(4500)^2} = 0.5
Taking the natural log and solving for β we get
β̂ = −ln 0.5 / (4500)^2 = 3.423 × 10^{−8}
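The three estimates can be reproduced with a few lines of numpy (a sketch, using the summary statistics quoted in the solution):

# MLE, method of moments and method of percentiles estimates of beta (gamma = 2).
import numpy as np

n = 100
sum_x, sum_x2, median = 487_926.0, 976_444_000.0, 4_500.0

beta_mle = n / sum_x2                                      # part (a)
beta_mom = ((sum_x / n) / (0.5 * np.sqrt(np.pi))) ** -2    # part (b)
beta_pct = np.log(2.0) / median**2                         # part (c): -ln(0.5)/M^2

print(beta_mle, beta_mom, beta_pct)   # ~1.02e-7, ~3.30e-8, ~3.42e-8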
1.9 Burr Distribution
Suppose a random variable X has a Burr distribution with parameters α, β, γ > 0; then its density and distribution functions are given by
f(x) = αγβ^α x^{γ−1} / (β + x^γ)^{α+1},   x > 0,
and
F(x) = 1 − (β / (β + x^γ))^α,   x > 0.
The nth raw moment,
E[X^n] = β^{n/γ} Γ(1 + n/γ) Γ(α − n/γ) / Γ(α),
exists only for n < γα. As with the Weibull distribution, there is no simple closed form for the moment generating function. The maximum likelihood and method of moments estimators for the Burr distribution can only be evaluated numerically.
Example 1.9.1. A random variable X has a Burr distribution with parameters γ = 2 and β = 500. Prove that the maximum likelihood estimate of the parameter α, based on a random sample x_1, x_2, . . . , x_n, is given by
α̂ = n / (Σ ln(500 + x_i^2) − n ln 500).
Hence evaluate this estimator based on a sample consisting of the six values 53, 54, 109, 114, 163 and 181.
Solution:
The log-likelihood is
ln L(α) = n ln α + n ln γ + nα ln β + (γ − 1) Σ ln x_i − (α + 1) Σ ln(β + x_i^γ).
Differentiating with respect to α,
∂ ln L(α)/∂α = n/α + n ln β − Σ ln(β + x_i^γ)
Equating to 0 and solving for α we get
α̂ = n / (Σ ln(β + x_i^γ) − n ln β)
Since γ = 2 and β = 500, we have
α̂ = n / (Σ ln(500 + x_i^2) − n ln 500),   hence proven.
Given the sample 53, 54, 109, 114, 163 and 181, we have
Σ ln(500 + x_i^2) ≈ 55.793,
so that
α̂ = 6 / (55.793 − 6 ln 500) ≈ 0.324.
Therefore α̂ ≈ 0.324.
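A short script reproducing this calculation (numpy assumed):

# Maximum likelihood estimate of alpha for the Burr sample in Example 1.9.1.
import numpy as np

x = np.array([53, 54, 109, 114, 163, 181], dtype=float)
n = len(x)

s = np.sum(np.log(500 + x**2))
alpha_hat = n / (s - n * np.log(500))
print(s, alpha_hat)          # roughly 55.79 and 0.324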
1.10 Mixture Distributions
The exponential distribution is one of the simplest models for insurance losses. Suppose that each individual in a large insurance portfolio incurs losses according to an exponential distribution.
Practical knowledge of almost any insurance portfolio reveals that the means of these various distributions will differ among the policyholders. Thus the description of the losses in the portfolio is that each loss follows its own exponential distribution, i.e. the exponential distributions have means that differ from individual to individual.
A description of the variation among the individual means must now be found. One way to do this is to assume that the exponential means themselves follow a distribution. In the exponential case, it is convenient to make the following assumption. Let
λ_i = 1/θ_i
be the reciprocal of the mean loss for the ith policyholder. Suppose we assume that the variation among the λ_i can be described by a known gamma distribution Gamma(α, β), i.e. assume that λ ∼ Gamma(α, β) with density
f(λ) = (β^α / Γ(α)) λ^{α−1} e^{−βλ},   λ > 0.
Note that the PDF is in terms of λ, with known values of α and β. This formulation has much in common with that used in Bayesian estimation. The fundamental idea in Bayesian estimation is that the parameter of interest (here Λ) can be treated as a random variable with a known distribution. Note, however, that the purpose here is not to estimate the individual λ_i, but to describe the aggregate losses over the whole portfolio.
Estimation of the individual λ_i can be treated by Bayesian estimation, in which case the Gamma(α, β) distribution would be referred to as a prior distribution. In this problem of describing the losses over the whole portfolio, the Gamma(α, β) distribution is used to average the exponential distributions; the Gamma(α, β) distribution is referred to as the mixing distribution and the resulting loss distribution as a mixture distribution.
The random variable X represents the amount of a single randomly selected claim, and E[X] represents the mean claim amount over all risks in the portfolio.
To find the overall distribution of claim amounts, we need to work out the marginal distribution of X. This is obtained by integrating the joint density function f_{X,λ} over all possible values of λ.
The PDF of the mixture (or marginal) distribution of X is
f_X(x) = ∫_0^∞ f_{X,λ}(x, λ) dλ
= ∫_0^∞ f_λ(λ) f_{X|λ}(x|λ) dλ
= ∫_0^∞ (β^α / Γ(α)) λ^{α−1} e^{−βλ} · λ e^{−λx} dλ
= (β^α / Γ(α)) ∫_0^∞ λ^α e^{−(x+β)λ} dλ
We now make the integrand look like the PDF of a Gamma(α + 1, x + β) distribution:
f_X(x) = (β^α / Γ(α)) · (Γ(α + 1) / (x + β)^{α+1}) ∫_0^∞ ((x + β)^{α+1} / Γ(α + 1)) λ^α e^{−(x+β)λ} dλ
Integrating this PDF over all possible values of λ yields 1, so that
f_X(x) = (β^α / Γ(α)) · Γ(α + 1) / (x + β)^{α+1}
f_X(x) = α β^α / (x + β)^{α+1},   x > 0,
which can be recognised as the PDF of the Pareto distribution Pa(α, β). This result gives a very nice interpretation of the Pareto distribution: Pa(α, β) arises when exponentially distributed losses are averaged using a Gamma(α, β) mixing distribution.
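A simulation sketch of this result (numpy/scipy assumed; parameter values illustrative): exponential losses whose rates are drawn from a Gamma(α, β) distribution should be indistinguishable from a Pa(α, β) sample.

# Simulate the exponential-gamma mixture and compare with the Pareto distribution.
import numpy as np
from scipy.stats import lomax, kstest

rng = np.random.default_rng(1)
alpha, beta = 3.0, 200.0                                  # illustrative values
n = 100_000

lam = rng.gamma(shape=alpha, scale=1.0 / beta, size=n)    # mixing distribution
x = rng.exponential(scale=1.0 / lam)                      # one loss per policy

# scipy's lomax(c=alpha, scale=beta) is the Pareto Pa(alpha, beta) above;
# the KS test should show no evidence against the Pareto fit
print(kstest(x, lomax(c=alpha, scale=beta).cdf))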
Example 1.10.1. The annual number of claims on an individual policy in a portfolio has a Poisson(µ) distribution. The variability in µ among policies is modelled by assuming that, over the portfolio, individual values of µ have a Gamma(α, β) distribution. Derive the mixture distribution for the annual number of claims from a policy in the portfolio.
Solution:
P(X = x) = ∫_0^∞ (e^{−µ} µ^x / x!) (β^α / Γ(α)) µ^{α−1} e^{−βµ} dµ
We make this integral look like that of a Gamma(x + α, β + 1) distribution:
P(X = x) = (β^α / (x! Γ(α))) · (Γ(x + α) / (β + 1)^{x+α}) ∫_0^∞ ((β + 1)^{x+α} / Γ(x + α)) µ^{x+α−1} e^{−(β+1)µ} dµ
The integral equals 1, so that
P(X = x) = (Γ(x + α) / (x! Γ(α))) · β^α / (β + 1)^{x+α},   x = 0, 1, 2, . . .
P(X = x) = (x + α − 1 choose x) (β/(β + 1))^α (1/(β + 1))^x
This is a negative binomial distribution with parameters p = β/(β + 1) and k = α.
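A simulation sketch of this result (numpy/scipy assumed; parameter values illustrative):

# Poisson claim counts with Gamma(alpha, beta) means follow a negative binomial.
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(2)
alpha, beta = 2.0, 0.5                                    # illustrative values
n = 100_000

mu = rng.gamma(shape=alpha, scale=1.0 / beta, size=n)     # mixing distribution
counts = rng.poisson(mu)                                  # one claim count per policy

# Compare empirical frequencies with nbinom(k = alpha, p = beta/(beta + 1))
for x in range(5):
    print(x, np.mean(counts == x), nbinom.pmf(x, alpha, beta / (beta + 1)))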
Summary:
- Individual claim amounts can be modelled using a loss distribution. Loss distributions are often positively skewed and long-tailed.
- Distributions such as the exponential, normal, lognormal, gamma, Pareto, generalised Pareto, Burr or Weibull distributions are commonly used to model individual claim amounts.
- Once the form of the loss distribution has been decided upon, the values of the parameters must be estimated. This may be done using the method of maximum likelihood, the method of moments or the method of percentiles. Goodness of fit can then be checked using a chi-square test.