
Parameter Estimation

Mahbub Latif, PhD

February 2025
2
Plan
Point estimates and maximum likelihood estimators
Interval estimates
Confidence intervals for population means and proportions

3
Introduction

Let X_1, …, X_n be a random sample from a distribution F_θ, where θ is a
vector of unknown parameters

The sample could be from a Poisson distribution with θ = λ
It could be from a normal distribution with θ = (μ, σ²)

In probability theory, the parameters of a distribution are assumed to be
known
In statistics, observed data are used to make inferences about the unknown
parameters

4
Estimation

There are two types of statistical methods: descriptive statistics and
statistical inference

Descriptive statistics deals with summarizing observed data using graphical
tools or numeric values
Inferential statistics deals with making inferences about population
parameters using observed data
There are two approaches to statistical inference: estimation and tests of
hypotheses

5
Estimation

Parameters of a distribution are estimated using methods of estimation

There are two types of estimation methods: point estimation and interval
estimation
Testing of hypotheses deals with drawing conclusions about a particular
statement concerning a population (probability distribution)

6
Point estimates

7
Maximum likelihood estimators

Any statistic used to estimate the value of a parameter θ is known as an
estimator of θ
The observed value of an estimator is called an estimate
Suppose we have a random sample of three observations X_1 = 10, X_2 = 7,
and X_3 = 4 from a population with mean μ
The sample mean is an estimator of μ

    X̄ = (1/n) ∑_{i=1}^n X_i

For this sample, x̄ = 21/3 = 7 is an estimate
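The estimator/estimate distinction can be illustrated with a short Python sketch (the function name `sample_mean` is ours, not from the slides):

```python
# The function is the estimator; its value on observed data is the estimate.
def sample_mean(xs):
    """Estimator X-bar = (1/n) * sum(X_i)."""
    return sum(xs) / len(xs)

# Observed sample X_1 = 10, X_2 = 7, X_3 = 4 from the slide
estimate = sample_mean([10, 7, 4])
print(estimate)  # 7.0
```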

8
Maximum likelihood estimators

Let X_1, …, X_n be a random sample from f(x; θ), where θ is the unknown
parameter vector
The likelihood function of θ

    L(θ | x) = f(x_1, …, x_n; θ) = ∏_{i=1}^n f(x_i; θ)

The maximum likelihood estimator (mle) θ̂ is the value of θ that maximizes the
likelihood function L(θ | x), or equivalently log L(θ | x)

    θ̂ = arg max_{θ ∈ Ω} L(θ | x)
9
Exponential distribution

Let X_1, …, X_n be a random sample from an exponential distribution with
parameter θ, i.e. f(x; θ) = (1/θ) e^{−x/θ}
The likelihood function of θ

    L(θ | x) = ∏_{i=1}^n (1/θ) e^{−x_i/θ} = (1/θ^n) exp(−∑_{i=1}^n x_i / θ)

The log-likelihood function

    log L(θ | x) = −n log θ − ∑_{i=1}^n x_i / θ

10
Exponential distribution

The maximum likelihood estimator of the exponential parameter θ

    θ̂ = arg max_{θ ∈ Ω} log L(θ | x)

So we can write

    (d/dθ) log L(θ | x) |_{θ = θ̂} = 0

    −n/θ̂ + (∑_{i=1}^n x_i)/θ̂² = 0  ⇒  θ̂ = ∑_{i=1}^n x_i / n = x̄

11
Exponential distribution

Let {4.6, 3.6, 0.5, 15.5, 3.1} be a random sample from an exponential
distribution with parameter θ.
The mle of θ

    θ̂ = (1/5) ∑ x_i = 27.3/5 = 5.46
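The closed-form result θ̂ = x̄ can be checked numerically. The sketch below (stdlib only; the crude grid search is our illustration, not part of the slides) compares x̄ with the maximizer of the exponential log-likelihood:

```python
import math

def exp_loglik(theta, xs):
    """Exponential log-likelihood: -n log(theta) - sum(x_i)/theta."""
    n = len(xs)
    return -n * math.log(theta) - sum(xs) / theta

xs = [4.6, 3.6, 0.5, 15.5, 3.1]
mle_closed_form = sum(xs) / len(xs)  # theta-hat = x-bar = 5.46

# Crude grid search over candidate theta values 0.01 .. 20.00
grid = [0.01 * k for k in range(1, 2001)]
mle_grid = max(grid, key=lambda t: exp_loglik(t, xs))

print(mle_closed_form, mle_grid)  # both near 5.46
```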

12
Bernoulli distribution

Let X_1, …, X_n be a random sample from a Bernoulli distribution with
parameter p, i.e. f(x; p) = p^x (1 − p)^{1−x}
The likelihood and log-likelihood functions

    L(p | x) = ∏_{i=1}^n p^{x_i} (1 − p)^{1−x_i} = p^{∑_{i=1}^n x_i} (1 − p)^{n − ∑_{i=1}^n x_i}

    log L(p | x) = ∑_{i=1}^n x_i log p + (n − ∑_{i=1}^n x_i) log(1 − p)

Show that the mle p̂ = ∑_{i=1}^n x_i / n = x̄

13
Bernoulli distribution

Let {0, 0, 0, 0, 1, 0, 0, 1, 1, 0} be a random sample from a Bernoulli
distribution with parameter p

Obtain the mle of p
Calculate the value of the log-likelihood function at the mle and at two
other values of p, and compare the results
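One way to carry out this exercise in Python (stdlib only; the two comparison values 0.1 and 0.5 are our choice):

```python
import math

def bern_loglik(p, xs):
    """Bernoulli log-likelihood: sum(x) log p + (n - sum(x)) log(1 - p)."""
    s, n = sum(xs), len(xs)
    return s * math.log(p) + (n - s) * math.log(1 - p)

xs = [0, 0, 0, 0, 1, 0, 0, 1, 1, 0]
p_hat = sum(xs) / len(xs)  # mle p-hat = 3/10 = 0.3

for p in (p_hat, 0.1, 0.5):
    print(p, bern_loglik(p, xs))
# The log-likelihood is largest at p-hat = 0.3
```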

14
Poisson distribution

Let X_1, …, X_n be a random sample from a Poisson distribution with
parameter λ, i.e. f(x; λ) = e^{−λ} λ^x / x!,  x = 0, 1, …
The likelihood and log-likelihood functions

    L(λ | x) = ∏_{i=1}^n e^{−λ} λ^{x_i} / x_i! = e^{−nλ} λ^{∑_i x_i} / ∏_i x_i!

    log L(λ | x) = −nλ + ∑_{i=1}^n x_i log λ − log ∏_i x_i!
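As in the exponential case, setting the derivative of this log-likelihood to zero gives λ̂ = x̄. A quick numerical check (the count data here are made up for illustration):

```python
import math

def pois_loglik(lam, xs):
    """Poisson log-likelihood, dropping the constant -log(prod x_i!)."""
    return -len(xs) * lam + sum(xs) * math.log(lam)

xs = [2, 0, 3, 1, 4]         # hypothetical observed counts
lam_hat = sum(xs) / len(xs)  # x-bar = 2.0

grid = [0.01 * k for k in range(1, 1001)]
lam_grid = max(grid, key=lambda l: pois_loglik(l, xs))
print(lam_hat, lam_grid)  # both near 2.0
```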

15
The normal distribution

Let X_1, …, X_n be a random sample from N(μ, σ²)
The likelihood and log-likelihood functions

    L(μ, σ² | x) = ∏_{i=1}^n (1/√(2πσ²)) e^{−(x_i − μ)²/(2σ²)}
                 = (2πσ²)^{−n/2} exp(−(1/(2σ²)) ∑_{i=1}^n (x_i − μ)²)

    log L(μ, σ² | x) = −(n/2)(log 2π + log σ²) − (1/(2σ²)) ∑_{i=1}^n (x_i − μ)²

16
The normal distribution

The log-likelihood function

    log L(μ, σ² | x) = −(n/2)(log 2π + log σ²) − (1/(2σ²)) ∑_{i=1}^n (x_i − μ)²

The score functions

    d log L(μ, σ | x)/dμ = (1/σ²) ∑_{i=1}^n (x_i − μ)

    d log L(μ, σ | x)/dσ² = −n/(2σ²) + (1/(2(σ²)²)) ∑_{i=1}^n (x_i − μ)²

17
The normal distribution
Setting the score functions to zero at (μ̂, σ̂²):

    d log L(μ, σ | x)/dμ |_{μ=μ̂, σ=σ̂} = (1/σ̂²) ∑_{i=1}^n (x_i − μ̂) = 0

        ⇒ μ̂ = x̄

    d log L(μ, σ | x)/dσ² |_{μ=μ̂, σ=σ̂} = −n/(2σ̂²) + (1/(2(σ̂²)²)) ∑_{i=1}^n (x_i − μ̂)² = 0

        ⇒ σ̂² = (1/n) ∑_{i=1}^n (x_i − μ̂)²
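These two estimators are easy to compute directly; note that σ̂² divides by n, not n − 1. A sketch using the signal data that appears later in Example 7.3a:

```python
def normal_mle(xs):
    """Normal MLEs: mu-hat = x-bar; sigma2-hat = (1/n) sum (x_i - x-bar)^2."""
    n = len(xs)
    mu_hat = sum(xs) / n
    sigma2_hat = sum((x - mu_hat) ** 2 for x in xs) / n  # n divisor, not n - 1
    return mu_hat, sigma2_hat

xs = [5, 8.5, 12, 15, 7, 9, 7.5, 6.5, 10.5]
mu_hat, sigma2_hat = normal_mle(xs)
print(mu_hat, sigma2_hat)  # 9.0 and 76/9; the unbiased s^2 would be 76/8 = 9.5
```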

18
Interval estimates

19
Interval estimates

A point estimate of a parameter is a single value, e.g. the sample mean X̄ is
a point estimate of the population mean μ
An interval estimate of a parameter is a range of values that contains the
parameter with a certain degree of confidence

20
Interval estimate of population mean

The sample mean X̄ is a point estimate of μ
Sampling distribution: X̄ ∼ N(μ, σ²/n), and

    Z = (X̄ − μ)/(σ/√n),    P(−z_{α/2} < Z < z_{α/2}) = 1 − α

21
Interval estimate of population mean

    P(−z_{α/2} < Z < z_{α/2}) = 1 − α

    P(−z_{α/2} < (X̄ − μ)/(σ/√n) < z_{α/2}) = 1 − α

    P(X̄ − z_{α/2}(σ/√n) < μ < X̄ + z_{α/2}(σ/√n)) = 1 − α

100(1 − α)% two-sided confidence interval for μ

    X̄ ± z_{α/2}(σ/√n)

22
Interval estimate of population mean

100(1 − α)% two-sided confidence interval for μ

    X̄ ± z_{α/2}(σ/√n)

95% confidence interval for μ

    100(1 − α)% = 95% ⇒ α = 0.05

    z_{.05/2} = z_{.025} = 1.96

    X̄ ± 1.96(σ/√n)

23
Interval estimate of population mean

100(1 − α)% one-sided lower confidence interval for μ

    P(Z > −z_α) = 1 − α ⇒ P(μ < X̄ + z_α(σ/√n)) = 1 − α

    (−∞, X̄ + z_α(σ/√n))

100(1 − α)% one-sided upper confidence interval for μ

    P(Z < z_α) = 1 − α ⇒ P(μ > X̄ − z_α(σ/√n)) = 1 − α

    (X̄ − z_α(σ/√n), ∞)

24
Example 7.3a

Suppose that when a signal having value μ is transmitted from location A,
the value received at location B is normally distributed with mean μ and
variance 4.
The successive values received are

    5, 8.5, 12, 15, 7, 9, 7.5, 6.5, 10.5

Construct a 95 percent two-sided confidence interval for μ.

25
Example 7.3a

Signal received: X ∼ N(μ, 4)
The successive values received are

    5, 8.5, 12, 15, 7, 9, 7.5, 6.5, 10.5

The sample mean x̄ = 81/9 = 9
95% two-sided confidence interval for μ

    x̄ ± z_{.025}(σ/√n) = 9 ± 1.96(2/√9) = (7.69, 10.31)

We are 95% confident that the interval (7.69, 10.31) contains the true
message value!
Also construct 95 percent one-sided lower and upper confidence intervals for μ.
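The interval can be reproduced in a few lines (stdlib only, with z_{.025} = 1.96 as on the slide):

```python
import math

def z_ci(xs, sigma, z=1.96):
    """Two-sided 95% CI for mu with known sigma: x-bar +/- z * sigma / sqrt(n)."""
    n = len(xs)
    xbar = sum(xs) / n
    half = z * sigma / math.sqrt(n)
    return xbar - half, xbar + half

xs = [5, 8.5, 12, 15, 7, 9, 7.5, 6.5, 10.5]
lo, hi = z_ci(xs, sigma=2.0)  # variance 4, so sigma = 2
print(round(lo, 2), round(hi, 2))  # 7.69 10.31
```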
26
Confidence interval for a normal population mean when variance is
unknown

Let X_1, …, X_n be a random sample from a normal distribution with mean
μ and variance σ², where both parameters are unknown

    Z = (X̄ − μ)/(σ/√n)  ⇒  t = (X̄ − μ)/(S/√n)

100(1 − α)% confidence interval for μ

    P(−t_{n−1,α/2} < t < t_{n−1,α/2}) = 1 − α

    P(x̄ − t_{n−1,α/2}(s/√n) < μ < x̄ + t_{n−1,α/2}(s/√n)) = 1 − α

27
Example 7.3a

Signal received: X ∼ N(μ, σ²)
The successive values received are

    5, 8.5, 12, 15, 7, 9, 7.5, 6.5, 10.5

The estimates: x̄ = 9, s² = 9.5
95% two-sided CI for μ?
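A sketch of how this interval could be computed, taking t_{8,.025} = 2.306 from a t table (n − 1 = 8 degrees of freedom):

```python
import math

def t_ci(xs, t_crit):
    """Two-sided CI for mu with unknown sigma: x-bar +/- t * s / sqrt(n)."""
    n = len(xs)
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)  # unbiased s^2
    half = t_crit * math.sqrt(s2 / n)
    return xbar - half, xbar + half

xs = [5, 8.5, 12, 15, 7, 9, 7.5, 6.5, 10.5]
lo, hi = t_ci(xs, t_crit=2.306)  # t_{8, .025} = 2.306 from a t table
print(round(lo, 2), round(hi, 2))  # 6.63 11.37
```

The interval is wider than the known-variance interval (7.69, 10.31), reflecting the extra uncertainty from estimating σ².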

28
Estimating the difference in means of two normal populations

Let X_1, …, X_n be a sample of size n from a normal population having
mean μ_1 and variance σ_1²
Let Y_1, …, Y_m be a sample of size m from a different normal population
having mean μ_2 and variance σ_2²
We are interested in estimating

    μ_1 − μ_2

29
Estimating the difference in means of two normal populations

Maximum likelihood estimators

    μ̂_1 = X̄ = (1/n) ∑_i X_i,   μ̂_2 = Ȳ = (1/m) ∑_i Y_i,   μ̂_1 − μ̂_2 = X̄ − Ȳ

Sampling distributions

    X̄ ∼ N(μ_1, σ_1²/n)
    Ȳ ∼ N(μ_2, σ_2²/m)
    X̄ − Ȳ ∼ N(μ_1 − μ_2, σ_1²/n + σ_2²/m)

30
Estimating the difference in means of two normal populations

Since

    X̄ − Ȳ ∼ N(μ_1 − μ_2, σ_1²/n + σ_2²/m)

we have

    Z = (X̄ − Ȳ − (μ_1 − μ_2)) / √(σ_1²/n + σ_2²/m) ∼ N(0, 1)

It can be shown that

    P{−z_{α/2} < (X̄ − Ȳ − (μ_1 − μ_2)) / √(σ_1²/n + σ_2²/m) < z_{α/2}} = 1 − α
31
Estimating the difference in means of two normal populations

100(1 − α)% confidence interval for μ_1 − μ_2

    (x̄ − ȳ) ± z_{α/2} √(σ_1²/n + σ_2²/m)

If the population variances are assumed to be equal, i.e. σ_1² = σ_2² = σ²,
the 100(1 − α)% confidence interval for μ_1 − μ_2 becomes

    (x̄ − ȳ) ± z_{α/2} √(σ²/n + σ²/m) = (x̄ − ȳ) ± z_{α/2} σ √(1/n + 1/m)

32
Estimating the difference in means of two normal populations

If the common population variance σ² is unknown, we can estimate it from
the data as

    S_p² = ((n − 1)S_1² + (m − 1)S_2²) / (n + m − 2)

The 100(1 − α)% confidence interval for μ_1 − μ_2

    (x̄ − ȳ) ± t_{n+m−2,α/2} S_p √(1/n + 1/m)

33
Approximate confidence interval for the mean of a Bernoulli random
variable

Let X ∼ B(n, p), with point estimate p̂ = X/n and corresponding
variance Var(p̂) = p(1 − p)/n
For large n, approximately p̂ ∼ N(p, p(1 − p)/n), i.e.

    Z = (p̂ − p) / √(p(1 − p)/n) ∼ N(0, 1)

34
Approximate confidence interval for p

We can show that

    P(−z_{α/2} < (p̂ − p)/√(p(1 − p)/n) < z_{α/2}) ≈ 1 − α

The approximate 100(1 − α)% confidence interval for p

    P(p̂ − z_{α/2} √(p̂(1 − p̂)/n) < p < p̂ + z_{α/2} √(p̂(1 − p̂)/n)) ≈ 1 − α

35
Problems

1, 8, 9, 10, 13, 14, 17, 18, 41, 44, 48, 49,

36
