
BITS Pilani, Pilani Campus

Course No: MATH F113
Probability and Statistics

Chapter 6: Point Estimation

Sumanta Pasari
sumanta.pasari@pilani.bits-pilani.ac.in
Parameter Estimation
• Parameter estimation is one of the important steps
in statistical inference.
• It belongs to the subject of estimation theory.
• Why do we require parameter estimation?
• What are the different estimation methods?
• What are the desirable properties of an estimator?
• How to judge “how good is my estimator”?
• Two broad types: point estimation and interval estimation.
Estimator and estimate
• A statistic (which is a function of a random sample, and hence a random
variable) used to estimate the population parameter $\theta$ is called a point
estimator for $\theta$ and is denoted by $\hat{\theta}$.

• The value of the point estimator computed on a particular sample of given
size is called a point estimate for $\theta$.
Desirable Properties

1. $\hat{\theta}$ should be unbiased for $\theta$.
2. $\hat{\theta}$ should have a small variance for large sample size.
(MVUE: Minimum Variance Unbiased Estimator)

Unbiased estimator:
A point estimator $\hat{\theta}$ is an unbiased estimator for a population
parameter $\theta$ if $E(\hat{\theta}) = \theta$.
Point Estimator
Comments.
1. The sample mean, $\bar{X}$, is an unbiased estimator for $\mu$.
2. The sample variance, $S^2$, is an unbiased estimator for $\sigma^2$.
3. When $X$ is a binomial RV with parameters $n$ and $p$, the sample
proportion $\hat{p} = X/n$ is an unbiased estimator of $p$.

Standard error of the sample mean: $\sigma_{\bar{X}} = \dfrac{\sigma}{\sqrt{n}}$


Minimum Variance Unbiased Estimator

Among all estimators of the parameter $\theta$ that are unbiased, choose the one
that has the minimum variance. The resulting $\hat{\theta}$ is called the MVUE of $\theta$.

Theorem:
Let $X_1, X_2, \ldots, X_n$ be a random sample from a normal population with
mean $\mu$ and standard deviation $\sigma$. Then the estimator $\hat{\mu} = \bar{X}$
is the MVUE for $\mu$.


Method of Moments
• In the method of moments (MoM), we equate the observed sample moments
(about the origin) with the corresponding population moments (about the origin).
• If the distribution has k parameters, the first k sample moments are equated
with the first k population moments, yielding k equations. Solving these k
equations gives the required parameter estimates.


Example: Method of Moments

Ex.1. Use the method of moments to estimate the parameter of the exponential
distribution
$$f(x; \beta) = \frac{1}{\beta} e^{-x/\beta}; \quad x > 0, \ \beta > 0$$
Sol.
Step 1: Find $E(X) = \beta$.
Step 2: Find the first sample moment, $M_1 = \dfrac{1}{n}\sum_{i=1}^{n} X_i$.
Step 3: Equate the first sample moment with the first population moment:
$$\beta = \frac{1}{n}\sum_{i=1}^{n} X_i \ \Rightarrow\ \hat{\beta} = \bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$$
Is the estimator $\hat{\beta}$ unbiased?
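A minimal numerical sketch of Steps 1-3 (not part of the slides): it simulates exponential data with numpy and checks that the MoM estimate $\hat{\beta} = \bar{X}$ lands close to the true $\beta$. The value of beta_true, the sample size, and the seed are illustrative choices.

```python
import numpy as np

# Minimal sketch of MoM for the exponential distribution
# f(x; beta) = (1/beta) * exp(-x/beta), for which E(X) = beta.
rng = np.random.default_rng(0)
beta_true = 2.5                          # illustrative value, not from the slides
x = rng.exponential(scale=beta_true, size=1000)

m1 = x.mean()                            # first sample moment M1 = (1/n) * sum(X_i)
beta_hat = m1                            # equating M1 with E(X) = beta gives beta_hat = X_bar

print(f"true beta = {beta_true}, MoM estimate = {beta_hat:.3f}")
```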
Example: Method of Moments

HW.1. Use MoM to estimate the parameter of the Poisson distribution.
$$f(x; k) = \frac{e^{-k} k^{x}}{x!}; \quad x = 0, 1, 2, \ldots \ \text{and} \ k > 0$$
Sol. $\hat{k} = \bar{X}$? Is there an alternative estimator of $k$?
(Hint: compare the sample and population variances.)

HW.2. Use MoM to estimate the parameters of the binomial distribution.
$$f(x; n, p) = \binom{n}{x} p^{x} (1-p)^{n-x}; \quad x = 0, 1, 2, \ldots, n \ \text{and} \ 0 < p < 1$$


Example: Method of Moments

HW.3. Use MoM to estimate the parameter of the Rayleigh distribution.
$$f(x; \alpha) = \frac{x}{\alpha^2} \exp\!\left(-\frac{x^2}{2\alpha^2}\right); \quad \alpha > 0, \ x > 0$$
Sol. $\hat{\alpha} = \sqrt{\dfrac{2}{\pi}}\,\bar{X}$? Is it an unbiased estimator?

HW.4. Use MoM to estimate the parameter of the Maxwell distribution.
$$f(x; \alpha) = \sqrt{\frac{2}{\pi}}\,\frac{x^2}{\alpha^3} \exp\!\left(-\frac{1}{2}\left(\frac{x}{\alpha}\right)^2\right); \quad \alpha > 0, \ x > 0$$
Sol. $\hat{\alpha} = \dfrac{\bar{X}}{2}\sqrt{\dfrac{\pi}{2}}$? Is it an unbiased estimator?
Example: Method of Moments

HW.5. Use MoM to estimate the parameters of the Gaussian distribution.
$$f(x; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}; \quad -\infty < x < \infty, \ -\infty < \mu < \infty, \ \sigma > 0$$

HW.6. Use MoM to estimate the parameters of the gamma distribution.
$$f(x; \alpha, \beta) = \begin{cases} \dfrac{1}{\Gamma(\alpha)\,\beta^{\alpha}}\, x^{\alpha-1} e^{-x/\beta} & ; \ x > 0, \ \alpha > 0, \ \beta > 0 \\[4pt] 0 & ; \ \text{otherwise} \end{cases}$$
Maximum Likelihood Estimation
1. MLE is the most widely used parameter estimation method today.
2. The basic principle is to maximize the likelihood of the parameters, denoted
by $L(\theta \mid x)$, as a function of the model parameters $\theta$.
3. Note that $\theta$ can be a single parameter or a vector of parameters,
$\theta = (\theta_1, \theta_2, \ldots, \theta_p)$.
4. The likelihood function is defined as $L(\theta \mid x) = \prod_{i=1}^{n} f(x_i; \theta)$.
5. As the logarithm is a one-to-one (increasing) function, maximizing the
log-likelihood $\ln L$ instead is often preferred for computational ease.

The MLE method was proposed by Fisher in the 1920s.
Examples: MLE
Ex.2. Let $X_1, X_2, \ldots, X_m$ be a random sample of size $m$ from a binomial
distribution with parameters $n$ (known) and $p$. Find the maximum likelihood
estimator for $p$. Is it an unbiased estimator?
Sol.
Step 1: The likelihood and log-likelihood functions for the binomial distribution are
$$L(p \mid x) = \prod_{i=1}^{m} f(x_i, p), \quad 0 \le p \le 1$$
$$= \prod_{i=1}^{m} \binom{n}{x_i} p^{x_i} (1-p)^{n-x_i} = \left[\prod_{i=1}^{m} \binom{n}{x_i}\right] p^{\sum_{i=1}^{m} x_i}\,(1-p)^{\,nm - \sum_{i=1}^{m} x_i}$$
$$\ln L(p \mid x) = \ln\!\left[\prod_{i=1}^{m} \binom{n}{x_i}\right] + \left(\sum_{i=1}^{m} x_i\right) \ln p + \left(nm - \sum_{i=1}^{m} x_i\right) \ln(1-p)$$
Examples: MLE
Step 2: The corresponding log-likelihood equation is
$$\frac{\partial}{\partial p} \ln L(p \mid x) = 0$$
$$\Rightarrow\ \frac{\sum_{i=1}^{m} x_i}{p} - \frac{nm - \sum_{i=1}^{m} x_i}{1-p} = 0$$
$$\Rightarrow\ \left(nm - \sum_{i=1}^{m} x_i\right) p = \left(\sum_{i=1}^{m} x_i\right)(1-p)$$
Step 3: The estimator of $p$ is then obtained as
$$\hat{p} = \frac{\sum_{i=1}^{m} X_i}{nm} = \frac{\bar{X}}{n}$$
Why does this estimator maximize the likelihood function?
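To see numerically why $\hat{p} = \bar{X}/n$ maximizes the likelihood, a small sketch (not from the slides) evaluates the log-likelihood from Step 1 on a grid of $p$ values and compares the grid maximizer with the closed-form estimate. The values of n, m and p_true are made up for illustration.

```python
import numpy as np

# Sketch: check that p_hat = X_bar / n maximizes the binomial log-likelihood.
# n, m and p_true are illustrative values, not from the slides.
rng = np.random.default_rng(1)
n, m, p_true = 10, 50, 0.3
x = rng.binomial(n, p_true, size=m)      # random sample X_1, ..., X_m

p_hat = x.mean() / n                     # closed-form MLE from Step 3

# ln L(p | x) up to the additive constant ln(prod C(n, x_i))
p_grid = np.linspace(0.001, 0.999, 999)
loglik = x.sum() * np.log(p_grid) + (n * m - x.sum()) * np.log(1 - p_grid)

print(f"closed-form MLE      : {p_hat:.4f}")
print(f"grid-search maximizer: {p_grid[np.argmax(loglik)]:.4f}")
```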
Examples: MLE
Ex.3. Use MLE to estimate the parameter of the exponential distribution
$$f(x; \beta) = \frac{1}{\beta} e^{-x/\beta}; \quad x > 0, \ \beta > 0$$
Sol.
Step 1: The log-likelihood function for the exponential distribution is
$$\ln L(\beta \mid x) = \ln L(\beta; x_1, x_2, \ldots, x_n) = -n \ln \beta - \sum_{i=1}^{n} \frac{x_i}{\beta}$$
Step 2: The corresponding log-likelihood equation is
$$\frac{\partial}{\partial \beta} \ln L = 0$$
Step 3: The estimator of $\beta$ is then obtained as
$$\hat{\beta} = \frac{1}{n} \sum_{i=1}^{n} X_i$$
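As a sanity check, a short sketch (numpy/scipy assumed; values illustrative, not from the slides) compares the closed-form MLE $\hat{\beta} = \bar{X}$ from Step 3 with a direct numerical minimization of the negative log-likelihood from Step 1.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch: MLE for f(x; beta) = (1/beta) * exp(-x/beta).
# beta_true and n are illustrative values, not from the slides.
rng = np.random.default_rng(2)
beta_true, n = 1.8, 500
x = rng.exponential(scale=beta_true, size=n)

def neg_loglik(beta):
    # -ln L(beta | x) = n*ln(beta) + sum(x_i)/beta
    return n * np.log(beta) + x.sum() / beta

res = minimize_scalar(neg_loglik, bounds=(1e-6, 100.0), method="bounded")

print(f"closed-form MLE (X_bar): {x.mean():.4f}")
print(f"numerical MLE          : {res.x:.4f}")
```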
Examples: MLE
HW.7. Use MLE to estimate the parameter of the Poisson distribution.

HW.8. Use MLE to estimate the parameters of the Gaussian distribution.
$$f(x; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}; \quad -\infty < x < \infty, \ -\infty < \mu < \infty, \ \sigma > 0$$
$$\hat{\mu} = \bar{X} \quad \text{and} \quad \hat{\sigma}^2 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n} = \frac{n-1}{n}\, S^2.$$
Thus the ML estimator for $\sigma^2$ is not unbiased.
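A simulation sketch (not from the slides; mu, sigma, n and the number of replications are illustrative) suggesting the bias claim above: averaging each estimator over many samples, $S^2$ centers on $\sigma^2$ while the MLE centers on $\frac{n-1}{n}\sigma^2$.

```python
import numpy as np

# Sketch: the normal MLE of sigma^2, sigma2_hat = (1/n) sum (X_i - X_bar)^2
# = ((n-1)/n) S^2, underestimates sigma^2 on average, while S^2 does not.
# mu, sigma, n and reps are illustrative values, not from the slides.
rng = np.random.default_rng(3)
mu, sigma, n, reps = 5.0, 2.0, 10, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
s2  = samples.var(axis=1, ddof=1)    # unbiased sample variance S^2
mle = samples.var(axis=1, ddof=0)    # MLE of sigma^2, equals ((n-1)/n) S^2

print(f"true sigma^2        : {sigma**2}")
print(f"mean of S^2         : {s2.mean():.3f}")   # close to 4.0
print(f"mean of MLE estimate: {mle.mean():.3f}")  # close to ((n-1)/n)*4.0 = 3.6
```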
Example: MLE
HW.9. Use MLE to estimate the parameter of the Rayleigh distribution.
$$f(x; \alpha) = \frac{x}{\alpha^2} \exp\!\left(-\frac{x^2}{2\alpha^2}\right); \quad \alpha > 0, \ x > 0$$
Sol. $\hat{\alpha}^2 = \dfrac{1}{2n} \sum_{i=1}^{n} X_i^2$?

HW.10. Use MLE to estimate the parameter of the Maxwell distribution.
$$f(x; \alpha) = \sqrt{\frac{2}{\pi}}\,\frac{x^2}{\alpha^3} \exp\!\left(-\frac{1}{2}\left(\frac{x}{\alpha}\right)^2\right); \quad \alpha > 0, \ x > 0$$
Sol. $\hat{\alpha}^2 = \dfrac{1}{3n} \sum_{i=1}^{n} X_i^2$?


Example: MLE
HW.11. Use MLE to estimate the parameters of the inverse Gaussian distribution
$$f(t; \mu, \lambda) = \sqrt{\frac{\lambda}{2\pi t^3}}\, \exp\!\left(-\frac{\lambda (t-\mu)^2}{2\mu^2 t}\right); \quad t > 0, \ \mu > 0, \ \lambda > 0$$

HW.12. Use MLE to estimate the parameters of the lognormal distribution
$$f(t; \mu, \sigma) = \frac{1}{t\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{1}{2}\left(\frac{\ln t - \mu}{\sigma}\right)^2\right); \quad t > 0, \ \sigma > 0$$


Example: MLE
HW.12. Use MoM and MLE to estimate the parameter of the exponential distribution
$$f(x; \lambda) = \lambda e^{-\lambda x}; \quad x > 0, \ \lambda > 0$$
Discuss whether the estimator $\hat{\lambda}$ is unbiased, in the case of MoM and of MLE.

HW.13. Let a random sample of size $n$ be taken from a uniform distribution on
$[0, \theta]$. Find $\hat{\theta}_{MLE}$. What is the distribution of $\hat{\theta}_{MLE}$?
Is it an unbiased estimator of $\theta$? If not, find one unbiased estimator.
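For HW.13, a simulation sketch (not a full derivation; theta, n and the replication count are illustrative) using the standard result that the MLE is the sample maximum. The simulation suggests the maximum is biased downward and that rescaling by $(n+1)/n$ removes the bias.

```python
import numpy as np

# Sketch for HW.13: for a sample from Uniform[0, theta], the likelihood is
# theta^(-n) for theta >= max(X_i), so it is maximized at theta_hat = max(X_i).
# The simulation suggests E[max(X_i)] = n/(n+1) * theta (biased); rescaling by
# (n+1)/n gives an unbiased estimator.  theta, n, reps are illustrative values.
rng = np.random.default_rng(4)
theta, n, reps = 4.0, 8, 100_000

samples = rng.uniform(0.0, theta, size=(reps, n))
mle = samples.max(axis=1)

print(f"n/(n+1) * theta       : {n / (n + 1) * theta:.3f}")
print(f"mean of MLE (max X_i) : {mle.mean():.3f}")
print(f"mean of (n+1)/n * max : {((n + 1) / n * mle).mean():.3f}")  # close to theta
```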


Examples: MLE
Ex.4. Use MLE to estimate the parameters of the Weibull distribution
$$f(t; \alpha, \beta) = \frac{\beta}{\alpha}\, t^{\beta - 1} e^{-t^{\beta}/\alpha}; \quad t > 0, \ \alpha > 0, \ \beta > 0$$
Sol.
Step 1: The log-likelihood function is
$$\ln L(\theta \mid t) = \ln L(\alpha, \beta; t_1, t_2, \ldots, t_n) = n \ln \beta - n \ln \alpha + (\beta - 1) \sum_{i=1}^{n} \ln t_i - \sum_{i=1}^{n} \frac{t_i^{\beta}}{\alpha}$$
Step 2: The corresponding log-likelihood equations are
$$\frac{\partial}{\partial \alpha} \ln L = 0 \quad \text{and} \quad \frac{\partial}{\partial \beta} \ln L = 0$$
Examples: MLE
This gives
$$\frac{\partial}{\partial \alpha} \ln L(\alpha, \beta; t_1, t_2, \ldots, t_n) = 0 \ \Rightarrow\ \alpha - \frac{1}{n} \sum_{i=1}^{n} t_i^{\beta} = 0$$
$$\frac{\partial}{\partial \beta} \ln L(\alpha, \beta; t_1, t_2, \ldots, t_n) = 0 \ \Rightarrow\ \frac{n}{\beta} + \sum_{i=1}^{n} \ln t_i - \frac{1}{\alpha} \sum_{i=1}^{n} t_i^{\beta} \ln t_i = 0$$
Step 3: The estimates of $\alpha$ and $\beta$ are then obtained from
$$\frac{1}{\beta} + \frac{1}{n} \sum_{i=1}^{n} \ln t_i - \frac{\sum_{i=1}^{n} t_i^{\beta} \ln t_i}{\sum_{i=1}^{n} t_i^{\beta}} = 0 \quad \text{and} \quad \alpha = \frac{1}{n} \sum_{i=1}^{n} t_i^{\beta}$$

How to solve now? (Need to learn more! Numerical techniques?)
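One possible numerical route, sketched below under the parametrization used in Ex.4: solve the $\beta$ equation from Step 3 by root finding (scipy's brentq, which assumes a sign change on the chosen bracket), then plug $\hat{\beta}$ into $\hat{\alpha} = \frac{1}{n}\sum t_i^{\hat{\beta}}$. The data-generating shape/scale values, sample size and bracket are illustrative choices, not from the slides.

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: numerical solution of the Weibull likelihood equations from Step 3,
# with f(t; alpha, beta) = (beta/alpha) t^(beta-1) exp(-t^beta / alpha).
# beta_true, scale, the sample size and the root bracket are illustrative.
rng = np.random.default_rng(5)
beta_true, scale = 1.7, 2.0
alpha_true = scale ** beta_true              # exp(-t^beta/alpha) = exp(-(t/scale)^beta)
t = scale * rng.weibull(beta_true, size=400)

def beta_equation(beta):
    # 1/beta + (1/n) sum(ln t_i) - sum(t_i^beta ln t_i) / sum(t_i^beta) = 0
    tb = t ** beta
    return 1.0 / beta + np.log(t).mean() - (tb * np.log(t)).sum() / tb.sum()

beta_hat = brentq(beta_equation, 1e-3, 100.0)   # assumes the root lies in this bracket
alpha_hat = (t ** beta_hat).mean()              # alpha_hat = (1/n) sum(t_i^beta_hat)

print(f"beta : true {beta_true}, MLE {beta_hat:.3f}")
print(f"alpha: true {alpha_true:.3f}, MLE {alpha_hat:.3f}")
```

An equivalent route is to minimize the negative log-likelihood over $(\alpha, \beta)$ directly with a general-purpose optimizer.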
Example: MLE
HW.14. Let a random sample of size $n$ be taken from a uniform distribution on
$[\theta_1, \theta_2]$. Find $\hat{\theta}_{1,MLE}$ and $\hat{\theta}_{2,MLE}$. What are the distributions of
$\hat{\theta}_{1,MLE}$ and $\hat{\theta}_{2,MLE}$? Are they unbiased estimators?


Estimating Functions of Parameters

Let $\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_m$ be the MLEs of the parameters $\theta_1, \theta_2, \ldots, \theta_m$. Then the
MLE of any function $h(\theta_1, \theta_2, \ldots, \theta_m)$ of these parameters is the function
$h(\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_m)$ of the MLEs.

HW.15. What is $\hat{\sigma}_{MLE}$ in a normal distribution? Is it an unbiased estimator?

HW.16. What is the ML estimator for the mean $\mu$ of a gamma$(\alpha, \beta)$ distribution?
Is it an unbiased estimator of $\mu$?
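As an illustration of the invariance property related to HW.15 (a sketch, not a full solution; mu, sigma, n and reps are illustrative values), the MLE of $\sigma$ in a normal model is the square root of the MLE of $\sigma^2$, and simulation suggests it is biased for small $n$.

```python
import numpy as np

# Sketch of the invariance property: with h(sigma^2) = sqrt(sigma^2), the MLE
# of sigma in a normal model is sqrt of the MLE of sigma^2.  The simulation
# suggests this estimator underestimates sigma for small n.
# mu, sigma, n and reps are illustrative values, not from the slides.
rng = np.random.default_rng(6)
mu, sigma, n, reps = 0.0, 3.0, 10, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
sigma_mle = np.sqrt(samples.var(axis=1, ddof=0))   # sqrt((1/n) sum (X_i - X_bar)^2)

print(f"true sigma        : {sigma}")
print(f"mean of sigma_MLE : {sigma_mle.mean():.3f}")  # noticeably below 3.0
```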


Recall: Sample Proportion
The statistic that estimates the parameter $p$, the proportion of a population
that has some property, is the sample proportion
$$\hat{p} = \frac{\text{number in sample with the trait (success)}}{\text{sample size}} = \frac{X}{n}$$
Properties:
(i) As the sample size increases ($n$ large), the sampling distribution of $\hat{p}$
becomes approximately normal (WHY?)
(ii) The mean of $\hat{p}$ is $p$, and the variance of $\hat{p}$ is $\dfrac{p(1-p)}{n}$ (WHY?)
(iii) Can we get a point estimator of $p$? (See Ex. 6.15, page 258.)
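A simulation sketch (values illustrative, not from the slides) supporting properties (i) and (ii): for large $n$ the simulated $\hat{p}$ values have mean close to $p$ and variance close to $p(1-p)/n$, consistent with viewing $X$ as a sum of $n$ Bernoulli($p$) trials.

```python
import numpy as np

# Sketch: sampling distribution of p_hat = X/n.  The simulation suggests
# mean(p_hat) ~ p and var(p_hat) ~ p(1-p)/n, as claimed in (i)-(ii).
# p, n and reps are illustrative values, not from the slides.
rng = np.random.default_rng(7)
p, n, reps = 0.4, 200, 50_000

x = rng.binomial(n, p, size=reps)        # one value of X ~ Bin(n, p) per simulated sample
p_hat = x / n

print(f"mean of p_hat: {p_hat.mean():.4f}   (p = {p})")
print(f"var  of p_hat: {p_hat.var():.6f}   (p(1-p)/n = {p*(1-p)/n:.6f})")
```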


Example 6.15 (page 258)


Large Sample Behaviour of MLE
