
Applied Statistics and Probability for Engineers
Sixth Edition
Douglas C. Montgomery • George C. Runger

Chapter 7
Point Estimation of Parameters and Sampling Distributions
Introduction
 The field of statistical inference consists of the methods used to make decisions or to draw conclusions about a population.
 These methods use the information contained in a sample from the population to draw those conclusions.
 Statistical inference may be divided into two major areas:
  Parameter estimation
  Hypothesis testing
7-1 Point Estimation
 A point estimate is a “reasonable value” of a population parameter.
 Suppose that we want to obtain a point estimate of a population parameter:
  The observations are random variables, say X₁, X₂, …, Xₙ.
  Therefore, any function of the observations, or any statistic, is also a random variable.
  For example, the sample mean X̄ and the sample variance S² are statistics, and they are also random variables.
 Since a statistic is a random variable, it has a probability distribution.
 We call the probability distribution of a statistic a sampling distribution.
7-1 Point Estimator
Definition
A point estimate of some population parameter θ is a single numerical value θ̂ of a statistic Θ̂. The statistic Θ̂ is called the point estimator (Θ is the uppercase of θ).
Example:
Suppose the random variable X is normally distributed with an unknown mean μ. The sample mean is a point estimator of the unknown population mean μ; that is, μ̂ = X̄. After the sample has been selected, the numerical value x̄ is the point estimate of μ.
Thus, if x₁ = 25, x₂ = 30, x₃ = 29, and x₄ = 31, the point estimate of μ is

x̄ = (25 + 30 + 29 + 31)/4 = 28.75
Some Parameters & Their Statistics
Estimation problems occur frequently in engineering. We
often need to estimate
Parameter Measure Statistic
μ Mean of a single population x-bar 𝑋𝑋�
σ2 Variance of a single population s2
σ Standard deviation of a single population s
p Proportion ot items of a population that belong to class of interest p -hat 𝜌𝜌�
μ1 - μ2 Difference in means of two populations x bar1 - x bar2 𝑋𝑋�1 − 𝑋𝑋�2
p 1 - p 2 Difference in proportions of two populations p hat1 - p hat2 𝜌𝜌�1 − 𝜌𝜌�2

 There could be choices for the point estimator of a parameter.


 To estimate the mean of a population, we could choose the:
– Sample mean
– Sample median
– Average of the largest & smallest observations in the sample
7.2 Sampling Distributions and the Central Limit Theorem
Statistical inference is concerned with making decisions about a population based on the information contained in a random sample from that population.
Definitions:
The random variables X₁, X₂, …, Xₙ are a random sample of size n if they are independent and every Xᵢ has the same probability distribution. A statistic is any function of the observations in a random sample.

 The probability distribution of X̄ is called the sampling distribution of the sample mean.
 The sampling distribution of a statistic depends on:
• the distribution of the population
• the size of the sample
• the method of sample selection
 If we are sampling from a population that has an unknown probability distribution, what will be the sampling distribution of the sample mean?
[Figures: simulated distributions of the average of One Die, Two Dice, Three Dice, Four Dice, Five Dice, and ∞ Dice, and of the average of 1, 2, 3, and 4 coin tosses. As the number of dice or coins averaged increases, the distribution of the average rapidly takes on a bell shape.]
7.2 Sampling Distributions and the Central Limit Theorem
 If we are sampling from a population that has an unknown probability distribution, the sampling distribution of the sample mean will still be approximately normal with mean μ and variance σ²/n, provided the sample size is large.
 This is one of the most useful theorems in statistics; it is called the central limit theorem.
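As an illustration (not part of the text), the following Python sketch simulates the dice experiment above; numpy, the seed, and the number of repetitions are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Population: a single fair die (uniform on 1..6) -- far from normal.
# For each sample size n, draw many samples and inspect the sample means.
for n in (1, 2, 5, 30):
    rolls = rng.integers(1, 7, size=(100_000, n))   # 100,000 samples of size n
    means = rolls.mean(axis=1)                      # sampling distribution of X-bar
    print(f"n={n:2d}  mean of X-bar = {means.mean():.3f}  "
          f"var of X-bar = {means.var():.4f}  (theory: {35/12/n:.4f})")
```

The simulated variance tracks σ²/n = (35/12)/n, and a histogram of `means` grows more bell-shaped as n increases.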
Some History …
The first version of this theorem was
postulated (in 1733 )by Abraham de Moivre
who used the normal distribution to
approximate the distribution of the number Abraham de Moivre (1667 -1754)
of heads resulting from many tosses of a fair
coin.

Extends De Moivre’s work, to


approximate binomial distributions with
normal distributions (in 1812) Pierre-Simon Laplace (1749–1827).

Defined it in general terms and proved


precisely how it worked mathematically
(in 1901).
Aleksandr Lyapunov (1857-1918)
7.2 Sampling Distributions and the Central Limit Theorem

X̄ is approximately N(μ, σ²/n)
 If the population is continuous, unimodal, and symmetric, n = 5 is enough.
 If n ≥ 30, that will be enough regardless of the shape of the population.
 If n < 30, the approximation will still be fine if the population distribution is not severely non-normal.
7.2 Sampling Distributions and the Central Limit Theorem
Example 7-1
 An electronics company manufactures resistors that have a mean resistance of 100 ohms and a standard deviation of 10 ohms.
 The distribution of resistance is normal.
 Find the probability that a random sample of n = 25 resistors will have an average resistance of less than 95 ohms.

P(X̄ < 95) = ?
 Note that the sampling distribution of X̄ is normal, with mean μ = 100 ohms and a standard deviation of:

σ_X̄ = σ/√n = 10/√25 = 2

 Therefore, the desired probability (the shaded area under the sampling distribution) can be found by standardizing the point X̄ = 95:

z = (95 − 100)/2 = −2.5

and therefore

P(X̄ < 95) = P(Z < −2.5) = 0.0062
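A quick numerical check of this probability (an illustrative sketch; scipy is assumed to be available):

```python
from math import sqrt
from scipy.stats import norm

mu, sigma, n = 100, 10, 25
se = sigma / sqrt(n)                 # standard error of X-bar = 2
p = norm.cdf(95, loc=mu, scale=se)   # P(X-bar < 95)
print(round(p, 4))                   # 0.0062
```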
7.2 Sampling Distributions and the Central Limit Theorem (CLT)
Example 7-2
Suppose that a random variable X has a continuous uniform distribution:

f(x) = 1/2 for 4 ≤ x ≤ 6, and f(x) = 0 otherwise

Find the distribution of the sample mean of a random sample of size n = 40.
By the CLT, the distribution of X̄ is approximately normal, with

μ = (b + a)/2 = (6 + 4)/2 = 5

σ² = (b − a)²/12 = (6 − 4)²/12 = 1/3

σ²_X̄ = σ²/n = (1/3)/40 = 1/120
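A short simulation sketch (not from the text; numpy and the repetition count are assumptions) confirming these values:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
means = rng.uniform(4, 6, size=(200_000, 40)).mean(axis=1)
print(means.mean())   # close to mu = 5
print(means.var())    # close to sigma^2/n = 1/120 ~= 0.00833
```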
Sampling Distribution of a Difference in Sample Means
 If we have two independent populations with means μ₁ and μ₂ and variances σ₁² and σ₂², and
 if X̄₁ and X̄₂ are the sample means of two independent random samples of sizes n₁ and n₂ from these populations,
 then the sampling distribution of

Z = [(X̄₂ − X̄₁) − (μ₂ − μ₁)] / √(σ₂²/n₂ + σ₁²/n₁)

is approximately standard normal, if the conditions of the central limit theorem apply.
 If the two populations are normal, then the sampling distribution of Z is exactly standard normal.
Example: Aircraft Engine Life
The effective life of a component used in a jet-turbine aircraft engine is a random variable with mean 5000 hours and standard deviation 40 hours, and its distribution is close to normal. The engine manufacturer introduces an improvement into the manufacturing process for this component that changes the parameters to 5050 hours and 30 hours. Random samples of size n₁ = 16 (old process) and n₂ = 25 (new process) are selected.
What is the probability that the difference in the two sample means is at least 25 hours?

Given μ₁ = 5000, σ₁ = 40, n₁ = 16 and μ₂ = 5050, σ₂ = 30, n₂ = 25:

P(X̄₂ − X̄₁ ≥ 25) = ?

(X̄₂ − X̄₁) − (μ₂ − μ₁) = 25 − 50 = −25

σ₂²/n₂ + σ₁²/n₁ = 30²/25 + 40²/16 = 36 + 100 = 136, so the standard error is √136 ≈ 11.7 hours

z = −25/√136 = −2.14

P(X̄₂ − X̄₁ ≥ 25) = P(Z ≥ −2.14) = 0.9840   (in Excel: 1 − NORMSDIST(z))

Figure 7-6 shows the sampling distribution of X̄₂ − X̄₁ in this example.
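The same calculation can be reproduced in Python (a sketch; scipy assumed):

```python
from math import sqrt
from scipy.stats import norm

mu1, sigma1, n1 = 5000, 40, 16   # old process
mu2, sigma2, n2 = 5050, 30, 25   # new process

se = sqrt(sigma2**2 / n2 + sigma1**2 / n1)   # sqrt(136) ~= 11.66
z = (25 - (mu2 - mu1)) / se                  # -2.14
print(round(1 - norm.cdf(z), 4))             # P(Z >= z) = 0.9840
```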
7-3 General Concepts of Point Estimation
7-3.1 Unbiased Estimators
 An estimate should be close to the true value of the unknown parameter.
 Formally, we say that Θ̂ is an unbiased estimator of θ if the expected value of Θ̂ is equal to θ.
Definition
 The point estimator Θ̂ is an unbiased estimator for the parameter θ if E(Θ̂) = θ.
 If the estimator is not unbiased, then the difference E(Θ̂) − θ is called the bias of the estimator Θ̂.
Example 7-4: Sample Mean & Variance Are Unbiased-1
 X is a random variable with mean μ and variance σ2.
 Let X1, X2,…,Xn be a random sample of size n.
 Show that the sample mean (𝑿𝑿 � ) is an unbiased estimator of μ.

Sec 7-3.1 Unbiased Estimators


22
Example 7-4: Sample Mean & Variance Are Unbiased-2

Show that the sample variance (S2) is a unbiased estimator of σ2.

Sec 7-3.1 Unbiased Estimators 23


7-3 General Concepts of Point Estimation
Example:
 Suppose we have a random sample of size 2n from a population denoted by X, with E(X) = μ and V(X) = σ².
 Let

X̄₁ = (1/(2n)) Σ_{i=1}^{2n} Xᵢ  and  X̄₂ = (1/n) Σ_{i=1}^{n} Xᵢ

be two estimators of μ. Which one is an unbiased estimator of μ?
SOLUTION:

E(X̄₁) = (1/(2n)) E[Σ_{i=1}^{2n} Xᵢ] = (1/(2n)) Σ_{i=1}^{2n} E(Xᵢ) = (1/(2n))(2nμ) = μ

E(X̄₂) = (1/n) E[Σ_{i=1}^{n} Xᵢ] = (1/n)(nμ) = μ

Thus both X̄₁ and X̄₂ are unbiased estimators of μ.
7-3 General Concepts of Point Estimation
Example:
 Let X₁, X₂, …, X₇ denote a random sample from a population having mean μ and variance σ².
 Consider the following estimators of μ:

Θ̂₁ = (X₁ + X₂ + ⋯ + X₇)/7    Θ̂₂ = (2X₁ − X₆ + X₄)/2

 Is either estimator unbiased?
SOLUTION:

E(Θ̂₁) = (1/7)[E(X₁) + E(X₂) + ⋯ + E(X₇)] = (1/7)(7μ) = μ

E(Θ̂₂) = (1/2)[E(2X₁) − E(X₆) + E(X₄)] = (1/2)[2μ − μ + μ] = μ

Both estimators are unbiased.
7-3 General Concepts of Point Estimation
7-3.2 Variance of a Point Estimator
Definition
If we consider all unbiased estimators of θ, the one with the smallest variance is called the minimum variance unbiased estimator (MVUE).

(Figure: the sampling distributions of two unbiased estimators Θ̂₁ and Θ̂₂; the one with the smaller variance is preferred.)
7-3 General Concepts of Point Estimation
Example:
 In the previous example:

Θ̂₁ = (X₁ + X₂ + ⋯ + X₇)/7    Θ̂₂ = (2X₁ − X₆ + X₄)/2

V(Θ̂₁) = V[(X₁ + X₂ + ⋯ + X₇)/7] = (1/49)[V(X₁) + V(X₂) + ⋯ + V(X₇)] = (1/49)(7σ²) = σ²/7

V(Θ̂₂) = V[(2X₁ − X₆ + X₄)/2] = (1/4)[V(2X₁) + V(−X₆) + V(X₄)] = (1/4)(4σ² + σ² + σ²) = (6/4)σ² = (3/2)σ²

Since both estimators are unbiased, their variances can be compared to select the better estimator.
The variance of Θ̂₁ is smaller than that of Θ̂₂.
 Hence, Θ̂₁ is the better estimator.
7-3 General Concepts of Point Estimation
7-3.3 Standard Error of an Estimator
 Suppose we are sampling from a normal distribution with mean μ and variance σ²: X ~ N(μ, σ²).
 The distribution of X̄ is normal with mean μ and variance σ²/n, so the standard error of X̄ is:

σ_X̄ = σ/√n

 If we did not know σ but substituted the sample standard deviation S into the above equation, the estimated standard error of X̄ would be:

σ̂_X̄ = S/√n
Example 7-5: Thermal Conductivity
 These observations are 10 measurements of the thermal conductivity of Armco iron:
41.60, 41.48, 42.34, 41.95, 41.86, 42.18, 41.72, 42.26, 41.81, 42.04
 A point estimate of the mean thermal conductivity is the sample mean, x̄ = 41.924.
 Since σ is not known, we use the sample standard deviation s = 0.284 to calculate the estimated standard error: σ̂_x̄ = s/√n = 0.284/√10 = 0.0898.
 Since the standard error is about 0.2% of the mean, the mean estimate is fairly precise.
 Assuming conductivity is normally distributed, twice the standard error is 2σ̂_x̄ = 2(0.0898), so we can be very confident that the true population mean lies in 41.924 ± 2(0.0898), i.e., in the range [41.744, 42.104].
7-3 General Concepts of Point Estimation
7-3.4 Mean Square Error of an Estimator
The mean square error of an estimator Θ̂ is the expected squared difference between Θ̂ and θ.
Definition

MSE(Θ̂) = E[(Θ̂ − θ)²] = V(Θ̂) + (bias)²

Conclusion: the mean squared error (MSE) of an estimator is equal to the variance of the estimator plus the bias squared.
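An empirical check of this decomposition (illustrative sketch; the population σ² = 4 and seed are arbitrary), using the biased divide-by-n variance estimator:

```python
import numpy as np

rng = np.random.default_rng(seed=5)
sigma2, n, reps = 4.0, 5, 500_000

# Biased estimator of sigma^2: divide by n instead of n-1.
est = rng.normal(0, np.sqrt(sigma2), size=(reps, n)).var(axis=1, ddof=0)

mse = np.mean((est - sigma2) ** 2)               # E[(est - theta)^2]
decomp = est.var() + (est.mean() - sigma2) ** 2  # variance + bias^2
print(mse, decomp)                               # the two quantities agree
```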
7-3.4 Mean Square Error of an Estimator
Relative Efficiency
 The MSE is an important criterion for comparing two estimators: the relative efficiency of Θ̂₁ to Θ̂₂ is MSE(Θ̂₁)/MSE(Θ̂₂).
 If the relative efficiency is less than 1, we conclude that the first estimator is superior to the second.
7-3.4 Mean Square Error of an Estimator
Optimal Estimator
• A biased estimator can be preferred to an unbiased estimator if it has a smaller MSE.
• An estimator whose MSE is smaller than that of any other estimator is called an optimal estimator.
(Figure 7-8: a biased estimator that has a smaller variance than the unbiased estimator.)
7-4 Methods of Point Estimation
7-4.1 Method of Moments
 Let X₁, X₂, …, Xₙ be a random sample from the probability distribution f(x), where f(x) can be either a:
– discrete probability mass function, or
– continuous probability density function.
 The kth population moment (or distribution moment) is E(Xᵏ), k = 1, 2, ….
 The kth sample moment is (1/n) Σ_{i=1}^{n} Xᵢᵏ, k = 1, 2, ….
 The method of moments equates population moments to sample moments and solves for the unknown parameters.
 If k = 1 (the first moment), then:
– the population moment is μ;
– the sample moment is X̄.
 The sample mean is therefore the moment estimator of the population mean.
Example 9
 Suppose that X₁, X₂, …, Xₙ is a random sample from an exponential distribution with parameter λ. There is only one parameter to estimate, so we equate the first population moment to the first sample moment:

E(X) = X̄

 For the exponential distribution, E(X) = 1/λ.
 Therefore, E(X) = X̄ results in 1/λ = X̄.
 λ̂ = 1/X̄ is the moment estimator of λ.
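A simulation sketch of this moment estimator (the true λ = 2, seed, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(seed=6)
lam = 2.0                                        # true rate parameter
x = rng.exponential(scale=1/lam, size=10_000)    # numpy parameterizes by scale = 1/lambda

lam_hat = 1 / x.mean()                           # moment estimator: lambda-hat = 1 / X-bar
print(lam_hat)                                   # ~ 2.0
```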
Example 10
 Suppose that X₁, X₂, …, Xₙ is a random sample from a continuous uniform distribution on the interval [k, 3k], where k > 0 is an unknown parameter. Use the method of moments to obtain an estimator of k.

Answer:
 Using the method of moments, we equate the first population moment E[X] to the first sample moment (Σ Xᵢ)/n = X̄:

E[X] = (3k + k)/2 = 2k and E[X] = X̄

X̄ = 2k, so k̂ = X̄/2
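A simulation sketch of this estimator (the true k = 3, seed, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(seed=7)
k = 3.0                                      # true (unknown) parameter
x = rng.uniform(k, 3*k, size=10_000)         # sample from Uniform[k, 3k]

k_hat = x.mean() / 2                         # moment estimator: k-hat = X-bar / 2
print(k_hat)                                 # ~ 3.0
```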
7-4 Methods of Point Estimation
Example 2: Normal Distribution Moment Estimators
Suppose that X₁, X₂, …, Xₙ is a random sample from a normal distribution with parameters μ and σ², where E(X) = μ and E(X²) = μ² + σ². Equating the first two population moments to the first two sample moments:

μ̂ = X̄ = (1/n) Σ_{i=1}^{n} Xᵢ  and  μ̂² + σ̂² = (1/n) Σ_{i=1}^{n} Xᵢ²

Solving for σ̂²:

σ̂² = (1/n) Σ_{i=1}^{n} Xᵢ² − X̄² = [Σ_{i=1}^{n} Xᵢ² − (Σ_{i=1}^{n} Xᵢ)²/n] / n = (1/n) Σ_{i=1}^{n} (Xᵢ − X̄)²   (biased)
Question: last year's final exam
Let X₁, X₂, …, X₇ denote a random sample from a population having mean μ and variance σ².
1) Which one of the following estimators of μ is the best? (Justify your answer.)

θ̂₁ = (X₁ + X₃ + X₅ + X₇)/4    θ̂₂ = (6X₂ − 2X₄ − 2X₆)/2

2) Propose (with justification) an estimator of μ better than θ̂₁ and θ̂₂.
3) Suppose that X₁, X₂, …, Xₙ is a random sample from an exponential distribution with an unknown parameter λ. Using the method of moments, determine an estimator of λ. (Hint: mean of the distribution.)
Problem:
Assume that your task is to estimate the expected value of the age of the 18,500 students at Qatar University. You are told that the age of students at QU follows a normal distribution N(μ, σ²) with a standard deviation of 4 years. The sample that you can get is composed of 25 students. Answer the following questions:

1) Use the method of moments to find an estimator, μ̂₁, for the average age of the students at the university.
One unknown => one equation: E(X) = μ = (1/25) Σ xᵢ = X̄, so μ̂₁ = X̄.

2) Check whether μ̂₁ is a biased or unbiased estimator.
E(μ̂₁) = E[(1/25) Σ xᵢ] = (1/25) Σ E(xᵢ) = 25μ/25 = μ, which shows that μ̂₁ is unbiased (bias = 0).

3) Show whether the estimators of μ below are biased or not, and find the value of the bias for the biased ones (if any):

μ̂₂ = (1/10) Σ_{i=1}^{10} X_{2i}  (the average of the even-numbered samples from X₂ to X₂₀)

μ̂₃ = (1/4)(4X₃ + 2X₁₀ + 1.5X₁₅ + 0.5X₂₀)

E(μ̂₂) = E[(1/10) Σ_{i=1}^{10} X_{2i}] = (1/10)[E(X₂) + E(X₄) + ⋯ + E(X₂₀)] = 10μ/10 = μ => μ̂₂ is an unbiased estimator.

E(μ̂₃) = (1/4) E[4X₃ + 2X₁₀ + 1.5X₁₅ + 0.5X₂₀] = 8μ/4 = 2μ ≠ μ => μ̂₃ is a biased estimator,
bias = 2μ − μ = μ
Problem (cont.)
4) Compare the three estimators μ̂₁, μ̂₂, and μ̂₃ using the mean squared error as the criterion for comparison, and show which one is the best among the three.

MSE(Θ̂) = V(Θ̂) + (bias)²

MSE(μ̂₁) = V(μ̂₁) = σ²/25 = 16/25 = 0.64

MSE(μ̂₂) = V(μ̂₂) = σ²/10 = 16/10 = 1.6

MSE(μ̂₃) = V(μ̂₃) + μ² = (1/16)(16σ² + 4σ² + 2.25σ² + 0.25σ²) + μ² = (22.5/16)σ² + μ² = 22.5 + μ²

MSE(μ̂₁)/MSE(μ̂₂) < 1 and MSE(μ̂₁)/MSE(μ̂₃) < 1, so μ̂₁ is the most efficient estimator.

5) Can you do better than the estimator that you found to be the best in part 4, given you have the same number of samples? Explain.
Using the same sample, we cannot build a better estimator than μ̂₁ because it is the MVUE.
