LEARNING UNIT 9

t Tests: One-Sample, Two-Independent-Sample, and Related-Samples Designs

Excel Toolbox
Mathematical operators

• +
• -
• ( )
• *
• /
• ^2 [square]
• ^.5 [square root]

Functions

• AVERAGE
• COUNT
• STDEV.S
• SUM
• VAR.S
• T.TEST

(Continued)


(Continued)

Other tools

• format cells
• freeze panes
• fill down or paste
• inserting equations
• Analysis ToolPak
In this Learning Unit, we explore the nature of hypothesis testing when one group or two groups are observed; for two groups, we explore situations in which the same or different participants are observed in each group. We further explore the informativeness of hypothesis testing for making decisions, and we explore other ways of adding information about the nature of observed effects and how to appropriately interpret them. We do this with three different versions of a t test:

— one-sample t test,
— independent-sample t test, and
— related-samples t test.

Origins of the t Tests

An alternative to the z statistic was proposed by William Sealy Gosset (Student, 1908), a scientist working with the Guinness brewing company to improve brewing processes in the early 1900s. Because Guinness prohibited its employees from publishing "trade secrets," Gosset obtained approval to publish his work only under the condition that he used a pseudonym ("Student"). He proposed substituting the sample variance for the population variance in the formula for standard error. When this substitution is made, the formula for error is called the estimated standard error (sM):

Estimated standard error: sM = √(s²/n) = SD/√n

The estimated standard error is an estimate of the standard deviation of a sampling distribution of sample means selected from a population with an unknown variance. It is an estimate of the standard error, or standard distance, that sample means can be expected to deviate from the value of the population mean stated in the null hypothesis.

The substitution is possible because, as explained in Learning Units 2 and 7, the sample variance is an unbiased estimator of the population variance: On average, the sample variance equals the population variance. Using this substitution, an alternative test statistic can be introduced for one sample when the population variance is unknown. The formula, known as a t statistic, is as follows for one sample:


t_obt = (M − μ)/sM, where sM = SD/√n

The t statistic, known as t observed or t obtained, is an inferential statistic used to determine the number of standard deviations in a t distribution that a sample mean deviates from the mean value or mean difference stated in the null hypothesis.

Gosset showed that substituting the sample variance for the population variance led to a new sampling distribution known as the t distribution, which is also known as Student's t, referring to the pseudonym Gosset used when publishing his work. In Figure 9.1, you can see how similar the t distribution is to the normal distribution. The difference is that the t distribution has greater variability in the tails, because the sample variance is not always equal to the population variance. Sometimes the estimate for variance is too large; sometimes the estimate is too small. This leads to a larger probability of obtaining sample means farther from the population mean. Otherwise, the t distribution shares all the same characteristics of the normal distribution: It is symmetrical and asymptotic, and its mean, median, and mode are all located at the center of the distribution.

The Degrees of Freedom

The t distribution is associated with degrees of freedom (df). In Learning Unit 2, we identified that the degrees of freedom for sample variance equal n − 1. Because the estimate of standard error for the t distribution is computed using the sample variance, the degrees of freedom for the t distribution are also n − 1.

FIGURE 9.1  ●  A normal distribution and two t distributions. The tails of a t distribution are thicker, which reflects the greater variability in values resulting from not knowing the population variance. The figure shows a normal distribution, a t distribution with df = 20, and a t distribution with df = 5, all centered at 0. Notice that the normal distribution has less variability in the tails; otherwise, these distributions share the same characteristics. Source: www.unl.edu

The t distribution, or Student's t, is a normal-like distribution with greater variability in the tails than a normal distribution, because the sample variance is substituted for the population variance to estimate the standard error in this distribution.

The degrees of freedom (df) for a t distribution are equal to the degrees of freedom for sample variance for a given sample: n − 1. Each t distribution is associated with specified degrees of freedom; as sample size increases, the degrees of freedom also increase.


The t distribution is a sampling distribution in which the estimated standard error is computed using the sample variance in the formula. As sample size increases, the sample variance more closely approximates the population variance. The result is that there is less variability in the tails as sample size increases. So the shape of the t distribution changes (the tails approach the x-axis faster) as the sample size is increased. Each changing t distribution is thus associated with the same degrees of freedom as for sample variance: df = n − 1. (See Appendix A12, p. 298, for more on degrees of freedom for parametric tests.)

To locate probabilities and critical values in a t distribution, we use a t table, such as Table 9.1, which reproduces part of Table C.2 in Appendix C. In the t table, there are six columns of values listing alpha levels for one-tailed tests (top heading) and two-tailed tests (lower heading). The rows show the degrees of freedom (df) for a t distribution. To use this table, you need to know the sample size (n), the alpha level (α), and the location of the rejection region (in one or both tails).
TABLE 9.1  ●  A portion of the t table adapted from Table C.2 in Appendix C.

        Proportion in One Tail
        .25      .10      .05      .025     .01      .005
        Proportion in Two Tails Combined
df      .50      .20      .10      .05      .02      .01
 1      1.000    3.078    6.314    12.706   31.821   63.657
 2      0.816    1.886    2.920    4.303    6.965    9.925
 3      0.765    1.638    2.353    3.182    4.541    5.841
 4      0.741    1.533    2.132    2.776    3.747    4.604
 5      0.727    1.476    2.015    2.571    3.365    4.032
 6      0.718    1.440    1.943    2.447    3.143    3.707
 7      0.711    1.415    1.895    2.365    2.998    3.499
 8      0.706    1.397    1.860    2.306    2.896    3.355
 9      0.703    1.383    1.833    2.282    2.821    3.250
10      0.700    1.372    1.812    2.228    2.764    3.169

Source: Table III in Fisher, R. A., & Yates, F. (1974). Statistical tables for biological, agricultural and medical research (6th ed.). London, England: Longman Group Ltd. (previously published by Oliver and Boyd Ltd., Edinburgh). Adapted and reprinted with permission of Addison Wesley Longman.


For example, if we select a sample of 11 students, then n = 11, and df = 10 (n − 1 = 10). To find the t distribution with 10 degrees of freedom, we look for 10 listed in the rows. The critical values for this distribution at a .05 level of significance appear in the column with that probability listed: For a one-tailed test, the critical value is 1.812 for an upper-tail critical test and −1.812 for a lower-tail critical test. For a two-tailed test, the critical values are ±2.228. Each critical value identifies the cutoff for the rejection region, beyond which the decision will be to reject the null hypothesis for a hypothesis test.

Keep in mind that a t distribution is an estimate of a normal distribution. The larger the sample size, the more closely a t distribution estimates a normal distribution. When the sample size is so large that it equals the population size, we describe the sample size as infinite. In this case, the t distribution is a normal distribution. You can see this in the t table in Appendix C: The critical values at a .05 level of significance are ±1.96 for a two-tailed t test with infinite (∞) degrees of freedom and 1.645 (upper-tail critical) or −1.645 (lower-tail critical) for a one-tailed test. These are the same critical values listed in the unit normal table at a .05 level of significance. In terms of the null hypothesis, in a small sample, there is a greater probability of obtaining sample means that are farther from the value stated in the null hypothesis. As sample size increases, obtaining sample means that are farther from the value stated in the null hypothesis becomes less likely. The result is that critical values get smaller as sample size increases.
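If you want to verify a table lookup by computation, the same critical values can be reproduced outside Excel. The short Python sketch below (using SciPy's t distribution, which is not part of this book's Excel toolbox) returns the cutoffs discussed above:

from scipy import stats

# Two-tailed critical value at a .05 level with df = 10: about 2.228
print(stats.t.ppf(1 - .05/2, df=10))
# One-tailed critical value at a .05 level with df = 10: about 1.812
print(stats.t.ppf(1 - .05, df=10))
# With very large df, the two-tailed cutoff approaches the normal value of 1.96
print(stats.t.ppf(1 - .05/2, df=10**6))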

Computing the One-Sample t Test

In this section, we compute the one-sample t test, which is used to compare a mean value measured in a sample to a known value in the population. Specifically, this test is used to test hypotheses concerning a single group mean selected from a population with an unknown variance. To compute the one-sample t test, we make three assumptions:

1. Normality. We assume that data in the population being sampled are normally distributed. This assumption is particularly important for small samples. In larger samples (n > 30), the standard error is smaller, and this assumption becomes less critical as a result.

2. Random sampling. We assume that the data we measure were obtained from a sample that was selected using a random sampling procedure. It is considered inappropriate to conduct hypothesis tests with nonrandom samples.

3. Independence. We assume that each outcome or observation is independent, meaning that one outcome does not influence another. Specifically, outcomes are independent when the probability of one outcome has no effect on the probability of another outcome. Using random sampling usually satisfies this assumption.

The one-sample t test is a statistical procedure used to compare a mean value measured in a sample to a known value in the population. It is specifically used to test hypotheses concerning the mean in a single population with an unknown variance.

Keep in mind that satisfying the assumptions for the t test is critically important. That said, for each example in this book, the data are intentionally constructed such that the assumptions for conducting the tests have been met. In Example 9.1 we follow the four steps to hypothesis testing introduced in Learning Unit 7 to compute a one-sample t test at a two-tailed .05 level of significance using an example adapted from published research.
Example 9.1. Learning is a common construct that behavioral sciences study. One common type of learning is the ability to recognize new objects, referred to as novelty recognition (Fisher-Thompson, 2017; Privitera, Mayeaux, Schey, & Lapp, 2013). An example with animals is the object recognition task. A mouse placed in an environment with two identical objects for five minutes is later returned to the same environment, but one of the objects has been replaced with a new or novel object. Because mice are naturally curious, we expect that the mouse will spend more time investigating the novel object, thus demonstrating object recognition. To operationalize, or make measurable, the percentage of time spent investigating the novel object relative to the familiar object, we make the following calculation:

Time Spent Investigating Novel Object / (Time Spent Investigating Novel Object + Time Spent Investigating Familiar Object) × 100
Using this formula, if a mouse spends the same amount of time investigating each object (in other words, the mouse fails to show object recognition), then the result will be 50%. Thus, the standard we will compare against in the null hypothesis for this test will be 50%. A score below 50% indicates that subjects recognized the novel object but preferred the familiar object. Although unlikely, familiarity preference is a remote possibility.

Using a sample data set adapted from published research, we will use the four steps to hypothesis testing introduced in Learning Unit 7 to test whether the mean score in sample data significantly differs from the expected value of 50% at a .05 level of significance.

Step 1: State the hypotheses. The population mean is 50%, and we are testing whether or not the population mean differs from the sample mean:

H0: μ = 50%  For mice given the opportunity to investigate a novel and a familiar object, the mean percentage of time spent investigating the novel object is equal to 50%, as would be expected by chance.

H1: μ ≠ 50%  For mice given the opportunity to investigate a novel and a familiar object, the mean percentage of time spent investigating the novel object is not equal to 50%.

Again, if a mouse spends the same amount of time investigating each object (in other words, the mouse fails to show object recognition), then the result will be 50%. Thus, the standard we will compare against in the null hypothesis for this test is 50%. The higher the percentage above 50%, the more time the mouse spent investigating the novel object, and thus the more likely we will be to reject the null hypothesis and conclude that object recognition occurred.
Step 2: Set the criteria for a decision. The level of significance for this test is
.05. We are computing a two-tailed test with n – 1 degrees of freedom. We will use a


data set with 15 scores, each from a different mouse, a sample size that is appropriate for this behavioral task in research with nonhumans. With n = 15, the degrees of freedom for this test are 15 − 1 = 14. To locate the critical values, we find 14 listed in the rows of Table C.2 in Appendix C and go across to the column for a .05 proportion in two tails combined. The critical values are ±2.145. (See Appendix A12, p. 298, for more on degrees of freedom for parametric tests.)

We will compare the value of the test statistic with these critical values. If the value of the test statistic is beyond a critical value (either greater than +2.145 or less than −2.145), then there is less than a 5% chance we would obtain that outcome if the null hypothesis were correct, so we reject the null hypothesis; otherwise, we retain the null hypothesis.
Step 3: Compute the test statistic. Download Novel_Objects.xlsx from the student study site: http://study.sagepub.com/priviteraexcel1e. As shown in Figure 9.2, Column A contains an ID number for each animal; Column B contains the percentage of total investigation time each animal devoted to the novel object. Column C, which we save for use later, contains the expected percentage of time each animal would have devoted to the novel object if it did not show a preference for either the novel or the familiar object.

As shown in Figure 9.2, we insert in column D some labels to keep track of our calculations in column E for the one-sample t test:

D4: Mean (M)
D5: Sample size (n)
D6: Standard deviation (SD)
D7: Degrees of freedom (df)
D8: Critical value of t (t_crit)

We covered mean in Learning Unit 1 and standard deviation in Learning Unit 2. To the right of the cells mentioned above, we type these functions and formulas into column E (see Appendix B2, p. 301, for formatting cells):

E4: =AVERAGE(B4:B18)
E5: =COUNT(B4:B18)
E6: =STDEV.S(B4:B18)
E7: =E5-1
E8: 2.145

At this point we have what we need to proceed with our calculation. From the values we have calculated already in rows 4 to 8, we prepare column D with three more labels:


D9: Estimated standard error (sM)
D10: Obtained value of t (t_obt)
D11: p value

On our way to finding the t statistic, we compute the estimated standard error. To compute the estimated standard error, we divide the sample standard deviation by the square root of the sample size:

sM = SD/√n

In column E,

E9: =E6/E5^0.5

which yields sM = 9.4/√15 = 2.42 in cell E9 in Figure 9.2b.

We will compare the sample mean to the population mean stated in the null hypothesis: μ = 50. The estimated standard error is the denominator of the t statistic:

t_obt = (M − μ)/sM

Find the t statistic by substituting the values for the sample mean, M = 59.1; the population mean stated in the null hypothesis, μ = 50; and the estimated standard error we just calculated, sM = 2.42. In column E,

E10: =(E4-50)/E9

which yields t_obt = (59.1 − 50)/2.42 = 3.74 in cell E10 in Figure 9.2b.

Note that although there is no function in Excel to calculate a t value, there is a function to calculate the p value associated with a t test. To calculate an exact p value for a one-sample t test, we use a second column of expected values equal to 50% for each of the 15 mice, shown in column C in Figure 9.2. We use the T.TEST function built into Excel. In column E,


E11: =T.TEST(B4:B18,C4:C18,2,1).

This function requires two cell ranges of data: B4:B18 contains the observed percentage of time spent investigating the novel object, and C4:C18 contains the expected percentage of time spent sniffing the novel object: 50% in each cell.

FIGURE 9.2  ●  One-sample t test. (a) Functions and formulas. (b) Resulting calculations from functions and formulas.


After those two ranges of data, the next argument required in the function is the number of tails, for which we specify 2. The final argument is the type of t test, which we specify as related-samples, which Excel terms "Paired," with a 1. (See Appendix B2, p. 301, on formatting cells to add superscripts.) As expected with such a large t_obt, the p value returned of .002 is small, shown in cell E11 in Figure 9.2b.

Step 4: Make a decision. To decide to reject or retain the null hypothesis, we compare the obtained value (t_obt = 3.74) to the critical values in the t table in Appendix C2. For df = n − 1, 15 − 1 = 14, the critical value at α = .05 is 2.145. Because t_obt of 3.74 exceeds the critical value, the decision is to reject the null hypothesis. This t_obt indicates that our observed value of 9.1 percentage points above the expected value of 50 percentage points is 3.74 times larger than the average deviation of 2.42 percentage points expected for a mean based on a sample of 15 scores. If this result were reported in a research journal, it would look something like this following APA format (APA, 2010):

The percentage of time mice explored the novel object (M = 59.1, SD = 9.4) was significantly higher than the percentage expected by chance, t(14) = 3.74, p = .002. Thus, the results support the conclusion that the mice demonstrated object recognition.
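As a cross-check on the spreadsheet arithmetic, the same t statistic and p value can be reproduced from the summary statistics alone. The short Python sketch below is not part of the Excel workflow; it simply repeats the computation above (SciPy is used only for the two-tailed p value):

import math
from scipy import stats

M, mu, SD, n = 59.1, 50, 9.4, 15        # summary statistics from Example 9.1
sM = SD / math.sqrt(n)                  # estimated standard error, about 2.43
t_obt = (M - mu) / sM                   # about 3.75 (3.74 with unrounded spreadsheet values)
p = 2 * stats.t.sf(abs(t_obt), n - 1)   # two-tailed p value, about .002
print(round(t_obt, 2), round(p, 3))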

Effect Size for the One-Sample t Test

As described in Learning Unit 7, hypothesis testing identifies whether an effect exists in a population. When we decide to retain the null hypothesis, we conclude that an effect does not exist in the population. When we decide to reject the null hypothesis, we conclude that an effect does exist in the population. However, hypothesis testing does not tell us how large the effect is.

In Example 9.1, we concluded that mice investigated a novel object more than they investigated a familiar object. To determine the size of an effect, we compute effect size, which gives an estimate of the size of an effect in the population. Two measures of effect size for the one-sample t test are described in this section: estimated Cohen's d and proportion of variance (eta squared).

To label these calculations, in column D we enter

D13: Estimated Cohen's d
D14: Eta squared (η²)

Estimated Cohen's d. The estimate of effect size that is most often used with a t test is the estimated Cohen's d. As described at the beginning of this learning unit on the t test, when the population standard deviation is unknown, we use the sample standard deviation, because it gives an unbiased estimate of the population standard deviation. Similarly, with the estimated Cohen's d formula, we use the sample standard deviation as follows:

d = (M − μ)/SD

Estimated Cohen's d is a measure of effect size in terms of the number of standard deviations that mean scores shifted above or below the population mean stated by the null hypothesis. The larger the value of d, the larger the effect in the population.


In column E,

E13: =(E4-50)/E6

which yields d = (59.1 − 50.0)/9.4 = 9.1/9.4 = 0.97 in cell E13 in Figure 9.2b.

We conclude that novelty of an object will increase investigation by mice of that object by 0.97 standard deviations above the expectation of equal investigation of familiar and novel objects. The effect size conventions (Cohen, 1988) given in the middle column of Table 9.2 show that this is a large effect size. We could report this measure with the significant t test in Example 9.1 by stating,

The percentage of time mice explored the novel object (M = 59.1, SD = 9.4) was significantly higher than the percentage expected by chance, t(14) = 3.74, p < .01, d = 0.97. Thus, the results support the conclusion that the mice demonstrated object recognition.
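The same estimate can be reproduced directly from the summary statistics reported above. A minimal Python sketch (outside Excel):

# Estimated Cohen's d for the one-sample test in Example 9.1
M, mu, SD = 59.1, 50.0, 9.4
d = (M - mu) / SD      # about 0.97, a large effect by the conventions in Table 9.2
print(round(d, 2))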
Proportion of Variance: Eta Squared (η²). Another measure of effect size is to estimate the proportion of variance that can be accounted for by some treatment. A treatment, which is any unique characteristic of a sample or any unique way that a researcher treats a sample, can change the value of a dependent variable. A treatment is associated with variability in a study. Proportion of variance estimates how much of the variability in a dependent variable can be accounted for by the treatment. In the proportion of variance formula, the variability explained by a treatment is divided by the total variability observed:

Proportion of variance = variance explained / total variance

In Example 9.1, we found that mice investigated a novel object more than they investigated a familiar object. The unique characteristic of the sample in this study was that the mice encountered a novel object that attracted their attention, not just two familiar objects. The variable we measured (i.e., the dependent variable) was percentage of total time investigating that was devoted to the novel object. Measuring proportion of variance determines how much of the variability in the dependent variable (percentage of investigation time) can be explained by the treatment (the fact that one of the objects was novel). Here, we describe a measure of proportion of variance, eta squared (η²). Eta squared is a measure of proportion of variance that can be expressed in a single formula based on the result of a t test:

η² = t²/(t² + df)

Proportion of variance is a measure of effect size in terms of the proportion or percentage of variability in a dependent variable that can be explained or accounted for by a treatment.

In hypothesis testing, a treatment is any unique characteristic of a sample or any unique way that a researcher treats a sample.


In this formula, t is the value of the t statistic, and df is the degrees of freedom. In this example, t = 3.74, and df = 14. To find variance, we square the standard deviation. Thus, in the eta squared formula, we square the value of t to find the proportion of variance. In column E,

E14: =E10^2/(E10^2+E7)

which yields η² = 3.74²/(3.74² + 14) = 13.9876/(13.9876 + 14) = 0.50 in cell E14 in Figure 9.2b.

We conclude that 50% of the variability in the percentage of time spent investigating objects (the dependent variable) can be explained by the fact that one of the objects was novel (the treatment). We could report this measure with the significant t test in Example 9.1 by stating,

The percentage of time mice explored the novel object (M = 59.1, SD = 9.4) was significantly higher than the percentage expected by chance, t(14) = 3.74, p < .01 (η² = .50). Thus, the results support the conclusion that the mice demonstrated object recognition.
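Because eta squared depends only on t_obt and df, it can be reproduced from the values already computed. A minimal Python sketch of the same formula:

# Eta squared for the one-sample test in Example 9.1
t_obt, df = 3.74, 14
eta_squared = t_obt**2 / (t_obt**2 + df)   # about .50, a large effect
print(round(eta_squared, 2))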
The third column in Table 9.2 displays guidelines for interpreting a trivial, small, medium, and large effect for a variety of measures of effect size, including η². Using this table, we find that η² = .4998 is a large effect. Although eta squared is a popular measure of proportion of variance, it tends to overestimate the proportion of variance explained by a treatment. To correct for this bias, many researchers use a modified eta squared formula, called omega-squared. Coverage of omega-squared is beyond the scope of this book.

TABLE 9.2  ●  The size of an effect using estimated Cohen's d and proportion of variance (eta squared).

Description of Effect    d                  η²                    ω²
Trivial                  —                  η² < .01              ω² < .01
Small                    d < 0.2            .01 < η² < .09        .01 < ω² < .09
Medium                   0.2 < d < 0.8      .10 < η² < .25        .10 < ω² < .25
Large                    d > 0.8            η² > .25              ω² > .25

Note that Cohen's d is interpreted the same with negative values. The sign (+, −) simply indicates the direction of the effect.


Confidence Intervals for the One-Sample t Test

In Example 9.1, we stated a null hypothesis regarding the value of the mean in a population. We can further describe the nature of the effect by determining where the effect is likely to be in the population by computing the confidence intervals.

As introduced in Learning Unit 7, there are two types of estimates: a point estimate and an interval estimate. When using one sample, a point estimate is the sample mean we measure. The interval estimate, reported as a confidence interval, is stated within a given level of confidence, which is the likelihood that an interval contains an unknown population mean.

To illustrate confidence intervals for the one-sample t test, we will revisit Example 9.1 to compute the confidence intervals at a 95% level of confidence for the data analyzed using the one-sample t test. To find the confidence intervals, we need to evaluate an estimation formula. We will use the estimation formula to identify the upper and lower confidence limits within which the unknown population mean is likely to be contained. The estimation formula for the one-sample t test is as follows:

M ± t(sM)

In all, we follow three steps to estimate the value of a population mean using a point estimate and an interval estimate:

Step 1: Compute the sample mean and standard error.

Step 2: Choose the level of confidence and find the critical values at that level of confidence.

Step 3: Compute the estimation formula to find the confidence limits.

Step 1: Compute the sample mean and standard error. We have already computed the sample mean, which is the point estimate of the population mean, M = 59.1 in cell E4 of Figure 9.2b. We have also already computed the standard error of the mean, which is the sample standard deviation divided by the square root of the sample size, sM = 2.42 in cell E9 of Figure 9.2b.

Step 2: Choose the level of confidence and find the critical values at that level of confidence. In this example, we chose the 95% confidence interval (CI). The critical value at this level of confidence will be the same as we found in Step 2 for Example 9.1 using hypothesis testing. As shown in Table 9.3, the 95% level of confidence corresponds to a two-tailed test at a .05 level of significance using hypothesis testing. Thus, the critical value for the interval estimate is 2.145, as shown in cell E8 of Figure 9.2b.

To explain further how this critical value was determined, remember that in a sampling distribution, 50% of sample means fall above the sample mean we selected, and 50% fall below it. We are looking for the 95% of sample means that surround the sample mean we selected, meaning the 47.5% of sample means above and the 47.5% of sample means below the sample mean we selected. This leaves only 2.5% of sample means remaining in the upper tail and 2.5% in the lower tail.


TABLE 9.3  ●  Levels of significance using hypothesis testing and the corresponding levels of confidence using estimation.

Level of Confidence    Level of Significance (α level, two-tailed)
99%                    .01
95%                    .05
90%                    .10
80%                    .20

Table 9.3 shows how different levels of confidence using estimation correspond to different two-tailed levels of significance (α) using hypothesis testing. Referring to Table 9.3, we find that a 95% CI corresponds to a two-tailed test at a .05 level of significance. To find the critical value at this level of confidence, we look in the t table in Table C.2 in Appendix C. The degrees of freedom are 14 (df = n − 1 for a one-sample t test). The critical value for the interval estimate is 2.145. Multiplying the observed standard error of the mean by the critical value of t tells us how far above the sample mean 47.5% of all sample means would fall and how far below the mean another 47.5% of all sample means would fall. This range above and below the sample mean encompasses 95% of sample means.

Step 3: Compute the estimation formula to find the confidence limits for a 95% confidence interval. In column D,

D16: t(sM)
D17: 95% CI upper limit
D18: 95% CI lower limit

To compute the formula, multiply t by the estimated standard error. In column E,

E16: =E8*E9

which yields t(sM) = 2.145(2.42) = 5.20 in cell E16 in Figure 9.2b.

Add 5.20 to the sample mean to find the upper confidence limit, and subtract 5.20 from the sample mean to find the lower confidence limit. In column E,

E17: =E4+E16

E18: =E4-E16


which yields

M + t(sM) = 59.1 + 5.20 = 64.3

in cell E17 in Figure 9.2b, and

M − t(sM) = 59.1 − 5.20 = 53.9

in cell E18 in Figure 9.2b.

As shown in Figure 9.3, the 95% confidence interval in this population is between a percentage of 53.9% and 64.3% of investigation time directed toward a novel object. We can estimate within a 95% level of confidence that the mean percentage of time investigating a novel object is between 53.9% and 64.3% in the population. We are 95% confident that the population mean falls within this range, because 95% of all sample means we could have selected from this population fall within the range of sample means we specified.
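The same interval can be reproduced from the sample mean, the estimated standard error, and the critical value. A minimal Python sketch (the critical value comes from SciPy rather than Table C.2):

from scipy import stats

M, sM, df = 59.1, 2.42, 14
t_crit = stats.t.ppf(0.975, df)            # about 2.145 for a 95% CI
lower, upper = M - t_crit * sM, M + t_crit * sM
print(round(lower, 1), round(upper, 1))    # about 53.9 and 64.3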

Computing the One-Sample t Test Using the Analysis ToolPak

We can also calculate this t test using the Analysis ToolPak available in Excel for easy and accurate calculation. We'll guide you through the steps to do the analysis we did for the one-sample t test.

Return to the workbook Novel_Objects.xlsx. Click on the Data tab, and then on the Data Analysis icon all the way to the right. Select "t-Test: Paired Two Sample for Means," as shown in Figure 9.4a. As we mentioned above, we can get the same result from a one-sample t test as we can from a related-samples t test (which is called a Paired t-Test in Excel) when we pair the value predicted by the null hypothesis, column C in Figure 9.2, with each score that was measured, column B in Figure 9.2.

Selecting "t-Test: Paired Two Sample for Means" yields the dialog box in Figure 9.4b. For Variable 1, we select the observed values of the percentage of time spent exploring the novel object in cells B3 through B18, which includes in B3 a label for the data.

FIGURE 9.3  ●  At a 95% CI, the true population mean score falls between 53.9 and 64.3 in this population of curious mice. The number line shows the 95% CI from 53.9 to 64.3; the point estimate is M = 59.1.


FIGURE 9.4  ●  Performing a one-sample t test with the Analysis ToolPak in Excel. (a) Selecting "t-Test: Paired Two Sample for Means" to perform a one-sample t test. (b) Specifying the location of the data and parameters for the t test.

For Variable 2, we select the expected percentages in C3 through C18, all of which are 50, and include in C3 a label for the data. We expect the Hypothesized Mean Difference to be 0. (Zero is the default if this box remains blank.) Check the Labels box so that the output contains the labels from B3 and C3. We keep our output on the same page by selecting Output Range and clicking in cell G1.

Clicking OK on the dialog box returns the output table in Figure 9.5. The labels that we included in the cell range for the analysis are in H3 and I3. We can change these labels as we desire. Notice that in Figure 9.5, we get the same mean of 59.1% in cell H4, the same t_obt of 3.74 in cell H10, and the same two-tailed p value of .002 in cell H13 as we did in Figure 9.2b. Although neither an estimate of effect size nor confidence intervals are generated automatically, the output table gives the mean from which we would subtract the expected value of 50%, the variance, and the degrees of freedom. With this information we can calculate effect size and confidence intervals as we did above.


FIGURE 9.5  ●  Results of one-sample t test using the Analysis ToolPak.

Computing the Two-Independent-Sample t Test

In this section, we compute the two-independent-sample t test, which is used to compare the mean difference between two groups; specifically, to test hypotheses regarding the difference between two population means. In terms of the null hypothesis, we state the mean difference that we expect in the population and compare it to the difference we observe between the two sample means in our sample. Often, a visual inspection of data from two groups can be quite insightful in terms of determining whether groups differ. Appendix A8 provides an illustration for how to inspect grouped data visually. For a two-independent-sample t test concerning two population means, we make four assumptions:

1. Normality. We assume that data in each population being sampled are normally distributed. This assumption is particularly important for small samples, because the standard error is typically much larger. In larger sample sizes (n > 30), the standard error is smaller, and this assumption becomes less critical as a result.

2. Random sampling. We assume that the data we measure were obtained from samples that were selected using a random sampling procedure.

3. Independence. We assume that each measured outcome or observation is independent, meaning that one outcome does not influence another. Specifically, outcomes are independent when the probability of one outcome has no effect on the probability of another outcome. Using random sampling usually satisfies this assumption.

The two-independent-sample t test is a statistical procedure used to compare the mean difference between two independent groups. This test is specifically used to test hypotheses concerning the difference between two population means, where the variance in one or both populations is unknown.


4. Equal variances. We assume that the variances in each population are equal to each other. This assumption is usually satisfied when the larger sample variance is not greater than two times the smaller:

larger s² / smaller s² < 2

Keep in mind that satisfying the assumptions for the t test is critically important. That said, for each example in this book, the data are intentionally constructed such that the assumptions for conducting the tests have been met. (See Appendix A8, p. 290, for how to visually inspect data to compare differences between two groups.) In Example 9.2 we follow the four steps to hypothesis testing introduced in Learning Unit 7 to compute a two-independent-sample t test using an example adapted from published research.

Example 9.2. For an example, let us consider the impact of safety training in the workplace. Nonfatal workplace injuries can be expressed as a rate: the number of injuries per 200,000 hours worked by all employees. A nonfatal incidence rate of 5 means that 5 nonfatal injuries in 200,000 hours of work were accumulated by all employees at a company. Thus, the incidence rate that we analyze has been adjusted for size of company.

Using a sample data set adapted from published research, we will use the four steps to hypothesis testing introduced in Learning Unit 7. We examine at a .05 level of significance whether safety training for 40 companies produces a difference in incidence rate as compared to 40 other companies without safety training.

Step 1: State the hypotheses. The null hypothesis states that there is no difference between the two groups, and we are testing whether or not there is a difference:

H0: μ1 − μ2 = 0  There is no difference; safety training has no effect on the incidence rate of nonfatal injuries.

H1: μ1 − μ2 ≠ 0  Safety training does have an effect on the incidence rate of nonfatal injuries.

Step 2: Set the criteria for a decision. The level of significance for this test is .05. We are computing a two-tailed test, so we place the rejection region in both tails. For the t test, the degrees of freedom for each group or sample are n − 1. Table 9.4 compares degrees of freedom for one-sample and for two-independent-sample t tests. To find the degrees of freedom for two samples, then, we add the degrees of freedom in each sample. This can be found using one of three methods:

Method 1: df for two-independent-sample t test = df1 + df2

Method 2: df for two-independent-sample t test = (n1 − 1) + (n2 − 1)

Method 3: df for two-independent-sample t test = N − 2


As summarized in Table 9.4, we can add the degrees of freedom for each sample using the first two methods. In the third method, N is the total sample size for both groups combined, and we subtract 2 from this value. All three methods will produce the same result for degrees of freedom. (See Appendix A12, p. 298, for more on degrees of freedom for parametric tests.) The degrees of freedom for each sample here are 40 − 1 = 39. Thus, the degrees of freedom for the two-independent-sample t test are the sum of these degrees of freedom:

df = 39 + 39 = 78

In Table C.2 in Appendix C, p. 318, the degrees of freedom in the leftmost column increase by 1 up to df = 30. After 30, they increase by 10. Because there is no entry for df = 78, we use the next smallest value, which is df = 60. Move across the columns to find the critical value for a .05 proportion in two tails combined. The critical values for this test are ±2.000.

We will compare the value of the test statistic with these critical values. If the value of the test statistic is beyond a critical value (either greater than +2.000 or less than −2.000), then there is less than a 5% chance we would obtain that outcome if the null hypothesis were correct, so we reject the null hypothesis; otherwise, we retain the null hypothesis.

Step 3: Compute the test statistic. Download Employee_Safety_Training.xlsx from the student study site: http://study.sagepub.com/priviteraexcel1e.
TABLE 9.4  ●  Computing the degrees of freedom for a t test.

Participants    Teacher Present    Teacher Absent    Difference Scores
1               220                210               (220 − 210) = 10
2               245                220               (245 − 220) = 25
3               215                195               (215 − 195) = 20
4               260                265               (260 − 265) = −5
5               300                275               (300 − 275) = 25
6               280                290               (280 − 290) = −10
7               250                220               (250 − 220) = 30
8               310                285               (310 − 285) = 25


Column A contains the rate of nonfatal injuries per 200,000 hours that employees worked at companies with safety training; column B contains that same measure at companies without safety training. We can copy A2 to B3 and paste them to D2 to E3 as shown in Figure 9.6. These column headers label the two treatments of the independent variable.

Also as shown in Figure 9.6, we insert in column C labels to keep track of our calculations in columns D and E for the two-independent-sample t test:

C4: Mean (M)
C5: Sample size (n)
C6: Variance (s²)
C7: Degrees of freedom (df)

To the right of the cells mentioned above, we type these functions and formulas into columns D and E:

D4: =AVERAGE(A4:A43)    E4: =AVERAGE(B4:B43)
D5: =COUNT(A4:A43)      E5: =COUNT(B4:B43)
D6: =VAR.S(A4:A43)      E6: =VAR.S(B4:B43)
D7: =D5-1               E7: =E5-1

As mentioned above, the critical value of t taken from Table C.2 in Appendix C is 2.000.

C8: Critical value of t, df = 60 in Table C.2 (t_crit)
D8: 2.000

These values in D4 through E7 allow us to proceed with the calculation of the two-independent-sample t test and compare that result with the t_crit of 2.000. We prepare column C with six more labels:

C9: Sample mean difference (M1 − M2)
C10: Hypothesized mean difference (μ1 − μ2)
C11: Pooled sample variance (s²p)
C12: Standard error for difference (sM1−M2)
C13: Obtained value of t (t_obt)
C14: p value

In the formula for a two-independent-sample t test, we subtract the mean difference stated in the null hypothesis, cell D10, from the mean difference between the sample means, cell D9. To column D we add:
ence between the sample means, cell D9, from the mean difference stated in the null,
cell D10. To column D we add:


D9: =D4-E4
D10: 0

We divide this difference by the combined standard error in both samples, called the estimated standard error for the difference, which is computed as

sM1−M2 = √(s²p/n1 + s²p/n2)

(See Appendix B2, p. 301, on formatting cells to add superscripts or subscripts, and Appendix B8, p. 312, on inserting equations, especially to use both a superscript and subscript or add multiple subscripts.)

Notice that the numerator in the estimated standard error for the difference formula is s²p, which is called the pooled sample variance. The first step, then, to compute the estimated standard error for the difference is to compute the pooled sample variance. Because we have equal sample sizes in the two groups, we can average the two sample variances using the following formula:

s²p = (s²1 + s²2)/2

Appendix A9 provides more detail regarding the calculation and interpretation of the pooled sample variance.

The estimated standard error for the difference is an estimate of the standard deviation of a sampling distribution of mean differences between two sample means. It is an estimate of the standard error, or standard distance, that mean differences can be expected to deviate from the mean difference stated in the null hypothesis.

The pooled sample variance is the mean sample variance of two samples. When the sample size is unequal, the variance in each group or sample is weighted by its respective degrees of freedom.

In cell D11, we calculate the pooled sample variance:

D11: =(D6+E6)/2

which yields s²p = (3.61 + 4.92)/2 = 4.27 in cell D11 in Figure 9.6b.

Having the pooled sample variance allows us to then calculate the estimated standard error for the difference in column D (notice that 4.27 is now the numerator in the estimated standard error for the difference formula):

D12: =((D11/D5)+(D11/E5))^0.5

which yields sM1−M2 = √(4.27/40 + 4.27/40) = 0.46 in cell D12 in Figure 9.6b.


Now we have the three components needed to calculate the two-independent-sample t test:

t_obt = ((M1 − M2) − (μ1 − μ2)) / sM1−M2

In column D we insert

D13: =(D9-D10)/D12

which yields t_obt = (−1.95 − 0)/0.46 = −4.222 in cell D13 in Figure 9.6b.

Finally, in column D we calculate a p value:

D14: =T.TEST(A4:A43,B4:B43,2,2)

which yields a p value of .000065 in cell D14 in Figure 9.6b.

Step 4: Make a decision. The t_obt value in cell D13 is −4.222. This value far exceeds our two-tailed critical value at α = .05 of ±2.000 for df = 60 from Table C.2 in Appendix C. In fact, the exact p value that we can calculate with Excel indicates that the probability of such an outcome occurring, if the null hypothesis were true, is very unlikely: p = .000065 in cell D14 in Figure 9.6b. If this result were reported in a research journal, it would look something like this following APA format (American Psychological Association, 2010):

The mean nonfatal incidence rate at companies with employee safety training (M = 8.23, SD = 1.90) was significantly lower than was the rate at companies without employee safety training (M = 10.18, SD = 2.22), t(78) = −4.222, p < .001.
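As with the one-sample test, the pooled variance, estimated standard error, t statistic, and p value can be reproduced from the summary statistics alone. A minimal Python sketch of the same arithmetic (SciPy is used only for the p value):

import math
from scipy import stats

M1, M2, var1, var2, n1, n2 = 8.23, 10.18, 3.61, 4.92, 40, 40   # summary statistics from Example 9.2
sp2 = (var1 + var2) / 2                  # pooled sample variance, about 4.27
se_diff = math.sqrt(sp2/n1 + sp2/n2)     # estimated standard error for the difference, about 0.46
t_obt = (M1 - M2) / se_diff              # about -4.2
df = n1 + n2 - 2                         # 78
p = 2 * stats.t.sf(abs(t_obt), df)       # far below .001
print(round(t_obt, 3), p)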

Effect Size for the Two-Independent-Sample t Test

Hypothesis testing is used to identify whether an effect exists in one or more populations of interest. When we reject the null hypothesis, we conclude that an effect does exist in the population. When we retain the null hypothesis, we conclude that an effect does not exist in the population. In Example 9.2, we concluded that an effect does exist. We will compute effect size for the test in Example 9.2 to determine the effect size of this result or mean difference.


FIGURE 9.6  ●  Two-independent-sample t test. (a) Functions and formulas. (b) Results of the calculations.


We can identify two measures of effect size for the two-independent-sample t test: estimated Cohen's d and proportion of variance with eta squared.

To label these calculations, in column C we enter

C16: Estimated Cohen's d
C17: Eta squared (η²)

Estimated Cohen's d. As stated in Example 9.1 above, estimated Cohen's d is most often used with the t test. When the estimated Cohen's d is used with the two-independent-sample t test, we place the difference between two sample means in the numerator and the pooled sample standard deviation (or square root of the pooled sample variance) in the denominator. The pooled sample standard deviation is an estimate for the pooled or mean standard deviation for the difference between two population means. The formula for an estimated Cohen's d for the two-independent-sample t test is

d = (M1 − M2)/√(s²p)

Pooled sample standard deviation is the combined sample standard deviation of two samples. It is computed by taking the square root of the pooled sample variance. This measure estimates the standard deviation for the difference between two population means.

In column D, enter

D16: =(D4-E4)/D11^0.5

which yields d = (8.23 − 10.18)/√4.27 = −0.94 in cell D16 in Figure 9.6b.

We conclude safety training decreases the nonfatal incidence rate by 0.94 standard deviations below the mean as compared to no safety training. The effect size conventions given in the middle column of Table 9.2 show that this is a large effect size. We could report this measure with the significant t test in Example 9.2 by stating,

The mean nonfatal incidence rate at companies with employee safety training (M = 8.23, SD = 1.90) was significantly lower than was the rate at companies without employee safety training (M = 10.18, SD = 2.22), t(78) = −4.222, p < .01, d = −0.94.

Proportion of Variance: Eta Squared (η²). Another measure of effect size is proportion of variance, which estimates the proportion of variance in a dependent variable that can be explained by some treatment. In Example 9.2, this measure can describe the proportion of variance in the nonfatal incidence rate (the dependent variable) that can be explained by whether companies did or did not have safety training (the treatment). One measure of proportion of variance for the two-independent-sample t test is eta squared, η².


Eta squared can be expressed in a single formula based on the result of a t test:

η² = t²/(t² + df)

In Example 9.2, t = −4.222, and df = 78. To find proportion of variance using the eta squared formula, we then square the value of t in the numerator and the denominator. In column D, insert:

D17: =D13^2/(D13^2+78)

which yields η² = (−4.222)²/((−4.222)² + 78) = 17.83/(17.83 + 78) = .19 in cell D17 in Figure 9.6b.

We conclude that 19% of the variability in nonfatal incidence rates can be explained by whether companies did or did not provide safety training. Based on the effect size conventions in Table 9.2, this result indicates a medium effect size. We can report this estimate with the significant t test in Example 9.2 by stating,

The mean nonfatal incidence rate at companies with employee safety training (M = 8.23, SD = 1.90) was significantly lower than was the rate at companies without employee safety training (M = 10.18, SD = 2.22), t(78) = −4.222, p < .01, η² = .19.
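Both effect size estimates for Example 9.2 follow directly from values computed earlier. A minimal Python sketch of the two formulas (the Cohen's d here divides by the pooled standard deviation, the square root of the pooled variance):

import math

M1, M2, sp2 = 8.23, 10.18, 4.27            # means and pooled sample variance from Example 9.2
t_obt, df = -4.222, 78
d = (M1 - M2) / math.sqrt(sp2)             # estimated Cohen's d, about -0.94
eta_squared = t_obt**2 / (t_obt**2 + df)   # about .19, a medium effect
print(round(d, 2), round(eta_squared, 2))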
y,

Confidence Intervals for the Two-Independent-Sample t Test

In Example 9.2, we stated a null hypothesis regarding the mean difference in a population. We can further describe the nature of the effect by determining where the effect is likely to be in the population by computing the confidence intervals.

As introduced in Learning Unit 7, there are two types of estimates: a point estimate and an interval estimate. When comparing two samples, a point estimate is the sample mean difference we measure. The interval estimate, often reported as a confidence interval, is stated within a given level of confidence, which is the likelihood that an interval contains an unknown population mean difference.

To illustrate confidence intervals for the two-independent-sample t test, we will revisit Example 9.2, and using the same data, we will compute the confidence intervals at a 95% level of confidence using the three steps to estimation first introduced in Example 9.1. For a two-independent-sample t test, the estimation formula is

(M1 − M2) ± t(sM1−M2)


Step 1: Compute the sample mean and standard error. The difference between
the two sample means is M1 − M 2 = −1.95 nonfatal injuries per 200,000 hours that
employees worked. Therefore, the mean difference or point estimate of the population
mean difference is −1.95. (We already computed this value for Example 9.2 in Step 3
of hypothesis testing.)
The estimated standard error for the difference, sM1−M2, is equal to 0.46. (We already computed this value as well for Example 9.2 in Step 3 of hypothesis testing.)
Step 2: Choose the level of confidence and find the critical values at that level of confidence. In this example, we want to find the 95% confidence interval (CI), so we choose a 95% level of confidence. Remember, in a sampling distribution, 50% of the differences between two sample means fall above the mean difference we selected in our sample, and 50% fall below it. We are looking for the 95% of differences between two sample means that surround the mean difference we measured in our sample. A 95% CI corresponds to a two-tailed test at a .05 level of significance. To find the critical value at this level of confidence, we look in the t table in Table C.2 in Appendix C. As explained in Step 2 in Example 9.2, we use df = 60. The critical value for the interval estimate is t = 2.000.
Step 3: Compute the estimation formula to find the confidence limits
for a 95% confidence interval. Refer again to Figure 9.6. In column C, insert

C19: t(sM1−M2)
C20: 95% CI upper limit

C21: 95% CI lower limit



Because we are estimating the difference between two sample means in the popula-
tion with an unknown variance, we use the M1 − M2 ± t(sM1−M2) estimation formula. To compute the formula, multiply t by the estimated standard error for the difference:

t(sM1−M2)

In column D,
D19: =D8*D12
which yields

t(sM1−M2) = 2.000(0.46) = 0.92
in cell D19 in Figure 9.6b.


Add 0.92 to the sample mean difference to find the upper confidence limit, and
subtract 0.92 from the sample mean difference to find the lower confidence limit. In
column D,

D20: =D9+D19

D21: =D9-D19


which yields

M1 − M2 + t(sM1−M2) = −1.95 + 0.92 = −1.03

in cell D20 in Figure 9.6b, and

M1 − M2 − t(sM1−M2) = −1.95 − 0.92 = −2.87

in cell D21 in Figure 9.6b.

As shown in Figure 9.7, the 95% confidence interval in this population is between a mean difference in nonfatal injury incidence rate of −2.87 and −1.03 per 200,000 hours worked. We can estimate within a 95% level of confidence that the difference between groups in nonfatal injury incidence rate is between −2.87 and −1.03 per 200,000 hours worked. We are 95% confident that the mean difference in the population falls within this range, because 95% of all sample mean differences we could have selected from this population fall within the range of sample mean differences we specified.
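As a cross-check on these limits, here is a short Python sketch of the same estimation formula. It assumes the scipy library is available and uses the exact critical value for df = 78 (about 1.99) rather than the table value of 2.000, so the limits it prints agree with the ones above to two decimal places.

# Sketch: 95% confidence interval for the mean difference in Example 9.2.
from scipy import stats

mean_diff = -1.95   # point estimate, M1 - M2
sem_diff = 0.46     # estimated standard error for the difference
df = 78

t_crit = stats.t.ppf(0.975, df)            # two-tailed critical value at .05
lower = mean_diff - t_crit * sem_diff
upper = mean_diff + t_crit * sem_diff
print(round(lower, 2), round(upper, 2))    # approximately -2.87 and -1.03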

Computing the Two-Independent-Sample t Test Using the Analysis ToolPak
We can also use the Analysis ToolPak available in Excel for easy and accurate calculation. We'll guide you through the steps to do the analysis for the two-independent-sample t test.
Click on the Data tab, and then on the Data Analysis icon all the way to the right. Select "t-Test: Two-Sample Assuming Equal Variances" (Figure 9.8a). According to the fourth assumption of the two-independent-sample t test described above, variances in the two samples must be equal. The rule of thumb we use is that the larger variance is no more than twice the smaller variance. That is the case with these data, as is shown in Figure 9.6b, cells D6 and E6. The variances for the two groups are 3.61 and 4.92.
FIGURE 9.7  ●  At a 95% CI, the mean difference in nonfatal injury incidents falls between −2.87 and −1.03. [Number line from the original figure: the 95% CI spans −2.87 to −1.03, and the point estimate is M = −1.95.]

FIGURE 9.8  ●  Performing a two-independent-sample t test with the Analysis ToolPak in Excel. (a) Selecting "t-Test: Two-Sample Assuming Equal Variances" to perform a two-independent-sample t test. (b) Specifying the location of the data and parameters for the t test.
Selecting "t-Test: Two-Sample Assuming Equal Variances" yields the dialog box in Figure 9.8b. For Variable 1, we select the nonfatal injury incident rate for companies with safety training in cells A3 through A43, which includes in A3 a label for the data. For Variable 2, we select the data for companies without safety training in cells B3 through B43, and include in B3 a label for the data. The Hypothesized Mean Difference is 0. Check the Labels box so that the output contains the labels from A3 and B3. We keep our output on the same page by selecting Output Range and clicking in cell F3.
Clicking "OK" on the dialog box returns the output table in Figure 9.9. Notice that in Figure 9.9, we get the same means of 8.23 in cell H4 and 10.18 in cell I4 as we obtained in Figure 9.6b. We also get the same t obt of −4.222 in cell H10 and same p value of .000065 in cell H13 as we did in Figure 9.6b. The t crit in cell H14 in Figure 9.9 is for df = 78 and is thus more precise than the one stated in Step 2 above and shown in Figure 9.6. Although neither an estimate of effect size nor confidence intervals are generated automatically, the output table gives the means of the two groups, the variance, and the degrees of freedom. With this information we can calculate effect size and confidence intervals as we did above.
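If the two columns of incident rates are ever exported from the spreadsheet, the same equal-variance test can be reproduced outside Excel. The sketch below uses scipy and two placeholder lists; the numbers shown are not the Example 9.2 data, so substitute the actual 40 rates per group before comparing the output to Figure 9.9.

# Sketch: equal-variance two-independent-sample t test with scipy.
from scipy import stats

with_training = [8.1, 7.9, 9.0]       # placeholder values only
without_training = [10.4, 9.8, 11.1]  # placeholder values only

# equal_var=True mirrors "t-Test: Two-Sample Assuming Equal Variances"
t_obt, p_value = stats.ttest_ind(with_training, without_training, equal_var=True)
print(t_obt, p_value)  # with the real data, expect about -4.222 and .000065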


FIGURE 9.9  ●  Results of the two-independent-sample t test using the Analysis ToolPak.
Computing the Related-Samples t Test

In this section, we compute the related-samples t test, which is used to compare


the mean difference between pairs of scores. In terms of the null hypothesis, we start by stating the null hypothesis for the mean difference between pairs of scores in a population, and we then compare this to the difference we observe between paired scores in a sample. The related-samples t test is different from the two-independent-sample t test in that first we subtract one score in each pair from the other to obtain the difference score for each participant; then we compute the test statistic. Appendix A10 provides an overview for the reason we compute difference scores. For a related-samples t test, we make two assumptions:

1. Normality. We assume that data in the population of difference scores are normally distributed. Again, this assumption is most important for small sample sizes. With larger samples (n > 30), the standard error is smaller, and this assumption becomes less critical as a result.

2. Independence within groups. The samples are related or matched between groups. However, we must assume that difference scores were obtained from different individuals within each group or treatment.

The related-samples t test is a statistical procedure used to test hypotheses concerning two related samples selected from populations in which the variance in one or both populations is unknown.

A difference score is a score or value obtained by subtracting one score from another. In a related-samples t test, difference scores are obtained prior to computing the test statistic.

Again, keep in mind that satisfying the assumptions for the t test is critically important. That said, for each example in this book, the data are intentionally constructed such that the assumptions for conducting the tests have been met. In Example 9.3, we follow the four steps to hypothesis testing introduced in Learning Unit 7 to compute a related-samples t test using an example adapted from published


research. Note that there are many types of designs that fit into the category of related-samples. An overview of the types of designs that fit into this category is provided in Appendix A11.

Appendix A10
See Appendix A10, p. 293, for more detail regarding why difference scores are calculated for a related-samples t test.

Appendix A11
See Appendix A11, p. 295, for more detail regarding the types of designs that are considered related-samples designs.

Example 9.3. One area of focus in cognitive psychology is attention. Psychologists have examined what kinds of visual stimuli capture our attention most quickly. The course of human evolution may have predisposed us to notice animals more readily than we notice inanimate objects (Hagen & Laeng, 2016; New, Cosmides, & Tooby, 2007). In our evolutionary past, animals could have been predators that would harm us or food that would nourish us. Thus animal objects in the environment may have held more meaning than nonanimal objects such as plants or rocks. Changes to animate stimuli may capture our attention more quickly than changes in other stimuli. Suppose we conduct a study of whether people are faster to detect change in animate targets (e.g., people or animals) than in inanimate targets (e.g., plants, cars). We show participants several pairs of scenes that are virtually identical except for one change. That change could be to an animate object or to an inanimate object. For 35 participants, we record to the nearest 0.01 second the time taken for correct identification of a change. Using a sample data set adapted from published research, we will use the four steps to hypothesis testing introduced in Learning Unit 7 to test for a difference in their responses to each kind of change, animate versus inanimate, at a .05 level of significance.
Step 1: State the hypotheses. Because we are testing whether or not a difference exists, the null hypothesis states that there is no mean difference, and the alternative hypothesis states that there is a mean difference:

H0: µ1 − µ2 = 0  No difference; changes in animate as compared to inanimate objects do not differ in time to detection.

H1: µ1 − µ2 ≠ 0  Changes in animate as compared to inanimate objects differ in time to detection.
Step 2: Set the criteria for a decision. The level of significance for this test is .05. This is a two-tailed test for the mean difference between two related samples. The degrees of freedom for this test are df = 35 − 1 = 34.
Because df = 34 is not available in Table C.2 in Appendix C, we take the closest smaller value, which is df = 30. Move across the columns to find the critical value for a .05 proportion in two tails combined. The critical values for this test are ±2.042.
We will compare the value of the test statistic with these critical values. If the value of the test statistic is beyond a critical value (either greater than +2.042 or less than −2.042), then there is less than a 5% chance we would obtain that outcome if the null hypothesis were correct, so we reject the null hypothesis; otherwise, we retain the null hypothesis.

Appendix A12
See Appendix A12, p. 298, for more on degrees of freedom for parametric tests.
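Table C.2 forces us to drop back to df = 30 here, but statistical software can return the critical value for df = 34 directly. A minimal sketch, assuming the scipy library is available:

# Sketch: exact two-tailed critical value for df = 34 at a .05 level of significance.
from scipy import stats

t_crit = stats.t.ppf(1 - 0.05 / 2, 34)
print(round(t_crit, 3))  # about 2.032, slightly smaller than the table value 2.042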
Step 3: Compute the test statistic. Download Visual_Change.xlsx from the student study site: http://study.sagepub.com/priviteraexcel1e. As shown in Figure 9.10, participants are identified in column A, and their times to detect visual change in animate and inanimate objects are listed in the same row. Note that, in this spreadsheet, the information for a participant is all on one row, and that one row contains information from only a single participant.
To compute the test statistic, we (1) compute a difference score by subtracting for each participant one measure from the other measure; (2) compute the mean, variance, and standard deviation of difference scores; (3) compute the estimated standard error for difference scores; and then (4) compute the test statistic.
(1) Compute the difference scores. In cell D3, type "D" to signify that the column will contain difference scores, as in Figure 9.10. To calculate a difference score for the first participant, enter into cell D4 =B4-C4. Select cell D4. Fill down to cell D38, or copy D4 and paste from D5 to D38. Keep in mind that the sign (negative or positive) of difference scores matters when we compute the mean and standard deviation.

Appendix B
See Appendix B2, p. 301, on formatting cells to add superscripts or subscripts. See Appendix B8, p. 312, on inserting equations, especially to use both a superscript and subscript or add multiple subscripts.

(2) Compute the mean, variance, and standard deviation of difference scores, and the estimated standard error for the difference scores (sMD). We'll reserve column E for calculating D², which we need on our way to calculating the variance and standard deviation of the difference scores. Use column F, as in Figure 9.10, for labels to keep track of what we calculate:

F4: Mean difference score (MD)

F5: Sample size (n)

F6: Variance of the difference scores (s²D)

F7: Standard deviation of difference scores (sD)

F8: Standard error for difference scores (sMD)

The estimated standard error for difference scores (sMD) is an estimate of the standard deviation of a sampling distribution of mean difference scores. It is an estimate of the standard error or standard distance that the mean difference scores deviate from the mean difference score stated in the null hypothesis.

To the right of the cells in column F mentioned above, type into column G functions and formulas to calculate the values, as shown in Figure 9.10:

G4: =AVERAGE(D4:D38)

G5: =COUNT(D4:D38)

G6: =VAR.S(D4:D38)

FIGURE 9.10  ●  Related-samples t test. (a) Functions and formulas. (b) Results of calculations.

G7: =G6^0.5

G8: =G7/G5^.5

(3) Compute the test statistic. At this point we are ready to proceed with the calcu-
lation and evaluation of the related-samples t test. Use column F, as in Figure 9.10, for
labels to keep track of what we calculate:


F9: Degrees of freedom (df)

F10: Critical value of t ( t crit )

F11: Obtained value of t ( t obt )

F12: p value

To the right of the cells in column F mentioned above, type into Column G func-

tions and formulas to calculate the values, as shown in Figure 9.10:

G9: =G5-1

G10: 2.042

The test statistic for a related-samples t test estimates the number of standard devia-

tions in a t distribution that a sample mean difference falls from the population mean
difference stated in the null hypothesis. Similar to the other t tests, the mean differ-

ence is placed in the numerator, and the estimate of the standard error is placed in the
denominator. By placing the mean differences in the numerator and the estimated
standard error for difference scores in the denominator, we obtain the formula for the
test statistic for a related-samples t test:

t obt = (MD − µD) / sMD
In column G,

G11: =(G4-0)/G8
which yields
t obt = (−0.674 − 0) / .264 = −2.555

in cell G11 in Figure 9.10b.


Excel allows us to calculate an exact p value, as shown in cell G12 in Figure 9.10a:

G12: =T.TEST(B4:B38,C4:C38,2,1)

This function requires two cell ranges of data: B4:B38 contains the times to iden-
tify change in the animate object, C4:C38 contains the times to identify change in

the inanimate object. After those two ranges of data, the next argument required in
the function is the number of tails, for which we specify 2. The final argument is the
type of t test, for which we specify 1 ("paired"). "Paired" is the term used in Excel to cal-
culate a related-samples t test. As expected with the t obt , the p value is .015, as shown
in cell G12 of Figure 9.10b.


Step 4: Make a decision. To make a decision, we compare the obtained value


to the critical value. We reject the null hypothesis if the obtained value exceeds the
critical value. Figure 9.10 reveals that the obtained value ( t obt = −2.555 ) exceeds the
lower critical value; it falls in the rejection region. The decision is to reject the null
hypothesis. If we were to report this result in a research journal, it would look some-
thing like this:

Changes to animate objects were identified significantly more quickly than

were changes to inanimate objects, t(34) = -2.555, p = .015.
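The same computation can be reproduced step by step outside Excel. The sketch below mirrors the spreadsheet logic (difference scores, then the mean, standard deviation, and estimated standard error of those differences); the two lists are placeholders for the 35 pairs of times in Visual_Change.xlsx, not the actual data.

# Sketch: related-samples t test computed from difference scores.
import math

animate = [1.12, 0.98, 1.30]     # placeholder times, one per participant
inanimate = [1.80, 1.65, 2.02]   # placeholder times, same participants

d = [a - b for a, b in zip(animate, inanimate)]        # difference scores
n = len(d)
mean_d = sum(d) / n                                    # MD
var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)    # sample variance of D
sd_d = math.sqrt(var_d)                                # sD
sem_d = sd_d / math.sqrt(n)                            # sMD

t_obt = (mean_d - 0) / sem_d
print(t_obt)  # with the real data this should come out near -2.555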

Effect Size for the Related-Samples t Test

Hypothesis testing identifies whether or not an effect exists. In Example 9.3, we con-
cluded that an effect does exist—people noticed changes to animate objects more

quickly than they noticed changes to inanimate objects; we rejected the null hypoth-
esis. The size of this effect is determined by measures of effect size. We will compute

effect size for Example 9.3, because the decision was to reject the null hypothesis for
that hypothesis test. There are two measures of effect size for the related-samples t test:
estimated Cohen’s d and proportion of variance with eta squared.

To label these calculations, in column F we enter

F14: Estimated Cohen's d

F15: Eta-squared (η²)

Estimated Cohen’s d. As stated in Example 9.1 above, estimated Cohen’s d is


most often used with the t test. When the estimated Cohen’s d is used with the related-
samples t test, it measures the number of standard deviations that mean difference
scores shifted above or below the population mean difference stated in the null hypoth-

esis. The larger the value of d, the larger the effect in the population. To compute
estimated Cohen’s d with two related samples, we place the mean difference between
op

two samples in the numerator and the standard deviation of the difference scores to
estimate the population standard deviation in the denominator:
d = MD / sD

In column G,

G14: =G4/G7

which yields

d = −0.674 / 1.560 = −0.432

in cell G14 of Figure 9.10b.


We conclude that time to recognize a change in animate objects is 0.432 standard


deviations shorter than time to recognize changes in inanimate objects. The effect
size conventions listed in Table 9.2 show that this is a medium effect size (-0.8 < d
< -0.2). We could report this measure with the significant t test in Example 9.3 by
stating,

Changes to animate objects were identified significantly more quickly than
were changes to inanimate objects, t(34) = -2.555, p < .05 (d = -0.432).

Proportion of Variance: Eta squared (η²). Another measure of effect size is proportion of variance, which estimates the proportion of variance in a dependent variable that can be explained by some treatment. In Example 9.3, this measure can describe the proportion of variance in the difference in recognition time (the dependent variable) that can be explained by whether the changed object was animate or inanimate (the treatment). One measure of proportion of variance for the related-samples t test is eta squared, η².
Eta squared can be expressed in a single formula based on the result of a t test:

η² = t² / (t² + df)
In column G,

G15: =G11^2/(G11^2+G9)

which yields
η² = (−2.555)² / ((−2.555)² + 34) = 6.528 / (6.528 + 34) = .161

in cell G15 in Figure 9.10b.


Typically, we report proportions to the hundredths place. So with rounding, we

conclude that 16% of the variability in reaction time can be explained by whether the
object that changed was animate or inanimate. Based on the effect size conventions in

Table 9.2, this result indicates a medium effect size. We can report this estimate with
the significant t test in Example 9.3 by stating,

Changes to animate objects were identified significantly more quickly than were changes to inanimate objects, t(34) = −2.555, p < .05 (η² = .16).
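Both effect size estimates follow directly from the summary values already computed, so they are easy to verify with a few lines of Python. This is only a sketch using the rounded values reported above.

# Sketch: effect sizes for Example 9.3 from the reported summary values.
mean_d = -0.674   # mean difference score, MD
sd_d = 1.560      # standard deviation of the difference scores, sD
t_obt = -2.555
df = 34

cohens_d = mean_d / sd_d                       # estimated Cohen's d
eta_squared = t_obt ** 2 / (t_obt ** 2 + df)   # proportion of variance
print(round(cohens_d, 3), round(eta_squared, 2))  # about -0.432 and 0.16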



Confidence Intervals for the Related-Samples t Test
In Example 9.3, we stated a null hypothesis regarding the mean difference in a pop-
ulation. We can further describe the nature of the effect by determining where the
effect is likely to be in the population by computing the confidence intervals.


As introduced in Learning Unit 7, there are two types of estimates: a point esti-
mate and an interval estimate. When using two related samples, a point estimate is the
sample mean difference score we measure. The interval estimate, often reported as a
confidence interval, is stated within a given level of confidence, which is the likelihood
that an interval contains an unknown population mean.
To illustrate confidence intervals for the related-samples t test, we will revisit

Example 9.3, and using the same data, we will compute the confidence intervals at a
95% level of confidence using the three steps to estimation first introduced in Example

9.1. For a related-samples t test, the estimation formula is

MD ± t(sMD)
Step 1: Compute the sample mean and standard error. The mean differ-

ence, which is the point estimate of the population mean difference, is equal to
MD = −0.674. The estimated standard error for difference scores is sMD = 0.264.

Step 2: Choose the level of confidence and find the critical values at that
level of confidence. In this example, we want to find the 95% confidence interval, so
we choose a 95% level of confidence. Remember, in a sampling distribution, 50% of the

mean differences fall above the mean difference we selected in our sample, and 50%
fall below it. We are looking for the 95% of mean differences that surround the mean
difference we selected in our sample. A 95% CI corresponds to a two-tailed test at a .05
level of significance. To find the critical value at this level of confidence, we look in the
t table in Table C.2 in Appendix C. The degrees of freedom are 34 ( df = nD − 1) for two

related samples. The critical value for the interval estimate is t = 2.042.
Step 3: Compute the estimation formula to find the confidence limits for a 95% confidence interval. In column F,

F17: t ( sMD )

F18: 95% CI upper limit



F19: 95% CI lower limit



Because we are estimating the mean difference between two related samples from a
population with an unknown variance, we use the M D ± t ( sMD ) estimation formula.
To compute the formula, multiply t by the estimated standard error for difference

scores. In column G,

G17: =G10*G8

which yields

t ( sMD ) = 2.042 ( 0.264 ) = 0.539

in cell G17 of Figure 9.10b.


Add 0.539 to the sample mean difference to find the upper confidence limit, and subtract 0.539 from the sample mean difference to find the lower confidence limit. In column G,


G18: =G4+G17

G19: =G4-G17

which yields
M D + t ( sMD ) = −0.674 + 0.539 = −0.135

in cell G18 of Figure 9.10b, and

M D − t ( sMD ) = −0.674 − 0.539 = −1.212

in cell G19 of Figure 9.10b.
As shown in Figure 9.11, the 95% confidence interval in this population is between
-1.212 seconds and -0.135 seconds. We can estimate within a 95% level of confidence

that people take more time to notice a change in an inanimate object than they take

to notice a change in an animate object.
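The interval can be checked the same way as the one for Example 9.2. The sketch below assumes scipy and uses the exact critical value for df = 34 (about 2.032) rather than the table value of 2.042, so its limits differ from those above only in the third decimal place.

# Sketch: 95% confidence interval for the mean difference score in Example 9.3.
from scipy import stats

mean_d = -0.674   # point estimate, MD
sem_d = 0.264     # estimated standard error for difference scores, sMD
df = 34

t_crit = stats.t.ppf(0.975, df)
lower = mean_d - t_crit * sem_d
upper = mean_d + t_crit * sem_d
print(round(lower, 3), round(upper, 3))  # roughly -1.210 and -0.138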

Computing the Related-Samples t Test Using the Analysis ToolPak
We can also use the Analysis ToolPak available in Excel for easy and accurate calculation.
We will guide you through the steps to do the analysis for the related-samples t test.
Click on the Data tab, and then on the Data Analysis icon all the way to the right.
Select “t-Test: Paired Two Sample for Means,” as shown in Figure 9.12a, which yields

the dialog box in Figure 9.12b. For Variable 1, we select reaction times when the ani-
mate object changed, cells B3 through B38, which includes in B3 a label for the data.
For Variable 2, we select reaction times when the inanimate object changed, cells C3
to C38, and include in C3 a label for the data. The Hypothesized Mean Difference is

0. Check the Labels box so that the output contains the labels from B3 and C3. We
keep our output on the same page by selecting Output Range and clicking in cell I1.

Clicking “OK” on the dialog box returns the output table in Figure 9.13. Notice that
we get the same t obt of -2.555 as in cell G11 of Figure 9.10b, and the same p value of
.015 as in cell G12 of Figure 9.10b. The t crit for df = 34 in Figure 9.13 cell J14 is 2.032.
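For readers who work outside Excel, scipy's paired test plays the same role as "t-Test: Paired Two Sample for Means." The lists below are placeholders for the two columns of times in Visual_Change.xlsx.

# Sketch: paired (related-samples) t test with scipy.
from scipy import stats

animate = [1.12, 0.98, 1.30]     # placeholder values only
inanimate = [1.80, 1.65, 2.02]   # placeholder values only

t_obt, p_value = stats.ttest_rel(animate, inanimate)
print(t_obt, p_value)  # with the real data, expect about -2.555 and .015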
FIGURE 9.11  ●  At a 95% CI, the mean difference in response time falls between −1.212 and −0.135. [Number line from the original figure: the 95% CI spans −1.212 to −0.135, and the point estimate is M = −0.674.]


FIGURE 9.12  ●  Performing a related-samples t test with the Analysis ToolPak in Excel. (a) Selecting "t-Test: Paired Two Sample for Means" to perform a related-samples t test. (b) Specifying the location of the data and parameters for the t test.

FIGURE 9.13  ●  Results of related-samples t test using the Analysis ToolPak.


