Expi Psych Chapter 14-16
CHAPTER 14: ANALYZING RESULTS

To select the appropriate statistical test, first decide which level of measurement (nominal, ordinal, interval, or ratio/scale) is being used to measure the DV, then answer the following questions:
• How many independent variables are there?
• How many treatment conditions are there?
• Is the experiment run between- or within-subjects?
• Are the subjects matched?

STATISTICS FOR TWO INDEPENDENT GROUPS

CHI-SQUARE (X²) TEST
- Used for data measured on nominal scales or categorized variables.
- A nonparametric test: it does not assume that the population has certain parameters (i.e., a normal distribution) or that the variances in the two groups are about equal to each other.

Two main kinds of chi-square tests:
1. Chi-square goodness-of-fit test
2. Chi-square test for independence

The chi-square test for independence compares frequencies (how many subjects answered correctly and incorrectly) and determines whether those frequencies of responses represent the frequencies expected in the population.
● Obtained frequencies (O) - frequencies obtained in the experiment.
● Expected frequencies (E) - expected population frequencies.
- The chi-square test requires subjects to be tested individually (not more than once).
- Data are first organized in the form of a 2 x 2 contingency table.
- Obtained frequencies (O) are compared with expected population frequencies (E).

Hypotheses
H0: "[Variable 1] is independent of [Variable 2]"
H1: "[Variable 1] is not independent of [Variable 2]"
OR
H0: "[Variable 1] is not associated with [Variable 2]"
H1: "[Variable 1] is associated with [Variable 2]"

- When the null is true, X² = 0, because none of the frequencies differ.
- If X² is larger than the critical value, you can reject the null.

DEGREES OF FREEDOM (df)
- Tells you how many members of a set of data could vary or change value without changing the value of a statistic we already know for those data.
- Formula: df = (R - 1) x (C - 1)
- Fewer degrees of freedom mean more variability between samples.

DATA REQUIREMENTS OR ASSUMPTIONS FOR CHI-SQUARE TEST OF INDEPENDENCE
1. Two categorical variables.
2. Two or more categories (groups) for each variable.
3. Independence of observations.
4. Relatively large sample size.
   • Expected frequencies for each cell are at least 1.
   • Expected frequencies should be at least 5 for the majority (80%) of the cells.

INTERPRETING CHI-SQUARE TEST
- After finding the df, we can look up the critical value in the table for the chi-square distribution. If the critical value is less than the obtained value, we reject H0.

Cramer's Coefficient Phi
• An estimate of the degree of association between the two categorical variables tested by chi-square.

To run a chi-square test of independence in SPSS, click Analyze > Descriptive Statistics > Crosstabs.
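The same analysis can also be sketched outside SPSS. Below is a minimal Python illustration, not from the text: it assumes NumPy and SciPy are available and uses invented frequencies for a hypothetical 2 x 2 table, then computes Cramer's phi from the obtained X².

```python
# Minimal sketch: chi-square test of independence on a hypothetical
# 2 x 2 contingency table (rows = group, columns = response category).
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[30, 10],   # Group 1: e.g., correct / incorrect
                     [15, 25]])  # Group 2: e.g., correct / incorrect

# correction=False gives the plain (uncorrected) chi-square statistic;
# df = (R - 1) x (C - 1) = 1 for a 2 x 2 table.
chi2, p, df, expected = chi2_contingency(observed, correction=False)

# Cramer's phi for a 2 x 2 table: sqrt(chi-square / N)
phi = np.sqrt(chi2 / observed.sum())

print(f"X2({df}) = {chi2:.2f}, p = {p:.4f}, phi = {phi:.2f}")
print("Expected frequencies:\n", expected)
```

If p falls below the chosen alpha level (say .05), the obtained X² exceeds the critical value and the null hypothesis of independence is rejected.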

t TEST FOR INDEPENDENT GROUPS
- Used to evaluate interval or ratio data from a two-group experiment.
- The t test for independent groups (or Independent Samples t Test) compares the means of two independent groups in order to determine whether there is statistical evidence that the associated population means are significantly different.
- It is a parametric test.
- When we evaluate the likelihood of obtaining a particular value of t, we are performing a t test.

Data requirements/Assumptions
1. Dependent variable is continuous (i.e., interval or ratio level)
2. Independent variable is categorical (i.e., two or more groups)
3. Cases that have values on both the dependent and independent variables
4. Independent samples/groups (i.e., independence of observations)
5. Random sample of data from the population
6. Normal distribution (approximately) of the dependent variable for each group
7. Homogeneity of variances (i.e., variances approximately equal across groups)
8. No outliers

Hypotheses
H0: μ1 = μ2 ("the two population means are equal")
H1: μ1 ≠ μ2 ("the two population means are not equal")
OR
H0: μ1 - μ2 = 0 ("the difference between the two population means is equal to 0")
H1: μ1 - μ2 ≠ 0 ("the difference between the two population means is not 0")
*μ1 and μ2 are the population means for group 1 and group 2, respectively

EFFECTS OF SAMPLE SIZE
- If sample size affects variability, it also affects the size of the test statistic. The exact shape of the t distribution changes depending on the size of the samples.
- t distributions are like the normal curve: as sample size increases, the t distribution becomes more and more like the normal curve; small samples have a flatter, wider shape.
- Fewer subjects --> less likely to reject the null.
- Critical t values change depending on the type of hypothesis tested (one- or two-tailed).

Robust: a test is robust when its assumptions, such as a normal distribution, can be violated without changing the rates of Type I and Type II errors, so violations are rarely a problem.

CONFIDENCE INTERVALS
- Represent a range of values above and below our sample mean that is likely to contain the population mean, with the probability level (usually 95% or 99%) that the population mean would actually fall somewhere in that range.
- For example, with a mean of 0 and a 95% confidence interval equal to ±2.060, the range for our 95% confidence interval (CI) would be 0 ± 2.060.
- Doing the addition and subtraction yields a CI ranging from -2.060 to 2.060 (a worked sketch follows below).
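As an illustration of both ideas, here is a minimal Python sketch (SciPy and NumPy assumed; the scores are invented) that runs an independent-samples t test and builds a 95% confidence interval for the difference between the group means by hand.

```python
# Minimal sketch: independent-samples t test on two made-up groups,
# plus a hand-computed 95% CI for the difference between means.
import numpy as np
from scipy import stats

group1 = np.array([12, 15, 14, 10, 13, 16, 11, 14])
group2 = np.array([9, 11, 10, 12, 8, 10, 11, 9])

# Student's t test (equal_var=True assumes homogeneity of variances;
# equal_var=False would give Welch's t test instead).
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# 95% CI for the mean difference: difference ± critical t x standard error
n1, n2 = len(group1), len(group2)
df = n1 + n2 - 2
pooled_var = ((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / df
se_diff = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df)          # two-tailed, alpha = .05
diff = group1.mean() - group2.mean()
print(f"95% CI: [{diff - t_crit * se_diff:.2f}, {diff + t_crit * se_diff:.2f}]")
```

If the CI for the mean difference does not include 0, the conclusion agrees with a two-tailed test at the same alpha level.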
t TEST FOR MATCHED GROUPS
- Also called a Within-Subjects t Test or Paired t Test.
- It compares the means of two measurements taken from the same individual, object, or related units.
- Uses the same family of t distributions.
- In the paired t test, we only require that the difference of each pair is normally distributed.

Data requirements/Assumptions
1. Dependent variable that is continuous (i.e., interval or ratio level)
2. Related samples/groups (i.e., dependent observations)
3. Random sample of data from the population
4. Normal distribution (approximately) of the difference between the paired values
5. No outliers in the difference between the two related groups

Hypotheses
H0: μ1 = μ2 ("the paired population means are equal")
H1: μ1 ≠ μ2 ("the paired population means are not equal")
OR
H0: μ1 - μ2 = 0 ("the difference between the paired population means is equal to 0")
H1: μ1 - μ2 ≠ 0 ("the difference between the paired population means is not 0")
*μ1 is the population mean of variable 1, and μ2 is the population mean of variable 2.
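A minimal Python sketch of the paired t test described above (SciPy assumed; the before/after scores are invented):

```python
# Minimal sketch: paired (matched-groups) t test on made-up
# before/after scores from the same subjects.
import numpy as np
from scipy import stats

before = np.array([20, 18, 24, 22, 19, 25, 21, 23])
after = np.array([23, 20, 27, 22, 22, 28, 24, 26])

# ttest_rel works on the pairwise differences, so only the
# differences need to be (approximately) normally distributed.
t_stat, p_value = stats.ttest_rel(before, after)
diffs = after - before

print(f"t({len(diffs) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Mean difference = {diffs.mean():.2f}")
```

Because the test runs on the within-pair differences, only the distribution of those differences needs to be approximately normal, which mirrors requirement 4 above.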

ANALYZING MULTIPLE GROUPS AND FACTORIAL EXPERIMENTS

ANALYSIS OF VARIANCE (ANOVA)
- ANOVA is a statistical procedure used to evaluate differences among three or more treatment means.
- It divides all the variance in the data into component parts and then compares and evaluates them for statistical significance.

In the simplest analysis of variance, all the variability in the data can be divided into two parts:
1. Within-groups variability - the degree to which the scores of subjects in the same treatment group differ from one another (how much subjects vary from others in the group).
2. Between-groups variability - the degree to which the scores of subjects in different treatment groups differ from one another (how much subjects vary across the different levels of the IV).

SOURCES OF VARIABILITY
• Individual differences
• Making small mistakes within the experiment (e.g., errors in measuring lines that subjects drew) or other errors in the procedure
• Experimental manipulation: we expect our treatment conditions to create variability in the responses of subjects who are tested under different levels of the independent variable.
   • Only a source of variability in a between-subjects design

- The variability within groups comes from error; the variability between groups comes from error and treatment effects. If the IV had an effect, between-groups variability should be larger than within-groups variability.

F Ratio - the statistical test used in ANOVA; the ratio between the variability observed between treatment groups and the variability observed within treatment groups.

INTERPRETING RESULTS
- When testing F, you are testing only the overall pattern of treatment means.
- If you are using an ANOVA test with more than two treatment groups, you cannot tell WHERE the significant difference is, but you can tell that there IS one.

Two types of follow-up tests (a post hoc example is sketched after this list):
1. Post hoc tests: tests done after the overall analysis indicates a significant difference; used to make pair-by-pair comparisons of the different groups to see where the difference is; less powerful, more conservative; can increase the chance of a Type II error.
2. A priori comparisons: tests between specific treatment groups that were anticipated or planned before the experiment was conducted; aka "planned comparisons"; the chance of a Type I error is not increased if the number of planned comparisons is less than the number of treatment groups; less conservative, more powerful.
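One concrete post hoc option (my illustration; the text does not prescribe a specific test or tool) is Tukey's HSD, available in SciPy 1.8+ as scipy.stats.tukey_hsd. The three groups below are invented.

```python
# Minimal sketch: pairwise post hoc comparisons with Tukey's HSD
# after a significant overall ANOVA (three made-up treatment groups).
from scipy.stats import tukey_hsd

group_a = [4, 5, 6, 5, 4, 6]
group_b = [7, 8, 6, 9, 8, 7]
group_c = [5, 6, 5, 7, 6, 5]

result = tukey_hsd(group_a, group_b, group_c)
print(result)  # pairwise comparisons with statistics, p-values, and CIs
```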
ANOVA:
1. One-way ANOVA
2. Two-way ANOVA

Data requirements/Assumptions for One-Way ANOVA
1. Dependent variable that is continuous (i.e., interval or ratio level)
2. Independent variable that is categorical (i.e., two or more groups)
3. Cases that have values on both the dependent and independent variables
4. Independent samples/groups (i.e., independence of observations)
5. Random sample of data from the population
6. Normal distribution (approximately) of the dependent variable for each group (i.e., for each level of the factor)
7. Homogeneity of variances (i.e., variances approximately equal across groups)
8. No outliers

Hypotheses
H0: μ1 = μ2 = μ3 = ... = μk ("all k population means are equal")
H1: At least one μi is different ("at least one of the k population means is not equal to the others")
*μi is the population mean of the ith group (i = 1, 2, ..., k)
*k = the total number of groups (levels of the independent variable)

One-way Between-subjects ANOVA
- Used to evaluate a between-subjects experiment with three or more levels of a single independent variable.
- Samples must be randomly selected, normally distributed on the DV, and the variances must be equal (homogeneous).
- ANOVA uses the term mean square to denote variability or variance.

F ratio = mean square between / mean square within
MSw: represents the portion of the variability in the data that is produced by the combination of sources called "error"
MSb: the amount of variability produced by both error and treatment effects in the experiment
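To tie these pieces together, here is a minimal Python sketch (invented data; NumPy and SciPy assumed) that computes MSbetween, MSwithin, and F = MSb / MSw by hand for a three-group between-subjects design and checks the result against scipy.stats.f_oneway.

```python
# Minimal sketch: one-way between-subjects ANOVA on three made-up groups,
# computing F = MS_between / MS_within by hand and checking with SciPy.
import numpy as np
from scipy.stats import f_oneway

groups = [np.array([4, 5, 6, 5, 4, 6]),
          np.array([7, 8, 6, 9, 8, 7]),
          np.array([5, 6, 5, 7, 6, 5])]

k = len(groups)                           # number of treatment groups
n_total = sum(len(g) for g in groups)     # total number of subjects
grand_mean = np.concatenate(groups).mean()

# Between-groups: variability of group means around the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)                     # df_between = k - 1

# Within-groups: variability of subjects around their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (n_total - k)                 # df_within = N - k

F = ms_between / ms_within
print(f"By hand: F({k - 1}, {n_total - k}) = {F:.2f}")

# Same test via SciPy, which also reports the p value for the F test
f_stat, p_value = f_oneway(*groups)
print(f"scipy:   F = {f_stat:.2f}, p = {p_value:.4f}")
```

The hand computation and f_oneway should agree on F; the p value then tells you whether the overall pattern of treatment means is significant.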

One-way WITHIN-SUBJECTS (or repeated measures) ANOVA
- Analyzes the effects in a multiple-group experiment testing one IV that uses a within-subjects design.
- When we analyze data from a factorial experiment (between-subjects factorials, within-subjects factorials, and mixed factorial designs), we evaluate main effects and any interaction between the factors.

TWO-WAY ANOVA
- Used to estimate how the mean of a quantitative variable changes according to the levels of two categorical variables.
- Treatment groups are independent from each other and the observations are randomly sampled. Assume the population for each group is normally distributed on the DV.
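For the factorial case, one common Python route (my choice of tooling, not something the text specifies; it assumes pandas and statsmodels are installed) is an OLS model plus an ANOVA table, which reports both main effects and the interaction.

```python
# Minimal sketch: two-way between-subjects ANOVA with statsmodels,
# using a small made-up data set with two categorical factors.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "noise": ["low", "low", "low", "low", "high", "high", "high", "high"] * 2,
    "caffeine": ["no"] * 8 + ["yes"] * 8,
    "score": [7, 8, 6, 7, 5, 4, 5, 4,
              9, 8, 9, 10, 6, 5, 6, 7],
})

# C(...) treats the variables as categorical; '*' requests both main
# effects and the noise x caffeine interaction.
model = smf.ols("score ~ C(noise) * C(caffeine)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)  # F and p for each main effect and the interaction
```

A repeated-measures or mixed design would need a different model (for example, statsmodels' AnovaRM for fully within-subjects data).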
CHAPTER 15: DRAWING
CONCLUSIONS: The Search for the
Elusive Bottom Line
Evaluating the experiment from the inside: Internal
Validity

An experiment is internally valid when:


• It clearly demonstrates a cause and effect relationship.
• When the experimental methodology is correct.
• It is free of confounding variables.
• Appropriate methodology and control techniques were
used.

How to make your research internally valid?


- Plan ahead
- Anticipate potentially confounding variables
- Follow standard techniques like random assignment,
constancy of conditions, counterbalancing, etc.
- Be sure the experimental setup created the
conditions you wanted

Statistical Conclusion Validity


- The validity of drawing conclusions about a
treatment effect from the statistical results that
were obtained.

Taking a broader perspective: External validity


- An experiment is externally valid when the results
can be extended to other situations
- Do the findings have implications outside the
experiment?
- It’s not an either/or matter; it is a continuum

Basic requirements in making a research experiment


externally valid:
1. The experiment is internally valid
2. Findings can be replicated

Generalizing across subjects


How can we generalize from one group of subjects to another?

Generalizing from procedures to concepts/ principles


Operation of general principles; results are not unique to the
particular procedures used in the experiment.

Generalizing beyond the lab


The lab is the most precise tool for measuring the effect of an IV as it varies under controlled conditions, but this creates a problem for generalizing to real-life situations that cannot be perfectly controlled.

Five general approaches to increase external validity


1. Aggregation
2. Multivariate Designs
3. Nonreactive measurements
4. Field Experiments
5. Naturalistic Observation

Handling a Nonsignificant Outcome


1. Faulty procedure
- lookout for confounding; numerous uncontrolled
variables increasing amount of variability between
subjects' scores; unreliable measuring instrument; IV
with a weak effect; sample may not be large enough;
manipulation was inadequate
2. Faulty hypothesis
- reasoning got confused; key factors from prior studies were overlooked
- be cautious in drawing conclusions from nonsignificant results.
CHAPTER 16: WRITING THE RESEARCH REPORT

I. THE WRITTEN REPORT: Purpose and Format
- The primary purpose of a written report is communication.
- Research reports are written in a scientific writing style: a fact-filled, highly structured, and concise form of writing used in research reports.
- The goal is to present objective information; avoid seeming opinionated about your topic. (Avoid pronouns like I or We.)

Tips for Writing Scientifically:
1. Scientific style is parsimonious: the author gives complete information in as few words as possible.
2. Avoid flowery words, slang words, and contractions.
3. Use unbiased and nonsexist language; "he or she", "people" instead of "mankind".
4. Avoid language with negative overtones; do not say "homosexuals"; do not say "mentally ill person," say "persons diagnosed with mental illness" instead.

- Psychological reports are expected to follow the American Psychological Association (APA) format.
- Some reports may vary but generally follow the same structure.
- The latest version of the APA Publication Manual is APA 7, which was released in October 2019.

II. MAJOR SECTIONS of a research report

a. TITLE
• A descriptive title should include the dependent and independent variables and the relationship between them.
• Recommended title length is 12 words or less, so titles must be concise.

b. ABSTRACT
• Summary of the report
• Typically ranges from 150-250 words
• Written in past tense
• Concise summary of the experiment: should contain the problem studied, the method, the results, and the conclusions
• Leave out citations unless replicating a specific published experiment
• Write about the specific subjects used

c. INTRODUCTION
• Tells the reader what you are doing and why.
• State the hypothesis and how it will be tested.
• After reading the introduction, readers should have answers to the following questions:
  o What problem are you studying?
  o What does the prior literature in the area say about the problem? What is your hypothesis?
  o What thinking led up to that hypothesis?
  o What is the overall plan for testing the hypothesis?
  o Do you make any specific predictions about the outcome of the study?
• A good method is to use the funnel analogy: start broad and narrow it.
• A reader should be able to follow the thinking that led to the hypothesis.
• Cite all background literature.
• The hypothesis is usually stated toward the end of the introduction.

d. METHOD
• Tells the reader how you went about doing the experiment. Describe the materials and procedures used in enough detail to allow other researchers to replicate the experiment.
• Labeled into subsections (but may deviate from these subheadings):
  o Participants
  o Materials/apparatus
  o Procedure

e. PARTICIPANTS
• Important characteristics of the subject sample.
• Should answer these key questions:
  o How many?
  o Relevant characteristics (age, sex, weight, etc.)? How were participants selected?
  o How were they compensated?
• Gives information about the external validity of the experiment.
• Give any information on participants who may have dropped out and what happened (circumstances).

f. MEASURES
• Includes descriptions of the measures used for data collection (e.g., questionnaires, behavioral observations, interviews).
• If you created your own measuring instrument, describe the instrument and include sample items in the Method section; lengthy questionnaires are usually placed in an appendix.
• If a standardized questionnaire is used, identify it by name and include a citation. Include information about the reliability and validity of your measuring instruments.
• Also describe your computer equipment and software programs if data collection is computerized.

g. PROCEDURES
• Clear description of all the procedures followed.
Report everything step by step in chronological
order.
• How subjects were assigned to the different groups.
The setting and length of each session(s).
• Experimental manipulations and how extraneous
variables were controlled.
• Exact instructions given to participants.
• Design: between-subjects, within-subjects, or
mixed.
If your experimental design is not easily understood by reading the procedures, consider the option of including a subsection called Design.
h. RESULTS
• Statistical procedures used and what was found.
• Begin with a brief summary of the primary findings, then report the results of the statistical tests and summary data.
• Usually don't report individual scores unless it is a very small N design.
• Indicate the statistical tests and their values:
  o Indicate degrees of freedom and significance levels
  o Means, standard deviations, significance level selected
  o Avoid stating data twice (don't repeat data in the report and in a figure)
• Should only contain objective data.

i. DISCUSSION
• The overall purpose is to evaluate the experiment and interpret the results.
• In the discussion, you need to pull everything together. You need to explain what you have accomplished:
  o Was your hypothesis supported?
  o How do the findings fit in with prior research in the area?
  o Are they consistent? If not, can any discrepancies be reconciled? What do your results add to current knowledge?
• The Discussion section is the place to talk about what you think your results mean: What are the implications of the research? Can you generalize from the findings? Does further research suggest itself?

j. REFERENCES
• Any articles or books mentioned in the report should be listed in your References section at the end.
• Be sure that the references are accurate and that they follow the most recent APA procedures for listing them (APA 7).

III. PREPARING YOUR MANUSCRIPT: Procedural details

1. Double-space everything in the manuscript.
2. Leave margins of at least 1 inch all the way around.
3. The same font should be used throughout the paper. Some suggestions are 12-point Calibri, 12-point Arial, and 12-point Times New Roman.
4. Use 12-point Calibri.

FIRST PAGE
• The first page is the title page; here you will type your title, your name, and your affiliation, all centered in the top half of the page.
• Create a page header that includes the running head (flush left) and the page number (flush right).
• The running head is a brief version of your title, typed in capital letters. It should be 50 characters or less. If the full title is short, the running head may be the same as your full title.
• Author notes are also included on the title page. The first paragraph of the author notes includes each author's name, departmental affiliation, and university.

SECOND PAGE
• The second page contains the abstract.
• Type the word Abstract as a centered heading in boldface type on page 2.
• Type the abstract in block form (no paragraph indent) and use double spacing.
THIRD PAGE

• At the top of page 3, type the complete title in


uppercase and lowercase letters, centered.
• Use 5- to 7-space paragraph indents for typing the
body of the Introduction.
• Do not label the Introduction section.
• Do not leave any extra spaces between the title and
the beginning of the paper or between different
paragraphs or between different sections of the
report.

METHOD

• You do not need a new page to start the Method


section; begin it wherever the Introduction ends.
• Type the word Method as a boldfaced, centered
heading.
• Label each subheading (such as Participants) in boldface, flush with the left margin of the page.
RESULTS
• Start the Results section where the Method section ends, and simply type the word Results as a boldfaced, centered heading.
• Be sure to follow the correct format for reporting statistical data.

DISCUSSION
• Type the word Discussion as a bold, centered heading after the Results section ends, and then type your discussion in paragraph form.

REFERENCES
• Begin your reference list on a new page.
• Type the centered heading References in Bold at the
top of the page.
• List all your references in alphabetical order by the
last name of the first author.
• Use the hanging indent style and start each
reference on a new line.
USE PAST TENSE
When discussing or describing research that has already
taken place, use past tense in your descriptions. Also use
the past tense to describe your results (e.g., ”binge
drinking decreased significantly”).

The present tense is usually preferred for defining terms,


making general claims, discussing implications of your
results, or stating conclusions.

TRANSITIONAL WORDS
Transitions are used to create “flow” in your paper and
make its logical development clearer to readers.

When possible, keep your sentences short.

AVOID UNUSUAL SYNTAX AND TECHNICAL JARGON
It is distracting and makes your report more difficult to read.

Also, never use euphemisms for everyday words.


APPENDIX
• An appendix is a section that can come after the reference list and that includes supplementary content that does not belong in the main text.
• Format an appendix the same way you would start a reference list, with "Appendix" and the title bolded and centered at the top of a new page.
• If there is more than one appendix, start each on a new page and include a capital letter with the heading.
• Appendices are lettered and organized by the order they are referred to in the body of the article.
• To refer to the Appendix within your text, write (see Appendix A) in parentheses at the end of the sentence.
• Must be double-spaced.
• Appendices often include:
  o Informed Consent Letters to participants
  o Survey Instruments
  o Interview/Focus group protocols
  o Data Observation Sheets

IV. MAKING REVISIONS

USE ACTIVE VOICE
Your report will be more interesting to read if you use the active voice whenever possible.
For example, "Smith and Jones found that..." rather than "It was found by Smith and Jones that..."

AVOID GRAMMATICAL OR TYPOGRAPHICAL ERRORS
Such errors may detract greatly from your research report. Check your spelling and punctuation carefully.

WORK ON REVISIONS
Good revision and editing can transform a mediocre first draft into an excellent final paper.
• As with any writing, try to form each paragraph around a single main idea, but avoid paragraphs that are composed of only one or two sentences.
• When you are describing numbers in a sentence, know when to use numerals (10, 20, etc.) and when to use words to express them. Per APA 7, Section 6.32, use numerals to express numbers 10 or above (e.g., 11, 23, 256). Per Section 6.33, write out numbers as words to express numbers up to nine (e.g., three, seven, eight).
• Do not make grand statements based on the data from one experiment. Your results may enable you to reject the null hypothesis; however, you have not proven anything; you have simply confirmed your predictions.
• Use words like probably, likely, and might; avoid words like proven, true, and absolutely in discussing your results. Remember that your statistics are only making probability statements.
• Finally, keep in mind that this process is part of a scientific venture.
• Pay particular attention to your Discussion section. This section is where you wrap things up.
• Try to view each report as having a beginning, a middle, and an end.
