
RESEARCH METHODS

EXPERIMENTS
An experiment is an objective, scientific procedure used to make a discovery, test a hypothesis or demonstrate a known fact - i.e. to check validity.
An investigation that is conducted to establish a cause and effect relationship is called an experiment.

To establish this cause and effect relationship, the experiment is designed to manipulate, isolate and control certain variables in order to achieve the aim of the study. These variables are known as the IV and DV.

IV - Has a causal effect on the DV; the IV is the variable that the researcher systematically manipulates and controls, and the DV is the variable which bears the effect and is therefore measured by the experimenter.

Confounding Variable - Has an unintentional and undetermined effect on the DV (e.g. talent / interest / ability / personality of a participant).
Extraneous Variable - Could affect the DV but is a variable that can be controlled by the experimenter (e.g. diet / sleep routine).

Demand characteristics - features of the experimental situation which give away the aims
causing participants to change their behaviour. Reduces validity.
Random Allocation - A way to reduce the effect of confounding variables such as individual differences. Participants are put into each level of the IV such that each person has an equal chance of being in any condition.
Participant Variables - Individual differences between participants that could affect their behaviour in a study. They could hide or exaggerate differences between conditions.

Order Effects: Practice and Fatigue effects are the consequences of participating in a study
more than once. They cause changes in performance between conditions that are not due to
the IV.
Practice Effect - A situation where participants’ performance improves because they
experience the experimental task more than once and gain familiarity.
Fatigue Effect - A situation where participants’ performance declines because they have experienced an experimental task more than once, e.g. due to boredom or tiredness.

Counterbalancing - used to overcome order effects in a repeated measures design, e.g. an ABBA design (a short illustrative sketch follows at the end of these definitions).
Standardization - keeping the procedure for every participant exactly the same to ensure that any differences between participants or conditions are due to the variables under investigation.
Reliability - the extent to which a procedure would produce the same results with the same people on each occasion; the consistency of a measure.
Validity - the extent to which the researcher is testing what they claim to be testing.
Generalise - apply the findings of a study to a wider setting / population.
Ecological validity - extent to which the findings of research in one situation would generalize
to other situations. Influenced by whether the situation represents the real world effectively and
has mundane realism.
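The ABBA counterbalancing order mentioned above can be shown with a minimal sketch (Python, purely illustrative; the condition labels and participant IDs below are assumptions, not taken from any study):

```python
# Illustrative only: an ABBA presentation order counterbalances practice and
# fatigue effects in a repeated measures design, because each condition
# appears once early and once late in the sequence.

def abba_order(condition_a: str, condition_b: str) -> list[str]:
    """Return the ABBA sequence for one participant."""
    return [condition_a, condition_b, condition_b, condition_a]

# Hypothetical participants and condition labels.
for participant in ["P1", "P2", "P3"]:
    print(participant, abba_order("A", "B"))  # every participant runs A, B, B, A
```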

EXPERIMENTAL DESIGNS
How participants in the study are assigned to different settings, environments and scenarios in
an experiment.

Repeated Measures Design:


Uses the same group of participants in different conditions and scenarios repeatedly.
Strengths:
Fewer participant variables, as each participant experiences all levels of the IV. Therefore, the effects of the IV on the DV are less likely to be misinterpreted.
Fewer people needed to conduct the experiment, hence it is quicker and may warrant faster
results in a study with less logistical issues.
Weaknesses:
Order effects are likely to occur (the effects of experiencing the conditions in a particular order, which may distort results and reduce validity).
Participants may exhibit demand characteristics as they are familiarized with the objective of the
study.

Independent Groups Design:


The participants are randomly divided into separate groups and each group completes only one condition, so each participant experiences only one level of the IV and has limited exposure to the aim of the study.
Involves using 2 separate groups of participants; one in each condition.
Strengths:
Different participants are used in each level of the IV, so fewer order effects are to be expected.
Fewer demand characteristics, as participants do not experience or witness all levels of the IV.
Differences in results can be detected quickly.
Weaknesses:
The results may be distorted by participant variables, given that there may be significant individual differences between the groups at each level of the IV.
More participants are needed to use this experimental design and thus may be more expensive
and time-consuming.

Matched Pair Design:


Participants are matched into pairs based on similar characteristics such as age, gender, ethnicity, IQ, etc., and the members of each pair are then randomly assigned to the different conditions: one member of each pair to the experimental group and the other to the control group.
Each condition uses different participants, but they are matched in terms of similar variables.
Strengths:
Participants are exposed to only 1 level of the IV, hence there are reduced demand
characteristics.
Less participant variables because experimenter has attempted to pair up the participants so
that each condition has people with similar abilities and characteristics.
Reduced order effects.
Weaknesses:
It is risky in the sense that the loss of 1 participant means the loss of both participants’ data (the whole pair).
Very time consuming.
Results may be distorted unless reliable and valid matching criteria are established, as finding participants with very similar characteristics is difficult and rare.
Small sample size and thus not generalizable for the larger context.

TYPES OF EXPERIMENTS
Laboratory
Experimental procedures are conducted in artificial, controlled settings. Controls are applied so that variables can be operationalized and measured effectively. Participants are tested under strict conditions set up by the experimenter.
Strengths:
High controls - extraneous variables are prevented from affecting the DV. Standardized procedures - more reliable.
When variables are controlled and monitored, we can find out the cause and effect relationship
faster and easier.
This improves validity (accuracy and authenticity of research)
Weaknesses:
High levels of control mean low ecological validity.
Participants might get an idea of the aim and the setting might make the results prone to
demand characteristics.
Low ecological validity also limits generalizability of the outcomes to a real-life context.
This makes the findings prone to researcher bias (a confounding variable).
Poor external validity

Field Experiments
Take place in a natural environment (real-life setting) for the behaviour being studied. Influence
of extraneous variables cannot be as strictly controlled. But the researcher manipulates
something (IV) to see the effect of this on the DV.
Strengths:
High ecological validity as it is more reflective of the participant’s behaviour in real life situations.
Participants may be unaware of the aim or the objective of the study - with less exposure to the manipulation, demand characteristics are less likely.
High in generalizability and representativeness.
Weaknesses:
Harder to control variables in the study, difficult to standardize and thus replicate.
This may threaten credibility
If certain controls are not established, vital information may be missed out depending on the
scope of the experiments.
Human errors in researching are highly probable - difficult to account for.
May raise ethical issues of consent as participants are unaware of them being studied.

Natural Experiments
Is conducted in the participant’s natural everyday life setting, where they are unaware that they are being researched, making it a covert observation.
Experimenters have no control over the monitoring or manipulation of variables or the levels of the IV. These happen, change and occur by themselves, and so do the differences and variations in the experiment. The IV is naturally occurring and researchers cannot assess, operationalise or control the variables.
Strengths:
Extremely high ecological validity and representativeness as participants are exhibiting the most
natural of their behaviour patterns.
Less prone to demand characteristics as it is a covert observation.
Can be used to investigate variables that are not practical or ethical to manipulate.
Can be used to study real life world issues.
Weaknesses:
Being unable to randomly allocate participants to conditions means that sample bias may be an issue, so a causal IV-DV effect cannot be confidently established.
Ethical issues (lack of informed consent and deception is often required)
Less prospect of standardizing or controlling procedures, which may introduce confounding variables, altering the results of the experiment.
Not replicable due to the researcher’s lack of control: the procedure cannot be repeated, so the reliability of results cannot be checked.
Possibly more time consuming than lab or field experiments.
There is no control over the variables.

OVERT OBSERVATION
In an overt set-up, participants can be asked directly for their informed consent. However, to lessen the probability of demand characteristics / hide the aim of the research, the participants may possibly be deceived. It is important that they are debriefed after the experiment has concluded.

In natural / field experiments, because the observation is covert, it is highly unlikely that the
participants are aware of the situation taking place. The ethical issue arises when considering
the withdrawal rights in a covert scenario.
Because they don't know the implications of the procedure in the experiment, they will also not know when to withdraw and back out in order to protect themselves from possible physiological and psychological harm.
To maintain the objective integrity of the experiment, privacy and confidentiality are a necessity in these cases. In lab experiments, confidentiality can be respected: if there are interviews and questionnaires, they are most probably pre-planned and set carefully with this prospect in mind.
However, the invasion of privacy is a risk when considering field or natural experiments, as they
are usually covert but are in the daily, personal spaces of the participants’ lives.

Confidentiality, however, can be respected in all experiments by keeping participation (names and other identity details) anonymous. The possibility of any future trace or link to the study revealing sensitive information, such as a participant’s workplace, home or name, and so compromising confidentiality, must also be taken care of when designing a study.

Informed Consent
Deception and Debriefing
Withdrawal rights
Protection from harm (physical and psychological)
Confidentiality
Privacy

SELF REPORTS
How participants provide information about themselves to the researcher directly. Involves informed consent, as the participant knows they are in a study.

QUESTIONNAIRES
Questions are presented to the participant in a written format either online or on paper.
Closed ended - Have a fixed and predetermined response set such as ‘Yes/No’. Take the form of simple choices or questions asking for specific pieces of information.
Open ended - ask for descriptive qualitative responses that are individual to the participants
themselves. Naturally contain more in depth quality which aid in exploring reasons behind a
particular action or response. Keyword; ‘Why…’ ‘Describe…’
If more than 1 researcher is involved in interpreting responses, there may be differences between them - an issue of inter-rater reliability.

Rating scales - psychometric measurement tools used to assess and quantify variables. Easier to analyse statistically and easier to organize.
Advantages:
Quick / easy
Provides quantitative data - easier to analyse / organize data
Easier to summarize / distribute data
Privacy is respected as it's anonymous. Reduces social desirability bias - increases validity
It is replicable
Disadvantages:
Participants may respond to demand characteristics threatening validity.
Data provided is not qualitative and not in depth - vague
Gives a limited perspective of research
No guarantee they're not lying - social desirability bias
Inter-rater reliability - The extent to which 2 researchers interpreting the same qualitative responses will produce the same records from the same raw data.
Filler Questions - items put into a questionnaire or test to disguise the aim of the study by hiding
important questions among them.

INTERVIEWS
Face to face research method using verbal questions.
Question and answer sessions are followed and responses are noted. Allows the collection of far richer qualitative data.

Structured Interview - Questions asked are common and the same for all participants, with the order of them being fixed. There may even be specific instructions for the researcher, i.e. body
language (relaxed or strict) / dress code and overall demeanour - depending on the kinds of
responses they might want to prompt.
Questions are all standardized.

Unstructured interview - Has no limitations and no standardisation. Questions are not in a predetermined format; they are flexible according to what the participant says, and thus questions may be different for each participant. Responses may, however, be hard to collect, categorize and compare.

Semi-structured Interview - Contains a mix of fixed questions and improvised ones. Comparisons can be made and averages can be calculated. Also allows researchers to develop ideas and explore issues, and to gather more clarity about a certain topic. Adjustments are thus possible and allow researchers to explore underlying issues to suggest correlations / causal relations.

Advantages of self reports:


Participants are given the chance to express a wide range of feelings, thoughts and then explain
them. Data is rich - detailed (qualitative)
Data is numeric (quantitative) - easier to analyse and statistically relevant.
A large sample can be dealt with quickly and efficiently as a large audience can be reached.
Increasing representativeness and generalisability.
Easy to replicate - reliable. They are likely to be administered in a consistent way.

Disadvantages:
Closed questions often limit the range of expression of a participant which may miss out vital
information.
Participants may provide socially desirable responses (demand characteristics may surface) if
they are aware of the objectives of the research.
There are high chances of validity being low as a limited range of response sets might not
reflect a participant’s actual viewpoint and they may be compelled to answer differently.
Open ended can be time consuming to analyse.
Withdrawal is common
Researchers must be careful not to be subjective. They should aim for objectivity. Responses to
open questions may be interpreted differently by the researchers - may differ in opinion.

Subjectivity - personal viewpoint which may be biased by one's feelings, beliefs and
experiences. So may differ between individuals.
Objectivity - An unbiased external viewpoint that is not affected by an individual’s feelings.

CASE STUDIES
A detailed investigation which goes on for a certain extended period of time which focuses on
one subject. It is however not exclusive to one person – it may be an organization, a family, etc.
They often involve longitudinal research, which is frequently used in therapy and has no fixed time limit, meaning a case study can go on for months and in some cases even years, developing an understanding of the particular behaviour of that one subject. Case studies are, however, not solely used for therapeutic purposes.

Detailed and in depth data gathered via different techniques. Useful for following developmental
changes.

Advantages:
Case studies are ideal in situations where it is logistically difficult or impossible to have a large participant sample, as they allow behaviours to be studied in great detail.

A longitudinal study results in the collection of both quantitative data and qualitative data (rich and detailed), which may measure and quantify developing behaviours. The different techniques used should all lead to similar conclusions.

The sample may be self-selecting, which eases ethical considerations such as informed consent, privacy and confidentiality, since the participant has agreed to take part.

Ecological validity is usually quite high, as the behaviour that is being studied is a part of
everyday life.

Disadvantages:
Case studies very rarely produce quantitative data sufficient for statistical analysis – which invites the argument that they are a mere collection of anecdotal evidence (evidence that is collected casually, without strict controls or support, and relies heavily on personal testimony).
The level of detail may invade a person’s private life, and it is hard to disguise their identity, risking a breach of the confidentiality guideline.

These often require a quite intense and intimate relationship between the participant and the
researcher and thus the problem of objectivity arises. They may develop opinions that directly
influence results gathered as they might be emotionally involved.
Conclusive decisions cannot be made as it only includes very few or one participant.
Non-generalizable.

Because the participant is unique, researchers might proceed with invalid procedures and may draw false conclusions, making assumptions on weak grounds of evidence. It is hard to draw valid assumptions and unbiased findings.
Cannot be replicated. Findings may be limited to only this one case.

OBSERVATIONS
Observations are the procedure of watching and then consequently recording and documenting
the behaviour of the human or animal participants.
Can be done in 2 standard ways;

1. Naturalistic observation - conducted in the participants’ normal atmosphere, without any interference from the researcher, who observes them in their usual physical and natural environment.
2. Controlled observation - conducted in an environment that has been manipulated by the
researcher. (lab or field)

If one considers the whole spectrum of possible behaviours it is a possibility that observations
may be non-focused – if this lack of strict controls continues then it is deemed an unstructured
observation.
Unstructured - the observer records the whole range of possible behaviours; this is usually confined to a pilot study stage at the beginning of a study, used to refine the behavioural categories to be observed.
Advantage:
Ensures that any important information or behaviour is recognised. However, it may be very difficult to record all the activities accurately and many may be irrelevant; a structured observation is likely to produce more reliable data.

A structured observation, however, is designed to concentrate on a specific set and range of behaviours, record them and then categorize them. This also allows the study’s reliability to be tested via a technique called inter-rater reliability (two or more observers record the same events using the same methods, and the degree of agreement between their respective records is judged).

Behavioural categories must be observable actions and operationalised. This helps the
observers to be consistent i.e improves inter-observer reliability.

Inter-observer reliability - the consistency between 2 researchers watching the same event
and whether they will produce the same records.

Observations are often also conducted in social settings, either participant or non participant.
A participant observation - A researcher who watches from the perspective of being part of
the social setting. They are part of the situation / setting.
Non-participant observation - A researcher who does not become involved in the situation being studied, e.g. by watching through one-way glass or by keeping apart from the social group of the participants.
Observers may take one of two roles:
Overt observers - role of the observer is obvious to the participants. Observers are openly
watching and documenting the participant behaviour with the participant knowing they are being
studied.
Strengths:
Raises fewer ethical issues, as participants know they are being observed.
Is practical and thus can be conducted over an extended period of time.
The researcher can make notes and record details openly without having to rely on memory as
they don't have to worry about blowing their cover.
Researchers can ask a number of questions using different methods.
Weaknesses:
A very high risk of demand characteristics which lowers validity as activity recorded is less likely
to reflect real-world behaviour.
High risk of incurring socially desirable responses from the participants.
Results may not always be representative - questioning the credibility of research

Covert observers - the role of the observer is not obvious, as it is ‘undercover’, hidden or disguised.
Strengths:
Increases validity - less or no exposure to the aim so no demand characteristics
Reduced effects of social desirability
Researchers can dig deeper and observe more of participants’ natural state of behaviour.
High rate of inter-rater reliability as 2 observers may be simultaneously observing.
Weaknesses:
Raises ethical issues such as informed consent / deception / privacy
Participants may feel distressed at the violation of their privacy.
The legality of this is often questioned
There is a risk of the observer’s identity being revealed, which would discredit the whole operation and leaves the researcher under constant stress. Data recorded may lack validity as it is based on memory.
Hard to sustain or conduct over time. Participants may interact with researchers in ways they
wouldn't want to if they knew what the real purpose was.

Advantages of observations:
The observed behaviour is natural and authentic if participants are unaware they are being watched – this increases ecological validity.
Data collected through structured observations with clearly defined categories is often quantitative, which makes it objective and statistically analysable.
The chance of gathering extremely rich data is very high if the observation is unstructured.
If participants are unaware, risk of inducing demand characteristics is improbable which
increases validity.

Disadvantages of observations:
The participant cannot explain or elaborate on the cause of their behaviour, so interpretation is subjective (and asking them might expose the aim of the research).
Observations may not be reliable due to natural and logistical issues such as view obstruction,
missing out on details, relying on memory etc.
Naturalistic observations make it hard for controls to be established and this in turn, makes it
harder to control confounding variables – making it difficult to formulate a cause-and-effect
relationship.
Difficulty in replication.
Various ethical issues arise – deception, lack of informed consent as people are being observed
without their permission.

CORRELATIONS
Technique used to investigate a link between 2 measured variables.
Useful when it is possible only to measure variables rather than manipulate them.
Any link found between two variables in a correlation cannot be assumed to be a causal relationship. We cannot assume that one variable is the reason for a change in another variable.

In order to look for or establish a correlation between two variables, each variable must exist
over a range/spectrum and it must be possible to measure them numerically. To collect data and
information for correlations all of the above mentioned research methods are used (self-reports,
observations, etc.)

It is important to note, before assuming that an increase in one variable has caused an increase in the other, that there may be other factors that cause changes in both variables.

All that can be established is that the two variables vary together, not that there is a causal relationship between them, as the link may even be coincidental.
As a result, in a correlation there are ‘measured variables’ or ‘co-variables’ rather than
dependent and independent variables.

The relationship’s nature between the two variables in a correlation can be described in terms of
its directions – positive or negative.

In a positive correlation, the two variables increase together, in the same direction, so higher values on one variable correspond with higher values on the other (directly proportional). For example, a positive correlation would be between exposure to aggressive models and violent behaviour – greater exposure to aggressive models is associated with more violent behaviour (as suggested by Bandura et al.’s study).

In a negative correlation, as one variable increases, the other decreases (inversely proportional): higher values on one variable correspond with lower values on the other. For example, a negative correlation might exist between obesity and income – with higher levels of obesity being observed at lower levels of income, given that low-quality food with next to zero nutritional value, such as fast food and candy, is often cheap.

Correlation Coefficient
Is a number between -1 and +1 that states how strong a correlation is. If it is close to 0 then
there is very little connection between the 2 variables at all.
If it is approaching +1 there is a positive correlation: the variables are directly proportional, tending to increase (or decrease) together.
If it is approaching -1 there is a negative correlation: the variables are inversely proportional, with one increasing as the other decreases.
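As a hedged illustration of what the coefficient summarises, the sketch below computes Pearson's r for two small invented sets of scores using Python's standard library (statistics.correlation is available from Python 3.10); the numbers are made up for demonstration only:

```python
from statistics import correlation  # Pearson's r; available in Python 3.10+

# Invented co-variable scores, for illustration only.
hours_of_tv = [1, 2, 3, 4, 5, 6]
aggression_score = [2, 3, 5, 4, 6, 7]

r = correlation(hours_of_tv, aggression_score)
print(round(r, 2))
# A value close to +1 indicates a strong positive correlation,
# close to -1 a strong negative correlation,
# and close to 0 little or no correlation.
```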

Evaluation:
A correlational study can only be effective/valid if the measures of both variables test real
occurrences. For this, the variables must be clearly well-defined and relate directly to the
relationship that is being investigated.
The reliability of the correlation is dependent on the consistency of the variables. For some
correlations, such as those which utilize scientific scales – the measures will be high in reliability
(as they can be tested again and results will be objective).
For other cases, in which variables were measured using techniques such as self-reports or
observations, there is the plausible risk that reliability will be lower (as it will be difficult to
replicate and results will be subjective).

They are useful because they enable researchers to explore problems when it is not practical or ethical to conduct experiments.

Advantages:
The main strength of a correlation is that it can provide precise information about the degree of the relationship between the variables.
• Study behavior that is otherwise difficult/impossible to study.
• Collect quantitative data for statistical analysis which will help in determining whether the data
supports the study or not.
Disadvantage:
The main weakness of a correlation is that it is inconclusive i.e. it cannot show cause-and-effect
(which variables control which).
• No control over other influencing factors and variables.

Correlation does not mean Causation

RESEARCH PROCESSES
AIMS:
Tells us the purpose of the investigation and helps explain the reasons why a particular hypothesis is being investigated. Expresses what the study intends to show.
In a correlation, the aim is to investigate the link between 2 measured variables.

HYPOTHESIS:
Is a ‘testable’ statement that is used to make the research more precise / exact. Predicts a
difference between levels of the IV or a correlation.
Ideally provides more detail about the variables being investigated and should be ‘falsifiable’ as
it is ‘tested’.
Alternative Hypothesis is the main hypothesis which can be written in many different ways. They
differ in the nature of prediction.

Types of Hypothesis:
Non-Directional Hypothesis (2 tailed hypothesis):
Predicts that the IV will affect the DV, but does not indicate the direction of the change, i.e. whether the effect will be an increase or a decrease. In a correlational study, it predicts that there will be a relationship between the 2 measured variables, but not its direction.
This type of hypothesis is usually chosen if the effect of a certain variable is being investigated for the first time, when there is no previous evidence to suggest what the results might be.
Directional Hypothesis (one tailed hypothesis):
When ‘previous evidence’ or ‘previous research’ suggests the direction of an effect (of the IV’s
on the DV), it is then when a Directional hypothesis is used. This is also known as the
‘one-tailed’ hypothesis.
In an experiment it means saying which condition will be best (produces the highest score).
In a correlational study, it predicts whether the correlation will be positive or negative.
It is important to remember that, in a correlational study, the hypothesis should not say that one factor causes a change in the other.

Null Hypothesis:
In an experiment, this states that any difference in the DV between the levels of the IV is so small that it is likely to have arisen by chance – the difference is so insignificant that it is probably a coincidence. A null hypothesis predicts a scenario of pure chance, and should state both levels of the IV and the DV. For a correlational study, the null hypothesis predicts no relationship between the variables.

DEFINING, MANIPULATING AND CONTROLLING VARIABLES


Variables are factors that change or can be changed. In experiments these are the IV and DV as
well as any extraneous variables that are or are not controlled.

Experiments look for changes and variations in the DV between two or more levels of the IV, which are set up by the experimenter/researcher.
The essential aspect is for the IV to be concretely defined or, better, operationalized, so that the manipulation of the conditions produces the intended effects.

To make it clear:
Variables are factors that are prone to change or can be changed and moulded.

Operationalizing: Involves defining each variable of interest in terms of the operations taken to ‘measure’ it. This allows vague concepts to be empirically measured and observed.
The DV must also be operationalised so we can measure it effectively.

In order to be sure about research findings, variables need to be controlled, especially in experiments, where extraneous variables are likely to disrupt the results and distort their interpretation.

Confounding Variables can either work against the effect of the IV or exaggerate its intended/expected outcome, because they act systematically on the DV alongside the IV. You are then left with no way of knowing what caused the change – they confuse the results.
Extraneous Variables which randomly affect all levels of the IV aren’t so problematic. The
difficult part is to identify and select which variables to manage before the experiment launches.
It is, however, also important to note that if extraneous variables are not recognized and acknowledged beforehand, they become uncontrolled variables, which would make the results difficult to interpret, as it would be difficult to distinguish the effects of the IV from those of other variables that affect the DV.

Standardisation:
Controls are put in place to ensure that the levels of the IV represent what they are designed to, i.e. the differences between them will produce the intended comparison to examine the hypothesis – ensuring validity and reliability. This enables every participant in the study to be treated equally, so that no unwanted differences arise between conditions. This is called standardization.

This is achieved by having a unified, standardized set of instructions, that provide the same
instructions to every participant involved in the study. For instance, a 10-question questionnaire
which asks about people’s dietary habits – all the participants should be told how to answer it
only strictly regarding their food patterns, so if any social desirability is there, it should be equal.

Procedures also need to be strictly standardized - this involves having equipment, tests and designs that are consistent, measuring the same variable every time and always doing so in the exact same way. Consider the questionnaire about people’s dietary habits again: it should focus strictly on people’s food consumption patterns rather than on why those patterns are the way they are, which is related but not necessary to the context.

In laboratory experiments, standardization is easier because variables such as equipment are better and more easily controlled. Examples would be a stopwatch used to regulate time intervals in the experiment, or fMRI brain scans, which are an objective measure. These do, however, also have to be performed in a standardized manner for the results to be interpreted. The controls used should be appropriate, and researchers should be taught how to implement them.

Situational Variable - A confounding variable caused by an aspect of the environment.
Control - A way to keep a potential extraneous variable constant.
SAMPLING OF PARTICIPANTS
A population is a group of people (or animals) with one or more characteristics in common. A
population of people can also be defined as people who share certain interests or have a
common feature. The sample of that population is what is recruited in a research.
Sample - is a group of people who participate in a study.
They are taken from the population and should be ideally representative of that group so that
findings are generalisable.
Target population of the study should also be recognized early on so that the sample the
researcher chooses should be relevant and representative.

Important things to consider when sampling:

• Sample details such as age, ethnicity, gender. They are basic essentials that should always
be considered.

• Sample details such as socio-economic standing, employment, education, occupation,


geographical location.

• Sample size. (Should be balanced in terms of being representative)

• Small samples are usually less reliable and less representative, and thus generally less valid for the purpose of the research.

SAMPLING TECHNIQUES:
Opportunity Sample - involves the researcher approaching people who are easy to find and
easily available, such as students who are studying mathematics in the same university
department. They are chosen because they are available.
Advantages:
It is relatively quick and easy to recruit participants, and a large sample can be obtained without a lot of effort.
It is convenient for the researcher and the participant, as in some cases there are also incentives.
Disadvantages:
Participants in the study are unlikely to be truly representative of the target population, and the sample could be biased (e.g. towards people who are paid or given credits), because the researcher chooses the sample.

Volunteer Sampling (Self Selecting) - This usually revolves around researchers advertising for participants. An advert could appear in a newspaper, on notice boards or online. The people who reply are ‘self-selecting’ - that is, they willingly volunteer themselves for the research. Sometimes volunteers are given no incentives at all, neither course credit nor payment; often they are given a small amount of one or both. Those who reply to the notices become the sample.
Advantages:
They are useful when the research requires participants that are specific to the needs of the
experiment. They are likely to be committed.
Recruiting participants is easier because the advert can easily be placed in print media, social
media and digital media which has a large enough reach.
It is less time-consuming.
Disadvantages:
This is expensive (adverts in the media would cost a lot of capital investment) and in some
cases it would take even more effort to convince the participant to be in the study.
People may not see the advert, they might ignore it, they might not reply even after seeing it.
Extraneous variables may arise if the actual characteristics of the volunteers differ from the eligibility criteria required.
There is no way to ensure representativeness of the target population; people who respond may be similar (e.g. have free time).
Demand characteristics and social desirability may affect the findings, especially where an incentive is involved in volunteering.
Random Sampling:
Each participant is given an equal chance of being selected from the target population. If the target population is ‘factory workers’ and there are 800 of them, the only way to truly randomly select the sample is to put all 800 names together and draw out 20 or 30 names at random (depending on the required sample size of the study). Participants may be allocated numbers and selected in an unbiased way.
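A minimal sketch of this kind of unbiased selection, assuming a hypothetical population of 800 factory workers and a sample size of 20 (both numbers taken from the example above; the worker names are invented):

```python
import random

# Hypothetical target population: 800 factory workers identified by number.
population = [f"worker_{n}" for n in range(1, 801)]

# random.sample gives every member an equal chance of being chosen,
# so the selection is unbiased.
sample = random.sample(population, k=20)
print(sample)
```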
Advantages:
They are more likely to be representative than opportunity or self-selecting samples as the
clause for bias does not exist as the selection is up to chance, random.
It is efficient if a certain demographic is to be studied and out of that lot the participants are
selected at random.
Disadvantages:
It is often time-consuming when a large target population is considered. If, for example, not all names in the potential participant pool are available, it would be difficult to conduct a random sample.
Equal opportunity in the random selection process is often idealistic, in the sense that not everybody selected might be inclined to fully participate. It is possible for them to leave and for the researcher then to recruit a replacement participant.
This might bias the sample.

DATA AND ANALYSIS


Psychological research often requires numerical, quantitative organization of the results obtained from its findings – these results are called ‘raw data’. To summarise large sets of findings, the data is often mathematically simplified and visually represented via graphs.

The numerical results collected by psychologists are known as ‘quantitative data’; data which is detailed and descriptive is called ‘qualitative data’. Quantitative data indicates the quantity of the psychological measure, i.e. the strength or amount of a response, and tends to be measured on scales, such as time, or as numeric scores on tests such as personality, IQ and T-maze tests.

Quantitative data is associated with experiments and correlations which use numeric scales but
it is also possible to collect quantitative data from observations and interviews.
Advantages:
It usually uses objective measures and scales.
They are reliable (can be tested repeatedly) aka replicated.
Quicker to analyse statistically when there are large volumes of data involved – in terms of
statistical comparison.
Disadvantages:
This method of data collection often limits responses so there is an aspect of the findings being
less valid and less representative. No explanation of ‘why’.
Large samples are needed for the findings to be generalizable.

Qualitative data is descriptive, in-depth data indicating the quality of the psychological characteristic.
Advantages:
Data is often valid because it is descriptive and detailed, not limited by fixed choices.
Can often help researchers account for certain variables by making them aware of them (e.g. childhood, family), allowing them to estimate causes and effects.
Data is more in-depth, which enables a deeper understanding of the study.
Important responses are less likely to be ignored because of averaging.
Disadvantages:
It is subjective to the studies/experiments, and cannot be usually generalized for the larger
context.
Data may have bias – of both the participant and researcher. This may render data invalid.
Difficult to statistically analyse and comprehend.
Difficult to replicate without strict standardization and thus low reliability.

DATA ANALYSIS
Measures of Central Tendency:
A set of quantitative results can be summarised as a single number that represents the middle or typical score – this is known as a measure of central tendency, or average.

3 different measures:
Mode - the most frequently occurring score in a data set. There can be more than one mode. It is unaffected by extreme scores and is useful for observing repetitive behavioural patterns. Limitations of this measure are that it offers no insight into the other scores, it isn’t very ‘central’, and it can fluctuate considerably from one sample to another.

Median - only used with numerical data on a linear scale. All the scores in the data set are arranged in ascending order, from smallest to largest, and the middle one in the list is the median. If there is an even number of scores, there are two numbers in the middle; these are added together and then divided by 2.

In essence, the median value is the halfway point that separates the lower half of the scores from the upper half. It is unaffected by extremes, in the sense that there is no distortion of results. It can, however, be misleading when there are only a few scores, and it does not take most of them into account.

Mean - The mean is the measure of central tendency that we usually call the ‘average’. It can
only be used with numerical data from linear scales. The mean is worked out by adding up all
the scores in the data set and dividing them by the total number of scores. It is the most
thorough and informative measure of central tendency as it takes into account all scores. There
is however the probability of it giving a distorted result if there are any anomalous scores.

It is calculated by adding up all the values to find a total, then dividing this total by the number of values.
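A short sketch of the three measures, using Python's standard statistics module on an invented data set (the scores are made up purely to show how an anomalous value affects each measure):

```python
from statistics import mean, median, mode, multimode

scores = [3, 5, 5, 6, 7, 9, 30]   # invented scores; 30 is an anomalous value

print(mode(scores))       # 5   - most frequent score, unaffected by the extreme 30
print(multimode(scores))  # [5] - lists every mode if there is more than one
print(median(scores))     # 6   - middle score once the list is ordered
print(mean(scores))       # ~9.29 - pulled upwards (distorted) by the anomalous 30
```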

MEASURES OF SPREAD
This indicates how far spread, dispersed and varied data is within a set. If two data sets are the
same size, with the same mean, they could still vary in terms of how close the majority of data
points were to that average. Differences such as this are described by measures of spread: the
range and the standard deviation.

Range - To calculate:
1. Find the largest and smallest value in the set of data.
2. Subtract the smallest value from the largest value and add 1.

Conventionally, the addition of 1 is not done. In psychological research this is done so that we
measure the gaps between points, not the points themselves.
Standard Deviation - takes into account the difference between each data point and the mean – each such difference is known as a deviation from the mean.

As the standard deviation tells us the spread of a group, groups with scores that are more
widely dispersed have a larger standard deviation. When the standard deviations of two groups
are similar, this indicates they have a similar variation around the mean/average.
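A brief sketch of both measures of spread on two invented data sets with the same mean (the range here includes the +1 adjustment described above; the standard deviation uses Python's statistics module):

```python
from statistics import stdev

group_a = [10, 11, 12, 13, 14]   # invented scores, tightly clustered (mean 12)
group_b = [2, 7, 12, 17, 22]     # invented scores, same mean but more spread out

# Range as described above: (largest - smallest) + 1
range_a = max(group_a) - min(group_a) + 1   # (14 - 10) + 1 = 5
range_b = max(group_b) - min(group_b) + 1   # (22 - 2) + 1 = 21

# A larger standard deviation means scores are more widely dispersed around the mean.
print(range_a, round(stdev(group_a), 2))
print(range_b, round(stdev(group_b), 2))
```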

Graphs - used to illustrate data visually, with a variety of types for different purposes. The ones included in our syllabus are bar charts, histograms and scatter graphs.

Bar Charts - used when data is in separate categories rather than on a continuous scale. Bar charts are therefore used for the totals of data collected in named categories and for all measures of central tendency.

Histograms - useful to show the pattern in the whole data set, where the data is continuous in
which case the data is being measured on a scale rather than distinct categories. A histogram
may be used to illustrate the distribution of a set of scores.

Scatter Graphs - The results collected from a correlational study are presented on a scatter graph. To construct a scatter graph, a dot is marked at the point where the participant’s scores on the two variables cross. A ‘line of best fit’ also commonly appears on a scatter graph: its position is calculated and the line is drawn so that it comes as close to as many points as possible. In a strong correlation, all the data points lie near/close to the line, whereas in a weak correlation it is vice versa – they are more spread out. When there is no correlation, no clear line can be drawn.
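A hedged sketch of how such a graph could be drawn with matplotlib, assuming two invented co-variables and fitting the line of best fit with numpy.polyfit (none of the numbers come from a real study):

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented scores on two measured variables, one pair per participant.
variable_x = [1, 2, 3, 4, 5, 6, 7]
variable_y = [2, 3, 3, 5, 6, 6, 8]

plt.scatter(variable_x, variable_y)            # one dot per participant

# Line of best fit: a straight line positioned as close to the points as possible.
slope, intercept = np.polyfit(variable_x, variable_y, 1)
xs = np.array(variable_x)
plt.plot(xs, slope * xs + intercept)

plt.xlabel("Measured variable X")
plt.ylabel("Measured variable Y")
plt.show()
```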

Normal Distribution Curve - bell shaped, symmetrical and evenly spread about the mean.

ETHICAL CONSIDERATIONS
Experiments and studies conducted using humans or animals have the potential to cause
concerns about the welfare of the participants – these are called ethical issues. There are
certain problems that may arise when the nature of the study is put into context – such as
psychological discomfort, harm, stress, the procedure’s nature, the need to lie to hide the aim of
the study. Ethical issues may also arise from the implications of their research, for example the
possibility of results having a negative impact on the society.

To regulate these concerns, organizations and council bodies exist which produce a code of conduct, with rules such as approval from governing bodies (such as universities) and guidelines that help experimenters work in ways that do not violate the ethics code, which sets out limitations and concerns for the welfare of the individuals involved in the study.

This is important because if participants take away a negative perception and experience from
their participation it will negatively impact the whole psychological community which in turn will
lose credibility.

ETHICAL GUIDELINES RELATED TO HUMAN PARTICIPANTS:


Informed Consent -
In order to reduce or negate demand characteristics and social desirability, and so protect the validity of the study, it is important to hide the aim of the study. It is, however, important for participants to know what is involved in the study so they can provide their informed consent.
Ideally, informed consent should be obtained from participants before the study commences, not by revealing the aim of the study but by providing them with sufficient information to decide whether or not to participate in the study. However, in the case of naturalistic observations and field experiments it is not possible for informed consent to be taken. This is where ‘presumptive consent’ comes in: researchers may ask a group similar to the target population (sharing similar traits) whether they would find the study acceptable, on the assumption that the target population would also have agreed.

Protection
A study may have the potential to cause psychological discomfort, stress and harm to the
participants involved (for eg. Milgram et al.). In situations like those it is imperative that
participants should be protected, should not be put at higher risks and steps should thus be
taken to eliminate the risk altogether. It is also a preventive measure that the study being
conducted should be stopped if unexpected risks arise.

Right to Withdraw
Participants are given the right to withdraw, and this must be made clear to them at the start of the study. Although participants can be offered incentives to join a study, these cannot be retracted if they wish to leave. Researchers cannot abuse their position of authority by forcing a participant to stay if they don’t want to. Participants and researchers should both be aware of this.

Deception
If possible, participants should not be deliberately misinformed. When it is absolutely essential to do so, they should be apologized to and debriefed promptly. They should also be allowed to remove their results if they wish to.

Confidentiality
All the data that is collected should be stored separately from the participants’ personal information – age, name, gender, ethnicity, occupation. This information must not be shared with any third party – that would be a breach of confidentiality. Ideally, to ensure confidentiality, the personal details of the participant should be destroyed so that any breach is impossible. If there is a need to contact the participant again, or to pair up an individual’s scores in each condition in, say, a repeated measures design, a serial number can be allotted to the participant(s) to identify them.

Privacy
Research methods such as self-reports and observations which ask personal questions in a study risk invading privacy. This means invading personal space or an emotional territory that
the individuals do not want to share. They can make this clear, setting boundaries with the
researcher. In the case of a questionnaire, participants should be allotted personal space. In
observations, people should only be observed/watched where the participants would usually
expect to be observed. Their information can be published only when the participants themselves grant permission via informed consent, or in exceptional circumstances where the safety or lives of the participants or others are at stake.

Debriefing
Participants who have been in a study are thanked, apologized to if they were deceived, and given the chance to ask questions. They are also informed of the full aim of the study and asked to confirm that they do not want to withdraw their data. It is, however, important to understand that debriefing does not justify designing an unethical procedure or experiment; researchers must still consider minimizing ‘collateral damage’ and distress to the participants in any case.

ETHICAL GUIDELINES RELATED TO USE OF ANIMALS


Animals are frequently used in psychological research for a number of different reasons suggested by psychologists – they are convenient models, a way to carry out procedures that would not be possible with humans (because of ethical considerations), and because of redundancy. This is why research is conducted on animals, but their welfare needs safeguarding.

Animals are also often protected by law but these guidelines specifically consider the effects of
research in which animals may be caged/confined, harmed, in pain or stressed – this suffering
should be minimized. Veterinary help/advice should be sought in case where needed.

REPLACEMENT
Researchers should consider replacing actual animal experiments with alternatives such as
videos from previously conducted studies or computer simulations.

SPECIES AND STRAIN


The chosen species and strain should be the one that is least likely to experience distress or pain. Other relevant and important factors should also be considered, such as whether the animals were bred in captivity, whether the animals were participants in a study prior to the current one, and the duration of the studies.

NUMBER OF ANIMALS
Only the minimum number of animals needed to produce reliable and valid findings should be used. To minimize this number, pilot studies, reliable measures of the DV, good experimental design and research methods, along with sound data analysis, should be employed.

PROCEDURES: PAIN AND DISTRESS


Research that may potentially cause disease, injury, physiological and psychological distress,
discomfort and death should be avoided at all costs. The experimental design should work on
reducing any possible pain of the animals, rather than worsen the situation.
Alternatively, naturally occurring instances may be studied. During research, attention has to be paid to the animals’ daily care and veterinary needs, and any costs inflicted upon the animals should be justified by an objective, scientific explanation of the benefit of the work.

HOUSING
Isolation and overcrowding can cause animals to become distressed as some of them have
solitary, territorial tendencies and habits. The caging condition should be considered according
to the social behaviour patterns of the animals. Overcrowding can cause aggression and
consequently, distress. Their food and water should be sufficient regarding their dietary habits.
However, the artificial environment only needs to recreate the aspects of the natural environment that are important to welfare and survival, e.g. warmth, space for exercise or somewhere to hide. Cage cleaning should be a top priority.

REWARD AND DEPRIVATION AND AVERSIVE STIMULI


When initiating studies that concern the dietary habits of animals, the study should be designed to satisfy the animals’ needs. The use of preferred food should be considered as an alternative to deprivation, and alternatives to aversive stimuli should be considered (an aversive stimulus is an intentionally presented unpleasant event intended to decrease the probability of a behaviour when it is given as a consequence, for example a punishment). Alternatives to deprivation should be used where possible.

ANAESTHESIA, ANALGESIA AND EUTHANASIA


Animals should be protected from pain.
Anaesthesia: a process of inducing a temporary loss of sensation, awareness and consciousness, often administered intravenously (IV); it may also be used to induce paralysis (for muscle relaxation).

Analgesia: Medication used to relieve pain and inflammation.

Euthanasia: also known as mercy killing; the process of intentionally ending the subject’s life to relieve it from pain and suffering, or withholding artificial life support and treatment.

EVALUATING RESEARCH METHODS


Reliability
Whenever research is conducted data is inherently obtained. Researchers must attempt to
make sure that the way in which these results are collected is the same every time. When
differences in findings occur upon times of repeating the research, such inconsistencies are
deemed problems in reliability.

The reliability of the measures used to collect data depends on the ‘tool’ used. A researcher collecting reaction times or pulse rates as data will probably have high reliability, as the machines used are likely to produce very consistent measures of times or rates.

The way to check reliability is to use the test-retest procedure. This involves using a measure
once, and then using it again in the same situation. If the reliability is high, the same results will
be witnessed and collected on both occasions meaning there will be a high correlation between
the two score sets.
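A minimal sketch of the test-retest check, assuming invented scores from the same five participants on two occasions (statistics.correlation requires Python 3.10+):

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Invented scores from the same participants, measured on two occasions.
test_scores   = [12, 15, 9, 20, 17]
retest_scores = [13, 14, 10, 19, 18]

# A correlation close to +1 between the two score sets suggests the measure is reliable.
print(round(correlation(test_scores, retest_scores), 2))
```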
There is also the problem in reliability that there are subjective interpretations of data. For
instance, a researcher who is using a questionnaire or interview with open questions may come
to find that the same answers could be interpreted in different ways, producing low reliability. If
these differences arose between different researchers, this would come to be called an
inter-rater reliability problem. This however, can be solved by operationalizing.

Similarly, if in an observation researchers gave different interpretations of the same actions, this would be low inter-observer reliability. If the reliability was low, the researchers in either case
would need to discuss why the differences arose and find ways to make their interpretations or
observations more alike. This can be done by agreeing on operational definitions of the
variables being measured and by looking at examples together. These steps would help to make the research considerably more objective.

To minimize differences in the way research is conducted that could reduce reliability, standardization can be used, i.e. the procedure is kept the same for everyone. This includes instructions, materials and apparatus, although it is important to note that there would be no reason to change many of these anyway. The important aspects of standardization are those factors which might differ, such as the experimenter’s manner towards participants in different levels of the IV, an interviewer’s body language or verbal mannerisms, or an observer’s success at concealing their presence.
Validity
Many factors affect validity (and this includes reliability, because a test or task cannot measure what it intends to unless the methods are consistent). Objectivity also affects validity, in the sense that if a researcher is subjective in their handling and, specifically, interpretation of data, their findings will not properly reflect the intended measure.

There are different types of validity that are important. One is face validity (essentially whether the procedure appears to measure what it should): a test or task must seem to test what it is actually supposed to. Consider a test of helping behaviour that involved offering to assist people who were stuck in a bathtub full of spiders or lizards.
It might not be a valid test of helping because people who were frightened of spiders or lizards
would not help, even though they might otherwise be of altruistic nature (selflessly helping). This
would be deemed a lack of face validity.

If participants start to think that they understand the aim of the study, their behaviour patterns
and characteristics are very likely to be affected by what we call social desirability and demand
characteristics – this obviously lowers validity. When designing a study, the researcher should aim to minimize demand characteristics so that it is not apparent or indicated to the participants how they are expected to behave.
Another problem of validity is whether the research’s findings are too specific to that one study and cannot be applied to other situations; the findings then lack the general reach they were supposed to have – a lack of ecological validity. This type of validity explores whether findings from the laboratory apply to the ‘real world’.
The task itself matters too. If participants are asked to do tasks that are similar to ones in real-life contexts, then the study has mundane realism (the degree to which the task is similar to events in real-life contexts). This is significant because a study naturally has higher ecological validity if its tasks are realistic. For instance, in an experiment on emotional responses to dangerous animals, stimuli such as bears, insects, bats or tigers could be used.

Only a small number of people are likely to have seen bears or tigers, a few more will have seen a bat, but insects are likely to have been seen by everybody in the participant sample – so using insects has higher mundane realism and thus higher ecological validity. This is a variant of external validity, which refers to whether or not the findings of the study can be generalized beyond the present study.

Generalisability
As it is apparent, Ecological Validity contributes to the generalisability of the results. Another
factor which affects the ability to generalize is the participants of the sample.

If the sample is very small, or does not contain a wide range of the different kinds of people in
the population (such as gender, age, ethnicity, etc) it is actually unlikely to be representative.

Restricted samples like these are more likely to occur when opportunity or volunteer sampling is used, rather than random sampling.

Important Things To Remember About Research Methodology And Processes:

• Are measures reliable?

• Are the tools and equipment being used collecting consistent results?
• Are the researchers using those in ways that are consistent?

• Is the interpretation of data objective?

• Is the study valid? Does it represent what the aim intends to find out?

• Take into account the position of reliability and generalizability when it comes to validity.
• Are there any variables that may affect results? Such as Social Desirability, Demand
Characteristics, Familiarity Bias, Researcher Bias, etc?

• To improve the study, focus needs to be on: Method, Design, Procedure and Sampling Tool.

COGNITIVE APPROACH
Main assumptions:
Behaviour and emotions can be explained in terms of the role of cognitive processes such as
attention, language, thinking and memory. Our complex mental processes can be studied
scientifically.
Humans can be seen as data processing systems. The workings of a computer and the human
mind are alike. They encode information - store information - provide an output.

ASSUMPTIONS OF THE COGNITIVE APPROACH


- behaviour and emotions can be explained in terms of cognitive processes such as attention, language, thinking and memory
- similarities and differences between people can be understood in terms of individual patterns of cognition
