IOP3702 Presentation 17march2012hvdo
Personnel Psychology:
Organisational Entry
YOUR LECTURERS
Ms Larissa Louw
(012) 429 8098
AJH 3-85
louwla@unisa.ac.za
AIM OF GROUP DISCUSSION
Muchinsky et al. (2005)
CHAPTER 1
The historical background of industrial
psychology
&
Coetzee & Schreuder (2010)
CHAPTER 1
Introduction to personnel psychology
PERSONNEL PSYCHOLOGY
DEFINITION
Muchinsky et al. (2005)
CHAPTER 2
Research methods in industrial psychology
&
Coetzee & Schreuder (2010)
CHAPTER 2
Research methods in personnel psychology
THE RESEARCH PROCESS
• STEP 1: Statement of the research problem
• STEP 4: How do you analyse the data, i.e. make sense of it:
– QUALITATIVE – use content analysis
– QUANTITATIVE – use statistical analysis, i.e. descriptive statistics,
correlation, regression, inferential statistics or meta-analysis (see the sketch below)
Explained from page 31 to page 38
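To make the quantitative option concrete, the hedged sketch below computes descriptive statistics, a correlation and a simple regression for a small, hypothetical data set. The variable names and values are illustrative only and are not taken from the prescribed books.

# Minimal sketch of quantitative analysis for Step 4 (hypothetical data).
import numpy as np

# Hypothetical predictor (test) scores and criterion (productivity) ratings
test_scores = np.array([52, 61, 47, 70, 66, 58, 73, 49, 64, 55])
productivity = np.array([3.1, 3.8, 2.9, 4.4, 4.0, 3.5, 4.6, 3.0, 3.9, 3.3])

# Descriptive statistics
print("Mean test score:", test_scores.mean())
print("Std deviation:", test_scores.std(ddof=1))

# Correlation between the two variables (Pearson r)
r = np.corrcoef(test_scores, productivity)[0, 1]
print("Correlation (r):", round(r, 2))

# Simple linear regression (least squares): productivity predicted from test score
slope, intercept = np.polyfit(test_scores, productivity, 1)
print("Regression: productivity =", round(intercept, 2), "+", round(slope, 3), "* test score")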
Question Type – Example:
• Descriptive question – a picture of a state of events, e.g. levels of productivity
• Relational question – Is there a relationship between the type of interview conducted and the interviewer’s success in rating an applicant’s personality?
STEP 3 - DATA GATHERING TECHNIQUES
Technique – Definition – Quantitative or Qualitative Application

• Surveys (Questionnaires)
Definition: A survey is a set of questions that requires an individual to express an opinion, answer or provide a rating regarding a specific topic.
Application: Closed-ended questions can be asked in a structured questionnaire for a quantitative study. Open-ended questions can be asked in a semi-structured or unstructured questionnaire for a qualitative study.

• Observation
Definition: The researcher observes (which entails watching and listening) employees in their organisational setting.
Application: Using a pre-developed checklist to rate the existence or frequency of certain behaviours and events in a quantitative study. When the research questions are more exploratory (in a qualitative study), the researcher can take detailed notes.

• Interviews
Definition: Interviews are one-on-one sessions between an interviewer and an interviewee, typically for the purpose of answering a specific research question.
Application: Although a structured interview format can be used in a quantitative study, interviews are used most often in qualitative studies, where a semi-structured or unstructured interview can be used to gather information.

• Focus Groups
Definition: A method of data collection in which pre-selected groups of people have a facilitated discussion with the purpose of answering specific research questions.
Application: Usually used in a qualitative study.

• Archival Data
Definition: Archival data, also called documentary sources of information, is material that is readily available; the data is already captured in one form or another.
Application: In a quantitative study, the archival data would consist of numerical information like questionnaire responses, test scores, performance ratings, financial statistics or turnover rates. In a qualitative study, the archival data would include textual information like documents, transcripts of interviews, letters, annual reports, mission statements or other official documentation.
STEP 4 - ANALYSIS OF THE DATA
• Categorise data into common themes or patterns (qualitative analysis, e.g. content analysis)
META-ANALYSIS
Muchinsky et al. (2005)
CHAPTER 3
Criteria: Standards for decision-making
&
Coetzee & Schreuder (2010)
CHAPTER 4
Job Analysis and criterion development
CRITERIA
CRITERIA (Definition)
CONCEPTUAL VS ACTUAL CRITERIA
Conceptual criteria
Actual Criteria
The relationship between conceptual and actual criteria can be expressed in terms of
three concepts: deficiency, relevance, and contamination [see fig 3.1 p 48 of
Muchinsky et al. (2005) and fig. 4.7 p 129 of Coetzee & Schreuder (2010)]
Criterion deficiency is the degree to which the actual criteria fail to overlap the
conceptual criteria, that is how deficient the actual criteria are in representing the
conceptual ones.
Criterion relevance is the degree to which the actual criteria and conceptual criteria
coincide.
Criterion contamination is the part of the actual criteria that is unrelated to the
conceptual criteria.
Within contamination, also distinguish between error and bias:
error = the actual criterion measures something related to nothing at all (random variance);
bias = the actual criterion systematically measures something other than the conceptual criterion.
Criterion distortion
[Figure: the overlap between the conceptual criterion and the observed (actual) criterion represents criterion relevance; the part of the conceptual criterion not covered is criterion deficiency; the part of the observed criterion outside the conceptual criterion is criterion contamination.]
JOB ANALYSIS
A thorough JA documents the tasks that are performed on the job, the situation in which the work is performed (i.e. the tools used, the technologies employed to accomplish the end result, and verifiable characteristics of the job environment with which workers interact), and the human attributes needed to perform the work.
The components of a job
• Job – a group of positions
• Position – one employee
• Tasks – the basic unit of work
JOB ANALYSIS PROCEDURES
A clear understanding of JA requires knowledge of four job-related concepts:
The components of a job - secretary
• Position – the HR manager’s secretary
• Tasks – typing, diary management, call screening
Job analysis info: uses
• Job description
• Person specification
• Job evaluation (grading of jobs, e.g. Gr 3, Gr 4)
• Performance criteria
USES OF JOB ANALYSIS INFORMATION (JAI)
1. The most important use of JAI is the identification of competence and
competencies for a specific position.
Other Uses:
JAI can be used in vocational counselling, offering insight into the KSAOs needed to perform successfully in various occupations.
JA process (supported by data collection throughout):
1. Identify tasks
2. Task statements
3. Rate
4. Essential KSAOs
5. Measure
Outputs: job description & person specification
General data collection methods:
• Interviews
• Observation
• Job diaries
• Surveys
• Critical incidents
Specific methods
KSAOs
SELECTION DECISIONS
Standards of criteria
• Objective criteria
(production, sales, tenure or turnover, absenteeism, theft)
• Subjective criteria
(judgements made of an employee’s performance)
An SME (subject matter expert) is a person who has up-to-date experience with the job for a
long enough period to be familiar with all of its tasks.
Types of JA
• Task-oriented
• Person-oriented
COMPETENCY MODELING
Definition of Competency
“…sets of behaviours that are instrumental in the delivery of
desired results or outcomes”
“...an underlying characteristic of a person which results in
effective and/or superior performance of a job”
Job analysis and competency modelling differ in three main areas:
• The generalisability of the information across jobs within an organisation.
• The method by which the attributes are derived.
• The degree of acceptance within the organisation for the identified
attributes.
Competency modelling (cycle):
• Assemble team
• Key business processes
• Competencies
• Proficiency levels
• Apply weighting
• Job & competency profiles
• Apply
• Validate & calibrate results
JOB PERFORMANCE CRITERIA
Objective Criteria
• Production
• Sales
• Tenure or turnover
• Absenteeism
• Accidents
• Theft
Subjective criteria
• Judgements made of an employee’s performance
• Usually ratings or rankings
• Judgemental criteria are the most frequently used
Muchinsky et al. (2005)
CHAPTER 4
Predictors: Psychological assessments
&
Coetzee & Schreuder (2010)
CHAPTER 5
Psychological assessment: predictors of
human behaviour
Psychological assessment: predictor constructs
• Cognitive attributes
• Behavioural attributes
• Personality attributes
Reliability: test-retest
The same test is administered twice; the correlation between the two sets of scores is the reliability coefficient.
Reliability: alternate form
• Test A – Occasion A – construct: integrity
• Test B – Occasion B – construct: integrity
• The correlation between the two forms is the coefficient of equivalence
Reliability: internal consistency
The consistency among the parts of a single test (e.g. correlating parts of Test 1 with each other), expressed as a correlation coefficient.
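As a hedged illustration of these reliability estimates, the sketch below computes a test-retest correlation and an internal-consistency estimate (Cronbach's alpha) on made-up data; the scores and item responses are hypothetical and not taken from the prescribed texts.

# Hedged sketch: reliability estimates on hypothetical data.
import numpy as np

# Test-retest reliability: the same integrity test given on two occasions.
occasion_a = np.array([14, 18, 11, 20, 16, 13, 19, 15])
occasion_b = np.array([15, 17, 12, 19, 15, 14, 20, 14])
test_retest_r = np.corrcoef(occasion_a, occasion_b)[0, 1]
print("Test-retest reliability:", round(test_retest_r, 2))

# Internal consistency (Cronbach's alpha): rows = respondents, columns = items.
items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])
k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print("Cronbach's alpha:", round(alpha, 2))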
Reliability: measurement error
• Unsystematic error – sources include test construction, test administration, test-taker characteristics and test scoring.
• Systematic error – the test measures a construct different from the psychological attribute it was intended to measure.
Predictors: Psychological Assessments
Validity coefficient
The correlation between predictor scores and criterion scores.
CONCURRENT VS PREDICTIVE VALIDITY
Criterion (job performance data) can be collected in either a concurrent- or a
predictive-validity design. The major distinction is the time interval between
collection of the predictor and criterion data.
Concurrent validity
Present workers take the test and their scores are correlated with their job performance.
Predictive validity
The preferred method in personnel selection. The test is given to all applicants, but not
used as a selection instrument. Data on job performance are subsequently collected and the
original test scores are then compared with actual job performance.
Validity generalisation
Validity generalisation refers to a predictor’s validity spreading or generalising
to other jobs beyond the one in which it was validated.
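To show the basic idea behind validity generalisation (and meta-analysis more generally), the hedged sketch below pools validity coefficients from several hypothetical studies into a sample-size-weighted mean. The study results are invented for illustration, and the calculation is a bare-bones version that ignores corrections for unreliability and range restriction.

# Hedged sketch: a bare-bones sample-size-weighted mean validity coefficient
# across several hypothetical validation studies of the same predictor.
studies = [
    # (sample size N, observed validity r)
    (120, 0.31),
    (85,  0.24),
    (200, 0.35),
    (60,  0.19),
]

total_n = sum(n for n, _ in studies)
weighted_mean_r = sum(n * r for n, r in studies) / total_n

# Sample-size-weighted variance of the observed validities
weighted_var = sum(n * (r - weighted_mean_r) ** 2 for n, r in studies) / total_n

print("Weighted mean validity:", round(weighted_mean_r, 3))
print("Variance of validities:", round(weighted_var, 4))
# A small variance (after accounting for sampling error) would support
# generalising the predictor's validity to similar jobs.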
Ethical & professional practice
• Fair practice
• Fair use & application
• Test-taker needs & rights
• The predictor should match the purpose
• Consider moderating factors
ETHICAL STANDARDS IN TESTING
Information confidentiality
Confidentiality:
• access should be controlled
• tell applicant:
– the purpose of the test,
– how results will be used and
– who will see the results
Retention of records:
• How long (legal requirements)?
• Who has access?
• How secure?
• For what purpose?
Types of predictors (predictor constructs):
• Cognitive
• Personality
• Behaviour
PSYCHOLOGICAL TESTS
Types of tests
Test administration:
• speed vs. power tests
• individual vs. group tests
• paper-and-pencil vs. performance tests
Test content:
• intelligence tests
• mechanical aptitude tests
• sensory/motor ability tests
• personality and interest inventories
• integrity tests
• physical ability testing
Cognitive predictors:
• Structural
• Information processing
• Developmental
Personality predictors:
• Projective techniques
• Structured personality assessment
• Specific personality constructs
NON-TEST PREDICTORS:
INTERVIEWS
Interviews are the most popular selection method.
Degree of structure
Interviews (degree of structure):
• Structured
• Semi-structured
• Unstructured
• Situational
SITUATIONAL INTERVIEW (SI)
• A situational interview presents the applicant with a situation and asks for a description of the actions he or she would take in that situation.
• Past situations: focus on how applicants have handled situations in the past that required the skills and abilities necessary for effective job performance.
Behavioural assessment:
• Situational judgement tests
• Work samples
• Assessment centres
ASSESSMENT CENTRES
Purpose: To evaluate (usually) management personnel for promotion,
transfer or training
Assessment centres also do not show the racial or sex bias of some other predictors; they are regarded as culture-fair predictors of job performance.
WORK SAMPLES
• Work samples can be used, for example, for a mechanic: one of the tasks could be to drain oil from the gearbox
• Has a high face validity
• Blue collar jobs
• Effective when job includes working with things rather than
people
• Assesses what a person can do (≠ potential)
• Time consuming and costly to develop
• Validity of work sample 0,42 - 0,66
• Work samples are among the most highly valid
means of personnel selection.
• More applicable to practical jobs
SITUATIONAL EXERCISES
• White-collar counterpart of work samples; more applicable to professional/managerial positions.
• Best examples:
– In-basket (problem solving skills)
– Leaderless Group Discussion (inter-personal sensitivity)
LETTER OF RECOMMENDATION
Restricted range:
• Almost all letters are positive (employer might want
to get rid of poor performer)
• Applicants themselves choose who will write the letters, and pick people who will make them look good
REFERENCE CHECKING
• This refers to the gathering and use, at one or more stages of the personnel selection process, of information about applicants:
– job experience
– job performance
– his/her character
– physical and mental health
EMOTIONAL INTELLIGENCE (EI)
• EI can be regarded as the “soft” side of individual differences, such as moods, feelings and emotions.
ONLINE ASSESSMENT
A method for collecting data or administering instruments.
The benefit is the capability to deliver assessments direct to the
test taker.
The concerns are verification and psychometric quality:
• Online assessment involves administration without direct supervision
• Verification - did the person completing the test do so unaided?
• Psychometric quality - are the scores distorted because the person was not observed completing the questionnaire?
Assessment of predictors along 4 evaluative
standards
• Cost: consider indirect/hidden costs as well as direct costs.
• Applicability: how many jobs can the predictor be used for?
• Fairness: the predictor must not unfairly discriminate against any group.
• Validity: there is no single ideal predictor with 100% validity; a predictor that does well on one standard is not necessarily ideal on the others.
Muchinsky et al. (2005)
CHAPTER 5
Personnel decisions
CHAPTER 6
Fairness in personnel decisions
&
Coetzee & Schreuder (2010)
CHAPTER 6
Recruitment and Selection
SELECTION DECISIONS
RECRUITMENT
The personnel function of recruitment refers to the
process of attracting people to apply for a job
SOCIAL VALIDITY
RECRUITMENT
• Both parties are engaged in assessing the degree of fit with each other
• A mutual, realistic process is more likely to produce a better-fitting employee
VS
• Glamour advertising that over-emphasises good points and ignores bad points (leads to high turnover)
AFFIRMATIVE ACTION (AA)
AA is a social policy aimed at reducing the effects of prior
discrimination.
Four goals of AA
1. Correct present inequities
2. Compensate past inequities
3. Provide role models
4. Promote diversity
SELECTION
Personnel selection is the process of identifying from the pool of recruited
applicants those to whom a job will be offered (p. 137 in Muchinsky et al., 2005, or p. 185 in Coetzee & Schreuder, 2010).
IMPLICATIONS
•Selection implies that some applicants will get hired while others will not
•Some applicants are better suited than others for a particular job and the
purpose of selection is to identify the “better” applicants
Three factors determine the quality of the newly selected employees and the
degree to which they will have an impact on the organisation:
• validity of the predictor
• selection ratio
• base rate
Figure 2.2: Scatter plot for test scores and job-performance ratings based on 15 employees
PREDICTOR VALIDITY
[Figure: scatter plot of predictor scores (X) against criterion scores (Y) with r = 0,8 and a 50% success rate on the criterion.]
SELECTION RATIO
THE SELECTION RATIO (SR = n/N) REFERS TO THE PROPORTION OF APPLICANTS PLACED (n)
RELATIVE TO THOSE TESTED AND AVAILABLE FOR PLACEMENT (N)
Select only the best (small SR) OR leave out only the worst (large SR)
BASE RATE
THE PROPORTION OF PERSONS JUDGED SUCCESSFUL USING CURRENT
SELECTION PROCEDURE
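As a hedged illustration of these two quantities, the sketch below computes the selection ratio and the base rate for a hypothetical applicant pool and current workforce; the numbers are invented and the cut-off is arbitrary.

# Hedged sketch: selection ratio and base rate on hypothetical numbers.

# Selection ratio: SR = n / N
n_selected = 20          # applicants placed
n_applicants = 100       # applicants tested and available for placement
selection_ratio = n_selected / n_applicants
print("Selection ratio:", selection_ratio)          # 0.20 -> fairly selective

# Base rate: proportion of present employees judged successful
# under the current selection procedure.
current_employees = 50
judged_successful = 25
base_rate = judged_successful / current_employees
print("Base rate:", base_rate)                      # 0.50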
SELECTION RATIO
[Figure: with a smaller selection ratio (0,25 vs 0,75), the criterion scores (Y) of those selected on the predictor (X) will on average be higher.]
% OF PRESENT EMPLOYEES CONSIDERED SATISFACTORY (BASE RATE)
The largest gains in average criterion performance occur with a base rate of 0,5: there is a
greater gain in the actual number of NEW employees who will be successful in performing their job.
[Figure: expectancy charts for base rates of 25%, 50% and 75%.]
Fig 6.9: Varying base rates on a predictor with a given validity (r = 0.70). Each panel plots the criterion (success vs failure, cut-off CX) against the predictor, for base rates of 0.80, 0.50 and 0.20.
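A hedged simulation of what the figure depicts is sketched below: bivariate-normal predictor and criterion scores with r = 0.70 are generated, the criterion cut-off is moved to create base rates of 0.80, 0.50 and 0.20, and the proportion of selected applicants who succeed is compared with the base rate. All numbers are simulated for illustration only.

# Hedged sketch: simulate how the base rate changes the payoff of a valid predictor.
import numpy as np

rng = np.random.default_rng(0)
r = 0.70
n = 100_000

# Bivariate-normal predictor (x) and criterion (y) scores with correlation r.
x = rng.standard_normal(n)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)

selection_ratio = 0.30
x_cut = np.quantile(x, 1 - selection_ratio)   # select the top 30% on the predictor

for base_rate in (0.80, 0.50, 0.20):
    y_cut = np.quantile(y, 1 - base_rate)     # criterion cut-off giving this base rate
    selected = x >= x_cut
    success_among_selected = np.mean(y[selected] >= y_cut)
    print(f"BR = {base_rate:.2f}: success rate among selected = "
          f"{success_among_selected:.2f} (vs base rate {base_rate:.2f})")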
FAIRNESS IN PERSONNEL DECISIONS
DEFINITION OF FAIRNESS:
• If all the parties received equitable treatment
• If there was conformity with universally accepted standards
• Consistency was exhibited (Bendix, 1996)
DECISIONS REGARDING:
DISTINGUISH BETWEEN:
Bias:
• A statistical concept
• The impact of the psychometric properties of the test on test results
• When a test makes systematic errors in measurement or prediction

Fairness:
• A judgement based on values
• The way test results are interpreted and applied
• A value judgement regarding decisions or actions taken as a result of test scores
LEGAL FRAMEWORK
EMPLOYMENT EQUITY ACT 55 of 1998
Every person who can do a job (suitably qualified), should have a fair chance to
get the job
Chapter 3 Section 20 (3) & (4)
For purposes of this Act, a person may be suitably qualified for a job as a result
of any one of, or any combination of that person’s--
a. formal qualifications;
b. prior learning;
c. relevant experience; or
d. capacity to acquire, within a reasonable time, the ability to do the job.
(4) When determining whether a person is suitably qualified for a job, an
employer must--
a. review all the factors listed in subsection (3); and
b. determine whether that person has the ability to do the job in terms of
any one of, or any combination of those factors.
Courts rely on social opinion — test of reasonable man
Burden of proof lies with employer or organisation
Distinguish between direct and indirect discrimination
LEGAL GUIDELINES WITH REGARDS
TO FAIRNESS
§ Inherent requirements of the job
§ Operational requirements
§ Misconduct of employee
§ Incapacity of employee (cannot perform satisfactorily)
Thorndike:
If difference in average test performance exists, then judgement on test-fairness must
rest on inferences that are made from the test rather than comparison of mean scores.
Focus attention on fair use of the test scores, rather than on the scores themselves.
Take test item:
The usual temperature for baking a cake is about
a. 250° b. 300° c. 350° d. 400°
The percentage of right answers favours females over males.
BUT IF:
Criterion predicted: how palatable a cake one can bake
Poorer performance by males on the item = poorer performance in the kitchen
MODELS OF FAIRNESS
B. DIFFERENCES IN VALIDITY
Tests are predictive for all groups but to a varying degree
Possible cause = sample size
MODELS OF FAIRNESS cont.
Cleary:
If scores are significantly different for groups on the test but job performance is equal, the test is regarded as unfair to the lower-scoring group.
Differential validity
Figure 6.15: Valid predictor with adverse impact (performance criterion plotted against predictor score; the minority and nonminority ellipses look the same, they are just at different positions on the graph; a single reject/accept cut-off on the predictor).
• Adverse impact means that members of one group are selected at substantially greater rates than members of another group.
• This figure is an example of a predictor-criterion relationship that is legal.
• The validity for both groups is equivalent, but the minority group scores lower on the predictor and does poorer on the job.
• What may have happened in this instance is that the factors that depressed the test scores of the minority group may also have served to depress job performance scores.
• Adverse impact is defensible in this case.
• The reason is that the minority group does poorer on what the organisation considers essential for job success.
• More applicants of the nonminority group will then be selected.
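To make "selected at substantially greater rates" concrete, the hedged sketch below compares selection rates for two hypothetical groups that share a single predictor cut-off; the group labels, scores and cut-off are invented purely for illustration.

# Hedged sketch: per-group selection rates under a single predictor cut-off
# (hypothetical scores; an adverse-impact comparison, not a legal test).
import numpy as np

cut_off = 60
nonminority_scores = np.array([72, 65, 58, 80, 61, 55, 69, 74, 63, 59])
minority_scores    = np.array([54, 61, 49, 66, 52, 58, 45, 62, 50, 57])

rate_nonminority = np.mean(nonminority_scores >= cut_off)
rate_minority = np.mean(minority_scores >= cut_off)

print("Selection rate, nonminority:", rate_nonminority)
print("Selection rate, minority:   ", rate_minority)
print("Ratio (minority / nonminority):", round(rate_minority / rate_nonminority, 2))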
MODELS OF FAIRNESS
D. THORNDIKE’S QUOTA MODEL
Proposes that one go further than the regression-line model.
Requires that the success ratio equal the selection ratio: the proportion of each group that would be successful should be selected. (E.g. if 30% of a group can be successful but only 20% were selected, the requirement is not met.)
This ensures that a greater % of the minority group is selected than is likely under the previous models of test fairness.
Figure A: A situation that is “fair” if the Cleary model is used but “unfair” if the Thorndike model is used.
Figure B: Cut-off points needed to achieve fairness under the Thorndike model.
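The hedged sketch below checks Thorndike's requirement on hypothetical figures: for each group, the proportion selected is compared with the proportion that could be successful on the criterion. The group sizes and counts are invented for illustration.

# Hedged sketch: Thorndike's quota model on hypothetical figures.
# Requirement: for each group, selection ratio == proportion that would be successful.
groups = {
    # group: (number of applicants, number who could be successful, number selected)
    "minority":    (100, 30, 20),
    "nonminority": (200, 120, 120),
}

for name, (applicants, potential_successes, selected) in groups.items():
    success_ratio = potential_successes / applicants
    selection_ratio = selected / applicants
    verdict = "meets" if abs(success_ratio - selection_ratio) < 1e-9 else "violates"
    print(f"{name}: success ratio {success_ratio:.2f}, "
          f"selection ratio {selection_ratio:.2f} -> {verdict} the quota model")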
Differential validity
Figure 6.16: Equal validity, unequal predictor means (criterion: satisfactory vs unsatisfactory performance; expectancy of success on the job; separate predictor cut-offs shown at 50 and 75, with a cut-off for the nonminority group indicated; reject/accept regions on the predictor).
• In this instance we have unequal predictor means for the two groups.
• Because the minority group has a lower predictor mean than the nonminority group, members of the minority group would not be as likely to be selected, even though the probability of success on the job for both groups is essentially the same.
• A strategy that can be used here is to use separate cut-off scores for the different groups. This cut-off score is based on the predictor performance (or score), but the expectancy for success on the job stays the same.
• Even though the test score may mean different things for different groups, as long as the expectancy of success on the job is equal for the two groups, the use of separate cut-off scores is justified.
MODELS OF FAIRNESS
CONDITIONAL PROBABILITY MODEL
Goes one step further than Thorndike’s model
People should have an equal chance of selection, regardless of group membership
Basic principle: For both minority and majority groups whose members can achieve a
satisfactory criterion score, there should be the same probability of selection regardless
of group membership
If probability of being selected when successful = 0,8 for one group, should also be 0,8
for other group
Model will give greater preference for minority group than Thorndike’s model
Figure 4(b): Subpopulations with common regression line. Selection strategy fair according to the Constant Ratio Model.
Figure 4(c): Subpopulations with common regression line. Selection strategy fair according to the Conditional Probability Model.
MODELS OF FAIRNESS
EQUAL RISK MODEL
Considers the distribution of criterion scores about the regression line.
If you want 70% of selected candidates to succeed, set the predictor cut-off at a point which allows a 30% risk on criterion success (about 1/2 a standard deviation in a normal distribution).
The risk should be equal in all groups
Model will give greater preference for minority group than Thorndike’s model
EVALUATION OF THE MODELS
Different cut-offs : cut-offs should be set in such a way that risks are equal
Hire all applicants with at least 70% chance of success (30% risk)
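A hedged sketch of the equal-risk idea follows: given a predictor-criterion correlation and a criterion cut-off for "success", it finds, for standardised scores, the predictor cut-off at which an applicant's chance of success is at least 70% (a 30% risk). The validity value and cut-offs are assumptions chosen for illustration, and scores are treated as standard normal.

# Hedged sketch: equal risk model - find the predictor cut-off giving a
# 70% chance of success (30% risk) for standardised scores.
from scipy.stats import norm

r = 0.6          # assumed predictor-criterion correlation (validity)
y_cut = 0.0      # assumed criterion cut-off for "success" (standardised)
wanted_success = 0.70

# For a standardised predictor score x, the criterion is roughly normal with
# mean r*x and standard deviation sqrt(1 - r^2), so
# P(success | x) = 1 - Phi((y_cut - r*x) / sqrt(1 - r^2)).
# Solve P(success | x) = wanted_success for x:
resid_sd = (1 - r**2) ** 0.5
x_cut = (y_cut - resid_sd * norm.ppf(1 - wanted_success)) / r

prob_at_cut = 1 - norm.cdf((y_cut - r * x_cut) / resid_sd)
print("Predictor cut-off (z-score):", round(x_cut, 2))
print("Chance of success at the cut-off:", round(prob_at_cut, 2))
# Requiring this same 30% risk for every group is the model's fairness requirement.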
HOW TO ENSURE FAIRNESS
• Use job analysis
• Equity = accept that individuals differ in attributes; if the attributes are inherent job requirements, one can select fairly on those attributes
• Equality = minimising differences; all are treated the same (“opportunity for employment should be extended equally in society”)
• CLASH: attributes are not equally distributed!