
IOP3702

Personnel Psychology:
Organisational Entry
YOUR LECTURERS

Mr H von der Ohe


(012) 429 8283
AJH 3-103
vdoheh@unisa.ac.za

Ms Larissa Louw
(012) 429 8098
AJH 3-85
louwla@unisa.ac.za

2
AIM OF GROUP DISCUSSION

• To present an overview of the subject matter
• To consider the differences between the two prescribed books
• To establish a scientific approach to personnel psychology
• To advise students on how to prepare for the exam
• To obtain feedback from students

3
• Chapters 1-6 (excluding ch 3)
• If something is not discussed here, that does not necessarily mean that it is not applicable and/or important.
• Due to time constraints we cannot always cover everything, or there would not be enough time to discuss anything new.
• Today we will show you how everything in this module fits together.
• Tutorial Letter 101 and the prescribed book are your important tutorial matter.

• CONCENTRATE ON THE OUTCOMES AND ASSESSMENT CRITERIA WHEN PREPARING FOR THE EXAMINATION (Tut Letter 101) – we derive our examination questions from it. So, if you understand and can “answer” the assessment criteria of each outcome, you should not have any problems in the examination.
PREPARATION FOR THE EXAMINATION
Bear the following in mind when studying for this course:

• The format of the May/June and October/November examinations is the same.
• There will be no multiple-choice questions.
• The examination will consist of three sections:
  Section A: Three 10-mark questions, of which you have to do two (20 marks)
  Section B: Three 15-mark questions, of which you have to do two (30 marks)
  Section C: A compulsory 25-mark question (25 marks)
• The examination will count out of 75 marks for the two-hour paper. This should make it easier to give sufficient facts in the time available.
• Study the prescribed book as indicated in the study guide.
• Assignment questions are no indication of what to concentrate on.
• No statistical questions or questions that require mathematical calculations will be asked in the examination!

5
Muchinsky et al. (2005)
CHAPTER 1
The historical background of industrial
psychology

&
Coetzee & Schreuder (2010)
CHAPTER 1
Introduction to personnel psychology
PERSONNEL PSYCHOLOGY
DEFINITION

Personnel psychology is an applied discipline that focuses on individual differences in behaviour and job performance and on methods of measuring and predicting such performance.

• 3702 is about recruiting, selecting and settling employees into the workplace.
• 3706 is about managing the employee once he or she has been appointed.

7
Muchinsky et al. (2005)
CHAPTER 2
Research methods in industrial psychology

&
Coetzee & Schreuder (2010)
CHAPTER 2
Research methods in personnel psychology
THE RESEARCH PROCESS

9
• STEP 1: Statement of the research problem
• STEP 2: How do you design a study to answer the question?
• STEP 3: How do you measure the information you need, and collect the data to answer the research problem?
• STEP 4: How do you analyse the data, i.e. make sense of it? (See the sketch after this list.)
  – QUALITATIVE – use content analysis
  – QUANTITATIVE – use statistical analysis, i.e. descriptive statistics, correlation, regression, inferential statistics or meta-analysis
  (Explained from page 31 to page 38)
• STEP 5: How do you draw conclusions from the data?

STEP 1 - TYPES OF RESEARCH QUESTIONS

Predictive questions – used when you try to predict something, e.g. which employees will be productive.
Example: Can the results of a selection interview be used to successfully predict the performance of an applicant?

Evaluative questions – used to determine the quality or effectiveness of a programme, practice or procedure.
Example: How effective is the current interviewer training programme that is being used in the organisation?

Descriptive questions – give a picture of a state of events, e.g. levels of productivity.
Example: Is there a relationship between the type of interview conducted and the interviewer’s success in rating an applicant’s personality?

Exploratory questions – used when a relatively new field is investigated.
Example: What are the kinds of influencing techniques that candidates use in a selection interview?

Causal questions – the most difficult to answer: why do events occur as they do?
Example: Does feedback after a negative selection decision cause a decrease in the negative effect on job applicants?
STEP 2 - THE RESEARCH DESIGN

13
STEP 3 - DATA GATHERING TECHNIQUES

Surveys (questionnaires) – A survey is a set of questions that requires an individual to express an opinion, answer or provide a rating regarding a specific topic.
Application: Closed-ended questions can be asked in a structured questionnaire for a quantitative study. Open-ended questions can be asked in a semi-structured or unstructured questionnaire for a qualitative study.

Observation – The researcher observes (which entails watching and listening) employees in their organisational setting.
Application: Using a pre-developed checklist to rate the existence or frequency of certain behaviours and events in a quantitative study. When the research questions are more exploratory (in a qualitative study), the researcher can take detailed notes.

Interviews – Interviews are one-on-one sessions between an interviewer and an interviewee, typically for the purpose of answering a specific research question.
Application: Although a structured interview format can be used in a quantitative study, interviews are used most often in qualitative studies, where a semi-structured or unstructured interview can be used to gather information.

Focus groups – A method of data collection in which pre-selected groups of people have a facilitated discussion with the purpose of answering specific research questions.
Application: Usually used in a qualitative study.

Archival data – Archival data, also called documentary sources of information, is material that is readily available and already captured in one form or another.
Application: In a quantitative study, the archival data would consist of numerical information like questionnaire responses, test scores, performance ratings, financial statistics or turnover rates. In a qualitative study, the archival data would include textual information like documents, transcripts of interviews, letters, annual reports, mission statements or other official documentation.

14
STEP 4 - ANALYSIS OF THE DATA
Qualitative data are categorised into common themes or patterns; quantitative data are analysed statistically.

15
META-ANALYSIS

Meta-analysis is a statistical procedure designed to combine the results of many individual, independently conducted empirical studies into a single result or outcome.

The logic behind meta-analysis:
• You can arrive at a more accurate conclusion regarding a particular research topic if you combine or aggregate the results of many studies that address the topic, instead of relying on the findings of a single study.

16
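
To make the aggregation logic concrete, here is a bare-bones sketch (my own illustration, not the textbook's procedure): the validity coefficients reported by several hypothetical studies are combined into a single, sample-size-weighted estimate.

```python
import numpy as np

r = np.array([0.25, 0.31, 0.18, 0.40, 0.28])   # correlations reported by five studies
n = np.array([120, 85, 200, 60, 150])          # sample sizes of those studies

# Sample-size-weighted mean correlation: one aggregated result instead of five separate ones
weighted_mean_r = np.sum(n * r) / np.sum(n)
print("aggregated validity estimate:", round(weighted_mean_r, 3))
```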
Muchinsky et al. (2005)
CHAPTER 3
Criteria: Standards for decision-making

&
Coetzee & Schreuder (2010)
CHAPTER 4
Job Analysis and criterion development
CRITERIA

• Each time you evaluate someone or something, you use CRITERIA.
• We use different standards to determine what makes a good (or bad) movie, dinner, football game, etc.
• In the context of industrial psychology, criteria are important for defining the “goodness” of employees, programmes, units in organisations, and the organisation itself.
• Using different criteria = different results (disagreement).

18
CRITERIA (Definition)

Criteria can be defined as the evaluative standards by which objects, individuals, procedures or groups are assessed for the purpose of ascertaining their quality.

Criteria are the evaluative standards which are used as reference points in making judgements.

19
CONCEPTUAL VS ACTUAL CRITERIA

Conceptual criteria
A conceptual criterion is a theoretical construct, an abstract idea that can never actually be measured. It is an ideal set of factors that constitutes a successful person as conceived in the psychologist’s mind.

Actual criteria
Actual criteria serve as measures of the conceptual criteria that we would prefer to (but cannot) assess. The decision then becomes which variables to select as the actual criteria.

20
The relationship between conceptual and actual criteria can be expressed in terms of
three concepts: deficiency, relevance, and contamination [see fig 3.1 p 48 of
Muchinsky et al. (2005) and fig. 4.7 p 129 of Coetzee & Schreuder (2010)]
Criterion deficiency is the degree to which the actual criteria fail to overlap the
conceptual criteria, that is how deficient the actual criteria are in representing the
conceptual ones.
Criterion relevance is the degree to which the actual criteria and conceptual criteria
coincide.
Criterion contamination is the part of the actual criteria that is unrelated to the
conceptual criteria.
Also distinguish between the two parts of contamination: bias (the actual criterion systematically measures something other than the conceptual criterion) and error (measurement that is related to nothing at all).

21
Criterion distortion
[Figure: the conceptual criterion and the observed (actual) criterion overlap; the non-overlapping part of the conceptual criterion is criterion deficiency, the overlap is criterion relevance, and the non-overlapping part of the observed criterion is criterion contamination.]
JOB ANALYSIS

Job analysis (JA) can be defined as the collection of data describing observable (or otherwise verifiable) job behaviours performed by workers, including both what is accomplished and what technologies are employed to accomplish the end result, as well as verifiable characteristics of the job environment with which workers interact, including physical, mechanical, social and informational elements.

A thorough JA documents the tasks that are performed on the job, the situation in which the work is performed (e.g. the tools used) and the human attributes needed to perform the work.

These data are used to make various personnel decisions.

23
The components of a job

Job family – a group of jobs
Job – a group of positions
Position – one employee
Tasks – the basic units of work
JOB ANALYSIS PROCEDURES
A clear understanding of JA requires knowledge of four job-related concepts:

1. TASK – the basic units of work that are directed towards specific job objectives.
2. POSITION – a set of tasks performed by a single employee.
3. JOB – similar positions are grouped or aggregated to form a job.
4. JOB FAMILY – similar jobs are aggregated to form a job family.

25
The components of a job – secretary example

Job family – clerical positions, e.g. secretary and receptionist
Job – secretaries of the CEO, HR manager, Marketing manager and Financial manager
Position – the HR manager’s secretary
Tasks – typing, diary management, call screening
Job analysis info: uses
[Figure: job analysis information feeds the job description, the person specification, job evaluation (grading of jobs, e.g. Gr 3, Gr 4) and performance criteria.]
USES OF JOB ANALYSIS INFORMATION (JAI)
1. The most important use of JAI is the identification of competence and competencies for a specific position.
   • A competence is “WHAT” needs to be achieved.
   • A competency refers to “HOW” it is achieved.
2. JAI provides a basis for organising different positions into a job, and different jobs into a job family.
3. JAI contributes to determining the content of training needed to perform the job.
4. JAI provides one basis for determining the content of performance evaluation or appraisal.

Other uses:
JAI can be used in vocational counselling and offers insight into the KSAOs needed to perform successfully in various occupations.

28
JA process
[Figure: a five-step cycle – (1) identify tasks, (2) write task statements, (3) rate the tasks, (4) identify the essential KSAOs, (5) measure – supported by data collection, with a job description and person specification as outputs.]
General data collection methods
• Interviews
• Observation
• Job diaries
• Surveys
• Critical incidents

Specific methods
• Worker activities
• Tools & equipment
• KSAOs
SELECTION DECISIONS

33
Standards of criteria
• Objective criteria
(production, sales, tenure or turnover, absenteeism, theft)
• Subjective criteria
(judgements of an employee’s performance)

Sources of Job Information

There are three major sources of job information:
1. The job incumbent
2. The supervisor
3. The job analyst

Note that each source is a subject matter expert (SME). An SME is a person who has up-to-date experience with the job for a long enough period to be familiar with all of its tasks.

34
Types of JA: task-oriented and person-oriented.
COMPETENCY MODELLING

Definitions of competency:
“…sets of behaviours that are instrumental in the delivery of desired results or outcomes”
“…an underlying characteristic of a person which results in effective and/or superior performance of a job”

Job analysis and competency modelling differ in three main areas:
• The generalisability of the information across jobs within an organisation.
• The method by which the attributes are derived.
• The degree of acceptance within the organisation of the identified attributes.
38
Competency modelling
[Figure: the competency modelling cycle – assemble a team, identify key business processes, derive competencies and proficiency levels, weight and apply the weighting, build job & competency profiles, and validate & calibrate the results.]
JOB PERFORMANCE CRITERIA

Objective Criteria
• Production
• Sales
• Tenure or turnover
• Absenteeism
• Accidents
• Theft

Subjective criteria
• Judgements made of an employee’s performance
• Usually ratings or rankings
• Judgemental criteria are the most frequently used criteria

41
Muchinsky et al. (2005)
CHAPTER 4
Predictors: Psychological assessments

&
Coetzee & Schreuder (2010)
CHAPTER 5
Psychological assessment: predictors of
human behaviour
Psychological assessment: predictor constructs – cognitive attributes, personality attributes and behavioural attributes.
Reliability: test-retest – the correlation between scores on the same test administered on two occasions is the reliability coefficient.
Reliability: alternate form – Test A (occasion A) and Test B (occasion B) measure the same construct (e.g. integrity); the correlation between them is the coefficient of equivalence.
Reliability: internal consistency – the correlation coefficient between parts of the same test (how well the items hang together).
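
A minimal sketch (hypothetical item and test scores, my own illustration) of two of the reliability estimates mentioned here: test-retest reliability as a simple correlation, and internal consistency as Cronbach's alpha.

```python
import numpy as np

# Test-retest: the same test administered on two occasions
occasion_1 = np.array([14, 18, 11, 20, 16, 13, 19, 15], dtype=float)
occasion_2 = np.array([15, 17, 12, 19, 16, 14, 20, 14], dtype=float)
print("test-retest r =", round(np.corrcoef(occasion_1, occasion_2)[0, 1], 2))

# Internal consistency: rows = test takers, columns = items of one test
items = np.array([[3, 4, 3, 4],
                  [2, 2, 3, 2],
                  [4, 5, 4, 4],
                  [3, 3, 2, 3],
                  [5, 4, 5, 5]], dtype=float)
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)        # variance of each item
total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print("Cronbach's alpha =", round(alpha, 2))
```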
Reliability: measurement error

Unsystematic sources of error: test construction, test administration, test-taker characteristics, test scoring.
Systematic error: the test measures a construct different from the psychological attribute it was intended to measure.
Predictors: Psychological Assessments

A predictor is any variable used to forecast a criterion.

Assessing the quality of predictors:
• Reliability (consistency of a measure)
  – Test-retest reliability
  – Equivalent-form reliability
  – Internal consistency reliability
  – Inter-rater reliability
• Validity (accuracy and precision)
  – Criterion-related validity: concurrent and predictive
  – Content validity: the degree to which the predictor covers a representative sample of the behaviour being assessed
  – Construct validity: the most theoretical and complex, e.g. IQ as a construct; the predictor is used to measure the construct

49
Validity coefficient – the correlation between predictor scores and criterion scores.
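
For illustration only (invented scores), the sketch below computes a validity coefficient as the correlation between applicants' predictor scores and their later criterion scores, and the proportion of criterion variance the predictor accounts for.

```python
import numpy as np

predictor_scores = np.array([52, 60, 45, 70, 66, 58, 49, 75, 63, 55], dtype=float)
criterion_scores = np.array([2.8, 3.4, 2.5, 4.1, 3.8, 3.2, 2.7, 4.4, 3.6, 3.0])

validity = np.corrcoef(predictor_scores, criterion_scores)[0, 1]
print("validity coefficient r =", round(validity, 2))
print("criterion variance accounted for (r squared) =", round(validity ** 2, 2))
```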
CONCURRENT VS PREDICTIVE VALIDITY
Criterion data (job performance data) can be collected in either a concurrent or a predictive validity design. The major distinction is the time interval between collection of the predictor and criterion data.

Concurrent validity
Present workers take a test and their scores are correlated with their job performance. There are several problems with this type of validity.

Predictive validity
The preferred method used in personnel selection. The test is given to all applicants, but not used as a selection instrument. Data on job performance are subsequently collected and the original test scores are then compared with the actual job performance.

Validity generalisation
Validity generalisation refers to a predictor’s validity spreading or generalising to other jobs beyond the one in which it was validated.

51
Ethical & professional practice
• Fair practice
• Fair use & application
• Test-taker needs & rights
• Match the predictor to the purpose
• Consider moderating factors
ETHICAL STANDARDS IN TESTING

To prevent misuse of tests:


• American Psychological Association (APA) guidelines
• Society for Industrial & Organisational Psychology (SIOPSA):
Guidelines for the Validation and Use of Personnel Selection Procedures
• SHL Group:
Guidelines for best practice in occupational assessment in South Africa

Main responsibility lies with the industrial psychologist

SA has some of the strictest rules in the world concerning:


• Who can buy
• How advertised (no free samples)
• Who is in control (industrial psychologist)
• Who can administer
53
Assessment practices standards
• The test taker’s right to privacy
• Confidentiality of information
• The test taker’s written informed consent
OTHER ETHICAL ISSUES
Privacy:
• only test what is needed for the job, e.g. do not use a personality test for a job where personality is irrelevant to performance (mechanics)
• do not reveal more information than the person wants revealed

Confidentiality:
• access should be controlled
• tell the applicant:
  – the purpose of the test,
  – how the results will be used, and
  – who will see the results

Retention of records:
• How long (legal requirements)?
• Who has access?
• How secure?
• For what purpose?

55
Types of predictors – predictor constructs: cognitive, personality and behavioural.
PSYCHOLOGICAL TESTS

Psychological tests have been the most frequently used predictors in industrial psychology.

Types of tests by administration:
• speed vs. power tests
• individual vs. group tests
• paper-and-pencil vs. performance tests

Types of tests by content:
• intelligence tests
• mechanical aptitude tests
• sensory/motor ability tests
• personality and interest inventories
• integrity tests
• testing of physical abilities

57
Cognitive predictors

Structural

Information
processing

Developmental
Personality predictors

Projective
techniques

Structured
personality
assessment
Specific
personality
constructs
NON-TEST PREDICTORS: INTERVIEWS
Interviews are the most popular selection method.

Degree of structure
Interviews can be classified along a continuum of structure (structured, semi-structured or unstructured interviews).

60
Interviews: structured, semi-structured, unstructured and situational.
SITUATIONAL INTERVIEW (SI)
• A situational interview presents the applicant with a situation and asks for a description of the actions he or she would take in that situation.

• An SI can focus on:
  – Hypothetical, future-orientated situations: applicants are asked how they would respond if they were confronted with typical problems.
  – Past situations: focus on how applicants have handled situations in the past that required the skills and abilities necessary for effective job performance.

• The candidate’s responses are then scored on a scale.

62
Behavioural assessment: situational judgement tests, work samples and assessment centres.
ASSESSMENT CENTRES
Purpose: to evaluate (usually) management personnel for promotion, transfer or training.

Assessment centres provide a group-orientated, standardised series of activities that provide a basis for judgements or predictions of human behaviour believed or known to be relevant to work performed in an organisation.

The validity of assessment centres is determined by comparing judgements of performance made in the assessment centre with some criterion of performance back on the job, usually rated job performance, promotions or salary. The validity of assessment centres is 0,47 - 0,64.

Assessment centres also do not have the racial or sex bias of other predictors – they are culture-fair predictors of job performance.

They are one of the top predictors of job performance.

64
WORK SAMPLES
• Work samples can be used, e.g. for a mechanic, where one of the tasks could be draining the oil from a gearbox.
• They have high face validity.
• Mostly used for blue-collar jobs.
• Effective when the job involves working with things rather than people.
• They assess what a person can do, not his or her potential.
• Time-consuming and costly to develop.
• Validity of work samples: 0,42 - 0,66.
• Work samples are among the most valid means of personnel selection.
• More applicable to practical jobs.
65
SITUATIONAL EXERCISES
• The white-collar counterpart of work samples, more applicable to professional/managerial positions.

• Best examples:
  – In-basket (problem-solving skills)
  – Leaderless group discussion (interpersonal sensitivity)

• The criterion of success for managers is more difficult to define and therefore more difficult to predict = lower validity.

Work samples = a replica of the job
Situational exercises = mirror only part of the job
Validity: 0,20 - 0,35
BIOGRAPHICAL INFORMATION
Biographical information is frequently recorded on an application blank. Problems:
– Equal access (e.g. male vs. female sports)
– Invasiveness (e.g. religion, dating behaviour)
– “Fakeability” (distorting responses to create a more socially desirable impression)
Biographical data are successful in predicting earnings, absenteeism and productivity, with a validity coefficient of 0,79 for predicting employee turnover.

67
LETTER OF RECOMMENDATION

The least valid of all predictors (average validity of 0,13).

Restricted range:
• Almost all letters are positive (the employer might want to get rid of a poor performer)
• Applicants themselves choose who will write the letters – they pick people who will make them look good
REFERENCE CHECKING
• This refers to the gathering and use, at one or more stages of the personnel selection process, of information about applicants.

• A popular way of checking references is by telephone. Candidates are required to furnish the names and contact numbers of previous employers and other people who may be contacted for this purpose.

• The following information may be gathered:
  – job experience
  – job performance
  – the applicant’s character
  – physical and mental health

69
EMOTIONAL INTELLIGENCE (EI)
• EI can be regarded as the “soft” side of individual differences – moods, feelings and emotions.

• The relevance of these constructs to the world of work was denied for many years, but we are now coming to realise that moods, feelings and emotions play a significant role in the workplace.

Five dimensions of the construct of EI:
1. Knowing one’s emotions
2. Managing one’s emotions
3. Motivating oneself
4. Recognising emotions in others
5. Handling relationships

72
ONLINE ASSESSMENT
A method for collecting data or administering instruments. The benefit is the capability to deliver assessments directly to the test taker.
The concerns are verification and psychometric quality:
• Administration takes place without direct supervision.
• Verification – did the person completing the test do so unaided?
• Psychometric quality – are the scores distorted because the person was not observed completing the questionnaire?

73
Assessment of predictors along four evaluative standards
• Cost: also look at indirect/hidden costs.
• Applicability: how many jobs does the predictor cover?
• Fairness: the predictor must not unfairly discriminate against a group.
• Validity: no single predictor is ideal with 100% validity; a predictor that is ideal on one standard may not be ideal on the others.

Getting the right test for the situation is a trade-off between the four different standards.

74
Muchinsky et al. (2005)
CHAPTER 5
Personnel decisions
CHAPTER 6
Fairness in personnel decisions

&
Coetzee & Schreuder (2010)
CHAPTER 6
Recruitment and Selection
SELECTION DECISIONS

76
RECRUITMENT
The personnel function of recruitment refers to the process of attracting people to apply for a job.

SOCIAL VALIDITY

When job applicants view the selection process of an organisation as fair, of high quality and as acceptable, one can say that the social validity of that organisation’s selection procedure is high.

77
RECRUITMENT

• Both parties are engaged in assessing the degree of fit with each other.
• Recruitment is a mutual process, especially if you want a better employee.

• Affirmative action: recruit previously disadvantaged people, who might not otherwise seek employment
  – place the advert so that the designated group will see and read it
  – visit schools/universities of designated groups
  – you cannot state that you specifically want a male or a female

VS

• Glamour advertising that over-emphasises good points and ignores bad points (leads to high turnover).
AFFIRMATIVE ACTION (AA)
AA is a social policy aimed at reducing the effects of prior discrimination.

Four goals of AA:
1. Correct present inequities
2. Compensate for past inequities
3. Provide role models
4. Promote diversity

Has AA been effective in meeting national goals of prosperity in employment?
• Recruitment
• Psychological and behavioural effects
79
Figure 5.2 Model of personnel
decisions in organisations

80
SELECTION
Personnel selection is the process of identifying, from the pool of recruited applicants, those to whom a job will be offered [p. 137 in Muchinsky et al. (2005) or p. 185 in Coetzee & Schreuder (2010)].

IMPLICATIONS
• Selection implies that some applicants will be hired while others will not.
• Some applicants are better suited than others for a particular job, and the purpose of selection is to identify the “better” applicants.

Three factors determine the quality of the newly selected employees and the degree to which they will have an impact on the organisation:
• the validity of the predictor
• the selection ratio
• the base rate
81
82
Figure 2.2 Scatter plot for test scores
and job-performance ratings
based on 15 employees

83
PREDICTOR VALIDITY
[Figure: scatter plot of predictor scores (X) against criterion scores (Y) with a validity coefficient of r = 0,8.]

84
SELECTION RATIO
THE SELECTION RATIO REFERS TO THE PROPORTION OF APPLICANTS WHO ARE PLACED RELATIVE TO THOSE TESTED AND AVAILABLE FOR PLACEMENT (SR = n/N)

Select only the best (small SR) OR leave out only the worst (large SR)

BASE RATE
THE PROPORTION OF PERSONS JUDGED SUCCESSFUL USING THE CURRENT SELECTION PROCEDURE

85
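
The arithmetic behind these two definitions is simple; the sketch below uses invented head counts.

```python
# Selection ratio: SR = n / N
applicants_assessed = 200        # N: applicants tested and available for placement
applicants_placed = 25           # n: applicants actually placed
selection_ratio = applicants_placed / applicants_assessed
print("selection ratio =", selection_ratio)      # small SR -> only the best are selected

# Base rate: proportion judged successful under the current selection procedure
current_employees = 80
judged_successful = 48
base_rate = judged_successful / current_employees
print("base rate =", base_rate)
```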
SELECTION RATIO
[Figure: with a smaller selection ratio (0,25 rather than 0,75), the average criterion scores (Y) of those selected will be higher.]

86
SELECTION RATIO

87
% OF PRESENT EMPLOYEES CONSIDERED SATISFACTORY
[Figure: curves for base rates of 25%, 50% and 75%.]
The largest gains in average criterion performance occur with a base rate of 0,5: the greater gain is in the actual number of NEW employees who will be successful in performing their job.

88
Fig 6.9 Varying base rates on a predictor with a given validity
[Figure: three predictor-criterion plots, each with a validity of r = 0,70, at base rates (BR) of 0,80, 0,50 and 0,20, showing the success/failure split on the criterion and the cut-off on the predictor.]
For a new predictor, a base rate of 0,5 gives the highest utility.
90
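
A rough numerical sketch of the idea in Fig 6.9 (my own approximation, assuming the predictor and criterion are bivariate normal with r = 0,70 and a fixed selection ratio): the gain in the proportion of successful hires over the base rate tends to be largest around a base rate of 0,5.

```python
from scipy.stats import norm, multivariate_normal

def success_rate_among_selected(r, selection_ratio, base_rate):
    x_cut = norm.ppf(1 - selection_ratio)   # predictor cut-off (standard score)
    y_cut = norm.ppf(1 - base_rate)         # criterion standard for "success"
    biv = multivariate_normal(mean=[0, 0], cov=[[1, r], [r, 1]])
    # P(X > x_cut and Y > y_cut) by inclusion-exclusion on the bivariate CDF
    p_both = 1 - norm.cdf(x_cut) - norm.cdf(y_cut) + biv.cdf([x_cut, y_cut])
    return p_both / selection_ratio         # P(successful | selected)

for br in (0.20, 0.50, 0.80):
    rate = success_rate_among_selected(r=0.70, selection_ratio=0.30, base_rate=br)
    print(f"base rate {br:.2f}: successful among selected = {rate:.2f}, gain = {rate - br:.2f}")
```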
FAIRNESS IN PERSONNEL DECISIONS
DEFINITION OF FAIRNESS:
• all the parties received equitable treatment
• there was conformity with universally accepted standards
• consistency was exhibited (Bendix, 1996)

FAIRNESS = SOCIAL CONCEPT
• it is a social concept – not a psychometric or statistical concept
• it has no single meaning ∴ no single statistical definition
• it is relevant to all personnel decisions and must be applied to all of them
• it is not just about selection, but also about other personnel decisions

DECISIONS REGARDING:
Selection, discrimination, training, promotion, disciplinary steps, dismissals, benefits, re-employment

91
DISTINGUISH BETWEEN:

Bias
§ A statistical concept
§ The impact of the psychometric properties of the test on the test results
§ When a test makes systematic errors in measurement or prediction

Fairness
§ A judgement based on values
§ The way test results are interpreted and applied
§ A value judgement regarding decisions or actions taken as a result of test scores

SUBSTANTIVE FAIRNESS = THE DECISION OR ACTION

PROCEDURAL FAIRNESS = THE WAY IT IS DONE

92
LEGAL FRAMEWORK
EMPLOYMENT EQUITY ACT 55 OF 1998
Every person who can do a job (who is suitably qualified) should have a fair chance to get the job.
Chapter 3 Section 20 (3) & (4)
(3) For purposes of this Act, a person may be suitably qualified for a job as a result of any one of, or any combination of, that person’s--
a. formal qualifications;
b. prior learning;
c. relevant experience; or
d. capacity to acquire, within a reasonable time, the ability to do the job.
(4) When determining whether a person is suitably qualified for a job, an employer must--
a. review all the factors listed in subsection (3); and
b. determine whether that person has the ability to do the job in terms of any one of, or any combination of, those factors.
Courts rely on social opinion – the test of the reasonable man.
The burden of proof lies with the employer or organisation.
Distinguish between direct and indirect discrimination.

94
LEGAL GUIDELINES WITH REGARD TO FAIRNESS
§ Inherent requirements of the job
§ Operational requirements
§ Misconduct of the employee
§ Incapacity of the employee (cannot perform satisfactorily)

• Rely on substantive and procedural fairness
• Involve stakeholders in the development of policies and in decision-making
• Use assessment tools that are valid
• Ensure that candidates are treated equally
95
MODELS OF FAIRNESS
A. SYSTEMATIC MEAN DIFFERENCES (C&S, p 219)
Differences in group means exist.
Causes: socio-economic background & characteristics.

Thorndike:
If a difference in average test performance exists, then the judgement on test fairness must rest on the inferences that are made from the test rather than on a comparison of mean scores. Focus attention on the fair use of the test scores, rather than on the scores themselves.

Take the test item:
The usual temperature for baking a cake is about
a. 250°  b. 300°  c. 350°  d. 400°
The percentage of right answers favours females over males.
BUT IF:
Criterion predicted: how palatable a cake one can bake
Poorer performance of males on the item = poorer performance in the kitchen.

If the criterion predicted is range of general information:
The item is biased against, and unfair to, males.

96
MODELS OF FAIRNESS
B. DIFFERENCES IN VALIDITY
Tests are predictive for all groups but to a varying degree
Possible cause = sample size

97
MODELS OF FAIRNESS cont.

DIFFERENCES IN REGRESSION LINES

Cleary:
Scores may differ significantly between groups on the test while job performance is equal.
A test is only fair if the regression line is the same for all groups.
Recommendation – use different regression lines where the groups’ lines differ.
A common regression line will over-predict or under-predict the performance of some groups.

98
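
A hypothetical illustration (invented scores) of the regression-line comparison: fit a line per group and check whether a common line over- or under-predicts one group's performance at the same predictor score.

```python
import numpy as np

# Group A and group B: predictor (test) scores and criterion (performance) scores
test_a = np.array([50, 55, 60, 65, 70, 75, 80], dtype=float)
perf_a = np.array([3.0, 3.2, 3.5, 3.7, 4.0, 4.2, 4.5])
test_b = np.array([40, 45, 50, 55, 60, 65, 70], dtype=float)
perf_b = np.array([3.0, 3.2, 3.5, 3.7, 4.0, 4.2, 4.5])   # same performance at lower scores

slope_a, icept_a = np.polyfit(test_a, perf_a, 1)
slope_b, icept_b = np.polyfit(test_b, perf_b, 1)
slope_c, icept_c = np.polyfit(np.concatenate([test_a, test_b]),
                              np.concatenate([perf_a, perf_b]), 1)

score = 60.0
print("group A line predicts:", round(slope_a * score + icept_a, 2))
print("group B line predicts:", round(slope_b * score + icept_b, 2))
print("common line predicts: ", round(slope_c * score + icept_c, 2))
# If the group lines differ, the common line over-predicts for one group and
# under-predicts for the other, which is what the Cleary model is concerned with.
```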
Differential validity

• Figure 6.15 Valid predictor with adverse impact
[Figure: predictor score (reject/accept cut-off) against the performance criterion; the ellipses for the minority and nonminority groups look the same, they are just at different positions on the graph.]

• Adverse impact means that members of one group are selected at substantially greater rates than members of another group.
• This figure is an example of a predictor-criterion relationship that is legal.
• The validity for both groups is equivalent, but the minority group scores lower on the predictor and does poorer on the job.
• What may have happened in this instance is that the factors that depressed the test scores of the minority group may also have served to depress the job performance scores.
• Adverse impact is defensible in this case, because the minority do poorer on what the organisation considers essential for job success.
• More applicants of the nonminority group will then be selected.
MODELS OF FAIRNESS
D. THORNDIKE’S QUOTA MODEL
Proposes that one goes further than the regression line model.
Requires that the success ratio equal the selection ratio: the proportion of each group that would be successful should be selected. (If 30% can be successful, but only 20% were selected, the requirement is not met.)
This ensures that a greater % of the minority group is selected than is likely under the previous models of test fairness.

Figure A: A situation that is “fair” if the Cleary model is used but “unfair” if the Thorndike model is used.
Figure B: Cut-off points needed to achieve fairness under the Thorndike model.

100
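
A simplified sketch (hypothetical score distributions) of the quota idea: from each group, select the same proportion as the proportion of that group expected to be successful, taking the top scorers on the predictor within each group.

```python
import numpy as np

rng = np.random.default_rng(0)

def quota_selection(predictor, expected_success_rate):
    """Select the top `expected_success_rate` share of a group on the predictor."""
    cutoff = np.quantile(predictor, 1 - expected_success_rate)
    return predictor >= cutoff

group_a_scores = rng.normal(60, 10, size=100)   # higher-scoring group
group_b_scores = rng.normal(50, 10, size=100)   # lower-scoring group

# Suppose 30% of group A and 20% of group B could perform successfully on the job.
selected_a = quota_selection(group_a_scores, 0.30)
selected_b = quota_selection(group_b_scores, 0.20)
print("selected from A:", selected_a.sum(), "of", group_a_scores.size)
print("selected from B:", selected_b.sum(), "of", group_b_scores.size)
```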
Differential validity

• Figure 6.16: Equal validity, unequal predictor means
[Figure: predictor scores (reject/accept, with separate cut-offs of 50 and 75, the higher one for the nonminority group) against the criterion (satisfactory/unsatisfactory), with the expectancy of success on the job indicated.]

• In this instance we have unequal predictor means for the two groups.
• Because the minority group has a lower predictor mean than the nonminority group, members of the minority group would not be as likely to be selected, even though the probability of success on the job for both groups is essentially the same.
• A strategy that can be used here is to use separate cut-off scores for the different groups. The cut-off score is based on the predictor performance (or score), but the expectancy of success on the job stays the same.
• Even though the test score may mean different things for different groups, as long as the expectancy of success on the job is equal for the two groups, the use of separate cut-off scores is justified.
MODELS OF FAIRNESS
CONDITIONAL PROBABILITY MODEL
Goes one step further than Thorndike’s model.
People should have an equal chance of selection, regardless of group membership.
Basic principle: for members of both the minority and the majority group who can achieve a satisfactory criterion score, there should be the same probability of selection, regardless of group membership.
If the probability of being selected when successful is 0,8 for one group, it should also be 0,8 for the other group.
This model will give greater preference to the minority group than Thorndike’s model.

Figure 4(b): Subpopulations with a common regression line. Selection strategy fair according to the Constant Ratio Model.
Figure 4(c): Subpopulations with a common regression line. Selection strategy fair according to the Conditional Probability Model.

102
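
A small check (invented records) of the principle stated above: among the people who would be successful on the criterion, the probability of having been selected should be the same for each group.

```python
import numpy as np

def p_selected_given_successful(selected, successful):
    successful = np.asarray(successful, dtype=bool)
    selected = np.asarray(selected, dtype=bool)
    return selected[successful].mean()

# 1 = selected / successful, 0 = not (toy records for two groups)
group_a = {"selected":   [1, 1, 0, 1, 0, 1, 0, 0, 1, 0],
           "successful": [1, 1, 1, 0, 1, 1, 0, 0, 1, 0]}
group_b = {"selected":   [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],
           "successful": [1, 1, 0, 1, 0, 0, 1, 1, 0, 0]}

p_a = p_selected_given_successful(group_a["selected"], group_a["successful"])
p_b = p_selected_given_successful(group_b["selected"], group_b["successful"])
print("P(selected | successful), group A:", round(p_a, 2))
print("P(selected | successful), group B:", round(p_b, 2))
# The model regards the selection strategy as fair when these two probabilities are equal.
```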
MODELS OF FAIRNESS
EQUAL RISK MODEL
Considers the distribution of criterion scores about the regression line.
If you want 70% of the selected candidates to succeed, set the predictor cut-off at a point which allows a 30% risk on criterion success (about half a standard deviation in a normal distribution).
The risk should be equal in all groups: different cut-offs should be set in such a way that the risks are equal – hire all applicants with at least a 70% chance of success (a 30% risk).
This model will give greater preference to the minority group than Thorndike’s model.

EVALUATION OF THE MODELS

• There is no agreement on which is the correct / best model.
• Use a combination of models.
• The South African Society for Industrial Psychology recommends the regression line model.

103
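
A rough standard-score sketch of the equal risk cut-off described above (my own formulation, not a worked example from the prescribed books): given a predictor-criterion correlation, find the predictor cut-off at which an applicant has at least a 70% chance of reaching the criterion standard for success; the same 30% risk can then be applied in every group.

```python
from scipy.stats import norm

def equal_risk_cutoff(r, criterion_cut_z, p_success=0.70):
    # Regression of criterion on predictor (both standardised): E[y|x] = r*x,
    # residual SD = sqrt(1 - r**2).  Require P(y > criterion_cut_z | x) >= p_success.
    resid_sd = (1 - r ** 2) ** 0.5
    return (criterion_cut_z + norm.ppf(p_success) * resid_sd) / r

# e.g. success = criterion score above the mean (z = 0); groups may differ in validity
for group_r in (0.6, 0.4):
    cut = equal_risk_cutoff(group_r, criterion_cut_z=0.0)
    print(f"r = {group_r}: hire applicants with predictor z >= {cut:.2f} (30% risk)")
```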
HOW TO ENSURE FAIRNESS
* Use job analysis
* Avoid criteria that require prior knowledge
* The testing situation should be the same for everyone
* Selection procedures should be job related
* Fair personnel policies:
  1. provide information on job relatedness
  2. provide feedback
  3. establish good rapport
* The fairness perceived by employees is influenced by:
  1. equity
  2. equality
  3. special needs
* Establish a model of fairness
* Consult with all stakeholders

104
• Equity = accept that individuals differ in attributes; if the attributes are inherent job requirements, one can select fairly on those attributes.
• Equality = minimising differences; all are treated the same: “opportunity for employment should be extended equally in society”.
• CLASH: attributes are not equally distributed!

• Psychology – focuses on the equity perspective.
• Law – is directed more towards the equality perspective.
