Reliability 1-1
DEFINITION:
Reliability is the degree of consistency and accuracy with which an instrument measures the attribute it is designed to measure.
Reliability is also defined as the ability of an instrument to produce reproducible results.
ATTRIBUTES OF RELIABILITY
STABILITY
INTERNAL CONSISTENCY
EQUIVALENCE
MEASURING OF RELIABILITY
Stability: test-retest method, repeated observation method
Internal consistency: split-half method
Equivalence: inter-rater reliability, alternate form method
TEST-RETEST METHOD
Karl Pearson's correlation coefficient formula:

r = [N ΣXY − (ΣX)(ΣY)] / [ √(N ΣX² − (ΣX)²) × √(N ΣY² − (ΣY)²) ]
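A minimal Python sketch of the test-retest computation using the raw-score Pearson formula above; the two score lists are made-up illustration data, not from the source.

```python
# Pearson's r for test-retest reliability, computed term by term from the
# raw-score formula: r = [N*SumXY - SumX*SumY] / [sqrt(N*SumX^2 - (SumX)^2) * sqrt(N*SumY^2 - (SumY)^2)]

def pearson_r(x, y):
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(a * b for a, b in zip(x, y))
    sum_x2 = sum(a * a for a in x)
    sum_y2 = sum(b * b for b in y)
    num = n * sum_xy - sum_x * sum_y
    den = ((n * sum_x2 - sum_x ** 2) ** 0.5) * ((n * sum_y2 - sum_y ** 2) ** 0.5)
    return num / den

# Hypothetical scores from a first and second administration of the same test
test1 = [10, 12, 15, 9, 14]
test2 = [11, 13, 14, 10, 15]
print(round(pearson_r(test1, test2), 3))  # a value near 1 indicates stable scores
```

The closer r is to 1, the more stable the instrument's scores are across the two administrations.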
Formula 2:

r = Σ(x − x̄)(y − ȳ) / √[ Σ(x − x̄)² × Σ(y − ȳ)² ]

Entire test reliability (Spearman-Brown): r₁ = 2r / (1 + r)

where r = the correlation coefficient computed on the split halves with Formula 2, and r₁ = the estimated reliability of the entire test.
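A short Python sketch of split-half reliability with the Spearman-Brown correction: the test is split into odd- and even-numbered items, the two half-scores are correlated with Formula 2, and the result is stepped up with r₁ = 2r/(1 + r). The item-score matrix is hypothetical.

```python
# Split-half reliability with the Spearman-Brown correction.

def pearson_r(x, y):
    # Formula 2: deviation-score form of Pearson's correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def split_half_reliability(item_scores):
    # item_scores: one row per examinee, one column per item (hypothetical layout)
    odd = [sum(row[0::2]) for row in item_scores]   # half-test score on odd items
    even = [sum(row[1::2]) for row in item_scores]  # half-test score on even items
    r = pearson_r(odd, even)        # correlation between the two halves
    return 2 * r / (1 + r)          # Spearman-Brown estimate for the entire test

scores = [
    [1, 1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 0, 1, 1, 1, 1],
]
print(round(split_half_reliability(scores), 3))
```

Because each half contains only half the items, the half-test correlation underestimates the full test's reliability; the Spearman-Brown step corrects for this.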
CRONBACH'S ALPHA FORMULA:

r = [K / (K − 1)] × [1 − (Σσᵢ²) / σ²]

where
r = the estimated reliability
K = the total number of items in the test
Σσᵢ² = the sum of the variances of each individual item
σ² = the variance of the total test scores
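A minimal Python sketch of Cronbach's alpha computed directly from the formula above; the item-score matrix (one row per examinee, one column per item) is hypothetical.

```python
# Cronbach's alpha: r = (K/(K-1)) * (1 - sum of item variances / variance of total scores)

def variance(values):
    # population variance of a list of scores
    n = len(values)
    m = sum(values) / n
    return sum((v - m) ** 2 for v in values) / n

def cronbach_alpha(item_scores):
    k = len(item_scores[0])                            # K: number of items
    items = list(zip(*item_scores))                    # transpose: columns = items
    sum_item_var = sum(variance(col) for col in items) # sum of individual item variances
    total_var = variance([sum(row) for row in item_scores])  # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

scores = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]
print(round(cronbach_alpha(scores), 3))
```

Alpha rises when the items vary together (high internal consistency), because then the total-score variance is large relative to the sum of the individual item variances.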
INTERRATER/INTEROBSERVER RELIABILITY:

r = number of agreements / (number of agreements + number of disagreements)

The preferred index is Cohen's kappa, which corrects this proportion for chance agreement.
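A short Python sketch computing both the simple agreement proportion and Cohen's kappa for two raters classifying the same cases; the ratings below are hypothetical.

```python
from collections import Counter

def percent_agreement(r1, r2):
    # proportion of cases on which the two raters agree
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    n = len(r1)
    p_o = percent_agreement(r1, r2)        # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    labels = set(r1) | set(r2)
    # expected (chance) agreement from each rater's marginal label frequencies
    p_e = sum((c1[lab] / n) * (c2[lab] / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

rater1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(percent_agreement(rater1, rater2))  # 0.75
print(cohens_kappa(rater1, rater2))       # 0.5
```

Here the raters agree on 6 of 8 cases (0.75), but with two equally common labels half that agreement is expected by chance alone, so kappa drops to 0.5.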
ALTERNATE FORM
Two equivalent (parallel) forms of the same test are administered to the same group, and the scores on the two forms are correlated.