Week 9 - Concept Notes
Demographic Forms
Demographic forms are used by researchers to collect basic information about the
participants, such as age, gender, ethnicity, and annual income.
Example:
1. Age: ___
2. Gender:
____ Male
____ Female
____ Prefer not to say
3. Civil Status:
____ Single
____ Married
____ Widowed
4. Nationality: _____________________
Performance Measures
Performance measures are used to assess or rate an individual’s achievement,
intelligence, aptitude, or interests. Examples of this type of measure include the
National Achievement Test administered by the Department of Education and the college
admission tests conducted by different universities in the country.
Attitudinal Measures
Attitudinal measures are instruments used to measure an individual’s attitudes and opinions
about a subject. These instruments assess the respondent’s level of agreement with a set of
statements, often by asking them to choose from a range of responses such as strongly
agree to strongly disagree. The questionnaire in the Explore part is an example of an
attitudinal measure since it determines the extent to which the participants agree or
disagree with a given statement.
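As a minimal sketch of how such responses are prepared for analysis, the Python snippet below maps Likert-type answers onto numeric scores; the five-point scale and the sample responses are invented for illustration.

# A minimal sketch of scoring an attitudinal (Likert-type) item.
# The five-point scale and the sample responses are hypothetical.
LIKERT_SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "neutral", "agree"]

# Convert verbal responses to numeric scores and summarize them.
scores = [LIKERT_SCALE[r] for r in responses]
mean_agreement = sum(scores) / len(scores)
print(f"Mean agreement: {mean_agreement:.2f}")  # prints 4.00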
Reliability
The reliability of a measure can be defined simply as the stability and consistency of an
instrument across different circumstances or points in time. This is true for all types of
reliability, although each type focuses on a different kind of consistency. Reliability can
describe an instrument’s internal consistency, its stability over time, and the equivalence of
its alternate forms.
Internal Consistency
Internal consistency means that any group of items taken from a specific instrument will
likely produce the same results as administering the entire instrument. It indicates how
consistently the items of a research instrument measure a specific concept. The internal
consistency of a measure can be obtained through the following techniques (Howitt 2014),
illustrated in the sketch after this list:
● Split-half reliability. The score from one half of the items on the instrument is
correlated with the score from the other half of the instrument.
● Odd-even reliability. The score from the even-numbered items (e.g., items 2, 4, 6, 8,
and so on) is correlated with the score from the odd-numbered items (e.g., items 1, 3, 5, 7,
and so on) of the same instrument.
● Cronbach’s alpha. Also called alpha reliability. This is obtained by splitting the items
into two equal halves in every possible way, correlating each half with the other, and taking
the mean of these correlations. In other words, Cronbach’s alpha averages the reliabilities of
all possible splits of the items into two equal sets.
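Since all three techniques reduce to correlating sets of scores, a short Python sketch can illustrate them on hypothetical data: six respondents answering four items on a 1-to-5 scale, with all numbers invented for this example. Cronbach’s alpha is computed here with the standard variance-based formula rather than by literally averaging every split.

import numpy as np

# Hypothetical responses: 6 respondents x 4 items, each scored 1-5.
items = np.array([
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [1, 2, 1, 1],
    [4, 4, 3, 4],
])

# Split-half reliability: first half of the items vs. the second half.
first_half = items[:, :2].sum(axis=1)
second_half = items[:, 2:].sum(axis=1)
split_half_r = np.corrcoef(first_half, second_half)[0, 1]

# Odd-even reliability: odd-numbered items (1, 3) vs. even-numbered (2, 4).
odd_total = items[:, 0::2].sum(axis=1)
even_total = items[:, 1::2].sum(axis=1)
odd_even_r = np.corrcoef(odd_total, even_total)[0, 1]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
k = items.shape[1]
sum_item_vars = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - sum_item_vars / total_var)

print(f"split-half r = {split_half_r:.2f}")
print(f"odd-even r = {odd_even_r:.2f}")
print(f"Cronbach's alpha = {alpha:.2f}")

Note that a full analysis would usually apply the Spearman-Brown correction to the two half-test correlations, since each half is only half the length of the full instrument.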
Alternate Forms
To cancel out the effects of participants remembering the items, as discussed above under
test-retest reliability, another way to measure reliability is to use alternate forms. This type
of reliability is also called parallel forms reliability. It requires the researcher to use
equivalent versions of the test and to correlate the participants’ scores on the two versions.
For example, a teacher may give alternate versions of a math test (e.g., Set A and Set B) to
their students, with both versions covering the same scope or content (e.g., quadratic
equations).
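A minimal sketch of this computation, assuming invented scores for six students on the two hypothetical test versions:

import numpy as np

# Hypothetical scores of the same six students on two equivalent
# versions of a math test (Set A and Set B).
set_a = np.array([85, 70, 92, 60, 78, 88])
set_b = np.array([82, 74, 95, 58, 80, 85])

# Alternate forms reliability: the correlation between the two versions.
r = np.corrcoef(set_a, set_b)[0, 1]
print(f"Alternate forms reliability r = {r:.2f}")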
Validity
A general definition of validity is an instrument’s capacity to measure what it is supposed to
measure. In other words, a valid instrument is an accurate measure of the variable under
study. There are three types of validity: face and content validity, criterion validity, and
construct validity (Kumar 2011).
Content validity is the ability of the test items to cover the important characteristics of the
concept intended to be measured. For example, when you take your first periodical test for
the school year, you can judge immediately whether the scope of the test is in line with the
lessons your teacher taught during the entire first quarter.
Criterion Validity
Criterion validity tells whether a research instrument gives the same result as other similar
instruments. There are two types of criterion validity: concurrent and predictive validity
(Langdridge and Hagger-Johnson 2013).
Concurrent validity is obtained by correlating two research instruments administered at the
same time. This type of validity is similar to alternate forms reliability in that two similar
instruments are used to evaluate the quality of the instrument.
Predictive validity refers to the ability of an instrument to predict another variable, called a
criterion. The criterion should be different from the construct originally being measured. For
example, a college entrance exam composed of different subtests, such as reasoning,
numerical, and verbal ability, has predictive validity for a student’s likelihood of succeeding
in the university they are applying to.
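The sketch below estimates predictive validity on invented data: hypothetical entrance exam scores are correlated with the criterion (first-year grade average, collected later for the same students).

import numpy as np

# Hypothetical data: entrance exam scores and the criterion measured
# later (first-year grade average) for the same seven students.
exam_scores = np.array([88, 75, 92, 60, 81, 70, 95])
first_year_grades = np.array([3.6, 3.0, 3.8, 2.4, 3.3, 2.9, 3.9])

# Predictive validity: the correlation between the test and the criterion.
validity_r = np.corrcoef(exam_scores, first_year_grades)[0, 1]
print(f"Predictive validity r = {validity_r:.2f}")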
Construct Validity
Construct validity is assessed by examining whether a specific instrument relates to other
measures as expected. It is the most sophisticated of all the types of validity. Obtaining
construct validity involves correlating scores on the instrument being evaluated with scores
on other instruments. Construct validity can be classified into two types: convergent validity
and discriminant validity (Leary 2011).
Convergent validity is obtained when an instrument correlates with similar instruments that
it is expected to correlate with. For example, a self-esteem scale can be correlated with
instruments measuring related constructs, such as self-confidence.
Discriminant validity, on the other hand, is obtained when an instrument does not correlate
with instruments measuring constructs it should be unrelated to. For example, a self-esteem
scale should show little or no correlation with instruments measuring unrelated constructs,
such as intelligence.
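Both patterns can be illustrated on invented data: in the sketch below, hypothetical scores on a self-esteem scale correlate strongly with a related self-confidence scale (convergent validity) and only weakly with an intelligence measure (discriminant validity).

import numpy as np

# Hypothetical scores of the same seven participants on three measures.
self_esteem = np.array([40, 25, 35, 45, 20, 30, 42])
self_confidence = np.array([38, 22, 33, 47, 24, 28, 44])  # related construct
intelligence = np.array([120, 115, 95, 113, 118, 98, 111])  # unrelated construct

# Convergent validity: self-esteem should correlate highly with
# self-confidence. Discriminant validity: it should correlate weakly
# (or not at all) with intelligence.
convergent_r = np.corrcoef(self_esteem, self_confidence)[0, 1]
discriminant_r = np.corrcoef(self_esteem, intelligence)[0, 1]
print(f"convergent r = {convergent_r:.2f}")      # high (close to 1)
print(f"discriminant r = {discriminant_r:.2f}")  # near zero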