Validity
The document discusses the concept of validity in psychometrics: how well a test measures what it claims to measure. It outlines content, criterion, construct, and face validity, explains the validation process and the relationship between reliability and validity, and highlights the roles of bias and fairness in testing.
PSYCHOMETRIC PROPERTIES: VALIDITY

THE CONCEPT OF VALIDITY
Validity: A judgment or estimate of how well a test measures what it purports to measure in a particular context.
Validation: The process of gathering and evaluating evidence about validity.
Both test developers and test users may play a role in the validation of a test.
Local validation: Test users may validate a test with their own group of test takers.

VALIDITY IS OFTEN CONCEPTUALIZED ACCORDING TO 3 CATEGORIES:
Content Validity
- A measure of validity based on an evaluation of the subjects, topics, or content covered by the items in the test.
Criterion Validity
- A measure of validity obtained by evaluating the relationship of scores obtained on the test to scores on other tests or measures.
Construct Validity
- A measure of validity arrived at by executing a comprehensive analysis of:
a. How scores on the test relate to other test scores and measures.
b. How scores on the test can be understood within some theoretical framework for understanding the construct that the test was designed to measure.

FACE VALIDITY
A judgment about the relevance of test items.
A type of validity that is more from the perspective of the test taker as opposed to the test user.
Example: Personality tests. An introversion-extroversion test will be perceived as a highly (face) valid measure of personality functioning, whereas an inkblot test may not be perceived as a (face) valid method of assessing personality functioning.

CONTENT VALIDITY
Content validity: A judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample.
Do the test items adequately represent the content that should be included in the test?
Test blueprint: A plan regarding the types of information to be covered by the items, the number of items tapping each area of coverage, and the organization of the items in the test.
Culture and the relativity of content validity: The content validity of a test varies across cultures and time.

CRITERION-RELATED VALIDITY
A criterion is the standard against which a test or a test score is evaluated.
Characteristics of a criterion: An adequate criterion should be relevant to the matter at hand, valid for the purpose for which it is being used, and uncontaminated, meaning it is not based on predictor measures.
Validity coefficient: A correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure.
Validity coefficients are affected by restriction or inflation of range.
Incremental validity: The degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use. To what extent does a test predict the criterion over and above other variables? (See the sketch at the end of these notes.)

CONSTRUCT VALIDITY
A judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a construct.
If a test is a valid measure of a construct, then high scorers and low scorers should behave as theorized.
All types of validity evidence, including evidence from the content- and criterion-related varieties of validity, come under the umbrella of construct validity.

EVIDENCE OF CONSTRUCT VALIDITY
Evidence of homogeneity - How uniform a test is in measuring a single concept.
Evidence of changes with age - Some constructs are expected to change over time (e.g., reading rate).
Evidence of pretest-posttest changes - Test scores change as a result of some experience between a pretest and a posttest (e.g., therapy).
Evidence from distinct groups - Scores on a test vary in a predictable way as a function of membership in some groups.
Convergent evidence: Scores on the test undergoing construct validation tend to correlate highly in the predicted direction with scores on older, more established tests designed to measure the same (or a similar) construct.
Discriminant evidence: A validity coefficient showing little relationship between test scores and/or other variables with which scores on the test should not theoretically be correlated.
Factor analysis: A class of mathematical procedures designed to identify specific variables on which people may differ.

RELATIONSHIP BETWEEN RELIABILITY AND VALIDITY
Reliability and validity are partially related and partially independent.
Reliability is a prerequisite for validity, meaning a measurement cannot be valid unless it is reliable.
It is not necessary for a measurement to be valid for it to be considered reliable.

VALIDITY AND TEST BIAS
Bias: A factor inherent in a test that systematically prevents accurate, impartial measurement.
- Bias implies systematic variation in test scores.
- Prevention during test development is the best cure for test bias.
Rating error: A judgment resulting from the intentional or unintentional misuse of a rating scale.
- Raters may be too lenient, too severe, or reluctant to give ratings at the extremes (central tendency error).
Halo effect: A tendency to give a particular person a higher rating than he or she objectively deserves because of a favorable overall impression.
Fairness: The extent to which a test is used in an impartial, just, and equitable way.
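APPENDIX: A NUMERICAL SKETCH (NOT PART OF THE ORIGINAL NOTES)
The Python sketch below, using simulated data and hypothetical variable names, illustrates three points from the CRITERION-RELATED VALIDITY notes above: the validity coefficient as a correlation between test and criterion scores, incremental validity as the gain in explained criterion variance (R^2) when the new test is added to a predictor already in use, and how restriction of range can shrink an observed validity coefficient. It is a minimal illustration, not a prescribed validation procedure.

# A minimal sketch (simulated data, hypothetical variable names), assuming only NumPy.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated scores: a predictor already in use, the new test being validated, and a criterion.
existing_predictor = rng.normal(size=n)
new_test = 0.5 * existing_predictor + rng.normal(size=n)
criterion = 0.4 * existing_predictor + 0.4 * new_test + rng.normal(size=n)

# Validity coefficient: the Pearson correlation between new-test scores and the criterion.
validity_coefficient = np.corrcoef(new_test, criterion)[0, 1]

def r_squared(predictors, outcome):
    """Proportion of criterion variance explained by a least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(outcome))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    residuals = outcome - X @ beta
    return 1.0 - residuals.var() / outcome.var()

# Incremental validity: gain in R^2 when the new test is added to the predictor already in use.
delta_r2 = (r_squared([existing_predictor, new_test], criterion)
            - r_squared([existing_predictor], criterion))

# Restriction of range: correlating within only the top half of new-test scorers
# typically shrinks the observed validity coefficient.
top_half = new_test > np.median(new_test)
restricted_r = np.corrcoef(new_test[top_half], criterion[top_half])[0, 1]

print(f"validity coefficient r      = {validity_coefficient:.2f}")
print(f"incremental validity (dR^2) = {delta_r2:.2f}")
print(f"r under restricted range    = {restricted_r:.2f}")

A standard psychometric result, not stated in the notes above but consistent with "reliability is a prerequisite for validity": a validity coefficient cannot exceed the square root of the product of the test's reliability and the criterion's reliability, so an unreliable test cannot show high criterion-related validity.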