Entrepreneurial Development (Unit 8)
ASRB NET / SRF & Ph.D. Extension Education
Reliability and Validity of Research Instruments
1. Reliability

Meaning

  • Reliability refers to the consistency, stability, and dependability of a measuring instrument.
  • A reliable instrument will give the same results when applied repeatedly under similar conditions.

Kerlinger (1986): “Reliability is the accuracy or precision of a measuring instrument.”

 

Types of Reliability

  • Test–Retest Reliability: The same test is administered to the same group after an interval of time. Measures stability over time. Example: administering an adoption scale to farmers twice in a month.
  • Split-Half Reliability: The test is divided into two halves (odd–even, or first half vs. second half). The correlation between the halves, stepped up with the Spearman–Brown formula, indicates internal consistency.
  • Parallel-Form (Equivalent-Form) Reliability: Two equivalent versions of the test are administered to the same group. Example: two versions of a knowledge test on crop practices.
  • Inter-Rater / Inter-Observer Reliability: Agreement between different observers or raters. Example: two extension agents rating farmer participation in training.

Key Point

  • Reliability = Consistency of measurement
  • Typically expressed as a correlation coefficient (r); a reliability of r ≥ 0.70 is usually considered acceptable.

 

2. Validity

Meaning

  • Validity is the extent to which an instrument measures what it is supposed to measure.
  • A valid instrument ensures accuracy and truthfulness of results.

Kerlinger (1986): “Validity is the extent to which an instrument measures what it claims to measure.”

 

Types of Validity

  • Content Validity: Degree to which the test items represent the entire domain of the construct. Example: a knowledge test on Integrated Pest Management (IPM) should cover all aspects (insect pests, methods, chemicals, biological control).
  • Construct Validity: Degree to which the instrument actually measures the theoretical construct. Checked through factor analysis or correlation with related variables. Example: an attitude scale should truly reflect attitude, not just knowledge.
  • Criterion-Related Validity: Degree to which the instrument correlates with an external criterion.
    • Two forms:
      • Concurrent validity: the instrument correlates with an existing standard measured at the same time.
      • Predictive validity: the instrument predicts future performance (e.g., an entrance exam predicting academic success).
  • Face Validity (weakest form): The instrument appears, on inspection by experts, to measure what it should.
    • More subjective than statistical.

 

Key Point

  • Validity = Accuracy of measurement
  • An instrument can be reliable but not valid (e.g., consistently giving wrong results).
  • But if it is valid, it must also be reliable.

 

Relationship Between Reliability and Validity

  • Reliable but not valid: A weighing machine consistently shows +5 kg error.
  • Valid but not reliable: Impossible, because inconsistency cannot lead to true measurement.
  • Best instrument: Both reliable and valid.
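The weighing-machine case can be made concrete with a short sketch (the readings are hypothetical): a small spread across repeated readings shows reliability, while the constant offset from the true weight shows the measurements are not valid.

```python
from statistics import mean, stdev

# Hypothetical: a machine with a constant +5 kg bias weighing a 60 kg sack
true_weight = 60.0
readings = [65.1, 64.9, 65.0, 65.2, 64.8]  # repeated measurements

consistency = stdev(readings)        # small spread  -> reliable
bias = mean(readings) - true_weight  # large offset  -> not valid
print(round(consistency, 2), round(bias, 1))
```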

 

Exam-Ready Summary

| Aspect | Reliability | Validity |
| --- | --- | --- |
| Meaning | Consistency of results | Accuracy of results |
| Key Question | Does the tool give the same results repeatedly? | Does the tool measure what it is supposed to measure? |
| Types | Test-retest, Split-half, Parallel forms, Inter-rater | Content, Construct, Criterion-related, Face |
| Relation | Necessary but not sufficient for validity | Implies reliability + accuracy |

One-liner for exams: Reliability is the consistency of a research instrument, while validity is its accuracy. Reliability is necessary but not sufficient for validity.

 
