ASRB NET / SRF & Ph.D. Extension Education
Development of Knowledge Test

A knowledge test is a tool used to measure the factual information, understanding, and awareness of respondents about a particular subject, innovation, or practice (e.g., farmers’ knowledge about Integrated Pest Management, soil conservation, or dairy practices).

It must be valid, reliable, and objective.

 

Steps in Developing a Knowledge Test

  1. Planning the Test
  • Define the objectives: What do you want to measure? (e.g., farmers’ knowledge of organic farming).
  • Specify the content area: List the important units/topics/subtopics.
  • Decide the type of knowledge:
    • Factual knowledge (facts, terms, definitions).
    • Comprehension (understanding of principles, concepts).
    • Application knowledge (ability to apply knowledge in practical situations).

 

  2. Collection of Items (Question Pool)
  • Prepare a large pool of items (statements/questions) covering the full content area.
  • Items can be:
    • Multiple-choice questions (MCQs)
    • True/False statements
    • Fill-in-the-blanks
    • Matching type
  • Ensure items are simple, unambiguous, and relevant.

Example:

  • IPM includes both chemical and non-chemical methods. (True/False)
  • Which of the following is a bio-control agent? (Options…)
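
While drafting, the item pool can be kept in a simple structure such as the Python sketch below; the field names and the MCQ options are illustrative assumptions, not prescribed by any standard:

```python
# Hypothetical representation of draft items while building the pool.
item_pool = [
    {"type": "true_false",
     "stem": "IPM includes both chemical and non-chemical methods.",
     "key": True},
    {"type": "mcq",
     "stem": "Which of the following is a bio-control agent?",
     "options": ["Trichogramma", "Urea", "Malathion", "Gypsum"],
     "key": "Trichogramma"},
]
print(len(item_pool), "draft items in the pool")
```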

 

  3. Editing of Items
  • Remove ambiguous, vague, or overlapping items.
  • Ensure language is simple and free from technical jargon.
  • Each item should test one aspect of knowledge only.

 

  4. Pre-Testing / Item Analysis
  • Administer the draft test to a small representative sample of respondents (pilot testing).
  • Analyze each item using:
    • Difficulty Index (P): % of respondents answering correctly. Ideal range: 20–80% (neither too easy nor too difficult).
    • Discrimination Index (D): Ability of an item to discriminate between high and low scorers. Good items: D ≥ 0.30.
    • Point-biserial correlation (rpbis): Correlation of item score with total test score. Accept items with rpbis ≥ 0.20. (A computational sketch of all three indices follows this list.)
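
A minimal sketch of these three item statistics in Python, assuming the pilot responses are coded as a 0/1 matrix (all data below are hypothetical):

```python
import numpy as np

# Hypothetical pilot data: rows = respondents, columns = items, 1 = correct.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 1],
])
totals = responses.sum(axis=1)          # total score per respondent

# Difficulty index (P): % of respondents answering each item correctly.
P = responses.mean(axis=0) * 100

# Discrimination index (D): top 27% of scorers vs bottom 27%.
n = len(totals)
k = max(1, round(0.27 * n))
order = np.argsort(totals)              # respondents sorted by total score
D = (responses[order[-k:]].sum(axis=0) - responses[order[:k]].sum(axis=0)) / k

# Point-biserial correlation: each item's 0/1 score vs the total score.
rpbis = np.array([np.corrcoef(responses[:, j], totals)[0, 1]
                  for j in range(responses.shape[1])])

for j in range(responses.shape[1]):
    print(f"Item {j + 1}: P = {P[j]:.0f}%, D = {D[j]:.2f}, rpbis = {rpbis[j]:.2f}")
```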

 

  5. Selection of Final Items
  • Retain items that fall within the acceptable difficulty and discrimination ranges (see the filtering sketch after this list).
  • Discard weak items.
  • Ensure content coverage and balance.
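
Continuing the hypothetical sketch from Step 4, the screening criteria cited there can be applied as a simple filter over the item statistics:

```python
# Keep only items meeting all three criteria from Step 4
# (P, D, rpbis and np come from the item-analysis sketch above).
keep = (P >= 20) & (P <= 80) & (D >= 0.30) & (rpbis >= 0.20)
retained = np.where(keep)[0] + 1    # 1-based item numbers to retain
print("Items retained for the final test:", retained.tolist())
```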

 

  6. Administration of Final Test
  • Prepare instructions for respondents.
  • Ensure standardized administration (same procedure for all).
  • Decide the scoring pattern (1 mark for a correct answer, 0 for a wrong or blank response), as in the sketch below.
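
A minimal scoring sketch under that pattern; the answer key and the responses are hypothetical:

```python
# Hypothetical answer key; 1 mark for a correct answer, 0 otherwise.
answer_key = {"Q1": "True", "Q2": "b", "Q3": "c"}

def score(responses: dict) -> int:
    """Return one respondent's total marks under the 1/0 scoring pattern."""
    return sum(1 for q, correct in answer_key.items()
               if responses.get(q) == correct)

print(score({"Q1": "True", "Q2": "a", "Q3": "c"}))  # -> 2
```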

 

  7. Establishing Reliability
  • Reliability = Consistency of the test results.
  • Methods:
    • Split-half reliability (test divided into two halves, scores correlated).
    • Test-retest method (administer twice, correlate scores).
    • Kuder-Richardson formulas (KR-20/KR-21) for dichotomous items (see the KR-20 sketch after this list).
    • Acceptable reliability: ≥ 0.70.
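
KR-20 is computed as [k/(k−1)] × (1 − Σpq/σ²), where k is the number of items, p is the proportion answering each item correctly, q = 1 − p, and σ² is the variance of total scores. A minimal sketch for a 0/1 response matrix (the demo data are hypothetical):

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """KR-20 reliability for a 0/1 matrix (rows = respondents, columns = items)."""
    k = responses.shape[1]                         # number of items
    p = responses.mean(axis=0)                     # proportion correct per item
    q = 1 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# Hypothetical data in the same layout as the Step 4 sketch;
# a value of 0.70 or above would be read as acceptable reliability.
demo = np.array([[1, 1, 0, 1], [1, 0, 0, 1], [0, 1, 1, 1],
                 [1, 1, 1, 0], [0, 0, 0, 1], [1, 1, 1, 1]])
print(f"KR-20 = {kr20(demo):.2f}")
```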

 

  8. Establishing Validity
  • Validity = Does the test measure what it is supposed to measure?
  • Types:
    • Content validity (adequate coverage of subject matter).
    • Construct validity (the test actually reflects the theoretical construct, i.e., “knowledge”, that it claims to measure).
    • Criterion-related validity (correlation with an external criterion, e.g., experts’ ratings; see the sketch after this list).
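
A minimal criterion-related validity sketch, correlating test totals with experts’ ratings; all figures below are hypothetical:

```python
import numpy as np

# Illustrative data: six respondents' test totals and experts' ratings of them.
test_totals    = np.array([14, 9, 17, 12, 20, 7])
expert_ratings = np.array([3.5, 2.0, 4.0, 3.0, 4.5, 1.5])

r = np.corrcoef(test_totals, expert_ratings)[0, 1]
print(f"Criterion-related validity coefficient: {r:.2f}")
```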

 

  9. Standardization of the Test
  • Prepare the final test with instructions, scoring key, and interpretation guidelines.
  • Document procedure for administration, scoring, and interpretation.

 

Characteristics of a Good Knowledge Test

  • Validity – measures intended knowledge accurately.
  • Reliability – gives consistent results.
  • Objectivity – free from personal bias.
  • Usability – easy to administer and score.
  • Discrimination – differentiates between knowledgeable and less knowledgeable respondents.

 

Example Application in Extension Education

  • Measuring farmers’ knowledge on organic farming.
  • Measuring students’ knowledge of ICT tools in extension.
  • Assessing knowledge gain before and after training programs (a paired pre/post comparison is sketched below).
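
For the last example, one common analysis is a paired t-test on pre- and post-training scores. A minimal sketch with hypothetical data:

```python
import numpy as np
from scipy import stats

# Hypothetical pre- and post-training knowledge scores for six trainees.
pre  = np.array([8, 10, 7, 12, 9, 11])
post = np.array([13, 15, 11, 16, 12, 17])

# Paired t-test: is the mean knowledge gain significantly greater than zero?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"Mean knowledge gain: {np.mean(post - pre):.1f} marks "
      f"(t = {t_stat:.2f}, p = {p_value:.3f})")
```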

 
