Rasch Analysis Motivation

Item Response Theory

Item response theory (IRT) is a psychometric approach that focuses both on the subjects’ responses to test items and on the qualities of the test items themselves.

In general, a subject’s likelihood of answering a question (item) correctly depends on the subject’s ability and the difficulty of the question. The easier the question and the greater the subject’s ability, the higher the likelihood that the subject will answer the question correctly.

Unlike classical item analysis and test theory, IRT does not assume that all items are equally difficult. Instead, it assumes that a subject’s score on a particular item is based on a combination of the subject’s ability and the item’s difficulty.

Rasch Model

In particular, we explore the Rasch model approach to IRT. This model provides a way to measure the ability of subjects and the difficulty of items with the following benefits:

  1. We can place items on a continuum from least difficult to most difficult in a way that lets us quantify and compare the differences between the difficulty levels of items in a meaningful way. E.g. if items 1, 2, and 3 have difficulty levels -1, 1, and 3 respectively, then item 3 is more difficult than item 2 by the same amount (two units) that item 2 is more difficult than item 1.
  2. We can place subjects on a continuum from those with the least ability to those with the most ability in a way that lets us quantify and compare the differences between ability levels in a meaningful way. E.g. if subjects A, B, and C have ability levels -2, 1, and 2 respectively, then the difference between the ability levels of A and B (three units) is three times that between B and C (one unit).
  3. We can compare ability levels with difficulty levels. E.g. if a subject has ability 2 then we should expect that this subject will generally be able to answer correctly an item with difficulty level 1, but not an item with difficulty level 3.
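The comparison in point 3 can be made concrete with the Rasch model’s item characteristic function, which gives the probability of a correct response as 1/(1 + e^-(ability - difficulty)). Here is a minimal sketch in Python (the function name `rasch_prob` is our own choice, not a standard library function):

```python
import math

def rasch_prob(ability, difficulty):
    """Probability that a subject with the given ability answers an
    item with the given difficulty correctly, under the Rasch model."""
    return 1 / (1 + math.exp(-(ability - difficulty)))

# Point 3 above: a subject with ability 2 facing items of difficulty 1 and 3.
print(round(rasch_prob(2, 1), 3))  # 0.731: likely to answer correctly
print(round(rasch_prob(2, 3), 3))  # 0.269: unlikely to answer correctly

# When ability equals difficulty, the probability is exactly 0.5.
print(rasch_prob(1.5, 1.5))  # 0.5
```

Note that only the difference between ability and difficulty matters, which is why abilities and difficulties can be placed on the same continuum and compared directly.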

Note that the classical measures of ability and difficulty, namely a subject’s total test score (or percentage of items correct) as ability, coupled with the percentage of subjects who answered an item correctly (or incorrectly) as difficulty, achieve none of these three objectives.

References

Moulton, M. H. (2003) Rasch estimation demonstration spreadsheet
https://www.rasch.org/moulton.htm

Wright, B. D. and Stone, M. H. (1979) Best test design. MESA Press: Chicago, IL
https://research.acer.edu.au/measurement/1/

Wright, B. D. and Masters, G. N. (1982) Rating scale analysis. MESA Press: Chicago, IL
https://research.acer.edu.au/measurement/2/

Boone, W. J. (2016) Rasch analysis for instrument development: why, when, and how?
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5132390/pdf/rm4.pdf

Boone, W. J. and Noltemeyer, A. (2017) Rasch analysis: A primer for school psychology researchers and practitioners. Cogent Education
https://edisciplinas.usp.br/mod/resource/view.php?id=3333001

Furr, R. M. and Bacharach, V. R. (2007) Psychometrics: an introduction; Chapter 13: Item response theory and Rasch models. Sage Publishing
https://in.sagepub.com/sites/default/files/upm-binaries/18480_Chapter_13.pdf

Wright, B. and Stone, M. (1999) Measurement essentials, 2nd ed.
https://www.rasch.org/measess/

Wikipedia (2019) Item response theory
https://en.wikipedia.org/wiki/Item_response_theory
