Performance evaluation of the small sample dichotomous IRT analysis in assessment calibration

Fotaris, Panagiotis ORCID: https://orcid.org/0000-0001-7757-7746, Mastoras, Theodoros, Mavridis, Ioannis and Manitsaris, Athanasios (2010) Performance evaluation of the small sample dichotomous IRT analysis in assessment calibration. In: Fifth International Multi-Conference on Computing in the Global Information Technology (ICCGI 2010), 20-25 Sep 2010, Valencia, Spain.

Full text not available from this repository.

Abstract

Item Response Theory (IRT) provides the accepted framework for examining student responses to individual test items in order to assess their quality. IRT is especially valuable both for improving items that will be reused in future tests and for eliminating ambiguous or misleading ones. However, to ensure that all IRT parameters are estimated correctly, every item must be administered to a large number of examinees, so IRT tends to be shunned by teaching staff who have access to only a relatively small number of students. Nevertheless, the accuracy of parameter estimates is of lesser importance in assessment calibration, where items whose parameters exceed a threshold value are simply flagged for revision. Using simulated data sets generated under various conditions, this study introduces two new quality indices, together with their respective IRT goodness-of-fit tests, as a means of exploring the feasibility of applying IRT-based assessment calibration to small sample sizes.
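
To illustrate the kind of procedure the abstract describes, the sketch below (not taken from the paper; the 2PL model choice, parameter ranges, sample size, and threshold values are all illustrative assumptions) simulates dichotomous responses for a small examinee sample and applies a simple threshold-based flagging rule to item parameters:

    # Minimal, hypothetical sketch: simulate dichotomous responses under a
    # two-parameter logistic (2PL) IRT model for a small examinee sample and
    # flag items whose generating parameters fall outside illustrative
    # acceptable ranges. All numeric choices below are assumptions for
    # demonstration only, not values from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    n_examinees, n_items = 50, 20          # small-sample condition
    theta = rng.normal(0, 1, n_examinees)  # examinee abilities
    a = rng.uniform(0.4, 2.5, n_items)     # item discriminations
    b = rng.uniform(-3.0, 3.0, n_items)    # item difficulties

    # 2PL probability of a correct response: P = 1 / (1 + exp(-a*(theta - b)))
    p = 1.0 / (1.0 + np.exp(-a[None, :] * (theta[:, None] - b[None, :])))
    responses = (rng.random((n_examinees, n_items)) < p).astype(int)

    # Flag items whose parameters exceed illustrative threshold values,
    # e.g. very low discrimination or extreme difficulty.
    flagged = (a < 0.6) | (np.abs(b) > 2.5)
    print("Items flagged for revision:", np.where(flagged)[0])

In a calibration setting of this kind, only the flagging decision matters, which is why exact parameter recovery can be less critical than in full-scale IRT estimation.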

Item Type: Conference or Workshop Item (Paper)
ISBN: 9781424480685
Identifier: 10.1109/ICCGI.2010.19
Page Range: pp. 214-219
Keywords: psychometrics; assessment calibration; goodness-of-fit test; item response theory
Subjects: Computing
Depositing User: Vani Aul
Date Deposited: 07 Dec 2013 14:33
Last Modified: 28 Aug 2021 07:16
URI: https://repository.uwl.ac.uk/id/eprint/445
