Sangam: A Confluence of Knowledge Streams

Facilitating Variable-Length Computerized Classification Testing Via Automatic Racing Calibration Heuristics

dc.contributor Frick, Theodore W
dc.creator Barrett, Andrew Frederick
dc.date 2015-04-19T07:23:08Z
dc.date 2015-04
dc.date 2015
dc.date.accessioned 2023-02-21T11:19:39Z
dc.date.available 2023-02-21T11:19:39Z
dc.identifier http://hdl.handle.net/2022/19795
dc.description Thesis (Ph.D.) - Indiana University, School of Education, 2015
dc.description Computer Adaptive Tests (CATs) have been used successfully with standardized tests. However, CATs are rarely practical for assessment in instructional contexts, because large numbers of examinees are required a priori to calibrate items using item response theory (IRT). Computerized Classification Tests (CCTs) provide a practical alternative to IRT-based CATs. CCTs show promise for instructional contexts, since many fewer examinees are required for item parameter estimation. However, there is a paucity of clear guidelines indicating when items are sufficiently calibrated in CCTs. Is there an efficient and accurate CCT algorithm that can estimate item parameters adaptively? Automatic Racing Calibration Heuristics (ARCH) was invented as a new CCT method and was empirically evaluated in two studies. Monte Carlo simulations were run on previous administrations of a computer literacy test consisting of 85 items answered by 104 examinees. The simulations determined the thresholds needed by the ARCH method for parameter estimates. These thresholds were subsequently used in 50 sets of computer simulations to compare the accuracy and efficiency of ARCH with the sequential probability ratio test (SPRT) and with an enhanced method called EXSPRT. In the second study, 5,729 examinees took an online plagiarism test in which ARCH was implemented in parallel with SPRT and EXSPRT for comparison. Results indicated that new statistics were needed by ARCH to establish thresholds and to determine when ARCH could begin. The ARCH method resulted in test lengths significantly shorter than SPRT and slightly longer than EXSPRT, without sacrificing accuracy in classifying examinees as masters and nonmasters. This research was the first of its kind to evaluate the ARCH method. ARCH appears to be a viable CCT method that could be particularly useful in massive open online courses (MOOCs). Additional studies with different test content and contexts are needed.
dc.language en
dc.publisher [Bloomington, Ind.] : Indiana University
dc.subject Assessment
dc.subject Computer Adaptive Testing
dc.subject Criterion-referenced Testing
dc.subject Instructional Technology
dc.subject Item Calibration
dc.subject Mastery Testing
dc.subject Educational tests & measurements
dc.subject Educational technology
dc.subject Instructional design
dc.title Facilitating Variable-Length Computerized Classification Testing Via Automatic Racing Calibration Heuristics
dc.type Doctoral Dissertation
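
A note on the baseline method: the abstract compares ARCH against Wald's sequential probability ratio test (SPRT), the standard variable-length classification rule in CCTs. For context, the following is a minimal, illustrative Python sketch of a binomial SPRT mastery decision; the proportion-correct parameters and error rates (p_master, p_nonmaster, alpha, beta) are assumptions chosen for illustration, not values from the dissertation.

import math
import random

def sprt_classify(responses, p_master=0.85, p_nonmaster=0.60, alpha=0.05, beta=0.05):
    """Classify an examinee as 'master', 'nonmaster', or 'undecided' from a
    sequence of 0/1 item responses using Wald's SPRT (illustrative parameters)."""
    upper = math.log((1 - beta) / alpha)   # crossing above this boundary -> master
    lower = math.log(beta / (1 - alpha))   # crossing below this boundary -> nonmaster
    llr = 0.0
    for count, correct in enumerate(responses, start=1):
        # Accumulate the log-likelihood ratio of one correct (1) or incorrect (0) response.
        if correct:
            llr += math.log(p_master / p_nonmaster)
        else:
            llr += math.log((1 - p_master) / (1 - p_nonmaster))
        if llr >= upper:
            return "master", count
        if llr <= lower:
            return "nonmaster", count
    return "undecided", len(responses)

# Example: simulate a true master answering an 85-item pool with 85% accuracy.
random.seed(1)
simulated = [1 if random.random() < 0.85 else 0 for _ in range(85)]
decision, items_used = sprt_classify(simulated)
print(decision, "after", items_used, "items")

Variable-length methods such as SPRT, EXSPRT, and ARCH differ mainly in how the item statistics and decision thresholds are estimated; the test stops as soon as a classification boundary is crossed, which is why test length is the efficiency measure reported in the abstract.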


Files in this item

Barrett_indiana_0093A_13460.pdf (3.257 MB, application/pdf)
