
Tuesday, May 16, 2023

[Article Review] Computerized Adaptive Testing: A Dive into Enhanced Techniques

Reference

Anselmi, P., Robusto, E., & Cristante, F. (2023). Enhancing Computerized Adaptive Testing with Batteries of Unidimensional Tests. Applied Psychological Measurement, 47(3), 167-182. https://doi.org/10.1177/01466216231165301

Review

The article by Anselmi, Robusto, and Cristante (2023) introduces a new procedure for computerized adaptive testing (CAT) with batteries of unidimensional tests. After every new response, the procedure updates not only the estimate of the ability currently being measured but also the current estimates of all other abilities in the battery. Information from these correlated abilities is combined into an empirical prior for the target ability, which is itself updated regularly as testing proceeds.
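
To make the idea concrete, here is a minimal sketch of how an empirical prior might enter an expected a posteriori (EAP) ability estimate under a 2PL model. The function names, the normal prior, and the shrinkage rule in `empirical_prior` are illustrative assumptions, not the authors' exact updating equations.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def empirical_prior(other_estimates, corr):
    """Illustrative empirical prior: center on the mean of the other
    abilities' current estimates, shrinking the variance as the assumed
    correlation grows (a stand-in for the paper's updating rule)."""
    mean = corr * float(np.mean(other_estimates))
    sd = float(np.sqrt(max(1.0 - corr ** 2, 0.1)))
    return mean, sd

def eap_estimate(responses, a, b, prior_mean=0.0, prior_sd=1.0):
    """EAP ability estimate on a quadrature grid; the battery information
    enters only through the prior mean and standard deviation."""
    grid = np.linspace(-4.0, 4.0, 161)
    posterior = np.exp(-0.5 * ((grid - prior_mean) / prior_sd) ** 2)
    for u, ai, bi in zip(responses, a, b):
        p = p_correct(grid, ai, bi)
        posterior *= p if u == 1 else 1.0 - p
    posterior /= posterior.sum()
    return float(np.sum(grid * posterior))

# Example: the prior for the current test is informed by the examinee's
# current estimates on two other (correlated) tests in the battery.
mu, sigma = empirical_prior([0.8, 1.1], corr=0.6)
theta_hat = eap_estimate([1, 0, 1], a=[1.2, 0.9, 1.5], b=[-0.5, 0.0, 0.7],
                         prior_mean=mu, prior_sd=sigma)
```

In the procedure the paper describes, this prior would be recomputed as the other abilities' estimates are themselves updated, so later tests in the battery start from increasingly informative priors.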

To validate the approach, the researchers ran two simulation studies contrasting the proposed procedure with a standard CAT procedure for batteries of unidimensional tests. The proposed procedure yielded more accurate ability estimates for fixed-length CATs and shorter tests for variable-length CATs, and both gains grew as the correlation among the abilities measured by the battery increased.
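
The variable-length result rests on a standard CAT stopping rule: testing ends once the posterior standard deviation (the standard error of the ability estimate) falls below a threshold, so a sharper empirical prior reaches that threshold with fewer items. A minimal, self-contained sketch; the threshold value is an arbitrary illustration, not taken from the paper.

```python
import numpy as np

def posterior_sd(responses, a, b, prior_mean, prior_sd):
    """Posterior SD of ability under a 2PL model on a quadrature grid."""
    grid = np.linspace(-4.0, 4.0, 161)
    posterior = np.exp(-0.5 * ((grid - prior_mean) / prior_sd) ** 2)
    for u, ai, bi in zip(responses, a, b):
        p = 1.0 / (1.0 + np.exp(-ai * (grid - bi)))  # 2PL item response curve
        posterior *= p if u == 1 else 1.0 - p
    posterior /= posterior.sum()
    mean = np.sum(grid * posterior)
    return float(np.sqrt(np.sum((grid - mean) ** 2 * posterior)))

def should_stop(responses, a, b, prior_mean, prior_sd, se_threshold=0.3):
    """Variable-length stopping rule: stop once the standard error is small."""
    return posterior_sd(responses, a, b, prior_mean, prior_sd) < se_threshold
```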

While the study offers a promising way to enhance CAT, the gains depend on the correlation among the abilities measured by the battery, so the benefit may be modest when those abilities are only weakly related. The reliance on simulation studies also means real-world validation is still needed. Nonetheless, Anselmi et al.'s approach is a solid step forward in refining CAT procedures and could yield meaningful efficiencies in operational settings, contingent on further validation.


Saturday, October 10, 2020

[Article Review] Unraveling the Mystery of Missing Data: Effective Handling Methods for Accurate Ability Estimation

Reference

Xiao, J., & Bulut, O. (2020). Evaluating the Performances of Missing Data Handling Methods in Ability Estimation From Sparse Data. Educational and Psychological Measurement, 80(5), 932-954. https://doi.org/10.1177/0013164420911136

Review

In the article "Evaluating the Performances of Missing Data Handling Methods in Ability Estimation From Sparse Data" (2020), Xiao and Bulut conducted two Monte Carlo simulation studies to evaluate how four methods for handling missing data affect the estimation of ability parameters: full-information maximum likelihood (FIML), zero replacement, and multiple imputation by chained equations using either classification and regression trees (MICE-CART) or random forest imputation (MICE-RFI). The authors assessed the accuracy of each method's ability estimates using bias, root mean square error (RMSE), and the correlation between true and estimated ability parameters.
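
For reference, the three recovery metrics the authors used are straightforward to compute; a minimal sketch (the function name and toy data are mine, not from the article):

```python
import numpy as np

def recovery_metrics(theta_true, theta_hat):
    """Bias, RMSE, and Pearson correlation between true and estimated abilities."""
    err = np.asarray(theta_hat) - np.asarray(theta_true)
    bias = float(np.mean(err))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    corr = float(np.corrcoef(theta_true, theta_hat)[0, 1])
    return {"bias": bias, "rmse": rmse, "correlation": corr}

# Toy example with simulated true abilities and noisy estimates.
rng = np.random.default_rng(0)
theta = rng.normal(size=500)
theta_est = theta + rng.normal(scale=0.4, size=500)
print(recovery_metrics(theta, theta_est))
```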

The results showed that FIML outperformed the other methods under most conditions. Interestingly, zero replacement provided accurate ability estimates when missing proportions were very high. MICE-CART and MICE-RFI performed similarly, though their effectiveness varied with the missing data mechanism. All methods improved as the number of items increased and missing proportions decreased.

The authors also found that incorporating information about the missingness itself into the imputation models could improve the performance of MICE-RFI and MICE-CART when the data are sparse and missing at random. This research is valuable for educational assessments, where large amounts of missing data can distort item parameter estimation and bias ability estimates.
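
As an illustration of that last idea, one way to feed missingness information to a tree-based imputer is to append per-item missingness indicators as extra predictor columns. A minimal sketch using scikit-learn's IterativeImputer with a random forest, which is only a loose analogue of MICE-RFI, not the authors' exact setup:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import IterativeImputer

def impute_with_missingness_info(X):
    """Impute a sparse 0/1 item-response matrix (np.nan = omitted item),
    appending per-item missingness indicators so the imputation model can
    exploit the response pattern itself."""
    indicators = np.isnan(X).astype(float)  # 1 where a response is missing
    augmented = np.hstack([X, indicators])  # responses + missingness flags
    imputer = IterativeImputer(
        estimator=RandomForestRegressor(n_estimators=50, random_state=0),
        max_iter=5, random_state=0,
    )
    completed = imputer.fit_transform(augmented)
    # Keep only the response columns and snap them back to 0/1.
    return np.clip(np.round(completed[:, :X.shape[1]]), 0, 1)
```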