Saturday, October 10, 2020

[Article Review] Unraveling the Mystery of Missing Data: Effective Handling Methods for Accurate Ability Estimation

Reference

Xiao, J., & Bulut, O. (2020). Evaluating the Performances of Missing Data Handling Methods in Ability Estimation From Sparse Data. Educational and Psychological Measurement, 80(5), 932-954. https://doi.org/10.1177/0013164420911136

Review

In the article "Evaluating the Performances of Missing Data Handling Methods in Ability Estimation From Sparse Data" (2020), Xiao and Bulut conducted two Monte Carlo simulation studies to evaluate the performance of four methods in handling missing data when estimating ability parameters. These methods include full-information maximum likelihood (FIML), zero replacement, and multiple imputations with chain equations utilizing classification and regression trees (MICE-CART) and random forest imputation (MICE-RFI). The authors assessed the accuracy of ability estimates for each method using bias, root mean square error, and the correlation between true ability parameters and estimated ability parameters.

The results showed that FIML outperformed the other methods under most conditions. Interestingly, zero replacement yielded accurate ability estimates when the proportion of missing data was very high. MICE-CART and MICE-RFI performed similarly, but their effectiveness appeared to depend on the missing data mechanism. All methods performed better as the number of items increased and the proportion of missing data decreased.

The authors also found that incorporating information about the missingness itself could improve the performance of MICE-CART and MICE-RFI when the dataset was sparse and the data were missing at random. This research is valuable for educational assessments, where large amounts of missing data can distort item parameter estimation and lead to biased ability estimates.
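As a rough illustration of that idea (not the authors' implementation, which used MICE-CART and MICE-RFI), the following Python sketch uses scikit-learn's IterativeImputer with a random forest and appends each respondent's missing proportion as an auxiliary column before imputing:

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

def impute_with_missingness_info(responses):
    # responses: respondents x items matrix with np.nan for omitted items.
    responses = np.asarray(responses, dtype=float)
    # Auxiliary column: each respondent's proportion of missing responses,
    # a simple stand-in for "information on missing data."
    miss_prop = np.isnan(responses).mean(axis=1, keepdims=True)
    augmented = np.hstack([responses, miss_prop])
    imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50),
                               max_iter=10, random_state=0)
    completed = imputer.fit_transform(augmented)
    # Drop the auxiliary column; imputed values can be rounded for dichotomous items.
    return completed[:, :responses.shape[1]]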

Friday, October 2, 2020

[Article Review] Enhancing Performance Validity Tests: Exploring Nonmemory-Based PVTs for Better Detection of Noncredible Results

Reference

Webber, T. A., Critchfield, E. A., & Soble, J. R. (2020). Convergent, Discriminant, and Concurrent Validity of Nonmemory-Based Performance Validity Tests. Assessment, 27(7), 1399-1415. https://doi.org/10.1177/1073191118804874

Review

In the article "Convergent, Discriminant, and Concurrent Validity of Nonmemory-Based Performance Validity Tests" (Webber, Critchfield, & Soble, 2020), the authors explore the validity of nonmemory-based Performance Validity Tests (PVTs) in identifying noncredible performance. The study focuses on the Dot Counting Test (DCT), the Wechsler Adult Intelligence Scale-Fourth edition (WAIS-IV) Reliable Digit Span (RDS), and two alternative WAIS-IV Digit Span (DS) subtest PVTs. The authors aim to evaluate the efficiency of these PVTs in supplementing memory-based PVTs to detect noncredible neuropsychological test performance.

The examinees completed the DCT, the WAIS-IV DS, and three criterion PVTs: the Test of Memory Malingering, the Word Memory Test, and the Word Choice Test. Validity groups were defined by passing all three criterion PVTs (valid; n = 69) or failing two or more (noncredible; n = 30). The results show that the DCT, RDS, RDS-Revised (RDS-R), and WAIS-IV DS Age-Corrected Scaled Score (ACSS) were significantly correlated with one another, but not with the memory-based PVTs.
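The group definition amounts to a simple rule; the Python snippet below is only a paraphrase of that criterion, and its handling of examinees who fail exactly one criterion PVT (labeled indeterminate) is an assumption on my part, not something the article specifies:

def validity_group(n_criterion_pvts_failed):
    # Pass all three criterion PVTs -> valid; fail two or more -> noncredible.
    if n_criterion_pvts_failed == 0:
        return "valid"
    if n_criterion_pvts_failed >= 2:
        return "noncredible"
    return "indeterminate"  # failing exactly one: not assigned to either group (assumption)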

In conclusion, the authors demonstrate that combining RDS, RDS-R, and ACSS with DCT improved classification accuracy for detecting noncredible performance among valid-unimpaired examinees. However, this combination was not as effective for valid-impaired examinees. The study suggests that using DCT with ACSS may be the most effective approach to supplement memory-based PVTs in identifying noncredible neuropsychological test performance among cognitively unimpaired examinees.

[Article Review] Understanding the Role of Item Distributions on Reliability Estimation: The Case of Cronbach’s Coefficient Alpha

Reference

Olvera Astivia, O. L., Kroc, E., & Zumbo, B. D. (2020). The Role of Item Distributions on Reliability Estimation: The Case of Cronbach’s Coefficient Alpha. Educational and Psychological Measurement, 80(5), 825-846. https://doi.org/10.1177/0013164420903770

Review

In this article, Olvera Astivia, Kroc, and Zumbo (2020) address the distributional assumptions underlying Cronbach's coefficient alpha and their effect on reliability estimation. The authors propose a framework based on the Fréchet-Hoeffding bounds to show how item distributions constrain the estimation of correlations and covariances. They demonstrate that the shapes of the item distributions restrict the theoretically attainable range of inter-item correlations, which in turn bounds coefficient alpha from above.

The researchers derive a general form of the Fréchet-Hoeffding bounds for discrete random variables and provide R code and a user-friendly web application for calculating them, so other researchers can easily examine the influence of item distributions on their own data. The study is a valuable contribution to the field, clarifying the role of distributional assumptions in reliability estimation and providing accessible tools for further research.
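For a sense of what such a bound looks like in the simplest case, here is a Python sketch (not the authors' R code) of the Fréchet-Hoeffding upper bound on the Pearson correlation between two dichotomous items with given endorsement probabilities:

import numpy as np

def max_pearson_binary(p, q):
    # For Bernoulli items with endorsement probabilities p and q, the joint
    # probability P(X = 1, Y = 1) can be at most min(p, q); this caps the
    # covariance and therefore the attainable correlation.
    max_cov = min(p, q) - p * q
    return max_cov / np.sqrt(p * (1 - p) * q * (1 - q))

print(max_pearson_binary(0.5, 0.5))  # 1.0 -- items of equal difficulty can correlate perfectly
print(max_pearson_binary(0.1, 0.9))  # about 0.11 -- items of very unequal difficulty cannot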

The implications of Olvera Astivia et al.'s (2020) findings are significant, as they challenge common assumptions about coefficient alpha and show that certain correlation structures may be infeasible given the observed item distributions. This insight is crucial for researchers who rely on alpha to evaluate the reliability of their assessments. By taking these distributional constraints into account, researchers can interpret their findings more accurately and contribute to the development of more reliable measurement tools.