Tuesday, June 23, 2020

[Article Review] Examination of Short Form IQ Estimations for WISC-V in Clinical Practice

Reference

Lace, J. W., Merz, Z. C., Kennedy, E. E., Seitz, D. J., Austin, T. A., Ferguson, B. J., & Mohrland, M. D. (2022). Examination of five- and four-subtest short-form IQ estimations for the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) in a mixed clinical sample. Applied Neuropsychology: Child, 11(1), 50-61. https://doi.org/10.1080/21622965.2020.1747021

Review

Lace et al. (2022) investigated the efficacy of ten unique five-subtest (pentad) and four-subtest (tetrad) short-form (SF) combinations for estimating full-scale IQ (FSIQ) on the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) in a mixed clinical sample. A total of 268 pediatric participants were included, with mean scores falling in the low average to average ranges. Regression-based and prorated FSIQ estimates were calculated for each SF combination, and accuracy was assessed by comparing those estimates against true FSIQ. Both regression-based and prorated/adjusted methods yielded FSIQ estimates within five standard score points of true FSIQ for approximately 81-92% (pentads) and 65-76% (tetrads) of participants, with prorated/adjusted estimates showing somewhat better accuracy overall. The study offers clinicians useful guidance for selecting abbreviated assessments of intelligence in clinical practice.
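
The prorating logic is easy to see in code. Below is a minimal sketch of the classic Tellegen and Briggs (1967) composite formula, one standard way to convert a short form's sum of scaled scores into the FSIQ metric; it is not necessarily the exact procedure Lace et al. used, and the subtest scores and intercorrelations here are illustrative placeholders rather than WISC-V manual values.

    import math

    def tellegen_briggs_composite(scaled_scores, intercorrelations):
        """Estimate a composite standard score (mean 100, SD 15) from k
        subtest scaled scores (mean 10, SD 3) via Tellegen & Briggs (1967).

        intercorrelations: the k*(k-1)/2 pairwise subtest correlations,
        which in practice come from the test's technical manual."""
        k = len(scaled_scores)
        sum_mean = 10 * k                                        # mean of the sum
        sum_sd = 3 * math.sqrt(k + 2 * sum(intercorrelations))   # SD of the sum
        return 100 + 15 * (sum(scaled_scores) - sum_mean) / sum_sd

    # Hypothetical tetrad: four scaled scores, six placeholder correlations.
    scores = [8, 9, 7, 10]
    r = [0.55, 0.48, 0.52, 0.50, 0.47, 0.53]
    print(round(tellegen_briggs_composite(scores, r)))  # ≈ 91

Note how the denominator grows with the subtest intercorrelations: the more redundant the chosen subtests, the larger the standard deviation of their sum, and the less each additional point shifts the composite.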

The study adds value to the existing literature by examining SF IQ estimation for the WISC-V specifically in a mixed clinical sample. The finding that both regression-based and prorated/adjusted methods estimate FSIQ within five standard score points of true FSIQ for most participants is directly useful to clinicians considering abbreviated assessment, and the authors supplement it with the benefits, detriments, and other considerations attached to each SF combination. The authors acknowledge the study's limitations, however, including its archival sample and the lack of analyses for specific clinical populations, both of which may limit the generalizability of the findings.

Tuesday, June 2, 2020

[Article Review] Small Samples, Big Results: A Review of Rasch vs. Classical Equating in Credentialing Exams

Reference

Babcock, B., & Hodge, K. J. (2020). Rasch Versus Classical Equating in the Context of Small Sample Sizes. Educational and Psychological Measurement, 80(3), 499-521. https://doi.org/10.1177/0013164419878483

Review

This article review analyzes the research conducted by Babcock and Hodge (2020) in their study, "Rasch Versus Classical Equating in the Context of Small Sample Sizes." The research compares classical and Rasch techniques for equating exam scores when sample sizes are small, specifically N ≤ 100 per exam form. Additionally, the study explores whether pooling multiple forms' worth of data can improve estimation within the Rasch framework.
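
For orientation, the classical side of that comparison rests on simple moment-matching conversions whose small-sample behavior is easy to demonstrate. The sketch below is textbook linear equating under a random-groups design, which is simpler than the study's common-item resampling setup; the score vectors are invented toy data, and with samples this small the means and standard deviations driving the conversion are themselves noisy, which is precisely the fragility the authors probe.

    import statistics as stats

    def linear_equate(y_score, form_x_scores, form_y_scores):
        """Classical linear equating (random-groups design): place a Form Y
        raw score on the Form X scale by matching means and SDs."""
        mx, my = stats.mean(form_x_scores), stats.mean(form_y_scores)
        sx, sy = stats.stdev(form_x_scores), stats.stdev(form_y_scores)
        return mx + (sx / sy) * (y_score - my)

    # Toy data: two small groups (N = 10 each) taking different forms.
    form_x = [52, 61, 58, 47, 66, 55, 59, 63, 50, 57]
    form_y = [49, 57, 54, 44, 62, 51, 55, 59, 47, 53]
    print(round(linear_equate(50, form_x, form_y), 1))  # ≈ 53.5 on Form X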

Babcock and Hodge (2020) simulated multiple years of a small-sample exam program by resampling from a larger certification exam program's real data. Their results demonstrate that pooling multiple administrations' worth of data through the Rasch model can produce more accurate equating than classical methods designed for small samples. Interestingly, WINSTEPS-based Rasch methods that draw on multiple exam forms' data outperformed Bayesian Markov chain Monte Carlo methods: the prior distribution used to estimate the item difficulty parameters biased predicted scores when the exam forms differed in difficulty.
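
The prior-induced bias the authors describe is, at heart, a shrinkage effect, and a stripped-down illustration (not the authors' MCMC setup) makes the mechanism concrete. Assuming each item's difficulty estimate is normally distributed around the true difficulty and the prior on difficulty is N(0, tau^2), the posterior mean is a precision-weighted average that pulls every estimate toward zero; all numbers below are invented.

    def posterior_mean(b_hat, se, tau=1.0):
        """Posterior mean of an item difficulty under a N(0, tau^2) prior
        and a normal likelihood centered at the estimate b_hat with
        standard error se."""
        weight = tau**2 / (tau**2 + se**2)   # share of weight on the data
        return weight * b_hat

    # A genuinely harder form: true difficulties centered near +0.8 logits.
    hard_form = [0.6, 0.9, 1.1, 0.7, 0.8]
    se = 0.45                                # plausible SE near N = 100
    print([round(posterior_mean(b, se), 2) for b in hard_form])
    # [0.5, 0.75, 0.91, 0.58, 0.67] -- every item looks easier than it is

Pulling a harder form's difficulties toward zero makes its items appear easier than they are, which distorts predicted scores whenever forms differ in difficulty, the situation the authors flag.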

In conclusion, the research by Babcock and Hodge (2020) contributes significantly to the field of educational measurement, particularly in the context of small sample exams. Their findings emphasize the benefits of Rasch methods for more accurate equating and suggest that pooling data from multiple exam forms can further enhance the estimation process. As a result, this study serves as a valuable resource for researchers and practitioners interested in the development and administration of credentialing exams for highly specialized professions.

[Article Review] Navigating the SEM Maze: Understanding the Impact of Estimation Methods on Fit Indices

Reference

Shi, D., & Maydeu-Olivares, A. (2020). The Effect of Estimation Methods on SEM Fit Indices. Educational and Psychological Measurement, 80(3), 421-445. https://doi.org/10.1177/0013164419885164

Review

In the article "The Effect of Estimation Methods on SEM Fit Indices," Shi and Maydeu-Olivares (2020) examine how three estimation methods, maximum likelihood (ML), unweighted least squares (ULS), and diagonally weighted least squares (DWLS), affect three population structural equation modeling (SEM) fit indices: the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the standardized root mean square residual (SRMR). They investigate various types and levels of misspecification in factor analysis models, including misspecified dimensionality, omitted cross-loadings, and ignored residual correlations.
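
Because the argument turns on how these indices behave, a short sketch of their standard sample formulas may be useful; note that Shi and Maydeu-Olivares analyze population versions of the indices, and every number below is illustrative rather than taken from the article.

    import math

    def rmsea(chi2, df, n):
        """Sample RMSEA from the model chi-square, degrees of freedom, and N."""
        return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

    def cfi(chi2_m, df_m, chi2_b, df_b):
        """CFI from the target model and the baseline (independence) model."""
        d_m = max(chi2_m - df_m, 0.0)
        d_b = max(chi2_b - df_b, d_m)
        return 1.0 - d_m / d_b if d_b > 0 else 1.0

    def srmr(observed, implied):
        """SRMR: root mean square residual between observed and model-implied
        correlation matrices, over the lower triangle including the diagonal."""
        total, count = 0.0, 0
        for i in range(len(observed)):
            for j in range(i + 1):
                total += (observed[i][j] - implied[i][j]) ** 2
                count += 1
        return math.sqrt(total / count)

    # Illustrative fit statistics for a hypothetical model on N = 500 cases:
    print(round(rmsea(chi2=112.4, df=54, n=500), 3))   # ≈ 0.047
    print(round(cfi(112.4, 54, 1840.0, 66), 3))        # ≈ 0.967
    obs = [[1.00, 0.42, 0.30], [0.42, 1.00, 0.35], [0.30, 0.35, 1.00]]
    imp = [[1.00, 0.40, 0.33], [0.40, 1.00, 0.32], [0.33, 0.32, 1.00]]
    print(round(srmr(obs, imp), 3))                    # ≈ 0.019

The estimation method enters through the chi-square-type discrepancies in the RMSEA and CFI numerators, which helps explain the authors' finding that those two indices need estimator-specific cutoffs while the residual-based SRMR does not.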

The authors find that estimation methods significantly affect the RMSEA and CFI, necessitating the use of different cutoff values for different estimators. This result highlights the importance of being cautious when interpreting fit indices, as they can be influenced by the chosen estimation method. In contrast, the SRMR proves to be robust to the method used for estimating model parameters, making it a reliable choice for evaluating model fit at the population level.

Overall, Shi and Maydeu-Olivares (2020) provide valuable insight into the impact of estimation methods on SEM fit indices and offer guidance for researchers when selecting and interpreting these indices. Their findings underscore the need for careful consideration when choosing an estimation method and interpreting fit indices, as these choices can significantly influence conclusions drawn from SEM analyses.