Thursday, December 17, 2020

[Article Review] Nurturing Caregiving: A Key to Mitigate Early Adversities and Boost Adolescent Human Capital

Reference

Trude, A. C. B., Richter, L. M., Behrman, J. R., Stein, A. D., Menezes, A. M. B., Black, M. M., et al. (2021). Effects of responsive caregiving and learning opportunities during pre-school ages on the association of early adversities and adolescent human capital: an analysis of birth cohorts in two middle-income countries. The Lancet Child & Adolescent Health, 5(1), 37-46. https://doi.org/10.1016/S2352-4642(20)30309-6

Review

The study by Trude et al. (2021) investigates the impact of responsive caregiving and learning opportunities during preschool ages on the relationship between early adversities and adolescent human capital in two middle-income countries, Brazil and South Africa. The researchers analyzed longitudinal birth cohort data from the 1993 Pelotas Birth Cohort (Brazil) and the Birth to Twenty Plus (Bt20+) Birth Cohort (South Africa), focusing on three human capital indicators: intelligence quotient (IQ), psychosocial adjustment, and height.

The study found that an increase in cumulative adversities negatively impacted adolescent IQ in both cohorts. However, the negative effects of early adversities on IQ were attenuated by highly nurturing environments. Responsive caregiving and learning opportunities during preschool ages had a significant positive impact on adolescent IQ in the Brazilian cohort, while responsive caregiving played a more significant role in the South African cohort.

These findings emphasize the importance of nurturing care during early childhood in mitigating the effects of early adversities on adolescent human capital. By providing responsive caregiving and learning opportunities, caregivers can foster a protective environment that promotes positive cognitive and psychosocial development in children facing adversity. The study underscores the need for policies and interventions aimed at supporting nurturing care in middle-income countries, to bolster human capital and improve long-term outcomes for adolescents.

Saturday, December 5, 2020

[Article Review] Revolutionizing Need for Cognition Assessment: Unveiling the Efficient NCS-6

Reference

Coelho, G. L. d. H., Hanel, P. H. P., & Wolf, L. J. (2018). The Very Efficient Assessment of Need for Cognition: Developing a Six-Item Version. Assessment, 27(8), 1870-1885. https://doi.org/10.1177/1073191118793208

Review

In the article, "The Very Efficient Assessment of Need for Cognition: Developing a Six-Item Version," Coelho, Hanel, and Wolf (2018) introduce the NCS-6, a shortened version of the 18-item Need for Cognition Scale (NCS-18). Need for cognition refers to people's tendency to engage in and enjoy thinking, a construct that has become influential across the social and medical sciences. Using three samples from the United States and the United Kingdom (N = 1,596), the researchers reduced the scale from 18 items to 6 based on criteria such as discrimination values, threshold levels, measurement precision, item-total correlations, and factor loadings.
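One of the selection criteria the authors applied, the corrected item-total correlation, is easy to illustrate. The sketch below is my own illustration, not the authors' code: the response data, sample size, and latent-trait model are invented purely to show the mechanics of ranking items and keeping the top six.

```python
import numpy as np

rng = np.random.default_rng(0)
n_resp, n_items = 200, 18

# Hypothetical data: a shared latent trait plus item noise, binned to 1-5.
trait = rng.normal(size=(n_resp, 1))
noise = rng.normal(size=(n_resp, n_items))
responses = np.clip(np.round(3 + trait + noise), 1, 5)

total = responses.sum(axis=1)
# Corrected item-total correlation: each item against the total score
# with that item removed.
corrected = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(n_items)
])

# Retain the six items with the highest corrected correlations.
retained = np.argsort(corrected)[::-1][:6]
print(sorted(retained.tolist()))
```

In the actual study this ranking was combined with IRT-based criteria (discrimination values, thresholds, measurement precision) and factor loadings rather than used alone.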

The authors then confirmed the one-factor structure and established measurement invariance across countries and gender. They demonstrated that while the NCS-6 provides significant time savings, it comes at a minimal cost in terms of its construct validity with external variables such as openness, cognitive reflection test, and need for affect. This suggests that the NCS-6 is a parsimonious, reliable, and valid measure of the need for cognition.

In conclusion, Coelho et al.'s (2018) article provides valuable insights into the development of a more efficient measure of the need for cognition. The NCS-6 not only reduces the time required for assessment but also maintains the validity and reliability of the original scale. This study contributes to the understanding and measurement of need for cognition, which has implications for various fields, including social and medical sciences.

[Article Review] Decoding Prior Sensitivity in Bayesian Structural Equation Modeling for Sparse Factor Loading Structures

Reference

Liang, X. (2020). Prior Sensitivity in Bayesian Structural Equation Modeling for Sparse Factor Loading Structures. Educational and Psychological Measurement, 80(6), 1025-1058. https://doi.org/10.1177/0013164420906449

Review

Liang's (2020) article, "Prior Sensitivity in Bayesian Structural Equation Modeling for Sparse Factor Loading Structures," delves into the application of Bayesian structural equation modeling (BSEM) with small-variance normal distribution priors (BSEM-N) for the examination and estimation of sparse factor loading structures. The author conducts a two-part investigation, consisting of a simulation study (Study 1) and an empirical example (Study 2), to explore the prior sensitivity in BSEM-N. The results reveal that the optimal balance between true and false positives is achieved when the 95% credible intervals of shrinkage priors barely cover the population cross-loading values.

In Study 1, the author examines the prior sensitivity in BSEM-N using model fit, population model recovery, true and false positive rates, and parameter estimation. The study assesses seven shrinkage priors on cross-loadings and five noninformative/vague priors on other model parameters. Study 2 provides a real data example to demonstrate the impact of different priors on model fit and parameter selection and estimation. The empirical findings suggest that a sparse cross-loading structure with a minimal number of nontrivial cross-loadings and relatively high primary loading values is ideal for variable selection.
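The shrinkage behavior of small-variance, zero-mean priors can be seen in a minimal conjugate normal-normal sketch. This is my own illustration, not the author's code: the loading value, standard error, and prior variances are invented, and the sample estimate is treated as a normal likelihood.

```python
def shrunk_loading(estimate, se, prior_var):
    """Posterior mean of a cross-loading under a N(0, prior_var) prior,
    treating the sample estimate as a normal likelihood N(estimate, se**2)."""
    prior_prec = 1.0 / prior_var
    data_prec = 1.0 / se ** 2
    return data_prec * estimate / (prior_prec + data_prec)

# A nontrivial cross-loading of 0.30 with SE 0.05, under three prior variances:
for v in (0.001, 0.01, 0.1):
    print(v, round(shrunk_loading(0.30, 0.05, v), 3))
```

With the smallest prior variance, the 0.30 loading is shrunk below 0.1, which illustrates why the author cautions against zero-mean priors when cross-loadings are relatively large.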

The article's conclusion emphasizes the importance of considering the study's goal when selecting priors for BSEM-N. To improve parameter estimates, a relatively large prior variance is preferred. The author advises against using BSEM-N with zero-mean priors for the estimation of cross-loadings and factor correlations when cross-loadings are relatively large. This comprehensive review of Liang's (2020) work highlights the practical implications and methodological considerations for researchers employing BSEM-N in their studies.


Monday, November 2, 2020

[Article Review] Shining a Light on the Link between Vitamin D during Pregnancy and Children's Cognitive Development

Reference

Melough, M. M., Murphy, L. E., Graff, J. C., Derefinko, K. J., LeWinn, K. Z., Bush, N. R., Enquobahrie, D. A., Loftus, C. T., Kocak, M., Sathyanarayana, S., & Tylavsky, F. A. (2021). Maternal Plasma 25-Hydroxyvitamin D during Gestation Is Positively Associated with Neurocognitive Development in Offspring at Age 4–6 Years. The Journal of Nutrition, 151(1), 132-139. https://doi.org/10.1093/jn/nxaa309

Review

Melough et al.'s (2021) study explored the relationship between gestational 25-hydroxyvitamin D [25(OH)D] levels and IQ scores among children aged 4-6 years. The researchers used data from the CANDLE (Conditions Affecting Neurocognitive Development and Learning in Early Childhood) cohort, which included 1,503 women in their second trimester of healthy singleton pregnancies. The study found that higher maternal 25(OH)D levels during the second trimester were associated with higher Full Scale IQ, Verbal IQ, and Nonverbal IQ scores in offspring at 4-6 years old. The authors observed no evidence of effect modification by race.

The results of this study suggest that gestational vitamin D status may be an essential predictor of neurocognitive development. These findings have implications for prenatal nutrition recommendations and are particularly relevant for Black and other dark-skinned women who are at a higher risk of vitamin D deficiency. By emphasizing the importance of maintaining adequate vitamin D levels during pregnancy, healthcare providers can better support optimal neurocognitive development in children.

Saturday, October 10, 2020

[Article Review] Unraveling the Mystery of Missing Data: Effective Handling Methods for Accurate Ability Estimation

Reference

Xiao, J., & Bulut, O. (2020). Evaluating the Performances of Missing Data Handling Methods in Ability Estimation From Sparse Data. Educational and Psychological Measurement, 80(5), 932-954. https://doi.org/10.1177/0013164420911136

Review

In the article "Evaluating the Performances of Missing Data Handling Methods in Ability Estimation From Sparse Data" (2020), Xiao and Bulut conducted two Monte Carlo simulation studies to evaluate how well four methods handle missing data when estimating ability parameters: full-information maximum likelihood (FIML), zero replacement, and multiple imputation by chained equations using either classification and regression trees (MICE-CART) or random forest imputation (MICE-RFI). The authors assessed the accuracy of each method's ability estimates using bias, root mean square error, and the correlation between true and estimated ability parameters.
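The three accuracy criteria are straightforward to compute. A minimal sketch, with simulated abilities and noisy estimates standing in for any one method's output (the sample size and error level are invented):

```python
import numpy as np

def evaluate(true_theta, est_theta):
    """Bias, root mean square error, and correlation between true and
    estimated ability parameters."""
    err = est_theta - true_theta
    return {
        "bias": float(err.mean()),
        "rmse": float(np.sqrt((err ** 2).mean())),
        "corr": float(np.corrcoef(true_theta, est_theta)[0, 1]),
    }

rng = np.random.default_rng(1)
theta = rng.normal(size=500)                   # true abilities
est = theta + rng.normal(scale=0.3, size=500)  # hypothetical estimates
print(evaluate(theta, est))
```

In the simulations, each missing-data method would produce its own set of estimates, and these three numbers are what the authors compared across methods and conditions.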

The results of the study showed that FIML outperformed the other methods under most conditions. Interestingly, zero replacement provided accurate ability estimates when the missing proportions were very high. MICE-CART and MICE-RFI demonstrated similar performances, but their effectiveness appeared to vary depending on the missing data mechanism. As the number of items increased and missing proportions decreased, all methods performed better.

The authors also found that incorporating information on missing data could improve the performance of MICE-RFI and MICE-CART when the dataset is sparse and the missing data mechanism is missing at random. This research is valuable for educational assessments, where large amounts of missing data can distort item parameter estimation and lead to biased ability estimates.

Friday, October 2, 2020

[Article Review] Enhancing Performance Validity Tests: Exploring Nonmemory-Based PVTs for Better Detection of Noncredible Results

Reference

Webber, T. A., Critchfield, E. A., & Soble, J. R. (2020). Convergent, Discriminant, and Concurrent Validity of Nonmemory-Based Performance Validity Tests. Assessment, 27(7), 1399-1415. https://doi.org/10.1177/1073191118804874

Review

In the article "Convergent, Discriminant, and Concurrent Validity of Nonmemory-Based Performance Validity Tests" (Webber, Critchfield, & Soble, 2020), the authors explore the validity of nonmemory-based Performance Validity Tests (PVTs) in identifying noncredible performance. The study focuses on the Dot Counting Test (DCT), the Wechsler Adult Intelligence Scale-Fourth edition (WAIS-IV) Reliable Digit Span (RDS), and two alternative WAIS-IV Digit Span (DS) subtest PVTs. The authors aim to evaluate the efficiency of these PVTs in supplementing memory-based PVTs to detect noncredible neuropsychological test performance.

The examinees completed the DCT, the WAIS-IV DS, and three criterion PVTs: the Test of Memory Malingering, the Word Memory Test, and the Word Choice Test. Validity groups were defined by passing all three criterion PVTs (valid; n = 69) or failing two or more (noncredible; n = 30). The results show that the DCT, RDS, RDS-Revised (RDS-R), and WAIS-IV DS Age-Corrected Scaled Score (ACSS) were significantly intercorrelated but were not correlated with the memory-based PVTs.

In conclusion, the authors demonstrate that combining RDS, RDS-R, and ACSS with DCT improved classification accuracy for detecting noncredible performance among valid-unimpaired examinees. However, this combination was not as effective for valid-impaired examinees. The study suggests that using DCT with ACSS may be the most effective approach to supplement memory-based PVTs in identifying noncredible neuropsychological test performance among cognitively unimpaired examinees.

[Article Review] Understanding the Role of Item Distributions on Reliability Estimation: The Case of Cronbach’s Coefficient Alpha

Reference

Olvera Astivia, O. L., Kroc, E., & Zumbo, B. D. (2020). The Role of Item Distributions on Reliability Estimation: The Case of Cronbach’s Coefficient Alpha. Educational and Psychological Measurement, 80(5), 825-846. https://doi.org/10.1177/0013164420903770

Review

In this article, Olvera Astivia, Kroc, and Zumbo (2020) address the distributional assumptions behind Cronbach's coefficient alpha and their effect on reliability estimation. The authors propose a framework based on the Fréchet-Hoeffding bounds to show how item distributions constrain the estimation of correlations and covariances: because the shapes of the item distributions restrict the theoretically attainable correlation range, coefficient alpha is itself bounded above.

The researchers derive a general form of the Fréchet-Hoeffding bounds for discrete random variables and provide R code and a user-friendly web application for calculating these bounds. This practical application allows other researchers to easily test the influence of item distributions on their data. The study serves as a valuable contribution to the field by clarifying the role of distributional assumptions in reliability estimation and providing accessible tools for further research.
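The authors supply R code and a web application for the general discrete case. For intuition, the upper bound in the simplest case, two binary items, can be written directly (this is a sketch of the idea, not the authors' code): the maximum covariance between Bernoulli(p) and Bernoulli(q) items is min(p, q) − pq.

```python
import math

def max_pearson_binary(p, q):
    """Upper Fréchet-Hoeffding bound on the Pearson correlation between
    two binary items with endorsement probabilities p and q."""
    cov_max = min(p, q) - p * q
    return cov_max / math.sqrt(p * (1 - p) * q * (1 - q))

# Matched marginals allow a perfect correlation; mismatched ones do not.
print(max_pearson_binary(0.5, 0.5))  # 1.0
print(max_pearson_binary(0.9, 0.1))  # ~0.111: r = 1 is unattainable
```

The second case shows the point of the article: when one item is endorsed by 90% of respondents and another by 10%, no scoring of the data can produce a correlation above about 0.11, and alpha is capped accordingly.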

The implications of Olvera Astivia et al.'s (2020) findings are significant, as they challenge previous assumptions about coefficient alpha and suggest that certain correlation structures may be unfeasible. This insight is crucial for researchers who rely on this measure for evaluating the reliability of their assessments. By considering the distributional constraints, researchers can ensure more accurate interpretations of their findings and contribute to the development of more reliable measurement tools.

Thursday, August 6, 2020

[Article Review] Unlocking the Potential of Fit Index Difference Values in Exploratory Factor Analysis

Reference

Finch, W. H. (2020). Using Fit Statistic Differences to Determine the Optimal Number of Factors to Retain in an Exploratory Factor Analysis. Educational and Psychological Measurement, 80(2), 217-241. https://doi.org/10.1177/0013164419865769

Review

In this article, the author investigates the effectiveness of model fit indices in determining the optimal number of factors to retain in exploratory factor analysis (EFA). The article emphasizes the absence of a universally optimal statistical tool for resolving this issue and discusses the mixed results of using model fit indices in conjunction with normally distributed indicators and categorical indicators.

Finch (2020) conducted a simulation study comparing the performance of fit index difference values and parallel analysis, a widely used and reliable method for determining factor retention. The results demonstrated that fit index difference values outperformed parallel analysis for categorical indicators and for normally distributed indicators when factor loadings were small. This finding highlights the potential of fit index difference values as a viable alternative to parallel analysis in certain situations.
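For readers unfamiliar with the comparison method, parallel analysis retains factors whose observed eigenvalues exceed those of random data of the same dimensions. A compact sketch (the two-factor toy data, loadings, and sample size are invented for illustration; this is not Finch's simulation code):

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis: count factors whose observed eigenvalues
    exceed the mean eigenvalues of same-sized random normal data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim = np.zeros(p)
    for _ in range(n_sims):
        r = rng.normal(size=(n, p))
        sim += np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    sim /= n_sims
    return int(np.sum(obs > sim))

# Toy two-factor data: six indicators, three loading on each factor.
rng = np.random.default_rng(1)
f = rng.normal(size=(300, 2))
loadings = np.array([[0.8, 0], [0.8, 0], [0.8, 0],
                     [0, 0.8], [0, 0.8], [0, 0.8]])
x = f @ loadings.T + rng.normal(scale=0.6, size=(300, 6))
print(parallel_analysis(x))  # recovers the two factors
```

Finch's fit-index-difference approach instead fits EFA models with increasing numbers of factors and examines changes in fit statistics, which proved advantageous in the categorical and small-loading conditions.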

The implications of Finch's (2020) findings have a considerable impact on the field of social sciences research. By understanding the effectiveness of fit index difference values in determining the optimal number of factors to retain in EFA, researchers can make more informed decisions when selecting the appropriate statistical tool. This, in turn, can lead to more accurate and valid results, enhancing the quality of research in the social sciences.

Monday, August 3, 2020

[Article Review] A New Look at Cohort Trend and Underlying Mechanisms in Cognitive Functioning

Reference

Zheng, H. (2021). A New Look at Cohort Trend and Underlying Mechanisms in Cognitive Functioning. The Journals of Gerontology: Series B, 76(8), 1652-1663. https://doi.org/10.1093/geronb/gbaa107

Review

In this study, Zheng (2021) examines trends and underlying mechanisms in cognitive functioning across seven decades of birth cohorts, from the Greatest Generation to the Baby Boomers. The study uses data from the Health and Retirement Study and measures cognitive functioning as a summary score on a 35-point battery of cognitive items. The author finds that cognitive functioning improved from the Greatest Generation through the Late Children of Depression and the War Babies, but then declined significantly beginning with the Early Baby Boomers and continuing into the Mid Baby Boomers. This pattern is observed across genders, races/ethnicities, education groups, occupations, and income and wealth quartiles.

The author also finds that the worsening cognitive functioning among Baby Boomers cannot be attributed to childhood conditions, adult education, or occupation. Instead, it is attributable to lower household wealth, a lower likelihood of marriage, higher levels of loneliness, depression, and psychiatric problems, and more cardiovascular risk factors (e.g., obesity, physical inactivity, hypertension, stroke, diabetes, and heart disease). These results suggest that the worsening cognitive functioning among Baby Boomers may reverse past favorable trends in dementia as they reach older ages, with cognitive impairment becoming more common unless effective interventions and policy responses are put in place.

Overall, this article provides important insights into the cohort trend and underlying mechanisms in cognitive functioning, particularly among Baby Boomers. The findings have implications for dementia prevention and intervention policies, highlighting the importance of addressing risk factors such as cardiovascular disease, depression, and loneliness. The article's limitations include the study's reliance on self-reported data and the exclusion of certain groups such as those living in nursing homes or with severe cognitive impairment. Nonetheless, the study contributes to our understanding of cognitive functioning and its implications for healthy aging.

Tuesday, June 23, 2020

[Article Review] Examination of Short Form IQ Estimations for WISC-V in Clinical Practice

Reference

Lace, J. W., Merz, Z. C., Kennedy, E. E., Seitz, D. J., Austin, T. A., Ferguson, B. J., & Mohrland, M. D. (2022). Examination of five- and four-subtest short-form IQ estimations for the Wechsler Intelligence Scale for Children-Fifth edition (WISC-V) in a mixed clinical sample. Applied Neuropsychology: Child, 11(1), 50-61. https://doi.org/10.1080/21622965.2020.1747021

Review

Lace et al. (2022) investigated the efficacy of ten unique five-subtest (pentad) and four-subtest (tetrad) short-form (SF) combinations in estimating full-scale IQ (FSIQ) on the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) in a mixed clinical sample. A total of 268 pediatric participants were included in the study, with mean scores falling in the low average-to-average ranges. Regression-based and prorated FSIQ estimates were calculated, and the accuracy of each SF combination was assessed by comparing the estimates to the true FSIQ. Results showed that both regression-based and prorated/adjusted methods provided FSIQ estimates that were accurate within five Standard Score points of true FSIQ for approximately 81-92% (pentad) and 65-76% (tetrads) of participants. Prorated/adjusted estimates appeared to provide somewhat better accuracy than regression-based estimates. The study provides clinicians with useful information when selecting abbreviated assessments of intelligence in clinical practice.
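Prorating, the simpler of the two estimation approaches, scales a short form's sum of scaled scores up to the full subtest count before the usual FSIQ conversion. The sketch below is a generic illustration: the scores are invented, and the final step in practice is a lookup in the test's normative conversion table, which is not reproduced here.

```python
def prorated_sum(scaled_scores, full_count=7):
    """Prorate a short form's sum of scaled scores up to the full
    FSIQ subtest count (the WISC-V FSIQ is based on seven subtests)."""
    return sum(scaled_scores) * full_count / len(scaled_scores)

# Hypothetical pentad of subtest scaled scores (mean 10, SD 3 metric):
print(prorated_sum([9, 10, 8, 11, 10]))  # 67.2, then converted to FSIQ
                                         # via the normative table
```

Regression-based estimation instead predicts FSIQ from the short-form score using weights derived from a reference sample; Lace et al. compared both approaches against the fully administered FSIQ.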

The study adds value to the existing literature by examining the efficacy of SF IQ estimations for the WISC-V in a mixed clinical sample. The authors' finding that both regression-based and prorated/adjusted methods provided accurate FSIQ estimates within five Standard Score points of true FSIQ for most participants is useful for clinicians who are seeking to administer abbreviated assessments of intelligence. The authors also provide useful information on the benefits, detriments, and other considerations of each SF combination. However, the authors acknowledge the limitations of the study, including the use of an archival sample and the lack of consideration for specific clinical populations, which may limit the generalizability of the findings.

Tuesday, June 2, 2020

[Article Review] Small Samples, Big Results: A Review of Rasch vs. Classical Equating in Credentialing Exams

Reference

Babcock, B., & Hodge, K. J. (2020). Rasch Versus Classical Equating in the Context of Small Sample Sizes. Educational and Psychological Measurement, 80(3), 499-521. https://doi.org/10.1177/0013164419878483

Review

This review analyzes Babcock and Hodge's (2020) study, "Rasch Versus Classical Equating in the Context of Small Sample Sizes." The research compares classical and Rasch techniques for equating exam scores when sample sizes are small (N ≤ 100 per exam form). Additionally, the study explores the potential of pooling multiple forms' worth of data to improve estimation within the Rasch framework.
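For context, the Rasch model at the center of the comparison expresses the probability of a correct response with a single difficulty parameter per item; equating then amounts to placing all forms' item difficulties on a common scale. A minimal sketch of the response function (not the authors' estimation code):

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability that an examinee with ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An examinee of average ability (theta = 0) on items of varying difficulty:
for b in (-1.0, 0.0, 1.0):
    print(b, round(rasch_p(0.0, b), 3))
```

Because every item contributes information about the same difficulty scale, pooling responses from multiple administrations sharpens the difficulty estimates, which is the mechanism behind the accuracy gains reported below.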

Babcock and Hodge (2020) simulated multiple years of a small-sample exam program by resampling from a larger certification exam program's real data. Their results demonstrate that combining multiple administrations' worth of data through the Rasch model can lead to more accurate equating compared to classical methods designed for small samples. Interestingly, the study finds that WINSTEPS-based Rasch methods, which utilize multiple exam forms' data, work better than Bayesian Markov Chain Monte Carlo methods, as the prior distribution used to estimate the item difficulty parameters biased predicted scores when there were difficulty differences between exam forms.

In conclusion, the research by Babcock and Hodge (2020) contributes significantly to the field of educational measurement, particularly in the context of small sample exams. Their findings emphasize the benefits of Rasch methods for more accurate equating and suggest that pooling data from multiple exam forms can further enhance the estimation process. As a result, this study serves as a valuable resource for researchers and practitioners interested in the development and administration of credentialing exams for highly specialized professions.

[Article Review] Navigating the SEM Maze: Understanding the Impact of Estimation Methods on Fit Indices

Reference

Shi, D., & Maydeu-Olivares, A. (2020). The Effect of Estimation Methods on SEM Fit Indices. Educational and Psychological Measurement, 80(3), 421-445. https://doi.org/10.1177/0013164419885164

Review

In the article "The Effect of Estimation Methods on SEM Fit Indices" by Shi and Maydeu-Olivares (2020), the authors examine how different estimation methods, specifically maximum likelihood (ML), unweighted least squares (ULS), and diagonally weighted least squares (DWLS), impact three population structural equation modeling (SEM) fit indices: the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the standardized root mean square residual (SRMR). They investigate various types and levels of misspecification in factor analysis models, including misspecified dimensionality, omitting cross-loadings, and ignoring residual correlations.
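As a point of reference for two of the indices, the standard sample formulas based on the model chi-square can be written compactly. The fit values below are invented for illustration and are not from the article.

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a model chi-square."""
    return math.sqrt(max(0.0, (chi2 - df) / (df * (n - 1))))

def cfi(chi2, df, chi2_base, df_base):
    """Comparative fit index relative to the baseline (independence) model."""
    d_model = max(0.0, chi2 - df)
    d_base = max(d_model, chi2_base - df_base, 1e-12)
    return 1.0 - d_model / d_base

# Hypothetical results: chi2 = 85 on 40 df with N = 400;
# baseline chi2 = 900 on 55 df.
print(round(rmsea(85, 40, 400), 3))    # 0.053
print(round(cfi(85, 40, 900, 55), 3))  # 0.947
```

Because ML, ULS, and DWLS yield different chi-square values for the same model and data, the resulting RMSEA and CFI values differ by estimator, which is why the authors argue for estimator-specific cutoffs.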

The authors find that estimation methods significantly affect the RMSEA and CFI, necessitating the use of different cutoff values for different estimators. This result highlights the importance of being cautious when interpreting fit indices, as they can be influenced by the chosen estimation method. In contrast, the SRMR proves to be robust to the method used for estimating model parameters, making it a reliable choice for evaluating model fit at the population level.

Overall, Shi and Maydeu-Olivares (2020) provide valuable insight into the impact of estimation methods on SEM fit indices and offer guidance for researchers when selecting and interpreting these indices. Their findings underscore the need for careful consideration when choosing an estimation method and interpreting fit indices, as these choices can significantly influence conclusions drawn from SEM analyses.

Thursday, March 26, 2020

[Article Review] White Matter Microstructure and Cognitive Performance: Insights from a Meta-Analysis in Schizophrenia

Reference

Holleran, L., Kelly, S., Alloza, C., Agartz, I., Andreassen, O. A., Arango, C., ... & Donohoe, G. (2020). The Relationship Between White Matter Microstructure and General Cognitive Ability in Patients With Schizophrenia and Healthy Participants in the ENIGMA Consortium. American Journal of Psychiatry, 177(6), 537-547. https://doi.org/10.1176/appi.ajp.2019.19030225

Review

Holleran et al. (2020) conducted a meta-analysis to explore the relationship between white matter microstructure and cognitive performance in patients with schizophrenia and healthy participants using data from the ENIGMA Consortium. The study included 760 patients with schizophrenia and 957 healthy participants from 11 sites. The authors used principal component analysis to calculate a global fractional anisotropy component and a fractional anisotropy component for six long association tracts. The results showed that higher fractional anisotropy was associated with higher cognitive ability. The study provides robust evidence that cognitive ability is associated with global structural connectivity, independent of diagnosis.
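The global component the authors describe can be sketched with a standard PCA step: standardize tract-level fractional anisotropy values and score each participant on the first principal component. The data below are simulated, and this is a generic illustration of the technique, not the consortium's analysis pipeline.

```python
import numpy as np

def first_pc_scores(x):
    """Scores on the first principal component of standardized columns
    (e.g., a global fractional anisotropy component across tracts)."""
    z = (x - x.mean(axis=0)) / x.std(axis=0)
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    return z @ vt[0]

rng = np.random.default_rng(0)
g = rng.normal(size=(100, 1))                    # shared global signal
fa = 0.8 * g + 0.3 * rng.normal(size=(100, 6))   # six correlated "tracts"
scores = first_pc_scores(fa)
print(scores.shape)
```

Because the tracts share a common signal, the first component captures most of their joint variance, and each participant's score on it serves as the single "global FA" variable related to cognitive ability in the analyses.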

The authors noted that schizophrenia is associated with widespread white matter microstructural abnormalities, but the functional effects of these abnormalities remain unclear. This study contributes to the current understanding of the relationship between white matter microstructure and cognitive performance in patients with schizophrenia and healthy participants. The meta-analysis included a large sample size from multiple sites, and a common analysis pipeline was used, which enhances the validity of the results. The findings suggest that there is a more general, rather than disease-specific, pattern of association between fractional anisotropy and cognitive ability.

Overall, the study by Holleran et al. (2020) provides valuable insights into the relationship between white matter microstructure and cognitive performance in patients with schizophrenia and healthy participants. The findings suggest that cognitive ability is associated with global structural connectivity, and the association is independent of diagnosis. The study highlights the importance of investigating the functional effects of white matter microstructural abnormalities in schizophrenia to improve social and functional outcomes in patients.

Tuesday, January 14, 2020

[Article Review] The Burden of Early-life Chemical Exposure on Neurodevelopmental Disabilities in the US

Reference

Gaylord, A., Osborne, G., Ghassabian, A., Malits, J., Attina, T., & Trasande, L. (2020). Trends in neurodevelopmental disability burden due to early life chemical exposure in the USA from 2001 to 2016: A population-based disease burden and cost analysis. Molecular and Cellular Endocrinology, 502, 110666. https://doi.org/10.1016/j.mce.2019.110666

Review

The study by Gaylord et al. (2020) aimed to quantify the burden of neurodevelopmental disability, and its economic costs, associated with early-life exposure to endocrine-disrupting chemicals (EDCs) in the United States from 2001 to 2016. Using data from the National Health and Nutrition Examination Surveys, the authors estimated the intellectual disability (ID) burden attributable to in-utero exposure to polybrominated diphenyl ethers (PBDEs), organophosphates, and methylmercury, and to early-life exposure to lead. They also calculated the economic costs of the IQ points lost and of the cases of intellectual disability. The results showed that PBDE exposure was the largest contributor to the ID burden, accounting for a total of 162 million IQ points lost and over 738,000 cases of intellectual disability; lead, organophosphates, and methylmercury were the other contributors. Although most trends showed improvement in children's neurodevelopmental health, they also pointed to the substitution of potentially harmful chemicals for those being phased out.

The findings of this study have significant implications for public health policies and regulations regarding the use and exposure of EDCs. The authors suggest that current regulations on EDCs should be strengthened to reduce the burden of neurodevelopmental disabilities and the associated economic costs. The study also highlights the need for continued monitoring of trends in early-life chemical exposure to ensure that the use of potentially harmful chemicals is adequately controlled. Limitations of the study include the use of cross-sectional data and the lack of information on exposure to other chemicals known to cause neurotoxicity. However, the study provides valuable insights into the burden of neurodevelopmental disabilities and the costs associated with exposure to EDCs in the US.