Tuesday, December 19, 2023

Introducing the Tellegen & Briggs Formula 4 Calculator: A New Psychometric Resource at Cogn-IQ.org

I am pleased to announce the availability of the Tellegen & Briggs Formula 4 Calculator on Cogn-IQ.org. This tool represents a significant advancement for psychometricians, streamlining the combination of subtest scores into composite psychometric scales.

The Tellegen & Briggs Formula, originally conceptualized by Auke Tellegen and P. F. Briggs in 1967, has long been recognized for its utility in recalibrating and interpreting scores from a variety of psychological assessments. Its initial application was with Wechsler's subtests, yet its versatility extends to various psychological and educational evaluations.


This new online calculator encapsulates the essence of the Tellegen & Briggs Formula, making it more accessible to practitioners and researchers. The interface is designed for ease of use, allowing users to enter the necessary statistical parameters: the standard deviation of the overall scale (e.g., an IQ scale), the subtest scores, the number of subtests, the sum of the correlations between subtests, and the relevant mean scores.
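
For readers who want to see the mechanics, the sketch below implements the core composite formula in Python under the common simplifying assumption that every subtest shares the same mean and standard deviation (e.g., Wechsler scaled scores with mean 10 and SD 3). The function name and defaults are illustrative only and do not reflect the calculator's actual code.

```python
import math

def tellegen_briggs_composite(subtest_scores, subtest_mean, subtest_sd,
                              sum_of_intercorrelations,
                              composite_mean=100.0, composite_sd=15.0):
    """Composite standard score from k subtests (Tellegen & Briggs, 1967).

    Assumes all subtests share the same mean and SD; sum_of_intercorrelations
    is the sum of r_ij over all distinct pairs of subtests.
    """
    k = len(subtest_scores)
    z_sum = sum((x - subtest_mean) / subtest_sd for x in subtest_scores)
    sd_of_sum = math.sqrt(k + 2.0 * sum_of_intercorrelations)  # SD of the summed z-scores
    return composite_mean + composite_sd * z_sum / sd_of_sum

# Three scaled scores (mean 10, SD 3) whose pairwise correlations sum to 1.8
print(round(tellegen_briggs_composite([13, 12, 14], 10, 3, 1.8), 1))
```

In this example the composite works out to about 117.5, higher than the 115 implied by the examinee's average subtest performance of +1 SD, which is precisely the aggregation effect the formula is designed to capture.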

It is important to note, as the literature highlights, that the formula tends to slightly underestimate scores at the high end of the distribution and overestimate them at the low end. This deviation is typically within 2-3 points but can reach up to 6 points in some cases, especially in cognitive assessments of populations at the extremes of intellectual functioning. This nuance underscores the need for careful interpretation of the tool's results.

Despite this, the Tellegen & Briggs Formula remains an indispensable asset in the field of psychological testing, particularly when direct standardization data are not available. Its adaptability makes it a reliable framework for score standardization and interpretation in diverse assessment scenarios.

I encourage my colleagues to explore this tool and consider its application in their research and practice. The Tellegen & Briggs Formula 4 Calculator at Cogn-IQ.org is a testament to our ongoing commitment to enhancing the tools available to our profession, contributing to the rigor and precision of our work.

Reference: Cogn-IQ.org (2023). Tellegen-Briggs Formula 4 Calculator. Cogn-IQ Statistical Tools. https://www.cogn-iq.org/doi/12.2023/7126d827b6f15472bc04

Friday, December 1, 2023

Launch of Simulated IRT Dataset Generator v1.00 and Upcoming v1.10 at Cogn-IQ.org

I'm thrilled to announce the launch of the Simulated Item Response Theory (IRT) Dataset Generator v1.00 at Cogn-IQ.org, marking a significant step forward in our commitment to advancing educational technology and statistical analysis. 

Version 1.00 of our Simulated IRT Dataset Generator, which went live yesterday, is a groundbreaking tool for educational statistics and psychometrics. It is designed to help researchers, educators, and psychometricians generate simulated datasets based on Item Response Theory (IRT) parameters.
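
To make the output concrete, here is a minimal, illustrative sketch of simulating a dichotomous response matrix under a two-parameter logistic (2PL) model. The parameter names echo the scenario settings described in this post, but the defaults and implementation are assumptions rather than the generator's actual code.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_2pl(n_persons=500, n_items=20, mean_difficulty=0.0,
                 sd_difficulty=1.0, base_discrimination=1.0,
                 discrimination_sd=0.2):
    """Simulate a persons-by-items matrix of 0/1 responses under a 2PL model."""
    theta = rng.normal(0.0, 1.0, n_persons)                   # person abilities
    b = rng.normal(mean_difficulty, sd_difficulty, n_items)   # item difficulties
    a = np.clip(rng.normal(base_discrimination, discrimination_sd, n_items),
                0.2, None)                                     # keep discriminations positive
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))  # P(correct) per person-item
    return (rng.random((n_persons, n_items)) < p).astype(int)

responses = simulate_2pl()
print(responses.shape, responses.mean())  # matrix dimensions and overall proportion correct
```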




Key Features of v1.00:


  • Customizable Scenarios: Users can simulate datasets under scenarios like homogeneous, heterogeneous, high difficulty, and more, offering versatility in research and analysis. 
  • User-Friendly Interface: The generator is designed with an intuitive interface, making it accessible for both beginners and advanced users. 
  • High Precision Data: With meticulous algorithmic design, the generator produces datasets that closely follow the specified IRT parameters, which is essential for reliable research outcomes. 


Looking Ahead: v1.10 on the Horizon 


While we celebrate this milestone, our journey doesn't stop here. We are already working on the next version, v1.10, which promises to bring even more advanced features and enhancements. The upcoming version focuses on: 

  • Enhanced Kurtosis Control: Improving the algorithm for generating discrimination parameters with specific kurtosis targets. 
  • Increased Efficiency: Streamlining processes to enhance the computational efficiency of the generator. 
  • User Feedback Incorporation: Implementing changes based on user feedback from v1.00 to make the generator more robust and user-centric. 


Join the Evolution 


The Simulated IRT Dataset Generator is more than just a tool; it's part of our vision at Cogn-IQ.org to empower the educational community with advanced technology. We invite educators, researchers, and psychometric enthusiasts to explore v1.00 and contribute to the development of v1.10 with their valuable feedback. 

Stay tuned for more updates, and let's embark on this exciting journey of discovery and innovation together!

Reference: Cogn-IQ.org (2023). Simulated IRT Dataset Generator (V1.00). Cogn-IQ Statistical Tools. https://www.cogn-iq.org/doi/11.2023/fddd04c790ed618b58e0

Tuesday, November 28, 2023

Introducing a Cutting-Edge Item Response Theory (IRT) Simulator at Cogn-IQ.org

Exciting news for educators, psychometricians, and assessment professionals! I'm thrilled to announce that I'm currently developing an advanced Item Response Theory (IRT) Simulator. This tool is designed to revolutionize the way we approach test design, item analysis, and educational research.

Overview of the Simulator:

Our new IRT Simulator is a comprehensive, flexible, and user-friendly tool that allows users to create realistic test scenarios. It leverages the power of modern statistical techniques to provide insights into test item characteristics, test reliability, and more.



Key Features:

  • Customizable Scenarios: Choose from a variety of pre-defined scenarios like homogeneous, heterogeneous, multidimensional, and more, or create your own unique scenario.
  • Dynamic Item Parameter Generation: The simulator includes a powerful generateItemParams function that dynamically generates item parameters based on the chosen scenario, including mean difficulty, standard deviation of difficulty, base discrimination, and discrimination variance (see the sketch after this list).
  • Advanced Parameters: We have introduced parameters like difficultySkew, allowing users to simulate tests with skewed difficulty distributions, enhancing the realism of the simulations.
  • User-Friendly Interface: Designed with user experience in mind, the simulator is intuitive and easy to navigate, making it accessible for both beginners and advanced users.
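
As promised above, here is a rough Python sketch of the idea behind generateItemParams. It is not the simulator's actual implementation: the skew-normal distribution is just one plausible way to realize a difficultySkew setting, and all names and defaults below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(0)

def generate_item_params(n_items, mean_difficulty=0.0, sd_difficulty=1.0,
                         base_discrimination=1.0, discrimination_variance=0.05,
                         difficulty_skew=0.0):
    """Draw per-item difficulty and discrimination values for a chosen scenario.

    A nonzero difficulty_skew draws difficulties from a skew-normal
    distribution, so a test can be made disproportionately hard (or easy)
    in one tail rather than symmetric around the mean.
    """
    difficulties = skewnorm.rvs(a=difficulty_skew, loc=mean_difficulty,
                                scale=sd_difficulty, size=n_items,
                                random_state=rng)
    discriminations = np.clip(
        rng.normal(base_discrimination, np.sqrt(discrimination_variance), n_items),
        0.2, None)  # keep discriminations positive
    return difficulties, discriminations

# A "high difficulty" scenario: harder items on average, skewed toward the hard end
b, a = generate_item_params(30, mean_difficulty=1.0, difficulty_skew=2.0)
```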

Development Journey:

I've been passionately working on this project, constantly refining and enhancing its capabilities. Through iterative testing and feedback, the simulator has evolved to include sophisticated features that cater to a wide range of testing scenarios.

Use Cases:

This simulator is an invaluable tool for:

  • Educational Researchers: Experiment with different test designs and analyze item characteristics.
  • Psychometricians: Assess the reliability and validity of test items in various scenarios.
  • Teachers and Educators: Understand how different test items might perform in real-world settings.

Looking Ahead:

The journey doesn't stop here. I'm committed to continuously improving the simulator, adding new features, and ensuring it remains at the forefront of educational technology.

Conclusion:

Stay tuned for more updates as I progress with this exciting project. I can't wait to share the final product with you all, and I'm looking forward to seeing how it contributes to the field of education and assessment.


Friday, October 27, 2023

Decoding High Intelligence: Interdisciplinary Insights at Cogn-IQ.org

In the pursuit of understanding high intelligence, this article traverses the historical and modern landscape of cognitive ability studies. It discusses the challenges in assessing high intelligence, such as the ceiling effects found in traditional IQ tests, and the neural correlates identified through neuroimaging. The complex interplay between genetics and environment is scrutinized, revealing the intricate dynamics that mold cognitive ability. 

The article extends beyond the critique of IQ measures to highlight the necessity for advanced psychometric tools for the highly gifted. The conclusion of this scholarly inquiry emphasizes that high intelligence serves not just as an academic fascination but as a fulcrum for societal progress. Exceptional intellects, when nurtured within a supportive environment replete with opportunity and mentorship, can significantly influence society. 

The paper advocates for a multidisciplinary approach to fully comprehend the depths of high intelligence, integrating neuroscience, psychology, genetics, and education. By fostering collaboration across diverse academic fields, we can better understand and support the development of high-IQ individuals, whose potential contributions are vital for driving humanity forward. 

This call to action underscores the importance of interdisciplinary research as both a scholarly imperative and a mechanism for societal enhancement, paving the way for high-IQ individuals to reach their full potential and impact the world. 

Reference: Jouve, X. (2023). The Current State Of Research On High-IQ Individuals: A Scientific Inquiry. Cogn-IQ Research Papers. https://www.cogn-iq.org/doi/10.2023/0726191e2e93fe820a24

The Complex Journey of the WAIS: Insights and Transformations at Cogn-IQ.org

The Wechsler Adult Intelligence Scale (WAIS) represents a cornerstone of intelligence assessment. Its genesis and evolution reflect a continuous endeavor to refine our understanding and measurement of human intellect. This analysis provides a historical and scientific overview of the WAIS, charting its development from David Wechsler's original vision to its current iterations. 

The paper examines the scientific foundation of the WAIS, its integration within the broader spectrum of intelligence testing, and its revisions across editions in response to evolving psychometric standards. Despite facing academic critiques, the WAIS remains a critical tool in psychological assessment, signifying the dynamic nature of psychometrics. The critiques serve not as detractions but as catalysts for the WAIS’s progressive adaptations, underscoring the necessity for ongoing recalibration in light of new research and theoretical advances. 

The WAIS’s journey illustrates the intersection of critique with advancement, highlighting the collaborative nature of scientific inquiry in refining knowledge. This balanced examination respects the WAIS's contributions to psychology while acknowledging the complexities and debates surrounding intelligence measurement. Through this lens, the WAIS is viewed as an evolving instrument, mirroring the fluidity of intelligence as a construct and the diversity of cognitive expression.

Reference: Jouve, X. (2023). The Evolution And Evaluation Of The WAIS: A Historical And Scientific Perspective. Cogn-IQ Research Papers. https://www.cogn-iq.org/doi/10.2023/6bfc117ff4cf6817c720

Wednesday, October 18, 2023

Tracing the SAT's Intellectual Legacy and Its Ties to IQ at Cogn-IQ.org

The Scholastic Assessment Test (SAT) has been a fixture in American education, evolving alongside our understanding of intelligence quotient (IQ). This article provides a historical analysis of the SAT, exploring its origins as a metric for academic potential and its intricate connection with IQ. 

The SAT has significantly influenced educational methods and policies, reflecting a complex relationship that has evolved through constant sociocultural and pedagogical shifts. The examination's development reflects a broader quest to understand human intellect and its measurement. 

This review offers a comprehensive overview of the SAT's transformation, acknowledging its historical significance while also addressing the critical discourse that has shaped its progress. It emphasizes the need for tools like the SAT to adapt in alignment with advancements in educational theories, cultural contexts, and recognition of diverse cognitive strengths. 

In considering the SAT, one must apply a balanced perspective, recognizing its historical context and role within the larger framework of psychological and pedagogical research. By maintaining a dialogue that respects the SAT's contributions and acknowledges its limitations, we can continue to strive for excellence in academic assessment, ensuring it remains equitable and relevant in our ever-changing educational landscape.

Reference: Jouve, X. (2023). The SAT's Evolutionary Dance With Intelligence: A Historical Overview And Analysis. Cogn-IQ Research Papers. https://www.cogn-iq.org/doi/10.2023/7117df06d8c563461acf

Wednesday, September 27, 2023

[Article Review] Hidden Harm: Prenatal Phthalate Exposure and Its Impact on Young Brains

Reference


Ghassabian, A., van den Dries, M., Trasande, L., Lamballais, S., Spaan, S., Martinez-Moral, M-P., ... Guxens, M. (2023). Prenatal exposure to common plasticizers: a longitudinal study on phthalates, brain volumetric measures, and IQ in youth. Molecular Psychiatry. https://doi.org/10.1038/s41380-023-02225-6

Review


The paper by Ghassabian et al. (2023) explores the under-researched area of prenatal phthalate exposure, specifically its associations with brain volumetric differences and cognitive development in youth. Drawing on a cohort of 775 mother-child pairs from the Generation R study, the authors leveraged both prenatal maternal urine phthalate levels and subsequent T1-weighted MRI scans of the children at age 10. They aimed to establish correlations between prenatal phthalate exposure and brain volume and to explore whether these brain measures mediated an association with IQ at age 14.

Findings reveal that higher maternal concentrations of monoethyl phthalate (mEP) during pregnancy were linked to smaller total gray matter volumes in 10-year-old offspring. Notably, these volume differences partially mediated the connection between higher mEP levels and lower child IQ, accounting for 18% of the effect. Gender-specific effects were also reported: in girls, higher monoisobutyl phthalate (mIBP) was associated with smaller cerebral white matter volumes, which mediated the link between increased mIBP and reduced IQ. These results underscore the potential neurotoxic effects of phthalates on developing brains and raise concern given their ubiquitous presence in consumer products.

While this research paves the way for understanding the neurodevelopmental implications of phthalates, several aspects merit consideration in future studies. Chief among them is the possibility of confounding variables, such as socioeconomic status or other environmental factors, that might influence the observed associations. Exploring the mechanisms behind these changes would also provide deeper insight. Nonetheless, Ghassabian et al.'s study shines a light on the pressing need to re-evaluate and potentially regulate the use of phthalates in consumer products.

Saturday, September 23, 2023

[Article Review] AMES: A New Dawn in Early Detection of Cognitive Decline

Reference

Huang, L., Mei, Z., Ye, J., & Guo, Q. (2023). AMES: An Automated Self-Administered Scale to Detect Incipient Cognitive Decline in Primary Care Settings. Assessment, 30(7), 2247-2257. https://doi.org/10.1177/10731911221144774

Review

Huang, Mei, Ye, and Guo (2023) unveiled the Automated Memory and Executive Screening (AMES), a self-administered cognitive screening tool that aims to detect early signs of cognitive decline in community-based settings. The tool was designed to evaluate cognitive domains including memory, language, and executive function. Across a cohort of 189 participants, ranging from those with diagnosed mild cognitive impairment (MCI) to normal controls, the research gauged AMES's utility and accuracy.

The tool demonstrated commendable convergent validity with established scales. Particularly noteworthy was its proficiency in distinguishing patients with MCI from normal controls, with an area under the curve (AUC) of 0.88, 86% sensitivity, and 80% specificity. For objectively defined subtle cognitive decline (obj-SCD), the AUC was 0.78, with a sensitivity of 89% and a specificity of 63%. These figures underscore the tool's promise for early identification of cognitive impairment.

The AMES tool, as presented by Huang et al. (2023), is a constructive stride in the pursuit of timely intervention for cognitive decline. Its self-administered nature could make it a more accessible and less intimidating option for individuals. However, while its efficacy in discerning MCI is impressive, the relatively lower specificity for obj-SCD suggests a potential for false positives. As with all screening tools, ensuring a balance between sensitivity and specificity is imperative. Future iterations and validations of AMES might further refine its accuracy and reduce potential misclassifications.

Thursday, September 14, 2023

[Article Review] Unmasking Overclaiming: Insights from 40,000 Teens

Reference

Jerrim, J., Parker, P. D., & Shure, N. (2023). Overclaiming: An international investigation using PISA data. Assessment in Education: Principles, Policy & Practice, 1-21. https://doi.org/10.1080/0969594X.2023.2238248

Review

In "Overclaiming: An International Investigation using PISA data," Jerrim, Parker, and Shure (2023) delve into the intriguing phenomenon where individuals assert more knowledge on a subject than they genuinely possess. By harnessing PISA data of over 40,000 teenagers from nine Anglophone countries, the authors aimed to gauge the propensity of these teenagers to profess knowledge of nonexistent mathematical constructs. The findings highlight significant disparities in overclaiming tendencies based on country, gender, and socio-economic background. Intriguingly, those with a higher tendency to overclaim also demonstrated pronounced levels of overconfidence. These individuals also perceived themselves as hard-working, persistent, and believed to be popular among their peers.

This comprehensive study sheds invaluable light on the cultural, gendered, and socio-economic dimensions of the overclaiming phenomenon. However, while the correlations between overclaiming, overconfidence, and certain self-perceptions are enlightening, the study doesn't fully delve into potential causative factors or underlying mechanisms. Moreover, given that the data is predominantly from Anglophone countries, the universality of these findings may be restricted. Further research in a wider array of countries and cultures would bolster the findings' applicability.

Overall, Jerrim et al. (2023) have produced an insightful study that broadens our understanding of overclaiming in teenagers. By connecting it with other psychological constructs, they present a foundational piece for future research. Yet its geographical limitations and the lack of deeper exploration of underlying causal mechanisms are areas that future research can address.

Friday, June 30, 2023

[Article Review] Unraveling Brain and Cognitive Changes: A Deep Dive into GALAMMs

Reference

Sørensen, Ø., Fjell, A. M., & Walhovd, K. B. (2023). Longitudinal Modeling of Age-Dependent Latent Traits with Generalized Additive Latent and Mixed Models. Psychometrika, 88(2), 456-486. https://doi.org/10.1007/s11336-023-09910-z

Review

In their 2023 study, Sørensen, Fjell, and Walhovd introduced generalized additive latent and mixed models (GALAMMs) to analyze clustered data. They developed these models primarily to address applications in cognitive neuroscience. Their method leverages a scalable maximum likelihood estimation algorithm, utilizing advanced computational techniques like the Laplace approximation, sparse matrix computation, and automatic differentiation. Crucially, this approach allows for a variety of mixed response types, heteroscedasticity, and crossed random effects.

The authors further illustrated the applicability of GALAMMs by presenting two case studies. The first highlighted how these models could comprehensively capture lifespan trajectories of various cognitive abilities, including episodic memory, working memory, and executive function. Such findings were drawn from widely used cognitive tests like the California Verbal Learning Test, digit span tests, and Stroop tests. In their second case, the researchers explored the impact of socioeconomic status on brain structure, specifically delving into the relationship between educational and income levels with hippocampal volumes, gauged via magnetic resonance imaging (MRI). Their results posited that by integrating semiparametric estimation with latent variable modeling, GALAMMs can offer a more nuanced depiction of how both the brain and cognition evolve throughout an individual's life.

Overall, this study presents a promising tool for the analysis of complex data structures, especially in the realm of cognitive neuroscience. While the authors provided solid evidence from their case studies, it would be beneficial to see how GALAMMs fare in a broader range of applications. Moreover, the efficacy of these models in different sample sizes, beyond moderate ones, remains a question worth exploring in future research.

Wednesday, June 14, 2023

[Article Review] Peering into Decision Making: A Dive into Modeling Eye Movements

Reference

Wedel, M., Pieters, R., & van der Lans, R. (2023). Modeling Eye Movements During Decision Making: A Review. Psychometrika, 88(2), 697-729. https://doi.org/10.1007/s11336-022-09876-4

Review

In the article "Modeling Eye Movements During Decision Making: A Review", authors Wedel, Pieters, and van der Lans (2023) undertake a comprehensive exploration of recent advancements in psychometric and econometric modeling of eye movements during decision-making tasks. The authors rightly identify eye movements as an instrumental method to gain insights into the otherwise elusive perceptual, cognitive, and evaluative processes that people undergo during decision-making. Their proposed theoretical framework emphasizes the intricate nature of task and strategy switching in relation to complex goals.

Building on this foundational framework, the trio proceeds to map out the existing literature, emphasizing how cognitive processes steer distinct eye-movement patterns. Their endeavor to categorize and contextualize prior works lends clarity to the field. However, a potential pitfall of the article lies in its optimistic depiction of these models. While they note the advances, more critical discussion around the challenges and limitations faced would have further enriched the narrative.

The authors call for further research and a more detailed psychometric modeling approach to understand eye movements during decision-making. This article, while shedding light on key areas of development, would benefit from a balanced perspective, accentuating not just the possibilities but also the boundaries of the domain.

Thursday, June 1, 2023

[Article Review] Revolutionizing Online Test Monitoring: A Dive into Kang's Latest Research

Reference

Kang, H.-A. (2023). Sequential Generalized Likelihood Ratio Tests for Online Item Monitoring. Psychometrika, 88(2), 672-696. https://doi.org/10.1007/s11336-022-09871-9

Review

In her 2023 article published in Psychometrika, Kang delves into a critical dimension of psychometric testing: the continuous and intermittent monitoring of item functioning. At the heart of the article lies the introduction of sequential generalized likelihood ratio tests designed to monitor multiple item parameters across various sampling techniques. Kang's focus on the stability of item parameters over time sets a significant precedent in an age where psychometric tests, especially online ones, see broad usage and necessitate consistent quality checks.

Through a combination of simulated and real assessment data, Kang validates the efficacy of the proposed monitoring procedures. One of the standout features of these methods, as highlighted in the study, is their ability to identify significant parameter shifts in a timely fashion while keeping error rates within acceptable margins. The research commendably compares these newly introduced methods against existing ones, showcasing their superior performance. Such empirical results strengthen the credibility of these procedures and their potential applicability in real-world settings.

The article suggests that multivariate parametric monitoring, anchored on robust likelihood-ratio tests, holds the promise of being a formidable tool in upholding the quality of psychometric items. Kang's emphasis on joint monitoring of multiple-item parameters provides a holistic approach to maintaining consistency and reliability. Grounded on the empirical findings, the study also offers tangible strategies for online item monitoring, an invaluable asset for practitioners in the field.
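
To give a flavor of the underlying idea (and only that: the sketch below is a deliberately simplified, one-parameter toy, not Kang's multivariate procedure), the following Python snippet monitors a single item's proportion-correct sequentially and flags the item once a generalized likelihood ratio statistic indicates a shift away from its assumed calibrated value. The threshold and other settings are illustrative assumptions.

```python
import numpy as np

def sequential_glr_monitor(responses, p0, threshold=4.0, min_n=20):
    """Toy sequential GLR check for drift in one item's proportion-correct.

    After each new response, the maximized Bernoulli log-likelihood is compared
    with the log-likelihood under the calibrated value p0; the item is flagged
    once the log-likelihood ratio exceeds `threshold`.
    """
    correct = 0
    for n, x in enumerate(responses, start=1):
        correct += int(x)
        if n < min_n:
            continue
        p_hat = min(max(correct / n, 1e-6), 1 - 1e-6)  # guard against log(0)
        glr = n * (p_hat * np.log(p_hat / p0)
                   + (1 - p_hat) * np.log((1 - p_hat) / (1 - p0)))
        if glr > threshold:
            return n  # number of responses observed when the flag was raised
    return None  # no drift signalled

rng = np.random.default_rng(1)
drifted_item = rng.random(500) < 0.55  # item has drifted: now easier than its calibrated p0 = 0.40
print(sequential_glr_monitor(drifted_item, p0=0.40))
```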

Tuesday, May 16, 2023

[Article Review] Computerized Adaptive Testing: A Dive into Enhanced Techniques

Reference

Anselmi, P., Robusto, E., & Cristante, F. (2023). Enhancing Computerized Adaptive Testing with Batteries of Unidimensional Tests. Applied Psychological Measurement, 47(3), 167-182. https://doi.org/10.1177/01466216231165301

Review

The article by Anselmi, Robusto, and Cristante (2023) introduces a pioneering procedure for Computerized Adaptive Testing (CAT) with batteries of unidimensional tests. The goal is to optimize the process by updating the estimate of a given ability using each new response together with the current estimates of all the other abilities in the battery. Their approach integrates the information from these other abilities into an empirical prior, which is updated regularly as testing proceeds.
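
As a rough illustration of the mechanism (a sketch under assumed parameter values, not the authors' implementation), an expected a posteriori (EAP) ability estimate under a 2PL model multiplies the response likelihood by a normal prior. In a battery-based procedure of the kind reviewed here, that prior would be re-centered using the examinee's current estimates on the other, correlated tests rather than left at a fixed N(0, 1).

```python
import numpy as np

def eap_ability(responses, a, b, prior_mean=0.0, prior_sd=1.0):
    """EAP ability estimate under a 2PL model with a normal prior.

    prior_mean and prior_sd stand in for an empirical prior derived from the
    examinee's current estimates on the other tests in the battery.
    """
    grid = np.linspace(-4, 4, 161)                                    # discretized grid for theta
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (grid[None, :] - b[:, None])))
    resp = np.asarray(responses)[:, None]
    likelihood = np.prod(np.where(resp == 1, p, 1.0 - p), axis=0)     # product over items
    prior = np.exp(-0.5 * ((grid - prior_mean) / prior_sd) ** 2)
    posterior = likelihood * prior
    posterior /= posterior.sum()
    return float((grid * posterior).sum())

# Four administered items; the prior is centered at +0.8, a hypothetical value
# borrowed from the examinee's estimates on correlated tests in the battery.
a = np.array([1.2, 0.9, 1.5, 1.0])
b = np.array([-0.5, 0.0, 0.5, 1.0])
print(eap_ability([1, 1, 0, 1], a, b, prior_mean=0.8, prior_sd=0.8))
```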

To validate their approach, the researchers conducted two simulation studies comparing the proposed procedure against a standard CAT technique for unidimensional test batteries. Results indicated a notable gain in accuracy for fixed-length CATs using the proposed procedure, along with shorter tests for variable-length CATs. Notably, the improvements in both accuracy and efficiency grew with the correlation among the abilities measured by the test batteries.

While the study provides a promising avenue for the enhancement of CAT, the outcomes' dependence on the correlation between abilities measured by the test batteries may hint at limitations in its applicability. The reliance on simulation studies also indicates a need for real-world validations. Nonetheless, Anselmi et al.'s innovative approach offers a commendable step forward in refining CAT procedures, potentially yielding significant efficiencies in real-world applications, contingent upon further validation.


Wednesday, April 19, 2023

Explore the validity and reliability of the Jouve-Cerebrals Test of Induction, and its strong correlations with SAT Math and RIST scores.

The Jouve-Cerebrals Test of Induction (JCTI), a tool designed to measure inductive reasoning, is the focus of this comprehensive study involving 2,306 participants. Exhibiting high reliability (Cronbach's alpha = .90) and satisfactory item characteristic curves, the JCTI has proven itself a dependable measure in the field of cognitive assessment. 

A subset of the participants also provided SAT scores, and another took the Reynolds Intelligence Screening Test (RIST), allowing for an analysis of the JCTI's concurrent validity. The results demonstrated strong correlations between JCTI scores and SAT Math reasoning (r = .84), as well as high correlations with both verbal and nonverbal RIST subtests (approximately .90). However, a weaker correlation was observed with SAT Verbal reasoning (r = .38), highlighting an area for future investigation. 

The study, while robust, acknowledges its limitations, including the small sample size for concurrent validity analyses and reliance on self-reported SAT scores. These findings underscore the JCTI's utility in educational and vocational settings and point toward its potential applications in cognitive training programs. Future research is encouraged to delve deeper into the relationships between JCTI scores and other cognitive abilities and to explore the reasons behind the weaker correlation with SAT Verbal reasoning.

Link to Full Article: Jouve, X. (2023) Reliability and Concurrent Validity of the Jouve-Cerebrals Test of Induction: A Correlational Study with SAT and RIST. https://www.cogn-iq.org/articles/reliability-validity-jouve-cerebrals-test-induction-correlational-study-sat-rist.html

Monday, April 17, 2023

Assessing the Reliability of JCCES in Measuring Crystallized Cognitive Skills at Cogn-IQ.org

The Jouve-Cerebrals Crystallized Educational Scale (JCCES) has undergone a rigorous evaluation to understand its reliability and the consistency of its internal components. A total of 1,079 examinees participated, providing a rich dataset for analysis through both Classical Test Theory (CTT) and Item Response Theory (IRT), including the kernel estimator and Bayes modal estimator. 

The results showed that the JCCES exhibits excellent internal consistency, as evidenced by a Cronbach's alpha of .96. The diverse range of difficulty levels, standard deviations, and polyserial correlations among the items indicates that the JCCES is a comprehensive tool, capable of assessing a broad spectrum of crystallized cognitive abilities across various content areas. The kernel estimator method further refined the evaluation of examinees' abilities, underscoring the value of incorporating alternative answers in the test design to enhance inclusivity. The two-parameter logistic model (2PLM) demonstrated a good fit for the majority of the items, supporting the test's structure. 

While the study confirms the reliability of the JCCES, it also notes potential limitations, such as the model’s fit for specific items and the potential for unexplored alternative answers. Addressing these in future research could further improve the test’s validity and application, offering richer insights for educational interventions and cognitive assessment. 

The study’s findings play a crucial role in affirming the JCCES's reliability, showcasing its potential as a reliable tool for assessing crystallized cognitive skills.

Link to Full Article: Jouve, X. (2023) Evaluating the Jouve-Cerebrals Crystallized Educational Scale (JCCES): Reliability, Internal Consistency, and Alternative Answer Recognition. https://www.cogn-iq.org/articles/evaluating-jouve-cerebrals-crystallized-educational-scale-jcces-reliability-internal-consistency-alternative-answer-recognition.html

Wednesday, April 12, 2023

Assessing Nonverbal Intelligence: Insights from the Jouve Cerebrals Figurative Sequences at Cogn-IQ.org

This article provides a thorough examination of the Jouve-Cerebrals Figurative Sequences (JCFS), a self-administered test aimed at assessing nonverbal cognitive abilities related to pattern recognition and problem-solving. 

The study applies both classical test theory and item response theory to evaluate the internal consistency and concurrent validity of the JCFS, including its first half, the Cerebrals Contest Figurative Sequences (CCFS). The findings reveal strong internal consistency and good discriminatory power, showcasing the JCFS as a reliable and valid tool for measuring nonverbal cognitive abilities. 

However, the study acknowledges certain limitations, such as a small sample size and the absence of demographic information, pointing out the necessity for future research to affirm these results across larger and more diverse populations. 

Despite these limitations, the study underscores the importance of the JCFS as a significant addition to the tools available for assessing nonverbal cognitive abilities, emphasizing its potential utility in both clinical and research settings. 

The article encourages the use of JCFS alongside other assessments for a holistic evaluation of an individual's cognitive strengths and weaknesses, highlighting its role in informed decision-making and predicting future outcomes. 

Link to Full Article: Jouve, X. (2023) Psychometric Evaluation Of The Jouve-Cerebrals Figurative Sequences As A Measure Of Nonverbal Cognitive Ability. https://www.cogn-iq.org/articles/figurative-sequences-iq-test-psychometric-properties.html

Friday, April 7, 2023

A Rigorous Look at Verbal Abilities With The JCWS at Cogn-IQ.org

The Jouve-Cerebrals Word Similarities (JCWS) test emerges as a nuanced tool for assessing vocabulary and reasoning within a verbal context. In this paper, we delve into the psychometric properties of the JCWS, specifically its first subtest, which is rooted in the Word Similarities test from the Cerebrals Contest (CCWS). Exhibiting exceptional reliability (Cronbach's alpha of .96) and pronounced item discrimination, the CCWS proves to be a robust measure of verbal-crystallized ability, as evidenced by its significant correlations with WAIS scores. 

The JCWS subtests, in their entirety, display impressive internal consistency and reliability, marked by a split-half coefficient of .98 and a Spearman-Brown prophecy coefficient of .99. 
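
For context on how those two figures relate, the Spearman-Brown prophecy formula projects full-length reliability from the half-test coefficient, and applying it to the reported split-half value reproduces the .99 figure:

```python
r_half = 0.98                            # reported split-half coefficient
full_length = 2 * r_half / (1 + r_half)  # Spearman-Brown prophecy formula
print(round(full_length, 2))             # 0.99
```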

While these findings underscore the JCWS’s potential as a reliable instrument for evaluating verbal abilities, it is crucial to acknowledge the limitations present in this study, such as the small sample size used for assessing internal consistency and concurrent validity across the complete JCWS. The need for further research is evident, aiming to extend the evaluation of the JCWS’s validity and explore its applicability across diverse settings and populations. 

This article highlights the JCWS’s promise as an evaluative tool for verbal ability, potentially serving a pivotal role in both academic and clinical spheres, contingent upon further validation and exploration. 

Link to Full Article: Jouve, X. (2023) Psychometric Properties Of The Jouve-Cerebrals Word Similarities Test: An Evaluation Of Vocabulary And Verbal Reasoning Abilities. https://www.cogn-iq.org/articles/word-similarities-iq-test-psychometric-properties.html

Thursday, April 6, 2023

Assessing Verbal Intelligence with the IAW Test at Cogn-IQ.org

The I Am a Word (IAW) test represents a unique approach in the domain of verbal ability assessment, emphasizing an open-ended and untimed format to encourage genuine responses and cater to a diverse range of examinees. 

This study delved into the psychometric properties of the IAW test, with a sample of 1,083 participants from its 2023 revision. The findings attest to the test’s robust internal consistency and its strong concurrent validity when correlated with established measures such as the WAIS-III VCI and the RIAS VIX. 

These promising results suggest that the IAW test holds considerable potential as a reliable, valid, and inclusive tool in the field of intelligence assessment, fostering a more engaging and equitable testing environment. 

Despite its strengths, the study acknowledges certain limitations, including a modest sample size for concurrent validity analyses and the absence of test-retest reliability analysis, pointing towards avenues for future research to fortify the test’s psychometric standing and broaden its applicability across diverse domains and populations. 

The IAW test emerges not just as a measure of verbal intelligence, but as a testament to the evolving landscape of cognitive assessment, aiming for inclusivity, engagement, and precision. 

Link to Full Article: Jouve, X. (2023) I Am a Word Test: An Open-Ended And Untimed Approach To Verbal Ability Assessment. https://www.cogn-iq.org/articles/i-am-a-word-test-open-ended-untimed-verbal-ability-assessment-reliability-validity-standard-score-comparisons.html

Thursday, March 2, 2023

[Article Review] Reversing the Tide: Unraveling the Flynn Effect in U.S. Adults

Reference

Dworak, E. M., Revelle, W., & Condon, D. M. (2023). Looking for Flynn effects in a recent online U.S. adult sample: Examining shifts within the SAPA Project. Intelligence, 98, 101734. https://doi.org/10.1016/j.intell.2023.101734

Review

The Flynn effect, named after the psychologist James Flynn, refers to the phenomenon of a significant and steady increase in intelligence test scores over time. While extensive research has documented this trend in European countries, there is a dearth of studies exploring the presence or reversal of the Flynn effect in the United States, particularly among adult populations. In their recent study, Dworak, Revelle, and Condon (2023) addressed this gap by analyzing the cognitive ability scores of a large cross-sectional sample of U.S. adults from 2006 to 2018.

The authors used data from the Synthetic Aperture Personality Assessment Project (SAPA Project), which included responses from 394,378 adults. The cognitive ability scores were derived from two overlapping sets of items from the International Cognitive Ability Resource (ICAR). The researchers examined trends in standardized average composite cognitive ability scores and domain scores of matrix reasoning, letter and number series, verbal reasoning, and three-dimensional rotation.

The results revealed a pattern consistent with a reversed Flynn effect for composite ability scores from 35 items and domain scores (matrix reasoning; letter and number series) from 2006 to 2018 when stratified across age, education, or gender. However, slopes for verbal reasoning scores did not meet or exceed an annual threshold of |0.02| SD. Furthermore, a reversed Flynn effect was also present for composite ability scores from 60 items from 2011 to 2018, across age, education, and gender.

Interestingly, despite declining scores across age and demographics in other domains of cognitive ability, three-dimensional rotation scores showed evidence of a Flynn effect, with the largest slopes occurring across age-stratified regressions. This finding suggests that not all cognitive abilities are similarly affected by the Flynn effect or its reversal.

Dworak et al.'s (2023) study makes a significant contribution to the literature on the Flynn effect by providing evidence of its reversal in a large sample of U.S. adults. However, it is essential to consider that the study is based on cross-sectional data, which limits the ability to draw causal conclusions or infer longitudinal trends. Future research could benefit from longitudinal designs to better understand the factors that contribute to the Flynn effect and its reversal in the United States. Additionally, exploring the role of social, cultural, and environmental factors that may impact cognitive abilities could provide further insight into this complex phenomenon.

Tuesday, February 21, 2023

[Article Review] Unlocking Potential: Evaluating the NIH Toolbox for Measuring Cognitive Change in Individuals with Intellectual Disabilities

Reference

Shields, R. H., Kaat, A., Sansone, S. M., Michalak, C., Coleman, J., Thompson, T., McKenzie, F. J., Dakopolos, A., Riley, K., Berry-Kravis, E., Widaman, K. F., Gershon, R. C., & Hessl, D. (2023). Sensitivity of the NIH Toolbox to detect cognitive change in individuals with intellectual and developmental disability. Neurology, 100(8), e778-e789. https://doi.org/10.1212/WNL.0000000000201528

Review

In their 2023 study, Shields et al. aimed to evaluate the sensitivity of the National Institutes of Health Toolbox Cognition Battery (NIHTB-CB) in detecting cognitive change in individuals with intellectual disabilities (ID), specifically in those with fragile X syndrome (FXS), Down syndrome (DS), and other ID (OID). The study sought to provide further support for the use of the NIHTB-CB as an outcome measure in clinical trials and other intervention studies targeting individuals with ID.

The researchers administered the NIHTB-CB and a reference standard cross-validation measure (Stanford-Binet Intelligence Scales, Fifth Edition [SB5]) to 256 participants with FXS, DS, and OID aged between 6 and 27 years. After two years, 197 individuals were retested. The study employed latent change score models to assess group developmental changes in each cognitive domain of the NIHTB-CB and SB5. Additionally, two-year growth was examined at three age points (10, 16, and 22 years).

Shields et al. (2023) found that the effect sizes of growth measured by the NIHTB-CB tests were comparable to or exceeded those of the SB5. The NIHTB-CB demonstrated significant gains in almost all domains in the OID group at younger ages (10 years), with continued gains at 16 years and stability in early adulthood (22 years). The FXS group exhibited delayed gains in attention and inhibitory control compared to the OID group. Meanwhile, the DS group showed delayed gains in receptive vocabulary compared to the OID group. Notably, the DS group experienced significant growth in early adulthood in two domains (working memory and attention/inhibitory control). Each group's pattern of NIHTB-CB growth across development corresponded to their respective pattern of SB5 growth.

The study's results support the sensitivity of the NIHTB-CB in detecting developmental changes in individuals with ID, making it a promising tool for clinical trials and intervention studies. However, the authors note that future research is needed to establish sensitivity to change within the context of treatment studies and to delineate clinically meaningful changes in NIHTB-CB scores linked to daily functioning.

Sunday, February 5, 2023

[Article Review] Exploring the Performance of Coefficient Alpha and Its Alternatives in Non-Normal Data

Reference

Xiao, L., & Hau, K.-T. (2023). Performance of Coefficient Alpha and Its Alternatives: Effects of Different Types of Non-Normality. Educational and Psychological Measurement, 83(1), 5-27. https://doi.org/10.1177/00131644221088240

Review

In the article "Performance of Coefficient Alpha and Its Alternatives: Effects of Different Types of Non-Normality" by Leifeng Xiao and Kit-Tai Hau (2023), the authors evaluate the performance of coefficient alpha and several alternatives under different non-normal data conditions. They tested indices such as ordinal alpha, omega total, omega RT, omega h, GLB, and coefficient H on continuous and discrete data with varying degrees of non-normality.

The study found that estimation bias was acceptable for continuous data with different levels of non-normality when the scales were strong. However, the bias increased for moderate-strength scales and grew larger as non-normality increased. For Likert-type scales, most indices were acceptable with non-normal data of at least four points, with more points resulting in better performance. The authors found that omega RT and the GLB were robust across different exponentially distributed data, but the bias of the other indices for the binomial-beta distribution was generally large.

Xiao and Hau (2023) concluded that the requirement of continuous, normally distributed data for alpha may not be critical when non-normality is less severe. For severely non-normal data, scales should use at least four points, with more points being better. Furthermore, the authors emphasized that no single gold standard exists for all data types and that other factors, such as scale loadings, model structure, and scale length, are also essential.

Sunday, January 29, 2023

[Article Review] The Interesting Plateau of Cognitive Ability Among Top Earners: A Closer Look

Reference

Keuschnigg, M., van de Rijt, A., & Bol, T. (2023). The plateauing of cognitive ability among top earners. European Sociological Review, jcac076. https://doi.org/10.1093/esr/jcac076

Review

In their article "The plateauing of cognitive ability among top earners," Keuschnigg, van de Rijt, and Bol (2023) challenge the notion that the highest-paying jobs with the most prestige are occupied by individuals with exceptional cognitive ability. The authors hypothesize that among the relatively successful, average ability is concave in income and prestige. This study is significant as it offers a novel perspective on the relationship between cognitive ability and job success, as well as the role of social background and cumulative advantage in determining high occupational success.

Using Swedish register data containing measures of cognitive ability and labor-market success for 59,000 men who took a compulsory military conscription test, the authors find a strong overall relationship between cognitive ability and wage. However, they also reveal a striking plateau of cognitive ability above €60,000 per year, at a modest level of +1 standard deviation. Interestingly, the top 1% of earners score slightly worse on cognitive ability than those in the income strata right below them. The authors observe a similar but less pronounced plateauing of ability at high occupational prestige.

This article contributes to the existing literature on cognitive ability and job success by highlighting the plateauing of cognitive ability among top earners. The findings suggest that factors such as social background and cumulative advantage may play a more significant role in determining high occupational success than previously thought. As a result, the article provides valuable insights for policymakers and researchers interested in understanding the mechanisms behind occupational success and the limitations of cognitive ability as a determinant of success in the labor market.