Tuesday, December 28, 2010

Identifying the Underlying Dimensions of the JCCES Mathematical Problems using Alternating Least Squares Scaling

Abstract

This study aimed to investigate the underlying dimensions of the Jouve Cerebrals Crystallized Educational Scale (JCCES) Mathematical Problems (MP) using Alternating Least Squares Scaling (ALSCAL). The dataset consisted of intercorrelations between 38 MP items, with 588 participants. Various dimensional solutions were assessed for the goodness of fit. A 4-dimensional solution provided a reasonable fit (RSQ = 0.815, stress = 0.155), accounting for 81.5% of the variance in the disparities. The 4-dimensional solution is more parsimonious than the marginally better 5-dimensional solution (RSQ = 0.851, stress = 0.127). The study's limitations include the sample size, selection bias, and the exploratory nature of ALSCAL. The results contribute to understanding the structure of mathematical problem-solving abilities and have implications for theory, practice, and future research in cognitive and educational psychology.

Keywords: Jouve Cerebrals Crystallized Educational Scale, Mathematical Problems, Alternating Least Squares Scaling, multidimensional scaling, cognitive abilities

Introduction

Psychometrics is a scientific discipline focused on the development and evaluation of psychological assessments, including the measurement of cognitive abilities such as mathematical problem-solving skills (Embretson & Reise, 2000). The Jouve Cerebrals Crystallized Educational Scale (JCCES) Mathematical Problems (MP) subtest is an instrument used to assess these skills. To improve our understanding of the structure underlying the JCCES Mathematical Problems, the present study investigates the dimensionality of this instrument using Alternating Least Squares Scaling (ALSCAL).

The JCCES MP is grounded in various psychometric theories, particularly item response theory (IRT) and classical test theory (CTT) (De Ayala, 2009; Nunnally & Bernstein, 1994). Additionally, the JCCES MP aligns with cognitive and educational psychology theories, such as the multiple-component model of mathematical problem-solving (Swanson & Beebe-Frankenberger, 2004), which posits that problem-solving requires a combination of distinct cognitive abilities. Other relevant theories include Geary's (1994) cognitive mechanisms and Hecht's (2001) cognitive strategies that underlie mathematical problem-solving.

The selection of ALSCAL as the analytical method for this study is based on its suitability for exploratory research on multidimensional scaling (MDS) (Kruskal & Wish, 1978; Young et al., 1978). ALSCAL has been used in psychometric research to examine the dimensional structure of cognitive assessments (e.g., Gorsuch, 1983; Hambleton & Swaminathan, 1985). The method provides a data-driven approach to deriving dimensional solutions and assessing their goodness of fit, which can inform the development and interpretation of psychological assessments.

The present study's research question is: What is the lowest dimensionality offering a reasonable fit for the structure of items in the JCCES Mathematical Problems, as assessed by ALSCAL? By answering this question, the study aims to contribute to the literature on the dimensional structure of mathematical problem-solving assessments and inform future research and educational practice. In the context of previous research, the study will examine whether the identified dimensions align with established theories, such as Geary's (1994) cognitive mechanisms, Hecht's (2001) cognitive strategies, and the multiple-component model (Swanson & Beebe-Frankenberger, 2004).

Method

Research Design

The present study employed a correlational research design to investigate the structure of items in the Jouve Cerebrals Crystallized Educational Scale (JCCES) Mathematical Problems (MP) using Alternating Least Squares Scaling (ALSCAL; Young et al., 1978). This design was chosen as it allowed for the exploration of potential relationships between the items without manipulating any variables.

Participants

The sample consisted of 588 participants, who were recruited through convenience sampling from various online media. Participants' demographic characteristics included age, gender, and educational level, which were collected through self-report measures. No exclusion criteria for the study were set.

Materials

The primary material used in this study was the JCCES Mathematical Problems subtest. The JCCES is a measure of crystallized intelligence, with established reliability and validity (Jouve, 2010a; 2010b). The Mathematical Problems subtest consists of 38 items that assess a range of mathematical problem-solving abilities, such as numerical reasoning, analytical thinking, and computational fluency. Participants completed the subtest in a computerized format.

Procedures

Participants were instructed that they had unlimited time to complete the subtest. Their responses were scored automatically and then checked manually to ensure data reliability. The resulting dataset consisted of intercorrelations between the 38 items of the MP subtest.

Statistical Methods

ALSCAL was used to analyze the intercorrelations between the items and derive solutions for different dimensionalities, ranging from two to five dimensions. The analysis involved iterative optimization procedures to minimize stress values, with convergence criteria set at an improvement of less than 0.001 for stress values across consecutive iterations (Young et al., 1978). Kruskal's Stress Formula 1 was used to compute stress values, while RSQ values represented the proportion of variance in the scaled data (disparities) accounted for by their corresponding distances (Kruskal & Wish, 1978). The goodness of fit for various dimensional solutions was assessed, with the aim of identifying the lowest dimensionality offering a reasonable fit, operationalized as an RSQ value greater than 0.80 and a stress value within the range of 0.10 to 0.20.
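The two fit indices described above can be written out directly. The NumPy sketch below is illustrative only; it is not the original ALSCAL implementation, which additionally fits the disparities by monotone regression.

```python
import numpy as np

def kruskal_stress1(disparities, distances):
    """Kruskal's Stress Formula 1: sqrt(sum (d - dhat)^2 / sum d^2),
    where d are configuration distances and dhat the disparities."""
    d = np.asarray(distances, dtype=float)
    dhat = np.asarray(disparities, dtype=float)
    return float(np.sqrt(((d - dhat) ** 2).sum() / (d ** 2).sum()))

def rsq(disparities, distances):
    """Squared correlation between disparities and distances: the
    proportion of variance in the disparities accounted for."""
    r = np.corrcoef(np.ravel(disparities), np.ravel(distances))[0, 1]
    return float(r ** 2)
```

A configuration whose distances reproduce the disparities exactly gives stress 0 and RSQ 1; larger discrepancies raise stress and lower RSQ.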

Results

The primary objective of this study was to investigate the structure of items in the Jouve Cerebrals Crystallized Educational Scale (JCCES) Mathematical Problems (MP) using Alternating Least Squares Scaling (ALSCAL). The dataset consisted of intercorrelations between the 38 items of the MP subtest, with a sample size of 588 participants. The analysis involved assessing the goodness of fit for various dimensional solutions, with the aim of identifying the lowest dimensionality offering a reasonable fit, operationalized as an RSQ value greater than 0.80 and a stress value within the range of 0.10 to 0.20.

Statistical Analyses

ALSCAL (Young et al., 1978) was employed to derive solutions for dimensionalities ranging from two to five, following the iterative optimization procedure, convergence criterion, and fit indices (Kruskal's Stress Formula 1 and RSQ) described under Statistical Methods.
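The overall pipeline, turning an item intercorrelation matrix into dissimilarities and scaling them nonmetrically, can be sketched with scikit-learn. Note the hedges: the matrix below is hypothetical (not the JCCES MP data), and scikit-learn's SMACOF-based nonmetric MDS minimizes stress rather than ALSCAL's SSTRESS, so it is only an analogue.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical 4-item intercorrelation matrix (illustrative only;
# not the JCCES MP data).
R = np.array([
    [1.0, 0.7, 0.3, 0.2],
    [0.7, 1.0, 0.4, 0.3],
    [0.3, 0.4, 1.0, 0.6],
    [0.2, 0.3, 0.6, 1.0],
])

# Convert correlations to dissimilarities (higher r -> smaller distance).
D = 1.0 - R
np.fill_diagonal(D, 0.0)

# Nonmetric MDS (SMACOF) as a stand-in for ALSCAL.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(D)
print(coords.shape)  # (4, 2): one point per item
```

In the study itself, `n_components` would range over 2 to 5 and the fit of each solution would be compared.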

Results of Statistical Analyses

The ALSCAL analysis derived solutions for two through five dimensions; the results for each dimensionality are presented below.

5-dimensional solution

The 5-dimensional solution showed the lowest stress value (0.127) among all the solutions, indicating a relatively better fit to the data. The RSQ value, which represents the proportion of variance in the scaled data accounted for by the corresponding distances, was 0.851. This value suggests that 85.1% of the variance in the disparities could be explained by the distances in the 5-dimensional solution. However, the additional dimension compared to the 4-dimensional solution may increase model complexity without providing a substantial improvement in fit.

Iteration history:

  • Iteration 1: SSTRESS = 0.23325
  • Iteration 2: SSTRESS = 0.18514, Improvement = 0.04811
  • Iteration 3: SSTRESS = 0.18247, Improvement = 0.00267
  • Iteration 4: SSTRESS = 0.18223, Improvement = 0.00024

4-dimensional solution

The 4-dimensional solution was identified as the lowest dimensionality offering a reasonable fit based on the predefined criteria (RSQ > 0.80, stress within 0.10 to 0.20). The stress value for this solution was 0.155, while the RSQ value was 0.815, indicating that 81.5% of the variance in the disparities was accounted for by the distances in the 4-dimensional solution.

Iteration history:

  • Iteration 1: SSTRESS = 0.26892
  • Iteration 2: SSTRESS = 0.22219, Improvement = 0.04673
  • Iteration 3: SSTRESS = 0.21927, Improvement = 0.00292
  • Iteration 4: SSTRESS = 0.21902, Improvement = 0.00025

3-dimensional solution

The 3-dimensional solution showed a stress value of 0.210 and an RSQ value of 0.739. This solution did not meet the predefined criteria for a reasonable fit, as the RSQ value was below the threshold of 0.80.

Iteration history:

  • Iteration 1: SSTRESS = 0.32326
  • Iteration 2: SSTRESS = 0.28030, Improvement = 0.04295
  • Iteration 3: SSTRESS = 0.27771, Improvement = 0.00259
  • Iteration 4: SSTRESS = 0.27727, Improvement = 0.00045

2-dimensional solution

The 2-dimensional solution had the highest stress value (0.305) among all the solutions, indicating a relatively poor fit to the data. The RSQ value was 0.647, suggesting that only 64.7% of the variance in the disparities was accounted for by the distances in the 2-dimensional solution.

Iteration history:

  • Iteration 1: SSTRESS = 0.42841
  • Iteration 2: SSTRESS = 0.37073, Improvement = 0.05768
  • Iteration 3: SSTRESS = 0.36893, Improvement = 0.00180
  • Iteration 4: SSTRESS = 0.36702, Improvement = 0.00191
  • Iteration 5: SSTRESS = 0.36696, Improvement = 0.00006
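The model-selection rule applied across these solutions (the lowest dimensionality with RSQ > 0.80 and stress between 0.10 and 0.20) can be made explicit using the fit values reported above:

```python
# Reported fit statistics: dimensionality -> (stress, RSQ).
fits = {
    2: (0.305, 0.647),
    3: (0.210, 0.739),
    4: (0.155, 0.815),
    5: (0.127, 0.851),
}

def lowest_adequate_dim(fits, rsq_min=0.80, stress_lo=0.10, stress_hi=0.20):
    """Return the smallest dimensionality meeting the predefined criteria."""
    for k in sorted(fits):
        stress, rsq = fits[k]
        if rsq > rsq_min and stress_lo <= stress <= stress_hi:
            return k
    return None

print(lowest_adequate_dim(fits))  # -> 4
```

Only the 4- and 5-dimensional solutions satisfy both criteria, and the rule selects the more parsimonious of the two.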

Interpretation of Results

The 4-dimensional solution, with a stress value of 0.155 and an RSQ value of 0.815, suggests that the structure of the Jouve Cerebrals Crystallized Educational Scale (JCCES) Mathematical Problems (MP) can be adequately represented in a 4-dimensional space. This solution accounts for 81.5% of the variance in the disparities, and it represents a balance between model complexity and goodness of fit.

The ALSCAL analysis results imply that there are four underlying dimensions or constructs in the JCCES Mathematical Problems items that contribute significantly to the structure of the data. These dimensions may represent distinct cognitive abilities, problem-solving strategies, or other factors that influence an individual's performance on the JCCES Mathematical Problems items.

It is essential to note that while the 5-dimensional solution provided marginally better fit statistics (RSQ = 0.851 and stress = 0.127), the additional dimension would increase model complexity without a substantial improvement in the goodness of fit. As a result, the 4-dimensional solution is more parsimonious and appropriate for this study.

To further interpret and understand the meaning of these dimensions, it is necessary to examine the item content and characteristics of the JCCES Mathematical Problems items, as well as any relevant theoretical frameworks in the field of cognitive and educational psychology. This examination would help researchers identify and label the dimensions, providing a better understanding of the underlying structure of the JCCES Mathematical Problems and informing future research and educational practice.

Limitations

Despite the successful identification of a 4-dimensional solution that met the predefined criteria, there are certain limitations to this study. First, the sample size of 588 participants may not be sufficient to generalize the findings to a broader population. Additionally, selection bias may be present, as the participants may not be representative of the entire population of interest. Finally, the study is limited by the methodological approach, as ALSCAL is an exploratory technique and may not provide definitive conclusions about the underlying structure of the data.

In conclusion, the 4-dimensional solution provided a reasonable fit for the structure of items in the JCCES Mathematical Problems, with an RSQ value of 0.815 and a stress value of 0.155. This solution offers a basis for further investigation and interpretation of the underlying dimensions in the JCCES Mathematical Problems dataset. However, it is important to consider the limitations of this study when interpreting and generalizing these findings.

Discussion

Interpretation of Results and Comparison with Previous Research

The results of the present study indicate that a 4-dimensional solution best represents the structure of the JCCES Mathematical Problems items. This finding is consistent with previous research suggesting that mathematical problem-solving involves multiple dimensions or cognitive abilities (e.g., Geary, 1994; Hecht, 2001; Swanson & Beebe-Frankenberger, 2004). These dimensions may represent distinct skills or strategies, such as numerical reasoning, spatial visualization, analytical thinking, and computational fluency. The identification of these dimensions provides a deeper understanding of the underlying structure of the JCCES Mathematical Problems and can inform both theoretical and practical applications in the field of cognitive and educational psychology.

Unexpected Findings and Their Importance

One notable finding in the present study was that the 5-dimensional solution, although offering slightly better fit statistics, did not provide a substantial improvement in the goodness of fit compared to the 4-dimensional solution. This suggests that the additional dimension may not be necessary or meaningful, and that the 4-dimensional solution is the more parsimonious choice. This result highlights the importance of weighing model complexity and parsimony alongside fit statistics when selecting a solution in multidimensional scaling analyses.

Implications for Theory, Practice, and Future Research

The present study's findings contribute to the understanding of the structure of mathematical problem-solving abilities as assessed by the JCCES Mathematical Problems. By identifying four underlying dimensions, researchers can further explore these dimensions' nature and implications for cognitive and educational psychology theories. For instance, the findings can inform the development of more targeted interventions and instructional strategies to improve specific dimensions of mathematical problem-solving abilities.

Moreover, the results can help practitioners, such as educators and clinicians, to better interpret and use the JCCES Mathematical Problems in various settings, such as educational assessment, cognitive assessment, and intervention planning. By understanding the underlying dimensions, practitioners can more accurately identify students' strengths and weaknesses and provide targeted support to enhance their mathematical problem-solving skills.

Limitations and Alternative Explanations

As mentioned earlier, several limitations should be considered when interpreting the findings of the present study. The sample size and potential selection bias may limit the generalizability of the results to a broader population. Additionally, the exploratory nature of the ALSCAL analysis does not allow for definitive conclusions about the underlying structure of the data. Future studies may employ confirmatory techniques, such as confirmatory factor analysis or structural equation modeling, to validate the 4-dimensional solution identified in the present study.

Another limitation is the potential influence of other factors, such as individual differences in motivation, attention, or working memory capacity, which may have affected participants' performance on the JCCES Mathematical Problems and, consequently, the identified dimensions. Future research could examine these potential influences and incorporate them into the analysis to gain a more comprehensive understanding of the underlying structure of mathematical problem-solving abilities.

Directions for Future Research

Future research should aim to replicate and extend the present study using larger and more diverse samples to increase generalizability. Furthermore, researchers could examine the content and characteristics of the JCCES Mathematical Problems items to better understand and label the identified dimensions. This analysis could involve examining the items in relation to relevant theoretical frameworks, such as the multiple-component model of mathematical problem-solving (Swanson & Beebe-Frankenberger, 2004), to provide more meaningful interpretations of the dimensions.

Additionally, longitudinal studies could investigate the development of the identified dimensions across different age groups and educational levels to explore their potential implications for educational practice and cognitive development. Finally, future research could examine the relationship between the identified dimensions and other cognitive abilities or academic achievement measures to explore the practical significance and predictive validity of the JCCES Mathematical Problems.

Conclusion

In conclusion, this study found that a 4-dimensional solution best represents the structure of the JCCES Mathematical Problems items, accounting for 81.5% of the variance in the disparities. These dimensions may represent distinct cognitive abilities or problem-solving strategies, providing valuable insights into the structure of mathematical problem-solving abilities. The findings have significant implications for both theory and practice, informing future research and educational interventions targeting specific dimensions of mathematical problem-solving.

However, it is important to acknowledge the study's limitations, including the sample size, potential selection bias, and the exploratory nature of the ALSCAL analysis. Future research should focus on validating the 4-dimensional solution using confirmatory techniques, examining the content of the items, and investigating potential influences of individual differences. Longitudinal studies and investigations of the relationship between the identified dimensions and other cognitive abilities or academic achievement measures are also recommended to further our understanding of the underlying structure of mathematical problem-solving abilities.

References

De Ayala, R. J. (2009). The theory and practice of item response theory. New York, NY: Guilford Press.

Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates. https://doi.org/10.4324/9781410605269

Geary, D. C. (1994). Children's mathematical development: Research and practical applications. Washington, DC: American Psychological Association. https://doi.org/10.1037/10163-000

Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Hillsdale, NJ: Erlbaum.

Hambleton, R. K., & Swaminathan, H. (1985). Item response theory: Principles and applications. Boston, MA: Kluwer-Nijhoff.

Hecht, S. A., & Vagi, K. J. (2010). Sources of group and individual differences in emerging fraction skills. Journal of Educational Psychology, 102(4), 843–859. https://doi.org/10.1037/a0019824

Jouve, X. (2010a). Investigating the Relationship Between JCCES and RIAS Verbal Scale: A Principal Component Analysis Approach. Retrieved from https://cogniqblog.blogspot.com/2010/02/on-relationship-between-jcces-and.html

Jouve, X. (2010b). Relationship between Jouve Cerebrals Crystallized Educational Scale (JCCES) Crystallized Educational Index (CEI) and Cognitive and Academic Measures. Retrieved from https://cogniqblog.blogspot.com/2010/02/correlations-between-jcces-and-other.html

Jouve, X. (2010c). Jouve Cerebrals Crystallized Educational Scale. Retrieved from http://www.cogn-iq.org/tests/jouve-cerebrals-crystallized-educational-scale-jcces

Kruskal, J. B., & Wish, M. (1978). Multidimensional scaling. Beverly Hills, CA: Sage. https://doi.org/10.4135/9781412985130

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.

Swanson, H. L., & Beebe-Frankenberger, M. (2004). The Relationship Between Working Memory and Mathematical Problem Solving in Children at Risk and Not at Risk for Serious Math Difficulties. Journal of Educational Psychology, 96(3), 471–491. https://doi.org/10.1037/0022-0663.96.3.471

Young, F. W., Takane, Y., & Lewyckyj, R. (1978). ALSCAL: A nonmetric multidimensional scaling program with several individual-differences options. Behavior Research Methods & Instrumentation, 10(3), 451–453. https://doi.org/10.3758/BF03205177

Tuesday, June 1, 2010

[Article Review] Unlocking the Connection: The Relationship Between SAT Scores and General Cognitive Ability

Reference

Frey, M. C., & Detterman, D. K. (2004). Scholastic Assessment or g?: The Relationship Between the Scholastic Assessment Test and General Cognitive Ability. Psychological Science, 15(6), 373-378. https://doi.org/10.1111/j.0956-7976.2004.00687.x

Review

In their seminal article, Frey and Detterman (2004) delve into the relationship between the Scholastic Assessment Test (SAT) and general cognitive ability (g). Their research aimed to understand the correlation between the two constructs and evaluate the SAT as a potential measure of g, in addition to exploring its use as a premorbid measure of intelligence. Two distinct studies were conducted: the first utilized data from the National Longitudinal Survey of Youth 1979, while the second examined the correlation between revised SAT scores and scores on the Raven's Advanced Progressive Matrices among undergraduates.

The first study reported a significant correlation of .82 (corrected for nonlinearity) between measures of g extracted from the Armed Services Vocational Aptitude Battery and the SAT scores of 917 participants. The second study further substantiated the relationship, revealing a correlation of .483 (.72 when corrected for restricted range) between revised SAT scores and scores on the Raven's Advanced Progressive Matrices in the undergraduate sample. These findings indicate that the SAT is predominantly a test of g, with the authors providing equations for converting SAT scores to estimated IQs. This conversion could be useful for estimating premorbid IQ or conducting individual-differences research among college students.

Frey and Detterman's (2004) research provides valuable insights into the relationship between the SAT and general cognitive ability, offering empirical support for the SAT's validity as a measure of g. This information has important implications for the use of the SAT in educational and psychological settings. Furthermore, the conversion equations presented by the authors may facilitate researchers in estimating premorbid IQ or conducting individual differences research with college students, broadening the potential applications of SAT scores.

Friday, April 16, 2010

Dissecting Cognitive Measures in Reasoning and Language at Cogn-IQ.org

The study examines the dimensions of general reasoning ability (gθ) as measured by the Jouve-Cerebrals Test of Induction (JCTI) and the Scholastic Assessment Test-Recentered (SAT), specifically its Mathematical and Verbal subscales. A principal components factor analysis conducted with a sample of American students reveals a two-factor cognitive structure: the Mathematical SAT and the JCTI align strongly with inductive reasoning abilities, apparently reflecting a general reasoning factor.

Conversely, the Verbal SAT loads primarily on language development. This delineation suggests that while the Mathematical SAT and the JCTI map onto general reasoning, the Verbal SAT serves as a distinct indicator of language development skills.

Notwithstanding the limitations of sample size and the exclusion of top SAT performers, these insights advance the discourse on the psychometric properties of these assessments and their correlation with cognitive abilities. The exploration paves the way for more expansive studies that could further substantiate the interrelations among these cognitive domains and refine our comprehension of educational assessment tools.
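The kind of two-factor pattern described above can be illustrated with a small sketch. Everything here is assumed for illustration: the scores are synthetic, and scikit-learn's PCA stands in for the principal components factor analysis used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 200

# Hypothetical scores: two latent abilities with three indicators each
# (illustrative only; not the JCTI/SAT data from the study).
reasoning = rng.normal(size=n)
language = rng.normal(size=n)
X = np.column_stack([
    reasoning + 0.3 * rng.normal(size=n),  # reasoning-loaded indicators
    reasoning + 0.3 * rng.normal(size=n),
    reasoning + 0.3 * rng.normal(size=n),
    language + 0.3 * rng.normal(size=n),   # language-loaded indicators
    language + 0.3 * rng.normal(size=n),
    language + 0.3 * rng.normal(size=n),
])

pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_ratio_)  # two components dominate
```

With data generated this way, the first two components absorb most of the variance, mirroring the bifurcation between a reasoning factor and a language factor.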

Reference: Jouve, X. (2010). Uncovering The Underlying Factors Of The Jouve-Cerebrals Test Of Induction And The Scholastic Assessment Test-Recentered. Cogn-IQ Research Papers. https://www.cogn-iq.org/doi/04.2010/dd802ac1ff8d41abe103

Sunday, February 14, 2010

Relationship between Jouve Cerebrals Crystallized Educational Scale (JCCES) Crystallized Educational Index (CEI) and Cognitive and Academic Measures

Abstract

This study aimed to examine the relationships between the Jouve Cerebrals Crystallized Educational Scale (JCCES) Crystallized Educational Index (CEI) and various measures of cognitive abilities and academic achievement. Pearson correlation analyses were used to test the research hypotheses. The results showed strong correlations between the JCCES CEI and measures of cognitive abilities, including the Reynolds Intellectual Assessment Scale (RIAS), Wechsler Adult Intelligence Scale - Third Edition (WAIS-III), Wechsler Intelligence Scale for Children - Third Edition (WISC-III), General Ability Measure for Adults (GAMA), and Stanford Binet Intelligence Scale (SBIS). Additionally, strong correlations were observed between the JCCES CEI and measures of academic achievement, including the Scholastic Assessment Test (SAT), American College Test (ACT), and Graduate Record Examination (GRE). The results suggest that the JCCES CEI is an effective measure of general cognitive ability and academic achievement across different age groups.

Keywords: Jouve Cerebrals Crystallized Educational Scale, Crystallized Educational Index, cognitive abilities, academic achievement, Pearson correlation analyses, Scholastic Assessment Test, American College Test, Graduate Record Examination.

Introduction

Psychometrics, the scientific study of psychological measurement, has been a critical aspect of psychology since the early 20th century, with the development of the first intelligence tests by pioneers such as Binet and Simon (1905) and Wechsler (1939). These seminal works laid the foundation for the development of various instruments to assess cognitive abilities, personality traits, and educational outcomes (Anastasi & Urbina, 1997). Over the years, psychometric theories have evolved, with advancements in factor analysis, item response theory, and other methodologies contributing to the refinement of existing instruments and the development of new ones (Embretson & Reise, 2000).

One such instrument is the Jouve Cerebrals Crystallized Educational Scale (JCCES), which assesses crystallized intelligence, a key component of general cognitive ability (Cattell, 1971; Horn & Cattell, 1966). Crystallized intelligence, often considered the product of accumulated knowledge and experiences, has been shown to be a reliable predictor of academic achievement and occupational success (Deary et al., 2007; Neisser et al., 1996).

The present study aims to examine the relationships between the JCCES Crystallized Educational Index (CEI) and various other measures of cognitive abilities and academic achievement, such as the Reynolds Intellectual Assessment Scale (RIAS), the Wechsler Adult Intelligence Scale - Third Edition (WAIS-III), the Scholastic Assessment Test (SAT), the American College Test (ACT), the Graduate Record Examination (GRE), the Armed Forces Qualification Test (AFQT), the Wechsler Intelligence Scale for Children - Third Edition (WISC-III), the General Ability Measure for Adults (GAMA), and the Stanford Binet Intelligence Scale (SBIS). Pearson correlation analyses were employed to investigate these relationships.

A comprehensive understanding of the relationships between the JCCES CEI and these well-established measures can provide valuable insight into the validity and utility of the JCCES in various contexts. Previous research has demonstrated that crystallized intelligence is a significant predictor of academic achievement (Deary et al., 2007) and is often correlated with other measures of cognitive abilities (Carroll, 1993). Therefore, the present study seeks to extend the existing literature by further examining these relationships, while also assessing the JCCES CEI's potential as an effective tool for predicting academic and cognitive outcomes.

The results of this study may have important implications for the use of the JCCES in educational and occupational settings and may contribute to the ongoing refinement of psychometric theories and methodologies. By exploring the relationships between the JCCES CEI and a range of well-established cognitive and achievement measures, this study aims to provide a comprehensive understanding of the JCCES's validity and utility within the broader context of psychometrics research.

Results

Statistical Analyses

The research hypotheses were tested using Pearson correlations to examine the relationships between the Jouve Cerebrals Crystallized Educational Scale (JCCES) Crystallized Educational Index (CEI) and various other measures. Assumptions made for the Pearson correlation analyses included linearity, homoscedasticity, and normality of the data.
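A Pearson correlation of this kind can be computed with `scipy.stats.pearsonr`. The paired scores below are hypothetical stand-ins, not the study's data.

```python
from scipy.stats import pearsonr

# Hypothetical paired scores (illustrative only; not the study's data):
cei = [95, 102, 110, 118, 125, 131, 140]   # JCCES CEI-style scores
fsiq = [92, 104, 108, 121, 122, 135, 138]  # criterion IQ-style scores

r, p = pearsonr(cei, fsiq)
print(round(r, 3), p)
```

`pearsonr` returns both the correlation coefficient and the two-tailed p-value used to judge significance thresholds such as p < .001.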

Presentation of Results

The results of the Pearson correlation analyses between the JCCES CEI and various measures of cognitive abilities and academic achievement are presented in detail below. The majority of correlations were statistically significant at the p < .001 level, indicating strong relationships between the JCCES CEI and the respective measures.

Reynolds Intellectual Assessment Scale (RIAS, N = 138): The JCCES CEI demonstrated strong correlations with the Verbal Intelligence Index (VII) (r = .859, p < .001), Guess What? (GWH) (Information) (r = .814, p < .001), and Verbal Reasoning (VRZ) (r = .859, p < .001).

Wechsler Adult Intelligence Scale - Third Edition (WAIS-III, N = 76): The JCCES CEI showed strong correlations with Full Scale IQ (FSIQ) (r = .821, p < .001), Verbal IQ (VIQ) (r = .837, p < .001), Performance IQ (PIQ) (r = .660, p < .001), Verbal Comprehension Index (VCI) (r = .816, p < .001), Vocabulary (VOC) (r = .775, p < .001), Similarities (SIM) (r = .579, p < .001), and Information (INF) (r = .769, p < .001).

Scholastic Assessment Test (SAT) (three different versions): The JCCES CEI exhibited strong correlations with SAT Composite scores for all three versions: <1995 (r = .814, p < .001, N = 87), 1995-2005 (r = .826, p < .001, N = 118), and >2005 (r = .858, p < .001, N = 125). Similarly, significant correlations were observed with Verbal and Mathematical scores across the three versions.

American College Test (ACT, N = 133): The JCCES CEI was significantly correlated with the ACT Composite score (r = .691, p < .001) and all subscales, including English (r = .636, p < .001), Mathematics (r = .600, p < .001), Reading (r = .676, p < .001), and Science (r = .685, p < .001).

Graduate Record Examination (GRE, N = 66): The JCCES CEI demonstrated a strong correlation with the GRE Composite score (r = .844, p < .001), Verbal (r = .768, p < .001), and Quantitative (r = .819, p < .001) scores. However, the correlation with the GRE Analytical subscale was weaker (r = .430, p = .020, N = 29).

Armed Forces Qualification Test (AFQT, N = 62): The JCCES CEI showed a strong correlation with the AFQT percentile converted to a deviation IQ (r = .825, p < .001).

Wechsler Intelligence Scale for Children - Third Edition (WISC-III, N = 29): The JCCES CEI had strong correlations with Full Scale IQ (FSIQ) (r = .851, p < .001), Verbal IQ (VIQ) (r = .665, p = .003, N = 18), and Performance IQ (PIQ) (r = .703, p = .001, N = 18).

General Ability Measure for Adults (GAMA, N = 64): The JCCES CEI was significantly correlated with the GAMA IQ score (r = .617, p < .001) and all subscales, including Matching (r = .467, p < .001), Analogies (r = .612, p < .001), Sequences (r = .455, p < .001), and Construction (r = .482, p < .001).

Stanford Binet Intelligence Scale (SBIS, N = 10): The JCCES CEI exhibited the strongest correlation with the SBIS Full Scale IQ (FSIQ) (r = .883, p = .001).

Interpretation of Results

Upon examining the Pearson correlation analysis results in greater detail, we can further interpret the relationships between the JCCES CEI and various cognitive and academic measures. The majority of the correlations were strong, supporting the research hypothesis that the JCCES CEI is positively related to these measures.
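The p-values reported above follow from the conventional significance test for a Pearson correlation, a t-test with N - 2 degrees of freedom (the study does not name its software, so this is the standard computation rather than a statement of its exact procedure):

```python
from math import sqrt
from scipy.stats import t as t_dist

def pearson_r_p_value(r: float, n: int) -> float:
    """Two-tailed p-value for a Pearson correlation r computed
    from n pairs, via the t-test with n - 2 degrees of freedom."""
    t_stat = r * sqrt(n - 2) / sqrt(1 - r ** 2)
    return 2 * t_dist.sf(abs(t_stat), n - 2)

# Reproduces the reported GRE Analytical result (r = .430, N = 29).
p_analytical = pearson_r_p_value(0.430, 29)  # ≈ .020
```

This also makes the sample-size dependence visible: the same r = .430 would be far more significant at N = 133 than at N = 29.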

The strong relationships between the JCCES CEI and various intelligence scales provide further evidence that the JCCES CEI is an effective measure of general cognitive ability across different age groups. Both the Wechsler Adult Intelligence Scale - Third Edition (WAIS-III) and the Wechsler Intelligence Scale for Children - Third Edition (WISC-III) are widely recognized and well-established measures of cognitive ability, assessing various domains such as verbal comprehension, perceptual organization, working memory, and processing speed.

Intelligence Tests

  1. Wechsler Adult Intelligence Scale - Third Edition (WAIS-III): The WAIS-III is designed for individuals aged 16 to 89 years, assessing cognitive abilities across multiple domains. The strong correlation between the JCCES CEI and the WAIS-III Full Scale IQ (FSIQ) (r = .821, p < .001, N = 76) indicates that the JCCES CEI effectively captures general cognitive ability in adults. This positive relationship suggests that the JCCES CEI could be a useful tool for assessing cognitive abilities in various settings, such as educational, clinical, and occupational contexts.
  2. Wechsler Intelligence Scale for Children - Third Edition (WISC-III): The WISC-III is designed for children aged 6 to 16 years, assessing cognitive abilities across a similar range of domains as the WAIS-III. The strong correlation between the JCCES CEI and the WISC-III Full Scale IQ (FSIQ) (r = .851, p < .001, N = 29) suggests that the JCCES CEI is also effective in measuring general cognitive ability in children. This positive relationship implies that the JCCES CEI could be a valuable instrument for evaluating cognitive abilities in educational settings, as well as for identifying potential learning difficulties or giftedness in children.
Academic Tests

The Scholastic Assessment Test (SAT) is a widely used standardized test for college admissions in the United States, designed to measure students' critical thinking, problem-solving, and overall academic aptitude. The strong relationships between the JCCES CEI and SAT Composite scores across all three versions suggest that the JCCES CEI is a reliable indicator of academic achievement as measured by the SAT.

The SAT has undergone several changes over the years, resulting in three distinct versions. The following details illustrate the strong relationships between the JCCES CEI and each version of the SAT:

  1. SAT <1995: This version of the SAT consisted of two main sections: Verbal and Mathematical. The JCCES CEI showed a strong correlation with the SAT Composite score for this version (r = .814, p < .001, N = 87), indicating that the JCCES CEI is positively related to both verbal and mathematical abilities as measured by the SAT <1995.
  2. SAT 1995-2005: This version of the SAT maintained the Verbal and Mathematical sections, but introduced a new format and scoring system. The JCCES CEI displayed a strong correlation with the SAT Composite score for this version (r = .826, p < .001, N = 118), suggesting that the JCCES CEI remains a reliable indicator of academic achievement despite changes to the SAT format.
  3. SAT >2005: This version of the SAT introduced a third section, Writing, in addition to the existing Verbal (renamed as Reading) and Mathematical sections. The JCCES CEI demonstrated a strong correlation with the SAT Composite score for this version (r = .858, p < .001, N = 125), implying that the JCCES CEI is positively related to all three aspects of the SAT: Reading, Mathematical, and Writing.

The American College Test (ACT) correlations with the JCCES CEI provide further evidence that the JCCES CEI captures various aspects of academic achievement across multiple subject areas. The ACT is a standardized test that assesses high school students' general educational development and their ability to complete college-level work, covering four main subject areas: English, Mathematics, Reading, and Science.

The Pearson correlation results for the ACT subscales are as follows:

  1. English: The JCCES CEI exhibited a strong correlation with the ACT English subscale (r = .636, p < .001, N = 133). This suggests that the JCCES CEI is positively related to English language skills, including grammar, punctuation, sentence structure, and rhetorical skills.
  2. Mathematics: The JCCES CEI displayed a strong correlation with the ACT Mathematics subscale (r = .600, p < .001, N = 133). This indicates a positive relationship between the JCCES CEI and mathematical problem-solving abilities, including knowledge of algebra, geometry, and trigonometry.
  3. Reading: The JCCES CEI showed a strong correlation with the ACT Reading subscale (r = .676, p < .001, N = 133). This implies that the JCCES CEI is positively associated with reading comprehension skills, including the ability to understand and analyze complex literary and informational texts.
  4. Science: The JCCES CEI demonstrated a strong correlation with the ACT Science subscale (r = .685, p < .001, N = 133). This suggests that the JCCES CEI is positively related to scientific reasoning and problem-solving skills, including the ability to interpret and analyze data from various scientific disciplines.

The moderate correlation between the JCCES CEI and the Graduate Record Examination (GRE) Analytical subscale (r = .430, p = .020, N = 29) is notable: it suggests a weaker relationship between the JCCES CEI and analytical abilities than the strong correlations observed with other cognitive and academic measures. Several factors might contribute to this finding, including:
  1. Differences in assessed skills: The JCCES CEI, which consists of Verbal Analogies, Mathematical Problems, and General Knowledge subtests, primarily measures crystallized intelligence. Crystallized intelligence refers to the knowledge and skills acquired through experience and education, such as vocabulary and factual information. In contrast, the GRE Analytical subscale assesses analytical writing skills, including the ability to articulate complex ideas, support arguments with relevant reasons and examples, and demonstrate critical thinking. The moderate correlation between the JCCES CEI and the GRE Analytical subscale may reflect the differences in the skills assessed by these two measures.
  2. Variability in the sample: The sample used in this study might have influenced the observed correlation between the JCCES CEI and the GRE Analytical subscale. The study participants might have had varying levels of exposure to analytical writing tasks, which could affect their performance on the GRE Analytical subscale. Additionally, the sample size for the GRE Analytical subscale (N = 29) was smaller than that of other measures, which might limit the generalizability of the findings.

Discussion

The present study aimed to examine the relationships between the Jouve Cerebrals Crystallized Educational Scale (JCCES) Crystallized Educational Index (CEI) and various measures of cognitive abilities and academic achievement. The results of the study support the research hypothesis that the JCCES CEI is positively related to these measures. Specifically, the JCCES CEI demonstrated strong correlations with measures of verbal intelligence, information, verbal reasoning, full-scale IQ, verbal IQ, performance IQ, verbal comprehension, vocabulary, and similarities; with SAT Composite scores across three different versions; with the ACT Composite score and its subscales; with the GRE Composite and Quantitative scores; and with the AFQT-derived IQ score. The JCCES CEI also exhibited strong correlations with the GAMA IQ score and all its subscales, as well as with the SBIS Full Scale IQ.

The strong correlations between the JCCES CEI and various intelligence scales provide further evidence that the JCCES CEI is an effective measure of general cognitive ability across different age groups. The positive relationships between the JCCES CEI and various cognitive and academic measures suggest that the JCCES CEI could be a useful tool for assessing cognitive abilities and academic achievement in various settings, such as educational, clinical, and occupational contexts (Deary et al., 2007).

The strong correlations between the JCCES CEI and the SAT Composite scores across all three versions suggest that the JCCES CEI is a reliable indicator of academic achievement as measured by the SAT. The strong correlations observed between the JCCES CEI and the ACT composite score and subscales suggest that the JCCES CEI captures various aspects of academic achievement across multiple subject areas.

The moderate correlation between the JCCES CEI and the GRE Analytical subscale suggests a weaker relationship between the JCCES CEI and analytical abilities compared to the strong correlations observed with other cognitive and academic measures. This finding may reflect differences in the skills assessed by these two measures, as well as the variability in the sample used in this study.

Implications for Theory, Practice, and Future Research

The findings of the present study have several implications for theory and practice. The strong correlations observed between the JCCES CEI and measures of cognitive abilities and academic achievement support the validity and reliability of the JCCES as a measure of general cognitive ability and academic achievement. The JCCES CEI could be a valuable tool for assessing cognitive abilities and academic achievement in educational, clinical, and occupational settings.

The results of this study also have implications for future research. The present study used a cross-sectional design, and future research could use a longitudinal design to examine the stability and predictive validity of the JCCES CEI over time. Additionally, future research could explore the relationship between the JCCES CEI and other measures of academic achievement, such as high school and college GPA. Furthermore, future research could examine the factor structure of the JCCES and its relationships with other measures of cognitive abilities.

Limitations

There are several limitations to this study that may have affected the results. First, the sample size varied across the different measures, with smaller sample sizes for some of the tests. Smaller sample sizes may have limited the statistical power to detect significant correlations.
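The power concern can be made concrete with the standard Fisher z approximation: the smallest correlation detectable with 80% power at α = .05 (two-tailed) shrinks as N grows. The sketch below uses the usual critical values 1.96 and 0.84; these are textbook constants, not figures taken from the study:

```python
from math import sqrt, tanh

def min_detectable_r(n: int, alpha_z: float = 1.96, power_z: float = 0.84) -> float:
    """Approximate smallest |r| detectable with ~80% power at
    alpha = .05 (two-tailed), via the Fisher z transformation."""
    z = (alpha_z + power_z) / sqrt(n - 3)  # required Fisher-z effect size
    return tanh(z)                          # inverse Fisher transform back to r

# Ten participants (the SBIS sample) can reliably detect only very
# large correlations; the full PCA sample (N = 125) detects much smaller ones.
small_sample = min_detectable_r(10)
large_sample = min_detectable_r(125)
```

Under this approximation, the SBIS correlation (r = .883, N = 10) clears the detectability threshold, but more modest effects would not have been detectable in samples of that size.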

Second, selection bias may have influenced the results, as participants may have been more likely to respond to the survey if they had higher cognitive abilities or academic achievement. This could have resulted in an overestimation of the correlations between the JCCES CEI and other measures.

Finally, many samples relied on self-reported data, which may be subject to reporting biases and inaccuracies. Although the JCCES is an untimed, self-administered, open-ended test, it is possible that participants' responses were influenced by factors such as social desirability or recall biases, which may have affected the validity of the study results.

Future Research

Future research could address some of the limitations of this study, including increasing sample sizes for certain measures and using more diverse samples to improve generalizability. Additionally, future research could examine the JCCES CEI's relationship with other cognitive and academic measures not included in this study, such as measures of creativity or problem-solving ability.

Further exploration of the weaker relationship between the JCCES CEI and the GRE Analytical subscale could also be valuable. Additional research could investigate whether the moderate correlation is due to differences in the skills assessed or limitations of the sample used in this study. Future studies could also examine the JCCES CEI's relationship with other measures of analytical abilities, such as performance on analytical writing tasks or measures of critical thinking.

Implications

The results of this study have important implications for both theory and practice. The strong relationships between the JCCES CEI and various measures of cognitive abilities and academic achievement provide further evidence for the construct validity of the JCCES as a measure of general cognitive ability. The JCCES CEI may be particularly useful in educational and occupational settings for assessing individuals' cognitive abilities, identifying potential learning difficulties or giftedness, and predicting academic and occupational success.

Additionally, the strong correlations between the JCCES CEI and the SAT and ACT suggest that the JCCES CEI is an effective tool for predicting academic achievement. As such, the JCCES CEI may be useful for guiding educational interventions and for identifying individuals who may benefit from academic support.

However, it is important to note that the JCCES CEI should not be used as the sole measure for assessing cognitive abilities or academic achievement. Rather, the JCCES CEI should be used in conjunction with other measures to provide a more comprehensive evaluation of an individual's strengths and weaknesses.

Conclusion

In conclusion, the results of this study provide strong evidence for the construct validity of the JCCES as a measure of general cognitive ability. The JCCES CEI demonstrated strong correlations with various measures of cognitive abilities and academic achievement, including well-established measures such as the WAIS-III and the SAT. The study results suggest that the JCCES CEI may be a useful tool for assessing cognitive abilities and predicting academic and occupational success. However, the limitations of the study should be taken into consideration when interpreting the results. Future research could address some of the limitations and further explore the JCCES CEI's relationship with other measures of cognitive abilities and academic achievement.

References

Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice-Hall.

Binet, A., & Simon, T. (1905). New methods for the diagnosis of the intellectual level of subnormals. L'Année Psychologique, 11, 191-244.

Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston, MA: Houghton Mifflin.

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge, UK: Cambridge University Press.

Deary, I. J., Strand, S., Smith, P., & Fernandes, C. (2007). Intelligence and educational achievement. Intelligence, 35(1), 13-21. https://doi.org/10.1016/j.intell.2006.02.001

Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates.

Wechsler, D. (1939). The measurement of adult intelligence. Baltimore, MD: Williams & Wilkins.

Thursday, February 4, 2010

Investigating the Relationship Between JCCES and RIAS Verbal Scale: A Principal Component Analysis Approach

Abstract


This study examined the relationship between the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the Reynolds Intellectual Assessment Scales (RIAS) Verbal Scale using Principal Component Analysis (PCA). The PCA revealed a strong relationship between JCCES and RIAS Verbal Scale, supporting the hypothesis that there is a common underlying construct representing general verbal and crystallized intelligence. Additionally, mathematical problem-solving was found to be a distinct construct from general verbal and crystallized intelligence. Despite some limitations, this study provides empirical support for the relationship between crystallized intelligence and verbal abilities, as well as the distinction between mathematical and verbal abilities, which can inform educational interventions and assessments.


Keywords: Jouve Cerebrals Crystallized Educational Scale, Reynolds Intellectual Assessment, Verbal Scale, Principal Component Analysis, crystallized intelligence, mathematical problem-solving


Introduction


Psychometrics, the science of measuring psychological attributes, has a long history of developing and refining theories and instruments to assess cognitive abilities (Cattell, 1963; Carroll, 1993). The present study focuses on the relationship between the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the Reynolds Intellectual Assessment Scales (RIAS) Verbal Scale (Reynolds & Kamphaus, 2003), two psychometric instruments designed to assess crystallized intelligence and verbal abilities, respectively. Crystallized intelligence, first proposed by Cattell (1963), refers to the ability to access and utilize accumulated knowledge and experience, which is closely related to verbal abilities (Ackerman, 1996; Kaufman & Lichtenberger, 2006). Theories of cognitive abilities, such as those proposed by Cattell (1971), Horn and Cattell (1966), and Carroll (1993), have suggested that crystallized intelligence and verbal abilities share a common underlying construct.


Previous research has supported the relationship between crystallized intelligence and verbal abilities (Cattell, 1971; Horn & Cattell, 1966; Carroll, 1993), as well as the distinction between mathematical and verbal abilities (Deary et al., 2007). However, few studies have specifically examined the relationship between the JCCES and RIAS Verbal Scale. The present study aims to address this gap by investigating the relationship between these two instruments using principal component analysis (PCA), a statistical technique commonly employed in psychometrics to reduce data complexity and identify underlying constructs (Jolliffe, 1986).


The research question guiding this study is: What is the relationship between the JCCES and RIAS Verbal Scale, as assessed by PCA? To answer this question, the study will test the hypothesis that there is a strong relationship between the JCCES and RIAS Verbal Scale, as indicated by high factor loadings on a common underlying construct, which may represent general verbal and crystallized intelligence. Additionally, the study will explore the relationship between mathematical problem-solving and the other variables, given the distinction between mathematical and verbal abilities noted in previous research (Deary et al., 2007).


This study builds on the existing literature by providing a more detailed examination of the relationship between the JCCES and RIAS Verbal Scale, which has implications for both theory and practice. Understanding the relationship between these two instruments can inform the development of educational interventions and assessments tailored to the specific needs of learners with different cognitive profiles (Kaufman & Lichtenberger, 2006; McGrew & Flanagan, 1998). Furthermore, the findings contribute to the understanding of the structure of cognitive abilities, particularly the relationship between crystallized intelligence and verbal abilities (Ackerman, 1996). This may be useful for refining theoretical models of cognitive abilities and guiding future research in this area (Carroll, 2003; Deary et al., 2007).


Method


Research Design


The current study employed a correlational research design to investigate the relationship between the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the Reynolds Intellectual Assessment Scales (RIAS) Verbal Scale. This design allowed for the examination of associations between the variables without manipulating or controlling any of the measures (Creswell, 2009). A correlational design was chosen because it is well-suited for studying the relationships among naturally occurring variables, such as crystallized intelligence and verbal abilities (Campbell & Stanley, 1963; Kerlinger, 2000).


Participants


A total of 125 participants were recruited for this study: 81 males (64.8%) and 44 females (35.2%). The participants' mean age was 33.82 years (SD = 12.56). In terms of education, 79.83% of the participants held at least a college degree. Participants were recruited using convenience sampling methods, such as posting advertisements on social media and online forums.


Materials


The JCCES is a measure of crystallized intelligence, consisting of three subtests: Verbal Analogies (VA), Mathematical Problems (MP), and General Knowledge (GK). The RIAS Verbal Scale (Reynolds & Kamphaus, 2003) is a measure of verbal intelligence, consisting of two subtests: Guess What? (GWH) and Verbal Reasoning (VRZ). Both the JCCES and RIAS have been validated in previous research and have demonstrated strong psychometric properties.


Procedure


Participants were provided with informed consent forms. The tasks were presented in a fixed order, starting with the JCCES (VA, MP, and GK) followed by the RIAS Verbal Scale (GWH and VRZ). Instructions for each task were provided before the commencement of each subtest. Participants were given unlimited time to complete the tasks. Upon completion of the tasks, participants were debriefed and thanked for their participation.


Statistical Analysis


Data were analyzed using Excel. Descriptive statistics were calculated for the demographic variables, and a Principal Component Analysis (PCA) was conducted to examine the relationships among the JCCES and RIAS Verbal Scale subtests (Jolliffe, 1986). The PCA included Bartlett's sphericity test to assess the suitability of the data for PCA (Bartlett, 1954) and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy to evaluate the adequacy of the sample size for each variable (Kaiser, 1974; Hutcheson & Sofroniou, 1999).
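Both preliminary checks named above can be computed directly from the correlation matrix. The sketch below shows the standard formulas for Bartlett's sphericity test and the overall KMO index, applied to an illustrative 3-variable matrix (the toy values are assumptions for demonstration, not the study's data):

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(corr: np.ndarray, n: int):
    """Bartlett's test that a correlation matrix is an identity matrix.
    Returns the chi-square statistic, degrees of freedom, and p-value."""
    p = corr.shape[0]
    chi_sq = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) / 2
    return chi_sq, df, chi2.sf(chi_sq, df)

def kmo_overall(corr: np.ndarray) -> float:
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    inv = np.linalg.inv(corr)
    # Anti-image (partial) correlations from the inverse correlation matrix.
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    off = ~np.eye(corr.shape[0], dtype=bool)
    r2 = (corr[off] ** 2).sum()   # squared zero-order correlations
    q2 = (partial[off] ** 2).sum()  # squared partial correlations
    return r2 / (r2 + q2)

# Illustrative correlation matrix for three hypothetical variables.
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.55],
              [0.5, 0.55, 1.0]])
chi_sq, df, p_value = bartlett_sphericity(R, n=125)
kmo = kmo_overall(R)
```

A significant Bartlett result and a KMO above roughly .60 are the usual criteria for proceeding with PCA, which is the pattern the study reports.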


Results


The present study investigated the relationship between the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the Reynolds Intellectual Assessment Scales (RIAS) Verbal Scale. A Principal Component Analysis (PCA) was conducted to test the research hypotheses. This analysis was performed on five variables: Verbal Analogies (VA), Mathematical Problems (MP), and General Knowledge (GK) from the JCCES, and Guess What? (GWH) and Verbal Reasoning (VRZ) from the RIAS. The PCA was a Pearson (n) type, with no missing data for any of the variables. The analysis included Bartlett's sphericity test to assess the suitability of the data for PCA, and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy to evaluate the adequacy of the sample size for each variable.


Results of the Statistical Analyses


The correlation matrix revealed significant positive correlations between all the variables, with coefficients ranging from 0.471 to 0.761. Bartlett's sphericity test confirmed the appropriateness of the data for PCA (χ² = 385.145, df = 10, p < 0.0001, α = 0.05). The KMO measure of sampling adequacy was satisfactory for all variables (0.844 to 0.891) and the overall KMO value was 0.868, indicating an adequate sample size.


The PCA extracted five factors with eigenvalues ranging from 0.224 to 3.574. The first factor (F1) accounted for 71.472% of the total variance, the second factor (F2) for 12.329%, and the remaining factors (F3 to F5) for 16.199%. A Varimax rotation was applied to facilitate the interpretation of the factor loadings. After rotation, the percentage of variance accounted for by the first two factors (D1 and D2) was 57.213% and 26.588%, respectively, totaling 83.801% of the cumulative variance.
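The pipeline described above (component extraction followed by Varimax rotation) can be sketched with an eigendecomposition of the correlation matrix and the standard SVD-based Varimax algorithm; this is a generic implementation under assumed toy data, not necessarily the exact routine used in the study:

```python
import numpy as np

def pca_loadings(corr: np.ndarray, n_factors: int) -> np.ndarray:
    """Unrotated PCA loadings: eigenvectors scaled by sqrt(eigenvalues)."""
    vals, vecs = np.linalg.eigh(corr)
    order = np.argsort(vals)[::-1][:n_factors]  # largest components first
    return vecs[:, order] * np.sqrt(vals[order])

def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Varimax rotation of a loading matrix (no Kaiser normalization)."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p)
        )
        R = u @ vt  # orthogonal rotation maximizing loading variance
        new_var = s.sum()
        if new_var - var < tol:
            break
        var = new_var
    return loadings @ R

# Toy 5-variable matrix with a 3 + 2 block structure (illustrative only).
R = np.array([[1.0, 0.7, 0.7, 0.2, 0.2],
              [0.7, 1.0, 0.7, 0.2, 0.2],
              [0.7, 0.7, 1.0, 0.2, 0.2],
              [0.2, 0.2, 0.2, 1.0, 0.7],
              [0.2, 0.2, 0.2, 0.7, 1.0]])
L = pca_loadings(R, 2)
L_rotated = varimax(L)
```

Because Varimax is an orthogonal rotation, each variable's communality (row sum of squared loadings) is unchanged; only the distribution of variance across factors shifts, which is why the rotated percentages (57.213% and 26.588%) still sum to the same cumulative 83.801%.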


The rotated factor loadings revealed that VA, GK, GWH, and VRZ loaded highly on the first factor (D1), with loadings ranging from 0.774 to 0.894. MP loaded highly on the second factor (D2), with a loading of 0.952.


Interpretation of the Results


The results of the PCA support the hypothesis that there is a strong relationship between the JCCES and the RIAS Verbal Scale, as indicated by the high loadings of the VA, GK, GWH, and VRZ variables on the first factor (D1). This factor can be interpreted as a common underlying construct, which may represent general verbal and crystallized intelligence. The high loading of the MP variable on the second factor (D2) suggests that mathematical problem-solving is a distinct construct from the general verbal and crystallized intelligence measured by the other variables.


Limitations


There are some limitations to this study that may have affected the results. First, the sample size of 125 participants is relatively small, which may limit the generalizability of the findings. However, the KMO measure indicated that the sample size was adequate for the PCA. Second, the sample was not equally distributed in terms of gender, with a majority of males (64.71%) and a high percentage of participants with at least one college degree (79.83%). This may have introduced selection bias, potentially limiting the applicability of the findings to more diverse populations. Lastly, the study relied solely on PCA to analyze the relationships between the variables, and future research may benefit from using additional statistical techniques, such as confirmatory factor analysis, structural equation modeling, or multiple regression, to further validate the findings and provide a more comprehensive understanding of the relationships among the variables.


Discussion

Interpretation of the Results in the Context of the Research Hypotheses and Previous Research

The present study aimed to investigate the relationship between the JCCES and the RIAS Verbal Scale, with the results supporting a strong relationship between these two measures. This finding is consistent with previous research on the relationship between crystallized intelligence and verbal abilities (e.g., Cattell, 1971; Horn & Cattell, 1966; Carroll, 1993). The high loadings of VA, GK, GWH, and VRZ on the first factor (D1) suggest a common underlying construct, which may represent general verbal and crystallized intelligence. This is supported by the notion that crystallized intelligence involves the acquisition and application of verbal and cultural knowledge, as well as the ability to reason using previously learned information (Cattell, 1963). Studies have consistently demonstrated that crystallized intelligence is closely related to verbal abilities, reflecting an individual's ability to access and utilize their accumulated knowledge base (Ackerman, 1996; Kaufman & Lichtenberger, 2006).

The high loading of MP on the second factor (D2) indicates that mathematical problem-solving is a distinct construct from general verbal and crystallized intelligence. This finding adds to the existing literature on the differentiation of mathematical abilities from verbal abilities (e.g., Deary et al., 2007). Moreover, the significant positive correlations between all the variables suggest that there may be some shared cognitive processes underlying performance on these tasks, consistent with the concept of a general factor of intelligence (Spearman, 1904).

Implications for Theory, Practice, and Future Research

The results of this study have several important implications. First, they provide empirical support for the relationship between crystallized intelligence and verbal abilities, as well as the distinction between mathematical and verbal abilities (Cattell, 1971; Horn & Cattell, 1966; Carroll, 1993). This can inform the development of educational interventions and assessments that are tailored to the specific needs of learners with different cognitive profiles (Kaufman & Lichtenberger, 2006; McGrew & Flanagan, 1998). For example, educators can use the JCCES and RIAS Verbal Scale to identify students who may benefit from additional support in developing their verbal or mathematical skills (Fletcher et al., 2007).

Second, the findings contribute to the understanding of the structure of cognitive abilities, particularly the relationship between crystallized intelligence and verbal abilities (Ackerman, 1996). This may be useful for refining theoretical models of cognitive abilities and guiding future research in this area (Carroll, 2003; Deary et al., 2007). For instance, researchers could investigate the underlying cognitive processes that contribute to the shared variance between the JCCES and RIAS Verbal Scale, as well as the processes that differentiate mathematical problem-solving from verbal and crystallized intelligence (Geary, 1994; Hattie, 2009). This could involve examining the role of working memory, attention, and executive functions in the development of these cognitive abilities (Baddeley, 2003; Conway et al., 2002).

Limitations and Alternative Explanations

Although the present study has several strengths, there are also some limitations that warrant consideration. As noted earlier, the sample size was relatively small, and the sample was not equally distributed in terms of gender and educational attainment. This may limit the generalizability of the findings and introduce potential selection bias (Maxwell, 2004; Pedhazur & Schmelkin, 1991). Future research should aim to replicate these findings in larger and more diverse samples to increase the robustness and external validity of the results (Cook & Campbell, 1979; Shadish et al., 2002).

Additionally, the study relied solely on PCA to analyze the relationships between the variables. Future research could employ other statistical techniques, such as confirmatory factor analysis (Jöreskog, 1969; Bollen, 1989), structural equation modeling (Kline, 2005; Schumacker & Lomax, 2004), or multiple regression (Cohen et al., 2003; Tabachnick & Fidell, 2007), to further validate the findings and provide a more comprehensive understanding of the relationships among the variables. These alternative statistical approaches could help to address potential methodological issues, such as measurement error and the influence of confounding variables, and strengthen the evidence base for the observed relationships between crystallized intelligence and verbal abilities (Bryant & Yarnold, 1995; Little et al., 1999).

Directions for Future Research

Based on the findings and limitations of the present study, several directions for future research can be identified. First, researchers could investigate the underlying cognitive processes that contribute to the shared variance between the JCCES and RIAS Verbal Scale, as well as the processes that differentiate mathematical problem-solving from verbal and crystallized intelligence (Neisser et al., 1996; Stanovich & West, 2000). This could involve examining the neural substrates of these cognitive abilities (Jung & Haier, 2007), as well as the role of environmental and genetic factors in their development (Plomin & Spinath, 2004).

Second, future research could examine the predictive validity of the JCCES and RIAS Verbal Scale for various educational and occupational outcomes, such as academic achievement, job performance, and job satisfaction (Deary, 2001; Kuncel et al., 2004). This would help to establish the practical utility of these measures in real-world settings and inform the development of evidence-based interventions and policies aimed at fostering individual success and well-being (Gottfredson, 1997).

Third, researchers could explore the potential moderating role of individual differences, such as age, gender, and socioeconomic status, on the relationship between the JCCES and RIAS Verbal Scale (Deary et al., 2005; Lubinski & Benbow, 2006). This would help to identify specific subgroups of the population for whom these measures may be particularly informative or relevant and inform the development of targeted interventions and supports.

Finally, future research could investigate the longitudinal stability of the relationships between the JCCES and RIAS Verbal Scale, as well as the potential causal mechanisms underlying these relationships (McArdle et al., 2002). Longitudinal designs would allow researchers to examine the development and change of cognitive abilities over time (Baltes et al., 1980) and provide insights into the factors that contribute to the observed patterns of covariation among the variables (Salthouse, 2004).

Conclusion

In conclusion, the present study examined the relationship between the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the Reynolds Intellectual Assessment Scales (RIAS) Verbal Scale using principal component analysis (PCA). The results supported the hypothesized strong relationship between the two measures: the first factor represented a common underlying construct of general verbal and crystallized intelligence, while the second factor represented mathematical problem-solving as a distinct construct. These findings contribute to the understanding of the structure of cognitive abilities and have implications for theory, practice, and future research. Specifically, the study provides empirical support for the close relationship between crystallized intelligence and verbal abilities, and for the differentiation of mathematical abilities from verbal abilities. The results can inform the development of educational interventions and assessments tailored to the specific needs of learners with different cognitive profiles. The study's limitations, including a relatively small sample size and an unequal distribution of gender and educational attainment, highlight the need to replicate the findings in larger and more diverse samples. Future research could also investigate the underlying cognitive processes, predictive validity, individual differences, and longitudinal stability of the relationships among these variables.

References

Ackerman, P. L. (1996). A theory of adult intellectual development: Process, personality, interests, and knowledge. Intelligence, 22(2), 227–257. https://doi.org/10.1016/S0160-2896(96)90016-1

Baltes, P. B., Reese, H. W., & Lipsitt, L. P. (Eds.). (1980). Life-span developmental psychology: Introduction to research methods. New York, NY: Academic Press. https://doi.org/10.4324/9781315799704

Baddeley, A. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4(10), 829–839. https://doi.org/10.1038/nrn1201

Bartlett, M. S. (1954). A note on the multiplying factors for various chi square approximations. Journal of the Royal Statistical Society: Series B, 16(2), 296–298.

Bollen, K. A. (1989). Structural equations with latent variables. New York, NY: John Wiley & Sons.

Bryant, F. B., & Yarnold, P. R. (1995). Principal-components analysis and exploratory and confirmatory factor analysis. In L. G. Grimm & P. R. Yarnold (Eds.), Reading and understanding multivariate statistics (pp. 99-136). Washington, DC: American Psychological Association.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally.

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York, NY: Cambridge University Press. https://doi.org/10.1017/CBO9780511571312

Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54(1), 1–22. https://doi.org/10.1037/h0046743

Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston, MA: Houghton Mifflin.

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences. Mahwah, NJ: Lawrence Erlbaum Associates. https://doi.org/10.4324/9780203774441

Conway, A. R. A., Cowan, N., Bunting, M. F., Therriault, D. J., & Minkoff, S. R. B. (2002). A latent variable analysis of working memory capacity, short-term memory capacity, processing speed, and general fluid intelligence. Intelligence, 30(2), 163-183. https://doi.org/10.1016/S0160-2896(01)00096-4

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston, MA: Houghton Mifflin.

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage Publications.

Deary, I. J. (2001). Intelligence: A very short introduction. Oxford, UK: Oxford University Press.

Deary, I. J., Strand, S., Smith, P., & Fernandes, C. (2007). Intelligence and educational achievement. Intelligence, 35(1), 13-21. https://doi.org/10.1016/j.intell.2006.02.001

Deary, I. J., Taylor, M. D., Hart, C. L., Wilson, V., Smith, G. D., Blane, D., & Starr, J. M. (2005). Intergenerational social mobility and mid-life status attainment: Influences of childhood intelligence, childhood social factors, and education. Intelligence, 33(5), 455–472. https://doi.org/10.1016/j.intell.2005.06.003

Fletcher, J. M., Lyon, G. R., Fuchs, L. S., & Barnes, M. A. (2007). Learning disabilities: From identification to intervention. New York, NY: Guilford Press.

Geary, D. C. (1994). Children's mathematical development: Research and practical applications. Washington, DC: American Psychological Association. https://doi.org/10.1037/10163-000

Gottfredson, L. S. (1997). Why g matters: The complexity of everyday life. Intelligence, 24(1), 79–132. https://doi.org/10.1016/S0160-2896(97)90014-3

Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses related to achievement. London, UK: Routledge.

Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology, 57(5), 253-270. https://doi.org/10.1037/h0023816

Hutcheson, G. D., & Sofroniou, N. (1999). The Multivariate Social Scientist: Introductory Statistics Using Generalized Linear Models. Thousand Oaks, CA: Sage Publications. https://doi.org/10.4135/9780857028075

Jolliffe, I. T. (1986). Principal component analysis and factor analysis. In I. T. Jolliffe (Ed.), Principal component analysis (pp. 115-128). New York, NY: Springer-Verlag.

Jöreskog, K. G. (1969). A general approach to confirmatory maximum likelihood factor analysis. Psychometrika, 34(2, Pt. 1), 183–202. https://doi.org/10.1007/BF02289343

Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences, 30(2), 135–187. https://doi.org/10.1017/S0140525X07001185

Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika, 39(1), 31–36. https://doi.org/10.1007/BF02291575

Kaufman, A. S., & Lichtenberger, E. O. (2006). Assessing adolescent and adult intelligence (3rd ed.). Hoboken, NJ: John Wiley & Sons Inc.

Kerlinger, F. N. (2000). Foundations of behavioral research. San Diego, CA: Harcourt College Publishers.

Kline, R. B. (2005). Principles and practice of structural equation modeling. New York, NY: Guilford Press.

Kuncel, N. R., Hezlett, S. A., & Ones, D. S. (2004). Academic performance, career potential, creativity, and job performance: Can one construct predict them all? Journal of Personality and Social Psychology, 86(1), 148–161. https://doi.org/10.1037/0022-3514.86.1.148

Little, T. D., Lindenberger, U., & Nesselroade, J. R. (1999). On selecting indicators for multivariate measurement and modeling with latent variables: When "good" indicators are bad and "bad" indicators are good. Psychological Methods, 4(2), 192–211. https://doi.org/10.1037/1082-989X.4.2.192

Lubinski, D., & Benbow, C. P. (2006). Study of mathematically precocious youth after 35 years: Uncovering antecedents for the development of math-science expertise. Perspectives on Psychological Science, 1(4), 316–345. https://doi.org/10.1111/j.1745-6916.2006.00019.x

Maxwell, S. E. (2004). The Persistence of Underpowered Studies in Psychological Research: Causes, Consequences, and Remedies. Psychological Methods, 9(2), 147–163. https://doi.org/10.1037/1082-989X.9.2.147

McArdle, J. J., Ferrer-Caja, E., Hamagami, F., & Woodcock, R. W. (2002). Comparative longitudinal structural analyses of the growth and decline of multiple intellectual abilities over the life span. Developmental Psychology, 38(1), 115–142.

McGrew, K. S., & Flanagan, D. P. (1998). The intelligence test desk reference (ITDR): Gf-Gc cross-battery assessment. Boston, MA: Allyn & Bacon.

Neisser, U., Boodoo, G., Bouchard, T. J., Jr., Boykin, A. W., Brody, N., Ceci, S. J., Halpern, D. F., Loehlin, J. C., Perloff, R., Sternberg, R. J., & Urbina, S. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51(2), 77–101. https://doi.org/10.1037/0003-066X.51.2.77

Pedhazur, E. J., & Schmelkin, L. P. (1991). Measurement, design, and analysis: An integrated approach. Hillsdale, NJ: Lawrence Erlbaum Associates.

Plomin, R., & Spinath, F. M. (2004). Intelligence: Genetics, genes, and genomics. Journal of Personality and Social Psychology, 86(1), 112–129. https://doi.org/10.1037/0022-3514.86.1.112

Reynolds, C. R., & Kamphaus, R. W. (2003). Reynolds Intellectual Assessment Scales (RIAS) and the Reynolds Intellectual Screening Test (RIST), Professional Manual. Lutz, FL: Psychological Assessment Resources.

Salthouse, T. A. (2004). What and When of Cognitive Aging. Current Directions in Psychological Science, 13(4), 140–144. https://doi.org/10.1111/j.0963-7214.2004.00293.x

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Schumacker, R. E., & Lomax, R. G. (2004). A beginner's guide to structural equation modeling. Mahwah, NJ: Lawrence Erlbaum Associates. https://doi.org/10.4324/9781410610904

Spearman, C. (1904). "General intelligence," objectively determined and measured. The American Journal of Psychology, 15(2), 201–292. https://doi.org/10.2307/1412107

Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences, 23(5), 645–665. https://doi.org/10.1017/S0140525X00003435

Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston, MA: Pearson Education.