
Friday, January 16, 2015

Exploring the Underlying Dimensions of Cognitive Abilities: A Multidimensional Scaling Analysis of JCCES and GAMA Subtests

Abstract

This study aimed to investigate the relationships between tasks of the Jouve Cerebrals Crystallized Educational Scale (JCCES) and General Ability Measure for Adults (GAMA) using multidimensional scaling (MDS) analysis. The JCCES measures Verbal Analogies, Mathematical Problems, and General Knowledge, while the GAMA assesses nonverbal cognitive abilities through Matching, Analogies, Sequences, and Construction tasks. A total of 63 participants completed both assessments. MDS analysis revealed a 2-dimensional solution, illustrating a diagonal separation between nonverbal and verbal abilities, with Mathematical Problems slightly closer to the verbal side. Seven groups were identified, corresponding to distinct cognitive processes. The findings suggest that JCCES and GAMA tasks are not independent and share common underlying dimensions. This study contributes to a more nuanced understanding of cognitive abilities, with potential implications for educational, clinical, and research settings. Future research should address the study's limitations, including the small sample size and potential methodological constraints.

Keywords: cognitive abilities, JCCES, GAMA, multidimensional scaling, verbal abilities, nonverbal abilities, fluid intelligence, crystallized intelligence

Introduction

The study of cognitive abilities has been an area of significant interest in the field of psychometrics, which aims to develop and refine methods for assessing individual differences in mental capabilities (Embretson & Reise, 2000). Among the diverse cognitive abilities, crystallized and fluid intelligence have been particularly influential constructs in the understanding of human cognition (Cattell, 1963). Crystallized intelligence refers to the acquired knowledge and skills, while fluid intelligence reflects the capacity for abstract reasoning and problem-solving, independent of prior knowledge or experience (Cattell, 1963; Horn & Cattell, 1966). Various instruments have been developed to assess these cognitive abilities, including the Jouve Cerebrals Crystallized Educational Scale (JCCES; Jouve, 2010a) and the General Ability Measure for Adults (GAMA; Naglieri & Bardos, 1997).

Although JCCES and GAMA are used as independent measures of crystallized and nonverbal cognitive abilities, respectively, the relationships between the tasks within these instruments remain less explored. Previous research has identified separate factors for JCCES and GAMA subtests (Jouve, 2010b), but a more detailed investigation into the underlying cognitive processes is warranted. Multidimensional scaling (MDS) is a statistical technique that can provide insight into the relationships between tasks by representing them as points in a multidimensional space (Cox & Cox, 2001; Borg & Groenen, 2005). The present study aims to apply MDS to analyze the relationships between the tasks of JCCES and GAMA, in order to identify common underlying dimensions and provide a more nuanced understanding of the cognitive abilities assessed by these instruments.

The literature on cognitive abilities suggests that tasks within JCCES and GAMA may not be entirely independent and could share some common underlying dimensions (Carroll, 1993; Spearman, 1927). For instance, the verbal analogies (VA) and general knowledge (GK) tasks in JCCES tap into language development, a crucial aspect of crystallized intelligence (Horn & Cattell, 1966). Similarly, the matching (MAT), analogies (ANA), sequences (SEQ), and construction (CON) tasks in GAMA are related to fluid intelligence, involving abstract reasoning and problem-solving skills (Naglieri & Bardos, 1997). However, the specific relationships between these tasks and their underlying cognitive processes remain to be further elucidated.

The present study seeks to address this gap in the literature by employing MDS to investigate the relationships between JCCES and GAMA tasks, with the aim of identifying common underlying dimensions. In line with previous research (Jouve, 2010b), we hypothesize that the MDS analysis will reveal a clear distinction between verbal and nonverbal abilities. Furthermore, we expect that the analysis will provide a more detailed classification of the tasks, reflecting the underlying cognitive processes involved in each task. By providing a comprehensive understanding of the relationships between the tasks within JCCES and GAMA, this study will contribute to the psychometric literature and inform the development of more targeted interventions and assessments in educational, clinical, and research settings.

Method

Research Design

The current study employed a correlational research design to investigate the relationships between the tasks from the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the General Ability Measure for Adults (GAMA). This design was chosen because it allowed the researchers to examine the associations between the variables of interest without manipulating any variables or assigning participants to experimental conditions (Creswell, 2014).

Participants

A total of 63 participants were recruited for the study. Demographic information regarding age, gender, and ethnicity was collected but not used in this study. The participants were selected based on their willingness to participate and their ability to complete the JCCES and GAMA assessments. No exclusion criteria were set.

Materials

The JCCES is a measure of crystallized cognitive abilities (Jouve, 2010a), which reflect an individual's acquired knowledge and skills (Cattell, 1971). It consists of three subtests: Verbal Analogies (VA), Mathematical Problems (MP), and General Knowledge (GK).

The GAMA is a standardized measure of nonverbal and figurative general cognitive abilities (Naglieri & Bardos, 1997). It consists of four subtests: Matching (MAT), Analogies (ANA), Sequences (SEQ), and Construction (CON).

Procedures

Data collection was conducted in a quiet and well-lit testing environment. Participants first completed the JCCES, followed by the GAMA. Standardized instructions were provided to ensure that participants understood the requirements of each subtest. The JCCES and GAMA were administered according to their respective guidelines. 

Statistical Analyses

The data were analyzed using multidimensional scaling (MDS) with XLSTAT. The choice of MDS was informed by its ability to represent the structure of complex datasets by reducing their dimensionality while preserving the relationships between data points (Borg & Groenen, 2005). Kruskal's stress-1 was used to measure the goodness of fit of the MDS solution (Kruskal, 1964). Analyses were conducted for dimensions ranging from 1 to 7, with random initial configurations, 10 repetitions, and stopping conditions of convergence = 0.00001 and a maximum of 500 iterations.
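The logic of this analysis can be sketched in a few lines of NumPy. This is an illustration only: the study used nonmetric MDS in XLSTAT on the actual subtest data, which are not reproduced here, so the sketch substitutes classical (Torgerson) MDS on a synthetic dissimilarity matrix for the seven subtests and then computes Kruskal's stress-1 for the resulting 2-dimensional configuration.

```python
import numpy as np

# Synthetic stand-in for the 63 participants' subtest scores (not the study data).
labels = ["VA", "MP", "GK", "MAT", "ANA", "SEQ", "CON"]
rng = np.random.default_rng(0)
scores = rng.normal(size=(63, len(labels)))
corr = np.corrcoef(scores, rowvar=False)   # subtest intercorrelations
D = 1.0 - corr                             # dissimilarities (zero diagonal)

# Classical MDS: double-center the squared dissimilarities, then take the
# top-2 eigenvectors as the 2-dimensional configuration.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0))

# Kruskal's stress-1: sqrt( sum (d_ij - dhat_ij)^2 / sum d_ij^2 )
fitted = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
stress1 = np.sqrt(((D - fitted) ** 2).sum() / (D ** 2).sum())
print(coords.shape)  # (7, 2): one point per subtest
```

A lower stress-1 value indicates a better fit; the article's reported value of 0.100 for the 2-dimensional solution would be computed from the fitted configuration in exactly this way.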

Results

The data included scores from the Jouve Cerebrals Crystallized Educational Scale (JCCES), which measures Verbal Analogies (VA), Mathematical Problems (MP), and General Knowledge (GK), and the General Ability Measure for Adults (GAMA), a nonverbal assessment comprising Matching (MAT), Analogies (ANA), Sequences (SEQ), and Construction (CON) tasks.

Results of the Statistical Analyses

Based on Kruskal's stress values, the best solution was obtained for a 2-dimensional representation space (stress = 0.100), as higher dimensions did not result in significant improvements. The results for the 2-dimensional space (RSQ = .9418) are illustrated in Figure 1.



Interpretation of Results

The results from the MDS analysis indicate that the 2-dimensional solution provides a nuanced representation of the relationships between the JCCES and GAMA tasks. The findings suggest that the tasks from the two measures are not independent, and there are common underlying dimensions that account for their relationships. The MDS configuration reveals a diagonal separation between nonverbal abilities (MAT, ANA, SEQ, and CON) on one side and verbal abilities (GK and VA) on the other, with MP falling between the two clusters, slightly closer to the verbal side.

Based on their proximities, seven groups can be identified:

  1. Abstract Reasoning and Pattern Recognition: This group, comprising SEQ and CON tasks, reflects abilities related to identifying and extrapolating patterns, as well as spatial visualization and manipulation. Both tasks share the common cognitive processes of abstract reasoning and pattern recognition, making them closely related to fluid intelligence.
  2. Nonverbal Analogical Reasoning: Represented solely by the ANA task, this group reflects the ability to identify relationships and draw analogies between seemingly unrelated objects. The unique focus on analogical reasoning in a figurative context sets this task apart from the other tasks within the fluid intelligence group.
  3. Crystallized Intelligence: Consisting of MP, GK, and VA, this group represents abilities related to accumulated knowledge and experience, as well as the application of learned information in problem-solving situations.
  4. Fluid Intelligence: Comprising ANA, SEQ, and CON, this group represents cognitive processes involving reasoning, problem-solving, and abstract thinking, which are not dependent on prior knowledge or experience.
  5. Language Development: Represented by GK and VA, this group reflects abilities related to language comprehension, vocabulary, and the use of language in various contexts.
  6. Quantitative Reasoning and Knowledge: Represented uniquely by MP, this group reflects abilities related to understanding, interpreting, and applying numerical information and mathematical concepts.
  7. Visual-Spatial Representation: Represented solely by MAT, this group reflects abilities related to visualizing and manipulating spatial information.

Particularities

The MDS analysis suggests that the CON task is more closely related to the fluid intelligence subgroup (ANA, SEQ, and CON) rather than the visual-spatial representation subgroup (MAT). This finding could be attributed to the nature of the CON task, which involves problem-solving and reasoning abilities that extend beyond visual-spatial skills. Although visual-spatial abilities may be necessary for the task, the CON task requires individuals to analyze and identify patterns, think abstractly, and apply their reasoning skills, which are more closely aligned with fluid intelligence processes. As a result, the CON task seems to tap into a broader range of cognitive processes than just visual-spatial representation, making it a better fit for the fluid intelligence subgroup.

In the fluid intelligence group, the MDS analysis reveals an intriguing difference in the proximity between the tasks. SEQ and CON are closely related, while ANA is positioned farther apart. This distinction can be better understood by examining the underlying processes involved in each task.

The SEQ task requires individuals to identify patterns and complete a sequence by deducing the logical progression. This involves abstract reasoning, pattern recognition, and the ability to extrapolate from given information. Similarly, the CON task involves assembling objects or shapes to create a specific configuration. This also requires abstract reasoning and pattern recognition, as well as spatial visualization and manipulation skills. Due to these shared cognitive processes, SEQ and CON tasks form a closely related subgroup within fluid intelligence.

On the other hand, the ANA task involves identifying relationships between pairs of objects or concepts and applying that understanding to a new set of objects or concepts. Although this task also requires abstract reasoning and problem-solving skills, it differs from SEQ and CON tasks in the sense that it demands a higher level of analogical reasoning, which involves identifying similarities and relationships between seemingly unrelated entities. This unique cognitive demand in the ANA task sets it apart from the other tasks in the fluid intelligence group.

Limitations

There are some limitations in the current study that may have affected the results. First, the sample size of 63 participants may not be sufficient to provide robust and generalizable results; a larger sample would improve the reliability of the MDS analysis and potentially lead to more conclusive findings. Second, the absence of inclusion and exclusion criteria in the recruitment process may have affected the representativeness of the sample and the generalizability of the results. Finally, there may be methodological limitations associated with the use of MDS, such as the assumption that the data are interval-scaled and that the relationships between tasks can be represented in a Euclidean space. These assumptions may not be entirely accurate for the current dataset, potentially affecting the interpretation of the results.

Discussion

Interpretation of Results and Comparison with Previous Research

The current study aimed to investigate the relationships between the tasks of the JCCES and GAMA, with the intent of identifying underlying dimensions that account for these relationships. In line with our hypotheses, we found a clear distinction between verbal and nonverbal abilities. The MDS analysis provided a nuanced representation of the relationships between the tasks, revealing a diagonal separation between nonverbal abilities (MAT, ANA, SEQ, and CON) and verbal abilities (GK and VA), with MP being closer to the verbal side. This finding is consistent with previous research (Jouve, 2010b), which identified separate factors for JCCES and GAMA subtests, emphasizing the distinctiveness of the two instruments in assessing crystallized and nonverbal cognitive abilities.

Our analysis identified seven groups, providing a more detailed understanding of the cognitive processes involved in each task. This classification aligns with the theoretical distinction between crystallized and fluid intelligence (Cattell, 1963), with the crystallized intelligence group consisting of MP, GK, and VA, and the fluid intelligence group comprising ANA, SEQ, and CON. However, our analysis also revealed unique relationships between tasks that warrant further discussion.

Detailed Analysis of the Groups

The seven groups identified in our analysis not only align with the distinction between crystallized and fluid intelligence (Cattell, 1963) but also offer a more granular understanding of the cognitive processes involved in each task. The following sections provide a more in-depth discussion of these groups, linking them with relevant literature.

Abstract Reasoning and Pattern Recognition

This group, consisting of SEQ and CON tasks, reflects abilities related to identifying and extrapolating patterns and spatial visualization and manipulation. Both tasks share the common cognitive processes of abstract reasoning and pattern recognition, making them closely related to fluid intelligence (Carroll, 1993). Research has demonstrated that these abilities play a significant role in various cognitive domains, such as problem-solving and decision-making (Sternberg, 1985).

Nonverbal Analogical Reasoning

The ANA task represents a unique group that reflects the ability to identify relationships and draw analogies between seemingly unrelated figurative objects. This ability is closely related to fluid intelligence (Spearman, 1927) and has been associated with higher-order cognitive processes, such as problem-solving, creativity, and critical thinking (Gentner, 1983; Holyoak & Thagard, 1995).

Crystallized Intelligence

The group consisting of MP, GK, and VA represents abilities related to accumulated knowledge and experience, as well as the application of learned information in problem-solving situations (Cattell, 1963). Crystallized intelligence is considered to be a product of both genetic factors and environmental influences, such as education and cultural exposure (Horn & Cattell, 1966).

Fluid Intelligence

Comprising ANA, SEQ, and CON, this group represents cognitive processes involving reasoning, problem-solving, and abstract thinking, which are not dependent on prior knowledge or experience (Cattell, 1963). Fluid intelligence is thought to be primarily determined by genetic factors and is believed to decline with age (Horn & Cattell, 1967).

Language Development

Represented by GK and VA, this group reflects abilities related to language comprehension, vocabulary, and the use of language in various contexts. Language development has been linked to both crystallized intelligence (Horn & Cattell, 1966) and general cognitive ability (Carroll, 1993).

Quantitative Reasoning and Knowledge

The MP task represents a unique group that reflects abilities related to understanding, interpreting, and applying numerical information and mathematical concepts. Quantitative reasoning and knowledge have been associated with both crystallized and fluid intelligence (Horn & Cattell, 1966; McGrew, 2009) and are considered essential components of general cognitive ability (Carroll, 1993).

Visual-Spatial Representation

The MAT task represents a distinct group that reflects abilities related to visualizing and manipulating spatial information. Visual-spatial representation is closely linked to fluid intelligence (Carroll, 1993) and has been shown to play a crucial role in various cognitive domains, such as navigation, mental rotation, and object recognition (Kosslyn, 1994).

By linking the identified groups with relevant literature, our analysis contributes to a more nuanced understanding of the cognitive processes underlying the tasks of the JCCES and GAMA. This detailed classification can inform the development of more targeted interventions and assessments in educational, clinical, and research settings.

Unexpected and Significant Findings

One intriguing finding was the positioning of the CON task within the fluid intelligence group rather than the visual-spatial representation subgroup. The CON task appeared to tap into a broader range of cognitive processes, such as abstract reasoning and pattern recognition, which are more closely aligned with fluid intelligence processes. Another interesting observation was the distinction between the ANA task and the other tasks within the fluid intelligence group. The unique focus on analogical reasoning sets the ANA task apart from the other tasks, emphasizing its distinct cognitive demands.

Implications for Theory, Practice, and Future Research

The present study adds to the growing body of literature on the relationships between cognitive abilities and contributes to our understanding of the cognitive processes involved in various tasks. The findings suggest that employing JCCES and GAMA as complementary tools can provide a more comprehensive assessment of an individual's cognitive profile. This approach has practical implications for educational, clinical, and research settings, where a thorough understanding of cognitive abilities is crucial for making informed decisions.

Future research should address the limitations of the current study by employing larger and more diverse samples, as well as investigating the potential utility of combining JCCES and GAMA to predict cognitive and academic outcomes. Additionally, exploring the relationships between these cognitive abilities and other relevant factors, such as socioeconomic background or educational attainment, would provide valuable insights into the broader context of cognitive functioning.

Limitations

There are several limitations in the current study that may have affected the results or the interpretation of the findings. First, the sample size of 63 participants may limit the generalizability and robustness of the results. Second, the lack of inclusion criteria in the recruitment process could affect the representativeness of the sample. Third, there may be methodological limitations associated with the use of MDS, such as the assumptions regarding interval-scaled data and Euclidean space representation.

Conclusion

In summary, this study provides a nuanced understanding of the relationships between the JCCES and GAMA tasks, revealing a clear distinction between verbal and nonverbal abilities, and further dividing these abilities into seven groups. These findings contribute to the existing literature on cognitive abilities and suggest that using JCCES and GAMA as complementary tools can offer a comprehensive assessment of an individual's cognitive profile. The implications for theory and practice include the potential to develop targeted interventions and assessments in educational, clinical, and research settings.

However, the study is not without limitations, such as a small sample size, lack of inclusion criteria in the recruitment process, and methodological constraints associated with the use of MDS. Future research should address these limitations and explore the potential utility of combining JCCES and GAMA to predict cognitive and academic outcomes, as well as investigate the relationships between cognitive abilities and other relevant factors.

Overall, this study highlights the importance of understanding the complex relationships between various cognitive abilities and offers a solid foundation for future research to build upon in the pursuit of developing more effective assessment tools and interventions.

References

Borg, I., & Groenen, P. J. F. (2005). Modern multidimensional scaling: Theory and applications (2nd ed.). New York: Springer.

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press. https://doi.org/10.1017/CBO9780511571312

Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54(1), 1–22. https://doi.org/10.1037/h0046743

Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston, MA: Houghton Mifflin.

Cox, T. F., & Cox, M. A. A. (2001). Multidimensional scaling (2nd ed.). New York: Chapman & Hall/CRC. https://doi.org/10.1201/9780367801700

Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). Thousand Oaks, CA: Sage.

Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates. https://doi.org/10.4324/9781410605269

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155–170. https://doi.org/10.1016/S0364-0213(83)80009-3

Holyoak, K. J., & Thagard, P. (1995). Mental leaps: Analogy in creative thought. Cambridge, MA: MIT Press.

Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology, 57(5), 253–270. https://doi.org/10.1037/h0023816

Horn, J. L., & Cattell, R. B. (1967). Age differences in fluid and crystallized intelligence. Acta Psychologica, 26, 107–129. https://doi.org/10.1016/0001-6918(67)90011-X

Jouve, X. (2010a). Jouve Cerebrals Crystallized Educational Scale. Retrieved from http://www.cogn-iq.org/tests/jouve-cerebrals-crystallized-educational-scale-jcces

Jouve, X. (2010b). Differentiating Cognitive Abilities: A Factor Analysis of JCCES and GAMA Subtests. Retrieved from https://cogniqblog.blogspot.com/2014/10/differentiating-cognitive-abilities.html

Kosslyn, S. M. (1994). Image and brain: The resolution of the imagery debate. Cambridge, MA: MIT Press.

Kruskal, J. B. (1964). Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1), 1–27. https://doi.org/10.1007/BF02289565

McGrew, K. S. (2009). CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence, 37(1), 1–10. https://doi.org/10.1016/j.intell.2008.08.004

Naglieri, J. A., & Bardos, A. N. (1997). General Ability Measure for Adults (GAMA). Minneapolis, MN: National Computer Systems.

Spearman, C. (1927). The abilities of man: Their nature and measurement. New York: Macmillan.

Sternberg, R. J. (1985). Implicit theories of intelligence, creativity, and wisdom. Journal of Personality and Social Psychology, 49(3), 607–627. https://doi.org/10.1037/0022-3514.49.3.607

Saturday, October 11, 2014

Exploring the Relationship between JCCES and ACT Assessments: A Factor Analysis Approach

Abstract

This study aimed to examine the relationship between the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the American College Test (ACT) by conducting a factor analysis. The dataset consisted of 60 observations, with Pearson's correlation revealing significant associations between all variables. The factor analysis identified three factors, with the first factor accounting for 53.697% of the total variance and demonstrating the highest loadings for all variables. The results suggest that the JCCES and ACT assessments may be measuring a common cognitive construct, which could be interpreted as general cognitive ability or intelligence. However, several limitations should be considered, including the sample size, the scope of the analysis, and the use of factor analysis as the sole statistical method. Future research should employ larger samples, consider additional assessments, and explore alternative statistical techniques to validate these findings.

Keywords: Jouve Cerebrals Crystallized Educational Scale, American College Test, factor analysis, general cognitive ability, intelligence, college admission assessments.

Introduction

Psychometrics has long been a central topic of interest for researchers aiming to understand the underlying structure of cognitive abilities and the validity of various assessment tools. One of the most widely recognized theories in this field is the theory of general intelligence, or g-factor, which posits that an individual's cognitive abilities can be captured by a single underlying factor (Spearman, 1904). Over the years, numerous instruments have been developed to measure this general cognitive ability, with intelligence tests and college admission assessments being among the most prevalent. However, the extent to which these instruments measure the same cognitive construct remains a subject of debate.

The present study aims to investigate the factor structure of two assessments, the Jouve Cerebrals Crystallized Educational Scale (JCCES; Jouve, 2010) and the American College Test (ACT), to test the hypothesis that a single underlying factor accounts for the majority of variance in these measures. This hypothesis is grounded in the g-factor theory and is further supported by previous research demonstrating the strong correlation between intelligence test scores and academic performance (Deary et al., 2007; Koenig et al., 2008).

In recent years, the application of factor analysis has become a popular method for exploring the structure of cognitive assessments and identifying the dimensions that contribute to an individual's performance on these tests (Carroll, 1993; Jensen, 1998). Factor analysis allows researchers to quantify the extent to which various test items or subtests share a common underlying construct, thus providing insights into the validity and reliability of the instruments in question (Fabrigar et al., 1999).

The selection of the JCCES and ACT assessments for this study is based on their use in academic and professional settings and their potential relevance to general cognitive ability. The JCCES is a psychometric test that measures crystallized intelligence, which is thought to reflect accumulated knowledge and skills acquired through education and experience (Cattell, 1971). The ACT, on the other hand, is a college admission assessment that evaluates students' academic readiness in various subject areas, such as English, mathematics, reading, and science (ACT, 2014). By examining the factor structure of these two assessments, the present study aims to shed light on the relationship between intelligence and college admission measures and the extent to which they tap into a common cognitive construct.

In sum, this study seeks to contribute to the ongoing discussion regarding the measurement of cognitive abilities and the relevance of psychometric theories in understanding the structure of intelligence and college admission assessments. By employing factor analysis and focusing on the JCCES and ACT, the study aims to provide a clearer understanding of the relationship between these measures and the g-factor theory. Ultimately, the results of this investigation may help inform the development and validation of future cognitive assessment tools and enhance our understanding of the complex nature of human intelligence.

Method

Research Design

The present study employed a correlational research design to examine the relationship between intelligence and college admission assessments. This design was chosen to analyze the associations between variables without manipulating any independent variables or assigning participants to experimental conditions (Creswell, 2014). The correlational design allows for the exploration of naturally occurring relationships among variables, which is particularly useful in understanding the structure and relationships of cognitive measures.

Participants

A total of 60 participants were recruited for this study, with their demographic characteristics collected, but not reported in this study. Participants were high school seniors or college students who had completed both the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the American College Test (ACT). There were no exclusion criteria for this study.

Materials

The study utilized two separate assessments to collect data: the JCCES and the ACT.

Jouve Cerebrals Crystallized Educational Scale (JCCES)

The JCCES is a measure of crystallized intelligence and assesses cognitive abilities through three subtests (Jouve, 2010). The subtests include Verbal Analogies (VA), Mathematical Problems (MP), and General Knowledge (GK). The JCCES was chosen for its relevance in evaluating cognitive abilities.

American College Test (ACT)

The ACT is a standardized college admission assessment measuring cognitive domains relevant to college readiness (ACT, 2014). The test is composed of four primary sections: English, Mathematics, Reading, and Science Reasoning. The ACT was selected for its widespread use in educational settings and its ability to evaluate cognitive abilities pertinent to academic success.

Procedure

Data collection involved obtaining participants' scores on both the JCCES and ACT assessments. Participants completed the JCCES online and were then instructed to provide their most recent ACT scores. The scores were entered into a secure database for analysis. Prior to data collection, informed consent was obtained from all participants, and they were assured of the confidentiality and anonymity of their responses.

Statistical Methods

To analyze the data, a factor analysis was conducted to test the research hypotheses (Tabachnick & Fidell, 2007). Pearson's correlation was used to measure the associations between variables, with principal factor analysis conducted for data extraction. Varimax rotation was employed to simplify the factor structure, with the number of factors determined automatically and initial communalities calculated using squared multiple correlations. The study employed a convergence criterion of 0.0001 and a maximum of 50 iterations.

The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Cronbach's alpha were calculated to assess the sample size adequacy and internal consistency, respectively. Factor loadings were computed for each variable, and the proportion of variance explained by the extracted factors was determined.
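The descriptive statistics in this pipeline can be sketched in NumPy. This is a hedged illustration, not the study's actual computation: the analyses were run in dedicated statistical software, and the data below are synthetic stand-ins for the seven variables (VA, MP, GK, ENG, MATH, READ, SCIE) across 60 observations.

```python
import numpy as np

# Synthetic data: a shared "general" factor plus noise, so the seven
# variables are positively intercorrelated (placeholder for real scores).
rng = np.random.default_rng(1)
g = rng.normal(size=(60, 1))
X = g + 0.8 * rng.normal(size=(60, 7))

R = np.corrcoef(X, rowvar=False)              # Pearson correlation matrix

# Cronbach's alpha from item variances and total-score variance:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
k = X.shape[1]
alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                       / X.sum(axis=1).var(ddof=1))

# Eigenvalues of R, each divided by k, give the share of total variance
# attributable to each principal factor.
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
pct_var = 100 * eigvals / k
print(round(float(alpha), 2), np.round(pct_var[:3], 1))
```

With a strong shared factor, the first eigenvalue dominates and alpha is high, mirroring the pattern reported in the Results below.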

Results

Factor analysis was conducted as described in the Statistical Methods section; the results are reported below.

Results of the Statistical Analyses

The Pearson correlation matrix revealed significant correlations (α = 0.05) between all variables. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy indicated a KMO value of 0.809, suggesting that the sample size was adequate for conducting a factor analysis. Cronbach's alpha was calculated at 0.887, indicating satisfactory internal consistency for the variables.

The factor analysis extracted three factors, which together accounted for 63.526% of the total variance. The first factor (F1) had an eigenvalue of 3.759, accounting for 53.697% of the variance. The second factor (F2) had an eigenvalue of 0.437, accounting for 6.242% of the variance, and the third factor (F3) had an eigenvalue of 0.251, accounting for 3.587% of the variance. Only F1 had an eigenvalue greater than one.

Factor loadings were calculated for each variable, with the first factor (F1) showing the highest loadings for all variables. Specifically, F1 had factor loadings of 0.631 for Verbal Analogies (VA), 0.734 for Mathematical Problems (MP), 0.651 for General Knowledge (GK), 0.802 for English (ENG), 0.881 for Mathematics (MATH), 0.744 for Reading (READ), and 0.905 for Science (SCIE). Final communalities ranged from 0.361 for VA to 0.742 for SCIE, indicating the proportion of variance in each variable explained by the extracted factors.
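As a sanity check, each factor's share of variance is simply its eigenvalue divided by the number of observed variables (seven here). A few lines of Python reproduce the reported percentages to rounding error:

```python
# Eigenvalues reported for the three extracted factors
eigenvalues = [3.759, 0.437, 0.251]
n_vars = 7  # VA, MP, GK, ENG, MATH, READ, SCIE

# Percent of total variance per factor: eigenvalue / number of variables
shares = [100 * ev / n_vars for ev in eigenvalues]
print([round(s, 3) for s in shares])  # close to the reported 53.697%, 6.242%, 3.587%
print(round(sum(shares), 3))          # close to the reported 63.526% total
```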

Interpretation of the Results

The results of the factor analysis support the research hypothesis that a single underlying factor (F1) accounts for the majority of the variance in the intelligence and college admission assessments. Specifically, F1 explained 53.697% of the total variance, with all variables loading highly on this factor. This finding suggests that the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the American College Test (ACT) are measuring a common cognitive construct, which may be interpreted as general cognitive ability or intelligence.

Limitations

The main limitations include the modest sample of 60 observations, the restriction of the analysis to the JCCES and ACT, and the reliance on factor analysis as the sole statistical method; each is discussed in detail under Limitations and Alternative Explanations below.

Discussion

Interpretation of the Results and Previous Research

The findings of the present study support the research hypothesis that a single underlying factor (F1) accounts for the majority of the variance in the intelligence and college admission assessments. Specifically, F1 explained 53.697% of the total variance, with all variables loading highly on this factor. This result is consistent with previous research, which has also demonstrated a strong relationship between general cognitive ability, or intelligence, and performance on college admission assessments (Deary et al., 2007; Koenig et al., 2008). The high factor loadings for all variables on F1 suggest that the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the American College Test (ACT) are measuring a common cognitive construct, which may be interpreted as general cognitive ability or intelligence.

Implications for Theory, Practice, and Future Research

The results of this study have important implications for both theory and practice. From a theoretical perspective, the findings support the idea that general cognitive ability is a key underlying factor that contributes to performance on both intelligence and college admission assessments. This suggests that efforts to improve general cognitive ability may be effective in enhancing performance on a wide range of cognitive measures, including college admission assessments.

In terms of practice, the results indicate that the JCCES and ACT assessments are likely measuring similar cognitive constructs, which may have implications for college admission processes. For instance, it may be useful for colleges and universities to consider using a single assessment to evaluate both intelligence and college readiness in applicants, potentially streamlining the admission process and reducing the burden on students.

Moreover, these findings highlight the importance of considering general cognitive ability in educational and career planning. Students, educators, and career counselors can use these insights to develop strategies and interventions aimed at improving general cognitive ability, ultimately enhancing academic and career outcomes.

Limitations and Alternative Explanations

The present study has several limitations that should be considered when interpreting the findings. First, the sample size of 60 observations, although adequate for factor analysis based on the KMO measure, may not be large enough to ensure the generalizability of the results. Future studies should employ larger and more diverse samples to validate these findings.

Second, this study only considered the JCCES and ACT assessments, limiting the scope of the analysis. Further research should investigate the factor structure of other intelligence and college admission assessments, such as the Wechsler Adult Intelligence Scale (WAIS) and the Scholastic Assessment Test (SAT), to provide a more comprehensive understanding of the relationship between these measures and general cognitive ability.

Lastly, the use of factor analysis as the sole statistical method may not account for potential non-linear relationships between the variables. Future studies could employ additional statistical techniques, such as structural equation modeling or item response theory, to better capture the complexity of the relationships between these cognitive measures.

Conclusion

In conclusion, this study's results indicate that a single underlying factor (F1) accounts for the majority of the variance in the intelligence and college admission assessments, specifically, the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the American College Test (ACT). This finding suggests that both assessments measure a common cognitive construct, which may be interpreted as general cognitive ability or intelligence. The implications of these findings for theory and practice are significant, as they provide insight into the relationship between intelligence assessments and college admission tests, potentially guiding the development of more effective testing methods in the future.

However, some limitations should be considered. The sample size of 60 observations may not be large enough for generalizability, and the study only analyzed JCCES and ACT assessments. Future research should include larger, more diverse samples and investigate other intelligence and college admission assessments. Additionally, employing other statistical methods, such as structural equation modeling or item response theory, may better capture the complexity of the relationships between these cognitive measures.

Despite these limitations, the study highlights the importance of understanding the underlying factors that contribute to performance on intelligence and college admission assessments and opens avenues for future research to improve the assessment of general cognitive ability.

References

ACT. (2014). About the ACT. Retrieved from https://www.act.org/content/act/en/products-and-services/the-act/about-the-act.html

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press. https://doi.org/10.1017/CBO9780511571312

Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston, MA: Houghton Mifflin.

Creswell, J. W. (2014). Research Design: Qualitative, Quantitative and Mixed Methods Approaches (4th ed.). Thousand Oaks, CA: Sage. 

Deary, I. J., Strand, S., Smith, P., & Fernandes, C. (2007). Intelligence and educational achievement. Intelligence, 35(1), 13-21. https://doi.org/10.1016/j.intell.2006.02.001

Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272–299. https://doi.org/10.1037/1082-989X.4.3.272

Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.

Jouve, X. (2010). Jouve Cerebrals Crystallized Educational Scale. Retrieved from http://www.cogn-iq.org/tests/jouve-cerebrals-crystallized-educational-scale-jcces

Koenig, K. A., Frey, M. C., & Detterman, D. K. (2008). ACT and general cognitive ability. Intelligence, 36(2), 153–160. https://doi.org/10.1016/j.intell.2007.03.005

Spearman, C. (1904). "General intelligence," objectively determined and measured. The American Journal of Psychology, 15(2), 201-292. https://doi.org/10.2307/1412107

Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston, MA: Pearson Education, Inc.

Tuesday, December 28, 2010

Identifying the Underlying Dimensions of the JCCES Mathematical Problems using Alternating Least Squares Scaling

Abstract

This study aimed to investigate the underlying dimensions of the Jouve Cerebrals Crystallized Educational Scale (JCCES) Mathematical Problems (MP) using Alternating Least Squares Scaling (ALSCAL). The dataset consisted of intercorrelations between 38 MP items, with 588 participants. Various dimensional solutions were assessed for the goodness of fit. A 4-dimensional solution provided a reasonable fit (RSQ = 0.815, stress = 0.155), accounting for 81.5% of the variance in the disparities. The 4-dimensional solution is more parsimonious than the marginally better 5-dimensional solution (RSQ = 0.851, stress = 0.127). The study's limitations include the sample size, selection bias, and the exploratory nature of ALSCAL. The results contribute to understanding the structure of mathematical problem-solving abilities and have implications for theory, practice, and future research in cognitive and educational psychology.

Keywords: Jouve Cerebrals Crystallized Educational Scale, Mathematical Problems, Alternating Least Squares Scaling, multidimensional scaling, cognitive abilities

Introduction

Psychometrics is a scientific discipline focused on the development and evaluation of psychological assessments, including the measurement of cognitive abilities such as mathematical problem-solving skills (Embretson & Reise, 2000). The Jouve Cerebrals Crystallized Educational Scale (JCCES) Mathematical Problems (MP) subtest is an instrument used to assess these skills. To improve our understanding of the structure underlying the JCCES Mathematical Problems, the present study investigates the dimensionality of this instrument using Alternating Least Squares Scaling (ALSCAL).

The JCCES MP is grounded in various psychometric theories, particularly item response theory (IRT) and classical test theory (CTT) (De Ayala, 2009; Nunnally & Bernstein, 1994). Additionally, the JCCES MP aligns with cognitive and educational psychology theories, such as the multiple-component model of mathematical problem-solving (Swanson & Beebe-Frankenberger, 2004), which posits that problem-solving requires a combination of distinct cognitive abilities. Other relevant theories include Geary's (1994) cognitive mechanisms and Hecht's (2001) cognitive strategies that underlie mathematical problem-solving.

The selection of ALSCAL as the analytical method for this study is based on its suitability for exploratory research on multidimensional scaling (MDS) (Kruskal & Wish, 1978; Young et al., 1978). ALSCAL has been used in psychometric research to examine the dimensional structure of cognitive assessments (e.g., Gorsuch, 1983; Hambleton & Swaminathan, 1985). The method provides a data-driven approach to deriving dimensional solutions and assessing their goodness of fit, which can inform the development and interpretation of psychological assessments.

The present study's research question is: What is the lowest dimensionality offering a reasonable fit for the structure of items in the JCCES Mathematical Problems, as assessed by ALSCAL? By answering this question, the study aims to contribute to the literature on the dimensional structure of mathematical problem-solving assessments and inform future research and educational practice. In the context of previous research, the study will examine whether the identified dimensions align with established theories, such as Geary's (1994) cognitive mechanisms, Hecht's (2001) cognitive strategies, and the multiple-component model (Swanson & Beebe-Frankenberger, 2004).

Method

Research Design

The present study employed a correlational research design to investigate the structure of items in the Jouve Cerebrals Crystallized Educational Scale (JCCES) Mathematical Problems (MP) using Alternating Least Squares Scaling (ALSCAL; Young et al., 1978). This design was chosen as it allowed for the exploration of potential relationships between the items without manipulating any variables.

Participants

The sample consisted of 588 participants, recruited through convenience sampling from various online media. Demographic characteristics (age, gender, and educational level) were collected through self-report measures. No exclusion criteria were applied.

Materials

The primary material used in this study was the JCCES Mathematical Problems subtest. The JCCES is a measure of crystallized intelligence, with established reliability and validity (Jouve, 2010a; 2010b). The Mathematical Problems subtest consists of 38 items that assess a range of mathematical problem-solving abilities, such as numerical reasoning, analytical thinking, and computational fluency. Participants completed the subtest in a computerized format.

Procedures

Participants were informed that the subtest was untimed. Responses were scored automatically and then checked manually to ensure data reliability. The resulting dataset consisted of intercorrelations between the 38 items of the MP subtest.

Statistical Methods

ALSCAL was used to analyze the intercorrelations between the items and derive solutions for different dimensionalities, ranging from two to five dimensions. The analysis involved iterative optimization procedures to minimize stress values, with convergence criteria set at an improvement of less than 0.001 for stress values across consecutive iterations (Young et al., 1978). Kruskal's Stress Formula 1 was used to compute stress values, while RSQ values represented the proportion of variance in the scaled data (disparities) accounted for by their corresponding distances (Kruskal & Wish, 1978). The goodness of fit for various dimensional solutions was assessed, with the aim of identifying the lowest dimensionality offering a reasonable fit, operationalized as an RSQ value greater than 0.80 and a stress value within the range of 0.10 to 0.20.
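The two fit indices can be computed directly from the disparities and the fitted distances. A minimal numpy sketch, using illustrative values rather than the study's data:

```python
import numpy as np

def stress_1(disparities, distances):
    """Kruskal's Stress Formula 1: sqrt( sum (d_hat - d)^2 / sum d^2 )."""
    d_hat = np.asarray(disparities, float)
    d = np.asarray(distances, float)
    return np.sqrt(((d_hat - d) ** 2).sum() / (d ** 2).sum())

def rsq(disparities, distances):
    """RSQ: squared correlation between disparities and distances
    (proportion of variance in the disparities accounted for)."""
    return np.corrcoef(disparities, distances)[0, 1] ** 2

# Illustrative disparity/fitted-distance pairs for six item pairs
d_hat = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
d     = [1.1, 1.9, 3.2, 3.8, 5.1, 6.0]
print(round(stress_1(d_hat, d), 3), round(rsq(d_hat, d), 3))
```

Lower stress and higher RSQ both indicate a closer match between disparities and distances, which is the trade-off the dimensionality criteria below formalize.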

Results

The primary objective of this study was to investigate the item structure of the Jouve Cerebrals Crystallized Educational Scale (JCCES) Mathematical Problems (MP) using Alternating Least Squares Scaling (ALSCAL). The dataset consisted of intercorrelations between the 38 MP items from 588 participants. The goodness of fit of solutions of increasing dimensionality was assessed, with the aim of identifying the lowest dimensionality offering a reasonable fit, operationalized as an RSQ value greater than 0.80 and a stress value between 0.10 and 0.20.

Statistical Analyses

ALSCAL (Young et al., 1978) was employed to derive solutions for dimensionalities ranging from two to five. The analysis entailed iterative optimization to minimize stress values, with convergence reached when the improvement in stress fell below 0.001 across consecutive iterations. Kruskal's Stress Formula 1 was used to compute stress values, while RSQ values represented the proportion of variance in the scaled data (disparities) accounted for by their corresponding distances.
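The stopping rule can be illustrated with the SSTRESS values reported below for the 4-dimensional solution: the run halts at the first iteration whose improvement falls below 0.001.

```python
# SSTRESS values reported for the 4-dimensional solution's iteration history
sstress = [0.26892, 0.22219, 0.21927, 0.21902]

def converged_at(values, tol=0.001):
    """Return the 1-based iteration at which the improvement first drops below tol."""
    for i in range(1, len(values)):
        if values[i - 1] - values[i] < tol:
            return i + 1
    return None  # never converged within the recorded iterations

print(converged_at(sstress))  # → 4 (improvement 0.00025 < 0.001)
```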

Results of Statistical Analyses

The ALSCAL analysis was conducted to derive solutions for different dimensionalities; the results for each solution are presented below.

5-dimensional solution

The 5-dimensional solution showed the lowest stress value (0.127) among all the solutions, indicating a relatively better fit to the data. The RSQ value, which represents the proportion of variance in the scaled data accounted for by the corresponding distances, was 0.851. This value suggests that 85.1% of the variance in the disparities could be explained by the distances in the 5-dimensional solution. However, the additional dimension compared to the 4-dimensional solution may increase model complexity without providing a substantial improvement in fit.

Iteration history:

  • Iteration 1: SSTRESS = 0.23325
  • Iteration 2: SSTRESS = 0.18514, Improvement = 0.04811
  • Iteration 3: SSTRESS = 0.18247, Improvement = 0.00267
  • Iteration 4: SSTRESS = 0.18223, Improvement = 0.00024

4-dimensional solution

The 4-dimensional solution was identified as the lowest dimensionality offering a reasonable fit based on the predefined criteria (RSQ > 0.80, stress within 0.10 to 0.20). The stress value for this solution was 0.155, while the RSQ value was 0.815, indicating that 81.5% of the variance in the disparities was accounted for by the distances in the 4-dimensional solution.

Iteration history:

  • Iteration 1: SSTRESS = 0.26892
  • Iteration 2: SSTRESS = 0.22219, Improvement = 0.04673
  • Iteration 3: SSTRESS = 0.21927, Improvement = 0.00292
  • Iteration 4: SSTRESS = 0.21902, Improvement = 0.00025

3-dimensional solution

The 3-dimensional solution showed a stress value of 0.210 and an RSQ value of 0.739. This solution did not meet the predefined criteria for a reasonable fit, as the RSQ value was below the threshold of 0.80.

Iteration history:

  • Iteration 1: SSTRESS = 0.32326
  • Iteration 2: SSTRESS = 0.28030, Improvement = 0.04295
  • Iteration 3: SSTRESS = 0.27771, Improvement = 0.00259
  • Iteration 4: SSTRESS = 0.27727, Improvement = 0.00045

2-dimensional solution

The 2-dimensional solution had the highest stress value (0.305) among all the solutions, indicating a relatively poor fit to the data. The RSQ value was 0.647, suggesting that only 64.7% of the variance in the disparities was accounted for by the distances in the 2-dimensional solution.

Iteration history:

  • Iteration 1: SSTRESS = 0.42841
  • Iteration 2: SSTRESS = 0.37073, Improvement = 0.05768
  • Iteration 3: SSTRESS = 0.36893, Improvement = 0.00180
  • Iteration 4: SSTRESS = 0.36702, Improvement = 0.00191
  • Iteration 5: SSTRESS = 0.36696, Improvement = 0.00006

Interpretation of Results

The 4-dimensional solution, with a stress value of 0.155 and an RSQ value of 0.815, suggests that the structure of the Jouve Cerebrals Crystallized Educational Scale (JCCES) Mathematical Problems (MP) can be adequately represented in a 4-dimensional space. This solution accounts for 81.5% of the variance in the disparities, and it represents a balance between model complexity and goodness of fit.

The ALSCAL analysis results imply that there are four underlying dimensions or constructs in the JCCES Mathematical Problems items that contribute significantly to the structure of the data. These dimensions may represent distinct cognitive abilities, problem-solving strategies, or other factors that influence an individual's performance on the JCCES Mathematical Problems items.

It is essential to note that while the 5-dimensional solution provided marginally better fit statistics (RSQ = 0.851 and stress = 0.127), the additional dimension would increase model complexity without a substantial improvement in goodness of fit. As a result, the 4-dimensional solution is more parsimonious and appropriate for this study.
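Applied mechanically to the four reported (stress, RSQ) pairs, the predefined selection rule picks the 4-dimensional solution:

```python
# (dimensions, stress, RSQ) as reported for each ALSCAL solution
solutions = [(2, 0.305, 0.647), (3, 0.210, 0.739), (4, 0.155, 0.815), (5, 0.127, 0.851)]

def pick_dimensionality(sols):
    """Lowest dimensionality meeting RSQ > 0.80 and 0.10 <= stress <= 0.20."""
    for dims, stress, r2 in sorted(sols):
        if r2 > 0.80 and 0.10 <= stress <= 0.20:
            return dims
    return None  # no solution meets the criteria

print(pick_dimensionality(solutions))  # → 4
```

Because the rule scans dimensionalities in ascending order, parsimony is built in: the 5-dimensional solution also satisfies both thresholds but is never reached.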

To further interpret and understand the meaning of these dimensions, it is necessary to examine the item content and characteristics of the JCCES Mathematical Problems items, as well as any relevant theoretical frameworks in the field of cognitive and educational psychology. This examination would help researchers identify and label the dimensions, providing a better understanding of the underlying structure of the JCCES Mathematical Problems and informing future research and educational practice.

Limitations

Despite the successful identification of a 4-dimensional solution that met the predefined criteria, there are certain limitations to this study. First, the sample size of 588 participants may not be sufficient to generalize the findings to a broader population. Additionally, selection bias may be present, as the participants may not be representative of the entire population of interest. Finally, the study is limited by the methodological approach, as ALSCAL is an exploratory technique and may not provide definitive conclusions about the underlying structure of the data.

In conclusion, the 4-dimensional solution provided a reasonable fit for the item structure of the JCCES Mathematical Problems, with an RSQ value of 0.815 and a stress value of 0.155. This solution offers a basis for further investigation and interpretation of the underlying dimensions in the JCCES Mathematical Problems dataset. However, it is important to consider the limitations of this study when interpreting and generalizing these findings.

Discussion

Interpretation of Results and Comparison with Previous Research

The results of the present study indicate that a 4-dimensional solution best represents the structure of the JCCES Mathematical Problems items. This finding is consistent with previous research suggesting that mathematical problem-solving involves multiple dimensions or cognitive abilities (e.g., Geary, 1994; Hecht, 2001; Swanson & Beebe-Frankenberger, 2004). These dimensions may represent distinct skills or strategies, such as numerical reasoning, spatial visualization, analytical thinking, and computational fluency. The identification of these dimensions provides a deeper understanding of the underlying structure of the JCCES Mathematical Problems and can inform both theoretical and practical applications in the field of cognitive and educational psychology.

Unexpected Findings and Their Importance

One notable finding in the present study was that the 5-dimensional solution, although offering slightly better fit statistics, did not provide a substantial improvement in the goodness of fit compared to the 4-dimensional solution. This finding suggests that the additional dimension in the 5-dimensional solution may not be necessary or meaningful, and the 4-dimensional solution is more parsimonious and appropriate. This result highlights the importance of considering model complexity and parsimony in addition to fit statistics when selecting the best solution in multidimensional scaling analyses.

Implications for Theory, Practice, and Future Research

The present study's findings contribute to the understanding of the structure of mathematical problem-solving abilities as assessed by the JCCES Mathematical Problems. By identifying four underlying dimensions, researchers can further explore these dimensions' nature and implications for cognitive and educational psychology theories. For instance, the findings can inform the development of more targeted interventions and instructional strategies to improve specific dimensions of mathematical problem-solving abilities.

Moreover, the results can help practitioners, such as educators and clinicians, to better interpret and use the JCCES Mathematical Problems in various settings, such as educational assessment, cognitive assessment, and intervention planning. By understanding the underlying dimensions, practitioners can more accurately identify students' strengths and weaknesses and provide targeted support to enhance their mathematical problem-solving skills.

Limitations and Alternative Explanations

As mentioned earlier, several limitations should be considered when interpreting the findings of the present study. The sample size and potential selection bias may limit the generalizability of the results to a broader population. Additionally, the exploratory nature of the ALSCAL analysis does not allow for definitive conclusions about the underlying structure of the data. Future studies may employ confirmatory techniques, such as confirmatory factor analysis or structural equation modeling, to validate the 4-dimensional solution identified in the present study.

Another limitation is the potential influence of other factors, such as individual differences in motivation, attention, or working memory capacity, which may have affected participants' performance on the JCCES Mathematical Problems and, consequently, the identified dimensions. Future research could examine these potential influences and incorporate them into the analysis to gain a more comprehensive understanding of the underlying structure of mathematical problem-solving abilities.

Directions for Future Research

Future research should aim to replicate and extend the present study using larger and more diverse samples to increase generalizability. Furthermore, researchers could examine the content and characteristics of the JCCES Mathematical Problems items to better understand and label the identified dimensions. This analysis could involve examining the items in relation to relevant theoretical frameworks, such as the multiple-component model of mathematical problem-solving (Swanson & Beebe-Frankenberger, 2004), to provide more meaningful interpretations of the dimensions.

Additionally, longitudinal studies could investigate the development of the identified dimensions across different age groups and educational levels to explore their potential implications for educational practice and cognitive development. Finally, future research could examine the relationship between the identified dimensions and other cognitive abilities or academic achievement measures to explore the practical significance and predictive validity of the JCCES Mathematical Problems.

Conclusion

In conclusion, this study found that a 4-dimensional solution best represents the structure of the JCCES Mathematical Problems items, accounting for 81.5% of the variance in the disparities. These dimensions may represent distinct cognitive abilities or problem-solving strategies, providing valuable insights into the structure of mathematical problem-solving abilities. The findings have significant implications for both theory and practice, informing future research and educational interventions targeting specific dimensions of mathematical problem-solving.

However, it is important to acknowledge the study's limitations, including the sample size, potential selection bias, and the exploratory nature of the ALSCAL analysis. Future research should focus on validating the 4-dimensional solution using confirmatory techniques, examining the content of the items, and investigating potential influences of individual differences. Longitudinal studies and investigations of the relationship between the identified dimensions and other cognitive abilities or academic achievement measures are also recommended to further our understanding of the underlying structure of mathematical problem-solving abilities.

References

De Ayala, R. J. (2009). The theory and practice of item response theory. New York, NY: Guilford Press.

Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates. https://doi.org/10.4324/9781410605269

Hambleton, R. K., & Swaminathan, H. (1985). Item response theory: Principles and applications. Boston, MA: Kluwer-Nijhoff.

Hecht, S. A., & Vagi, K. J. (2010). Sources of group and individual differences in emerging fraction skills. Journal of Educational Psychology, 102(4), 843–859. https://doi.org/10.1037/a0019824

Geary, D. C. (1994). Children's mathematical development: Research and practical applications. Washington, DC: American Psychological Association. https://doi.org/10.1037/10163-000

Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Hillsdale, NJ: Erlbaum.

Jouve, X. (2010a). Investigating the Relationship Between JCCES and RIAS Verbal Scale: A Principal Component Analysis Approach. Retrieved from https://cogniqblog.blogspot.com/2010/02/on-relationship-between-jcces-and.html

Jouve, X. (2010b). Relationship between Jouve Cerebrals Crystallized Educational Scale (JCCES) Crystallized Educational Index (CEI) and Cognitive and Academic Measures. Retrieved from https://cogniqblog.blogspot.com/2010/02/correlations-between-jcces-and-other.html

Jouve, X. (2010c). Jouve Cerebrals Crystallized Educational Scale. Retrieved from http://www.cogn-iq.org/tests/jouve-cerebrals-crystallized-educational-scale-jcces

Kruskal, J. B., & Wish, M. (1978). Multidimensional scaling. Beverly Hills, CA: Sage. https://doi.org/10.4135/9781412985130

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.

Swanson, H. L., & Beebe-Frankenberger, M. (2004). The Relationship Between Working Memory and Mathematical Problem Solving in Children at Risk and Not at Risk for Serious Math Difficulties. Journal of Educational Psychology, 96(3), 471–491. https://doi.org/10.1037/0022-0663.96.3.471

Young, F. W., Takane, Y., & Lewyckyj, R. (1978). ALSCAL: A nonmetric multidimensional scaling program with several individual-differences options. Behavior Research Methods & Instrumentation, 10(3), 451–453. https://doi.org/10.3758/BF03205177

Sunday, February 14, 2010

Relationship between Jouve Cerebrals Crystallized Educational Scale (JCCES) Crystallized Educational Index (CEI) and Cognitive and Academic Measures

Abstract

This study aimed to examine the relationships between the Jouve Cerebrals Crystallized Educational Scale (JCCES) Crystallized Educational Index (CEI) and various measures of cognitive abilities and academic achievement. Pearson correlation analyses were used to test the research hypotheses. The results showed strong correlations between the JCCES CEI and measures of cognitive abilities, including the Reynolds Intellectual Assessment Scale (RIAS), Wechsler Adult Intelligence Scale - Third Edition (WAIS-III), Wechsler Intelligence Scale for Children - Third Edition (WISC-III), General Ability Measure for Adults (GAMA), and Stanford Binet Intelligence Scale (SBIS). Additionally, strong correlations were observed between the JCCES CEI and measures of academic achievement, including the Scholastic Assessment Test (SAT), American College Test (ACT), and Graduate Record Examination (GRE). The results suggest that the JCCES CEI is an effective measure of general cognitive ability and academic achievement across different age groups.

Keywords: Jouve Cerebrals Crystallized Educational Scale, Crystallized Educational Index, cognitive abilities, academic achievement, Pearson correlation analyses, Scholastic Assessment Test, American College Test, Graduate Record Examination.

Introduction

Psychometrics, the scientific study of psychological measurement, has been a critical aspect of psychology since the early 20th century, with the development of the first intelligence tests by pioneers such as Binet and Simon (1905) and Wechsler (1939). These seminal works laid the foundation for the development of various instruments to assess cognitive abilities, personality traits, and educational outcomes (Anastasi & Urbina, 1997). Over the years, psychometric theories have evolved, with advancements in factor analysis, item response theory, and other methodologies contributing to the refinement of existing instruments and the development of new ones (Embretson & Reise, 2000).

One such instrument is the Jouve Cerebrals Crystallized Educational Scale (JCCES), which assesses crystallized intelligence, a key component of general cognitive ability (Cattell, 1971; Horn & Cattell, 1966). Crystallized intelligence, often considered the product of accumulated knowledge and experiences, has been shown to be a reliable predictor of academic achievement and occupational success (Deary et al., 2007; Neisser et al., 1996).

The present study aims to examine the relationships between the JCCES Crystallized Educational Index (CEI) and various other measures of cognitive abilities and academic achievement, such as the Reynolds Intellectual Assessment Scale (RIAS), the Wechsler Adult Intelligence Scale - Third Edition (WAIS-III), the Scholastic Assessment Test (SAT), the American College Test (ACT), the Graduate Record Examination (GRE), the Armed Forces Qualification Test (AFQT), the Wechsler Intelligence Scale for Children - Third Edition (WISC-III), the General Ability Measure for Adults (GAMA), and the Stanford Binet Intelligence Scale (SBIS). Pearson correlation analyses were employed to investigate these relationships.

A comprehensive understanding of the relationships between the JCCES CEI and these well-established measures can provide valuable insight into the validity and utility of the JCCES in various contexts. Previous research has demonstrated that crystallized intelligence is a significant predictor of academic achievement (Deary et al., 2007) and is often correlated with other measures of cognitive abilities (Carroll, 1993). Therefore, the present study seeks to extend the existing literature by further examining these relationships, while also assessing the JCCES CEI's potential as an effective tool for predicting academic and cognitive outcomes.

The results of this study may have important implications for the use of the JCCES in educational and occupational settings and may contribute to the ongoing refinement of psychometric theories and methodologies. By exploring the relationships between the JCCES CEI and a range of well-established cognitive and achievement measures, this study aims to provide a comprehensive understanding of the JCCES's validity and utility within the broader context of psychometrics research.

Results

Statistical Analyses

The research hypotheses were tested using Pearson correlations to examine the relationships between the Jouve Cerebrals Crystallized Educational Scale (JCCES) Crystallized Educational Index (CEI) and various other measures. Assumptions made for the Pearson correlation analyses included linearity, homoscedasticity, and normality of the data.
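The Pearson coefficient underlying all of these analyses can be computed directly from paired scores. The sketch below is illustrative only: the data are hypothetical, not from the study, and the function is a plain-Python equivalent of what any statistics package reports.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical paired scores (JCCES CEI vs. a criterion measure),
# for illustration only -- not data from the study.
cei = [95, 101, 105, 109, 112, 118, 120, 125, 130]
criterion = [97, 99, 108, 111, 115, 121, 118, 122, 128]
print(f"r = {pearson_r(cei, criterion):.3f}")
```

The stated assumptions (linearity, homoscedasticity, normality) matter because Pearson's r only summarizes linear association; curvilinear relationships or outliers can distort it.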

Presentation of Results

The results of the Pearson correlation analyses between the JCCES CEI and various measures of cognitive abilities and academic achievement are presented in detail below. Most correlations were statistically significant at the p < .001 level, and the magnitudes of the coefficients indicate strong relationships between the JCCES CEI and the respective measures.

Reynolds Intellectual Assessment Scale (RIAS, N = 138): The JCCES CEI demonstrated strong correlations with the Verbal Intelligence Index (VII) (r = .859, p < .001), the Guess What? (GWH) information subtest (r = .814, p < .001), and Verbal Reasoning (VRZ) (r = .859, p < .001).

Wechsler Adult Intelligence Scale - Third Edition (WAIS-III, N = 76): The JCCES CEI showed strong correlations with Full Scale IQ (FSIQ) (r = .821, p < .001), Verbal IQ (VIQ) (r = .837, p < .001), Performance IQ (PIQ) (r = .660, p < .001), Verbal Comprehension Index (VCI) (r = .816, p < .001), Vocabulary (VOC) (r = .775, p < .001), Similarities (SIM) (r = .579, p < .001), and Information (INF) (r = .769, p < .001).

Scholastic Assessment Test (SAT) (three different versions): The JCCES CEI exhibited strong correlations with SAT Composite scores for all three versions: <1995 (r = .814, p < .001, N = 87), 1995-2005 (r = .826, p < .001, N = 118), and >2005 (r = .858, p < .001, N = 125). Similarly, significant correlations were observed with Verbal and Mathematical scores across the three versions.

American College Test (ACT, N = 133): The JCCES CEI was significantly correlated with the ACT Composite score (r = .691, p < .001) and all subscales, including English (r = .636, p < .001), Mathematics (r = .600, p < .001), Reading (r = .676, p < .001), and Science (r = .685, p < .001).

Graduate Record Examination (GRE, N = 66): The JCCES CEI demonstrated strong correlations with the GRE Composite (r = .844, p < .001), Verbal (r = .768, p < .001), and Quantitative (r = .819, p < .001) scores. However, the correlation with the GRE Analytical subscale was weaker (r = .430, p = .020, N = 29).

Armed Forces Qualification Test (AFQT, N = 62): The JCCES CEI showed a strong correlation with the AFQT percentile converted to a deviation IQ (r = .825, p < .001).
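The study does not specify how AFQT percentiles were converted to deviation IQs. A common convention maps a percentile rank through the inverse normal CDF onto an IQ metric with mean 100 and SD 15; a minimal sketch under that assumption:

```python
from statistics import NormalDist

def percentile_to_deviation_iq(percentile, mean=100.0, sd=15.0):
    """Map a percentile rank (0 < percentile < 100) to a deviation IQ,
    assuming the underlying score distribution is normal with the
    given mean and standard deviation."""
    z = NormalDist().inv_cdf(percentile / 100.0)
    return mean + sd * z

print(round(percentile_to_deviation_iq(50)))  # the median maps to IQ 100
```

Whatever table the study actually used, a monotone conversion like this preserves rank order, though the nonlinearity at the tails can affect the size of Pearson correlations computed on the converted scores.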

Wechsler Intelligence Scale for Children - Third Edition (WISC-III, N = 29): The JCCES CEI had strong correlations with Full Scale IQ (FSIQ) (r = .851, p < .001), Verbal IQ (VIQ) (r = .665, p = .003, N = 18), and Performance IQ (PIQ) (r = .703, p = .001, N = 18).

General Ability Measure for Adults (GAMA, N = 64): The JCCES CEI was significantly correlated with the GAMA IQ score (r = .617, p < .001) and all subscales, including Matching (r = .467, p < .001), Analogies (r = .612, p < .001), Sequences (r = .455, p < .001), and Construction (r = .482, p < .001).

Stanford Binet Intelligence Scale (SBIS, N = 10): The JCCES CEI exhibited a strong correlation with the SBIS Full Scale IQ (FSIQ) (r = .883, p = .001), although the very small sample limits the precision of this estimate.

Interpretation of Results

A closer examination of the Pearson correlation results allows further interpretation of the relationships between the JCCES CEI and the various cognitive and academic measures. The majority of the correlations were strong, supporting the research hypothesis that the JCCES CEI is positively related to these measures.

The strong relationships between the JCCES CEI and various intelligence scales provide further evidence that the JCCES CEI is an effective measure of general cognitive ability across different age groups. Both the Wechsler Adult Intelligence Scale - Third Edition (WAIS-III) and the Wechsler Intelligence Scale for Children - Third Edition (WISC-III) are widely recognized and well-established measures of cognitive ability, assessing various domains such as verbal comprehension, perceptual organization, working memory, and processing speed.

Intelligence Tests

  1. Wechsler Adult Intelligence Scale - Third Edition (WAIS-III): The WAIS-III is designed for individuals aged 16 to 89 years, assessing cognitive abilities across multiple domains. The strong correlation between the JCCES CEI and the WAIS-III Full Scale IQ (FSIQ) (r = .821, p < .001, N = 76) indicates that the JCCES CEI effectively captures general cognitive ability in adults. This positive relationship suggests that the JCCES CEI could be a useful tool for assessing cognitive abilities in various settings, such as educational, clinical, and occupational contexts.
  2. Wechsler Intelligence Scale for Children - Third Edition (WISC-III): The WISC-III is designed for children aged 6 to 16 years, assessing cognitive abilities across a similar range of domains as the WAIS-III. The strong correlation between the JCCES CEI and the WISC-III Full Scale IQ (FSIQ) (r = .851, p < .001, N = 29) suggests that the JCCES CEI is also effective in measuring general cognitive ability in children. This positive relationship implies that the JCCES CEI could be a valuable instrument for evaluating cognitive abilities in educational settings, as well as for identifying potential learning difficulties or giftedness in children.
Academic Tests

The Scholastic Assessment Test (SAT) is a widely used standardized test for college admissions in the United States, designed to measure students' critical thinking, problem-solving, and overall academic aptitude. The strong relationships between the JCCES CEI and SAT Composite scores across all three versions suggest that the JCCES CEI is a reliable indicator of academic achievement as measured by the SAT.

The SAT has undergone several changes over the years, resulting in three distinct versions. The following details illustrate the strong relationships between the JCCES CEI and each version of the SAT:

  1. SAT <1995: This version of the SAT consisted of two main sections: Verbal and Mathematical. The JCCES CEI showed a strong correlation with the SAT Composite score for this version (r = .814, p < .001, N = 87), indicating that the JCCES CEI is positively related to both verbal and mathematical abilities as measured by the SAT <1995.
  2. SAT 1995-2005: This version of the SAT maintained the Verbal and Mathematical sections, but introduced a new format and scoring system. The JCCES CEI displayed a strong correlation with the SAT Composite score for this version (r = .826, p < .001, N = 118), suggesting that the JCCES CEI remains a reliable indicator of academic achievement despite changes to the SAT format.
  3. SAT >2005: This version of the SAT introduced a third section, Writing, in addition to the existing Verbal (renamed as Reading) and Mathematical sections. The JCCES CEI demonstrated a strong correlation with the SAT Composite score for this version (r = .858, p < .001, N = 125), implying that the JCCES CEI is positively related to all three aspects of the SAT: Reading, Mathematical, and Writing.

The American College Test (ACT) correlations with the JCCES CEI provide further evidence that the JCCES CEI captures various aspects of academic achievement across multiple subject areas. The ACT is a standardized test that assesses high school students' general educational development and their ability to complete college-level work, covering four main subject areas: English, Mathematics, Reading, and Science.

The Pearson correlation analyses results for the ACT subscales are as follows:

  1. English: The JCCES CEI exhibited a strong correlation with the ACT English subscale (r = .636, p < .001, N = 133). This suggests that the JCCES CEI is positively related to English language skills, including grammar, punctuation, sentence structure, and rhetorical skills.
  2. Mathematics: The JCCES CEI displayed a strong correlation with the ACT Mathematics subscale (r = .600, p < .001, N = 133). This indicates a positive relationship between the JCCES CEI and mathematical problem-solving abilities, including knowledge of algebra, geometry, and trigonometry.
  3. Reading: The JCCES CEI showed a strong correlation with the ACT Reading subscale (r = .676, p < .001, N = 133). This implies that the JCCES CEI is positively associated with reading comprehension skills, including the ability to understand and analyze complex literary and informational texts.
  4. Science: The JCCES CEI demonstrated a strong correlation with the ACT Science subscale (r = .685, p < .001, N = 133). This suggests that the JCCES CEI is positively related to scientific reasoning and problem-solving skills, including the ability to interpret and analyze data from various scientific disciplines.

The moderate correlation between the JCCES CEI and the Graduate Record Examination (GRE) Analytical subscale (r = .430, p = .020, N = 29) is notable, as it suggests a weaker relationship between the JCCES CEI and analytical abilities compared to the strong correlations observed with other cognitive and academic measures. Several factors might contribute to this finding, including:
  1. Differences in assessed skills: The JCCES CEI, which consists of Verbal Analogies, Mathematical Problems, and General Knowledge subtests, primarily measures crystallized intelligence. Crystallized intelligence refers to the knowledge and skills acquired through experience and education, such as vocabulary and factual information. In contrast, the GRE Analytical subscale assesses analytical writing skills, including the ability to articulate complex ideas, support arguments with relevant reasons and examples, and demonstrate critical thinking. The moderate correlation between the JCCES CEI and the GRE Analytical subscale may reflect the differences in the skills assessed by these two measures.
  2. Variability in the sample: The sample used in this study might have influenced the observed correlation between the JCCES CEI and the GRE Analytical subscale. The study participants might have had varying levels of exposure to analytical writing tasks, which could affect their performance on the GRE Analytical subscale. Additionally, the sample size for the GRE Analytical subscale (N = 29) was smaller than that of other measures, which might limit the generalizability of the findings.

Discussion

The present study aimed to examine the relationships between the Jouve Cerebrals Crystallized Educational Scale (JCCES) Crystallized Educational Index (CEI) and various measures of cognitive abilities and academic achievement. The results support the research hypothesis that the JCCES CEI is positively related to these measures. Specifically, the JCCES CEI demonstrated strong correlations with measures of verbal intelligence, information, verbal reasoning, full-scale IQ, verbal IQ, performance IQ, verbal comprehension, vocabulary, and similarities; with SAT Composite scores across three different versions; with the ACT Composite score and subscales; with the GRE Composite and Quantitative scores; and with the AFQT IQ score. The JCCES CEI also exhibited strong correlations with the GAMA IQ score and all subscales, as well as the SBIS Full Scale IQ.

The strong correlations between the JCCES CEI and various intelligence scales provide further evidence that the JCCES CEI is an effective measure of general cognitive ability across different age groups. The positive relationships between the JCCES CEI and various cognitive and academic measures suggest that the JCCES CEI could be a useful tool for assessing cognitive abilities and academic achievement in various settings, such as educational, clinical, and occupational contexts (Deary et al., 2007).

The strong correlations between the JCCES CEI and the SAT Composite scores across all three versions suggest that the JCCES CEI is a reliable indicator of academic achievement as measured by the SAT. The strong correlations observed between the JCCES CEI and the ACT composite score and subscales suggest that the JCCES CEI captures various aspects of academic achievement across multiple subject areas.

The moderate correlation between the JCCES CEI and the GRE Analytical subscale suggests a weaker relationship between the JCCES CEI and analytical abilities compared to the strong correlations observed with other cognitive and academic measures. This finding may reflect differences in the skills assessed by these two measures, as well as the variability in the sample used in this study.

Implications for Theory, Practice, and Future Research

The findings of the present study have several implications for theory and practice. The strong correlations observed between the JCCES CEI and measures of cognitive abilities and academic achievement support the validity and reliability of the JCCES as a measure of general cognitive ability and academic achievement. The JCCES CEI could be a valuable tool for assessing cognitive abilities and academic achievement in educational, clinical, and occupational settings.

The results of this study also have implications for future research. The present study used a cross-sectional design, and future research could use a longitudinal design to examine the stability and predictive validity of the JCCES CEI over time. Additionally, future research could explore the relationship between the JCCES CEI and other measures of academic achievement, such as high school and college GPA. Furthermore, future research could examine the factor structure of the JCCES and its relationships with other measures of cognitive abilities.

Limitations

There are several limitations to this study that may have affected the results. First, the sample size varied across the different measures, with smaller sample sizes for some of the tests. Smaller sample sizes may have limited the statistical power to detect significant correlations.

Second, selection bias may have influenced the results, as participants may have been more likely to respond to the survey if they had higher cognitive abilities or academic achievement. This could have resulted in an overestimation of the correlations between the JCCES CEI and other measures.

Finally, many samples relied on self-reported data, which may be subject to reporting biases and inaccuracies. Although the JCCES is an untimed, self-administered, open-ended test, it is possible that participants' responses were influenced by factors such as social desirability or recall biases, which may have affected the validity of the study results.

Future Research

Future research could address some of the limitations of this study, including increasing sample sizes for certain measures and using more diverse samples to improve generalizability. Additionally, future research could examine the JCCES CEI's relationship with other cognitive and academic measures not included in this study, such as measures of creativity or problem-solving ability.

Further exploration of the weaker relationship between the JCCES CEI and the GRE Analytical subscale could also be valuable. Additional research could investigate whether the moderate correlation is due to differences in the skills assessed or limitations of the sample used in this study. Future studies could also examine the JCCES CEI's relationship with other measures of analytical abilities, such as performance on analytical writing tasks or measures of critical thinking.

Implications

The results of this study have important implications for both theory and practice. The strong relationships between the JCCES CEI and various measures of cognitive abilities and academic achievement provide further evidence for the construct validity of the JCCES as a measure of general cognitive ability. The JCCES CEI may be particularly useful in educational and occupational settings for assessing individuals' cognitive abilities, identifying potential learning difficulties or giftedness, and predicting academic and occupational success.

Additionally, the strong correlations between the JCCES CEI and the SAT and ACT suggest that the JCCES CEI is an effective tool for predicting academic achievement. As such, the JCCES CEI may be useful for guiding educational interventions and for identifying individuals who may benefit from academic support.

However, it is important to note that the JCCES CEI should not be used as the sole measure for assessing cognitive abilities or academic achievement. Rather, the JCCES CEI should be used in conjunction with other measures to provide a more comprehensive evaluation of an individual's strengths and weaknesses.

Conclusion

In conclusion, the results of this study provide strong evidence for the construct validity of the JCCES as a measure of general cognitive ability. The JCCES CEI demonstrated strong correlations with various measures of cognitive abilities and academic achievement, including well-established measures such as the WAIS-III and the SAT. The study results suggest that the JCCES CEI may be a useful tool for assessing cognitive abilities and predicting academic and occupational success. However, the limitations of the study should be taken into consideration when interpreting the results. Future research could address some of the limitations and further explore the JCCES CEI's relationship with other measures of cognitive abilities and academic achievement.

References

Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice-Hall.

Binet, A., & Simon, T. (1905). New methods for the diagnosis of the intellectual level of subnormals. L'Année Psychologique, 11, 191-244.

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge, UK: Cambridge University Press.

Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston, MA: Houghton Mifflin.

Deary, I. J., Strand, S., Smith, P., & Fernandes, C. (2007). Intelligence and educational achievement. Intelligence, 35(1), 13-21. https://doi.org/10.1016/j.intell.2006.02.001

Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates.

Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology, 57(5), 253-270.

Neisser, U., Boodoo, G., Bouchard, T. J., Jr., Boykin, A. W., Brody, N., Ceci, S. J., Halpern, D. F., Loehlin, J. C., Perloff, R., Sternberg, R. J., & Urbina, S. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51(2), 77-101.

Wechsler, D. (1939). The measurement of adult intelligence. Baltimore, MD: Williams & Wilkins.