
Thursday, March 2, 2023

[Article Review] Reversing the Tide: Unraveling the Flynn Effect in U.S. Adults

Reference

Dworak, E. M., Revelle, W., & Condon, D. M. (2023). Looking for Flynn effects in a recent online U.S. adult sample: Examining shifts within the SAPA Project. Intelligence, 98, 101734. https://doi.org/10.1016/j.intell.2023.101734

Review

The Flynn effect, named after the psychologist James Flynn, refers to the phenomenon of a significant and steady increase in intelligence test scores over time. While extensive research has documented this trend in European countries, there is a dearth of studies exploring the presence or reversal of the Flynn effect in the United States, particularly among adult populations. In their recent study, Dworak, Revelle, and Condon (2023) addressed this gap by analyzing the cognitive ability scores of a large cross-sectional sample of U.S. adults from 2006 to 2018.

The authors used data from the Synthetic Aperture Personality Assessment Project (SAPA Project), which included responses from 394,378 adults. The cognitive ability scores were derived from two overlapping sets of items from the International Cognitive Ability Resource (ICAR). The researchers examined trends in standardized average composite cognitive ability scores and domain scores of matrix reasoning, letter and number series, verbal reasoning, and three-dimensional rotation.

The results revealed a pattern consistent with a reversed Flynn effect for composite ability scores from 35 items and domain scores (matrix reasoning; letter and number series) from 2006 to 2018 when stratified across age, education, or gender. However, slopes for verbal reasoning scores did not meet or exceed an annual threshold of |0.02| SD. Furthermore, a reversed Flynn effect was also present for composite ability scores from 60 items from 2011 to 2018, across age, education, and gender.
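
To make the |0.02| SD-per-year threshold concrete, here is a minimal sketch of how such annual slopes can be estimated: standardize the composite scores, then regress them on assessment year. The data below are simulated and the code is an illustration of the general method, not the authors' actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for SAPA-style data (illustrative only): an assessment
# year and a composite cognitive ability score for each respondent.
n = 10_000
year = rng.integers(2006, 2019, size=n)                        # 2006..2018
score = 100 - 0.3 * (year - 2006) + rng.normal(0, 15, size=n)

# Express scores in SD units so the slope reads as SD change per year.
z = (score - score.mean()) / score.std()

slope = np.polyfit(year, z, 1)[0]  # OLS slope of standardized score on year
print(f"annual slope: {slope:+.3f} SD/year")
print("meets |0.02| SD/year threshold:", abs(slope) >= 0.02)
```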

Interestingly, despite declining scores across age and demographics in other domains of cognitive ability, three-dimensional rotation scores showed evidence of a Flynn effect, with the largest slopes occurring across age-stratified regressions. This finding suggests that not all cognitive abilities are similarly affected by the Flynn effect or its reversal.

Dworak et al.'s (2023) study makes a significant contribution to the literature on the Flynn effect by providing evidence of its reversal in a large sample of U.S. adults. However, it is essential to consider that the study is based on cross-sectional data, which limits the ability to draw causal conclusions or infer longitudinal trends. Future research could benefit from longitudinal designs to better understand the factors that contribute to the Flynn effect and its reversal in the United States. Additionally, exploring the role of social, cultural, and environmental factors that may impact cognitive abilities could provide further insight into this complex phenomenon.

Thursday, November 14, 2019

[Article Review] Understanding the Role of Intelligence and Music Aptitude in Piano Skill Acquisition for Beginners

Reference

Burgoyne, A. P., Harris, L. J., & Hambrick, D. Z. (2019). Predicting piano skill acquisition in beginners: The role of general intelligence, music aptitude, and mindset. Intelligence, 76, 101383. https://doi.org/10.1016/j.intell.2019.101383

Review

In their study, Burgoyne, Harris, and Hambrick (2019) aim to examine sources of individual differences in musical skill acquisition. The authors had 171 undergraduate students with little or no piano-playing experience try to learn a piece of piano music with the help of a video guide, and then perform it from memory after practice. A panel of musicians assessed the performances based on melodic and rhythmic accuracy. Participants also completed tests of cognitive ability, including working memory capacity, fluid intelligence, crystallized intelligence, and processing speed, as well as two tests of music aptitude. The study found that general intelligence and music aptitude significantly correlated with skill acquisition, but mindset did not. Structural equation modeling revealed that general intelligence, music aptitude, and mindset together accounted for 22.4% of the variance in skill acquisition. However, only general intelligence contributed significantly to the model. Overall, the study suggests that after accounting for individual differences in general intelligence, music aptitude and mindset do not predict piano skill acquisition in beginners.
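
For readers unfamiliar with how such a model is set up, the sketch below specifies a structural equation model of roughly this shape using Python's semopy package. The indicator and file names are hypothetical, and the specification is a simplified guess at the study's design rather than the authors' actual model.

```python
import pandas as pd
import semopy  # third-party SEM library with lavaan-style model syntax

# Measurement model: g from the cognitive tests, aptitude from the two music
# aptitude tests; structural model: skill regressed on the three predictors.
# Indicator names are hypothetical, not the authors'.
desc = """
g =~ wmc + gf + gc + speed
aptitude =~ tonal + rhythm
skill ~ g + aptitude + mindset
"""

df = pd.read_csv("piano_study.csv")  # hypothetical data file
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())  # path estimates; the skill equation's R^2 corresponds
                        # to the 22.4% variance-explained figure
```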

The results of Burgoyne, Harris, and Hambrick's (2019) study are particularly relevant for music educators, who may be interested in understanding how individual differences in cognitive ability and musical aptitude can affect skill acquisition. The study's findings suggest that general intelligence and music aptitude play a role in piano skill acquisition, but mindset does not. This implies that educators may be better off focusing on developing students' general intelligence and musical aptitude when teaching piano, rather than trying to cultivate a particular mindset. Additionally, the study's focus on beginners is notable, as many studies on music education have focused on more experienced musicians.

One limitation of the study is that it only examined the role of general intelligence, music aptitude, and mindset in predicting piano skill acquisition in beginners. Other factors, such as motivation and practice habits, may also play a role in skill acquisition. Additionally, the study only used one measure of mindset, the Mindset Inventory, which may not have been sensitive enough to capture the nuances of different mindsets. Nonetheless, the study’s results provide valuable insights into the factors that may influence musical skill acquisition.

Monday, September 24, 2018

[Article Review] Unlocking the Secrets of IQ Malleability: The Role of Epigenetics and Dopamine D2 Receptor

Reference

Kaminski, J. A., Schlagenhauf, F., Rapp, M., Awasthi, S., Ruggeri, B., Deserno, L., Banaschewski, T., Bokde, A. L. W., Bromberg, U., Büchel, C., Quinlan, E. B., Desrivières, S., Flor, H., Frouin, V., Garavan, H., Gowland, P., Ittermann, B., Martinot, J.-L., Martinot, M.-L. P., Nees, F., Papadopoulos Orfanos, D., Paus, T., Poustka, L., Smolka, M. N., Fröhner, J. H., Walter, H., Whelan, R., Ripke, S., Schumann, G., Heinz, A., & the IMAGEN consortium. (2018). Epigenetic variance in dopamine D2 receptor: a marker of IQ malleability? Translational Psychiatry, 8, Article 169. https://doi.org/10.1038/s41398-018-0222-7

Review

In this article, the authors investigate the association between general IQ (gIQ) and various factors, including polygenic scores for intelligence, epigenetic modification of the DRD2 gene, gray matter density in the striatum, and functional striatal activation elicited by temporally surprising reward-predicting cues. The study includes a sample of 1,475 healthy adolescents from the IMaging and GENetics (IMAGEN) project.

The researchers emphasize the significance of understanding the malleability of gIQ and its neurobiological correlates. Their findings indicate the equal importance of genetic variance, epigenetic modification of the DRD2 receptor gene, and functional striatal activation, which influences dopamine neurotransmission. These factors contribute to the variance in cognitive test performance and could potentially explain the "missing heritability" between estimates from twin studies and variance explained by genetic markers.

This study offers valuable insight into the factors that influence gIQ and highlights the need for further research on peripheral epigenetic markers. Specifically, future studies should investigate individual and environmental factors that modify the epigenetic structure, as well as confirm these peripheral markers within the central nervous system in longitudinal settings.


Monday, June 18, 2018

[Article Review] Unlocking Potential: How Education Can Improve Intelligence

Reference

Ritchie, S. J., & Tucker-Drob, E. M. (2018). How Much Does Education Improve Intelligence? A Meta-Analysis. Psychological Science, 29(8), 1358-1369. https://doi.org/10.1177/0956797618774253

Review

In this article, Ritchie and Tucker-Drob (2018) explore the relationship between education and intelligence, specifically whether more education leads to increased intelligence. The authors conducted a meta-analysis of 142 effect sizes from 42 data sets, involving over 600,000 participants, using quasi-experimental methods including controlled associations, instrumental variables, and regression-discontinuity designs. The results reveal a consistent, positive effect of education on cognitive abilities, with an increase of 1 to 5 IQ points for each additional year of education.
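
As a toy illustration of how effect sizes from different data sets get pooled in a meta-analysis, the sketch below applies simple inverse-variance weighting to made-up per-study estimates. Ritchie and Tucker-Drob's actual analysis used more sophisticated random-effects meta-regression, so this is a conceptual sketch only.

```python
import numpy as np

# Made-up per-study effects (IQ points gained per additional year of
# education) and their standard errors; not values from the paper.
effects = np.array([1.2, 3.5, 2.1, 4.8, 2.9])
se = np.array([0.4, 0.9, 0.5, 1.1, 0.6])

# Fixed-effect pooling: weight each study by the inverse of its variance.
w = 1.0 / se**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

print(f"pooled effect: {pooled:.2f} IQ points/year "
      f"(95% CI ±{1.96 * pooled_se:.2f})")
```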

The authors' robust analysis further highlights the durability of the observed effects, as they persist across various life stages and all broad categories of cognitive ability. This finding is significant, as it suggests that education is a consistent and reliable method for increasing intelligence. By using various research designs, Ritchie and Tucker-Drob (2018) strengthen the validity of their findings, making a compelling case for the importance of continued education in promoting cognitive development.

Overall, the study by Ritchie and Tucker-Drob (2018) offers valuable insight into the impact of education on intelligence, and its findings have important implications for policymakers and educators. The results underscore the significance of investing in education to promote cognitive growth, which can contribute to individual and societal success. This study lays a strong foundation for future research exploring the specific mechanisms through which education may enhance intelligence and cognitive abilities.

Monday, March 21, 2016

[Article Review] Busting the Myth: Are Blondes Really Dumb?

Reference

Zagorsky, J. (2016). Are Blondes Really Dumb? Economics Bulletin, 36(1), 401-410.

Review

In the article "Are Blondes Really Dumb?" by Jay Zagorsky (2016), the author investigates the validity of the stereotype that blonde women are less intelligent than women with other hair colors. Using data from the National Longitudinal Survey of Youth 1979 (NLSY79), a large representative survey tracking young baby boomers, Zagorsky found that blonde women have a higher mean AFQT IQ than women with brown, red, and black hair. Moreover, blondes are more likely to be classified as "geniuses" and less likely to have extremely low IQs than women with other hair colors.

The author highlights the importance of debunking this stereotype, as discrimination based on appearance can have serious economic consequences. Employers seeking intelligent workers may be less likely to hire blondes based on the assumption that they are less intelligent. Zagorsky's research demonstrates that the "dumb blonde" stereotype is unfounded and urges readers to question other commonly held prejudices that may be damaging as well.

In conclusion, Zagorsky (2016) dispels the myth of the "dumb blonde" by providing empirical evidence that blonde women have a higher mean IQ than women with other hair colors. The author emphasizes the importance of questioning and debunking harmful stereotypes that can lead to discrimination in the workplace and society at large. By challenging these assumptions, we can promote a more inclusive and equitable environment for all.

Saturday, October 11, 2014

Exploring the Relationship between JCCES and ACT Assessments: A Factor Analysis Approach

Abstract

This study aimed to examine the relationship between the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the American College Test (ACT) by conducting a factor analysis. The dataset consisted of 60 observations, with Pearson's correlation revealing significant associations between all variables. The factor analysis identified three factors, with the first factor accounting for 53.697% of the total variance and demonstrating the highest loadings for all variables. The results suggest that the JCCES and ACT assessments may be measuring a common cognitive construct, which could be interpreted as general cognitive ability or intelligence. However, several limitations should be considered, including the sample size, the scope of the analysis, and the use of factor analysis as the sole statistical method. Future research should employ larger samples, consider additional assessments, and explore alternative statistical techniques to validate these findings.

Keywords: Jouve Cerebrals Crystallized Educational Scale, American College Test, factor analysis, general cognitive ability, intelligence, college admission assessments.

Introduction

Psychometrics has long been a central topic of interest for researchers aiming to understand the underlying structure of cognitive abilities and the validity of various assessment tools. One of the most widely recognized theories in this field is the theory of general intelligence, or g-factor, which posits that an individual's cognitive abilities can be captured by a single underlying factor (Spearman, 1904). Over the years, numerous instruments have been developed to measure this general cognitive ability, with intelligence tests and college admission assessments being among the most prevalent. However, the extent to which these instruments measure the same cognitive construct remains a subject of debate.

The present study aims to investigate the factor structure of two assessments, the Jouve Cerebrals Crystallized Educational Scale (JCCES; Jouve, 2010) and the American College Test (ACT), to test the hypothesis that a single underlying factor accounts for the majority of variance in these measures. This hypothesis is grounded in the g-factor theory and is further supported by previous research demonstrating the strong correlation between intelligence test scores and academic performance (Deary et al., 2007; Koenig et al., 2008).

In recent years, the application of factor analysis has become a popular method for exploring the structure of cognitive assessments and identifying the dimensions that contribute to an individual's performance on these tests (Carroll, 1993; Jensen, 1998). Factor analysis allows researchers to quantify the extent to which various test items or subtests share a common underlying construct, thus providing insights into the validity and reliability of the instruments in question (Fabrigar et al., 1999).

The selection of the JCCES and ACT assessments for this study is based on their use in academic and professional settings and their potential relevance to general cognitive ability. The JCCES is a psychometric test that measures crystallized intelligence, which is thought to reflect accumulated knowledge and skills acquired through education and experience (Cattell, 1971). The ACT, on the other hand, is a college admission assessment that evaluates students' academic readiness in various subject areas, such as English, mathematics, reading, and science (ACT, 2014). By examining the factor structure of these two assessments, the present study aims to shed light on the relationship between intelligence and college admission measures and the extent to which they tap into a common cognitive construct.

In sum, this study seeks to contribute to the ongoing discussion regarding the measurement of cognitive abilities and the relevance of psychometric theories in understanding the structure of intelligence and college admission assessments. By employing factor analysis and focusing on the JCCES and ACT, the study aims to provide a clearer understanding of the relationship between these measures and the g-factor theory. Ultimately, the results of this investigation may help inform the development and validation of future cognitive assessment tools and enhance our understanding of the complex nature of human intelligence.

Method

Research Design

The present study employed a correlational research design to examine the relationship between intelligence and college admission assessments. This design was chosen to analyze the associations between variables without manipulating any independent variables or assigning participants to experimental conditions (Creswell, 2014). The correlational design allows for the exploration of naturally occurring relationships among variables, which is particularly useful in understanding the structure and relationships of cognitive measures.

Participants

A total of 60 participants were recruited for this study; their demographic characteristics were collected but are not reported here. Participants were high school seniors or college students who had completed both the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the American College Test (ACT). There were no exclusion criteria.

Materials

The study utilized two separate assessments to collect data: the JCCES and the ACT.

Jouve Cerebrals Crystallized Educational Scale (JCCES)

The JCCES is a measure of crystallized intelligence and assesses cognitive abilities through three subtests (Jouve, 2010). The subtests include Verbal Analogies (VA), Mathematical Problems (MP), and General Knowledge (GK). The JCCES was chosen for its relevance in evaluating cognitive abilities.

American College Test (ACT)

The ACT is a standardized college admission assessment measuring cognitive domains relevant to college readiness (ACT, 2014). The test is composed of four primary sections: English, Mathematics, Reading, and Science Reasoning. The ACT was selected for its widespread use in educational settings and its ability to evaluate cognitive abilities pertinent to academic success.

Procedure

Data collection involved obtaining participants' scores on both the JCCES and ACT assessments. Participants were instructed to provide their most recent ACT scores upon completing the JCCES online. The scores were then entered into a secure database for analysis. Prior to data collection, informed consent was obtained from all participants, and they were assured of the confidentiality and anonymity of their responses.

Statistical Methods

To analyze the data, a factor analysis was conducted to test the research hypotheses (Tabachnick & Fidell, 2007). Pearson's correlation was used to measure the associations between variables, with principal factor analysis conducted for data extraction. Varimax rotation was employed to simplify the factor structure, with the number of factors determined automatically and initial communalities calculated using squared multiple correlations. The study employed a convergence criterion of 0.0001 and a maximum of 50 iterations.

The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Cronbach's alpha were calculated to assess the sample size adequacy and internal consistency, respectively. Factor loadings were computed for each variable, and the proportion of variance explained by the extracted factors was determined.
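
A minimal sketch of this pipeline in Python is given below, using the factor_analyzer and pingouin packages. The file and column names are hypothetical, and there is no indication of which software the original analysis used; the code simply mirrors the steps described above.

```python
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer, calculate_kmo

# Hypothetical file with one column per measure:
# VA, MP, GK (JCCES) and ENG, MATH, READ, SCIE (ACT).
df = pd.read_csv("jcces_act_scores.csv")

# Sampling adequacy and internal consistency.
_, kmo_model = calculate_kmo(df)
alpha, _ = pg.cronbach_alpha(data=df)
print(f"KMO = {kmo_model:.3f}, Cronbach's alpha = {alpha:.3f}")

# Principal factor extraction with varimax rotation, as described above.
fa = FactorAnalyzer(n_factors=3, method="principal", rotation="varimax")
fa.fit(df)
print("loadings:\n", fa.loadings_)
print("communalities:", fa.get_communalities())
```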

Results

Results of the Statistical Analyses

The Pearson correlation matrix revealed significant correlations (α = 0.05) between all variables. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy indicated a KMO value of 0.809, suggesting that the sample size was adequate for conducting a factor analysis. Cronbach's alpha was calculated at 0.887, indicating satisfactory internal consistency for the variables.

The factor analysis retained three factors, together accounting for 63.526% of the total variance. The first factor (F1) had an eigenvalue of 3.759, accounting for 53.697% of the variance. The second factor (F2) had an eigenvalue of 0.437, accounting for 6.242% of the variance, and the third factor (F3) had an eigenvalue of 0.251, accounting for 3.587% of the variance.

Factor loadings were calculated for each variable, with the first factor (F1) showing the highest loadings for all variables. Specifically, F1 had factor loadings of 0.631 for Verbal Analogies (VA), 0.734 for Mathematical Problems (MP), 0.651 for General Knowledge (GK), 0.802 for English (ENG), 0.881 for Mathematics (MATH), 0.744 for Reading (READ), and 0.905 for Science (SCIE). Final communalities ranged from 0.361 for VA to 0.742 for SCIE, indicating the proportion of variance in each variable explained by the extracted factors.
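
The percentages above follow directly from the eigenvalues: each factor's share of total variance is its eigenvalue divided by the number of observed variables (seven here). A quick check:

```python
# Share of total variance = eigenvalue / number of observed variables.
eigenvalues = {"F1": 3.759, "F2": 0.437, "F3": 0.251}
n_vars = 7  # VA, MP, GK, ENG, MATH, READ, SCIE

for name, ev in eigenvalues.items():
    print(f"{name}: {ev / n_vars:.3%}")
# F1: 53.700%, F2: 6.243%, F3: 3.586% -- matching the reported values
# up to rounding of the eigenvalues themselves.
```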

Interpretation of the Results

The results of the factor analysis support the research hypothesis that a single underlying factor (F1) accounts for the majority of the variance in the intelligence and college admission assessments. Specifically, F1 explained 53.697% of the total variance, with all variables loading highly on this factor. This finding suggests that the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the American College Test (ACT) are measuring a common cognitive construct, which may be interpreted as general cognitive ability or intelligence.


Discussion

Interpretation of the Results and Previous Research

The findings of the present study support the research hypothesis that a single underlying factor (F1) accounts for the majority of the variance in the intelligence and college admission assessments. Specifically, F1 explained 53.697% of the total variance, with all variables loading highly on this factor. This result is consistent with previous research, which has also demonstrated a strong relationship between general cognitive ability, or intelligence, and performance on college admission assessments (Deary et al., 2007; Koenig et al., 2008). The high factor loadings for all variables on F1 suggest that the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the American College Test (ACT) are measuring a common cognitive construct, which may be interpreted as general cognitive ability or intelligence.

Implications for Theory, Practice, and Future Research

The results of this study have important implications for both theory and practice. From a theoretical perspective, the findings support the idea that general cognitive ability is a key underlying factor that contributes to performance on both intelligence and college admission assessments. This suggests that efforts to improve general cognitive ability may be effective in enhancing performance on a wide range of cognitive measures, including college admission assessments.

In terms of practice, the results indicate that the JCCES and ACT assessments are likely measuring similar cognitive constructs, which may have implications for college admission processes. For instance, it may be useful for colleges and universities to consider using a single assessment to evaluate both intelligence and college readiness in applicants, potentially streamlining the admission process and reducing the burden on students.

Moreover, these findings highlight the importance of considering general cognitive ability in educational and career planning. Students, educators, and career counselors can use these insights to develop strategies and interventions aimed at improving general cognitive ability, ultimately enhancing academic and career outcomes.

Limitations and Alternative Explanations

The present study has several limitations that should be considered when interpreting the findings. First, the sample size of 60 observations, although adequate for factor analysis based on the KMO measure, may not be large enough to ensure the generalizability of the results. Future studies should employ larger and more diverse samples to validate these findings.

Second, this study only considered the JCCES and ACT assessments, limiting the scope of the analysis. Further research should investigate the factor structure of other intelligence and college admission assessments, such as the Wechsler Adult Intelligence Scale (WAIS) and the Scholastic Assessment Test (SAT), to provide a more comprehensive understanding of the relationship between these measures and general cognitive ability.

Lastly, the use of factor analysis as the sole statistical method may not account for potential non-linear relationships between the variables. Future studies could employ additional statistical techniques, such as structural equation modeling or item response theory, to better capture the complexity of the relationships between these cognitive measures.

Conclusion

In conclusion, this study's results indicate that a single underlying factor (F1) accounts for the majority of the variance in the intelligence and college admission assessments, specifically, the Jouve Cerebrals Crystallized Educational Scale (JCCES) and the American College Test (ACT). This finding suggests that both assessments measure a common cognitive construct, which may be interpreted as general cognitive ability or intelligence. The implications of these findings for theory and practice are significant, as they provide insight into the relationship between intelligence assessments and college admission tests, potentially guiding the development of more effective testing methods in the future.

However, some limitations should be considered. The sample size of 60 observations may not be large enough for generalizability, and the study only analyzed JCCES and ACT assessments. Future research should include larger, more diverse samples and investigate other intelligence and college admission assessments. Additionally, employing other statistical methods, such as structural equation modeling or item response theory, may better capture the complexity of the relationships between these cognitive measures.

Despite these limitations, the study highlights the importance of understanding the underlying factors that contribute to performance on intelligence and college admission assessments and opens avenues for future research to improve the assessment of general cognitive ability.

References

ACT. (2014). About the ACT. Retrieved from https://www.act.org/content/act/en/products-and-services/the-act/about-the-act.html

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press. https://doi.org/10.1017/CBO9780511571312

Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston, MA: Houghton Mifflin.

Creswell, J. W. (2014). Research Design: Qualitative, Quantitative and Mixed Methods Approaches (4th ed.). Thousand Oaks, CA: Sage. 

Deary, I. J., Strand, S., Smith, P., & Fernandes, C. (2007). Intelligence and educational achievement. Intelligence, 35(1), 13-21. https://doi.org/10.1016/j.intell.2006.02.001

Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272–299. https://doi.org/10.1037/1082-989X.4.3.272

Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.

Jouve, X. (2010). Jouve Cerebrals Crystallized Educational Scale. Retrieved from http://www.cogn-iq.org/tests/jouve-cerebrals-crystallized-educational-scale-jcces

Koenig, K. A., Frey, M. C., & Detterman, D. K. (2008). ACT and general cognitive ability. Intelligence, 36(2), 153–160. https://doi.org/10.1016/j.intell.2007.03.005

Spearman, C. (1904). "General intelligence," objectively determined and measured. The American Journal of Psychology, 15(2), 201-292. https://doi.org/10.2307/1412107

Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston, MA: Pearson Education, Inc.

Tuesday, May 24, 2011

Exploring the Dynamics of Speed and Intelligence at Cogn-IQ.org

In this article, Chew delves into the intriguing connection between the speed of information processing and intelligence, utilizing elementary cognitive tasks (ECTs) to gauge processing speed. Findings over the decades consistently show a negative correlation between reaction times to ECTs and intelligence levels, with more complex tasks amplifying these correlations. 

However, it's crucial to distinguish between processing speed and test-taking speed, as the latter relates more to personality traits. As we examine this relationship further, the role of working memory and task complexity emerges as vital in understanding the link between processing speed and intelligence, highlighting that as tasks become more demanding, the influence of processing speed grows. 
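
A toy simulation makes the complexity point concrete: if more complex tasks tie reaction time more strongly to ability, the negative correlation grows with complexity. The numbers below are invented for illustration and are not from Chew's article.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
ability = rng.normal(0, 1, n)  # standardized ability scores

# Invented reaction times (ms): the complex task depends more on ability.
rt_simple = 400 - 15 * ability + rng.normal(0, 60, n)
rt_complex = 700 - 60 * ability + rng.normal(0, 90, n)

print(np.corrcoef(ability, rt_simple)[0, 1])   # modest negative correlation
print(np.corrcoef(ability, rt_complex)[0, 1])  # stronger negative correlation
```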

Additionally, the relationship between speed and intelligence is nuanced, influenced by item difficulty and individual capability. Challenging tasks can exhibit positive correlations with ability, adding complexity to this intricate relationship. Despite these findings, IQ tests remain a reliable metric for cognitive capability, emphasizing the need for a holistic interpretation of speed and intelligence. 

This article contributes to the broader understanding of human intelligence, showcasing the multifaceted nature of the relationship between speed and intelligence, shaped by working memory, task complexity, and individual capacity. 

Link to Full Article: Chew, M. (2011). Speed & Intelligence: Correlations and Implications. https://www.cogn-iq.org/articles/speed-intelligence-correlations.html

Tuesday, June 1, 2010

[Article Review] Unlocking the Connection: The Relationship Between SAT Scores and General Cognitive Ability

Reference

Frey, M. C., & Detterman, D. K. (2004). Scholastic Assessment or g?: The Relationship Between the Scholastic Assessment Test and General Cognitive Ability. Psychological Science, 15(6), 373-378. https://doi.org/10.1111/j.0956-7976.2004.00687.x

Review

In their seminal article, Frey and Detterman (2004) delve into the relationship between the Scholastic Assessment Test (SAT) and general cognitive ability (g). Their research aimed to understand the correlation between the two constructs and evaluate the SAT as a potential measure of g, in addition to exploring its use as a premorbid measure of intelligence. Two distinct studies were conducted: the first utilized data from the National Longitudinal Survey of Youth 1979, while the second examined the correlation between revised SAT scores and scores on the Raven's Advanced Progressive Matrices among undergraduates.

The first study reported a significant correlation of .82 (corrected for nonlinearity) between measures of g extracted from the Armed Services Vocational Aptitude Battery and SAT scores of 917 participants. The second study further substantiated the relationship, revealing a correlation of .483 (corrected for restricted range) between revised SAT scores and scores on the Raven's Advanced Progressive Matrices among the undergraduate sample. These findings indicate that the SAT is predominantly a test of g, with the authors providing equations for converting SAT scores to estimated IQs. This conversion could be useful for estimating premorbid IQ or conducting individual difference research among college students.
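
For readers curious what "corrected for restricted range" involves, the standard Thorndike Case II formula adjusts an observed correlation upward when the sample's variability is narrower than the population's. Below is a small sketch; the input values are illustrative, not the figures Frey and Detterman actually used.

```python
import math

def correct_range_restriction(r: float, sd_ratio: float) -> float:
    """Thorndike Case II: r is the correlation observed in the restricted
    sample; sd_ratio = SD(unrestricted) / SD(restricted) for the predictor."""
    u = sd_ratio
    return (r * u) / math.sqrt(1 - r**2 + r**2 * u**2)

# Illustrative only: an observed r of .35 with the population SD twice
# the sample SD corrects to roughly .60.
print(round(correct_range_restriction(0.35, 2.0), 3))  # 0.599
```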

Frey and Detterman's (2004) research provides valuable insights into the relationship between the SAT and general cognitive ability, offering empirical support for the SAT's validity as a measure of g. This information has important implications for the use of the SAT in educational and psychological settings. Furthermore, the conversion equations presented by the authors may facilitate researchers in estimating premorbid IQ or conducting individual differences research with college students, broadening the potential applications of SAT scores.