
Friday, October 11, 2024

Group-Theoretical Symmetries in Item Response Theory (IRT)

Item Response Theory (IRT) models the interaction between latent traits and responses in psychological assessments. My latest article introduces a new approach that incorporates group-theoretic symmetry constraints into IRT parameter estimation. By formalizing the algebraic structure of a test through group actions on item parameters such as difficulty and discrimination, the method captures regularities among test items that traditional estimation techniques often overlook.
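
For orientation, here is a minimal sketch of the standard 2PL item response function on which the approach builds (Python with NumPy; the parameter values are illustrative only, not taken from the article):

    import numpy as np

    def p_correct_2pl(theta, a, b):
        # 2PL item response function: probability of a correct response
        # given ability theta, item discrimination a, and item difficulty b.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    # Example: an average-ability respondent (theta = 0) on a moderately
    # hard (b = 0.5), moderately discriminating (a = 1.2) item.
    print(p_correct_2pl(theta=0.0, a=1.2, b=0.5))  # about 0.354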

Specifically, group actions on item parameters, such as difficulty, are represented through permutation matrices. Collapsing symmetrically related items into equivalence classes reduces the dimensionality of the parameter space, yielding more efficient and theoretically consistent parameter estimates. The model also introduces dynamic, data-driven bounds for discrimination parameters, so that estimates track the variability actually present in the data while remaining theoretically sound.
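
As a rough, hypothetical illustration of this idea (not the article's actual implementation), the sketch below ties together the difficulties of items related by a simple pair-swap symmetry and applies one possible quantile-based bound to discriminations; the symmetry and all values are invented for the example:

    import numpy as np

    # Hypothetical symmetry: item pairs (0,1), (2,3), (4,5) are interchangeable.
    # The group action is represented by a permutation matrix P.
    P = np.eye(6)[[1, 0, 3, 2, 5, 4]]

    b_raw = np.array([0.52, 0.48, -1.03, -0.97, 1.21, 1.19])  # unconstrained difficulties

    # Collapse each equivalence class (orbit) to its mean: 6 free parameters -> 3.
    orbits = [[0, 1], [2, 3], [4, 5]]
    b_sym = b_raw.copy()
    for orbit in orbits:
        b_sym[orbit] = b_raw[orbit].mean()

    # The symmetrized difficulties are invariant under the group action.
    assert np.allclose(P @ b_sym, b_sym)

    # One simple form of data-driven bounds on discriminations:
    # clip to the empirical 5th/95th percentiles.
    a_raw = np.array([0.6, 1.4, 2.9, 1.1, 0.9, 1.7])
    a_bounded = np.clip(a_raw, *np.quantile(a_raw, [0.05, 0.95]))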

While the method is developed for the two-parameter logistic (2PL) model, it can be adapted to more complex models, such as the three- and four-parameter logistic models (3PL and 4PL). Future work aims to validate its empirical effectiveness and scalability across diverse psychometric scenarios.

Read the article here: https://www.cogn-iq.org/doi/10.2024/34d128d888faa98f72aa

Thursday, September 19, 2024

Theoretical Framework for Bayesian Hierarchical 2PLM with ADVI

My latest article from Cogn-IQ.org examines the Two-Parameter Logistic (2PL) Item Response Theory (IRT) model through a Bayesian hierarchical lens. The approach places hierarchical priors on both respondent abilities and item parameters, allowing for more nuanced modeling of latent traits. The model also adopts Automatic Differentiation Variational Inference (ADVI), a scalable alternative to traditional methods such as Markov Chain Monte Carlo (MCMC) when datasets are large.
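
To make the structure concrete, here is a minimal sketch of a hierarchical 2PL model. I use PyMC as one convenient tool; the article itself is framework-agnostic, and the priors, shapes, and toy data below are illustrative assumptions rather than the paper's specification:

    import numpy as np
    import pymc as pm

    # Toy data: person_idx[n] and item_idx[n] index the n-th binary response y[n].
    rng = np.random.default_rng(0)
    n_persons, n_items = 200, 20
    person_idx = np.repeat(np.arange(n_persons), n_items)
    item_idx = np.tile(np.arange(n_items), n_persons)
    y = rng.integers(0, 2, size=n_persons * n_items)  # placeholder responses

    with pm.Model() as hier_2pl:
        # Hierarchical priors on item difficulties: items share a common
        # population mean and spread, which induces partial pooling.
        mu_b = pm.Normal("mu_b", 0.0, 1.0)
        sigma_b = pm.HalfNormal("sigma_b", 1.0)
        b = pm.Normal("b", mu_b, sigma_b, shape=n_items)

        # Log-normal discriminations keep a > 0.
        log_a = pm.Normal("log_a", 0.0, 0.5, shape=n_items)
        a = pm.Deterministic("a", pm.math.exp(log_a))

        # Standard-normal abilities identify the latent scale.
        theta = pm.Normal("theta", 0.0, 1.0, shape=n_persons)

        # 2PL likelihood.
        p = pm.math.invlogit(a[item_idx] * (theta[person_idx] - b[item_idx]))
        pm.Bernoulli("y", p=p, observed=y)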

Key improvements of the Bayesian hierarchical framework include better partial pooling of information, which makes it especially valuable when data are sparse. The article works through the mathematical structure in detail, outlining the priors and likelihood and showing how variational inference yields an efficient posterior approximation.
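
Continuing the sketch above, switching from MCMC to ADVI in PyMC is essentially a one-line change (iteration and draw counts are illustrative):

    with hier_2pl:
        # ADVI maximizes the ELBO by stochastic gradient ascent, then we
        # draw posterior samples from the fitted variational approximation.
        approx = pm.fit(n=30_000, method="advi")
        idata = approx.sample(1_000)
        # The MCMC alternative would be: idata = pm.sample()

This trades the asymptotic exactness of MCMC for speed and scalability, which is the trade-off the article highlights.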

While the paper focuses on the theoretical aspects, future research could explore practical applications in fields like psychometrics, educational assessment, and machine learning. This method holds promise for more accurate latent trait estimation across various disciplines.

For more details, refer to the original article at https://www.cogn-iq.org/doi/09.2024/37693a22159f5fa4078d