In this paper, we use log-normal Item-Response Theory models for response time data to study how personality traits predict response speed trajectories in low-stakes reasoning tasks. We find that, controlling for item effects, some traits are associated with increases in speed over the course of the test, while others are associated with decreases.
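For context, the log-normal response time model underlying this kind of analysis (in van der Linden's formulation; the notation below is a sketch, not necessarily the paper's exact specification) can be written as:

```latex
\log T_{pi} = \beta_i - \tau_p + \varepsilon_{pi},
\qquad \varepsilon_{pi} \sim \mathcal{N}(0, \sigma_i^2)
```

where \(T_{pi}\) is person \(p\)'s response time on item \(i\), \(\beta_i\) is the item's time intensity, and \(\tau_p\) the person's speed; speed trajectories can then be studied by allowing the speed parameter to change with item position.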
A package for the psychometric analysis and scoring of judgment data using polytomous Item-Response Theory (IRT) models.
In this paper, we introduce new psychometric models for count/fluency tasks (tasks in which individuals must provide as many instances as possible). Notably, we extend the Rasch Poisson Counts Model (RPCM) to account for variable discrimination (2-Parameter Poisson Counts Model) and for local dependencies/nuisance factors (Bifactor 2-Parameter Poisson Counts Model).
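As a sketch of the extension (our notation here, not necessarily the paper's): the RPCM models the count \(y_{pi}\) as Poisson with a log-linear rate, and the 2-Parameter version adds an item discrimination on the ability:

```latex
y_{pi} \sim \mathrm{Poisson}(\lambda_{pi}), \qquad
\log \lambda_{pi} = b_i + \theta_p \;\;\text{(RPCM)}, \qquad
\log \lambda_{pi} = b_i + a_i \theta_p \;\;\text{(2PPCM)}
```

The bifactor version additionally loads each item on a specific factor alongside the general ability, which absorbs local dependencies among clustered items.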
In this paper, I reanalyze a dataset for a special issue of the Journal of Intelligence. The analysis uses Mokken Scale Analysis (MSA), a non-parametric approach to Item-Response Theory that examines the relations between item responses and latent attributes.
In this paper, we propose to generalize the use of Nested Logit Models and implement them for online personnel selection procedures that use multiple-choice reasoning tests (e.g., matrix-type tests). We find that using such models yields gains in reliability, especially at low ability levels.
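A nested logit model for multiple-choice items (in the spirit of Suh and Bolt's formulation; a hedged sketch, not the exact specification used here) factors each response into a correct/incorrect stage and a distractor-choice stage conditional on an incorrect response:

```latex
P(X_{pi} = 1 \mid \theta_p) = \frac{\exp\{a_i(\theta_p - b_i)\}}{1 + \exp\{a_i(\theta_p - b_i)\}},
\qquad
P(D_{pi} = k \mid X_{pi} = 0, \theta_p) = \frac{\exp(\zeta_{ik} + \lambda_{ik}\theta_p)}{\sum_{k'} \exp(\zeta_{ik'} + \lambda_{ik'}\theta_p)}
```

Because incorrect responses still carry information through the distractor stage, ability estimates can gain precision where examinees answer few items correctly, i.e., at low ability levels.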
In this paper, we propose the use of the last series of the Standard Progressive Matrices (SPM-LS) as a viable snapshot measure of g. We provide an extensive Item-Response Theory (IRT) analysis of the instrument, based on both the binary (pass/fail) and nominal (response categories) data.