Posted: Sep 22, 2020
The selection of appropriate statistical models is a fundamental challenge across scientific disciplines, with direct consequences for inference, prediction, and theory development. Among the approaches available for model comparison, likelihood ratio tests (LRTs) have long held a prominent position, particularly for comparing nested models. Although LRTs are traditionally used to test specific parameter constraints, they extend naturally to model selection settings in which researchers must choose between competing theoretical specifications. Yet the performance of LRTs as model selection tools remains poorly characterized, especially relative to information-theoretic criteria such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC).

Contemporary statistical practice has shifted gradually toward information criteria for model selection, largely because of their theoretical foundations and computational convenience. This shift has occurred despite the well-established theoretical properties of LRTs and their deep connections to fundamental statistical principles, and the relative performance of the two approaches remains contested, with conflicting recommendations emerging from different methodological traditions. This research addresses that gap by systematically evaluating the selection accuracy of LRTs across diverse data conditions and modeling scenarios.
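To make the comparison concrete, the sketch below (not the paper's simulation code) illustrates how a decision between two nested linear models can be made by an LRT and by AIC/BIC. It assumes numpy, scipy, and statsmodels are available; the data-generating setup, variable names (y, X_reduced, X_full), and the 0.05 rejection threshold are illustrative choices, not values taken from the study.

```python
import numpy as np
from scipy.stats import chi2
import statsmodels.api as sm

# Simulated data: the true model uses only x1, so the "reduced" model is correct.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.5 * x1 + rng.normal(scale=1.0, size=n)

X_reduced = sm.add_constant(np.column_stack([x1]))      # nested (smaller) model
X_full = sm.add_constant(np.column_stack([x1, x2]))     # full (larger) model

fit_reduced = sm.OLS(y, X_reduced).fit()
fit_full = sm.OLS(y, X_full).fit()

# Likelihood ratio test: 2 * (logL_full - logL_reduced) ~ chi^2 with df equal
# to the number of extra parameters in the full model (asymptotically).
lr_stat = 2.0 * (fit_full.llf - fit_reduced.llf)
df_diff = X_full.shape[1] - X_reduced.shape[1]
p_value = chi2.sf(lr_stat, df_diff)

# LRT-based selection: retain the larger model only if the test rejects.
lrt_choice = "full" if p_value < 0.05 else "reduced"

# Information-criterion selection: the model with the smaller AIC / BIC wins.
aic_choice = "full" if fit_full.aic < fit_reduced.aic else "reduced"
bic_choice = "full" if fit_full.bic < fit_reduced.bic else "reduced"

print(f"LRT: stat={lr_stat:.2f}, p={p_value:.3f} -> {lrt_choice}")
print(f"AIC: full={fit_full.aic:.1f}, reduced={fit_reduced.aic:.1f} -> {aic_choice}")
print(f"BIC: full={fit_full.bic:.1f}, reduced={fit_reduced.bic:.1f} -> {bic_choice}")
```

Selection accuracy in this sense is simply how often each rule picks the data-generating model when the exercise is repeated over many simulated datasets and conditions.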