Posted: Nov 21, 1998
The foundations of statistical inference rest on several key properties that estimators should ideally possess. Among these, consistency and asymptotic normality are two of the most fundamental and widely studied. Consistency ensures that, as the sample size increases, an estimator converges in probability to the true parameter value; asymptotic normality guarantees that the estimator's sampling distribution, suitably centered and scaled, approaches a normal distribution as the sample size grows (both properties are formalized below). Together they form the bedrock of hypothesis testing, confidence interval construction, and many other statistical procedures.

Despite their central importance in statistical theory, the precise relationship between consistency and asymptotic normality remains surprisingly underexplored. Traditional statistical education often presents these properties as complementary aspects of good estimator behavior, with the implicit assumption that they naturally co-occur in well-behaved estimation problems. A closer examination, however, reveals a more nuanced relationship that has not been systematically characterized in the literature.

This paper addresses that gap by developing a comprehensive theoretical framework characterizing the conditions under which consistency implies asymptotic normality and vice versa. We challenge the conventional wisdom that these properties are inherently linked, demonstrating through rigorous mathematical analysis and extensive simulations that their relationship is far more intricate than previously recognized. Our work reveals that in many modern statistical contexts, particularly those involving high-dimensional parameters, non-regular estimation problems, or complex dependence structures, the connection between consistency and asymptotic normality breaks down in unexpected ways.
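To fix ideas, the two properties can be stated formally. This is a minimal sketch in standard notation, with $\hat{\theta}_n$ (the estimator from $n$ observations), $\theta_0$ (the true parameter value), and $\sigma^2$ (the asymptotic variance) introduced here for illustration rather than taken from the paper:

\[
\hat{\theta}_n \xrightarrow{p} \theta_0,
\quad\text{i.e.}\quad
\lim_{n \to \infty} P\bigl(\lvert \hat{\theta}_n - \theta_0 \rvert > \varepsilon\bigr) = 0
\ \text{ for every } \varepsilon > 0
\qquad \text{(consistency)}
\]

\[
\sqrt{n}\,\bigl(\hat{\theta}_n - \theta_0\bigr) \xrightarrow{d} \mathcal{N}\bigl(0, \sigma^2\bigr)
\qquad \text{(asymptotic normality at the regular } \sqrt{n} \text{ rate)}
\]

As stated, the second property entails the first (with this centering, the $\sqrt{n}$-scaled error is bounded in probability, so the error itself vanishes), but the converse fails: a consistent estimator need not admit a normal limit at any rate, as the sketch below illustrates.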
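As a concrete instance of the non-regular behavior the paper alludes to (a sketch in Python, not taken from the paper): the maximum-likelihood estimator of theta in a Uniform(0, theta) model is the sample maximum, which is consistent but whose error is of order 1/n, with n*(theta - max) converging to an exponential rather than a normal limit. The parameter values below are arbitrary.

# A consistent estimator with a non-normal limit: the MLE of theta
# in a Uniform(0, theta) model is the sample maximum. Its error is
# O(1/n), and n*(theta - max) converges to an Exponential law with
# mean theta rather than to a Gaussian.
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0      # true parameter (arbitrary choice for the sketch)
n = 5_000        # sample size per replication
reps = 2_000     # Monte Carlo replications

# Each row is one sample; the row maximum is the MLE for that sample.
estimates = rng.uniform(0.0, theta, size=(reps, n)).max(axis=1)

# Consistency: the estimates pile up just below theta.
print("mean estimate:", estimates.mean())                  # ~ theta
print("largest error:", np.abs(theta - estimates).max())   # small

# The usual sqrt(n) scaling degenerates: sqrt(n)*(theta - max) -> 0
# in probability, so no sqrt(n)-rate normal limit exists here.
print("mean sqrt(n)-scaled error:", (np.sqrt(n) * (theta - estimates)).mean())

# The correct n scaling yields a skewed, exponential-shaped limit.
scaled = n * (theta - estimates)
skew = ((scaled - scaled.mean()) ** 3).mean() / scaled.std() ** 3
print("mean of n-scaled error:", scaled.mean())  # ~ theta
print("skewness of n-scaled error:", skew)       # ~ 2; a Gaussian gives 0

This is exactly the kind of non-regular estimation problem named in the passage above: consistency holds, yet no centering and scaling produces a normal limiting distribution.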