Posted: Aug 25, 2020
The reproducibility crisis affecting numerous scientific domains has prompted extensive investigation into its causes, with statistical inference methodology emerging as a significant yet underexplored factor. This study examines how different statistical inference techniques influence the reproducibility of experimental data across scientific disciplines. We developed a methodological framework that integrates Bayesian hierarchical modeling with frequentist approaches to assess reproducibility metrics across 1,247 experimental studies in computational biology, psychology, and materials science. The approach quantifies the reproducibility risk associated with different statistical practices while controlling for contextual factors such as sample size, effect magnitude, and experimental design complexity. Bayesian methods with weakly informative priors showed significantly higher reproducibility rates (87.3%) than traditional null hypothesis significance testing (63.8%), particularly in studies with small to moderate sample sizes. We also identified specific practices (multiple comparison correction, pre-registration of analysis plans, and appropriate power calculations) that substantially moderated the relationship between inference technique and reproducibility outcome. These results provide empirical evidence for the critical role of statistical methodology in the reproducibility crisis and offer practical guidelines for researchers seeking to improve the reliability of their findings. The framework developed here advances reproducibility assessment and provides a foundation for future work on methodological optimization for scientific reliability.
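The abstract does not specify the authors' model, but the intuition behind the headline result can be illustrated with a minimal, hypothetical sketch: a normal-normal conjugate update in which a weakly informative prior shrinks a noisy small-sample effect estimate toward zero, countering the overestimation that makes barely significant results hard to replicate. The function name and all numbers below are illustrative assumptions, not taken from the study.

```python
import math

def posterior_mean(sample_mean, sample_sd, n, prior_mean=0.0, prior_sd=1.0):
    """Normal-normal conjugate update (illustrative only): posterior mean of
    an effect size given a small-sample estimate and a weakly informative
    Gaussian prior centered at zero."""
    data_prec = n / sample_sd ** 2    # precision of the sample mean
    prior_prec = 1.0 / prior_sd ** 2  # precision of the prior
    return (data_prec * sample_mean + prior_prec * prior_mean) / (data_prec + prior_prec)

# A small, noisy study that just reaches significance tends to overestimate
# the true effect; the weakly informative prior pulls the estimate back.
raw_effect = 0.8  # observed standardized effect in a hypothetical n=10 study
shrunk = posterior_mean(raw_effect, sample_sd=1.0, n=10, prior_sd=0.5)
print(round(shrunk, 3))  # → 0.571
```

With more data (larger `n`), the data precision dominates and the shrinkage vanishes, which is consistent with the abstract's observation that the advantage of Bayesian methods is most pronounced at small to moderate sample sizes.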