Posted: Oct 28, 2025
This paper presents a comprehensive comparative analysis of two prominent regularization techniques for neural networks: dropout and weight decay. With the increasing complexity of deep learning models and the persistent challenge of overfitting, understanding the relative effectiveness of different regularization methods has become crucial. We conducted extensive experiments on three benchmark datasets (MNIST, CIFAR-10, and Fashion-MNIST) using feedforward neural networks with varying architectures, systematically testing dropout rates from 0.1 to 0.7 and weight decay coefficients from 1e-6 to 1e-2. The results demonstrate that while both methods effectively reduce overfitting, their performance varies significantly with network architecture and dataset complexity. Dropout consistently outperformed weight decay on deeper networks with more parameters, achieving up to 15
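
To illustrate the two techniques being compared, the sketch below configures a small feedforward classifier once with dropout layers and once with L2 weight decay applied through the optimizer. This is a minimal sketch under assumptions, not the paper's actual code: the PyTorch framework, layer sizes, learning rate, and optimizer choice are illustrative, while the dropout rate (0.5) and weight decay coefficient (1e-4) simply fall inside the ranges stated in the abstract.

    import torch
    import torch.nn as nn

    # Illustrative feedforward network with a configurable dropout rate.
    # Layer sizes are assumptions, not taken from the paper.
    class FeedforwardNet(nn.Module):
        def __init__(self, in_dim=784, hidden=512, out_dim=10, dropout_rate=0.5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Flatten(),
                nn.Linear(in_dim, hidden),
                nn.ReLU(),
                nn.Dropout(dropout_rate),   # dropout regularization
                nn.Linear(hidden, hidden),
                nn.ReLU(),
                nn.Dropout(dropout_rate),
                nn.Linear(hidden, out_dim),
            )

        def forward(self, x):
            return self.net(x)

    # Dropout variant: regularization lives inside the model, no weight decay.
    dropout_model = FeedforwardNet(dropout_rate=0.5)
    opt_dropout = torch.optim.SGD(dropout_model.parameters(), lr=0.01)

    # Weight decay variant: no dropout layers, L2-style penalty applied by the optimizer.
    wd_model = FeedforwardNet(dropout_rate=0.0)
    opt_wd = torch.optim.SGD(wd_model.parameters(), lr=0.01, weight_decay=1e-4)

The operational difference the comparison rests on: dropout randomly zeroes activations during training only (and is disabled in evaluation mode), whereas weight decay shrinks all weights toward zero at every optimizer step via an L2-style penalty.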