Neural Network Regularization Techniques: A Comparative Analysis of Dropout and Weight Decay Methods

Posted: Oct 16, 2025

Abstract

This paper presents a comprehensive comparative analysis of two prominent regularization techniques in neural networks: dropout and weight decay. With the increasing complexity of deep learning models and the persistent challenge of overfitting, understanding the relative effectiveness of different regularization methods has become crucial. We conducted extensive experiments on three benchmark datasets—MNIST, CIFAR-10, and Fashion-MNIST—using feedforward neural networks with varying architectures. Our methodology involved systematic testing of dropout rates ranging from 0.1 to 0.7 and weight decay parameters from 1e-6 to 1e-2. The results demonstrate that while both methods effectively reduce overfitting, their performance varies significantly across different network architectures and dataset complexities. Dropout consistently outperformed weight decay on deeper networks with more parameters, achieving up to 15% better generalization performance on complex datasets. However, weight decay showed superior performance on shallower networks and simpler tasks, with faster convergence times. The mathematical formulation for the combined regularization loss function is derived, and empirical evidence supports the hypothesis that hybrid approaches combining both methods can yield optimal results in specific scenarios. This research provides practical guidelines for selecting appropriate regularization strategies based on network architecture and task complexity.
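For reference, a standard way to write the combined objective the abstract refers to (a textbook formulation of dropout together with L2 weight decay, not necessarily the exact derivation given in the paper) is

$$\mathcal{L}_{\text{total}}(\theta) = \mathbb{E}_{\mathbf{m} \sim \mathrm{Bernoulli}(1-p)}\!\left[\mathcal{L}_{\text{data}}\!\left(f_{\theta}^{\mathbf{m}}(x),\, y\right)\right] + \frac{\lambda}{2}\,\lVert\theta\rVert_2^2$$

where p is the dropout rate, m denotes the per-unit dropout masks applied to the activations of the network f_theta, and lambda is the weight decay coefficient.

The sketch below illustrates how the two regularizers compared in the abstract are typically combined in practice. It is a minimal example assuming PyTorch; the architecture, hidden sizes, and hyperparameter values are illustrative choices within the ranges reported above, not the configurations used in the paper's experiments.

import torch
import torch.nn as nn

# Feedforward network with dropout layers; p = 0.5 sits inside the
# tested range of 0.1 to 0.7 (an illustrative choice, not the paper's setting).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(256, 10),
)

# Weight decay is applied as the optimizer's L2 penalty; 1e-4 sits inside
# the tested range of 1e-6 to 1e-2 (again an illustrative choice).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

Training with this setup minimizes the data loss while the optimizer's weight_decay term contributes the L2 penalty and the Dropout layers randomize activations at every forward pass during training, which is the hybrid configuration the abstract describes.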
