Neural Network Regularization Techniques: A Comparative Analysis of Dropout and Weight Decay Methods

Posted: Oct 28, 2025

Abstract

This paper presents a comprehensive comparative analysis of two prominent regularization techniques in neural networks: dropout and weight decay. With the increasing complexity of deep learning models, overfitting remains a significant challenge in machine learning applications. Our research systematically evaluates the effectiveness of these regularization methods across multiple benchmark datasets including MNIST, CIFAR-10, and Fashion-MNIST. We employ feedforward neural networks with varying architectures to assess regularization performance under different conditions. The experimental results demonstrate that while both techniques effectively mitigate overfitting, their performance varies significantly based on network architecture, dataset complexity, and hyperparameter settings. Dropout shows superior performance in deeper networks with high-dimensional data, whereas weight decay provides more consistent results across different architectures. Our findings provide practical guidelines for selecting appropriate regularization strategies based on specific application requirements and computational constraints.
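For readers unfamiliar with the two techniques compared here, the sketch below shows how dropout and weight decay are typically combined in a feedforward classifier. This is a minimal illustration, not the experimental code used in the study: it assumes PyTorch, and the layer sizes, dropout rate, and weight-decay coefficient are placeholder values rather than settings reported in the paper.

```python
# Minimal sketch (illustrative only): a feedforward network with dropout
# layers, trained with weight decay (L2 penalty) applied via the optimizer.
import torch
import torch.nn as nn

class FeedforwardNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, out_dim=10, drop_p=0.5):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p=drop_p),   # dropout: randomly zeroes activations during training
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Dropout(p=drop_p),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.layers(x)

model = FeedforwardNet()
# Weight decay adds an L2 penalty on the weights through the optimizer update.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
```

In this setup the two regularizers act at different points: dropout perturbs activations during the forward pass at training time (and is disabled at evaluation via model.eval()), while weight decay shrinks the weights at every optimizer step regardless of architecture, which is consistent with the paper's observation that its effect is more uniform across network depths.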
