Data normalization is a fundamental preprocessing step in machine learning pipelines, yet its systematic evaluation across diverse statistical models remains surprisingly limited in the literature. Conventional wisdom holds that normalization improves model performance by ensuring that features contribute equally to learning algorithms, preventing features with larger scales from dominating. However, this assumption overlooks the complex interactions between normalization techniques, model architectures, and underlying data distributions. This research addresses that gap by conducting a comprehensive empirical investigation of how different normalization approaches affect machine learning performance across statistical domains.
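To make the scale-dominance point concrete, here is a minimal sketch, using plain NumPy and invented illustrative values, of two common normalization techniques: z-score standardization and min-max rescaling. Without either, a distance- or gradient-based learner applied to these features would be driven almost entirely by the large-scale feature.

```python
import numpy as np

# Hypothetical data: two features on very different scales,
# e.g. income (~1e4-1e5) and age (~20-80).
X = np.array([[30000.0, 25.0],
              [58000.0, 42.0],
              [120000.0, 61.0],
              [45000.0, 33.0]])

# Z-score standardization: each feature gets zero mean and unit variance.
z_scored = (X - X.mean(axis=0)) / X.std(axis=0)

# Min-max rescaling: each feature is mapped onto the [0, 1] interval.
min_max = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

print("z-score:\n", z_scored)
print("min-max:\n", min_max)
```

After either transformation the two columns occupy comparable numeric ranges, so neither feature dominates by scale alone; which transformation is preferable depends on the model and data distribution, which is precisely the interaction this study investigates.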