Posted: Dec 28, 2018
Dimensionality reduction represents one of the fundamental preprocessing techniques in machine learning and data science, with applications spanning noise reduction, computational efficiency improvement, and visualization enhancement. The prevailing literature has extensively documented the performance benefits of dimensionality reduction, particularly in high-dimensional settings where the curse of dimensionality severely impacts model generalization. However, the relationship between dimensionality reduction and model interpretability remains significantly understudied, despite interpretability emerging as a critical requirement in many real-world applications, especially in regulated domains such as healthcare, finance, and criminal justice.

This research addresses this gap by systematically investigating how different dimensionality reduction techniques influence both model performance and interpretability, challenging the conventional assumption that these two objectives necessarily exist in opposition.

Traditional approaches to interpretability often treat it as a post-hoc concern, employing techniques such as LIME or SHAP to explain already-trained models. This research proposes an alternative paradigm in which interpretability is considered throughout the modeling pipeline, beginning with the initial feature transformation through dimensionality reduction. We hypothesize that certain dimensionality reduction techniques inherently produce more interpretable representations, thereby reducing the need for complex post-hoc explanation methods. This approach represents a fundamental shift in how we conceptualize the relationship between data preprocessing and model transparency.

Our research addresses three primary questions that have received limited attention in the existing literature. First, how do different dimensionality reduction techniques quantitatively impact various dimensions of interpretability?
Second, what is the nature of the relationship between performance gains and interpretability improvements achieved through dimensionality reduction? Third, are there systematic patterns in how different application domains respond to the interpretability-enhancing effects of dimensionality reduction? By answering these questions, this work provides both theoretical insights and practical guidance.
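To make the hypothesis concrete, one way a dimensionality reduction technique can be "inherently more interpretable" is for each derived component to load mainly on a small number of original features. The sketch below is purely illustrative and not taken from the paper: it computes PCA loadings via SVD with NumPy and scores each component with a hypothetical proxy metric, the share of squared loading mass carried by the component's single largest feature (1.0 means the component is just one original feature; 1/d means loadings are spread evenly).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 6 features. Features 0-2 share one latent factor and
# features 3-5 share another, so two components should capture the structure.
latent = rng.normal(size=(200, 2))
X = np.hstack([
    latent[:, [0]] * rng.uniform(0.8, 1.2, size=3) + 0.1 * rng.normal(size=(200, 3)),
    latent[:, [1]] * rng.uniform(0.8, 1.2, size=3) + 0.1 * rng.normal(size=(200, 3)),
])

def pca_loadings(X, k):
    """Top-k principal-axis loadings (rows = components) via SVD of centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]

def loading_concentration(loadings):
    """Illustrative interpretability proxy (an assumption, not the paper's metric):
    for each component, the fraction of its squared loading mass on its largest
    single feature. Higher values suggest a component a human can name."""
    sq = loadings ** 2
    return sq.max(axis=1) / sq.sum(axis=1)

L = pca_loadings(X, k=2)
print(loading_concentration(L))
```

On this toy data each component spreads its mass over one three-feature group, so the proxy lands near 1/3 rather than near 1; a sparsity-inducing variant such as sparse PCA would be expected to score higher on the same proxy, which is the kind of quantitative comparison the first research question calls for.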