Posted: Nov 18, 2021
The rapid integration of artificial intelligence into autism spectrum disorder diagnosis presents significant ethical challenges concerning algorithmic bias and fairness across diverse demographic groups. This research presents a comprehensive framework for bias detection and fairness evaluation in AI-based autism diagnostic models, addressing critical concerns about equitable access and representation in automated assessment systems. Our approach integrates multiple fairness metrics, bias detection algorithms, and mitigation strategies specifically designed for the complex, multidimensional nature of autism diagnosis. We developed novel statistical methods for identifying intersectional biases that manifest across race, gender, socioeconomic status, and geographic location, employing advanced techniques including subgroup analysis, counterfactual fairness assessment, and bias propagation tracking. The framework was evaluated on a diverse dataset of 8,500 children from 12 clinical sites, encompassing varied demographic backgrounds and clinical presentations. Results revealed significant performance disparities across subgroups, with model accuracy varying by up to 18.7 percentage points between demographic groups. Our bias detection system identified feature importance skewness and representation imbalances as the primary drivers of algorithmic bias, while the fairness-aware training approach reduced performance disparities by 67.3% without compromising overall accuracy. The research demonstrates that systematic bias auditing can significantly improve the equity of AI diagnostic tools while maintaining clinical utility. This work establishes essential methodological foundations for ethical AI development in healthcare and provides practical tools for ensuring that autism diagnostic models serve all populations equitably, addressing both technical and societal imperatives for fair medical artificial intelligence.
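The subgroup analysis the abstract describes — comparing model accuracy across demographic groups and reporting the largest gap in percentage points — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the toy labels, and the two-group cohort are all invented for the example.

```python
import numpy as np

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Per-group accuracy and the largest pairwise gap, in percentage points.

    This is the simplest form of the disparity reported in the abstract
    (e.g. "accuracy varying by up to 18.7 percentage points").
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    acc = {}
    for g in np.unique(groups):
        mask = groups == g
        # Accuracy restricted to members of group g
        acc[g] = float((y_true[mask] == y_pred[mask]).mean())
    gap_pp = (max(acc.values()) - min(acc.values())) * 100.0
    return acc, gap_pp

# Toy two-group cohort (made-up data for illustration only)
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
acc, gap = subgroup_accuracy_gap(y_true, y_pred, groups)
# acc -> {"A": 0.75, "B": 0.5}; gap -> 25.0 percentage points
```

In practice the same loop would run over intersectional strata (e.g. race x gender x site) rather than a single attribute, with confidence intervals on each per-group accuracy, since small subgroups make point estimates unstable.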
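The abstract does not specify which fairness-aware training method was used, so as an assumption this sketch shows one common baseline: Kamiran–Calders-style reweighing, which assigns each training sample a weight w(g, y) = P(g)P(y) / P(g, y) so that group membership and label become statistically independent in the weighted data. The weights can then be passed to any learner that accepts per-sample weights.

```python
import numpy as np

def reweighing_weights(groups, labels):
    """Reweighing baseline (an assumption; not the paper's stated method).

    w(g, y) = P(g) * P(y) / P(g, y), so over- and under-represented
    (group, label) cells are down- and up-weighted respectively.
    """
    groups, labels = np.asarray(groups), np.asarray(labels)
    w = np.ones(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            cell = (groups == g) & (labels == y)
            joint = cell.mean()  # empirical P(g, y)
            if joint > 0:
                expected = (groups == g).mean() * (labels == y).mean()
                w[cell] = expected / joint
    return w

# Toy cohort: group A is split between labels, group B is all-positive
groups = np.array(["A", "A", "B", "B"])
labels = np.array([1, 0, 1, 1])
w = reweighing_weights(groups, labels)
# w -> [1.5, 0.5, 0.75, 0.75]
```

The resulting array could be supplied, for example, as `sample_weight` to a scikit-learn classifier's `fit`; the appeal of reweighing as a baseline is that it mitigates representation imbalance (one of the bias drivers the abstract identifies) without modifying the model architecture or the loss.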