Posted: Oct 28, 2025
This paper presents a novel neural architecture search (NAS) framework that optimizes convolutional neural networks for both accuracy and computational efficiency. Traditional NAS methods often prioritize accuracy while neglecting computational constraints, leading to models that are impractical for real-world deployment. Our approach employs a multi-objective optimization strategy that simultaneously maximizes classification accuracy and minimizes computational complexity. We introduce a hierarchical search space that enables efficient exploration of architectural variations while maintaining structural coherence. Experimental results on CIFAR-10 and ImageNet datasets demonstrate that our method discovers architectures that achieve state-of-the-art accuracy with significantly reduced computational requirements compared to manually designed networks and existing NAS approaches. The proposed framework reduces floating-point operations by up to 45%.
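The multi-objective selection the abstract describes — maximizing accuracy while minimizing computational cost — can be illustrated with a minimal Pareto-dominance sketch. This is not the paper's implementation; the `Candidate` class, candidate names, and accuracy/FLOPs figures below are hypothetical, chosen only to show how non-dominated architectures would be kept during a search.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    """A candidate architecture scored on the two objectives (hypothetical)."""
    name: str
    accuracy: float  # validation accuracy in [0, 1] (maximize)
    flops: float     # floating-point operations in millions (minimize)

def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if it is no worse on both objectives and strictly better on one."""
    return (a.accuracy >= b.accuracy and a.flops <= b.flops
            and (a.accuracy > b.accuracy or a.flops < b.flops))

def pareto_front(population: List[Candidate]) -> List[Candidate]:
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in population
            if not any(dominates(other, c) for other in population if other is not c)]

# Hypothetical search population: accuracy vs. FLOPs trade-offs.
population = [
    Candidate("A", accuracy=0.95, flops=500.0),
    Candidate("B", accuracy=0.93, flops=300.0),
    Candidate("C", accuracy=0.92, flops=400.0),  # dominated by B
    Candidate("D", accuracy=0.95, flops=450.0),  # dominates A
]
front = pareto_front(population)
print(sorted(c.name for c in front))  # → ['B', 'D']
```

In a full NAS loop, only the Pareto-optimal candidates would seed the next round of architectural mutations; a single scalarized objective (e.g., accuracy minus a FLOPs penalty) is a common alternative when one trade-off point is desired.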