Posted: Oct 28, 2025
This paper presents a novel neural architecture search (NAS) framework that optimizes convolutional neural networks for both accuracy and computational efficiency. Traditional NAS methods often prioritize accuracy at the expense of computational cost, making them impractical for resource-constrained environments. Our approach employs a multi-objective optimization strategy that maximizes classification accuracy while minimizing computational cost, measured in floating-point operations (FLOPs). We introduce a hierarchical search space that enables efficient exploration of architectural variations and implement a modified evolutionary algorithm with adaptive mutation rates. Experimental results on the CIFAR-10 and ImageNet datasets demonstrate that our method discovers architectures whose accuracy is competitive with state-of-the-art models while requiring 35-60% fewer FLOPs.
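The abstract does not include code, but a minimal sketch may help illustrate the kind of multi-objective evolutionary loop it describes. Everything below is an illustrative assumption rather than the paper's implementation: the toy search space (`LAYER_CHOICES`), the proxy `estimate_flops` and `evaluate_accuracy` functions, and the specific adaptive-mutation schedule are all hypothetical stand-ins.

```python
import random

# Hypothetical toy search space: an architecture is a fixed-depth list of
# layer choices. Real NAS search spaces are far richer; this is a stand-in.
LAYER_CHOICES = ["conv3x3", "conv5x5", "depthwise3x3", "skip"]
FLOPS_PER_LAYER = {"conv3x3": 9.0, "conv5x5": 25.0, "depthwise3x3": 1.5, "skip": 0.0}


def estimate_flops(arch):
    # Proxy cost: sum of per-layer FLOP weights (stand-in for a real FLOP counter).
    return sum(FLOPS_PER_LAYER[op] for op in arch)


def evaluate_accuracy(arch):
    # Stand-in for training and validating the candidate; deterministic per
    # architecture so repeated evaluations of the same candidate agree.
    rng = random.Random(hash(tuple(arch)))
    capacity = sum(1 for op in arch if op != "skip")
    return min(0.99, 0.5 + 0.05 * capacity + rng.uniform(-0.02, 0.02))


def dominates(a, b):
    # Pareto dominance with objectives (maximize accuracy, minimize FLOPs):
    # a dominates b if it is no worse on both and strictly better on one.
    (acc_a, fl_a), (acc_b, fl_b) = a, b
    return acc_a >= acc_b and fl_a <= fl_b and (acc_a > acc_b or fl_a < fl_b)


def mutate(arch, rate):
    # Resample each layer choice independently with probability `rate`.
    return [random.choice(LAYER_CHOICES) if random.random() < rate else op
            for op in arch]


def evolve(pop_size=20, depth=8, generations=30, base_rate=0.3):
    population = [[random.choice(LAYER_CHOICES) for _ in range(depth)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(arch, (evaluate_accuracy(arch), estimate_flops(arch)))
                  for arch in population]
        # Keep the non-dominated architectures (the current Pareto front).
        front = [a for a, s in scored
                 if not any(dominates(t, s) for _, t in scored)]
        # Adaptive mutation: one plausible schedule (an assumption, not the
        # paper's rule) shrinks the rate as the front fills up, i.e. as the
        # search converges.
        rate = base_rate * (1.0 - len(front) / (2 * pop_size))
        # Refill the population with mutants of front members.
        population = front + [mutate(random.choice(front), rate)
                              for _ in range(pop_size - len(front))]
    return front


if __name__ == "__main__":
    for arch in evolve():
        print(f"acc~{evaluate_accuracy(arch):.3f} flops~{estimate_flops(arch):5.1f} {arch}")
```

One design point the sketch tries to capture: keeping a Pareto front rather than collapsing the two objectives into a single weighted score preserves the whole accuracy/FLOPs trade-off curve, which matches the abstract's framing of jointly optimizing accuracy and cost.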