Posted: Mar 09, 2024
This paper presents a novel neural architecture search (NAS) framework that optimizes convolutional neural networks for both accuracy and computational efficiency. Traditional NAS methods often prioritize accuracy at the expense of computational requirements, making them impractical for resource-constrained environments. Our approach employs a multi-objective optimization strategy that jointly maximizes classification accuracy and minimizes computational cost, measured in floating-point operations (FLOPs). We introduce a hierarchical search space that enables efficient exploration of architectural variations and implement a modified evolutionary algorithm with adaptive mutation rates. Experimental results on the CIFAR-10 and ImageNet datasets demonstrate that our method discovers architectures that achieve accuracy competitive with state-of-the-art models while reducing computational requirements by 35-60%. The proposed framework provides a practical solution for deploying deep learning models in edge computing and mobile applications where computational resources are limited.
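To make the search procedure concrete, the following is a minimal sketch of a multi-objective evolutionary NAS loop of the kind the abstract describes: candidates from a hierarchical, cell-based search space are scored by a scalarized fitness (accuracy minus a FLOP penalty) and mutated at a rate that adapts over generations. Everything here is an illustrative assumption, not the paper's actual implementation: the op set, the cell encoding, the cost model in estimate_flops, the synthetic proxy in evaluate_accuracy, and the decaying mutation schedule are all hypothetical stand-ins.

```python
import random

# Hypothetical hierarchical search space: each cell picks one op per edge.
OPS = ["conv3x3", "conv5x5", "sep_conv3x3", "max_pool", "skip"]
NUM_CELLS, EDGES_PER_CELL = 3, 4

def random_arch():
    return [[random.choice(OPS) for _ in range(EDGES_PER_CELL)]
            for _ in range(NUM_CELLS)]

def estimate_flops(arch):
    # Stand-in cost model: relative FLOP weight per op (assumed values).
    cost = {"conv3x3": 9, "conv5x5": 25, "sep_conv3x3": 3,
            "max_pool": 1, "skip": 0}
    return sum(cost[op] for cell in arch for op in cell)

def evaluate_accuracy(arch):
    # Placeholder for training/validating the candidate network; a
    # synthetic proxy here so the sketch runs end to end.
    return 0.9 - 0.001 * abs(estimate_flops(arch) - 60) + random.gauss(0, 0.01)

def fitness(arch, flop_budget=100.0, penalty=0.5):
    # Scalarized multi-objective score: accuracy minus a penalty for
    # exceeding the FLOP budget -- one simple way to trade the objectives.
    over_budget = max(0.0, estimate_flops(arch) / flop_budget - 1.0)
    return evaluate_accuracy(arch) - penalty * over_budget

def mutate(arch, rate):
    # Point mutation: each edge's op is resampled with probability `rate`.
    return [[random.choice(OPS) if random.random() < rate else op
             for op in cell] for cell in arch]

def evolve(pop_size=20, generations=30):
    population = [random_arch() for _ in range(pop_size)]
    rate = 0.3  # initial mutation rate
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 4]      # truncation selection
        rate = max(0.05, rate * 0.95)          # adaptive decay: exploit late
        population = parents + [mutate(random.choice(parents), rate)
                                for _ in range(pop_size - len(parents))]
    best = max(population, key=fitness)
    return best, fitness(best), estimate_flops(best)

if __name__ == "__main__":
    arch, score, flops = evolve()
    print(f"best fitness={score:.3f}, relative FLOPs={flops}")
```

A scalarized penalty is only one option; a Pareto-based selection (e.g. keeping the non-dominated accuracy/FLOPs front each generation) would preserve the full trade-off curve instead of collapsing it to a single score.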