Posted: May 10, 2024
This paper presents a novel neural architecture search (NAS) framework that optimizes convolutional neural networks for both accuracy and computational efficiency. Traditional NAS methods often focus solely on accuracy, leading to computationally expensive models that are impractical for resource-constrained environments. Our approach employs a multi-objective optimization strategy that simultaneously considers classification accuracy, model size, and inference speed. We introduce a modified evolutionary algorithm with specialized mutation and crossover operations tailored for neural architecture exploration. Experimental results on the CIFAR-10 and ImageNet datasets demonstrate that our method discovers architectures that achieve competitive accuracy while reducing computational requirements by 35-60% compared to hand-designed networks and by 20-40% compared to single-objective NAS approaches. The proposed framework provides a systematic methodology for developing efficient deep learning models suitable for deployment on edge devices and mobile platforms.
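To make the described approach concrete, the sketch below shows what a multi-objective evolutionary NAS loop of this kind typically looks like. The abstract does not specify the paper's architecture encoding, operators, or evaluation procedure, so everything here is an illustrative assumption: the (width, kernel) genome, the proxy objective functions, and the Pareto-front selection are hypothetical stand-ins, not the authors' method.

```python
# Minimal sketch of multi-objective evolutionary architecture search.
# Assumptions (not from the paper): a fixed-depth genome of (width, kernel)
# choices, toy proxy objectives, and simple Pareto-dominance selection.
import random

# Hypothetical search space: each gene picks a layer width and kernel size.
WIDTHS = [16, 32, 64, 128]
KERNELS = [1, 3, 5]
DEPTH = 8  # fixed number of conv layers in this toy encoding


def random_genome():
    """Sample a random architecture: one (width, kernel) choice per layer."""
    return [(random.choice(WIDTHS), random.choice(KERNELS)) for _ in range(DEPTH)]


def mutate(genome, rate=0.2):
    """Point mutation: re-sample a layer's choices with probability `rate`."""
    child = list(genome)
    for i in range(len(child)):
        if random.random() < rate:
            child[i] = (random.choice(WIDTHS), random.choice(KERNELS))
    return child


def crossover(a, b):
    """One-point crossover on the layer list."""
    cut = random.randint(1, DEPTH - 1)
    return a[:cut] + b[cut:]


def evaluate(genome):
    """Return (error, cost): stand-ins for validation error and FLOPs/latency.
    A real system would train-and-measure or use a learned predictor."""
    params = sum(w * k * k for w, k in genome)
    error = 1.0 / (1.0 + params / 1e4) + random.gauss(0, 0.01)  # toy proxy
    return error, params


def dominates(f1, f2):
    """f1 Pareto-dominates f2: no worse on every objective, better on one."""
    return all(x <= y for x, y in zip(f1, f2)) and any(x < y for x, y in zip(f1, f2))


def pareto_front(pop, fits):
    """Keep the individuals not dominated by any other individual."""
    front = []
    for i, fi in enumerate(fits):
        if not any(dominates(fj, fi) for j, fj in enumerate(fits) if j != i):
            front.append(pop[i])
    return front


def search(pop_size=24, generations=30):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        fits = [evaluate(g) for g in pop]
        parents = pareto_front(pop, fits) or pop  # elitist: breed from the front
        children = []
        while len(children) < pop_size:
            if len(parents) > 1:
                a, b = random.sample(parents, 2)
            else:
                a = b = parents[0]
            children.append(mutate(crossover(a, b)))
        pop = (parents + children)[:pop_size]  # keep elites alongside offspring
    fits = [evaluate(g) for g in pop]
    return pareto_front(pop, fits)


if __name__ == "__main__":
    front = search()
    print(f"Pareto-optimal architectures found: {len(front)}")
```

The key design choice a sketch like this illustrates is that selection keeps an entire Pareto front of (accuracy, cost) trade-offs rather than a single best model, which is what lets a multi-objective search surface small, fast architectures that a purely accuracy-driven NAS would discard.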