Posted: Oct 28, 2025
This paper presents a novel neural architecture search (NAS) framework that optimizes convolutional neural networks for both accuracy and computational efficiency. Traditional NAS methods often focus solely on accuracy metrics, leading to computationally expensive models that are impractical for resource-constrained environments. Our approach employs a multi-objective optimization strategy that simultaneously considers classification accuracy, model size, and inference speed. We introduce a modified evolutionary algorithm with specialized mutation and crossover operations tailored for neural architecture exploration. Experimental results on CIFAR-10 and ImageNet datasets demonstrate that our method discovers architectures that achieve competitive accuracy while reducing computational requirements by 35-60%.
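The multi-objective evolutionary search described above can be sketched in miniature. The snippet below is an illustrative toy, not the paper's algorithm: it encodes an architecture as a hypothetical list of per-block channel widths, uses cheap stand-in proxies for accuracy and compute cost (a real NAS system would train or estimate each candidate), and evolves a population toward the Pareto front with simple mutation and one-point crossover. All names (`random_arch`, `accuracy_proxy`, `pareto_front`, etc.) are assumptions for this sketch.

```python
import random

random.seed(0)

# Hypothetical encoding: an architecture is a list of channel widths,
# one per convolutional block.
WIDTHS = [16, 32, 64, 128]
DEPTH = 4

def random_arch():
    return [random.choice(WIDTHS) for _ in range(DEPTH)]

def cost(arch):
    # Toy proxy for parameter count / FLOPs: sum of squared widths.
    return sum(w * w for w in arch)

def accuracy_proxy(arch):
    # Toy surrogate: wider blocks score higher with diminishing returns.
    # A real system would evaluate candidates on held-out data.
    return sum(w ** 0.5 for w in arch)

def mutate(arch, p=0.3):
    # Resample each block's width with probability p.
    return [random.choice(WIDTHS) if random.random() < p else w
            for w in arch]

def crossover(a, b):
    # One-point crossover over the layer list.
    cut = random.randrange(1, DEPTH)
    return a[:cut] + b[cut:]

def dominates(x, y):
    # x dominates y if it is no worse on both objectives
    # (accuracy up, cost down) and strictly better on at least one.
    ax, cx = accuracy_proxy(x), cost(x)
    ay, cy = accuracy_proxy(y), cost(y)
    return ax >= ay and cx <= cy and (ax > ay or cx < cy)

def pareto_front(pop):
    return [p for p in pop
            if not any(dominates(q, p) for q in pop if q != p)]

def evolve(generations=20, pop_size=16):
    pop = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the non-dominated set; refill with offspring of survivors.
        front = pareto_front(pop)
        children = []
        while len(children) < pop_size - len(front):
            if len(front) >= 2:
                a, b = random.sample(front, 2)
            else:
                a = b = front[0]
            children.append(mutate(crossover(a, b)))
        pop = front + children
    return pareto_front(pop)

front = evolve()
```

After evolution, `front` holds architectures that trade off the two objectives: no member can be improved on accuracy without paying more in cost, which mirrors the accuracy-versus-efficiency trade-off the abstract targets.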