
Accelerating Neural Architecture Search with Rank-Preserving Surrogate Models

Abstract: In recent years, deep learning has enabled significant progress on several tasks, such as image recognition, speech recognition, and language modelling. Novel neural architectures are behind these achievements. However, designing such architectures manually is time-consuming and error-prone, even for human experts. Neural architecture search (NAS) automates the design process by searching a huge search space for the best architecture. This search requires evaluating each sampled architecture through time-consuming training. To speed up NAS algorithms, several existing approaches use surrogate models that predict a neural architecture's accuracy instead of training each sampled one. In this paper, we propose RS-NAS (Rank-preserving Surrogate model in NAS), a surrogate model trained with a rank-preserving loss function. We posit that the search algorithm does not need to know the exact accuracy of a candidate architecture, only whether it is better or worse than others. We thoroughly experiment with and validate our surrogate models using state-of-the-art search algorithms. Using the rank-preserving surrogate models, local search in DARTS finds an architecture that is 2% more accurate than the one found with the NAS-Bench-301 surrogate model in the same search time.
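The abstract's core idea, that a surrogate only needs to preserve the relative ordering of architectures rather than predict exact accuracies, can be illustrated with a pairwise ranking loss. The sketch below is hypothetical: the paper's exact loss formulation is not given in the abstract, and the function name, margin value, and inputs are assumptions for illustration.

```python
# Illustrative sketch of a rank-preserving loss (hypothetical; the paper's
# exact formulation is not stated in the abstract). For every ordered pair
# of architectures (i, j) whose true accuracy satisfies acc[i] > acc[j],
# the surrogate's predicted score s_i should exceed s_j by at least
# `margin`; violations are penalized linearly (a pairwise hinge loss).

def pairwise_rank_loss(pred_scores, true_accs, margin=0.1):
    """Average hinge loss over all pairs (i, j) with true_accs[i] > true_accs[j]."""
    total, pairs = 0.0, 0
    for i in range(len(true_accs)):
        for j in range(len(true_accs)):
            if true_accs[i] > true_accs[j]:
                pairs += 1
                # Penalize only when the predicted ordering margin is violated.
                total += max(0.0, margin - (pred_scores[i] - pred_scores[j]))
    return total / pairs if pairs else 0.0

# A surrogate that ranks candidates correctly with a wide margin incurs zero
# loss even when its absolute accuracy estimates are far off:
print(pairwise_rank_loss([0.9, 0.5, 0.1], [0.93, 0.91, 0.88]))  # 0.0
```

Under such a loss, the surrogate is free to mispredict absolute accuracies as long as it orders candidates correctly, which is exactly the information a search algorithm (e.g. local search in DARTS) consumes when comparing architectures.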
Document type: Conference papers
Contributor: Kathleen TORCK
Submitted on: Friday, October 15, 2021 - 9:55:02 AM
Last modification on: Thursday, June 23, 2022 - 6:38:17 PM




Hadjer Benmeziane, Hamza Ouarnoughi, Kaoutar El Maghraoui, Smail Niar. Accelerating Neural Architecture Search with Rank-Preserving Surrogate Models. 7th International Conference on Arab Women in Computing, Aug 2021, Sharjah, United Arab Emirates. ⟨10.1145/3485557.3485579⟩. ⟨hal-03379648⟩


