Generative Adversarial Neural Architecture Search

DOI: 10.48550/arxiv.2105.09356
Publication Date: 2021-01-01
ABSTRACT
Despite the empirical success of neural architecture search (NAS) in deep learning applications, the optimality, reproducibility and cost of NAS schemes remain hard to assess. In this paper, we propose Generative Adversarial NAS (GA-NAS) with theoretically provable convergence guarantees, promoting stability in the search. Inspired by importance sampling, GA-NAS iteratively fits a generator to previously discovered top architectures, thus increasingly focusing on important parts of a large search space. Furthermore, we propose an efficient adversarial learning approach, where the generator is trained by reinforcement learning based on rewards provided by a discriminator, thus being able to explore the search space without evaluating a large number of architectures. Extensive experiments show that GA-NAS beats the best published results under several cases on three public NAS benchmarks. In the meantime, GA-NAS can handle ad-hoc search constraints and search spaces. We show that GA-NAS can be used to improve already optimized baselines found by other NAS methods, including EfficientNet and ProxylessNAS, in terms of ImageNet accuracy or the number of parameters, within their original search spaces.
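The abstract describes an iterative loop: fit a generator to the top architectures found so far, train it by reinforcement learning against rewards from a discriminator, then evaluate a few fresh samples and repeat. The sketch below illustrates that loop under toy assumptions, not the paper's actual implementation: a fixed-length discrete architecture encoding, a synthetic SCORE_TABLE standing in for real architecture evaluation, a per-slot categorical generator trained with REINFORCE, and a logistic-regression discriminator. All names (SCORE_TABLE, disc_w, gen_logits, etc.) are hypothetical.

```python
# Minimal GA-NAS-style loop (illustrative sketch only, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

NUM_SLOTS, NUM_OPS = 6, 4           # toy space: 6 slots, 4 candidate ops each
POOL, TOP_K, ITERS = 64, 8, 20

# Synthetic stand-in for expensive architecture evaluation.
SCORE_TABLE = rng.normal(size=(NUM_SLOTS, NUM_OPS))

def evaluate(arch):
    return float(SCORE_TABLE[np.arange(NUM_SLOTS), arch].sum())

def one_hot(arch):
    f = np.zeros(NUM_SLOTS * NUM_OPS)
    f[np.arange(NUM_SLOTS) * NUM_OPS + np.asarray(arch)] = 1.0
    return f

# Generator: independent categorical logits per slot, trained with REINFORCE.
gen_logits = np.zeros((NUM_SLOTS, NUM_OPS))

def sample(n):
    probs = np.exp(gen_logits); probs /= probs.sum(1, keepdims=True)
    return np.array([[rng.choice(NUM_OPS, p=probs[s]) for s in range(NUM_SLOTS)]
                     for _ in range(n)])

# Discriminator: logistic regression on one-hot architecture features.
disc_w = np.zeros(NUM_SLOTS * NUM_OPS)

def disc_score(arch):
    return 1.0 / (1.0 + np.exp(-disc_w @ one_hot(arch)))

pool = [tuple(a) for a in sample(POOL)]     # initial evaluated architectures

for it in range(ITERS):
    top = sorted(pool, key=evaluate, reverse=True)[:TOP_K]   # "real" examples

    # 1) Train discriminator: top architectures vs. fresh generator samples.
    fakes = sample(TOP_K)
    for _ in range(50):
        for arch, label in [(a, 1.0) for a in top] + [(a, 0.0) for a in fakes]:
            p = disc_score(arch)
            disc_w += 0.1 * (label - p) * one_hot(arch)      # logistic step

    # 2) Train generator with REINFORCE; reward = discriminator score.
    batch = sample(32)
    rewards = np.array([disc_score(a) for a in batch])
    baseline = rewards.mean()
    probs = np.exp(gen_logits); probs /= probs.sum(1, keepdims=True)
    grad = np.zeros_like(gen_logits)
    for a, r in zip(batch, rewards):
        for s in range(NUM_SLOTS):
            g = -probs[s].copy(); g[a[s]] += 1.0             # grad of log pi(a_s)
            grad[s] += (r - baseline) * g
    gen_logits += 0.05 * grad / len(batch)

    # 3) Evaluate a few new samples and grow the pool (importance-sampling step).
    pool.extend(tuple(a) for a in sample(8))

best = max(pool, key=evaluate)
print("best architecture:", best, "score:", round(evaluate(best), 3))
```

Using the discriminator's probability directly as the RL reward means the generator only needs cheap discriminator queries, not full architecture evaluations, which mirrors the abstract's claim of exploring the search space without evaluating a large number of architectures; only step 3 pays the (here simulated) evaluation cost.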