Single Path One-Shot Neural Architecture Search with Uniform Sampling
DOI:
10.48550/arxiv.1904.00420
Publication Date:
2019-01-01
AUTHORS (7)
ABSTRACT
We revisit the one-shot Neural Architecture Search (NAS) paradigm and analyze its advantages over existing NAS approaches. Existing one-shot methods, however, are hard to train and not yet effective on large-scale datasets like ImageNet. This work proposes a Single Path One-Shot model to address the challenge in training. Our central idea is to construct a simplified supernet, where all architectures are single paths, so that the weight co-adaptation problem is alleviated. Training is performed by uniform path sampling; all architectures (and their weights) are trained fully and equally. Comprehensive experiments verify that our approach is flexible and effective. It is easy to train and fast to search. It effortlessly supports complex search spaces (e.g., building blocks, channels, mixed-precision quantization) and different search constraints (e.g., FLOPs, latency). It is thus convenient to use for various needs. It achieves state-of-the-art performance on a large dataset.
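The central idea of the abstract, sampling a single path through the supernet uniformly at random for each training step, can be illustrated with a toy sketch. This is not the authors' implementation; the layer count, choice count, and scalar stand-in "operations" are hypothetical, chosen only to show the sampling and forward-pass structure.

```python
import random

# Hypothetical toy supernet: each of L_LAYERS layers offers K_CHOICES
# candidate operations (simple scalar multipliers stand in for conv blocks).
L_LAYERS, K_CHOICES = 4, 3
ops = [[(lambda x, s=s: x * s) for s in (0.5, 1.0, 2.0)]
       for _ in range(L_LAYERS)]

def sample_path(rng):
    """Uniformly sample one candidate op per layer: a single-path architecture."""
    return [rng.randrange(K_CHOICES) for _ in range(L_LAYERS)]

def forward(path, x):
    """Run an input through the sampled single path of the supernet."""
    for layer, choice in enumerate(path):
        x = ops[layer][choice](x)
    return x

rng = random.Random(0)
path = sample_path(rng)
print(path, forward(path, 1.0))
```

In a real setting, each uniformly sampled path would receive one gradient step on its shared weights, so every architecture's weights are trained equally; the search phase then evaluates candidate paths using the trained supernet weights.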