Asynchronous Evolution of Deep Neural Network Architectures

DOI: 10.48550/arxiv.2308.04102 Publication Date: 2023-01-01
ABSTRACT
Many evolutionary algorithms (EAs) take advantage of the parallel evaluation of candidates. However, if evaluation times vary significantly, many worker nodes (i.e., compute clients) are idle much of the time, waiting for the next generation to be created. Evolutionary neural architecture search (ENAS), a class of EAs that optimizes the architecture and hyperparameters of deep neural networks, is particularly vulnerable to this issue. This paper proposes a generic asynchronous evaluation strategy (AES) that is then adapted to work with ENAS. AES increases throughput by maintaining a queue of up to $K$ individuals ready to be sent to the workers, proceeding with the next generation as soon as $M \ll K$ individuals have been evaluated. A suitable value of $M$ is determined experimentally, balancing diversity and efficiency. To showcase the generality and power of AES, it was first evaluated in eight-line sorting network design (a single-population optimization task with limited evaluation-time variability), achieving an over two-fold speedup. Next, it was evaluated in 11-bit multiplexer discovery (a task with extended evaluation-time variability), where a 14-fold speedup was observed. It was then scaled up to ENAS for image captioning (a multi-population open-ended-optimization task), where it again resulted in a multifold speedup. In all problems, a multifold performance improvement was observed, suggesting that AES is a promising method for parallelizing the evolution of complex systems with long and variable evaluation times, such as those in ENAS.
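The core AES mechanism described above — keeping a queue of up to $K$ individuals in flight and advancing to the next generation as soon as $M \ll K$ results return — can be sketched as follows. This is a minimal illustrative simulation, not the paper's implementation: the toy genome, fitness function, variation operator, and the specific values of `K` and `M` are all assumptions chosen for brevity.

```python
import concurrent.futures as cf
import random
import time

K = 8   # maximum number of individuals queued for evaluation at once
M = 2   # proceed as soon as M << K evaluations have returned

def evaluate(individual):
    # Stand-in for a long, variable-duration fitness evaluation
    time.sleep(random.uniform(0.01, 0.05))
    return individual, sum(individual)  # toy fitness: sum of genes

def breed(parents, rng):
    # Toy variation operator: copy a random parent and mutate one gene
    child = list(rng.choice(parents))
    child[rng.randrange(len(child))] = rng.random()
    return tuple(child)

def aes(generations=5, genes=4, seed=0):
    rng = random.Random(seed)
    population = [tuple(rng.random() for _ in range(genes)) for _ in range(K)]
    evaluated = []
    with cf.ThreadPoolExecutor(max_workers=4) as pool:
        # Fill the queue: all K individuals are in flight immediately
        pending = {pool.submit(evaluate, ind) for ind in population}
        for _ in range(generations):
            # Wait only until M results are in, not for the whole generation
            done, pending = cf.wait(pending, return_when=cf.FIRST_COMPLETED)
            while len(done) < M:
                more, pending = cf.wait(pending, return_when=cf.FIRST_COMPLETED)
                done |= more
            evaluated.extend(f.result() for f in done)
            evaluated.sort(key=lambda r: r[1], reverse=True)
            parents = [ind for ind, _ in evaluated[:M]]
            # Refill the queue so the workers never sit idle
            for _ in range(len(done)):
                pending.add(pool.submit(evaluate, breed(parents, rng)))
        # Drain the evaluations still in flight
        for f in cf.as_completed(pending):
            evaluated.append(f.result())
    return max(evaluated, key=lambda r: r[1])

best, fitness = aes()
print(f"best fitness: {fitness:.3f}")
```

Because the generation advances after only `M` returns, slow evaluations never stall the whole population; they simply report late and join a later selection pool, which is the source of the speedups reported in the abstract.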