Multiresolution convolutional autoencoders
Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML); Numerical Analysis (math.NA); Image and Video Processing (eess.IV)
DOI: 10.1016/j.jcp.2022.111801
Publication Date: 2022-11-21
AUTHORS (4)
ABSTRACT
20 pages, 11 figures

We propose a multi-resolution convolutional autoencoder (MrCAE) architecture that integrates and leverages three highly successful mathematical frameworks: (i) multigrid methods, (ii) convolutional autoencoders, and (iii) transfer learning. The method provides an adaptive, hierarchical architecture built on a progressive training approach for multiscale spatio-temporal data. The framework accepts inputs across multiple scales: starting from a compact network architecture (with a small number of weights) and low-resolution data, the network progressively deepens and widens itself in a principled manner, encoding the new information in higher-resolution data according to its current reconstruction performance. Basic transfer learning techniques ensure that information learned in earlier training steps is rapidly transferred to the enlarged network. As a result, the network dynamically captures features at different scales at different depths. The performance gains of this adaptive multiscale architecture are illustrated through a sequence of numerical experiments on synthetic examples and real-world spatio-temporal data.
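The progressive, transfer-based training loop described in the abstract can be sketched in miniature. The following toy is *not* the authors' MrCAE (which uses convolutional layers and 2-D multigrid restriction on spatio-temporal fields); it is a hypothetical 1-D analogue with a linear autoencoder: train at a coarse resolution, then upsample the learned weights to warm-start training at the next, finer resolution. All names (`downsample`, `transfer`, `train_linear_ae`) are illustrative assumptions, not from the paper.

```python
import numpy as np

def downsample(x, factor=2):
    """Multigrid-style restriction: average-pool each signal by `factor`."""
    return x.reshape(x.shape[0], -1, factor).mean(axis=2)

def transfer(W_coarse):
    """Carry learned encoder rows to the next (2x finer) grid by
    nearest-neighbour upsampling; the 1/2 keeps codes consistent with
    the average-pooled coarse inputs."""
    return np.repeat(W_coarse, 2, axis=1) / 2.0

def train_linear_ae(X, W=None, rank=2, steps=2000, lr=0.005, seed=0):
    """Toy linear autoencoder x -> W.T @ (W @ x), fit by gradient descent.
    A supplied W (transferred from a coarser level) warm-starts training."""
    m, n = X.shape
    if W is None:
        W = np.random.default_rng(seed).normal(scale=0.1, size=(rank, n))
    for _ in range(steps):
        E = X @ W.T @ W - X                       # reconstruction residual
        W -= lr * 2.0 * W @ (E.T @ X + X.T @ E) / m
    return W

# Synthetic multiscale data: phase-shifted sinusoids (an exact rank-2 family).
rng = np.random.default_rng(1)
t = np.arange(16)
X_fine = np.sin(2 * np.pi * t / 16 + rng.uniform(0, 2 * np.pi, size=(64, 1)))
X_coarse = downsample(X_fine)                     # resolution 16 -> 8

# Stage 1: train a small network on low-resolution data.
W8 = train_linear_ae(X_coarse)
# Stage 2: widen via transfer learning, then refine on high-resolution data.
W16 = train_linear_ae(X_fine, W=transfer(W8))

rel_err = np.linalg.norm(X_fine @ W16.T @ W16 - X_fine) / np.linalg.norm(X_fine)
```

The warm start mirrors the paper's progression in spirit: stage 2 does not begin from random weights but from a rescaled copy of the coarse solution, so fine-level training only has to encode the information newly available at the higher resolution.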
REFERENCES (48)
CITATIONS (18)