Data Assimilation in Chaotic Systems Using Deep Reinforcement Learning
Adaptability
Ensemble Learning
DOI:
10.48550/arxiv.2401.00916
Publication Date:
2024-01-01
AUTHORS (5)
ABSTRACT
Data assimilation (DA) plays a pivotal role in diverse applications, ranging from climate predictions and weather forecasts to trajectory planning for autonomous vehicles. A prime example is the widely used ensemble Kalman filter (EnKF), which relies on linear updates to minimize variance among the ensemble of forecast states. Recent advancements have seen the emergence of deep learning approaches in this domain, primarily within a supervised learning framework. However, the adaptability of such models to untrained scenarios remains a challenge. In this study, we introduce a novel DA strategy that utilizes reinforcement learning (RL) to apply state corrections using full or partial observations of the state variables. Our investigation focuses on demonstrating this approach on the chaotic Lorenz '63 system, where the agent's objective is to minimize the root-mean-squared error between the observations and the corresponding model forecasts. Consequently, the agent develops a correction strategy, enhancing model forecasts based on the available system observations. The agent employs a stochastic action policy, enabling a Monte Carlo-based DA framework that relies on randomly sampling the policy to generate an ensemble of assimilated realizations. Results demonstrate that the developed RL algorithm performs favorably when compared to the EnKF. Additionally, we illustrate its capability to assimilate non-Gaussian observational data, addressing a significant limitation of the EnKF.
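To make the assimilation cycle described in the abstract concrete, the following Python sketch pairs a forward-Euler Lorenz '63 forecast with a placeholder stochastic correction policy that is sampled repeatedly to build a Monte Carlo ensemble of assimilated realizations. This is not the authors' implementation: the integrator, the Gaussian nudging policy, and all parameter values (SIGMA, RHO, BETA, policy_std, n_members) are illustrative assumptions; in the paper, a trained RL actor would supply the correction in place of the hand-coded rule.

# Minimal sketch (assumptions noted above), not the paper's code.
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # classic Lorenz '63 parameters
DT = 0.01                                  # integration time step (assumed)

def lorenz63_rhs(x):
    """Right-hand side of the Lorenz '63 ODEs."""
    return np.array([
        SIGMA * (x[1] - x[0]),
        x[0] * (RHO - x[2]) - x[1],
        x[0] * x[1] - BETA * x[2],
    ])

def forecast(x, steps=25):
    """Advance the state with forward-Euler steps (placeholder integrator)."""
    for _ in range(steps):
        x = x + DT * lorenz63_rhs(x)
    return x

def stochastic_correction(x_forecast, y_obs, policy_std=0.5, rng=None):
    """Placeholder for the stochastic action policy: a Gaussian-perturbed
    nudge toward the observation. A trained actor network would provide the
    mean and spread of the correction instead of this hand-coded rule."""
    rng = np.random.default_rng() if rng is None else rng
    mean_correction = 0.5 * (y_obs - x_forecast)          # assumed nudge factor
    return x_forecast + mean_correction + rng.normal(0.0, policy_std, size=3)

def assimilate_ensemble(x0, y_obs_sequence, n_members=50, seed=0):
    """Monte Carlo DA: sample the stochastic policy per ensemble member at
    every observation time, yielding an ensemble of assimilated realizations."""
    rng = np.random.default_rng(seed)
    members = np.tile(x0, (n_members, 1))
    for y_obs in y_obs_sequence:
        for i in range(n_members):
            xf = forecast(members[i])                      # model forecast step
            members[i] = stochastic_correction(xf, y_obs, rng=rng)
    return members

if __name__ == "__main__":
    # Synthetic truth run and noisy observations (full-state, Gaussian noise).
    truth = np.array([1.0, 1.0, 1.0])
    obs = []
    for _ in range(5):
        truth = forecast(truth)
        obs.append(truth + np.random.normal(0.0, 1.0, 3))
    ensemble = assimilate_ensemble(np.array([2.0, 3.0, 4.0]), obs)
    rmse = np.sqrt(np.mean((ensemble.mean(axis=0) - obs[-1]) ** 2))
    print("ensemble-mean RMSE vs last observation:", rmse)

The spread of the sampled ensemble plays the role that the forecast-ensemble covariance plays in the EnKF; because the corrections are drawn from a policy rather than computed from a linear variance-minimizing update, the same loop can in principle accommodate non-Gaussian observations.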