- Model Reduction and Neural Networks
- Neural Networks and Applications
- Adversarial Robustness in Machine Learning
- Anomaly Detection Techniques and Applications
- Advanced Neural Network Applications
- Sparse and Compressive Sensing Techniques
- Stochastic Gradient Optimization Techniques
- Tensor Decomposition and Applications
- Domain Adaptation and Few-Shot Learning
- Generative Adversarial Networks and Image Synthesis
- Fluid Dynamics and Turbulent Flows
- Image and Signal Denoising Methods
- Time Series Analysis and Forecasting
- Advanced Vision and Imaging
- Computational Physics and Python Applications
- Advanced Image Processing Techniques
- Gaussian Processes and Bayesian Inference
- Matrix Theory and Algorithms
- Lattice Boltzmann Simulation Studies
- Digital Media Forensic Detection
- Explainable Artificial Intelligence (XAI)
- Bayesian Methods and Mixture Models
- Blind Source Separation Techniques
- Atmospheric and Environmental Gas Dynamics
- Retinal and Macular Surgery
Lawrence Berkeley National Laboratory
2023
University of California, Berkeley
2017-2022
International Computer Science Institute
2019-2022
University of Pittsburgh
2020-2021
Berkeley College
2020
University of St Andrews
2015-2019
University of Washington Applied Physics Laboratory
2017-2019
Seattle University
2018
University of Washington
2017
In many applications, it is important to reconstruct a fluid flow field, or some other high-dimensional state, from limited measurements and data. In this work, we propose a shallow neural network-based learning methodology for such reconstruction. Our approach learns an end-to-end mapping between the sensor measurements and the high-dimensional state, without any heavy preprocessing of the raw data. No prior knowledge is assumed to be available, making the estimation method purely data-driven. We demonstrate performance on three examples in fluid mechanics and oceanography, ...
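As a rough illustration of the idea (not the paper's architecture), the toy sketch below reconstructs a synthetic high-dimensional field from a handful of point sensors with a single-hidden-layer decoder; the data model, layer sizes, linear skip connection, and least-squares output fit are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_field, n_sensors, n_hidden, n_train = 400, 8, 64, 200

# Toy data: high-dimensional "flow fields" driven by a 3-dimensional latent state.
latent = rng.standard_normal((3, n_train))
P = rng.standard_normal((n_field, 3))
fields = P @ latent                                 # (n_field, n_train)

# Point sensors: a fixed linear readout of the full field.
C = rng.standard_normal((n_sensors, n_field)) / n_field
sensors = C @ fields                                # (n_sensors, n_train)

# Shallow decoder: random ReLU hidden layer plus a linear skip connection;
# only the output layer is fit here (by least squares, for simplicity).
W1 = rng.standard_normal((n_hidden, n_sensors))
b1 = rng.standard_normal((n_hidden, 1))
H = np.vstack([sensors, np.maximum(W1 @ sensors + b1, 0.0)])
W2, *_ = np.linalg.lstsq(H.T, fields.T, rcond=None)

recon = (H.T @ W2).T
err = np.linalg.norm(recon - fields) / np.linalg.norm(fields)
print(err < 1e-6)  # noise-free toy data is recovered essentially exactly
```

Because the toy fields are an exact linear function of the sensors, the skip connection lets least squares drive the reconstruction error to numerical precision; real flow data would of course require training the full network.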
We introduce the method of compressed dynamic mode decomposition (cDMD) for background modeling. The dynamic mode decomposition is a regression technique that integrates two of the leading data analysis methods in use today: Fourier transforms and singular value decomposition. Borrowing ideas from compressed sensing and matrix sketching, cDMD eases the computational workload of high-resolution video processing. The key principle is to obtain the decomposition on a (small) compressed representation of the video feed. Hence, the algorithm scales with the intrinsic rank of the matrix, rather than the size of the actual...
This paper presents a randomized algorithm for computing the near-optimal low-rank dynamic mode decomposition (DMD). Randomized algorithms are emerging techniques to compute low-rank matrix approximations at a fraction of the cost of deterministic algorithms, easing the computational challenges arising in the area of `big data'. The idea is to derive a small matrix from the high-dimensional data, which is then used to efficiently compute the dynamic modes and eigenvalues. The algorithm is presented within a modular probabilistic framework, and the approximation quality can be controlled via...
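A minimal sketch of this idea under assumed parameter choices (the paper's framework is more general): compress the snapshots with a random test matrix, run exact DMD on the small representation, and lift the modes back to the original space.

```python
import numpy as np

def rdmd(X, rank, oversample=10, seed=0):
    """Randomized DMD sketch: X holds snapshots x_0..x_{n-1} as columns."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    k = min(rank + oversample, min(m, n))
    # Derive a small matrix from the high-dimensional data via a random sketch.
    Q, _ = np.linalg.qr(X @ rng.standard_normal((n, k)))
    B = Q.T @ X                            # (k x n) compressed snapshots
    B1, B2 = B[:, :-1], B[:, 1:]
    # Exact DMD on the compressed data.
    U, s, Vt = np.linalg.svd(B1, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    Atilde = U.T @ ((B2 @ Vt.T) / s)       # rank-r linear evolution operator
    evals, W = np.linalg.eig(Atilde)
    modes = Q @ (((B2 @ Vt.T) / s) @ W)    # lift DMD modes back to R^m
    return evals, modes

# Oscillatory toy system: DMD eigenvalues should lie on the unit circle.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Z = np.stack([np.linalg.matrix_power(R, t) @ np.array([1.0, 0.0])
              for t in range(50)], axis=1)
X = np.random.default_rng(1).standard_normal((300, 2)) @ Z
evals, _ = rdmd(X, rank=2)
print(np.allclose(np.abs(evals), 1.0, atol=1e-6))
```

The cost of the eigendecomposition now depends on the sketch size k rather than the ambient dimension m, which is the point of the randomized formulation.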
Matrix decompositions are fundamental tools in the areas of applied mathematics, statistical computing, and machine learning. In particular, low-rank matrix decompositions are vital, and widely used for data analysis, dimensionality reduction, and data compression. Massive datasets, however, pose a computational challenge for traditional algorithms, placing significant constraints on both memory and processing power. Recently, the powerful concept of randomness has been introduced as a strategy to ease the computational load. The essential idea of probabilistic...
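The probabilistic strategy can be illustrated with a standard randomized SVD sketch (oversampling and power-iteration counts below are illustrative defaults, not prescriptions from the paper): sample the range with a random test matrix, project the data onto that smaller subspace, and run a deterministic SVD there.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, n_iter=2, seed=0):
    """Probabilistic low-rank SVD via random range sketching."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + oversample, min(m, n))
    # Sample the column space of A with a Gaussian test matrix.
    Y = A @ rng.standard_normal((n, k))
    # Power iterations sharpen the sketch when singular values decay slowly.
    for _ in range(n_iter):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    # Project to the small subspace and decompose there deterministically.
    B = Q.T @ A
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], s[:rank], Vt[:rank]

# Example: an exactly rank-5 matrix is recovered to numerical precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 200))
U, s, Vt = randomized_svd(A, rank=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(err < 1e-8)
```

The expensive deterministic SVD is applied only to the k-by-n projected matrix B, which is where the memory and compute savings come from.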
In addition to providing high-profile successes in computer vision and natural language processing, neural networks also provide an emerging set of techniques for scientific problems. Such data-driven models, however, typically ignore physical insights from the system under consideration. Among other things, a physics-informed model formulation should encode some degree of stability or robustness or well-conditioning (in that a small change in the input will not lead to drastic changes in the output), a characteristic...
Sparse principal component analysis (SPCA) has emerged as a powerful technique for modern data analysis, providing improved interpretation of low-rank structures by identifying localized spatial structures in the data and disambiguating between distinct time scales. We demonstrate a robust and scalable SPCA algorithm by formulating it as a value-function optimization problem. This viewpoint leads to a flexible and computationally efficient algorithm. The approach can further leverage randomized methods from linear algebra...
Dynamical systems that evolve continuously over time are ubiquitous throughout science and engineering. Machine learning (ML) provides data-driven approaches to model and predict the dynamics of such systems. A core issue with this approach is that ML models are typically trained on discrete data, using ML methodologies that are not aware of the underlying continuity properties. This results in models that often do not capture any continuous dynamics, either of the system of interest or indeed of any related system. To address this challenge, we...
We demonstrate that the integration of the recently developed dynamic mode decomposition with a multi-resolution analysis allows for the decomposition of video streams into multi-time scale features and objects. A one-level separation splits the background (low-rank) from the foreground (sparse) of a video, akin to robust principal component analysis. Further iteration of the method allows the data set to be separated into objects moving at different rates against the slowly varying background, thus allowing for multiple-target tracking and detection. The algorithm is...
The classical way of studying rainfall-runoff processes in the water cycle relies on conceptual or physically-based hydrologic models. Deep learning (DL) has recently emerged as an alternative and has blossomed in the hydrology community for rainfall-runoff simulations. However, the decades-old Long Short-Term Memory (LSTM) network remains the benchmark for this task, outperforming newer architectures like Transformers. In this work, we propose a State Space Model (SSM), specifically the Frequency Tuned Diagonal State Space Sequence (S4D-FT) model,...
Transformers have recently shown strong performance in time-series forecasting, but their all-to-all attention mechanism overlooks the (temporal) causal and often (temporally) local nature of the data. We introduce Powerformer, a novel Transformer variant that replaces noncausal attention weights with causal weights that are reweighted according to a smooth heavy-tailed decay. This simple yet effective modification endows the model with an inductive bias favoring temporally local dependencies, while still allowing sufficient flexibility...
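The reweighting can be sketched as a causal attention step whose logits receive an additive power-law decay penalty; the kernel form and the exponent `alpha` below are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def decayed_causal_attention(Q, K, V, alpha=1.5):
    """Single-head attention with a causal mask and heavy-tailed temporal decay.
    alpha (assumed) controls how fast attention to the distant past falls off."""
    T, d = Q.shape
    logits = Q @ K.T / np.sqrt(d)
    i, j = np.arange(T)[:, None], np.arange(T)[None, :]
    logits = np.where(j <= i, logits, -np.inf)   # causal: no attending to the future
    lag = np.maximum(i - j, 0)
    logits = logits - alpha * np.log1p(lag)      # multiply weights by (1+lag)^(-alpha)
    logits -= logits.max(axis=1, keepdims=True)  # numerically stable softmax
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    return w @ V, w

rng = np.random.default_rng(0)
Q = rng.standard_normal((6, 4))
K = rng.standard_normal((6, 4))
V = rng.standard_normal((6, 4))
out, w = decayed_causal_attention(Q, K, V)
# Rows are valid distributions, and the first token can only attend to itself.
print(np.allclose(w.sum(axis=1), 1.0) and np.allclose(out[0], V[0]))
```

Adding the decay in log-space keeps the operation a drop-in replacement for standard masked softmax attention.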
The CANDECOMP/PARAFAC (CP) tensor decomposition is a popular dimensionality-reduction method for multiway data. Dimensionality reduction is often sought after since many high-dimensional tensors have low intrinsic rank relative to the dimension of the ambient measurement space. However, the emergence of ‘big data’ poses significant computational challenges for computing this fundamental tensor decomposition. By leveraging modern randomized algorithms, we demonstrate that coherent structures can be learned...
Viewing recurrent neural networks (RNNs) as continuous-time dynamical systems, we propose a recurrent unit that describes the hidden state's evolution with two parts: a well-understood linear component plus a Lipschitz nonlinearity. This particular functional form facilitates stability analysis of the long-term behavior using tools from nonlinear systems theory. In turn, this enables architectural design decisions before experimentation. Sufficient conditions for global stability are obtained, motivating a novel scheme...
Randomized numerical linear algebra - RandNLA, for short - concerns the use of randomization as a resource to develop improved algorithms for large-scale linear algebra computations. The origins of contemporary RandNLA lay in theoretical computer science, where it blossomed from a simple idea: randomness provides an avenue for computing approximate solutions to linear algebra problems more efficiently than deterministic algorithms. This idea proved fruitful in the development of scalable machine learning and statistical data analysis applications. However,...
We demonstrate a heuristic algorithm to compute the approximate low-rank singular value decomposition. The algorithm is inspired by ideas from compressed sensing and, in particular, is suitable for image and video processing applications. Specifically, our compressed singular value decomposition (cSVD) algorithm employs aggressive random test matrices to efficiently sketch the row space of the input matrix. The resulting compressed representation of the data enables the computation of an accurate approximation of the dominant high-dimensional left and right singular vectors. We benchmark cSVD against...
Recent work has attempted to interpret residual networks (ResNets) as one step of a forward Euler discretization of an ordinary differential equation, focusing mainly on syntactic algebraic similarities between the two systems. Discrete dynamical integrators of continuous dynamical systems, however, have a much richer structure. We first show that ResNets fail to be meaningful dynamical integrators in this sense. We then demonstrate that neural network models can learn to represent continuous dynamical systems, with this richer structure and its properties, by embedding them into...
Recurrent neural networks are widely used on time series data, yet such models often ignore the underlying physical structures in such sequences. A new class of physics-based methods related to Koopman theory has been introduced, offering an alternative for processing nonlinear dynamical systems. In this work, we propose a novel Consistent Koopman Autoencoder model which, unlike the majority of existing methods, leverages the forward and backward dynamics. Key to our approach is a new analysis which explores the interplay between...