- Generative Adversarial Networks and Image Synthesis
- Domain Adaptation and Few-Shot Learning
- Adversarial Robustness in Machine Learning
- Gaussian Processes and Bayesian Inference
- Reinforcement Learning in Robotics
- Explainable Artificial Intelligence (XAI)
- Topic Modeling
- Model Reduction and Neural Networks
- Control Systems and Identification
- Wireless Networks and Protocols
- Digital Media Forensic Detection
- Anomaly Detection Techniques and Applications
- Human Pose and Action Recognition
- Machine Learning in Healthcare
- Indoor and Outdoor Localization Technologies
- Context-Aware Activity Recognition Systems
- Computational and Text Analysis Methods
- COVID-19 Diagnosis Using AI
- Genetics and Neurodevelopmental Disorders
- Artificial Intelligence in Games
- Aerosol Filtration and Electrostatic Precipitation
- Advanced MEMS and NEMS Technologies
- Cyclone Separators and Fluid Dynamics
- Music and Audio Processing
- Embedded Systems and FPGA Applications
Hohai University (2023)
Northeastern University (2023)
Stanford University (2013-2021)
Stanford Medicine (2020)
Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable. A recent approach for finding a common representation of the two domains is via domain adversarial training (Ganin & Lempitsky, 2015), which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space. However, this approach faces two critical limitations: 1) if the feature extraction function has high capacity, then feature distribution matching is a weak constraint; 2)...
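A minimal sketch of the adversarial feature-matching idea described above, assuming toy tensors and illustrative network sizes (this is a generic DANN-style setup with a gradient-reversal layer, not the specific model from the paper):

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()

feature_extractor = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
label_classifier = nn.Linear(16, 10)      # trained only on labeled source data
domain_discriminator = nn.Linear(16, 2)   # predicts source (0) vs. target (1)

opt = torch.optim.Adam(
    list(feature_extractor.parameters())
    + list(label_classifier.parameters())
    + list(domain_discriminator.parameters()), lr=1e-3)

x_src, y_src = torch.randn(8, 64), torch.randint(0, 10, (8,))
x_tgt = torch.randn(8, 64)                # unlabeled target batch

f_src, f_tgt = feature_extractor(x_src), feature_extractor(x_tgt)
cls_loss = nn.functional.cross_entropy(label_classifier(f_src), y_src)

# The discriminator learns to tell the domains apart; through the reversed
# gradient, the feature extractor learns to make the two distributions match.
dom_logits = domain_discriminator(GradReverse.apply(torch.cat([f_src, f_tgt])))
dom_labels = torch.cat([torch.zeros(8), torch.ones(8)]).long()
dom_loss = nn.functional.cross_entropy(dom_logits, dom_labels)

opt.zero_grad()
(cls_loss + dom_loss).backward()
opt.step()
```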
Adversarial examples are typically constructed by perturbing an existing data point within a small matrix norm, and current defense methods are focused on guarding against this type of attack. In this paper, we propose unrestricted adversarial examples, a new threat model where the attackers are not restricted to norm-bounded perturbations. Different from perturbation-based attacks, we synthesize adversarial examples entirely from scratch using conditional generative models. Specifically, we first train an Auxiliary Classifier Generative...
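A hedged sketch of generating adversarial examples from scratch by searching a conditional generator's latent space. The `generator`, `classifier`, and `aux_classifier` names are stand-ins for pretrained models, and the loss weights and step counts are illustrative rather than taken from the paper:

```python
import torch
import torch.nn.functional as F

def find_unrestricted_adversarial(generator, classifier, aux_classifier,
                                  source_class, target_class,
                                  latent_dim=128, steps=200):
    z = torch.randn(1, latent_dim, requires_grad=True)
    y = torch.tensor([source_class])
    opt = torch.optim.Adam([z], lr=0.05)
    for _ in range(steps):
        x = generator(z, y)            # sample that should still look like `source_class`
        # Push the victim classifier toward the wrong label while the auxiliary
        # classifier (a proxy for the true label) keeps the intended class.
        attack_loss = F.cross_entropy(classifier(x), torch.tensor([target_class]))
        consistency = F.cross_entropy(aux_classifier(x), y)
        opt.zero_grad()
        (attack_loss + consistency).backward()
        opt.step()
    return generator(z, y).detach()
```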
The variational autoencoder (VAE) is a popular model for density estimation and representation learning. Canonically, the variational principle suggests to prefer an expressive inference model so that the variational approximation is accurate. However, it is often overlooked that an overly-expressive inference model can be detrimental to the test set performance of both the amortized posterior approximator and, more importantly, the generative density estimator. In this paper, we leverage the fact that VAEs rely on amortized inference and propose techniques for amortized inference regularization (AIR) that control the smoothness of the inference model. We...
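A minimal sketch of one simple way to smooth an amortized inference network: corrupt the encoder's input with noise during training (a denoising-style regularizer). The architecture, noise scale, and dimensions below are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

enc = nn.Linear(784, 2 * 16)   # outputs mean and log-variance of q(z|x)
dec = nn.Linear(16, 784)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.rand(32, 784)                       # toy batch in [0, 1)
x_noisy = x + 0.1 * torch.randn_like(x)       # regularize inference via input corruption

mu, logvar = enc(x_noisy).chunk(2, dim=1)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
recon = torch.sigmoid(dec(z))

recon_loss = nn.functional.binary_cross_entropy(recon, x, reduction='sum') / x.size(0)
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)

opt.zero_grad()
(recon_loss + kl).backward()
opt.step()
```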
Given datasets from multiple domains, a key challenge is to efficiently exploit these data sources for modeling a target domain. Variants of this problem have been studied in many contexts, such as cross-domain translation and domain adaptation. We propose AlignFlow, a generative framework that models each domain via a normalizing flow. The use of flows allows for a) flexibility in specifying learning objectives via adversarial training, maximum likelihood estimation, or a hybrid of the two methods; b) exact inference...
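A hedged sketch of the shared-latent-space idea behind flow-based domain translation: one invertible map per domain, composed through a common latent space. The elementwise affine "flow" here is a toy stand-in for a real normalizing flow, not the AlignFlow architecture:

```python
import torch
import torch.nn as nn

class ToyAffineFlow(nn.Module):
    """Invertible elementwise affine map z = (x - b) * exp(-s)."""
    def __init__(self, dim):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(dim))
        self.b = nn.Parameter(torch.zeros(dim))

    def to_latent(self, x):   # x -> z, with a tractable log-determinant for MLE
        return (x - self.b) * torch.exp(-self.s)

    def to_data(self, z):     # z -> x (exact inverse)
        return z * torch.exp(self.s) + self.b

flow_a, flow_b = ToyAffineFlow(8), ToyAffineFlow(8)

x_a = torch.randn(4, 8)                 # sample from domain A
z = flow_a.to_latent(x_a)               # exact inference of the shared latent code
x_b = flow_b.to_data(z)                 # translation into domain B, no separate decoder
x_a_cycle = flow_a.to_data(flow_b.to_latent(x_b))   # cycle consistency holds by invertibility
```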
Embed-to-control (E2C) is a model for solving high-dimensional optimal control problems by combining variational auto-encoders with locally-optimal controllers. However, the E2C model suffers from two major drawbacks: 1) its objective function does not correspond to the likelihood of the data sequence and 2) the encoder used for embedding typically has large approximation error, especially when there is noise in the system dynamics. In this paper, we present a new model for learning a robust locally-linear controllable embedding (RCE). Our...
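A minimal sketch of the locally-linear latent dynamics these models build on: an encoder maps observations to a latent state, and a small network predicts the local matrices of a transition z_{t+1} ≈ A z_t + B u_t + o. The shapes and networks are illustrative assumptions:

```python
import torch
import torch.nn as nn

z_dim, u_dim, obs_dim = 4, 2, 64
encoder = nn.Linear(obs_dim, z_dim)
# predicts a flattened A (z_dim*z_dim), B (z_dim*u_dim), and offset o (z_dim)
dyn_net = nn.Linear(z_dim, z_dim * z_dim + z_dim * u_dim + z_dim)

x_t = torch.randn(1, obs_dim)
u_t = torch.randn(1, u_dim)

z_t = encoder(x_t)
params = dyn_net(z_t)
A = params[:, :z_dim * z_dim].view(1, z_dim, z_dim)
B = params[:, z_dim * z_dim:z_dim * z_dim + z_dim * u_dim].view(1, z_dim, u_dim)
o = params[:, -z_dim:]

# Locally-linear transition in latent space; a locally-optimal controller
# (e.g. iLQR) can then be run directly on (A, B, o).
z_next = (torch.bmm(A, z_t.unsqueeze(-1)).squeeze(-1)
          + torch.bmm(B, u_t.unsqueeze(-1)).squeeze(-1) + o)
```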
Learning disentangled representations that correspond to factors of variation in real-world data is critical for interpretable and human-controllable machine learning. Recently, concerns about the viability of learning disentangled representations in a purely unsupervised manner have spurred a shift toward the incorporation of weak supervision. However, there is currently no formalism that identifies when and how weak supervision will guarantee disentanglement. To address this issue, we provide a theoretical framework to assist in analyzing the disentanglement...
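A small illustration of one common form of weak supervision, match pairing: the learner sees pairs of samples known to share one factor of variation. The toy generative process and names below are illustrative only, not the paper's formalism:

```python
import torch

def sample_match_pair(shared_index, n_factors=3):
    """Return two factor vectors that agree on exactly one known factor."""
    f1 = torch.rand(n_factors)
    f2 = torch.rand(n_factors)
    f2[shared_index] = f1[shared_index]   # the weak label: which factor is shared
    return f1, f2

f1, f2 = sample_match_pair(shared_index=0)
# A weakly supervised learner would observe (x1, x2, shared_index), with x1, x2
# rendered from f1, f2; the theoretical question is when such pairing guarantees
# that the learned latent for the shared factor is disentangled from the rest.
```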
High-dimensional observations are a major challenge in the application of model-based reinforcement learning (MBRL) to real-world environments. To handle high-dimensional sensory inputs, existing approaches use representation learning to map observations into a lower-dimensional latent space that is more amenable to dynamics estimation and planning. In this work, we present an information-theoretic approach that employs temporal predictive coding to encode those elements of the environment that can be predicted across time. Since this approach focuses on...
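A hedged sketch of a temporal-predictive (CPC/InfoNCE-style) objective: latent codes are trained so that the predicted next latent scores higher against the true next observation's code than against negatives drawn from the batch. Dimensions and networks are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim, z_dim, batch = 64, 4, 16, 32
encoder = nn.Linear(obs_dim, z_dim)
predictor = nn.Linear(z_dim + act_dim, z_dim)

o_t = torch.randn(batch, obs_dim)
a_t = torch.randn(batch, act_dim)
o_next = torch.randn(batch, obs_dim)

z_t = encoder(o_t)
z_next = encoder(o_next)
z_pred = predictor(torch.cat([z_t, a_t], dim=1))

# InfoNCE: each row's positive is its own next-state code; other rows are negatives.
logits = z_pred @ z_next.t()              # (batch, batch) similarity scores
labels = torch.arange(batch)
contrastive_loss = F.cross_entropy(logits, labels)
contrastive_loss.backward()
```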
We introduce a new framework for training deep generative models for high-dimensional conditional density estimation. The Bottleneck Conditional Density Estimator (BCDE) is a variant of the conditional variational autoencoder (CVAE) that employs layer(s) of stochastic variables as the bottleneck between the input $x$ and target $y$, where both are high-dimensional. Crucially, we propose a hybrid training method that blends the conditional model with a joint model. Hybrid blending is key to an effective BCDE, which avoids overfitting and provides a novel mechanism...
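A minimal sketch of the bottleneck structure described above: a stochastic code z sits between input x and target y, so y depends on x only through z. The networks and dimensions are toy stand-ins, not the BCDE architecture or its hybrid training objective:

```python
import torch
import torch.nn as nn

x_dim, y_dim, z_dim = 392, 392, 32
cond_encoder = nn.Linear(x_dim, 2 * z_dim)   # p(z | x): the stochastic bottleneck
decoder = nn.Linear(z_dim, y_dim)            # p(y | z): target depends on x only via z

x = torch.rand(16, x_dim)
mu, logvar = cond_encoder(x).chunk(2, dim=1)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # sample the bottleneck code
y_logits = decoder(z)                        # parameters of the predictive distribution over y
```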
High-dimensional observations and unknown dynamics are major challenges when applying optimal control to many real-world decision making tasks. The Learning Controllable Embedding (LCE) framework addresses these challenges by embedding the observations into a lower-dimensional latent space, estimating the latent dynamics, and then performing control directly in the latent space. To ensure the learned latent dynamics are predictive of next-observations, all existing LCE approaches decode back into the observation space and explicitly perform next-observation prediction---a challenging...
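A hedged sketch of the "control directly in latent space" step, using simple random-shooting model-predictive control over a learned latent transition model. The dynamics network, cost, and goal code are placeholders, not the method proposed in the paper:

```python
import torch
import torch.nn as nn

z_dim, u_dim, horizon, n_candidates = 8, 2, 10, 256
dynamics = nn.Linear(z_dim + u_dim, z_dim)   # learned latent transition model
z_goal = torch.zeros(z_dim)                  # assume a known goal in latent space

def plan(z0):
    """Roll candidate action sequences through the latent dynamics, pick the best first action."""
    actions = torch.randn(n_candidates, horizon, u_dim)
    z = z0.expand(n_candidates, z_dim)
    cost = torch.zeros(n_candidates)
    for t in range(horizon):
        z = dynamics(torch.cat([z, actions[:, t]], dim=1))
        cost = cost + ((z - z_goal) ** 2).sum(dim=1)   # distance-to-goal cost in latent space
    return actions[cost.argmin(), 0]                   # execute only the first action (MPC-style)

with torch.no_grad():
    first_action = plan(torch.randn(z_dim))
```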