Zheyuan Hu

ORCID: 0000-0002-0116-9874
Research Areas
  • Model Reduction and Neural Networks
  • Neural Networks and Applications
  • Gaussian Processes and Bayesian Inference
  • Magnetic Properties and Applications
  • Nuclear Engineering Thermal-Hydraulics
  • Fractional Differential Equations Solutions
  • Speech and Dialogue Systems
  • Machine Learning in Healthcare
  • AI in Service Interactions
  • Machine Learning in Materials Science
  • Robotics and Automated Systems
  • Reinforcement Learning in Robotics
  • Domain Adaptation and Few-Shot Learning
  • Manufacturing Process and Optimization
  • Tensor Decomposition and Applications
  • Speech Recognition and Synthesis
  • Generative Adversarial Networks and Image Synthesis
  • Industrial Vision Systems and Defect Detection
  • Nuclear Reactor Physics and Engineering
  • Quantum Many-Body Systems
  • Nanofluid Flow and Heat Transfer
  • Risk and Portfolio Optimization
  • Advanced Bandit Algorithms Research
  • Iterative Methods for Nonlinear Equations
  • Machine Learning and ELM

National University of Singapore
2022-2025

University of California, Berkeley
2024

Beijing Institute of Technology
2022-2024

Tsinghua University
2021

University of Science and Technology of China
2020

Physics-informed neural networks (PINNs) have become a popular choice for solving high-dimensional partial differential equations (PDEs) due to their excellent approximation power and generalization ability. Recently, Extended PINNs (XPINNs) based on domain decomposition methods have attracted considerable attention for their effectiveness in modeling multiscale and multiphysics problems and for their parallelization. However, theoretical understanding of their convergence properties remains unexplored. In this study, we take an...

10.1137/21m1447039 article EN SIAM Journal on Scientific Computing 2022-09-27
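For intuition, here is a minimal, hedged sketch of the domain-decomposition loss behind XPINNs, written for a 1D Poisson problem with two subdomains; the network sizes, unit loss weights, and interface location are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical two-subdomain XPINN loss for u''(x) = f(x) on [0, 1],
# with an interface at x = 0.5 and boundary conditions u(0) = u(1) = 0.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    h = jnp.atleast_1d(x)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

f = lambda x: -jnp.pi**2 * jnp.sin(jnp.pi * x)   # exact solution: u = sin(pi x)

def residual(params, x):
    u_xx = jax.grad(jax.grad(lambda t: mlp(params, t)))(x)
    return u_xx - f(x)

def xpinn_loss(p1, p2, xs1, xs2, x_if=0.5):
    r1 = jax.vmap(lambda x: residual(p1, x))(xs1)    # residual in subdomain 1
    r2 = jax.vmap(lambda x: residual(p2, x))(xs2)    # residual in subdomain 2
    cont = (mlp(p1, x_if) - mlp(p2, x_if)) ** 2      # solution continuity
    rcont = (residual(p1, x_if) - residual(p2, x_if)) ** 2  # residual continuity
    bc = mlp(p1, 0.0) ** 2 + mlp(p2, 1.0) ** 2       # boundary conditions
    return jnp.mean(r1**2) + jnp.mean(r2**2) + cont + rcont + bc

# Example setup: p1 = init_mlp(jax.random.PRNGKey(0), [1, 16, 16, 1])
```

The interface terms are what couple the otherwise independent subnetworks, and they are also what any convergence analysis must account for relative to a single-network PINN.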

The curse-of-dimensionality taxes computational resources heavily, with exponentially increasing cost as the dimension increases. This poses great challenges in solving high-dimensional partial differential equations (PDEs), as Richard E. Bellman first pointed out over 60 years ago. While there has been some recent success in solving numerical PDEs in high dimensions, such computations are prohibitively expensive, and true scaling of general nonlinear PDEs to high dimensions has never been achieved. We develop a new method of scaling up...

10.1016/j.neunet.2024.106369 article EN cc-by Neural Networks 2024-05-07
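The scaling strategy rests on decomposing the differential operator dimension by dimension and sampling terms stochastically. A hedged sketch of that idea for the Laplacian term of a PINN residual, assuming JAX; the helper `laplacian_sdgd` and its rescaling are illustrative, not the paper's exact estimator.

```python
# Unbiased dimension-sampled Laplacian: instead of summing second
# derivatives over all d coordinates, sample n_sample of them and rescale.
import jax
import jax.numpy as jnp

def laplacian_sdgd(u, x, key, n_sample):
    """Estimate sum_i d^2u/dx_i^2 at x from a random subset of dimensions."""
    d = x.shape[0]
    idx = jax.random.choice(key, d, shape=(n_sample,), replace=False)

    def second_deriv(i):
        e = jnp.zeros(d).at[i].set(1.0)               # unit vector along dim i
        du = lambda t: jnp.vdot(jax.grad(u)(t), e)    # first directional derivative
        return jnp.vdot(jax.grad(du)(x), e)           # second derivative along dim i

    # Rescaling by d / n_sample makes the estimate unbiased in expectation.
    return (d / n_sample) * jnp.sum(jax.vmap(second_deriv)(idx))
```

Per-step cost then scales with the number of sampled dimensions rather than with d, which is the lever that makes very high-dimensional PDEs tractable.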

Molecular property prediction (e.g., energy) is an essential problem in chemistry and biology. Unfortunately, many supervised learning methods usually suffer from the problem of scarce labeled molecules in the chemical space, where such labels are generally obtained by Density Functional Theory (DFT) calculation, which is extremely computationally costly. An effective solution is to incorporate unlabeled molecules in a semi-supervised fashion. However, learning a representation for large amounts of molecules is challenging, including the joint issue of both...

10.1145/3394486.3403117 preprint EN 2020-08-20

Physics-Informed Neural Networks (PINNs) have proven effective in solving partial differential equations (PDEs), especially when some data are available, by seamlessly blending data and physics. However, extending PINNs to high-dimensional and even high-order PDEs encounters significant challenges due to the computational cost associated with automatic differentiation in the residual loss function calculation. Herein, we address these limitations by introducing the Hutchinson Trace Estimation (HTE) method...

10.1016/j.cma.2024.116883 article EN cc-by Computer Methods in Applied Mechanics and Engineering 2024-03-01
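The core identity behind HTE is that the Laplacian is the trace of the Hessian, tr(∇²u) = E[vᵀ(∇²u)v] for random probe vectors v with identity covariance. A minimal sketch, assuming Rademacher probes and JAX Hessian-vector products; the paper's variance-reduction details may differ.

```python
import jax
import jax.numpy as jnp

def laplacian_hte(u, x, key, n_probes):
    """Hutchinson estimate of tr(Hess u(x)), i.e. the Laplacian at x."""
    d = x.shape[0]
    vs = jax.random.rademacher(key, (n_probes, d), dtype=x.dtype)  # +/-1 probes

    def quad_form(v):
        # Hessian-vector product without materializing the d x d Hessian:
        # the jvp of grad(u) at x in direction v computes (Hess u) v.
        _, hv = jax.jvp(jax.grad(u), (x,), (v,))
        return jnp.vdot(v, hv)

    return jnp.mean(jax.vmap(quad_form)(vs))
```

Each probe costs one Hessian-vector product, so time and memory stay linear in the dimension instead of quadratic, which is what makes high-dimensional and high-order residual losses affordable.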

The Fokker-Planck (FP) equation is a foundational PDE in stochastic processes. However, the curse of dimensionality (CoD) poses a challenge when dealing with high-dimensional FP PDEs. Although Monte Carlo methods and vanilla Physics-Informed Neural Networks (PINNs) have shown the potential to tackle the CoD, both methods exhibit numerical errors in high dimensions when dealing with the probability density function (PDF) associated with Brownian motion. The point-wise PDF values tend to decrease exponentially as the dimension increases, surpassing...

10.48550/arxiv.2402.07465 preprint EN arXiv (Cornell University) 2024-02-12
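The exponential decay the abstract points to is easiest to see in log space. For pure Brownian motion with unit diffusion, a standard transform (consistent with the abstract's observation, though not necessarily the paper's exact formulation) replaces the PDF p by q = log p:

```latex
% Fokker--Planck equation for Brownian motion, and its log-density form:
\partial_t p(x,t) = \Delta p(x,t),
\qquad q := \log p
\;\Longrightarrow\;
\partial_t q = \Delta q + \lVert \nabla q \rVert^{2}.
```

For the Gaussian solution p(x,t) = (2πt)^{-d/2} exp(-‖x‖²/2t), the point-wise value of p underflows rapidly as d grows, while q = -(d/2)·log(2πt) - ‖x‖²/2t remains well scaled.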

Machine learning algorithms with empirical risk minimization usually suffer from poor generalization performance due to the greedy exploitation of correlations among the training data, which are not stable under distributional shifts. Recently, some invariant learning methods for out-of-distribution (OOD) generalization have been proposed by leveraging multiple training environments to find invariant relationships. However, modern datasets are frequently assembled by merging data from multiple sources without explicit source labels. The resultant unobserved...

10.48550/arxiv.2105.03818 preprint EN other-oa arXiv (Cornell University) 2021-01-01

The curse-of-dimensionality taxes computational resources heavily, with exponentially increasing cost as the dimension increases. This poses great challenges in solving high-dimensional PDEs, as Richard E. Bellman first pointed out over 60 years ago. While there has been some recent success in numerically solving partial differential equations (PDEs) in high dimensions, such computations are prohibitively expensive, and true scaling of general nonlinear PDEs to high dimensions has never been achieved. We develop a new...

10.48550/arxiv.2307.12306 preprint EN cc-by arXiv (Cornell University) 2023-01-01

While physics-informed neural networks (PINNs) have been proven effective for low-dimensional partial differential equations (PDEs), the computational cost remains a hurdle in high-dimensional scenarios. This is particularly pronounced when computing high-order and high-dimensional derivatives in the physics-informed loss. Randomized Smoothing PINN (RS-PINN) introduces Gaussian noise for stochastic smoothing of the original neural net model, enabling Monte Carlo methods for derivative approximation and eliminating the need for costly auto-differentiation...

10.48550/arxiv.2311.15283 preprint EN cc-by arXiv (Cornell University) 2023-01-01
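Concretely, smoothing the network u with Gaussian noise, u_σ(x) = E[u(x + σε)], lets both the gradient and the Laplacian be written as expectations over function values alone (Stein's identity), so no nested automatic differentiation is needed. A hedged sketch using the standard Gaussian-smoothing estimators; the paper's bias-variance corrections may differ.

```python
import jax
import jax.numpy as jnp

def smoothed_grad_and_laplacian(u, x, key, sigma, n_mc):
    """Monte Carlo gradient and Laplacian of the smoothed model u_sigma."""
    d = x.shape[0]
    eps = sigma * jax.random.normal(key, (n_mc, d))     # Gaussian perturbations
    vals = jax.vmap(lambda e: u(x + e))(eps)            # only function values
    # Stein's identity: grad u_sigma(x) = E[eps * u(x + eps)] / sigma^2
    grad = jnp.mean(eps * vals[:, None], axis=0) / sigma**2
    # Hessian trace: E[(||eps||^2 - d * sigma^2) * u(x + eps)] / sigma^4
    lap = jnp.mean((jnp.sum(eps**2, axis=1) - d * sigma**2) * vals) / sigma**4
    return grad, lap
```

The inherent trade-off shows up directly here: larger σ and more Monte Carlo samples reduce variance, but any σ > 0 biases the smoothed model away from the original.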

Physics-informed neural networks (PINNs) have become a popular choice for solving high-dimensional partial differential equations (PDEs) due to their excellent approximation power and generalization ability. Recently, Extended PINNs (XPINNs) based on domain decomposition methods have attracted considerable attention for their effectiveness in modeling multiscale and multiphysics problems and for their parallelization. However, theoretical understanding of their convergence properties remains unexplored. In this study, we take an...

10.48550/arxiv.2109.09444 preprint EN cc-by arXiv (Cornell University) 2021-01-01

Physics-Informed Neural Networks (PINNs) have proven effective in solving partial differential equations (PDEs), especially when some data are available, by seamlessly blending data and physics. However, extending PINNs to high-dimensional and even high-order PDEs encounters significant challenges due to the computational cost associated with automatic differentiation in the residual loss. Herein, we address these limitations by introducing Hutchinson Trace Estimation (HTE). Starting with the second-order high-dimensional PDEs ubiquitous...

10.48550/arxiv.2312.14499 preprint EN cc-by arXiv (Cornell University) 2023-01-01

In recent years, deep-learning schemes have been widely and successfully used to diagnose bearing faults. However, as operating conditions change, the distribution of new data may differ from that of previously learned data. Training using only old data cannot guarantee good performance when handling new data, and vice versa. Here, we present an incremental learning scheme based on the Repeated Replay Memory Indexing (R-REMIND) method for bearing fault diagnosis. R-REMIND can learn information under various working...

10.3390/machines10050338 article EN cc-by Machines 2022-05-06
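For readers unfamiliar with replay-based incremental learning, a generic sketch of the rehearsal mechanism follows: a bounded memory whose samples are mixed into each new-condition training batch. This is a plain illustration of the general idea, not the internals of R-REMIND.

```python
import random

class ReplayMemory:
    """Bounded memory of past examples for rehearsal during new training."""

    def __init__(self, capacity):
        self.capacity, self.items = capacity, []

    def add(self, example):
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:  # evict a random slot so the memory spans old conditions
            self.items[random.randrange(self.capacity)] = example

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

# Per step: train on a new-condition batch plus memory.sample(k), then
# memory.add(...) a few new examples so later stages can rehearse them.
```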

The curse-of-dimensionality taxes computational resources heavily, with exponentially increasing cost as the dimension increases. This poses great challenges in solving high-dimensional PDEs, as Richard E. Bellman first pointed out over 60 years ago. While there has been some recent success in numerically solving partial differential equations (PDEs) in high dimensions, such computations are prohibitively expensive, and true scaling of general nonlinear PDEs to high dimensions has never been achieved. We develop a new...

10.2139/ssrn.4641406 preprint EN 2023-01-01

The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. testing data. Recently, invariant learning methods for out-of-distribution (OOD) generalization have been proposed, seeking causally invariant relationships across multiple environments. However, modern datasets are frequently multi-sourced without explicit source labels, rendering many such methods inapplicable. In this paper, we propose Kernelized Heterogeneous Risk...

10.48550/arxiv.2110.12425 preprint EN other-oa arXiv (Cornell University) 2021-01-01

Kohn-Sham Density Functional Theory (KS-DFT) has traditionally been solved by the Self-Consistent Field (SCF) method. Behind the SCF loop is the physics intuition of solving a system of non-interacting single-electron wave functions under an effective potential. In this work, we propose a deep learning approach to KS-DFT. First, in contrast to the conventional SCF loop, we directly minimize the total energy by reparameterizing the orthogonal constraint as a feed-forward computation. We prove that such an approach has the same expressivity as the SCF method, yet...

10.48550/arxiv.2303.00399 preprint EN cc-by arXiv (Cornell University) 2023-01-01
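The reparameterization step can be made concrete in a few lines: produce an orthogonal coefficient matrix as a differentiable feed-forward function of unconstrained weights, then minimize the energy directly. A minimal sketch assuming JAX and a hypothetical `total_energy` functional; the paper's exact parameterization may differ.

```python
import jax
import jax.numpy as jnp

def orthogonal(W):
    # QR factorization yields a matrix with orthonormal columns, and
    # gradients flow through it, so the constraint holds by construction.
    Q, _ = jnp.linalg.qr(W)
    return Q

def energy_loss(W, total_energy):
    C = orthogonal(W)           # orthogonal orbital coefficients
    return total_energy(C)      # hypothetical KS-DFT total-energy functional

# Plain gradient descent on the unconstrained W replaces the SCF loop:
grad_fn = jax.grad(energy_loss)
```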