- Advanced Memory and Neural Computing
- Ferroelectric and Negative Capacitance Devices
- Neural dynamics and brain function
- Neural Networks and Reservoir Computing
- ECG Monitoring and Analysis
- Non-Invasive Vital Sign Monitoring
- Cognitive Science and Mapping
- EEG and Brain-Computer Interfaces
- Medical Imaging Techniques and Applications
- Cardiovascular Function and Risk Factors
- COVID-19 diagnosis using AI
- Model Reduction and Neural Networks
- Memory and Neural Mechanisms
- Cephalopods and Marine Biology
- Phonocardiography and Auscultation Techniques
Chinese University of Hong Kong
2023-2024
University of Hong Kong
2018-2024
University of California, Irvine
2021-2023
Hong Kong University of Science and Technology
2018
Myocardial infarction (MI) is a medical emergency for which the early detection of symptoms is desirable. The prevalence of portable electrocardiogram (ECG) devices makes frequent screening for MI possible. In this study, we develop a classifier that combines both convolutional and recurrent neural networks, suitable for wearable ECG devices with only a single-lead recording. It performs multiclass classification to discriminate MI records from those of healthy individuals and patients with existing chronic heart conditions, as...
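The abstract does not specify the exact architecture; as a rough, untrained sketch of the pipeline shape it describes (all layer sizes and the four-class split are illustrative assumptions), a convolutional front end over a raw single-lead signal feeding a recurrent layer and a softmax head might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride=4):
    """Valid 1-D convolution with ReLU: x (T,), kernels (n_k, k_len) -> (n_k, T_out)."""
    k_len = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k_len)[::stride]
    return np.maximum(windows @ kernels.T, 0.0).T

def rnn_last_state(feats, W_in, W_rec):
    """Simple tanh RNN over the feature sequence; returns the final hidden state."""
    h = np.zeros(W_rec.shape[0])
    for t in range(feats.shape[1]):
        h = np.tanh(W_in @ feats[:, t] + W_rec @ h)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: 4 diagnostic classes (e.g. MI / healthy / chronic conditions)
n_classes, n_kernels, k_len, hidden = 4, 8, 16, 12
kernels = rng.normal(0, 0.1, (n_kernels, k_len))
W_in = rng.normal(0, 0.1, (hidden, n_kernels))
W_rec = rng.normal(0, 0.1, (hidden, hidden))
W_out = rng.normal(0, 0.1, (n_classes, hidden))

ecg = rng.normal(0, 1, 1000)             # stand-in for one single-lead ECG record
feats = conv1d(ecg, kernels)             # convolution: local waveform morphology
h = rnn_last_state(feats, W_in, W_rec)   # recurrence: rhythm / temporal context
probs = softmax(W_out @ h)               # multiclass prediction
print(probs)
```

The division of labor is the point: the convolutional stage captures beat morphology, while the recurrent stage aggregates it over the whole recording.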
Humans and many animals possess the remarkable ability to navigate environments by seamlessly switching between first-person perspectives (FPP) and global map perspectives (GMP). However, the neural mechanisms that underlie this transformation remain poorly understood. In this study, we developed a variational autoencoder (VAE) model, enhanced with recurrent neural networks (RNNs), to investigate the computational principles behind perspective transformations. Our results reveal that temporal sequence modeling is crucial for...
Researchers have developed machine learning-based ECG diagnostic algorithms that match or even surpass cardiologist-level performance. However, most of them cannot be used in real-world settings, as older-generation machines do not permit the installation of new algorithms.
To achieve the low-latency, high-throughput, and energy-efficiency benefits of Spiking Neural Networks (SNNs), reducing memory and compute requirements when running on neuromorphic hardware is an important step. Neuromorphic architectures allow massively parallel computation with variable local bit-precisions. However, how different bit-precisions should be allocated to the layers or connections of a network is not trivial. In this work, we demonstrate that layer-wise Hessian trace analysis can measure...
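The Hessian trace of the loss with respect to a layer's weights is commonly estimated with Hutchinson's method, which needs only Hessian-vector products. A minimal sketch of that estimator on toy stand-in "layer Hessians" (the bit-allocation rule at the end is an illustrative assumption, not the paper's exact scheme):

```python
import numpy as np

rng = np.random.default_rng(1)

def hutchinson_trace(hvp, dim, n_samples=200):
    """Estimate tr(H) from Hessian-vector products hvp(v) = H @ v,
    using Rademacher probe vectors (Hutchinson's estimator)."""
    est = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=dim)
        est += v @ hvp(v)
    return est / n_samples

def make_layer(scale, dim=50):
    """Toy PSD Hessian stand-in; in practice these come from the SNN loss."""
    A = rng.normal(0, scale, (dim, dim))
    return A.T @ A

layers = [make_layer(s) for s in (1.0, 0.3, 0.1)]
traces = [hutchinson_trace(lambda v, H=H: H @ v, 50) for H in layers]

# Heuristic: give more bits to layers with a larger Hessian trace
# (higher loss sensitivity); the exact mapping is an assumption here.
bits = [max(2, round(8 * t / max(traces))) for t in traces]
print(traces, bits)
```

Because only matrix-vector products are needed, this scales to layers whose full Hessian could never be materialized.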
Today's machine learning (ML) systems, especially those running in server farms on workloads such as deep neural networks, which require billions of parameters and many hours to train a model, consume a significant amount of energy. To combat this, researchers have been focusing on new, emerging neuromorphic computing models. Two such models are Hyperdimensional Computing (HDC) and Spiking Neural Networks (SNNs), each with its own benefits. HDC has various desirable properties that other machine learning (ML) algorithms...
Today's machine learning (ML) systems, running workloads such as deep neural networks, which require billions of parameters and many hours to train a model, consume a significant amount of energy. Due to the complexity of computation and topology, even quantized models are hard to deploy on edge devices under energy constraints. To combat this, researchers have been focusing on new, emerging neuromorphic computing models. Two of those are hyperdimensional computing (HDC) and spiking neural networks (SNNs), both with their own benefits. HDC...
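The abstract is truncated, but the desirable HDC properties it alludes to, such as noise robustness and training without gradients, can be seen in a minimal bipolar-hypervector sketch (the dimensionality, noise level, and two-class setup are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000  # hypervector dimensionality (typically in the thousands)

def rand_hv():
    return rng.choice([-1, 1], size=D)

def bundle(hvs):
    """Superpose hypervectors; sign() keeps components bipolar."""
    s = np.sign(np.sum(hvs, axis=0))
    s[s == 0] = 1
    return s

def cos_sim(a, b):
    return (a @ b) / D

def noisy(hv, flip=0.2):
    """Flip a fraction of components, simulating noisy observations."""
    mask = rng.random(D) < flip
    return np.where(mask, -hv, hv)

# Build two class hypervectors by bundling a handful of noisy examples,
# then classify a fresh noisy example by nearest class hypervector --
# no gradients, no iterative training.
proto_a, proto_b = rand_hv(), rand_hv()
class_a = bundle([noisy(proto_a) for _ in range(5)])
class_b = bundle([noisy(proto_b) for _ in range(5)])

query = noisy(proto_a)
pred = 'A' if cos_sim(query, class_a) > cos_sim(query, class_b) else 'B'
print(pred)
```

Even with 20% of components corrupted, the query remains far closer to its own class vector than to the other, which is why HDC degrades gracefully under low-precision, noisy hardware.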
Physics modeling can improve patient monitoring in clinical applications such as cardiovascular flows, but is challenging due to the limited memory and compute capability available on typical edge devices. We present a novel way to train a Physics-Informed Neural Network (PINN) at the edge while using high-performance computing in the cloud, without transfer of sensitive data. This is achieved by assigning the data-fitting (regression) loss to the edge, where the data is acquired, and the physics-informed (residual) loss to the cloud, where computational capabilities are...
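The loss split described can be sketched on a toy problem. Here a polynomial stand-in for the PINN is fit to the ODE u' + u = 0 with measurements of u(t) = e^(-t): the "edge" computes the data-fitting gradient on its private measurements, the "cloud" computes the physics-residual gradient on collocation points, and only gradients are exchanged. All sizes, the ODE, and the polynomial model are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Model: u(t) = sum_k a_k t^k, fit to u' + u = 0 with noiseless measurements.
deg = 3
a = np.zeros(deg + 1)

t_edge = np.linspace(0.0, 1.0, 8)        # measurement times (edge)
y_edge = np.exp(-t_edge)                 # "patient" data, never leaves the edge
t_cloud = np.linspace(0.01, 1.0, 32)     # collocation points (cloud)

def u(a, t):  return sum(a[k] * t**k for k in range(deg + 1))
def du(a, t): return sum(k * a[k] * t**(k - 1) for k in range(1, deg + 1))

def edge_grad(a):
    """Gradient of the data-fitting loss, computed where data is acquired."""
    err = u(a, t_edge) - y_edge
    return np.array([2 * np.mean(err * t_edge**k) for k in range(deg + 1)])

def cloud_grad(a):
    """Gradient of the physics (residual) loss, computed in the cloud."""
    r = du(a, t_cloud) + u(a, t_cloud)
    return np.array([2 * np.mean(r * (k * t_cloud**(k - 1) + t_cloud**k))
                     for k in range(deg + 1)])

def total_loss(a):
    return np.mean((u(a, t_edge) - y_edge) ** 2) + \
           np.mean((du(a, t_cloud) + u(a, t_cloud)) ** 2)

loss0 = total_loss(a)
for _ in range(3000):
    a -= 0.02 * (edge_grad(a) + cloud_grad(a))   # only gradients are exchanged
loss1 = total_loss(a)
print(loss0, loss1)
```

The raw measurements `y_edge` appear only inside `edge_grad`, which is the privacy property the split is meant to provide.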