Lihua Zhang

ORCID: 0000-0003-0467-4347
Research Areas
  • EEG and Brain-Computer Interfaces
  • Neuroscience and Neural Engineering
  • Advanced Memory and Neural Computing
  • Advanced Neural Network Applications
  • Human Pose and Action Recognition
  • Robotics and Sensor-Based Localization
  • Neural dynamics and brain function
  • Sentiment Analysis and Opinion Mining
  • Functional Brain Connectivity Studies
  • Robotic Path Planning Algorithms
  • Video Surveillance and Tracking Methods
  • Emotion and Mood Recognition
  • AI in cancer detection
  • Anomaly Detection Techniques and Applications
  • Adaptive Control of Nonlinear Systems
  • Reinforcement Learning in Robotics
  • Medical Image Segmentation Techniques
  • Robotic Locomotion and Control
  • Gaze Tracking and Assistive Technology
  • Gait Recognition and Analysis
  • Corrosion Behavior and Inhibition
  • Advanced Measurement and Detection Methods
  • Iterative Learning Control Systems
  • Hand Gesture Recognition Systems
  • Robotics and Automated Systems

Ji Hua Laboratory
2021-2025

State Key Laboratory of Medical Neurobiology
2022-2025

Fudan University
2009-2025

Fujian Provincial Hospital
2025

Fuzhou University
2025

Minzu University of China
2021-2025

University of Utah
2025

Northeast Institute of Geography and Agroecology
2025

Chinese Academy of Sciences
2022-2025

Institute of Botany
2025

Multimodal emotion recognition aims to identify human emotions from text, audio, and visual modalities. Previous methods either explore correlations between different modalities or design sophisticated fusion strategies. However, a serious problem is that distribution gaps and information redundancy often exist across heterogeneous modalities, so the learned multimodal representations may be unrefined. Motivated by these observations, we propose a Feature-Disentangled Emotion Recognition...

10.1145/3503161.3547754 article EN Proceedings of the 30th ACM International Conference on Multimedia 2022-10-10
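
The abstract above is truncated before the method details. As a rough illustration of the feature-disentanglement idea it alludes to (not the paper's actual architecture), the sketch below splits each modality into a shared and a modality-specific (private) representation and penalizes overlap between them; the module names, dimensions, number of classes, and the orthogonality penalty are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledEncoder(nn.Module):
    """Toy sketch: project one modality into 'shared' and 'private' subspaces."""
    def __init__(self, in_dim, hid_dim=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, hid_dim))
        self.private = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, hid_dim))

    def forward(self, x):
        return self.shared(x), self.private(x)

def disentangle_loss(shared, private):
    # Encourage shared/private parts to be orthogonal (one common choice;
    # the paper may use a different constraint).
    s = F.normalize(shared, dim=-1)
    p = F.normalize(private, dim=-1)
    return (s * p).sum(dim=-1).pow(2).mean()

# Hypothetical pooled utterance-level features for text/audio/vision.
text, audio, vision = torch.randn(8, 300), torch.randn(8, 74), torch.randn(8, 35)
enc_t, enc_a, enc_v = DisentangledEncoder(300), DisentangledEncoder(74), DisentangledEncoder(35)

fused, aux = [], 0.0
for enc, x in [(enc_t, text), (enc_a, audio), (enc_v, vision)]:
    s, p = enc(x)
    fused.append(torch.cat([s, p], dim=-1))
    aux = aux + disentangle_loss(s, p)

logits = nn.Linear(3 * 128, 7)(torch.cat(fused, dim=-1))  # 7 emotion classes (assumed)
print(logits.shape, aux.item())
```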

Context-Aware Emotion Recognition (CAER) is a crucial and challenging task that aims to perceive the emotional states of a target person using contextual information. Recent approaches invariably focus on designing sophisticated architectures or mechanisms to extract seemingly meaningful representations from subjects and contexts. However, a long-overlooked issue is that context bias in existing datasets leads to a significantly unbalanced distribution among different scenarios. Concretely, the harmful confounder...

10.1109/cvpr52729.2023.01822 article EN 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023-06-01

10.1109/icassp49660.2025.10887699 article EN ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2025-03-12

The minimally invasive endovascular stent electrode is an emerging technology in neural engineering that causes minimal damage to tissue. However, typical designs still require resistive welding and are relatively large, limiting their application mainly to large animals or thick vessels. In this study, we investigated the feasibility of laser ablation on micro-wires with diameters as small as 25 μm and verified it in the superior sagittal sinus (SSS) of the rat.
Approach: We have developed a process to expose sites on both sides without damaging...

10.1088/2057-1976/adc266 article EN Biomedical Physics & Engineering Express 2025-03-19

In recent years, assessing action quality from videos has attracted growing attention in the computer vision community and in human-computer interaction. Most existing approaches tackle this problem by directly migrating models from action recognition tasks, which ignores intrinsic differences within the feature map, such as foreground and background information. To address this issue, we propose a Tube Self-Attention Network (TSA-Net) for action quality assessment (AQA). Specifically, we introduce a single object tracker into AQA and propose the Tube Self-Attention Module...

10.1145/3474085.3475438 article EN Proceedings of the 30th ACM International Conference on Multimedia 2021-10-17
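
The truncated abstract points to a tube self-attention module built on a single-object tracker. As a loose sketch of that idea (not the published TSA-Net code), the snippet below runs self-attention only over the feature positions that fall inside per-frame tracked boxes; the box format, feature shapes, and gather/scatter scheme are assumptions.

```python
import torch
import torch.nn as nn

def tube_self_attention(feat, boxes, attn):
    """
    feat:  (T, C, H, W) per-frame feature maps.
    boxes: list of T (x1, y1, x2, y2) boxes in feature-map coordinates
           (e.g. from a single-object tracker), defining the 'tube'.
    attn:  nn.MultiheadAttention with embed_dim == C.
    Only positions inside the tube attend to each other; the rest pass through.
    """
    T, C, H, W = feat.shape
    out = feat.clone()
    tokens, index = [], []
    for t, (x1, y1, x2, y2) in enumerate(boxes):
        patch = feat[t, :, y1:y2, x1:x2]                 # (C, h, w)
        tokens.append(patch.flatten(1).t())              # (h*w, C)
        index.append((t, y1, y2, x1, x2, patch.shape[1], patch.shape[2]))
    seq = torch.cat(tokens, dim=0).unsqueeze(1)          # (N, 1, C) tube tokens
    attended, _ = attn(seq, seq, seq)
    attended = attended.squeeze(1)
    # Scatter the attended tokens back into their original locations.
    offset = 0
    for (t, y1, y2, x1, x2, h, w) in index:
        n = h * w
        out[t, :, y1:y2, x1:x2] = attended[offset:offset + n].t().reshape(C, h, w)
        offset += n
    return out

feat = torch.randn(4, 32, 14, 14)                        # 4 frames (toy numbers)
boxes = [(2, 3, 8, 9)] * 4                               # fixed box for illustration
attn = nn.MultiheadAttention(embed_dim=32, num_heads=4)
print(tube_self_attention(feat, boxes, attn).shape)      # torch.Size([4, 32, 14, 14])
```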

Speech emotion recognition combining linguistic content and audio signals in a dialog is a challenging task. Nevertheless, previous approaches have failed to explore contextual cues and interactions, and have ignored long-range dependencies between elements from different modalities. To tackle the above issues, this letter proposes a multimodal speech emotion recognition method using text data. We first present a transformer module that introduces contextual information via embedding the utterances of interlocutors, which enhances the representation of...

10.1109/lsp.2022.3210836 article EN IEEE Signal Processing Letters 2022-01-01
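
The abstract is cut off, but it mentions a transformer module that injects dialog context by embedding the interlocutors' utterances. The sketch below shows one generic way to do that (speaker-ID embeddings added to precomputed utterance features before a transformer encoder), not the letter's exact design; all dimensions and the speaker-embedding scheme are assumptions.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Toy dialog-level encoder: utterance features + speaker embedding -> transformer."""
    def __init__(self, feat_dim=128, n_speakers=2, n_classes=4):
        super().__init__()
        self.speaker_emb = nn.Embedding(n_speakers, feat_dim)
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, utt_feats, speaker_ids):
        # utt_feats: (B, T, D) fused text+audio utterance features (assumed precomputed)
        # speaker_ids: (B, T) interlocutor index for each utterance
        x = utt_feats + self.speaker_emb(speaker_ids)
        x = self.encoder(x)              # context-aware utterance representations
        return self.head(x)              # per-utterance emotion logits

model = ContextEncoder()
feats = torch.randn(2, 10, 128)          # 2 dialogs, 10 utterances each
speakers = torch.randint(0, 2, (2, 10))
print(model(feats, speakers).shape)      # torch.Size([2, 10, 4])
```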

Understanding human behaviors and intents from videos is a challenging task. Video flows usually involve time-series data from different modalities, such as natural language, facial gestures, and acoustic information. Due to the variable receiving frequency of the sequences from each modality, the collected multimodal streams are unaligned. For the fusion of asynchronous sequences, existing methods focus on projecting multiple modalities into a common latent space and learning hybrid representations, which neglects...

10.1145/3503161.3547755 article EN Proceedings of the 30th ACM International Conference on Multimedia 2022-10-10
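
Since the abstract stops mid-description, the snippet below only illustrates the general cross-modal attention pattern commonly used for unaligned sequences (queries from one modality attending over another modality of a different length), not this paper's specific model; the feature dimensions and sequence lengths are made up.

```python
import torch
import torch.nn as nn

# Unaligned streams: 50 text tokens, 375 audio frames, 125 video frames (toy lengths).
text  = torch.randn(8, 50, 64)
audio = torch.randn(8, 375, 64)
video = torch.randn(8, 125, 64)

cross_ta = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
cross_tv = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

# Text queries attend over audio/video directly, so no manual alignment is needed.
text_from_audio, _ = cross_ta(query=text, key=audio, value=audio)
text_from_video, _ = cross_tv(query=text, key=video, value=video)

fused = torch.cat([text, text_from_audio, text_from_video], dim=-1)  # (8, 50, 192)
print(fused.shape)
```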

Multimodal Sentiment Analysis (MSA) has attracted widespread research attention recently. Most MSA studies are based on the assumption of signal completeness. However, many inevitable factors in real applications lead to uncertain signal missing, causing significant degradation of model performance. To this end, we propose a Robust multimodal Missing Signal Framework (RMSF) to handle the problem of missing signals for MSA tasks, which can be generalized to other missing patterns. Specifically, a hierarchical cross-modal interaction module...

10.1109/lsp.2023.3324552 article EN IEEE Signal Processing Letters 2023-01-01
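
The abstract breaks off at the hierarchical cross-modal interaction module, so the sketch below only shows the missing-signal setup it motivates: randomly zeroing out modality frames (and occasionally whole modalities) during training so the model sees uncertain missing patterns. The masking scheme is an assumption, not the RMSF design.

```python
import torch

def random_missing(x, frame_drop=0.3, modality_drop=0.1):
    """
    x: (B, T, D) one modality's sequence.
    Randomly drop individual frames and, occasionally, the whole modality,
    to emulate uncertain missing signals at training time (toy scheme).
    """
    B, T, _ = x.shape
    keep_frames = (torch.rand(B, T, 1) > frame_drop).float()
    keep_modality = (torch.rand(B, 1, 1) > modality_drop).float()
    return x * keep_frames * keep_modality

audio = torch.randn(4, 100, 74)
corrupted = random_missing(audio)
print((corrupted == 0).float().mean())   # rough fraction of zeroed entries
```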

Driver distraction has become a significant cause of severe traffic accidents over the past decade. Despite the growing development of vision-driven driver monitoring systems, the lack of comprehensive perception datasets restricts road safety and security. In this paper, we present an AssIstive Driving pErception dataset (AIDE) that considers context information both inside and outside the vehicle in naturalistic scenarios. AIDE facilitates holistic driver monitoring through three distinctive characteristics, including multi-view...

10.1109/iccv51070.2023.01871 article EN 2023 IEEE/CVF International Conference on Computer Vision (ICCV) 2023-10-01

Accurate visualization of liver tumors and their surrounding blood vessels is essential for noninvasive diagnosis and prognosis prediction of tumors. In medical image segmentation, there is still a lack of in-depth research on the simultaneous segmentation of tumors and peritumoral vessels. To this end, we collect the first liver tumor and vessel benchmark dataset, containing 52 portal-vein-phase computed tomography images with liver, tumor, and vessel annotations. In this case, we propose a 3D U-shaped Cross-Attention Network (UCA-Net) that utilizes a tailored...

10.1109/icassp49357.2023.10095689 article EN ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2023-05-05
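
The abstract truncates before the tailored cross-attention details; the sketch below is only a minimal 3D encoder-decoder with separate tumor and vessel heads, to show the simultaneous-segmentation setup rather than UCA-Net itself. Channel counts, patch size, and the two-head design are assumptions.

```python
import torch
import torch.nn as nn

class TinySeg3D(nn.Module):
    """Minimal 3D encoder-decoder with separate tumor and vessel heads (toy)."""
    def __init__(self, c=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, c, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(c, 2 * c, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose3d(2 * c, c, 2, stride=2), nn.ReLU())
        self.tumor_head = nn.Conv3d(c, 1, 1)     # tumor probability map
        self.vessel_head = nn.Conv3d(c, 1, 1)    # peritumoral vessel probability map

    def forward(self, ct):
        x = self.dec(self.enc(ct))
        return torch.sigmoid(self.tumor_head(x)), torch.sigmoid(self.vessel_head(x))

ct = torch.randn(1, 1, 32, 64, 64)               # one portal-vein-phase CT patch (toy size)
tumor, vessel = TinySeg3D()(ct)
print(tumor.shape, vessel.shape)
```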

Multimodal Sentiment Analysis (MSA) has attracted widespread research attention recently. Most MSA studies are based on the assumption of modality completeness. However, many inevitable factors in real-world scenarios lead to uncertain missing modalities, which invalidate fixed multimodal fusion approaches. To this end, we propose a Unified Missing self-Distillation Framework (UMDF) to handle the problem of missing modalities in MSA. Specifically, a unified self-distillation mechanism in UMDF drives a single network...

10.1609/aaai.v38i9.28871 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2024-03-24
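
The abstract cuts off after mentioning a unified self-distillation mechanism driving a single network. Below is a generic self-distillation sketch under that reading (one network, complete-modality predictions supervising masked-modality predictions via KL divergence); all specifics are assumptions rather than details taken from UMDF.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Linear(3 * 64, 128), nn.ReLU(), nn.Linear(128, 3))  # toy fusion net

def forward(text, audio, vision):
    return net(torch.cat([text, audio, vision], dim=-1))

text, audio, vision = torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 64)

# "Complete" view: all modalities present (treated as the teacher signal).
with torch.no_grad():
    teacher_logits = forward(text, audio, vision)

# "Incomplete" view: drop one modality by zeroing it (student input).
student_logits = forward(text, torch.zeros_like(audio), vision)

# Self-distillation: the incomplete view mimics the complete view's distribution.
distill = F.kl_div(F.log_softmax(student_logits, dim=-1),
                   F.softmax(teacher_logits, dim=-1), reduction="batchmean")
print(distill.item())
```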

The net ecosystem CO2 exchange (NEE) is a critical parameter for quantifying terrestrial ecosystems and their contributions to ongoing climate change. The accumulation of ecological data is calling for more advanced quantitative approaches to assist NEE prediction. In this study, we applied two widely used machine learning algorithms, Random Forest (RF) and Extreme Gradient Boosting (XGBoost), to build models simulating NEE in major biomes based on the FLUXNET dataset. Both models accurately predicted NEE in all biomes, while...

10.3390/rs13122242 article EN cc-by Remote Sensing 2021-06-08
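
The abstract describes applying Random Forest and XGBoost to predict NEE from FLUXNET data. Below is a minimal sketch of that workflow using synthetic stand-in data; the study's actual FLUXNET features, preprocessing, hyperparameters, and biome-wise evaluation are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from xgboost import XGBRegressor   # requires the xgboost package

# Stand-in for FLUXNET-style predictors (e.g. radiation, temperature, VPD, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=2000)   # synthetic "NEE" target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
xgb = XGBRegressor(n_estimators=300, learning_rate=0.05, random_state=0).fit(X_tr, y_tr)

print("RF  R2:", r2_score(y_te, rf.predict(X_te)))
print("XGB R2:", r2_score(y_te, xgb.predict(X_te)))
```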

Human activity recognition has a wide range of application prospects and research significance in intelligent monitoring, assisted driving, and human-computer interaction, such as monitoring the elderly living alone, warning of dangerous driver behaviors, and the development of somatosensory games. Traditionally, human activity recognition is realized by cameras or wearable devices. However, in privacy-sensitive areas such as wards and cars, users may not be willing to share too many private videos. In this paper, we use millimeter-wave radar...

10.1109/ijcnn52387.2021.9533989 article EN 2021 International Joint Conference on Neural Networks (IJCNN) 2021-07-18

Recently, computing-in-memory (CIM) macros, originally designed to reduce the intensive memory accesses of AI tasks, have been employed in low-power machine learning SoCs due to their ultra-high computing efficiency [1]–[3]. These CIM macros still access weight data through on/off-chip memories, similar to the processing elements in near-memory-computing architectures. This implementation poses challenges when counting the overall SoC energy (Fig. 15.3.1). First, the memory wall issue is unsolved. Weight updates affect system...

10.1109/isscc42614.2022.9731657 article EN 2022 IEEE International Solid- State Circuits Conference (ISSCC) 2022-02-20