Zengyu Wan

ORCID: 0009-0009-0777-1143
About
Research Areas
  • Gaze Tracking and Assistive Technology
  • CCD and CMOS Imaging Sensors
  • Image and Video Quality Assessment
  • Advanced Optical Sensing Technologies
  • Advanced Memory and Neural Computing
  • Advanced Image Processing Techniques
  • Image Enhancement Techniques
  • Advanced Vision and Imaging
  • Neural Networks and Reservoir Computing
  • Speech and Dialogue Systems
  • Advanced Neural Network Applications

University of Science and Technology of China
2022-2024

Due to the wide dynamic range in real low-light scenes, captured images exhibit large differences in the degree of contrast degradation and detail blurring, which makes it difficult for existing end-to-end methods to enhance images to normal exposure. To address this issue, we decompose image enhancement into a recursive task and propose a brightness-perceiving-based framework for high dynamic range low-light image enhancement. Specifically, our framework consists of two parallel sub-networks: an Adaptive Contrast and Texture network (ACT-Net) and a Brightness...

10.1109/tai.2023.3339092 article EN IEEE Transactions on Artificial Intelligence 2023-12-04
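The recursive decomposition described in the abstract above can be illustrated with a toy sketch. This is not the paper's ACT-Net (the gamma curve, target brightness, and step count here are illustrative stand-ins for the learned sub-networks): the key idea shown is enhancing an under-exposed image step by step until a perceived-brightness criterion is met.

```python
import numpy as np

def perceived_brightness(img):
    """Mean luminance in [0, 1] as a crude brightness estimate."""
    return float(img.mean())

def enhance_step(img, gamma=0.8):
    """One enhancement pass: a gamma curve brightens dark regions
    more than bright ones (a stand-in for a learned sub-network)."""
    return np.clip(img, 0.0, 1.0) ** gamma

def recursive_enhance(img, target=0.45, max_steps=8):
    """Apply the enhancement step recursively until the estimated
    brightness reaches the target, mirroring the recursive
    decomposition of the enhancement task."""
    out = img.astype(np.float64)
    for _ in range(max_steps):
        if perceived_brightness(out) >= target:
            break
        out = enhance_step(out)
    return out

dark = np.full((4, 4), 0.05)      # synthetic under-exposed patch
bright = recursive_enhance(dark)
```

Because each pass adapts to the current brightness, heavily degraded inputs simply take more recursion steps, which is what makes the recursive formulation attractive for wide-dynamic-range low-light scenes.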

10.1109/cvprw63382.2024.00585 article EN 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2024-06-17

This survey reviews the AIS 2024 Event-Based Eye Tracking (EET) Challenge. The task of the challenge focuses on processing eye-movement data recorded with event cameras and predicting the pupil center of the eye. The challenge emphasizes efficient eye tracking to achieve a good accuracy-efficiency trade-off. During the challenge period, 38 participants registered for the Kaggle competition, and 8 teams submitted a factsheet. The novel and diverse methods from the submitted factsheets are reviewed and analyzed in this survey to advance future event-based eye tracking research.

10.48550/arxiv.2404.11770 preprint EN arXiv (Cornell University) 2024-04-17

Event-based eye tracking has shown great promise with the high temporal resolution and low redundancy provided by the event camera. However, the diversity and abruptness of eye movement patterns, including blinking, fixating, saccades, and smooth pursuit, pose significant challenges for eye localization. To achieve a stable event-based eye-tracking system, this paper proposes a bidirectional long-term sequence modeling and time-varying state selection mechanism to fully utilize contextual information in response...

10.48550/arxiv.2404.12083 preprint EN arXiv (Cornell University) 2024-04-18
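The two ingredients named in the abstract above, bidirectional sequence modeling and time-varying state selection, can be sketched with a toy numpy recurrence. This is not the paper's network (the smoothing factor and the blink-masking rule are illustrative assumptions): it only shows how fusing a forward and a backward pass, while letting invalid frames keep the running state, stabilizes pupil-center estimates across a blink.

```python
import numpy as np

def bidirectional_smooth(obs, valid, alpha=0.5):
    """Fuse forward and backward recurrences over noisy pupil-centre
    observations of shape (T, 2). Frames flagged invalid (e.g. blinks)
    keep the running state instead of ingesting the raw observation --
    a toy stand-in for time-varying state selection."""
    T = len(obs)
    fwd = np.zeros_like(obs)
    state = obs[0]
    for t in range(T):                      # forward pass
        state = alpha * obs[t] + (1 - alpha) * state if valid[t] else state
        fwd[t] = state
    bwd = np.zeros_like(obs)
    state = obs[-1]
    for t in range(T - 1, -1, -1):          # backward pass
        state = alpha * obs[t] + (1 - alpha) * state if valid[t] else state
        bwd[t] = state
    return 0.5 * (fwd + bwd)                # bidirectional fusion

T = 10
truth = np.stack([np.linspace(0.0, 9.0, T), np.full(T, 5.0)], axis=1)
obs = truth.copy()
obs[4] = [100.0, 100.0]                     # outlier frame (blink artefact)
valid = np.ones(T, dtype=bool)
valid[4] = False
smoothed = bidirectional_smooth(obs, valid)
```

The corrupted frame 4 is repaired from both temporal directions, which is exactly why long-term bidirectional context helps through abrupt patterns like blinks.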

Event cameras respond to temporal dynamics, helping to resolve ambiguities in spatio-temporal changes for optical flow estimation. However, the unique event distribution challenges feature extraction, and the direct construction of a motion representation through the orthogonal view is less than ideal due to the entanglement of appearance and motion. This paper proposes to transform the orthogonal view into a motion-dependent one for enhancing event-based motion representation and presents a Motion View-based Network (MV-Net) for practical optical flow estimation. Specifically, this transformation...

10.1109/tip.2024.3426469 article EN IEEE Transactions on Image Processing 2024-01-01
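The general idea of a motion-dependent event view can be illustrated with motion-compensated warping (in the spirit of contrast maximization, not the paper's actual MV-Net transformation; the grid size, velocities, and sharpness measure here are illustrative): warping events by a candidate flow to a reference time yields a sharp accumulation image only when the flow matches the true motion.

```python
import numpy as np

def motion_view(events, flow, t_ref=0.0, size=16):
    """Warp events (columns x, y, t) to a reference time under a
    candidate flow and accumulate them into an image. When the flow
    matches the true motion, warped events align and the view sharpens."""
    xw = events[:, 0] + flow[0] * (t_ref - events[:, 2])
    yw = events[:, 1] + flow[1] * (t_ref - events[:, 2])
    img = np.zeros((size, size))
    xi = np.clip(np.round(xw), 0, size - 1).astype(int)
    yi = np.clip(np.round(yw), 0, size - 1).astype(int)
    np.add.at(img, (yi, xi), 1.0)       # accumulate event counts
    return img

def sharpness(img):
    return float((img ** 2).sum())      # higher when events concentrate

# Synthetic events from a point moving at 2 px/s along x.
t = np.linspace(0.0, 1.0, 20)
events = np.stack([3.0 + 2.0 * t, np.full(20, 8.0), t], axis=1)

aligned = motion_view(events, flow=(2.0, 0.0))   # matches true motion
blurred = motion_view(events, flow=(0.0, 0.0))   # no compensation
```

The motion-dependent view disentangles motion from appearance: the same events produce a compact footprint under the right flow and a smeared one otherwise.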

Automatic lip-reading (ALR) is the task of recognizing words based on visual information obtained from the speaker's lip movements. In this study, we introduce event cameras, a novel type of sensing device, for ALR. Event cameras offer both technical and application advantages over conventional cameras for ALR due to their higher temporal resolution, less redundant information, and lower power consumption. To recognize words from the event data, we propose a multi-grained spatio-temporal features learning framework, which is capable of perceiving...

10.1109/tnnls.2024.3440495 article EN IEEE Transactions on Neural Networks and Learning Systems 2024-01-01
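The multi-grained idea mentioned in the abstract above can be sketched by pooling an event-frame clip at several temporal granularities and concatenating the results. This is a toy descriptor, not the paper's learning framework (the grain sizes and the per-segment mean are illustrative assumptions).

```python
import numpy as np

def multigrained_features(frames, grains=(1, 2, 4)):
    """Pool a (T, H, W) event-frame clip at several temporal
    granularities and concatenate the pooled descriptors: coarse
    grains capture whole-word dynamics, fine grains capture short
    lip movements."""
    feats = []
    for g in grains:
        # Split the clip into g equal segments; summarize each one.
        for seg in np.array_split(frames, g, axis=0):
            feats.append(seg.mean())    # scalar descriptor per segment
    return np.array(feats)

rng = np.random.default_rng(0)
clip = rng.random((8, 4, 4))            # 8 event frames of 4x4 pixels
f = multigrained_features(clip)         # 1 + 2 + 4 = 7 descriptors
```

A learned version would replace the per-segment mean with convolutional features, but the granularity structure is the same.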

Scene reconstruction from casually captured videos has wide applications in real-world scenarios. With recent advancements in differentiable rendering techniques, several methods have attempted to simultaneously optimize scene representations (NeRF or 3DGS) and camera poses. Despite this progress, existing methods relying on traditional video input tend to fail in high-speed (or, equivalently, low-frame-rate) scenarios. Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high...

10.48550/arxiv.2410.15392 preprint EN arXiv (Cornell University) 2024-10-20
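The event-generation principle the abstract above relies on, namely that a pixel fires an event whenever its log-intensity changes by a fixed contrast threshold, can be sketched for a single pixel. This is the standard idealized event-camera model, not this paper's method; the threshold value and signal are illustrative.

```python
import numpy as np

def simulate_events(log_I, threshold=0.2):
    """Idealised per-pixel event generation: an event of polarity +1/-1
    fires whenever the log-intensity has changed by `threshold` since
    the last event at that pixel (shown for one pixel's time series)."""
    events = []
    ref = log_I[0]                      # reference level at last event
    for t, v in enumerate(log_I[1:], start=1):
        while v - ref >= threshold:     # brightness increase
            ref += threshold
            events.append((t, +1))
        while ref - v >= threshold:     # brightness decrease
            ref -= threshold
            events.append((t, -1))
    return events

signal = np.log(np.linspace(1.0, 2.0, 11))   # steadily brightening pixel
evs = simulate_events(signal)
```

Because events are emitted per change rather than per frame, fast motion yields a dense, asynchronous stream, which is what lets event data bridge the gaps where low-frame-rate video breaks pose optimization.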