- Soft Robotics and Applications
- Robot Manipulation and Learning
- Tactile and Sensory Interactions
- Robotics and Sensor-Based Localization
- Face Recognition and Analysis
- Advanced Sensor and Energy Harvesting Materials
- Augmented Reality Applications
- 3D Shape Modeling and Analysis
- Teleoperation and Haptic Systems
- 3D Surveying and Cultural Heritage
- Tsinghua–Berkeley Shenzhen Institute (2022-2024)
- University Town of Shenzhen (2024)
- Tsinghua University (2023-2024)
- Toronto Metropolitan University (2022)
- Shanghai University (2022)
The grasping of transparent objects is challenging but significant to robots. In this article, a visual–tactile fusion framework for transparent object grasping in complex backgrounds is proposed, which synergizes the advantages of vision and touch and greatly improves the grasping efficiency for transparent objects. First, we propose a multiscene synthetic dataset named SimTrans12K together with a Gaussian-mask annotation method. Next, based on the TaTa gripper, a convolutional neural network for object-grasping position detection is proposed, which shows good performance in both...
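The Gaussian-mask annotation mentioned above can be pictured as a soft heatmap centered on the annotated grasp position, rather than a hard binary mask. A minimal sketch follows; the grid size and the spread `sigma` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_mask(h, w, cx, cy, sigma=8.0):
    """Render a 2D Gaussian heatmap centered on a grasp point (cx, cy).

    Pixels near the annotated grasp position get values close to 1,
    decaying smoothly with distance; sigma controls the falloff and is
    an illustrative choice here.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# a 64x64 annotation with the grasp point at the image center
mask = gaussian_mask(64, 64, cx=32, cy=32, sigma=8.0)
```

Compared with a single-pixel label, this soft target gives the detection network a smooth, dense training signal around each annotated grasp point.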
Transparent objects are a common part of daily life, but their unique optical properties make estimating their 6D pose a challenging task. In this letter, we propose TGF-Net, a monocular instance-level 6D pose estimation method for transparent objects based on geometric fusion. TGF-Net learns edge features and surface fragments as intermediate representations and reduces the influence of appearance changes by fusing rich geometric information in the network. Moreover, an approach for generating high-fidelity, large-scale synthetic datasets using Blender is proposed, which we use to generate...
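Edge intermediate representations like the one above are typically supervised with a dense edge map derived from the rendered geometry. The paper's edge features are learned, so the Sobel-based sketch below is only an illustrative stand-in for how such a supervision target could be produced from a depth or mask image.

```python
import numpy as np

def sobel_edges(img):
    """Approximate an edge-map target from a 2D array (e.g. a depth
    render or object mask) using Sobel gradients.

    Purely illustrative: the learned edge features in TGF-Net are not
    computed this way.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)  # gradient magnitude: high along boundaries

# a vertical step edge yields a high response along the boundary
img = np.zeros((8, 8)); img[:, 4:] = 1.0
edges = sobel_edges(img)
```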
Humans can feel and grasp efficiently in the dark through tactile feedback, whereas this remains a challenging task for robots. In this research, we create a novel soft gripper named JamTac, which has high-resolution perception, a large detection surface, and an integrated sensing-grasping capability for searching low-visibility environments. The gripper combines granular-jamming and visuotactile perception technologies. Using the principle of refractive index matching, a refraction-free liquid-particle rationing scheme...
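The refractive-index-matching idea is that light passes through the liquid-particle medium without bending when the liquid's refractive index equals the particles'. A common first-order way to hit a target index with a binary liquid mixture is the linear (Arago-Biot) mixing approximation; the sketch below uses it with illustrative index values that are not from the paper.

```python
def matching_fraction(n_liquid_a, n_liquid_b, n_particle):
    """Volume fraction of liquid A so a two-liquid mixture matches the
    particles' refractive index, under the linear Arago-Biot mixing
    approximation:  n_mix = f * n_a + (1 - f) * n_b.

    This is a simple first-order model, not the paper's actual
    rationing scheme.
    """
    lo, hi = sorted((n_liquid_a, n_liquid_b))
    if not lo <= n_particle <= hi:
        raise ValueError("particle index not bracketed by the two liquids")
    return (n_particle - n_liquid_b) / (n_liquid_a - n_liquid_b)

# illustrative: silica-like particles (n ~ 1.46) and two hypothetical
# liquids with n = 1.50 and n = 1.40
f = matching_fraction(1.50, 1.40, 1.46)  # roughly 0.6 parts of liquid A
```

When the indices match, refraction at the liquid-particle interfaces vanishes, which is what lets the visuotactile camera see through the jamming medium.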
Transparent objects are common in daily life, but their optical properties make it difficult for RGB-D cameras to capture accurate depth information. This issue is further amplified when these objects are hand-held, as hand occlusions complicate depth estimation. For assistant robots, however, accurately perceiving hand-held transparent objects is critical for effective human-robot interaction. This paper presents a Hand-Aware Depth Restoration (HADR) method based on creating an implicit neural representation function from...
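The core of an implicit neural representation, as used above for depth restoration, is a small coordinate network that maps continuous pixel coordinates to depth, fit on observed pixels and then queried at occluded ones. The toy sketch below illustrates only this idea; the layer sizes, learning rate, and synthetic "depth" surface are all assumptions for the sake of a runnable example, not HADR's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_depth(uv):
    """Toy smooth 'depth' surface over normalized pixel coordinates."""
    return 0.5 + 0.3 * np.sin(np.pi * uv[:, 0]) * np.cos(np.pi * uv[:, 1])

# tiny coordinate MLP (u, v) -> depth: the essence of an implicit
# representation (sizes and hyperparameters are illustrative)
W1 = rng.normal(0.0, 1.0, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)

uv = rng.uniform(-1.0, 1.0, (256, 2))   # coordinates of "visible" pixels
d = true_depth(uv)[:, None]

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

init_mse = float(((forward(uv)[1] - d) ** 2).mean())

lr = 0.05
for _ in range(3000):                    # plain full-batch gradient descent
    h, pred = forward(uv)
    err = pred - d
    gW2 = h.T @ err / len(uv); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = uv.T @ dh / len(uv); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

final_mse = float(((forward(uv)[1] - d) ** 2).mean())

# query the learned continuous function at an "occluded" coordinate
restored = forward(np.array([[0.2, -0.3]]))[1].item()
```

Because the network is a continuous function of coordinates, it can be evaluated at pixels the sensor never observed, which is what makes this representation useful for restoring occluded depth.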
The accurate detection and grasping of transparent objects are challenging but significant to robots. Here, a visual-tactile fusion framework for transparent object grasping under complex backgrounds and variant light conditions is proposed, comprising grasping position detection, tactile calibration, and tactile-based classification. First, a multi-scene synthetic dataset generation method with Gaussian-distribution data annotation is proposed. In addition, a novel network named TGCNN is proposed, which shows good results in both simulated and real scenes. In...