- Robot Manipulation and Learning
- Human Pose and Action Recognition
- Advanced Vision and Imaging
- Gene expression and cancer classification
- Image Processing and 3D Reconstruction
- Tactile and Sensory Interactions
- Reinforcement Learning in Robotics
- Robotics and Sensor-Based Localization
- Hand Gesture Recognition Systems
- Genomics and Phylogenetic Studies
- Advanced Neural Network Applications
- Image Processing Techniques and Applications
- Algorithms and Data Compression
Peking University
2023-2024
Robotic research faces a significant hurdle in the intricate task of grasping objects that come in various shapes, materials, and textures. Unlike many prior investigations, which leaned heavily on specialized point-cloud cameras or abundant RGB visual data to gather 3D insights for object-grasping missions, this paper introduces a pioneering approach called RGBGrasp. This method depends on a limited set of views to perceive surroundings containing transparent and specular objects and achieve accurate grasping....
In this work, we tackle the problem of online camera-to-robot pose estimation from single-view successive frames of an image sequence, a crucial task for robots to interact with the world. The primary obstacles are the robot's self-occlusions and the ambiguity of single-view images. This work demonstrates, for the first time, the effectiveness of temporal information and the robot structure prior in addressing these challenges. Given the joint configuration, our method learns to accurately regress the 2D coordinates of predefined keypoints (e.g. joints)....
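The core idea above — conditioning keypoint regression on the known joint configuration — can be sketched minimally as follows. All dimensions, names, and the linear regression head here are illustrative assumptions for exposition, not the paper's actual architecture:

```python
import numpy as np

# Hypothetical sketch: given an image feature vector and the robot's joint
# configuration (the structure prior), regress 2D pixel coordinates for K
# predefined keypoints (e.g. one per joint). Sizes below are assumptions.
NUM_KEYPOINTS = 7   # assumed: one keypoint per joint
FEAT_DIM = 128      # assumed image-feature size
JOINT_DIM = 7       # assumed joint-angle vector size

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(FEAT_DIM + JOINT_DIM, NUM_KEYPOINTS * 2))
b = np.zeros(NUM_KEYPOINTS * 2)

def regress_keypoints(image_feat, joint_angles):
    """Concatenate visual features with the joint prior, regress (u, v) pairs."""
    x = np.concatenate([image_feat, joint_angles])
    return (x @ W + b).reshape(NUM_KEYPOINTS, 2)

feat = rng.normal(size=FEAT_DIM)
q = rng.uniform(-np.pi, np.pi, size=JOINT_DIM)
keypoints_2d = regress_keypoints(feat, q)
print(keypoints_2d.shape)  # (7, 2)
```

In practice the linear map would be a learned network and the predicted 2D keypoints would feed a PnP-style solver for the camera-to-robot pose; this sketch only shows the input/output contract of the regression step.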
Estimating robot pose and joint angles is significant in advanced robotics, enabling applications like collaboration and online hand-eye calibration. However, the introduction of unknown joint angles makes the prediction more complex than simple pose estimation, due to its higher dimensionality. Previous methods either regress 3D keypoints directly or utilise a render-and-compare strategy. These approaches often falter in terms of performance or efficiency and grapple with the cross-camera gap problem. This paper presents a novel framework...
Tactile sensing plays a vital role in enabling robots to perform fine-grained, contact-rich tasks. However, the high dimensionality of tactile data, due to the large sensor coverage on dexterous hands, poses significant challenges for effective feature learning, especially in 3D, as there are no standardized datasets or strong pretrained backbones. To address these challenges, we propose a novel canonical representation that reduces the difficulty of feature learning and further introduce a force-based self-supervised pretraining...
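One simple way to read "canonical representation" here is mapping taxel readings from sensor pads mounted at arbitrary poses on the hand into each pad's own local frame, so all pads share one coordinate convention. The pose format and function below are assumptions for illustration, not the paper's actual method:

```python
import numpy as np

# Illustrative sketch: taxel contact points measured in the world frame are
# re-expressed in a sensor pad's local ("canonical") frame, removing the
# pad's placement on the hand from the data. Assumed convention: the pad
# pose is a rotation matrix R (pad axes in world coords) and an origin.
def to_canonical(points_world, pad_rotation, pad_origin):
    """Express world-frame points in the pad frame: R^T @ (p - origin)."""
    return (points_world - pad_origin) @ pad_rotation  # row-vector form

# Example: a pad rotated 90 degrees about z, offset from the hand origin.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
origin = np.array([0.1, 0.0, 0.05])
contact = np.array([[0.1, 0.01, 0.05]])  # one contact point, world frame
canonical = to_canonical(contact, R, origin)
print(canonical)  # ~[[0.01, 0.0, 0.0]]: 1 cm along the pad's local x-axis
```

With every pad's data in the same frame, a single encoder can be trained across all pads, which is what makes a standardized backbone (and the force-based pretraining mentioned above) feasible.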
The use of anthropomorphic robotic hands for assisting individuals in situations where human help may be unavailable or unsuitable has gained significant importance. In this paper, we propose a novel task called human-assisting dexterous grasping, which aims to train a policy for controlling the hand's fingers to assist users in grasping objects. Unlike conventional dexterous grasping, this task presents a more complex challenge, as the policy needs to adapt to diverse user intentions in addition to the object's geometry. We address this by proposing an approach...