Jiyao Zhang

ORCID: 0009-0009-6150-2788
Research Areas
  • Robot Manipulation and Learning
  • Human Pose and Action Recognition
  • Advanced Vision and Imaging
  • Gene Expression and Cancer Classification
  • Image Processing and 3D Reconstruction
  • Tactile and Sensory Interactions
  • Reinforcement Learning in Robotics
  • Robotics and Sensor-Based Localization
  • Hand Gesture Recognition Systems
  • Genomics and Phylogenetic Studies
  • Advanced Neural Network Applications
  • Image Processing Techniques and Applications
  • Algorithms and Data Compression

King University
2024

Peking University
2023-2024

Robotic research encounters a significant hurdle in the intricate task of grasping objects that come in various shapes, materials, and textures. Unlike many prior investigations, which leaned heavily on specialized point-cloud cameras or abundant RGB visual data to gather 3D insights for object-grasping missions, this paper introduces a pioneering approach called RGBGrasp. This method depends on a limited set of RGB views to perceive surroundings containing transparent and specular objects and achieve accurate grasping....

10.1109/lra.2024.3396101 article EN IEEE Robotics and Automation Letters 2024-05-02

In this work, we tackle the problem of online camera-to-robot pose estimation from single-view successive frames of an image sequence, a crucial task for robots to interact with the world. The primary obstacles are the robot's self-occlusions and the ambiguity of single-view images. This work demonstrates, for the first time, the effectiveness of temporal information and the robot structure prior in addressing these challenges. Given the joint configuration, our method learns to accurately regress the 2D coordinates of predefined keypoints (e.g. joints)....

10.1109/cvpr52729.2023.00861 article EN 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023-06-01
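The abstract above describes a pipeline in which, given the robot's joint configuration, a network regresses the 2D image coordinates of predefined keypoints; the camera-to-robot pose then follows from the 2D-3D correspondences. A minimal sketch of the underlying geometry, assuming a toy 2-link planar arm and made-up pinhole intrinsics (none of these numbers come from the paper):

```python
import numpy as np

def link_keypoints_3d(joint_angles, link_lengths):
    """Forward kinematics for a planar 2-link arm: 3D positions of the
    base, elbow, and end-effector keypoints in the robot frame (z = 0)."""
    t1, t2 = joint_angles
    l1, l2 = link_lengths
    base = np.array([0.0, 0.0, 0.0])
    elbow = base + np.array([l1 * np.cos(t1), l1 * np.sin(t1), 0.0])
    ee = elbow + np.array([l2 * np.cos(t1 + t2), l2 * np.sin(t1 + t2), 0.0])
    return np.stack([base, elbow, ee])

def project(points_3d, K, R, t):
    """Pinhole projection of robot-frame keypoints into the image,
    given a camera-to-robot rotation R and translation t."""
    cam = (R @ points_3d.T).T + t       # robot frame -> camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]       # perspective divide

# Hypothetical intrinsics and extrinsics, purely for illustration.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)                           # camera axes aligned with robot frame
t = np.array([0.0, 0.0, 2.0])           # camera 2 m in front of the base
pts3d = link_keypoints_3d([0.0, np.pi / 2], [0.5, 0.3])
pts2d = project(pts3d, K, R, t)         # e.g. base lands at (320, 240)
```

In the actual method the 2D keypoints come from a learned regressor rather than from projection, and a PnP-style solver over these 2D-3D correspondences recovers the camera-to-robot transform.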

Estimating robot pose and joint angles is significant in advanced robotics, enabling applications like collaboration and online hand-eye calibration. However, the introduction of unknown joint angles makes prediction more complex than simple pose estimation, due to its higher dimensionality. Previous methods either regress 3D keypoints directly or utilise a render-and-compare strategy. These approaches often falter in terms of performance or efficiency and grapple with the cross-camera gap problem. This paper presents a novel framework...

10.48550/arxiv.2403.18259 preprint EN arXiv (Cornell University) 2024-03-27

Tactile sensing plays a vital role in enabling robots to perform fine-grained, contact-rich tasks. However, the high dimensionality of tactile data, due to the large coverage on dexterous hands, poses significant challenges for effective feature learning, especially for 3D, as there are no standardized datasets and no strong pretrained backbones. To address these challenges, we propose a novel canonical representation that reduces the difficulty of learning and further introduces force-based self-supervised pretraining...

10.48550/arxiv.2409.17549 preprint EN arXiv (Cornell University) 2024-09-26
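The idea of a "canonical representation" for 3D tactile data can be illustrated with a toy normalization: center the contact points, rotate the cloud so the net contact force points along +z, and scale forces by the total contact magnitude. This is only a sketch of the general idea; the paper's actual canonicalization and force-based pretraining objective are not specified in this excerpt:

```python
import numpy as np

def canonicalize_tactile(points, forces):
    """Toy canonical representation for a 3D tactile point cloud
    (illustrative only): center the contact points, align the net
    force direction with +z, and scale forces by the net magnitude."""
    centered = points - points.mean(axis=0)
    net = forces.sum(axis=0)
    mag = np.linalg.norm(net)
    n = net / mag                        # unit net-force direction
    z = np.array([0.0, 0.0, 1.0])
    # Rotation taking n to +z via Rodrigues' formula.
    v = np.cross(n, z)
    s, c = np.linalg.norm(v), float(np.dot(n, z))
    if s < 1e-9:
        R = np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    else:
        vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        R = np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)
    return (R @ centered.T).T, (R @ forces.T).T / mag

# Two contact points, both pushing along +x (hypothetical readings).
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
f = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
canon_pts, canon_f = canonicalize_tactile(pts, f)
```

After canonicalization the point cloud is zero-mean and the summed force vector is the +z unit vector, so clouds from differently oriented sensor patches map to a shared frame.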

The use of anthropomorphic robotic hands for assisting individuals in situations where human help may be unavailable or unsuitable has gained significant importance. In this paper, we propose a novel task called human-assisting dexterous grasping that aims to train a policy for controlling a hand's fingers to assist users in grasping objects. Unlike conventional grasping, this presents a more complex challenge, as the policy needs to adapt to diverse user intentions in addition to the object's geometry. We address this by proposing an approach...

10.48550/arxiv.2309.06038 preprint EN other-oa arXiv (Cornell University) 2023-01-01

Robotic research encounters a significant hurdle in the intricate task of grasping objects that come in various shapes, materials, and textures. Unlike many prior investigations, which leaned heavily on specialized point-cloud cameras or abundant RGB visual data to gather 3D insights for object-grasping missions, this paper introduces a pioneering approach called RGBGrasp. This method depends on a limited set of RGB views to perceive surroundings containing transparent and specular objects and achieve accurate grasping....

10.48550/arxiv.2311.16592 preprint EN cc-by arXiv (Cornell University) 2023-01-01