Yuhang Hu

ORCID: 0000-0003-1964-5784
Research Areas
  • Robotics and Automated Systems
  • Modular Robots and Swarm Intelligence
  • Robot Manipulation and Learning
  • Advanced Materials and Mechanics
  • Teaching and Learning Programming
  • Embedded Systems Design Techniques
  • Robotic Locomotion and Control
  • Micro and Nano Robotics
  • Image Processing Techniques and Applications
  • Advanced biosensing and bioanalysis techniques
  • Face recognition and analysis
  • Viral Infectious Diseases and Gene Expression in Insects
  • Dynamics and Control of Mechanical Systems
  • Real-time simulation and control systems
  • Robotic Mechanisms and Dynamics
  • Anomaly Detection Techniques and Applications
  • Facial Nerve Paralysis Treatment and Research
  • Soft Robotics and Applications
  • Winter Sports Injuries and Performance
  • Biosensors and Analytical Detection
  • Adhesion, Friction, and Surface Interactions
  • Face Recognition and Perception
  • SARS-CoV-2 detection and testing
  • Cell Image Analysis Techniques
  • Human Pose and Action Recognition

Affiliations

Hefei Institutes of Physical Science
2024

Institute of Intelligent Machines
2024

Chinese Academy of Sciences
2024

University of Science and Technology of China
2024

Columbia University
2022-2024

Georgia Institute of Technology
2024

Large language models are enabling rapid progress in robotic verbal communication, but nonverbal communication is not keeping pace. Physical humanoid robots struggle to express and communicate using facial movement, relying primarily on voice. The challenge is twofold: First, the actuation of an expressively versatile face is mechanically challenging. A second challenge is knowing what expression to generate so that the robot appears natural, timely, and genuine. Here, we propose that both barriers can be alleviated by...

10.1126/scirobotics.adi4724 article EN Science Robotics 2024-03-27

10.1038/s42256-025-01006-w article EN Nature Machine Intelligence 2025-02-25

Abstract The ability of robots to model their own dynamics is key to autonomous planning and learning, as well as to damage detection and recovery. Traditionally, dynamic models are pre-programmed or learned from external observations and IMU data. Here, we demonstrate for the first time how a task-agnostic self-model can be learned using only a single first-person-view camera in a self-supervised manner, without any prior knowledge of robot morphology, kinematics, or task. We trained an egocentric visual self-model on random motor...

10.21203/rs.3.rs-2303274/v1 preprint EN cc-by Research Square (Research Square) 2023-02-15
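The self-supervised self-modeling idea above can be illustrated with a toy low-dimensional stand-in. Everything here is a hypothetical simplification for illustration: the paper learns from egocentric video, whereas this sketch uses a 2-state linear system and fits a linear self-model from random motor babbling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth dynamics the robot does not know:
A_true = np.array([[0.9, 0.1], [0.0, 0.95]])
B_true = np.array([[0.5], [1.0]])

def step(x, u):
    return A_true @ x + B_true @ u

# Self-supervised data collection via random motor babbling:
# record (state, action, next_state) triples, no labels needed.
X, U, Y = [], [], []
x = np.zeros(2)
for _ in range(500):
    u = rng.uniform(-1, 1, size=1)
    x_next = step(x, u)
    X.append(x); U.append(u); Y.append(x_next)
    x = x_next

# Fit a linear self-model by least squares on [state, action] -> next state.
Z = np.hstack([np.array(X), np.array(U)])            # shape (500, 3)
W, *_ = np.linalg.lstsq(Z, np.array(Y), rcond=None)  # shape (3, 2)

# The learned self-model can now serve as an internal simulator.
def predict(x, u):
    return np.hstack([x, u]) @ W

err = np.linalg.norm(predict(X[10], U[10]) - Y[10])
```

Because the toy dynamics are linear and noiseless, the fitted self-model reproduces them almost exactly; the point of the sketch is the loop structure (babble, record, fit, simulate internally), not the model class.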

Integrating Large Language Models (LLMs) and Vision-Language Models (VLMs) with robotic systems enables robots to process and understand complex natural language instructions and visual information. However, a fundamental challenge remains: for robots to fully capitalize on these advancements, they must have a deep understanding of their physical embodiment. The gap between AI models' cognitive capabilities and their understanding of physical embodiment leads to the following question: Can a robot autonomously adapt its form and functionalities through interaction...

10.48550/arxiv.2403.10496 preprint EN arXiv (Cornell University) 2024-03-15

Abstract Simulation enables robots to plan and estimate the outcomes of prospective actions without the need to physically execute them. We introduce a self-supervised learning framework that enables robots to model and predict their morphology, kinematics, and motor control using only brief raw video data, eliminating the need for extensive real-world data collection and kinematic priors. By observing their own movements, akin to humans watching their reflection in a mirror, robots learn the ability to simulate themselves and predict their spatial motion for various tasks. Our...

10.21203/rs.3.rs-3909574/v1 preprint EN cc-by Research Square (Research Square) 2024-03-26

Entomopathogenic nematodes (EPNs) exhibit a bending-elastic instability, or kink, before becoming airborne, a feature hypothesized but not previously proven to enhance jumping performance. Here, we provide evidence that this kink is crucial for improving launch performance. We demonstrate that EPNs actively modulate their aspect ratio, forming a liquid-latched closed loop over a slow timescale O(1 s), then rapidly opening it over O(10 µs), achieving heights of 20 body lengths (BL) and generating ~10⁴ W/kg of specific power. Using...

10.1101/2024.06.07.598012 preprint EN bioRxiv (Cold Spring Harbor Laboratory) 2024-06-10
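The figures quoted in the abstract above are mutually consistent, which a back-of-envelope check makes visible. This is a rough sketch under one assumption not stated in the abstract: an EPN infective-juvenile body length of roughly 0.5 mm.

```python
# Back-of-envelope check of the reported ~10^4 W/kg specific power.
g = 9.81                  # m/s^2, gravitational acceleration
body_length = 0.5e-3      # m, ASSUMED typical EPN body length (not in abstract)
height = 20 * body_length # ~0.01 m, from "20 body lengths (BL)"
tau = 10e-6               # s, loop-opening timescale O(10 us) from the abstract

# Specific energy (per unit mass) to reach that height, delivered over tau:
specific_power = g * height / tau  # W/kg
print(f"{specific_power:.0f} W/kg")
```

With these numbers the estimate lands near 10⁴ W/kg, matching the order of magnitude reported in the abstract.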

10.1109/iros58592.2024.10801809 article EN 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2024-10-14

Abstract Self-modeling refers to an agent's ability to learn a predictive model of its own behavior. A continuously adapted self-model can serve as an internal simulator, enabling the agent to plan and assess various potential behaviors internally, reducing the need for expensive physical experimentation. Self-models are especially important in legged locomotion, where manual modeling is difficult, reinforcement learning is slow, and experimentation is risky. Here, we propose a Quasi-static Self-Modeling framework...

10.21203/rs.3.rs-3121605/v1 preprint EN cc-by Research Square (Research Square) 2023-07-07

Simulation enables robots to plan and estimate the outcomes of prospective actions without the need to physically execute them. We introduce a self-supervised learning framework that enables robots to model and predict their morphology, kinematics, and motor control using only brief raw video data, eliminating the need for extensive real-world data collection and kinematic priors. By observing their own movements, akin to humans watching their reflection in a mirror, robots learn the ability to simulate themselves and predict their spatial motion for various tasks. Our results...

10.48550/arxiv.2311.12151 preprint EN other-oa arXiv (Cornell University) 2023-01-01