Nikita Rudin

ORCID: 0000-0001-5893-0348
Research Areas
  • Robotic Locomotion and Control
  • Reinforcement Learning in Robotics
  • Robot Manipulation and Learning
  • Robotic Path Planning Algorithms
  • Human Pose and Action Recognition
  • Software Testing and Debugging Techniques
  • Prosthetics and Rehabilitation Robotics
  • Modular Robots and Swarm Intelligence
  • Robotics and Sensor-Based Localization
  • Parallel Computing and Optimization Techniques
  • Space Exploration and Technology
  • Planetary Science and Exploration
  • Advanced Vision and Imaging
  • Bat Biology and Ecology Studies
  • Multimodal Machine Learning Applications
  • Astro and Planetary Science
  • Soil Mechanics and Vehicle Dynamics
  • Computability, Logic, AI Algorithms
  • Genetics and Physical Performance
  • Advanced Neural Network Applications
  • Evolutionary Algorithms and Applications
  • Soft Robotics and Applications
  • Software Engineering Techniques and Practices
  • Muscle activation and electromyography studies
  • Poxvirus research and outbreaks

ETH Zurich
2021-2024

Nvidia (United States)
2022-2023

Isaac Gym offers a high-performance learning platform to train policies for a wide variety of robotics tasks directly on GPU. Both the physics simulation and the neural network policy training reside on the GPU and communicate by directly passing data from physics buffers to PyTorch tensors without ever going through any CPU bottlenecks. This leads to blazing-fast training times for complex robotics tasks on a single GPU, with 2-3 orders of magnitude improvements compared to conventional RL training that uses a CPU-based simulator and GPU for the neural networks. We host the results and videos at...

10.48550/arxiv.2108.10470 preprint EN cc-by arXiv (Cornell University) 2021-01-01
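The abstract above describes keeping all simulation state on the device and stepping every environment as one batched operation, with no per-environment transfer. A minimal toy sketch of that vectorization pattern (using numpy arrays in place of GPU tensors; the environment and its dynamics are made up for illustration, not Isaac Gym's API):

```python
import numpy as np

class ParallelPointEnvs:
    """Toy vectorized environment: N point-mass agents stepped as one
    batched array operation, mimicking the pattern where simulation
    state stays in a single device-resident buffer."""

    def __init__(self, num_envs: int, dt: float = 0.02):
        self.num_envs = num_envs
        self.dt = dt
        self.pos = np.zeros((num_envs, 2))
        self.vel = np.zeros((num_envs, 2))

    def step(self, actions: np.ndarray) -> np.ndarray:
        # One batched update for all environments -- no per-env Python loop.
        self.vel += actions * self.dt
        self.pos += self.vel * self.dt
        return self.pos  # observations come straight from the state buffer

envs = ParallelPointEnvs(num_envs=4096)
obs = envs.step(np.ones((4096, 2)))
print(obs.shape)  # (4096, 2)
```

On a GPU the same structure holds with `torch` tensors on `cuda`; the point is that adding environments widens the batch dimension instead of adding serial work.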

Reinforcement learning (RL) has emerged as a powerful approach for locomotion control of highly articulated robotic systems. However, one major challenge is the tedious process of tuning the reward function to achieve the desired motion style. To address this issue, imitation approaches such as adversarial motion priors have been proposed, which encourage a pre-defined motion style. In this work, we present an approach that enhances the concept of prior-based RL, allowing for multiple, discretely switchable styles. Our approach demonstrates that multiple styles and...

10.1109/icra48891.2023.10160751 article EN 2023-05-29

We present Orbit, a unified and modular framework for robot learning powered by Nvidia Isaac Sim. It offers a modular design to easily and efficiently create robotic environments with photo-realistic scenes and high-fidelity rigid and deformable body simulation. With Orbit, we provide a suite of benchmark tasks of varying difficulty, from single-stage cabinet opening and cloth...

10.1109/lra.2023.3270034 article EN IEEE Robotics and Automation Letters 2023-04-25

Performing agile navigation with four-legged robots is a challenging task because of the highly dynamic motions, contacts with various parts of the robot, and the limited field of view of the perception sensors. Here, we propose a fully learned approach to train such robots to conquer scenarios that are reminiscent of parkour challenges. The method involves training advanced locomotion skills for several types of obstacles, such as walking, jumping, climbing, and crouching, and then using a high-level policy to select and control those skills across the terrain. Thanks to our...

10.1126/scirobotics.adi7566 article EN Science Robotics 2024-03-13
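The abstract above describes a hierarchy: low-level skill policies (walk, jump, climb, crouch) plus a high-level policy that selects among them. A minimal sketch of that selection structure, with entirely hand-coded stand-ins for both levels (the observation layout, thresholds, and skill actions are invented for illustration):

```python
import numpy as np

# Stand-in low-level "skills": each maps an observation to an action.
def skill_walk(obs):   return np.array([0.1, 0.0])
def skill_jump(obs):   return np.array([0.0, 1.0])
def skill_climb(obs):  return np.array([0.5, 0.5])
def skill_crouch(obs): return np.array([0.0, -0.5])

SKILLS = [skill_walk, skill_jump, skill_climb, skill_crouch]

def high_level_policy(obs):
    """Stand-in for the learned selector: picks a skill index from a
    hand-coded rule on a hypothetical obstacle-height feature obs[0]."""
    height = obs[0]
    if height < 0.1:
        return 0  # flat ground: walk
    elif height < 0.3:
        return 3  # low overhang: crouch
    elif height < 0.6:
        return 1  # gap or ledge: jump
    return 2      # tall obstacle: climb

def control(obs):
    # High level chooses the skill, low level produces the motor command.
    idx = high_level_policy(obs)
    return SKILLS[idx](obs)

print(control(np.array([0.5, 0.0])))  # selects the jump skill
```

In the paper both levels are learned neural networks; this only shows how the selector composes pre-trained skills at runtime.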

In this work, we present and study a training set-up that achieves fast policy generation for real-world robotic tasks by using massive parallelism on a single workstation GPU. We analyze and discuss the impact of different training algorithm components in the massively parallel regime on the final policy performance and training times. In addition, we present a novel game-inspired curriculum that is well suited for training with thousands of simulated robots in parallel. We evaluate the approach by training the quadrupedal robot ANYmal to walk on challenging terrain. The parallel approach allows training policies for flat terrain...

10.48550/arxiv.2109.11978 preprint EN other-oa arXiv (Cornell University) 2021-01-01
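The "game-inspired curriculum" above has robots progress through terrain difficulty levels like players advancing through game levels. A sketch of one plausible per-robot update rule under that idea (the promotion/demotion thresholds and the distance-walked criterion are assumptions for illustration):

```python
import numpy as np

def update_terrain_levels(levels, distance_walked, max_level,
                          promote_dist=4.0, demote_dist=1.0):
    """Vectorized curriculum update over all robots at once: robots that
    traverse their terrain are promoted to a harder level, robots that
    barely move are demoted, everyone else stays. Thresholds are made up."""
    levels = levels.copy()
    levels[distance_walked > promote_dist] += 1
    levels[distance_walked < demote_dist] -= 1
    return np.clip(levels, 0, max_level)

levels = np.array([0, 2, 5, 3])
dist = np.array([5.0, 0.5, 6.0, 2.0])
print(update_terrain_levels(levels, dist, max_level=5))  # [1 1 5 3]
```

Because the update is a batched array operation, it scales to thousands of parallel robots with no extra serial cost, which is what makes it fit the massively parallel regime.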

In this article, we show that learned policies can be applied to solve legged locomotion control tasks with extensive flight phases, such as those encountered in space exploration. Using an off-the-shelf deep reinforcement learning algorithm, we trained a neural network to control a jumping quadruped robot while solely using its limbs for attitude control. We present tasks of increasing complexity leading to a combination of three-dimensional (re-)orientation and landing behaviors while traversing simulated low-gravity...

10.1109/tro.2021.3084374 article EN IEEE Transactions on Robotics 2021-06-14

The common approach for local navigation in challenging environments with legged robots requires path planning, path following, and locomotion, the last usually handled by a locomotion control policy that accurately tracks a commanded velocity. However, by breaking down the problem into these sub-tasks, we limit the robot's capabilities, since the individual tasks do not consider the full solution space. In this work, we propose to solve the complete problem by training an end-to-end policy with deep reinforcement learning. Instead of continuously...

10.1109/iros47612.2022.9981198 article EN 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2022-10-23

10.1109/iros58592.2024.10801909 article EN 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2024-10-14

We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments. Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the algorithm estimates the topography in the robot's vicinity. The raw measurements from these cameras are noisy and only provide partial and occluded observations that in many cases do not show the terrain the robot stands on. Therefore, we propose a 3D reconstruction model that faithfully reconstructs the scene, despite large amounts of missing data coming from the blind spots of the camera...

10.1109/lra.2022.3184779 article EN IEEE Robotics and Automation Letters 2022-06-20

This letter introduces Barry, a dynamically balancing quadruped robot optimized for high payload capabilities and efficiency. It presents a new high-torque, low-inertia leg design, which includes custom-built high-efficiency actuators with transparent, sensorless transmissions. The robot's reinforcement learning-based controller is trained to fully leverage the hardware to balance and steer the robot. The newly developed controller can manage the non-linearities introduced by the design and handle unmodeled payloads of up to 90 kg while...

10.1109/lra.2023.3313923 article EN IEEE Robotics and Automation Letters 2023-09-18

Autonomous robots must navigate reliably in unknown environments even under compromised exteroceptive perception, or perception failures. Such failures often occur when harsh environments lead to degraded sensing, or when the perception algorithm misinterprets the scene due to limited generalization. In this paper, we model perception failures as invisible obstacles and pits, and train a reinforcement learning (RL) based local navigation policy to guide our legged robot. Unlike previous works relying on heuristics and anomaly detection to update navigational...

10.1109/icra57147.2024.10611254 article EN 2024-05-13

Quadruped robots have shown remarkable mobility on various terrains through reinforcement learning. Yet, in the presence of sparse footholds and risky terrains such as stepping stones and balance beams, which require precise foot placement to avoid falls, model-based approaches are often used. In this paper, we show that end-to-end reinforcement learning can also enable the robot to traverse such risky terrains with dynamic motions. To this end, our approach involves training a generalist policy for agile locomotion on disorderly and sparse stepping stones before transferring...

10.48550/arxiv.2311.10484 preprint EN other-oa arXiv (Cornell University) 2023-01-01

We present SpaceHopper, a three-legged, small-scale robot designed for future mobile exploration of asteroids and moons. The robot weighs 5.2 kg and has a body size of 245 mm while using space-qualifiable components. Furthermore, SpaceHopper's design and controls make it well-adapted for investigating dynamic locomotion modes with extended flight phases. Instead of gyroscopes or fly-wheels, the system uses its three legs to reorient itself during flight in preparation for landing. We control the leg motion for reorientation using Deep...

10.48550/arxiv.2403.02831 preprint EN arXiv (Cornell University) 2024-03-05

Symmetry is a fundamental aspect of many real-world robotic tasks. However, current deep reinforcement learning (DRL) approaches can seldom harness and exploit symmetry effectively. Often, the learned behaviors fail to achieve the desired transformation invariances and suffer from motion artifacts. For instance, a quadruped may exhibit different gaits when commanded to move forward or backward, even though it is symmetrical about its torso. This issue becomes further pronounced in high-dimensional or complex...

10.48550/arxiv.2403.04359 preprint EN arXiv (Cornell University) 2024-03-07
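One common way to inject the symmetry the abstract describes is data augmentation: every collected transition is mirrored under the robot's symmetry transform and both copies are used for training. A minimal sketch, with an invented planar observation/action layout purely for illustration (the paper's actual method and state spaces may differ):

```python
import numpy as np

def mirror_left_right(obs, act):
    """Hypothetical left-right mirroring for a planar robot whose
    observation is [x_vel, y_vel, yaw_rate] and whose action is a 2-D
    foot target [x, y]: lateral components and yaw flip sign."""
    obs_m = obs * np.array([1.0, -1.0, -1.0])
    act_m = act * np.array([1.0, -1.0])
    return obs_m, act_m

def augment_batch(obs_batch, act_batch):
    # Append the mirrored copy of every transition, doubling the batch,
    # so forward/backward and left/right behaviors see symmetric data.
    obs_m, act_m = mirror_left_right(obs_batch, act_batch)
    return (np.concatenate([obs_batch, obs_m]),
            np.concatenate([act_batch, act_m]))

obs = np.array([[1.0, 0.5, 0.2]])
act = np.array([[0.3, -0.1]])
obs_aug, act_aug = augment_batch(obs, act)
print(obs_aug.shape, act_aug.shape)  # (2, 3) (2, 2)
```

The alternative family of approaches bakes the invariance into the network architecture (equivariant layers) rather than the data; augmentation is the simpler, approximate route.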

We present ORBIT, a unified and modular framework for robot learning powered by NVIDIA Isaac Sim. It offers a modular design to easily and efficiently create robotic environments with photo-realistic scenes and fast and accurate rigid and deformable body simulation. With ORBIT, we provide a suite of benchmark tasks of varying difficulty -- from single-stage cabinet opening and cloth folding to multi-stage tasks such as room reorganization. To support working with diverse observation and action spaces, we include fixed-arm and mobile manipulators with different...

10.48550/arxiv.2301.04195 preprint EN cc-by arXiv (Cornell University) 2023-01-01

In recent years, reinforcement learning (RL) has shown outstanding performance for locomotion control of highly articulated robotic systems. Such approaches typically involve tedious reward function tuning to achieve the desired motion style. Imitation learning approaches such as adversarial motion priors aim to reduce this problem by encouraging a pre-defined motion style. In this work, we present an approach that augments the concept of adversarial motion prior-based RL to allow for multiple, discretely switchable styles. We show that multiple styles and skills can be learned...

10.48550/arxiv.2203.14912 preprint EN other-oa arXiv (Cornell University) 2022-01-01

Performing agile navigation with four-legged robots is a challenging task due to the highly dynamic motions, contacts with various parts of the robot, and the limited field of view of the perception sensors. In this paper, we propose a fully-learned approach to train such robots to conquer scenarios that are reminiscent of parkour challenges. The method involves training advanced locomotion skills for several types of obstacles, such as walking, jumping, climbing, and crouching, and then using a high-level policy to select and control those skills across the terrain...

10.48550/arxiv.2306.14874 preprint EN other-oa arXiv (Cornell University) 2023-01-01

Autonomous robots must navigate reliably in unknown environments even under compromised exteroceptive perception, or perception failures. Such failures often occur when harsh environments lead to degraded sensing, or when the perception algorithm misinterprets the scene due to limited generalization. In this paper, we model perception failures as invisible obstacles and pits, and train a reinforcement learning (RL) based local navigation policy to guide our legged robot. Unlike previous works relying on heuristics and anomaly detection to update navigational...

10.48550/arxiv.2310.03581 preprint EN other-oa arXiv (Cornell University) 2023-01-01

We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments. Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the algorithm estimates the topography in the robot's vicinity. The raw measurements from these cameras are noisy and only provide partial and occluded observations that in many cases do not show the terrain the robot stands on. Therefore, we propose a 3D reconstruction model that faithfully reconstructs the scene, despite large amounts of missing data coming from the blind spots of the camera...

10.48550/arxiv.2206.08077 preprint EN other-oa arXiv (Cornell University) 2022-01-01