- Human Pose and Action Recognition
- Robotic Locomotion and Control
- Human Motion and Animation
- X-ray Diffraction in Crystallography
- Crystallization and Solubility Studies
- Reinforcement Learning in Robotics
- Robot Manipulation and Learning
- Muscle Activation and Electromyography Studies
- Fault Detection and Control Systems
- Prosthetics and Rehabilitation Robotics
- Video Analysis and Summarization
- Artificial Intelligence in Games
- Evacuation and Crowd Dynamics
- Machine Fault Diagnosis Techniques
- Animal Behavior and Welfare Studies
- Mineral Processing and Grinding
- Hand Gesture Recognition Systems
- Spectroscopy and Chemometric Analyses
- Crystallography and Molecular Interactions
- Motor Control and Adaptation
- Generative Adversarial Networks and Image Synthesis
- Contact Mechanics and Variational Inequalities
- Autonomous Vehicle Technology and Safety
- Balance, Gait, and Falls Prevention
- Neurogenetic and Muscular Disorders Research
Simon Fraser University
2022-2024
Nvidia (United States)
2022-2024
University of California, Berkeley
2018-2022
Yanshan University
2021
University of California System
2018-2021
Google (United States)
2020
University of British Columbia
2015-2017
University of Sheffield
2017
Baogang Group (China)
2013-2014
Air Force Engineering University
2009
Learning physics-based locomotion skills is a difficult problem, leading to solutions that typically exploit prior knowledge of various forms. In this paper we aim to learn a variety of environment-aware locomotion skills with a limited amount of prior knowledge. We adopt a two-level hierarchical control framework. First, low-level controllers are learned that operate at a fine timescale and which achieve robust walking gaits that satisfy stepping-target and style objectives. Second, high-level controllers are then learned which plan at the timescale of steps by invoking desired step...
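The two-level structure described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: all function names are hypothetical, the "character" is a single scalar position, and the high-level planner simply emits a bounded step target that the low-level controller tracks at a finer timescale.

```python
def high_level_plan(com_x, goal_x):
    """Coarse timescale: pick the next step target, a bounded step toward the goal."""
    max_step = 0.5
    return com_x + max(-max_step, min(max_step, goal_x - com_x))

def low_level_track(com_x, step_target, gain=0.2):
    """Fine timescale: move the character a fraction of the way to the step target."""
    return com_x + gain * (step_target - com_x)

def rollout(goal_x, steps=10, ticks_per_step=30):
    com_x = 0.0
    for _ in range(steps):
        target = high_level_plan(com_x, goal_x)   # high level invokes one step target
        for _ in range(ticks_per_step):           # low level runs many ticks per step
            com_x = low_level_track(com_x, target)
    return com_x

final_x = rollout(goal_x=3.0)  # character ends up near the goal
```

The point of the split is that the high level only decides once per step, while the low level handles the fine-grained tracking in between.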
A longstanding goal in character animation is to combine data-driven specification of behavior with a system that can execute a similar behavior in a physical simulation, thus enabling realistic responses to perturbations and environmental variation. We show that well-known reinforcement learning (RL) methods can be adapted to learn robust control policies capable of imitating a broad range of example motion clips, while also learning complex recoveries, adapting to changes in morphology, and accomplishing user-specified goals. Our method handles...
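Motion-imitation RL of this kind typically scores the simulated character against the reference clip with an exponentiated tracking error, so that a perfect match yields a reward of 1 and large deviations decay toward 0. A minimal sketch of that reward shape, with hypothetical names and a flat list of joint angles standing in for the full pose:

```python
import math

def imitation_reward(pose, ref_pose, scale=2.0):
    """Exponentiated negative pose error: 1.0 for a perfect match,
    smoothly decaying toward 0 as the tracking error grows."""
    err = sum((p - r) ** 2 for p, r in zip(pose, ref_pose))
    return math.exp(-scale * err)

r_match = imitation_reward([0.1, 0.2], [0.1, 0.2])   # perfect tracking -> 1.0
r_off = imitation_reward([1.0, 0.0], [0.0, 0.0])     # one radian off -> small reward
```

The `scale` parameter controls how sharply reward falls off; real systems use several such terms (joint angles, velocities, end-effectors) with different scales.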
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics. While manually-designed controllers have been able to emulate many complex behaviors, building such controllers involves a time-consuming and difficult development process, often requiring substantial expertise in the nuances of each skill. Reinforcement learning provides an appealing alternative for automating the manual effort involved in developing controllers. However, designing learning objectives that elicit the desired behaviors from an agent...
Reinforcement learning offers a promising methodology for developing skills for simulated characters, but typically requires working with sparse hand-crafted features. Building on recent progress in deep reinforcement learning (DeepRL), we introduce a mixture of actor-critic experts (MACE) approach that learns terrain-adaptive dynamic locomotion skills using high-dimensional state and terrain descriptions as input, and parameterized leaps or steps as output actions. MACE learns more quickly than a single actor-critic approach and results in experts that exhibit...
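The core mixture mechanism can be illustrated compactly. In this sketch (hypothetical names, toy one-dimensional "state"), each expert carries its own critic and actor; the critic of each expert scores the current state, and the actor of the highest-scoring expert supplies the action:

```python
def mace_action(state, experts):
    """Mixture of actor-critic experts: query every expert's critic,
    then act with the actor of the expert that values the state most."""
    best = max(experts, key=lambda e: e["critic"](state))
    return best["actor"](state)

# Two toy experts: one "specialized" for positive states, one for negative.
experts = [
    {"critic": lambda s: s, "actor": lambda s: "leap"},
    {"critic": lambda s: -s, "actor": lambda s: "step"},
]
```

Because each expert can specialize on a subset of terrain situations, the mixture as a whole covers a wider range than a single monolithic actor-critic.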
Data-driven character animation based on motion capture can produce highly naturalistic behaviors and, when combined with physics simulation, can provide for natural procedural responses to physical perturbations, environmental changes, and morphological discrepancies. Motion capture remains the most popular source of motion data, but collecting mocap data typically requires heavily instrumented environments and actors. In this paper, we propose a method that enables physically simulated characters to learn skills...
Developing robust walking controllers for bipedal robots is a challenging endeavor. Traditional model-based locomotion controllers require simplifying assumptions and careful modelling; any small errors can result in unstable control. To address these challenges in bipedal locomotion, we present a model-free reinforcement learning framework for training policies in simulation, which can then be transferred to a real Cassie robot. To facilitate sim-to-real transfer, domain randomization is used to encourage the policies to learn behaviors that are...
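Domain randomization, as used here for sim-to-real transfer, amounts to resampling the simulator's physical parameters at the start of every training episode so the policy cannot overfit to one exact model. A minimal sketch with hypothetical parameter names and ranges:

```python
import random

def randomize_dynamics(rng):
    """Sample a fresh set of simulator parameters for one episode.
    The ranges below are illustrative, not the paper's values."""
    return {
        "ground_friction": rng.uniform(0.5, 1.2),
        "link_mass_scale": rng.uniform(0.8, 1.2),
        "motor_delay_ms": rng.uniform(0.0, 20.0),
    }

rng = random.Random(0)  # seeded for reproducibility
episode_params = [randomize_dynamics(rng) for _ in range(100)]
```

A policy that walks well under all of these sampled dynamics is more likely to tolerate the (unknown) dynamics of the physical robot.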
The incredible feats of athleticism demonstrated by humans are made possible in part by a vast repertoire of general-purpose motor skills, acquired through years of practice and experience. These skills not only enable humans to perform complex tasks, but also provide powerful priors for guiding their behaviors when learning new tasks. This is in stark contrast to what is common practice in physics-based character animation, where control policies are most typically trained from scratch for each task. In this work, we present...
The use of deep reinforcement learning allows for high-dimensional state descriptors, but little is known about how the choice of action representation impacts learning difficulty and the resulting performance. We compare the impact of four different action parameterizations (torques, muscle-activations, target joint angles, and target joint-angle velocities) in terms of learning time, policy robustness, motion quality, and policy query rates. Our results are evaluated on a gait-cycle imitation task for multiple planar articulated figures and multiple gaits. We demonstrate...
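The key difference between the torque and target-joint-angle parameterizations is that the latter inserts a PD controller between the policy and the simulator: the policy outputs a target angle, and a low-level loop converts it to torque. A minimal sketch of that conversion on a single unit-inertia joint, with illustrative gains:

```python
def pd_torque(q, qdot, q_target, kp=50.0, kd=5.0):
    """PD control: a target-angle action becomes a torque proportional to the
    angle error, damped by the joint velocity."""
    return kp * (q_target - q) - kd * qdot

def simulate_joint(q_target, steps=200, dt=0.01):
    """Semi-implicit Euler integration of a unit-inertia joint under PD control."""
    q, qdot = 0.0, 0.0
    for _ in range(steps):
        qdot += pd_torque(q, qdot, q_target) * dt
        q += qdot * dt
    return q

final_q = simulate_joint(1.0)  # joint settles near the 1.0 rad target
```

Because the PD loop handles fine-grained stabilization, the policy can act at a lower rate and the learning problem often becomes easier, which is one of the trade-offs such comparisons measure.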
Legged robots are physically capable of traversing a wide range of challenging environments, but designing controllers that are sufficiently robust to handle this diversity has been a long-standing challenge in robotics. Reinforcement learning presents an appealing approach for automating the controller design process, and has been able to produce remarkably robust controllers when trained in suitable environments. However, it is difficult to predict all the conditions the robot is likely to encounter during deployment and enumerate them at...
We introduce a method for generating realistic pedestrian trajectories and full-body animations that can be controlled to meet user-defined goals. We draw on recent advances in guided diffusion modeling to achieve test-time controllability of trajectories, which is normally only associated with rule-based systems. Our model allows users to constrain trajectories through target waypoints, speed, and specified social groups while accounting for the surrounding environment context. This trajectory model is integrated with a novel...
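Test-time guidance works by nudging the sample down the gradient of a user-specified cost at each denoising iteration. The sketch below strips away the diffusion model itself and shows only the guidance step on a toy 2D trajectory, with a waypoint cost on the endpoint; all names are illustrative:

```python
def waypoint_cost(traj, waypoint):
    """Cost: squared distance of the trajectory endpoint from the target waypoint."""
    x, y = traj[-1]
    wx, wy = waypoint
    return (x - wx) ** 2 + (y - wy) ** 2

def guide(traj, waypoint, step=0.1):
    """One guidance step: move the endpoint down the analytic cost gradient.
    In guided diffusion this nudge is applied at every denoising iteration."""
    x, y = traj[-1]
    wx, wy = waypoint
    gx, gy = 2 * (x - wx), 2 * (y - wy)
    return traj[:-1] + [(x - step * gx, y - step * gy)]

traj = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
for _ in range(50):
    traj = guide(traj, waypoint=(2.0, 3.0))  # endpoint is pulled toward (2, 3)
```

Because the constraint is imposed through a cost at sampling time rather than baked into training, users can change waypoints, speeds, or group constraints without retraining.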
This paper presents a comprehensive study on using deep reinforcement learning (RL) to create dynamic locomotion controllers for bipedal robots. Going beyond focusing on a single skill, we develop a general control solution that can be used for a range of skills, from periodic walking and running to aperiodic jumping and standing. Our RL-based controller incorporates a novel dual-history architecture, utilizing both a long-term and a short-term input/output (I/O) history of the robot. When trained through the proposed...
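The dual-history idea can be illustrated with a small observation builder: the policy sees both a long window of past observation/action pairs and a dense short window of the most recent ones. This sketch uses hypothetical names and window lengths:

```python
from collections import deque

class DualHistoryObs:
    """Observation builder keeping a long-term and a short-term I/O history."""
    def __init__(self, long_len=50, short_len=4):
        self.long = deque(maxlen=long_len)    # coarse, long context
        self.short = deque(maxlen=short_len)  # dense, most recent ticks

    def record(self, obs, action):
        io = tuple(obs) + tuple(action)
        self.long.append(io)
        self.short.append(io)

    def build(self):
        return {"long": list(self.long), "short": list(self.short)}

h = DualHistoryObs()
for t in range(100):
    h.record(obs=(float(t),), action=(0.0,))
obs = h.build()
```

The long history lets the policy implicitly identify the robot's dynamics, while the short history provides the fast feedback needed for dynamic skills.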
We address the problem of enabling quadrupedal robots to perform precise shooting skills in the real world using reinforcement learning. Developing algorithms to enable a legged robot to shoot a soccer ball to a given target is a challenging problem that combines motion control and planning into one task. To solve this problem, we need to consider the dynamics limitations and motion stability of a dynamic legged robot. Moreover, we need to account for a hard-to-model deformable ball rolling on the ground with uncertain friction to a desired location. In this paper, we propose...
Fig. 1: Representative dynamic jumping maneuvers performed by the bipedal robot Cassie using the proposed goal-conditioned control policies. From left to right: (a) jumps over 1.4 m and lands at a given target; (b) jumps to a target 0.88 m in front and 0.44 m above the ground; (c) jumps in place while turning 55° given a command to turn 60° in place. The policies are trained in simulation and deployed on hardware without further tuning. Video at: https://youtu.be/aAPSZ2QFB-E
We present a reinforcement learning (RL) framework that enables quadrupedal robots to perform soccer goalkeeping tasks in the real world. Soccer goalkeeping with quadrupeds is a challenging problem that combines highly dynamic locomotion with precise and fast non-prehensile object (ball) manipulation. The robot needs to react to and intercept a potentially flying ball using dynamic maneuvers in a very short amount of time, usually less than one second. In this paper, we propose to address this problem using a hierarchical model-free RL framework. The first...
Humanoid robots have great potential to perform various human-level skills. These skills involve locomotion, manipulation, and cognitive capabilities. Driven by advances in machine learning and the strength of existing model-based approaches, these capabilities have progressed rapidly, but often separately. Therefore, a timely overview of current progress and future trends in this fast-evolving field is essential. This survey first summarizes the model-based planning and control methods that have been the backbone of humanoid robotics for the past three...
The locomotion skills developed for physics-based characters most often target flat terrain. However, much of their potential lies with the creation of dynamic, momentum-based motions across more complex terrains. In this paper, we learn controllers that allow simulated characters to traverse terrains with gaps, steps, and walls using highly dynamic gaits. This is achieved using reinforcement learning, with careful attention given to the action representation and a non-parametric approximation of both the value function and the policy; ...
Humans are able to perform a myriad of sophisticated tasks by drawing upon skills acquired through prior experience. For autonomous agents to have this capability, they must be able to extract reusable skills from past experience that can be recombined in new ways for subsequent tasks. Furthermore, when controlling complex high-dimensional morphologies, such as humanoid bodies, tasks often require coordination of multiple skills simultaneously. Learning discrete primitives for every combination quickly becomes prohibitive...
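One way to coordinate multiple skills simultaneously, rather than selecting a single primitive at a time, is to compose Gaussian action primitives multiplicatively: the composite action is a precision-and-weight weighted average of the primitives' means. A minimal one-dimensional sketch of that blending rule (names and numbers are illustrative):

```python
def compose_primitives(primitives, weights):
    """Multiplicative blend of Gaussian primitives (mean, stddev):
    precisions (1/sigma^2) scaled by weights determine each primitive's
    influence on the composite action, so confident primitives dominate."""
    num = sum(w * mu / (sig ** 2) for (mu, sig), w in zip(primitives, weights))
    den = sum(w / (sig ** 2) for (mu, sig), w in zip(primitives, weights))
    return num / den

# Two hypothetical primitives proposing different joint targets.
blended = compose_primitives([(1.0, 0.5), (3.0, 0.5)], weights=[0.5, 0.5])
```

With equal confidence and equal weight the result is the midpoint; a primitive with a much smaller standard deviation pulls the composite strongly toward its own mean.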
Human and animal gaits are often symmetric in nature, which points to the use of motion symmetry as a potentially useful source of structure that can be exploited for learning. By encouraging symmetric motion, learning may be faster, converge to more efficient solutions, and be more aesthetically pleasing. We describe, compare, and evaluate four practical methods for encouraging symmetry. These are implemented via particular choices of policy network architecture, data duplication, or the loss function. We experimentally evaluate them in terms of the performance achieved and the symmetry, and provide...
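Of the methods mentioned, data duplication is the simplest to sketch: every (state, action) sample is mirrored about the sagittal plane and appended to the training batch. The toy layout below (left/right leg features plus one lateral term that flips sign) is purely illustrative:

```python
def mirror(state, action):
    """Mirror one sample: swap left/right features, negate the lateral term,
    and swap the per-leg actions. The layout here is a hypothetical example."""
    left, right, lateral_vel = state
    return (right, left, -lateral_vel), tuple(reversed(action))

def duplicate_batch(batch):
    """Data-duplication symmetry: append the mirrored copy of every sample."""
    return batch + [mirror(s, a) for s, a in batch]

batch = [((0.1, 0.3, 0.2), (1.0, -1.0))]
augmented = duplicate_batch(batch)  # original sample plus its mirror image
```

The alternative implementations (a symmetry-enforcing network architecture, or an auxiliary symmetry loss) achieve the same goal without changing the dataset, which is part of what such comparisons evaluate.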
Developing systems that can synthesize natural and life-like motions for simulated characters has long been a focus of computer animation. But in order for these systems to be useful for downstream applications, they need not only to produce high-quality motions, but must also provide an accessible and versatile interface through which users can direct a character's behaviors. Natural language provides a simple-to-use and expressive medium for specifying a user's intent. Recent breakthroughs in natural language processing (NLP) have demonstrated...
Modeling human motor control and predicting how humans will move in novel environments is a grand scientific challenge. Despite advances in neuroscience techniques, it is still difficult to measure and interpret the activity of the millions of neurons involved in motor control. Thus, researchers in the fields of biomechanics and motor control have proposed and evaluated control models via neuromechanical simulations, which produce physically correct motions of a musculoskeletal model. Typically, researchers have developed control models that encode physiologically plausible...