Satoshi Yamamori

ORCID: 0009-0001-0712-268X
Research Areas
  • Robot Manipulation and Learning
  • Reinforcement Learning in Robotics
  • Fault Detection and Control Systems
  • EEG and Brain-Computer Interfaces
  • Teleoperation and Haptic Systems
  • Advanced Image and Video Retrieval Techniques
  • Intelligent Tutoring Systems and Adaptive Learning
  • Hand Gesture Recognition Systems
  • Soft Robotics and Applications
  • Advanced Memory and Neural Computing
  • Video Surveillance and Tracking Methods
  • Advanced Vision and Imaging
  • Gait Recognition and Analysis
  • Ferroelectric and Negative Capacitance Devices
  • Advanced Control Systems Optimization
  • Multimodal Machine Learning Applications
  • Modular Robots and Swarm Intelligence
  • Human Pose and Action Recognition
  • Neural Networks and Applications
  • Anomaly Detection Techniques and Applications
  • Anatomy and Medical Technology
  • Robotic Locomotion and Control
  • Web Data Mining and Analysis

Advanced Telecommunications Research Institute International
2024-2025

Kyoto University
2018

Recent advancements in 3D reconstruction, especially through neural rendering approaches like Neural Radiance Fields (NeRF) and Plenoxel, have led to high-quality visualizations. However, these methods are optimized for digital environments and employ view-dependent color models (RGB) and 2D splatting techniques, which do not translate well to physical printing. This paper introduces "Poxel", which stands for Printable-Voxel, a voxel-based reconstruction framework for photopolymer jetting printing, which allows...
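
As a rough illustration of the representation the abstract points to (not the paper's actual pipeline), the sketch below assembles a dense voxel grid of view-independent RGBA values, the kind of volume a photopolymer jetting printer can consume directly. The scene, resolution, and color mapping are invented for the example.

```python
import numpy as np

# Minimal sketch of a printable voxel volume: view-independent color plus
# occupancy on a regular grid (illustrative only; Poxel's reconstruction and
# optimization steps are not reproduced here).

N = 64                                    # grid resolution per axis
xs = np.linspace(-1.0, 1.0, N)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")

# Toy scene: a solid sphere with a color gradient along z.
occupancy = (X**2 + Y**2 + Z**2) <= 0.8**2            # boolean occupancy grid
color = np.stack([                                     # view-independent RGB in [0, 1]
    0.5 + 0.5 * Z,
    0.3 * np.ones_like(Z),
    0.5 - 0.5 * Z,
], axis=-1)

# Pack into an (N, N, N, 4) RGBA volume; empty voxels are fully transparent.
volume = np.concatenate([color, occupancy[..., None].astype(float)], axis=-1)
volume[~occupancy] = 0.0

print("printable voxels:", int(occupancy.sum()), "of", N**3)
```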

10.48550/arxiv.2501.10474 preprint EN arXiv (Cornell University) 2025-01-16

10.1109/sii59315.2025.10870978 article EN 2025 IEEE/SICE International Symposium on System Integration (SII) 2025-01-21

10.1109/tcds.2025.3543350 article EN cc-by IEEE Transactions on Cognitive and Developmental Systems 2025-01-01

In this study, we propose the use of the phase-amplitude reduction method to construct an imitation learning framework. Imitating human movement trajectories is recognized as a promising strategy for generating a range of human-like robot movements. Unlike previous dynamical system-based approaches, our proposed method allows us not only to imitate the limit cycle trajectory but also to replicate the transient from an initial or disturbed state to the cycle. Consequently, it offers a safer approach that avoids unpredictable motions...
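
A minimal sketch of the phase-amplitude idea under a deliberately simplified reduced model (the paper's formulation is not reproduced): the phase advances at a fixed frequency while the amplitude coordinate contracts, so a trajectory started off the demonstrated cycle converges back onto it, which is the transient behavior the abstract refers to. The constants and the circular reference cycle are illustrative assumptions.

```python
import numpy as np

# Reduced phase-amplitude dynamics (assumed simplified form): theta advances at
# the natural frequency omega; the amplitude coordinate r, measuring deviation
# from the limit cycle, contracts at rate kappa, so transients return to the cycle.

omega = 2.0 * np.pi   # natural frequency of the demonstrated cycle [rad/s]
kappa = -3.0          # contraction rate of the amplitude coordinate
dt = 0.01

def limit_cycle(theta):
    """Reference point on the cycle; here a planar circular motion."""
    return np.array([np.cos(theta), np.sin(theta)])

def step(theta, r):
    """One Euler step of the reduced dynamics."""
    return theta + omega * dt, r + kappa * r * dt

def to_state(theta, r):
    """Map reduced coordinates back to task space: cycle point plus radial offset."""
    return (1.0 + r) * limit_cycle(theta)

theta, r = 0.0, 0.5   # start displaced from the cycle (a disturbed state)
trajectory = []
for _ in range(400):
    trajectory.append(to_state(theta, r))
    theta, r = step(theta, r)

print("final distance from cycle:", abs(np.linalg.norm(trajectory[-1]) - 1.0))
```
The appeal of the reduction is that both the periodic motion and the return from a perturbation are encoded by the same low-dimensional dynamics.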

10.48550/arxiv.2406.03735 preprint EN arXiv (Cornell University) 2024-06-06

While MPC enables nonlinear feedback control by solving an optimal control problem at each timestep, the computational burden tends to be significantly large, making it difficult to optimize a policy within the control period. To address this issue, one possible approach is to utilize terminal value learning to reduce computational costs. However, the learned value cannot be used for other tasks in situations where the task dynamically changes from the original setup. In this study, we develop a framework with goal-conditioned terminal value learning to achieve multitask policy optimization while...
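
The sketch below shows the general pattern of short-horizon MPC with a goal-conditioned terminal value, using toy double-integrator dynamics and a random-shooting optimizer; the hand-coded terminal_value is a stand-in for a learned estimator V(x, g) and is not the paper's model.

```python
import numpy as np

# Short-horizon MPC where a goal-conditioned terminal value closes the horizon
# (illustrative setup; dynamics, costs, and optimizer are assumptions).

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator: position, velocity
B = np.array([0.005, 0.1])
H = 5                                     # deliberately short planning horizon

def stage_cost(x, u, g):
    return np.sum((x - g) ** 2) + 0.01 * u ** 2

def terminal_value(x, g):
    # Placeholder for a learned goal-conditioned value estimate V(x, g).
    return 10.0 * np.sum((x - g) ** 2)

def rollout_cost(x0, u_seq, g):
    x, cost = x0.copy(), 0.0
    for u in u_seq:
        cost += stage_cost(x, u, g)
        x = A @ x + B * u
    return cost + terminal_value(x, g)

def mpc_action(x0, g, n_samples=256):
    # Random-shooting optimization over length-H action sequences.
    best_cost, best_u = np.inf, 0.0
    for _ in range(n_samples):
        u_seq = np.random.randn(H)
        c = rollout_cost(x0, u_seq, g)
        if c < best_cost:
            best_cost, best_u = c, u_seq[0]
    return best_u

x, g = np.array([0.0, 0.0]), np.array([1.0, 0.0])
for _ in range(50):
    x = A @ x + B * mpc_action(x, g)
print("state after 50 steps:", x)
```
Because the terminal value summarizes the cost-to-go beyond step H, the horizon can stay short enough to re-solve within one control period, and conditioning that value on the goal g is what lets the same controller be reused when the task changes.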

10.48550/arxiv.2410.04929 preprint EN arXiv (Cornell University) 2024-10-07

In this study, we propose the use of the phase-amplitude reduction method to construct an imitation learning framework. Imitating human movement trajectories is recognized as a promising strategy for generating a range of human-like robot movements. Unlike previous dynamical system-based approaches, our proposed method allows us not only to imitate the limit cycle trajectory but also to replicate the transient from an initial or disturbed state to the cycle. Consequently, it offers a safer approach that avoids unpredictable motions...

10.1080/01691864.2024.2441242 article EN cc-by Advanced Robotics 2024-12-17

10.1587/transfun.e101.a.1092 article EN IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences 2018-06-30

In dynamic motion generation tasks, including contact and collisions, small changes in policy parameters can lead to extremely different returns. For example, in soccer, the ball can fly in completely different directions from a similar heading motion by slightly changing the hitting position or the applied force when the friction of the ball varies. However, it is difficult to imagine that entirely different skills are needed for different directions. In this study, we proposed a multitask reinforcement learning algorithm that adapts to implicit changes in goals or environments within a single motion category...
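
As a loose illustration of adapting to an implicit change of goal (not the algorithm of the paper), the sketch below keeps a belief over a small set of candidate goals and updates it from observed rewards, so the behavior switches once the inferred goal changes; the bandit-like environment and Gaussian likelihood are assumptions made for the example.

```python
import numpy as np

# Inferring an implicit goal from rewards and acting on the current estimate
# (illustrative; not the paper's multitask RL algorithm).

goals = np.array([-1.0, 0.0, 1.0])         # candidate implicit goal positions
belief = np.ones(len(goals)) / len(goals)  # uniform prior over the goals
noise = 0.1

def reward(action, true_goal):
    return -abs(action - true_goal) + noise * np.random.randn()

def likelihood(r, action, goal):
    expected = -abs(action - goal)
    return np.exp(-0.5 * ((r - expected) / noise) ** 2)

true_goal = 1.0                            # hidden from the agent
for _ in range(30):
    action = goals[np.argmax(belief)]      # act toward the most likely goal
    r = reward(action, true_goal)
    belief *= np.array([likelihood(r, action, g) for g in goals])
    belief /= belief.sum()

print("inferred goal:", goals[np.argmax(belief)], "belief:", np.round(belief, 3))
```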

10.48550/arxiv.2308.16471 preprint EN other-oa arXiv (Cornell University) 2023-01-01