Yohei Hayamizu

ORCID: 0000-0003-1642-4919
Research Areas
  • Evolutionary Algorithms and Applications
  • Metaheuristic Optimization Algorithms Research
  • Reinforcement Learning in Robotics
  • AI-based Problem Solving and Planning
  • Fuzzy Logic and Control Systems
  • Neural Networks and Applications
  • Viral Infectious Diseases and Gene Expression in Insects
  • Image Processing Techniques and Applications
  • Cognitive Science and Education Research
  • Social Robot Interaction and HRI
  • Virology and Viral Diseases
  • Robotic Locomotion and Control
  • Multimodal Machine Learning Applications
  • Genetics and Physical Performance
  • Geographic Information Systems Studies
  • Multi-Agent Systems and Negotiation
  • Language and cultural evolution
  • Semantic Web and Ontologies
  • Robotic Path Planning Algorithms
  • Speech and dialogue systems

Binghamton University
2022-2025

University of Electro-Communications
2021

Olympus (Japan)
1972

This paper focuses on the concept of "absumption," which restrains over-general rules by decomposing them into several concrete rules, proposes a novel absumption mechanism for continuous spaces that improves the conventional absumption to achieve high performance (e.g., acquired rewards) in a noisy environment, and integrates it into XCS with real-valued inputs (XCSR) to evaluate it through comparison with the conventional absumption. Concretely, the proposed mechanism is based on Overgenerality detection and Condition-clustering based specialization (called Absumption-OC)...

10.1145/3512290.3528841 article EN Proceedings of the Genetic and Evolutionary Computation Conference 2022-07-08
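
The entry above describes decomposing over-general rules into more specific ones for continuous input spaces. The sketch below illustrates only the general flavor of that idea, not the paper's Absumption-OC procedure: it clusters the states an over-general interval rule matched and builds a tighter interval condition around each cluster. The k-means routine, the cluster count, and the interval construction are illustrative assumptions.

    # Minimal sketch (not the paper's exact Absumption-OC): decompose an
    # over-general interval rule (as in XCSR) by clustering the states it
    # matched and building a tighter interval condition around each cluster.
    import numpy as np

    def kmeans(points, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = points[labels == j].mean(axis=0)
        return labels

    def decompose_overgeneral_rule(matched_states, k=2):
        """Replace one over-general rule with k more specific interval conditions."""
        labels = kmeans(matched_states, k)
        specialized = []
        for j in range(k):
            cluster = matched_states[labels == j]
            specialized.append((cluster.min(axis=0), cluster.max(axis=0)))
        return specialized

    # Toy usage: 2-D continuous states that one over-general rule covered.
    states = np.array([[0.10, 0.20], [0.15, 0.25], [0.80, 0.90], [0.85, 0.95]])
    for lo, hi in decompose_overgeneral_rule(states, k=2):
        print("specialized condition:", lo, "..", hi)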

This paper focuses on the impact of rule representation in Michigan-style Learning Fuzzy-Classifier Systems (LFCSs) on their classification performance. A well-designed representation of the rules in an LFCS is crucial for improving its performance. However, conventional representations frequently struggle to address problems with unknown data characteristics. To address this issue, this paper proposes a supervised LFCS (i.e., Fuzzy-UCS) with a self-adaptive rule-representation mechanism, entitled Adaptive-UCS. Adaptive-UCS incorporates a fuzzy indicator as a new rule parameter...

10.1145/3583131.3590360 article EN Proceedings of the Genetic and Evolutionary Computation Conference 2023-07-12
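
The Adaptive-UCS entry above hinges on a per-rule fuzzy indicator that changes how a condition is interpreted. Below is a minimal sketch of that switching idea, assuming an interval condition read as either a rectangular (crisp) or triangular (fuzzy) membership function and a product t-norm across dimensions; the function names and the triangular shape are my assumptions, not the paper's exact definitions.

    # Hedged sketch of a per-rule fuzzy indicator in a Fuzzy-UCS-style system:
    # the same interval condition is interpreted as a crisp (rectangular) or
    # fuzzy (triangular) membership function depending on the indicator.
    import numpy as np

    def membership(x, lower, upper, fuzzy_indicator):
        """Matching degree of a 1-D input against one rule condition."""
        if not fuzzy_indicator:                      # crisp: inside/outside
            return 1.0 if lower <= x <= upper else 0.0
        center = 0.5 * (lower + upper)               # fuzzy: triangular peak
        half = 0.5 * (upper - lower)
        return max(0.0, 1.0 - abs(x - center) / half) if half > 0 else 0.0

    def rule_matching_degree(x_vec, conditions, fuzzy_indicator):
        """Combine per-dimension degrees with a product t-norm."""
        return float(np.prod([membership(x, lo, hi, fuzzy_indicator)
                              for x, (lo, hi) in zip(x_vec, conditions)]))

    # Toy usage: the same condition behaves differently under each setting.
    cond = [(0.2, 0.8), (0.0, 0.4)]
    print(rule_matching_degree([0.5, 0.1], cond, fuzzy_indicator=False))  # crisp: 1.0
    print(rule_matching_degree([0.5, 0.1], cond, fuzzy_indicator=True))   # fuzzy: graded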

Reinforcement learning (RL) enables an agent to learn from trial-and-error experiences toward achieving long-term goals; automated planning aims to compute plans for accomplishing tasks using action knowledge. Despite their shared goal of completing complex tasks, the development of RL and automated planning has been largely isolated due to their different computational modalities. Focusing on improving RL agents' learning efficiency, we develop Guided Dyna-Q (GDQ) to enable RL agents to reason with action knowledge and avoid exploring less-relevant...

10.1609/icaps.v31i1.16011 article EN Proceedings of the International Conference on Automated Planning and Scheduling 2021-05-17
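
Guided Dyna-Q, as described above, lets an RL agent use action knowledge to focus its learning. The sketch below is a plain tabular Dyna-Q loop with one guided twist in that spirit: simulated planning updates are drawn only from states marked as task-relevant. The toy environment, the hand-given relevance set (standing in for planner-derived knowledge), and all hyperparameters are illustrative assumptions, not the paper's algorithm.

    # Minimal Dyna-Q sketch with a "guided" planning phase.
    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPS, N_PLANNING = 0.1, 0.95, 0.1, 10
    ACTIONS = ["left", "right"]
    Q = defaultdict(float)                 # Q[(state, action)]
    model = {}                             # model[(state, action)] = (reward, next_state)
    relevant_states = {0, 1, 2, 3, 4}      # assumed: states appearing on symbolic plans

    def step(state, action):               # toy chain environment, goal at state 4
        nxt = min(state + 1, 4) if action == "right" else max(state - 1, 0)
        return (1.0 if nxt == 4 else 0.0), nxt

    def greedy(state):
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    for episode in range(200):
        s = 0
        while s != 4:
            a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
            r, s2 = step(s, a)
            Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            model[(s, a)] = (r, s2)
            # Planning: replay only experiences whose state is task-relevant.
            guided = [k for k in model if k[0] in relevant_states]
            for (ps, pa) in random.sample(guided, min(N_PLANNING, len(guided))):
                pr, ps2 = model[(ps, pa)]
                Q[(ps, pa)] += ALPHA * (pr + GAMMA * max(Q[(ps2, b)] for b in ACTIONS) - Q[(ps, pa)])
            s = s2

    print("greedy action at state 0:", greedy(0))   # expected to converge to "right"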

This paper proposes ELPSDeCS (Encoding, Learning, "Plausible" Sampling, and Decoding Classifier System) by extending ELSDeCS to increase both the accuracy and interpretability of generated classifiers that match high-dimensional inputs such as images. Experimental results on a complex multi-class classification problem of handwritten numerals show that both measures are higher than those of ELSDeCS.

10.1109/cec45853.2021.9504733 article EN 2021 IEEE Congress on Evolutionary Computation (CEC) 2021-06-28
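
The ELPSDeCS entry above names four stages: Encoding, Learning, "Plausible" Sampling, and Decoding. The sketch below wires those stages together only to show the pipeline shape, using stand-in components (a PCA-style encoder/decoder and nearest-centroid "rules") that are assumptions for illustration, not the paper's classifier system.

    # Pipeline-shape sketch only, with stand-in components.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 64))            # toy "images" (8x8 flattened)
    y = (X[:, 0] > 0).astype(int)             # toy labels

    # Encoding: project high-dimensional inputs into a low-dimensional code.
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    encode = lambda x: (x - mean) @ Vt[:4].T
    decode = lambda z: z @ Vt[:4] + mean

    # Learning: fit simple per-class "rules" (centroids) in code space.
    Z = encode(X)
    centroids = {c: Z[y == c].mean(axis=0) for c in (0, 1)}
    classify = lambda z: min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))

    # Plausible sampling + Decoding: sample near a rule's centroid and decode
    # it back to input space so the learned rule can be inspected as an "image".
    sample = centroids[1] + 0.1 * rng.normal(size=4)
    prototype_image = decode(sample)
    print("predicted class of sample:", classify(sample),
          "| decoded prototype shape:", prototype_image.shape)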

Quadruped animals are capable of exhibiting a diverse range of locomotion gaits. While progress has been made in demonstrating such gaits on robots, current methods rely on motion priors, dynamics models, or other forms of extensive manual effort. People can use natural language to describe dance moves. Could one use a formal language to specify quadruped gaits? To this end, we aim to enable easy gait specification and efficient policy learning. Leveraging Reward Machines (RMs) for high-level gait specification over foot contacts, our...

10.1609/icaps.v34i1.31470 article EN Proceedings of the International Conference on Automated Planning and Scheduling 2024-05-30
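
The entry above specifies gaits with Reward Machines over foot contacts. The sketch below shows only what such a machine looks like, assuming a trot-like pattern in which diagonal foot pairs must alternate; the states, labels, and rewards are illustrative assumptions, not the paper's gait definitions.

    # Hedged sketch of a Reward Machine (RM) over foot-contact propositions.
    # Propositions: frozenset of feet in contact, from {"FL", "FR", "RL", "RR"}.
    DIAG_A = frozenset({"FL", "RR"})
    DIAG_B = frozenset({"FR", "RL"})

    # RM transition: (rm_state, observed_contact_set) -> (next_rm_state, reward)
    def rm_step(rm_state, contacts):
        if rm_state == "expect_A" and contacts == DIAG_A:
            return "expect_B", 1.0          # correct diagonal pair touched down
        if rm_state == "expect_B" and contacts == DIAG_B:
            return "expect_A", 1.0
        return rm_state, 0.0                # no progress, no reward

    # Toy rollout of labeled foot-contact observations from a simulator.
    trace = [DIAG_A, DIAG_A, DIAG_B, frozenset({"FL"}), DIAG_A, DIAG_B]
    state, total = "expect_A", 0.0
    for contacts in trace:
        state, r = rm_step(state, contacts)
        total += r
    print("RM state:", state, "| accumulated gait reward:", total)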

Vision-language models (VLMs) have been applied to robot task planning problems, where the robot receives a task in natural language and generates plans based on visual inputs. While current VLMs have demonstrated strong vision-language understanding capabilities, their performance is still far from satisfactory on planning tasks. At the same time, although classical task planners, such as PDDL-based ones, are strong at long-horizon tasks, they do not work well in open worlds where unforeseen situations are common. In this paper, we propose...

10.48550/arxiv.2406.17659 preprint EN arXiv (Cornell University) 2024-06-25
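
The entry above pairs a VLM's scene understanding with classical planning for long-horizon tasks. The sketch below shows only that division of labor under stated assumptions: vlm_extract_facts is a hypothetical stub (no real VLM API is called), and the predicates, operators, and breadth-first planner are stand-ins for a PDDL domain and solver.

    # Architecture-level sketch: perception grounds symbols, a planner searches.
    from collections import deque

    def vlm_extract_facts(image, task_text):
        """Stand-in for a VLM call that grounds the scene into predicates."""
        return frozenset({("on", "cup", "table"), ("handempty",)})

    OPERATORS = {
        ("pick", "cup"): (
            lambda s: ("on", "cup", "table") in s and ("handempty",) in s,
            lambda s: (s - {("on", "cup", "table"), ("handempty",)}) | {("holding", "cup")}),
        ("place", "cup", "shelf"): (
            lambda s: ("holding", "cup") in s,
            lambda s: (s - {("holding", "cup")}) | {("on", "cup", "shelf"), ("handempty",)}),
    }

    def plan(init, goal_fact):
        """Breadth-first search over operator applications (tiny stand-in planner)."""
        queue, seen = deque([(frozenset(init), [])]), {frozenset(init)}
        while queue:
            state, steps = queue.popleft()
            if goal_fact in state:
                return steps
            for name, (pre, eff) in OPERATORS.items():
                if pre(state):
                    nxt = frozenset(eff(state))
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, steps + [name]))
        return None

    facts = vlm_extract_facts(image=None, task_text="put the cup on the shelf")
    print(plan(facts, goal_fact=("on", "cup", "shelf")))
    # -> [('pick', 'cup'), ('place', 'cup', 'shelf')]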

This paper focuses on the rule representation in the Learning Classifier System (LCS) and proposes a flexible mechanism that can generate a variety of shapes for the matching area with one condition per classifier. Concretely, the proposed mechanism changes the shape according to the logical product, or multiplication, of the values of a probability distribution across multiple dimensions. As an implementation, this paper introduces the beta distribution into XCS for continuous space. Through intensive experiments on different types of continuous-space problems, the following implications have been...

10.1145/3512290.3528874 article EN Proceedings of the Genetic and Evolutionary Computation Conference 2022-07-08
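
The entry above builds classifier conditions from probability distributions, multiplying per-dimension values so one condition can take varied matching shapes. Below is a minimal sketch of that matching computation, assuming a beta density per dimension scaled so the degree peaks at 1; the scaling and parameter choices are my assumptions, not the paper's exact formulation.

    # Hedged sketch: matching degree from per-dimension beta densities.
    import math

    def beta_pdf(x, a, b):
        if not 0.0 < x < 1.0:
            return 0.0
        coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
        return coef * x ** (a - 1) * (1 - x) ** (b - 1)

    def beta_matching_degree(x_vec, params):
        """Product of per-dimension beta densities, scaled to peak at 1."""
        degree = 1.0
        for x, (a, b) in zip(x_vec, params):
            mode = (a - 1) / (a + b - 2) if a > 1 and b > 1 else 0.5
            peak = beta_pdf(mode, a, b)
            degree *= beta_pdf(x, a, b) / peak if peak > 0 else 0.0
        return degree

    # Toy usage: one condition, two input dimensions with different beta shapes.
    condition = [(2.0, 5.0), (4.0, 4.0)]        # (a, b) per dimension
    print(round(beta_matching_degree([0.2, 0.5], condition), 3))   # at the peaks
    print(round(beta_matching_degree([0.9, 0.5], condition), 3))   # off-peak in dim 1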

This paper focuses on the problem that it is hard for a Learning Classifier System (LCS) to guarantee generating a "correct" output (i.e., an action in LCS) as the dimension size of the data increases (which results in producing "incorrect" outputs), and proposes a method that can detect such incorrect outputs of the LCS. For this issue, we propose Misclassification Detection based on a Conditional Variational Auto-Encoder (MD/C), which detects and rejects incorrect LCS outputs through a comparison between the original data and the data restored by the CVAE (Conditional Variational Auto-Encoder) (with the LCS output as the condition of the CVAE). The...

10.1145/3449726.3459508 article EN Proceedings of the Genetic and Evolutionary Computation Conference Companion 2021-07-07
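
MD/C, as summarized above, rejects an LCS output when the input disagrees with a reconstruction conditioned on that output. The sketch below shows only that detection rule: cvae_reconstruct is a stand-in stub for a trained Conditional Variational Auto-Encoder, and the error threshold is hand-picked; both are assumptions for illustration.

    # Sketch of reconstruction-error-based rejection of a classifier's output.
    import numpy as np

    def cvae_reconstruct(x, predicted_action):
        """Stand-in for a trained CVAE decoder conditioned on the predicted action."""
        prototypes = {0: np.zeros_like(x), 1: np.ones_like(x)}   # assumed class prototypes
        return 0.5 * x + 0.5 * prototypes[predicted_action]

    def reject_prediction(x, predicted_action, threshold=0.1):
        """Reject the LCS output if the conditional reconstruction error is high."""
        x_hat = cvae_reconstruct(x, predicted_action)
        error = float(np.mean((x - x_hat) ** 2))
        return error > threshold, error

    # Toy usage: an input near the "1" prototype, reconstructed under each label.
    x = np.full(8, 0.9)
    print(reject_prediction(x, predicted_action=1))   # small error -> keep
    print(reject_prediction(x, predicted_action=0))   # large error -> reject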

Service robots need language capabilities for communicating with people, and navigation skills for beyond-proximity interaction in the real world. When a robot explores the world with people side by side, there is a compound problem of human-robot dialog and co-navigation. The human-robot team uses dialog to decide where to go, and their shared spatial awareness affects the dialog state. In this paper, we develop a framework that learns a joint policy of dialog and co-navigation toward efficiently and accurately completing tour-guide information delivery tasks. We...

10.1109/iros55552.2023.10341663 article EN 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2023-10-01
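
The co-navigation entry above treats dialog and navigation as one decision problem. The sketch below shows only the shape of such a joint state/action space; the fields, actions, and the trivial rule-based policy are purely illustrative assumptions, not the paper's learned framework.

    # Structure-only sketch of a joint dialog / co-navigation decision space.
    from dataclasses import dataclass

    @dataclass
    class JointState:
        robot_location: str          # spatial component shared with the visitor
        visitor_location: str
        goal_confidence: float       # how sure the robot is about where to guide

    NAV_ACTIONS = ["go_to_gallery", "go_to_cafe", "wait_for_visitor"]
    DIALOG_ACTIONS = ["ask_destination", "confirm_destination", "describe_exhibit"]

    def joint_policy(state: JointState) -> str:
        """Tiny stand-in for a learned joint policy over both action types."""
        if state.goal_confidence < 0.5:
            return "ask_destination"                 # talk when the goal is unclear
        if state.visitor_location != state.robot_location:
            return "wait_for_visitor"                # keep the team co-located
        return "go_to_gallery"                       # otherwise make spatial progress

    print(joint_policy(JointState("lobby", "lobby", goal_confidence=0.3)))
    print(joint_policy(JointState("lobby", "lobby", goal_confidence=0.9)))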

10.48550/arxiv.2004.11456 preprint EN cc-by-nc-nd arXiv (Cornell University) 2020-01-01