Reina Ishikawa

ORCID: 0000-0003-4792-6380
Research Areas
  • Music and Audio Processing
  • Video Surveillance and Tracking Methods
  • Infrastructure Maintenance and Monitoring
  • Human Pose and Action Recognition
  • Soft Robotics and Applications
  • Robot Manipulation and Learning
  • Advanced Image and Video Retrieval Techniques
  • Hand Gesture Recognition Systems
  • Speech and Audio Processing
  • Computer Graphics and Visualization Techniques
  • Advanced Vision and Imaging
  • Tactile and Sensory Interactions
  • Modular Robots and Swarm Intelligence
  • Advanced Image Processing Techniques
  • Generative Adversarial Networks and Image Synthesis

Keio University
2021-2023

Graz University of Technology
2023

Flinders University
2023

Omron (Japan)
2022

This study aimed to anticipate fractures of fragile food during robotic manipulation. Anticipation allows a robot to manipulate ingredients without irreversible failure. Food fracture models investigated in texture research explain the properties of objects well. However, they may not directly apply to manipulation due to the variance of physical properties even within the same ingredient. To this end, we developed a fracture-anticipation system with a tactile sensing module and a simple recurrent neural network. The key idea was...

10.1109/access.2022.3207491 article EN cc-by IEEE Access 2022-01-01
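The recurrent fracture-anticipation idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the taxel count, hidden size, and threshold are hypothetical, and the weights are random rather than trained on tactile data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 16 tactile taxels per timestep, 32 hidden units.
# In the actual system these weights would be learned from gripping trials.
n_in, n_hid = 16, 32
W_xh = rng.normal(0, 0.1, (n_hid, n_in))
W_hh = rng.normal(0, 0.1, (n_hid, n_hid))
b_h = np.zeros(n_hid)
w_out = rng.normal(0, 0.1, n_hid)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def fracture_probabilities(tactile_seq):
    """Run a simple Elman RNN over a (T, n_in) tactile sequence and
    return a per-timestep probability that the food is about to fracture."""
    h = np.zeros(n_hid)
    probs = []
    for x in tactile_seq:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        probs.append(sigmoid(w_out @ h))
    return np.array(probs)


# A gripper controller could stop closing once the anticipated fracture
# probability crosses a safety threshold (0.5 here, purely illustrative).
seq = rng.normal(0, 1, (50, n_in))
p = fracture_probabilities(seq)
stop_step = int(np.argmax(p > 0.5)) if (p > 0.5).any() else None
```

The point of anticipation, as opposed to detection, is that the stop decision fires before the irreversible fracture event rather than after it.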

Multi-layer images are a powerful scene representation for high-performance rendering in virtual/augmented reality (VR/AR). The major approach to generating such images is to use a deep neural network trained to encode colors and alpha values of depth certainty on each layer using registered multi-view images. A typical network is aimed at a limited number of nearest views. Therefore, local noises in the input from a user-navigated camera deteriorate the final quality and interfere with coherency over view transitions. We propose focal...

10.1109/tvcg.2023.3320248 article EN cc-by IEEE Transactions on Visualization and Computer Graphics 2023-10-02
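The core rendering step behind a multi-layer image representation is back-to-front alpha ("over") compositing of the depth-ordered layers. The following is a minimal sketch under the assumption of pre-multiplication-free layers with per-pixel alpha; the paper's actual pipeline (network-predicted layers, view warping) is not reproduced here.

```python
import numpy as np


def composite_layers(colors, alphas):
    """Back-to-front 'over' compositing of depth-ordered layers.

    colors: (L, H, W, 3) float array, layer 0 is the farthest layer.
    alphas: (L, H, W) float array of per-pixel opacities in [0, 1].
    Returns the composited (H, W, 3) image.
    """
    out = np.zeros(colors.shape[1:])
    for c, a in zip(colors, alphas):
        a = a[..., None]
        # Each nearer layer covers the accumulated result by its alpha.
        out = c * a + out * (1.0 - a)
    return out
```

Because compositing is a fixed, differentiable operation, a network can be trained end-to-end to emit the per-layer colors and alphas that make the composited result match ground-truth views.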

The contactless estimation of the weight of a container and the amount of its content manipulated by a person are key prerequisites for safe human-to-robot handovers. However, the opaqueness and transparencies of the container and of the content, and the variability of materials, shapes, and sizes, make this estimation difficult. In this paper, we present a range of methods and an open framework to benchmark acoustic and visual perception for the estimation of the capacity of a container, and the type, mass, and amount of its content. The framework includes a dataset, specific tasks, and performance measures. We conduct an in-depth comparative analysis that...

10.1109/access.2022.3166906 article EN cc-by IEEE Access 2022-01-01
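A benchmark of this kind needs performance measures for the estimation tasks. As a generic illustration (not the framework's official metric), a relative-error score for mass estimates could look like this:

```python
import numpy as np


def relative_mass_error(pred, true, eps=1e-6):
    """Mean relative absolute error between predicted and true masses.

    Illustrative only; the benchmark defines its own task-specific
    performance measures. `eps` guards against division by zero.
    """
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    return float(np.mean(np.abs(pred - true) / np.maximum(true, eps)))
```

A relative rather than absolute error keeps light and heavy containers comparable on one scale.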

The key to an accurate understanding of terrain is to extract informative features from multi-modal data obtained from different devices. Sensors such as RGB cameras, depth sensors, vibration sensors, and microphones are used to collect such data. Many studies have explored ways to use them, especially in the robotics field. Some papers have successfully introduced single-modal or multi-modal methods. However, in practice, robots can be faced with extreme conditions; microphones do not work well in crowded scenes, and cameras cannot capture terrains...

10.1109/access.2021.3075582 article EN cc-by IEEE Access 2021-01-01
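The robustness concern raised here, that any single modality can fail under extreme conditions, motivates fusion schemes that tolerate missing inputs. A minimal sketch of availability-masked late fusion (hypothetical feature names and dimensions, not the paper's architecture):

```python
import numpy as np


def fuse_features(feats, available):
    """Average per-modality feature vectors, skipping unavailable ones.

    feats: dict mapping modality name -> (D,) feature vector.
    available: dict mapping modality name -> bool (e.g., False for a
    microphone in a crowded scene). Raises if nothing is usable.
    """
    used = [f for name, f in feats.items() if available.get(name, False)]
    if not used:
        raise ValueError("no modality available for fusion")
    return np.mean(used, axis=0)
```

Training with randomly masked modalities (modality dropout) is one common way to make the downstream classifier tolerate such gaps at test time.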


10.1109/icpr48806.2021.9412638 article EN 2022 26th International Conference on Pattern Recognition (ICPR) 2021-01-10

Food picking is trivial for humans but not for robots, as foods are fragile. Presetting foods' physical properties does not help robots much due to the objects' inter- and intra-category diversity. A recent study proved that learning-based fracture anticipation with tactile sensors could overcome this problem; however, the method trains a model for each food to deal with the differences, and the tuning leads to an undesirable amount of food consumption. This paper proposes a novel framework for learning food-picking tasks without consuming foods....

10.1109/icra48891.2023.10160405 article EN 2023-05-29