- Tactile and Sensory Interactions
- Virtual Reality Applications and Impacts
- Interactive and Immersive Displays
- Augmented Reality Applications
- Gaze Tracking and Assistive Technology
- Teleoperation and Haptic Systems
- Hand Gesture Recognition Systems
- Action Observation and Synchronization
- Face Recognition and Analysis
- Robotics and Automated Systems
- Robotics and Sensor-Based Localization
- Advanced Vision and Imaging
- Visual Perception and Processing Mechanisms
- Emotion and Mood Recognition
- Advanced Optical Imaging Technologies
- Robot Manipulation and Learning
- Motor Control and Adaptation
- Face and Expression Recognition
- 3D Surveying and Cultural Heritage
- Advanced Image and Video Retrieval Techniques
- Video Surveillance and Tracking Methods
- Visual Attention and Saliency Detection
- Surgical Simulation and Training
- Computer Graphics and Visualization Techniques
- Human Motion and Animation
Teikyo University
2025
Keio University
2015-2024
Toyohashi University of Technology
2021
The University of Tokyo
2016-2019
University of South Australia
2015-2019
Kobe University
2009-2015
Korea Institute of Science and Technology
2015
Salzburg University of Applied Sciences
2015
Japan Science and Technology Agency
2015
Cambridge University Press
2010
Abstract Background New technologies can considerably improve preoperative planning, enhance the surgeon's skill, and simplify the approach to complex procedures. Augmented reality techniques, robot-assisted operations, and computer navigation tools will become increasingly important in surgery residents' education. Methods We obtained 3D reconstructions from simple spiral computed tomography (CT) slices using OsiriX, an open source processing software package dedicated to DICOM images. These images were...
Body ownership can be modulated through illusory visual-tactile integration or visual-motor synchronicity/contingency. Recently, it has been reported that ownership of an invisible body can be induced by visual-motor synchronicity from a first-person view. We aimed to test whether a similar illusion could be induced by an active method of synchronicity, and whether it could be experienced for a body in front of and facing away from the observer. Participants observed left and right white gloves and socks in front of them, at a distance of 2 m, in a virtual room through a head-mounted display. The gloves and socks were synchronized with the observers' actions. In...
Abstract Background We applied a new concept of “image overlay surgery” consisting of the integration of virtual reality (VR) and augmented reality (AR) technology, in which dynamic 3D images were superimposed on the patient's actual body surface, and evaluated it as a reference for surgical navigation in gastrointestinal, hepatobiliary, and pancreatic surgery. Methods We carried out seven surgeries, including three cholecystectomies, two gastrectomies, and two colectomies. A Macintosh DICOM workstation running OsiriX was used in the operating room for image...
We present the FuwaFuwa sensor module, a round, hand-size, wireless device for measuring the shape deformations of soft objects such as cushions and plush toys. It can be embedded in typical household objects without complex installation procedures and without spoiling the softness of the object, because it requires no physical connection. Six LEDs in the module emit IR light in six orthogonal directions, and corresponding photosensors measure the reflected light energy. One can easily convert almost any soft object into a touch-input device that can detect both touch position...
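As a rough illustration of the sensing principle above, the sketch below infers a coarse touch direction from six directional photosensor readings. The axis layout, baseline calibration, and noise threshold are assumptions for illustration, not details from the paper.

```python
import numpy as np

# Hypothetical sketch: six IR sensor pairs face +/-x, +/-y, +/-z.
# A touch compresses the soft material, raising reflected energy on
# the sensors nearest the press (axes and thresholds are assumed).
AXES = np.array([[1, 0, 0], [-1, 0, 0],
                 [0, 1, 0], [0, -1, 0],
                 [0, 0, 1], [0, 0, -1]], dtype=float)

def touch_direction(readings, baseline, threshold=0.05):
    """Return a unit vector toward the strongest deformation, or None."""
    delta = np.asarray(readings, float) - np.asarray(baseline, float)
    delta[delta < threshold] = 0.0           # ignore sensor noise
    if not delta.any():
        return None                          # no touch detected
    direction = AXES.T @ delta               # intensity-weighted sum
    return direction / np.linalg.norm(direction)

print(touch_direction([0.9, 0.1, 0.1, 0.1, 0.1, 0.1], [0.1] * 6))
```

A real module would also need per-object calibration, since reflectance depends on the material around the sensor.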
This paper presents a novel smart eyewear that uses embedded photo-reflective sensors and machine learning to recognize the wearer's facial expressions in daily life. We leverage the skin deformation that occurs when wearers change their expressions. With small photo-reflective sensors, we measure the proximity between the skin surface on the face and the eyewear frame, where 17 sensors are integrated. A Support Vector Machine (SVM) algorithm was applied to the sensor information. The sensors can cover various facial muscle movements and can be integrated into everyday glasses. The main...
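A minimal sketch of the classification step described above, assuming scikit-learn and entirely synthetic data: each sample is a 17-dimensional proximity vector (one value per photo-reflective sensor), and an SVM maps it to an expression label. The label set and the deformation patterns are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_SENSORS, LABELS = 17, ["neutral", "smile", "frown"]

def synth(label_idx, n=50):
    # Assumed pattern: each expression deforms a different sensor region.
    center = np.zeros(N_SENSORS)
    center[label_idx * 5:(label_idx * 5) + 5] = 1.0
    return center + 0.05 * rng.standard_normal((n, N_SENSORS))

X = np.vstack([synth(i) for i in range(len(LABELS))])
y = np.repeat(np.arange(len(LABELS)), 50)

clf = SVC(kernel="rbf").fit(X, y)     # train on proximity vectors
sample = synth(1, n=1)                # one unseen "smile" frame
print(LABELS[clf.predict(sample)[0]])
```

In practice the training data would come from a calibration session per wearer, since frame fit shifts the sensor-to-skin distances.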
Goals The predictors of malignant intraductal papillary mucinous neoplasm (IPMN) and invasive IPMN were investigated in this study to determine the optimal indicators of surgical resection for IPMN. Background Recently, international consensus guidelines have described standard management; however, the indications for resection of IPMN, especially branch duct IPMN, still remain controversial. Study Eighty-two patients with IPMN who underwent resection between April 1998 and January 2009 were retrospectively reviewed and examined with regard to their preoperative factors...
Abstract The supernumerary robotic limb system expands the motor function of human users by adding extra artificially designed limbs. It is important for us to embody the extra limb as if it were a part of one's own body and to maintain cognitive transparency, in which the cognitive load is suppressed. Embodiment studies have been conducted on the expansion of bodily functions through “substitution” and “extension”. However, there are few studies on the “addition” of body parts. In this study, we developed a supernumerary robotic limb system that operates in a virtual environment, and then evaluated whether...
We propose a novel display-based game environment using augmented reality technology with small robots. In this environment, the robots can be augmented by the display image according to their positions and postures. The augmentation of activity reinforces the fun of playing such games in the real world.
In this paper, we propose a novel technology called "CheekInput", used with a head-mounted display (HMD), that senses touch gestures by detecting skin deformation. We attached multiple photo-reflective sensors onto the bottom front frame of the HMD. Since these sensors measure the distance between the frame and the cheeks, our system is able to detect the deformation of the cheek when the skin surface is touched by fingers. Our system uses a Support Vector Machine to determine four gestures: pushing the face up, down, left, and right. We combined the 4 directional gestures for each cheek to extend to 16...
In this paper, we propose EarTouch, a new sensing technology for ear-based input that controls applications by slightly pulling the ear and detecting the deformation with an enhanced earphone device. It is envisioned that EarTouch will enable control of applications such as music players, navigation systems, and calendars through an "eyes-free" interface. During operation, the ear shape is measured by optical sensors. Deformation of the skin caused by touching the ear with fingers is recognized by attaching sensors to the earphone and measuring the distance from the inside of the ear. EarTouch supports...
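The deformation-sensing idea above can be sketched as a simple per-frame detector: a pull gesture registers once the sensor-to-skin distance stays above a calibrated rest value for several consecutive frames. All thresholds and units here are assumptions, not values from the paper.

```python
# Hypothetical sketch: detect an ear-pull gesture as a sustained
# increase in optical distance relative to a calibrated rest value.
def detect_pull(distances, rest, delta=0.8, min_frames=3):
    """distances: per-frame readings (mm, assumed); True once sustained."""
    run = 0
    for d in distances:
        run = run + 1 if d - rest > delta else 0   # count consecutive frames
        if run >= min_frames:
            return True
    return False

print(detect_pull([5.0, 5.1, 6.2, 6.3, 6.4], rest=5.0))
```

Requiring several consecutive frames is a cheap debounce against brief sensor noise; a full system would likely classify richer gesture shapes rather than a single threshold crossing.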
We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience. A virtual avatar can be a representative of the user in a virtual environment. However, synchronization of the avatar's facial expressions with those of the user is limited. The major problem of wearing an HMD is that a large portion of the user's face is occluded, making facial expression recognition difficult in HMD-based environments. To overcome this problem, we propose a mapping technology using retro-reflective photoelectric sensors....
We propose an occlusion compensation method for optical see-through head-mounted displays (OST-HMDs) equipped with a single-layer transmissive spatial light modulator (SLM), in particular, a liquid crystal display (LCD). Occlusion is an important depth cue for 3D perception, yet realizing it on OST-HMDs is particularly difficult due to the displays' semitransparent nature. A key component to support occlusion is an SLM, a device that can selectively interfere with light rays passing through it. For example, an LCD SLM can block or pass...
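To make the SLM's role concrete, here is a minimal sketch, under assumptions, of how a per-pixel blocking mask could be derived from the virtual image's alpha channel: wherever the virtual content is opaque enough, the LCD cell blocks environment light so the virtual pixel is not washed out.

```python
import numpy as np

# Sketch (assumed interface): alpha is an HxW opacity map in [0, 1]
# for the rendered virtual image; the returned mask is 1 where the
# LCD SLM should block incoming environment light.
def occlusion_mask(alpha, threshold=0.5):
    return (np.asarray(alpha) >= threshold).astype(np.uint8)

alpha = np.array([[0.0, 0.2],
                  [0.8, 1.0]])
print(occlusion_mask(alpha))
```

A real OST-HMD would additionally have to warp this mask to the SLM's optical plane and compensate for blur, since the LCD is not in focus at the depth of the virtual content.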
Cyber space enables us to "share" bodies whose movements are a consequence of input by several individuals. But whether and how our motor behavior is affected during body sharing remains unclear. Here we examined this issue in arm reaching performed with a shared avatar, whose movement was generated by averaging the movements of the two participants. We observed that participants exhibited improved reaction times with the shared avatar than alone. Moreover, the reach trajectory was straighter than that of either participant and correlated with their subjective...
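The averaging scheme described above can be sketched in a few lines: per frame, the shared avatar's hand position is the mean of the two participants' tracked positions. The equal weighting and the coordinate convention are assumptions for illustration.

```python
import numpy as np

# Sketch: shared avatar position as a weighted average of two
# participants' tracked hand positions (w = 0.5 gives the plain mean).
def shared_avatar(p1, p2, w=0.5):
    return w * np.asarray(p1, float) + (1 - w) * np.asarray(p2, float)

frame_a = [0.10, 0.40, 0.00]   # participant A's hand (metres, assumed)
frame_b = [0.30, 0.20, 0.00]   # participant B's hand
print(shared_avatar(frame_a, frame_b))
```

One consequence of averaging is that uncorrelated motor noise from the two participants partially cancels, which is a plausible mechanism for the straighter trajectories the abstract reports.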
IncreTable is a mixed reality tabletop game inspired by The Incredible Machine. Users can combine real and virtual pieces in order to solve puzzles in the game. Game actions include placing domino blocks with digital pens, controlling a car, modifying terrain through a depth camera interface, or using robots to topple over dominoes.
In this paper, we describe Empathy Glasses, a head-worn prototype designed to create an empathic connection between remote collaborators. The main novelty of our system is that it is the first to combine the following technologies together: (1) wearable facial expression capture hardware, (2) eye tracking, (3) a camera, and (4) a see-through head-mounted display, with a focus on collaboration. Using the system, a local user can send their information and view of the environment to a remote helper, who can send back visual cues on the local user's display to help...
Maki Sugimoto and Naoji Taniguchi are board members of Holoeyes Inc.
Time Follower's Vision is a mixed-reality-based visual presentation system that captures a robotic vehicle's size, position, and surrounding environment, allowing even inexperienced operators to easily control the vehicle. The technique produces a virtual image using mixed reality technology and presents the surrounding environment status to the operator. Therefore, operators can readily understand the vehicle's position, orientation, and situation. The authors implemented a prototype to evaluate its feasibility. This article is available with a short...
We developed a novel sensation interface device using galvanic vestibular stimulation (GVS). GVS alters your sense of balance. Our device can induce vection (a virtual sense of acceleration) synchronized with optic flow or musical rhythms. The device can also induce lateral walking towards the anode side while a human is walking.
In this paper, we describe a demonstration of a remote collaboration system using Empathy Glasses. Using our system, a local worker can share a view of their environment with a remote helper, as well as their gaze, facial expressions, and physiological signals. The remote user can send back visual cues via a see-through head-mounted display to help the local worker perform better on a real world task. The system also provides some indication of the local user's facial expression through expression tracking technology.
We propose a system consisting of a wearable device equipped with photo-reflective sensors arranged in an array. Hand gestures are recognized by measuring the skin deformation on the back of the hand. Since the muscles and bones on the back of the hand are linked to the fingers, finger movement can be clearly observed there. Skin deformation is measured using several photo-reflective sensors, which determine the distance between the sensors and the skin. The system estimates gestures with a support vector machine applied to the sensor data. The system simultaneously records hand shape with a Leap Motion sensor during the learning phase, so the user can freely register gestures....
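The learning phase described above pairs each sensor frame with a hand-shape label captured concurrently by the Leap Motion. The sketch below shows that pipeline with a nearest-centroid classifier standing in for the SVM; all names, the 2-sensor frames, and the labels are invented for illustration.

```python
import numpy as np

# Sketch: during registration, (sensor_frame, leap_label) pairs are
# collected; afterwards new frames are classified by nearest centroid.
def train(frames, labels):
    classes = sorted(set(labels))
    X, y = np.asarray(frames, float), np.asarray(labels)
    return {c: X[y == c].mean(axis=0) for c in classes}   # one centroid per gesture

def classify(model, frame):
    frame = np.asarray(frame, float)
    return min(model, key=lambda c: np.linalg.norm(model[c] - frame))

model = train([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]],
              ["fist", "fist", "open", "open"])
print(classify(model, [0.8, 0.2]))
```

Labeling the sensor data automatically from the Leap Motion is the key convenience here: the user only performs gestures, and no manual annotation is needed before training.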
Abstract Illusory body ownership can be induced in a part of the body or the full body by visual-motor synchronisation. A previous study indicated that an invisible body illusion can be induced by the synchronous movement of only the hands and feet. The difference between the two illusions has not been explained in detail because there is no method for separating them. To develop a method to do so, we scrambled and randomised the positions of the hands and feet and compared this stimulus with the normal layout stimulus while manipulating the synchronisation. In Experiment 1, participants observed the stimuli from a third-person...
This article explores direct touch and manipulation techniques for surface computing environments using a specialized haptic force feedback stylus, called ImpAct, which can dynamically change its effective length and is equipped with sensors to calculate its orientation in world coordinates. When a user pushes it against the screen, the physical stylus shrinks and a rendered projection of it is drawn inside the screen, giving the illusion that the stylus is submerged in the display device. Once users see the stylus immersed in the digital world below the screen, he or she can manipulate...
This study describes the relation between vection produced by optical flow and that created by galvanic vestibular stimulation. Vection is the illusion of self-motion most often experienced when an observer views a large screen display containing a translating pattern. Vection has only limited fidelity and duration unless it is reinforced by confirming information. Galvanic vestibular stimulation (GVS) can directly produce a sensation of vection.