- Computer Graphics and Visualization Techniques
- 3D Shape Modeling and Analysis
- Advanced Vision and Imaging
- Generative Adversarial Networks and Image Synthesis
- Advanced Materials and Mechanics
- Human Motion and Animation
- Face Recognition and Analysis
- Stellar, Planetary, and Galactic Studies
- Computational Geometry and Mesh Generation
- Interactive and Immersive Displays
- Astrophysics and Star Formation Studies
- Advanced Image and Video Retrieval Techniques
- 3D Surveying and Cultural Heritage
- Structural Analysis and Optimization
- Image Enhancement Techniques
- Human Pose and Action Recognition
- Modular Robots and Swarm Intelligence
- Manufacturing Process and Optimization
- Advanced Numerical Analysis Techniques
- Visual Attention and Saliency Detection
- Multimedia Communication and Technology
- Solar and Space Plasma Dynamics
- Medical Image Segmentation Techniques
- Atmospheric Ozone and Climate
- Robotics and Sensor-Based Localization
University of Tsukuba
2016-2025
Meijo University
2024
The University of Tokyo
2006-2023
Tsukuba University of Technology
2022
Gunma University
2002-2006
Kiryu University
2002
Mitsubishi Electric (Japan)
2002
Sendai National College of Technology
1992
Inferring a high dynamic range (HDR) image from a single low dynamic range (LDR) input is an ill-posed problem where we must compensate for lost data caused by under-/over-exposure and color quantization. To tackle this, we propose the first deep-learning-based approach for fully automatic inference using convolutional neural networks. Because a naive way of directly inferring a 32-bit HDR image from an 8-bit LDR image is intractable due to the difficulty of training, we take an indirect approach; the key idea of our method is to synthesize LDR images taken with...
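A minimal sketch of the indirect strategy described above: a small CNN predicts several differently exposed LDR images from one input, which are then merged into a 32-bit HDR image. The network layout, exposure values, gamma assumption, and weighting are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class BracketNet(nn.Module):
    """Predicts K differently exposed LDR images from one LDR input."""
    def __init__(self, k_exposures=3):
        super().__init__()
        self.k = k_exposures
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * k_exposures, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, ldr):                       # ldr: (B, 3, H, W) in [0, 1]
        out = self.net(ldr)                       # (B, 3K, H, W)
        return out.view(-1, self.k, 3, *ldr.shape[-2:])

def merge_hdr(brackets, exposures):
    """Debevec-style weighted merge of the predicted brackets into linear HDR."""
    weights = 1.0 - (2.0 * brackets - 1.0).abs()  # favor well-exposed pixels
    linear = brackets.pow(2.2)                    # assumed gamma-2.2 camera response
    ev = exposures.view(1, -1, 1, 1, 1)
    hdr = (weights * linear / ev).sum(1) / weights.sum(1).clamp_min(1e-6)
    return hdr                                    # (B, 3, H, W), 32-bit float

ldr = torch.rand(1, 3, 64, 64)                    # stand-in LDR input
brackets = BracketNet()(ldr)
hdr = merge_hdr(brackets, torch.tensor([0.25, 1.0, 4.0]))
```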
Relighting of human images has various applications in image synthesis. For relighting, we must infer albedo, shape, and illumination from a human portrait. Previous techniques rely on human faces for this inference, based on spherical harmonics (SH) lighting. However, because they often ignore light occlusion, inferred shapes are biased and relit images become unnaturally bright, particularly at hollowed regions such as armpits, crotches, or garment wrinkles. This paper introduces the first attempt to infer light occlusion in the SH...
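A worked example of second-order SH diffuse shading, with a per-pixel occlusion factor as a crude stand-in for the light occlusion that the paper models inside the SH formulation itself. The SH coefficients and occlusion value are made-up illustrative numbers.

```python
import numpy as np

# Cosine-lobe convolution factors for SH bands l = 0, 1, 2 (Ramamoorthi-Hanrahan).
A = np.array([np.pi, 2*np.pi/3, 2*np.pi/3, 2*np.pi/3,
              np.pi/4, np.pi/4, np.pi/4, np.pi/4, np.pi/4])

def sh_basis(n):
    """First 9 real SH basis functions evaluated at a unit normal n."""
    x, y, z = n
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z * z - 1),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def shade(albedo, normal, sh_light, occlusion=1.0):
    """Diffuse color = albedo * occlusion * SH irradiance at the normal."""
    irradiance = np.sum(A * sh_light * sh_basis(normal))
    return albedo * occlusion * max(irradiance, 0.0)

sh_light = np.array([0.8, 0.0, 0.4, 0.1, 0.0, 0.0, 0.1, 0.0, 0.0])
print(shade(albedo=0.6, normal=np.array([0.0, 0.0, 1.0]),
            sh_light=sh_light, occlusion=0.5))   # hollowed regions darken
```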
Abstract Edit propagation is a technique that can propagate various image edits (e.g., colorization and recoloring) performed via user strokes to the entire image based on similarity of features. In most previous work, users must manually determine the importance of each feature (e.g., color, coordinates, and textures) in accordance with their needs and target images. We focus on representation learning that automatically learns feature representations only from a single image instead of tuning existing features manually. To this end, this paper...
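A toy version of stroke-based edit propagation: each pixel takes a weighted blend of stroke colors, with weights given by similarity in a hand-built feature space (color and position). The hand-tuned per-feature scales below are exactly the kind of manual tuning the paper proposes to replace with learned representations.

```python
import numpy as np

def propagate(image, stroke_mask, stroke_colors, color_scale=4.0, pos_scale=1.0):
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys / h, xs / w], axis=-1)
    # Per-pixel feature: scaled color and normalized coordinates.
    feats = np.concatenate([color_scale * image, pos_scale * coords], axis=-1)
    src = feats[stroke_mask]                          # features under user strokes
    dists = ((feats[:, :, None, :] - src[None, None, :, :]) ** 2).sum(-1)
    weights = np.exp(-dists)                          # Gaussian affinity to strokes
    weights /= weights.sum(-1, keepdims=True)
    return weights @ stroke_colors                    # propagated edit per pixel

image = np.random.default_rng(3).random((48, 48, 3))  # stand-in image
stroke_mask = np.zeros((48, 48), dtype=bool)
stroke_mask[10, 10] = stroke_mask[40, 40] = True
stroke_colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # two recolor strokes
edited = propagate(image, stroke_mask, stroke_colors)          # (48, 48, 3)
```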
We present new observations of the active star-formation region NGC 1333 in the Perseus molecular cloud complex from the James Clerk Maxwell Telescope B-Fields In Star-forming Region Observations (BISTRO) survey with the POL-2 instrument. The BISTRO data cover the entire region (~1.5 pc x 2 pc) at 0.02 pc resolution and spatially resolve the polarized emission of individual filamentary structures for the first time. The inferred magnetic field structure is complex as a whole, with each filament aligned at different position angles relative to...
We report 850~$\mu$m dust polarization observations of a low-mass ($\sim$12 $M_{\odot}$) starless core, Ophiuchus C, in the $\rho$ Ophiuchus cloud, made with the POL-2 instrument on the James Clerk Maxwell Telescope (JCMT) as part of the JCMT B-fields In STar-forming Region Observations (BISTRO) survey. We detect an ordered magnetic field projected on the plane of the sky in the starless core. The magnetic field across the $\sim$0.1~pc core shows a predominant northeast-southwest orientation centering between $\sim$40$^\circ$ to $\sim$100$^\circ$, indicating that the field is well...
Automatic generation of a high-quality video from a single image remains a challenging task despite the recent advances in deep generative models. This paper proposes a method that can create a high-resolution, long-term animation from a single landscape image using convolutional neural networks (CNNs), where we mainly focus on skies and waters. Our key observation is that the motion (e.g., moving clouds) and appearance (e.g., time-varying colors of the sky) in natural scenes have different time scales. We thus learn them separately and predict them with...
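A sketch of the "motion and appearance on different time scales" idea: a per-frame flow field warps the previous frame (fast motion such as clouds), while the mean color drifts slowly toward a predicted target (slow appearance change such as a darkening sky). Both tiny networks and all constants are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

motion_net = nn.Conv2d(3, 2, 3, padding=1)    # predicts a 2D flow per pixel
appearance_net = nn.Linear(3, 3)              # predicts a target mean color

def warp(image, flow):
    """Backward-warp an image by a flow field in normalized coordinates."""
    _, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0) + flow.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid, align_corners=True)

frame = torch.rand(1, 3, 64, 64)              # stand-in landscape image
frames = []
with torch.no_grad():
    for t in range(30):
        flow = 0.01 * torch.tanh(motion_net(frame))        # fast: every frame
        frame = warp(frame, flow)
        target = appearance_net(frame.mean(dim=(2, 3)))    # slow: small steps
        frame = frame + 0.02 * (target.view(1, 3, 1, 1)
                                - frame.mean(dim=(2, 3), keepdim=True))
        frames.append(frame.clamp(0, 1))
```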
Abstract Metaballs are implicit surfaces widely used to model curved objects, represented by the isosurface of a density field defined by a set of points. Recently, the results of particle-based simulations have often been visualized using a large number of metaballs; however, such visualizations incur high rendering costs. In this paper we propose a fast rendering technique for metaballs on the GPU. Instead of polygonization, the isosurface is directly evaluated in a per-pixel manner. For this evaluation, all metaballs contributing to the isosurface need to be extracted along each...
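A CPU sketch of the per-pixel evaluation idea using fixed-step ray marching; the paper's GPU technique instead extracts only the metaballs contributing along each ray and evaluates the field there. The falloff kernel, threshold, and camera are assumptions.

```python
import numpy as np

centers = np.array([[0.0, 0.0, 3.0], [0.6, 0.2, 3.2], [-0.5, -0.3, 2.8]])
radii   = np.array([0.7, 0.5, 0.6])
THRESHOLD = 0.4

def density(p):
    """Sum of Wyvill-style falloff kernels over all metaballs."""
    d2 = np.clip(np.sum((centers - p) ** 2, axis=1) / radii ** 2, 0.0, 1.0)
    return np.sum((1.0 - d2) ** 3)

W = H = 64
depth = np.full((H, W), np.inf)
for j in range(H):
    for i in range(W):
        # Pinhole camera at the origin looking down +z.
        x = (i + 0.5) / W * 2.0 - 1.0
        y = (j + 0.5) / H * 2.0 - 1.0
        ray = np.array([x, y, 2.0])
        ray /= np.linalg.norm(ray)
        for t in np.arange(1.5, 5.0, 0.02):        # fixed-step march
            if density(t * ray) >= THRESHOLD:      # crossed the isosurface
                depth[j, i] = t
                break
print("hit pixels:", np.isfinite(depth).sum())
```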
We study the HII regions associated with the NGC 6334 molecular cloud observed in the sub-millimeter and taken as part of the B-fields In STar-forming Region Observations (BISTRO) Survey. In particular, we investigate the polarization patterns and magnetic field morphologies around these regions. Through polarization pattern and pressure calculation analyses, several bubbles indicate that the gas and magnetic field lines have been pushed away from the bubble, toward an almost tangential (to the bubble) field morphology. In the densest part of NGC 6334, where the field morphology is similar to...
Age transformation of facial images is a technique that edits age-related appearances of a person while preserving the identity. Existing deep learning-based methods can reproduce natural age transformations; however, they reproduce only averaged transitions and fail to account for individual-specific appearances influenced by their life histories. In this paper, we propose the first diffusion model-based method for personalized age transformation. Our model takes a facial image and a target age as input and generates an age-edited face image as output. To...
Abstract Relighting of human images enables post-photography editing of lighting effects in portraits. The current mainstream approach uses neural networks to approximate lighting effects without explicitly accounting for the principle of physical shading. As a result, it often has difficulty representing high-frequency shadows and shadings. In this paper, we propose a two-stage relighting method that can reproduce physically-based shading from low to high frequencies. The key idea is to approximate an environment light source with a set of fixed...
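A toy illustration of approximating an environment light with a small, fixed set of directional lights, so that shading (and, in a full method, shadows) can be computed per light and summed. The light directions, the random environment map, and the Lambertian-only shading are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

def fixed_light_directions(n=16):
    """Roughly uniform fixed directions on the upper hemisphere."""
    dirs = rng.normal(size=(n, 3))
    dirs[:, 2] = np.abs(dirs[:, 2])
    return dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

def light_intensities(env_map, dirs):
    """Bin environment-map texels to the nearest fixed light direction."""
    h, w, _ = env_map.shape
    theta = (np.arange(h) + 0.5) / h * np.pi
    phi = (np.arange(w) + 0.5) / w * 2 * np.pi
    t, p = np.meshgrid(theta, phi, indexing="ij")
    texel_dirs = np.stack([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p),
                           np.cos(t)], axis=-1).reshape(-1, 3)
    nearest = np.argmax(texel_dirs @ dirs.T, axis=1)
    colors = env_map.reshape(-1, 3)
    return np.stack([colors[nearest == i].mean(axis=0) if np.any(nearest == i)
                     else np.zeros(3) for i in range(len(dirs))])

def shade(albedo, normal, dirs, intensities, visibility):
    """Sum Lambertian contributions of the fixed lights, masked by visibility."""
    cosines = np.clip(dirs @ normal, 0.0, None)
    return albedo * np.sum(visibility[:, None] * intensities
                           * cosines[:, None], axis=0)

dirs = fixed_light_directions()
intensities = light_intensities(rng.random((32, 64, 3)), dirs)
visibility = (rng.random(len(dirs)) > 0.3).astype(float)   # stand-in shadow test
print(shade(np.array([0.8, 0.6, 0.5]), np.array([0.0, 0.0, 1.0]),
            dirs, intensities, visibility))
```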
Abstract Recent advances in physically-based simulations have made it possible to generate realistic animations. However, in the case of solid-fluid coupling, wetting effects have rarely been noticed despite their visual importance, especially in interactions between fluids and granular materials. This paper presents a simple particle-based method to model the physical mechanism of wetness propagating through granular materials; fluid particles are absorbed in the spaces between granular particles, and these wetted granular particles then stick together due to liquid bridges that...
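A minimal sketch of wetness propagating through a granular particle set: nearby fluid particles raise a grain's wetness, and wetness then diffuses between neighboring grains; wetted pairs would additionally receive a cohesion force standing in for liquid bridges. All constants and the brute-force neighbor search are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
grains = rng.uniform(0.0, 1.0, size=(200, 3))     # granular particle positions
fluid  = rng.uniform(0.0, 0.2, size=(30, 3))      # fluid particle positions
wetness = np.zeros(len(grains))

H = 0.1          # interaction radius
ABSORB = 0.5     # absorption rate from fluid into grains
DIFFUSE = 0.2    # wetness diffusion rate between grains

for step in range(50):
    # Absorption: fluid particles wet the grains within radius H.
    for f in fluid:
        near = np.linalg.norm(grains - f, axis=1) < H
        wetness[near] = np.minimum(1.0, wetness[near] + ABSORB * 0.01)
    # Propagation: wetness diffuses toward neighboring, drier grains.
    new_wetness = wetness.copy()
    for i, g in enumerate(grains):
        near = np.linalg.norm(grains - g, axis=1) < H
        new_wetness[i] += DIFFUSE * 0.01 * (wetness[near].mean() - wetness[i])
    wetness = new_wetness

# Cohesion (liquid bridges) could then scale with min(wetness_i, wetness_j)
# for each neighboring grain pair (i, j).
print("mean wetness:", wetness.mean())
```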
Abstract Facial makeup enriches the beauty of not only real humans but also virtual characters; therefore, makeup for 3D facial models is highly in demand in productions. However, painting makeup directly on 3D faces and capturing real-world makeup are costly, and extracting makeup from 2D images often struggles with shading effects and occlusions. This paper presents the first method for extracting makeup from a single portrait. Our method consists of the following three steps. First, we exploit the strong prior of 3D morphable models via regression-based inverse rendering to extract coarse...
Abstract Origami has received much attention in geometry, mathematics, and engineering due to its potential to construct 3D developable shapes from designed crease patterns on a flat sheet. Waterbomb tessellation, which is one type of traditional origami consisting of a set of waterbomb bases, has been used to create geometrically appealing shapes and has been widely studied. In this paper, we propose a method for approximating target surfaces, which are parametric surfaces of varying or constant curvatures, using generalized...
Abstract We present the 850 μm polarization observations toward the IC 5146 filamentary cloud taken using the Submillimetre Common-User Bolometer Array 2 (SCUBA-2) and its associated polarimeter (POL-2), mounted on the James Clerk Maxwell Telescope, as part of the B-fields In STar forming Regions Observations (BISTRO) survey. This work is aimed at revealing the magnetic field morphology within a core-scale (≲1.0 pc) hub-filament structure (HFS) located at the end of a parsec-scale filament. To investigate whether the observed polarization traces the magnetic field in the HFS,...
We present the POL-2 850 $\mu$m linear polarization map of the Barnard 1 clump in the Perseus molecular cloud complex from the B-fields In STar-forming Region Observations (BISTRO) survey at the James Clerk Maxwell Telescope. We find a trend of decreasing polarization fraction as a function of total intensity, which we link to depolarization effects towards higher density regions of the cloud. We then use the polarization data to infer the plane-of-sky orientation of the large-scale magnetic field in Barnard 1. This field runs North-South across most of the cloud, with the exception of B1-c where it...
Abstract The modern supervised approaches for human image relighting rely on training data generated from 3D human models. However, such datasets are often small (e.g., Light Stage data with a small number of individuals) or limited to diffuse materials (e.g., commercially scanned human models). Thus, these techniques suffer from poor generalization capability and a synthetic-to-real domain gap. In this paper, we propose a two-stage method for single-image human relighting with domain adaptation. In the first stage, we train a neural network for diffuse-only relighting. In the second...
Abstract There is considerable recent progress in hair simulations, driven by the high demands of computer animated movies. However, capturing the complex interactions between hair and water is still relatively in its infancy. Such interactions are best modeled as those between water and an anisotropic permeable medium, as water can flow into and out of the hair volume biased in the fiber direction. Modeling the interaction is further challenged when the hair is allowed to move. In this paper, we introduce a simulation model that reproduces the dynamic interaction of hair and water as an anisotropic porous material. We utilize an Eulerian approach...
Abstract Animations of hair dynamics greatly enrich the visual attractiveness of human characters. Traditional simulation techniques handle hair as clumps or a continuum for efficiency; however, the quality is limited because they cannot represent the fine-scale motion of individual hair strands. Although a recent mass-spring approach tackled the problem of simulating every strand of hair, it required a complicated setting of springs and suffered from high computational cost. In this paper, we base hair animation on Lattice Shape Matching...
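A minimal shape-matching step for overlapping groups of particles on one strand, in the spirit of lattice/chain shape matching (Mueller et al.'s shape matching applied per group). The group layout, stiffness, and the lack of a full integration loop are illustrative assumptions, not the paper's chain-specific formulation.

```python
import numpy as np

def shape_match(rest, curr, stiffness=0.8):
    """Pull current positions toward a rigidly transformed rest shape."""
    c_rest, c_curr = rest.mean(axis=0), curr.mean(axis=0)
    q = rest - c_rest                        # rest shape, centered
    p = curr - c_curr                        # deformed shape, centered
    A = p.T @ q                              # covariance-like matrix A_pq
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt                               # optimal rotation (polar part)
    if np.linalg.det(R) < 0:                 # avoid reflections
        U[:, -1] *= -1.0
        R = U @ Vt
    goals = c_curr + q @ R.T                 # rigidly transformed rest shape
    return curr + stiffness * (goals - curr)

# A straight strand of 10 particles, randomly perturbed, then pulled back
# toward its rest shape group by group.
rest = np.stack([np.linspace(0, 1, 10), np.zeros(10), np.zeros(10)], axis=1)
curr = rest + np.random.default_rng(1).normal(scale=0.05, size=rest.shape)
for group in [slice(0, 4), slice(3, 7), slice(6, 10)]:   # overlapping groups
    curr[group] = shape_match(rest[group], curr[group])
```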
Abstract In this paper, we propose an interactive technique for constructing a 3D scene via sparse user inputs. We represent a 3D scene in the form of a Layered Depth Image (LDI), which is composed of a foreground layer and a background layer, and each layer has a corresponding texture and depth map. Given user-specified sparse inputs, the depth maps are computed based on superpixels using interpolation with geodesic-distance weighting and an optimization framework. This computation is done immediately, which allows the user to edit the LDI interactively. Additionally, our...
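A rough sketch of interpolating sparse user depth inputs with geodesic-distance weighting: the distance between pixels accumulates color differences (Dijkstra on the pixel grid), and each pixel's depth is a weighted average of the scribbled depths. The grid size, edge cost, and weighting kernel are assumptions; the paper works on superpixels and adds an optimization step.

```python
import heapq
import numpy as np

def geodesic_distance(image, seed):
    """Color-weighted shortest-path distance from one seed pixel."""
    h, w, _ = image.shape
    dist = np.full((h, w), np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                cost = d + np.linalg.norm(image[ny, nx] - image[y, x]) + 1e-3
                if cost < dist[ny, nx]:
                    dist[ny, nx] = cost
                    heapq.heappush(heap, (cost, (ny, nx)))
    return dist

image = np.random.default_rng(2).random((32, 32, 3))   # stand-in RGB image
scribbles = {(5, 5): 1.0, (25, 25): 4.0}               # pixel -> user depth

weights, depth = 0.0, 0.0
for pixel, d_user in scribbles.items():
    w = np.exp(-geodesic_distance(image, pixel))       # geodesic weighting
    weights, depth = weights + w, depth + w * d_user
depth_map = depth / weights                            # dense depth for one layer
```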