Youngjung Kim

ORCID: 0000-0002-4288-0160
Research Areas
  • Advanced Vision and Imaging
  • Advanced Image Processing Techniques
  • Image Processing Techniques and Applications
  • Image and Signal Denoising Methods
  • Image Enhancement Techniques
  • Agriculture, Soil, Plant Science
  • Advanced Image Fusion Techniques
  • Optical measurement and interference techniques
  • Solid State Laser Technologies
  • Engineering Applied Research
  • Advanced SAR Imaging Techniques
  • Sparse and Compressive Sensing Techniques
  • Laser Material Processing Techniques
  • Photoacoustic and Ultrasonic Imaging
  • Generative Adversarial Networks and Image Synthesis
  • Synthetic Aperture Radar (SAR) Applications and Techniques
  • Laser Design and Applications
  • Biodiesel Production and Applications
  • Video Analysis and Summarization
  • Soil Mechanics and Vehicle Dynamics
  • Multimodal Machine Learning Applications
  • Autonomous Vehicle Technology and Safety
  • Turfgrass Adaptation and Management
  • Advanced Image and Video Retrieval Techniques
  • Remote Sensing and LiDAR Applications

Agency for Defense Development
2018-2024

Ben-Gurion University of the Negev
2020

York University
2020

Hong Kong Polytechnic University
2019

Queen Mary University of London
2019

Yonsei University
2008-2018

Bangladesh Rice Research Institute
2016

Yeungnam University
2010-2013

National Academy of Agricultural Science
2010-2012

Korea Polytechnic University
1999-2007

This paper reviews the second challenge on spectral reconstruction from RGB images, i.e., the recovery of whole-scene hyperspectral (HS) information from a 3-channel RGB image. As in the previous challenge, two tracks were provided: (i) a "Clean" track, where HS images are estimated from noise-free RGBs, themselves calculated numerically using the ground-truth HS images and supplied spectral sensitivity functions, and (ii) a "Real World" track, simulating capture by an uncalibrated and unknown camera, where the HS images are recovered from noisy JPEG-compressed RGB images. A new,...

10.1109/cvprw50498.2020.00231 article EN 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2020-06-01

Convolutional neural networks (CNNs) have facilitated substantial progress on various problems in computer vision and image processing. However, applying them to image fusion has remained challenging due to the lack of labelled data for supervised learning. This paper introduces a deep image fusion network (DIF-Net), an unsupervised deep learning framework for image fusion. The DIF-Net parameterizes the entire process of image fusion, comprising feature extraction, fusion, and reconstruction, using a CNN. Its purpose is to generate an output which is identical...

10.1109/tip.2020.2966075 article EN IEEE Transactions on Image Processing 2020-01-01

This paper reviews the 3rd NTIRE challenge on single-image super-resolution (restoration of rich details in a low-resolution image) with a focus on the proposed solutions and results. The challenge had 1 track, which was aimed at the real-world single image super-resolution problem with an unknown scaling factor. Participants were mapping low-resolution images captured by a DSLR camera with a shorter focal length to their high-resolution counterparts captured at a longer focal length. With this challenge, we introduced a novel real-world super-resolution dataset (RealSR). The track had 403 registered participants, and 36 teams...

10.1109/cvprw.2019.00274 article EN 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2019-06-01

Recent works on machine learning have greatly advanced the accuracy of single image depth estimation. However, the resulting depth images are still over-smoothed and perceptually unsatisfying. This paper casts depth prediction from a single image as a parametric learning problem. Specifically, we propose a deep variational model that effectively integrates heterogeneous predictions from two convolutional neural networks (CNNs), named global and local networks. They have contrasting network architectures and are designed to capture depth information with...

10.1109/tip.2018.2836318 article EN IEEE Transactions on Image Processing 2018-05-15

This paper reviews the NTIRE 2020 challenge on real image denoising with a focus on the newly introduced dataset, the proposed methods, and their results. The challenge is a new version of the previous NTIRE 2019 challenge that was based on the SIDD benchmark. This challenge is based on newly collected validation and testing datasets, and hence is named SIDD+. The challenge has two tracks for quantitatively evaluating the performance of denoisers in (1) the Bayer-pattern rawRGB and (2) the standard RGB (sRGB) color spaces. Each track had ~250 registered participants. A total of 22 teams, proposing 24 methods, competed in the final phase...

10.1109/cvprw50498.2020.00256 article EN 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2020-06-01

Depth completion has been widely studied to predict a dense depth image from its sparse measurement and a single color image. However, most state-of-the-art methods rely on static convolutional neural networks (CNNs), which are not flexible enough for capturing the dynamic nature of input contexts. In this paper, we propose GuideFormer, a fully transformer-based architecture for dense depth completion. We first process sparse depth and color guidance images with separate transformer branches to extract hierarchical and complementary token...

10.1109/cvpr52688.2022.00615 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022-06-01

Most variational formulations for structure-texture image decomposition force structure images to have a small norm in some functional spaces and share a common notion of edges, i.e., large gradients or intensity differences. However, such a definition makes it difficult to distinguish structure edges from oscillations that have fine spatial scale but high contrast. In this paper, we introduce a new model by learning a deep image prior without explicit training data. An alternating direction method of multipliers (ADMM)...

10.1109/tip.2018.2889531 article EN IEEE Transactions on Image Processing 2018-12-24

We present a novel unsupervised framework for instance-level image-to-image translation. Although recent advances have been made by incorporating additional object annotations, existing methods often fail to handle images with multiple disparate objects. The main cause is that, during inference, they apply a global style to the whole image and do not consider the large style discrepancy between instances and the background, or among instances. To address this problem, we propose a class-aware memory network that...

10.1109/cvpr46437.2021.00649 article EN 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021-06-01

Inferring scene depth from a single monocular image is a highly ill-posed problem in computer vision. This paper presents a new gradient-domain approach, called depth analogy, that makes use of analogy as a means for synthesizing a target depth field when a collection of RGB-D image pairs is given as training data. Specifically, the proposed method employs a non-parametric learning process that creates an analogous depth field by sampling reliable depth gradients using visual correspondence established on the training pairs. Unlike existing data-driven...

10.1109/tip.2015.2495261 article EN IEEE Transactions on Image Processing 2015-10-27

This paper describes a method for high-quality depth super-resolution. The standard formulations of image-guided depth upsampling, using simple joint filtering or quadratic optimization, lead to texture copying and depth bleeding artifacts. These artifacts are caused by the inherent discrepancy of structures in data from different sensors. Although there exists some correlation between depth and intensity discontinuities, they differ in distribution and formation. To tackle this problem, we formulate an optimization model with a nonconvex...

10.1109/tip.2016.2601262 article EN IEEE Transactions on Image Processing 2016-08-18

Recent works on machine learning have greatly advanced the accuracy of depth estimation from a single image. However, the resulting depth images are still visually unsatisfactory, often exhibiting poor boundary localization and spurious regions. In this paper, we formulate the depth estimation problem as a deep adversarial learning framework. A two-stage convolutional network is designed as a generator to sequentially predict global and local structures. At the heart of our approach is a training criterion based on a discriminator which attempts to distinguish...

10.1109/icip.2017.8296575 article EN 2017 IEEE International Conference on Image Processing (ICIP) 2017-09-01

Regularization-based image restoration has remained an active research topic in image processing and computer vision. It often leverages a guidance signal captured in different fields as an additional cue. In this work, we present a general framework for image restoration, called deeply aggregated alternating minimization (DeepAM). We propose to train a deep neural network to advance two of the steps in the conventional AM algorithm: proximal mapping and β-continuation. Both steps are learned from a large dataset in an end-to-end manner....

10.1109/cvpr.2017.38 article EN 2017-07-01

Edge-preserving smoothing (EPS) can be formulated as minimizing an objective function that consists of data and regularization terms. At the price of high computational cost, this global EPS approach is more robust and versatile than a local one that typically takes the form of weighted averaging. In this paper, we introduce an efficient decomposition-based method for global EPS that minimizes an objective with L2 data and (possibly non-smooth and non-convex) regularization terms in linear time. Different from previous decomposition-based methods, which require solving a large...

10.1109/tip.2017.2710621 article EN IEEE Transactions on Image Processing 2017-06-01

Current self-supervised methods for monocular depth estimation are largely based on deeply nested convolutional networks that leverage stereo image pairs or sequences during the training phase. However, they often exhibit inaccurate results around occluded regions and depth boundaries. In this paper, we present a simple yet effective approach for monocular depth estimation using stereo image pairs. The study aims to propose a student-teacher strategy in which a shallow student network is trained with the auxiliary information obtained from a deeper...

10.48550/arxiv.1904.10230 preprint EN other-oa arXiv (Cornell University) 2019-01-01

This manual is intended to provide a detailed description of the DIML/CVL RGB-D dataset. The dataset comprises 2M color images and their corresponding depth maps from a great variety of natural indoor and outdoor scenes. The indoor dataset was constructed using the Microsoft Kinect v2, while the outdoor dataset was built using stereo cameras (ZED stereo camera and a built-in stereo camera). Table I summarizes the details of our dataset, including acquisition, processing, format, and toolbox. Refer to Sections II and III for more details.

10.48550/arxiv.2110.11590 preprint EN other-oa arXiv (Cornell University) 2021-01-01

The advent of deep learning has made a significant advance in ship detection in synthetic aperture radar (SAR) images. However, it is still challenging since the amount of labeled SAR samples for training is not sufficient. Moreover, SAR images are corrupted by speckle noise, making them complex and difficult to interpret even for human experts. In this letter, we propose a novel framework that leverages label-rich electro-optical (EO) images to obtain more plentiful feature representations, and delicately addresses the speckle noise. To this end,...

10.1109/lgrs.2021.3115498 article EN IEEE Geoscience and Remote Sensing Letters 2021-10-07

This paper presents the robust shape optimization of electromechanical devices considering uncertainties in design variables, based on a numerical optimization technique and the finite element method (FEM). In the formulation of the robust optimization, a multiobjective function is composed of the mean and standard deviation of the original objective function, while the constraints are supplemented by adding a penalty term to the original constraints. Sequential quadratic programming (SQP) is applied to solve the problem. The optimization results considering manufacturing errors are compared with those...

10.1109/20.767356 article EN IEEE Transactions on Magnetics 1999-05-01

Detecting objects in synthetic aperture radar (SAR) imagery has received much attention in recent years, since SAR can operate in all-weather and day-and-night conditions. Owing to the prosperous development of convolutional neural networks (CNNs), many previous methodologies have been proposed for SAR object detection. In spite of this advance, existing detection networks still have limitations in boosting performance because of the inherently noisy characteristics of SAR imagery; hence, a separate preprocessing step such as denoising...

10.3390/app11125569 article EN cc-by Applied Sciences 2021-06-16

Training deep networks commonly follows the supervised learning paradigm, which requires large-scale semantically-labeled data. The construction of such datasets is one of the major challenges when approaching Advanced Driver Assistance Systems (ADAS), due to the expense of human annotation. In this paper, we explore whether unsupervised stereo-based cues can be used to learn high-level semantics for monocular road detection. Specifically, we estimate the drivable space and surface normals from stereo images, which are...

10.1109/icme.2018.8486472 article EN 2018 IEEE International Conference on Multimedia and Expo (ICME) 2018-07-01

We present a multi-scale deep convolutional neural network (CNN) for the task of automatic 2D-to-3D conversion. Traditional methods, which make a virtual view from a reference view, consist of separate stages, i.e., depth (or disparity) estimation from the image and depth image-based rendering (DIBR) with the estimated depth. In contrast, we reformulate view synthesis as an image reconstruction problem with a spatial transformer module, learning directly from stereo pairs in a unified CNN framework without ground-truth depth supervision. We further propose...

10.1109/icip.2017.8296377 article EN 2017 IEEE International Conference on Image Processing (ICIP) 2017-09-01