- Algorithms and Data Compression
- Wireless Communication Security Techniques
- Cellular Automata and Applications
- DNA and Biological Computing
- Advanced biosensing and bioanalysis techniques
- Advanced Neural Network Applications
- Genomics and Phylogenetic Studies
- Error Correcting Code Techniques
- Distributed Sensor Networks and Detection Algorithms
- Neural Networks and Applications
- Adversarial Robustness in Machine Learning
- Advanced Data Compression Techniques
- Generative Adversarial Networks and Image Synthesis
- Advanced Bandit Algorithms Research
- Image Enhancement Techniques
- Model Reduction and Neural Networks
- Statistical Methods and Inference
- Privacy-Preserving Technologies in Data
- Domain Adaptation and Few-Shot Learning
- Sparse and Compressive Sensing Techniques
- Statistical Mechanics and Entropy
- Advanced Image Processing Techniques
- Multimodal Machine Learning Applications
- Cooperative Communication and Network Coding
- Anomaly Detection Techniques and Applications
Hongik University
2018-2024
Yonsei University
2024
Chung-Ang University
2019
Stanford University
2012-2016
Roche (United States)
2016
In compressed sensing, one takes n < N samples of an N-dimensional vector using an n × N matrix A, obtaining undersampled measurements. For random matrices with independent standard Gaussian entries, it is known that, when the vector is k-sparse, there is a precisely determined phase transition: for a certain region in the phase diagram, convex optimization typically finds the sparsest solution, whereas outside that region it typically fails. It has been shown empirically that the same property, with the same transition location, holds for a wide range...
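A minimal sketch of the recovery setup the abstract describes, well inside the "success" region of the phase diagram (n/N = 0.5, k/n = 0.05). Orthogonal matching pursuit is used here as a simple greedy stand-in for the convex (l1) program the abstract studies; all dimensions and constants are illustrative.

```python
import numpy as np

# Toy instance: a k-sparse N-dimensional vector measured by an n x N
# i.i.d. Gaussian matrix, with n < N (undersampled).
rng = np.random.default_rng(0)
N, n, k = 200, 100, 5
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.normal(size=k) + 3.0

A = rng.normal(size=(n, N)) / np.sqrt(n)   # i.i.d. Gaussian sensing matrix
y = A @ x                                  # undersampled measurements y = Ax

# Orthogonal matching pursuit: greedily grow the support, refitting by
# least squares each step (a stand-in for l1 minimization).
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated column
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef                      # new residual

x_hat = np.zeros(N)
x_hat[support] = coef
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))  # relative recovery error
```

In this regime the sparsest solution is recovered essentially exactly; pushing k/n past the phase-transition curve makes the same solver fail, which is the phenomenon the abstract quantifies.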
This paper proposes a video delivery strategy for dynamic streaming services that maximizes time-average quality under a playback delay constraint in wireless caching networks. A network in which popular videos encoded by scalable video coding are already stored in randomly distributed caching nodes is considered, and adaptive delivery concepts and distance-based interference management are investigated in this paper. In this model, the user makes delay-constrained decisions depending on stochastic network states: 1) the node for video delivery, 2) video quality, 3)...
In DNA storage systems, there are tradeoffs between writing and reading costs. Increasing the code rate of error-correcting codes may save writing cost, but it will require more sequence reads for data retrieval. There is potentially a way to improve the sequencing and decoding processes such that the reading cost induced by this tradeoff is reduced without increasing the writing cost. In past research, clustering and alignment were considered as separate stages, but we believe that using information from all of these stages together can improve performance. Actual...
Four problems related to information divergence measures defined on finite alphabets are considered. In three of the cases we consider, we illustrate a contrast that arises between the binary-alphabet and larger-alphabet settings. This is surprising in some instances, since characterizations for the binary-alphabet setting do not generalize to their larger-alphabet counterparts. Specifically, we show that $f$-divergences are the unique decomposable divergences on binary alphabets that satisfy the data processing inequality, thereby clarifying claims that have previously...
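A quick numeric illustration of the data-processing inequality the abstract refers to, using the KL divergence (an f-divergence): pushing two distributions P and Q through the same channel W (a row-stochastic matrix) cannot increase their divergence. The distributions and channel below are arbitrary examples.

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p || q) for strictly positive finite distributions."""
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.2, 0.3, 0.5])
W = np.array([[0.9, 0.1],              # channel: 3 inputs -> 2 outputs,
              [0.5, 0.5],              # each row sums to 1
              [0.1, 0.9]])

before, after = kl(p, q), kl(p @ W, q @ W)
print(after <= before)                 # True: processing cannot increase divergence
```

The abstract's point is that on binary alphabets this inequality essentially characterizes the f-divergences among decomposable divergences, a fact that does not carry over to larger alphabets.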
We investigate the second-order asymptotics (source dispersion) of the successive refinement problem. Similarly to the classical definition of a successively refinable source, we say that a source is strongly successively refinable if successive refinement coding can achieve the optimum rate (including the dispersion terms) at both decoders. We establish a sufficient condition for strong successive refinability. We show that any discrete source under Hamming distortion and the Gaussian source under quadratic distortion are strongly successively refinable. We also demonstrate how the ideas can be used in point-to-point lossy compression problems...
We begin by presenting a simple lossy compressor operating at near-zero rate: The encoder merely describes the indices of the few maximal source components, while the decoder's reconstruction is a natural estimate of those components based on this information. This scheme turns out to be near-optimal for the memoryless Gaussian source in the sense of achieving the zero-rate slope of its distortion-rate function. Motivated by this finding, we then propose a scheme comprising iterating the above on an appropriately transformed version of the difference between the source and...
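A toy sketch of the near-zero-rate scheme described above: the encoder sends only the indices (and signs) of the m largest-magnitude components, and the decoder sets those positions to a fixed estimate of a Gaussian maximum, roughly sqrt(2 log n), leaving the rest at zero. The constants m and the estimate are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 10000, 20
x = rng.normal(size=n)                    # memoryless standard Gaussian source

idx = np.argsort(np.abs(x))[-m:]          # encoder: indices of the m largest |x_i|
est = np.sqrt(2 * np.log(n))              # decoder's estimate of a maximal component
x_hat = np.zeros(n)
x_hat[idx] = np.sign(x[idx]) * est        # (one sign bit per described index)

d_scheme = np.mean((x - x_hat) ** 2)      # distortion of the scheme
d_zero = np.mean(x ** 2)                  # rate-0 benchmark: reconstruct all zeros
print(d_scheme < d_zero)                  # describing the maxima already helps
```

Even this tiny description rate beats the zero-rate reconstruction, which is the sense in which the maxima carry the steepest part of the distortion-rate curve.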
DNA data storage systems have rapidly developed with novel error-correcting techniques, random access algorithms, and query systems. However, designing an algorithm for DNA storage is challenging, mainly due to the unpredictable nature of the errors and the extremely high price of experiments. Thus, a simulator is of interest that can imitate the error statistics of the system and replace experiments during development. We introduce generative adversarial networks to learn the channel statistics. Our model takes oligos (the DNA sequences to write) as...
DNA sequencing technology has advanced to a point where storage is becoming the central bottleneck in the acquisition and mining of more data. Large amounts of data are vital for genomics research, and generic compression tools, while viable, cannot offer the same savings as approaches tuned to the inherent biological properties. We propose an algorithm to compress a target genome given a known reference genome. The proposed algorithm first generates a mapping from the reference to the target genome, and then compresses this mapping with an entropy coder. As an illustration...
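A toy illustration of the reference-based idea, under simplifying assumptions (point substitutions only, zlib standing in for the entropy coder): encode the target as the positions where it differs from the reference and compress that sparse difference stream instead of the raw sequence. The sequences and mutation positions are fabricated for the example.

```python
import random
import zlib

# Fabricated 20 kb "reference genome" and a target differing in 3 positions.
random.seed(0)
reference = "".join(random.choice("ACGT") for _ in range(20000))
target = list(reference)
for pos in (17, 4096, 12345):                   # illustrative point substitutions
    target[pos] = "T" if target[pos] != "T" else "A"
target = "".join(target)

# "Mapping": list of (position, base) where target differs from reference.
diffs = [(i, t) for i, (r, t) in enumerate(zip(reference, target)) if r != t]
payload = ";".join(f"{i}:{t}" for i, t in diffs).encode()

direct = len(zlib.compress(target.encode()))    # reference-blind compression
mapped = len(zlib.compress(payload))            # compress the diff stream instead
print(mapped, "bytes vs", direct, "bytes")
```

Real genome compressors handle insertions, deletions, and rearrangements and use entropy coders tuned to the diff statistics, but the size gap already shows why exploiting a reference dominates generic compression.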
We establish two strong senses of universality of logarithmic loss as a distortion criterion in lossy compression. First, for any fixed-length lossy compression problem under an arbitrary distortion criterion, we show that there is an equivalent lossy compression problem under logarithmic loss. Second, in the successive refinement problem, if the first decoder operates under logarithmic loss, then any discrete memoryless source is successively refinable for the second decoder.
Due to the advantages of high storage density and longevity, DNA has become one of the most attractive technologies for future data storage systems. However, the writing/reading cost is still high, and more efficient techniques are required. In this paper, we propose improved log-likelihood ratio (LLR) processing schemes based on observed error statistics for low-density parity-check (LDPC) code decoding, which reduce the reading cost while the encoding is kept unchanged. The mismatch between the real channel and the assumed model, the limit on the maximum decoder input value, and scaling...
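A minimal sketch of the kind of LLR post-processing the abstract refers to: given raw channel LLRs, apply a scaling factor to compensate for channel mismatch and clip to the decoder's maximum input value. The scale and clip level below are illustrative placeholders, not the paper's values.

```python
import numpy as np

def process_llr(llr, scale=0.75, llr_max=8.0):
    """Scale raw LLRs (channel-mismatch compensation), then clip them to
    the decoder's representable input range [-llr_max, llr_max]."""
    return np.clip(scale * llr, -llr_max, llr_max)

raw = np.array([-30.0, -4.0, 0.5, 12.0, 100.0])
print(process_llr(raw))   # -> [-8.     -3.      0.375   8.      8.   ]
```

In an LDPC pipeline this runs between the channel soft-output computation and the iterative decoder, so the code itself (and hence the writing cost) is untouched while the decoder sees better-calibrated inputs.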
In mobile multimedia devices, the frame memory compression (FMC) technique based on embedded compression (EC) is becoming an increasingly important video-processing method for reducing the external data bandwidth requirement, which, in turn, results in power savings. Among various EC schemes, the combination of the discrete wavelet transform (DWT) and set partitioning in hierarchical trees (SPIHT) is widely used for FMC because it achieves high compression efficiency with low computational complexity. However, there is room for improvement in the conventional...
Deep learning-based image signal processor (ISP) models for mobile cameras can generate high-quality images that rival those of professional DSLR cameras. However, their computational demands often make them unsuitable for mobile settings. Additionally, modern mobile cameras employ non-Bayer color filter arrays (CFAs) such as Quad Bayer, Nona Bayer, and Q×Q...
We study the mean estimation problem under communication and local differential privacy constraints. While previous work has proposed \emph{order}-optimal algorithms for this problem (i.e., asymptotically optimal as we spend more bits), \emph{exact} optimality (in the non-asymptotic setting) has still not been achieved. In this work, we take a step towards characterizing the \emph{exact}-optimal approach in the presence of shared randomness (a random variable shared between the server and the user) and identify several conditions...
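For context, a sketch of the standard one-bit baseline this line of work improves on (a textbook mechanism, not the exact-optimal scheme of the paper): each user stochastically quantizes x in [-1, 1] to a sign bit, randomizes it with randomized response at privacy level epsilon, and the server debiases the average.

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 1.0
q = 1.0 / (np.exp(eps) + 1.0)            # randomized-response flip probability

def privatize(x):
    """One bit per user: unbiased sign quantization, then an eps-LDP flip."""
    s = np.where(rng.random(x.shape) < (1 + x) / 2, 1.0, -1.0)  # E[s] = x
    flip = rng.random(x.shape) < q
    return np.where(flip, -s, s)

x = np.full(200000, 0.4)                 # every user holds the value 0.4
est = privatize(x).mean() / (1 - 2 * q)  # debias the flip attenuation
print(est)                               # close to the true mean 0.4
```

Each report costs one bit and satisfies epsilon-LDP (the two bit values have likelihood ratio at most e^eps); the open question the abstract targets is the exactly optimal tradeoff at finite bit budgets, possibly exploiting shared randomness.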
DNA-based data storage is one of the most attractive research areas for future archival storage. However, it faces the problems of high writing and reading costs for practical use. There have been many efforts to resolve these problems, but existing schemes are not fully suitable for DNA storage, and more cost reduction is needed. We propose whole encoding and decoding procedures for DNA storage. The encoding procedure consists of a carefully designed single low-density parity-check code as an inter-oligo code, which corrects errors and dropouts...
Finding a biomarker that indicates the subject's age is one of the most important topics in biology. Several recent studies tried to extract such a biomarker from brain imaging data, including fMRI data. However, most of them focused on structural MRI data, which do not provide dynamics, and there have been few attempts to apply recently proposed deep learning models. We propose a neural network model that estimates the age of a subject from fMRI images using a recurrent neural network (RNN), more precisely, a gated recurrent unit (GRU). However, applying neural networks is not trivial due to the high dimensional nature of fMRI data. In this work, we...
We study the neural network (NN) compression problem, viewing the tension between the compression ratio and NN performance through the lens of rate-distortion theory. We choose a distortion metric that reflects the effect of compression on model output and derive the tradeoff between rate (compression ratio) and distortion. In addition to characterizing the theoretical limits of NN compression, this formulation shows that \emph{pruning}, implicitly or explicitly, must be part of a good compression algorithm. This observation bridges a gap between parts of the literature pertaining to NN and data compression, respectively, providing...
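A small numerical illustration of why pruning belongs in a good compression algorithm, under the toy assumption of an i.i.d. Gaussian weight matrix: at the same "rate" (fraction of weights kept), magnitude pruning distorts a layer's output far less than keeping a random subset of weights. Sizes and keep ratios are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(256, 256))          # a toy dense layer
x = rng.normal(size=256)                 # a toy input
keep = int(0.1 * W.size)                 # budget: keep 10% of the weights

# Magnitude pruning: zero everything below the 90th-percentile magnitude.
thresh = np.sort(np.abs(W), axis=None)[-keep]
W_mag = np.where(np.abs(W) >= thresh, W, 0.0)

# Baseline at the same rate: keep a uniformly random 10% of the weights.
mask = np.zeros(W.size, dtype=bool)
mask[rng.choice(W.size, keep, replace=False)] = True
W_rand = np.where(mask.reshape(W.shape), W, 0.0)

err_mag = np.linalg.norm(W @ x - W_mag @ x)      # output distortion, pruned
err_rand = np.linalg.norm(W @ x - W_rand @ x)    # output distortion, random
print(err_mag < err_rand)                        # keeping large weights wins
```

The rate-distortion formulation in the abstract makes this precise: any compressor near the tradeoff curve must concentrate its bit budget on the large-magnitude weights, i.e., it must prune.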
We establish a universal property of logarithmic loss in the successive refinement problem: if the first decoder operates under logarithmic loss, we show that any discrete memoryless source is successively refinable under an arbitrary distortion criterion for the second decoder. Based on this result, we propose a low-complexity lossy compression algorithm for any discrete memoryless source.
We investigate the problem of continuous-time causal estimation under a minimax criterion. Let X^T = {X_t, 0 ≤ t ≤ T} be governed by the probability law P_θ from some class of possible laws indexed by θ ∈ S, and let Y^T be noise-corrupted observations of X^T available to the estimator. We characterize the estimator minimizing...
We study the transmission of a single random variable across a Poisson channel, which takes a continuous-time waveform {λ_t : 0 ≤ t ≤ T} as an input, where 0 ≤ λ_t ≤ A for all t ≤ T. The output of the channel is a non-homogeneous Poisson arrival process with rate λ_t. We explore a class of schemes that are optimal in the distortion exponent sense under mean squared error loss. We determine...