Aswin C. Sankaranarayanan

ORCID: 0000-0003-0906-4046
Research Areas
  • Sparse and Compressive Sensing Techniques
  • Advanced Vision and Imaging
  • Optical measurement and interference techniques
  • Advanced Optical Imaging Technologies
  • Advanced Optical Sensing Technologies
  • Image Processing Techniques and Applications
  • Image and Signal Denoising Methods
  • Advanced Image Processing Techniques
  • Random lasers and scattering media
  • Optical Coherence Tomography Applications
  • Computer Graphics and Visualization Techniques
  • Microwave Imaging and Scattering Analysis
  • Video Surveillance and Tracking Methods
  • Photoacoustic and Ultrasonic Imaging
  • Robotics and Sensor-Based Localization
  • Blind Source Separation Techniques
  • Image Enhancement Techniques
  • Face and Expression Recognition
  • Robotic Path Planning Algorithms
  • Digital Holography and Microscopy
  • Target Tracking and Data Fusion in Sensor Networks
  • Advanced Image Fusion Techniques
  • Color Science and Applications
  • Optical Imaging and Spectroscopy Techniques
  • Indoor and Outdoor Localization Technologies

Affiliations

Carnegie Mellon University
2016-2025

Mahindra Group (India)
2024

Samsung (South Korea)
2023

University of Pittsburgh
2023

Rice University
2010-2021

Adobe Systems (United States)
2017-2018

Consumer Healthcare Products Association
2016

University of Maryland, College Park
2005-2015

Secom (Japan)
2002-2003

University of Waterloo
2002

Publications

While deep learning methods have achieved state-of-the-art performance in many challenging inverse problems like image inpainting and super-resolution, they invariably involve problem-specific training of the networks. Under this approach, each problem requires its own dedicated network. In scenarios where we need to solve a wide variety of problems, e.g., on a mobile camera, it is inefficient and expensive to use these specially trained networks. On the other hand, traditional methods using analytic signal priors can be used to solve any linear...

10.1109/iccv.2017.627 article EN 2017-10-01
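
As a rough illustration of the contrast drawn above — one signal prior reused across many linear inverse problems versus one trained network per problem — here is a minimal plug-and-play style sketch in NumPy/SciPy. The operators `A`/`At`, the Gaussian-smoothing `denoise`, and all sizes are illustrative assumptions, not the paper's projection network or its training setup.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(x, strength=1.0):
    # Placeholder prior; a learned denoising/projection network could be
    # dropped in here without problem-specific retraining.
    return gaussian_filter(x, sigma=strength)

def solve_linear_inverse(y, A, At, shape, step=1.0, iters=200):
    # Plug-and-play style proximal gradient: the data term depends on the
    # problem-specific operator A; the prior (denoiser) is shared.
    x = At(y).reshape(shape)
    for _ in range(iters):
        grad = At(A(x.ravel()) - y).reshape(shape)  # d/dx 0.5*||Ax - y||^2
        x = denoise(x - step * grad, strength=0.8)
    return x

# Toy inpainting example: observe a random 60% subset of pixels.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
mask = rng.random(img.size) < 0.6
A = lambda v: v[mask]                 # forward model: keep observed pixels
def At(r):                            # adjoint: zero-fill the missing pixels
    out = np.zeros(img.size)
    out[mask] = r
    return out

recon = solve_linear_inverse(A(img.ravel()), A, At, img.shape)
```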

FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera, where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded-aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor, which enables flat devices. We...

10.1109/tci.2016.2593662 article EN IEEE Transactions on Computational Imaging 2016-07-20
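
The entry above describes each sensor pixel as a linear combination of scene elements that is computationally demultiplexed. Below is a minimal sketch of that idea, assuming a separable coding model Y = PhiL · X · PhiR^T and Tikhonov-regularized inversion; the matrices, sizes, and regularization are illustrative stand-ins for a calibrated mask response, not FlatCam's actual calibration or solver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scene and separable coding matrices (stand-ins for a calibrated
# lensless-camera response; sizes are arbitrary).
X = rng.random((64, 64))                       # scene
PhiL = rng.choice([0.0, 1.0], size=(80, 64))   # row coding
PhiR = rng.choice([0.0, 1.0], size=(80, 64))   # column coding

# Each sensor pixel records a linear combination of scene elements:
# Y = PhiL @ X @ PhiR.T, plus noise.
Y = PhiL @ X @ PhiR.T + 0.01 * rng.standard_normal((80, 80))

def tikhonov_separable(Y, PhiL, PhiR, lam=1e-2):
    # Demultiplex via the SVDs of the two coding matrices.
    UL, sL, VLt = np.linalg.svd(PhiL, full_matrices=False)
    UR, sR, VRt = np.linalg.svd(PhiR, full_matrices=False)
    S = np.outer(sL, sR)
    core = (UL.T @ Y @ UR) * S / (S**2 + lam)   # regularized inversion
    return VLt.T @ core @ VRt

X_hat = tikhonov_separable(Y, PhiL, PhiR)
```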

Unions of subspaces provide a powerful generalization of single subspace models for collections of high-dimensional data; however, learning multiple subspaces from data is challenging due to the fact that segmentation--the identification of points that live in the same subspace--and subspace estimation must be performed simultaneously. Recently, sparse recovery methods were shown to provide a provable and robust strategy for exact feature selection (EFS)--recovering subsets of the ensemble of points that live in the same subspace. In parallel with recent studies of EFS with l1-minimization,...

10.5555/2567709.2567741 article EN Journal of Machine Learning Research 2013-01-01
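
A toy sketch of the exact-feature-selection idea referenced above: each point is expressed using a few other points from the ensemble (here via a small greedy, OMP-style routine), and an affinity built from those coefficients would feed a spectral clustering step. The data, sparsity level, and helper names are illustrative, not the paper's algorithm or experiments.

```python
import numpy as np

def omp_self_expression(Y, k=5):
    # For each column y_i, greedily select k other columns that best
    # represent it; exact feature selection succeeds when the selected
    # columns come from the same subspace as y_i.
    n = Y.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        y = Y[:, i]
        support, residual = [], y.copy()
        for _ in range(k):
            corr = np.abs(Y.T @ residual)
            corr[i] = -np.inf                 # exclude the point itself
            corr[support] = -np.inf           # exclude already-chosen points
            support.append(int(np.argmax(corr)))
            coeffs, *_ = np.linalg.lstsq(Y[:, support], y, rcond=None)
            residual = y - Y[:, support] @ coeffs
        C[support, i] = coeffs
    return C

# Toy union of two 3-dimensional subspaces in R^20, 40 points each.
rng = np.random.default_rng(2)
Y = np.hstack([rng.standard_normal((20, 3)) @ rng.standard_normal((3, 40))
               for _ in range(2)])
Y /= np.linalg.norm(Y, axis=0)
C = omp_self_expression(Y, k=3)
W = np.abs(C) + np.abs(C).T   # affinity; spectral clustering would follow
```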

Video cameras are among the most commonly used sensors in a large number of applications, ranging from surveillance to smart rooms for videoconferencing. There is a need to develop algorithms for tasks such as detection, tracking, and recognition of objects, specifically using distributed networks of cameras. The projective nature of imaging provides ample challenges for data association across cameras. We first discuss these challenges in the context of visual sensor networks. Then, we show how real-world constraints can be favorably...

10.1109/jproc.2008.928758 article EN Proceedings of the IEEE 2008-10-01

Compressive sensing (CS)-based spatial-multiplexing cameras (SMCs) sample a scene through a series of coded projections using a spatial light modulator and a few optical sensor elements. SMC architectures are particularly useful when imaging at wavelengths for which full-frame sensors are too cumbersome or expensive. While existing CS recovery algorithms for SMCs perform well for static images, they typically fail for time-varying scenes (videos). In this paper, we propose a novel CS multi-scale video (CS-MUVI)...

10.1109/iccphot.2012.6215212 article EN 2012-04-01
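
A simplified sketch of spatial-multiplexing measurements of a time-varying scene, assuming one pseudo-random coded inner product per time instant and a quasi-static window from which a coarse least-squares preview is formed. The patterns and window sizes are arbitrary assumptions; CS-MUVI's actual dual-scale pattern design and video recovery are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy "video": a bright square drifting across a 32x32 scene.
T, N = 256, 32 * 32
video = np.zeros((T, 32, 32))
for t in range(T):
    c = 4 + t // 16
    video[t, c:c + 8, c:c + 8] = 1.0

# Spatial-multiplexing camera: one coded inner product per time instant.
Phi = rng.choice([-1.0, 1.0], size=(T, N))           # pseudo-random patterns
y = np.einsum('tn,tn->t', Phi, video.reshape(T, N))  # y_t = <phi_t, x_t>

# Low-resolution "preview": assume the scene is quasi-static over a window
# of W measurements and solve a least-squares problem at 8x8 resolution.
W, low = 64, 8
D = Phi[:W].reshape(W, low, 32 // low, low, 32 // low).sum(axis=(2, 4))
preview, *_ = np.linalg.lstsq(D.reshape(W, low * low), y[:W], rcond=None)
preview = preview.reshape(low, low)   # coarse estimate usable for motion cues
```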

The design of conventional sensors is based primarily on the Shannon–Nyquist sampling theorem, which states that a signal of bandwidth W Hz is fully determined by its discrete-time samples provided the sampling rate exceeds 2W samples per second. For discrete-time signals, the theorem has a very simple interpretation: the number of data samples must be at least as large as the dimensionality of the signal being sampled and recovered. This important result enables signal processing in the discrete domain without any loss of information. However, in an increasing number of applications, the Shannon–Nyquist sampling theorem dictates...

10.1109/msp.2016.2602099 article EN IEEE Signal Processing Magazine 2017-01-01
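
For reference, a standard statement of the sampling theorem discussed above and its discrete-time reading, contrasted with the compressive measurement model; this is textbook background, not text from the article.

```latex
% Shannon–Nyquist: a signal x(t) bandlimited to W Hz is fully determined by
% uniform samples taken at a rate f_s exceeding 2W samples per second:
\[
  x(t) \;=\; \sum_{n=-\infty}^{\infty}
  x\!\left(\frac{n}{f_s}\right) \operatorname{sinc}\!\left(f_s t - n\right),
  \qquad f_s > 2W.
\]
% Discrete-time reading: recovering x \in \mathbb{R}^N from linear
% measurements y = \Phi x, \Phi \in \mathbb{R}^{M \times N}, requires
% M \ge N in general; compressive sensing exploits signal structure
% (e.g., sparsity) to operate at sub-Nyquist rates M < N.
```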

The basic design of a camera has remained unchanged for centuries. To acquire an image, light from the scene under view is focused onto a photosensitive surface using a lens. Over the years, the photosensitive surface has evolved from photographic film to an array of digital sensors. However, lenses remain an integral part of modern imaging systems in a broad range of applications.

10.1109/msp.2016.2581921 article EN IEEE Signal Processing Magazine 2016-09-01

There is an increasing need for passive 3D scanning in many applications that have stringent energy constraints. In this paper, we present an approach for single-frame, single-viewpoint, passive 3D imaging using a phase mask at the aperture plane of a camera. Our approach relies on an end-to-end optimization framework to jointly learn the optimal phase mask and the reconstruction algorithm, which allows accurate estimation of the range image from the captured data. Using our framework, we design a new phase mask that performs significantly better than existing approaches. We build...

10.1109/iccphot.2019.8747330 article EN 2019-05-01
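
A toy sketch of the end-to-end co-design idea described above: a coding vector standing in for the optical parameters and a linear reconstruction are optimized jointly by gradient descent on reconstruction error. The linear measurement model, sizes, and learning rates are assumptions for illustration; the paper's differentiable phase-mask image formation and reconstruction network are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, B = 64, 16, 512                         # signal dim, measurements, batch
S = rng.standard_normal((M, N)) / np.sqrt(N)  # fixed mixing/subsampling
U = np.linalg.qr(rng.standard_normal((N, N)))[0][:, :8]
X = U @ rng.standard_normal((8, B))           # signals near an 8-D subspace

m = np.ones(N)                                # "optics" parameters (coding vector)
W = np.zeros((N, M))                          # linear reconstruction
lr_m, lr_W = 1e-2, 1e-1
for _ in range(2000):
    Y = S @ (m[:, None] * X)                  # measurements y = S diag(m) x
    E = X - W @ Y                             # reconstruction error
    grad_W = -(E @ Y.T) / B                   # d loss / d W
    grad_m = -np.sum((S.T @ (W.T @ E)) * X, axis=1) / B  # d loss / d m
    W -= lr_W * grad_W
    m -= lr_m * grad_m

print("reconstruction MSE:", np.mean((X - W @ (S @ (m[:, None] * X)))**2))
```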

We present a novel theory of Fermat paths of light between a known visible scene and an unknown object not in the line of sight of a transient camera. These light paths either obey specular reflection or are reflected by the object's boundary, and hence encode the shape of the hidden object. We prove that Fermat paths correspond to discontinuities in the transient measurements. We then derive a novel constraint that relates the spatial derivatives of the path lengths at these discontinuities to the surface normal. Based on this theory, we present an algorithm, called Fermat Flow, to estimate the shape of the non-line-of-sight object. Our method allows, for...

10.1109/cvpr.2019.00696 article EN 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019-06-01
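
A schematic statement of the Fermat-path idea summarized above, under simplifying assumptions (a single pair of visible points, notation mine); the paper's precise definitions and derivation differ in detail.

```latex
% For a visible point v, a second visible point s, and a reflecting point p
% on the hidden object, consider the path length
\[
  \tau(p) \;=\; \lVert v - p \rVert \;+\; \lVert p - s \rVert .
\]
% Fermat paths are those for which \tau is locally stationary under
% perturbations of p (specular reflection, or reflection at the object's
% boundary); these stationary lengths show up as discontinuities in the
% transient measurements, and their spatial derivatives constrain the
% surface normal at p.
```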

Non-line-of-sight (NLOS) imaging is the problem of reconstructing properties of scenes occluded from a sensor, using measurements of light that indirectly travels from the scene to the sensor through intermediate diffuse reflections. We introduce an analysis-by-synthesis framework that can reconstruct the complex shape and reflectance of an NLOS object. Our framework deviates from prior work on NLOS reconstruction by directly optimizing for a surface representation of the object, in place of the commonly employed volumetric representations. At the core of our new...

10.1109/cvpr.2019.00164 article EN 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019-06-01

We propose a novel framework for the deterministic construction of linear, near-isometric embeddings of a finite set of data points. Given a set of training points X ⊂ ℝ^N, we consider the secant set S(X) that consists of all pairwise difference vectors of X, normalized to lie on the unit sphere. We formulate an affine rank minimization problem to construct a matrix Ψ that preserves the norms of all the vectors in S(X) up to a distortion parameter δ. While affine rank minimization is NP-hard,...

10.1109/tsp.2015.2452228 article EN IEEE Transactions on Signal Processing 2015-07-07
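
The secant-based formulation sketched above, written out as an optimization problem; the notation is approximate, and the convex relaxation mentioned in the comments is the usual surrogate rather than a claim about the paper's exact solver.

```latex
% With the secant set S(X) = { (x_i - x_j)/\lVert x_i - x_j \rVert },
% one seeks a low-rank PSD matrix P = \Psi^{\mathsf T}\Psi such that
% \lVert \Psi s \rVert^2 = s^{\mathsf T} P s is close to 1 on every secant:
\[
  \min_{P \succeq 0} \ \operatorname{rank}(P)
  \quad \text{s.t.} \quad
  \bigl| s^{\mathsf T} P s - 1 \bigr| \le \delta
  \quad \forall\, s \in S(X).
\]
% Rank minimization being NP-hard, a convex surrogate (e.g., the trace /
% nuclear norm of P) is optimized instead, and the embedding \Psi is read
% off from a factorization of the optimal P.
```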

Video cameras are invariably bandwidth limited, and this results in a trade-off between spatial and temporal resolution. Advances in sensor manufacturing technology have tremendously increased the available spatial resolution of modern cameras while simultaneously lowering the cost of these sensors. In stark contrast, hardware improvements in temporal resolution have been modest. One solution to enhance temporal resolution is to use high frame rate imaging devices such as high-speed sensors and camera arrays. Unfortunately, these solutions are expensive. An alternate approach, motivated by recent advances...

10.1109/iccphot.2012.6215211 article EN 2012-04-01
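
A minimal sketch of temporally coded capture in the spirit of the trade-off discussed above: a single recorded frame integrates several sub-frames under a binary shutter code. The scene, code, and sizes are toy assumptions, and the recovery step is only indicated in comments.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy coded-exposure capture: one recorded frame integrates T sub-frames of
# a fast-changing scene, modulated by a binary on/off shutter code.
T, H, W = 16, 32, 32
video = np.zeros((T, H, W))
for t in range(T):                         # a small block moving left to right
    video[t, 12:20, 2 + t:10 + t] = 1.0

code = rng.choice([0.0, 1.0], size=T)      # per-sub-frame shutter code
frame = np.tensordot(code, video, axes=1)  # what the sensor records

# Recovery would pose frame = sum_t code[t] * x_t and exploit structure in
# the video (sparsity, brightness constancy); only the measurement is
# formed in this sketch.
```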

We present a virtual reality display that is capable of generating a dense collection of depth/focal planes. This is achieved by driving a focus-tunable lens to sweep a range of focal lengths at a high frequency and, subsequently, tracking the focal length precisely at microsecond time resolutions using an optical module. Precise tracking of the focal length, coupled with a high-speed display, enables our lab prototype to generate 1600 focal planes per second. This novel, first-of-its-kind multifocal display is capable of resolving the vergence-accommodation conflict endemic...

10.1145/3272127.3275015 article EN ACM Transactions on Graphics 2018-11-28
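
Background for the focal-plane sweep described above: the standard thin-lens relation (with the display placed within the focal length) shows why modulating the lens power moves the virtual image depth. This is generic optics, not the paper's exact optical model.

```latex
% Display panel at distance d behind a focus-tunable lens of focal length
% f(t), with d < f(t): the panel appears as a virtual image at depth z(t),
\[
  \frac{1}{z(t)} \;=\; \frac{1}{d} \;-\; \frac{1}{f(t)},
\]
% so sweeping the lens power 1/f(t) at high frequency, while updating the
% display contents at matching instants, places content on a dense set of
% focal planes.
```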

Non-line-of-sight (NLOS) imaging utilizes the full 5D light transient measurements to reconstruct scenes beyond the camera's field of view. Mathematically, this requires solving an elliptical tomography problem that unmixes the shape and albedo of the NLOS scene from the spatially-multiplexed measurements. In this paper, we propose a new approach for NLOS imaging by studying the properties of first-returning photons from three-bounce light paths. We show that the times of flight of first-returning photons are dependent only on the geometry of the scene, and that each such observation is almost always generated by a single...

10.1109/cvpr.2017.251 article EN 2017-07-01
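
The three-bounce time-of-flight constraint behind the elliptical tomography mentioned above, stated schematically (wall-to-camera terms and radiometric factors omitted).

```latex
% Light leaves an illuminated wall point l', bounces off a hidden scene
% point p, and returns to a sensed wall point s'. A photon arriving with
% (wall-to-wall) time of flight t satisfies
\[
  c\,t \;=\; \lVert l' - p \rVert \;+\; \lVert p - s' \rVert ,
\]
% i.e., p lies on an ellipsoid with foci l' and s'. The first-returning
% photon for a pair (l', s') corresponds to the smallest such t, and hence
% to the point where the hidden scene first meets the growing ellipsoid.
```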

We propose the use of a light-weight setup consisting of a collocated camera and light source – commonly found on mobile devices – to reconstruct surface normals and spatially-varying BRDFs of near-planar material samples. A collocated setup provides only a 1-D "univariate" sampling of a 3-D isotropic BRDF. We show that such univariate sampling is sufficient to estimate the parameters of commonly used analytical BRDF models. Subsequently, we use a dictionary-based reflectance prior to derive a robust technique for per-pixel normal and BRDF estimation. We demonstrate real-world shape...

10.1109/iccv.2017.573 article EN 2017-10-01
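
A short note on why a collocated camera and light yields only a univariate BRDF sampling, as stated above; this is a standard geometric observation written in my own notation.

```latex
% With the light source at the camera's center of projection, the incident
% and outgoing directions coincide, \omega_i = \omega_o = \omega. For an
% isotropic BRDF \rho, the shading observed at a surface point with normal
% n at distance d from the device reduces to
\[
  I \;\propto\; \frac{\rho(\theta)\,\cos\theta}{d^{2}},
  \qquad \cos\theta = \langle n, \omega \rangle,
\]
% i.e., a 1-D ("univariate") slice of the 3-D isotropic BRDF, which is what
% the dictionary-based reflectance prior compensates for.
```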

FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera, where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded-aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor, which can enable a thin system. We employ...

10.48550/arxiv.1509.00116 preprint EN arXiv (Cornell University) 2015-01-01

Non-line-of-sight (NLOS) imaging aims to reconstruct scenes outside the field of view of an imaging system. A common approach is to measure the so-called light transients, which facilitates reconstruction through ellipsoidal tomography that involves solving a linear least-squares problem. Unfortunately, the corresponding linear operator is very high-dimensional and lacks structures that facilitate fast solvers, and so, the ensuing optimization is a computationally daunting task. We introduce a computationally tractable framework for this problem. Our main observation...

10.1109/iccv.2019.00798 article EN 2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019-10-01
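
A common volumetric way to write the light-transient forward model referred to above; falloff, visibility, and normalization terms are omitted, so this is a schematic rather than the paper's exact operator.

```latex
% For an illumination point l' and a sensing point s' on the visible wall,
% the transient i(l', s', t) sums contributions from all hidden points p
% whose three-bounce path length matches c t:
\[
  i(l', s', t) \;=\; \int_{\Omega} \rho(p)\,
  \delta\!\left( c\,t - \lVert l' - p \rVert - \lVert p - s' \rVert \right)
  \,\mathrm{d}p ,
\]
% a linear operator on the albedo volume \rho; NLOS reconstruction then
% amounts to a very large linear least-squares problem in \rho.
```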

This paper addresses the problem of estimating the shape of objects that exhibit spatially-varying reflectance. We assume that multiple images of the object are obtained under a fixed view-point and varying illumination, i.e., the setting of photometric stereo. At the core of our techniques is the assumption that the BRDF at each pixel lies in the non-negative span of a known BRDF dictionary. This enables a per-pixel surface normal estimation framework that is computationally tractable and requires no initialization, in spite of the underlying problem being non-convex. Our framework first solves...

10.1109/tpami.2016.2623613 article EN IEEE Transactions on Pattern Analysis and Machine Intelligence 2016-11-01
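
A toy sketch of per-pixel estimation with a non-negative BRDF dictionary, as assumed above: for each candidate normal, the observed intensities are fit by non-negative least squares over a small analytic dictionary, and the normal with the smallest residual is kept. The dictionary, candidate sampling, and helper functions are illustrative assumptions, not the paper's dictionary of measured materials or its solver.

```python
import numpy as np
from scipy.optimize import nnls

def render(n, l, v, exponents):
    # Dictionary atoms rendered for normal n, light l, viewer v:
    # one Lambertian atom plus a few Blinn-Phong lobes.
    h = (l + v) / np.linalg.norm(l + v)
    diff = max(n @ l, 0.0)
    spec = [max(n @ h, 0.0) ** k * diff for k in exponents]
    return np.array([diff] + spec)

def estimate_normal(I, lights, v, exponents, n_candidates=500, seed=0):
    # Pick the candidate normal whose non-negative dictionary fit (NNLS)
    # has the smallest residual on the observed intensities I.
    rng = np.random.default_rng(seed)
    cands = rng.standard_normal((n_candidates, 3))
    cands[:, 2] = np.abs(cands[:, 2])            # keep camera-facing normals
    cands /= np.linalg.norm(cands, axis=1, keepdims=True)
    best, best_res = None, np.inf
    for n in cands:
        A = np.stack([render(n, l, v, exponents) for l in lights])
        _, res = nnls(A, I)
        if res < best_res:
            best, best_res = n, res
    return best

# Toy test: a Lambertian pixel with normal [0, 0, 1] under 20 random lights.
rng = np.random.default_rng(1)
lights = rng.standard_normal((20, 3)); lights[:, 2] = np.abs(lights[:, 2])
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
v = np.array([0.0, 0.0, 1.0])
I = np.maximum(lights @ np.array([0.0, 0.0, 1.0]), 0.0)
n_hat = estimate_normal(I, lights, v, exponents=(10, 50, 200))
```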

Spatial-multiplexing cameras have emerged as a promising alternative to classical imaging devices, often enabling the acquisition of 'more for less'. One popular architecture for spatial multiplexing is the single-pixel camera (SPC), which acquires coded measurements of a scene with pseudo-random masks. Significant theoretical developments over the past few years provide a means for reconstruction of the original imagery from coded measurements at sub-Nyquist sampling rates. Yet, accurate reconstruction generally requires high measurement rates and...

10.1109/cvprw.2015.7301371 article EN 2015-06-01

Multiview face recognition has become an active research area in the last few years. In this paper, we present an approach for video-based face recognition in camera networks. Our goal is to handle pose variations by exploiting the redundancy in multiview video data. However, unlike traditional approaches that explicitly estimate the pose of the face, we propose a novel feature that is robust to the presence of diffuse lighting and pose variations. The proposed feature is developed using the spherical harmonic representation of the face texture-mapped onto a sphere; the texture map itself...

10.1109/tip.2014.2300812 article EN IEEE Transactions on Image Processing 2014-01-31
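
A schematic of why spherical-harmonic representations are attractive under pose variations, as alluded to above; the per-band energy invariance below is standard background, not necessarily the paper's exact feature definition.

```latex
% Expand the face texture mapped onto the sphere in spherical harmonics,
\[
  f(\theta, \phi) \;=\; \sum_{l \ge 0} \sum_{m=-l}^{l}
  f_{lm}\, Y_{lm}(\theta, \phi),
  \qquad
  e_l \;=\; \sum_{m=-l}^{l} \lvert f_{lm} \rvert^{2}.
\]
% The per-band energies e_l are unchanged by 3-D rotations of the sphere,
% which makes features built from them robust to head-pose variations.
```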