- Gaussian Processes and Bayesian Inference
- Image Processing Techniques and Applications
- Advanced X-ray Imaging Techniques
- Anomaly Detection Techniques and Applications
- Advanced Image Processing Techniques
- Advanced Fluorescence Microscopy Techniques
- Image and Signal Denoising Methods
- Optical Measurement and Interference Techniques
- Digital Holography and Microscopy
- Target Tracking and Data Fusion in Sensor Networks
- Astrophysical Phenomena and Observations
- Time Series Analysis and Forecasting
- Spectroscopy and Chemometric Analyses
- Music and Audio Processing
- Fault Detection and Control Systems
- Control Systems and Identification
- Blind Source Separation Techniques
- Advanced Vision and Imaging
- Data Stream Mining Techniques
- Sparse and Compressive Sensing Techniques
- Remote-Sensing Image Classification
- X-ray Spectroscopy and Fluorescence Analysis
- Mobile Crowdsensing and Crowdsourcing
- Advanced Image and Video Retrieval Techniques
- Optical Coherence Tomography Applications
- Northwestern University (2018-2021)
- Science North (2018)
- Universidad de Granada (2011-2016)
The observation of gravitational waves from compact binary coalescences by LIGO and Virgo has begun a new era in astronomy. A critical challenge in making detections is determining whether loud transient features in the data are astrophysical in origin or are caused by instrumental or environmental sources. The citizen-science project Gravity Spy has been demonstrated as an efficient infrastructure for classifying known types of noise transients (glitches) through a combination of analysis performed by both citizen volunteers and machine learning. We...
In recent years, kernel methods, and in particular support vector machines (SVMs), have been successfully introduced to remote sensing image classification. Their properties make them appropriate for dealing with a high number of features and a low number of available labeled spectra. The introduction of alternative approaches based on (parametric) Bayesian inference has been quite scarce in recent years. Assuming a particular prior data distribution may lead to poor results in these problems because of the specificities and complexity of the data. In this...
The volume of labeled data is often the primary determinant of success in developing machine learning algorithms. This has increased interest in methods for leveraging crowds to scale labeling efforts and for learning from noisy crowd-sourced labels. The need is acute, but particularly challenging, in medical applications like pathology, due to the expertise required to generate quality labels and the limited availability of qualified experts. In this paper we investigate the application of Scalable Variational Gaussian Processes...
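A minimal sketch of the learning-from-crowds setting described in this abstract, assuming synthetic data: noisy labels from several simulated annotators are aggregated by majority vote and a Gaussian process classifier is fit to the result. The paper uses scalable variational GPs; sklearn's GaussianProcessClassifier (a Laplace approximation) merely stands in here, and all data and reliability values are illustrative.

```python
# Not the paper's method: majority-vote aggregation of noisy crowd labels,
# followed by a GP classifier (sklearn's Laplace-approximation GPC as a stand-in).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Synthetic "patch features" and hidden true labels (placeholders).
X = rng.normal(size=(200, 5))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Three simulated annotators with different reliabilities flip labels at random.
reliabilities = [0.95, 0.8, 0.6]
crowd_labels = np.stack([
    np.where(rng.random(len(y_true)) < p, y_true, 1 - y_true)
    for p in reliabilities
], axis=1)                                   # shape: (n_samples, n_annotators)

# Aggregate by majority vote, a simple baseline for learning from crowds.
y_agg = (crowd_labels.mean(axis=1) >= 0.5).astype(int)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gpc.fit(X, y_agg)
print("accuracy vs. hidden truth:", gpc.score(X, y_true))
```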
Wetlands provide many important ecosystem services, yet the United States lacks up-to-date, high-resolution wetland inventories. New, automated techniques for developing segmentation maps from aerial imagery can improve our understanding of the location and amount of wetlands. We assembled training and testing data sets of wetlands (patch sizes of 28 × 28 m2 and 56 × 56 m2) using Illinois Natural History Survey and National Agricultural Imagery Project data. Each patch was labeled as wetland or non-wetland. To augment these with...
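A minimal sketch of a binary wetland / non-wetland patch classifier of the kind this abstract describes, written in Keras. The 4-band input (e.g. RGB plus near-infrared) and the small architecture are assumptions for illustration only, not the authors' network.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Toy patch classifier: one image patch in, P(wetland) out.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 4)),        # assumed 4 spectral bands per patch
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # probability of the wetland class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_patches, train_labels, validation_data=(test_patches, test_labels))
```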
The characterization of independent stationary stochastic components (sources) is generally achieved by using the spectral matrix of partially correlated measurements, which are linearly related to the sources of interest. In the general case, where no assumptions are made concerning the way the sources are mixed, spectral analysis alone is not able to extract the true sources. While spectral analysis only uses the second-order properties of the sources, a procedure based on higher-order statistics (fourth-order cross cumulants) is developed. This approach leads to a complete identification of the sources.
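For reference, the standard definition of the fourth-order cross cumulant of zero-mean signals, i.e. the higher-order statistic referred to above; the paper's exact notation may differ:

```latex
\operatorname{cum}(x_i, x_j, x_k, x_l)
  = \mathbb{E}[x_i x_j x_k x_l]
  - \mathbb{E}[x_i x_j]\,\mathbb{E}[x_k x_l]
  - \mathbb{E}[x_i x_k]\,\mathbb{E}[x_j x_l]
  - \mathbb{E}[x_i x_l]\,\mathbb{E}[x_j x_k]
```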
In the last years, crowdsourcing has been transforming the way classification training sets are obtained. Instead of relying on a single expert annotator, crowdsourcing shares the labelling effort among a large number of collaborators. For instance, this is being applied in the Laser Interferometer Gravitational-Wave Observatory (LIGO), in order to detect glitches which might hinder the identification of true gravitational waves. This scenario poses new challenging difficulties, as it has to deal with different opinions from...
Despite recent advances, high-performance single-shot 3D microscopy remains an elusive task. By introducing designed diffractive optical elements (DOEs), one is capable of converting a microscope into a "kaleidoscope," in which case the snapshot image consists of an array of tiles and each tile focuses on a different depth. However, the acquired multifocal microscopic (MFM) image suffers from multiple sources of degradation, which prevents MFM from further applications. We propose a unifying computational framework which simplifies...
It is noted that the problem of source separation has no solution without a priori information when only the spectral matrix is used. The authors have developed an algorithm using fourth-order cumulants which solves the two-source problem without the necessity of such information. They show that, in the case of two sources and two sensors, the model can be described by a set of parameters. They then present the identification of these parameters using statistics up to order 4. The potentialities of this source-identification method are illustrated on simulated data.
We propose here a way of extracting independent sources from correlated inputs. The only hypotheses made are that the unknown relations between inputs and sources are linear, and that not more than one source is Gaussian. We show the impossibility of the extraction by using spectral analysis alone, and make the extraction possible through equations which relate the sources, based on cumulants and cross-cumulants.
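A minimal numpy sketch of the idea in the two abstracts above, under assumed toy data: second-order whitening of the mixtures leaves an unknown rotation, which a fourth-order (kurtosis-based) criterion then resolves. This is an illustration for two sources, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Two independent non-Gaussian sources (at most one source may be Gaussian).
s = np.vstack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # unknown mixing matrix
x = A @ s                                     # observed correlated mixtures

# Second-order step: whiten the observations (decorrelate, unit variance).
x -= x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = np.diag(d ** -0.5) @ E.T @ x              # whitened data, rotation still unknown

def kurt(u):
    # Fourth-order cumulant (kurtosis) of a zero-mean signal.
    return np.mean(u ** 4) - 3 * np.mean(u ** 2) ** 2

# Fourth-order step: search for the rotation maximizing the squared kurtoses.
angles = np.linspace(0, np.pi / 2, 181)
best = max(angles, key=lambda a: sum(
    kurt(row) ** 2 for row in
    (np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]) @ z)))
R = np.array([[np.cos(best), -np.sin(best)], [np.sin(best), np.cos(best)]])
y = R @ z                                     # estimated sources (up to scale and order)
print("kurtosis of recovered components:", [round(kurt(r), 3) for r in y])
```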
Multi-focus microscopy (MFM) provides a way to obtain 3D information by simultaneously capturing multiple focal planes. The naive method for MFM reconstruction is to stack the sub-images with alignment. However, the resolution in the z-axis of this method is limited by the number of acquired focal planes. In this work we build on a recent reconstruction algorithm for MFM, using information from multiple frames to improve the reconstruction quality. We propose two multiple-frame image reconstruction algorithms: a batch and a recursive approach. In the batch approach, we take all the frames and jointly estimate the motion of each frame. The recursive approach utilizes the reconstructed...
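A minimal sketch of the naive multi-frame baseline mentioned above, assuming synthetic frames: each frame is registered to the first by phase correlation and the aligned frames are averaged. The paper's batch and recursive estimators go beyond this; the function name and demo data here are illustrative only.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_and_average(frames):
    """frames: list of 2D arrays of identical shape (one MFM sub-image tile per frame)."""
    reference = frames[0]
    aligned = [reference]
    for frame in frames[1:]:
        # Estimated (row, col) shift needed to register `frame` with the reference.
        est_shift, _, _ = phase_cross_correlation(reference, frame)
        aligned.append(nd_shift(frame, est_shift))
    return np.mean(aligned, axis=0)

# Synthetic demo: three noisy, shifted copies of the same tile.
rng = np.random.default_rng(0)
tile = np.zeros((64, 64)); tile[24:40, 24:40] = 1.0
frames = [nd_shift(tile, (dy, dx)) + 0.05 * rng.normal(size=tile.shape)
          for dy, dx in [(0, 0), (2, -1), (-3, 2)]]
print(align_and_average(frames).shape)
```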
Recently, time-of-flight (ToF) sensors have emerged as a promising three-dimensional sensing technology that can be manufactured inexpensively in a compact size. However, current state-of-the-art ToF sensors suffer from low spatial resolution due to physical limitations of the fabrication process. In this paper, we analyze the sensor's output as a complex value coupling depth and intensity information in a phasor representation. Based on this analysis, we introduce a novel multi-frame superresolution technique to improve both...
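A small sketch of the phasor view of a continuous-wave ToF measurement described above: amplitude encodes intensity and phase encodes depth. The modulation frequency below is an illustrative value, not taken from the paper.

```python
import numpy as np

C = 3e8           # speed of light (m/s)
F_MOD = 20e6      # assumed modulation frequency (Hz)

def to_phasor(depth_m, amplitude):
    """Pack per-pixel depth and intensity into a single complex value."""
    phase = 4 * np.pi * F_MOD * depth_m / C          # round-trip phase delay
    return amplitude * np.exp(1j * phase)

def from_phasor(phasor):
    """Recover (depth, intensity) from the complex measurement."""
    phase = np.mod(np.angle(phasor), 2 * np.pi)       # wrap into [0, 2*pi)
    depth = phase * C / (4 * np.pi * F_MOD)
    return depth, np.abs(phasor)

z = to_phasor(np.array([1.0, 2.5, 4.0]), np.array([0.9, 0.5, 0.2]))
print(from_phasor(z))   # depths and intensities round-trip within the unambiguous range
```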
Realizing both high temporal and spatial resolution across a large volume is a key challenge for 3D fluorescent imaging. Towards achieving this objective, we introduce an interferometric multifocus microscopy (iMFM) system, a combination of multifocus microscopy (MFM) with two opposing objective lenses. We show that the proposed iMFM is capable of simultaneously producing multiple focal plane interferometry, which provides axial super-resolution and hence isotropic resolution in a single exposure. We design and simulate the microscope by employing special...
In this paper we address supervised learning problems where, instead of having a single annotator who provides the ground truth, multiple annotators, usually with varying degrees of expertise, provide conflicting labels for the same sample. Once Gaussian Process classification has been adapted to this problem, we propose and describe how Variational Bayes inference can be used to, given the observed labels, approximate the posterior distribution of the latent classifier and also estimate each annotator's reliability...
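A simplified EM sketch (a one-coin model) for estimating per-annotator reliability from conflicting binary labels, under assumed synthetic data. The paper couples reliability estimation with a GP classifier trained by Variational Bayes; this stand-alone version only illustrates the reliability-estimation part, and the function name is hypothetical.

```python
import numpy as np

def estimate_reliability(labels, n_iter=50):
    """labels: (n_samples, n_annotators) array of 0/1 crowd labels."""
    n, m = labels.shape
    q = labels.mean(axis=1)                    # initial P(y=1) per sample: majority vote
    alpha = np.full(m, 0.8)                    # initial per-annotator accuracy
    for _ in range(n_iter):
        # E-step: posterior over the true label given current reliabilities.
        log_p1 = np.sum(np.where(labels == 1, np.log(alpha), np.log(1 - alpha)), axis=1)
        log_p0 = np.sum(np.where(labels == 0, np.log(alpha), np.log(1 - alpha)), axis=1)
        q = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
        # M-step: reliability = expected fraction of agreements with the true label.
        agree = q[:, None] * (labels == 1) + (1 - q)[:, None] * (labels == 0)
        alpha = np.clip(agree.mean(axis=0), 1e-3, 1 - 1e-3)
    return q, alpha

# Demo with three simulated annotators of decreasing accuracy.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=300)
accs = [0.95, 0.75, 0.55]
L = np.stack([np.where(rng.random(y.size) < a, y, 1 - y) for a in accs], axis=1)
q, alpha = estimate_reliability(L)
print("estimated annotator accuracies:", np.round(alpha, 2))
```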
Every day, a huge amount of video data is generated for different purposes and applications. Fast and accurate algorithms for efficient search and retrieval are therefore essential. The interesting properties of sparse representation and the new sampling theory named Compressive Sensing (CS) constitute the core of the approach we are presenting in this paper. Once a domain (where sparsity is expected) has been chosen and the observations have been taken, the proposed method utilizes Bayesian modeling and inference to tackle the problem. In order to speed up the process we use...
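A minimal compressive-sensing sketch of the setting this abstract describes: a sparse vector is recovered from a small number of random projections with Orthogonal Matching Pursuit. The paper relies on Bayesian modeling and inference instead; OMP and the synthetic sizes below are only an illustration of the sparse-recovery problem.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                     # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)   # k-sparse signal
Phi = rng.normal(size=(m, n)) / np.sqrt(m)                      # random sensing matrix
y = Phi @ x                                                     # compressive observations

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi, y)
print("reconstruction error:", np.linalg.norm(omp.coef_ - x))
```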
In this paper, we introduce a new Gaussian Process (GP) classification method for multisensory data. The proposed approach can deal with noisy and missing data. It is also capable of estimating the contribution of each sensor towards the classification task. We use Bayesian modeling to build a GP-based classifier which combines the information provided by all sensors and approximates the posterior distribution of the GP using variational inference. During its training phase, the algorithm estimates each sensor's weight, which it then uses to assign a label...
We present a Bayesian approach for 3D image reconstruction of an extended object imaged with multi-focus microscopy (MFM). MFM simultaneously captures multiple sub-images of different focal planes to provide 3D information about the sample. The naive method to reconstruct the object is to stack the sub-images along the z-axis, but the result suffers from poor resolution in the z-axis. The maximum a posteriori framework provides a way to reconstruct the object according to its observation model and prior knowledge. It jointly estimates the reconstruction and the model parameters. Experimental results on synthetic and real...
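For clarity, the generic form of the maximum a posteriori estimate referred to above; the observation operator H, noise level sigma, and prior term R(x) are placeholders, and the paper's specific observation model and prior may differ:

```latex
\hat{x}_{\mathrm{MAP}}
  = \arg\max_{x} \; p(x \mid y)
  = \arg\min_{x} \;
      \frac{1}{2\sigma^{2}} \lVert y - Hx \rVert_{2}^{2}
      + \lambda\, R(x)
```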
The acquisition model of a 3D x-ray imaging system can be understood as the combination of two known techniques: tomography and ptychography. First, x-rays go through the 3D object, producing a set of 2D tomographic projections at different angles. Then, the detector captures the magnitude of the diffraction pattern produced by the interaction of these projections with a finite-sized coherent beam spot (also called the probe). In ptychography, in order to solve this phase retrieval problem, the observations have to be captured with a large overlap between them...
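A schematic numpy sketch of the forward (acquisition) model described above: a tomographic projection at each angle, followed by ptychographic far-field magnitudes for an overlapping probe scan. Probe shape, scan grid, and object sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_model(volume, angles_deg, probe, step):
    """volume: 3D complex array; probe: 2D complex array; step: scan step in pixels."""
    p = probe.shape[0]
    observations = []
    for angle in angles_deg:
        # Tomography: rotate the object and integrate along the beam direction.
        rotated = rotate(volume.real, angle, axes=(1, 2), reshape=False) \
                  + 1j * rotate(volume.imag, angle, axes=(1, 2), reshape=False)
        projection = rotated.sum(axis=1)                  # 2D complex projection
        # Ptychography: scan the finite probe with large overlap, record |FFT|.
        for r in range(0, projection.shape[0] - p + 1, step):
            for c in range(0, projection.shape[1] - p + 1, step):
                exit_wave = probe * projection[r:r + p, c:c + p]
                observations.append(np.abs(np.fft.fft2(exit_wave)))
    return observations

vol = np.ones((32, 32, 32)) * np.exp(1j * 0.1)            # toy complex-valued object
probe = np.outer(np.hanning(16), np.hanning(16)).astype(complex)
obs = forward_model(vol, angles_deg=[0, 45, 90], probe=probe, step=4)
print(len(obs), obs[0].shape)
```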
Sequence labeling aims at assigning a label to every sample of a signal (or pixel of an image) while considering the sequentiality (or vicinity) of the samples. To perform this task, many works in the literature first filter the data and then label them. Unfortunately, the filtering, which is performed independently from the labeling, is far from optimal and frequently makes the latter task harder. In this paper, a novel approach that trains a Gaussian process classifier and estimates the filter coefficients jointly is presented. The new approach, based on Bayesian modeling...