- Generative Adversarial Networks and Image Synthesis
- Chaos-based Image/Signal Encryption
- Advanced Image and Video Retrieval Techniques
- Digital Media Forensic Detection
- Image Processing and 3D Reconstruction
- Advanced Sensor and Energy Harvesting Materials
- Fractal and DNA Sequence Analysis
- Advanced Vision and Imaging
- Handwritten Text Recognition Techniques
- Model Reduction and Neural Networks
- Image Retrieval and Classification Techniques
- Graphene Research and Applications
- Electrohydrodynamics and Fluid Dynamics
- Anomaly Detection Techniques and Applications
- Computer Graphics and Visualization Techniques
- Video Analysis and Summarization
- Human Pose and Action Recognition
- Explainable Artificial Intelligence (XAI)
- Remote-Sensing Image Classification
- Robotics and Sensor-Based Localization
- 3D Surveying and Cultural Heritage
- Machine Learning in Bioinformatics
- Geochemistry and Geologic Mapping
- Hand Gesture Recognition Systems
- Topological and Geometric Data Analysis
Eindhoven University of Technology
2019-2023
Universidad Nacional Autónoma de México
2015-2018
Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear
2018
Fundación para las Relaciones Internacionales y el Diálogo Exterior
2018
Direct functionalization of prefabricated free-standing graphene oxide paper (GOP) is the only approach suitable for systematically tuning its mechanical, thermal, and electronic characteristics. However, traditional liquid-phase functionalization can compromise the physical integrity of the paper-like material, up to total disintegration. In the present paper, we attempted to apply an alternative, solvent-free strategy for the facile, nondestructive functionalization of GOP with 1-octadecylamine (ODA) and 1,12-diaminododecane (DAD) as representatives of aliphatic...
A standard Variational Autoencoder, with a Euclidean latent space, is structurally incapable of capturing the topological properties of certain datasets. To remove these obstructions, we introduce Diffusion Variational Autoencoders with arbitrary manifolds as a latent space. A Diffusion Variational Autoencoder uses transition kernels of Brownian motion on the manifold. In particular, it uses properties of the Brownian motion to implement the reparametrization trick and fast approximations to the KL divergence. We show that it is capable of capturing the topology of synthetic datasets. Additionally, we train MNIST on spheres, tori, projective spaces,...
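A minimal sketch of the kind of reparametrized sampling such a manifold latent space requires, assuming a unit sphere as the manifold and a single Gaussian step in the tangent space as a crude stand-in for the Brownian transition kernel; the function names (`exp_map`, `sample_latent`) and the one-step approximation are illustrative assumptions, not the paper's implementation:

```python
# Sketch: reparametrized sampling on the unit sphere S^2 (assumed setup,
# not the authors' code). A Gaussian tangent step of variance exp(log_t)
# approximates one step of Brownian motion on the manifold.
import torch

def exp_map(mu, v):
    """Exponential map on the unit sphere: move from mu along tangent vector v."""
    norm_v = v.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    return torch.cos(norm_v) * mu + torch.sin(norm_v) * v / norm_v

def sample_latent(mu, log_t):
    """Reparametrized sample: Gaussian step in the tangent space at mu."""
    eps = torch.randn_like(mu)
    # remove the radial component so the noise lies in the tangent space at mu
    v = eps - (eps * mu).sum(-1, keepdim=True) * mu
    v = v * torch.exp(0.5 * log_t)
    z = exp_map(mu, v)
    return z / z.norm(dim=-1, keepdim=True)  # re-normalize for numerical safety

mu = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)  # encoder means on S^2
log_t = torch.full((4, 1), -2.0)                               # encoder log "diffusion times"
z = sample_latent(mu, log_t)                                   # differentiable w.r.t. mu, log_t
print(z.norm(dim=-1))                                          # all ~1: samples stay on the sphere
```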
A disentangled representation of a data set should be capable of recovering the underlying factors that generated it. One question that arises is whether using a Euclidean space for latent variable models can produce a disentangled representation when the generating factors have a certain geometrical structure. Take, for example, images of a car seen from different angles. The angle has a periodic structure, but a 1-dimensional Euclidean latent variable would fail to capture this topology. How can we address this problem? The submissions presented in the first stage of the NeurIPS 2019 Disentanglement Challenge...
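A toy numerical illustration of the periodicity issue described above, using hypothetical angles rather than the challenge data: two nearly identical views end up far apart in a 1-dimensional Euclidean latent, but close together when the angle is embedded on the circle:

```python
# Toy example (hypothetical angles, not challenge data): a periodic factor
# such as viewing angle is badly represented by a 1-D Euclidean latent.
import numpy as np

theta_a, theta_b = np.deg2rad(1.0), np.deg2rad(359.0)  # nearly the same view of the car

# 1-D Euclidean latent: just the angle itself
euclidean_dist = abs(theta_a - theta_b)

# Circular latent: embed the angle on S^1 as (cos, sin)
za = np.array([np.cos(theta_a), np.sin(theta_a)])
zb = np.array([np.cos(theta_b), np.sin(theta_b)])
circular_dist = np.linalg.norm(za - zb)

print(f"Euclidean distance: {euclidean_dist:.3f}")  # ~6.25, almost the full range
print(f"Distance on S^1:    {circular_dist:.3f}")   # ~0.035, nearly identical
```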
We introduce Equivariant Isomorphic Networks (EquIN) -- a method for learning representations that are equivariant with respect to general group actions over data. Differently from existing representation learners, EquIN is suitable for group actions that are not free, i.e., that stabilize data via nontrivial symmetries. EquIN is theoretically grounded in the orbit-stabilizer theorem from group theory. This guarantees that an ideal learner infers isomorphic representations while trained on equivariance alone and thus fully extracts the geometric structure of data. We provide...
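As a side illustration of the orbit-stabilizer theorem that the method relies on (not part of EquIN itself): for a finite group G acting on a point x, |G| = |orbit(x)| · |stabilizer(x)|, and the stabilizer is nontrivial exactly when the action is not free. The group and points below are made up for illustration:

```python
# Orbit-stabilizer theorem on a toy example: rotations of the plane by
# multiples of 90 degrees (the cyclic group C4) acting on two points.
import numpy as np

def rotation(k):
    """Rotation by k * 90 degrees."""
    a = k * np.pi / 2
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

group = [rotation(k) for k in range(4)]  # the cyclic group C4

def orbit_and_stabilizer(x, group, tol=1e-9):
    orbit, stabilizer = [], []
    for g in group:
        gx = g @ x
        if not any(np.allclose(gx, y, atol=tol) for y in orbit):
            orbit.append(gx)
        if np.allclose(gx, x, atol=tol):
            stabilizer.append(g)
    return orbit, stabilizer

# A corner of the square has a trivial stabilizer (the action is free there);
# the center is fixed by every rotation (nontrivial stabilizer).
for name, x in [("corner", np.array([1.0, 1.0])), ("center", np.array([0.0, 0.0]))]:
    orbit, stab = orbit_and_stabilizer(x, group)
    print(name, len(orbit), len(stab), len(orbit) * len(stab) == len(group))  # |G| = |orbit| * |stab|
```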
The definition of Linear Symmetry-Based Disentanglement (LSBD) proposed by Higgins et al. (2018) outlines the properties that should characterize a disentangled representation that captures the symmetries of data. However, it is not clear how to measure the degree to which a representation fulfills these properties. We propose a metric for evaluating the level of LSBD that a representation achieves. We provide a practical method to evaluate this metric and use it to evaluate the disentanglement of representations obtained for three datasets with underlying $SO(2)$ symmetries.
The definition of Linear Symmetry-Based Disentanglement (LSBD) formalizes the notion of linearly disentangled representations, but there is currently no metric to quantify LSBD. Such a metric is crucial to evaluate LSBD methods and to compare them to previous understandings of disentanglement. We propose $\mathcal{D}_\mathrm{LSBD}$, a mathematically sound metric to quantify LSBD, and provide a practical implementation for $\mathrm{SO}(2)$ groups. Furthermore, from this metric we derive LSBD-VAE, a semi-supervised method to learn LSBD representations. We demonstrate...
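A minimal sketch of the equivariance property that such a metric quantifies, assuming a toy encoder and an $\mathrm{SO}(2)$ action on angles (both made up for illustration, not the paper's $\mathcal{D}_\mathrm{LSBD}$ implementation): a representation is linearly symmetry-based disentangled when transforming the data and then encoding gives the same result as encoding and then applying a fixed linear representation of the group element:

```python
# Toy check of linear SO(2) equivariance: z(g . x) = rho(g) z(x).
# The "data" here are just generating angles; the encoder is hypothetical.
import numpy as np

def rho(angle):
    """Linear SO(2) representation acting on a 2-D latent."""
    return np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])

def encoder(x):
    """Toy disentangled encoder: maps a generating angle onto the circle S^1."""
    return np.array([np.cos(x), np.sin(x)])

rng = np.random.default_rng(0)
errors = []
for _ in range(100):
    x = rng.uniform(0, 2 * np.pi)   # a data point (its generating angle)
    g = rng.uniform(0, 2 * np.pi)   # a group element of SO(2)
    lhs = encoder(x + g)            # transform the data, then encode
    rhs = rho(g) @ encoder(x)       # encode, then transform the latent
    errors.append(np.linalg.norm(lhs - rhs))

print(max(errors))  # ~0 for this encoder: the representation is LSBD for SO(2)
```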