- Sparse and Compressive Sensing Techniques
- Blind Source Separation Techniques
- Medical Image Segmentation Techniques
- Domain Adaptation and Few-Shot Learning
- Stock Market Forecasting Methods
- Face and Expression Recognition
- Machine Learning and Data Classification
- Scientific Computing and Data Management
- Stochastic Processes and Financial Applications
- Advanced Data Storage Technologies
- Stochastic Gradient Optimization Techniques
- Advanced Neural Network Applications
- Multimodal Machine Learning Applications
- Advanced SAR Imaging Techniques
- Reinforcement Learning in Robotics
- Advanced Image and Video Retrieval Techniques
- Financial Markets and Investment Strategies
- Visual Attention and Saliency Detection
- Distributed Control Multi-Agent Systems
- Neural Networks and Reservoir Computing
- Tensor Decomposition and Applications
Google (United States)
2019-2024
Google (Switzerland)
2021-2023
Representation learning promises to unlock deep learning for the long tail of vision tasks without expensive labelled datasets. Yet, the absence of a unified evaluation for general visual representations hinders progress. Popular protocols are often too constrained (linear classification), limited in diversity (ImageNet, CIFAR, Pascal-VOC), or only weakly related to representation quality (ELBO, reconstruction error). We present the Visual Task Adaptation Benchmark (VTAB), which defines good representations as those that adapt...
Data is a critical resource for Machine Learning (ML), yet working with data remains a key friction point. This paper introduces Croissant, a metadata format for datasets that simplifies how data is used by ML tools and frameworks. Croissant makes datasets more discoverable, portable and interoperable, thereby addressing significant challenges in data management and responsible AI. Croissant is already supported by several popular dataset repositories, spanning hundreds of thousands of datasets, ready to be loaded into the most...
This paper presents the benefits of using randomized neural networks instead of standard basis functions or deep neural networks to approximate the solutions of optimal stopping problems. The key idea is to use neural networks, where the parameters of the hidden layers are generated randomly and only the last layer is trained, in order to approximate the continuation value. Our approaches are applicable to high dimensional problems where existing approaches become increasingly impractical. In addition, since our approaches can be optimized by simple linear regression, they are easy to implement,...
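The core idea above — draw the hidden-layer parameters at random, freeze them, and fit only the linear readout — can be sketched in a few lines. This is a minimal illustration on a toy regression target, not the paper's optimal-stopping setup; all names, sizes, and the target function are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression stand-in for a continuation-value fit (illustrative only).
d, width, n = 5, 256, 2000
A = rng.normal(size=(width, d))   # hidden-layer weights: drawn once, never trained
b = rng.normal(size=width)        # hidden-layer biases: drawn once, never trained

def features(x):
    """Frozen random hidden layer with a tanh activation."""
    return np.tanh(x @ A.T + b)

x = rng.normal(size=(n, d))
y = np.sin(x).sum(axis=1) + 0.1 * rng.normal(size=n)  # illustrative target

# Training reduces to a linear regression on the random features:
# only the readout weights w are fitted (ridge-regularized least squares).
Phi = features(x)
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(width), Phi.T @ y)

# Evaluating the trained network is a single matrix product.
x_test = rng.normal(size=(200, d))
y_hat = features(x_test) @ w
```

Because the only trainable parameters enter linearly, the fit is a closed-form least-squares solve rather than an iterative optimization, which is what makes the approach easy to implement and cheap in high dimensions.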
The robust PCA of covariance matrices plays an essential role when isolating key explanatory features. The currently available methods for performing such a low-rank plus sparse decomposition are matrix specific, meaning that those algorithms must be re-run for every new matrix. Since these algorithms are computationally expensive, it is preferable to learn and store a function that nearly instantaneously performs this decomposition when evaluated. Therefore, we introduce Denise, a deep learning-based algorithm for robust PCA of covariance matrices, or more generally,...
This paper revisits the problem of decomposing a positive semidefinite matrix as the sum of a matrix with a given rank plus a sparse matrix. An immediate application can be found in portfolio optimization, when the matrix to be decomposed is the covariance matrix between the different assets in the portfolio. Our approach consists in representing the low-rank part of the solution as the product $MM^{T}$, where $M$ is a rectangular matrix of appropriate size, parametrized by the coefficients of a deep neural network. We then use a gradient descent algorithm to minimize a loss function over...
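The decomposition described above can be sketched numerically. The sketch below runs subgradient descent directly on the entries of $M$ (rather than parametrizing $M$ by a deep neural network, as the paper does) and uses an l1 loss on the residual so that it is pushed toward sparsity; the matrix sizes, the loss, and the step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a toy positive semidefinite matrix as (low rank) + (sparse), then
# recover the low-rank part in factored form M @ M.T. Illustrative only:
# plain subgradient descent on M itself, with an l1 loss on the residual
# S = Sigma - M @ M.T so that the residual is driven toward sparsity.
n, k = 20, 3
L = rng.normal(size=(n, k))
sparse = np.zeros((n, n))
diag = rng.integers(0, n, size=8)
sparse[diag, diag] = rng.uniform(1.0, 2.0, size=8)  # a few diagonal spikes
Sigma = L @ L.T + sparse

def l1_loss(M):
    """l1 norm of the residual Sigma - M @ M.T."""
    return np.abs(Sigma - M @ M.T).sum()

M = rng.normal(scale=0.1, size=(n, k))
M0 = M.copy()                       # keep the initial point for comparison
lr = 1e-3
for _ in range(4000):
    S = Sigma - M @ M.T
    # A subgradient of sum |S| w.r.t. M is -2 * sign(S) @ M (S is symmetric),
    # so a descent step adds 2 * lr * sign(S) @ M.
    M += lr * 2.0 * np.sign(S) @ M
```

The factored form $MM^{T}$ keeps the low-rank part positive semidefinite by construction, which is why no eigenvalue constraint is needed during the descent.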