Pierre Ruyssen

ORCID: 0009-0006-8506-6464
Research Areas
  • Sparse and Compressive Sensing Techniques
  • Blind Source Separation Techniques
  • Medical Image Segmentation Techniques
  • Domain Adaptation and Few-Shot Learning
  • Stock Market Forecasting Methods
  • Face and Expression Recognition
  • Machine Learning and Data Classification
  • Scientific Computing and Data Management
  • Stochastic processes and financial applications
  • Advanced Data Storage Technologies
  • Stochastic Gradient Optimization Techniques
  • Advanced Neural Network Applications
  • Multimodal Machine Learning Applications
  • Advanced SAR Imaging Techniques
  • Reinforcement Learning in Robotics
  • Advanced Image and Video Retrieval Techniques
  • Financial Markets and Investment Strategies
  • Visual Attention and Saliency Detection
  • Distributed Control Multi-Agent Systems
  • Neural Networks and Reservoir Computing
  • Tensor decomposition and applications

Google (United States)
2019-2024

Google (Switzerland)
2021-2023

Representation learning promises to unlock deep learning for the long tail of vision tasks without expensive labelled datasets. Yet, the absence of a unified evaluation for general visual representations hinders progress. Popular protocols are often too constrained (linear classification), limited in diversity (ImageNet, CIFAR, Pascal-VOC), or only weakly related to representation quality (ELBO, reconstruction error). We present the Visual Task Adaptation Benchmark (VTAB), which defines good representations as those that adapt...

10.48550/arxiv.1910.04867 preprint EN other-oa arXiv (Cornell University) 2019-01-01
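The "linear classification" protocol the abstract calls too constrained can be sketched in a few lines: the representation stays frozen and only a linear head is fitted on top of it. The features below are synthetic stand-ins for a pretrained backbone's outputs; all names and parameters are illustrative, not taken from VTAB.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for frozen features from a pretrained representation:
# two classes whose feature means differ (no real backbone is assumed).
n, d = 400, 16
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, d)) + 3.0 * y[:, None] / np.sqrt(d)

def linear_probe(X, y, lr=0.5, n_iter=500):
    # Linear evaluation: train only a logistic-regression head,
    # leaving the features themselves untouched.
    Xb = np.concatenate([X, np.ones((len(X), 1))], axis=1)  # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # sigmoid probabilities
        w -= lr * Xb.T @ (p - y) / len(y)       # logistic-loss gradient step
    return w

w = linear_probe(X[:300], y[:300])
X_test = np.concatenate([X[300:], np.ones((100, 1))], axis=1)
acc = ((X_test @ w > 0).astype(int) == y[300:]).mean()
```

The probe's test accuracy is then the representation-quality score; VTAB's point is that this single number can hide how well the representation transfers to more diverse downstream tasks.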

Data is a critical resource for Machine Learning (ML), yet working with data remains a key friction point. This paper introduces Croissant, a metadata format for datasets that simplifies how data is used by ML tools and frameworks. Croissant makes datasets more discoverable, portable and interoperable, thereby addressing significant challenges in ML data management and responsible AI. Croissant is already supported by several popular dataset repositories, spanning hundreds of thousands of datasets, ready to be loaded into the most...

10.1145/3650203.3663326 article EN 2024-05-29

This paper presents the benefits of using randomized neural networks instead of standard basis functions or deep neural networks to approximate the solutions of optimal stopping problems. The key idea is to use neural networks, where the parameters of the hidden layers are generated randomly and only the last layer is trained, in order to approximate the continuation value. Our approaches are applicable to high dimensional problems where existing approaches become increasingly impractical. In addition, since our approaches can be optimized using simple linear regression, they are easy to implement,...

10.3934/fmf.2023022 article EN Frontiers of Mathematical Finance 2023-12-14
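A minimal sketch of the randomized-network idea above, in the style of Longstaff-Schwartz regression: the hidden layer is drawn once and frozen, and only the linear readout is fitted by least squares at each exercise date. The toy Bermudan-put setup and every parameter below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bermudan put, priced by backward induction (illustrative parameters).
n_paths, n_steps = 10_000, 50
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
dt = T / n_steps

# Simulate geometric Brownian motion paths under the risk-neutral measure.
Z = rng.standard_normal((n_paths, n_steps))
logret = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
S = np.concatenate([np.full((n_paths, 1), S0),
                    S0 * np.exp(np.cumsum(logret, axis=1))], axis=1)

def payoff(s):
    return np.maximum(K - s, 0.0)

# Randomized network: hidden-layer weights are sampled once, never trained.
n_hidden = 64
W = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)

def features(x):
    # Frozen random hidden layer; only the linear readout is fitted.
    return np.tanh(x @ W + b)

# Backward induction: regress discounted future cashflows on random features
# of the current price to estimate the continuation value, and exercise
# wherever the immediate payoff beats it.
cashflow = payoff(S[:, -1])
for t in range(n_steps - 1, 0, -1):
    cashflow *= np.exp(-r * dt)
    itm = payoff(S[:, t]) > 0          # regress on in-the-money paths only
    if not itm.any():
        continue
    phi = features(S[itm, t:t + 1] / S0)
    beta, *_ = np.linalg.lstsq(phi, cashflow[itm], rcond=None)
    exercise = payoff(S[itm, t]) > phi @ beta
    idx = np.where(itm)[0][exercise]
    cashflow[idx] = payoff(S[idx, t])

price = np.exp(-r * dt) * cashflow.mean()
```

Because only the last layer is trained, each exercise date costs one linear least-squares solve, which is the implementation simplicity the abstract refers to.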

This article revisits the problem of decomposing a positive semidefinite matrix as the sum of a matrix with a given rank plus a sparse matrix. An immediate application can be found in portfolio optimization, when the matrix to be decomposed is the covariance matrix between the different assets of the portfolio. Our approach consists in representing the low-rank part of the solution as the product <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$MM^{T}$ </tex-math></inline-formula>,...

10.1109/tnnls.2021.3091598 article EN IEEE Transactions on Neural Networks and Learning Systems 2021-07-07
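The MM^T parametrization described above can be illustrated without a neural network: parametrize the low-rank part directly as M @ M.T (positive semidefinite by construction) and run subgradient descent on an elementwise l1 loss, so the residual is pushed toward sparsity. This is a simplified stand-in for the paper's method, which parametrizes M by a deep network; the function name and all hyperparameters below are ours.

```python
import numpy as np

def low_rank_plus_sparse(Sigma, rank, lr=1e-3, n_iter=4000, seed=0):
    """Split a PSD matrix into L + S with L = M @ M.T (rank <= `rank`,
    PSD by construction). Minimizing the elementwise l1 norm of the
    residual Sigma - M @ M.T drives the residual S toward sparsity."""
    rng = np.random.default_rng(seed)
    M = 0.1 * rng.standard_normal((Sigma.shape[0], rank))
    for _ in range(n_iter):
        G = np.sign(Sigma - M @ M.T)   # subgradient of sum|R| w.r.t. R
        M += lr * (G + G.T) @ M        # descent step: dLoss/dM = -(G + G.T) @ M
    L = M @ M.T
    return L, Sigma - L

# Synthetic demo: a rank-2 PSD part plus a few sparse entries.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2))
S_true = np.zeros((6, 6))
S_true[0, 3] = S_true[3, 0] = 0.3
Sigma = A @ A.T + S_true
L_hat, S_hat = low_rank_plus_sparse(Sigma, rank=2)
```

The factorization guarantees the rank constraint and positive semidefiniteness for free, so no projection step is needed during the descent.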

This paper presents the benefits of using randomized neural networks instead of standard basis functions or deep neural networks to approximate the solutions of optimal stopping problems. The key idea is to use neural networks, where the parameters of the hidden layers are generated randomly and only the last layer is trained, in order to approximate the continuation value. Our approaches are applicable to high dimensional problems where existing approaches become increasingly impractical. In addition, since our approaches can be optimized using simple linear regression, they are easy to implement and theoretical...

10.48550/arxiv.2104.13669 preprint EN cc-by-sa arXiv (Cornell University) 2021-01-01

Data is a critical resource for Machine Learning (ML), yet working with data remains a key friction point. This paper introduces Croissant, a metadata format for datasets that simplifies how data is used by ML tools and frameworks. Croissant makes datasets more discoverable, portable and interoperable, thereby addressing significant challenges in ML data management and responsible AI. Croissant is already supported by several popular dataset repositories, spanning hundreds of thousands of datasets, ready to be loaded into the most...

10.1145/3650203.3663326 preprint EN arXiv (Cornell University) 2024-03-28

The robust PCA of covariance matrices plays an essential role when isolating key explanatory features. The currently available methods for performing such a low-rank plus sparse decomposition are matrix specific, meaning that those algorithms must be re-run for every new matrix. Since these algorithms are computationally expensive, it is preferable to learn and store a function that nearly instantaneously performs this decomposition when evaluated. Therefore, we introduce Denise, a deep learning-based algorithm for the robust PCA of covariance matrices, or more generally,...

10.48550/arxiv.2004.13612 preprint EN other-oa arXiv (Cornell University) 2020-01-01

This paper revisits the problem of decomposing a positive semidefinite matrix as the sum of a matrix with a given rank plus a sparse matrix. An immediate application can be found in portfolio optimization, when the matrix to be decomposed is the covariance matrix between the different assets of the portfolio. Our approach consists in representing the low-rank part of the solution as the product $MM^{T}$, where $M$ is a rectangular matrix of appropriate size, parametrized by the coefficients of a deep neural network. We then use a gradient descent algorithm to minimize an appropriate loss function over...

10.48550/arxiv.1908.00461 preprint EN other-oa arXiv (Cornell University) 2019-01-01