- Privacy-Preserving Technologies in Data
- Stochastic Gradient Optimization Techniques
- Internet Traffic Analysis and Secure E-voting
- Adversarial Robustness in Machine Learning
- Cryptography and Data Security
- Open Education and E-Learning
- Data-Driven Disease Surveillance
- Big Data Technologies and Applications
- Data Stream Mining Techniques
- Ferroelectric and Negative Capacitance Devices
- Mobile Crowdsensing and Crowdsourcing
- Advanced Graph Neural Networks
- Online Learning and Analytics
- Access Control and Trust
- Distributed Sensor Networks and Detection Algorithms
Institut national de recherche en informatique et en automatique
2023
Accenture (Switzerland)
2021-2023
Observatoire de la Côte d’Azur
2021
Université Côte d'Azur
2021
The increasing size of data generated by smartphones and IoT devices has motivated the development of Federated Learning (FL), a framework for on-device collaborative training of machine learning models. First efforts in FL focused on learning a single global model with good average performance across clients, but such a model may be arbitrarily bad for a given client, due to the inherent heterogeneity of local data distributions. Federated multi-task learning (MTL) approaches can learn personalized models by formulating an opportune penalized optimization...
Federated Learning (FL) is a novel approach enabling several clients holding sensitive data to collaboratively train machine learning models without centralizing that data. The cross-silo FL setting corresponds to the case of few (2–50) reliable clients, each holding medium to large datasets, and is typically found in applications such as healthcare, finance, or industry. While previous works have proposed representative datasets for cross-device FL, no realistic healthcare datasets exist for cross-silo FL, thereby slowing algorithmic...
Federated learning usually employs a client-server architecture where an orchestrator iteratively aggregates model updates from remote clients and pushes back to them a refined model. This approach may be inefficient in cross-silo settings, as close-by data silos with high-speed access links can exchange information faster than with the orchestrator, which can become a communication bottleneck. In this paper we define the problem of topology design for cross-silo federated learning, using the theory of max-plus linear systems to compute the system...
Federated Learning (FL) enables multiple clients, such as mobile phones and IoT devices, to collaboratively train a global machine learning model while keeping their data localized. However, recent studies have revealed that the training phase of FL is vulnerable to reconstruction attacks, such as attribute inference attacks (AIA), where adversaries exploit exchanged messages and auxiliary public information to uncover the sensitive attributes of targeted clients. While these attacks have been extensively studied in the context...
The enormous amount of data produced by mobile and IoT devices has motivated the development of federated learning (FL), a framework allowing such devices (or clients) to collaboratively train machine learning models without sharing their local data. FL algorithms (like FedAvg) iteratively aggregate model updates computed by clients on their own datasets. Clients may exhibit different levels of participation, often correlated over time and with other clients. This paper presents the first convergence analysis for a FedAvg-like...
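The FedAvg-style aggregation described above (clients run local SGD on their own data, the server averages the resulting models weighted by dataset size) can be sketched as follows. The linear least-squares model, step sizes, and synthetic client data are illustrative assumptions, not the paper's setup or the authors' implementation.

```python
import numpy as np

def client_update(w, X, y, lr=0.1, local_steps=5):
    """Local full-batch gradient steps on 0.5*||Xw - y||^2 / n (illustrative model)."""
    w = w.copy()
    for _ in range(local_steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: average client models, weighted by local dataset size."""
    total = sum(len(y) for _, y in clients)
    updates = [client_update(w_global, X, y) * (len(y) / total) for X, y in clients]
    return np.sum(updates, axis=0)

# Synthetic heterogeneous clients sharing an underlying linear model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for n in (30, 50, 20):  # different local dataset sizes
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(50):  # 50 communication rounds
    w = fedavg_round(w, clients)
```

Note that with multiple local steps and heterogeneous data, the fixed point of this scheme need not be the exact global minimizer; here the clients' data come from the same distribution, so the averaged model still lands close to `w_true`.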
Federated learning allows clients to collaboratively learn statistical models while keeping their data local. Federated learning was originally used to train a unique global model to be served to all clients, but this approach might be sub-optimal when clients' local distributions are heterogeneous. In order to tackle this limitation, recent personalized federated learning methods train a separate model for each client while still leveraging the knowledge available at other clients. In this work, we exploit the ability of deep neural networks to extract high quality...
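One way to combine a shared model's representations with client-level personalization, in the spirit of this abstract, is to interpolate the global model's prediction with a local non-parametric (k-nearest-neighbour) estimate computed in the shared feature space. The interpolation rule, the weight `lam`, and all names below are hypothetical choices for illustration, not the authors' exact method.

```python
import numpy as np

def knn_personalized_predict(z, local_feats, local_labels, global_probs, k=5, lam=0.5):
    """Mix a global model's class probabilities with a local kNN estimate.

    z            : feature vector of the test point (from the shared model)
    local_feats  : features of this client's local examples, shape (n, d)
    local_labels : integer labels of the local examples, shape (n,)
    global_probs : the global model's predicted class probabilities for z
    """
    d = np.linalg.norm(local_feats - z, axis=1)      # distances in feature space
    nn = np.argsort(d)[:k]                           # indices of the k nearest neighbours
    n_classes = global_probs.shape[0]
    knn_probs = np.bincount(local_labels[nn], minlength=n_classes) / k
    return lam * knn_probs + (1 - lam) * global_probs

# Toy usage: three nearby local points of class 0, a uniform global prediction.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [2.0, 2.0]])
labels = np.array([0, 0, 0, 1])
p = knn_personalized_predict(np.array([0.05, 0.05]), feats, labels,
                             np.array([0.5, 0.5]), k=3, lam=0.5)
```

The weight `lam` controls how much the client trusts its own (possibly scarce) local data versus the global model, which is one natural knob for handling heterogeneous local distributions.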
Within the realm of privacy-preserving machine learning, empirical privacy defenses have been proposed as a solution to achieve satisfactory levels of training data privacy without a significant drop in model utility. Most existing defenses against membership inference attacks assume access to reference data, defined as an additional dataset coming from the same (or a similar) underlying distribution as the training data. Despite the common use of reference data, previous works are notably reticent about defining and evaluating reference data privacy. As gains in utility and/or...
Federated learning (FL) is an effective solution to train machine learning models on the increasing amount of data generated by IoT devices and smartphones while keeping such data localized. Most previous work on federated learning assumes that clients operate on static datasets collected before training starts. This approach may be inefficient because 1) it ignores new samples clients collect during training, and 2) it may require a potentially long preparatory phase for clients to collect enough data. Moreover, collecting data upfront may be simply impossible in scenarios with small...