- Adversarial Robustness in Machine Learning
- Advanced Malware Detection Techniques
- Anomaly Detection Techniques and Applications
- COVID-19 epidemiological studies
- Software Testing and Debugging Techniques
- Digital Media Forensic Detection
- Software Engineering Research
- Power Systems and Technologies
- COVID-19 diagnosis using AI
- Advanced Neural Network Applications
- Electricity Theft Detection Techniques
- Advanced X-ray and CT Imaging
- Power System Reliability and Maintenance
- Advanced Steganography and Watermarking Techniques
- Machine Learning and Data Classification
- Smart Grid Security and Resilience
- Optimal Power Flow Distribution
- Medical Imaging Techniques and Applications
- Security and Verification in Computing
- Power Transformer Diagnostics and Insulation
- Machine Learning in Materials Science
- Cryptographic Implementations and Security
- Handwritten Text Recognition Techniques
- Complex Systems and Decision Making
- Domain Adaptation and Few-Shot Learning
University of Luxembourg
2019-2023
Université de Lorraine
2015
The rapid spread of the SARS-CoV-2 coronavirus is a major challenge that has led almost all governments worldwide to take drastic measures to respond to the tragedy. Chief among those measures is the massive lockdown of entire countries and cities, which, beyond its global economic impact, has created deep social and psychological tensions within populations. While the adopted mitigation measures (including lockdown) have generally proven useful, policymakers are now facing a critical question: how and when to lift these measures? A carefully-planned exit...
Efficiently solving Optimal Power Flow (OPF) problems in power systems is crucial for operational planning and grid management. There is a growing need for scalable algorithms capable of handling the increasing variability, constraints, and uncertainties of modern power networks while providing accurate and fast solutions. To address this, machine learning techniques, particularly Graph Neural Networks (GNNs), have emerged as promising approaches. This letter introduces SafePowerGraph-LLM, the first framework explicitly...
As machine learning (ML) techniques gain prominence in power system research, validating these methods' effectiveness under real-world conditions requires real-time hardware-in-the-loop (HIL) simulations. HIL simulation platforms enable the integration of computational models with physical devices, allowing rigorous testing across diverse scenarios critical to resilience and reliability. In this study, we develop a SafePowerGraph-HIL framework that utilizes simulations on the IEEE 9-bus system,...
Credit scoring systems are critical FinTech applications that concern the analysis of the creditworthiness of a person or organization. While decisions were previously based on human expertise, they now increasingly rely on data and machine learning. In this paper, we assess the ability of state-of-the-art adversarial machine learning to craft attacks against a real-world credit scoring system. Interestingly, we find that, while these techniques can generate large numbers of adversarial data, these are practically useless, as they all violate domain-specific...
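To illustrate the feasibility issue, here is a minimal sketch that counts how many crafted samples satisfy a set of domain constraints; the constraints and the feature layout are hypothetical placeholders, not the real credit-scoring rules or the paper's evaluation code.

```python
import numpy as np

def feasible_ratio(adv_samples, constraints):
    """Fraction of crafted adversarial samples that satisfy every domain
    constraint; each constraint is a boolean predicate over one sample."""
    ok = [all(c(x) for c in constraints) for x in adv_samples]
    return float(np.mean(ok))

# Illustrative credit-scoring-style constraints (hypothetical feature layout:
# x[0] = monthly installment, x[1] = income, x[2] = loan duration in months).
constraints = [
    lambda x: x[0] <= 0.5 * x[1],   # installment capped by income
    lambda x: x[2] >= 1,            # duration must be at least one month
]
print(feasible_ratio(np.random.rand(100, 3), constraints))
```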
While the literature on security attacks and defenses of Machine Learning (ML) systems mostly focuses on unrealistic adversarial examples, recent research has raised concern about the under-explored field of realistic adversarial attacks and their implications for the robustness of real-world systems. Our paper paves the way for a better understanding of adversarial robustness against realistic attacks and makes two major contributions. First, we conduct a study on three use cases (text classification, botnet detection, malware detection) and seven datasets in order to evaluate whether unrealistic adversarial examples...
Vulnerability to adversarial attacks is a well-known weakness of Deep Neural Networks. While most of the studies focus on single-task neural networks with computer vision datasets, very little research has considered complex multi-task models that are common in real applications. In this paper, we evaluate the design choices that impact the robustness of multi-task deep learning networks. We provide evidence that blindly adding auxiliary tasks, or weighing the tasks, provides a false sense of robustness. Thereby, we tone down the claim made by previous...
The generation of feasible adversarial examples is necessary for properly assessing models that work in a constrained feature space. However, it remains a challenging task to enforce constraints into attacks that were designed for computer vision. We propose a unified framework to generate adversarial examples that satisfy given domain constraints. Our framework can handle both linear and non-linear constraints. We instantiate our framework in two algorithms: a gradient-based attack that introduces constraints in the loss function to maximize, and a multi-objective search algorithm that aims...
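As a rough illustration of the gradient-based instantiation described above, the sketch below adds a constraint-violation penalty to the loss of a PGD-style attack; the `penalty_fn` argument, the step sizes, and the L-infinity projection are assumptions made for this sketch, not the paper's actual algorithm.

```python
import torch

def constrained_pgd(model, x, y, penalty_fn, eps=0.1, alpha=0.01, steps=40, lam=1.0):
    """PGD-style attack whose loss also rewards constraint satisfaction.

    penalty_fn(x_adv) should return a differentiable, non-negative tensor
    measuring domain-constraint violation (0 = all constraints satisfied).
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Maximize misclassification loss while minimizing constraint violation.
        loss = torch.nn.functional.cross_entropy(logits, y) - lam * penalty_fn(x_adv).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.clamp(x_adv, x - eps, x + eps)  # stay in the L-inf ball
    return x_adv.detach()
```

A linear constraint such as "feature 0 must not exceed feature 1" could, for instance, be encoded as `penalty_fn = lambda z: torch.relu(z[:, 0] - z[:, 1])`.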
Evasion attacks have been commonly seen as a weakness of Deep Neural Networks. In this paper, we flip the paradigm and envision this vulnerability as a useful application. We propose EAST, a new steganography and watermarking technique based on multi-label targeted evasion attacks. The key idea of EAST is to encode data as the labels of the image that the attacks produce. Our results confirm that our embedding is elusive; it not only passes unnoticed by humans and steganalysis methods, but also by machine-learning detectors. In addition, it is resilient to soft...
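A minimal sketch of the label-encoding idea, assuming a multi-label classifier with sigmoid outputs and some targeted evasion attack (the `targeted_multilabel_attack` call is a placeholder); it only shows how a bit string could map to target labels and be read back from predictions, not EAST itself.

```python
import torch

def bits_to_targets(bits):
    # Each message bit selects whether the corresponding label should fire.
    return torch.tensor(bits, dtype=torch.float32).unsqueeze(0)

def decode_bits(model, stego_image, threshold=0.5):
    # Recover the message from the labels the multi-label classifier predicts.
    with torch.no_grad():
        probs = torch.sigmoid(model(stego_image))
    return (probs > threshold).int().squeeze(0).tolist()

# Embedding (sketch): run any targeted multi-label evasion attack so that the
# perturbed image is classified with exactly the target label vector.
# stego = targeted_multilabel_attack(model, cover_image, bits_to_targets(message_bits))
# recovered = decode_bits(model, stego)
```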
Power grids are critical infrastructures of paramount importance to modern society, and their rapid evolution and growing interconnections have heightened the complexity of power systems (PS) operations. Traditional methods for grid analysis struggle with the computational demands of large-scale RES and ES integration, prompting the adoption of machine learning (ML) techniques, particularly Graph Neural Networks (GNNs). GNNs have proven effective in solving the alternating current (AC) Power Flow (PF) and Optimal Power Flow (OPF) problems, crucial...
Convolutional Neural Networks (CNNs) are intensively used to solve a wide variety of complex problems. Although powerful, such systems require manual configuration and tuning. To this end, we view CNNs as configurable systems and propose an end-to-end framework that allows the configuration, evaluation, and automated search for CNN architectures. Therefore, our contribution is threefold. First, we model the variability of CNN architectures with a Feature Model (FM) that generalizes over existing architectures. Each valid configuration of the FM corresponds to an architecture that can be...
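To make the configuration-to-architecture mapping concrete, here is a hedged sketch that treats one hypothetical feature-model configuration as a list of block descriptors and assembles a Keras CNN from it; the block vocabulary and hyperparameter values are illustrative, not the paper's feature model.

```python
import tensorflow as tf

def build_cnn(config, input_shape=(32, 32, 3), num_classes=10):
    """Assemble a CNN from one configuration, i.e. one valid product of a
    (hypothetical) feature model expressed as a list of block descriptors."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for block in config:
        if block["type"] == "conv":
            x = tf.keras.layers.Conv2D(block["filters"], block["kernel"],
                                       padding="same", activation="relu")(x)
        elif block["type"] == "pool":
            x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Flatten()(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

# One illustrative configuration: conv-pool-conv-pool, then a classifier head.
cfg = [{"type": "conv", "filters": 32, "kernel": 3},
       {"type": "pool"},
       {"type": "conv", "filters": 64, "kernel": 3},
       {"type": "pool"}]
model = build_cnn(cfg)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```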
We present FeatureNET, an open-source Neural Architecture Search (NAS) tool that generates diverse sets of Deep Learning (DL) models. FeatureNET relies on a meta-model of deep neural networks, consisting of generic configurable entities. Then, it uses tools developed in the context of software product lines to generate diverse (i.e., maximize the differences between the generated) DL models. The models are translated to Keras and can be integrated into typical machine learning pipelines. FeatureNET allows researchers to seamlessly generate a large variety...
We propose adversarial embedding, a new steganography and watermarking technique that embeds secret information within images. The key idea of our method is to use deep neural networks for image classification and adversarial attacks to embed secret information within images. Thus, we use the attacks to embed an encoding of the message within images and the related network outputs to extract it. These properties (invisible perturbations, nontransferability, resilience to tampering) offer guarantees regarding the confidentiality and integrity of the hidden messages. We empirically evaluate adversarial embedding using more than...
Efficiently solving unbalanced three-phase power flow in distribution grids is pivotal for grid analysis and simulation. There is a pressing need for scalable algorithms capable of handling large-scale unbalanced grids that can provide accurate and fast solutions. To address this, deep learning techniques, especially Graph Neural Networks (GNNs), have emerged. However, existing literature primarily focuses on balanced networks, leaving a critical gap in supporting unbalanced grids. This letter introduces PowerFlowMultiNet, a novel...
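As a rough sketch of the general idea of GNN-based power-flow surrogates (not PowerFlowMultiNet's multigraph architecture), the snippet below maps per-bus features to per-bus voltage estimates with two graph-convolution layers; the feature dimensions and the toy 3-bus graph are made up for illustration.

```python
import torch
from torch_geometric.nn import GCNConv
from torch_geometric.data import Data

class PowerFlowGNN(torch.nn.Module):
    """Generic GNN surrogate: buses are nodes, lines are edges; the network
    maps per-bus injections to per-bus voltage magnitude and angle."""
    def __init__(self, in_dim=4, hidden=64, out_dim=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, out_dim)

    def forward(self, data):
        x = torch.relu(self.conv1(data.x, data.edge_index))
        x = torch.relu(self.conv2(x, data.edge_index))
        return self.head(x)  # per-node [|V|, theta]

# Toy 3-bus example with random per-bus features (illustrative only).
x = torch.randn(3, 4)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]], dtype=torch.long)
pred = PowerFlowGNN()(Data(x=x, edge_index=edge_index))
```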
State-of-the-art deep learning models for tabular data have recently achieved acceptable performance to be deployed in industrial settings. However, the robustness of these models remains scarcely explored. Contrary to computer vision, there are no effective attacks to properly evaluate the adversarial robustness of deep tabular models, due to intrinsic properties of tabular data, such as categorical features, immutability, and feature relationship constraints. To fill this gap, we first propose CAPGD, a gradient attack that overcomes the failures of existing...
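For illustration only, here is a simplified repair step of the kind such tabular attacks need, restoring immutable features and rounding ordinal-encoded categorical ones; it is not CAPGD's actual mechanism, and the mask and index arguments are assumptions.

```python
import torch

def repair(x_adv, x_orig, mutable_mask, cat_idx):
    """Project a perturbed tabular sample back into the valid feature space.

    mutable_mask: 1 where a feature may change, 0 where it is immutable.
    cat_idx: indices of ordinal-encoded categorical columns.
    """
    x_adv = torch.where(mutable_mask.bool(), x_adv, x_orig)  # enforce immutability
    x_adv[:, cat_idx] = x_adv[:, cat_idx].round()            # snap categorical values
    return x_adv
```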
This paper analyzes the robustness of state-of-the-art AI-based models for power grid operations under the $N-1$ security criterion. While these models perform well in regular settings, our results highlight a significant loss of accuracy following the disconnection of a line. Using graph theory-based analysis, we demonstrate the impact of node connectivity on this loss. Our findings emphasize the need for practical scenario considerations in developing AI methodologies for critical infrastructure.
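A small sketch of the kind of graph-theoretic N-1 screening described here, using networkx to drop each line in turn and check connectivity and minimum bus degree; the toy 4-bus ring is illustrative, not the grids studied in the paper.

```python
import networkx as nx

def n_minus_1_scan(buses, lines):
    """Remove each line in turn (the N-1 contingencies) and report whether the
    grid stays connected and how bus connectivity changes."""
    g = nx.Graph()
    g.add_nodes_from(buses)
    g.add_edges_from(lines)
    report = []
    for u, v in lines:
        h = g.copy()
        h.remove_edge(u, v)
        report.append({
            "outage": (u, v),
            "connected": nx.is_connected(h),
            "min_degree": min(dict(h.degree()).values()),
        })
    return report

# Toy 4-bus ring grid (not the paper's data).
print(n_minus_1_scan(buses=[1, 2, 3, 4], lines=[(1, 2), (2, 3), (3, 4), (4, 1)]))
```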
While adversarial robustness in computer vision is a mature research field, fewer researchers have tackled evasion attacks against tabular deep learning, and even fewer have investigated robustification mechanisms and reliable defenses. We hypothesize that this lag is in part due to the lack of standardized benchmarks. To fill this gap, we propose TabularBench, the first comprehensive benchmark of robustness for tabular deep learning classification models. The models are evaluated with CAA, an ensemble of gradient and search attacks which was recently demonstrated as the most...
Although adversarial robustness has been extensively studied in white-box settings, recent advances in black-box attacks (including transfer- and query-based approaches) are primarily benchmarked against weak defenses, leaving a significant gap in the evaluation of their effectiveness against more robust models (e.g., those featured in the Robustbench leaderboard). In this paper, we question this lack of attention from black-box attacks to robust models. We establish a framework to evaluate black-box attacks against both top-performing and standard defense mechanisms,...
This paper reports on the first phase of an attempt to create a full retro-engineering pipeline that aims to construct a complete set of coherent typographic parameters defining the typefaces used in a printed homogeneous text. It should be stressed that this process cannot reasonably be expected to be fully automatic and that it is designed to include human interaction. Although font design is governed by quite robust formal geometric rulesets, it still heavily relies on subjective interpretation. Furthermore, different parameters,...
Natural Language Processing (NLP) models based on Machine Learning (ML) are susceptible to adversarial attacks – malicious algorithms that imperceptibly modify input text to force models into making incorrect predictions. However, evaluations of these attacks ignore the property of imperceptibility or study it under limited settings. This entails that such perturbations would not pass any human quality gate and do not represent real threats to human-checked NLP systems. To bypass this limitation and enable proper assessment (and...