Modar Alfadly

ORCID: 0000-0002-3763-3819
Research Areas
  • Multimodal Machine Learning Applications
  • Adversarial Robustness in Machine Learning
  • Domain Adaptation and Few-Shot Learning
  • Advanced Image and Video Retrieval Techniques
  • Anomaly Detection Techniques and Applications
  • Viral Infections and Outbreaks Research
  • Machine Learning and ELM
  • Target Tracking and Data Fusion in Sensor Networks
  • Bacillus and Francisella bacterial research
  • Neural Networks and Applications
  • Nuclear reactor physics and engineering
  • Human Pose and Action Recognition
  • Advanced Neural Network Applications
  • Machine Learning and Algorithms
  • Visual Attention and Saliency Detection

King Abdullah University of Science and Technology
2017-2020

Deep neural networks have been playing an essential role in many computer vision tasks including Visual Question Answering (VQA). Until recently, the study of their accuracy was the main focus of research, but now there is a trend toward assessing the robustness of these models against adversarial attacks by evaluating their tolerance to varying noise levels. In VQA, attacks can target the image and/or the proposed question, and yet the latter lacks proper analysis. In this work, we propose a flexible framework that focuses on the language...

10.1609/aaai.v33i01.33018449 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2019-07-17
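
To make the evaluation protocol described above concrete, here is a minimal sketch (not the authors' code) of measuring a VQA model's accuracy as main questions are swapped for semantically related rephrasings of decreasing similarity; the model interface, data fields, and ranking are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): probe a VQA model's robustness to
# language perturbations by replacing each main question with semantically
# related rephrasings ranked from most to least similar, and track accuracy.

from typing import Callable, Dict, List, Tuple

def accuracy_under_question_noise(
    vqa_model: Callable[[str, str], str],   # (image_id, question) -> answer
    samples: List[Dict],                    # each: image_id, question, answer, rephrasings
    noise_levels: int = 3,
) -> List[Tuple[int, float]]:
    """Accuracy at each noise level; level 0 uses the original main question."""
    results = []
    for level in range(noise_levels + 1):
        correct = 0
        for s in samples:
            if level == 0:
                q = s["question"]
            else:
                # level k picks the k-th most dissimilar rephrasing available
                ranked = s["rephrasings"]
                q = ranked[min(level - 1, len(ranked) - 1)]
            if vqa_model(s["image_id"], q) == s["answer"]:
                correct += 1
        results.append((level, correct / len(samples)))
    return results

if __name__ == "__main__":
    # Toy stand-in model and data, just to show the evaluation loop.
    toy_model = lambda img, q: "red" if "color" in q else "unknown"
    toy_data = [{
        "image_id": "img_001",
        "question": "What color is the car?",
        "answer": "red",
        "rephrasings": ["Which color does the car have?", "What is the car like?"],
    }]
    print(accuracy_under_question_noise(toy_model, toy_data))
```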

The outstanding performance of deep neural networks (DNNs), for the visual recognition task in particular, has been demonstrated on several large-scale benchmarks. This immensely strengthened the line of research that aims to understand and analyze the driving reasons behind the effectiveness of these networks. One important aspect of this analysis has recently gained much attention, namely the reaction of a DNN to noisy input. This has spawned the development of adversarial input attacks as well as training strategies that make DNNs more robust...

10.1109/cvpr.2018.00948 article EN 2018-06-01

Taking an image and a question as the input of our method, it can output a text-based answer to the query about the given image, the task known as Visual Question Answering (VQA). There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the basic questions of the given question. The second module takes the main question, the image, and these basic questions as input and then outputs a text-based answer to the main question. We formulate the basic question generation problem as a LASSO optimization problem, and also propose a criterion for how to exploit these basic questions to help answer the main question. Our method is evaluated on the challenging VQA dataset and yields state-of-the-art accuracy,...

10.48550/arxiv.1703.06492 preprint EN cc-by arXiv (Cornell University) 2017-01-01
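
The abstract above names a LASSO formulation for generating and ranking basic questions; the following is a minimal, hedged sketch of that general idea, expressing the main question's embedding as a sparse combination of candidate basic-question embeddings. The embedding setup and hyperparameters are assumptions, not the paper's implementation.

```python
# Minimal sketch: rank candidate "basic questions" by solving the LASSO problem
#     min_x ||A x - b||_2^2 + lambda * ||x||_1
# where columns of A are candidate-question embeddings and b is the main
# question's embedding. The encoder and weights are stand-ins.

import numpy as np
from sklearn.linear_model import Lasso

def rank_basic_questions(basic_embs: np.ndarray,   # (d, n): one column per candidate
                         main_emb: np.ndarray,      # (d,): embedding of main question
                         alpha: float = 0.01,
                         top_k: int = 3):
    """Return indices of the top_k candidates by absolute LASSO weight."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10_000)
    lasso.fit(basic_embs, main_emb)
    weights = lasso.coef_
    order = np.argsort(-np.abs(weights))
    return order[:top_k], weights[order[:top_k]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(64, 20))          # 20 candidate basic-question embeddings
    b = 0.7 * A[:, 3] + 0.3 * A[:, 11]     # main question built from two of them
    idx, w = rank_basic_questions(A, b)
    print(idx, np.round(w, 3))             # expect candidates 3 and 11 near the top
```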

Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most of the current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the given question. The second module takes the main question, the image, and these basic questions as input and then outputs a text-based answer about the image. We claim that...

10.48550/arxiv.1709.04625 preprint EN other-oa arXiv (Cornell University) 2017-01-01

Deep neural networks have been playing an essential role in many computer vision tasks including Visual Question Answering (VQA). Until recently, the study of their accuracy was the main focus of research, but now there is a trend toward assessing the robustness of these models against adversarial attacks by evaluating their tolerance to varying noise levels. In VQA, attacks can target the image and/or the proposed question, and yet the latter lacks proper analysis. In this work, we propose a flexible framework that focuses on the language...

10.48550/arxiv.1711.06232 preprint EN other-oa arXiv (Cornell University) 2017-01-01

Despite the impressive performance of deep neural networks (DNNs) on numerous vision tasks, they still exhibit yet-to-be-understood uncouth behaviours. One puzzling behaviour is the subtle, sensitive reaction of DNNs to various noise attacks. Such a nuisance has strengthened the line of research around developing and training noise-robust networks. In this work, we propose a new regularizer that aims to minimize the probabilistic expected loss of a DNN subject to a generic Gaussian input. We provide an efficient and simple approach...

10.48550/arxiv.1904.11005 preprint EN other-oa arXiv (Cornell University) 2019-01-01
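
For intuition about the regularizer described above, here is a small sketch that approximates the expected loss under Gaussian input noise with a few Monte Carlo samples; the paper derives a more efficient analytic form, so this is an illustrative stand-in, with all names and settings assumed.

```python
# Minimal sketch, not the paper's analytic derivation: the general idea of a
# regularizer on E_{eps ~ N(0, sigma^2 I)}[ loss(f(x + eps), y) ], approximated
# here with a handful of sampled noise realizations.

import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_expected_loss(model: nn.Module,
                           x: torch.Tensor,
                           y: torch.Tensor,
                           sigma: float = 0.1,
                           n_samples: int = 4) -> torch.Tensor:
    """Monte Carlo estimate of the expected cross-entropy under Gaussian input noise."""
    losses = []
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)
        losses.append(F.cross_entropy(model(noisy), y))
    return torch.stack(losses).mean()

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
    x = torch.randn(8, 1, 28, 28)
    y = torch.randint(0, 10, (8,))
    clean_loss = F.cross_entropy(model(x), y)
    reg = gaussian_expected_loss(model, x, y)
    total = clean_loss + 0.5 * reg          # 0.5 is an arbitrary trade-off weight
    total.backward()
    print(float(clean_loss), float(reg))
```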

Deep neural networks have been critical in the task of Visual Question Answering (VQA), with research traditionally focused on improving model accuracy. Recently, however, there has been a trend towards evaluating the robustness of these models against adversarial attacks. This involves assessing the accuracy of VQA models under increasing levels of noise in the input, which can target either the image or the proposed query question, dubbed the main question. However, there is currently a lack of proper analysis of this aspect of VQA. This work proposes a new...

10.48550/arxiv.2304.03147 preprint EN cc-by arXiv (Cornell University) 2023-01-01

The impressive performance of deep neural networks (DNNs) has immensely strengthened the line of research that aims at theoretically analyzing their effectiveness. This has incited research on the reaction of DNNs to noisy input, namely the development of adversarial input attacks and strategies that lead to networks robust to these attacks. To that end, in this paper, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network (Affine, ReLU, Affine) subject to Gaussian input. In particular,...

10.48550/arxiv.2006.11776 preprint EN other-oa arXiv (Cornell University) 2020-01-01
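
For context, a standard closed-form result that underlies this kind of two-moment analysis of an (Affine, ReLU, Affine) network under Gaussian input is the mean of a ReLU applied to a Gaussian variable; the notation below is mine, not necessarily the paper's.

```latex
% Closed-form first moment of a ReLU of a Gaussian (standard result); the
% network-level mean follows by linearity of the outer affine map.
\[
v = A x + a \sim \mathcal{N}(\mu_v, \Sigma_v), \qquad
\mu_v = A\mu + a, \quad \Sigma_v = A \Sigma A^{\top},
\]
\[
\mathbb{E}\!\left[\max(0, v_i)\right]
  = \mu_{v,i}\,\Phi\!\left(\tfrac{\mu_{v,i}}{\sigma_i}\right)
  + \sigma_i\,\varphi\!\left(\tfrac{\mu_{v,i}}{\sigma_i}\right),
  \qquad \sigma_i^2 = (\Sigma_v)_{ii},
\]
\[
\mathbb{E}\!\left[B\max(0, v) + b\right] = B\,\mathbb{E}\!\left[\max(0, v)\right] + b,
\]
where $x \sim \mathcal{N}(\mu, \Sigma)$ and $\Phi$, $\varphi$ are the standard normal CDF and PDF.
```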

Training Deep Neural Networks that are robust to norm-bounded adversarial attacks remains an elusive problem. While exact and inexact verification-based methods are generally too expensive to train large networks, it was demonstrated that input intervals can be inexpensively propagated from one layer to another through deep networks. This interval bound propagation (IBP) approach not only improved both robustness and certified accuracy, but was also the first to be employed on large/deep networks. However, due to the very loose nature of...

10.48550/arxiv.1905.12418 preprint EN other-oa arXiv (Cornell University) 2019-01-01
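
As a quick illustration of the interval bound propagation idea referenced above, the sketch below pushes elementwise input bounds through an affine layer and a ReLU; shapes and names are illustrative, not the paper's implementation.

```python
# Minimal sketch of interval bound propagation (IBP) through an affine layer
# followed by a ReLU, for a norm-bounded (box) input perturbation.

import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate elementwise bounds [l, u] through x -> W @ x + b."""
    center, radius = (l + u) / 2.0, (u - l) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius        # worst case over the input box
    return new_center - new_radius, new_center + new_radius

def ibp_relu(l, u):
    """ReLU is monotone, so bounds pass through directly."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=4)
    eps = 0.1                               # radius of the input perturbation box
    l, u = x - eps, x + eps
    W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
    W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)
    l, u = ibp_affine(l, u, W1, b1)
    l, u = ibp_relu(l, u)
    l, u = ibp_affine(l, u, W2, b2)
    print("output lower:", np.round(l, 3))
    print("output upper:", np.round(u, 3))
```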

Deep neural networks have been playing an essential role in the task of Visual Question Answering (VQA). Until recently, their accuracy has been the main focus of research. Now there is a trend toward assessing the robustness of these models against adversarial attacks by evaluating their accuracy under increasing levels of noisiness in the inputs to VQA models. In VQA, an attack can target the image and/or the proposed query question, dubbed the main question, and yet there is a lack of proper analysis of this latter aspect of VQA. In this work, we propose a new method that uses semantically related...

10.48550/arxiv.1912.01452 preprint EN cc-by arXiv (Cornell University) 2019-01-01