Ahmed M. Abdelmoniem

ORCID: 0000-0002-1374-1882
Research Areas
  • Cloud Computing and Resource Management
  • Software-Defined Networks and 5G
  • Privacy-Preserving Technologies in Data
  • Network Traffic and Congestion Control
  • Stochastic Gradient Optimization Techniques
  • IoT and Edge/Fog Computing
  • Caching and Content Delivery
  • Cryptography and Data Security
  • Domain Adaptation and Few-Shot Learning
  • Advanced Neural Network Applications
  • Mobile Crowdsensing and Crowdsourcing
  • Sparse and Compressive Sensing Techniques
  • Advanced Data Storage Technologies
  • Opportunistic and Delay-Tolerant Networks
  • Blockchain Technology Applications and Security
  • Cooperative Communication and Network Coding
  • Adversarial Robustness in Machine Learning
  • Advanced Optical Network Technologies
  • Generative Adversarial Networks and Image Synthesis
  • Anomaly Detection Techniques and Applications
  • Vehicle Dynamics and Control Systems
  • Cardiovascular Function and Risk Factors
  • Age of Information Optimization
  • Distributed and Parallel Computing Systems
  • Privacy, Security, and Data Protection

Queen Mary University of London
2022-2025

Graphic Era University
2024

Symbiosis International University
2024

Assiut University
2010-2023

King Abdullah University of Science and Technology
2019-2023

Farwaniya Hospital
2021-2023

Ain Shams University
2020-2023

Laboratoire d'Informatique de Paris-Nord
2022

Hong Kong University of Science and Technology
2014-2021

University of Hong Kong
2014-2021

Federated learning (FL) is becoming a popular paradigm for collaborative learning over distributed, private datasets owned by non-trusting entities. FL has seen successful deployment in production environments, and it has been adopted by services such as virtual keyboards, auto-completion, item recommendation, and several IoT applications. However, FL comes with the challenge of performing training over largely heterogeneous datasets, devices, and networks that are out of the control of the centralized server. Motivated by this inherent...

10.1109/jiot.2023.3250275 article EN IEEE Internet of Things Journal 2023-03-07
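For context, the federated averaging loop that such deployments build on can be sketched in a few lines. This is a generic illustration, not the paper's system: the function names, the numpy weight representation, and weighting by local sample count are assumptions of this sketch.

```python
import numpy as np

def fedavg_round(global_weights, clients, local_update):
    """One round of federated averaging over heterogeneous clients.

    clients: list of (num_samples, local_data) pairs; local_update returns
    the client's new weights after training on its private data.
    """
    updates, sizes = [], []
    for num_samples, local_data in clients:
        # Each client trains locally; raw data never leaves the device.
        updates.append(local_update(global_weights, local_data))
        sizes.append(num_samples)
    total = sum(sizes)
    # Weight each client's update by its share of the total samples.
    return sum(w * (n / total) for w, n in zip(updates, sizes))
```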

Federated Learning (FL) enables distributed training by a large number of learners using their local data, thereby enhancing privacy and reducing communication. However, it presents numerous challenges relating to the heterogeneity of data distribution, device capabilities, and participant availability as deployments scale, which can impact both model convergence and bias. Existing FL schemes use random participant selection to improve fairness; however, this can result in inefficient use of resources and lower-quality training. In this work, we...

10.1145/3552326.3567485 preprint EN 2023-05-05
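A minimal sketch of what non-random, resource-aware participant selection can look like, under assumptions of my own: the staleness and capacity scores and the priority formula below are hypothetical, not the paper's algorithm.

```python
import random

def select_participants(clients, k, staleness, capacity):
    """Pick k clients, biasing toward under-represented, capable devices.

    staleness[c]: rounds since client c last participated (fairness).
    capacity[c]: normalized compute/network score (efficiency).
    """
    def priority(c):
        # Favor clients that have not contributed recently and that are
        # likely to finish the round on time; tiny noise breaks ties.
        return staleness[c] * capacity[c] + random.random() * 1e-3
    return sorted(clients, key=priority, reverse=True)[:k]
```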

The hype of the Internet of Things as an enabler for intelligent applications, and the related promise of ushering in accessibility, efficiency, and quality of service, is met with hindering security and data privacy concerns. It follows that such IoT systems, which are empowered by artificial intelligence, need to be investigated with cognisance of the threats and mitigation schemes tailored to their specific constraints and requirements. In this work, we present a comprehensive review of emerging threats and countermeasures, with a particular focus on...

10.3390/fi16030085 article EN cc-by Future Internet 2024-02-29

Path tracking is one of the most important aspects of autonomous vehicles. Current research focuses on designing path-tracking controllers that take into account yaw stability and the nonholonomic constraints of the vehicle. In most cases, the lateral controller design relies on identifying a path reference point, with the shortest distance to the vehicle giving the reference state. That restricts the controller's ability to handle sudden changes in the trajectory heading angle. The present article proposes a new approach that imitates human behavior while...

10.1177/1729881420974852 article EN cc-by International Journal of Advanced Robotic Systems 2020-11-01
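For reference, the classic lookahead-based alternative to nearest-point tracking is pure pursuit; the sketch below is that textbook baseline, not the human-imitating controller the article proposes.

```python
import math

def pure_pursuit_steering(pose, lookahead_point, wheelbase):
    """Pure-pursuit lateral control: steer toward a point some distance
    ahead on the path rather than the nearest point.

    pose = (x, y, heading) in world coordinates.
    """
    x, y, heading = pose
    dx, dy = lookahead_point[0] - x, lookahead_point[1] - y
    # Angle between the vehicle heading and the line to the lookahead point.
    alpha = math.atan2(dy, dx) - heading
    ld = math.hypot(dx, dy)
    # Bicycle-model steering angle for an arc through the lookahead point.
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)
```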

Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce the communication cost of distributed data-parallel training of deep neural networks. However, there exists a discrepancy between theory and practice: while the theoretical analysis of most existing compression methods assumes compression is applied to the gradients of the entire model, many practical implementations operate individually on the gradients of each layer of the model. In this paper, we prove that layer-wise compression is, in theory, better,...

10.1609/aaai.v34i04.5793 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2020-04-03
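The distinction at issue can be made concrete. A hedged sketch, with function names of my own, of entire-model versus layer-wise Top-k sparsification:

```python
import numpy as np

def topk_sparsify(grad, ratio):
    """Keep only the largest-magnitude entries of a gradient tensor."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(grad.shape)

def compress_whole_model(grads, ratio):
    # Entire-model compression: one global threshold across all layers.
    joined = np.concatenate([g.ravel() for g in grads])
    return topk_sparsify(joined, ratio)

def compress_layerwise(grads, ratio):
    # Layer-wise compression: each layer keeps its own top fraction.
    return [topk_sparsify(g, ratio) for g in grads]
```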

Powerful computer clusters are used nowadays to train complex deep neural networks (DNNs) on large datasets. Distributed training is increasingly becoming communication bound. For this reason, many lossy compression techniques have been proposed to reduce the volume of transferred data. Unfortunately, it is difficult to argue about the behavior of these methods, because existing work relies on inconsistent evaluation testbeds and largely ignores the performance impact of practical system configurations. In this paper, we present...

10.1109/icdcs51616.2021.00060 article EN 2021-07-01
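One way such an evaluation testbed achieves consistency is by putting every method behind a common compress/decompress interface. The sketch below is a generic illustration of that idea, not the framework's actual API:

```python
import numpy as np

class Compressor:
    """Minimal interface a benchmarking harness might standardize on,
    so different methods are evaluated on one consistent testbed."""
    def compress(self, tensor):
        raise NotImplementedError
    def decompress(self, payload):
        raise NotImplementedError

class RandomKCompressor(Compressor):
    """Example method: transmit a random subset of gradient entries."""
    def __init__(self, ratio):
        self.ratio = ratio
    def compress(self, tensor):
        flat = tensor.ravel()
        k = max(1, int(flat.size * self.ratio))
        idx = np.random.choice(flat.size, k, replace=False)
        return idx, flat[idx], tensor.shape
    def decompress(self, payload):
        idx, values, shape = payload
        out = np.zeros(int(np.prod(shape)))
        out[idx] = values
        return out.reshape(shape)
```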

Federated learning (FL) is increasingly becoming the norm for training models over distributed and private datasets. Major service providers rely on FL to improve services such as text auto-completion, virtual keyboards, and item recommendations. Nonetheless, training with FL in practice requires a significant amount of time (days or even weeks) because FL tasks execute in highly heterogeneous environments where devices only have widespread yet limited computing capabilities and network connectivity conditions.

10.1145/3437984.3458839 article EN 2021-04-25

Companies across the globe are keen on targeting potential high-value customers in an attempt to expand revenue, and this can be achieved only by understanding their customers better. Customer Lifetime Value (CLV) is the total monetary value of transactions or purchases made by a customer with a business over an intended period of time, used as a means to estimate future customer interactions. CLV finds application in a number of distinct domains, such as banking, insurance, online entertainment, gaming, and e-commerce. The existing distribution-based...

10.1016/j.ject.2023.09.001 article EN cc-by Journal of Economy and Technology 2023-09-22
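The CLV definition above reduces to simple arithmetic; the numbers and the projection step below are illustrative assumptions, not figures from the paper.

```python
# CLV as defined above: total monetary value of a customer's purchases
# over a chosen horizon.
purchases = [120.0, 85.5, 42.0, 230.0]   # transactions in the period
clv = sum(purchases)
print(clv)  # 477.5

# A common simple extension (an assumption, not the paper's model):
# project average spend forward over an expected customer lifetime.
avg_order = clv / len(purchases)
orders_per_year, expected_years = 6, 3
projected_clv = avg_order * orders_per_year * expected_years
```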

Data management applications are growing and require more attention, especially in the "big data" era. Thus, supporting such applications with novel, efficient algorithms that achieve higher performance is critical. Array database systems are one way to support these applications by dealing with data represented as n-dimensional structures. For instance, software like SciDB and RasDaMan can be powerful tools for what is required on large-scale problems with multidimensional data. Like their relational counterparts, they offer specific array query languages as...

10.5120/ijca2024923879 preprint EN arXiv (Cornell University) 2025-02-01
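To make the array-query idea concrete, here is a procedural numpy equivalent of the kind of windowed aggregation an array query language would express declaratively; the data shape and window size are invented for illustration.

```python
import numpy as np

# Array databases operate on n-dimensional data directly. The query
# "average over a 2x2 spatial window of a 3-D sensor cube" looks like
# this when written procedurally:
cube = np.random.rand(10, 100, 100)          # time x lat x lon
window_avg = cube.reshape(10, 50, 2, 50, 2).mean(axis=(2, 4))
print(window_avg.shape)                      # (10, 50, 50)
```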

Abstract Aims: Our aim is to describe the clinical characteristics and management of patients hospitalized with acute heart failure (HHF) and ambulatory patients with chronic heart failure (CHF) in Egypt, and to compare them with heart failure (HF) patients from other countries in the European Society of Cardiology Heart Failure (ESC-HF) registry. Methods and results: The ESC-HF Long-term Registry is a prospective, multi-centre, observational study of patients presenting to cardiology centres in member countries of the ESC. From April 2011 to February 2014, a total of 2145 HF patients were recruited from 20 centres all over Egypt. Of these...

10.1002/ehf2.12046 article EN cc-by-nc-nd ESC Heart Failure 2015-07-08

Federated learning (FL) is becoming a popular paradigm for collaborative learning over distributed, private datasets owned by non-trusting entities. FL has seen successful deployment in production environments, and it has been adopted by services such as virtual keyboards, auto-completion, item recommendation, and several IoT applications. However, FL comes with the challenge of performing training over largely heterogeneous datasets, devices, and networks that are out of the control of the centralized server. Motivated by this inherent...

10.1145/3517207.3526969 article EN 2022-03-29

Due to the partition/aggregate nature of many distributed cloud-based applications, incast traffic carried by TCP abounds in data center networks. TCP, being agnostic to such applications' traffic patterns and their delay-sensitivity, cannot cope with the resulting congestion events, leading to severe performance degradation. The co-existence of other throughput-demanding elastic flows in the network worsens the degradation further. In this paper, relying on the programmability of Software-Defined Networks (SDN), we address...

10.1109/icc.2017.7996826 article EN 2017-05-01

The recent many-fold increase in the size of deep neural networks makes efficient distributed training challenging. Many proposals exploit the compressibility of gradients and propose lossy compression techniques to speed up the communication stage of distributed training. Nevertheless, compression comes at the cost of reduced model quality and extra computation overhead. In this work, we design an efficient compressor with minimal overhead. Noting the sparsity of gradients, we model them as random variables distributed according to some sparsity-inducing distributions (SIDs). We empirically...

10.48550/arxiv.2101.10761 preprint EN other-oa arXiv (Cornell University) 2021-01-01
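The core trick in this line of work is to estimate a sparsification threshold from a fitted distribution instead of sorting for an exact Top-k. A sketch assuming an exponential fit for gradient magnitudes (one plausible sparsity-inducing distribution; the function names and this particular closed form are my simplification, not necessarily the paper's estimator):

```python
import numpy as np

def estimate_threshold(grad, ratio):
    """Estimate the magnitude threshold that keeps roughly a 'ratio'
    fraction of entries, assuming |grad| ~ Exp(lambda).

    For Exp(lambda), P(|g| > t) = exp(-lambda * t), so keeping a
    fraction r gives t = -ln(r) / lambda, with lambda = 1 / mean(|g|).
    """
    mags = np.abs(grad.ravel())
    lam = 1.0 / mags.mean()
    return -np.log(ratio) / lam

def sparsify_by_threshold(grad, ratio):
    t = estimate_threshold(grad, ratio)
    mask = np.abs(grad) > t          # O(n) scan, no sort needed
    return grad * mask
```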

Federated learning (FL) is a newly emerged branch of AI that facilitates edge devices in collaboratively training a global machine learning model without centralizing data, with privacy by default. However, despite this remarkable advancement, the paradigm comes with various challenges. Specifically, in large-scale deployments, client heterogeneity is the norm, which impacts training quality in terms of accuracy, fairness, and time. Moreover, energy consumption across these battery-constrained devices is a largely unexplored limitation...

10.1145/3556557.3557952 preprint EN 2022-10-17

Cloud interactive data-driven applications generate swarms of small TCP flows that compete for the switch buffer space in the data center. Such applications require a small flow completion time (FCT) to be effective. Unfortunately, TCP is myopic with respect to the composite nature of application data. In addition, it tends to artificially inflate the FCT of individual flows by several orders of magnitude, because its Internet-centric design fixes the retransmission timeout (RTO) at a minimum of at least hundreds of milliseconds. To better understand this problem,...

10.1109/tnet.2021.3059913 article EN IEEE/ACM Transactions on Networking 2021-03-08
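The orders-of-magnitude claim follows from simple arithmetic; the RTT and RTO values below are typical illustrative figures, not measurements from the paper.

```python
# Back-of-the-envelope view of the FCT inflation described above.
rtt = 100e-6            # ~100 microseconds inside a data center
min_rto = 200e-3        # classic TCP minimum RTO: 200 milliseconds
ideal_fct = 3 * rtt     # a small flow finishes in a few RTTs
fct_with_one_timeout = ideal_fct + min_rto
print(fct_with_one_timeout / ideal_fct)  # ~667x inflation from one drop
```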

Distributed training performs data-parallel training of DNN models, which is a necessity for increasingly complex models and large datasets. Recent works have identified major communication bottlenecks in distributed training. These works seek possible opportunities to speed up the systems supporting ML workloads. As a data-reduction mechanism, compression techniques have been proposed to speed up the communication phase. However, compression comes at the cost of reduced model accuracy, especially when applied arbitrarily. Instead, we advocate a more controlled use of compression and propose...

10.1109/infocom42981.2021.9488810 article EN IEEE INFOCOM 2022 - IEEE Conference on Computer Communications 2021-05-10
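"Controlled" use of compression can be read as: compress only as much as current network conditions require. A hypothetical sketch of that policy (not the paper's algorithm; compression overheads and accuracy effects are ignored here):

```python
def pick_compression_ratio(grad_bytes, bandwidth_bps, time_budget_s,
                           ratios=(1.0, 0.1, 0.01, 0.001)):
    """Choose the mildest compression that still fits the time budget.

    Scans from no compression (ratio 1.0) to the most aggressive option
    and returns the first ratio whose transfer time fits the budget.
    """
    for r in ratios:
        transfer_time = (grad_bytes * r * 8) / bandwidth_bps
        if transfer_time <= time_budget_s:
            return r
    return ratios[-1]
```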

Federated Learning (FL) has emerged as a powerful approach that enables collaborative distributed model training without the need for data sharing. However, FL grapples with inherent heterogeneity challenges, leading to issues such as stragglers, dropouts, and performance variations. The selection of clients to run an FL instance is crucial, but existing strategies introduce biases in participation and do not consider resource efficiency. Communication acceleration solutions proposed to increase client participation also fall...

10.1145/3627703.3650081 article EN 2024-04-18