Thaha Mohammed

ORCID: 0000-0002-4767-4147
Research Areas
  • IoT and Edge/Fog Computing
  • Parallel Computing and Optimization Techniques
  • Distributed and Parallel Computing Systems
  • Brain Tumor Detection and Classification
  • Privacy-Preserving Technologies in Data
  • Matrix Theory and Algorithms
  • Caching and Content Delivery
  • Stochastic Gradient Optimization Techniques
  • Mobile Crowdsensing and Crowdsourcing
  • Advanced Neural Network Applications
  • Energy Efficient Wireless Sensor Networks
  • Advanced Data Storage Technologies
  • Advanced Wireless Communication Technologies
  • Modular Robots and Swarm Intelligence
  • Software-Defined Networks and 5G
  • Tensor Decomposition and Applications
  • Robotics and Automated Systems

Aalto University
2020-2022

Deep neural networks (DNNs) are the de-facto solution behind many intelligent applications of today, ranging from machine translation to autonomous driving. DNNs are accurate but resource-intensive, especially for embedded devices such as mobile phones and smart objects in the Internet of Things. To overcome the related resource constraints, DNN inference is generally offloaded to the edge or the cloud. This is accomplished by partitioning the DNN and distributing the computations at the two different ends. However, most existing solutions...

10.1109/infocom41043.2020.9155237 article EN IEEE INFOCOM 2020 - IEEE Conference on Computer Communications 2020-07-01
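The layer-wise partitioning idea behind this line of work can be illustrated with a minimal NumPy sketch. The four-layer MLP, its random weights, and the chosen split point are hypothetical placeholders, not the model or method from the paper; the sketch only shows that running the first layers on the device and the rest at the edge reproduces monolithic inference:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-layer MLP; weights are random placeholders.
layers = [rng.standard_normal((16, 32)), rng.standard_normal((32, 32)),
          rng.standard_normal((32, 32)), rng.standard_normal((32, 8))]

def relu(x):
    return np.maximum(x, 0.0)

def run_layers(x, ws):
    for w in ws:
        x = relu(x @ w)
    return x

def device_part(x, split):
    # Run the first `split` layers locally; the intermediate activation
    # is the only data that crosses the network to the edge server.
    return run_layers(x, layers[:split])

def edge_part(activation, split):
    # The edge (or cloud) completes the remaining layers.
    return run_layers(activation, layers[split:])

x = rng.standard_normal(16)
split = 2
out = edge_part(device_part(x, split), split)
full = run_layers(x, layers)
assert np.allclose(out, full)  # partitioned == monolithic inference
```

The interesting engineering question, which the paper studies, is where to place the split so that device compute, transfer, and edge compute balance out.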

5G networks and the Internet of Things (IoT) offer a powerful platform for ubiquitous environments with their sensing capabilities, high speeds, and other benefits. The data, analytics, and computations need to be optimally moved and placed in these environments, dynamically, such that energy-efficiency and QoS demands are best satisfied. A particular challenge in this context is to preserve privacy and security while delivering quality of service (QoS) and energy-efficiency. Many works have tried to address these challenges but without a focus on...

10.3390/app10207120 article EN cc-by Applied Sciences 2020-10-13

Inference carried out on pretrained deep neural networks (DNNs) is particularly effective as it does not require retraining and entails no loss in accuracy. Unfortunately, resource-constrained devices such as those in the Internet of Things may need to offload the related computation to more powerful servers, particularly at the network edge. However, edge servers have limited resources compared to the cloud; therefore, inference offloading generally requires dividing the original DNN into different pieces that are...

10.1109/jiot.2022.3205410 article EN IEEE Internet of Things Journal 2022-09-09
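Choosing where to cut the DNN can be framed as minimizing a simple latency model. The sketch below is a toy formulation under assumed inputs (per-layer activation sizes, per-layer device and edge times, and link bandwidth are all made-up profiling numbers), not the optimization from the paper:

```python
def best_split(layer_out_bytes, device_ms, edge_ms, bandwidth_bps, input_bytes):
    """Pick the layer index at which to cut the DNN.

    Toy cost model: on-device time for the first `split` layers, plus the
    time to ship the activation over the link, plus edge time for the rest.
    """
    n = len(device_ms)
    best, best_cost = 0, float("inf")
    for split in range(n + 1):  # split == 0 means offload everything
        tx = input_bytes if split == 0 else layer_out_bytes[split - 1]
        cost = (sum(device_ms[:split])
                + tx * 8 / bandwidth_bps * 1000.0   # transfer time in ms
                + sum(edge_ms[split:]))
        if cost < best_cost:
            best, best_cost = split, cost
    return best, best_cost

# Hypothetical 3-layer profile: activations of 400/100/800 bytes,
# 1 ms per layer on-device, 0.5 ms per layer at the edge, 1 Mbit/s link.
split, cost = best_split([400, 100, 800], [1.0] * 3, [0.5] * 3, 1e6, 1600)
```

With these numbers the smallest activation (100 bytes, after layer 2) wins, illustrating why split points tend to sit at "bottleneck" layers.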

Graphics processing units (GPUs) have delivered remarkable performance for a variety of high-performance computing (HPC) applications through massive parallelism. One such application is sparse matrix-vector multiplication (SpMV), which is central to many scientific, engineering, and other applications, including machine learning. No single SpMV storage or computation scheme performs consistently well across all matrices due to their varying sparsity patterns. An extensive literature review reveals that the techniques on GPUs...

10.3390/electronics9101675 article EN Electronics 2020-10-13
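Compressed sparse row (CSR) is one of the storage schemes such surveys compare. As a point of reference, here is a minimal sequential Python version of the CSR SpMV kernel (the 3x3 matrix is purely illustrative; GPU implementations parallelize the row loop):

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    # y[i] = sum of values[k] * x[col_idx[k]] over row i's slice of `values`.
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        s, e = row_ptr[i], row_ptr[i + 1]
        y[i] = values[s:e] @ x[col_idx[s:e]]
    return y

# 3x3 example matrix:  [[4, 0, 1],
#                       [0, 3, 0],
#                       [2, 0, 5]]
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])  # nonzeros, row by row
col_idx = np.array([0, 2, 1, 0, 2])            # column of each nonzero
row_ptr = np.array([0, 2, 3, 5])               # start offset of each row
x = np.array([1.0, 1.0, 1.0])
print(spmv_csr(values, col_idx, row_ptr, x))   # [5. 3. 7.]
```

The sparsity-pattern sensitivity the abstract mentions shows up here as load imbalance: rows with very different nonzero counts map poorly onto fixed-width GPU thread groups, which is why alternative formats (ELL, COO, hybrids) exist.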

Future wireless networks should meet the heterogeneous service requirements of diverse applications, including interactive multimedia, augmented reality, and autonomous driving. The fog radio access network (Fog-RAN) is a novel architecture that enables efficient and flexible allocation of resources to end users. However, guaranteeing application-specific requirements while maximizing resource utilization remains an open challenge in Fog-RANs. This article proposes a multiresource Fog-RAN slicing scheme that maximizes resource utilization and satisfies...

10.1109/jiot.2022.3192291 article EN IEEE Internet of Things Journal 2022-07-19

Today's deep neural networks (DNNs) are very accurate when trained on a large amount of data. However, suitable input data might not be available or may require extensive data collection. Data sharing is one option to address these issues, but it is generally impractical because of privacy concerns and the problematic process of finding an agreement. Instead, this work considers knowledge sharing by first exchanging the weights of pretrained DNNs and then applying transfer learning (TL). Specifically, it addresses the economics...

10.1109/jiot.2022.3206585 article EN IEEE Internet of Things Journal 2022-09-14
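The exchange-weights-then-transfer idea can be sketched in a few lines: a feature extractor whose weights were "received" from a peer stays frozen, and only a small local head is retrained. Everything below (shapes, synthetic data, hyperparameters) is a made-up toy, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Weights "received" from a peer's pretrained DNN (random placeholders here).
W_shared = rng.standard_normal((8, 16)) * 0.1   # frozen feature extractor
W_head = rng.standard_normal((16, 1)) * 0.1     # head, retrained locally

def features(x):
    return np.tanh(x @ W_shared)  # frozen: never updated below

# Tiny synthetic local dataset: label depends on the first input feature.
X = rng.standard_normal((64, 8))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)

lr = 0.5
for _ in range(200):
    h = features(X)
    pred = 1.0 / (1.0 + np.exp(-(h @ W_head)))   # sigmoid head
    grad = h.T @ (pred - y) / len(X)             # logistic-loss gradient
    W_head -= lr * grad                          # only the head is updated

acc = ((1.0 / (1.0 + np.exp(-(features(X) @ W_head))) > 0.5) == y).mean()
```

Only `W_head` ever sees the local data, which captures why weight exchange plus TL sidesteps raw data sharing: the sensitive samples never leave the device.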

Artificial intelligence (AI) is among the most influential technologies to improve daily lives and promote further economic activities. Recently, a distributed intelligence, referred to as the global brain, has been developed to optimize mobile services and their respective delivery networks. Inspired by the interconnected neuron clusters in the human nervous system, it is an architecture interconnecting various AI entities. This paper models global brain communication and its components based on multi-agent system technology...

10.1109/msn50589.2020.00021 article EN 2020 16th International Conference on Mobility, Sensing and Networking (MSN) 2020-12-01

Iterative solutions of sparse linear systems and sparse eigenvalue problems have a fundamental role in vital fields of scientific research and engineering. The crucial computing kernel for such iterative solutions is the multiplication of a sparse matrix by a dense vector. Efficient implementations of sparse matrix-vector multiplication (SpMV) solvers are therefore essential and have been subjected to extensive research across a variety of architectures and accelerators such as central processing units (CPUs), graphical processing units (GPUs), many integrated cores (MICs), and field programmable gate...

10.48550/arxiv.2212.07490 preprint EN cc-by arXiv (Cornell University) 2022-01-01
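To see why SpMV is the crucial kernel, consider the conjugate gradient (CG) method: each iteration performs exactly one matrix-vector product, so the solver's runtime is dominated by that kernel. A textbook CG sketch (the dense `spmv` is a stand-in for an optimized sparse kernel, and the 2x2 SPD test system is illustrative):

```python
import numpy as np

def spmv(A, x):
    # Stand-in for an optimized sparse kernel; dense here for brevity.
    return A @ x

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    # One spmv per iteration dominates the cost, which is why SpMV
    # performance drives the performance of the whole iterative solver.
    x = np.zeros_like(b)
    r = b - spmv(A, x)          # initial residual
    p = r.copy()                # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = spmv(A, p)
        alpha = rs / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x

# Small symmetric positive-definite test system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Swapping the dense `spmv` for a CSR kernel (or a GPU implementation) changes nothing else in the solver, which is why SpMV formats can be studied and tuned in isolation.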