Iram Arshad

ORCID: 0000-0003-0755-5896
Research Areas
  • Advanced Malware Detection Techniques
  • Adversarial Robustness in Machine Learning
  • Online Learning and Analytics
  • Software System Performance and Reliability
  • Software Testing and Debugging Techniques
  • Anomaly Detection Techniques and Applications
  • Biometric Identification and Security
  • Digital Media Forensic Detection
  • Radiation Effects in Electronics
  • Image Retrieval and Classification Techniques
  • Environmental Education and Sustainability
  • Network Security and Intrusion Detection
  • Manufacturing Process and Optimization
  • Industrial Vision Systems and Defect Detection
  • Digital Transformation in Industry

Shannon Applied Biotechnology Centre
2023-2024

Technological University of the Shannon: Midlands Midwest
2023

Athlone Institute of Technology
2021-2022

University of Engineering and Technology Taxila
2015-2016

Deep learning techniques have been widely adopted for cyber defence applications such as malware detection and anomaly detection. The ever-changing nature of threats has made this a constantly evolving field. Smart manufacturing is critical to the broader thrust towards Industry 4.0/5.0, and developing advanced technologies is an enabling requirement for the paradigm shift to smart manufacturing, while cyber-attacks significantly threaten manufacturing. For example, a training-time attack (e.g., a backdoor) occurs during the model's...

10.1109/access.2023.3306333 article EN cc-by-nc-nd IEEE Access 2023-01-01
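
The abstract above refers to backdoor attacks injected during model training. As a minimal illustrative sketch only (not the paper's specific method), assuming grayscale images stored as uint8 NumPy arrays, a classic patch-trigger poisoning step looks roughly like this; the function name and parameters are hypothetical:

```python
import numpy as np

def poison_with_patch(images, labels, target_label, poison_frac=0.05,
                      patch_size=3, seed=0):
    """Stamp a small white patch into a random subset of images and
    relabel them to the attacker's target class (BadNets-style sketch)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:] = 255   # bottom-right trigger
    labels[idx] = target_label                      # attacker's chosen class
    return images, labels, idx

# Stand-in data shaped like 28x28 grayscale images.
X = np.random.randint(0, 256, size=(1000, 28, 28), dtype=np.uint8)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned, poisoned_idx = poison_with_patch(X, y, target_label=7)
print(f"poisoned {len(poisoned_idx)} of {len(X)} samples")
```

A model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the patch appears, which is what makes such attacks hard to notice from accuracy alone.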

Deep Learning (DL) models deliver superior performance and have achieved remarkable results for classification and vision tasks. However, recent research focuses on exploring the weaknesses of these Deep Neural Networks (DNNs), as they can be vulnerable due to transfer learning and outsourced training data. This paper investigates the feasibility of generating a stealthy, invisible backdoor attack during the training phase of deep models. For developing the poisoned dataset, an interpolation technique is used to corrupt the sub-feature space...

10.1109/icce56470.2023.10043484 article EN 2023 IEEE International Conference on Consumer Electronics (ICCE) 2023-01-06
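
The interpolation-based construction is truncated in the abstract, so the following is only a generic sketch of how an alpha-interpolated (blended) trigger can be made near-invisible; it is not claimed to be the paper's exact technique, and all names here are illustrative:

```python
import numpy as np

def blend_trigger(image, trigger, alpha=0.04):
    """Linearly interpolate a trigger image into a clean image;
    a small alpha keeps the perturbation near-invisible to humans."""
    blended = (1.0 - alpha) * image.astype(np.float32) \
              + alpha * trigger.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Stand-in data: one clean RGB image and a random trigger pattern.
clean = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
trigger = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
poisoned = blend_trigger(clean, trigger)
# Mean absolute pixel change stays small, which is what makes it stealthy.
print(np.abs(poisoned.astype(int) - clean.astype(int)).mean())
```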

Deep neural networks are susceptible to various backdoor attacks, such as training-time attacks, where the attacker can inject a trigger pattern into a small portion of the dataset to control the model's predictions at runtime. Backdoor attacks are dangerous because they do not degrade the model's performance. This paper explores the feasibility of a new type of attack, a data-free backdoor. Unlike traditional attacks that require poisoning the training data and...

10.1109/tai.2024.3384938 article EN IEEE Transactions on Artificial Intelligence 2024-04-09
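
The abstract points out that backdoors are dangerous precisely because clean-data performance is preserved. Below is a hedged sketch of the two standard measurements, clean accuracy and attack success rate, assuming a generic scikit-learn-style model.predict interface and a hypothetical apply_trigger function such as the ones sketched above:

```python
import numpy as np

def clean_accuracy(model, X, y):
    """Fraction of unmodified test samples classified correctly."""
    return float(np.mean(model.predict(X) == y))

def attack_success_rate(model, X, y, apply_trigger, target_label):
    """Fraction of non-target-class test samples that the trigger
    flips into the attacker's target class."""
    mask = y != target_label          # target-class samples succeed trivially
    X_triggered = np.stack([apply_trigger(x) for x in X[mask]])
    return float(np.mean(model.predict(X_triggered) == target_label))
```

A successful backdoor shows near-baseline clean accuracy alongside a high attack success rate, which is why evaluating only the first metric can miss the compromise.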

Big Data is reforming many industrial domains by providing decision support through analyzing large data volumes. Big Data testing aims to ensure that Big Data systems run smoothly and error-free while maintaining the performance and quality of data. However, because of the diversity and complexity of the data, testing Big Data is challenging. Though numerous research efforts deal with Big Data testing, a comprehensive review addressing the testing techniques and challenges is not yet available. Therefore, we have systematically reviewed the Big Data testing techniques' evidence occurring in the period...

10.32604/cmc.2023.030266 article EN Computers, Materials & Continua 2022-10-31

Deep learning algorithms outperform machine learning techniques in various fields and are widely deployed for recognition and classification tasks. However, recent research focuses on exploring these deep models' weaknesses, as they can be vulnerable due to outsourced training data and transfer learning. This paper proposes a rudimentary, stealthy Pixel-space based Backdoor attack (Pixdoor) during the training phase of deep models. For generating the poisoned dataset, a bit-inversion technique is used for injecting errors in the pixel bits...

10.23919/eusipco54536.2021.9616118 article EN 2021 29th European Signal Processing Conference (EUSIPCO) 2021-08-23
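
The abstract names a bit-inversion technique for injecting errors in pixel bits. The exact bit positions and sample-selection strategy are truncated, so this is only an illustrative sketch of flipping one bit plane of uint8 pixels with XOR:

```python
import numpy as np

def invert_pixel_bit(images, bit=0):
    """Flip one bit plane of uint8 pixel values via XOR: bit=0 flips the
    least-significant bit (a near-invisible +/-1 change), while higher
    bit positions produce stronger, more visible corruption."""
    return images ^ np.uint8(1 << bit)

X = np.random.randint(0, 256, size=(4, 28, 28), dtype=np.uint8)
X_poisoned = invert_pixel_bit(X, bit=0)
print(np.abs(X_poisoned.astype(int) - X.astype(int)).max())  # 1 for the LSB
```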

This paper introduces a novel, cost-effective method for enhancing human-robot collaboration in Industry 4.0 manufacturing using the OpenCV AI Kit-Lite (OAK-D-Lite). Our vision-based system employs deep learning models for real-time human detection and dynamic adjustment of robot operations, ensuring safety without needing costly Lidar sensors. We further present an autonomous system utilizing You Only Look Once (YOLO) v5 that accurately identifies, classifies, and handles workpieces, effectively reducing...

10.1109/esmarta59349.2023.10293744 article EN 2023-10-10
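
For context on the detection pipeline described above, here is a minimal sketch of real-time person detection with YOLOv5 via the public ultralytics/yolov5 torch.hub entry point. A webcam through OpenCV stands in for the OAK-D-Lite stream (capture via the depthai SDK is omitted), and the robot-side safety hook is hypothetical:

```python
import cv2
import torch

# YOLOv5-small from the public ultralytics hub (downloads weights on first use).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.5  # confidence threshold

def human_in_frame(frame) -> bool:
    """Return True if YOLOv5 detects at least one person (COCO class 0)."""
    results = model(frame[..., ::-1])   # OpenCV gives BGR; the model expects RGB
    detections = results.xyxy[0]        # rows of [x1, y1, x2, y2, conf, class]
    return bool((detections[:, 5] == 0).any())

cap = cv2.VideoCapture(0)  # webcam stand-in for the OAK-D-Lite stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if human_in_frame(frame):
        # Hypothetical hook: signal the robot controller to slow or pause.
        print("person detected")
cap.release()
```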

A high level of transparency in reported research is critical for several reasons, such as ensuring an acceptable level of trustworthiness and enabling replication. Transparency in qualitative research permits the identification of the specific circumstances which are associated with findings and observations. Thus, it is important for the repeatability of original studies and explorations and for the transferability of findings. There has been no investigation into levels of transparency in technology education research to date. With a position that increasing transparency would be beneficial,...

10.35542/osf.io/p9ubt preprint EN 2021-08-10

Transparency in the reporting of empirical studies is foundational to a credible knowledge base. Higher levels of transparency, in addition to clarity of writing, also make research more accessible to a diverse readership. Previous research reviewed how transparently reported qualitative, interview-based studies were in contemporary technology education (Buckley, Adams, et al., 2021). The results illustrated that no article was fully transparent and that authors tended to be less transparent in some areas, such as the management of power imbalances...

10.31235/osf.io/2p6wr preprint EN 2023-10-27

Big Data is reforming many industrial domains by providing decision support through analyzing large data volumes. Big Data testing aims to ensure that Big Data systems run smoothly and error-free while maintaining the performance and quality of data. However, because of the diversity and complexity of the data, testing Big Data is challenging. Though numerous research efforts deal with Big Data testing, a comprehensive review addressing the testing techniques and challenges is not yet available. Therefore, we have systematically reviewed the evidence occurring in the period 2010-2021....

10.48550/arxiv.2111.02853 preprint EN cc-by-nc-nd arXiv (Cornell University) 2021-01-01