Zhan-Lun Chang

ORCID: 0000-0002-1226-966X
Research Areas
  • Privacy-Preserving Technologies in Data
  • Wireless Networks and Protocols
  • IoT and Edge/Fog Computing
  • Blockchain Technology Applications and Security
  • Network Security and Intrusion Detection
  • Information and Cyber Security
  • Age of Information Optimization
  • Cooperative Communication and Network Coding
  • Cryptography and Data Security
  • Opportunistic and Delay-Tolerant Networks
  • Stochastic Gradient Optimization Techniques

Purdue University West Lafayette
2024

National Taiwan University
2019-2021

Research Center for Information Technology Innovation, Academia Sinica
2021

Federated learning (FL) has emerged as a key technique for distributed machine learning (ML). Most literature on FL has focused on ML model training with (i) a single task/model, (ii) a synchronous parameter-update scheme, and (iii) a static data distribution across devices, which is often not realistic in practical wireless environments. To address this, we develop DMA-FL, which considers dynamic training of multiple downstream tasks/models over an asynchronous update architecture. We first characterize convergence...

10.1109/tccn.2024.3391329 article EN IEEE Transactions on Cognitive Communications and Networking 2024-04-19
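
As an illustration of the asynchronous, multi-model setting sketched in this abstract, the snippet below simulates clients reporting staleness-weighted updates for two task-specific models with freshly drawn local data on each arrival. The names and rules here (local_update, staleness_weight, the mixing coefficient) are assumptions for illustration, not the DMA-FL algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
num_tasks, dim = 2, 10

# One global linear model per downstream task.
global_models = [np.zeros(dim) for _ in range(num_tasks)]

def local_update(model, X, y, lr=0.1, steps=5):
    """A few gradient steps on a least-squares loss; stands in for local training."""
    w = model.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def staleness_weight(staleness, alpha=0.5):
    """Down-weight updates computed on an older model version (assumed rule)."""
    return 1.0 / (1.0 + alpha * staleness)

# Clients report back asynchronously; local data is redrawn on every arrival
# (dynamic data distribution across devices).
for step in range(60):
    task = rng.integers(num_tasks)
    X = rng.normal(size=(20, dim))
    y = X @ (np.ones(dim) * (task + 1)) + rng.normal(scale=0.1, size=20)
    w_local = local_update(global_models[task], X, y)
    staleness = rng.integers(0, 3)   # simulated lag between model download and report-back
    beta = staleness_weight(staleness)
    # Staleness-weighted mixing into the per-task global model
    # (for simplicity the local update above still starts from the latest model).
    global_models[task] = (1 - beta) * global_models[task] + beta * w_local

print([np.round(m[:3], 2) for m in global_models])
```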

Mobile Edge Computing (MEC) is a promising paradigm to ease the computation burden of mobile devices by leveraging computing capabilities at the network edge. With growing needs for resource provisioning from comparatively limited edge servers, an admission control mechanism that charges a flat-rate price and is resistant to self-interested manipulations is proposed to both utilize the scarce resources to the fullest and guarantee end-to-end latency for served devices. Besides, to achieve energy sustainability and maximize profit, servers can lessen...

10.1109/globecom38437.2019.9014203 article EN 2019 IEEE Global Communications Conference (GLOBECOM) 2019-12-01
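
The toy sketch below illustrates the kind of flat-rate admission control described in this abstract: devices are admitted only while an assumed M/M/1 delay model keeps the latency guarantee, and every admitted device pays the same price. The constants (SERVICE_RATE, LATENCY_BOUND, FLAT_PRICE) and the greedy admission rule are hypothetical, not the mechanism from the paper.

```python
from dataclasses import dataclass

@dataclass
class Request:
    device_id: int
    arrival_rate: float   # jobs per second offered by the device

SERVICE_RATE = 100.0      # edge server capacity, jobs/s (assumed)
LATENCY_BOUND = 0.05      # end-to-end latency guarantee in seconds (assumed)
FLAT_PRICE = 2.0          # flat-rate admission price (assumed)

def expected_delay(total_rate: float, service_rate: float) -> float:
    """Mean sojourn time of an M/M/1 queue (stand-in delay model)."""
    if total_rate >= service_rate:
        return float("inf")
    return 1.0 / (service_rate - total_rate)

def admit(requests):
    """Admit devices greedily while the latency guarantee still holds."""
    admitted, load, revenue = [], 0.0, 0.0
    for req in sorted(requests, key=lambda r: r.arrival_rate):
        if expected_delay(load + req.arrival_rate, SERVICE_RATE) <= LATENCY_BOUND:
            admitted.append(req.device_id)
            load += req.arrival_rate
            revenue += FLAT_PRICE   # same price for everyone: nothing to gain by misreporting
    return admitted, revenue

reqs = [Request(i, rate) for i, rate in enumerate([10, 25, 40, 5, 30])]
print(admit(reqs))
```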

Mobile Edge Computing (MEC) is a promising paradigm to ease the computation burden of Internet-of-Things (IoT) devices by leveraging computing capabilities at the network edge. With growing needs for resource provisioning from IoT devices, queueing delay at edge nodes not only poses a colossal impediment to achieving satisfactory quality of experience (QoE) but also erodes the benefits owing to escalating energy expenditure. Moreover, since service providers may differ, computationally competent entities' services should...

10.1109/twc.2021.3071722 article EN IEEE Transactions on Wireless Communications 2021-04-15
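
To make the delay/energy tension in this abstract concrete, the sketch below scores offered IoT load using an assumed M/M/1 queueing delay (a QoE proxy) against energy expenditure and picks the load with the best net benefit. The cost model and all constants are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

SERVICE_RATE = 200.0     # edge node capacity, jobs/s (assumed)
ENERGY_PER_JOB = 0.5     # energy cost per processed job (assumed)
DELAY_WEIGHT = 500.0     # relative weight of queueing delay in the cost (assumed)
VALUE_PER_JOB = 1.0      # QoE value of each served job (assumed)

def queueing_delay(load):
    """M/M/1 mean sojourn time as a stand-in for delay at the edge node."""
    return np.inf if load >= SERVICE_RATE else 1.0 / (SERVICE_RATE - load)

def net_benefit(load):
    """Value of served IoT traffic minus delay (QoE) and energy costs."""
    return (VALUE_PER_JOB * load
            - DELAY_WEIGHT * queueing_delay(load)
            - ENERGY_PER_JOB * load)

loads = np.linspace(10, 190, 10)
best = max(loads, key=net_benefit)
print(f"offered load with the best delay/energy trade-off: {best:.0f} jobs/s")
```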

While most existing federated learning (FL) approaches assume a fixed set of clients in the system, in practice clients can dynamically leave or join the system depending on their needs or interest in a specific task. This dynamic FL setting introduces several key challenges: (1) the objective function changes with the current set of clients, unlike traditional FL that maintains a static optimization goal; (2) the current global model may not serve as the best initial point for the next FL rounds and could potentially lead to slow adaptation, given the possibility...

10.48550/arxiv.2410.05662 preprint EN arXiv (Cornell University) 2024-10-07
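
The snippet below is a toy simulation of the dynamic-client setting this abstract describes: clients join and leave between rounds, so federated averaging is taken over whichever clients are currently present and the effective objective shifts every round. The join/leave process and the local training rule are placeholders, not the method in the preprint.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, pool_size = 5, 10

# Each potential client has its own data-generating target (non-IID local optima).
client_targets = {c: rng.normal(size=dim) for c in range(pool_size)}
global_model = np.zeros(dim)

def local_train(w, target, lr=0.2, steps=10):
    """Gradient steps toward the client's local optimum (stand-in for local SGD)."""
    for _ in range(steps):
        w = w - lr * (w - target)
    return w

active = set(range(3))
for rnd in range(20):
    # Clients join or leave at random, so the objective changes every round.
    if rng.random() < 0.3 and len(active) < pool_size:
        active.add(int(rng.integers(pool_size)))
    if rng.random() < 0.2 and len(active) > 1:
        active.discard(next(iter(active)))

    # Federated averaging over whoever is currently present.
    updates = [local_train(global_model.copy(), client_targets[c]) for c in active]
    global_model = np.mean(updates, axis=0)

print(np.round(global_model, 2), "clients at end:", sorted(active))
```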

Network attacks such as Distributed Denial-of-Service (DDoS) can be critical to latency-critical systems such as Mobile Edge Computing (MEC), since attacks significantly increase the response delay of the victim service. An intrusion prevention system (IPS) is a promising solution to defend against such attacks, but there is a trade-off between IPS deployment and application resource reservation, as reserving resources for the IPS reduces the computation resources available to MEC applications. In this paper, we propose a game-theoretic framework to study the joint...

10.1109/globecom46510.2021.9685866 article EN 2021 IEEE Global Communications Conference (GLOBECOM) 2021-12-01
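
The sketch below gives a toy version of the trade-off this abstract describes: an edge operator splits resources between IPS capacity and MEC applications while anticipating a DDoS attacker's best response (a Stackelberg-style view, chosen here for simplicity). All payoff functions and constants are assumed for illustration and are not the paper's game model.

```python
TOTAL_UNITS = 10             # total edge resource units (assumed)
ATTACK_LEVELS = range(0, 8)  # attacker strategy: DDoS intensity (assumed)
ATTACK_COST = 1.0            # attacker's cost per unit of intensity (assumed)
DAMAGE_PER_LEAK = 2.0        # delay damage per unit of unblocked attack traffic (assumed)

def defender_utility(ips_units, attack):
    """Capacity left for MEC applications, minus damage that leaks past the IPS."""
    app_units = TOTAL_UNITS - ips_units
    leaked = max(0, attack - ips_units)
    return app_units - DAMAGE_PER_LEAK * leaked

def attacker_best_response(ips_units):
    """Attacker picks the intensity maximizing leaked damage minus attack cost."""
    return max(ATTACK_LEVELS,
               key=lambda a: DAMAGE_PER_LEAK * max(0, a - ips_units) - ATTACK_COST * a)

# Defender chooses IPS deployment anticipating the attacker's reaction.
best_ips = max(range(TOTAL_UNITS + 1),
               key=lambda d: defender_utility(d, attacker_best_response(d)))
print("IPS units:", best_ips,
      "attack intensity:", attacker_best_response(best_ips),
      "defender utility:", defender_utility(best_ips, attacker_best_response(best_ips)))
```

Under these assumed payoffs the operator reserves just enough IPS capacity to deter the attack entirely, which is the qualitative behavior the deployment/reservation trade-off is meant to capture.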

Federated learning (FL) has emerged as a key technique for distributed machine learning (ML). Most literature on FL has focused on ML model training with (i) a single task/model, (ii) a synchronous parameter-update scheme, and (iii) a static data distribution across devices, which is often not realistic in practical wireless environments. To address this, we develop DMA-FL, which considers dynamic training of multiple downstream tasks/models over an asynchronous update architecture. We first characterize convergence...

10.48550/arxiv.2305.13503 preprint EN cc-by arXiv (Cornell University) 2023-01-01