Hongsheng Hu

ORCID: 0000-0003-4455-4227
Research Areas
  • Privacy-Preserving Technologies in Data
  • Adversarial Robustness in Machine Learning
  • Cryptography and Data Security
  • Anomaly Detection Techniques and Applications
  • Advanced Neural Network Applications
  • Caching and Content Delivery
  • Domain Adaptation and Few-Shot Learning
  • Biometric Identification and Security
  • Security and Verification in Computing
  • Neural Networks and Applications
  • Spam and Phishing Detection
  • Machine Learning and Data Classification
  • Web Application Security Vulnerabilities
  • Misinformation and Its Impacts
  • Access Control and Trust
  • Fault Detection and Control Systems
  • Image and Video Quality Assessment
  • Topic Modeling
  • Occupational Health and Safety Research
  • Explainable Artificial Intelligence (XAI)
  • Advanced Data and IoT Technologies
  • Face and Expression Recognition
  • Face recognition and analysis
  • Traffic Prediction and Management Techniques
  • Privacy, Security, and Data Protection

Commonwealth Scientific and Industrial Research Organisation
2023-2024

Data61
2023-2024

University of Auckland
2020-2023

Federated learning (FL) has emerged as a promising privacy-aware paradigm that allows multiple clients to jointly train a model without sharing their private data. Recently, many studies have shown that FL is vulnerable to membership inference attacks (MIAs), which can distinguish the training members of a given model from non-members. However, existing MIAs ignore the source of a training member, i.e., the information of which client owns it, while it is essential to explore source privacy in FL beyond membership privacy of examples from all clients. The leakage of source information can lead to severe privacy issues. For...
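A minimal sketch of a confidence-threshold membership inference attack, a common MIA baseline rather than the paper's specific method: models tend to assign higher confidence to records they were trained on, so a simple threshold can separate likely members from non-members. All names and scores below are hypothetical.

```python
# Sketch of a confidence-threshold membership inference attack
# (a standard MIA baseline, not this paper's source-inference method).
def infer_membership(confidence, threshold=0.9):
    """Predict 'training member' when the target model's confidence
    on a record exceeds the threshold."""
    return confidence >= threshold

# Hypothetical per-record top-class confidences from a target model.
scores = {"record_a": 0.98, "record_b": 0.62}
predictions = {name: infer_membership(c) for name, c in scores.items()}
```

The attack in the paper goes further, asking not just *whether* a record was used in training but *which client* contributed it.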

10.1109/icdm51629.2021.00129 article EN 2021 IEEE International Conference on Data Mining (ICDM) 2021-12-01

The right to be forgotten requires the removal or "unlearning" of a user's data from machine learning models. However, in the context of Machine Learning as a Service (MLaaS), retraining a model from scratch to fulfill an unlearning request is impractical due to the lack of training data on the service provider's side (the server). Furthermore, approximate unlearning further embraces a complex trade-off between utility (model performance) and privacy (unlearning performance). In this paper, we try to explore the potential threats posed by unlearning services in MLaaS,...

10.14722/ndss.2024.24252 article EN 2024-01-01

Sports matches are very popular all over the world. The prediction of a sports match is helpful to grasp a team's state in time and adjust strategy during the match. It is a challenging effort to predict match outcomes. Therefore, a method is proposed to predict the result of the next match by using teams' historical data. We combined a Long Short-Term Memory (LSTM) model with an attention mechanism and put forward an AS-LSTM model for predicting match results. Furthermore, to ensure the timeliness of the prediction, we add a sliding window so the prediction has better timeliness. Taking football...
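The sliding-window idea mentioned above can be sketched in a few lines: instead of training on a team's entire history, the model is fed fixed-length windows of the most recent matches, so predictions are conditioned on current form. This is an illustrative helper under assumed inputs, not the paper's AS-LSTM implementation.

```python
def sliding_windows(history, size):
    """Split a team's match history into fixed-length recent windows,
    so each prediction depends only on up-to-date form (sketch)."""
    return [history[i:i + size] for i in range(len(history) - size + 1)]

# Hypothetical win/loss sequence (1 = win, 0 = loss) over six matches.
results = [1, 0, 1, 1, 0, 1]
windows = sliding_windows(results, size=3)  # four windows of three matches
```

Each window would then be the input sequence to the attention-augmented LSTM.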

10.1016/j.dcan.2021.08.008 article EN cc-by-nc-nd Digital Communications and Networks 2021-08-30

Recently issued data privacy regulations like the GDPR (General Data Protection Regulation) grant individuals the right to be forgotten. In the context of machine learning, this requires a model to forget about a training sample if requested by its owner (i.e., machine unlearning). As an essential step prior to unlearning, it is still a challenge for a data owner to tell whether or not her data have been used by an unauthorized party to train a machine learning model. Membership inference is a recently emerging technique to identify whether a data sample was used to train a target model, and seems...

10.24963/ijcai.2022/532 article EN Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022-07-01

Recommender systems are important applications in big data analytics because accurate recommendation of items or high-valued suggestions can bring high profit to both commercial companies and customers. To make precise recommendations, a recommender system often needs large amounts of fine-grained data for training. In the current era, data exists in the form of isolated islands, and it is difficult to integrate the scattered data due to privacy and security concerns. Moreover, privacy laws and regulations make it harder to share data. Therefore, designing...

10.1002/cpe.6233 article EN Concurrency and Computation Practice and Experience 2021-02-23

Federated learning (FL) is a popular approach to facilitate privacy-aware machine learning since it allows multiple clients to collaboratively train a global model without granting others access to their private data. It is, however, known that FL can be vulnerable to membership inference attacks (MIAs), where the training records of the global model can be distinguished from testing records. Surprisingly, research focusing on...

10.1109/tdsc.2023.3321565 article EN IEEE Transactions on Dependable and Secure Computing 2023-10-03

Deep learning methods often suffer performance degradation due to domain shift, where discrepancies exist between training and testing data distributions. Domain generalization mitigates this problem by leveraging information from multiple source domains to enhance a model's generalization capabilities for unseen domains. However, existing approaches typically present examples from the source domains in a random manner, overlooking the potential benefits of a structured presentation. To bridge this gap, we propose a novel strategy, Symmetric Self-Paced...

10.1609/aaai.v38i15.29639 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2024-03-24

Membership inference attacks on a machine learning model aim to determine whether a given data record is a member of the model's training set. They pose severe privacy risks to individuals; e.g., identifying an individual's participation in a hospital's health analytics training set reveals that this individual was once a patient at that hospital. Adversarial regularization (AR) is one of the state-of-the-art defense methods that mitigate such attacks while preserving the model's prediction accuracy. AR adds membership inference as a new regularization term to the target model's objective during the training process...
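The AR objective described above can be sketched as a min-max trade-off: the target model minimizes its ordinary task loss plus a penalty proportional to the inference adversary's gain. This is a schematic of the general idea under assumed scalar inputs, not the authors' training procedure; `lam` is a hypothetical weighting hyperparameter.

```python
def adversarial_regularized_loss(task_loss, attack_gain, lam=1.0):
    """Adversarial-regularization objective (sketch): penalize the
    target model by the membership-inference adversary's gain, with
    lam trading prediction accuracy against membership privacy."""
    return task_loss + lam * attack_gain

# Larger lam pushes training toward privacy at some cost in accuracy.
loss = adversarial_regularized_loss(task_loss=2.0, attack_gain=1.0, lam=3.0)
```

In practice the attack gain is itself produced by a learned inference model, making the full objective a two-player min-max game.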

10.1109/ijcnn52387.2021.9534381 article EN 2022 International Joint Conference on Neural Networks (IJCNN) 2021-07-18

The recommender system is an important application in big data analytics because accurate recommendation of items or high-valued suggestions can bring high profit to both commercial companies and customers. To make precise recommendations, a recommender system often needs large amounts of fine-grained data for training. In the current era, data exist in the form of isolated islands, and it is difficult to integrate the scattered data due to privacy and security concerns. Moreover, privacy laws and regulations make it harder to share data. Therefore, designing a privacy-preserving recommender system is paramount...

10.1109/ccgrid49817.2020.000-1 article EN 2020-05-01

Recently, biometric identification has been extensively used for border control. Some face recognition systems have been designed based on the Internet of Things. But the rich personal information contained in face images can cause severe privacy breach and abuse issues during the identification process if a system is compromised by insiders or external security attacks. Encrypting the query face image is a state-of-the-art solution to protect an individual's privacy, but it incurs huge computational cost and poses a big challenge to time-critical...

10.1145/3448414 article EN ACM Transactions on Sensor Networks 2021-06-21

Machine unlearning has become a promising solution for fulfilling the "right to be forgotten", under which individuals can request the deletion of their data from machine learning models. However, existing studies mainly focus on the efficacy and efficiency of unlearning methods, while neglecting the investigation of privacy vulnerability during the unlearning process. With two versions of a model available to an adversary, that is, the original model and the unlearned model, machine unlearning opens up a new attack surface. In this paper, we conduct the first study to understand the extent to which unlearning can leak...
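The new attack surface can be illustrated with a toy signal: an adversary who can query both the original and the unlearned model compares their outputs on a candidate record, and a large gap suggests that record was deleted between versions. This is a deliberately simplified sketch with hypothetical probability vectors; the paper's actual attack is more involved.

```python
def version_gap(original_probs, unlearned_probs):
    """L1 distance between the two model versions' output distributions
    on one record; a large gap flags a likely deleted record (sketch)."""
    return sum(abs(a - b) for a, b in zip(original_probs, unlearned_probs))

# Hypothetical class probabilities for one candidate record.
before = [0.90, 0.05, 0.05]  # original model: confident
after = [0.40, 0.35, 0.25]   # unlearned model: confidence dropped
gap = version_gap(before, after)  # a large gap suggests deletion
```

The intuition is that unlearning, by design, changes the model most on exactly the records it was asked to forget, and that change itself is observable.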

10.48550/arxiv.2404.03233 preprint EN arXiv (Cornell University) 2024-04-04

AI systems, in particular those built with deep learning techniques, have demonstrated superior performance for various real-world applications. Given the need for tailored optimization in specific scenarios, as well as concerns related to exploits of subsurface vulnerabilities, a more comprehensive and in-depth testing system becomes a pivotal topic. We have seen the emergence of testing tools and applications that aim to expand testing capabilities. However, they often concentrate on ad-hoc tasks, rendering them unsuitable for simultaneously testing multiple...

10.48550/arxiv.2411.06146 preprint EN arXiv (Cornell University) 2024-11-09

Phishing remains a pervasive cyber threat, as attackers craft deceptive emails to lure victims into revealing sensitive information. While Artificial Intelligence (AI), particularly deep learning, has become a key component in defending against phishing attacks, these approaches face critical limitations. The scarcity of publicly available, diverse, and updated data, largely due to privacy concerns, constrains their effectiveness. As phishing tactics evolve rapidly, models trained on limited, outdated...

10.48550/arxiv.2411.11389 preprint EN arXiv (Cornell University) 2024-11-18

As large language models (LLMs) increasingly depend on web-scraped datasets, concerns over the unauthorized use of copyrighted or personal content for training have intensified. Despite regulations such as the General Data Protection Regulation (GDPR), data owners still have limited control over the use of their data in model training. To address this, we propose ExpShield, a proactive self-guard mechanism that empowers data owners to embed invisible perturbations into their text, limiting data misuse in LLMs without affecting readability. This...

10.48550/arxiv.2412.21123 preprint EN arXiv (Cornell University) 2024-12-30

Federated learning (FL) has emerged as a promising privacy-aware paradigm that allows multiple clients to jointly train a model without sharing their private data. Recently, many studies have shown that FL is vulnerable to membership inference attacks (MIAs), which can distinguish the training members of a given model from non-members. However, existing MIAs ignore the source of a training member, i.e., the information of which client owns it, while it is essential to explore source privacy in FL beyond membership privacy of examples from all clients. The leakage of source information can lead to severe privacy issues. For...

10.48550/arxiv.2109.05659 preprint EN other-oa arXiv (Cornell University) 2021-01-01

Machine learning models have made significant breakthroughs across various domains. However, it is crucial to assess these models to obtain a complete understanding of their capabilities and limitations, and to ensure their effectiveness and reliability in solving real-world problems. In this paper, we present a framework, termed ML-Compass, that covers a broad range of machine learning abilities, including utility evaluation, neuron analysis, robustness assessment, and interpretability examination. We use the framework to assess seven state-of-the-art...

10.1145/3579856.3592823 article EN 2023-07-05

The right to be forgotten requires the removal or "unlearning" of a user's data from machine learning models. However, in the context of Machine Learning as a Service (MLaaS), retraining a model from scratch to fulfill an unlearning request is impractical due to the lack of training data on the service provider's side (the server). Furthermore, approximate unlearning further embraces a complex trade-off between utility (model performance) and privacy (unlearning performance). In this paper, we try to explore the potential threats posed by unlearning services...

10.48550/arxiv.2309.08230 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Federated learning (FL) is a popular approach to facilitate privacy-aware machine learning since it allows multiple clients to collaboratively train a global model without granting others access to their private data. It is, however, known that FL can be vulnerable to membership inference attacks (MIAs), where the training records of the global model can be distinguished from testing records. Surprisingly, research focusing on investigating the source inference problem appears to be lacking. We also observe that identifying a training record's source client can result in privacy...

10.48550/arxiv.2310.00222 preprint EN cc-by arXiv (Cornell University) 2023-01-01