Yan Kang

ORCID: 0000-0002-2016-9503
Research Areas
  • Privacy-Preserving Technologies in Data
  • Cryptography and Data Security
  • Stochastic Gradient Optimization Techniques
  • Domain Adaptation and Few-Shot Learning
  • Adversarial Robustness in Machine Learning
  • Access Control and Trust
  • Privacy, Security, and Data Protection
  • Ovarian function and disorders
  • Advanced X-ray and CT Imaging
  • Medical Imaging Techniques and Applications
  • Ocular and Laser Science Research
  • Anomaly Detection Techniques and Applications
  • Data Quality and Management
  • Advanced battery technologies research
  • Advanced Computational Techniques and Applications
  • Advanced Graph Neural Networks
  • Advanced Battery Materials and Technologies
  • Digital Radiography and Breast Imaging
  • AI in cancer detection
  • Explainable Artificial Intelligence (XAI)
  • Personal Information Management and User Behavior
  • Ethics and Social Impacts of AI
  • Neural Networks and Reservoir Computing
  • Text and Document Classification Technologies
  • Photoreceptor and optogenetics research

National University of Defense Technology
2021-2025

West Bengal Electronics Industry Development Corporation Limited (India)
2020-2023

Hong Kong University of Science and Technology
2022

University of Hong Kong
2022

Beijing Academy of Artificial Intelligence
2022

Hubei University of Technology
2022

Beijing University of Posts and Telecommunications
2022

Samsung (United States)
2021

Xi'an Institute of Optics and Precision Mechanics
2019

University of Chinese Academy of Sciences
2019

Machine learning relies on the availability of a vast amount of data for training. However, in reality, most data are scattered across different organizations and cannot be easily integrated under many legal and practical constraints. In this paper, we introduce a new technique and framework, known as federated transfer learning (FTL), to improve statistical models under a data federation. The federation allows knowledge to be shared without compromising user privacy and enables complementary knowledge to be transferred across the network. As a result, a target-domain...

10.1109/mis.2020.2988525 article EN IEEE Intelligent Systems 2020-04-22
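The FTL framework above rests on parties exchanging only protected intermediate values rather than raw data. As a rough illustration of one standard building block used for this in federated learning generally (not the paper's actual protocol), here is a minimal pairwise-masking secure aggregation sketch: the server learns the exact sum of the parties' updates while never seeing any individual update.

```python
import numpy as np

rng = np.random.default_rng(0)

n_parties, dim = 3, 4
grads = [rng.normal(size=dim) for _ in range(n_parties)]  # private updates

# Each pair of parties agrees on a shared random mask; party i adds the
# mask for every partner j > i and subtracts it for every partner j < i,
# so all masks cancel in the server-side sum.
pair_mask = {(i, j): rng.normal(size=dim)
             for i in range(n_parties) for j in range(i + 1, n_parties)}

uploads = []
for i in range(n_parties):
    masked = grads[i].copy()
    for j in range(n_parties):
        if i < j:
            masked += pair_mask[(i, j)]
        elif j < i:
            masked -= pair_mask[(j, i)]
    uploads.append(masked)

aggregate = sum(uploads)  # server only ever sees masked uploads
assert np.allclose(aggregate, sum(grads))  # masks cancel: exact sum recovered
```

Each individual upload is statistically masked by the pairwise noise, yet the aggregate is exact; real deployments layer key agreement and dropout handling on top of this idea.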

Federated learning (FL) is a rapidly growing research field in machine learning. However, existing FL libraries cannot adequately support diverse algorithmic development; moreover, inconsistent dataset and model usage makes fair algorithm comparison challenging. In this work, we introduce FedML, an open research library and benchmark to facilitate FL algorithm development and fair performance comparison. FedML supports three computing paradigms: on-device training for edge devices, distributed computing, and single-machine simulation. It also...

10.48550/arxiv.2007.13518 preprint EN cc-by arXiv (Cornell University) 2020-01-01

How is it possible to allow multiple data owners to collaboratively train and use a shared prediction model while keeping all the local training data private? Traditional machine learning approaches...

10.2200/s00960ed2v01y201910aim043 article EN Synthesis lectures on artificial intelligence and machine learning 2019-12-19

Vertical Federated Learning (VFL) is a federated learning setting where multiple parties with different features about the same set of users jointly train machine learning models without exposing their raw data or model parameters. Motivated by the rapid growth in VFL research and real-world applications, we provide a comprehensive review of the concepts and algorithms of VFL, as well as current advances and challenges in various aspects, including effectiveness, efficiency, and privacy. We provide an exhaustive categorization for VFL settings...

10.1109/tkde.2024.3352628 article EN IEEE Transactions on Knowledge and Data Engineering 2024-01-26

Machine Learning models require a vast amount of data for accurate training. In reality, most data are scattered across different organizations and cannot be easily integrated under many legal and practical constraints. Federated Transfer Learning (FTL) was introduced in [1] to improve statistical models under a data federation that allows knowledge to be shared without compromising user privacy and enables complementary knowledge to be transferred across the network. As a result, a target-domain party can build more flexible and powerful models by leveraging rich labels from...

10.1109/bigdata47090.2019.9006280 preprint EN 2021 IEEE International Conference on Big Data (Big Data) 2019-12-01

We introduce a novel federated learning framework allowing multiple parties having different sets of attributes about the same user to jointly build models without exposing their raw data or model parameters. Conventional federated learning approaches are inefficient for such cross-silo problems because they require exchanging messages of gradient updates at every iteration, and raise security concerns over sharing such information during learning. We propose the Federated Stochastic Block Coordinate Descent (FedBCD) algorithm,...

10.1109/tsp.2022.3198176 article EN IEEE Transactions on Signal Processing 2022-01-01
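The core of FedBCD is that each party runs several local updates on its own parameter block against a stale copy of the other parties' intermediate outputs, exchanging messages only once per round instead of every step. A minimal plaintext sketch of this idea on a toy vertical logistic regression (hypothetical data split; the real protocol protects the exchanged values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vertically partitioned data: both parties hold the same 200 samples;
# party A owns 3 features plus the labels, party B owns 2 features.
n = 200
XA = rng.normal(size=(n, 3))
XB = rng.normal(size=(n, 2))
w_true = rng.normal(size=5)
y = (np.hstack([XA, XB]) @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

wA, wB = np.zeros(3), np.zeros(2)
lr, Q = 0.1, 5  # Q = number of local updates per communication round

for _ in range(50):
    # One exchange per round: parties swap intermediate outputs X @ w.
    hA, hB = XA @ wA, XB @ wB
    for _ in range(Q):  # party A updates its block against B's stale output
        wA -= lr * XA.T @ (sigmoid(XA @ wA + hB) - y) / n
    for _ in range(Q):  # party B updates its block against A's stale output
        wB -= lr * XB.T @ (sigmoid(hA + XB @ wB) - y) / n

accuracy = ((sigmoid(XA @ wA + XB @ wB) > 0.5) == y.astype(bool)).mean()
```

With Q > 1 the number of communication rounds drops by roughly a factor of Q, at the cost of block updates computed from slightly stale partner outputs; the paper analyzes when this staleness still permits convergence.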

Federated learning (FL) enables participating parties to collaboratively build a global model with boosted utility without disclosing private data information. Appropriate protection mechanisms have to be adopted to fulfill the opposing requirements of preserving privacy and maintaining high model utility. In addition, it is a mandate for a federated learning system to achieve high efficiency in order to enable large-scale training and deployment. We propose a unified federated learning framework that reconciles horizontal and vertical federated learning. Based on this...

10.1145/3595185 article EN ACM Transactions on Intelligent Systems and Technology 2023-05-05

We introduce a collaborative learning framework allowing multiple parties having different sets of attributes about the same user to jointly build models without exposing their raw data or model parameters. In particular, we propose the Federated Stochastic Block Coordinate Descent (FedBCD) algorithm, in which each party conducts multiple local updates before each communication to effectively reduce the number of communication rounds among parties, a principal bottleneck for such problems. We theoretically analyze the impact of the number of local updates and show that when...

10.48550/arxiv.1912.11187 preprint EN other-oa arXiv (Cornell University) 2019-01-01

Federated learning allows multiple parties to build machine learning models collaboratively without exposing their data. In particular, vertical federated learning (VFL) enables participating parties to build a joint model based on distributed features of aligned samples. However, VFL requires all parties to share a sufficient amount of aligned samples; in reality, the set of aligned samples may be small, leaving the majority of non-aligned data unused. In this article, we propose Federated Cross-view Training (FedCVT), a semi-supervised approach that improves the performance of a VFL model with limited aligned samples. More...

10.1145/3510031 article EN ACM Transactions on Intelligent Systems and Technology 2022-02-04

We present a novel privacy-preserving federated adversarial domain adaptation approach (PrADA) to address an under-studied but practical cross-silo problem, in which the party of the target domain is insufficient in both samples and features. We address the lack-of-feature issue by extending the feature space through vertical federated learning with a feature-rich party, and tackle the sample-scarce issue by performing adversarial domain adaptation from a sample-rich source party. In this work, we focus on financial applications where interpretability is critical. However, existing...

10.1109/tbdata.2022.3188292 article EN IEEE Transactions on Big Data 2022-07-11

Highly concentrated salts, like 30 m ZnCl₂, can reduce free water molecules in aqueous electrolytes but also increase acidity, causing severe acid-catalyzed corrosion of the Zn anode, current collector, and...

10.1039/d5ee00737b article EN Energy & Environmental Science 2025-01-01

Machine learning (ML) training data is often scattered across disparate collections of datasets, called silos. This fragmentation poses a major challenge for data-intensive ML applications: integrating and transforming data residing in different sources demands a lot of manual work and computational resources. With privacy and security constraints, data cannot leave the premises of the silos; hence, model training should proceed in a decentralized manner. In this work, we present our vision of how to bridge traditional data integration (DI)...

10.1109/icde55515.2023.00301 article EN 2022 IEEE 38th International Conference on Data Engineering (ICDE) 2023-04-01

Vertical federated learning (VFL) allows an active party with labeled data to leverage auxiliary features from the passive parties to improve model performance. Concerns about private feature and label leakage in both the training and inference phases of VFL have drawn wide research attention. In this paper, we propose a general privacy-preserving vertical federated deep learning framework called FedPass, which leverages adaptive obfuscation to protect features and labels simultaneously. Strong privacy-preserving capabilities for features and labels are theoretically proved (in...

10.24963/ijcai.2023/418 article EN 2023-08-01

Trustworthy federated learning typically leverages protection mechanisms to guarantee privacy. However, protection mechanisms inevitably introduce utility loss or efficiency reduction while protecting data privacy. Therefore, protection mechanisms and their parameters should be carefully chosen to strike an optimal tradeoff among privacy leakage, utility loss, and efficiency reduction. To this end, practitioners need tools to measure the three factors, optimize the tradeoff between them, and choose the protection mechanism that is most appropriate for the application at hand. Motivated by this requirement, we propose a framework...

10.1145/3652612 article EN ACM Transactions on Intelligent Systems and Technology 2024-03-18

Personalized Federated Continual Learning (PFCL) is a new practical scenario that poses greater challenges in sharing and personalizing knowledge. PFCL not only relies on knowledge fusion for server aggregation from a global spatial-temporal perspective but also needs model improvement for each client according to local requirements. Existing methods, whether personalized federated learning (PFL) or federated continual learning (FCL), have overlooked the multi-granularity representation of knowledge, which can be utilized to overcome the Spatial-Temporal...

10.1145/3637528.3671948 article EN Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2024-08-24

Federated learning allows multiple parties to build machine learning models collaboratively without exposing their data. In particular, vertical federated learning (VFL) enables participating parties to build a joint model based upon distributed features of aligned samples. However, VFL requires all parties to share a sufficient amount of aligned samples; in reality, the set of aligned samples may be small, leaving the majority of non-aligned data unused. In this article, we propose Federated Cross-view Training (FedCVT), a semi-supervised approach that improves the performance of a VFL model with limited aligned samples. More...

10.48550/arxiv.2008.10838 preprint EN cc-by arXiv (Cornell University) 2020-01-01

Conventionally, federated learning aims to optimize a single objective, typically the utility. However, for a federated learning system to be trustworthy, it needs to simultaneously satisfy multiple objectives, such as maximizing model performance, minimizing privacy leakage and training costs, and being robust to malicious attacks. Multi-Objective Optimization (MOO), aiming to optimize multiple conflicting objectives, is well suited to solving the optimization problem of Trustworthy Federated Learning (TFL). In this paper, we unify MOO and TFL by...

10.1145/3701039 article EN ACM Transactions on Intelligent Systems and Technology 2024-10-24
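One common way to handle conflicting objectives like those above is weighted-sum scalarization: collapse the objectives into one score per weight vector and sweep the weights to trace an approximate Pareto front. The sketch below uses purely hypothetical utility-loss and privacy-leakage curves for a single protection knob sigma (not the paper's formulation):

```python
import numpy as np

# Hypothetical objective curves for one protection knob sigma (noise level):
# larger sigma hurts utility but reduces what an attacker can infer.
def utility_loss(sigma):
    return sigma ** 2

def privacy_leakage(sigma):
    return 1.0 / (1.0 + sigma)

sigmas = np.linspace(0.0, 2.0, 201)

# Weighted-sum scalarization: each weight w collapses the two objectives
# into one score; sweeping w traces an approximate Pareto front.
front = []
for w in np.linspace(0.05, 0.95, 10):
    scores = w * utility_loss(sigmas) + (1 - w) * privacy_leakage(sigmas)
    best = sigmas[np.argmin(scores)]
    front.append((utility_loss(best), privacy_leakage(best)))

# As w shifts weight toward utility, the chosen sigma shrinks:
# utility loss falls while privacy leakage rises along the front.
```

Weighted sums only recover points on the convex part of a Pareto front, which is one reason dedicated MOO methods are preferred when the objective landscape is non-convex.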