- Domain Adaptation and Few-Shot Learning
- Multimodal Machine Learning Applications
- Cancer-related molecular mechanisms research
- Advanced Image and Video Retrieval Techniques
- Advanced Neural Network Applications
- Statistical Methods and Inference
- COVID-19 diagnosis using AI
- Advanced Malware Detection Techniques
- Advanced Causal Inference Techniques
- Statistical Methods and Bayesian Inference
- Anomaly Detection Techniques and Applications
- Medical Image Segmentation Techniques
- Complex Systems and Time Series Analysis
- Bacillus and Francisella bacterial research
- Traffic Prediction and Management Techniques
- Digital Communication and Language
- Chaos control and synchronization
- Advanced Graph Neural Networks
- Nonlinear Dynamics and Pattern Formation
- Respiratory viral infections research
- Machine Learning in Healthcare
- Topic Modeling
- Machine Learning and ELM
- Privacy-Preserving Technologies in Data
- Complex Network Analysis Techniques
Tencent (China)
2024
Zhejiang University
2005-2024
Zhejiang University of Science and Technology
2019-2023
Baidu (China)
2023
Zhejiang Lab
2019
Institute of Modern Physics
2005
Real-world networks exhibit prominent hierarchical and modular structures, with various subgraphs as building blocks. Most existing studies simply consider distinct motifs and use only their numbers to characterize the underlying network. Although such statistics can be used to describe a network model, or even to design some algorithms, the role of subgraphs in applications could be further explored so as to improve the results. In this article, the concept of the subgraph network (SGN) is introduced and then applied to network models, with algorithms designed for...
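As a rough illustration of the subgraph statistics this abstract contrasts with SGNs, here is a minimal pure-Python sketch (function names are illustrative, not from the paper): it counts triangle motifs, and builds a first-order subgraph network in the line-graph style, where original edges become nodes and two edge-nodes are linked if the edges share an endpoint — one common way an SGN is realized.

```python
from itertools import combinations

def triangle_count(edges):
    """Count triangle motifs in an undirected graph given as an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0
    for u, v in edges:
        total += len(adj[u] & adj[v])  # each common neighbour closes a triangle
    return total // 3  # every triangle is counted once per each of its 3 edges

def first_order_sgn(edges):
    """First-order subgraph network: edges of the original graph become nodes;
    two edge-nodes are connected if the original edges share an endpoint."""
    nodes = [frozenset(e) for e in edges]
    sgn_edges = set()
    for a, b in combinations(nodes, 2):
        if a & b:  # the two original edges share a vertex
            sgn_edges.add((tuple(sorted(a)), tuple(sorted(b))))
    return sgn_edges
```

For a triangle graph, the three edges all pairwise share endpoints, so its first-order SGN is again a triangle.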
Domain generalization (DG) aims to generalize the knowledge learned from multiple source domains to unseen target domains. Existing DG techniques can be subsumed under two broad categories, i.e., domain-invariant representation learning and domain manipulation. Nevertheless, it is extremely difficult to explicitly augment or generate the unseen data, and when the domain variety increases, developing a model by simply aligning more domain-specific information becomes challenging. In this paper, we propose a simple yet...
In observational studies, confounder separation and balancing are the fundamental problems of treatment effect estimation. Most previous methods focused on addressing the balancing problem by treating all observed pre-treatment variables as confounders, ignoring the separation problem. In general, not all observed variables are confounders that refer to the common causes of the treatment and the outcome; some only contribute to the treatment (i.e., instrumental variables) and some only to the outcome (i.e., adjustment variables). Balancing those non-confounders, including instrumental variables and adjustment variables, would generate additional bias for...
Instrumental variables (IVs), sources of treatment randomization that are conditionally independent of the outcome, play an important role in causal inference with unobserved confounders. However, existing IV-based counterfactual prediction methods need well-predefined IVs, while it is an art rather than a science to find valid IVs in many real-world scenes. Moreover, predefined hand-made IVs could be weak or erroneous by violating the conditions of valid IVs. These thorny facts hinder the application of IV-based methods. In this...
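For context on why a valid IV matters, a minimal sketch (not the paper's method; the simulation setup and names are illustrative): for a scalar instrument z, the Wald/2SLS ratio cov(z, y) / cov(z, t) recovers the causal effect of treatment t on outcome y even when an unobserved confounder u biases the naive regression.

```python
import random

def cov(a, b):
    """Sample covariance of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def iv_estimate(z, t, y):
    """Wald estimator for a scalar instrument: cov(z, y) / cov(z, t)."""
    return cov(z, y) / cov(z, t)

random.seed(0)
n = 50_000
u = [random.gauss(0, 1) for _ in range(n)]   # unobserved confounder
z = [random.gauss(0, 1) for _ in range(n)]   # instrument: affects t, not y directly
t = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
# True causal effect of t on y is 2.0; u also shifts y, creating confounding bias.
y = [2.0 * ti + 3.0 * ui + random.gauss(0, 1) for ti, ui in zip(t, u)]

naive = cov(t, y) / cov(t, t)   # biased upward by the confounder u
iv = iv_estimate(z, t, y)       # close to the true effect 2.0
```

A weak instrument makes cov(z, t) small, so the ratio becomes unstable — the fragility the abstract alludes to.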
Universal domain adaptation (UniDA) aims to transfer knowledge from the source domain to the target domain without any prior knowledge about the label set. The challenge lies in how to determine whether target samples belong to the common categories. The mainstream methods make judgments based on sample features, which overemphasizes global information while ignoring the most crucial local objects in the image, resulting in limited accuracy. To address this issue, we propose a Universal Attention Matching (UniAM) framework by exploiting the self-attention mechanism in vision...
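Since the abstract is cut off before the method details, the following is only a generic scaled dot-product self-attention sketch (pure Python, illustrative shapes) — the vision-transformer mechanism that attention-based approaches like this build on to weight local patches — not the UniAM algorithm itself.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(q, k, v):
    """Scaled dot-product self-attention over lists of token vectors.

    q, k, v: lists of equal-dimension vectors (one per patch/token).
    Returns one attention-weighted output vector per query token.
    """
    d = len(q[0])
    out = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        weights = softmax(scores)  # how much each patch attends to the others
        out.append([sum(w * vj[c] for w, vj in zip(weights, v))
                    for c in range(len(v[0]))])
    return out
```

The attention weights themselves are what patch-level methods typically inspect to find the "crucial local objects" the abstract mentions.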
Considerable progress has been made in domain generalization (DG), which aims to learn a generalizable model from multiple well-annotated source domains for unknown target domains. However, it can be prohibitively expensive to obtain sufficient annotation for source datasets in many real scenarios. To escape the dilemma between generalization and annotation costs, in this paper we introduce a novel task named label-efficient domain generalization (LEDG) to enable generalization with label-limited source domains. To address this challenging task, we propose a framework called Collaborative Exploration...
Domain generalization (DG) aims to learn from multiple known source domains a model that can generalize well to unknown target domains. The existing DG methods usually exploit the fusion of shared multi-source data to train a generalizable model. However, tremendous data is distributed across lots of places nowadays and may not be shared due to privacy policies. In this paper, we tackle the problem of federated domain generalization, where source datasets can only be accessed and learned locally for privacy protection. We propose a novel framework called Collaborative...
Attackers often use domain generation algorithms (DGAs) to create various kinds of pseudorandom domains dynamically and select a part of them to connect with command and control servers; it is therefore important to automatically detect the algorithmically generated domains (AGDs). AGDs can be broken down into two categories: character-based and wordlist-based domains. Recently, methods based on machine learning and deep learning have been widely explored. However, much previous work performs well in detecting only one kind of DGA...
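As a toy illustration of the character-based detection family mentioned here (not the paper's model), a classic Shannon-entropy feature tends to separate random-looking character-based AGDs from dictionary-like names; wordlist-based AGDs defeat exactly this kind of feature, which motivates handling both categories. The helper name and any threshold are illustrative.

```python
import math
from collections import Counter

def char_entropy(domain):
    """Shannon entropy (bits per character) of a domain's second-level label.

    Character-based AGDs (near-uniform random characters) tend to score
    high; benign and wordlist-based names reuse letters and score lower.
    """
    label = domain.split(".")[0]
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

For example, `char_entropy("xj4k9qzp2v.com")` exceeds `char_entropy("google.com")`, since the former uses ten distinct characters with no repeats.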
We present Follow-Your-Emoji, a diffusion-based framework for portrait animation, which animates a reference portrait with target landmark sequences. The main challenge of portrait animation is to preserve the identity of the reference portrait and transfer the target expression to this portrait while maintaining temporal consistency and fidelity. To address these challenges, Follow-Your-Emoji is equipped with the powerful Stable Diffusion model and two well-designed technologies. Specifically, we first adopt a new explicit motion signal, namely expression-aware landmarks, to guide...
Domain generalization (DG) aims to learn from multiple source domains a model that can generalize well on unseen target domains. Existing DG methods mainly learn representations with an invariant marginal distribution of the input features; however, the invariance of the conditional distribution of the labels given the features is more essential for unknown domain prediction. Meanwhile, there exist unobserved confounders which affect the features and labels simultaneously, causing spurious correlation and hindering the learning of the invariant relationship contained in the conditional distribution...
Domain generalization (DG) is a prevalent problem in real-world applications, which aims to train well-generalized models for unseen target domains by utilizing several source domains. Since domain labels, i.e., which domain each data point is sampled from, naturally exist, most DG algorithms treat them as a kind of supervision information to improve the generalization performance. However, the original domain labels may not be the optimal supervision signal due to the lack of domain heterogeneity, i.e., the diversity among domains. For example, a sample in one domain may be closer to another domain, and its original label...
Deep learning has achieved tremendous success in recent years, but most of these successes are built on an independent and identically distributed (IID) assumption. This somewhat hinders the application of deep learning to more challenging out-of-distribution (OOD) scenarios. Although many OOD methods have been proposed to address this problem and have obtained good performance on testing data that has major shifts from the training distributions, interestingly, we experimentally find that they achieve excellent performance by making a great...
Masked image modeling (MIM) learns visual representation by masking and reconstructing image patches. Applying the reconstruction supervision on the CLIP representation has been proven effective for MIM. However, it is still under-explored how the supervision in MIM influences performance. To investigate strategies for refining the CLIP-targeted MIM, we study two critical elements, i.e., the supervision position and the mask ratio, and reveal two interesting perspectives, relying on our developed simple pipeline, context autoencoder with CLIP target (CAE v2). Firstly, we observe...
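The masking step whose ratio this study varies can be sketched as follows (illustrative, pure Python; the 14x14 grid is the usual ViT patch layout, and CAE v2's actual pipeline is not shown in this excerpt):

```python
import random

def sample_mask(num_patches, mask_ratio, seed=None):
    """Randomly split patch indices into masked and visible sets, as in
    MIM pre-training; mask_ratio controls how many patches are hidden."""
    rng = random.Random(seed)
    num_masked = int(num_patches * mask_ratio)
    masked = set(rng.sample(range(num_patches), num_masked))
    visible = [i for i in range(num_patches) if i not in masked]
    return sorted(masked), visible

# A 224x224 image with 16x16 patches gives a 14x14 = 196-patch grid;
# a 0.75 ratio (common in MIM work) hides 147 of them.
masked, visible = sample_mask(num_patches=196, mask_ratio=0.75, seed=0)
```

The model then encodes only `visible` patches and is supervised to reconstruct the targets at the `masked` positions.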
Large-scale vision-language (V-L) models have demonstrated remarkable generalization capabilities for downstream tasks through prompt tuning. However, the mechanisms behind the learned text representations are unknown, limiting further gains, and the limitations are more severe when faced with the prevalent class imbalances seen in web-sourced datasets. Recent advances on the neural collapse (NC) phenomenon of vision-only models suggest that the optimal representation structure is the simplex ETF, which paves the way to study V-L...
Large-scale vision-language (V-L) models have demonstrated remarkable generalization capabilities for downstream tasks through prompt tuning. However, the mechanisms behind the learned text representations are unknown, limiting further gains, especially under class imbalance scenarios. Recent advances on the neural collapse (NC) phenomenon of vision-only models suggest that the optimal representation structure is the simplex ETF, which paves the way to study V-L models. In this paper, we make the first attempt to use NC...
The fundamental problem in treatment effect estimation from observational data is confounder identification and balancing. Most of the previous methods realized balancing by treating all observed pre-treatment variables as confounders, without further separating confounders from non-confounders. In general, not all observed variables are confounders that refer to the common causes of the treatment and the outcome; some only contribute to the treatment or the outcome. Balancing those non-confounders, including instrumental and adjustment variables, would generate additional bias for...
Domain Generalization (DG) aims to learn a model that can generalize well to unseen target domains from a set of source domains. With the idea of the invariant causal mechanism, a lot of effort has been put into learning robust causal effects which are determined by the object yet insensitive to domain changes. Despite the invariance of causal effects, they are difficult to quantify and optimize. Inspired by the ability of humans to adapt to new environments with prior knowledge, we develop a novel Contrastive Causal Model (CCM) to transfer images to the taught...
Prompt learning has become one of the most efficient paradigms for adapting large pre-trained vision-language models to downstream tasks. Current state-of-the-art methods, like CoOp and ProDA, tend to adopt soft prompts to learn an appropriate prompt for each specific task. The recent CoCoOp further boosts the base-to-new generalization performance via an image-conditional prompt. However, it directly fuses identical image semantics into prompts of different labels, which significantly weakens the discrimination among classes as shown in...