Senlin Shu

ORCID: 0009-0008-7861-4834
Research Areas
  • Machine Learning and Data Classification
  • Text and Document Classification Technologies
  • Image Retrieval and Classification Techniques
  • Advanced Image and Video Retrieval Techniques
  • Music and Audio Processing
  • Imbalanced Data Classification Techniques
  • Domain Adaptation and Few-Shot Learning
  • Machine Learning and Algorithms
  • Video Analysis and Summarization
  • Industrial Vision Systems and Defect Detection
  • COVID-19 diagnosis using AI
  • Advanced Bandit Algorithms Research
  • Machine Learning in Bioinformatics
  • Educational Technology and Assessment
  • Data Stream Mining Techniques
  • Spam and Phishing Detection
  • Pharmacy and Medical Practices
  • Handwritten Text Recognition Techniques
  • Advanced Neural Network Applications
  • Machine Learning and ELM

Chongqing University
2023-2024

Southwest University
2019-2022

Trained with the standard cross entropy loss, deep neural networks can achieve great performance on correctly labeled data. However, if the training data is corrupted by label noise, models tend to overfit the noisy labels, thereby achieving poor generalization performance. To remedy this issue, several loss functions have been proposed and demonstrated to be robust to label noise. Although most of them stem from Categorical Cross Entropy (CCE), they fail to embody the intrinsic relationships between CCE and other loss functions. In this paper,...

10.24963/ijcai.2020/305 article EN 2020-07-01
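The truncation cuts off the paper's proposal, but the CCE/MAE relationship it alludes to has a well-known concrete form in this literature: truncating the Taylor series of -log(p) interpolates between an MAE-like bounded loss (order 1) and full cross entropy (order → ∞). A minimal sketch of that family, offered as an illustration rather than as this paper's exact loss:

```python
import numpy as np

def taylor_cross_entropy(probs, labels, order=2):
    """Truncated Taylor series of -log(p): sum_{k=1..order} (1 - p)^k / k.
    order=1 is half of MAE on the true class; order -> inf recovers CCE.
    Illustrative family only, not necessarily the paper's proposed loss."""
    p = probs[np.arange(len(labels)), labels]  # true-class probabilities
    return float(np.mean(sum((1.0 - p) ** k / k for k in range(1, order + 1))))

# Bounded penalty on a confidently mislabeled sample, unlike CCE:
probs = np.array([[0.01, 0.99], [0.90, 0.10]])
labels = np.array([0, 0])  # first sample carries a noisy label
print(taylor_cross_entropy(probs, labels, order=2))   # finite, modest
print(float(-np.log(probs[[0, 1], labels]).mean()))   # CCE is dominated by the noisy sample
```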

This article tackles the problem of multilabel learning with missing labels. For this problem, it is widely accepted that label correlations can be used to recover the ground-truth label matrix. Most existing approaches impose a low-rank assumption on the observed label matrix and exploit it by decomposing the matrix into two matrices, which describe the latent factors of instances and labels, respectively. The quality of these latent factors highly influences the recovery of missing labels and the construction of the classification model. In this article, we propose recovering regularized...

10.1109/tcyb.2020.3016897 article EN IEEE Transactions on Cybernetics 2020-09-16
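The decomposition described above is conventionally written as a masked low-rank factorization; a generic form (the article's specific regularized objective is truncated away) is:

```latex
% Y in {0,1}^{n x q}: observed label matrix, Omega: observation mask,
% U in R^{n x k}, V in R^{q x k}: latent factors of instances and labels.
\min_{U, V}\; \bigl\lVert \Omega \odot \bigl(Y - U V^{\top}\bigr) \bigr\rVert_F^2
  + \lambda \bigl( \lVert U \rVert_F^2 + \lVert V \rVert_F^2 \bigr),
\qquad \widehat{Y} = U V^{\top}
```

The recovered matrix Ŷ fills in the missing entries, and its quality determines both the label recovery and the downstream classifier, as the abstract notes.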

Partial multi-label learning (PML) deals with the problem where each training example is assigned multiple candidate labels, only a part of which are correct. To learn from such PML examples, a straightforward model tends to be misled by the noisy label set. To alleviate this problem, a coupled framework is established in this paper to train the desired model and perform the relabeling procedure alternately. In the relabeling procedure, instead of simply extracting relative confidences or deterministically eliminating low-confidence labels...

10.1109/icdm.2019.00038 article EN 2019 IEEE International Conference on Data Mining (ICDM) 2019-11-01
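The alternating scheme sketched in the abstract (train a model, then re-estimate soft label confidences, rather than hard-eliminating low-confidence candidates) might look roughly like the following; this is a hypothetical sketch with one logistic model per label, not the paper's coupled framework:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pml_relabel(X, cand, n_rounds=5):
    """X: (n, d) features; cand: (n, q) 0/1 candidate-label mask.
    Alternate: fit per-label models on current confidences, then refresh
    confidences from model outputs restricted to the candidate sets."""
    n, q = cand.shape
    conf = cand / cand.sum(axis=1, keepdims=True)   # uniform over candidates
    models = [LogisticRegression(max_iter=1000) for _ in range(q)]
    for _ in range(n_rounds):
        for j in range(q):
            y_j = (conf[:, j] >= 0.5 * conf.max(axis=1)).astype(int)
            if 0 < y_j.sum() < n:                   # skip degenerate labels
                models[j].fit(X, y_j, sample_weight=0.5 + conf[:, j])
        scores = np.column_stack([
            m.predict_proba(X)[:, 1] if hasattr(m, "coef_") else np.zeros(n)
            for m in models])
        conf = scores * cand                        # keep candidates only
        conf /= np.maximum(conf.sum(axis=1, keepdims=True), 1e-12)
    return conf, models
```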

Multiple-instance learning (MIL) solves the problem where training instances are grouped into bags, and a binary (positive or negative) label is provided for each bag. Most existing MIL studies need fully labeled bags to train an effective classifier, while it could be quite hard to collect such data in many real-world scenarios, due to the high cost of the labeling process. Fortunately, unlike fully labeled data, triplet comparison data can be collected in a more accurate and human-friendly way. Therefore, in this article, we for the first time investigate...

10.1145/3638776 article EN ACM Transactions on Knowledge Discovery from Data 2024-01-02

Multiple-instance learning (MIL) is an important weakly supervised binary classification problem, where training instances are arranged in bags, and each bag is assigned a positive or negative label. Most of the previous studies for MIL assume that training bags are fully labeled. However, in some real-world scenarios, it could be difficult to collect fully labeled bags, due to the expensive time and labor consumption of the labeling task. Fortunately, it is much easier for us to collect similar and dissimilar bag pairs (indicating whether two bags share the same label or not), because...

10.1145/3447548.3467318 article EN 2021-08-13
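Such pairwise supervision is cheap to simulate from any fully labeled MIL benchmark, which is how settings like this are usually evaluated; a small hypothetical helper:

```python
import random

def make_bag_pairs(bags, labels, n_pairs=1000, seed=0):
    """Build (bag_i, bag_j, similar?) tuples from labeled bags: a pair is
    'similar' iff the two bags share the same binary label."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        i, j = rng.sample(range(len(bags)), 2)
        pairs.append((bags[i], bags[j], labels[i] == labels[j]))
    return pairs
```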

Positive-unlabeled (PU) learning handles the problem of learning a predictive model from PU data. The past few years have witnessed a boom in PU learning, while existing algorithms are limited to binary classification and cannot be directly applied to multi-class classification. In this paper, we present an unbiased estimator of the original classification risk for multi-class PU learning, and show that direct empirical risk minimization suffers from severe overfitting, because the risk estimator is unbounded below. To address this problem, we propose an alternative risk estimator, and theoretically establish an estimation error...

10.1109/icdm50108.2020.00160 article EN 2020 IEEE International Conference on Data Mining (ICDM) 2020-11-01
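The multi-class estimator itself is truncated above, but the binary PU risk rewrite it generalizes is standard, and it makes the unboundedness concrete (π_p = p(y = +1)):

```latex
R(f) = \pi_p\, \mathbb{E}_{p(x \mid y=+1)}\bigl[\ell(f(x), +1)\bigr]
     + \mathbb{E}_{p(x)}\bigl[\ell(f(x), -1)\bigr]
     - \pi_p\, \mathbb{E}_{p(x \mid y=+1)}\bigl[\ell(f(x), -1)\bigr]
```

Empirically, the last two terms can jointly go negative, which is exactly the unbounded-below overfitting the abstract describes; non-negative corrections (as in nnPU) clamp that negative-class part at zero.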

To alleviate the data requirement for training effective binary classifiers in binary classification, many weakly supervised learning settings have been proposed. Among them, some consider using pairwise but not pointwise labels, when pointwise labels are not accessible due to privacy, confidentiality, or security reasons. However, as a pairwise label denotes whether or not two data points share a pointwise label, it cannot be easily collected if either point is equally likely to be positive or negative. Thus, in this paper, we propose a novel setting called...

10.48550/arxiv.2010.01875 preprint EN other-oa arXiv (Cornell University) 2020-01-01

Multi-label learning deals with the problem that each instance is associated with multiple labels simultaneously. Most of the existing approaches aim to improve the performance of multi-label learning by exploiting label correlations. Although the data augmentation technique is widely used in many machine learning tasks, it is still unclear whether data augmentation is helpful to multi-label learning. In this article, we propose to leverage the data augmentation technique to improve the performance of multi-label learning. Specifically, we first propose a novel data augmentation approach that performs clustering on the real examples and treats the cluster centers as virtual examples, and these...

10.48550/arxiv.2004.08113 preprint EN cc-by arXiv (Cornell University) 2020-01-01
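A minimal sketch of the clustering step described above, assuming the cluster centers inherit the averaged label vectors of their members (the averaging rule is an assumption here, since the abstract is truncated):

```python
import numpy as np
from sklearn.cluster import KMeans

def augment_multilabel(X, Y, n_clusters=50, seed=0):
    """Cluster real examples; each center becomes a virtual example paired
    with the mean label vector of the examples assigned to that cluster."""
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(X)
    Y_virtual = np.vstack([Y[km.labels_ == c].mean(axis=0)
                           for c in range(n_clusters)])
    return (np.vstack([X, km.cluster_centers_]),
            np.vstack([Y, Y_virtual]))
```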

Multiple-instance learning (MIL) is a significant weakly supervised learning problem, where the training data consists of bags containing multiple instances, and labels are provided at the bag level. Most previous MIL research required fully labeled bags. However, collecting such data is challenging due to labeling costs or privacy concerns. Fortunately, we can easily collect pairwise comparison information indicating that one bag is more likely to be positive than another. Therefore, we investigate a novel problem of learning a binary classifier from only...

10.1145/3696460 article EN ACM Transactions on Intelligent Systems and Technology 2024-09-29

Partial Label Learning (PLL) is a typical weakly supervised learning task, which assumes that each training instance is annotated with a set of candidate labels containing the ground-truth label. Recent PLL methods adopt identification-based disambiguation to alleviate the influence of false positive labels and achieve promising performance. However, they require all classes in the test set to have appeared in the training set, ignoring the fact that new classes will keep emerging in real applications. To address this issue, in this paper, we focus on the problem...

10.1145/3700137 article EN ACM Transactions on Intelligent Systems and Technology 2024-10-14
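The identification-based disambiguation mentioned above typically alternates model training with a confidence-update step like the following (a generic sketch; the paper's handling of newly emerging classes is not shown):

```python
import numpy as np

def disambiguate(softmax_probs, cand_mask):
    """Restrict class-posterior estimates to each instance's candidate set
    and renormalize, concentrating confidence on the likely ground truth."""
    masked = softmax_probs * cand_mask
    return masked / np.maximum(masked.sum(axis=1, keepdims=True), 1e-12)
```

The renormalized confidences serve as soft targets for the next training round, and the two steps repeat until they stabilize.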

Partial Label Learning (PLL) is a typical weakly supervised learning task, which assumes that each training instance is annotated with a set of candidate labels containing the ground-truth label. Recent PLL methods adopt identification-based disambiguation to alleviate the influence of false positive labels and achieve promising performance. However, they require all classes in the test set to have appeared in the training set, ignoring the fact that new classes will keep emerging in real applications. To address this issue, in this paper, we focus on the problem...

10.48550/arxiv.2409.19600 preprint EN arXiv (Cornell University) 2024-09-29

In multiple-instance learning (MIL), each training example is represented by a bag of instances. A bag is either negative if it contains no positive instances, or positive if it has at least one positive instance. Previous MIL methods generally assume that training bags are fully labeled. However, the exact labels of training examples may not be accessible, due to security, confidentiality, and privacy concerns. Fortunately, it could be easier for...

10.1109/tkde.2022.3232141 article EN IEEE Transactions on Knowledge and Data Engineering 2023-01-19
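The bag-labeling rule stated in the abstract is the standard MIL assumption and amounts to max-pooling instance-level predictions:

```python
import numpy as np

def bag_label(instance_scores, threshold=0.5):
    """A bag is positive iff at least one instance looks positive."""
    return int(np.max(instance_scores) > threshold)

print(bag_label([0.1, 0.2, 0.9]))  # 1: one instance exceeds the threshold
print(bag_label([0.1, 0.2, 0.3]))  # 0: no positive instances
```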

In contrast to the standard learning paradigm where all classes can be observed in the training data, learning with augmented classes (LAC) tackles the problem where unobserved classes may emerge in the test phase. Previous research showed that, given unlabeled data, an unbiased risk estimator (URE) can be derived, which can be minimized for LAC with theoretical guarantees. However, this URE is only restricted to a specific type of one-versus-rest loss functions for multi-class classification, making it not flexible enough when the loss needs to be changed with the dataset in practice. In this paper,...

10.1609/aaai.v37i8.26173 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2023-06-26
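The URE line of work rests on modeling the test distribution as a mixture of the training distribution and an augmented-class distribution, which lets the unobserved-class term be estimated from unlabeled data; a sketch of the underlying identity (θ is the mixture weight):

```latex
p_{\mathrm{te}}(x) = \theta\, p_{\mathrm{tr}}(x) + (1 - \theta)\, p_{\mathrm{ac}}(x)
\;\Longrightarrow\;
\mathbb{E}_{p_{\mathrm{ac}}}\bigl[g(x)\bigr]
  = \frac{\mathbb{E}_{p_{\mathrm{te}}}\bigl[g(x)\bigr]
        - \theta\, \mathbb{E}_{p_{\mathrm{tr}}}\bigl[g(x)\bigr]}{1 - \theta}
```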

Can we learn a multi-class classifier from only data of a single class? We show that, without any assumptions on the loss functions, models, and optimizers, we can successfully learn a multi-class classifier with a rigorous consistency guarantee when confidences (i.e., the class-posterior probabilities for all classes) are available. Specifically, we propose an empirical risk minimization framework that is loss-/model-/optimizer-independent. Instead of constructing a boundary between the given class and the other classes, our method can conduct discriminative...

10.48550/arxiv.2106.08864 preprint EN other-oa arXiv (Cornell University) 2021-01-01
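A sketch of the kind of risk rewrite such a framework relies on: with data only from class 1 plus full confidences p(y = k | x), importance weighting recovers the ordinary classification risk, since p(x) = π₁ p(x | y=1) / p(y=1 | x):

```latex
R(f) \;=\; \pi_1\, \mathbb{E}_{p(x \mid y=1)}\!\left[
  \sum_{k=1}^{K} \frac{p(y = k \mid x)}{p(y = 1 \mid x)}\,
  \ell\bigl(f(x), k\bigr) \right]
```

This is loss-, model-, and optimizer-independent in the sense the abstract claims: any ℓ, f, and minimization procedure can be plugged in.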

In contrast to the standard learning paradigm where all classes can be observed in the training data, learning with augmented classes (LAC) tackles the problem where unobserved classes may emerge in the test phase. Previous research showed that, given unlabeled data, an unbiased risk estimator (URE) can be derived, which can be minimized for LAC with theoretical guarantees. However, this URE is only restricted to a specific type of one-versus-rest loss functions for multi-class classification, making it not flexible enough when the loss needs to be changed with the dataset in practice. In this paper,...

10.48550/arxiv.2306.06894 preprint EN cc-by arXiv (Cornell University) 2023-01-01