Liangxiao Jiang

ORCID: 0000-0003-2201-3526
Research Areas
  • Bayesian Modeling and Causal Inference
  • Machine Learning and Data Classification
  • Imbalanced Data Classification Techniques
  • Data Mining Algorithms and Applications
  • Rough Sets and Fuzzy Logic
  • Mobile Crowdsensing and Crowdsourcing
  • Data Stream Mining Techniques
  • Text and Document Classification Technologies
  • Anomaly Detection Techniques and Applications
  • Advanced Text Analysis Techniques
  • Machine Learning and Algorithms
  • Music and Audio Processing
  • Face and Expression Recognition
  • Web Data Mining and Analysis
  • Advanced Statistical Methods and Models
  • Data Management and Algorithms
  • Internet Traffic Analysis and Secure E-voting
  • Quantum Mechanics and Applications
  • Image Retrieval and Classification Techniques
  • Cold Atom Physics and Bose-Einstein Condensates
  • Geological and Geochemical Analysis
  • Geochemistry and Geologic Mapping
  • Spectroscopy and Chemometric Analyses
  • Evolutionary Algorithms and Applications
  • Domain Adaptation and Few-Shot Learning

China University of Geosciences
2016-2025

China University of Geosciences (Beijing)
2025

Bengbu Medical College
2024

Ministry of Education of the People's Republic of China
2021-2023

Harvard University
2009

Because learning an optimal Bayesian network classifier is an NP-hard problem, learning improved naive Bayes classifiers has attracted much attention from researchers. In this paper, we summarize the existing improved algorithms and propose a novel model: hidden naive Bayes (HNB). In HNB, a hidden parent is created for each attribute, which combines the influences of all the other attributes. We experimentally test HNB in terms of classification accuracy, using the 36 UCI data sets selected by Weka, and compare it to naive Bayes (NB), selective Bayesian classifiers (SBC),...

10.1109/tkde.2008.234 article EN IEEE Transactions on Knowledge and Data Engineering 2008-12-24
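The hidden-parent construction described above can be sketched on a small hypothetical dataset. Under the usual HNB formulation, the hidden parent of each attribute is a conditional-mutual-information-weighted mixture of one-dependence estimates; the toy data, variable names, and Laplace-smoothing choices below are illustrative assumptions, not the paper's implementation or its UCI benchmarks:

```python
import math
from itertools import product

# Hypothetical toy dataset (not one of the paper's 36 UCI data sets):
# each row is (discrete attribute values, class label).
DATA = [
    ((0, 0, 1), "yes"), ((0, 1, 1), "yes"), ((0, 0, 0), "yes"), ((0, 1, 1), "yes"),
    ((1, 1, 0), "no"),  ((1, 0, 0), "no"),  ((1, 1, 1), "no"),  ((1, 0, 0), "no"),
]
N, n_attrs = len(DATA), 3
classes = sorted({c for _, c in DATA})
vals = [sorted({x[i] for x, _ in DATA}) for i in range(n_attrs)]

def p_class(c):
    # Laplace-smoothed class prior P(c)
    return (sum(1 for _, y in DATA if y == c) + 1) / (N + len(classes))

def p_cond(i, v, j, w, c):
    # Laplace-smoothed P(A_i = v | A_j = w, C = c)
    num = sum(1 for x, y in DATA if x[i] == v and x[j] == w and y == c) + 1
    den = sum(1 for x, y in DATA if x[j] == w and y == c) + len(vals[i])
    return num / den

def cmi(i, j):
    # Empirical conditional mutual information I(A_i; A_j | C);
    # zero-count joint terms contribute nothing and are skipped.
    total = 0.0
    for v, w, c in product(vals[i], vals[j], classes):
        n_vwc = sum(1 for x, y in DATA if x[i] == v and x[j] == w and y == c)
        if n_vwc == 0:
            continue
        n_c = sum(1 for _, y in DATA if y == c)
        n_vc = sum(1 for x, y in DATA if x[i] == v and y == c)
        n_wc = sum(1 for x, y in DATA if x[j] == w and y == c)
        total += (n_vwc / N) * math.log(n_vwc * n_c / (n_vc * n_wc))
    return total

def hnb_predict(x):
    # Hidden parent of A_i: CMI-weighted mixture of all one-dependence estimates.
    best, best_score = None, -math.inf
    for c in classes:
        score = math.log(p_class(c))
        for i in range(n_attrs):
            w = [cmi(i, j) if j != i else 0.0 for j in range(n_attrs)]
            s = sum(w)
            if s <= 0:  # degenerate case: fall back to uniform weights
                w = [0.0 if j == i else 1.0 for j in range(n_attrs)]
                s = n_attrs - 1
            hp = sum(w[j] / s * p_cond(i, x[i], j, x[j], c)
                     for j in range(n_attrs) if j != i)
            score += math.log(hp)
        if score > best_score:
            best, best_score = c, score
    return best
```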

10.1016/j.engappai.2016.02.002 article EN Engineering Applications of Artificial Intelligence 2016-02-27

Due to its simplicity, efficiency, and efficacy, naive Bayes (NB) has continued to be one of the top 10 algorithms in the data mining and machine learning community. Of the numerous approaches to alleviating its conditional independence assumption, feature weighting places more emphasis on highly predictive features than on those that are less predictive. In this paper, we argue that the feature weights for NB should be highly correlated with the class (maximum mutual relevance), yet uncorrelated with one another (minimum redundancy). Based on this premise, we propose a...

10.1109/tkde.2018.2836440 article EN IEEE Transactions on Knowledge and Data Engineering 2018-05-15
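The maximum-relevance, minimum-redundancy idea can be sketched as follows. This is a minimal illustration only: the weighting rule used here (mutual-information relevance minus average pairwise redundancy, floored at a small positive value) and the toy dataset are assumptions for demonstration, not the paper's exact weighting formula:

```python
import math
from collections import Counter

# Hypothetical toy dataset: rows of discrete attribute values plus a class label.
ROWS = [((0, 0, 0), 0), ((0, 1, 0), 0), ((0, 0, 1), 0),
        ((1, 1, 0), 1), ((1, 0, 1), 1), ((1, 1, 1), 1)]
N, n_attrs = len(ROWS), 3

def mi(get_a, get_b):
    """Empirical mutual information I(A; B) between two discrete variables."""
    pa, pb, pab = Counter(), Counter(), Counter()
    for x, y in ROWS:
        a, b = get_a(x, y), get_b(x, y)
        pa[a] += 1; pb[b] += 1; pab[(a, b)] += 1
    return sum((n / N) * math.log(n * N / (pa[a] * pb[b]))
               for (a, b), n in pab.items())

# Relevance: I(A_i; C). Redundancy: average I(A_i; A_j) over the other attributes.
relevance = [mi(lambda x, y, i=i: x[i], lambda x, y: y) for i in range(n_attrs)]
redundancy = [sum(mi(lambda x, y, i=i: x[i], lambda x, y, j=j: x[j])
                  for j in range(n_attrs) if j != i) / (n_attrs - 1)
              for i in range(n_attrs)]
# Sketch weighting: relevance minus redundancy, floored at a small positive value.
weights = [max(r - d, 0.01) for r, d in zip(relevance, redundancy)]

def predict(x):
    """Feature-weighted NB: argmax_c log P(c) + sum_i w_i * log P(a_i | c)."""
    best, best_s = None, -math.inf
    for c in {y for _, y in ROWS}:
        nc = sum(1 for _, y in ROWS if y == c)
        s = math.log((nc + 1) / (N + 2))  # Laplace-smoothed prior
        for i, w in enumerate(weights):
            nvc = sum(1 for xx, y in ROWS if y == c and xx[i] == x[i])
            s += w * math.log((nvc + 1) / (nc + 2))  # smoothed, weighted likelihood
        if s > best_s:
            best, best_s = c, s
    return best
```

With this toy data, attribute 0 determines the class, so it receives a much larger weight than the two weakly informative attributes.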

KNN (k-nearest-neighbor) has been widely used as an effective classification model. In this paper, we summarize three main shortcomings confronting KNN and single out methods for overcoming them. Keeping to these methods, we do our best to survey some improved algorithms and experimentally test their effectiveness. Besides, we discuss directions for future study on KNN.

10.1109/fskd.2007.552 article EN 2007-01-01
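For context, plain KNN and one commonly studied refinement, distance-weighted voting, can be sketched as follows; the 2-D toy data is hypothetical, and the refinement shown is a generic one rather than necessarily one of the methods the paper surveys:

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    """Plain k-NN: majority vote among the k nearest training points (Euclidean)."""
    neighbors = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

def knn_weighted(train, x, k=3):
    """Distance-weighted vote: closer neighbors count more (one common refinement)."""
    neighbors = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    votes = Counter()
    for pt, label in neighbors:
        votes[label] += 1.0 / (math.dist(pt, x) + 1e-9)  # avoid division by zero
    return votes.most_common(1)[0][0]

# Hypothetical 2-D toy data: two well-separated clusters.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b"), ((1.1, 0.9), "b")]
```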

10.1007/s10115-014-0746-y article EN Knowledge and Information Systems 2014-04-09

10.1016/j.patcog.2020.107674 article EN Pattern Recognition 2020-09-25

Crowdsourcing services provide a fast, efficient, and cost-effective way to obtain large amounts of labeled data for supervised learning. Unfortunately, the quality of crowdsourced labels cannot satisfy the standards of practical applications. Ground-truth inference, simply called label integration, designs proper aggregation methods to infer the unknown true label of each instance (sample) from the multiple noisy label set provided by ordinary crowd labelers (workers). However, nearly all existing label integration methods focus solely on per...

10.1109/tnnls.2021.3082496 article EN IEEE Transactions on Neural Networks and Learning Systems 2021-05-31
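The simplest label-integration baseline, per-instance majority voting, can be sketched as follows; the instance ids and worker labels are hypothetical, and this per-instance baseline is only the starting point that integration methods (including the one proposed above) improve on:

```python
from collections import Counter

def majority_vote(noisy_labels):
    """Baseline label integration: for each instance, take the label given
    most often by the crowd workers who labeled it."""
    return {inst: Counter(labels).most_common(1)[0][0]
            for inst, labels in noisy_labels.items()}

# Hypothetical crowdsourced label sets: instance id -> labels from several workers.
noisy = {"x1": ["pos", "pos", "neg"],
         "x2": ["neg", "neg", "neg"],
         "x3": ["pos", "neg", "pos", "pos"]}
```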

Due to being fast, easy to implement, and relatively effective, some state-of-the-art naive Bayes text classifiers with the strong assumption of conditional independence among attributes, such as multinomial naive Bayes, complement naive Bayes, and the one-versus-all-but-one model, have received a great deal of attention from researchers in the domain of text classification. In this article, we revisit these classifiers and empirically compare their classification performance on a large number of widely used benchmark datasets. Then, we propose a locally...

10.1080/0952813x.2012.721010 article EN Journal of Experimental & Theoretical Artificial Intelligence 2012-10-22
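Multinomial naive Bayes, the first of the text classifiers mentioned above, can be sketched on a hypothetical two-class corpus; the documents and the Laplace-smoothing choice are illustrative assumptions, not the paper's benchmark setup:

```python
import math
from collections import Counter

# Hypothetical tiny corpus: (document tokens, class).
DOCS = [("cheap pills buy now".split(), "spam"),
        ("buy cheap watches now".split(), "spam"),
        ("meeting agenda for monday".split(), "ham"),
        ("monday project meeting notes".split(), "ham")]

vocab = sorted({w for toks, _ in DOCS for w in toks})
classes = sorted({c for _, c in DOCS})
doc_counts = Counter(c for _, c in DOCS)
word_counts = {c: Counter() for c in classes}
for toks, c in DOCS:
    word_counts[c].update(toks)

def mnb_predict(tokens):
    """Multinomial NB: argmax_c log P(c) + sum_w log P(w | c), Laplace-smoothed."""
    best, best_s = None, -math.inf
    for c in classes:
        total = sum(word_counts[c].values())
        s = math.log(doc_counts[c] / len(DOCS))
        for w in tokens:
            s += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        if s > best_s:
            best, best_s = c, s
    return best
```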

Naive Bayes (NB) is a probability-based classification model built on the attribute independence assumption. However, in many real-world data mining applications, this assumption is often violated. Responding to this fact, researchers have made a substantial amount of effort to improve the accuracy of NB by weakening the assumption. For a recent example, averaged one-dependence estimators (AODE) have been proposed, which weaken the assumption by averaging all models from a restricted class of one-dependence classifiers. In AODE, all one-dependence classifiers have the same weights and are treated...

10.1080/0952813x.2011.639092 article EN Journal of Experimental & Theoretical Artificial Intelligence 2011-12-07
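AODE's equal-weight averaging over one-dependence estimators, the starting point this paper revisits, can be sketched as follows; the discrete toy dataset and the smoothing details are illustrative assumptions:

```python
# Hypothetical discrete dataset: each row is (attribute values, class label).
DATA = [((0, 0, 1), 0), ((0, 1, 1), 0), ((0, 0, 0), 0), ((0, 1, 0), 0),
        ((1, 1, 0), 1), ((1, 0, 0), 1), ((1, 1, 1), 1), ((1, 0, 1), 1)]
N, n_attrs = len(DATA), 3
classes = sorted({c for _, c in DATA})
vals = [sorted({x[i] for x, _ in DATA}) for i in range(n_attrs)]

def aode_score(x, c):
    """AODE: average P(c, a_j) * prod_{i != j} P(a_i | c, a_j) over every
    super-parent attribute j, giving each one-dependence estimator the same
    weight. Laplace smoothing throughout."""
    scores = []
    for j in range(n_attrs):
        njc = sum(1 for xx, y in DATA if y == c and xx[j] == x[j])
        s = (njc + 1) / (N + len(classes) * len(vals[j]))  # P(c, a_j)
        for i in range(n_attrs):
            if i == j:
                continue
            nijc = sum(1 for xx, y in DATA
                       if y == c and xx[j] == x[j] and xx[i] == x[i])
            s *= (nijc + 1) / (njc + len(vals[i]))  # P(a_i | c, a_j)
        scores.append(s)
    return sum(scores) / n_attrs

def aode_predict(x):
    return max(classes, key=lambda c: aode_score(x, c))
```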

10.1016/j.patrec.2014.04.017 article EN Pattern Recognition Letters 2014-04-30

Feature selection is an optional preprocessing procedure that is frequently used to improve the classification accuracy of a machine learning algorithm by removing irrelevant and/or redundant features. However, in many real-world applications, the test cost is also required for making optimal decisions, in addition to the classification accuracy. To the best of our knowledge, thus far, few studies have been conducted on test-cost-sensitive feature selection (TCSFS). In TCSFS, the objectives are twofold: 1) to improve the classification accuracy and 2) to decrease the test cost. Therefore, in fact, it...

10.1109/tsmc.2019.2904662 article EN IEEE Transactions on Systems Man and Cybernetics Systems 2019-01-01
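The twofold objective can be illustrated with a simple greedy forward search that trades leave-one-out accuracy against total test cost. Everything here is invented for illustration: the combined objective, the trade-off parameter `lam`, the per-feature costs, and the toy data are assumptions, not the paper's algorithm:

```python
# Hypothetical data: feature 0 is informative and cheap, feature 3 is
# informative but expensive, features 1-2 are noise.
X = [(0.0, 0.9, 0.2, 0.1), (0.1, 0.1, 0.8, 0.0), (0.2, 0.5, 0.5, 0.2),
     (1.0, 0.2, 0.7, 0.9), (0.9, 0.8, 0.1, 1.0), (1.1, 0.4, 0.4, 0.8)]
Y = [0, 0, 0, 1, 1, 1]
COST = [1.0, 1.0, 1.0, 10.0]  # hypothetical per-feature test costs

def loo_accuracy(features):
    """Leave-one-out 1-NN accuracy using only the selected features."""
    if not features:
        return 0.0
    correct = 0
    for i in range(len(X)):
        best_j = min((j for j in range(len(X)) if j != i),
                     key=lambda j: sum((X[i][f] - X[j][f]) ** 2 for f in features))
        correct += Y[best_j] == Y[i]
    return correct / len(X)

def greedy_tcsfs(lam=0.05):
    """Greedy forward selection maximizing accuracy - lam * total test cost,
    a sketch of the twofold TCSFS objective, not the paper's exact method."""
    selected, best_obj = [], 0.0
    while True:
        cands = [f for f in range(len(COST)) if f not in selected]
        if not cands:
            break
        def objective(f):
            chosen = selected + [f]
            return loo_accuracy(chosen) - lam * sum(COST[s] for s in chosen)
        f_best = max(cands, key=objective)
        if objective(f_best) <= best_obj:
            break  # no candidate improves the combined objective
        selected.append(f_best)
        best_obj = objective(f_best)
    return selected
```

On this toy data the search keeps only the cheap informative feature: the expensive feature 3 is just as accurate, but its cost makes the combined objective worse.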

10.1016/j.knosys.2016.06.003 article EN publisher-specific-oa Knowledge-Based Systems 2016-06-03