- Topic Modeling
- Natural Language Processing Techniques
- Text and Document Classification Technologies
- Domain Adaptation and Few-Shot Learning
- Face and Expression Recognition
- Advanced Text Analysis Techniques
- Bayesian Methods and Mixture Models
- Advanced Graph Neural Networks
- Computational and Text Analysis Methods
- Machine Learning and Data Classification
- Data Quality and Management
- Multimodal Machine Learning Applications
- Machine Learning and Algorithms
- Advanced SAR Imaging Techniques
- Image Retrieval and Classification Techniques
- Advanced Neural Network Applications
- Complex Network Analysis Techniques
- Radar Systems and Signal Processing
- Algorithms and Data Compression
- Intelligent Tutoring Systems and Adaptive Learning
- Recommender Systems and Techniques
- Music and Audio Processing
- Biomedical Text Mining and Ontologies
- Advanced Database Systems and Queries
- Speech Recognition and Synthesis
- Monash University (2015-2024)
- Xidian University (2005-2024)
- Australian Regenerative Medicine Institute (2023-2024)
- Nanjing University of Science and Technology (2011-2021)
- PLA Information Engineering University (2004-2020)
- Macquarie University (2012-2018)
- South China University of Technology (2016)
- Australian National University (2008-2013)
- Duke University (2009-2012)
- University of Electronic Science and Technology of China (2012)
We develop a novel maximum neighborhood margin discriminant projection (MNMDP) technique for dimensionality reduction of high-dimensional data. It utilizes both the local information and the class information to model the intraclass and interclass neighborhood scatters. By maximizing the margin between intraclass and interclass neighborhoods of all points, MNMDP can not only detect the true intrinsic manifold structure of the data but also strengthen the pattern discrimination among different classes. To verify the classification performance of the proposed MNMDP, it is applied to the PolyU HRF and FKP...
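As a rough illustration of this family of methods, the sketch below builds intraclass and interclass neighborhood scatter matrices from each point's k nearest same-class and different-class neighbors, then keeps the top eigenvectors of their difference. The difference-of-scatters objective and the neighbor selection here are assumptions for illustration, not the paper's exact MNMDP formulation.

```python
import numpy as np

def neighborhood_scatter_projection(X, y, k=5, n_components=2):
    """Sketch of a neighborhood-margin discriminant projection.

    Builds intraclass (same-class) and interclass (different-class)
    neighborhood scatter matrices, then keeps the eigenvectors that
    maximize interclass minus intraclass scatter. Illustrative only.
    """
    n, d = X.shape
    S_w = np.zeros((d, d))  # intraclass neighborhood scatter
    S_b = np.zeros((d, d))  # interclass neighborhood scatter
    for i in range(n):
        diffs = X - X[i]
        dists = np.einsum("ij,ij->i", diffs, diffs)  # squared distances
        same = np.where(y == y[i])[0]
        same = same[same != i]
        other = np.where(y != y[i])[0]
        for idx, S in ((same, S_w), (other, S_b)):
            for j in idx[np.argsort(dists[idx])[:k]]:
                v = (X[j] - X[i])[:, None]
                S += v @ v.T
    # assumed objective: maximize interclass scatter minus intraclass scatter
    evals, evecs = np.linalg.eigh(S_b - S_w)
    W = evecs[:, np.argsort(evals)[::-1][:n_components]]
    return W  # project new data with X @ W
```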
Probabilistic topic models are widely used to discover latent topics in document collections, while latent feature vector representations of words have been used to obtain high performance in many NLP tasks. In this paper, we extend two different Dirichlet multinomial topic models by incorporating word representations trained on very large corpora to improve the word-topic mapping learnt on a smaller corpus. Experimental results show that by using information from external corpora, our new models produce significant improvements in topic coherence, document clustering and...
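A minimal sketch of the general idea: blend the count-based topic-word distribution with an embedding-based component. The softmax link and the fixed mixing weight `lam` are assumptions for illustration; the paper's latent-feature formulation and inference are more involved.

```python
import numpy as np

def mixed_topic_word_probs(phi_counts, word_vecs, topic_vecs, lam=0.6):
    """Blend a Dirichlet-multinomial topic-word distribution with an
    embedding-based ("latent feature") component. phi_counts is a
    (topics, vocab) count matrix; word_vecs and topic_vecs share a
    latent dimension. Illustrative, not the paper's exact model.
    """
    # count-based component: smoothed relative frequencies per topic
    mult = (phi_counts + 0.01) / (phi_counts + 0.01).sum(axis=1, keepdims=True)
    # embedding component: softmax over the vocabulary for each topic
    scores = topic_vecs @ word_vecs.T            # (topics, vocab)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    emb = np.exp(scores)
    emb /= emb.sum(axis=1, keepdims=True)
    return lam * emb + (1 - lam) * mult          # (topics, vocab)
```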
Knowledge distillation (KD) is a prevalent model compression technique in deep learning, aiming to leverage the knowledge of a large teacher model to enhance the training of a smaller student model. It has found success in deploying compact models in intelligent applications like transportation, smart health, and distributed intelligence. Current KD methods primarily fall into two categories: offline and online distillation. Offline methods involve a one-way distillation process, transferring unvaried knowledge from teacher to student, while online methods enable the simultaneous...
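The offline, one-way setting reduces to the classic distillation objective: a temperature-softened KL term pulling the student toward a frozen teacher, plus cross-entropy on the hard labels. This is the standard formulation, shown as a PyTorch sketch, rather than any specific method from the text.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classic offline KD loss: soften both logit sets with temperature T,
    match the student to the (detached) teacher via KL divergence, and mix
    with the usual cross-entropy. T and alpha are tunable hyperparameters.
    """
    soft_teacher = F.softmax(teacher_logits.detach() / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # scale by T^2 to keep gradient magnitudes comparable across temperatures
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```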
Radar high-resolution range profile (HRRP) is very sensitive to time-shift and target-aspect variation; therefore, HRRP-based radar automatic target recognition (RATR) requires efficient time-shift-invariant features and robust feature templates. Although higher-order spectra are a set of well-known time-shift-invariant features, direct use of them (except for the power spectrum) is impractical due to their computational complexity. A method for calculating the Euclidean distance in the higher-order spectra feature space is proposed in this paper, which avoids computing the spectra directly, effectively reducing...
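The power-spectrum case is easy to see concretely: since |FFT(x)|^2 discards phase, a circular time-shift of the profile leaves the feature unchanged, so Euclidean distances between such features ignore alignment. A minimal demonstration (illustrative, not the paper's method):

```python
import numpy as np

def power_spectrum_feature(hrrp):
    """Time-shift-invariant HRRP feature: the power spectrum discards the
    phase that encodes circular shifts, then is scale-normalized so that
    Euclidean distances between features are comparable across profiles.
    """
    spec = np.abs(np.fft.fft(hrrp)) ** 2
    return spec / (np.linalg.norm(spec) + 1e-12)

# a circularly shifted profile yields (numerically) the same feature
x = np.random.randn(256)
assert np.allclose(power_spectrum_feature(x),
                   power_spectrum_feature(np.roll(x, 37)))
```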
A factor analysis model based on multitask learning (MTL) is developed to characterize the FFT-magnitude feature of the complex high-resolution range profile (HRRP), motivated by the problem of radar automatic target recognition (RATR). The MTL mechanism makes it possible to appropriately share information among samples from different target-aspects and to learn the aspect-dependent parameters collectively, thus offering the potential to improve the overall performance with a small training data size. In addition, since...
Topic modelling has been a successful technique for text analysis for almost twenty years. When topic modelling met deep neural networks, there emerged a new and increasingly popular research area, neural topic models, with nearly a hundred models developed and a wide range of applications in language understanding such as text generation, summarisation and language models. There is a need to summarise research developments and discuss open problems and future directions. In this paper, we provide a focused yet comprehensive overview for interested researchers in the AI...
Knowledge distillation is a simple yet effective technique for deep model compression, which aims to transfer the knowledge learned by a large teacher model to a small student model. To mimic how a teacher teaches a student, existing methods mainly adopt a unidirectional knowledge transfer, where the knowledge extracted from different intermediate layers of the teacher is used to guide the student. However, it turns out that students can learn more effectively through multi-stage learning with self-reflection in real-world education scenarios, which is nevertheless...
Early identification of pregnant women at high risk of developing gestational diabetes mellitus (GDM) is desirable, as effective lifestyle interventions are available to prevent GDM and to reduce its associated adverse outcomes. A personalised probability of developing GDM during pregnancy can be determined using a risk prediction model. These models extend from traditional statistics to machine learning methods; however, their accuracy remains sub-optimal. We aimed to compare multiple algorithms to develop GDM risk prediction models, then determine the optimal model for...
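A hedged sketch of this kind of model comparison with scikit-learn, using synthetic, class-imbalanced data as a stand-in for clinical risk factors; the models, metric, and data here are illustrative choices, not those of the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# synthetic stand-in for early-pregnancy risk factors; real work would use
# clinical variables plus proper calibration and external validation
X, y = make_classification(n_samples=1000, n_features=12, weights=[0.85],
                           random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```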
The K-nearest neighbor (KNN) rule is a simple and effective algorithm in pattern classification. In this article, we propose a local mean-based k-nearest centroid neighbor classifier that assigns to each query pattern the class label with the nearest local centroid mean vector, so as to improve the classification performance. The proposed scheme not only takes into account the proximity and spatial distribution of the k neighbors, but also utilizes the local mean vector of the neighbors from each class in making the decision. In the proposed classifier, the local mean vector for each class is well positioned to sufficiently capture the class distribution information. In order to...
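A simplified sketch of the local-mean idea: per class, average the query's k nearest training points into a local mean vector and assign the class whose local mean is closest. The paper's centroid-neighbor selection rule is more elaborate than this plain nearest-neighbor version.

```python
import numpy as np

def local_mean_knn_predict(X_train, y_train, query, k=3):
    """Local-mean-based nearest-neighbor classification (simplified).

    For each class, find the k training points nearest to the query,
    average them into a local mean, and return the class whose local
    mean lies closest to the query.
    """
    best_label, best_dist = None, np.inf
    for label in np.unique(y_train):
        Xc = X_train[y_train == label]
        d = np.linalg.norm(Xc - query, axis=1)
        local_mean = Xc[np.argsort(d)[:k]].mean(axis=0)
        dist = np.linalg.norm(local_mean - query)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```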
Transformer has obtained promising results in the cognitive speech signal processing field, which is of interest in various applications ranging from emotion to neurocognitive disorder analysis. However, most works treat speech as a whole, leading to the neglect of the pronunciation structure that is unique to speech and reflects the cognitive process. Meanwhile, Transformer carries a heavy computational burden due to its full attention operation. In this paper, a hierarchical efficient framework, called SpeechFormer, which considers the structural characteristics of speech,...
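One way to picture the efficiency side is attention restricted to local, speech-unit-scale windows instead of full quadratic attention over all frames. The toy NumPy routine below is an assumption-laden stand-in for that idea, not SpeechFormer's actual architecture.

```python
import numpy as np

def windowed_self_attention(X, window=8):
    """Toy local attention: each frame attends only to frames within a
    fixed window (a stand-in for phoneme/word-scale units), avoiding the
    full O(n^2) attention over the whole utterance.
    """
    n, d = X.shape
    out = np.zeros_like(X)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = X[lo:hi] @ X[i] / np.sqrt(d)   # scaled dot-product
        w = np.exp(scores - scores.max())       # stable softmax
        w /= w.sum()
        out[i] = w @ X[lo:hi]
    return out
```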
Knowledge distillation (KD), as an efficient and effective model compression technique, has received considerable attention in deep learning. The key to its success is transferring knowledge from a large teacher network to a small student network. However, most existing KD methods consider only one type of knowledge, learned from either instance features or instance relations, via a specific distillation strategy, failing to explore the idea of combining different types of knowledge with different strategies. Moreover, the widely used offline distillation also suffers from limited learning...
Continual learning (CL) is a machine learning paradigm that accumulates knowledge while learning sequentially. The main challenge in CL is the catastrophic forgetting of previously seen tasks, which occurs due to shifts in the probability distribution. To retain knowledge, existing CL models often save some past examples and revisit them when learning new tasks. As a result, the size of the saved samples dramatically increases as more tasks are seen. To address this issue, we introduce an efficient CL method that stores only a few samples while achieving good performance....
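A minimal sketch of such an exemplar memory, assuming a fixed per-class budget and random (reservoir-style) retention; the paper's sample-selection criterion is presumably smarter than random, so treat this as a baseline illustration of rehearsal.

```python
import random
from collections import defaultdict

class ExemplarMemory:
    """Tiny rehearsal buffer: keeps at most `per_class` examples per class
    via reservoir sampling, and replays them alongside new-task data to
    mitigate catastrophic forgetting.
    """
    def __init__(self, per_class=5, seed=0):
        self.per_class = per_class
        self.store = defaultdict(list)
        self.seen = defaultdict(int)
        self.rng = random.Random(seed)

    def add(self, x, y):
        self.seen[y] += 1
        if len(self.store[y]) < self.per_class:
            self.store[y].append(x)
        else:  # reservoir sampling keeps a uniform random subset
            j = self.rng.randrange(self.seen[y])
            if j < self.per_class:
                self.store[y][j] = x

    def replay(self):
        return [(x, y) for y, xs in self.store.items() for x in xs]
```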
Obtaining training data for multi-document summarization (MDS) is time-consuming and resource-intensive, so recent neural models can only be trained on limited domains. In this paper, we propose SummPip: an unsupervised method for multi-document summarization, in which we convert the original documents to a sentence graph, taking both linguistic and deep representation into account, then apply spectral clustering to obtain multiple clusters of sentences, and finally compress each cluster to generate the final summary. Experiments on...
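A skeleton of the clustering stage, using TF-IDF cosine similarities as a stand-in for the paper's combined linguistic and deep sentence representations; the graph-construction details and the compression step are omitted here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cluster_sentences(sentences, n_clusters=3):
    """SummPip-style skeleton: embed sentences, build a similarity graph,
    and spectrally cluster it. Each cluster would then be compressed into
    one summary sentence.
    """
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)               # sentence-graph affinities
    np.clip(sim, 0.0, None, out=sim)             # affinities must be >= 0
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=0).fit_predict(sim)
    return labels  # cluster id per sentence
```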
Besides the text content, documents and their associated words usually come with rich sets of meta information, such as categories of documents and semantic/syntactic features of words, like those encoded in word embeddings. Incorporating such meta information directly into the generative process of topic models can improve modelling accuracy and topic quality, especially in the case where the word-occurrence information in the training data is insufficient. In this paper, we present a topic model, called MetaLDA, which is able to leverage either document or word meta information, or both of them...
Graph neural networks (GNNs) are important tools for transductive learning tasks, such as node classification in graphs, due to their expressive power in capturing complex interdependencies between nodes. To enable GNN learning, existing works typically assume that labeled nodes, from two or multiple classes, are provided, so that a discriminative classifier can be learned from the labeled data. In reality, this assumption might be too restrictive for applications, as users may only provide labels of interest in a single class for a small...
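For context, the basic graph-convolution step that such node classifiers build on, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W): each node averages its self-included neighborhood before a shared linear map. This is the standard GCN formulation, independent of the labeling assumption discussed above.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step on adjacency A, features H, weights W:
    add self-loops, symmetrically normalize by node degree, aggregate
    neighbor features, then apply a shared linear map and ReLU.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```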