- Anomaly Detection Techniques and Applications
- Machine Learning and Data Classification
- Network Security and Intrusion Detection
- Face and Expression Recognition
- Time Series Analysis and Forecasting
- Advanced Clustering Algorithms Research
- Data Stream Mining Techniques
- Neural Networks and Applications
- Domain Adaptation and Few-Shot Learning
- Data Management and Algorithms
- Artificial Immune Systems Applications
- Data Mining Algorithms and Applications
- Machine Learning and ELM
- Teaching and Learning Programming
- Imbalanced Data Classification Techniques
- Complex Network Analysis Techniques
- Evolutionary Algorithms and Applications
- Energy Load and Power Forecasting
- Modular Robots and Swarm Intelligence
- Tactile and Sensory Interactions
- Advanced Image and Video Retrieval Techniques
- Video Surveillance and Tracking Methods
- Machine Learning and Algorithms
- Statistical and Numerical Algorithms
- Robotic Locomotion and Control
Deakin University
2019-2022
Federation University
2016-2020
Monash University
2009-2014
Australian Regenerative Medicine Institute
2013
The first successful isolation-based anomaly detector, i.e., iForest, uses trees as a means to perform isolation. Although it has been shown to have advantages over existing anomaly detectors, we have identified four weaknesses: its inability to detect local anomalies, anomalies with a high percentage of irrelevant attributes, anomalies that are masked by axis-parallel clusters, and anomalies in multimodal data sets. To overcome these weaknesses, this paper shows that an alternative isolation mechanism is required and thus presents iNNE, or isolation using...
This paper presents iNNE (isolation using Nearest Neighbour Ensemble), an efficient nearest-neighbour-based anomaly detection method by isolation. iNNE runs significantly faster than existing nearest-neighbour methods such as Local Outlier Factor, especially in data sets having thousands of dimensions or millions of instances. This is because the proposed method has linear time complexity and constant space complexity. Compared with the tree-based isolation method iForest, iNNE overcomes three weaknesses of iForest that we have identified,...
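The abstract only names the mechanism, so below is a minimal NumPy sketch of the nearest-neighbour-ensemble idea as we read it: each ensemble member draws a small subsample, builds a hypersphere around every subsample point with radius equal to the distance to its nearest neighbour within the subsample, and a query is scored as more anomalous the less it is covered by tight hyperspheres. The scoring formula and parameter names (`psi`, `n_estimators`) are illustrative assumptions, not the authors' exact definition.

```python
import numpy as np

def inne_score(X, queries, n_estimators=100, psi=16, rng=None):
    """Simplified isolation-by-nearest-neighbour-ensemble scores (higher = more anomalous).

    Assumption: per ensemble member, a query inside some hypersphere gets a partial
    score 1 - r(nn of centre)/r(centre); a query covered by no hypersphere gets 1.
    Scores are averaged over the ensemble.
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    scores = np.zeros(len(queries))
    for _ in range(n_estimators):
        idx = rng.choice(n, size=min(psi, n), replace=False)
        S = X[idx]                                   # subsample points act as hypersphere centres
        d = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        nn = d.argmin(axis=1)                        # nearest neighbour of each centre
        radius = d[np.arange(len(S)), nn]            # hypersphere radius = distance to that neighbour
        dq = np.linalg.norm(queries[:, None, :] - S[None, :, :], axis=2)
        covered = dq <= radius                       # which hyperspheres cover each query
        member_scores = np.ones(len(queries))        # uncovered queries are maximally anomalous
        for i in range(len(queries)):
            c = np.where(covered[i])[0]
            if len(c):
                best = c[np.argmin(radius[c])]       # tightest covering hypersphere
                member_scores[i] = 1.0 - radius[nn[best]] / radius[best]
        scores += member_scores
    return scores / n_estimators
```

Note that, with `psi` and `n_estimators` fixed, scoring a query costs a constant number of distance computations regardless of the full data size, which is consistent with the linear-time, constant-space claim in the abstract.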
This paper introduces a new ensemble approach, Feature-Subspace Aggregating (Feating), which builds local models instead of global models. Feating is a generic ensemble approach that can enhance the predictive performance of both stable and unstable learners. In contrast, most existing ensemble approaches improve unstable learners only. Our analysis shows that the new approach reduces the execution time to generate a model in an ensemble through an increased level of localisation in Feating. Our empirical evaluation shows that Feating performs significantly better than Boosting, Random...
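As a rough illustration of "local models over feature subspaces" (not the authors' exact construction), the sketch below partitions the data on a randomly chosen feature subset using median splits, fits a base learner locally in each cell, and averages votes across ensemble members. The split rule, the choice of Gaussian Naive Bayes as the stable base learner, and the fallback behaviour are assumptions for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB   # a "stable" base learner, in the abstract's sense

def feating_like_fit(X, y, n_members=10, n_split_feats=2, rng=None):
    """Fit an ensemble of local models over random feature subspaces (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    members = []
    for _ in range(n_members):
        feats = rng.choice(X.shape[1], size=n_split_feats, replace=False)
        thresh = np.median(X[:, feats], axis=0)                 # median split on each chosen feature
        cell_id = (X[:, feats] > thresh) @ (1 << np.arange(n_split_feats))
        local = {}
        for c in np.unique(cell_id):
            mask = cell_id == c
            if len(np.unique(y[mask])) > 1:
                local[c] = GaussianNB().fit(X[mask], y[mask])   # local model uses all attributes
            else:
                local[c] = int(y[mask][0])                      # single-class cell: constant prediction
        members.append((feats, thresh, local))
    return members

def feating_like_predict(members, X, classes):
    votes = np.zeros((len(X), len(classes)))
    for feats, thresh, local in members:
        cell_id = (X[:, feats] > thresh) @ (1 << np.arange(len(feats)))
        for i, c in enumerate(cell_id):
            model = local.get(c)
            if model is None:
                continue                                        # unseen cell: this member abstains
            pred = model if isinstance(model, (int, np.integer)) else model.predict(X[i:i+1])[0]
            votes[i, list(classes).index(pred)] += 1
    return np.asarray(classes)[votes.argmax(axis=1)]
```

Each local model is trained on only a fraction of the data, which is where the reduced per-model execution time mentioned in the abstract comes from.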
Handling incomplete data in real-world applications is a critical challenge due to two key limitations of existing methods: (i) they are primarily designed for numeric data and struggle with categorical or heterogeneous/mixed datasets; and (ii) they assume that data are missing completely at random, which is often not the case in practice -- in reality, missingness often follows patterns, leading to biased results if these patterns are not accounted for. To address these limitations, this paper presents a novel approach to handling missing values using Probability Mass...
Measuring similarity between two objects is the core operation in existing clustering algorithms for grouping similar objects into clusters. This paper introduces a new similarity measure called point-set kernel, which computes the similarity between an object and a set of objects. The proposed clustering procedure utilizes this new measure to characterize every cluster grown from a seed object. We show that the new clustering procedure is both effective and efficient, enabling it to deal with large-scale datasets. In contrast, existing clustering algorithms are either efficient or effective. In comparison with the state-of-the-art density-peak clustering and scalable...
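The abstract does not spell the measure out, so here is a minimal sketch of one natural reading: the point-set kernel is taken as the mean pairwise kernel similarity between an object and the members of a set, and a cluster is grown from a seed by repeatedly absorbing the unassigned point most similar to the current cluster while that similarity stays above a threshold. The Gaussian kernel and the threshold rule are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def point_set_kernel(x, S, gamma=1.0):
    """Similarity between one object x and a set of objects S:
    here, the mean Gaussian-kernel similarity (an illustrative choice)."""
    d2 = np.sum((S - x) ** 2, axis=1)
    return np.exp(-gamma * d2).mean()

def grow_cluster(X, seed_idx, threshold=0.5, gamma=1.0):
    """Grow one cluster from a seed object using the point-set kernel."""
    cluster = [seed_idx]
    remaining = set(range(len(X))) - {seed_idx}
    while remaining:
        sims = {i: point_set_kernel(X[i], X[cluster], gamma) for i in remaining}
        best = max(sims, key=sims.get)
        if sims[best] < threshold:        # no remaining point is similar enough to join
            break
        cluster.append(best)
        remaining.remove(best)
    return cluster
```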
Density estimation is the ubiquitous base modelling mechanism employed for many tasks such as clustering, classification, anomaly detection and information retrieval. Commonly used density estimation methods, such as the kernel density estimator and the k-nearest-neighbour density estimator, have high time and space complexities which render them inapplicable in problems with large data size and even a moderate number of dimensions. This weakness sets the fundamental limit of existing algorithms for all these tasks. We propose the first density estimation method that stretches this limit to an extent...
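To make the complexity point concrete, here are the two commonly used estimators the abstract refers to, written in plain NumPy/SciPy: both require a pass over all n stored training points per query, i.e. O(n) time per query and O(n) memory, which is what becomes prohibitive at scale. This is only the textbook baseline, not the method the paper proposes.

```python
import numpy as np
from scipy.special import gamma

def kde_gaussian(x, X, bandwidth=1.0):
    """Kernel density estimate at x: average Gaussian kernel over all n points (O(n) per query)."""
    d = X.shape[1]
    d2 = np.sum((X - x) ** 2, axis=1)
    norm = (2 * np.pi * bandwidth ** 2) ** (d / 2)
    return np.mean(np.exp(-d2 / (2 * bandwidth ** 2))) / norm

def knn_density(x, X, k=10):
    """k-nearest-neighbour density estimate: k / (n * volume of the ball reaching the k-th neighbour)."""
    n, d = X.shape
    r_k = np.sort(np.linalg.norm(X - x, axis=1))[k - 1]     # distance to the k-th neighbour
    ball_volume = (np.pi ** (d / 2) / gamma(d / 2 + 1)) * r_k ** d
    return k / (n * ball_volume)
```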
Boosting has been shown to improve the predictive performance of unstable learners such as decision trees, but not of stable learners like Support Vector Machines (SVM), k-nearest neighbours and Naive Bayes classifiers. In addition to the model stability problem, the high time complexity of some learners such as SVM prohibits them from generating multiple models to form an ensemble for large data sets. This paper introduces a simple method that not only enables stable learners to be improved, but also significantly reduces the computational cost of generating an ensemble on large data sets that would otherwise...
In this paper, we show that preprocessing data using a variant of rank transformation called 'Average Rank over an Ensemble of Sub-samples (ARES)' makes clustering algorithms robust to data representation and enables them to detect clusters of varying densities. Our empirical results, obtained using three of the most widely used clustering algorithms, namely KMeans, DBSCAN and DP (Density Peak), across a wide range of real-world datasets, show that clustering after the ARES transformation produces better and more consistent results.
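The acronym is spelled out but the procedure is not, so the following is a minimal per-attribute sketch under one reading: for each attribute, draw several random subsamples, compute each value's rank within each subsample, and replace the value with its average rank. Because ranks depend only on order, the transformed representation is insensitive to monotone rescaling of the original attributes. Subsample size, ensemble size and tie handling here are assumptions, not the authors' settings.

```python
import numpy as np

def ares_transform(X, n_subsamples=100, sample_size=64, rng=None):
    """Average Rank over an Ensemble of Sub-samples (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    out = np.zeros_like(X, dtype=float)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=min(sample_size, n), replace=False)
        for j in range(d):
            ref = np.sort(X[idx, j])
            # rank of every value = number of subsample values not exceeding it
            out[:, j] += np.searchsorted(ref, X[:, j], side='right')
    return out / n_subsamples
```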
The future for robotics in today's world is clear. We must automate our manufacturing processes to remain competitive in the global marketplace. Modern robots are an important part of that automation. Humanoids, a more recent development in robotics, have a host of capabilities that may reveal potential new uses in industry and modern engineering education. This paper describes how one can advance from a beginning user of the NAO robot to an intermediate user of the Choregraphe software. The authors seek to bridge the gap between the available documentation and the novice...
This paper introduces a simple and efficient density estimator that enables fast systematic search. To show its advantage over the commonly used kernel density estimator, we apply it to outlying aspects mining. Outlying aspects mining discovers the feature subsets (or subspaces) that describe how a query stands out from a given dataset. The task demands a systematic search of subspaces. We identify that existing outlying aspects miners are restricted to datasets with small data size and dimensions because they employ the kernel density estimator, which is computationally expensive, for subspace...
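To make the role of the density estimator concrete, here is a small sketch of the generic search loop in outlying aspects mining, using a plain Gaussian KDE as a stand-in rather than the paper's estimator or search strategy: enumerate low-dimensional subspaces, estimate the query's density in each, and rank subspaces by how far the query's density falls below the typical density of the data. The per-subspace cost here is quadratic in n, which is exactly the expense the abstract points to.

```python
import numpy as np
from itertools import combinations

def gaussian_kde_at(x, X, h=1.0):
    """Density at x under a fixed-bandwidth Gaussian KDE (the expensive step being criticised)."""
    d = X.shape[1]
    d2 = np.sum((X - x) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2 * h ** 2))) / (2 * np.pi * h ** 2) ** (d / 2)

def outlying_aspects(query, X, max_dim=2, h=1.0):
    """Rank feature subspaces by how unusual the query's density is there (z-score of density)."""
    n, d = X.shape
    ranked = []
    for k in range(1, max_dim + 1):
        for subspace in combinations(range(d), k):
            Xs = X[:, subspace]
            dens_all = np.array([gaussian_kde_at(Xs[i], Xs, h) for i in range(n)])
            dens_q = gaussian_kde_at(np.asarray(query)[list(subspace)], Xs, h)
            z = (dens_q - dens_all.mean()) / (dens_all.std() + 1e-12)
            ranked.append((z, subspace))     # more negative z = more outlying in that subspace
    return sorted(ranked)
```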
This paper shows that an adaptive kernel density estimator (KDE) can be derived effectively from Isolation Kernel. Existing KDEs often employ a data-independent kernel such as the Gaussian kernel and therefore require an additional means to adapt the bandwidth locally in a given dataset. Because Isolation Kernel is data dependent, being derived directly from the data, no such additional operation is required. The resultant KDE, called IKDE, is the only KDE that is both fast and adaptive; existing KDEs are either fast but non-adaptive, or adaptive but slow. In addition, using IKDE for anomaly detection, we identify...
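The construction is only named in the abstract, so here is a minimal sketch of one way to obtain a data-dependent kernel and a density estimate from it: the kernel value for two points is the fraction of random Voronoi partitionings (each built from a small subsample) in which they fall into the same cell, and the density at a query is the average of this kernel between the query and all data points. Because cells are smaller in dense regions, the effective bandwidth adapts locally without a separate tuning step. The partitioning scheme and parameters are assumptions for illustration.

```python
import numpy as np

def fit_partitions(X, n_partitions=100, psi=16, rng=None):
    """Build an ensemble of random Voronoi partitionings, each defined by a small subsample."""
    rng = np.random.default_rng(rng)
    return [X[rng.choice(len(X), size=min(psi, len(X)), replace=False)]
            for _ in range(n_partitions)]

def cell_ids(partitions, X):
    """For each partitioning, the Voronoi cell (index of the nearest subsample point) of every row of X."""
    return np.stack([np.argmin(np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2), axis=1)
                     for C in partitions], axis=1)            # shape (n, n_partitions)

def ikde_like(queries, X, partitions):
    """Density estimate: average same-cell frequency between a query and all data points."""
    data_cells = cell_ids(partitions, X)                      # (n, t)
    query_cells = cell_ids(partitions, queries)               # (m, t)
    same = query_cells[:, None, :] == data_cells[None, :, :]  # kernel(q, x) per partitioning
    return same.mean(axis=(1, 2))                             # (m,) average over data and ensemble
```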
Although simulated environments are improved by adding sensory information, temperature is one input that has rarely featured in them. Here we report findings from experiments that examine the efficacy of adding thermal information to the multimodal complex known to benefit simulations. In the first experiment, Peltier tiles added thermal feedback to the kinesthetic feedback given by a hand-worn exoskeletal device, and this increased ratings for 'presence' during interactions with objects. In an experiment in which exploratory movements across...
Large-scale online kernel learning aims to build an efficient and scalable kernel-based predictive model incrementally from a sequence of potentially infinite data points. A current key approach focuses on ways to produce an approximate finite-dimensional feature map, assuming that the kernel used has a feature map with intractable dimensionality---an assumption traditionally held in kernel-based methods. While this approach can deal with large datasets efficiently, the outcome is achieved by compromising predictive accuracy because of the approximation. We...
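As a concrete instance of the "approximate finite-dimensional feature map" approach that the abstract describes (and then argues against), here is a random Fourier feature sketch for the Gaussian kernel combined with a simple online least-squares update. The feature dimension, learning rate and update rule are illustrative choices; the paper's own alternative is not reproduced here.

```python
import numpy as np

class RFFOnlineRegressor:
    """Online kernel regression via an approximate feature map (random Fourier features, Gaussian kernel)."""

    def __init__(self, in_dim, n_features=256, gamma=1.0, lr=0.01, rng=None):
        rng = np.random.default_rng(rng)
        # z(x) = sqrt(2/D) * cos(Wx + b) approximates k(x, y) = exp(-gamma * ||x - y||^2) via z(x)·z(y)
        self.W = rng.normal(scale=np.sqrt(2 * gamma), size=(n_features, in_dim))
        self.b = rng.uniform(0, 2 * np.pi, size=n_features)
        self.w = np.zeros(n_features)          # linear weights in the approximate feature space
        self.lr = lr
        self.D = n_features

    def features(self, x):
        return np.sqrt(2.0 / self.D) * np.cos(self.W @ x + self.b)

    def predict(self, x):
        return self.features(x) @ self.w

    def partial_fit(self, x, y):
        """One stochastic-gradient step on squared error for a single streaming example."""
        z = self.features(x)
        err = z @ self.w - y
        self.w -= self.lr * err * z
```

The accuracy loss the abstract refers to comes from the finite number of random features: the inner product of these features only approximates the exact kernel, and the approximation error is what a method with an exact finite-dimensional map would avoid.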