Jonathan R. Wells

ORCID: 0000-0003-0550-1229
Research Areas
  • Anomaly Detection Techniques and Applications
  • Machine Learning and Data Classification
  • Network Security and Intrusion Detection
  • Face and Expression Recognition
  • Time Series Analysis and Forecasting
  • Advanced Clustering Algorithms Research
  • Data Stream Mining Techniques
  • Neural Networks and Applications
  • Domain Adaptation and Few-Shot Learning
  • Data Management and Algorithms
  • Artificial Immune Systems Applications
  • Data Mining Algorithms and Applications
  • Machine Learning and ELM
  • Teaching and Learning Programming
  • Imbalanced Data Classification Techniques
  • Complex Network Analysis Techniques
  • Evolutionary Algorithms and Applications
  • Energy Load and Power Forecasting
  • Modular Robots and Swarm Intelligence
  • Tactile and Sensory Interactions
  • Advanced Image and Video Retrieval Techniques
  • Video Surveillance and Tracking Methods
  • Machine Learning and Algorithms
  • Statistical and numerical algorithms
  • Robotic Locomotion and Control

Deakin University
2019-2022

Federation University
2016-2020

Monash University
2009-2014

Australian Regenerative Medicine Institute
2013

Abstract: The first successful isolation-based anomaly detector, i.e., iForest, uses trees as a means to perform isolation. Although it has been shown to have advantages over existing detectors, we identified 4 weaknesses: its inability to detect local anomalies, anomalies with a high percentage of irrelevant attributes, anomalies that are masked by axis-parallel clusters, and anomalies in multimodal data sets. To overcome these weaknesses, this paper shows that an alternative isolation mechanism is required and thus presents iNNE, or isolation using...

10.1111/coin.12156 article EN Computational Intelligence 2018-01-05

This paper presents iNNE (isolation using Nearest Neighbour Ensemble), an efficient nearest neighbour-based anomaly detection method by isolation. iNNE runs significantly faster than existing methods such as Local Outlier Factor, especially in data sets having thousands of dimensions or millions of instances. This is because the proposed method has linear time complexity and constant space complexity. Compared with the tree-based isolation method iForest, iNNE overcomes three weaknesses of iForest that we have identified,...

10.1109/icdmw.2014.70 article EN 2014-12-01
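
The two iNNE entries above describe the method only at a high level, so the following is a minimal Python sketch of the isolation-via-nearest-neighbour-ensemble idea as we read it from these abstracts; the parameter names (n_estimators, psi) and the exact scoring formula are illustrative assumptions, not taken verbatim from the publications.

import numpy as np

def inne_scores(train, test, n_estimators=100, psi=16, seed=None):
    # Sketch only: each ensemble member draws a small subsample; every
    # subsample point defines a hypersphere whose radius is the distance to
    # its nearest neighbour within the subsample.  A test point covered only
    # by large hyperspheres (or by none) is easier to isolate, i.e. more
    # anomalous.  Scores are averaged over all members.  Assumes the
    # subsample contains no duplicate points.
    rng = np.random.default_rng(seed)
    train, test = np.asarray(train, float), np.asarray(test, float)
    scores = np.zeros(len(test))
    for _ in range(n_estimators):
        S = train[rng.choice(len(train), size=psi, replace=False)]
        d = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        nn = d.argmin(axis=1)                    # nearest neighbour inside S
        radius = d[np.arange(psi), nn]           # hypersphere radii
        dt = np.linalg.norm(test[:, None, :] - S[None, :, :], axis=-1)
        member = np.ones(len(test))              # 1.0 = not covered by any ball
        for i in range(len(test)):
            covered = np.flatnonzero(dt[i] <= radius)
            if covered.size:                     # smallest covering hypersphere
                c = covered[radius[covered].argmin()]
                member[i] = 1.0 - radius[nn[c]] / radius[c]
        scores += member
    return scores / n_estimators                 # higher score = more anomalous

With psi and n_estimators fixed, the work per data point is constant, which is consistent with the linear time and constant space complexity claimed in the abstract.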

This paper introduces a new ensemble approach, Feature-Subspace Aggregating (Feating), which builds local models instead of global models. Feating is a generic approach that can enhance the predictive performance of both stable and unstable learners. In contrast, most existing ensemble approaches improve unstable learners only. Our analysis shows that Feating reduces the execution time needed to generate a model in an ensemble through the increased level of localisation in Feating. Our empirical evaluation shows that Feating performs significantly better than Boosting, Random...

10.1007/s10994-010-5224-5 article EN cc-by-nc Machine Learning 2010-11-17
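
The Feating abstract above only states that local models are built over feature subspaces, so here is a loose Python illustration of that idea using scikit-learn base learners; the grid discretisation, the parameter names and the fallback to a global model are simplifications invented for this sketch and do not reproduce the level-tree construction of the actual method.

import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

class FeatingSketch:
    # Each member picks a few attributes, discretises them into a coarse
    # grid, and fits one base model per occupied cell, so every model only
    # ever sees a local region of the data.  Members are majority-voted;
    # empty cells fall back to a single global model.  Assumes non-negative
    # integer class labels.
    def __init__(self, base=None, n_members=10, n_subspace=2, n_bins=3, seed=0):
        self.base = base if base is not None else DecisionTreeClassifier()
        self.n_members, self.n_subspace = n_members, n_subspace
        self.n_bins, self.seed = n_bins, seed

    def _cells(self, X, feats, edges):
        return [tuple(int(np.searchsorted(edges[j], row[f]))
                      for j, f in enumerate(feats)) for row in X]

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        rng = np.random.default_rng(self.seed)
        self.global_ = clone(self.base).fit(X, y)
        self.members_ = []
        for _ in range(self.n_members):
            feats = rng.choice(X.shape[1], self.n_subspace, replace=False)
            edges = [np.quantile(X[:, f], np.linspace(0, 1, self.n_bins + 1)[1:-1])
                     for f in feats]
            cells = self._cells(X, feats, edges)
            local = {c: clone(self.base).fit(X[[ci == c for ci in cells]],
                                             y[[ci == c for ci in cells]])
                     for c in set(cells)}
            self.members_.append((feats, edges, local))
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        votes = np.array([[local.get(c, self.global_).predict(row[None])[0]
                           for c, row in zip(self._cells(X, feats, edges), X)]
                          for feats, edges, local in self.members_])
        return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])

Because every base model is trained on a small local partition rather than on the full dataset, the per-model training cost drops, which is the efficiency argument the abstract makes for expensive learners.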

Handling incomplete data in real-world applications is a critical challenge due to two key limitations of existing methods: (i) they are primarily designed for numeric data and struggle with categorical or heterogeneous/mixed datasets; and (ii) they assume that data are missing completely at random, which is often not the case in practice -- in reality, missing values follow patterns, leading to biased results if these patterns are not accounted for. To address these limitations, this paper presents a novel approach to handling missing values using Probability Mass...

10.48550/arxiv.2501.04300 preprint EN arXiv (Cornell University) 2025-01-08

Measuring similarity between two objects is the core operation in existing clustering algorithms when grouping similar objects into clusters. This paper introduces a new similarity measure called the point-set kernel, which computes the similarity between an object and a set of objects. The proposed clustering procedure utilizes this new measure to characterize every cluster grown from a seed object. We show that the new clustering procedure is both effective and efficient, which enables it to deal with large-scale datasets. In contrast, existing clustering algorithms are either efficient or effective. In comparison with the state-of-the-art density-peak and scalable...

10.1109/tkde.2022.3144914 article EN publisher-specific-oa IEEE Transactions on Knowledge and Data Engineering 2022-01-01
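
As a rough companion to the point-set kernel abstract above, the Python sketch below computes the similarity between a point and a set of points as the mean point-to-point kernel value and grows one cluster from a seed object. A Gaussian kernel is substituted for the paper's data-dependent kernel purely so the example is self-contained, the naive rescoring loop makes no attempt to match the efficiency the paper claims, and the threshold and gamma parameters are invented for illustration.

import numpy as np

def gaussian_kernel(x, Y, gamma=1.0):
    # stand-in point-to-point kernel; the paper's measure is data dependent
    return np.exp(-gamma * np.sum((Y - x) ** 2, axis=1))

def point_set_similarity(x, cluster, X, gamma=1.0):
    # similarity between one object and a *set* of objects: the mean of the
    # point-to-point kernel over all current members of the cluster
    return gaussian_kernel(x, X[list(cluster)], gamma).mean()

def grow_cluster(X, seed_idx, threshold=0.5, gamma=1.0):
    # grow one cluster from a seed: repeatedly add the unassigned point most
    # similar to the cluster-so-far while that similarity stays above threshold
    X = np.asarray(X, float)
    cluster, remaining = {seed_idx}, set(range(len(X))) - {seed_idx}
    while remaining:
        sims = {i: point_set_similarity(X[i], cluster, X, gamma) for i in remaining}
        best = max(sims, key=sims.get)
        if sims[best] < threshold:
            break
        cluster.add(best)
        remaining.discard(best)
    return cluster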

Density estimation is the ubiquitous base modelling mechanism employed for many tasks such as clustering, classification, anomaly detection and information retrieval. Commonly used density estimation methods, the kernel density estimator and the k-nearest neighbour density estimator, have high time and space complexities which render them inapplicable in problems with large data size or even a moderate number of dimensions. This weakness sets the fundamental limit of existing algorithms for all these tasks. We propose the first density estimation method that stretches this limit to an extent...

10.1109/icdm.2011.47 article EN 2011-12-01

Boosting has been shown to improve the predictive performance of unstable learners such as decision trees, but not stable learners like Support Vector Machines (SVM), k-nearest neighbours and Naive Bayes classifiers. In addition to the model stability problem, the high time complexity of some learners such as SVM prohibits them from generating multiple models to form an ensemble for large data sets. This paper introduces a simple method that not only enables Boosting to improve stable learners, but also significantly reduces the computational cost of generating ensembles for large data sets that would otherwise...

10.1111/j.1467-8640.2012.00441.x article EN Computational Intelligence 2012-06-12

10.1007/s10115-022-01765-7 article EN Knowledge and Information Systems 2022-10-01

In this paper, we show that preprocessing data using a variant of rank transformation called 'Average Rank over an Ensemble of Sub-samples (ARES)' makes clustering algorithms robust to data representation and enables them to detect clusters of varying densities. Our empirical results, obtained with the three most widely used clustering algorithms, namely KMeans, DBSCAN and DP (Density Peak), across a wide range of real-world datasets, show that clustering after the ARES transformation produces better and more consistent results.

10.48550/arxiv.2401.11402 preprint EN other-oa arXiv (Cornell University) 2024-01-01
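
The ARES abstract names the transformation but not its mechanics, so the sketch below shows one plausible reading in Python: every attribute value is replaced by its rank within many small random sub-samples of that attribute, averaged over the ensemble. The sub-sample size, ensemble size and scaling to [0, 1] are assumptions made here for illustration.

import numpy as np

def ares_transform(X, n_subsamples=100, subsample_size=64, seed=None):
    # Replace each value by its average rank across an ensemble of random
    # sub-samples of the same attribute (rank scaled to [0, 1]).
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    n, d = X.shape
    out = np.zeros_like(X)
    for j in range(d):
        col = X[:, j]
        ranks = np.zeros(n)
        for _ in range(n_subsamples):
            s = np.sort(rng.choice(col, size=min(subsample_size, n), replace=False))
            ranks += np.searchsorted(s, col, side='right') / len(s)
        out[:, j] = ranks / n_subsamples
    return out

A clustering algorithm such as KMeans, DBSCAN or Density Peak would then be run on the transformed matrix rather than on the raw data, which is how the abstract describes ARES being used.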

The future for robotics in today's world is clear. We must automate our manufacturing processes to remain competitive in the global marketplace. Modern robots are an important part of that automation. Humanoids, a more recent development in robotics, have a host of capabilities that may reveal potential new uses in industry and modern engineering education. This paper describes how one can advance from being a beginning user of the NAO robot to an intermediate user of the Choregraphe software. The authors seek to bridge the gap between the available documentation and the novice...

10.13189/ujeee.2019.060404 article EN Universal Journal of Electrical and Electronic Engineering 2019-10-01

This paper introduces a simple and efficient density estimator that enables fast systematic search. To show its advantage over the commonly used kernel density estimator, we apply it to outlying aspects mining. Outlying aspects mining discovers feature subsets (or subspaces) that describe how a query stands out from a given dataset. The task demands a systematic search of subspaces. We identify that existing outlying aspects miners are restricted to datasets with small data sizes and dimensions because they employ the kernel density estimator, which is computationally expensive, for subspace...

10.48550/arxiv.1707.00783 preprint EN other-oa arXiv (Cornell University) 2017-01-01

This paper shows that an adaptive kernel density estimator (KDE) can be derived effectively from the Isolation Kernel. Existing KDEs often employ a data-independent kernel such as the Gaussian kernel. Therefore, such a KDE requires an additional means to adapt its bandwidth locally in a given dataset. Because the Isolation Kernel is data dependent, which adapts directly to the data, no such additional operation is required. The resultant KDE, called IKDE, is the only KDE that is both fast and adaptive; existing KDEs are either fast but non-adaptive, or adaptive but slow. In addition, using IKDE for anomaly detection, we identify...

10.1109/icdm51629.2021.00073 article EN 2021 IEEE International Conference on Data Mining (ICDM) 2021-12-01
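
The IKDE abstract above gives the motivation but not the construction, so the following Python sketch shows one way a density estimate can be read off an isolation-kernel-style similarity: each ensemble member partitions the space into the Voronoi cells of a few randomly drawn reference points, and the density at a test point is the average fraction of training points that share its cell. The parameters (n_partitions, psi) and this particular construction are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def isolation_kernel_density(train, test, n_partitions=100, psi=16, seed=None):
    # Voronoi-cell sketch: the similarity of two points is the fraction of
    # ensemble members that place them in the same cell; the density at a
    # test point is its mean similarity to the training set.
    rng = np.random.default_rng(seed)
    train, test = np.asarray(train, float), np.asarray(test, float)

    def nearest_ref(points, refs):
        d = np.linalg.norm(points[:, None, :] - refs[None, :, :], axis=-1)
        return d.argmin(axis=1)

    density = np.zeros(len(test))
    for _ in range(n_partitions):
        refs = train[rng.choice(len(train), size=psi, replace=False)]
        cell_train = nearest_ref(train, refs)    # cell of each training point
        cell_test = nearest_ref(test, refs)      # cell of each test point
        counts = np.bincount(cell_train, minlength=psi)
        density += counts[cell_test] / len(train)
    return density / n_partitions

Because the Voronoi cells are automatically smaller in dense regions and larger in sparse regions, no separate bandwidth-adaptation step is needed, which mirrors the data-dependent property the abstract emphasises.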

Although simulated environments are improved by adding sensory information, temperature is one input that has rarely featured in them. Here we report findings from experiments that examine the efficacy of adding thermal information to a multimodal complex known to be of benefit in simulations. In the first experiment, Peltier tiles added thermal feedback to the kinesthetic feedback given by a hand-worn exoskeletal device, and this increased ratings for 'presence' during interactions with objects. In an experiment in which exploratory movements across...

10.1504/ijidss.2009.031417 article EN International Journal of Intelligent Defence Support Systems 2009-01-01

Large scale online kernel learning aims to build an efficient and scalable kernel-based predictive model incrementally from a sequence of potentially infinite data points. A current key approach focuses on ways to produce an approximate finite-dimensional feature map, assuming that the kernel used has a feature map with intractable dimensionality---an assumption traditionally held in kernel-based methods. While this approach can deal with large datasets efficiently, the outcome is achieved by compromising predictive accuracy because of the approximation. We...

10.48550/arxiv.1907.01104 preprint EN other-oa arXiv (Cornell University) 2019-01-01
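
For context on the "approximate finite-dimensional feature map" approach this abstract critiques, the sketch below shows a textbook instance of that idea: random Fourier features approximating a Gaussian kernel, with a simple online update on the resulting linear model. It illustrates the baseline being argued against, not the paper's own proposal, and the hyper-parameters are illustrative.

import numpy as np

class RFFOnlineClassifier:
    # Random Fourier features stand in for the intractable feature map of the
    # Gaussian kernel exp(-||x - y||^2 / (2 sigma^2)); a linear model is then
    # updated online in that finite-dimensional space.  Labels y are in {-1, +1}.
    def __init__(self, dim, n_features=200, sigma=1.0, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 1.0 / sigma, size=(n_features, dim))
        self.b = rng.uniform(0.0, 2 * np.pi, size=n_features)
        self.w = np.zeros(n_features)
        self.lr, self.n_features = lr, n_features

    def _phi(self, x):
        return np.sqrt(2.0 / self.n_features) * np.cos(self.W @ x + self.b)

    def partial_fit(self, x, y):
        z = self._phi(np.asarray(x, float))
        if y * (self.w @ z) <= 0:        # perceptron-style update on mistakes
            self.w += self.lr * y * z
        return self

    def predict(self, x):
        return 1 if self.w @ self._phi(np.asarray(x, float)) >= 0 else -1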