- Topic Modeling
- Natural Language Processing Techniques
- Optimization and Search Problems
- Multimodal Machine Learning Applications
- Machine Learning and Algorithms
- Explainable Artificial Intelligence (XAI)
- Indoor and Outdoor Localization Technologies
- Advanced Text Analysis Techniques
- Sentiment Analysis and Opinion Mining
- Speech and Audio Processing
- Algorithms and Data Compression
- Ferroelectric and Negative Capacitance Devices
- Machine Learning in Healthcare
- Target Tracking and Data Fusion in Sensor Networks
- Semigroups and Automata Theory
- Computability, Logic, AI Algorithms
- Metaheuristic Optimization Algorithms Research
- Software Engineering Research
- Reinforcement Learning in Robotics
- Blockchain Technology Applications and Security
- AI in Service Interactions
- Data Quality and Management
- Underwater Acoustics Research
- Underwater Vehicles and Communication Systems
- Network Security and Intrusion Detection
University of Agder
2020-2024
Chosun University
2019
This paper proposes human-interpretable learning for aspect-based sentiment analysis (ABSA), employing the recently introduced Tsetlin Machines (TMs). We attain interpretability by converting intricate position-dependent textual semantics into binary form, mapping all features into bag-of-words (BOWs). The BOWs are encoded so that the information on aspect and context words is nearly lossless for classification. We further adapt the BOWs as input to the TM, enabling it to learn patterns in propositional logic. To evaluate...
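The binary bag-of-words encoding described in this abstract can be illustrated with a minimal sketch. The vocabulary and helper function below are hypothetical, not taken from the paper: each vocabulary word becomes one Boolean feature, which is the input form a Tsetlin Machine consumes.

```python
def binarize_bow(tokens, vocab):
    # One Boolean feature per vocabulary word: 1 if the word occurs
    # in the input tokens, 0 otherwise.
    present = set(tokens)
    return [1 if word in present else 0 for word in vocab]

# Toy vocabulary for illustration only.
vocab = ["battery", "screen", "good", "bad", "not"]
features = binarize_bow("the battery is good".split(), vocab)
print(features)  # -> [1, 0, 1, 0, 0]
```

Out-of-vocabulary words (here "the" and "is") simply contribute nothing, which is one way the encoding stays compact while remaining interpretable: every feature maps back to a concrete word.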
Indoor positioning systems have received increasing attention because of their wide range of indoor applications. However, such systems generally suffer from large localization errors and low stability. The main approaches widely used for indoor positioning are based on inertial measurement units (IMUs), Bluetooth, Wi-Fi, and ultra-wideband. A major problem with Bluetooth-based fingerprinting is the inconsistency of radio signal strength, and with IMU-based approaches, drift that increases over time. To compensate for these drawbacks, in the present study,...
Aspect-based sentiment analysis (ABSA) aims at identifying the fine-grained polarity of an opinion associated with a given aspect word. Several existing articles have demonstrated promising ABSA accuracy using positional embeddings to represent the relationship between an aspect word and its context. In most cases, the positional embedding depends on the distance between the aspect word and the remaining words in the context, known as the position index sequence. However, these techniques usually employ both complex preprocessing approaches and additional trainable architectures to obtain...
The unstable nature of radio frequency signals and the need for external infrastructure inside buildings have limited the use of positioning techniques such as Wi-Fi and Bluetooth fingerprinting. Compared to these, the geomagnetic field exhibits stable signal strength in the time domain. However, existing magnetic positioning methods cannot perform well in a wide space because the magnetic signal is not always discernible. In this paper, we introduce deep recurrent neural networks (DRNNs) to build a model that is capable of capturing long-range...
Embedding words in vector space is a fundamental first step in state-of-the-art natural language processing (NLP). Typical NLP solutions employ pre-defined vector representations to improve generalization by co-locating similar words in the space. For instance, Word2Vec is a self-supervised predictive model that captures the context of words using a neural network. Similarly, GloVe is a popular unsupervised model incorporating corpus-wide word co-occurrence statistics. Such embedding has significantly boosted important NLP tasks, including...
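The co-location property this abstract relies on can be made concrete with a toy sketch. The 3-d vectors below are invented for illustration, not real Word2Vec or GloVe output: words used in similar contexts end up close in the space, which cosine similarity measures.

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up embeddings for illustration only.
emb = {
    "king":  [0.9, 0.1, 0.3],
    "queen": [0.85, 0.15, 0.35],
    "apple": [0.1, 0.9, 0.2],
}
print(cosine(emb["king"], emb["queen"]))  # high (near 1.0)
print(cosine(emb["king"], emb["apple"]))  # much lower
```

Pre-trained embeddings generalize precisely because of this geometry: a classifier that learns something about "king" transfers some of it to "queen" for free.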
State-of-the-art natural language processing models have raised the bar for excellent performance on a variety of tasks in recent years. However, concerns are rising over their primitive sensitivity to distribution biases that reside in the training and testing data. This issue hugely impacts the models when they are exposed to out-of-distribution and counterfactual data. The root cause seems to be that many machine learning models are prone to learning shortcuts, modelling simple correlations rather than more fundamental and general relationships. As a result,...
The Tsetlin Machine (TM) is an interpretable pattern recognition algorithm based on propositional logic, which has demonstrated competitive performance in many Natural Language Processing (NLP) tasks, including sentiment analysis, text classification, and Word Sense Disambiguation. To obtain human-level interpretability, the legacy TM employs Boolean input features such as bag-of-words (BOW). However, the BOW representation makes it difficult to use any pre-trained information, for instance,...
Logic-based machine learning has the crucial advantage of transparency. However, despite significant recent progress, further research is needed to close the accuracy gap between logic-based architectures and deep neural network ones. This paper introduces a novel variant of the Tsetlin Machine (TM) that randomly drops clauses, the key logical elements of TMs. In effect, the TM with Drop Clause ignores a random selection of clauses in each epoch, selected according to a predefined probability. In this way, the learning phase becomes more diverse. To...
Using logical clauses to represent patterns, Tsetlin Machines (TMs) have recently obtained competitive performance in terms of accuracy, memory footprint, energy, and learning speed on several benchmarks. Each TM clause votes for or against a particular class, with classification resolved using a majority vote. While the clause evaluation is fast, being based on binary operators, the voting makes it necessary to synchronize the evaluation, impeding parallelization. In this paper, we propose a novel scheme...
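The clause-voting step this abstract refers to can be sketched in a few lines. This is a simplified illustration, not the paper's parallelization scheme: each clause evaluates to 0 or 1, positive-polarity clauses add a vote, negative-polarity clauses subtract one, and the sign of the sum decides the class.

```python
def clause_vote(clause_outputs, polarities):
    # clause_outputs: 0/1 evaluation of each clause on an input.
    # polarities: +1 (votes for the class) or -1 (votes against).
    return sum(o * p for o, p in zip(clause_outputs, polarities))

# Toy example: four clauses, alternating polarity.
score = clause_vote([1, 1, 0, 1], [+1, -1, +1, -1])
print("for" if score >= 0 else "against")  # prints "against"
```

The sum is where the synchronization cost mentioned above comes from: every clause's output must be available before the majority vote can be resolved.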
Large-scale pre-trained language representations and their promising performance in various downstream applications have become an area of interest in the field of natural language processing (NLP). There has been a huge trend toward further increasing model size in order to outperform the best previously obtained performances. However, at some point, adding parameters may lead to reaching a saturation point due to the limited capacity of GPU/TPU. In addition to this, such models are mostly available in English or in a shared multilingual structure. Hence,...
Tsetlin Machines learn from input data by creating patterns in propositional logic, using the literals available in the data. These patterns vote for classes in a classification task. Despite their simplistic premise, Tsetlin Machines (TMs) have been performing on par with other popular machine learning methods across various benchmarks. Not only in accuracy, TMs also perform well in terms of energy efficiency and speed. The general TM scheme works best when there is sufficient discriminatory information between two classes. In this...
We introduce Diffuse, a system that dynamically performs task and kernel fusion in distributed, task-based runtime systems. The key component of Diffuse is an intermediate representation of distributed computation that enables the necessary analyses for task fusion to be performed in a scalable manner. We pair task fusion with a JIT compiler to fuse together the kernels within fused tasks. We show empirically that Diffuse's representation is general enough to target two real-world, task-based libraries (cuNumeric and Legate Sparse), letting Diffuse find optimization opportunities...
Natural language processing (NLP) has become a vital requirement in a wide range of applications, including machine translation, information retrieval, and text classification. The development and evaluation of NLP models for various languages have received significant attention in recent years, but there has been relatively little work done on comparing the performance of different models on Romanian data. In particular, recently introduced models have barely been studied comparatively against multilingual ones. In this paper, we address this gap by...
Indoor space classification is an important part of localization that helps in precise location extraction, and it has been extensively utilized in the industrial and domestic domains. There are various approaches that employ Bluetooth Low Energy (BLE), Wi-Fi, magnetic field, object detection, and Ultra Wide Band (UWB) for indoor classification purposes. Many of the existing approaches need extensive pre-installed infrastructure, making the cost of obtaining reasonable accuracy higher. Therefore, improvements are still required to increase the accuracy...
In this article, we introduce a novel variant of the Tsetlin machine (TM) that randomly drops clauses, the key learning elements of a TM. In effect, the TM with drop clause ignores a random selection of clauses in each epoch, selected according to a predefined probability. In this way, additional stochasticity is introduced into the learning phase. To explore the effects drop clause has on accuracy, training time, interpretability, and robustness, we conduct extensive experiments on nine benchmark datasets in natural language processing (NLP) (IMDb, R8, R52, MR...
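The per-epoch clause sampling described in this abstract (and the related Drop Clause abstract above) can be sketched as follows. This is an illustrative toy, not the authors' implementation: at the start of each epoch, every clause is independently dropped with probability p, and only the surviving clauses vote and update during that epoch.

```python
import random

def sample_active_clauses(n_clauses, p, rng=random):
    # Keep clause i with probability (1 - p); dropped clauses are
    # excluded from voting and updating for this epoch only.
    return [i for i in range(n_clauses) if rng.random() >= p]

random.seed(0)
active = sample_active_clauses(10, 0.25)
print(active)  # the subset of clause indices kept this epoch
```

Resampling every epoch is what introduces the extra stochasticity: each epoch trains a different random sub-ensemble of clauses, analogous to dropout in neural networks.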
Explainability is one of the key factors in Natural Language Processing (NLP), especially for legal documents, medical diagnosis, and clinical text. The attention mechanism has recently been a popular choice for such explainability, estimating the relative importance of input units. Recent research has revealed, however, that attention-based processes tend to misidentify irrelevant input units when explaining them. This is due to the fact that language representation layers are initialized with pre-trained word embeddings that are not context-dependent. Such...