Dat Thanh Tran

ORCID: 0000-0002-5922-3458
Research Areas
  • Stock Market Forecasting Methods
  • Time Series Analysis and Forecasting
  • Neural Networks and Applications
  • Machine Learning and ELM
  • Sparse and Compressive Sensing Techniques
  • Domain Adaptation and Few-Shot Learning
  • Complex Systems and Time Series Analysis
  • Indoor and Outdoor Localization Technologies
  • Face and Expression Recognition
  • Microwave Imaging and Scattering Analysis
  • Blind Source Separation Techniques
  • Tensor decomposition and applications
  • Energy Load and Power Forecasting
  • Machine Learning and Algorithms
  • Computational Physics and Python Applications
  • Advanced Neural Network Applications
  • Anomaly Detection Techniques and Applications
  • Image and Signal Denoising Methods
  • Machine Learning and Data Classification
  • Biometric Identification and Security
  • Auction Theory and Applications
  • Advanced Clustering Algorithms Research
  • Human Pose and Action Recognition
  • Speech and Audio Processing
  • Financial Markets and Investment Strategies

Tampere University
2017-2023

Hanoi University of Science and Technology
2022

Vietnam National University Ho Chi Minh City
2016

Financial time-series forecasting has long been a challenging problem because of the inherently noisy and stochastic nature of the market. In high-frequency trading, forecasting for trading purposes is an even more challenging task, since an automated inference system is required to be both accurate and fast. In this paper, we propose a neural network layer architecture that incorporates the idea of bilinear projection as well as an attention mechanism that enables the layer to detect and focus on crucial temporal information. The resulting architecture is highly interpretable, given...

10.1109/tnnls.2018.2869225 article EN IEEE Transactions on Neural Networks and Learning Systems 2018-09-28
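
The bilinear-projection-plus-temporal-attention idea summarized in the abstract above can be illustrated with a short sketch. The following is a minimal PyTorch version assuming an input of shape (batch, features, time); the einsum-based two-mode projections, the learnable mixing coefficient, and all parameter names are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (PyTorch assumed) of a bilinear layer with temporal attention.
import torch
import torch.nn as nn


class BilinearTemporalAttention(nn.Module):
    """Maps X of shape (batch, d_in, t_in) to (batch, d_out, t_out).

    The two input modes are projected by separate weight matrices, and a
    softmax attention over the time axis re-weights features before the
    temporal projection.
    """

    def __init__(self, d_in, t_in, d_out, t_out):
        super().__init__()
        self.W1 = nn.Parameter(torch.randn(d_out, d_in) * 0.01)  # feature-mode projection
        self.W = nn.Parameter(torch.eye(t_in))                   # attention weights over time
        self.W2 = nn.Parameter(torch.randn(t_in, t_out) * 0.01)  # time-mode projection
        self.bias = nn.Parameter(torch.zeros(d_out, t_out))
        self.lam = nn.Parameter(torch.tensor(0.5))                # mixing coefficient (illustrative)

    def forward(self, x):
        # x: (batch, d_in, t_in)
        xbar = torch.einsum('oi,bit->bot', self.W1, x)            # (batch, d_out, t_in)
        energy = torch.einsum('bot,ts->bos', xbar, self.W)        # attention energies
        attn = torch.softmax(energy, dim=-1)                      # normalize over time steps
        mixed = self.lam * (xbar * attn) + (1.0 - self.lam) * xbar
        out = torch.einsum('bot,ts->bos', mixed, self.W2) + self.bias
        return torch.relu(out)


# Example: 10 time steps of 40 limit-order-book features -> 3 outputs at 1 step
layer = BilinearTemporalAttention(d_in=40, t_in=10, d_out=3, t_out=1)
y = layer(torch.randn(8, 40, 10))   # shape (8, 3, 1)
```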

The traditional multilayer perceptron (MLP) using a McCulloch-Pitts neuron model is inherently limited to a fixed set of neuronal activities, i.e., a linear weighted sum followed by a nonlinear thresholding step. Previously, the generalized operational perceptron (GOP) was proposed to extend the conventional model by defining diverse neuronal activities that imitate biological neurons. Together with GOP, the progressive operational perceptron (POP) algorithm was proposed to optimize a predefined template of multiple homogeneous layers in a layerwise manner. In this paper, we propose an...

10.1109/tnnls.2019.2914082 article EN IEEE Transactions on Neural Networks and Learning Systems 2019-05-31
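
The generalized operational neuron described above replaces the usual multiply / sum / threshold triplet with selectable nodal, pooling, and activation operators. Below is a minimal NumPy sketch of that idea; the operator library and function names are illustrative assumptions, not the exact operator sets used in the paper.

```python
# Sketch of a generalized-operational-style neuron with selectable operators.
import numpy as np

NODAL = {
    'multiplication': lambda w, x: w * x,
    'exponential':    lambda w, x: np.exp(w * x) - 1.0,
    'sinusoid':       lambda w, x: np.sin(w * x),
}
POOL = {
    'summation': lambda z: z.sum(axis=-1),
    'median':    lambda z: np.median(z, axis=-1),
    'maximum':   lambda z: z.max(axis=-1),
}
ACTIVATION = {
    'sigmoid': lambda a: 1.0 / (1.0 + np.exp(-a)),
    'tanh':    np.tanh,
}

def gop_neuron(x, w, b, nodal='multiplication', pool='summation', act='sigmoid'):
    """x, w: (n_inputs,) arrays; returns a scalar neuron output."""
    z = NODAL[nodal](w, x)      # element-wise nodal operation
    a = POOL[pool](z) + b       # pooling over inputs, plus bias
    return ACTIVATION[act](a)   # nonlinear activation

# The McCulloch-Pitts neuron is the special case (multiplication, summation, sigmoid).
x = np.random.randn(8)
w = np.random.randn(8)
print(gop_neuron(x, w, b=0.1))                                   # classical behaviour
print(gop_neuron(x, w, b=0.1, nodal='sinusoid', pool='median'))  # one alternative operator set
```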

Nowadays, with the availability of massive amounts of collected trade data, the dynamics of financial markets pose both a challenge and an opportunity for high-frequency traders. In order to take advantage of the rapid, subtle movements of assets in High Frequency Trading (HFT), an automatic algorithm that analyzes and detects patterns of price change based on transaction records must be available. The multichannel, time-series representation of the data naturally suggests tensor-based learning algorithms. In this work, we investigate...

10.1109/ssci.2017.8280812 preprint EN 2017 IEEE Symposium Series on Computational Intelligence (SSCI) 2017-11-01

Compressive Learning is an emerging topic that combines signal acquisition via compressive sensing and machine learning to perform inference tasks directly on a small number of measurements. Many data modalities naturally have a multi-dimensional or tensorial format, with each dimension or tensor mode representing different features, such as the spatial and temporal information in video sequences or the spatial and spectral information in hyperspectral images. However, in existing compressive learning frameworks, the sensing component utilizes either random or learned...

10.1109/tnnls.2020.2984831 article EN IEEE Transactions on Neural Networks and Learning Systems 2020-04-17
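
A central idea behind multilinear compressive learning of tensor signals is to compress each tensor mode with its own small sensing matrix rather than vectorizing the signal and using one large projection. The NumPy sketch below illustrates this with random Gaussian sensing matrices; the shapes and the choice of matrices are assumptions for illustration only.

```python
# Sketch of multilinear compression via mode-n products.
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply `tensor` by `matrix` along the given mode."""
    t = np.moveaxis(tensor, mode, 0)
    shape = t.shape
    t = matrix @ t.reshape(shape[0], -1)
    return np.moveaxis(t.reshape((matrix.shape[0],) + shape[1:]), 0, mode)

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32, 3))     # e.g. a small RGB image patch

# One sensing matrix per mode: 32 -> 8 rows, 32 -> 8 columns, keep the 3 channels.
phis = [rng.standard_normal((8, 32)),
        rng.standard_normal((8, 32)),
        np.eye(3)]

y = x
for mode, phi in enumerate(phis):
    y = mode_n_product(y, phi, mode)

print(y.shape)   # (8, 8, 3): 192 measurements instead of 3072 values
```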

Face verification is a prominent biometric technique for identity authentication that has been used extensively in several security applications. In practice, face verification is often performed along with other visual surveillance tasks on the same computing device. Thus, the ability to share computation and reuse information already extracted for other analysis tasks can greatly help reduce the load on such devices. In this study, we propose to utilize a knowledge transfer approach to this problem by building a heterogeneous neural network architecture of...

10.1109/icip.2019.8804296 article EN 2019 IEEE International Conference on Image Processing (ICIP) 2019-08-26

Deep Learning models have become dominant in tackling financial time-series analysis problems, overturning conventional machine learning and statistical methods. Most often, a model trained for one market or security cannot be directly applied to another due to differences inherent in the market conditions. In addition, as the market evolves over time, it is necessary to update existing models or train new ones when new data is made available. This scenario, which is inherent in most financial forecasting applications, naturally raises the following research...

10.1016/j.patcog.2023.109604 article EN cc-by Pattern Recognition 2023-04-10

Learning to rank is an essential component in an information retrieval system. State-of-the-art ranking systems are often based on an ensemble of classifiers, such as Random Forest or LambdaMART, which aggregates the outputs produced by thousands of classifiers. The storage and computation requirements of such a model are usually very high, imposing a significant operating cost. To tackle this problem, we propose an algorithm that adaptively learns a single heterogeneous feedforward network architecture, composing...

10.1109/icassp.2019.8683711 article EN ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2019-04-17

Financial time-series forecasting is one of the most challenging domains in the field of time-series analysis. This is mostly due to the highly non-stationary and noisy nature of financial data. With the progressive efforts of the community to design specialized neural networks incorporating prior domain knowledge, many financial analysis problems have been successfully tackled. The temporal attention mechanism is a neural layer that recently gained popularity for its ability to focus on important temporal events. In this paper, we propose a layer based on the ideas of multi-head...

10.23919/eusipco55093.2022.9909957 article EN 2022 30th European Signal Processing Conference (EUSIPCO) 2022-08-29

In this paper, we propose 2D-Attention (2DA), a generic attention formulation for sequence data, which acts as a complementary computation block that can detect and focus on relevant sources of information for the given learning objective. The proposed attention module is incorporated into the recently proposed Neural Bag-of-Features (NBoF) model to enhance its learning capacity. Since 2DA acts as a plug-in layer, injecting it into different stages of the NBoF model results in different 2DA-NBoF architectures, each of which possesses a unique interpretation. We conducted extensive...

10.1109/access.2022.3169776 article EN cc-by IEEE Access 2022-01-01
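
Since the abstract above describes a generic attention block over a 2-D sequence representation (features x time), a short sketch may help. The PyTorch code below applies a softmax attention mask along either dimension of the input; this is an illustrative formulation under my own assumptions, not necessarily the paper's exact 2DA definition.

```python
# Sketch of attention applied over one dimension of a (features x time) representation.
import torch
import torch.nn as nn


class Attention2D(nn.Module):
    def __init__(self, size, axis):
        """size: length of the attended dimension; axis: 1 (features) or 2 (time)."""
        super().__init__()
        self.axis = axis
        self.W = nn.Parameter(torch.eye(size) + 0.01 * torch.randn(size, size))

    def forward(self, x):
        # x: (batch, d, t)
        if self.axis == 2:                                   # attend over time steps
            energy = torch.einsum('bdt,ts->bds', x, self.W)
            mask = torch.softmax(energy, dim=2)
        else:                                                # attend over feature dimensions
            energy = torch.einsum('sd,bdt->bst', self.W, x)
            mask = torch.softmax(energy, dim=1)
        return x * mask                                      # re-weighted input, same shape


x = torch.randn(4, 40, 15)                # 40 features, 15 time steps
print(Attention2D(15, axis=2)(x).shape)   # temporal attention -> (4, 40, 15)
print(Attention2D(40, axis=1)(x).shape)   # feature attention  -> (4, 40, 15)
```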

The visit patterns of insects to specific flowers at particular times during the diurnal cycle and across the season play important roles in pollination biology. Thus, the ability to automatically detect visitors occurring in video sequences greatly reduces the manual human effort needed to collect such data. Data-dependent approaches, such as supervised machine learning algorithms, have become a core component of several automation systems. In this paper, we describe a flower visitor detection system using deep Convolutional...

10.23919/eusipco.2018.8553494 article EN 2018 26th European Signal Processing Conference (EUSIPCO) 2018-09-01

Financial time-series analysis and forecasting have been extensively studied over the past decades, yet still remain a very challenging research topic. Since the financial market is inherently noisy and stochastic, the majority of financial time series of interest are non-stationary and are often obtained from different modalities. This property presents great challenges and can significantly affect the performance of the subsequent analysis/forecasting steps. Recently, the Temporal Attention augmented Bilinear Layer (TABL) has shown strong performance in...

10.1109/icpr48806.2021.9412547 article EN 2020 25th International Conference on Pattern Recognition (ICPR) 2021-01-10

Recently, the Multilinear Compressive Learning (MCL) framework was proposed to efficiently optimize the sensing and learning steps when working with multidimensional signals, i.e., tensors. In Compressive Learning in general, and in MCL in particular, the number of compressed measurements captured by a compressive device characterizes the storage requirement or the bandwidth required for transmission. This number, however, does not completely characterize the learning performance of the system. In this paper, we analyze the relationship between the input signal resolution, the number of measurements, and the performance of MCL...

10.1109/ssci47803.2020.9308418 article EN 2020 IEEE Symposium Series on Computational Intelligence (SSCI) 2020-12-01

Material is one of the intrinsic features of objects, and consequently material recognition plays an important role in image understanding. The same material may have various shapes and appearances while keeping the same physical characteristics. This brings great challenges for material recognition. Besides suitable features, a powerful classifier can also improve the overall performance. Due to the limitations of classical linear neurons, which are used in all shallow and deep neural networks such as CNNs, we propose to apply generalized operational...

10.1109/mmsp48831.2020.9287058 article EN 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP) 2020-09-21

Deep Learning models have become dominant in tackling financial time-series analysis problems, overturning conventional machine learning and statistical methods. Most often, a model trained for one market or security cannot be directly applied to another due to differences inherent in the market conditions. In addition, as the market evolves over time, it is necessary to update existing models or train new ones when new data is made available. This scenario, which is inherent in most forecasting applications, naturally raises the following research...

10.2139/ssrn.4332126 article EN 2023-01-01

Multilinear compressive learning (MCL) is an efficient signal acquisition and learning paradigm for multidimensional signals. The level of compression affects the detection or classification performance of an MCL model, with higher compression rates often associated with lower inference accuracy. However, higher compression rates are more amenable to a wider range of applications, especially those that require low operating bandwidth and minimal energy consumption, such as Internet of Things (IoT) applications. Many communication protocols provide support...

10.1109/jiot.2021.3114743 article EN IEEE Internet of Things Journal 2021-09-23

Progressive Neural Network Learning is a class of algorithms that incrementally construct the network's topology and optimize its parameters based on the training data. While this approach exempts users from the manual task of designing and validating multiple network topologies, it often requires an enormous number of computations. In this paper, we propose to speed up this process by exploiting subsets of the training data at each incremental training step. Three different sampling strategies for selecting the training samples according to different criteria are...

10.1109/icip40778.2020.9191270 article EN 2020 IEEE International Conference on Image Processing (ICIP) 2020-09-30
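
The subset-sampling idea in the abstract above (growing the network one block at a time while fitting each new block only on a sampled portion of the data) can be sketched as follows. The random-feature blocks with a ridge-regression readout and the "hard subset" criterion are illustrative stand-ins chosen for brevity, not the paper's exact blocks or sampling strategies.

```python
# Schematic sketch of progressive network construction with subset sampling.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 16))
y = np.sin(x.sum(axis=1, keepdims=True))         # toy regression target

def fit_block(xs, residual_s, width=64, lam=1e-2):
    """Random hidden features + closed-form ridge readout, trained on a subset."""
    w_in = rng.standard_normal((xs.shape[1], width))
    h = np.tanh(xs @ w_in)
    w_out = np.linalg.solve(h.T @ h + lam * np.eye(width), h.T @ residual_s)
    return w_in, w_out

def predict_block(block, x):
    w_in, w_out = block
    return np.tanh(x @ w_in) @ w_out

blocks, residual = [], y.copy()
for step in range(5):
    per_sample = (residual ** 2).sum(axis=1)      # current per-sample error
    idx = np.argsort(per_sample)[-500:]           # 'hard' subset: worst 25% of samples
    block = fit_block(x[idx], residual[idx])      # train the new block on the subset only
    blocks.append(block)
    residual = residual - predict_block(block, x) # update the residual on all data
    print(f'step {step}: mse = {(residual ** 2).mean():.4f}')
```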