Xuewen Xia

ORCID: 0000-0002-4938-1479
Research Areas
  • Metaheuristic Optimization Algorithms Research
  • Evolutionary Algorithms and Applications
  • Advanced Multi-Objective Optimization Algorithms
  • Image and Signal Denoising Methods
  • Neural Networks and Applications
  • Advanced Graph Neural Networks
  • Advanced Algorithms and Applications
  • Higher Education and Teaching Methods
  • Vehicle Routing Optimization Methods
  • Cellular Automata and Applications
  • Chaos-based Image/Signal Encryption
  • Artificial Immune Systems Applications
  • Time Series Analysis and Forecasting
  • Graph Theory and Algorithms
  • Stochastic processes and financial applications
  • Advanced Computational Techniques and Applications
  • Anomaly Detection Techniques and Applications
  • Stability and Controllability of Differential Equations
  • DNA and Biological Computing
  • Robotic Path Planning Algorithms
  • Education and Work Dynamics
  • Data Management and Algorithms
  • Analysis of environmental and stochastic processes
  • IoT and Edge/Fog Computing
  • Topic Modeling

Zhangzhou Normal University
2019-2024

Hunan Institute of Engineering
2002-2024

East China Jiaotong University
2014-2021

China Agricultural University
2019

Wuhan University
2006-2014

Hubei Engineering University
2009-2013

Hunan Normal University
2012-2013

Huanggang Normal University
2007-2013

Hebei University of Engineering
2012

Anhui Polytechnic University
2011

There are two common challenges in particle swarm optimization (PSO) research, that is, selecting proper exemplars and designing an efficient learning model for a particle. In this article, we propose a triple archives PSO (TAPSO), in which three archives of particles are used to deal with the above challenges. First, particles that have better fitness (i.e., elites) are recorded in one archive, while other particles that offer faster progress, called profiteers, are saved in another archive. Second, when breeding each dimension of a potential exemplar...

10.1109/tcyb.2019.2943928 article EN cc-by IEEE Transactions on Cybernetics 2019-10-11
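
The snippet above is truncated, but the two-archive idea it describes (elites ranked by fitness, profiteers ranked by recent progress, exemplars bred dimension by dimension) can be illustrated with a small sketch. Everything below, including the archive sizes, the random-walk stand-in for a velocity update, and the 50/50 per-dimension crossover, is an illustrative assumption rather than the paper's exact procedure.

```python
import numpy as np

def sphere(x):
    """Toy minimization objective used only for illustration."""
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, pop_size, archive_size = 10, 30, 5

# Positions at the previous and current generation (a random walk stands in
# for a real PSO velocity update here).
prev_pos = rng.uniform(-5, 5, (pop_size, dim))
pos = prev_pos + rng.normal(0.0, 0.5, (pop_size, dim))

prev_fit = np.array([sphere(p) for p in prev_pos])
fit = np.array([sphere(p) for p in pos])
progress = prev_fit - fit                       # larger = faster recent improvement

elite_archive = pos[np.argsort(fit)[:archive_size]]            # best current fitness
profiteer_archive = pos[np.argsort(-progress)[:archive_size]]  # fastest progress

# Breed a candidate exemplar dimension by dimension: each dimension is copied
# from a randomly chosen elite or profiteer (an assumed, simplified crossover).
exemplar = np.empty(dim)
for d in range(dim):
    source = elite_archive if rng.random() < 0.5 else profiteer_archive
    exemplar[d] = source[rng.integers(archive_size), d]

print("candidate exemplar fitness:", sphere(exemplar))
```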

This paper proposes a multi-swarm particle swarm optimization (MSPSO) that consists of three novel strategies to balance the exploration and exploitation abilities. The proposed MSPSO is based on a multiple-swarm framework cooperating with a dynamic sub-swarm number strategy (DNS), a sub-swarm regrouping strategy (SRS), and a purposeful detecting strategy (PDS). Firstly, DNS divides the entire population into many sub-swarms in the early stage and periodically reduces the number of sub-swarms (i.e., increases the size of each sub-swarm) along the evolutionary...

10.1016/j.asoc.2018.02.042 article EN cc-by-nc-nd Applied Soft Computing 2018-03-02
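
Only the dynamic sub-swarm number strategy (DNS) is partially described above. A minimal sketch of such a schedule, with an assumed halving period and assumed swarm sizes, might look like this; the regrouping (SRS) and purposeful detecting (PDS) strategies are not modelled.

```python
import numpy as np

rng = np.random.default_rng(1)
pop_size, dim = 60, 10
positions = rng.uniform(-10, 10, (pop_size, dim))

# Assumed DNS schedule: start with many small sub-swarms and periodically halve
# the sub-swarm count, so each sub-swarm grows as the search matures.
def dns_schedule(generation, period=50, initial_subswarms=12):
    reductions = generation // period
    return max(1, initial_subswarms // (2 ** reductions))

def regroup(positions, n_subswarms, rng):
    """Randomly partition the population into n_subswarms index groups."""
    idx = rng.permutation(len(positions))
    return np.array_split(idx, n_subswarms)

for gen in (0, 60, 120, 240):
    groups = regroup(positions, dns_schedule(gen), rng)
    print(f"gen {gen}: {len(groups)} sub-swarms, first one holds {len(groups[0])} particles")
```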

Differential evolution (DE) is an efficient and powerful stochastic optimization algorithm. Extensive studies in recent years have verified that different trial vector generation strategies associated with different control parameters offer distinct characteristics on different problems. To take full advantage of them, ensemble methods based on various adaptive strategies have been proposed during the last decade. Aiming to organically integrate the merits of some popular strategies and parameters and then utilize them well, we propose a multi-role DE (MRDE) in this paper. In MRDE,...

10.1016/j.swevo.2019.03.003 article EN cc-by-nc-nd Swarm and Evolutionary Computation 2019-03-12
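
The snippet cuts off before MRDE's role assignment is spelled out, so the sketch below only illustrates the ensemble idea it builds on: different individuals apply different classic DE mutation strategies with different control parameters. The particular strategy pool, the (F, CR) values, and the round-robin role assignment are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
pop_size, dim = 20, 10
pop = rng.uniform(-5, 5, (pop_size, dim))
best = pop[np.argmin([np.sum(x ** 2) for x in pop])]   # current best on a toy objective

def rand_1(pop, i, F, rng):
    """Classic DE/rand/1 mutation."""
    r1, r2, r3 = rng.choice([k for k in range(len(pop)) if k != i], 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def current_to_best_1(pop, i, F, rng):
    """Classic DE/current-to-best/1 mutation."""
    r1, r2 = rng.choice([k for k in range(len(pop)) if k != i], 2, replace=False)
    return pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])

# Assumed "roles": each individual is bound to one (strategy, F, CR) combination.
roles = [(rand_1, 0.5, 0.9), (current_to_best_1, 0.8, 0.2)]

trials = []
for i in range(pop_size):
    strategy, F, CR = roles[i % len(roles)]
    mutant = strategy(pop, i, F, rng)
    mask = rng.random(dim) < CR                 # binomial crossover with the role's CR
    mask[rng.integers(dim)] = True              # guarantee at least one mutated dimension
    trials.append(np.where(mask, mutant, pop[i]))

print("generated", len(trials), "trial vectors")
```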

Graph neural networks (GNNs) have emerged as a powerful tool in graph representation learning. However, they are increasingly challenged by over-smoothing as network depth grows, compromising their ability to capture and represent complex graph structures. Additionally, some popular GNN variants only consider local neighbor information during node updating, ignoring the global structural information, which leads to inadequate learning and differentiation. To address these challenges, we introduce a novel framework,...

10.1007/s11063-024-11496-1 article EN cc-by Neural Processing Letters 2024-02-09
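
The abstract breaks off before the proposed framework is named. As a small illustration of the over-smoothing effect it refers to, the sketch below repeatedly applies mean-neighbour aggregation on a toy graph and shows the node features collapsing toward one another; it is purely illustrative and not the paper's model.

```python
import numpy as np

# Toy 4-node path graph: adjacency with self-loops, row-normalized so one layer
# of propagation is plain mean-neighbour aggregation.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)

H = np.eye(4)                          # one-hot initial node features
for depth in (1, 4, 16, 64):
    Hk = np.linalg.matrix_power(A_hat, depth) @ H
    spread = Hk.std(axis=0).mean()     # how distinguishable the node embeddings remain
    print(f"depth {depth:2d}: mean feature std across nodes = {spread:.4f}")
```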

Community detection is crucial in data mining. Traditional methods primarily focus on graph structure, often neglecting the significance of attribute features. In contrast, deep learning-based approaches incorporate attribute features and local structural information through contrastive learning, improving detection performance. However, existing algorithms' complex design and joint optimization make them difficult to train and reduce their efficiency. Additionally, these methods require the number of communities to be predefined, making...

10.48550/arxiv.2501.12946 preprint EN arXiv (Cornell University) 2025-01-22

The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity, while its local exploitation capability is relatively poor. This affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome the deficiency of BSA. In BSAISA, the amplitude control factor (F)...

10.1155/2018/9167414 article EN cc-by Computational Intelligence and Neuroscience 2018-01-01
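
The formula for the redesigned amplitude control factor is not recoverable from the truncated snippet, so the sketch below only illustrates the general idea of a simulated-annealing-style schedule: a "temperature" that cools over the generations drives F from large (exploratory) to small (exploitative) values. The decay curve and the bounds are assumptions.

```python
import math

def sa_inspired_amplitude(generation, max_generations, f0=3.0, f_min=0.45):
    """Assumed SA-style schedule: the amplitude control factor F decays along an
    exponential 'cooling' curve, shifting the search from exploration towards
    exploitation. This is an illustrative formula, not the one from the paper."""
    temperature = math.exp(-5.0 * generation / max_generations)
    return f_min + (f0 - f_min) * temperature

# In canonical BSA, F scales the mutation step: mutant = pop + F * (historical_pop - pop)
for g in (0, 100, 250, 500):
    print(f"generation {g:3d}: F = {sa_inspired_amplitude(g, 500):.3f}")
```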

In the differential evolution algorithm (DE), selecting individuals with higher fitness to generate a mutant vector is a widely accepted method. In this case, the population is under a fitness-based driving force. Although this driving force is beneficial for exploitation, it sacrifices performance on exploration. In this paper, a novelty-hybrid-fitness driving force is introduced to trade off the contradictions between exploration and exploitation of DE. In the newly proposed DE, named NFDDE, both novelty and fitness values are considered when choosing individuals to create...

10.1016/j.ins.2021.07.082 article EN cc-by-nc-nd Information Sciences 2021-07-31
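
The snippet does not give NFDDE's exact novelty measure or weighting, so the sketch below uses a common novelty definition (mean distance to the k nearest neighbours) and an assumed equal-weight rank blend of novelty and fitness to pick base individuals for mutation.

```python
import numpy as np

rng = np.random.default_rng(3)
pop = rng.uniform(-5, 5, (30, 10))
fitness = np.array([np.sum(x ** 2) for x in pop])    # toy minimization objective

def novelty(pop, k=5):
    """A common novelty definition (assumed here): mean Euclidean distance to
    the k nearest neighbours in decision space."""
    dists = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    dists.sort(axis=1)
    return dists[:, 1:k + 1].mean(axis=1)            # column 0 is the self-distance

# Hybrid score: rank-blend of fitness (lower is better) and novelty (higher is
# better). The equal 0.5/0.5 weighting is an illustrative assumption.
fit_rank = np.argsort(np.argsort(fitness))
nov_rank = np.argsort(np.argsort(-novelty(pop)))
hybrid = 0.5 * fit_rank + 0.5 * nov_rank

base_indices = np.argsort(hybrid)[:5]                # individuals used to build mutants
print("selected base vectors:", base_indices)
```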

In a canonical particle swarm optimization (PSO) algorithm, the fitness is a widely accepted criterion when selecting exemplars for a particle, which exhibits promising performance on simple unimodal functions. To improve PSO's performance on complicated multimodal functions, various selection strategies based on the fitness value have been introduced into the PSO community. However, the inherent defects of fitness-based selections still remain. In this paper, novelty is treated as an additional criterion for choosing exemplars for each particle. In each generation, a few elites and...

10.1109/tfuzz.2022.3227464 article EN cc-by IEEE Transactions on Fuzzy Systems 2022-12-07

Feature selection is an important pre-processing step in machine learning and data mining tasks, which improves the performance of learning models by removing redundant and irrelevant features. Many feature selection algorithms have been widely studied, including greedy and random search approaches, to find a subset of the most important features for fulfilling a particular task (i.e., classification or regression). As a powerful swarm-based meta-heuristic method, particle swarm optimization (PSO) has been reported to be suitable for problems with...

10.1109/access.2019.2953298 article EN cc-by IEEE Access 2019-01-01
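
As a hedged illustration of PSO-based feature selection, the sketch below runs a minimal binary PSO in which each bit marks a selected feature; the sigmoid transfer function, the toy least-squares evaluator, and all parameter values are assumptions, not the settings studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_features = 200, 15
X = rng.normal(size=(n_samples, n_features))
y = X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=n_samples)   # only features 0 and 3 matter

def fitness(bits_row):
    """Toy evaluator (assumed): least-squares error on the selected features plus a
    small penalty on subset size. A real study would wrap a classifier here."""
    mask = bits_row.astype(bool)
    if not mask.any():
        return np.inf
    coef, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    return np.mean((X[:, mask] @ coef - y) ** 2) + 0.01 * mask.sum()

# Minimal binary PSO: velocities pass through a sigmoid to give bit probabilities.
pop, iters, w, c1, c2 = 20, 40, 0.7, 1.5, 1.5
vel = rng.normal(size=(pop, n_features))
bits = (rng.random((pop, n_features)) < 0.5).astype(int)
pbest, pbest_fit = bits.copy(), np.array([fitness(b) for b in bits])
gbest = pbest[np.argmin(pbest_fit)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, pop, n_features))
    vel = w * vel + c1 * r1 * (pbest - bits) + c2 * r2 * (gbest - bits)
    vel = np.clip(vel, -6, 6)                         # keep the sigmoid numerically tame
    bits = (rng.random((pop, n_features)) < 1.0 / (1.0 + np.exp(-vel))).astype(int)
    fit = np.array([fitness(b) for b in bits])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = bits[improved], fit[improved]
    gbest = pbest[np.argmin(pbest_fit)].copy()

print("selected feature indices:", np.flatnonzero(gbest))
```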

In order to resolve the conflict between convergence speed and population diversity of the particle swarm optimization (PSO) algorithm, an improved PSO, called reverse-learning and local-learning PSO (RLPSO), is presented, in which a reverse-learning behavior is implemented by some particles while a local-learning behavior is adopted by the elite in each generation. During the process, the inferior initial particles and a particle's historical worst position are reserved to attract particles to leap out of local optimums. Furthermore, a Hamming distance no less than a default rejection threshold is set with the aim to maintain...

10.4304/jsw.9.2.350-357 article EN Journal of Software 2014-02-01
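
The abstract is heavily truncated, so the sketch below only illustrates one plausible reading of the reverse-learning idea: a subset of particles receives an extra velocity term that pushes them away from their historical worst positions. The subset choice, the coefficient c3, and the omission of the Hamming-distance rejection rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
dim, pop = 10, 20
x = rng.uniform(-5, 5, (pop, dim))
v = np.zeros((pop, dim))
pbest = x.copy()                              # personal best positions
pworst = rng.uniform(-5, 5, (pop, dim))       # toy historical worst positions
gbest = x[0].copy()                           # stand-in for the global best position

w, c1, c2, c3 = 0.7, 1.5, 1.5, 0.5            # c3 weights the assumed reverse-learning term

# One illustrative velocity update: half of the swarm (an arbitrary subset here)
# is additionally pushed away from its historical worst position, while the rest
# follows the canonical update. The Hamming-distance check is not modelled.
r1, r2, r3 = rng.random((3, pop, dim))
v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
reverse = np.arange(pop // 2)                 # assumed set of reverse-learning particles
v[reverse] += c3 * r3[reverse] * (x[reverse] - pworst[reverse])
x = x + v
print("updated positions shape:", x.shape)
```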