Wei Wu

ORCID: 0000-0002-2018-7058
Research Areas
  • Neural Networks and Applications
  • Machine Learning and ELM
  • Fuzzy Logic and Control Systems
  • Face and Expression Recognition
  • Rough Sets and Fuzzy Logic
  • Stochastic Gradient Optimization Techniques
  • Image Retrieval and Classification Techniques
  • Model Reduction and Neural Networks
  • Domain Adaptation and Few-Shot Learning
  • Neural Networks Stability and Synchronization
  • Image and Video Stabilization
  • Silicone and Siloxane Chemistry
  • Environmental Changes in China
  • Perovskite Materials and Applications
  • Machine Learning and Algorithms
  • Text and Document Classification Technologies
  • Cellular transport and secretion
  • Environmental and Agricultural Sciences
  • Advanced Neural Network Applications
  • Smart Agriculture and AI
  • Computational Geometry and Mesh Generation
  • Carbon and Quantum Dots Applications
  • Visual Attention and Saliency Detection
  • Generative Adversarial Networks and Image Synthesis
  • Tensor decomposition and applications

Bozhou People's Hospital
2024

Taishan University
2023

Dalian University of Technology
2010-2022

Shanghai Artificial Intelligence Laboratory
2022

Dalian University
2012

National Institute of Neurological Disorders and Stroke
2007

Cross-domain object detection is a realistic and challenging task in the wild. It suffers from performance degradation due to the large shift of data distributions and the lack of instance-level annotations in the target domain. Existing approaches mainly focus on either of these two difficulties, even though they are closely coupled in cross-domain object detection. To solve this problem, we propose a novel Target-perceived Dual-branch Distillation (TDD) framework. By integrating detection branches of both source and target domains in a unified teacher-student...

10.1109/cvpr52688.2022.00935 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022-06-01

Upon exocytosis, fused vesicles must be retrieved for recycling. One route of retrieval is to generate endosome-like structures, from which small vesicles bud off. Endosome-like structures are widely thought to be generated slowly (approximately 1 min) from the plasma membrane, a process called bulk endocytosis. Although the concept of bulk endocytosis seems established, kinetic evidence showing instant membrane fission at synapses is still missing. The present work provides this missing piece at a calyx-type synapse. We used...

10.1073/pnas.0611512104 article EN Proceedings of the National Academy of Sciences 2007-06-06

Weakly supervised salient object detection (WSOD) aims at training saliency models with weak supervision. Normally, WSOD methods use pseudo labels converted from image-level classification labels to train the network. However, pseudo labels always contain noise compared with the ground truth. Previous methods are directly affected by the label noise and generate error-prone predictions. To mitigate this problem, we design a noise-robust adversarial learning framework and propose a noise-sensitive training strategy for the framework. The framework consists...

10.1109/tmm.2022.3152567 article EN IEEE Transactions on Multimedia 2022-02-18

In this paper, we propose a group Lasso regularization term as a hidden-layer regularization method for feedforward neural networks. Adding this term into the standard error function is a fruitful approach to eliminating redundant or unnecessary neurons from the network structure. As a comparison, the popular Lasso regularization is also introduced into the error function of the network. Our novel regularization can force the outgoing weights of redundant hidden neurons to become smaller during the training process, so that these neurons can eventually be removed after training. This means it can simplify the network structure and minimize the computational cost. Numerical simulations...

10.3390/sym10100525 article EN Symmetry 2018-10-19
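The group Lasso idea above can be illustrated with a short sketch. This is a minimal, hypothetical implementation (not the paper's exact formulation): each hidden neuron's outgoing weights form one group, the penalty is the sum of the groups' Euclidean norms, and its (sub)gradient is added to the usual error gradient during training so that whole rows can shrink toward zero.

```python
import numpy as np

def group_lasso_penalty(W_out, lam=0.01):
    """Group Lasso penalty over hidden neurons.

    W_out: (n_hidden, n_out) matrix; row i holds the outgoing
    weights of hidden neuron i, which form one group.
    """
    return lam * np.sum(np.linalg.norm(W_out, axis=1))

def group_lasso_grad(W_out, lam=0.01, eps=1e-12):
    """Subgradient of the penalty w.r.t. W_out.

    Rows with (near-)zero norm get a zero subgradient, so
    already-pruned neurons are left alone.
    """
    norms = np.linalg.norm(W_out, axis=1, keepdims=True)
    return lam * W_out / np.maximum(norms, eps)
```

In a training loop one would simply use `W_out -= lr * (error_grad + group_lasso_grad(W_out, lam))`; rows whose norm falls below a threshold correspond to removable neurons.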

10.1016/s0377-0427(01)00571-4 article EN Journal of Computational and Applied Mathematics 2002-07-01

7, 8], strategy games [1,2], medical diagnosis [9,10], securities market analysis [11], etc., and has achieved great success.

10.1360/sspma2018-00073 article EN Zhongguo kexue. Wulixue Lixue Tianwenxue 2018-07-05

The batch split-complex backpropagation (BSCBP) algorithm for training complex-valued neural networks is considered. For a constant learning rate, it is proved that the error function of BSCBP is monotone during the iteration process and that the gradient of the error function tends to zero. By adding a moderate condition, the weight sequence itself is also shown to be convergent. A numerical example is given to support the theoretical analysis.

10.1155/2009/329173 article EN cc-by Discrete Dynamics in Nature and Society 2009-01-01

We successfully developed biocompatible nanocomposites of POSS/Si-QDs and POSS/C-QDs that emit blue, green, and red light, demonstrating excellent cell imaging capabilities.

10.1039/d4ra02987a article EN cc-by-nc RSC Advances 2024-01-01

The k nearest-neighbour (kNN) algorithm has enjoyed much attention since its inception as an intuitive and effective classification method. Many further developments of kNN have been reported, such as those integrated with fuzzy sets, rough sets, and evolutionary computation. In particular, these modifications have shown significant enhancement in performance. This paper presents another improvement, leading to a multi-functional nearest-neighbour (MFNN) approach which is conceptually simple to understand. It employs an aggregation...

10.1007/s00500-017-2528-4 article EN cc-by Soft Computing 2017-03-06
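For context, the baseline that the MFNN approach builds on can be sketched in a few lines. This is the classic majority-vote kNN only, under assumed Euclidean distance, not the paper's fuzzy aggregation variant:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classic kNN: majority vote among the k nearest training points
    under Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = np.argsort(d)[:k]               # indices of the k closest
    return Counter(y_train[nearest]).most_common(1)[0][0]
```

Fuzzy and rough-set variants replace the hard vote with a graded aggregation of the neighbours' class memberships, which is where the reported performance gains come from.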

This paper studies the $L^p$ approximation capabilities of sum-of-product (SOPNN) and sigma-pi-sigma (SPSNN) neural networks. It is proved that the set of functions generated by the SOPNN with its activation function in $L^p_{loc}(\mathcal{R})$ is dense in $L^p(\mathcal{K})$ for any compact set $\mathcal{K}\subset \mathcal{R}^N$, if and only if the activation function is not a polynomial almost everywhere. It is also shown that if the activation function of the SPSNN is in $L^\infty_{loc}(\mathcal{R})$, then the functions generated by the SPSNN are dense in $L^p(\mathcal{K})$ if and only if the activation function is not a constant (a.e.).

10.1142/s0129065707001251 article EN International Journal of Neural Systems 2007-10-01

10.1007/s10483-008-0912-z article EN Applied Mathematics and Mechanics 2008-09-01

Considered in this short note is the design of the output-layer nodes of feedforward neural networks for solving multiple-class classification problems with $r$ ($r \geq 3$) classes of samples. The common and conventional setting, called the "one-to-one approach" in this paper, is as follows: the output layer contains $r$ nodes, each corresponding...

10.1109/access.2018.2888852 article EN cc-by-nc-nd IEEE Access 2018-12-20

Recurrent neural networks have been used for analysis and prediction of time series. This paper is concerned with the convergence of a gradient descent algorithm for training diagonal recurrent neural networks. The existing results consider the online gradient algorithm based on the assumption that a very large number (or infinitely many, in theory) of training samples of the time series are available, and accordingly use stochastic process theory to establish some convergence results of a probability nature. In this paper, we consider the case that only a small number of samples are available, such that the stochastic treatment of the problem is no longer...

10.1109/bicta.2007.4806412 article EN 2007-09-01

An online gradient method is presented and discussed for Pi-Sigma neural networks with stochastic inputs. The error function is proved to be monotone in the training process, and the gradient of the error function tends to zero if the weight sequence is uniformly bounded. Furthermore, after adding a moderate condition, the weight sequence itself is also shown to be convergent.

10.1109/foci.2007.371528 article EN 2007-04-01
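A Pi-Sigma network and one online gradient update can be sketched briefly. This is a minimal illustration under assumed conventions (sigmoid output unit, squared error, bias folded into the input), not the paper's exact setup: linear summing units feed a single product unit, and each summing unit's gradient carries the product of the other units' outputs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pi_sigma_forward(W, x):
    """Pi-Sigma forward pass.

    W: (K, n) weights of K linear summing units; their outputs are
    multiplied by a single product unit, then squashed by a sigmoid.
    """
    s = W @ x            # summing-unit outputs
    net = np.prod(s)     # product unit
    return sigmoid(net), s, net

def online_gradient_step(W, x, t, lr=0.1):
    """One online gradient update on the squared error 0.5*(y - t)^2."""
    y, s, net = pi_sigma_forward(W, x)
    delta = (y - t) * y * (1.0 - y)          # dE/dnet via sigmoid'
    for j in range(W.shape[0]):
        prod_others = np.prod(np.delete(s, j))   # product of the other units
        W[j] -= lr * delta * prod_others * x
    return W, y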

The main results of the paper concern an enhanced tensor product model transformation-based variable universe of discourse controller (EHTPVUD) design, which uses the Hammersley sampling method to generate the hyper-cube grid for the higher order singular value decomposition. Moreover, a Hammersley sampling method-based parallel distributed compensation is also proposed as a comparison with the newly designed controller. The error and error derivative are synthesized by the gains generated by the parallel distributed compensation. Then, the EHTPVUD controller is designed based on the variable universe of discourse. Finally,...

10.1109/icist.2018.8426173 article EN 2018-06-01

Convergence results are presented for the batch backpropagation algorithm with variable learning rates for training feedforward neural networks with a hidden layer. The monotonicity of the error function in the iteration process is also proved.

10.1109/micai.2006.11 article EN 2006-11-01
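The setting above can be illustrated with a small sketch: batch gradient descent on a one-hidden-layer network with a decaying (variable) learning rate, tracking the error over iterations. The architecture, activation, and learning-rate schedule below are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, w2, X):
    """One hidden layer with tanh activations, linear output."""
    H = np.tanh(X @ W1)
    return H @ w2, H

def batch_error(W1, w2, X, y):
    out, _ = forward(W1, w2, X)
    return 0.5 * np.mean((out - y) ** 2)

def train(X, y, n_hidden=4, epochs=200, eta0=0.2):
    """Batch backpropagation with a decaying learning rate eta_t."""
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    w2 = rng.normal(scale=0.5, size=n_hidden)
    errs = [batch_error(W1, w2, X, y)]
    for t in range(epochs):
        eta = eta0 / (1 + 0.01 * t)          # variable learning rate
        out, H = forward(W1, w2, X)
        e = (out - y) / len(y)               # batch error signal
        grad_w2 = H.T @ e
        grad_W1 = X.T @ (np.outer(e, w2) * (1 - H ** 2))
        w2 -= eta * grad_w2
        W1 -= eta * grad_W1
        errs.append(batch_error(W1, w2, X, y))
    return errs
```

Plotting `errs` for such a run shows the monotone decrease of the batch error that the paper establishes theoretically (under its conditions on the learning rates).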