Sandro Ridella

ORCID: 0000-0003-0612-8219
Research Areas
  • Neural Networks and Applications
  • Face and Expression Recognition
  • Machine Learning and Algorithms
  • Machine Learning and Data Classification
  • Machine Learning and ELM
  • Image and Signal Denoising Methods
  • Microwave and Dielectric Measurement Techniques
  • Advanced Data Compression Techniques
  • Chaos control and synchronization
  • Anomaly Detection Techniques and Applications
  • Blind Source Separation Techniques
  • Microwave Engineering and Waveguides
  • Computability, Logic, AI Algorithms
  • Gene expression and cancer classification
  • Control Systems and Identification
  • Model Reduction and Neural Networks
  • Metaheuristic Optimization Algorithms Research
  • Nonlinear Dynamics and Pattern Formation
  • Numerical Methods and Algorithms
  • Evolutionary Algorithms and Applications
  • Fuzzy Logic and Control Systems
  • Fault Detection and Control Systems
  • Digital Filter Design and Implementation
  • Electromagnetic Scattering and Analysis
  • Fractal and DNA sequence analysis

University of Genoa
2015-2024

National Research Council
1987-2003

Sapienza University of Rome
2002

Istituto per il Rilevamento Elettromagnetico dell'Ambiente
1973-1991

Superconducting and other Innovative Materials and Devices Institute
1975-1987

Consorzio Roma Ricerche
1987

Temple College
1982

Temple University
1982

A new global optimization algorithm for functions of continuous variables is presented, derived from the "Simulated Annealing" algorithm recently introduced in combinatorial optimization. The method is essentially an iterative random search procedure with adaptive moves along the coordinate directions. It permits uphill moves under the control of a probabilistic criterion, thus tending to avoid the first local minima encountered. The algorithm has been tested against the Nelder and Mead simplex method and against a version of Adaptive Random Search. The test results were...

10.1145/29380.29864 article EN ACM Transactions on Mathematical Software 1987-09-01
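The core of the procedure described above can be sketched in a few lines: annealed random search with trial moves along each coordinate direction and a Metropolis acceptance rule. This is a minimal illustration of the idea only; the paper's step-size adaptation and termination tests are omitted, and all parameter values here are assumptions.

```python
import math
import random

def simulated_annealing(f, x0, lower, upper, t0=10.0, cooling=0.85,
                        n_temps=60, n_moves=20, seed=0):
    """Minimize f over a box via annealed random search with moves
    along the coordinate directions (simplified sketch)."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    best, fbest = list(x), fx
    step = [(u - l) / 2 for l, u in zip(lower, upper)]
    t = t0
    for _ in range(n_temps):
        for _ in range(n_moves):
            for i in range(len(x)):  # one trial move per coordinate
                cand = list(x)
                cand[i] += rng.uniform(-step[i], step[i])
                cand[i] = min(max(cand[i], lower[i]), upper[i])
                fc = f(cand)
                # Metropolis criterion: always accept downhill moves,
                # accept uphill moves with probability exp(-df/t)
                if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
                    x, fx = cand, fc
                    if fx < fbest:
                        best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

The uphill acceptances at high temperature let the search escape the first local minima it encounters; as the temperature cools, the procedure degenerates into a local descent.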

In this paper, we propose a digital architecture for support vector machine (SVM) learning and discuss its implementation on a field-programmable gate array (FPGA). We briefly analyze the effects of quantization on the performance of the SVM in classification problems to show its robustness, in the feedforward phase, with respect to fixed-point math implementations; we then address the problem of learning. The architecture described here makes use of a new algorithm which is less sensitive to quantization errors than the solutions that have appeared so far in the literature. It is composed of two parts:...

10.1109/tnn.2003.816033 article EN IEEE Transactions on Neural Networks 2003-09-01
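The quantization-robustness claim for the feedforward phase can be illustrated with a toy experiment (not the paper's architecture): map weights, bias and inputs to scaled integers in an assumed Q8.8 fixed-point format, evaluate the decision function with integer arithmetic only, and check that the predicted class matches the floating-point result.

```python
FRAC_BITS = 8          # fractional bits of the assumed Q8.8 format
SCALE = 1 << FRAC_BITS

def quantize(x):
    """Round a real value to the nearest fixed-point integer."""
    return int(round(x * SCALE))

def svm_decision_float(w, b, x):
    """Linear SVM decision function in floating point."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def svm_decision_fixed(wq, bq, xq):
    """Integer-only evaluation: a Q8.8 x Q8.8 product is Q16.16,
    so the bias is rescaled to match before adding."""
    return sum(wi * xi for wi, xi in zip(wq, xq)) + bq * SCALE

# illustrative weights, not taken from any trained model
w, b = [0.75, -1.25, 0.5], 0.1
wq, bq = [quantize(v) for v in w], quantize(b)

for x in ([1.0, 0.5, -0.25], [-0.5, 0.75, 1.5], [0.2, 0.2, 0.2]):
    xq = [quantize(v) for v in x]
    f_float = svm_decision_float(w, b, x)
    f_fixed = svm_decision_fixed(wq, bq, xq) / (SCALE * SCALE)
    assert (f_float >= 0) == (f_fixed >= 0)   # same predicted class
```

Only the sign of the decision function matters for classification, which is why moderate rounding error in the margin value leaves the predicted class unchanged.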

In-sample approaches to model selection and error estimation of support vector machines (SVMs) are not as widespread as out-of-sample methods, where part of the data is removed from the training set for validation and testing purposes, mainly because the practical application of the latter is straightforward and they provide, in many cases, satisfactory results. In this paper, we survey some recent and not-so-recent results of the data-dependent structural risk minimization framework and propose a proper reformulation of SVM learning...

10.1109/tnnls.2012.2202401 article EN IEEE Transactions on Neural Networks and Learning Systems 2012-06-29

The class of mapping networks is a general family of tools able to perform a wide variety of tasks. This paper presents a standardized, uniform representation for this class of networks, and introduces a simple modification of the multilayer perceptron with interesting practical properties, especially well suited to cope with pattern classification tasks. The proposed model unifies the two main paradigms found in classification, namely, surface-based and prototype-based schemes, while retaining the advantage of being trainable by backpropagation....

10.1109/72.554194 article EN IEEE Transactions on Neural Networks 1997-01-01
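The geometric idea behind unifying surface-based and prototype-based schemes can be sketched as follows (an illustration of the principle, not the paper's trained network): augmenting the input vector with its squared norm lets a single linear unit realize either a hyperplane or a closed, circle-like boundary, depending on the weight given to the extra input.

```python
# A linear unit on the augmented input (x, ||x||^2):
#   f(x) = w.x + w_extra * ||x||^2 + b
# With w_extra == 0 this is a hyperplane (surface-based);
# with w_extra != 0 it is a circle/sphere around a prototype.
# All weight values below are illustrative assumptions.

def augmented_linear_unit(w, w_extra, b, x):
    sq_norm = sum(v * v for v in x)
    return sum(wi * vi for wi, vi in zip(w, x)) + w_extra * sq_norm + b

# weights implementing the circle ||x - c||^2 = r^2 with c = (1, 1), r = 1:
#   -||x||^2 + 2 c.x + (r^2 - ||c||^2) > 0  inside the circle
c, r = (1.0, 1.0), 1.0
w = [2 * ci for ci in c]                  # 2c
w_extra = -1.0                            # coefficient of ||x||^2
b = r ** 2 - sum(ci * ci for ci in c)     # r^2 - ||c||^2

inside = augmented_linear_unit(w, w_extra, b, (1.0, 1.2))   # near the centre
outside = augmented_linear_unit(w, w_extra, b, (3.0, 3.0))  # far away
assert inside > 0 and outside < 0
```

Because the extra input is a fixed function of x, the augmented unit remains linear in its weights and stays trainable by ordinary backpropagation.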

An optimum weight initialization which strongly improves the performance of the back propagation (BP) algorithm is suggested. By statistical analysis, a scale factor, R (which is proportional to the maximum magnitude of the weights), is obtained as a function of the paralyzed neuron percentage (PNP). Also, by computer simulation, performances in terms of convergence speed have been related to PNP. A range for R is shown to exist in order to minimize the time needed to reach the minimum of the cost function. Normalization factors are properly defined, which leads...

10.1109/72.143378 article EN IEEE Transactions on Neural Networks 1992-07-01
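The paralyzed-neuron mechanism can be demonstrated with a small simulation (thresholds and input statistics here are illustrative assumptions, not the paper's analysis): weights drawn uniformly in [-R, R] feed a sigmoid unit, and when the net input magnitude is large the sigmoid saturates, its derivative vanishes, and the neuron effectively stops learning.

```python
import random

def estimate_pnp(R, fan_in=20, trials=2000, sat=4.0, seed=1):
    """Estimate the paralyzed neuron percentage (PNP): the fraction
    of simulated neurons whose net input exceeds the saturation
    threshold `sat`, where the sigmoid derivative is nearly zero."""
    rng = random.Random(seed)
    paralyzed = 0
    for _ in range(trials):
        w = [rng.uniform(-R, R) for _ in range(fan_in)]       # init weights
        x = [rng.uniform(-1.0, 1.0) for _ in range(fan_in)]   # random input
        net = sum(wi * xi for wi, xi in zip(w, x))
        if abs(net) > sat:
            paralyzed += 1
    return paralyzed / trials

pnp_small = estimate_pnp(R=0.5)
pnp_large = estimate_pnp(R=5.0)
assert pnp_small < pnp_large   # larger initial weights -> more paralysis
```

This is why a bounded range of R exists: too small and the gradients start out tiny, too large and a growing fraction of neurons begins training already saturated.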

A common belief is that Machine Learning Theory (MLT) is not very useful, in practice, for performing effective SVM model selection. This belief is supported by experience, because well-known hold-out methods like cross-validation, the leave-one-out, and the bootstrap usually achieve better results than the ones derived from MLT. We show in this paper that, in a small-sample setting, i.e. when the dimensionality of the data is larger than the number of samples, a careful application of MLT can outperform other methods in selecting the optimal hyperparameters of an SVM.

10.1109/ijcnn.2010.5596450 article EN 2010 International Joint Conference on Neural Networks (IJCNN) 2010-07-01

We present here a hardware-friendly version of the support vector machine (SVM), which is useful to implement its feed-forward phase on limited-resources devices such as field programmable gate arrays (FPGAs) or microcontrollers, where a floating-point unit is seldom available. Our proposal is tested on a machine-vision benchmark dataset for automotive applications.

10.1109/ijcnn.2007.4371156 article EN 2007 International Joint Conference on Neural Networks (IJCNN) 2007-08-01

A method for identifying the scattering parameters of launchers and uniform microstrips is presented. It is shown that eight complex measurements (magnitude and phase) on two microstrips which differ only in length, inserted between the launchers, can give, with suitable algebraic treatment, the S-parameters of either of the launchers. This technique is promising for the deembedding of active devices as well as of microstrip discontinuities.

10.1109/tim.1976.6312235 article EN IEEE Transactions on Instrumentation and Measurement 1976-12-01
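The algebraic treatment can be sketched with cascaded transmission (wave) matrices; this is a schematic of the standard two-line idea under assumed notation, not the paper's exact derivation:

```latex
% Measured two-port k (k = 1, 2): launcher A, line of length l_k, launcher B
M_k = A \, L_k \, B, \qquad
L_k = \begin{pmatrix} e^{-\gamma l_k} & 0 \\ 0 & e^{\gamma l_k} \end{pmatrix}.
% Taking the ratio of the two measurements eliminates B:
M_1 M_2^{-1}
  = A \begin{pmatrix} e^{-\gamma (l_1 - l_2)} & 0 \\
                      0 & e^{\gamma (l_1 - l_2)} \end{pmatrix} A^{-1},
% so e^{\mp\gamma (l_1 - l_2)} are the eigenvalues of M_1 M_2^{-1},
% yielding the line propagation constant \gamma, while the columns of A
% (the launcher parameters) are the corresponding eigenvectors.
```

The eigenvalue step is why two lines differing only in length suffice: the unknown line behavior reduces to a diagonal matrix that the measurement ratio exposes directly.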

This century has seen an unprecedented increase of public and private investments in Artificial Intelligence (AI) and especially in (Deep) Machine Learning (ML). This has led to breakthroughs in their practical ability to solve complex real-world problems, impacting research and society at large. Instead, our ability to understand the fundamental mechanisms behind these breakthroughs has slowed down, because of their increased complexity, while in the past advances often emerged from foundational research. This has questioned researchers about the necessity for a new theoretical...

10.1016/j.neucom.2023.126227 article EN cc-by Neurocomputing 2023-04-28

Various possible definitions of characteristic impedance are derived from two different microstrip models, obtaining similar results. It is shown that slightly different definitions can yield strongly different behavior versus frequency.

10.1109/tmtt.1978.1129341 article EN IEEE Transactions on Microwave Theory and Techniques 1978-03-01

The purpose of this paper is to obtain a fully empirical stability-based bound on the generalization ability of a learning procedure, thus circumventing some limitations of the structural risk minimization framework. We show that assuming a desirable property of a learning algorithm is sufficient to make data-dependency explicit for stability, which, instead, is usually bounded only in an algorithm-dependent way. In addition, we prove that a well-known and widespread classifier, like the support vector machine (SVM), satisfies...

10.1109/tcyb.2014.2361857 article EN IEEE Transactions on Cybernetics 2014-10-20

We describe in this work a Core Generator for Pattern Recognition tasks. This tool is able to generate, according to user requirements, the hardware description of a digital architecture, which implements a Support Vector Machine, one of the current state-of-the-art algorithms for Pattern Recognition. The output consists of a high-level-language core description, suitable to be mapped on a reconfigurable device, like a Field Programmable Gate Array (FPGA). As an example of the use of our tool, we compare different solutions, by targeting...

10.1142/s0218126611007244 article EN Journal of Circuits Systems and Computers 2011-03-10

In-sample model selection for Support Vector Machines is a promising approach that allows using the training set both for learning the classifier and for tuning its hyperparameters. This is a welcome improvement with respect to out-of-sample methods, like cross-validation, which require removing some samples from the training set to use them only for validation purposes. Unfortunately, in-sample methods require a precise control of the function space, which can be achieved through an unconventional SVM formulation, based on Ivanov regularization. We prove in this work...

10.1109/ijcnn.2011.6033354 article EN 2011-07-01
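The contrast between the conventional and the Ivanov-regularized formulations can be written out as follows (a standard sketch with assumed symbols, not the paper's exact statement):

```latex
% Tikhonov-style (conventional) SVM: the hypothesis space is controlled
% only indirectly, through the trade-off constant C
\min_{w,\, b} \;\; \tfrac{1}{2}\|w\|^2
  + C \sum_{i=1}^{n} \max\!\big(0,\, 1 - y_i (w \cdot x_i + b)\big)
% Ivanov-style SVM: the size of the function space is fixed explicitly
% by a constraint on the norm of w
\min_{w,\, b} \;\; \sum_{i=1}^{n} \max\!\big(0,\, 1 - y_i (w \cdot x_i + b)\big)
\quad \text{s.t.} \quad \|w\|^2 \le \tau^2
```

Fixing $\tau$ pins down the hypothesis space directly, which is what makes the in-sample, theory-based bounds applicable; in the Tikhonov form the effective space depends on the data through $C$.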