- Interconnection Networks and Systems
- Parallel Computing and Optimization Techniques
- Distributed and Parallel Computing Systems
- Network Security and Intrusion Detection
- Cloud Computing and Resource Management
- Advanced Malware Detection Techniques
- Supercapacitor Materials and Fabrication
- Distributed Systems and Fault Tolerance
- Advanced Memory and Neural Computing
- Software-Defined Networks and 5G
- Advanced Data Storage Technologies
- IoT and Edge/Fog Computing
- Neuroscience and Neural Engineering
- Network Traffic and Congestion Control
- Simulation Techniques and Applications
- Mobile Agent-Based Network Management
- Spectroscopy and Chemometric Analyses
- Data Stream Mining Techniques
- Peer-to-Peer Network Technologies
- Advanced Optical Network Technologies
- Embedded Systems Design Techniques
- Service-Oriented Architecture and Web Services
- Analytical Chemistry and Chromatography
- CCD and CMOS Imaging Sensors
- Scheduling and Optimization Algorithms
University of the Basque Country
2012-2023
University of Manchester
2009-2023
Polymat
2010-2023
Chicago State University
2020
Purdue University Northwest
2020
Hospital Clínico Universitario de Valencia
2016
Purdue University West Lafayette
2000
Hospital de Cruces
1995
The high-performance computing landscape is shifting from collections of homogeneous nodes towards heterogeneous systems that combine traditional out-of-order execution cores with accelerator devices. Accelerators, built around GPUs, many-core chips, FPGAs or DSPs, are used to offload compute-intensive tasks. The advent of this type of system has brought about a wide and diverse ecosystem of development platforms, optimization tools and analysis frameworks. This review presents the state-of-the-art for...
Many current parallel computers are built around a torus interconnection network. Machines from Cray, HP, and IBM, among others, make use of this topology. In terms of topological advantages, square (2D) or cubic (3D) tori would be the topologies of choice. However, for different practical reasons, 2D and 3D tori with a different number of nodes per dimension have been used. These mixed-radix topologies are not edge symmetric, which translates into poor performance due to an unbalanced use of network resources. In this work, we analyze twisted tori that...
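The imbalance mentioned above can be seen with a minimal sketch (not from the paper, torus sizes are illustrative assumptions): in an a x b torus with a != b, uniform random traffic travels more hops on average in the longer dimension, so its links are more heavily loaded.

```python
# Illustrative sketch: average per-dimension hop count in a mixed-radix 2D torus.

def avg_ring_distance(k: int) -> float:
    """Average shortest-path distance between two random nodes on a ring of k nodes."""
    return sum(min(d, k - d) for d in range(k)) / k

def torus_dimension_load(a: int, b: int) -> tuple[float, float]:
    """Average hops travelled in the X and Y dimensions of an a x b torus
    under uniform random traffic (independent per dimension)."""
    return avg_ring_distance(a), avg_ring_distance(b)

if __name__ == "__main__":
    for a, b in [(8, 8), (16, 8), (32, 8)]:   # square vs. mixed-radix shapes
        x_hops, y_hops = torus_dimension_load(a, b)
        print(f"{a}x{b} torus: avg X hops = {x_hops:.2f}, avg Y hops = {y_hops:.2f}")
```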
SpiNNaker is a massively parallel architecture designed to model large-scale spiking neural networks in (biological) real-time. Its design is based around ad-hoc multi-core System-on-Chips, which are interconnected using a two-dimensional toroidal triangular mesh. Neurons are modeled in software and their spikes generate packets that propagate through the on- and inter-chip communication fabric, relying on custom-made on-chip multicast routers. This paper models and evaluates instances of its novel interconnect...
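A minimal sketch of the neighbour relation in a two-dimensional toroidal triangular mesh of the kind used to interconnect SpiNNaker chips: each node has six links (E, NE, N, W, SW, S) with wrap-around in both dimensions. The board size below is an illustrative assumption, and this is my own reconstruction of the layout, not code from the paper.

```python
# Six-neighbour toroidal triangular mesh, wrap-around in both dimensions.
DIRECTIONS = {
    "E":  (1, 0),  "NE": (1, 1),   "N": (0, 1),
    "W":  (-1, 0), "SW": (-1, -1), "S": (0, -1),
}

def neighbours(x: int, y: int, width: int, height: int) -> dict[str, tuple[int, int]]:
    """Return the six neighbouring chip coordinates of chip (x, y)."""
    return {name: ((x + dx) % width, (y + dy) % height)
            for name, (dx, dy) in DIRECTIONS.items()}

if __name__ == "__main__":
    # Example: an edge chip of an 8x8 array wraps around in both dimensions.
    print(neighbours(7, 0, 8, 8))
```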
Object detection is an essential capability for performing complex tasks in robotic applications. Today, deep learning (DL) approaches are the basis of state-of-the-art solutions computer vision, where they provide very high accuracy albeit with computational costs. Due to physical limitations platforms, embedded devices not as powerful desktop computers, and adjustments have be made models before transferring them This work benchmarks object devices. Furthermore, some hardware selection...
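A minimal, framework-agnostic sketch of the kind of latency benchmark used to compare detection models on embedded devices. The `infer` callable is a placeholder; in practice it would wrap a TensorFlow Lite, ONNX Runtime or TensorRT inference call. Warm-up and run counts are illustrative assumptions.

```python
import time
import statistics

def benchmark(infer, image, warmup: int = 5, runs: int = 50) -> dict:
    """Time repeated single-image inference and report latency statistics in ms."""
    for _ in range(warmup):              # warm-up iterations are discarded
        infer(image)
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        infer(image)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return {
        "mean_ms": statistics.mean(latencies),
        "p95_ms": sorted(latencies)[int(0.95 * runs) - 1],
        "fps": 1000.0 / statistics.mean(latencies),
    }

if __name__ == "__main__":
    fake_model = lambda img: sum(img)     # stand-in for a real detector
    print(benchmark(fake_model, list(range(100_000))))
```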
This paper proposes new parallel versions of some estimation of distribution algorithms (EDAs). The focus is on maintaining the behavior of the sequential EDAs that use probabilistic graphical models (Bayesian networks and Gaussian networks), implementing a master-slave workload distribution for the most computationally intensive phases: learning the probability model and, in one algorithm, the "sampling and evaluation of individuals." In discrete domains, we explain the parallelization of the EBNA_BIC and EBNA_PC algorithms, while in continuous domains the selected algorithms are...
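A minimal sketch of the master-slave evaluation scheme the abstract describes: the master keeps the sequential EDA loop while the costly evaluation of sampled individuals is farmed out to worker processes. For brevity this uses a univariate probability model (UMDA-style) rather than the Bayesian or Gaussian networks of EBNA, and the fitness function, population size and selection scheme are illustrative assumptions, not the authors' implementation.

```python
import random
from multiprocessing import Pool

def fitness(individual):
    """Toy objective: OneMax, the number of ones in a binary string."""
    return sum(individual)

def sample_population(probabilities, size):
    """Sample binary individuals from a univariate probability model."""
    return [[1 if random.random() < p else 0 for p in probabilities] for _ in range(size)]

def learn_model(selected):
    """Re-estimate the per-bit marginal probabilities from the selected individuals."""
    n = len(selected)
    return [sum(ind[i] for ind in selected) / n for i in range(len(selected[0]))]

if __name__ == "__main__":
    n_bits, pop_size, generations = 40, 200, 20
    probs = [0.5] * n_bits
    with Pool() as workers:                               # slave processes
        for _ in range(generations):
            population = sample_population(probs, pop_size)
            scores = workers.map(fitness, population)     # parallel evaluation phase
            ranked = [ind for _, ind in sorted(zip(scores, population), reverse=True)]
            probs = learn_model(ranked[: pop_size // 2])  # master learns the model
    print("best fitness:", max(scores))
```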
The identification of cyberattacks which target information and communication systems has been a focus of the research community for years. Network intrusion detection is a complex problem that presents a diverse number of challenges. Many attacks currently remain undetected, while newer ones emerge due to the proliferation of connected devices and the evolution of communication technology. In this survey, we review the methods that have been applied to network data with the purpose of developing an intrusion detector, but contrary to previous reviews in the area, we analyze...
Any simulation-based evaluation of an interconnection network proposal requires a good characterization of the workload. Synthetic traffic patterns based on independent sources are commonly used to measure performance in terms of average latency and peak throughput. As they do not capture the level of self-throttling that occurs in most parallel applications, they can produce inaccurate throughput estimates at high loads. Thus, workloads that resemble the varying levels of synchronization of actual applications are needed to study...
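A minimal sketch (illustrative, not from the paper) of the independent-source synthetic traffic the abstract refers to: every cycle, each node injects a packet with a fixed probability and picks a destination uniformly at random, with no dependence on whether earlier packets were delivered, i.e. no self-throttling. Node count, rate and cycle count are assumptions.

```python
import random

def uniform_traffic(num_nodes: int, injection_rate: float, cycles: int):
    """Yield (cycle, source, destination) tuples for a uniform random pattern."""
    for cycle in range(cycles):
        for src in range(num_nodes):
            if random.random() < injection_rate:     # independent Bernoulli source
                dst = random.randrange(num_nodes - 1)
                dst = dst if dst < src else dst + 1   # exclude self-traffic
                yield cycle, src, dst

if __name__ == "__main__":
    packets = list(uniform_traffic(num_nodes=64, injection_rate=0.1, cycles=1000))
    print(f"generated {len(packets)} packets, "
          f"offered load ~ {len(packets) / (64 * 1000):.3f} packets/node/cycle")
```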
The Software Defined Networking (SDN) paradigm enables the development of systems that centrally monitor and manage network traffic, providing support for the deployment of machine learning-based solutions that automatically detect and mitigate intrusions. This paper presents an intelligent system capable of deciding which countermeasures to take in order to mitigate an intrusion in a software defined network. The interaction between intruder and defender is posed as a Markov game, and the MuZero algorithm is used to train the model through self-play. Once trained,...
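A minimal sketch of how an intruder/defender interaction can be framed as a two-player zero-sum Markov game of the kind the abstract describes. The state variables, action sets and reward are illustrative assumptions of mine, not the paper's environment; a MuZero-style agent would be trained against such an interface through self-play.

```python
import random
from dataclasses import dataclass, field

DEFENDER_ACTIONS = ["monitor", "block_flow", "redirect_to_honeypot"]
INTRUDER_ACTIONS = ["scan", "exploit", "exfiltrate"]

@dataclass
class SdnGameState:
    compromised_hosts: set = field(default_factory=set)
    blocked_flows: set = field(default_factory=set)
    step_count: int = 0

class IntrusionMarkovGame:
    """Two-player zero-sum game: the defender's reward is minus the intruder's."""

    def __init__(self, num_hosts: int = 10, max_steps: int = 50):
        self.num_hosts, self.max_steps = num_hosts, max_steps
        self.state = SdnGameState()

    def step(self, intruder_action: str, defender_action: str):
        s = self.state
        s.step_count += 1
        if intruder_action == "exploit" and defender_action != "block_flow":
            s.compromised_hosts.add(random.randrange(self.num_hosts))
        if defender_action == "block_flow":
            s.blocked_flows.add(s.step_count)
        intruder_reward = len(s.compromised_hosts) - 0.1 * len(s.blocked_flows)
        done = s.step_count >= self.max_steps
        return s, intruder_reward, -intruder_reward, done

if __name__ == "__main__":
    game, done = IntrusionMarkovGame(), False
    while not done:
        _, r_i, r_d, done = game.step(random.choice(INTRUDER_ACTIONS),
                                      random.choice(DEFENDER_ACTIONS))
    print("final intruder reward:", r_i, "defender reward:", r_d)
```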
Traditional centralized monitoring systems do not scale to present-day large, complex, network-computing systems. Based on recent SNMP standards for distributed management, this paper addresses the scalability problem through the distribution of tasks, applicable to tools such as SIMONE (an SNMP-based prototype implemented by the authors). Distribution is achieved by introducing one or more levels of a dual entity called the Intermediate Level Manager (ILM) between the manager and the agents. The ILM accepts tasks described...
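A minimal structural sketch (my own illustration, not SIMONE code and with no real SNMP calls) of the delegation idea: the top-level manager hands a monitoring task to an Intermediate Level Manager, which polls its own subset of agents and returns only an aggregated result, so raw samples never reach the manager.

```python
from statistics import mean

class Agent:
    """Stand-in for an SNMP agent exposing a single scalar variable."""
    def __init__(self, name, value):
        self.name, self.value = name, value

class IntermediateLevelManager:
    """Dual entity: behaves as an agent towards the manager, as a manager towards its agents."""
    def __init__(self, agents):
        self.agents = agents

    def run_task(self, aggregate=mean):
        samples = [agent.value for agent in self.agents]   # local polling
        return aggregate(samples)                          # only the summary goes up

class Manager:
    def __init__(self, ilms):
        self.ilms = ilms

    def collect(self):
        return [ilm.run_task() for ilm in self.ilms]

if __name__ == "__main__":
    ilm_a = IntermediateLevelManager([Agent("a1", 10), Agent("a2", 30)])
    ilm_b = IntermediateLevelManager([Agent("b1", 5), Agent("b2", 15), Agent("b3", 25)])
    print(Manager([ilm_a, ilm_b]).collect())   # [20, 15]
```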
This paper studies the influence that task placement may have on the performance of applications, mainly due to the relationship between communication locality and communication overhead. This impact is studied for torus and fat-tree topologies. A simulation-based study is carried out, using traces of applications and application kernels, to measure the time taken to complete one or several concurrent instances of a given workload. As the purpose is not to offer a miraculous placement strategy, but to assess its effect on performance, we selected simple strategies, including random...
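A minimal sketch (illustrative assumption, not the paper's simulator) of the simplest strategy mentioned in the abstract: the tasks of a workload are mapped to free nodes of the machine uniformly at random, ignoring the application's communication pattern.

```python
import random

def random_placement(num_tasks: int, free_nodes: list[int]) -> dict[int, int]:
    """Map task ids to distinct node ids chosen uniformly at random."""
    if num_tasks > len(free_nodes):
        raise ValueError("not enough free nodes for the workload")
    chosen = random.sample(free_nodes, num_tasks)
    return {task: node for task, node in enumerate(chosen)}

if __name__ == "__main__":
    # Example: place a 16-task job on a 64-node machine where half the nodes are busy.
    free = [n for n in range(64) if n % 2 == 0]
    print(random_placement(16, free))
```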