Hugo Taboada

ORCID: 0000-0003-3902-9313
Research Areas
  • Parallel Computing and Optimization Techniques
  • Distributed and Parallel Computing Systems
  • Advanced Data Storage Technologies
  • Interconnection Networks and Systems
  • Embedded Systems Design Techniques
  • Cloud Computing and Resource Management
  • Distributed systems and fault tolerance
  • Ferroelectric and Negative Capacitance Devices
  • Advanced Data Compression Techniques
  • Advanced Optical Network Technologies
  • Advanced Neural Network Applications
  • Software System Performance and Reliability
  • Brain Tumor Detection and Classification
  • Advanced Memory and Neural Computing
  • Neural Networks and Applications
  • Advanced MEMS and NEMS Technologies

Commissariat à l'Énergie Atomique et aux Énergies Alternatives
2018-2025

CEA DAM Île-de-France
2016-2025

Université Paris-Saclay
2023

Laboratoire Bordelais de Recherche en Informatique
2018

Institut Polytechnique de Bordeaux
2018

Université de Bordeaux
2018

Centre National de la Recherche Scientifique
2018

In order to enable Exascale computing, next-generation interconnection networks must scale to hundreds of thousands of nodes, and must provide features that allow HPC, HPDA, and AI applications to reach Exascale while benefiting from new hardware and software trends. RED-SEA will pave the way for the next generation of European interconnects, including BXI, as follows: (i) specify the architecture using hardware-software co-design and a set of representative applications converging HPC and AI; (ii) test, evaluate, and/or implement architectural features at multiple...

10.1109/dsd57027.2022.00100 article EN 2022 25th Euromicro Conference on Digital System Design (DSD) 2022-08-01

To amortize the cost of MPI collective operations, nonblocking collectives have been proposed so as to allow communications to be overlapped with computation. Unfortunately, collective communications are more CPU-hungry than point-to-point communications, and running them in a communication thread on a dedicated CPU core makes them slow. On the other hand, running them on the application cores leads to no overlap. In this article, we propose placement algorithms for progress threads that do not degrade performance when there is no communication/computation overlap to exploit. We first show that even...

10.1177/1094342019860184 article EN The International Journal of High Performance Computing Applications 2019-07-02

Since the beginning, MPI has defined rank as an implicit attribute associated with a process' environment. In particular, each MPI process generally runs inside a given UNIX process and is bound to a fixed identifier in its WORLD communicator. However, this state of things is about to change with the rise of new abstractions such as MPI Sessions. In this paper, we propose to outline how this evolution could enable optimizations which were previously linked to specific runtimes executing processes in shared memory (e.g. thread-based MPI). By implementing...

10.1145/3343211.3343221 preprint EN 2019-08-23

HPC systems have experienced significant growth over the past years, with modern machines having hundreds of thousands of nodes. The Message Passing Interface (MPI) is the de facto standard for distributed computing on these architectures. On the MPI critical path, the message-matching process is one of the most time-consuming operations. In this process, searching for a specific request in a message queue represents a significant part of the communication latency. So far, no miracle algorithm performs well in all cases. This paper explores...

10.1109/sbac-pad55451.2022.00038 preprint EN 2022-11-01

High-Performance Computing (HPC) is currently facing significant challenges. The hardware pressure has become increasingly difficult to manage due to the lack of parallel abstractions in applications. As a result, programs must undergo drastic evolution to effectively exploit the underlying parallelism. Failure to do so results in inefficient code. In this constrained environment, runtimes play a critical role, and their testing becomes crucial. This paper focuses on the MPI interface and leverages binding tools...

10.1145/3615318.3615329 preprint EN 2023-09-11