Vasileios Titopoulos

ORCID: 0009-0009-0123-5737
Research Areas
  • Embedded Systems Design Techniques
  • Parallel Computing and Optimization Techniques
  • Interconnection Networks and Systems
  • CCD and CMOS Imaging Sensors
  • Advanced Memory and Neural Computing
  • Machine Learning and ELM
  • Ferroelectric and Negative Capacitance Devices
  • Advanced Neural Network Applications
  • Advanced Data Storage Technologies
  • Tensor decomposition and applications

Democritus University of Thrace
2024-2025

University of Cyprus
2024

Structured sparsity has been proposed as an efficient way to prune the complexity of Machine Learning (ML) applications and to simplify the handling of sparse data in hardware. Accelerating ML models, whether for training or inference, relies heavily on matrix multiplications that can be executed efficiently on vector processors or custom matrix engines. This work aims to integrate the simplicity of structured sparsity into this execution to speed up the corresponding matrix multiplications. Initially, the implementation of structured-sparse matrix multiplication...

10.48550/arxiv.2501.10189 preprint EN arXiv (Cornell University) 2025-01-17
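The structured-sparse matrix multiplication described above can be illustrated in software. The sketch below is a minimal, hypothetical illustration (not the paper's hardware design): a 1:4 structured-sparse row stores only one value per group of four, plus that value's in-group index, so a dot product touches exactly one dense element per group.

```python
def compress_1_to_4(row):
    """Compress a 1:4 structured-sparse row: for each group of 4
    contiguous values, keep the largest-magnitude entry and record
    its position within the group."""
    assert len(row) % 4 == 0
    vals, idxs = [], []
    for g in range(0, len(row), 4):
        group = row[g:g + 4]
        i = max(range(4), key=lambda k: abs(group[k]))  # surviving position
        vals.append(group[i])
        idxs.append(i)
    return vals, idxs

def sparse_dot(vals, idxs, dense):
    """Dot product of a compressed 1:4 row with a dense vector:
    each stored value multiplies exactly one dense element."""
    return sum(v * dense[4 * g + i] for g, (v, i) in enumerate(zip(vals, idxs)))

vals, idxs = compress_1_to_4([0, 3, 0, 0, 0, 0, 0, 2])
print(sparse_dot(vals, idxs, [1] * 8))  # only 2 multiplies instead of 8
```

The compressed form does a quarter of the multiplications of the dense dot product, which is the kind of saving a vector processor or custom engine can exploit directly.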

Deep Learning (DL) has achieved unprecedented success in various application domains. Meanwhile, model pruning has emerged as a viable solution to reduce the footprint of DL models in mobile applications without compromising their accuracy. To enable matrix engines built for dense models to also handle their pruned counterparts, pruned models follow a fine-grained structured sparsity pattern of 1:4 or 2:4, whereby in each group of four contiguous values at least one, or two, respectively, must be non-zero. Structured sparsity has recently moved...

10.1109/lca.2024.3355178 article EN IEEE Computer Architecture Letters 2024-01-01
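The 2:4 pattern described above can be sketched in a few lines. This is an illustrative software sketch only (magnitude-based pruning, a common choice, is an assumption here, not necessarily the paper's method): each group of four contiguous values keeps its two largest-magnitude entries and zeroes the rest.

```python
def prune_2_to_4(row):
    """Prune a list of floats so that every group of 4 contiguous
    values keeps only its 2 largest-magnitude entries (2:4 sparsity)."""
    assert len(row) % 4 == 0
    out = []
    for g in range(0, len(row), 4):
        group = row[g:g + 4]
        # indices of the two largest-magnitude values in this group
        keep = sorted(range(4), key=lambda i: -abs(group[i]))[:2]
        out.extend(v if i in keep else 0.0 for i, v in enumerate(group))
    return out

dense = [0.1, -2.0, 0.3, 4.0, 1.0, 0.0, -0.5, 0.2]
print(prune_2_to_4(dense))  # at most two non-zeros survive per group of four
```

Because the non-zero count per group is fixed, a dense matrix engine can process the pruned model with a simple, regular indexing scheme instead of general-purpose sparse bookkeeping.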

Convolutional neural networks (CNNs) are widely applied in many machine learning applications. Hardware acceleration for CNNs is crucial, given their high computational intensity and the demand for enhanced energy efficiency and reduced latency in application response. This work leverages the simplicity of modelling a CNN's structure in Python with the flexibility of High-Level Synthesis (HLS) to automate the creation of dataflow hardware accelerators. The methodology emphasizes ease of design, enabling users to effortlessly generate...

10.1109/access.2024.3390422 article EN cc-by IEEE Access 2024-01-01
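The flow described above, a Python model of the CNN driving accelerator generation, can be imagined roughly as follows. This is a hypothetical sketch: the layer-spec fields and the emitted stage syntax are illustrative placeholders, not the paper's actual tool or output format.

```python
# A CNN described as plain Python layer specs; a generator walks the
# list and emits one dataflow stage per layer (placeholder syntax).
cnn = [
    {"type": "conv", "out_channels": 16, "kernel": 3, "stride": 1},
    {"type": "relu"},
    {"type": "pool", "kernel": 2},
    {"type": "conv", "out_channels": 32, "kernel": 3, "stride": 1},
]

def emit_dataflow(layers):
    """Return one pseudo-HLS stage declaration per layer spec."""
    stages = []
    for i, layer in enumerate(layers):
        params = ", ".join(f"{k}={v}" for k, v in layer.items() if k != "type")
        stages.append(f"stage_{i}: {layer['type']}({params})")
    return stages

for s in emit_dataflow(cnn):
    print(s)
```

The appeal of such a flow is that the designer edits only the high-level Python description, while the generator handles the per-layer dataflow plumbing.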

Structured sparsity has been proposed as an efficient way to prune the complexity of modern Machine Learning (ML) applications and to simplify the handling of sparse data in hardware. The acceleration of ML models, for both training and inference, relies primarily on equivalent matrix multiplications that can be executed efficiently on vector processors or custom matrix engines. The goal of this work is to incorporate the simplicity of structured sparsity into this execution, thereby accelerating the corresponding matrix multiplications. Toward this objective, a...

10.48550/arxiv.2311.07241 preprint EN other-oa arXiv (Cornell University) 2023-01-01