- Embedded Systems Design Techniques
- Parallel Computing and Optimization Techniques
- Interconnection Networks and Systems
- CCD and CMOS Imaging Sensors
- Advanced Memory and Neural Computing
- Machine Learning and ELM
- Ferroelectric and Negative Capacitance Devices
- Advanced Neural Network Applications
- Advanced Data Storage Technologies
- Tensor Decomposition and Applications
Democritus University of Thrace
2024-2025
University of Cyprus
2024
Structured sparsity has been proposed as an efficient way to prune the complexity of Machine Learning (ML) applications and to simplify the handling of sparse data in hardware. Accelerating ML models, whether for training or inference, relies heavily on matrix multiplications that can be executed efficiently on vector processors or custom engines. This work aims to integrate the simplicity of structured sparsity into the execution and speed up the corresponding matrix multiplications. Initially, an implementation of structured-sparse matrix multiplication...
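A minimal NumPy sketch of the idea above: a structured-sparse matrix is stored as compressed values plus their in-row positions, so a matrix-vector product only touches the stored non-zeros, as a vector engine would. The helper names (`compress_2_4`, `spmv`) and the assumed 2:4-style pattern (at most two non-zeros per group of four contiguous values) are illustrative, not the implementation developed in the work.

```python
import numpy as np

def compress_2_4(W):
    """Compress a structured-sparse matrix: per group of four contiguous
    entries in each row, keep at most two non-zero values and their
    column positions (2:4-style pattern assumed)."""
    rows, cols = W.shape
    vals = np.zeros((rows, cols // 2))
    idx = np.zeros((rows, cols // 2), dtype=int)
    for r in range(rows):
        k = 0
        for g in range(0, cols, 4):
            for j in np.flatnonzero(W[r, g:g + 4])[:2]:
                vals[r, k] = W[r, g + j]
                idx[r, k] = g + j
                k += 1
    return vals, idx

def spmv(vals, idx, x):
    """y = W @ x using only the stored non-zeros: each lane gathers the
    matching element of x, skipping the zeros entirely."""
    return (vals * x[idx]).sum(axis=1)

W = np.array([[2., 0., 0., 3.,
               0., 1., 4., 0.]])   # 2:4 structured-sparse row
x = np.arange(8.)
y = spmv(*compress_2_4(W), x)      # matches the dense product W @ x
```

The compressed form halves the storage and, on hardware with gather support, halves the multiply count relative to the dense product.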
Deep Learning (DL) has achieved unprecedented success in various application domains. Meanwhile, model pruning has emerged as a viable solution for reducing the footprint of DL models in mobile applications without compromising their accuracy. To enable matrix engines built for dense matrix multiplication to also handle their pruned counterparts, pruned models follow a fine-grained structured sparsity pattern of 1:4 or 2:4, whereby in each group of four contiguous values at least one, or two, respectively, must be non-zero. Structured sparsity has recently moved...
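The 2:4 pattern described above can be sketched in a few lines of NumPy: in every group of four contiguous weights, keep the two with the largest magnitude and zero the rest. The function name `prune_2_4` and the magnitude-based selection are an illustrative assumption, not necessarily the pruning criterion used in the work.

```python
import numpy as np

def prune_2_4(row):
    """Prune a 1-D weight array to the 2:4 structured-sparsity pattern:
    in every group of four contiguous values, keep the two with the
    largest magnitude and zero out the other two."""
    row = np.asarray(row, dtype=float).copy()
    assert row.size % 4 == 0, "length must be a multiple of 4"
    for g in range(0, row.size, 4):
        group = row[g:g + 4]
        # zero the two smallest-magnitude entries in this group
        group[np.argsort(np.abs(group))[:2]] = 0.0
    return row

weights = np.array([0.9, -0.1, 0.05, 0.4,
                    -0.7, 0.2, 0.6, -0.05])
pruned = prune_2_4(weights)
# every group of four now holds exactly two non-zeros, so the engine
# can store two values plus two 2-bit indices per group
```

The 1:4 variant is the same idea with only the single largest-magnitude entry retained per group.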
Convolutional neural networks (CNNs) are widely applied in many machine learning applications. Hardware acceleration for CNNs is crucial, given their high computational intensity and the demand for enhanced energy efficiency and reduced latency in application response. This work leverages the simplicity of modelling CNN structure in Python with the flexibility of High-Level Synthesis to automate the creation of dataflow hardware accelerators. The methodology emphasizes ease of design, enabling users to effortlessly generate...
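To make the target of such a flow concrete, the core of a convolution layer is the loop nest below, written here in plain Python/NumPy; an HLS flow would pipeline and unroll these loops into a dataflow pipeline in hardware. The shapes, loop ordering, and the name `conv2d_loop_nest` are illustrative assumptions, not the accelerator templates generated by the work.

```python
import numpy as np

def conv2d_loop_nest(ifmap, weights):
    """Direct 2-D convolution (valid padding, stride 1) as an explicit
    loop nest - the kind of kernel an HLS tool pipelines into a
    dataflow accelerator. ifmap: (C, H, W); weights: (K, C, R, S)."""
    C, H, W = ifmap.shape
    K, _, R, S = weights.shape
    out = np.zeros((K, H - R + 1, W - S + 1))
    for k in range(K):                  # output channels
        for y in range(H - R + 1):      # output rows
            for x in range(W - S + 1):  # output columns
                acc = 0.0
                for c in range(C):      # reduction over channels
                    for r in range(R):  # ... and kernel window
                        for s in range(S):
                            acc += ifmap[c, y + r, x + s] * weights[k, c, r, s]
                out[k, y, x] = acc
    return out

# tiny smoke test: all-ones input and a 2x2 all-ones kernel
out = conv2d_loop_nest(np.ones((1, 3, 3)), np.ones((2, 1, 2, 2)))
```

Describing the layer at this level in Python keeps the model readable while leaving scheduling decisions (unrolling, buffering, streaming) to the synthesis tool.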
Structured sparsity has been proposed as an efficient way to prune the complexity of modern Machine Learning (ML) applications and to simplify the handling of sparse data in hardware. The acceleration of ML models - for both training and inference - relies primarily on equivalent matrix multiplications that can be executed efficiently on vector processors or custom engines. The goal of this work is to incorporate the simplicity of structured sparsity into the execution, thereby accelerating the corresponding matrix multiplications. Toward this objective, a...