- Ferroelectric and Negative Capacitance Devices
- Advanced Memory and Neural Computing
- Blind Source Separation Techniques
- Advanced Neural Network Applications
- Advanced Adaptive Filtering Techniques
- Speech and Audio Processing
- Digital Filter Design and Implementation
- Neural Networks and Applications
- Advanced Image Processing Techniques
- Cryptographic Implementations and Security
- Physical Unclonable Functions (PUFs) and Hardware Security
- Process Optimization and Integration
- Fault Detection and Control Systems
- Mineral Processing and Grinding
- Advanced Data Storage Technologies
- Parallel Computing and Optimization Techniques
- Advanced Malware Detection Techniques
- Advanced Algorithms and Applications
Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau
2025
University of Kaiserslautern
2022-2024
Shahid Bahonar University of Kerman
2021
Deep Neural Network (DNN) training consumes a large amount of energy, while DNNs deployed on edge devices demand very high energy efficiency. In this context, Processing-in-Memory (PIM) is an emerging compute paradigm that bridges the memory-computation gap to improve energy efficiency. DRAMs are one memory type employed for designing energy-efficient PIM architectures for DNN training. One of the major issues in DRAM-PIM designs is the high number of internal data accesses within a bank between arrays and...
Processing-in-Memory (PIM) is an emerging approach to bridge the memory-computation gap. One of the major challenges for PIM architectures in the scope of Deep Neural Network (DNN) inference is the implementation of area-intensive Multiply-Accumulate (MAC) units in memory technologies, especially for DRAM-based PIMs. The DRAM architecture restricts the integration of DNN computation near the area-optimized commodity Sub-Array (SA) or Primary Sense Amplifier (PSA) region, where data parallelism is maximum and movement cost is minimum...
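As a rough functional illustration of why a full multiplier is not needed near the SA/PSA region, the Python sketch below models a bit-serial multiply-accumulate built only from row-parallel bit masking and shift-adds. The operand widths and vector length are illustrative assumptions, not the architecture proposed in this work.

```python
import numpy as np

def bit_serial_mac(weights, activations, bits=8):
    """Bit-serial MAC: for each activation bit position, mask the weight
    words (realizable as a row-parallel bitwise AND near a DRAM sub-array)
    and accumulate the shifted partial products."""
    acc = np.zeros_like(weights, dtype=np.int64)
    for b in range(bits):
        act_bit = (activations >> b) & 1        # bit b of every activation
        acc += (weights * act_bit) << b         # masked select + shift-add
    return int(acc.sum())

# Usage: the result matches a plain dot product.
rng = np.random.default_rng(0)
w = rng.integers(0, 2**8, size=16)
a = rng.integers(0, 2**8, size=16)
assert bit_serial_mac(w, a) == int(np.dot(w, a))
```

The masked select `weights * act_bit` stands in for a per-bit AND; in hardware the shift-add would be the only arithmetic placed near the sub-array.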
As Internet of Things applications become mission-critical and their data more valuable, ensuring security becomes paramount. Security can be improved by using emerging lightweight ciphers. SIMON is a relatively recent family of ciphers proposed by the National Security Agency and optimized for hardware platforms. In this paper, we propose a high-throughput architecture for resource-constrained platforms with multiple levels of... Moreover, a configurable design with different operating modes is introduced, utilizing...
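For reference, the SIMON round function itself is public and simple to state. The Python sketch below models it in software; the 32-bit word size is an assumption (matching SIMON64), the key schedule is omitted, and this is a functional model rather than the proposed hardware architecture.

```python
def rol(x, r, n=32):
    """Left-rotate an n-bit word x by r positions."""
    mask = (1 << n) - 1
    return ((x << r) | (x >> (n - r))) & mask

def simon_round(x, y, k, n=32):
    """One SIMON round on the word pair (x, y) with round key k,
    using f(x) = (x <<< 1 & x <<< 8) ^ (x <<< 2)."""
    fx = (rol(x, 1, n) & rol(x, 8, n)) ^ rol(x, 2, n)
    return y ^ fx ^ k, x

def simon_encrypt_block(x, y, round_keys, n=32):
    """Iterate the round; round_keys are assumed to come from the
    key schedule (not shown here)."""
    for k in round_keys:
        x, y = simon_round(x, y, k, n)
    return x, y
```

Because each round uses only rotations, AND, and XOR, the round lends itself to compact, high-throughput hardware implementations.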
There is a high energy cost associated with training Deep Neural Networks (DNNs). Off-chip memory access contributes a major portion to the overall consumption. A reduction in the number of off-chip transactions can be achieved by quantizing data words to a low bit-width (e.g., 8-bit). However, low-bit-width formats suffer from limited dynamic range, resulting in reduced accuracy. In this paper, a novel 8-bit Floating Point (FP8) format quantized DNN training methodology is presented, which adapts to the required range...
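To make the range-versus-precision trade-off concrete, here is a generic Python sketch that quantizes values onto an 8-bit floating-point grid with a configurable exponent/mantissa split. It only illustrates that trade-off under standard biasing assumptions; it is not the adaptive FP8 scheme proposed in the paper.

```python
import numpy as np

def fp8_quantize(x, exp_bits=4, bias=None):
    """Quantize values to a generic 8-bit float grid with `exp_bits`
    exponent bits and (7 - exp_bits) mantissa bits. More exponent bits
    widen the dynamic range; more mantissa bits refine the resolution."""
    man_bits = 7 - exp_bits
    if bias is None:
        bias = 2 ** (exp_bits - 1) - 1              # standard exponent bias
    max_exp = 2 ** exp_bits - 1 - bias              # largest representable exponent
    min_exp = 1 - bias                              # smallest normal exponent

    sign = np.sign(x)
    mag = np.abs(x).astype(np.float64)
    exp = np.clip(np.floor(np.log2(np.where(mag > 0, mag, 1.0))), min_exp, max_exp)
    scale = 2.0 ** (exp - man_bits)                 # spacing of the mantissa grid
    q = np.round(mag / scale) * scale               # snap magnitude to the grid
    max_val = (2 - 2.0 ** -man_bits) * 2.0 ** max_exp
    return sign * np.clip(q, 0, max_val)

x = np.array([0.0123, 1.7, 250.0, 70000.0])
print(fp8_quantize(x, exp_bits=4))   # E4M3-like: finer steps, narrower range
print(fp8_quantize(x, exp_bits=5))   # E5M2-like: coarser steps, wider range
```

With `exp_bits=5` the largest test value still lands near its magnitude, while `exp_bits=4` clips it to the format's maximum, which is exactly the dynamic-range limitation the abstract refers to.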
In communication systems, an autoencoder refers to a system that replaces parts of the traditional transmitter and receiver baseband processing chain with artificial neural networks (ANNs). This allows the transmitter and receiver to be trained jointly for an underlying channel model by reconstructing the input symbols at the output. Since the actual behavior of a real channel cannot be perfectly reproduced by an abstract model, it is necessary to adapt to changing conditions at runtime. Thus, online fine-tuning, in the form of ANN retraining, is of great importance. A platform able...
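As background, here is a minimal PyTorch sketch of such an autoencoder trained end-to-end over an AWGN channel model, followed by receiver-only retraining as a stand-in for online fine-tuning. The layer sizes, Eb/N0 values, and training schedule are illustrative assumptions, not the platform described in this work.

```python
import torch
import torch.nn as nn

M = 16           # number of messages (4 bits per symbol); illustrative choice
N_CHANNEL = 2    # real channel uses per message (one complex sample)
EBN0_DB = 7.0

class Autoencoder(nn.Module):
    """Transmitter and receiver ANNs trained jointly over an AWGN model."""
    def __init__(self):
        super().__init__()
        self.tx = nn.Sequential(nn.Linear(M, 32), nn.ReLU(), nn.Linear(32, N_CHANNEL))
        self.rx = nn.Sequential(nn.Linear(N_CHANNEL, 32), nn.ReLU(), nn.Linear(32, M))

    def forward(self, one_hot, ebn0_db=EBN0_DB):
        x = self.tx(one_hot)
        x = x / x.pow(2).mean().sqrt()           # average power normalization
        rate = torch.log2(torch.tensor(float(M))) / N_CHANNEL
        noise_std = (2 * (10 ** (ebn0_db / 10)) * rate) ** -0.5
        y = x + noise_std * torch.randn_like(x)  # AWGN channel model
        return self.rx(y)

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):                         # offline end-to-end training
    labels = torch.randint(0, M, (256,))
    logits = model(nn.functional.one_hot(labels, M).float())
    loss = loss_fn(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# Online fine-tuning: retrain only the receiver on samples seen under
# changed conditions (emulated here by a lower Eb/N0).
rx_opt = torch.optim.Adam(model.rx.parameters(), lr=1e-4)
for step in range(200):
    labels = torch.randint(0, M, (256,))
    logits = model(nn.functional.one_hot(labels, M).float(), ebn0_db=5.0)
    loss = loss_fn(logits, labels)
    rx_opt.zero_grad(); loss.backward(); rx_opt.step()
```

Updating only `model.rx` mirrors the idea that the receiver can be fine-tuned at runtime from observed received samples, without retraining the deployed transmitter.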