- Parallel Computing and Optimization Techniques
- Low-power high-performance VLSI design
- Radiation Effects in Electronics
- Advanced Data Storage Technologies
- Cloud Computing and Resource Management
- Advanced Memory and Neural Computing
- Ferroelectric and Negative Capacitance Devices
- Green IT and Sustainability
- Semiconductor materials and devices
- VLSI and Analog Circuit Testing
- Interconnection Networks and Systems
- IoT and Edge/Fog Computing
- Advanced Neural Network Applications
- CCD and CMOS Imaging Sensors
- Embedded Systems Design Techniques
- Distributed systems and fault tolerance
- EEG and Brain-Computer Interfaces
- Security and Verification in Computing
- Advanced MIMO Systems Optimization
- Distributed and Parallel Computing Systems
- Imbalanced Data Classification Techniques
- Machine Learning and Data Classification
- Neuroscience and Neural Engineering
- Rough Sets and Fuzzy Logic
- Brain Tumor Detection and Classification
Queen Mary University of London
2023-2025
Queen's University Belfast
2015-2023
University of Cambridge
2021
Chung-Ang University
2021
Moscow Center For Continuous Mathematical Education
2013
Institute of Electronic Control Machines
2008
Peak power consumption is the first-order design constraint of data centers. Though the peak is rarely, if ever, observed, the entire data center facility must prepare for it, leading to inefficient usage of its resources. The most prominent way of addressing this issue is to limit IT power far below its theoretical value. Many approaches have been proposed to achieve that, based on the same small set of enforcement mechanisms, but there has been no corresponding work systematically examining the advantages and disadvantages of each such mechanism....
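The enforcement mechanisms themselves are not detailed in this excerpt. As an illustration only, here is a minimal sketch of two commonly cited mechanism families for keeping a server under a power cap, DVFS-style frequency scaling and duty cycling; the cubic power model and all numbers are textbook assumptions, not claims from the paper.

```python
# Illustrative sketch (not from the paper): two common power-cap
# enforcement mechanisms applied to a simple server power model.

def dvfs_cap(p_max, cap, levels=(1.0, 0.8, 0.6, 0.4)):
    """Pick the highest frequency scale whose power fits under the cap.

    Assumes power scales roughly with the cube of the frequency scale,
    a textbook approximation, not a measurement.
    """
    for f in levels:
        if p_max * f ** 3 <= cap:
            return f  # fraction of peak frequency retained
    return levels[-1]

def duty_cycle_cap(p_max, p_idle, cap):
    """Fraction of time the server may run at peak to average under the cap."""
    if cap >= p_max:
        return 1.0
    return max(0.0, (cap - p_idle) / (p_max - p_idle))

# Example: 400 W peak, 100 W idle, 250 W cap.
print(dvfs_cap(400, 250))             # -> 0.8 (80% of peak frequency)
print(duty_cycle_cap(400, 100, 250))  # -> 0.5 (run at peak half the time)
```

The two mechanisms trade off differently: DVFS keeps the machine continuously available at reduced speed, while duty cycling preserves peak speed but only intermittently, which is exactly the kind of advantage/disadvantage comparison the abstract calls for.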
The increasing variability of transistor parameters in the nanoscale era renders modern circuits prone to timing failures. To address such failures, designers adopt pessimistic timing/voltage guardbands, which are estimated under rare worst-case conditions, thus leading to power and performance overheads. Recent approximation schemes based on precision reduction may help limit the incurred overheads, but precision is reduced statically for all operations. This results in unnecessary quality loss, since these schemes neglect...
Energy efficiency is an essential requirement for all contemporary computing systems. We thus need tools to measure the energy consumption of computing systems and understand how workloads affect it. Significant recent research effort has targeted direct power measurements on production systems using on-board sensors or external instruments. These methods have in turn guided studies of software techniques to reduce energy consumption via workload allocation and scaling. Unfortunately, energy measurements are hampered by the low power sampling...
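To make the sampling-rate limitation concrete, here is a small sketch, not the paper's tool, of how energy is typically derived by integrating discrete power samples, and why a low sampling rate can miss short events entirely; all numbers are invented for the example.

```python
# Illustrative sketch (not the paper's tool): integrating sparse power
# samples into energy. A low sampling rate hides short power events.

def energy_joules(samples):
    """Trapezoidal integration of (time_s, power_w) samples."""
    e = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        e += 0.5 * (p0 + p1) * (t1 - t0)
    return e

# Power sampled once per second: a 100 ms spike at t = 1.5 s falls
# between samples, so the measured trace is indistinguishable from a
# flat 10 W load and the spike's energy is silently lost.
trace = [(0, 10.0), (1, 10.0), (2, 10.0)]
print(energy_joules(trace))  # -> 20.0 J, spike or no spike
```

This is precisely why coarse-grained measurements struggle to attribute energy to short-lived software structures such as individual functions or loop iterations.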
In this paper, we present the results of our comprehensive measurement study of the timing and voltage guardbands in the memories and cores of a commodity ARMv8-based micro-server. Using various synthetic micro-benchmarks, we reveal how the adopted margins vary among the 8 CPU cores of a chip and across 3 different sigma chips, and show how prone they are to worst-case noise. In addition, we characterize the variation of 'weak' DRAM cells in terms of their retention time across 72 DRAM chips, and evaluate the error mitigation efficacy of the available error-correcting codes in case of operation under...
The aggressive scaling of technology may have helped to meet the growing demand for higher memory capacity and density, but it has also made DRAM cells more prone to errors. Such a reality has triggered a lot of interest in modeling DRAM behavior for either predicting errors in advance or adjusting circuit parameters to achieve a better tradeoff between energy efficiency and reliability. Existing efforts have studied the impact of a few operating parameters, such as temperature, on reliability using custom FPGA setups; however, they neglected the combined effect...
The explosive growth of Internet-connected devices will soon result in a flood of generated data, which will increase the demand for network bandwidth as well as compute power to process the data. Consequently, there is a need for more energy-efficient servers to empower traditional centralized Cloud data-centers and emerging decentralized data-centers at the Edges of the Cloud. In this paper, we present our approach, which aims at developing a new class of micro-servers - the UniServer - that exceed conservative energy and performance scaling boundaries by introducing...
Energy efficiency is becoming increasingly important, yet few developers understand how source code changes affect the energy and power consumption of their programs. To enable them to achieve energy savings, we must associate energy consumption with software structures, especially at the fine-grained level of functions and loops. Most research in the field relies on direct power/energy measurements taken from on-board sensors or performance counters. However, this coarse granularity does not directly provide the needed fine-grained measurements....
The explosive growth of data increases storage needs, especially within servers, making DRAM responsible for more than 40% of total system power. Such a reality has made researchers focus on energy-saving schemes that relax pessimistic circuit parameters at the cost of potential faults. In an effort to limit the resultant risk of critical disruption, new methods were introduced to split DRAM into domains with varying reliability. The benefits of such methods may have been showcased in simulators, but they have been neither implemented on real...
The use of credit cards is prevalent in modern-day society. But it is obvious that the number of card fraud cases is constantly increasing in spite of the worldwide integration of chip cards and existing protection systems. This is why the problem of fraud detection is very important now. In this paper, a general description of the developed system and comparisons between models based on artificial intelligence are given. In the last section, the results of evaluative testing and the corresponding conclusions are considered.
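The abstract does not specify the models or metrics used; as a hedged illustration of the kind of evaluative testing such comparisons rely on, here is a toy rule-based detector scored with precision and recall on made-up transactions. The rule, field names, and data are all invented for the example.

```python
# Illustrative sketch (not the paper's system): evaluating a toy
# fraud-detection rule with precision and recall, the usual metrics
# for comparing fraud-detection models.

def flag_fraud(tx):
    """Toy rule: large amount in a country the card was never used in."""
    return tx["amount"] > 1000 and tx["new_country"]

def precision_recall(transactions):
    tp = sum(1 for t in transactions if flag_fraud(t) and t["fraud"])
    fp = sum(1 for t in transactions if flag_fraud(t) and not t["fraud"])
    fn = sum(1 for t in transactions if not flag_fraud(t) and t["fraud"])
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

txs = [
    {"amount": 1500, "new_country": True,  "fraud": True},   # caught
    {"amount": 2000, "new_country": True,  "fraud": False},  # false alarm
    {"amount": 50,   "new_country": False, "fraud": False},  # correctly ignored
    {"amount": 1200, "new_country": False, "fraud": True},   # missed
]
print(precision_recall(txs))  # -> (0.5, 0.5)
```

Because fraud is rare, precision/recall style metrics matter far more than raw accuracy, which a trivial "never fraud" model would maximize.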
Failures become inevitable in DRAM devices, which is a major obstacle for scaling down DRAM cells in future technologies. These failures can be detected by specific tests that implement data and memory access patterns having a strong impact on reliability. However, the design of such tests is very challenging, especially when testing devices during operation, due to an extremely large number of possible cell-to-cell interference effects and combinations inducing these effects. In this paper, we present a new framework...
The increased variability and the adopted low supply voltages render nanometer devices prone to timing failures, which threaten the functionality of digital circuits. Recent schemes have focused on developing instruction-aware failure prediction models for adapting voltage/frequency to avoid errors while saving energy. However, such models may be inaccurate when applied to pipelined cores, since they consider only the currently executed instruction and the preceding one, thereby neglecting the impact of all concurrently executing...
The continuous scaling of transistor sizes and the increased parametric variations render nanometer circuits more prone to timing failures. To protect from such failures, designers typically adopt pessimistic margins, which are estimated under rare worst-case conditions. In this paper, we present a technique that mitigates such margins by minimizing the number of long latency paths. In particular, we propose a method that minimizes the long latency paths within each processor pipeline stage and constrains them in as few stages as possible. Such...
Deep learning is pervasive in our daily life, including self-driving cars, virtual assistants, social network services, healthcare, face recognition, etc. However, deep neural networks demand substantial compute resources during training and inference. The machine learning community has mainly focused on model-level optimizations such as architectural compression of models, while the system community has focused on implementation-level optimization. In between, various arithmetic-level optimization techniques have been...
In this paper, we propose a framework for minimizing variation-induced timing failures in pipelined designs, while limiting the overhead incurred by conventional guardband-based schemes. Our approach initially limits the long latency paths (LLPs) and isolates them in as few pipeline stages as possible by shaping the path distribution. Such a strategy facilitates the adoption of a special unit that predicts the excitation of the isolated LLPs and dynamically allows an extra cycle for the completion of only these error-prone paths....
We are witnessing an explosive growth in the number of Internet-connected devices and the emergence of several new classes of Internet of Things (IoT) applications that require rapid processing of an abundance of data. To overcome the resulting need for more network bandwidth and low latency, a paradigm has emerged that promotes offering Cloud services at the Edge, closer to the users. However, the Edge is a highly constrained environment with a limited power budget and number of servers per installation, which, in turn, limits the number of devices, such as sensors, that can...
Improving the energy efficiency of the memory subsystem is becoming increasingly important for all digital systems due to the rapid growth of data. Many recent schemes have attempted to reduce DRAM power by relaxing the refresh rate, which may negatively affect reliability. To optimize the trade-offs between energy and reliability, existing studies rely on experimental setups based on FPGAs and use a few known data-patterns for exciting rare worst-case circuit reliability effects. However, by doing so, they may be missing the ability to capture the real behavior...
Today's rapid generation of data and the increased need for higher memory capacity have triggered a lot of studies on aggressively scaling the refresh period, which is currently set according to rare worst-case conditions. Such studies have analysed in detail the data-dependent circuit-level factors and indicated the need for online DRAM characterization due to the variable cell retention time. They have done so by executing a few test patterns on FPGAs under controlled temperatures using thermal testbeds, which, however, cannot be available in the field....
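The characterization idea above, write a pattern, pause refresh, read back, and record which cells leaked, can be sketched with an idealized simulation. This is not the paper's method: real tests run on hardware, while here per-cell retention times are invented to model weak cells.

```python
# Illustrative simulation (not the paper's setup): an idealized DRAM
# retention-time test. Each cell is modeled by a made-up retention
# time; a cell "fails" if it cannot hold its bit for the test window.

def find_weak_cells(retention_ms, wait_ms, pattern=1):
    """Return addresses whose simulated cell loses `pattern` within wait_ms."""
    weak = []
    for addr, ret in enumerate(retention_ms):
        stored = pattern if ret > wait_ms else 1 - pattern  # charge leaked
        if stored != pattern:
            weak.append(addr)
    return weak

# Cells 1 and 3 retain their charge for less than the 64 ms window,
# so a 64 ms refresh period is safe only for the remaining cells.
retention = [120.0, 40.0, 200.0, 10.0, 95.0]
print(find_weak_cells(retention, wait_ms=64))  # -> [1, 3]
```

The difficulty the abstract points at is that real retention depends on data patterns and temperature, so a fixed set of offline test conditions, as modeled here, cannot capture in-the-field behavior.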
The continuous scaling of transistor sizes and the increased static and dynamic parametric variations render nanometer circuits more prone to timing failures. To protect from such failures, designers typically adopt pessimistic margins, which are estimated statically under rare worst-case conditions. In this paper, we aim at minimizing timing failures while avoiding such margins by proposing an approach that initially minimizes the number of long latency paths within each processor pipeline stage and constrains them in as...
In this paper, we present the implementation of a heterogeneous-reliability DRAM framework, Shimmer, on a commodity server with a fully fledged OS. Shimmer enables splitting DRAM into multiple domains of varying reliability and the allocation of data depending on their criticality. Compared to existing studies, which use simulators, we consider practical restrictions stemming from real hardware and investigate methods to overcome them. In particular, we reveal that such a memory framework requires disabling memory interleaving, which results in...
Nanometer circuits are increasingly prone to timing errors, escalating the need for fault injection frameworks to accurately evaluate their impact on applications. In this paper, we propose ARETE, a novel cross-layer fault-injection framework that combines dynamic-binary instrumentation with machine learning-guided dynamic-timing analysis. ARETE enables accurate fault injection into any application by...
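To illustrate the core idea of fault injection, and only that; this is not ARETE, which works via dynamic-binary instrumentation, here is a minimal sketch that models a timing error as a single bit flip in a value an application computes, showing why the flipped bit's position determines the error's impact.

```python
# Illustrative sketch (not ARETE): the essence of a bit-flip fault
# injector. A timing error is modeled as flipping one bit of an
# unsigned integer result produced by a tiny "application".

def inject_bit_flip(value, bit, width=32):
    """Flip one bit of an unsigned `width`-bit integer."""
    return (value ^ (1 << bit)) & ((1 << width) - 1)

def faulty_sum(data, bit):
    """Run the application, then corrupt its result at the given bit."""
    return inject_bit_flip(sum(data), bit)

print(faulty_sum([1, 2, 3], bit=0))  # -> 7: LSB flip, small deviation from 6
print(faulty_sum([1, 2, 3], bit=4))  # -> 22: higher-order flip, larger impact
```

A real framework additionally decides *where* and *when* to inject, across instructions, registers, and execution time, which is what the machine learning-guided dynamic-timing analysis in the abstract is for.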