- Advanced Neural Network Applications
- Parallel Computing and Optimization Techniques
- Low-power high-performance VLSI design
- Machine Learning and Data Classification
- Neural Networks and Applications
- Ferroelectric and Negative Capacitance Devices
- Semiconductor materials and devices
- Advanced Data Storage Technologies
- VLSI and FPGA Design Techniques
- Adversarial Robustness in Machine Learning
- Silicon Carbide Semiconductor Technologies
- CCD and CMOS Imaging Sensors
- Traffic Prediction and Management Techniques
- Advanced Multi-Objective Optimization Algorithms
- Green IT and Sustainability
- Advanced Memory and Neural Computing
- Radiation Effects in Electronics
Carnegie Mellon University
2015-2018
"How much energy is consumed for an inference made by a convolutional neural network (CNN)?" With the increased popularity of CNNs deployed on wide-spectrum platforms (from mobile devices to workstations), answer this question has drawn significant attention. From lengthening battery life reducing bill datacenter, it important understand efficiency during serving making inference, before actually training model. In work, we propose NeuralPower: layer-wise predictive framework based sparse...
While selecting the hyper-parameters of Neural Networks (NNs) has so far been treated as an art, the emergence of more complex, deeper architectures poses increasingly greater challenges to designers and Machine Learning (ML) practitioners, especially when power and memory constraints need to be considered. In this work, we propose HyperPower, a framework that enables efficient Bayesian optimization and random search in the context of power- and memory-constrained hyper-parameter optimization for NNs running on a given hardware platform....
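As a rough illustration of power- and memory-constrained hyper-parameter search, the sketch below implements the simpler of the two approaches the abstract mentions, a constrained random search: candidates whose predicted power or memory exceeds the budget are rejected before any training cost is paid. All functions and budgets here are hypothetical stand-ins, and this loop does not implement HyperPower's constraint-aware Bayesian optimization.

```python
# Hedged sketch of power/memory-constrained random search over NN
# hyper-parameters. predict_power/predict_memory/train_and_eval are
# hypothetical stand-ins, not HyperPower's actual models.
import random

POWER_BUDGET_W = 5.0      # assumed platform power budget
MEMORY_BUDGET_MB = 512.0  # assumed platform memory budget

def sample_config():
    return {"layers": random.randint(2, 8),
            "width": random.choice([64, 128, 256, 512]),
            "lr": 10 ** random.uniform(-4, -1)}

def predict_power(cfg):   # placeholder hardware power model
    return 0.8 * cfg["layers"] * cfg["width"] / 256

def predict_memory(cfg):  # placeholder memory model (MB)
    return cfg["layers"] * cfg["width"] * 0.9

def train_and_eval(cfg):  # placeholder: returns validation accuracy
    return random.random()

best_cfg, best_acc = None, -1.0
for _ in range(100):
    cfg = sample_config()
    # Reject infeasible candidates *before* paying the training cost.
    if predict_power(cfg) > POWER_BUDGET_W or predict_memory(cfg) > MEMORY_BUDGET_MB:
        continue
    acc = train_and_eval(cfg)
    if acc > best_acc:
        best_cfg, best_acc = cfg, acc

print(best_cfg, best_acc)
```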
Recent breakthroughs in Machine Learning (ML) applications, and especially in Deep Learning (DL), have made DL models a key component of almost every modern computing system. The increased popularity of DL applications deployed on a wide spectrum of platforms (from mobile devices to datacenters) has resulted in a plethora of design challenges related to the constraints introduced by the hardware itself. "What is the latency or energy cost for an inference made by a Deep Neural Network (DNN)?" "Is it possible to predict this consumption before the model is even...
Power and thermal issues are the main constraints for high-performance multi-core systems. As the current technology of choice, FinFET is observed to have lower delay under higher temperature in the super-threshold voltage region, an effect called temperature effect inversion (TEI). While it has been shown that system performance can be improved under power constraints, as FinFET technology aggressively scales down to sub-20nm nodes, there also emerge important reliability concerns throughout the system lifetime. To the best of our knowledge, we are the first to provide a...
Energy and temperature are the main constraints for modern high-performance multi-core systems. To save power or increase performance, Dynamic Voltage Frequency Scaling (DVFS) is widely applied in industry. As CMOS technology continues scaling, FinFET has recently become the common choice. In contrast with planar CMOS, FinFET is observed to have lower delay under higher temperature in the super-threshold voltage region, an effect called temperature effect inversion (TEI). Due to this effect, performance can be further improved under power constraints. This...
Near-threshold computing has emerged as a promising solution to significantly increase the energy efficiency of next-generation multicore systems. This paper evaluates and analyzes the behavior of dynamic voltage frequency scaling for systems operating under an extended voltage range, including near-threshold, nominal, and turbo modes. We adapt a model selection technique from machine learning to determine the relationship between performance and power. The theoretical results show that the resulting models satisfy convexity,...
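To make the model-selection step concrete, here is a small illustrative sketch (not the paper's method or data): candidate polynomial power-versus-frequency models of different degrees are compared by cross-validation, mimicking how one might select a model that holds across near-threshold, nominal, and turbo operating points. The synthetic data is an assumption, loosely shaped like the convex power curves such studies report.

```python
# Illustrative model selection for a power-vs-frequency DVFS model:
# fit polynomials of several degrees and keep the one with the best
# cross-validated error. Data and coefficients are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
freq = np.linspace(0.4, 3.0, 40).reshape(-1, 1)                  # GHz, near-threshold to turbo
power = 0.5 + 1.2 * freq.ravel() ** 3 + rng.normal(0, 0.3, 40)   # synthetic watts

best_degree, best_score = None, -np.inf
for degree in (1, 2, 3, 4):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    score = cross_val_score(model, freq, power, cv=5,
                            scoring="neg_mean_squared_error").mean()
    if score > best_score:
        best_degree, best_score = degree, score

print("selected polynomial degree:", best_degree)
```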
Energy and temperature are the main constraints for modern high-performance multicore systems. To save power or increase performance, dynamic voltage frequency scaling (DVFS) is widely applied in literally all computing systems. As CMOS technology continues scaling, FinFET has recently become the common choice. In contrast with planar CMOS, FinFET is characterized by lower delay under higher temperatures in the super-threshold voltage region, an effect called temperature effect inversion (TEI). This paper explores TEI-aware performance improvement...
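The TEI effect can be illustrated with a toy timing model (an assumption, not measured FinFET data): because delay falls as temperature rises in the super-threshold region, the maximum safe frequency at a fixed voltage grows with temperature, which is exactly the slack a TEI-aware DVFS policy can reclaim.

```python
# Toy illustration of temperature effect inversion (TEI): a made-up
# delay model in which critical-path delay *decreases* with rising
# temperature at super-threshold voltage, so the maximum safe clock
# frequency increases with temperature. Coefficients are assumptions.
def critical_path_delay_ns(temp_c: float) -> float:
    # Delay shrinks ~0.05% per degree C above 25 C (illustrative TEI slope).
    return 2.0 * (1.0 - 0.0005 * (temp_c - 25.0))

def max_safe_freq_ghz(temp_c: float, margin: float = 0.9) -> float:
    # Frequency is bounded by the inverse of the critical-path delay,
    # derated by a timing margin.
    return margin / critical_path_delay_ns(temp_c)

for temp in (25, 50, 75, 100):
    print(f"{temp:>3} C -> {max_safe_freq_ghz(temp):.3f} GHz")
```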
Recent breakthroughs in Deep Learning (DL) applications have made DL models a key component of almost every modern computing system. The increased popularity of DL models deployed on a wide spectrum of platforms has resulted in a plethora of design challenges related to the constraints introduced by the hardware itself. What is the latency or energy cost for an inference made by a Deep Neural Network (DNN)? Is it possible to predict this consumption before the model is trained? If yes, how can machine learners take advantage of these hardware-optimal DNN...