A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning

Resource Management
DOI: 10.48550/arxiv.1703.04221
Publication Date: 2017-01-01
ABSTRACT
Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in cloud computing systems. However, a complete cloud resource allocation framework exhibits high-dimensional state and action spaces, which prohibit the usefulness of traditional RL techniques. In addition, power consumption has become one of the critical concerns in the design and control of cloud computing systems, as it degrades system reliability and increases cooling cost. An effective dynamic power management (DPM) policy should minimize power consumption while maintaining performance degradation within an acceptable level. Thus, a joint virtual machine (VM) resource allocation and power management framework is critical to the overall cloud computing system. Moreover, a novel solution framework is necessary to address the even higher dimensionality of the joint state and action spaces. In this paper, we propose a hierarchical framework for solving the overall resource allocation and power management problem in cloud computing systems. The proposed framework comprises a global tier for VM resource allocation to the servers and a local tier for distributed power management of the local servers. The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with a large state space, is adopted to solve the global-tier problem. Furthermore, an autoencoder and a novel weight-sharing structure are adopted to handle the high-dimensional state space and accelerate the convergence speed. On the other hand, the local tier of distributed server power management comprises an LSTM-based workload predictor and a model-free RL-based power manager, operating in a distributed manner.
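
To make the two-tier structure described in the abstract concrete, the following is a minimal sketch in PyTorch of the components it names: an autoencoder that compresses the high-dimensional cluster state, a DQN-style global allocator whose shared trunk stands in for the weight-sharing structure, an LSTM workload predictor, and a tabular model-free (Q-learning) power manager for the local tier. All class names, layer sizes, and hyperparameters here are illustrative assumptions, not the authors' actual architecture.

```python
# Illustrative sketch of the hierarchical framework; sizes and names are assumed.
import torch
import torch.nn as nn


class StateAutoencoder(nn.Module):
    """Compresses the high-dimensional cluster state (e.g., per-server
    utilization plus pending-VM features) into a low-dimensional code."""

    def __init__(self, state_dim: int, code_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, state_dim))

    def forward(self, state):
        code = self.encoder(state)
        return code, self.decoder(code)  # reconstruction used for pretraining


class GlobalAllocator(nn.Module):
    """Global tier: a DQN-style value network that scores assigning the
    incoming VM request to each candidate server. The shared trunk plays
    the role of the weight-sharing structure mentioned in the abstract."""

    def __init__(self, code_dim: int, num_servers: int):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(code_dim, 64), nn.ReLU())
        self.q_head = nn.Linear(64, num_servers)  # one Q-value per server

    def forward(self, code):
        return self.q_head(self.shared(code))


class WorkloadPredictor(nn.Module):
    """Local tier, part 1: an LSTM that predicts the next workload level
    of a single server from its recent history."""

    def __init__(self, hidden: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, history):          # history: (batch, seq_len, 1)
        h, _ = self.lstm(history)
        return self.out(h[:, -1, :])     # predicted next workload value


class PowerManager:
    """Local tier, part 2: a tabular model-free Q-learning power manager
    that picks a sleep/timeout action given the discretized predicted
    workload for its server."""

    def __init__(self, n_states: int = 10, n_actions: int = 4,
                 lr: float = 0.1, gamma: float = 0.9):
        self.q = torch.zeros(n_states, n_actions)
        self.lr, self.gamma = lr, gamma

    def act(self, s: int, eps: float = 0.1) -> int:
        # epsilon-greedy action selection
        if torch.rand(1).item() < eps:
            return int(torch.randint(self.q.shape[1], (1,)).item())
        return int(self.q[s].argmax())

    def update(self, s: int, a: int, reward: float, s_next: int):
        # reward trades off energy saved against latency/performance penalty
        target = reward + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.lr * (target - self.q[s, a])
```

In a full system, the autoencoder's code would feed the global allocator on every VM request, while each server would run its own workload predictor and power manager locally, matching the distributed operation the abstract describes.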