Taxonomy of resource allocation algorithms for inner IaaS cloud data centres Dang Minh Quan



We analyzed the differences among the algorithms in each subsection of Section 4. In this section, we focus on the applicability of the studied solutions to real environments. In practice, there are two main types of IaaS cloud data centre: public clouds and private clouds.

Public clouds, or commercial clouds, provide resources to anyone able to pay for resource usage. The trend in IaaS cloud data centres is to offer a wide range of products over a single resource infrastructure. A representative example is Amazon EC2 [2], with spot-market VM instances, reserved VM instances and on-demand VM instances. This policy gives users flexible options for using cloud resources and thus motivates them to move from traditional computing to cloud computing. The main goal of public cloud providers is to maximize profit. As discussed in Section 3.2.3, this goal can be achieved either by selling the same amount of resources at the highest possible price or by maximizing the workload hosted on a fixed amount of resources. For the first approach, spot-market algorithms such as [20] can be used. The second approach can be realized with energy-efficient algorithms such as [36,37], because energy-efficient mechanisms try to host the workload on the smallest possible number of PMs. The literature also shows that the initial placement mechanism contributes far more to the goal of the Cloud data centre than the VM migration mechanism: given an energy-efficient initial placement algorithm, VM migration improves efficiency by only a few percent.
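To make the consolidation idea concrete, the sketch below packs VMs onto the fewest possible PMs with the classic first-fit-decreasing bin-packing heuristic, the kind of rule that energy-efficient placement algorithms such as [36,37] build on. The function name and the single resource dimension are illustrative assumptions, not the exact method of any surveyed paper.

```python
def first_fit_decreasing(vm_demands, pm_capacity):
    """Pack VMs onto as few PMs as possible (first-fit decreasing).

    vm_demands: per-VM resource demand (one dimension, e.g. CPU units);
    pm_capacity: capacity of a single homogeneous PM.
    Returns a list of PMs, each {"free": remaining, "vms": [demands]}.
    """
    pms = []
    for demand in sorted(vm_demands, reverse=True):  # largest VMs first
        for pm in pms:
            if pm["free"] >= demand:                 # first PM that fits
                pm["free"] -= demand
                pm["vms"].append(demand)
                break
        else:
            # no powered-on PM can host this VM: power on a new one
            pms.append({"free": pm_capacity - demand, "vms": [demand]})
    return pms

# six VMs with a total demand of 250 fit on 3 PMs of capacity 100
placement = first_fit_decreasing([50, 70, 20, 40, 10, 60], pm_capacity=100)
print(len(placement))  # 3
```

Every PM left out of the returned list can stay powered off, which is exactly where the energy saving comes from.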

There should be no problem with the wide-range-of-products policy as long as the total demand is smaller than the available capacity. The situation becomes more complicated when the total demand exceeds the available capacity: how to allocate resources among many products while optimizing profit is still an open issue. Another difficult situation is a short peak of demand; reserving enough resources for such peaks may lead to inefficient resource usage. An initial solution to this issue is proposed in [14] with the addition of best-effort VMs. However, a detailed study for a large-scale scenario like Amazon's [2] is still necessary.
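A minimal sketch of the best-effort idea can be written as an admission test: paying (reserved or on-demand) requests may preempt best-effort VMs when capacity runs out. The function name, the single resource dimension and the largest-first eviction order are illustrative assumptions, not the mechanism of [14] itself.

```python
def admit_request(demand, free, best_effort):
    """Admit a paying (reserved/on-demand) request of size `demand`.

    `free` is the currently unused capacity; `best_effort` holds the
    demands of running best-effort VMs, which may be preempted to make
    room.  Returns (admitted, preempted); `best_effort` is updated in
    place when VMs are evicted.
    """
    if free >= demand:
        return True, []              # fits without any preemption
    if free + sum(best_effort) < demand:
        return False, []             # infeasible even if every
                                     # best-effort VM were evicted
    preempted = []
    for vm in sorted(best_effort, reverse=True):  # evict largest first,
        preempted.append(vm)                      # minimizing evictions
        free += vm
        if free >= demand:
            break
    for vm in preempted:
        best_effort.remove(vm)
    return True, preempted
```

Evicted best-effort VMs would then be queued for restart once capacity frees up again, so the peak is absorbed without permanently reserving resources for it.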

Private clouds provide resources to users within the border of an organization. Depending on the organization's policy, the goal of a private cloud may differ, and with each goal the system can apply different solutions. In general, the resource allocation algorithms for both initial VM placement and VM migration fall into two classes: simple heuristics and applications of complex algorithms. For example, for initial VM placement with the goal of load balancing, the system can use a simple heuristic such as round robin [2,4,23,24], random [24], least connections [4] or weighted selection [23,46], or apply a genetic algorithm [25]. Simple heuristics have the advantage of fast execution and easy implementation. Complex algorithms usually perform better, but they are slower and more complicated to implement. Simple heuristics appear to be preferred in real systems [1].
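As an illustration of how lightweight these heuristics are, the sketch below implements round-robin and weighted selection over a list of PMs; the helper names are hypothetical and not taken from the cited systems.

```python
import itertools
import random

def round_robin(pms):
    """Return a picker that cycles through PMs in order, ignoring load."""
    cycle = itertools.cycle(pms)
    return lambda: next(cycle)

def weighted_selection(pms, weights):
    """Return a picker that chooses a PM with probability proportional
    to its weight (e.g. its remaining capacity)."""
    return lambda: random.choices(pms, weights=weights, k=1)[0]

pick = round_robin(["pm1", "pm2", "pm3"])
print([pick() for _ in range(4)])  # ['pm1', 'pm2', 'pm3', 'pm1']
```

Both pickers run in constant time per placement decision, which is why such heuristics dominate in production systems, whereas a genetic algorithm such as [25] searches over whole placements at once and pays for its better solutions with longer running time.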

  Conclusion

Cloud computing is a promising model for delivering IT services as computing utilities. The resource allocation module is an important part of every IaaS Cloud system. In this paper, we have studied and classified different algorithms for mapping virtual machines to physical machines inside cloud computing systems. Recent research developments have been discussed and categorized by execution phase, business model and resource allocation goal.

Efficient resource allocation in IaaS Cloud computing systems is a well-known and extensively studied problem. Allocation decisions are made for both homogeneous and heterogeneous IaaS cloud infrastructures under different business models such as spot markets, game theory, resource reservation and on-demand resources. The proposed allocation algorithms range from simple heuristics to applications of well-known methods such as genetic algorithms, Linear Programming, Constraint Satisfaction Programming, etc. We also discussed the applicability of the studied solutions to the two main Cloud data centre types: public Cloud (or commercial Cloud) and private Cloud. From this analysis, open issues and future directions were stated.


  References

  1. Rimal, B. P., Choi, E., Lumb, I., 2009, A Taxonomy and Survey of Cloud Computing Systems, Proceedings of the Fifth International Joint Conference on INC, IMS and IDC, pp. 44-51.









  10. Endo, P. T., Gonçalves, G. E., Kelner, J., Sadok, D., 2010, A Survey on Open-source Cloud Computing Solutions, Proceedings of the 28th edition of the Brazilian Symposium on Computer Networks and Distributed Systems (SBRC 2010), pp. 3-16.

  11. Beloglazov, A., Buyya, R., Lee, Y. C., Zomaya, A. Y., 2011, A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems. Advances in Computers 82, pp. 47-111.


  13. Basmadjian, R., Ali, N., Niedermeier, F., Meer H. d., and Giuliani, G., 2011, A Methodology to Predict the Power Consumption for Data Centres, Proceedings of e-Energy 2011, pp. 1-10.

  14. Sotomayor, B., Keahey, K., Foster, I. T., 2008, Combining batch execution and leasing using virtual machines, Proceedings of HPDC 2008, pp. 87-96.

  15. Lifka, D. A., 1995, The ANL/IBM SP scheduling system, Proceedings of the Workshop on Job Scheduling Strategies for Parallel Processing, IPPS ’95, pp. 295–303.

  16. Mu’alem, A. W., and Feitelson, D. G., 2001, Utilization, predictability, workloads, and user runtime estimates in scheduling the IBM SP2 with backfilling. IEEE Trans. Parallel Distrib. Syst., 12(6), pp. 529–543.

  17. Wang, X., 2011, Research on Adaptive QoS-Aware Resource Reservation Management in Cloud Service Environments, Proceedings of 2011 IEEE Asia-Pacific Services Computing Conference (APSCC 2011), pp. 147 – 152.

  18. Zhao, M., and Figueiredo, R. J., 2007, Experimental study of virtual machine migration in support of reservation of cluster resources, Proceedings of the 2nd international workshop on Virtualization technology in distributed computing, pp. 1-8.

  19. Zaman, S., and Grosu, D., 2010, Combinatorial auction-based allocation of virtual machine instances in clouds, Proceedings of the 2nd IEEE Intl. Conf. On Cloud Computing Technology and Science, pp. 127–134.

  20. Zaman, S., and Grosu, D., 2011, Combinatorial Auction-Based Dynamic VM Provisioning and Allocation in Clouds, Proceedings of CloudCom 2011, pp. 107-114.

  21. Jalaparti, V., Nguyen, G. D., Gupta, I., Caesar, M., 2010, Cloud Resource Allocation Games, Illinois Technical Report, pp. 124-133.

  22. Wei, G., Vasilakos, A. V., Zheng, Y., Xiong, N., 2010, A game-theoretic method of fair resource allocation for cloud computing services, Journal of Supercomputing, vol. 54, pp. 252-269.



  25. Hu, J., Gu, J., Sun, G., and Zhao, T., 2010, A Scheduling Strategy on Load Balancing of Virtual Machine Resources in Cloud Computing Environment, Third International Symposium on Parallel Architectures, Algorithms and Programming (PAAP), pp. 89-96.

  26. Nisan, N., Roughgarden, T., Tardos, E., and Vazirani, V. V., 2007, Algorithmic Game Theory. Cambridge University Press.

  27. Randles, M., Lamb, D., and Taleb-Bendiab, A., 2010, A Comparative Study into Distributed Load Balancing Algorithms for Cloud Computing, Proceedings of 24th IEEE International Conference on Advanced Information Networking and Applications Workshops, pp. 551-556.

  28. Wang, S., Yan, K., Liao, W., and Wang, S., 2010, Towards a Load Balancing in a Three-level Cloud Computing Network, Proceedings of the 3rd IEEE International Conference on Computer Science and Information Technology (ICCSIT), pp. 108-113.

  29. Galloway, J. M., Smith, K. L., Vibsky, S. S., 2011, Power Aware Load Balancing for Cloud Computing, Proceedings of WCECS2011, pp.127-132.

  30. Do, T. V., 2011, Comparison of Allocation Schemes for Virtual Machines in Energy-Aware Server Farms, The Computer Journal 54(11), pp. 1790-1797.


  32. Nurmi, D., Wolski, R., Grzegorczyk, C., Obertelli, G., Soman, S., Youseff, L., and Zagorodnov, D., 2009, The eucalyptus open-source cloud-computing system, Proceedings of the 2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, CCGRID ’09, pp. 124–131.

  33. Mazzucco, M., Dyachuk, D., and Deters, R., 2010, Maximizing cloud providers’ revenues via energy aware allocation policies, Proceedings of the IEEE International Conference on Cloud Computing, pp.131–138.

  34. Lubin, B., Kephart, J. O., Das, R., Parkes, D. C., 2009, Expressive Power-Based Resource Allocation for Data Centers, Proceedings of the 21st International Joint Conference on Artificial Intelligence, pp. 1451-1456.

  35. Chase, J. S., Anderson, D. C., Thakar, P. N., Vahdat, A. M., Doyle, R. P., 2001, Managing energy and server resources in hosting centers, ACM SIGOPS Operating Systems Review, vol. 35, nr. 5, pp. 103-116.

  36. Quan, D. M., Basmadjian, R., Meer, H. d., Lent, R., Mahmoodi, T., Sannelli, D., Mezza, F., Telesca, L., Dupont, C., 2011, Energy Efficient Resource Allocation Strategy for Cloud Data Centres, Proceedings of ISCIS 2011, pp. 133-141.

  37. Beloglazov, A., Abawajy, J. H., Buyya, R., 2012, Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing, Future Generation Comp. Syst. vol.28. nr. 5, pp. 755-768.

  38. Srikantaiah, S., Kansal, A., Zhao, F., 2008, Energy aware consolidation for cloud computing, Proceedings of the 2008 conference on Power aware computing and systems, pp. 1-10.

  39. Lee, S., Panigrahy, R., Prabhakaran, V., Ramasubrahmanian, V. , Talwar, K., Uyeda, L. and Wieder, U., 2011, Validating Heuristics for Virtual Machine Consolidation, Microsoft Research, MSR-TR-2011-9, pp. 1-14.

  40. Bellur, U., Rao C., and Kumar, M., 2010, Optimal Placement Algorithms for Virtual Machines, Proceedings of CoRR, pp.103-110.

  41. Kozlov, M. K., Tarasov, S. P., and Khachiyan, L. G., 1980, The polynomial solvability of convex quadratic programming, USSR Computational Mathematics and Mathematical Physics, vol. 20. nr.5, pp. 223–228.

  42. lp-solve.

  43. Van, H., and Tran, F., 2009, Autonomic resource management for service host platforms, Proceedings of Workshop on Software Engineering Challenges in Cloud Computing, pp. 1-8.

  44. Meng, X., Pappas, V., and Zhang, L., 2010, Improving the Scalability of Data Center Networks with Traffic-aware Virtual Machine Placement, Proceedings of IEEE 2010 INFOCOM, pp. 1-9.


  46. Chandrasekaran, B., Purush, R., Douglas, B., and Schmidt, D., 2007, Virtualization Management Using Microsoft System Center and Dell OpenManage, Dell Power Solutions, pp. 40-44.

  47. Machida, F., Kawato, M., and Maeno, Y., 2010, Redundant Virtual Machine Placement for Fault-tolerant Consolidated Server Clusters, Proceedings of the 12th IEEE/IFIP Network Operations and Management Symposium, pp. 32-39.

  48. Tsakalozos, K., Roussopoulos, M., and Delis, A., 2011, VM Placement in non-Homogeneous IaaS-Clouds, Proceedings of 9th International Conference on Service Oriented Computing (ICSOC 2011), pp. 172-187.

  49. Epping, D., Denneman, F., 2010, VMware vSphere 4.1 HA and DRS Technical Deepdive, CreateSpace, ISBN-10: 1456301446.

  50. Wood, T., Shenoy, P., and Venkataramani, A., 2007, Black-box and gray-box strategies for virtual machine migration, Proceedings of NSDI 2007, pp. 229-242.

  51. Khanna, G., Beaty, K., Kar, G., and Kochut, A., 2006, Application performance management in virtualized server environments, Proceedings of 10th IEEE/IFIP Network Operations and Management Symposium NOMS 2006, pp. 373 –381.

  52. Arzuaga, E., and Kaeli, D. R., 2010, Quantifying load imbalance on virtualized enterprise servers, Proceedings of the first joint WOSP/SIPEW international conference on Performance engineering, pp. 235–242.

  53. Singh, A., Korupolu, M., and Mohapatra, D., 2008, Server-storage virtualization: Integration and load balancing in data centers, Proceedings of International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1 –12.

  54. Zhao, Y., and Huang, W., 2009, Adaptive Distributed Load Balancing Algorithm based on Live Migration of Virtual Machines in Cloud, Proceedings of 5th IEEE International Joint Conference on INC, IMS and IDC, pp. 170-175.

  55. Verma, A., Ahuja, P., and Neogi, A., 2008, pMapper: Power and Migration Cost Aware Application Placement in Virtualized Systems, Proceedings of the 9th ACM/IFIP/USENIX International Conference on Middleware, pp. 243-264.

  56. Takeda S., and Takemura, T., 2010, A rank-based vm consolidation method for power saving in datacenters. Information and Media Technologies, vol. 5, nr. 3, pp. 994-1002.

  57. Lin, C. C., Liu, P., Wu, J. J., 2011, Energy-efficient Virtual Machine Provision Algorithms for Cloud Systems, 2011 Fourth IEEE International Conference on Utility and Cloud Computing, pp.81-88.

  58. Li, B., Li, J., Huai, J., Wo, T., Li, Q., Zhong, L., 2009, EnaCloud: An Energy-saving Application Live Placement Approach on Cloud Computing Environments, IEEE International Conference on Cloud Computing, 2009. CLOUD '09, pp. 17 - 24.

  59. Lee, C. C., and Lee, D. T., 1985, A simple on-line bin-packing algorithm. Journal of the ACM, 32(3), pp. 562-572.

  60. Xu, J., and Fortes, J., 2010, Multi-objective Virtual Machine Placement in Virtualized Data Center Environments, Proceedings of the 2010 IEEE/ACM Conference on Green Computing and Communications, pp. 179-188.

  61. Bobroff, N., Kochut, A., and Beaty, K., 2007, Dynamic Placement of Virtual Machines for Managing SLA Violations, Proceedings of the 10th IFIP/IEEE Symposium on Integrated Network Management, pp. 119-128.

  62. Waldspurger, C. A., 2002, Memory Resource Management in VMware ESX Server, ACM SIGOPS Operating Systems Review - OSDI '02: Proceedings of the 5th symposium on Operating systems design and implementation, pp. 181-194.

  63. Silpa, CS., Basha, S. S. M., 2013, A Comparative Analysis of Scheduling Policies in Cloud Computing Environment. International Journal of Computer Applications 67(20), pp. 16-24.

  64. Do, T. V., Rotter, C., 2012, Comparison of scheduling schemes for on-demand IaaS requests. Journal of Systems and Software 85(6), pp. 1400-1408.

  65. Mitrani, I., 2013, Managing performance and power consumption in a server farm. Annals OR 202(1), pp. 121-134.

  66. Mitrani, I., 2011, Service center trade-offs between customer impatience and power consumption. Perform. Eval. 68(11), pp. 1222-1231.

  67. Do, T. V., Krieger, U. R., 2009, A Performance Model for Maintenance Tasks in an Environment of Virtualized Servers. In: IFIP/TC6 NETWORKING 2009, pp. 931-942.

Author's biography

Dang Minh Quan is a lecturer at the Institute of Information Technology for Economics, National Economics University, Vietnam. He received his Ph.D. (2006) from the University of Paderborn, Germany. His current research centers on energy saving for data centers. In particular, he focuses on designing energy-efficient algorithms for traditional data centers, cloud data centers and HPC data centers.

