Cloud Benchmarking for Maximising Performance of Scientific Applications

Cited by: 16
Authors
Varghese, Blesson [1 ]
Akgun, Ozgur [2 ]
Miguel, Ian [2 ]
Thai, Long [2 ]
Barker, Adam [2 ]
Affiliations
[1] Queens Univ Belfast, Sch Elect Elect Engn & Comp Sci, Belfast BT7 1NN, Antrim, North Ireland
[2] Univ St Andrews, Sch Comp Sci, St Andrews KY16 9AJ, Fife, Scotland
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK
Keywords
Cloud benchmark; cloud performance; benchmarking methodology; cloud ranking
DOI
10.1109/TCC.2016.2603476
CLC Number
TP [Automation and Computer Technology]
Subject Classification Code
0812
Abstract
How can applications be deployed on the cloud to achieve maximum performance? This question is challenging to address given the wide variety of cloud Virtual Machines (VMs) with different performance capabilities. The research reported in this paper addresses the above question by proposing a six-step benchmarking methodology in which a user provides a set of weights that indicate how important memory, local communication, computation and storage related operations are to an application. The user can provide either a set of four abstract weights or eight fine-grained weights based on knowledge of the application. The weights, along with benchmarking data collected from the cloud, are used to generate two rankings: one based only on the performance of the VMs, and the other taking both performance and cost into account. The rankings are validated on three case study applications using two validation techniques. The case studies on a set of experimental VMs highlight that maximum performance can be achieved by the three top-ranked VMs, and that maximum performance in a cost-effective manner is achieved by at least one of the top three VMs ranked by the methodology.
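To illustrate the weighted-ranking idea summarised in the abstract, the Python sketch below computes a performance-only ranking and a performance-per-cost ranking from user-supplied weights and normalised benchmark results. The metric names, weight values, VM names and prices are assumptions chosen for illustration; they are not taken from the paper's benchmark suite or experimental setup.

# Illustrative sketch (not the paper's implementation): rank VMs by a
# user-weighted score over normalised benchmark metrics, plus a
# cost-adjusted variant. All names and numbers below are hypothetical.

# Four abstract weight groups named in the abstract: memory, local
# communication, computation and storage (0 = unimportant, 5 = critical).
weights = {"memory": 3, "communication": 1, "computation": 5, "storage": 2}

# Hypothetical normalised benchmark results per VM type (higher is better)
# and hypothetical hourly prices in USD.
benchmarks = {
    "vm.small":  {"memory": 0.4, "communication": 0.5, "computation": 0.3, "storage": 0.6},
    "vm.medium": {"memory": 0.7, "communication": 0.6, "computation": 0.6, "storage": 0.7},
    "vm.large":  {"memory": 0.9, "communication": 0.8, "computation": 0.9, "storage": 0.8},
}
prices = {"vm.small": 0.05, "vm.medium": 0.10, "vm.large": 0.20}

def performance_score(vm):
    # Weighted average of the normalised benchmark metrics for one VM.
    total_weight = sum(weights.values())
    return sum(weights[m] * benchmarks[vm][m] for m in weights) / total_weight

# Ranking 1: performance only.
perf_ranking = sorted(benchmarks, key=performance_score, reverse=True)

# Ranking 2: performance per unit cost.
cost_ranking = sorted(benchmarks,
                      key=lambda vm: performance_score(vm) / prices[vm],
                      reverse=True)

print("Performance ranking:     ", perf_ranking)
print("Performance/cost ranking:", cost_ranking)

Running the sketch prints the two rankings described in the abstract: the large VM leads on raw performance, while the cheaper VMs can overtake it once price is factored in, which is exactly the trade-off the paper's cost-aware ranking is meant to expose.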
Pages: 170-182
Number of pages: 13