Evaluating the User Acceptance Testing for Multi-tenant Cloud Applications

Cited by: 2
Authors
Pinto, Victor Hugo Santiago C. [1 ]
Oliveira, Ricardo R. [1 ]
Vilela, Ricardo F. [1 ]
Souza, Simone R. S. [1 ]
Affiliations
[1] Univ Sao Paulo ICMC USP, Inst Math & Comp Sci, Sao Paulo, Brazil
Source
CLOSER: PROCEEDINGS OF THE 8TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING AND SERVICES SCIENCE | 2018
Keywords
Multi-tenancy; Cloud Applications; User Acceptance Testing;
DOI
10.5220/0006664000470056
CLC number
TP301 [Theory and Methods];
Subject classification code
081202;
Abstract
SaaS (Software as a Service) is a service delivery model in which an application is provided on demand over the Internet. Multi-tenant architecture is essential for SaaS because it enables multiple customers, so-called tenants, to share the system's resources transparently, reducing costs while allowing the software layer to be customized, which results in variant applications. Despite the popularity of this model, few evaluations of software testing in cloud computing have been reported. Many researchers argue that traditional software testing may not be suitable for validating cloud applications owing to their high degree of customization, dynamic environment, and multi-tenancy. User Acceptance Testing (UAT) evaluates the external quality of a product and complements earlier testing activities. The main focus of this paper is to investigate the ability of parallel and automated UAT to detect faults with regard to the number of tenants. Thus, our aim is to evaluate to what extent fault-detection ability varies when different numbers of variant applications are executed. A case study was designed with a multi-tenant application called iCardapio and a testing framework built through Selenium and JUnit extensions. The results showed a significant difference in the number of detected faults between single-tenant and multi-tenant test scenarios.
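To make the abstract's description concrete, the sketch below shows one way a Selenium/JUnit-based UAT could run the same acceptance scenario against several tenants in parallel. This is a minimal sketch, not the framework used in the paper: the tenant URLs, the class and method names, and the ".menu-item" selector are hypothetical, chosen only to illustrate tenant-parameterized, parallel acceptance testing.

```java
// A minimal sketch, NOT the authors' framework: it only illustrates how a
// JUnit test could drive the same Selenium acceptance scenario against
// several tenants in parallel. Tenant URLs and the ".menu-item" selector
// are hypothetical placeholders.
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import static org.junit.jupiter.api.Assertions.assertTrue;

public class MultiTenantUatSketch {

    // Hypothetical per-tenant entry points of the application under test.
    private static final List<String> TENANT_URLS = List.of(
            "http://app.example.com/tenant-a",
            "http://app.example.com/tenant-b",
            "http://app.example.com/tenant-c");

    @Test
    void menuIsDisplayedForEveryTenant() throws Exception {
        // One worker thread per tenant so the variant applications are
        // exercised concurrently, mirroring a multi-tenant test scenario.
        ExecutorService pool = Executors.newFixedThreadPool(TENANT_URLS.size());
        try {
            List<Future<Boolean>> results = new ArrayList<>();
            for (String url : TENANT_URLS) {
                results.add(pool.submit(() -> scenarioPasses(url)));
            }
            for (Future<Boolean> result : results) {
                assertTrue(result.get(), "Acceptance scenario failed for one of the tenants");
            }
        } finally {
            pool.shutdown();
        }
    }

    // One acceptance scenario, executed in its own browser session per tenant.
    private boolean scenarioPasses(String tenantUrl) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get(tenantUrl);
            // Hypothetical check: the tenant's customized menu is rendered.
            return !driver.findElements(By.cssSelector(".menu-item")).isEmpty();
        } finally {
            driver.quit();
        }
    }
}
```

In a setup like this, running the scenario against a single tenant URL versus the full list is what allows fault detection to be compared across different numbers of variant applications, which is the comparison the paper's case study makes.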
Pages: 47-56
Number of pages: 10