A Study of Performance Testing in Configurable Software Systems

DOI: 10.4236/jsea.2021.149028, pp. 474-492

Keywords: Configurable Software Systems, Performance Testing, Software Configuration, Performance Bug Study


Abstract:

Customizing applications through program configuration options has been proven by many open-source and commercial projects to be one of the best practices in software engineering. However, traditional performance testing is not in sync with this industrial practice: it treats program inputs as the only external factor and ignores the performance influence of configuration options. This study aims to stimulate research interest in performance testing in the context of configurable software systems by answering three research questions: why it is necessary to conduct research in performance testing, what the state-of-the-art techniques are, and how we conduct performance testing research in configurable software systems. In this study, we examine the unique characteristics and challenges of performance testing research in configurable software systems. We review and discuss research topics on performance bug studies, performance anti-patterns, program analysis, and performance testing. We share the research findings from the empirical study and outline open opportunities for new and advanced researchers to contribute to the research community.
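To make the contrast concrete, here is a minimal sketch in Python of configuration-aware performance testing. The option names and the `run_workload` function are hypothetical placeholders, not taken from the paper: instead of timing one default configuration, the test sweeps combinations of option values so that configuration-dependent slowdowns become visible.

```python
import itertools
import time

# Hypothetical configuration options; real configurable systems may
# expose hundreds of options with interacting performance effects.
OPTIONS = {
    "cache_enabled": [True, False],
    "worker_threads": [1, 4],
    "compression": ["none", "gzip"],
}

def run_workload(config):
    # Placeholder for the system under test. Here we merely simulate
    # work whose cost depends on the chosen configuration.
    n = 200_000 if config["cache_enabled"] else 800_000
    n *= 2 if config["compression"] == "gzip" else 1
    total = 0
    for i in range(n // config["worker_threads"]):
        total += i * i
    return total

def configuration_aware_perf_test():
    # Traditional performance testing would time run_workload under a
    # single default configuration; here we sweep the configuration
    # space and report the elapsed time for each combination.
    names = list(OPTIONS)
    for values in itertools.product(*OPTIONS.values()):
        config = dict(zip(names, values))
        start = time.perf_counter()
        run_workload(config)
        elapsed = time.perf_counter() - start
        print(f"{config} -> {elapsed:.3f}s")

if __name__ == "__main__":
    configuration_aware_perf_test()
```

Exhaustively enumerating all combinations is only feasible for a handful of options; the techniques surveyed in the paper (e.g., combinatorial testing and performance-influence modeling) address how to sample this space more economically.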

