Item

Robust statistical methods for empirical software engineering

Kitchenham, B
Madeyski, L
Budgen, D
Keung, J
Brereton, P
Charters, S
Gibbs, S
Pohthong, A
Date
2016-06-16
Type
Journal Article
Fields of Research
ANZSRC::080309 Software Engineering, ANZSRC::0104 Statistics, ANZSRC::010405 Statistical Theory, ANZSRC::010406 Stochastic Analysis and Modelling, ANZSRC::4612 Software engineering
Abstract
© 2016 The Author(s). There have been many changes in statistical theory in the past 30 years, including increased evidence that non-robust methods may fail to detect important results. The statistical advice available to software engineering researchers needs to be updated to address these issues. This paper aims both to explain the new results in the area of robust analysis methods and to provide a large-scale worked example of the new methods. We summarise the results of analyses of the Type 1 error efficiency and power of standard parametric and non-parametric statistical tests when applied to non-normal data sets. We identify parametric and non-parametric methods that are robust to non-normality. We present an analysis of a large-scale software engineering experiment to illustrate their use. We illustrate the use of kernel density plots, and parametric and non-parametric methods using four different software engineering data sets. We explain why the methods are necessary and the rationale for selecting a specific analysis. We suggest using kernel density plots rather than box plots to visualise data distributions. For parametric analysis, we recommend trimmed means, which can support reliable tests of the differences between the central location of two or more samples. When the distribution of the data differs among groups, or we have ordinal scale data, we recommend non-parametric methods such as Cliff's δ or a robust rank-based ANOVA-like method.
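As an illustration of the two statistics the abstract recommends, the following sketch computes a 20% trimmed mean and Cliff's δ for two small samples. This is not code from the paper; the data values and function names are invented for the example, and a 20% trim proportion is assumed.

```python
def trimmed_mean(values, prop=0.2):
    """Trimmed mean: discard the lowest and highest `prop` fraction of
    observations before averaging, limiting the influence of outliers."""
    ordered = sorted(values)
    k = int(len(ordered) * prop)  # observations cut from each tail
    kept = ordered[k:len(ordered) - k]
    return sum(kept) / len(kept)

def cliffs_delta(x, y):
    """Cliff's delta: P(x > y) - P(x < y) over all pairs, a
    non-parametric effect size in [-1, 1] suited to ordinal or
    non-normal data."""
    gt = sum(1 for xi in x for yi in y if xi > yi)
    lt = sum(1 for xi in x for yi in y if xi < yi)
    return (gt - lt) / (len(x) * len(y))

a = [12.0, 14.0, 15.0, 16.0, 18.0, 95.0]  # one extreme outlier
b = [10.0, 11.0, 12.0, 13.0, 14.0, 15.0]

print(trimmed_mean(a))   # 15.75 -- the outlier 95.0 is trimmed away
print(trimmed_mean(b))   # 12.5
print(cliffs_delta(a, b))
```

Note how the trimmed mean of `a` (15.75) stays close to the bulk of the data despite the outlier, whereas the ordinary mean (28.33) would not.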
Rights
© The Author(s) 2016. This article is published with open access at Springerlink.com