Quality of manual data collection in Java software: an empirical investigation

Cited: 4
Authors
Counsell, Steve [1 ]
Loizou, George
Najjar, Rajaa
Affiliations
[1] Brunel Univ, Sch Comp Informat Syst & Math, Uxbridge UB8 1PH, Middx, England
[2] Univ London Birkbeck Coll, Sch Comp Sci & Informat Syst, London WC1E 7HX, England
[3] Univ Cyprus, Dept Comp Sci, Nicosia, Cyprus
Keywords
data collection; Java; software metrics; empirical investigation
DOI
10.1007/s10664-006-9028-y
CLC classification number
TP31 [Computer Software]
Subject classification number
081202; 0835
Abstract
Data collection, both automatic and manual, lies at the heart of all empirical studies. The quality of data collected from software informs decisions on maintenance, testing and wider issues such as the need for system re-engineering. Of the two types, automatic data collection is preferable, but there are numerous occasions when manual data collection is unavoidable. Yet, very little evidence exists to assess the error-proneness of the latter. Herein, we investigate the extent to which manual data collection for Java software differed from its automatic counterpart for the same data. We investigate three hypotheses relating to the difference between automated and manual data collection. Five Java systems were used to support our investigation. Results showed that, as expected, manual data collection was error-prone, but nowhere near the extent we had initially envisaged. Key indicators of mistakes in manual data collection were found to be poor developer coding style, poor adherence to sound OO coding principles, and the existence of relatively large classes in some systems. Some interesting results were found relating to the collection of public class features and the types of error made during manual data collection. The study thus offers an insight into some of the typical problems associated with collecting data manually; more significantly, it highlights the effect that poorly written systems have on the quality of visually extracted data.
Pages: 275-293 (19 pages)