
Experiences in developing and applying a software engineering technology testbed


One requirement for a software engineering technology testbed is an experience base of prior experiences with each technology, both positive and negative, giving software engineers an indication of how well the technology worked on a representative software system. The experience base would record information such as, but not limited to, the technology's effectiveness at finding defects, the types of defects it found, the training time required to learn the technology, and a description of the technology. By analyzing this information, software engineers could gauge how well a technology is likely to work on their project and evaluate alternative software engineering technologies. A practitioner may not know whether two or more technologies are complementary; they may simply find the same set of defects. With an experience base, practitioners can make that determination. In addition, researchers who use the testbed to evaluate their technology could add their experiences and results to the experience base for practitioners to view.
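As a rough illustration of the kind of record the experience base might hold, the sketch below defines a hypothetical entry and a simple overlap test for deciding whether two technologies are complementary. The field names, the `complementary` helper, and the Jaccard-overlap threshold are all illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical experience-base record; field names are illustrative.
@dataclass
class Experience:
    technology: str
    description: str
    training_hours: float                 # time needed to learn the technology
    defects_found: set = field(default_factory=set)  # defect IDs found on the representative system

def complementary(a: Experience, b: Experience, max_overlap: float = 0.5) -> bool:
    """Treat two technologies as complementary when the defects they
    find overlap less than `max_overlap` (Jaccard similarity)."""
    union = a.defects_found | b.defects_found
    if not union:
        return False
    overlap = len(a.defects_found & b.defects_found) / len(union)
    return overlap < max_overlap

inspection = Experience("peer inspection", "manual code review", 8.0, {"D1", "D2", "D3"})
model_check = Experience("state-model checking", "automated analysis", 40.0, {"D3", "D4", "D5"})
print(complementary(inspection, model_check))  # overlap 1/5 = 0.2, so True
```

A practitioner comparing two candidate technologies could run this kind of check over the shared experience base before deciding whether adopting both is worthwhile.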

Another critical factor, according to Redwine and Riddle, is conceptual integrity. By using the software engineering technology testbed, researchers can demonstrate that a technology is well developed by applying it to a representative software system and finding the seeded defects. If the technology is unable to find the seeded defects, or significant additional defects, in the representative system, the researcher will need to mature the technology further before it is used by the technical community.
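The seeded-defect criterion above amounts to measuring recall against the known seed set. A minimal sketch, assuming defects are identified by simple IDs (the names and threshold here are illustrative, not from the paper):

```python
def seeded_defect_recall(seeded: set, found: set) -> float:
    """Fraction of the seeded defects that the technology actually detected.
    Defects the technology finds beyond the seed set do not count here,
    though they would still be reported separately."""
    return len(seeded & found) / len(seeded) if seeded else 0.0

seeded = {"S1", "S2", "S3", "S4"}
found = {"S1", "S3", "X9"}   # X9 is an additional, unseeded defect
print(round(seeded_defect_recall(seeded, found), 2))  # 0.5
```

A low recall on the seeded set would signal, in the spirit of the paragraph above, that the technology needs further maturation before transition to practice.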

Metadata
Document identifier
DOI 10.1007/s10664-008-9096-2
Date published
2008-11-11
Document type
technical white paper
Pages
23
File: usc-csse-2009-528.pdf
MIME type: application/pdf
Size: 442.74 KB
Language: English
Abstract

A major problem in empirical software engineering is determining or ensuring comparability across multiple sources of empirical data. This paper summarizes experiences in developing and applying a software engineering technology testbed (SETT). The testbed was designed to ensure comparability of the empirical data used to evaluate alternative software engineering technologies, and to accelerate technology maturation and transition into project use. The requirements for such testbeds include not only the specifications and code, but also the package of instrumentation, scenario drivers, seeded defects, experimentation guidelines, and comparative effort and defect data needed to facilitate technology evaluation experiments. The requirements and architecture for a particular software engineering technology testbed, built to help NASA evaluate its investments in software dependability research and technology, have been developed and applied to evaluate a wide range of technologies drawn from the fields of architecture, testing, state-model checking, and operational envelopes. This paper presents, for the first time, the requirements and architecture of the testbed. The results of the technology evaluations are analyzed from the point of view of how researchers benefited from using the SETT; in their original findings, the researchers had reported only how their technology performed. The testbed evaluation showed (1) that certain technologies were complementary and cost-effective to apply; (2) that the testbed was cost-effective for researchers to use within a well-specified domain of applicability; (3) that collaboration between researchers and practitioners in testbed use resulted in comparable empirical data and in actions to accelerate technology maturation and transition into project use, as shown in the AcmeStudio evaluation; and (4) that the testbed's requirements and architecture were suitable for evaluating technologies and accelerating their maturation and transition into project use.

Organisation(s)
Publisher
Springer Science + Business Media
Author(s)
Alexander Lam & Barry Boehm