SCENT: A Method Employing Scenarios to Systematically Derive Test Cases for System Test

Keywords: requirements, scenarios, system tests, test cases, UML, Use Case
Metadata
Document identifier: TR 2000/03
Date published: 2000
Language: English
Document type: Technical report
Pages: 116
File: TR2000_03.pdf (application/pdf, 382.6 KB, English)
Abstract

Scenarios (use cases) – descriptions of sequences of interactions between two partners, usually
between a system and its users – have attracted much attention and gained widespread use in requirements
and software engineering over the last couple of years. Many software development methods and
modeling languages include some notion of scenario, e.g. OOSE (Object-Oriented Software Engineering)
[Jacobson], OMT (Object Modeling Technique) [Rumbaugh], and, most notably, their successor, the UML
(Unified Modeling Language) [Booch, Rumbaugh, Jacobson].

In scenarios, the functionality and behavior of a (software) system are captured from a user-centered perspective.
To date, scenarios are mainly used in the requirements elicitation and analysis phase of software
development; they are used to capture and document requirements, to enhance communication
between the stakeholders (user, procurer, developer, management, …), and to involve the user more
actively in the specification and validation of the system.

Even though scenarios are mainly used in system analysis, their use in other phases of software
development is of much interest, as it could help cut costs through reuse and improved validation and
verification. Since scenarios constitute a kind of abstract test case for the system under development,
the idea of using them to derive test cases for system testing is quite intriguing. Yet in practice, scenarios
from the analysis phase are seldom used to create concrete system test cases.

In this report, we present a procedure to create scenarios during the analysis phase of system development,
to structure them according to a given scenario template, and to use these scenarios in the system
testing phase to systematically determine test cases. This is done by formalizing natural-language
scenarios as statecharts, annotating the statecharts with information useful for test case creation and
generation, and traversing paths in the statecharts to determine concrete test cases. Furthermore, dependencies
between scenarios are captured and modeled in so-called dependency charts, and additional test cases are
derived from these charts to enhance the developed test suites.
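
To make the path-traversal idea more concrete, the following is a minimal sketch that enumerates paths
through a small statechart-like transition graph, with each path becoming one candidate test case. The
states, guards, and data structures shown here are hypothetical illustrations and are not taken from the
report or from any SCENT tooling.

# Illustrative sketch only: path traversal over a simple transition graph.
# All state names and annotations below are made up for illustration.

from collections import defaultdict

# Transitions: (source state, event/annotation, target state). The annotation
# strings stand in for the test-relevant information (inputs, guards, expected
# results) that the method attaches to statecharts.
TRANSITIONS = [
    ("Idle",          "insert card [valid card]",        "CardInserted"),
    ("Idle",          "insert card [invalid card]",      "CardRejected"),
    ("CardInserted",  "enter PIN [correct PIN]",         "Authenticated"),
    ("CardInserted",  "enter PIN [wrong PIN]",           "CardRejected"),
    ("Authenticated", "withdraw [amount <= balance]",    "CashDispensed"),
    ("Authenticated", "withdraw [amount > balance]",     "CardRejected"),
]

START, FINALS = "Idle", {"CashDispensed", "CardRejected"}


def enumerate_paths(transitions, start, finals):
    """Depth-first traversal yielding every simple path from the start state
    to a final state; each yielded path is one candidate test case."""
    graph = defaultdict(list)
    for src, label, dst in transitions:
        graph[src].append((label, dst))

    def dfs(state, path, visited):
        if state in finals:
            yield list(path)
            return
        for label, nxt in graph[state]:
            if nxt in visited:          # keep the sketch simple: no cycles
                continue
            path.append((state, label, nxt))
            yield from dfs(nxt, path, visited | {nxt})
            path.pop()

    yield from dfs(start, [], {start})


if __name__ == "__main__":
    for i, path in enumerate(enumerate_paths(TRANSITIONS, START, FINALS), 1):
        steps = " -> ".join(f"{s} --{label}--> {d}" for s, label, d in path)
        print(f"Test case {i}: {steps}")

Running the sketch prints one line per start-to-final path (e.g. a successful withdrawal and the three
rejection paths), which mirrors how traversing annotated statechart paths yields concrete test cases.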

The approach has been applied in two industrial projects. In this technical report, we also report on
some of the experience gained in applying the approach in practice.
