1.2.1 Introduction to Complexity Metrics
Software complexity metrics are measurements used to characterize and quantify properties of a software system. Applied correctly, software complexity measurements can inform decisions that reduce development cost, increase reliability, and improve the overall quality of the software system. The study of software complexity began in the mid-1970s, when Halstead proposed measurements designed to determine various software quality characteristics. Since that time, researchers have proposed hundreds of measurements quantifying various aspects of software, ranging from relatively simple and comprehensible measurements (such as lines of code) to very abstract ones (such as entropy or hierarchical complexity). There is much debate over the relative value of one metric over another, and no standard set of metrics has achieved widespread use in industrial or research applications. It is therefore up to the user to decide which metrics to collect, which metrics are most applicable and useful at specific points in the software life cycle, and how to apply those metrics properly and appropriately.
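To make the Halstead family of measurements concrete, the following is a minimal sketch of how the classical Halstead measures are derived from operator and operand counts. The function name and the example counts are illustrative assumptions, not part of this handbook; the formulas (vocabulary n = n1 + n2, length N = N1 + N2, volume V = N log2 n, difficulty D = (n1/2)(N2/n2), effort E = D * V) follow Halstead's standard definitions.

```python
import math

def halstead_metrics(n1, n2, N1, N2):
    """Compute basic Halstead measures from token counts.

    n1 -- number of distinct operators
    n2 -- number of distinct operands
    N1 -- total occurrences of operators
    N2 -- total occurrences of operands
    """
    vocabulary = n1 + n2                    # n: size of the token vocabulary
    length = N1 + N2                        # N: total program length in tokens
    volume = length * math.log2(vocabulary) # V: information content in bits
    difficulty = (n1 / 2) * (N2 / n2)       # D: error-proneness proxy
    effort = difficulty * volume            # E: estimated mental effort
    return {
        "vocabulary": vocabulary,
        "length": length,
        "volume": volume,
        "difficulty": difficulty,
        "effort": effort,
    }

# Hypothetical counts for a small routine: 10 distinct operators,
# 7 distinct operands, 25 operator and 22 operand occurrences.
m = halstead_metrics(10, 7, 25, 22)
```

In practice the counts would come from a lexical scan of the source code; the delicate part of any Halstead analysis is the counting convention (what qualifies as an operator versus an operand), which varies between tools.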
The purpose of this handbook is to document the software analysis process as it is performed by the Analysis and Risk Assessment Branch of the Safety, Reliability, and Quality Assurance Office at the Johnson Space Center. The handbook also summarizes the tools and methodologies used to perform software analysis. It comprises two sections, covering software complexity and software reliability estimation and prediction, respectively. The body of this document delineates the background, theories, tools, and analysis procedures of these approaches.