When comparing expert-judgment versus parametric-model estimates, another
factor might cause parametric models to appear less accurate. The organization's counting rules for software size, cost, and schedule might differ from those used to calibrate the models. For size, this might involve differences between logical and physical lines of code, the counting of non-executable statements, and the counting of support software such as deliverable tools and test drivers. For cost and schedule, it might involve which phases are counted (programming versus the full development cycle), which work hours are counted (training, personal email, and overtime), and which activities are counted (configuration management, quality assurance, data preparation, and hardware and software integration).
File | MIME type | Size (KB) | Language
---|---|---|---
usc-csse-2009-527.pdf | application/pdf | 464.93 | English
Abstract
Which is better for estimating software project resources: formal models, as instantiated in estimation tools, or expert judgment? Two luminaries, Magne Jørgensen and Barry Boehm, debate this question here. Outside this article, they’re colleagues with a strong inclination to combine methods. But for this debate, they’re taking opposite sides and trying to help software project managers figure out when, and under what conditions, each method would be best.
While it might be less argumentative—and certainly less controversial—to agree that using both methods might be best, Magne is making a stronger point: perhaps our refinement of, and reliance on, the formal models we find in tools is wrongheaded, and we should allocate our scarce resources toward
judgment-based methods. (For a primer on both methods, see the sidebar.) — Stan Rifkin