User Reports


PC-METRIC — A Measuring Tool For Software

Larry Versaw


Larry Versaw is a systems engineer at Electronic Data Systems' Corporate Communications Division. His 1984 masters thesis was entitled Measuring the Size, Structure, and Complexity of Software. He may be contacted care of 5400 Legacy Drive, Plano, TX 75024.

Have you ever wanted to compare the complexity of two programs or to tell how long it took to develop them? Have you ever needed a precise measure of programmer productivity? No one yet can produce truly reliable answers to these questions, but researchers in the 1970s invented many software metrics and have since conducted hundreds of experiments to see what information could be derived by analyzing program source code. Some metrics purport to measure software complexity; others gauge program size or calculate how well structured a program is. The researchers developed many static code analyzers for use in their software metrics experiments, but few such tools were commercially marketed. PC-METRIC, developed by SET Laboratories, Inc., is one of the few stand-alone software metrics programs, if not the only one, sold today.

To evaluate PC-METRIC, I tried it out on 80 source files containing 25,000 lines of working C code. That exercise proved PC-METRIC to be a reliable product, an efficient program measurement tool that would be indispensable to anyone wishing to use software metrics in his work.

The Product

For this article I evaluated v1.1, then v2.3 of PC-METRIC. In addition to the C language versions which I examined, SET Laboratories has produced metrics programs for Ada, Assembler, COBOL, FORTRAN, Modula-2, and Pascal. Some languages are supported on systems other than MS-DOS.

PC-METRIC specializes in static code analysis; that is, it reports certain quantifiable attributes of program source code without executing it. These attributes include number of source lines, number of executable statements, and a dozen other quantities which are derived by counting certain program elements. Software metrics experiments have usually shown correlations between these kinds of metrics and actual, observed software management factors, such as programmer skill, number of remaining bugs, and actual programming effort.

PC-METRIC is based on the work of several of the pioneers in software metrics, notably Tom McCabe and Maurice Halstead. McCabe [McCabe 1976] proposed a measure of program control flow complexity based on a program's directed control flow graph. This metric, called cyclomatic complexity, may be calculated as one plus the number of branches (if statements, loops, alternatives in case statements) in a program. PC-METRIC reports two variants of cyclomatic complexity for each function it analyzes. McCabe's metric is widely accepted and intuitively satisfying as a complexity measure because it represents the amount of program logic that must be understood and retained to understand an algorithm.

One of the most imaginative and ingenious models of software, including software size, was developed by the late Professor Halstead [Halstead 1977]. Halstead's system, labeled software physics, is ultimately based on counts of operators and operands in program source code. Several of PC-METRIC's metrics, including length, estimated length, purity ratio, volume, the effort metric, estimated time to develop, and estimated errors, are implementations of Halstead's software physics formulae. Some have seriously questioned the theoretical basis underlying Halstead's model, and Halstead's attempt to bring theory from the realm of psychology to bear on software development has been widely discounted [Coulter 1981, Perlis 1981]. On the other hand, some rather impressive correlations have been observed between certain of Halstead's metrics and such management factors as code quality, programming time, and debugging effort [Gordon 1979, Curtis 1979, Funami 1976, Paige 1980].

If you are experienced with software metrics, you may find some of your favorite metrics missing from PC-METRIC's repertoire. However, PC-METRIC supports more measures of program size and complexity than are actually needed. Most size and complexity metrics are highly correlated with each other, so that beyond the first two or three, additional size and complexity metrics are redundant. In a study which analyzed great quantities of source code written by diverse programmers in C, Ada, PL/I, and Pascal, no statistically significant differences were found in the reliability of different size metrics [Versaw 1984]. They all measure the same attribute, after all. Variations in programming style notwithstanding, it is my belief that lines of code remains as good a measure of program size as any other measure we have today, and is almost as good a measure of complexity as any other. Research continues on the subject, but on a smaller scale than ten years ago.

Installing and using PC-METRIC is simplicity itself. A user must learn only one command, CMET, which runs interactively or in batch mode. PC-METRIC can be configured for different dialects of C by modifying a table of keywords and symbols stored in an ASCII text file.

As PC-METRIC analyzes source code, it produces two reports. The complexity analysis report lists metrics values calculated for each function, and the combined values for the entire module being considered. In the new version of PC-METRIC, SET has remedied the worst problem with version 1.1, which was its inability to analyze units of source code larger than one file. The second report file, called the exceptions report, highlights all measured values which lie outside of predetermined, user-defined limits.

Both the analysis report and exceptions report are output as ASCII files. In the current version of PC-METRIC, these reports are suitable for printing without any manual editing or reformatting. The current version of PC-METRIC provides a CONVERT utility which can convert the report data into a comma-delimited text file suitable for uploading into many spreadsheet or database packages. This is an especially valuable addition to the PC-METRIC package.

Program attributes which cannot be measured by simply counting certain operators and identifiers are beyond the scope of PC-METRIC, unfortunately. These would include attributes such as the degree of information hiding, module coupling, function binding, and efficiency. If we could only measure these attributes objectively and automatically, it would greatly enhance the practice of software engineering. Where PC-METRIC does excel is in reliably calculating the most common size and complexity metrics with a minimum of fuss at a reasonable speed (4000 lines per minute on a 10 MHz AT-type computer).

System Requirements

PC-METRIC requires far less memory and disk space than any C compiler would, so hardware requirements do not limit the use of PC-METRIC.

The Audience

PC-METRIC is intended primarily for two kinds of users. The first is software developers who would use a statistical analysis of their code as a help in identifying overly complex modules or functions. The PC-METRIC manual correctly identifies programmer feedback as an important application of PC-METRIC. The second kind of person who needs PC-METRIC is the manager or software project leader who would use software metrics as a tool to monitor programmer compliance with local standards of function size, module complexity, or other quantifiable program aspects.

Documentation

All bases are covered in PC-METRIC's three-part manual. Part 1 provides a well-written tutorial on the field of software metrics, concentrating on the specific metrics obtainable with PC-METRIC. It even includes a brief annotated bibliography of software metrics literature. Users with little prior exposure to the field of software metrics should be sure to read this part.

Part 2 describes how to install, configure, and use PC-METRIC. It also is well-organized and gives the right number of examples. PC-METRIC's counting strategy is documented toward the end of this section.

Part 3, "Applying PC-METRIC", instructs users on what to do with all those numbers PC-METRIC generates. It first documents the indispensable new CONVERT utility mentioned above. Then it explains ways to interpret the results: how to properly use software measures as a feedback tool or resource estimation tool, in practice.

Support

SET Labs offers technical support by telephone for their products and will answer general questions on software metrics. SET offers site licensing as well as individual licenses. If you have a particular machine or language for which you would like a version of PC-METRIC, SET Labs will usually do a port for the price of a single site license.

Conclusions

PC-METRIC is an indispensable tool, and perhaps the only tool in its class, for analyzing program size and complexity using the software metrics it provides. By cleaning up the reports and by providing the CONVERT utility, the new version of PC-METRIC has enhanced users' ability to analyze and apply program metrics.

PC-METRIC applies state-of-the-art methods for objectively measuring two basic attributes of program source code: size and complexity. The usefulness of these measures is variable, but not because of any deficiency in PC-METRIC itself. PC-METRIC, and counting programs in general, find their surest application in measuring adherence to a specific coding standard.

I recommend PC-METRIC for programmers and managers as a tool for monitoring adherence to their coding standard, which could, and probably should, include some complexity metrics. I recommend it also as a tool for identifying overly complex modules that need extra testing or rewriting. The list price is $199. You can contact SET Labs for more information at P.O. Box 86327, Portland, OR 97283, (503) 289-4758.

References

Coulter, Neal S., Applications of Psychology in Software Science, Proceedings of IEEE COMPSAC 81, (1981), 50-51.

Curtis, Bill; Sheppard, Sylvia; and Milliman, Phil, Third Time Charm: Stronger Prediction of Programmer Performance by Software Complexity Metrics, Proceedings of Fourth International Conference on Software Engineering, (1979), 356-360.

Funami, Y., and Halstead, M.H., A Software Physics Analysis of Akiyama's Debugging Data, Proceedings of the Symposium on Computer Software Models, (1976), 133-138.

Gordon, Ronald, A Quantitative Justification for a Measure of Program Clarity, IEEE Transactions on Software Engineering, IV (March 1979), 121-128.

Halstead, Maurice, Elements of Software Science, New York, Elsevier, 1977.

McCabe, T.J., A Complexity Measure, IEEE Transactions on Software Engineering, II (December 1976), 308-320.

Paige, M., A Metric for Software Test Planning, Proceedings of IEEE COMPSAC 80, (1980), 499-504.

Perlis, Alan J.; Sayward, Frederick G.; and Shaw, Mary, editors, Software Metrics: An Analysis and Evaluation, Cambridge, Massachusetts, MIT Press, 1981.

Versaw, Larry, A Tool for Measuring the Size, Structure and Complexity of Software, thesis, Denton, Texas, North Texas State University, 1984.