BENCHMARKING TURBO C AND QUICKC

Wrapping up the Comparison

Scott Robert Ladd

Scott Ladd is a full-time, free-lance computer journalist. You can reach him at 705 W Virginia, Gunnison, CO 81230.



This month Scott Ladd reports the benchmark results as part of a follow-up on the review he presented in the May issue: "QuickC versus Turbo C." In addition, the Doctor examines CompuView's "VEdit Plus," Magna Carta's "C Windows Toolkit," and Genus's "PCX Programming Toolkit."

This benchmark report is a follow-up to the comparative review of Microsoft's QuickC 2.0 and Borland's Turbo C 2.0 that appeared in the May issue. That article was a qualitative comparison. This one is quantitative and measures the performance of the two under similar conditions. As I've said before and I'll say again, no set of benchmarks can truly reveal how good a product is. The only purpose benchmarks serve is to give a general idea of how well a given compiler performs on certain types of code. (And speaking of code, because of space constraints, I'm not including the programs used to generate the results. If you'd like to see them, though, they'll be posted on DDJ's CompuServe forum and are available through the DDJ editorial offices.)

For this comparison, I used five tests whose results appear in Table 1. Dhrystone 2 is an updated version of the famous Dhrystone benchmark, which represents the statistical "average" program. Because Dhrystone comes in three files, it made an excellent test for the make facilities.

Table 1: QuickC versus Turbo C benchmark results

    Benchmark Test         Borland    Microsoft
                         Turbo C 2.0  QuickC 2.0
  ----------------------------------------------

  DHRYSTONE
    timings (seconds)
      compile                11.80         12.42
      link                    5.28          9.56
      run                    19.45         18.67
    Dhrystones/second        2,571         2,678
    .EXE size (bytes)        9,692        11,405

  DMATH
    timings (seconds)
      compile                 7.31          9.28
      link                    7.90         10.82
      run
         emulator           170.21        194.76
         coprocessor          5.44          5.38
    .EXE size (bytes)
      emulator              20,546        17,578
      coprocessor only      10,706         9,786

  FXREF
    timings (seconds)
      compile                 9.81         11.54
      link                    5.16          9.84
      overall run            31.47         26.90
    .EXE size (bytes)        8,710         9,461

  GRAPH
    timings (seconds)
      compile                 9.28         11.59
      link                   10.16         23.20
      emulator run          158.85         53.61
      coprocessor run       158.52         53.28
    .EXE size (bytes)
      emulator              38,720        43,312
      coprocessor           28,864        35,520

  GRIND
    timings (seconds)
      compile                 9.10         11.26
      link                    7.91         15.38
      run
           emulator          28.10         30.43
           coprocessor       17.74         15.00
    .EXE size (bytes)
      emulator              25,976        27,340
      coprocessor           16,136        19,548

FXREF is a filter. It reads a text file from standard input and builds a binary tree of the text tokens and their line-number references. Once the file has been read and displayed, a cross-reference is printed from the binary tree. I have used FXREF in the past as a useful benchmark of I/O and dynamic memory allocation speed.

Both compilers provide a graphics library, and thus was born the GRAPH benchmark. It is actually a simple, three-part test. Part one tests line-drawing speed, part two looks at the quickness of drawing ellipses, and part three times the rapidity of filling an irregular area with a solid color. Two separate programs were created, one for each compiler, due to the differences in initialization code and function names.

GRIND is a report program. It reads 1000 floating point numbers from disk and sorts them, then calculates a table of values, which it writes to disk.

Many floating point benchmarks test the speed of library functions more than the quality of floating point code generated. DMATH solves this problem by avoiding the use of any library functions. All variables in the program, including loop counters, are doubles. DMATH calculates the sines of all of the angles from 0 to 359 degrees, using a simple series.

The benchmarks were conducted on an Intel-motherboard 80386 machine running at 16 MHz. The computer was equipped with an 80387 coprocessor, 2 Mbytes of memory, and an EGA card and monitor. Timing was performed by a program that reprograms the interrupt timer for 1/100th-of-a-second accuracy, and each timed result is the average of five iterations. Tests were run with both the emulator and coprocessor floating-point packages whenever possible. All compile and link times are for the command-line versions; QuickC's incremental compilation and linking would speed it up immensely on developmental program builds. Compile speeds were tested with all debug options and warnings turned off.

The command-line options used for QuickC were -Ox -FPi. For Turbo C, the options were -Z -O -G -d. The -FPi87 flag was used with QuickC to generate the coprocessor tests, and the -f87 flag served the same purpose with Turbo C.

Some immediate trends can be seen in the benchmark table. The compile times for QuickC and Turbo C are very close, but the Turbo C linker runs far faster. It's important to note that these figures are for full compiles of every module; using QuickC's incremental compilation and linking features makes it considerably faster than Turbo C.

The most surprising run-time result is Turbo C's poor showing on the GRAPH test. The ellipse() and floodfill() functions take more than twice as long as their Microsoft counterparts. Borland claims that "professional programmers" use polyfill() -- which draws an automatically filled polygon -- rather than floodfill(), but I don't agree with that logic. Most filling is done in regions bounded by objects that may or may not be drawn in one piece.

Drawing conclusions from these tests can be risky. In general, Turbo C's compile/link cycle is shorter than QuickC's (although use of QuickC's incremental compile/link would probably reverse this trend), but QuickC programs tend to run a little faster. Nevertheless, the overall performance of the two C compilers in these benchmarks is too close to reveal a clear winner.

And that's good news for the vendors and -- even more important -- for us programmers. No matter which product you choose, you can be assured that it is of excellent quality.


Copyright © 1989, Dr. Dobb's Journal