Sir,

Any system claiming to be real-time must address several fundamental issues, including deadlines, schedulability, and priority inversion, none of which were addressed by Michel de Champlain in his article, "An Object-Based Real-Time Executive," which appeared in the December issue of your journal. Ignoring these issues fosters serious misconceptions about the nature of a real-time system.
The most fundamental property required of a real-time application is the ability to meet specified time deadlines. Deadlines might be specified in milliseconds or in years; the issue is therefore not one of speed, a common misconception. Only if each task can meet its deadlines can the system be considered real-time. Sadly, the article does not even mention the word "deadline" anywhere; the concept is not even considered.
The second issue is scheduling. The author says, "To meet performance requirements, some real-time applications use explicit control over the scheduling ..." And a little later, "When dealing with excessive loads, an application needs to control scheduling and dispatching..." In fact, scheduling tasks that have deadlines is a non-trivial problem, and the author is waving his hands in the air on the whole subject. If you cannot guarantee that your tasks will meet their deadlines, then you do not have a real-time system. At the least, there should be some mention of the better-known scheduling algorithms, in particular rate-monotonic scheduling, as well as earliest-deadline, least-laxity, and best-effort algorithms.
The concepts behind rate-monotonic scheduling (RMS) date back to 1973, when Liu and Layland proved a seminal result for scheduling periodic tasks when each task's deadline is the start of its next period. They showed that if priorities are assigned (statically) such that tasks with shorter periods have higher priorities, then there is a utilization bound below which schedulability is guaranteed. For n tasks, if

    C1/T1 + C2/T2 + ... + Cn/Tn <= n(2^(1/n) - 1)
where Ci is the computation time and Ti is the period of the i-th task, then the tasks are indeed schedulable with a preemptive scheduler. More recently, the formula has been modified to take into account blocking times due to interrupts and priority inversion. RMS is now a viable algorithm that should be available in real-time kernels.
Priority inversion is the third issue. The author discusses mutual exclusion and explains that when several tasks need to access the same resource, it is necessary to serialize the access. Unfortunately, in a real-time system, that is not a sufficient mechanism. Suppose there are three tasks with priorities low (L), medium (M), and high (H) respectively. L starts executing and locks a resource (R). Soon it is preempted by H, which starts running. H then needs the resource R, but since R is still locked by L, H must suspend itself. Now L continues running but is then preempted by M. Since M does not require the resource, it may continue to run as long as it needs. Thus we have a lower-priority task M preventing a higher-priority task H from running, hence the term "priority inversion". Because of this inversion, task H may miss its deadline, causing the system to fail. There are important protocols available to solve priority inversion, such as priority inheritance and priority ceiling, and the author does a disservice to the readers by not even mentioning the topic.
It is important that programmers involved in real-time programming understand the basic issues involved. I do not want a nuclear power plant control system that has been designed without considering (and accounting for) these issues. By the way, these same issues are also relevant in areas such as multimedia applications, a topic receiving much attention these days. An understanding of real-time can help dramatically in building such applications.
In summary, if you need to build a real-time system rather than a system with fast response times, you must consider the issues discussed above. Michel de Champlain's article describes an encapsulated multi-tasking executive useful for concurrent programming. It may be a fast executive, but it is not a real-time executive.
For those who are interested, the IEEE publishes the proceedings of the Real-Time Systems Symposium (RTSS), an annual conference dedicated to advancing the state of the art in real-time.
Yours sincerely,
David Jameson
36 Roseholm Place
Mt. Kisco, NY 10549
Member, 1990 RTSS Program Committee
E-Mail: dhj@rhun.watson.ibm.com

Mr. Jameson calls attention, quite rightly, to the importance of "hard real-time" constraints in many systems. He outlines the major topics quite well. Nevertheless, I feel he is a bit harsh on Mr. de Champlain's article for not stressing those issues as much as Mr. Jameson would like.
Many real-time systems have no hard real-time constraints. Put more accurately, any such constraints are so easy to meet that they require no significant design effort. The magnetron in my microwave oven may burn up under certain operating conditions. That's a hard real-time constraint. But the four-bit chip running the oven has oodles of spare capacity to avoid such conditions even if I permit a determined brat to pummel its keypad and generate nonsense input while the oven is running.
I developed a real-time seminar for Technology Exchange Co. (a division of Addison-Wesley) last year. I found that the hardest message to get across to experienced real-time programmers is that real-time comes in many flavors. Mr. Jameson recognizes that time scales can vary widely among real-time systems. I hope he can accept that system architectures can too. pjp
Dear CUJ:
For the past two years I have been trying to master the rudiments of the C language. In this time I have collected a fair library of books and periodicals covering basic, advanced, and graphics in C. I have spent many long hours studying and implementing code, but to my dismay and frustration I have not been able to write routines which will capture graphics displays and write them to disk, or read them from a disk and display them on the screen. I would appreciate any information you could give me on available books and any known shareware that contain detailed information on such routines. I am most interested in finding routines for IBM CGA, HERCULES, and VGA compatible systems.
I have enjoyed your magazine for over a year now and although many articles are a bit beyond me I find something useful in each issue. I have just spent the past week going through last year's issues to see if there was some small article which addressed saving graphics to disk. Unfortunately, I could not find any. Perhaps sometime in the future you could include a series of articles devoted to this subject. Thank you.
Sincerely,
E. J. McConnell
6466 Bisby Lake Ave.
San Diego, CA 99119

I have read the very thing you're looking for. Unfortunately, my library, including back issues of CUJ, is on the wrong side of the planet. Anybody? pjp
I checked our back issues and some of the library disks and didn't find much that directly addresses the question of saving images to disk. I did find a section in the book Graphics Programming in Turbo C 2.0.
If you just want to save an image and aren't particular about the file format used, this book will be helpful. If, however, you need to save the file in a particular graphics format (.TIF, .PCX, etc.), I'm afraid the book won't be much help. Anyone else know of a good source for graphics file format information?
You might be able to find some user-supported programs that manipulate specific graphics formats by contacting PC-SIG or one of the other large national groups designed for PC end-users. rlw
Dear Mr. Plauger,
I have never written to a magazine to comment on an article before, but Stuart Baird's article on "Using Large Arrays In Turbo C" was so incomplete and inaccurate that I felt I had to respond.
Contrary to the article, the following program will not compile in the small memory models. An error complaining about "Too much global data defined in file" will occur. If you place the two arrays (a and b) in separate modules, the linker will complain about DGROUP overflowing and won't link the program. Somehow, Mr. Baird claims he was able to compile this without any complaints from the compiler. I started using TC with version 2.0, and can't speak about earlier releases, but from what I've heard, this never worked (nor should it have).
Again, the following program fails to compile:
int a[20000];
int b[20000];

void main(void)
{
}

The following program will compile and work in the small memory models (with the exception of the tiny model), but this method of declaring objects was never mentioned in the article:
int far a[20000];
int far b[20000];

void main(void)
{
}

Contrary to the article, in the small memory models the following program will fail on the second allocation, as it should:
#include <stdio.h>
#include <stdlib.h>

void main(void)
{
    int *a, *b;

    if ((a = malloc(20000 * sizeof(int))) == NULL) {
        printf("Cannot allocate a\n");
        exit(1);
    }
    if ((b = malloc(20000 * sizeof(int))) == NULL) {
        printf("Cannot allocate b\n");
        exit(1);
    }
}

Making a or b far pointers instead of near is irrelevant, since malloc only returns near pointers in the small data models.

The discussion of wraparound with malloc was befuddling to me. If malloc allocates more memory than is available in the near heap (in the small data models), it's a bug. I've never seen it happen with any compiler, and certainly not one of Borland's.
Further, Mr. Baird states that there is no way of getting more than 64K of global data, regardless of the memory model. On the contrary, it is possible to get as much static data as you would like in all of the memory models (with the exception of the tiny model). However, you must tell the compiler that you want your data to be far.
The following code shows a good example of this:
int a[20000];     /* Places an array in DGROUP, where there is a 64K max */
int far b[20000]; /* Places an array outside of DGROUP, where the available
                     memory in the computer is the max */

The basic problem which Mr. Baird is commenting on is a bug in TC, which has been around for a while and which I believe Borland is getting ready to finally fix, since it now directly conflicts with the ANSI standard. This is a bug in the way arrays are accessed, not the way in which they are allocated.

What happens is that an intermediate calculation is done to determine the address of the element in question. This is done as a 16-bit operation, rather than a 32-bit operation. The result is that you cannot access more than 32K of an array using the subscripting method. This is a bug if Borland claims ANSI conformance. It is also in conflict with their documentation.
Another inaccuracy is Baird's statement about the huge model eliminating limitations on global data. It is true that you can allocate as much memory as you would like in a huge program, but unless you use the far data modifier, you are still limited to 64K per module.
Thank you for your time.
Very truly yours,
Todd Weiss
84 India Street
Second Floor
Brooklyn, NY 11222

Thanks for the clarifications. I have personally stumbled across more than one C implementation that goes nuts when you try to reserve too much static space or buy too much heap space. pjp
Dear Editors:
Thanks to Mark Nelson for an excellent article on the QuickSort (August 1990), but I would like to point out a bug in the implementation. In Listing 6 the following code appears inside the main loop of the sort:
while ( data[i] <= data[first] && i <= last )
    i++;
while ( data[j] >= data[first] && j > first )
    j--;

The problem occurs when last is pointing to the last element of the array (as it is the first time through the loop). Because the first clause of the while loop (data[i] <= data[first]) is evaluated before the second clause (i <= last), the last iteration of the loop will have i == last+1, which causes a reference to data[last+1]. This is beyond the bounds of the array.

One solution is to just reverse the order of these clauses in the while loop, although perhaps a cleaner solution is to just use a for loop:
for ( ; i <= last; i++ )
    if ( data[i] > data[first] )
        break;

The reason I showed the second while loop above is because it should really be rewritten in the same way. It won't ever cause a reference beyond the bounds of the array, but it does an extra data comparison every time it is executed. If you are using Mr. Nelson's QuickSort for sorting string arrays, this extra comparison can add up.

Thanks for an enlightening article, though!
Sincerely,
Kenneth E. Van Camp
P.O. Box 878
Marshalls Creek, PA 18335

Thank you. pjp
Dear Sirs:
Does anyone know of a version of Core Wars as described a couple of years ago by A. K. Dewdney in Scientific American? Preferably as C source, but any version for the PC would do. Many thanks in advance to anyone who might be able to help.
Barry Thomas
Managing Director
Thomas Technical Publications Ltd.
Copestake Cottage Pinfold Land
Bradley, ASHBOURNE Derbyshire
DE6 1PN

Sorry, I don't. Anybody? pjp
To the Editor:
First off, I have just sent in my subscription renewals to both The C Users Journal and TECH Specialist for another two years of each. I compliment you and your fine crew for producing two outstanding publications. I've often said (half seriously) that I'd probably take out lifetime subscriptions, were they available.
However, there remains one issue that causes me some grief. The issue has been brought up before by many others, but remains unresolved. That issue is the one of accessibility to the code listings.
Sure, the code is available on disk, but I submit that is not really good enough. Consider that the average reader, like myself for example, would really like to get all of the listings from every issue of both publications. Even with your new "disk subscription" service, the combined cost works out to $60 per year. By the time that is translated into Canadian dollars, it gets quite expensive. Add to that the inherent danger of mailing magnetic media (the disk I received with my original CUJ subscription was unreadable). And, if that's not enough, add to that the fact that you're limiting yourself to MS-DOS format disks, even though the magazines themselves cover a wider range of platforms.
There has been considerable discussion of the idea of you operating your own public BBS system. The concept has been mentioned several times, and your typical response is "we're thinking about it." I can understand your reluctance; running a public BBS is hard work, and quite expensive. You would have to have a multi-line system to be useful, and would likely require at least one full-time staff member to maintain it.
But I submit that there's a better way, and one that I haven't yet seen mentioned. It seems to me that the obvious solution would be to scrap the public BBS idea, and instead make use of one (or more) of the huge national networks. One rather obvious choice would be CompuServe. (Although my personal preference would be BIX, I can see how the fact that it's owned by that "other" publisher would cause you some concern). Such a network is probably already used by many of your subscribers, and so is quite convenient. Maintaining an account (or perhaps an entire forum) would require considerably less effort than a private BBS, and cost everybody less in the long run. Consider the advantages: The listings are immediately available to all, no need to wait for the mail. The file format limitations are removed, since these major networks can support many formats, like DOS, UNIX, etc. There is no need for you to spend vast amounts of money maintaining a private BBS; rather the overall cost is spread around to all of the users by way of their regular on-line charges. Other advantages include better use of E-mail, and the extra exposure your magazines get in the general electronic BBS community.
Frankly, I feel that this approach would be advantageous to you, and to your readers. I urge you to give it serious consideration, and I look forward to finally being able to make use of some of the splendid sample code that appears in your publications each month.
Sincerely
Brad P. Smith
R.R. #2, Oxford Mills
Ontario, Canada, K0G 1S0

I defer to Robert Ward on such administrative topics, but I also agree that code availability is a perennial problem. pjp
Dear Mr. Ward:
I am sorry to say that I have noticed a definite trend towards academia in your editorial style in the past year. I have been receiving your publication for almost ten years, and the publication has indeed been through substantial change. But, until recently, it could be counted on to supply practical solutions to real-world problems, and I found it to be very useful in my professional life.
Endless debate on computer theory is typical of academics, and it is the very reason I abandoned a wasteful membership in the ACM (Association for Computing Machinery). This, unfortunately, appears to be the trend in The C Users Journal. Don't forget where your roots are (circa 1980 or so), because that's where your loyal readership of today came from.
Perhaps if you de-emphasized credentials and emphasized wisdom combined with experience, you might find editors and contributors that could provide a more useful service to your readership.
If you have consciously chosen the current trend (as I perceive it) for your periodical, I won't bother renewing my subscription for the eleventh year.
Sincerely,
Todd Merriman
Software Toolz, Inc.
BallGround, GA 30107-9610

Golly. If you read my editorial in the April 1991 issue, you'd find a polemic against academic articles and a plea for more of a practical nature. I respect the fact that you can see things differently. I only wish you had given some concrete examples of places where we debate computer theory endlessly or where we emphasize credentials over wisdom. (The worst offender in both regards is probably my Standard C column, I suppose.) Sorry if we lose you as a subscriber. pjp