We Have Mail


Letters to the editor may be sent via email to cujed@cmp.com, or via the postal service to Letters to the Editor, C/C++ Users Journal, 1601 W. 23rd St., Ste 200, Lawrence, KS 66046-2700.


The article "C++ Expression Templates" by Angelika Langer and Klaus Kreft in the March 2003 issue of C/C++ Users Journal uses an integral as its main example. However, the integral:

Integral (1.0,5.0) (x*dx/(1+x))
has a closed-form solution based on the indefinite integral:

Integral (x*dx/(a+b*x)) =
  1/(b*b) *(a + b*x - a * log(a + b*x))
as many tables of integrals will show. Here, a = 1.0 and b = 1.0. Thus, the integral chosen for this article wasn't the best one to use. Also, the numerical method used to evaluate it wasn't a very good one. Many books on numerical analysis suggest better methods, and most numerical libraries already include them. The ideas in the article may be useful, just not as much in this context.

Mike Frisch

From a strict numerical analysis point of view, you make very good points. For pedagogical and editorial reasons, however, the authors were illustrating the metaprogramming technique in the simplest way possible, so that the widest audience could grasp the topic. We simply did not have room to motivate the more sophisticated solution. Thank you for writing. --ca


Am I missing something here? I read Miro Samek's article in the March 2003 issue of CUJ, "Quantum Programming for Embedded Systems: Toward a Hassle-Free Multithreading." It looks to me like QP solved nothing, but increased the complexity of the programming.

For the Dining Philosophers Problem, the traditional solution that I am familiar with requires that a hungry philosopher pick up a fork. If the second fork is taken, he must put down the first fork and wait a random time before trying again. Samek refers to this solution. It solves the deadlock problem.

Samek instead proposes a central server for forks. That works fine for a very small problem such as the philosophers. But it becomes unwieldy if you consider that all lockable resources on the computer must be served from the same server. Otherwise you can still deadlock trying to acquire resources from multiple servers. That makes for a mighty complex server. Remember that the server needs to know all of the possible deadlock situations so that it can avoid them.

But wait, it gets worse. Suppose you are on a network and will acquire resources from multiple computers; they must all be served from a single uber-server. I can see it now; I cannot print to my own printer unless I get a resource lock from Redmond. And each time I do, I send a penny to Redmond.

Thanks anyway. I'll stay with the traditional solution.

Allen Brown

Hi Allen,

Thank you for your letter. I believe that you raise important concerns (alongside some common misunderstandings), which all indicate that my article was perhaps not clear enough.

The traditional solution to the DPP (Dining Philosophers Problem) that requires a philosopher to immediately put down the first fork if the second is unavailable, wait a random time, and try again is just one of many. Such a course of action is not required in the original problem statement, as posed by Dijkstra in "Hierarchical Ordering of Sequential Processes" (reference [7] in the article).

More importantly, though, your greatest concern seems to be about the strategy of encapsulating shared resources in dedicated active objects. This strategy leads to inventing the Table active object that effectively becomes the central server for the forks. You observe that this strategy scales poorly and that the active object solution can still be subject to deadlock if the shared resources are served from separate sources. This is an important point. The article didn't make it clear enough that the main challenge of the active object-computing model is to maintain the system-wide consistency of state among active objects. Indeed, inconsistencies can lead to system failures similar to race conditions or deadlocks in traditional multithreading.

For example, the programmer must realize that a consistent state configuration in the DPP doesn't allow two adjacent philosophers to be in the "eating" state at the same time; otherwise, the forks would be over-allocated. Obviously, such a constraint can be realized in many ways within the active object paradigm, not necessarily with the central server for the forks. For example, adjacent Philosopher active objects can resolve contentions over the forks (most likely at the cost of increased communication) without using a central arbiter. The main point here is that a central resource server is not the imperative of active-object computing -- maintaining system-wide state consistency is. Unless the consistency of state of your network depends on it, you don't need to consult Redmond to print a document to your network printer.

Having said that, however, I must also say that the resource encapsulation strategy typically yields minimal communication and often offers other benefits. For instance, the dining philosopher application must report its status to the screen. Placing the screen output in the Philosopher active objects, however, can lead to either scrambled output (if the screen routines are not reentrant) or unexpected blocking and potentially priority inversions (if the screen routines are protected by a mutex or are synchronized Java-style). The Table active object turns out to be the ideal place for outputting the application status to the screen. (The article's accompanying code illustrates this aspect.) So, even if you didn't invent the Table for coordinating philosophers, you should invent it for the screen output (at least in a truly real-time application). This is just one example of how the simple rule of encapsulating shared resources (such as the forks or the screen) in dedicated active objects, or more generally, not sharing any resources directly, automatically leads to safer solutions and avoids whole classes of nasty problems.

Incidentally, most of the traditional DPP solutions also employ some form of shared facility such as an array of semaphores through which resource allocation is to be managed. One notable exception is a fully distributed solution described by C.A.R. Hoare in his landmark paper "Communicating Sequential Processes" (Communications of the ACM, August 1978, pp. 666-677). In addition to each philosopher being a process, now we have each fork being a process. Interestingly, even this solution contains a central process called ROOM.

Which leads us to the final point of using active objects for distributed computing. Although beyond the scope of the article, it's clear that active objects that share no resources and communicate only in a "thin wire" style are ideal for building highly parallel distributed applications. This is yet another aspect in which active object computing shows more flexibility than the traditional approaches to concurrency. However, as Bran Selic describes eloquently in the article "Distributed Software Design: Challenges and Solutions" (Embedded Systems Programming, November 2000, pp. 127-144), distributed computing brings its own unique challenges. For example, it has been formally proven that it is not possible to guarantee that two or more distributed sites will reach agreement (consistency in state) in finite time over a lossy communication medium. So, even though extending an active object-based framework (such as QF) to exchange events over networks is easy, building truly robust distributed applications is not.

Sincerely,
Miro Samek