The (B)Leading Edge: Well Principled (and Practiced) Software Development

Jack W. Reeves


This month's column is more about philosophy and less about coding. Nevertheless, it may be even more about the bleeding edge of software development than many of the other things I have written. This column is inspired by the imminent arrival of Robert Martin's latest book, Agile Software Development: Principles, Patterns, and Practices (Prentice Hall, 2002). I have been waiting for this book for almost a year. I hope it sells like hotcakes. Unfortunately, I doubt that it will do that well. Oh, I am sure it will do just fine, but I fear that many of the people who should be reading this book probably will not.

I am looking forward to reading the book in its final form, and I really expect to learn something in the process, but frankly, I have seen most of the topics in the table of contents before in other books and magazines on my shelves. But that is OK. Finally, someone has collected in one place some of the most important information in our entire industry.

One of my growing frustrations over the last few years has been a continuing stream of encounters with people and projects that — to put it politely — do not seem to know what they claim to know. I have worked on projects that mandated the use of UML to document designs, and everybody did that. Unfortunately, the diagrams they produced did not seem to mean the same thing to those of us who had learned the technique elsewhere. I have worked on projects where people claimed to know software design patterns, but the only thing they seemed to do with them was to drop names at meetings, apparently to impress others with their expertise. I have worked on projects literally beyond counting that claimed to be doing object-oriented development, but one look at most of the code quickly revealed that the people did not have the first clue about basic principles of object-oriented design. They seemed to feel that inheritance was basically a code reuse mechanism.

Finally, I have even worked on a project where certain parties claimed they were practicing XP (eXtreme Programming). About the kindest thing I could say is that they seemed to be somewhat misguided in their enthusiasm. Less kind souls tended to remark about how they used XP as an excuse to get away with things that were specifically forbidden by more traditional processes — like making changes in production systems without going through the official change control process.

The problem is that most of these people were pretty good programmers. Of course there were a few total incompetents in the batch, but quite a few of these people were considered top-notch developers by their peers and their bosses. And they were — by the simple criterion of being able to deliver working code.

So, "What is the problem?" you might ask. Isn't working code the only real litmus test in software development? Isn't that what XP and the other Agile Processes teach? In fact, isn't that what I argued 10 years ago in my article "What is Software Design?" (reprinted as an appendix in Agile Software Development: Principles, Patterns, and Practices)? In one sense, the answer has to be "yes." Certainly that was the only thing that mattered to their managers. Unfortunately, I find that answer pretty sad. It is like saying that the only criterion by which to judge a novel is whether it is readable English.

If these people were already good, how much better could they have possibly been had they been more knowledgeable about the underlying principles of their craft — the principles that are specifically discussed in Agile Software Development: Principles, Patterns, and Practices? I have to use the conditional here because there is no guarantee that they would have learned anything had they been exposed to the information. After all, most of the basic principles that Bob describes have been known since object-oriented software first came into vogue. In fact, he presented many of them himself in his earlier book Designing Object Oriented C++ Applications Using the Booch Method (Prentice Hall, 1995). Likewise, while the pattern information is somewhat newer, most of it has been available since 1995 when the GoF Design Patterns book (Addison-Wesley, 1995) first appeared. Only the Agile development practices can be considered as really new. Even with Agile, if you haven't heard of XP by now, you must have been programming at some very isolated skunk works.

Nevertheless, while all of this information has been available for those software professionals who were willing to investigate it, this is the first time it has all been brought together in such an accessible form. So I will give people the benefit of the doubt and assume that their ignorance was simply lack of exposure to the material. But this leads to another aspect. If it is sad that otherwise good people were performing well below their potential, how much sadder is it for the entire organization?

Every one of the projects that I mentioned above was late. Most were also over budget. Very few of them ever delivered anything, and even when they did I do not believe any of them lasted a full year before they were replaced by something else. If it is sad when individuals could do much better, it can be an economic catastrophe for an organization. Under the circumstances, I have to wonder why it is that not one of the places I have ever worked has had any policy whatsoever about ensuring their people had anything beyond minimum skills. They have often insisted upon university degrees, and some minimum number of years of experience, but in this business those things do not even come close to guaranteeing that people have the background that I think they should. In a couple of places where I have worked, the team itself tried to ensure that new candidates had some of these skills, but time and again the process would be short-circuited by the simple fact that none of the candidates would meet the criteria and management would insist that somebody had to be hired.

Finally, this issue is sad for our entire industry. Real engineers (for that matter, most of the rest of the public) see software development as an undisciplined cottage industry that hasn't grown up yet. And as long as the vast majority of practicing software developers do not have a clue about the basic principles that underlie the techniques that they claim to be using, then the public's perception is pretty much a correct one.

I am not going to give you a book review; I simply recommend that you get a copy of the book. Instead, I am going to give you an example from one of my projects and relate it to one of the chapters in the book.

Consider the following line of code. This is a real line of code taken from one of the projects that I have worked on. While I have simplified it slightly, I have not altered the essence at all.

new LoggerLib ("MyApp", NULL, "Logfile");

What does this line of code say to you? I will cut straight to the chase — if you said "that is obviously the initialization of a Singleton object," then you are a much more perceptive developer than I. When I first saw it, I said "that's a memory leak."

Obviously that did not make sense, so I looked at the header for LoggerLib. I found a class that had two constructors, this one and the default constructor, and a virtual destructor. All of the other public methods of the class were declared static. When I looked at the constructor, I discovered that it initialized a static instance variable with the value this. All the static methods checked to make sure that the instance had been initialized; otherwise they threw an exception. Again, this did not make a lot of sense to me. Why create an object that nobody can use directly? Why not just have a LoggerLib::Initialize function that has to be called instead of a constructor? Why have a virtual destructor?

Why have a public destructor for that matter? While the library provides a function GetLib to return the internal instance pointer, the only reason I could see for deleting the object would be when you needed to reinitialize it. Why not just have a function that does that? It gets more interesting.
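To make the oddity concrete, here is a minimal sketch of the class as I understood it. Everything beyond the names LoggerLib and GetLib is my own reconstruction — the member names are hypothetical, and recording the message in a string stands in for the call into the real C library:

```cpp
#include <stdexcept>
#include <string>

// Sketch of the wrapper roughly as I found it; member names and the
// message-recording stand-in are my reconstruction, not the real code.
class LoggerLib {
public:
    LoggerLib() = default;                      // the default constructor does nothing
    LoggerLib(const char* app, const char* opts, const char* file) {
        (void)app; (void)opts; (void)file;      // the real code passes these on
        instance_ = this;                       // constructor records itself...
        // ...and the real constructor then calls loggerInitialize in the C library
    }
    virtual ~LoggerLib() = default;             // public virtual destructor -- why?

    // Everything useful is static, and guarded by the instance check.
    static void Log(const std::string& msg) {
        if (instance_ == nullptr)
            throw std::runtime_error("LoggerLib not initialized");
        lastMessage_ = msg;                     // stand-in for the C library call
    }
    static LoggerLib* GetLib() { return instance_; }
    static const std::string& LastMessage() { return lastMessage_; }

private:
    static LoggerLib* instance_;
    static std::string lastMessage_;
};

LoggerLib* LoggerLib::instance_ = nullptr;
std::string LoggerLib::lastMessage_;
```

Note the shape: a constructor whose only visible job is a side effect on static state, plus an interface that is entirely static anyway.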

The constructor really does not do anything, nor do most of the functions. The real work is done by an underlying C library. For example, the constructor just initializes the object and then calls the loggerInitialize function. When I looked at the header file for logger, I discovered that it declared a structure LoggerContext that was much more extensive than the LoggerLib object. One thing confounded me, however: nowhere in logger.h was any instance of LoggerContext used. Again, when I looked at the implementation file for logger, I found that there was a static instance of LoggerContext declared in the file, and that was the only instance that the library functions used. Finally, just to make things interesting, the implementation of the logger library is a C++ file. Apparently, at some point it was converted to C++ to allow the implementation to take advantage of some existing class libraries. So you have a library written in C++ that exposes a C interface. The data structure manipulated by the library is declared in the header file even though it is only used in the implementation file. This library is in turn wrapped by a C++ interface, which is based on an oddball version of the Singleton pattern.

Just when I thought I had this pretty much figured out, I made one other discovery. In my own program, I tried to initialize another library (one with a more conventional static Initialize function) only to get an exception. I discovered that said library expected to be able to use the LoggerLib functions to log information during initialization. Naturally, I had to initialize LoggerLib first.

Again, you may be thinking "so what." After all, initialization is something that only has to be done once in a program. Maybe it did take me a little while to figure it out, but it wasn't that hard. And maybe I did make a mistake or two when I tried to use the library the first time myself, but that just ensured that I did not forget what I had figured out. If I didn't know anything about the Singleton pattern, or its sibling the Monostate pattern, then I probably would agree with you, although frankly I think that this mixing of an object paradigm (a constructor) with a functional paradigm (static member functions) would still have bothered me.

The problem is: I do know the Singleton and Monostate patterns. They have become idioms that I have internalized in my own approach to design. This internalization helps me to be a better designer. Since I do not have to stop and think about how to solve this type of problem every time I encounter it, I can concentrate more of my limited mental faculties on solving the problems that I haven't encountered before. Unfortunately, that means that when I come across something like this, something that doesn't follow the pattern but seems different for no good reason, it bothers me out of all proportion to its actual differences.

You may wonder what I would have preferred. Since I am a big fan of the Monostate pattern, what I probably would have done is have a static initialization function for the Logger library and then have the rest of the library interface follow the Monostate pattern. It is possible to have a constructor of the Monostate object do the initialization (or one of the instance functions for the Singleton pattern), and some people prefer to do it that way, but as a rule I prefer to do one-time initializations via a static function and just provide a default constructor for the object. I would also have had the initialize function for the second library take a Logger object reference as one of its arguments. This might seem silly with a Monostate object, but the point would be to make it explicitly clear that the second library required an initialized Logger library before it could itself be initialized.
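A rough sketch of that alternative follows. All of the names here (Logger, Initialize, OtherLib) are hypothetical, and storing the last message stands in for real log output:

```cpp
#include <stdexcept>
#include <string>

// Monostate: a normal-looking class whose state is entirely static, so
// every instance behaves as the same logical object.
class Logger {
public:
    Logger() = default;                        // trivial; no hidden side effects

    // One-time initialization goes through a static function, not a constructor.
    static void Initialize(const std::string& app, const std::string& file) {
        appName_  = app;
        fileName_ = file;
        initialized_ = true;
    }

    void Log(const std::string& msg) const {   // instance call, shared state
        if (!initialized_)
            throw std::runtime_error("Logger not initialized");
        lastMessage_ = msg;                    // stand-in for real output
    }

    static const std::string& LastMessage() { return lastMessage_; }

private:
    static bool initialized_;
    static std::string appName_, fileName_, lastMessage_;
};

bool Logger::initialized_ = false;
std::string Logger::appName_, Logger::fileName_, Logger::lastMessage_;

// Hypothetical second library: taking a Logger& makes the dependency
// explicit, even though with a Monostate any instance would do.
class OtherLib {
public:
    static void Initialize(Logger& log) {
        log.Log("OtherLib initializing");      // fails loudly if Logger was skipped
    }
};
```

With this shape, a caller cannot even write the call to OtherLib::Initialize without having a Logger in hand, which documents the ordering constraint in the signature rather than in a runtime surprise.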

As you probably suspected, Agile Software Development: Principles, Patterns, and Practices contains a chapter on the Singleton and the Monostate patterns. The chapter presents the two patterns and compares and contrasts the advantages and disadvantages of each. It is not as if there is only one "right" way to solve any given problem, but the whole idea behind patterns is that they form a catalog of common ways to solve common problems. Patterns are not reusable code, so every use has to be tailored to the problem at hand, but every pattern has certain elements that are its distinguishing characteristics. Deliberately following a pattern means that readers who understand the pattern can quickly grasp what is intended. On the other hand, the more a solution differs from the common understanding, and the more it violates those vital characteristics, the less likely it is that people will be able to apply their knowledge of the underlying pattern to understand the current situation. Looking like you are following a pattern, or saying you are, when you really are not actually increases the effort it takes to understand something.
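For contrast, the distinguishing characteristics of a conventional Singleton fit in a few lines. This sketch, using a hypothetical Config class, is one common C++ rendering (cf. the GoF book), not the only "right" one:

```cpp
// Canonical Singleton shape: the class itself guarantees exactly one
// instance and provides the single global point of access to it.
class Config {
public:
    static Config& Instance() {
        static Config theInstance;        // created on first use, exactly once
        return theInstance;
    }
    Config(const Config&) = delete;               // no copies
    Config& operator=(const Config&) = delete;    // no assignment

    int value = 0;                        // some piece of shared state

private:
    Config() = default;                   // private ctor: clients cannot 'new' one
};
```

A reader who knows the pattern sees the private constructor and the static Instance function and immediately understands the intent — which is exactly what the LoggerLib wrapper's public constructors and public destructor failed to convey.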

And that is the real problem. Taken in isolation, things like this are no big deal. But this is not an isolated incident. In fact, every day it seems like I encounter something similar. In many cases, I haven't been allowed to make any changes either. Many times there are a lot of projects that already use a library, and "fixing" them would break many other things. Additionally, often nobody else seems to feel there is anything that needs to be fixed, so I know management would consider it a waste of my (supposedly) valuable time to be making "unnecessary" changes ("it ain't broke, so..."). Finally, my own sense of priorities tells me I am better off concentrating my efforts elsewhere.

Nevertheless, being forced to use libraries that don't meet my own standards of quality or "good design" significantly lowers my own feelings about the quality of the work I am doing. Eventually it gets harder and harder to work up enthusiasm about creating the highest quality software that I can when my gut tells me that it could be so much better. And that means that after a while developing software in such an environment is simply not fun anymore.

So that is why I hope that Agile Software Development: Principles, Patterns, and Practices sells extremely well. I have said before: most of us tend to enjoy activities more when we feel like we are doing them well. We also tend to enjoy them more when we figure that the other people we are doing them with are equally good, or maybe even a little better than ourselves. This book has the potential to make software development a whole lot more enjoyable for a lot of people. In the process, it might also significantly increase the quality of the software that actually gets produced.

About the Author

Jack W. Reeves is an engineer and consultant specializing in object-oriented software design and implementation. His background includes Space Shuttle simulators, military CCCI systems, medical imaging systems, financial data systems, and numerous middleware and low-level libraries. He currently is living and working in Europe and can be contacted via jack_reeves@bleading-edge.com.