Dr. Dobb's Journal December, 2004
Welcome to the December installment of this column. In this chilly or toasty month, depending on your particular hemisphere of choice, we venture into wireless futures, walk on fractal shores, wonder what service is really being provided by web services, and outsource everything in sight.
It is a common but still fascinating observation that many natural forms can be simulated very neatly by fractal models. It seems as though there is something inherently fractal about nature's algorithms, revealing itself in such diverse natural phenomena as the branching of trees and lungs, the spiraling of seashells and sunflower florets, the silhouettes of mountains and the shape of coastlines, and the distribution of stripes on zebras and craters on the moon.
This fact has proven to be very useful and several fortunes have been made exploiting it. Richard Voss began creating fractal mountains in the 1970s, Shaun Lovejoy pioneered modeling clouds with fractals, and soon Loren Carpenter was dividing and subdividing triangles in a way that was put to good use at LucasFilm and Pixar and elsewhere in creating computer-generated mountains and clouds and coastlines and forests and alien landscapes and Genesis planets and Perfect Storms and other effects.
What is not so clear is just why any of this should work. Why should this particular mathematical trick, devised in the 1970s by IBM Research Fellow Benoit Mandelbrot, who used it to produce bizarre graphical monstrosities of fractional dimensionality, be so good at predicting the growth and shape of plain old two- and three-dimensional natural forms?
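The "fractional dimensionality" in question can be made concrete with one line of arithmetic: a self-similar shape built from N copies of itself, each scaled down by a factor of s, has similarity dimension log(N)/log(s). A minimal sketch (the function name is mine):

```python
import math

# A self-similar fractal made of N copies, each scaled by 1/s, has
# similarity dimension D = log(N) / log(s).  The Koch curve, for example,
# replaces each segment with 4 segments one-third as long.
def similarity_dimension(copies, scale):
    return math.log(copies) / math.log(scale)

print(round(similarity_dimension(4, 3), 4))  # Koch curve: 1.2619
print(similarity_dimension(2, 2))            # an ordinary line segment: 1.0
```

A dimension of about 1.26 means the Koch curve is "more than a line, less than a plane," which is exactly the in-between character of Mandelbrot's monstrosities.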
It's a reasonable assumption that for each such fractal-like natural phenomenon, some truly or nearly fractal process is at work in nature, generating the phenomenon. But merely assuming this doesn't tell you what the actual process is. You can generate wonderfully realistic shrubs and trees with very simple rewriting systems such as L-systems, which can be implemented graphically with brute-simple turtle graphics code, but it is not plausible that shrubs and trees actually get to look the way they do by some analog of a turtle staggering around in space, leaving behind a turtletrack of bark and xylem and phloem.
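To see just how brute-simple that turtle machinery is, here is a self-contained sketch: a few rewriting passes over a string, then a turtle walk that turns the string into line segments. The grammar is a generic bushy-plant rule set of the sort found in the L-system literature, not any particular published system:

```python
import math

# One L-system rewriting pass per iteration: replace each symbol by its
# production (symbols with no rule are copied unchanged).
def rewrite(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Interpret the string with a "turtle": F draws a segment, + and - turn,
# [ pushes the turtle's state, ] pops it (that's the branching).
def turtle_segments(s, angle_deg=25.0, step=1.0):
    x, y, heading = 0.0, 0.0, 90.0   # start at the origin, pointing up
    stack, segments = [], []
    for c in s:
        if c == "F":
            nx = x + step * math.cos(math.radians(heading))
            ny = y + step * math.sin(math.radians(heading))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif c == "+":
            heading += angle_deg
        elif c == "-":
            heading -= angle_deg
        elif c == "[":
            stack.append((x, y, heading))
        elif c == "]":
            x, y, heading = stack.pop()
    return segments

rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
plant = rewrite("X", rules, 4)
segs = turtle_segments(plant)
print(len(segs))  # number of line segments in the rendered shrub
```

Feed the segments to any plotting library and a convincingly scraggly shrub appears, which is the point: the realism comes from the recursion in the rules, not from any model of how plants actually grow.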
Actually, there's a "fractal" model for trees that predates Mandelbrot by five centuries. Leonardo da Vinci once turned his awesome mind to the matter of the branching structure of trees. He opined that, in order for the sap to flow smoothly through the trunk and branches of a tree, the cross-sectional area of the trunk should be repeated in the sum of the cross-sectional areas of the branches at each level of branching. The recursive flavor of Leonardo's prediction shows the kind of self-similarity that characterizes fractal processes. Recursion does seem to be the key to fractalness: The rule itself has to be passed along to the next stage in development. But although it is suggestive, da Vinci's model doesn't specify a true fractal theory of tree growth.
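Leonardo's rule is easy to state in code: if the total cross-sectional area is conserved across a split, then a trunk of radius r dividing into n equal branches gives each branch radius r/√n. A quick sketch (the function names are mine):

```python
import math

# Leonardo's rule: cross-sectional area is conserved at each branching,
# so pi*r^2 == n * pi*r_child^2, which gives r_child = r / sqrt(n).
def child_radius(parent_radius, n_branches=2):
    return parent_radius / math.sqrt(n_branches)

# Follow one path down four levels of binary branching from a trunk of radius 1.0:
r = 1.0
radii = [r]
for _ in range(4):
    r = child_radius(r, 2)
    radii.append(r)
print([round(v, 3) for v in radii])  # [1.0, 0.707, 0.5, 0.354, 0.25]
```

The same rule applied at every level is what gives the prediction its recursive, self-similar flavor: each generation of branches hands the rule down unchanged to the next.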
In fact, Mandelbrot came up with a more convincing argument for the fractal development of another natural branching structure: lungs. "The growth starts with a bud," he wrote, "which grows into a pipe, which forms two buds, each of which behaves as above." It's that last phrase, "each of which behaves as above," that carries all the recursive weight, and makes the fractal self-similarity possible. It also suggests where the recursion in at least biological natural processes comes from: DNA is all about a biological system passing on the rule for its own generation.
DNA doesn't have anything to do with the distribution of craters on the moon, though, at least that doesn't seem likely, nor with the shapes of mountains or coastlines. But in the case of coastlines, there is now a plausible theory of the physical mechanism that shapes them, and it is a truly recursive process, which is necessary if it is to generate a fractal.
The model (the work of Bernard Sapoval and his colleagues at the École Polytechnique near Paris) starts with waves eroding the coast at the weakest points, forming bays and inlets. The resulting irregularities sharply dampen the power of the waves, but the waves have exposed new ground that is now subject to the reduced wave action. The tidal flow keeps the waves coming, the newly exposed coast gives erosion new ground to work on, and the interplay of erosion and damping propagates the effect of the waves down to ever-smaller scales. Well, not ever-smaller: Eventually, the process achieves some equilibrium, and then the coast stays relatively stable until some weather event shakes things up. Like what happened in Florida several times this year.
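The "erode the weakest exposed point" step of the story can be sketched as a toy cellular simulation. To be clear, this is my own illustrative toy, not Sapoval's actual model (which also accounts for the wave-damping feedback): land cells get random strengths, the sea starts at the left edge, and each step the weakest land cell touching the sea erodes away.

```python
import random

def erode(width=40, height=20, steps=300, seed=1):
    rng = random.Random(seed)
    strength = [[rng.random() for _ in range(width)] for _ in range(height)]
    sea = {(r, 0) for r in range(height)}   # the sea starts as a straight front
    for _ in range(steps):
        # land cells adjacent to the sea are the ones exposed to the waves
        exposed = set()
        for (r, c) in sea:
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < height and 0 <= nc < width and (nr, nc) not in sea:
                    exposed.add((nr, nc))
        if not exposed:
            break
        # the weakest exposed cell gives way and becomes sea
        sea.add(min(exposed, key=lambda p: strength[p[0]][p[1]]))
    return sea

def coastline_length(sea, width, height):
    # count sea/land boundary edges; a straight front of height 20 scores 20
    n = 0
    for (r, c) in sea:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < height and 0 <= nc < width and (nr, nc) not in sea:
                n += 1
    return n

sea = erode()
print(len(sea), coastline_length(sea, 40, 20))
```

Run it and the boundary count climbs well past the straight-front value, because erosion preferentially deepens whatever irregularities already exist, which is the recursive character the real model needs.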
The model fits the data well, and probably is a fair description of how fractal-like coastlines form. But it would be reassuring to see the recursion more clearly in the process, as it is built into all biological systems in the form of DNA.
The point of the foregoing discussion was to suggest that merely descriptive models, models that show how amazingly fractal-like a lot of natural processes are, are being supplanted by theories that try to do more: to explain how these natural processes actually work, how things get to be the way they are. I will only mention the theories of Stephen Wolfram, Ed Fredkin, and Gregory J. Chaitin, about which I have written here several times before; their work seems to me to be on the path of doing the same for life, the universe, and everything else. Generative theories, I call them. A New Kind of Science, Wolfram calls this kind of "theorizing."
"Wireless" is a recycled buzzword. There are still people alive to whom the word calls up memories of the days when one cranked up the "wireless" to listen to Fred Allen. Or whoever was on then. I'm not one of them, but I've heard about it.
To many people today, the term "wireless" conveys something much more modern. I am one of those people; I bought into wireless not long after the charm of sliding on my back amid black widow spiderwebs in the crawlspace under the house to run Ethernet cable ran out. And for a brief moment, I thought I knew something that a lot of other people didn't. Now the 802.11b spec that I was so pleased to embrace has a lot of sisters. There are many wireless futures to choose among.
There's wheatfield Wi-Fi. A community-owned utility company in Walla Walla County, Washington, has built a huge Wi-Fi web that spans some 1500 square miles. Farmers are using it to monitor irrigation equipment in the field and residential and business customers are using it to get on the Internet. A city-wide network in Philadelphia was getting press recently as possibly the largest Wi-Fi network in the world. It's not. This one is 10 times as large. It's bigger than Rhode Island. It even extends into part of Oregon.
I'm not sure there aren't some gaps in the system's coverage because, well, because there's a lot of territory in eastern Washington where nobody lives. But the developers say it's a lot easier to cover farmland with Wi-Fi than cities because there is nothing to block the signal. They hope to clone the network elsewhere. This could be great news for rural folks outside the reach of broadband services today.
802.11b has been replaced by 802.11g or 802.11a, but then there's big Wi-Fi: 802.16, or WiMax. It is designed to provide network coverage over a distance of 30 miles at speeds up to 70 Mbps. Intel is building WiMax into its Rosedale system-on-chip networking processors. Aren't you glad you didn't recently buy a landline telephone company?
WiMax takes wireless toward broader coverage, but NFC takes it in the opposite direction, pushing wireless down below the 1-meter range. Well below. NFC, which stands for "Near Field Communications," has an ideal range of under 10 cm.
One of the benefits of NFC is (where have we heard this word before?) security. With a range of 10 cm, it is impervious to sniffers unless they are so close that, well, you can smell them. But 10 cm is fine if what you want to do is wave your key card at the door lock or transfer data from one handheld device to another, or between a handheld device and a computer. Just hold them close together and wait. We're talking 424 kilobits per second. Philips and Sony developed it and Nokia and Samsung have already bought in.
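At 424 kilobits per second, "hold them close together and wait" is a fair description. A back-of-the-envelope calculation (ignoring protocol overhead, so real transfers would be slower):

```python
# How long does a transfer take at NFC's 424 kbit/s nominal rate?
def transfer_seconds(n_bytes, kbps=424):
    return n_bytes * 8 / (kbps * 1000)

# A 100 KB payload, say a contact card with a small photo:
print(round(transfer_seconds(100 * 1024), 2))  # 1.93 seconds
```

Fine for a key card or a business-card swap; nobody will be streaming video over it, and that's rather the point of a 10-cm radio.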
In my brief review of Ed Yourdon's scary book Outsource last month, I overlooked three interesting trends in technology outsourcing. I'd like to remedy that this month.
Outsourcing security. On first hearing, this sounds like a lousy idea. Hand over the keys to all your secrets to complete strangers? How can that be good business?
But there is really nothing new about hiring outside security: rent-a-cops, strike breakers, muscle, goons, there's quite a history of corporate America addressing security concerns by hiring outsiders who will remain loyal to management and not ally themselves with the workers.
That, I suspect, is the critical issue driving outsourcing of security: not that the talent can't be found within the organization, or brought into the organization, to protect its computer systems from intrusion and sabotage and theft of secrets, but that management doesn't trust the workers. Evidence of this is to be found in the fact that MSSPs (managed security service providers) are increasingly offering not just perimeter protection of computer systems (firewalls, intrusion detection, virus detection, and so on) but also defense against what they term "internal threats." Maybe that just means poor security practices within the firm. But I tend to take a more paranoid view, and I think that it also means monitoring employees as potential threats.
I can hear the pitch: If they'll steal your paperclips, why should you doubt that they'd sell your secret recipe to the competition? Hey, I don't have a monopoly on paranoia.
Outsourcing R&D. This seems dangerously close to violating the rule "never outsource your core capabilities," but that's only because I, like you, tend to think of thinking as one of my core capabilities. Managers don't always think that way.
Besides, outsourcing R&D is nothing new. The biggest firms do it both ways: Microsoft funds its own very impressive R&D barn, but also considers itself to have R&D grazing rights over the entire tech industry.
What is new is the rapid growth in the offshore outsourcing of tech R&D, which is projected by research firm Research & Markets of Ireland to shoot up from $1.3 billion last year to $8 billion in 2010.
More often than not, this R&D outsourcing goes to India. There is a widespread impression that India produces a lot of bright mathematicians and tech types. I can see at least three reasons for this impression: history, personal experience, and statistics. As mathematical historian Georges Ifrah argues, the number system we call Arabic is actually Indian in origin. (No offense intended to ancient Arabic mathematicians: They never claimed to have invented the number system anyway.) And I suspect that many managers in tech companies had an Indian math prof who made an impression on them. I know I had several, all of whom talked very fast and very precisely. Finally, a country with over a billion people and a 65 percent literacy rate ought to have a few technologically adept people. India is competition, definitely, and as indicated below, folks over there sometimes work pretty cheap.
Outsourcing yourself. Now there's a concept. Here's what one programmer wrote in a Slashdot posting:
"About a year ago, I hired a developer in India to do my job. I pay him $12,000 out of the $67,000 I get. He's happy to have the work. I'm happy that I have to work only 90 minutes a day just supervising the code. My employer thinks I'm telecommuting. Now I'm considering getting a second job..."
Yes, we can all be like Microsoft. If you can't lick 'em, hire 'em.
I continue to wonder if the corporate world really understands what it's getting itself into with web services, Service-Oriented Architectures, and the general informationizing and componentizing of businesses.
Whether the business produces horse collars or air compressors or diapers or delivers the news or flowers or hot dogs, the owners are increasingly encouraged to think that they are in the information business, and more: to restructure their businesses so that they are more clearly in the information game. There could be a trap in this.
Someone, it may have been Paul Heckel or Brenda Laurel or Cecil B. DeMille, once said that there are three kinds of jobs: producers, directors, and "creatives." Creatives generate new content, directors assemble and package the work of creatives, and producers manage the money and the rights.
We all know how upset the producer types get when folks start using the Internet to bypass them. They try to get laws passed, they try to get hardware redesigned, and they sue everybody in sight.
We creatives get less upset when our stuff is ripped off because, generally, we have no faith in residuals and royalties and subscription fees and the like anyway. Living by our wits, from hand to mouth, we want the money up front and don't count on anything more until we produce our next epic. Isn't that how you picture yourself? As the lone brainslinger who rides into town on a white horse to save the peons and then rides off again, into the sunset? Me, too.
Directors? Well, that's what web services want to turn everyone into. Applications should be services, and these services should plug together, and they should be available via the Internet. The sellable product, then, could come to look more and more like a movie, assembled by the director from the creative efforts of many. The product of tomorrow is a template with the slots filled by these outsourced services. It's the ultimate outsourcing: outsource your product.
But what is to keep your customers from being their own directors?
If the services are out there, why can't they assemble them for themselves? For every commercial service there is likely to be a freeware version somewhere on the Internet. And as for the skills the director brings to the production, the industry is making it ever easier for people to find the components, evaluate them, and do the assembling themselves. You can create your own CDs by downloading music and cover art and lyrics from different sources and combining them.
If you have a bit more skill, you'll tap the APIs and turn some other creative's or director's opus into something new and unanticipated, like those who are turning Google's e-mail service into extensions of their desktops and other things.
Some aspects of this may not be legal, but you can do it. And the standards and protocols underlying web services are not proprietary.
We are building a network that lets us put together what we want from components. Or maybe the network is just growing, or building itself. Anyway, that dream that the Internet would eliminate lots of middlemen because it would empower us to find what we need for ourselves: it's happening.
So should we be encouraging businesses to become such endangered middlemen?
I'm just asking.
DDJ