Dialin' for High-Tech Dollars

Ray Valdes

Today's Internet is considered by many to be the precursor to the information superhighway. As reported here several months ago, many developers are not waiting around for acts of God, or for Congress and megacorporations to magically conjure up the interstate dataway. Instead, small ventures like O'Reilly & Associates, Internet Shopping Network, Enterprise Integration Technologies, and Mosaic Communications are using the Internet as the system platform for a new breed of applications. Meanwhile, the daily newspapers continue to report nonevents about the coming information highway, such as uncompleted mergers, imagined markets, stalled legislation, and so on.

In all this talk, however, there is the fundamental assumption that this future system will basically work. Exactly how it will work is left to the net.geeks, present and future, while corporate honchos concern themselves with planning how many O.J. channels they will carry.

The truth is the global net exists and it works. Any glitches you encounter are probably caused by your newly established access provider's 486 going down after one of the three employees turns on the toaster at lunchtime. By and large, the Internet does not go down, even though it is one of the most massively complex systems ever built--currently 35,000 networks, more than 2 million host machines, and 20 million users in 70 TCP/IP-connected countries, on a smorgasbord of operating systems and hardware platforms. (The Internet Worm incident of a few years ago is a solitary blot on an otherwise stellar record.)

But, as novice users learn the joys and travails of Mosaic, gopher, and the alt.flame, comp.unix, and alt.sex newsgroups, few are aware that the technological foundation--the Internet Protocol (IP) that forms the basis for the TCP/IP protocol suite--is, as we speak, being revamped under our feet. The emerging version of the protocol is known variously as "IPng" (Internet Protocol: The Next Generation) and "IPv6" (IP Version 6). This reconstruction work is being carried out by a loose-knit group of sometimes self-appointed (and occasionally self-taught) world-class hackers and system designers who work under the auspices of the Internet Engineering Task Force (IETF).

This reengineering effort is interesting for several reasons. There are, of course, some tricky design issues having to do with trade-offs between performance, compatibility, and added functionality (for example, facilities related to multimedia broadcasts and to commercial transactions). Unlike the PC arena, where a developer's decisions on matters such as a spreadsheet data format or graphics API might have a lifespan of 4--5 years, the decisions of IPng designers will have an impact for the next 15--20 years. The IPng effort is also interesting for the way the design process is carried out; unlike most development efforts in industry and government, it follows the Internet tradition of "cooperative anarchy."

Although the initial IPng specification was decided upon only a few months ago, the overall effort has been underway for more than two years--soliciting alternative proposals, then evaluating and deciding among them. The primary motivation for IPng is the distressing fact that the Internet is running out of addresses, as accelerated growth consumes the 32-bit IP address space.

Enthusiasts of the online experience like to say that cyberspace has no boundaries. But just as the settlers of the Old West eventually ran into the Pacific Ocean, so the new settlers of the cyberspace frontier will inevitably encounter the limit of the 32-bit address space. No one can predict exactly when this will happen, but estimates range from 2 to 10 years from now. Wait a minute, you say, 32 bits equals 4 billion addresses, and there are only 35,000 networks and 2.2 million hosts, so where's the problem? Even if the system continues to grow at current rates (doubling every 14 months), there's still a long way to go. The answer is that the 32-bit address space is not homogeneous, but is structured into several classes of addresses, reflecting various ways of dividing the 32-bit value into "the network part" and "the host part." Class A addresses use most of the first 8 bits in an IP address field for the network portion of the address; Class B addresses use most of the first 16 bits; and Class C addresses use 21 of the first 24 bits.

The total number of addressable networks is therefore 2^21 plus 2^14 plus 2^7, a little over 2 million. And some parts of the space are running out more quickly than others. The Class B space, which is used for mid-sized networks, was predicted in the summer of 1990 to run out by March 1994. It would have, had the Network Information Center (NIC), which assigns IP numbers, not switched to a palliative scheme that assigns blocks of multiple Class C addresses in place of a single Class B.
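That arithmetic can be checked in a few lines of C. This is purely illustrative; the helper names are my own and come from no networking library:

```c
/* Network counts implied by the classful partitioning described above:
 * Class A keeps 7 usable network bits, Class B 14, and Class C 21. */
static unsigned long class_a_nets(void) { return 1UL << 7;  }  /* 128 */
static unsigned long class_b_nets(void) { return 1UL << 14; }  /* 16,384 */
static unsigned long class_c_nets(void) { return 1UL << 21; }  /* 2,097,152 */

/* 2^21 + 2^14 + 2^7 = 2,113,664 -- "a little over 2 million" */
static unsigned long total_nets(void) {
    return class_a_nets() + class_b_nets() + class_c_nets();
}
```

Against that ceiling of roughly 2.1 million networks, the 35,000 already assigned--and doubling every 14 months--look far less comfortable than the raw "4 billion" figure suggests.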

Well then, you might say, why not just take the address fields in the current IP packet header and double their size? The answer is that it's not that simple. Any change in the header, no matter how small, will impact scores of TCP/IP implementations on many different operating systems and platforms, not to mention the millions of installations. Many installations are running systems that are no longer marketed, for which the source code is not available, but which cannot be removed from service. So backward compatibility is of paramount concern.

Also, it turns out that the current IP header, which comprises 12 fields totaling 20 bytes, could use some refurbishing. There are fields in there that are not used very much or very well, and others that perhaps should be expanded. Even the address field needs to be reexamined, beyond its physical size: It can have rich semantics, in the form of multiple ways of structuring the address space, the design of which needs to be guided by knowledge of things like the subtleties of router algorithms. (The dynamics of network behavior under the influence of multiple routing algorithms can fill several large volumes, and already have.) And once these design decisions have been made, there is a flock of smaller optimizations and tweaks that can be done to the IP header layout in order to bum instructions out of a driver's inner loop. If you add to all these considerations the wish to support new functionality, given a once-in-a-generation opportunity to lay a new foundation for cyberspace, then you can understand why as many as a dozen different proposals were circulated prior to the final decision by the IPng directorate. In just one group working on an IPng proposal, more than 80 individuals participated, an indication of the scale of this human effort.
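For reference, the 12-field, 20-byte header under discussion can be sketched as a C struct. This is an illustrative layout only: real implementations typically pack the shared bytes as bitfields, whose ordering is compiler-dependent, so the packed subfields are noted in comments instead.

```c
#include <stdint.h>

/* The current (version 4) IP header: 12 fields, 20 bytes total. */
struct ipv4_header {
    uint8_t  version_ihl;         /* 4-bit version, 4-bit header length  */
    uint8_t  type_of_service;
    uint16_t total_length;
    uint16_t identification;
    uint16_t flags_frag_offset;   /* 3-bit flags, 13-bit fragment offset */
    uint8_t  time_to_live;
    uint8_t  protocol;
    uint16_t header_checksum;
    uint32_t source_address;      /* the 32-bit fields now nearing       */
    uint32_t destination_address; /* exhaustion                          */
};
```

Note how tightly packed this layout is--every byte is spoken for, which is exactly why "just double the address fields" means touching every implementation that parses it.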

Given the resources brought to bear on this problem, some find the outcome to be surprisingly modest. At this writing, the IPng packet header consists of just seven fields totaling 40 bytes. (This size does not include a number of optional extension headers, for things such as auto-configuration, security, and authentication.) However, one technical reviewer called this design "an aesthetically beautiful protocol well tailored to compactly satisfy today's known network requirements." The most salient feature of the new IP header is that the address fields are 128 bits instead of 32, leading more than one person to remark that this is enough to address "every proton in the known universe" (if these bits were part of a flat address space, which they are not).
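Under the same illustrative conventions, the proposed 40-byte IPng base header might be sketched as follows. The draft was still in flux at this writing, so treat the subfield split of the first 32-bit word as an assumption rather than the final spec:

```c
#include <stdint.h>

/* A sketch of the proposed IPng/IPv6 base header: 40 bytes, dominated
 * by the two 128-bit addresses. */
struct ipv6_header {
    uint32_t ver_prio_flow;           /* packs 4-bit version, priority,
                                         and flow label (assumed split) */
    uint16_t payload_length;
    uint8_t  next_header;             /* chains the optional extension
                                         headers mentioned above        */
    uint8_t  hop_limit;
    uint8_t  source_address[16];      /* 128-bit addresses              */
    uint8_t  destination_address[16];
};
```

Despite quadrupling the address size, the base header carries fewer fields than its predecessor--much of the old header's machinery has been pushed out into the optional extension headers.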

The winning proposal was a result of a compromise hammered out after many grueling and impassioned sessions. The three finalist proposals were: Simple Internet Protocol Plus (SIPP), TCP and UDP with Bigger Addresses (TUBA), and Common Architecture for the Internet (CATNIP). Some participants in the debate bemoaned the last-minute compromise: "It was depressing to learn that we have become so politically rather than technically oriented that we now choose to make decisions which make everyone equally unhappy, rather than decisions which make one faction happy." But one positive indicator was that some people who were formerly squared off against each other were each claiming that their side won, a tribute to the statesmanship of cochairs Scott Bradner and Allison Mankin, who rode herd on the process and engineered the compromise.

After the dust settled, everyone became more sanguine, and some emphasized the limitations of what had been accomplished. Frank Kastenholz of FTP Software considers IPv6 to be a "minor evolutionary step" that leaves a number of hard questions unanswered. The depletion of the IP address space is a serious problem, but just as critical is the issue of router-table exhaustion (the Internet's tremendous growth has led to significant increases in the memory requirements of routers). According to Kastenholz, IPv6 could make this situation worse, increasing the size of router tables by a factor of four. Also, "it shouldn't have taken two years."

Another observer is Radia Perlman, inventor of the distributed spanning-tree algorithm used in routing. She considers the differences between the three proposals to be "not that great," saying, "the whole thing is extremely political, there's not much technical content." Although IPv6 is a definite improvement, "it's not as good as CLNP would have been."

There's a saying that goes "There are two things that people should never watch being made. One is sausages, and the other is Congressional legislation." If you followed the heated debate, you may be tempted to add "...and Internet protocols." But the reality is that the apparently chaotic process works surprisingly well, especially when compared to more bureaucratic alternatives such as those relating to Open Systems Interconnection (OSI).

Robert Kahn, one of the codesigners of the ARPAnet of the 1970s, writes in the August 1994 CACM:

Internet standards are those that have been found by actual trial and error to be desirable, and the resulting standards developed as a result of widespread implementation. No better model of a standards process has yet emerged that is as dynamic and agile to allow more direct involvement by industry.

This sentiment is echoed by Frank Kastenholz, in "Technical Criteria in Choosing IPng," in a section entitled "Cooperative Anarchy" (May 1994):

A major contributor to the Internet's success is the fact that there is no single, centralized, point of control or promulgator of policy for the entire network. This allows individual constituents of the network to tailor their own networks, environments, and policies to suit their own needs. The individual constituents must cooperate only to the degree necessary to ensure that they interoperate. We believe that this decentralized and decoupled nature of the Internet must be preserved. Only a minimum amount of centralization or forced cooperation will be tolerated by the community as a whole.

Many members of the Internet technical community are proud of the apparent chaos and deep-seated pragmatism that characterize the development process. Recent years have added a patina of organizational lines of authority to this process--lines which extend up from the IPng directorate, to the IETF, to the Internet Architecture Board (IAB), ending up with the Internet Society, a private industry group. Even though some of the participants are in the employ of large vendors such as Sun and DEC, much of the work is done on a volunteer basis. One of the most vocal participants reportedly does not have to work for a living, having already made his fortune in a technical venture. There is a sense of commitment, purpose, and enjoyment.

Dyed-in-the-wool members of the Internet community like to contrast this with the OSI process: "It must be emphasized that the only real things produced during OSI standardization are paper products, which needn't bear any relationship to what is implementable or useful in the real world." (Marshall Rose, The Internet Message). By contrast, most Internet proposals already have a number of implementations interoperating with each other and with older-generation systems. Marshall Rose adds: "[The Internet approach] has been proven successful in delivering solutions, and the [OSI approach] is successful in delivering international agreements on paper... OSI is hardly a threat... OSI technology is second-rate, the products aren't credible, and there are no real OSI solutions."

David Piscitello and Lyman Chapin echo this point in their book Open Systems Networking:

There is a perception (all too often accurate) that the OSI standards process is more apt to converge on a solution that is politically correct than one that is technically so. Within the OSI standards community, there also appears to be a tendency to compromise by embracing multiple solutions to a single problem, as well as a tendency to create and tinker with new technology within committees, often without the implementation and experimentation that is necessary (essential) to determining whether the technology is useful... All too often, ISO and CCITT standards committees hammer out compromises that have a significant impact on technology with regard only for the holy spirit of compromise.

This criticism is no longer new, and some people in the OSI community have responded by adopting some of the classic Internet approach. Judging by a few of the morning-after comments on IPng, it may be that the Internet community has likewise picked up the OSI penchant for compromise.

Nevertheless, according to Steve Deering of Xerox PARC (who is one of the key participants in IPng), "so far we've managed to retain what's effective," despite the imposition of more formal procedures over the years.

Looking ahead to the coming months, there's still much work to be done: Prototype implementations need to be built (and rebuilt), the protocol needs further work in order to support mobile hosts, security mechanisms need to be clarified, and so on.

And, once this work is done, Deering points out that "it's hard to know if this will take off in a big way." The problem is that there is little incentive for current installations to upgrade, because the primary beneficiaries of the larger address space are those who are not yet on the Net. The current inhabitants need to be willing to do outsiders a favor. Deering says, "We've tried to add a few carrots as incentives for existing users to upgrade, such as built-in security and plug-and-play autoconfiguration," but whether these are enough remains to be seen.

BCG?

Jonathan Erickson

Continuing in its efforts to foster a rich research infrastructure with an eye towards commercial development, the National Institute of Standards and Technology's Advanced Technology Program (NIST/ATP) has gone on the road to drum up more interest. "I need more proposals," ATP associate director Marc Stanley told attendees at a one-day conference sponsored by the Kansas Technology Enterprise Corporation. What the NIST's ATP is looking for are high-risk, enabling-technology projects that will provide long-term benefits to the economy by enhancing economic growth, increasing productivity, and providing jobs. The big payoff for companies that qualify is grants--not loans--of up to $2 million annually, with few strings attached. In total, the NIST is making available $431 million to support critical research needed to develop key commercial technologies. So far, the NIST has made 89 awards through its ATP program, to the tune of nearly $250 million.

Up to this point, awards generally have been granted in the areas of manufacturing and processing--flat-panel displays, electro-optics, semiconductor processing, plastic recycling, printed-circuit boards, and the like. However, much of the recent activity has focused on software. To that end, NIST will be pumping millions of dollars into projects over the next five years to enable the development of software tools. To qualify, companies must propose projects that exhibit both technical and business merit--and that can withstand a rigorous, but surprisingly streamlined, review process. Large companies don't necessarily enjoy an advantage over startups, Stanley claimed, pointing out that nearly 50 percent of the awards made so far have gone to small businesses (ten or fewer employees).

The ATP supports both broad grants covering "generic" technologies--process control, manufacturing, and the like--and "focused" grants, which are more vertical--healthcare, information technology, and component software, for instance. Both types of grants undergo a similar awards process. Proposals are submitted to NIST for technical and business screening. Those that make the first cut move forward for an oral review; others are given phone interviews detailing why they were rejected. Awards are made depending on the results of the oral review. According to Stanley, 60 percent of the weight is based on the submitted business plan, 30 percent on the scientific/technical feasibility of the project, and 10 percent on the qualifications of the project team. Proposals are mercifully to the point, at least compared to most government proposals. Small-business proposals must be 50 or fewer pages in length, while medium- to large-size businesses can file no more than 70 pages. The NIST has also instituted a new "abbreviated-proposal" program of no more than ten pages in length that lets you measure ATP interest without fully developing a business plan. Longer proposals are rejected outright. All proposals are confidential, as all ATP evaluators (both staff and industry participants) sign nondisclosure agreements. Interestingly, as a means of ensuring business-plan confidentiality, the NIST ATP program is one of the few government programs exempted by law from the Freedom of Information Act. (Likewise, the ATP program is excluded from the General Agreement on Tariffs and Trade international agreement, so that participants can never be sanctioned by GATT member nations.)

In addition to established grant programs, NIST is actively seeking new areas to support. Consequently, the ATP's "Program Ideas" scheme accepts letters or white papers (again, no more than 5--10 pages long) that outline nonproprietary technical and business goals currently not covered by the ATP. So far, more than 600 program ideas have been submitted to the ATP, leading to several new grant programs. Interestingly, with more than 100 submitted white papers, software has generated more program ideas than any other research domain.

According to David Fisher, ATP program manager for computing, software, and information technology, most of the current software initiatives focus on improving software quality and development productivity. Consequently, the ATP is closely following semantically based, rather than syntactically based, approaches to software as a means of automating and improving the development process. Still, the long-term goal of the group is to help create viable commerce in vertical software components, including alternative revenue schemes for software reuse. Other areas the ATP is actively investigating are learning technology (including multimedia), communications, virtual enterprises, and dependable and renewable industrial systems. Of the 40 or so full proposals Fisher has received, 18 have moved to oral reviews; of the 33 abbreviated proposals, 10 have been invited to submit full proposals.

To become part of any NIST ATP program, you simply register with the ATP database to be routinely notified about upcoming awards, projects, and workshops. Guidebooks (including forms) for submitting proposals are also available (request the "ATP Guide to Program Ideas"). Likewise, the 50-page document "Component-based Software '94-'06" is also available. To request documents or register, call the ATP at 800-287-3863, or write ATP, NIST, A430 Administration Building, Gaithersburg, MD 20899. Alternatively, you can fax the ATP at 301-926-9524, or send e-mail to atp@micf.nist.gov. For specifics on software projects, contact ATP program manager David Fisher at 301-975-3649 or dfisher@micf.nist.gov. Similarly, Jack Boudreaux, program manager for computer science, applied mathematics, and robotics, can be contacted at 301-975-3560 or jackb@micf.nist.gov.

ATP files are also available electronically via anonymous ftp at enh.nist.gov in the techserv public subdirectory. (The password for enh.nist.gov is userident.)

How To Make Friends and Influence Developers

According to Harry Helms of HighText Publications, development-relations evangelist Michael Windsor told attendees at the recent Microsoft Multimedia Developers Bootcamp: "Working with Microsoft is like jumping in front of a speeding truck. You can get one hell of a ride for free, or get run over."