Dr. Dobb's Journal, August 2005
There are two essential Microsoft conferences for Windows applications developers: the Professional Developer Conference (PDC) and the Windows Hardware Engineering Conference (WinHEC). Microsoft usually releases something significant at WinHEC. This year, we got Longhorn Developer Preview discs (the full beta wasn't ready) and the 64-bit edition of Windows Server 2003. The latter is pretty cool, and I'll have it running here in a month or so. If you're going to migrate to Server 2003, this may be the right way, assuming the driver developers do their stuff on time. That may be an invalid assumption. We will see.
We expected to hear a lot about Longhorn. After all, WinHEC is where you get the gory details of how new Microsoft OS features will work, and how they hook into the hardware.
There's always plenty of both big picture and detail at WinHEC: Some of the breakout sessions are as technical as you can get, yet I can't recall going to one at which I didn't learn something important. I generally find that I learn more at WinHEC than at any other show I get to all year.
This year was no exception, but some of the information was gained more by inference than direct communication. So it goes.
One problem with WinHEC is that it really is like drinking from a fire hose. With all that information rushing in, how in the world can I record it? Particularly this year, when they provided neither tables nor electrical power outlets for the press.
Code-named "Longhorn," this "Operating System of the Future" was announced four long years ago (http://groups.msn.com/eXperienceWindows/mslonghorn.msnw). Actually, though, some of us have been waiting longer than that. (Remember "On to Cairo"?) Now we learn that this radical revision of Windows will be with us in stages. Bill Gates himself named them: key hardware requirements locked down in 2005, the Longhorn client shipping for "Holiday 2006," and the Longhorn server at some unspecified date in 2007.
As to what will be in it, if they told us anything new since last year, I didn't get it into my notes. Since they have mostly been quietly yanking features out, I hadn't expected any new ones. (For example, see http://msdn.microsoft.com/data/winfs/ for the sad fate of Windows Future Storage, WinFS.) They did demonstrate some features we'd only heard about before. There were screenshots of dynamic icons, including the dynamic "Documents" icon that replaces "My Documents." Some of the features of those dynamic icons are pretty cool: previews of folder contents, a sort of organizational picture of subfolders, the ability to organize items within those folders, that kind of thing.
In fact, the ability to organize files and data is pretty impressive. You can make new lists by drag-and-drop and build new data structures on-the-fly. All this whizzed by pretty fast in the demonstrations, and if you blinked, you missed some of the magic. How much of it will actually make it into Longhorn isn't entirely clear. Microsoft is getting wary of making promises. But it sure looked good, and at least they're up to slideware. Of course, the latest versions of Linux already have some of these features. So it goes.
It's also not entirely clear how much will be in Windows XP well before Longhorn. As the release date for Longhorn recedes, Apple continues to roar into the competition with new releases of the Mac OS.
Since we don't know precisely what goes in Longhorn, what will be released before Longhorn, and what they hope to have but may not get to, it's difficult to present all this in an organized fashion: One moment I may be talking about the future of the industry, another about Longhorn, and another about pipedreams. Bear with me.
There is multiprocessing in your future. Depend upon it.
Moore's Law states, roughly, that system capabilities double about every 18 months. This is an empirical rule that has held since the 1960s, and predictions based on that "Law" have proved out. The original formulation was the underlying empirical observation: that the number of transistors that could be put on a very large-scale integrated chip doubled on the same time scale. Note that the two "Laws" are not quite the same, and that while the second statement implies the first, it's not a certain cause and effect.
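To put the 18-month figure in perspective, here is a quick back-of-the-envelope sketch (illustrative arithmetic only; `growth_factor` is a name of my own invention):

```python
# Project capability growth under an assumed 18-month doubling period,
# the figure quoted above for Moore's Law.
def growth_factor(months, doubling_period=18):
    """Capability multiplier after `months`, doubling every `doubling_period` months."""
    return 2 ** (months / doubling_period)

# Five years (60 months) of 18-month doublings is roughly a tenfold gain.
print(round(growth_factor(60), 1))   # 10.1
```

That tenfold-every-five-years pace is why "what do we do with the leftover power?" keeps coming up later in this column.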
The remarkable thing has been how well this prediction has held up, particularly recently, when attempts to improve CPU performance by cranking up the clock speed became counterproductive: Above 3 GHz, increasing clock speed produced far more heat than computational power. The clock-speed game ended with Prescott, which was supposed to go up to 5 GHz, but which will probably never top 4 GHz.
If speeding up one CPU can't do the job, how can designers take advantage of increased transistor densities to get higher performance? The obvious answer is through multiple processors. Instead of more speed, you break the problem up into parts and have a number of slower machines that each do a part of the job at the same time, then integrate the results.
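The divide-and-integrate idea can be shown in a minimal sketch (the function name is mine, and a real workload would divide far heavier work than a toy sum):

```python
# Split one big job (here, summing a list) into chunks, let a pool of
# worker threads handle the chunks concurrently, then combine the
# partial results -- the same shape as the multiprocessor approach
# described above.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    chunk = max(1, (len(data) + workers - 1) // workers)
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, parts)    # each worker sums one slice
    return sum(partials)                   # integrate the partial results

print(parallel_sum(list(range(1000))))     # 499500
```

The hard part, as the next paragraphs note, is that most problems don't split into independent slices this cleanly.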
At the extremes required for scientific computing, this approach has been known as "massive parallel processing," and showed up in our world of small computers as "Symmetrical MultiProcessing" (SMP). Most of us experience it, if at all, as "dual processing." Moving to four processors was tougher. Using massively parallel systems for general-purpose computing proved to be really hard, as was stated succinctly by John McCarthy: Such systems "tend to be immune to programming." Over time, many of the problems were solved, particularly in graphics processing, and Intel has featured multithreaded processing, called "Hyper-Threading," for some years now.
When Prescott proved to generate too much heat and too little computing power, Intel moved to the multiprocessing path, and announced the next step would be dual-core processors.
According to Peter Glaskowsky, VP of System Architecture for MemoryLogix (a chip design firm), "Intel does not make a dual-core processor. The Pentium D is literally two chips that have not been cut apart. Intel has samples of true dual-core chips, but it is unlikely we'll get them before 2006."
AMD, meanwhile, has actual dual-core processors. The difference is this: The two Intel cores are on one chip, but they have to go outside the chip, out to the bus, to talk to each other. The AMD design, on the other hand, has two CPUs and a Northbridge bus in the same chip. The result is true dual processing, and Glaskowsky says, "True dual processing works even better than hyperthreading."
And having said all that, it may not make much difference to anyone outside a lab. Beyond dual core, no matter how it is implemented, lies a new world of multicore processors: servers with up to 64 discrete processors in perhaps four chips. The result is so much computing power that a lot of new software is designed around the premise that when everything is "fast enough," there is still going to be power left over: How shall we make use of that?
Or as Microsoft put it, what compelling reasons will there be to upgrade your computer?
One obvious use for extra computing capacity is to increase reliability. The original expansion of the RAID acronym was "Redundant Array of Inexpensive Disks." The original notion was increased reliability, but RAID systems also used the extra disk-storage capacity to increase performance. RAID 5 arrays increased both performance and reliability, and changed the way designers think about disk storage.
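The redundancy-for-reliability trade that RAID 5 makes can be sketched with its underlying XOR parity (a toy model; a real array stripes data and rotates the parity across all the disks):

```python
# The parity block is the XOR of the data blocks, so any single lost
# block can be rebuilt by XOR-ing the parity with the survivors.
def xor_blocks(blocks):
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data = [b"disk", b"five", b"demo"]                # three data "blocks"
parity = xor_blocks(data)                         # stored on a fourth disk
rebuilt = xor_blocks([parity, data[0], data[2]])  # disk holding data[1] failed
print(rebuilt == data[1])                         # True
```

One disk's worth of capacity buys the ability to lose any one disk without losing data, which is exactly the spare-capacity-for-reliability trade the next paragraph applies to processors.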
Now view processors the same way we view disk drives in a RAID setup.
The assumption here is that systems are already fast enough, so much so that we can afford to leave hardware capability on hot standby or in a redundant configuration. That way, if something begins to fail, it can be replaced on-the-fly without the system losing either data or performance. You partition your system into virtual chunks. The operating system is not really aware of the "extra" resources, and doesn't try to make use of them, until some event (a failure warning or an actual failure) wakes them up and alerts the operating system to put them to work. Big iron started this: IBM has an entire ad campaign based around "On Demand" computing, and Sun will sell you a system with "extra" processors that, for a fee, you can have turned on as needed. Now this trend will be ubiquitous, in desktops and midrange servers alike--possibly even laptops.
This makes so much sense, particularly in servers, that the only question is where the partitioning should take place--in the operating system, the BIOS, or elsewhere? That doesn't turn out to be a trivial question, but present plans are to build such a capability into Longhorn.
Emulation (making a system with one CPU run as if it had a different processor in it) has a bad name, mostly because of the performance hit it incurs. As an example, an early version of the HP TabletPC used a Transmeta chip that was supposed to let it emulate an Intel x86. Of course, they didn't call it emulation because, if they had, they'd never have talked HP into buying it; no matter what you call the process, it was slower than molasses and provided such a bad user experience that many were soured on the whole notion of the TabletPC. It almost did that to me.
However, if you have enough computing power, any system can be made to look like any other. For instance, it costs a lot in performance to make a Mac run Windows applications, but it can be done, and with a powerful enough Mac, it can be done so well that users may not notice that it's being done at all.
We are rapidly approaching a time where there will be enough computing power to allow every physical system to contain a half dozen or more Virtual Machines, each with its own operating system. The same machine may be running Longhorn, Windows XP, Linux, Windows 98, and, for all of me, some version of the Pick OS, all running one or more applications, and each entirely unaware of the others.
If that's not startling enough, control of all this may not be by any one of those operating systems; and none of the operating systems will be directly controlling the hardware.
One of the Longhorn demonstrations was centralized control of a network's resources, both servers and clients, by a single administrator. Desktop machines could be shut down, restarted, updated, debugged, and have worms and viruses removed, all without any action on the part of the user physically present at the machine.
This kind of administrative control will be a key feature in Longhorn Server. Microsoft will not be as immediately dominant in server control as it would like: Just after WinHEC, System Center was morphed into a brand name, instead of the Grand Unification of all their management products previously announced. Also, the server-level talks at WinHEC were much more general than last year's, when they were clearly planning an immediate assault on Hewlett-Packard's OpenView and Computer Associates' UniCenter.
TPM is short for "Trusted Platform Module." It's a chip built into the motherboard that stores passwords and performs cryptographic processing. It can be used to validate boot code (both local and from remote sources) and encrypt sensitive user data.
Secure Startup allows the entire hard disk, except for a small tools partition, to be encrypted. The TPM validates the flash BIOS, the disk's boot block, and the operating-system code up to the point where a user is able to log in, at which point the usual OS protection features take over. If the BIOS is changed, the hard disk is moved to a new system, or a different OS is loaded from a CD, the data on the hard disk can't be accessed without the proper password.
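The validation chain described above works roughly like this sketch (a toy model of the TPM's hash-extend operation; `measure_boot` and the component names are my own, and a real TPM uses dedicated platform configuration registers rather than a Python variable):

```python
# Each boot component is hashed into a running register in turn
# (new = SHA1(old || hash(component))), so the final value depends on
# every component and on their order. Any tampering produces a
# different chain, and the protected disk contents stay locked.
import hashlib

def extend(register, component):
    return hashlib.sha1(register + hashlib.sha1(component).digest()).digest()

def measure_boot(components):
    register = bytes(20)                  # the register starts zeroed at reset
    for c in components:
        register = extend(register, c)
    return register

good = measure_boot([b"flash BIOS", b"boot block", b"OS loader"])
evil = measure_boot([b"flash BIOS", b"rootkit",    b"OS loader"])
print(good != evil)                       # True: tampering changes the chain
```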
Corporate systems administrators can give local users a local password while retaining a separate administrative password for themselves, in case a user loses the local one or leaves the company.
Longhorn, we are told, will support Secure Startup with a number of features, provided only that there's a TPM chip in the system. Expect TPM to be ubiquitous by mid-2007.
At 2004's WinHEC, TPM was part of the "Next Generation Secure Computing Base," which in turn evolved from "Palladium." That's still around, and TPM is part of it.
Longhorn is supposed to end a lot of security worries. Among other features, it will have ways to segregate instructions that can be executed from data areas that cannot contain executable instructions. This feature is already present in XP Service Pack 2, known alternately as "Data Execution Prevention" (DEP) or "No Execute" (NX). It only works on 64-bit-ready systems (AMD's Athlon64 and Opterons; the latest Intel Pentium 4 chips). If your system doesn't support DEP, you'll see a message saying so.
DEP sounds like strong type checking, and after the security session, I put it to the Microsoft presentation team headed by Dave Aucsmith, Security Architect and CTO of the Microsoft Security Business and Technology Unit: "If XP had been built in a strongly typed language with range checking, say Modula-2 or even Ada, wouldn't most buffer-overflow worms starve to death?"
It was a moment of triumph when they all agreed. After all, I've only been making this argument since about 1982.
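The range-checking point can be seen in miniature (a sketch, not Windows code; `checked_copy` is an invented name):

```python
# A language that checks every write against the buffer's bounds turns
# an attempted overflow into a clean error instead of silently
# overwriting adjacent memory -- which is what buffer-overflow worms
# depend on.
def checked_copy(dst, src):
    if len(src) > len(dst):
        raise ValueError("source too long for buffer")  # overflow refused
    for i, b in enumerate(src):
        dst[i] = b
    return dst

buf = bytearray(8)
checked_copy(buf, b"hello")          # fits, so the copy succeeds
try:
    checked_copy(buf, b"far too long for an 8-byte buffer")
except ValueError as e:
    print("overflow blocked:", e)    # overflow blocked: source too long for buffer
```

In Modula-2 or Ada the compiler and runtime impose that length check for you; in C, `strcpy` happily keeps writing.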
But gloat as I might, the security problem is severe, and if we had to wait for Longhorn, we might not have a computer industry left.
The computer book of the month is Jonathan Hassell, Learning Windows Server 2003 (O'Reilly & Associates, 2004). Hassell knows his subject, and if you find yourself needing to migrate to 2003 Server, this is the book to have when you do it.
DDJ