Dear DDJ,
Norman Dotti makes excellent points in his "Real Real-Time" letter (DDJ, February 1999). Regarding real-time instruments such as FFT spectrum analyzers, he distinguishes between those that can acquire and display data continuously with no gaps and those that only give the visual appearance of continuous data. Because our Data AcQuisition And Real-Time Analysis (DAQARTA) shareware allows both approaches, I'd like to comment on the virtues of each.
In a typical real-time analyzer, when the sample rate is adjusted to the point where processing just keeps up with acquisition, all of the input data is reflected in the display. If that sample rate is, for example, 20 kHz when processing 1024-sample FFTs (typical for Daqarta running on an old 386 system), then the display update rate is 20000/1024, or 19.5 screens per second.
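The arithmetic generalizes to any sample rate and FFT size; a minimal sketch (in Python, using the illustrative numbers from this letter):

```python
# Display update rate of a gap-free real-time FFT analyzer:
# each block of fft_size samples must be processed before the
# next block finishes acquiring.

def update_rate(sample_rate_hz: float, fft_size: int) -> float:
    """Screens per second when every input sample is displayed."""
    return sample_rate_hz / fft_size

rate = update_rate(20_000, 1024)
print(round(rate, 1))  # 19.5 screens per second, as in the letter
```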
At higher sample rates the display shows only the most recent data, but still updates at the same rate. In theory, a signal could contain time-domain transients that fell "between the cracks" of displayed spectra and were thus overlooked. With a faster system, these might turn up as occasional jumps in the noise floor of the spectrum...assuming you could spot an infrequent flicker at such high display rates.
Although it's pretty unlikely that such transients would be synchronized with the data gaps, they could be detected by reducing the sample rate to eliminate the gaps. But if you know there are transients, then a better approach is to trigger on them: You can then observe the transient alone, or the "clean" data before or after the transient using a deep data buffer. There is no problem with missed data at high sample rates, because the analyzer waits for the trigger before processing only the desired synchronous data.
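The deep-buffer triggering idea can be sketched in a few lines. This is an illustrative model, not Daqarta's implementation; the threshold trigger, buffer sizes, and sample values are all hypothetical:

```python
from collections import deque

def capture_around_trigger(samples, threshold, pre=4, post=4):
    """Keep a deep buffer of recent samples; when one crosses
    `threshold`, return the "clean" data just before the trigger
    plus the transient and the data after it. All names and
    sizes here are illustrative."""
    history = deque(maxlen=pre)          # deep buffer of pre-trigger data
    it = iter(samples)
    for x in it:
        if abs(x) >= threshold:          # trigger condition
            after = [x] + [next(it) for _ in range(post)]
            return list(history), after
        history.append(x)
    return None                          # no transient seen

# A quiet signal with a single transient (the 9):
sig = [0, 0, 1, 0, 9, 1, 0, 0, 0, 0]
pre_data, post_data = capture_around_trigger(sig, threshold=5)
```

Because the analyzer idles until the trigger fires, no transient is ever lost "between the cracks" regardless of sample rate.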
In fact, triggering is useful even with repetitive signals, to give a more stable spectrum. And for viewing waveforms instead of spectra, triggering is practically mandatory for a stable display. Even more important, proper triggering allows time-domain (waveform) averaging for impressive noise reduction of evoked responses or other repeating signals. So if you want a fast sample rate, there is really no need to forgo it or to buy a faster system just to stay within the real-time limits of your analyzer.
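Why triggered time-domain averaging reduces noise: the repeating response adds linearly across sweeps while uncorrelated noise grows only as the square root of the sweep count, so SNR improves by roughly sqrt(N). A toy demonstration with a synthetic evoked response (the waveform and noise level are invented for illustration):

```python
import random

def waveform_average(sweeps):
    """Coherent (time-domain) average of triggered sweeps:
    point-by-point mean across all sweeps."""
    n = len(sweeps)
    length = len(sweeps[0])
    return [sum(s[i] for s in sweeps) / n for i in range(length)]

random.seed(0)
response = [0.0, 1.0, 0.0, -1.0]                        # the evoked response
sweeps = [[v + random.gauss(0, 0.5) for v in response]  # each sweep is noisy
          for _ in range(1000)]
avg = waveform_average(sweeps)
# After 1000 averages the residual noise is roughly 0.5/sqrt(1000) ≈ 0.016,
# so avg closely tracks the underlying response.
```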
This discussion presumes that the data-acquisition board or sound card can use DMA or FIFO interrupts to acquire data in the background while processing in the foreground. But many popular laboratory boards lack these facilities, and rely upon interrupts to acquire each sample. At high sample rates the interrupt overhead slows processing so much that a sequential mode of operation is better: Instead of interrupts, foreground polling is used until enough data is acquired, then that data is processed before going back for more. As the sample rate goes up, the time to acquire the data goes down. Processing time is unchanged, so the display update rate actually rises.
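The update-rate claim for sequential mode can be checked with simple arithmetic. In this sketch the per-block processing time is an invented constant; the point is only that acquisition time shrinks as the sample rate rises while processing time does not:

```python
def sequential_update_rate(sample_rate_hz, block=1024, process_s=0.020):
    """Sequential mode: poll until `block` samples arrive, then
    process them before acquiring more. The 20 ms processing
    time is illustrative, not measured."""
    acquire_s = block / sample_rate_hz
    return 1.0 / (acquire_s + process_s)

# Doubling the sample rate raises, rather than overruns, the update rate:
slow = sequential_update_rate(20_000)   # ~14.0 screens/s
fast = sequential_update_rate(40_000)   # ~21.9 screens/s
```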
Sequential mode allows typically twice the throughput of real-time mode for these boards, particularly for evoked response applications using simultaneous stimulus generation. As an example, a basic DAS08/Jr-AO board (under $200 from Computer Boards or Cyber Research) in a 386DX-40 system can run at its maximum ADC rate of nearly 40 kHz while simultaneously outputting different tone burst complexes from both DACs at 120 kHz each, for a fully synchronous aggregate of almost 280 kHz.
And even for boards that don't need it for speed, sequential mode makes it easy to poll a TTL input to act as an external trigger, or to produce a TTL trigger pulse to synchronize external equipment. There are many real-world applications where such features and performance are much more important than the opportunity to view every last sample in untriggered true real-time mode.
Of course, there is a trick to getting this kind of power and performance (in addition to 100 percent assembly language with a custom just-in-time optimizer that was flying when Java was only a brew): Daqarta drops down to real-mode DOS. As Norman points out, Windows is not a real-time operating system, and even a real-time operating system may be hopelessly inadequate for high-speed operation without special hardware. That same DAS08/Jr-AO board under Windows would be limited to a sample rate of only a few kHz, with no simultaneous outputs, even on a fast system.
What about NT? With latencies running to the hundreds of milliseconds, NT is an even worse choice. "But NT is supposed to be more stable," you say. Yes, indeed...but "stable" in the context of one application not crashing another. For a single application, real-mode DOS is by far the most stable...it's a "Don't just do something...stand there!" kind of system, as compared to the "But I was only trying to help!" approach of Windows and NT.
And as Norman notes, loss of data is a real issue; you just never hear it mentioned by vendors of multitasking data-acquisition systems. Exactly what applications can run concurrently without risking your data? How will you know when you've exceeded the limit? Consider that nearly every day someone complains to the sound card tech newsgroup about "stuttering sound when I move the mouse," and ask yourself how to detect a similar corruption in physiological signals or machine vibrations, where people don't ordinarily listen to the data.
So Norman's wise advice bears repeating: Make sure you know what the other guy means by "real-time"!
Bob Masta
tech@daqarta.com
Dear DDJ,
If the Online Op/Ed "Windows: Linux's Secret Weapon," by Lou Grinzo, had been written a year ago I would have asked for the crystal ball and complimented him on being a visionary. As is, it just looks like he has been paying attention.
Right now, Corel has 16 programmers on staff working to complete WineLib (part of the Wine Project). Simply put, this is a stab at making Windows programs run on UNIX/Linux. The Lib portion is to ease the porting of Windows apps to native Linux. Of course, Corel is helping because they have more legacy Win32 code than any other software company (except Microsoft).
Item two. Drop by http://www.kde.org and read the archives of the kde-look@ and kde-devel@ lists. It looks like these people spend an awful lot of time working on usability and trying different approaches to making the interface ergonomic.
Kevin Forge
forgeltd@usa.net
Dear DDJ,
After reading Tim Pfeiffer's Online Op/Ed "Windows DLLs: Threat or Menace?," I felt compelled to respond. Pfeiffer's simple solution "don't use them" is actually not so simple. Not using any DLL is hardly possible: You would need to find statically linkable equivalents for GDI, USER, KERNEL, and other core components of Microsoft Windows. Not using any DLL except system DLLs does not solve the problem either: Many versioning problems that applications have are in fact due to updates of system DLLs (COMCTL32.DLL, for example) through the installation of "office suites" or web browsers.
The use of DLLs instead of static libraries indeed may be a potential cause for problems for which there is no easy fix. My own approach is to include version checks in each application, and to warn for conflicts. This at least keeps the end user informed. As for storing the locations of DLLs, I much prefer the use of local configuration files to cluttering the central registry (or the WIN.INI, for that matter). Installing a DLL in the Windows "system" directory is usually not a good idea.
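The version-check-and-warn approach amounts to comparing the installed DLL's version against the one the application was built with. A minimal sketch of that comparison (the DLL name and version numbers below are purely illustrative, and a real Windows application would read the installed version from the file's version resource):

```python
def parse_version(s):
    """Turn a dotted version string like '4.72.3110' into a
    tuple of ints so versions compare numerically, not textually."""
    return tuple(int(part) for part in s.split("."))

def check_dll(name, found, required):
    """Return a warning string when the installed DLL is older
    than the version the application expects, else None."""
    if parse_version(found) < parse_version(required):
        return f"Warning: {name} is version {found}, but {required} or later is expected"
    return None

msg = check_dll("COMCTL32.DLL", "4.70.1155", "4.72.3110")
```

Numeric tuple comparison matters here: comparing the raw strings would rank "4.9" above "4.72".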
Thiadmer Riemersma
thiadmer@compuphase.com
Dear DDJ,
I enjoyed the Online Op/Ed entitled "Windows: Linux's Secret Weapon," by Lou Grinzo at http://www.ddj.com. I would like to make one comment, however, regarding the "insular" mindset of Linux development to date.
The OS, as you probably know, is not only undergoing rampant development but also a redefinition of its clientele. While previous versions may well have been suited only to highly technical programmers with a UNIX background, that is changing. In the past year, usability has been drastically improved through work on the desktop environments KDE and GNOME, which have the ability to make many programs appear almost identical to those running on Windows (or the Mac). This alone is a great thing -- I'm so used to Windows, sometimes I have a hard time "thinking outside the box." But with the ever-present discussion on Linux's role in the desktop market, there has been debate about what to do with legacy utilities that relied on a command-line interface and textual config files. Joe Blow doesn't want to type everything, but Mr. Hacker doesn't want his hands tied by any given user interface, which tends to rigidly structure, if not limit, what you can do with those utilities. The command-line interface, after all, is extremely flexible.
The most common solution I've seen, and I think it's a good one, is to develop a GUI that controls the original command-line version. This is happening for all manner of applications, from package installation to desktop configuration, Apache (WWW server) configuration, you name it. In this way, users who resist or dislike the CLI can see the util's pretty side, while the grunts can still get the good old CLI they crave. It works to everyone's benefit. In fact, I think it would be shameful for more programmers, Windows or no, to rewrite solid, proven apps in order to restrict those apps to a GUI. Linux seems to be about openness and meeting everyone's needs. A commendable goal.
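The GUI-over-CLI pattern boils down to: the front end builds a command line and invokes the existing, proven tool, so the tool remains fully usable on its own. A toy sketch of such a wrapper (the `echo` command stands in for a real utility such as a package installer, and the button handler is hypothetical):

```python
import subprocess

def run_tool(args):
    """What a graphical front end's event handler does under the
    hood: assemble arguments and shell out to the unmodified CLI
    tool, capturing its output for display in the GUI. Here `echo`
    is a stand-in for the real utility."""
    result = subprocess.run(["echo"] + args, capture_output=True, text=True)
    return result.stdout.strip()

# A hypothetical "Install" button handler would do no more than:
output = run_tool(["install", "--verbose", "somepackage"])
```

The design benefit is exactly the one described above: one code path does the real work, and both audiences -- GUI users and CLI diehards -- drive it.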
Regarding the state of autofs and the mount/umount situation, I had a Mac friend who constantly complained that Windows couldn't tell if there was a disk in the drive or not, in that no icon showed up when you popped one in. There's no visual clue what's going on. And both Windows (DOS) and Mac sometimes attempt to access a disk that isn't there and complain about it. I'm not sure if there is a universal solution to this problem...usability means different things to different people. My Mac friend thought the icon thing was stupid. I think it's stupid to drag some icon to the trash just to get your disk out. I like the push button floppy drive. I don't really care for mount/umount, but I can't really think of a better solution.
I should point out that there is strong development in the areas of plug-and-play recognition and power management in the 2.2 kernel, and this is ongoing. Perhaps this year more people will regard Linux as a strong desktop contender. I already do.
Michael Coddington
madrid@bway.net
DDJ