Real-Time Systems


Forked Interrupt Systems

Marc L. Allen


Marc L. Allen is a senior design engineer with Hamilton Test Systems, Inc., a subsidiary of United Technologies Corporation, where he designs point-of-sale systems and equipment. He has a B.S. in computer engineering from the University of Arizona. He may be contacted at Hamilton Test Systems, Inc., 2202 N. Forbes Blvd., Tucson, AZ 85745.

I recently designed a system controller for a PC-based point of sale credit card authorization system. This controller is capable of handling up to four subordinate terminals and several miscellaneous communications and storage devices.

This application handles interrupts generated by keystrokes from subordinate terminals, communication activity, disk I/O, and an internal timer. These interrupts must be processed as quickly as possible while guaranteeing that every interrupt is processed.

This system generates enough interrupt activity that I couldn't run with interrupts disabled for fear of missing one, and in certain cases would need to service an incoming interrupt before I had finished dealing with a previous interrupt from the same device. To address these needs I settled on a forked interrupt system running in protected mode and developed using Intel's IC-286 compiler under MS-DOS.

A forked interrupt system utilizes a fork queue to serialize interrupts while minimizing the amount of time they are disabled. To do this, device drivers are broken up into two parts. The first handles the immediacy of the interrupt. Since interrupts are disabled during this portion of the driver, it should perform only the minimum work required. This normally includes acknowledging the device, clearing the interrupting condition, and (for input interrupts) reading the input data. Finally, this interrupt-disabled portion of the driver places the interrupt in the fork queue to be completed by the second, interrupt-enabled, portion.

The second part of the driver is activated by the fork queue task and performs the remaining interrupt processing. For communication devices, this portion might store incoming data or extract and send outgoing data, perform checksum or CRC calculations, and handle hardware handshaking details. For a timer interrupt, the interrupt-enabled portion would handle the effect of the timer event on the system. Listing 1 contains the two portions of a clock driver which uses this technique. The clock interrupts occur at some system-configurable interval and are used for task time-slicing and the handling of timer events on the tasks waiting for them.

The interrupt-disabled portion of the clock driver, timer_int() (Listing 1), is one of the simplest interrupt-disabled portions in the system. The timer interrupt is cleared, and then a utility routine is called to place the second half of the driver (the interrupt-enabled portion, alarm()) in the fork queue. fork_driver() effectively ends the interrupt-disabled portion of the clock driver by transferring control to the fork queue task. When this transfer occurs, the driver is suspended and is not resumed until another clock interrupt occurs. At that point, the driver completes the call to fork_driver() and continues to the top of the outer while loop to handle the new interrupt.

The meat of the driver is contained in alarm() (Listing 1). This interrupt-enabled code first informs the system that a significant event (a timeslice event) has occurred, increments a system tick counter, and processes any expired timers on the timer tick list. With interrupts enabled, other interrupts may occur and be placed in the fork queue while this driver is in operation. In fact, since the clock driver by design has no commonality between its two portions, a second clock interrupt can be placed on the fork queue while the present one is being handled. Naturally, if a driver can't keep up with its own device, it's eventually going to have some serious problems. But with a conservative queue size, the driver could get behind its device during a sudden burst of activity and still catch up during the following idle period. This can easily happen if many devices interrupt at the same time. Remember that each interrupt will suspend the current driver until the new interrupt can be placed into the fork queue.

The call to fork_driver() in Listing 1 is not strictly correct. fork_driver() actually takes an additional long (four-byte) argument, allowing the interrupt-disabled portion to pass any necessary data to the interrupt-enabled portion. Although the choice of a long argument was appropriate for my system, any size is acceptable. This argument is passed to the interrupt-enabled portion as its first parameter. In practice, this parameter may be a character received over a communications line, some kind of device identifier, or a device status. Those who like to play games with parameters can use the long argument to pass two integer or character values or even a structure containing four characters. This is not ANSI standard and certainly is not portable C; however, it does make certain operations much simpler. As the clock driver has no need for any data, dummy is used as a placeholder.

The final parameter passed to the routine is a pointer to the driver's acting 80286 Task State Segment (TSS), a structure which contains all the driver-specific information required by the system. I use the term "acting" because this TSS is not the original TSS for that driver. The original is reserved for the driver's interrupt-disabled portion. Otherwise, the original TSS might be active when the next device interrupt occurs, forcing a general protection fault while trying to activate a busy task.

Listing 2 shows how fork_driver() operates. Notice that if the system is running a normal task, the fork queue task starts up to handle the latest fork entry. If the fork queue task is already running, the entry will be taken care of in due course; and if the system is executing a system service call, that call is allowed to finish, after which the system scheduler starts the fork queue task. My system also contains a fork_continue() routine, which places an entry in the fork queue but returns control to the driver. fork_continue() is only used if the driver has more than one routine to fork; the last fork operation a driver performs should be through fork_driver().

The physical queue entry contains elements to store the address of the driver's original TSS, the address of the interrupt-enabled routine, the long parameter, and a link to the next element in the queue. I store the address of the original TSS so that the fork queue task can set up an environment identical to that of the driver before activating its interrupt-enabled portion. This allows a driver to switch to the activating task's Local Descriptor Table (LDT) and maintain the LDT association through the interrupt. The initial LDT switch would be performed when a system task initiates an I/O to the driver. Note that while the clock driver does not support direct I/O from a system task, it is activated by a number of system service calls regarding timeslicing and system timers. The interrupt-enabled portion does change LDTs to gain access to different tasks' parameter blocks which may be in local data areas.

Once started, the fork queue task (fork_execute(), Listing 3) will execute all queue entries, including those added during queue execution.

For each entry in the queue, the fork queue task creates an exact duplicate of the entry's original TSS, with one key exception: fork_start() (Listing 4) front-ends the entry's execution. It provides a stack environment for the entry routine to return to, so no special routines need to be called by the entry routine to exit the queue.

After building a copy of the TSS, the fork queue task performs a task switch to the new copy. The new task starts running at fork_start() and calls the entry routine, passing the four-byte parameter and the address of the TSS copy. When the entry routine returns, fork_start() task switches back to the fork queue task, which continues with the next queue entry.

Although this implementation of a fork queue works well for my application, it has some limitations. While the fork queue increases the number of interrupts that can be handled during a burst of activity, the extra overhead also increases the interrupt latency (the time from when an interrupt occurs until its processing is completed). Additionally, entry routines are not allowed to use system services in the normal fashion. To perform system services, I needed to place hooks allowing the entry routines to directly call internal functions that normal tasks can access only through the system service calls.

Future Directions

Presently, to run a routine at a very high priority I must have the calling task raise its priority, call the routine, and then lower its priority on return. Placing such routines on the fork queue would be much simpler. Because such a task would be a normal task routine, as opposed to a driver routine, it should have access to system services.

You could add system services capability to the fork queue by creating a real task for the fork queue. Presently, the fork queue task is an internal system task without all the information needed to handle system services. It isn't included in the system task table and isn't scheduled in the normal manner. Even if a fork queue entry could use system services, certain ones should be avoided or even ignored. Any service that requires the queue to block or wait would defeat the purpose of the fork queue.

Another possible extension to the forked interrupt system is a prioritized fork queue. Some devices may be considered more important than others. For instance, an imminent power failure interrupt should take higher precedence than a clock interrupt.

Conclusion

The forked interrupt system has shown itself to be a good way to serialize interrupts. Drivers are easier to write since reentrancy is not required. The fork queue allows these non-reentrant drivers to operate in an environment where interrupts are mostly enabled, allowing a faster burst rate of interrupts to be handled in a timely fashion.