Distributed Computing and the OSF/DCE

Building client/server apps for distributed systems

John Bloomer

John is a staff scientist at GE's corporate R&D center, working on distributed multimedia and video applications. His publications and patents cover the areas of distributed computing, medical imaging, information theory, and neural nets. He is author of Power Programming with RPC, published by O'Reilly & Associates.


Distributed computing typically involves two or more computers on a network, computing in cooperation and sharing resources ranging from CPU cycles to databases. While distributed systems provide end users advantages such as greater reliability and flexibility (as compared to centralized environments), developers face a number of hurdles when creating distributed applications.

The Open Software Foundation's (OSF) Distributed Computing Environment (DCE) is an integrated suite of tools and services that supports the development of distributed applications. DCE provides interoperability and portability across heterogeneous platforms on both LANs and WANs. A single global namespace makes the resources across interconnected LANs and WANs look like a hierarchical file system (named via X.500 or DNS) accessed through the directory-services API. A user on the distributed system can share resources (data, services, or whatever) by finding them in the namespace or placing a mention of them there.

In addition, DCE Release 1.1, available for UNIX, MVS, Windows, HP-UX, Alpha NT, VMS, and OS/2, provides a consolidated interface for system administration throughout DCE, plus remote startup and shutdown of services. It also provides a generic security-service API (GSSAPI), which allows non-RPC-based systems to take advantage of DCE security; extended registry attributes, which allow various proprietary systems to be registered in the DCE security registry; and security-delegation and auditing capabilities. Release 1.1 also supports internationalization, including standardized POSIX and X/Open interfaces that provide character-code interoperability.

DCE makes client/server architectures and data sharing possible using a remote procedure call (RPC) paradigm. In the client/server model, a client interested in a particular resource on the network uses a formally defined application protocol to find the server managing that resource and request a service from it. The server sends a reply, completing the "request-reply cycle."

Typically, servers are continuously running daemons that specialize in performing a few functions or services. The nature of the client/server communication is often synchronous, with the client waiting for the server to send a response. This need not be the case, though: application protocols can also be designed to support asynchronous operation.

OSF/DCE Elements

As Figure 1 illustrates, DCE resides between distributed applications and the operating system, network transports, and protocols. It isolates the programmer from low-level, platform-specific nuances when communicating across a network between heterogeneous machines.

Threads are the most fundamental component specified by DCE. DCE needs to be able to run multiple threads of execution simultaneously on the involved machines to facilitate things like asynchronous I/O and concurrent servicing. Since many operating systems do not inherently support multithreaded execution, a user-level (nonkernel) threading package is included with DCE and is compliant with POSIX 1003.4a, Draft 4. This package, known as "DCE Threads," gives users the ability to create, schedule, synchronize, and otherwise manage multiple threads in a single process.

Often, clients and servers are first implemented as single-threaded processes. As the application evolves, it may require that multiple RPCs be placed from one client at the same time. Servers may have to field many requests at once, or applications may need to maintain a live user interface while placing RPCs. To achieve this, you'll need to split processing at the local machine into threads--to allow one thread, for example, to remain blocked, waiting on I/O, while another thread executes.

Use of communication and synchronization agents between threads is crucial to multithreading. DCE Threads provides mutual exclusion objects (mutexes) to limit access to associated resources to one thread at a time. Resources can be locked and unlocked to be shared among threads. Condition variables provide a mechanism for threads to be notified when another thread has completed some task or access. A thread can effectively wait for another thread to signal it and for a specified condition to be met before continuing. A joining mechanism allows one thread to wait for another to complete. Combined with the DCE thread-scheduling tools and exception-handling API, an elaborate multitasking system can be quickly assembled.

The network data representation (NDR) in DCE specifies a standard, architecture-independent format for data, to facilitate data transfers between heterogeneous architectures. This serial encode/decode scheme insulates application code from differences in data types, enabling application portability and interoperability. NDR uses a receiver-makes-right paradigm, making it the receiver's job to convert the data into the form required by its own architecture. Compared to the more common single-canonical-form approach used in Open Network Computing (ONC) RPC, the receiver-makes-right scheme distributes the translation load across machines and avoids any conversion at all when sender and receiver share the same architecture.

DCE RPC is layered on top of DCE Threads, and all DCE services are based on RPC. RPCs are calls to procedures outside your address space that look and feel like local procedure calls. DCE RPC specifies its own network-access mechanisms, different from the ONC RPC on which systems like NFS are based. For example, a local database query might look like Example 1(a), while the equivalent query to a potentially remote database would be similar to Example 1(b). The parameter someRemoteBank may be optional, depending on whether you want a specific bank (service) or one that is automatically searched out and used at run time according to some externally specified criteria.

A protocol or interface compiler is used to generate stubs (containing low-level network-interface code) from a textual definition that describes the application protocol between clients and servers; see Figure 2. The stubs generated by the compiler allow a network client to execute a procedure on a remote host as if it were a local procedure call in its own address space. The stubs include numerous calls to the services in Figure 1, and they handle packaging and encoding/decoding of procedure parameters (both outward and inward bound). This packaging of parameters is called "marshaling."
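For the bank query of Example 1, the textual definition fed to the interface compiler might resemble the following. This is a hypothetical sketch in DCE IDL syntax; the UUID is a placeholder of the kind uuidgen emits, and the interface and parameter names simply echo Example 1.

```
[uuid(2fac8900-31f8-11ce-b763-08002b2bd01d), version(1.0)]
interface bank
{
    /* The stubs generated from this definition marshal the parameters;
     * [in] marks data flowing from client to server. */
    long getAcctBalance(
        [in, string] char acctName[]
    );
}
```

From this one file, the compiler produces the client stub, server stub, and shared header listed in Table 1.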

DCE Services and Utilities

As we move away from centralized systems, we need tools to locate resources (data, services, and the like) on the network, keep systems synchronized, and provide network security. Consequently, DCE provides a directory service, distributed time service, and security service, which are layered on top of the basic mechanisms.

The directory service is used on a network to store and retrieve information about distributed resources--users, machines, and services--with specific attributes such as the location of a home directory or the host a service is running on. Integral to the directory service is the concept of a "cell." A DCE cell is a group of machines running a minimum set of DCE services and sharing a common network. For DCE applications to function, directory, security, and distributed time services must be running on machines in the cell.

Additionally, machines in a cell can run any combination of other DCE services, including GDS, DFS, and diskless support, and may require clerk daemon processes to cache and otherwise forward requests for standard DCE services. Basically, a cell is the smallest organization of machines on which you can perform DCE computing. Vended DCE software and development environments (layered products) typically ship in per-machine or per-cell measures. Linkages across cells are provided via the directory service's global naming tools.

Figure 3 illustrates the connectivity of DCE service components. The cell directory service (CDS) manages naming and directory services for a cell and is the first to be consulted for resource location. Should the resource not be local to the cell, CDS must consult other connected cells. If the DCE global directory service (GDS, an X.500 implementation) or Domain Name Service (DNS) is installed, the global directory agent (GDA) is directed to consult them to determine in which known foreign cell the resource is listed. A DCE application can use the X/Open directory service (XDS) API standard to access the DCE directory-service library. The XDS library can determine from the format of the name to be looked up whether to direct the lookup to CDS or GDS, making your source code independent of service location.

A CDS namespace server for a cell stores names and other information in databases called "clearinghouses." These databases have a hierarchical structure similar to a file system, with nodes that can be object entries (network resources), soft links to other entries in the namespace, or child pointers to subordinate directories.

Database nodes can be master or read-only replicas. Read-only replicas are updated from master replicas either through immediate update requests or through a "skulking" process. The latter handles updating replicated entries in databases whose server may have been unavailable when the master was altered and the changes first propagated. CDS servers skulk on their own, updating themselves from the associated master nodes--on demand, when other management activity requires it, or automatically in the background. Master entries can be accessed read/write if the user meets the prescribed security and access-control constraints.

Client applications in search of resources actually send their lookup requests through local "clerk" processes. As Figure 4 shows, a CDS clerk first checks its cache of unexpired transactions to see if naming information can be returned immediately. If no cache entry matches, the request is forwarded to the cell CDS server. In the third and fourth steps, the CDS server searches its clearinghouse databases for the requested information. In step five, the closest possible match is returned. This could range from nothing to the actual object entry with all the information the client needs to locate and bind to a server. It might instead be a pointer to another CDS server to query--telling the clerk, for example, that all print objects or services are located in another cell, thereby linking cell namespaces. The clerk continues the strategy prescribed to it until it finds the requested resource, finally returning the requested data to the client and caching any useful information. All this is transparent to the client.

You can find the CDS daemon (service) cdsd, the GDA daemon gdad, and the GDS daemon gdsd running on a cell's CDS master host. Each machine's cdsadv process proactively looks for networked CDS servers and receives unsolicited broadcast advertisements from them to build cell-to-cell linkages. cdsadv also spawns any necessary CDS clerk processes (cdsclerk) for caching and the like. cdscp is a control program acting as a client interface to cdsd; the cell administrator uses it to manage (add, delete, and so on) namespace entries. (cdsbrowser is a handy Motif application that is also a cdsd client.)

The security and time services are vital to the existence of a cell. The DCE security-service daemon secd provides a way for clients and servers to prove their identities. Identities (users, servers, and computers) known to the security service are called "principals." Entries for these entities are stored in a database called the "registry," which contains group, organization, account, and administrative-policy information in addition to principal definitions. A separate registry service maintains slave replicas of the registry to increase availability and aid in the management of user and group information. The rgy_edit command gives the cell or security-administrator principal a way to manage the information in the registry; this information is usually derived from /etc files using other DCE setup utilities. The security service also includes authentication and privilege services, in the form of libraries that call the security services.

The authentication service provides a trustworthy principal-identification scheme. DCE system users log into principal accounts with the dce_login password-checking utility, and a secret key shared with the authentication system is generated. Credentials have finite lifetimes, and identities must be periodically reverified via the authentication service. To reduce the chance of tampering, DCE uses an extended version of the Kerberos shared secret-key authentication encryption scheme. The privilege service uses the verified principal's identity to determine the groups to which the user belongs. Privilege-attribute certificates (PACs) establish the rights of DCE principals to access networked services or resources; DCE treats these networked resources as objects, each with a unique identifier, and objects have methods (equivalent to procedures) that make up services.

The security service also includes an extensive access control list (ACL) environment. Access rights to network resources are determined by using the principal's identity and group membership to consult a list associated with each resource. ACLs can be managed with the acl_edit utility. Most standard DCE services use ACLs; cdsd, for example, uses ACLs to enforce read/write access on the clearinghouse. ACL interaction at a server is managed through a boilerplate code module known in DCE lingo as the "ACL Manager."

The DCE distributed time service (DTS) is essential for maintaining time stamps throughout the DCE system--for service-database updates, credential checks, and file-system management, among other uses. The dtsd daemon runs on each DCE machine. Most instances are configured as clerks, responsible for retrieving the current time and adjusting the local clock. Those configured as servers are responsible for synchronizing time among themselves as well as performing clerk tasks. A master server in the cell may establish the de facto notion of time for that cell, derived from its own clock or from external sources such as an NTP daemon or an external clock device. The dtscp control program is a client interface to dtsd, allowing the administrator to configure and manage DTS or change the nature of the background updates taking place across a cell.

Threads Package

An RPC inherits its synchronous behavior from the local procedure calls within the stubs. Unless asynchronous programming is used to facilitate multiple concurrent threads of execution or nonblocking I/O, a server is capable of servicing only one request at a time and a client is blocked while waiting for the server to return a reply. Figure 5 illustrates the steps behind a synchronous RPC.

Asynchronous programming tools native to most versions of UNIX can provide nonblocking (remote) procedure calls: forking, multithreading, or lightweight processes; asynchronous, nonblocking reads/writes through I/O control calls; or event-driven programming such as X11, signals, timers, and the like.

All of the DCE RPC libraries and services are thread-safe, or reentrant, making it possible for multiple threads of execution to run through the same code at the same time. For application programmers, this means any resources shared between threads of execution must be independently managed or owned by a thread. Synchronization and locking primitives exist to make sequencing of tasks and sharing of resources possible. The DCE Threads (pthreads) package implements Draft 4 of the POSIX 1003.4a standard, plus some additional routines carrying the _np ("not portable") suffix. Note that any libraries you link against must also be reentrant, or "thread safe."

A thread can be in one of four states: waiting, ready, running (actually executing), or terminated. In Figure 6, for example, threads A through Z may be vying for CPU time, as moderated by the priority-driven, preemptive scheduling algorithm you've selected. When thread A executes, it becomes blocked on I/O, yields control to thread Z, and is marked as waiting until the I/O condition clears. When thread Z executes, it is preempted by thread B, possibly because a time-sharing scheme was specified. Several types of scheduling are available within pthreads: first-in first-out, round-robin, and three types of time-slicing across all priorities.

The Mechanism Behind an RPC

Figure 7 outlines the steps behind performing RPCs. Notice that the client and server processes perform all their network communication--RPCs to directory or endpoint servers--through another entity called the "RPC run time." Before an RPC can be performed, a client must get a service address and other information necessary to "bind" the client and server together. You can do this explicitly in your client program or implicitly, or you can delegate the responsibility automatically to the run-time libraries. Binding information at a client resides behind a "binding handle" and includes: a protocol sequence specifying the network, transport, and RPC protocols to use; network-address information, including the host name and the endpoint on that host at which a service is listening; the transfer syntax (really a nonissue here); and the version number of the client/server RPC interface.

Step 1: The server registers itself with the system. This includes exporting into the local namespace some of the information necessary for building the client/server communication channel, or binding. Since this is often done as a supervisory function, without a specific instance of a service up and running on a host, only part of the information necessary for binding is exported. Essentially, a mapping between an interface specification and a server host name, with acceptable protocol sequences, is passed to the cell-namespace service. This service listing is available to all hosts in that cell, and to other cells that can access it through global directory services. Protocol sequences will be discussed in detail later. To complete the client/server binding, specific endpoint information is necessary. On startup, a server must register the endpoints it will use with the local endpoint-mapping service, rpcd; today these take the form of UDP or TCP port numbers. Where servers wish to use well-known endpoints, these are typically established in the interface definition, and rpcd never gets involved. The CDS itself can be found without consulting any other directory service, as it runs on a predefined host, and the endpoint-mapping service on each machine runs at a well-known port.

Step 2: The client consults the directory service. The cell directory service attempts to match service interfaces registered with it (or with peers it can contact) against those asked for by the client. Interfaces are registered by universal unique identifiers (UUIDs). The directory service returns, one at a time as demanded, matching interfaces that meet version-compatibility criteria. When successful, the client imports the necessary part of the binding information, including at least the server's host name.

Step 3: The client requests a specific service procedure. With only partial binding information, the first RPC is directed to the server host's endpoint-mapping service. There the binding information is completed with endpoint (port) information, and the call is passed on to the target service. The server replies if possible.

Step 4: Subsequent RPCs from the client are placed directly with the server, as the binding is now complete.
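At the client, the import of Step 2 can be sketched with the DCE RPC run-time import routines. This fragment is a sketch, not a standalone program: it assumes stubs generated for a hypothetical rim interface (whose ifspec follows the standard appn_v1_0_c_ifspec naming convention), a made-up namespace entry, and omits all error handling.

```
/* Explicit binding at the client via the DCE RPC name-service import calls. */
rpc_ns_handle_t      import_ctx;
rpc_binding_handle_t binding;
unsigned32           st;

rpc_ns_binding_import_begin(rpc_c_ns_syntax_default,
    (unsigned_char_t *) "/.:/subsys/rim_server",  /* hypothetical entry name */
    rim_v1_0_c_ifspec,          /* interface spec from the idl-generated stub */
    NULL,                       /* no object UUID */
    &import_ctx, &st);
rpc_ns_binding_import_next(import_ctx, &binding, &st);  /* one candidate */
rpc_ns_binding_import_done(&import_ctx, &st);
/* The first RPC on 'binding' is partial; rpcd on the server host
 * completes it with the endpoint (Step 3). */
```

Calling rpc_ns_binding_import_next repeatedly walks through all compatible bindings, which is how a client can fail over among candidate servers.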

The RPC run-time API and stubs isolate the client and server development process from the nitty-gritty details of the DCE service and utility APIs. This reduces volumes of function calls to a manageable number of rpc_ calls, at the cost of somewhat reduced flexibility. As Figure 8 shows, your application code will depend heavily on the code generated into the stub, as well as on the run-time API. Most applications will require only a few RPC run-time calls, and often no direct calls to the DCE services and utilities.

Developing a Distributed Application

To illustrate how you develop a typical distributed application, I've taken an image-database management application from a single process to a network form. Listing One is im.c, the flat-file, single-process implementation of the database, while Listings Two and Three are rim_client.c and rim_server.c, respectively, the distributed client/server implementation of the database. The entire source code (including support files) for both the local and client/server distributed versions is available electronically; see "Availability," page 3. Noting the differences between the single-process and distributed versions of this database makes it clear that developing a distributed application can be a complicated process, and walking step-by-step through the development process is beyond the scope of this article. Consequently, I'll provide an overview of the steps required to move from a local to a distributed application. For more information, I recommend the works listed at the end of this article.

Figure 9 illustrates how to develop DCE applications. Figure 10 and Table 1 list the files you'll author or generate while developing the client and server parts of the application. The files you are responsible for developing are shown as highlighted ovals.

You start by developing the protocol-specification file appn.idl (where appn is the application name). uuidgen -i > appn.idl is run once to start things off, generating a UUID by which the interface will be known. (The attribute-control file or appn.acf is an optional way to alter the behavior of the stubs produced.) After running the protocol compiler with a command such as idl appn.idl to produce the header file and stubs, you proceed to develop the client and server functions. On the client side this may be solely a main procedure with a user interface to the remote procedures. At the server, you must not only codify the procedures to be executed (the service "manager" code), but also develop a main that initializes the server when first invoked. You then compile and link your client and server code with the associated stubs to create client and server executables.
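The build steps above can be captured in a makefile. This is a hypothetical sketch: the idl flags, compiler options, and library names (-ldce, -lpthreads) vary by platform and DCE release, and make recipes must be indented with tabs.

```make
# Hypothetical build rules for the appn client and server.
appn.h appn_cstub.c appn_sstub.c: appn.idl
	idl appn.idl

appn_client: appn_client.c appn_cstub.c appn.h
	cc -o appn_client appn_client.c appn_cstub.c -ldce -lpthreads

appn_server: appn_server.c appn_sstub.c appn.h
	cc -o appn_server appn_server.c appn_sstub.c -ldce -lpthreads
```

Because the stubs and header are regenerated whenever the .idl file changes, expressing the dependency in make keeps client and server from drifting out of sync with the interface.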

Code-generation technology like this demands disciplined build automation and source-code management. Use of make and a source-code management system such as SCCS is advisable for even modest-sized projects. Note that the default client/server header filename is appn.h, so you'll have to use another header filename to isolate your own generic definitions and prototypes; I'll use appn_util.h here.

Debugging a Distributed Application

Distributed-application debugging can be very challenging. What you have in your favor is the similarity between remote and local procedure-call models. It's extremely productive to first link the service procedures directly with the client side of your application, as shown in Figure 11. By sidestepping the network and RPC calls, you can expose and debug parameter passing and overall functionality before distributing the application. It may be necessary to use preprocessor directives in your client and server code to make this linking possible.

Once local debugging is complete and the functionality of the client and server has been fleshed out, you can run the client and server applications in separate debuggers, each in its own process. Be sure to use a thread-aware debugger or inhibit threading at the client and server. It is at this stage that additional violations of the protocol prescribed by your .idl file are found.

The Image Database Application

The local database application (Listing One) makes no major assumptions about the operating system or the C and system-support libraries. It offers a way to add, extract, delete, and list entries in a database designed for imagery, organized as a flat file with the structure embedded as headers before each entry. It provides local users with a repository through which to share images, thereby conserving disk space and allowing version management. Although it retrieves and records user identity (thereby requiring a notion of system user ID), it cannot enforce any access-control or security measures, and it can only access database files in a file system that all interested users can read and write.

You might argue that if the necessary machines were networked and a package like PC-NFS or OSF-DFS installed, common mount points would make image databases accessible across the network. But what if all machines aren't sharing the same file system, or they (as is common) have different notions of mount points, or user identity is not maintained or consistent across the network? What if you want to give remote dial-in users access without bringing up a shared file system? Most importantly, what if one machine is to be dedicated to serving image archive requests, perhaps because it has optical drives or because it has the horsepower needed to compress and decompress images? For these reasons and more, it is important to think about how you would develop a truly distributed version of this application.

For obvious reasons, you'll want to establish an interface to this database that's accessible across the network as a service. This will allow you to craft different clients to achieve different purposes, all sharing the same data through the same consistent interface. A two-tiered information system results. Tools such as Visual Basic, Visual C++, and PowerBuilder are good for crafting GUIs that access commercial databases to form two-tiered systems. Nontrivial database applications warrant adding an additional layer of services or proxies between the clients and the data or other resources being shared. A middle layer isolates the reusable low-level routines used by different types of clients as separate services. This three-tiered strategy allows clients to keep network interactions at an abstract, potentially database-independent level, thereby concentrating on the user interface and on unique client-processing needs.

Several questions regarding client/server partitioning must be addressed before writing the local-procedure-calling code, as must questions related to the RPC system you're using.

Conclusion

The DCE system and API are broad, deep, and potentially intimidating. Nonetheless, distributed computing will likely shape the future of computing in the coming years, and programmers will need to come to grips with this complexity.

References

Bloomer, John. Power Programming with RPC. Sebastopol, CA: O'Reilly & Associates, 1992.

Borghoff, L.M. Distributed File/Operating Systems. Berlin: Springer-Verlag, 1992.

Corbin, John. The Art of Distributed Applications. Berlin: Springer-Verlag, 1991.

Lyons, Tom. Network Computing System Tutorial. Englewood Cliffs, NJ: Prentice-Hall, 1990.

Rosenberry, Ward. Understanding DCE. Sebastopol, CA: O'Reilly & Associates, 1992.

Shirley, John. Developing Distributed Applications with DCE. Sebastopol, CA: O'Reilly & Associates, 1992.

Stevens, W. Richard. Advanced UNIX Network Programming. Englewood Cliffs, NJ: Prentice-Hall, 1992.

Stevens, W. Richard. UNIX Network Programming. Englewood Cliffs, NJ: Prentice-Hall, 1990.

Example 1: (a) Local database query; (b) remote database query.

(a)
      bucks = getAcctBalance(acctName);
(b)
      bucks = getAcctBalance(acctName, someRemoteBank);

Figure 1: The elements of DCE.
Figure 2: RPCs look and feel like local calls.
Figure 3: Connectivity of DCE directory-services components.
Figure 4: Under the hood of a CDS lookup.
Figure 5: Flow of control during a synchronous RPC.
Figure 6: States of a thread.
Figure 7: The steps in binding a sequence of RPCs.
Figure 8: The relationship between DCE application, stub, run-time, and service/utility operations.
Figure 9: RPC distributed-application development steps.
Figure 10: DCE RPC client and server development steps.
Figure 11: Linking around RPC calls lets you debug in a single process first, thereby speeding development.

Table 1: Files for DCE RPC client and server development.

Files You Develop             Purpose
appn.idl                      Interface-description file
appn.acf                      Attribute-control file (optional)
appn_client.c                 Client functions, including main()
appn_server.c                 Server functions (manager) and initialization

Produced by Protocol Compiler
appn.h                        Client/server header file
appn_cstub.c                  Client stub
appn_sstub.c                  Server stub
appn_caux.c                   Client auxiliary functions (optional)
appn_saux.c                   Server auxiliary functions (optional)

Target Executables
appn_server                   Server
appn_client                   Client

Listing One


#include <stdio.h>
#include <string.h>
#include <pwd.h>
#include "im.h"
#define USAGE() { fprintf(stderr, "Usage: %s ", argv[0]); \
    fprintf(stderr, "\t-a imageName \"comments\" width height depth compressType"); \
    fprintf(stderr, "\n\t\t\t\t\tadd an image from file 'imageName'\n"); \
    fprintf(stderr, "\t\t-d imageName\t\tdelete an image\n"); \
    fprintf(stderr, "\t\t-x imageName\t\textract an image to file 'imageName'\n"); \
    fprintf(stderr, "\t\t-l\t\t\tlist contents of archive\n"); \
    exit(1); }
#define PRINTHEAD(pI) { \
      printf("name:\t%s\n\towner: %s\n\tcomments: %s\n\tdate: %s\n", \
        pI->sN, pI->sO, pI->sC, pI->sD); \
      printf("\tbytes: %d\twidth: %d\theight: %d\tdepth: %d\tcompress: %d\n", \
        pI->b, pI->x, pI->y, pI->d, pI->c); }
image          *readImage();
FILE           *fp;
main(argc, argv)
  int             argc;
  char           *argv[];
{
  pStr            expectEmpty;  /* a NULL if success, else an error string */
  imageList      *pIL;
  image          *pI;
  pStr            sImageName;
  int             arg;
  /* Parse the command line, doing local procedure calls as requested. */
  if (argc < 2)
    USAGE();            /* USAGE() exits; no further action needed */
  for (arg = 1; arg < argc; arg++) {
    if (argv[arg][0] != '-')
      USAGE();
    switch (argv[arg][1]) {
    case 't':
      arg++;
      break;
    case 'a':
      if ((argc - (++arg) < 6) || !(pI = readImage(argv, &arg)))
        USAGE();
      expectEmpty = add(pI);
      if (expectEmpty[0] != '\0')
        fprintf(stderr, "local call failed: %s", expectEmpty);
      break;
    case 'd':
      if (argc - (++arg) < 1)
        USAGE();
      sImageName = (pStr) strdup(argv[arg]);
      expectEmpty = delete(sImageName);
      if (expectEmpty[0] != '\0')
        fprintf(stderr, "local call failed: %s", expectEmpty);
      break;
    case 'x':
      if (argc - (++arg) < 1)
        USAGE();
      sImageName = (pStr) strdup(argv[arg]);
      expectEmpty = extract(sImageName, &pI);
      if (expectEmpty[0] != '\0')
        fprintf(stderr, "local call failed: %s", expectEmpty);
      else
        (void) writeImage(pI, sImageName);
      break;
    case 'l': {
      if (!(pIL = list()))
        fprintf(stderr, "local call failed:");
      else
        for (pI = pIL->pImage; pIL->pNext; pIL = pIL->pNext, pI = pIL->pImage)
          PRINTHEAD(pI);
      break;
    }
    default:
      USAGE();
    }
  }
}
image          *
readImage(argv, pArg)
  char          **argv;
  int            *pArg;
{
  static image    im;
  char            buffer[MAXBUF];
  static char     null = '\0'; /* static: im.sD points here after we return */
  u_int           reallyRead;
  u_int           imageSize = 0;

  /* Build the header information then look at stdin for data. */
  im.sN = (pStr) strdup(argv[*pArg]);
  im.sO = UIDTONAME(getuid());
  im.sC = (pStr) strdup(argv[++*pArg]);
  im.x = atoi(argv[++*pArg]);
  im.y = atoi(argv[++*pArg]);
  im.d = atoi(argv[++*pArg]);
  im.c = atoi(argv[++*pArg]);
  im.sD = &null;    /* don't forget to terminate those empty strings! */
  im.data = (char *) malloc(0);
  if (!(fp = fopen(im.sN, "r"))) {
    fprintf(stderr, "error opening imageName \"%s\" for reading\n", im.sN);
    return (0);
  }
  while ((reallyRead = fread(buffer, 1, MAXBUF, fp)) > 0) {
    im.data = (char *) realloc(im.data, imageSize + reallyRead);
    (void) bcopy(buffer, im.data + imageSize, reallyRead);
    imageSize += reallyRead;
  }
  im.b = imageSize;
  fclose(fp);
  return (&im);
}
writeImage(pImage, sImageName)
  image          *pImage;
  pStr            sImageName;
{
  if (!(fp = fopen(sImageName, "w"))) {
   fprintf(stderr, "error opening imageName \"%s\" for writing\n", sImageName);
   return (1);
  }
  PRINTHEAD(pImage);
  if (fwrite(pImage->data, 1, pImage->b, fp) != pImage->b) {
    fprintf(stderr, "error writing imageName \"%s\" data\n", sImageName);
    fclose(fp);
    return (1);
  }
  fclose(fp);
  return (0);
}


Listing Two


/* rim_client.c - client application for remote image database service  */
#include <malloc.h>
#include <stdio.h>
#include <string.h>
#include <pwd.h>
#include <dce/rpc.h>
#include <pthread.h>
#include "rim.h"
#include "rim_util.h"
#define USAGE() { fprintf(stderr, "commands:\n"); \
        fprintf(stderr, \
            "\ta imageName \"comments\" width height depth compressType"); \
    fprintf(stderr, "\n\t\t\t\t\tadd an image from file 'imageName'\n"); \
    fprintf(stderr, "\td imageName\t\tdelete an image\n"); \
    fprintf(stderr, "\tx imageName\t\textract an image to file 'imageName'\n"); \
    fprintf(stderr, "\tl\t\t\tlist contents of archive\n"); \
    fprintf(stderr, "\tq\t\t\tquits\n"); }
#define PRINTHEAD(pI) { \
      printf("name:\t%s\n\towner: %s\n\tcomments: %s\n\tdate: %s\n", \
        pI->sN, pI->sO, pI->sC, pI->sD); \
      printf("\tbytes: %d\twidth: %d\theight: %d\tdepth: %d\tcompress: %d\n", \
        pI->b, pI->x, pI->y, pI->d, pI->c); }
typedef struct work_arg {
  pthread_t      *thread_id;
  int             server_num;
  char           *server_name;
  rpc_binding_handle_t bind_handle;
  image          *pImage;
} work_arg_t;
image          *readImage();
FILE           *fp;
#define MAX_SERVERS     100
pthread_mutex_t     WorkMutex;
pthread_cond_t      WorkCond;

/* The single-arg wrapper routine around the list() RPC accessed by each
 * thread we ask to list - must be reentrant  */
void list_wrapper(work_arg_t * work_arg_p)
{
  imageList      *pIL;
  image          *pI;
  if (!(pIL = list(work_arg_p->bind_handle))) {
    fprintf(stderr, "remote call failed:");
    pthread_exit((pthread_addr_t *)1);
  } else {
    for (pI = pIL->pImage; pIL->pNext; pIL = pIL->pNext, pI = pIL->pImage)
      PRINTHEAD(pI);
    iLFreeOne(pIL);
  }
  pthread_exit((pthread_addr_t *)0);
}
/* the wrapper around the add() RPC */
void add_wrapper(work_arg_t * work_arg_p)
{
  pStr            expectEmpty;  /* a NULL if success, else an error string */
  expectEmpty = add(work_arg_p->bind_handle, work_arg_p->pImage);
  if (expectEmpty[0] != '\0') {
    fprintf(stderr, "remote call failed: %s", expectEmpty);
    pthread_exit((pthread_addr_t *)1);
  }
  pthread_exit((pthread_addr_t *)0);
}
/* the wrapper around the delete() RPC */
void delete_wrapper(work_arg_t * work_arg_p)
{
  pStr            expectEmpty;  /* a NULL if success, else an error string */
  expectEmpty = delete(work_arg_p->bind_handle, work_arg_p->pImage->sN);
  if (expectEmpty[0] != '\0') {
    fprintf(stderr, "remote call failed: %s", expectEmpty);
    pthread_exit((pthread_addr_t *)1);
  }
  pthread_exit((pthread_addr_t *)0);
}
/* the wrapper around the extract() RPC */
void extract_wrapper(work_arg_t * work_arg_p)
{
  image          *pI;
  pStr            expectEmpty;  /* a NULL if success, else an error string */
  expectEmpty = extract(work_arg_p->bind_handle, work_arg_p->pImage->sN, &pI);
  if (expectEmpty[0] != '\0') {
    fprintf(stderr, "remote call failed: %s", expectEmpty);
    pthread_exit((pthread_addr_t *)1); 
  } else {
    (void) writeImage(pI, pI->sN);
    iFreeOne(pI);
  }
  pthread_exit((pthread_addr_t *)0);
}
main(argc, argv)
  int             argc;
  char           *argv[];
{
  int             server_num, nservers;
  work_arg_t      work_arg[MAX_SERVERS];
  char           *server_name[MAX_SERVERS];
  rpc_binding_handle_t *binding;
  /* Check usage and initialize. */
  if (argc < 2 || (nservers = argc - 1) > MAX_SERVERS) {
    fprintf(stderr, "Usage: %s server_name ...(up to %d server_name's)...\n",
        argv[0], MAX_SERVERS);
    exit(1);
  }
  for (server_num = 0; server_num < nservers; server_num += 1) {
    server_name[server_num] = (char *) argv[1 + server_num];
    /* Import binding info from namespace and annotate handles for security. */
    binding = importAuthBinding(rim_v1_0_c_ifspec,
                SERVER_PRINC_NAME, server_name[server_num],
                '\0', 1, rpc_c_protect_level_pkt_integ,
                rpc_c_authn_dce_secret, '\0', rpc_c_authz_name);
  }
  /* Initialize mutex and condition variable. */
  printf("Client calling pthread_mutex_init...\n");
  if (pthread_mutex_init(&WorkMutex, pthread_mutexattr_default) == -1) {
    dce_err(__FILE__, "pthread_mutex_init", (unsigned long) -1);
    exit(1);
  }
  printf("Client calling pthread_cond_init...\n");
  if (pthread_cond_init(&WorkCond, pthread_condattr_default) == -1) {
    dce_err(__FILE__, "pthread_cond_init", (unsigned long) -1);
    exit(1);
  }
  /* Initialize work args that are constant throughout main loop. */
  for (server_num = 0; server_num < nservers; server_num += 1) {
    work_arg[server_num].server_num = server_num;
    work_arg[server_num].server_name = server_name[server_num];
    work_arg[server_num].bind_handle = binding[server_num];
    work_arg[server_num].pImage = (image *) malloc(sizeof(image));
    work_arg[server_num].thread_id = (pthread_t *) '\0';
  }
  /* Transaction loop -- exits with a 'q' and reaps threads. */
  while (1) {
    /* Per-loop initialization.  We're single-threaded here, so locks and
     * reentrant code are unnecessary. For each server... */
    char            line[256];
    char            args[7][256];
    int             argc, argcc;
    void           *local;
    /* scrape up to 7 args from the command line */
    if (!fgets(line, sizeof(line), stdin))  /* gets() can overrun line[] */
      break;
    argc = sscanf(line, "%s%s%s%s%s%s%s", args[0], args[1], args[2], args[3],
         args[4], args[5], args[6]);
    server_num = (server_num + 1) % nservers;   /* NEXT! */

    local = (void *)'\0';
    switch (tolower(args[0][0])) {
    case 'a':
      argcc = 1;
      if ((argc != 7) ||
          !(work_arg[server_num].pImage = readImage(args, &argcc)))
        USAGE()
      else
        local = &add_wrapper;
      break;
    case 'd':
      if (argc != 2) USAGE()
      else {
        work_arg[server_num].pImage->sN = (pStr) strdup(args[1]);
        local = &delete_wrapper;
      }
      break;
    case 'x':
      if (argc != 2) USAGE()
      else {
        work_arg[server_num].pImage->sN = (pStr) strdup(args[1]);
        local = &extract_wrapper;
      }
      break;
    case 'l':
      local = &list_wrapper;
      break;
    case 'q':
      /* If we ever started a thread for a server, wait for it to finish
       * and print its exit status. The threads were never detached, so
       * their status is still available to pthread_join(). */
      for (server_num = 0; server_num < nservers; server_num++) {
        pthread_addr_t status;
        if (work_arg[server_num].thread_id) {
          pthread_join(*(work_arg[server_num].thread_id), &status);
          printf("thread %d exit status %d\n", server_num, (int) status);
        }
      }
      exit(0);
    default:
      USAGE();
      break;
    }
    if (local) {
      fprintf(stderr, "threading for the call to server %s...\n", 
    server_name[server_num]);
      work_arg[server_num].thread_id = (pthread_t*)malloc(sizeof(pthread_t));
      pthread_create(work_arg[server_num].thread_id, pthread_attr_default, 
        (pthread_startroutine_t) local, (pthread_addr_t) &work_arg[server_num]);
    }
  }
}
image          *
readImage(argv, pArg)
  char            argv[7][256];
  int            *pArg;
{
  static image    im;
  char            buffer[MAXBUF];
  static idl_char null = '\0'; /* note the idl_ type; static so im.sD stays valid */
  u_int           reallyRead;
  u_int           imageSize = 0;

  /* Build the header information then look at command line for data. */
  im.sN = (pStr) strdup(argv[*pArg]);
  im.sO = (idl_char *) UIDTONAME(getuid());
  im.sC = (pStr) strdup(argv[++*pArg]);
  im.x = atoi(argv[++*pArg]);
  im.y = atoi(argv[++*pArg]);
  im.d = atoi(argv[++*pArg]);
  im.c = atoi(argv[++*pArg]);
  im.sD = &null;    /* don't forget to terminate those empty strings! */
  im.data = (idl_char *) malloc(0); /* note the idl_ type */

  if (!(fp = fopen(im.sN, "r"))) {
    fprintf(stderr, "error opening imageName \"%s\" for reading\n", im.sN);
    return (0);
  }
  while ((reallyRead = fread(buffer, 1, MAXBUF, fp)) > 0) {
    im.data = (idl_char *) realloc(im.data, imageSize + reallyRead);
    (void) bcopy(buffer, im.data + imageSize, reallyRead);
    imageSize += reallyRead;
  }
  im.b = imageSize;
  fclose(fp);
  return (&im);
}
writeImage(pImage, sImageName)
  image          *pImage;
  pStr            sImageName;
{
  /* body same as in Listing One */
}
/* The next four routines are just image linked-list maint. stuff. */
image          *
iAllocOne()
{               /* allocate one image structure */
  image          *pI = (image *) calloc(sizeof(image), 1);
  pI->sN = (pStr) calloc(MAXSTR, 1);
  pI->sO = (pStr) calloc(MAXSTR, 1);
  pI->sC = (pStr) calloc(MAXSTR, 1);
  pI->sD = (pStr) calloc(MAXSTR, 1);
  return (pI);
}
imageList      *
iLAllocOne()
{               /* allocate one imageList structure */
  imageList      *pIL = (imageList *) malloc(sizeof(imageList));
  pIL->pImage = iAllocOne();
  pIL->pNext = '\0';
  return (pIL);
}
iFreeOne(pI)
  image          *pI;
{
  cfree(pI->sN);
  cfree(pI->sO);
  cfree(pI->sC);
  cfree(pI->sD);
  cfree(pI);
}
iLFreeOne(pIL)
  imageList      *pIL;
{               /* free a whole list in one forward pass */
  imageList      *pNext;
  while (pIL) {
    pNext = pIL->pNext;
    iFreeOne(pIL->pImage);
    cfree(pIL);
    pIL = pNext;
  }
}


Listing Three


/* rim_server.c - server initialization and procedures for remote
 * image database service  */
#include <stdio.h>
#include <sys/types.h>
#include <sys/time.h>
#include "rim.h"
#include "rim_util.h"

#define FGETS(ptr, max, fp) { if (fgets(ptr, max, fp)) ptr[strcspn(ptr, "\n")] = '\0'; }
#define READHEADER(n, o, c, d) \
    { FGETS(n,MAXSTR,fp); FGETS(o,MAXSTR,fp); \
    FGETS(c,MAXSTR,fp); FGETS(d,MAXSTR,fp); }

FILE           *fp;
imageList      *iLAllocOne();
image          *iAllocOne();

/* ref_mon()- reference monitor for rim. It checks generalities, then calls
 * is_authorized() to check specifics. */ 
int
ref_mon(bind_handle)
  rpc_binding_handle_t bind_handle;
{
  int             ret;
  rpc_authz_handle_t privs;
  unsigned_char_t *client_princ_name, *server_princ_name;
  unsigned32      protect_level, authn_svc, authz_svc, status;
  /* Get client auth info. */
  rpc_binding_inq_auth_client(bind_handle, &privs, &server_princ_name,
               &protect_level, &authn_svc, &authz_svc, &status);
  if (status != rpc_s_ok) {
    dce_err(__FILE__, "rpc_binding_inq_auth_client", status);
    return (0);
  }
  /* Check if selected authn service is acceptable to us. */
  if (authn_svc != rpc_c_authn_dce_secret) {
    dce_err(__FILE__, "authn_svc check", (unsigned long) -1);
    return (0);
  }
  /* Check if selected protection level is acceptable to us. */
  if (protect_level != rpc_c_protect_level_pkt_integ
      && protect_level != rpc_c_protect_level_pkt_privacy) {
    dce_err(__FILE__, "protect_level check", (unsigned long) -1);
    return (0);
  }
  /* Check if selected authz service is acceptable to us. */
  if (authz_svc != rpc_c_authz_name) {
    dce_err(__FILE__, "authz_svc check", (unsigned long) -1);
    return (0);
  }
  /* If rpc_c_authz_dce were being used instead of rpc_c_authz_name, privs
   * would be a PAC (sec_id_pac_t *), not a name as it is here. */
  client_princ_name = (unsigned_char_t *) privs;
  /* Check if selected server principal name is supported. */
  if (strcmp(strrchr(server_princ_name, '/'),
             strrchr(SERVER_PRINC_NAME, '/')) != 0) {
    dce_err(__FILE__, "server_princ_name check", (unsigned long) -1);
    return (0);
  }
  /* Now that things seem generally OK, check the specifics. */
  if (!is_authorized(client_princ_name)) {
    dce_err(__FILE__, "is_authorized", (unsigned long) -1);
    return (0);
  }
  /* Cleared all the authorization hurdles -- grant access. */
  return (1);
}
/* is_authorized() - check authorization of client for this service. We could
 * check on a per-procedure basis, rather than once for the interface, to give
 * more control over access. Typically, an application (i.e., one using PACs &
 * ACLs) would be using sec_acl_mgr_is_authorized().  */
int
is_authorized(client_princ_name)
  unsigned_char_t *client_princ_name;
{
  /* Check if we want to let this client do this operation. A list or
     ACL would be better */
  if (strcmp(strrchr(client_princ_name, '/'),
             strrchr(CLIENT_PRINC_NAME, '/')) == 0) {
    /* OK, we'll let this access happen. */
    return (1);
  }
  return (0);
}
void
die(rpc_binding_handle_t bind_handle)
{
  printf("server answering the call...\n");
  /* should de-register endpoints and directory info */
  exit(0);
}
void
restart(rpc_binding_handle_t bind_handle)
{
  /* should de-register endpoints and directory info */
  (void) execl(SERVERPATH, SERVERPATH, (char *) 0); /* execl() needs an argv[0] */
}
pStr
add(rpc_binding_handle_t bind_handle, image *argp)
{
  static pStr     result;
  static idl_char msg[MAXSTR];
  static char     N[MAXSTR], O[MAXSTR], C[MAXSTR], D[MAXSTR];
  char            head[MAXSTR];
  int             fstat, b, x, y, d, c;
  time_t          tloc;
  result = msg;
  msg[0] = '\0';
  printf("server answering the call...\n");
  if (!(fp = fopen(SERVERDB, "r"))) {
    sprintf(msg, "cannot open server database %s for reading\n", SERVERDB);
    return (result);
  }
  /* First make sure such an image isn't already archived. */
  while ((fstat = fscanf(fp, "%d%d%d%d%d\n", &b, &x, &y, &d, &c)) == 5) {
    READHEADER(N, O, C, D);
    if (!strcmp(N, argp->sN))
      break;
    fseek(fp, (long) b, 1);
  }
  switch (fstat) {
  case EOF:         /* not found - that's good */
    fclose(fp);
    if (!(fp = fopen(SERVERDB, "a"))) {
      sprintf(msg, "cannot open server database %s to append\n", SERVERDB);
      return (result);  /* fp is NULL here; nothing to fclose() */
    }
    break;
  case 5:           /* there already is one! */
    sprintf(msg, "%s archive already has a \"%s\"\n", SERVERDB, argp->sN);
    fclose(fp);
    return (result);
  default:          /* not a clean tail... tell user and try */
    repairDB(msg);      /* to recover */
    fclose(fp);
    return (result);
  }
  CompressImage(1, argp);   /* compress as specified */
  /* Get the date, add the image header and data, then return. */
  time(&tloc);
  sprintf(head, "%d %d %d %d %d\n%s\n%s\n%s\n%s",
      argp->b, argp->x, argp->y, argp->d, argp->c,
      argp->sN, argp->sO, argp->sC, (char *) ctime(&tloc));
  if ((fwrite(head, 1, strlen(head), fp) != strlen(head)) ||
      (fwrite(argp->data, 1, argp->b, fp) != argp->b))
    sprintf(msg, "failed write to server database %s\n", SERVERDB);
  fclose(fp);
  return (result);
}
/* This is included for the sake of completeness but is brute-force. */
pStr
delete(rpc_binding_handle_t bind_handle, pStr argp)
{
  FILE           *fpp;
  int             fstat;
  static pStr     result;
  static idl_char msg[MAXSTR];
  char            N[MAXSTR], O[MAXSTR], C[MAXSTR], D[MAXSTR];
  char           *buffer;
  int             bufSize, bytesRead, b, x, y, d, c;
  int             seekPt = 0;

  printf("server answering the call...\n");

  msg[0] = '\0';
  result = msg;
  if (!ref_mon(bind_handle)) { /* a simple monitor */
    dce_err(__FILE__, "ref_mon - not allowed to delete", (unsigned long) -1);
    sprintf(msg, "not authorized to delete\n");
    return (result);    /* a bare return here would hand back garbage */
  }
  if (!(fp = fopen(SERVERDB, "r"))) {
    sprintf(msg, "cannot open server database %s for reading\n", SERVERDB);
    return (result);
  }
  /* Look thru the DB for the named image. */
  while ((fstat = fscanf(fp, "%d%d%d%d%d\n", &b, &x, &y, &d, &c)) == 5) {
    READHEADER(N, O, C, D);
    fseek(fp, (long) b, 1); /* fp stops at next entry */
    if (!strcmp(N, argp))
      break;
    seekPt = ftell(fp);
  }
  switch (fstat) {
  case EOF:         /* not found */
    sprintf(msg, "%s not found in archive\n", argp);
    break;
  case 5:   /* This is the one! Remove it by copying the bottom up. */
    bufSize = MIN(MAX(1, b), MAXBUF);
    buffer = (char *) malloc(bufSize);
    fpp = fopen(SERVERDB, "r+");
    fseek(fpp, seekPt, 0);  /* fpp is at selected image */
    while (!feof(fp)) {
      bytesRead = fread(buffer, 1, bufSize, fp);
      fwrite(buffer, 1, bytesRead, fpp);
    }
    seekPt = ftell(fpp);
    fclose(fpp);
    truncate(SERVERDB, (off_t) seekPt);
    break;
  default:          /* not a clean tail... */
    repairDB(msg);
  }
  fclose(fp);
  return (result);
}
static image   *pIm = '\0'; /* keep this around as the server is iterative now */
pStr
extract(rpc_binding_handle_t bind_handle, pStr argp, image **ppIm)
{
  int             fstat;
  static pStr     result;
  static idl_char msg[MAXSTR];

  printf("server answering the call...\n");
  result = msg;
  msg[0] = '\0';

  if (!(fp = fopen(SERVERDB, "r"))) {
    sprintf(msg, "cannot open server database %s for reading\n", SERVERDB);
    return (result);
  }
  /* Free previously allocated memory. Look thru the DB for the named image. */
  if (pIm != '\0')
    iFreeOne(pIm);  /* iFreeOne() frees the strings too; a bare free() leaks them */
  pIm = *ppIm = iAllocOne();
  while ((fstat = fscanf(fp, "%d%d%d%d%d\n", &(pIm->b), &(pIm->x), &(pIm->y),
             &(pIm->d), &(pIm->c))) == 5) {
    READHEADER(pIm->sN, pIm->sO, pIm->sC, pIm->sD);

    if (!strcmp(pIm->sN, argp))
      break;
    fseek(fp, (long) pIm->b, 1);
  }
  switch (fstat) {
  case EOF:         /* not found */
    sprintf(msg, "%s not found in archive\n", argp);
    break;
  case 5:           /* this is the one! */
    pIm->data = (idl_char *) malloc(pIm->b);
    if (fread(pIm->data, 1, pIm->b, fp) != pIm->b) {
      sprintf(msg, "couldn't read all of %s\n", argp);
      repairDB(msg);
    }
    break;
  default:          /* not a clean tail... */
    repairDB(msg);
  }
  fclose(fp);
  return (result);
}
static imageList *pIList = '\0'; /* keep this around as the server is iterative now */
imageList      *
list(rpc_binding_handle_t bind_handle)
{      /* inconsistent - should return a string, but there's a reason... */
  imageList      *pIL;
  int             fstat;
  printf("server answering the call...\n");
  /* Free previously allocated memory.  Build a list. */
  if (pIList)
    iLFreeOne(pIList);
  pIL = pIList = iLAllocOne();
  if (!(fp = fopen(SERVERDB, "r"))) {
    sprintf(pIL->pImage->sN, "cannot open server database %s for reading\n",
            SERVERDB);
    pIL->pNext = iLAllocOne();  /* needs a dangler...:-( */
    return (pIList);
  }
  while ((fstat = fscanf(fp, "%d%d%d%d%d\n", &(pIL->pImage->b),
             &(pIL->pImage->x), &(pIL->pImage->y),
             &(pIL->pImage->d), &(pIL->pImage->c))) == 5) {
    READHEADER(pIL->pImage->sN, pIL->pImage->sO,
           pIL->pImage->sC, pIL->pImage->sD);
    fseek(fp, (long) pIL->pImage->b, 1);
    pIL->pNext = iLAllocOne();  /* hang an empty one on the end */
    pIL = pIL->pNext;
  }
  if (fstat != EOF) {       /* not a clean tail... */
    repairDB(pIL->pImage->sN);
  }
  fclose(fp);
  return (pIList);
}
/* The next four routines are just image linked-list maint. stuff. */
imageList      *
iLAllocOne()
{               /* allocate one imageList structure */
  imageList      *pIL = (imageList *) malloc(sizeof(imageList));
  pIL->pImage = iAllocOne();
  pIL->pNext = '\0';
  return (pIL);
}
image          *
iAllocOne()
{               /* allocate one image structure */
  image          *pI = (image *) calloc(sizeof(image), 1);
  pI->sN = (pStr) calloc(MAXSTR, 1);
  pI->sO = (pStr) calloc(MAXSTR, 1);
  pI->sC = (pStr) calloc(MAXSTR, 1);
  pI->sD = (pStr) calloc(MAXSTR, 1);
  return (pI);
}
iLFreeOne(pIL)
  imageList      *pIL;
{               /* free a whole list in one forward pass */
  imageList      *pNext;
  while (pIL) {
    pNext = pIL->pNext;
    iFreeOne(pIL->pImage);
    cfree(pIL);
    pIL = pNext;
  }
}
iFreeOne(pI)
  image          *pI;
{
  cfree(pI->sN);
  cfree(pI->sO);
  cfree(pI->sC);
  cfree(pI->sD);
  cfree(pI);
}
repairDB(s)         /* doesn't do much, yet... */
  pStr            s;
{
  sprintf(s, "server database %s data hosed, repaired\n", SERVERDB);
}
CompressImage(d, pIm)       /* compression and decompression */
  int             d;
  image          *pIm;
{
  /* omitted */
}
/******** server initialization starts here *********/
#ifndef LOCAL       /* go LOCAL if you want to link with rim_client.c */
#include <dce/rpc.h>

#define MAX_CONC_CALLS_PROTSEQ  5   /* max conc calls per protseq */
#define MAX_CONC_CALLS_TOTAL    10  /* max conc calls total */
/* This definition, generated by the IDL compiler, is all that necessarily differs below */
#define SERVER_IF       rim_v1_0_s_ifspec

char           *server_name;
/* main() Get started; set up server how we want it, and call listen loop. */
int
main(argc, argv)
  int             argc;
  char           *argv[];
{
  rpc_binding_vector_t *bind_vector_p;
  unsigned32      status;
  int             i;
  /* Check usage and initialize. */
  if (argc != 2) {
    fprintf(stderr, "Usage: %s namespace_server_name\n", argv[0]);
    exit(1);
  }
  server_name = argv[1];
  /* Register interface with rpc runtime - no type_uuid/epv associations */
  rpc_server_register_if(SERVER_IF, '\0', '\0', &status);
  if (status != rpc_s_ok) {
    dce_err(__FILE__, "rpc_server_register_if", status);
    exit(1);
  }
  /* Tell rpc runtime we want to use all supported protocol sequences. */
  rpc_server_use_all_protseqs(MAX_CONC_CALLS_PROTSEQ, &status);
  if (status != rpc_s_ok) {
    dce_err(__FILE__, "rpc_server_use_all_protseqs", status);
    exit(1);
  }
  /* Ask the runtime which binding handle(s) it's going to let us use. */
  rpc_server_inq_bindings(&bind_vector_p, &status);
  if (status != rpc_s_ok) {
    dce_err(__FILE__, "rpc_server_inq_bindings", status);
    exit(1);
  }
  /* Register authentication info with rpc runtime. */
  rpc_server_register_auth_info(SERVER_PRINC_NAME, rpc_c_authn_dce_secret,
                '\0', KEYTABFILE, &status);
  if (status != rpc_s_ok) {
    dce_err(__FILE__, "rpc_server_register_auth_info", status);
    exit(1);
  }
  /* Register binding info with endpoint mapper. No object UUID vector */
  rpc_ep_register(SERVER_IF, bind_vector_p, '\0',
       (unsigned_char_t *) "rim explicit secure server, version 1.0", &status);
  if (status != rpc_s_ok) {
    dce_err(__FILE__, "rpc_ep_register", status);
    exit(1);
  }
  /* Export binding info to the namespace. */
  rpc_ns_binding_export(rpc_c_ns_syntax_dce, server_name,
            SERVER_IF, bind_vector_p, '\0', &status);
  if (status != rpc_s_ok) {
    dce_err(__FILE__, "rpc_ns_binding_export", status);
    exit(1);
  }
  /* Listen for service requests. */
  fprintf(stdout, "server %s ready.\n", server_name);
  rpc_server_listen(MAX_CONC_CALLS_TOTAL, &status);
  if (status != rpc_s_ok) {
    dce_err(__FILE__, "rpc_server_listen", status);
    exit(1);
  }
  /* Not reached. */
}
#endif


Copyright © 1995, Dr. Dobb's Journal