Letters

Dr. Dobb's Journal August 2002

Nash Equilibrium & Open Source

Dear DDJ,

Mike Swaine's closing comment in "Programming Paradigms" (DDJ, June 2002), suggesting that open source may be defensible as a Nash equilibrium, is, in fact, correct, but the full argument is even more interesting.

A Nash equilibrium, in its simplest form, is a strategy that makes sense provided other people are playing the same strategy. The open-source approach is therefore a Nash equilibrium--if everyone used an open-source approach, there would be no incentive (or opportunity) to use a proprietary approach. That is, in a world that was open source, any proprietary software would have to surmount the fact that it was (a) nonstandard; (b) closed; and (c) costly; hence, it would be very unlikely to succeed.

However, the interesting aspect is that any world where there is a dominant operating system and set of core tools (for example, Wintel plus Microsoft Office) is also a Nash equilibrium: There is no incentive for any single player to deviate from the strategy. For example, if everyone in the world is using Windows/Office, it is very hard to succeed if one is not. This will come as no surprise to anyone dealing with non-Windows systems.

This phenomenon is a general problem in game theory called a "coordination game," where the payoff comes from doing whatever everyone else is doing--in matrix form, staying on the main diagonal of the game. Coordination games have multiple Nash equilibria.

So, what does this say for open source? Contrary to the claims of some of its ideological opponents, a world dominated by open source is no less stable than a world dominated by Windows. If the proponents of open source are correct and open-source software is actually of higher quality, an open-source world would actually be a bit more stable, since it is not only a Nash equilibrium but also "Pareto optimal"--in an all open-source world, everyone is better off than they would be in an all-Windows world (I'm generalizing here: Obviously, "everyone" would not include Bill Gates and many lawyers).
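The argument can be made concrete with a small sketch. The following Python snippet (with made-up payoff numbers; the platforms and values are illustrative assumptions, not anything from the letter) models a two-player coordination game: each player gets a positive payoff only when both choose the same platform, and the open-source diagonal pays slightly more, reflecting the quality assumption above. Checking every profile shows exactly two Nash equilibria--both diagonal cells--with the open-source one Pareto-dominating the other.

```python
# A 2x2 coordination game with hypothetical payoffs. Players earn a
# positive payoff only when they coordinate on the same platform; the
# open-source diagonal pays more, per the letter's quality assumption.
STRATEGIES = ("open_source", "windows")

# PAYOFF[(row, col)] = (row player's payoff, column player's payoff)
PAYOFF = {
    ("open_source", "open_source"): (3, 3),
    ("open_source", "windows"):     (0, 0),
    ("windows",     "open_source"): (0, 0),
    ("windows",     "windows"):     (2, 2),
}

def is_nash(row, col):
    """A profile is a Nash equilibrium if neither player can gain by
    unilaterally switching strategies while the other stands pat."""
    r_pay, c_pay = PAYOFF[(row, col)]
    row_ok = all(PAYOFF[(r, col)][0] <= r_pay for r in STRATEGIES)
    col_ok = all(PAYOFF[(row, c)][1] <= c_pay for c in STRATEGIES)
    return row_ok and col_ok

equilibria = [(r, c) for r in STRATEGIES for c in STRATEGIES if is_nash(r, c)]
print(equilibria)  # both diagonal profiles, and only those, are equilibria
```

Both (open_source, open_source) and (windows, windows) survive the check, which is the multiplicity the letter describes; the off-diagonal cells fail because either player gains by switching to match the other.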

The problem, however, is getting from the Windows equilibrium to the open-source equilibrium. This is not news to anyone involved with open source, but does provide an interesting example of Nash's concepts in action.

Philip A. Schrodt

schrodt@ku.edu

VB.NET

Dear DDJ,

I'd like to point out an inaccuracy in the article "Examining VB.NET," by Lauren Hightower (DDJ, March 2002). Lauren says, "Early binding is the practice of declaring a variable and assigning it to an instance of a class before the application is compiled. Doing so instantiates the object and stores it in memory from the time when the application is run until you use it...Late binding, on the other hand, is the practice of declaring a variable as an object and then assigning it to an instance of a class at run time."

This is incorrect for a number of reasons. Early binding enables the compiler to do proper type checking at compile time. Late binding is a mechanism for dynamically creating an object based on its progid. It maps down to the COM CoCreateInstance() call at the C++ level.

In fact, the run-time semantics in terms of when the object is loaded are the same in both cases. The object is actually created when it is first referenced, not when it is declared. So, in the case of:

dim obj as New ADODB.Recordset
dim obj2 as ADODB.Recordset
...
some more code
...
obj.SomeMethod
set obj2 = CreateObject("ADODB.Recordset")
obj2.SomeMethod

obj is created when SomeMethod is called, not at the New statement. Even if this were not the case (it's really a run-time optimization), there's no reason not to do set obj2 = new ADODB.Recordset instead of set obj2 = CreateObject("ADODB.Recordset"). The argument for using late binding to save memory on objects that are used less frequently is therefore incorrect. In general, it's almost always better to use New to create objects. The only exception is when you don't know what you want to create until run time. For example:

Sub lateboundLoader(param As String)
    Dim objLatebound As IFoo
    Dim progid As String
    progid = "FooLib." & param
    objLatebound = CreateObject(progid)
End Sub

instead of

Sub lateboundLoader(param As String)
    Dim objLatebound As IFoo
    If param = "FooA" Then
        objLatebound = New FooLib.FooA
    ElseIf param = "FooB" Then
        objLatebound = New FooLib.FooB
    End If
End Sub

Assume that FooA and FooB both implement IFoo. This enables you to add as many variations as you want without modifying the creation code. The other time you would use CreateObject is with DCOM, where you can use the second parameter to create an object on a remote server: CreateObject(progid, Server).

Ian MacLean

ianm@ActiveState.com

Disk Thrashing

Dear DDJ,

Thanks to Scott Meyer who kindly pointed out that in my article "Disk Thrashing & the Pitfalls of Virtual Memory" (DDJ, May 2002), I mistakenly implied that indexed access to a deque is done through a directory tree, which would lead to logarithmic times. Instead it is done through constant-time index arithmetic. As always, he's right! This, by the way, makes my argument for using deques for large data sets even stronger.
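For readers wondering what "constant-time index arithmetic" means here, the sketch below is a toy model in Python (the class name, block size, and supported operations are illustrative assumptions, not any real deque implementation): elements live in fixed-size blocks tracked by a block map, so locating element i takes one division and one modulus regardless of how large the container grows--no tree traversal, hence no logarithmic factor.

```python
BLOCK_SIZE = 4  # toy value; real deques use much larger blocks

class ToyDeque:
    """Toy deque: fixed-size blocks plus a block map, giving O(1) indexing."""
    def __init__(self, items):
        self.blocks = []
        for item in items:
            if not self.blocks or len(self.blocks[-1]) == BLOCK_SIZE:
                self.blocks.append([])  # grow the block map, not the blocks
            self.blocks[-1].append(item)

    def __getitem__(self, i):
        # Constant time: one division, one modulus, two array lookups.
        return self.blocks[i // BLOCK_SIZE][i % BLOCK_SIZE]

d = ToyDeque(range(10))
print(d[7])  # -> 7
```

This toy only supports construction and indexing; a real deque also keeps spare capacity at both ends of the block map so pushes at either end stay cheap, but the indexing arithmetic is the same idea.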

Bartosz Milewski

bartosz@relisoft.com

Mail4Me Update

Dear DDJ,

Thanks to feedback I've received from readers regarding my article "The Mail4Me Project" (DDJ, June 2002), I'd like to point out a typo in the listings: Line 3 of Listing Six on page 44 says if (int i == 0) where it should instead be if (count == 0).

Joerg Pleumann

joerg.pleumann@trantor.de

DDJ