Optimal Solutions and Incremental Improvements


The famous traveling salesman problem is a classic example of a combinatorial problem. The salesman must visit a number of cities while traveling the shortest distance possible. He must visit each city exactly once and, when finished, return to his starting point. One can easily develop a route that visits every city once and returns to the starting point. But how can you be certain you have chosen the shortest route? Computational theory says that you cannot, unless the number of cities is trivially small.

The brute-force solution tries each and every possible input or arrangement to determine which is optimal. But this approach is feasible only for very small problems, because the traveling salesman can choose from n! routes. For five cities, there are 120 routes. For fifteen cities, there are 1,307,674,368,000 possible routes! No computationally feasible exact solution to such problems is known.
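The brute-force approach can be sketched in a few lines of Python. The five-city distance matrix below is made up purely for illustration; the code simply examines every ordering of the cities and keeps the shortest round trip.

```python
from itertools import permutations

# Hypothetical 5-city example: a symmetric matrix of pairwise distances
# (the distances are invented for illustration).
DIST = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def tour_length(tour):
    """Total distance of the round trip visiting cities in 'tour' order."""
    return sum(DIST[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def brute_force_tsp(n):
    """Examine every ordering of cities 1..n-1 (city 0 is fixed as the start)."""
    best = min(permutations(range(1, n)),
               key=lambda p: tour_length((0,) + p))
    return (0,) + best, tour_length((0,) + best)
```

Even this tidy enumeration calls `tour_length` once per permutation, so its running time grows factorially with the number of cities, which is exactly why brute force fails beyond toy instances.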

Heuristics

One can, however, get near-optimal results in a reasonable amount of time by exploiting heuristics. Heuristics are simply rules that help limit the number of prospects that must be examined, hopefully eliminating most of the poor solutions. Heuristics tend to provide adequate solutions most of the time, and sometimes they actually identify the optimal solution. Chess-playing programs use heuristics based on the strengths of the various pieces and board positions to (hopefully) rule out most of the bad moves rather than examine every possible move. Unfortunately, most heuristics tend to be very problem-specific; chess heuristics are worthless for other games. However, there are a few methods that can be considered general-purpose heuristics.

One common heuristic method, called incremental improvement, works by searching for changes that improve upon a given starting point. This starting point (or initial variable value, or initial configuration) can be chosen by any method, even at random. Each individual parameter is then altered at random. If the new result is not worse than the current one, the new result becomes the starting point for another cycle of searching.
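For the traveling salesman problem, the cycle above might look like the following sketch. The random alteration chosen here, swapping the positions of two cities, is just one of many possible moves, and the distance matrix is assumed to be supplied by the caller.

```python
import random

def tour_length(tour, dist):
    """Total distance of the round trip visiting cities in 'tour' order."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def hill_climb(start, dist, steps=5000, rng=None):
    """Incremental improvement: randomly alter the current tour and keep
    the alteration whenever the result is not worse than the current one."""
    rng = rng or random.Random()
    current = list(start)
    current_len = tour_length(current, dist)
    for _ in range(steps):
        # One random alteration: swap the positions of two cities.
        i, j = rng.sample(range(len(current)), 2)
        candidate = list(current)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        cand_len = tour_length(candidate, dist)
        if cand_len <= current_len:   # not worse: new starting point
            current, current_len = candidate, cand_len
    return current, current_len
```

Because a change is accepted only when it is not worse, the reported tour can never be longer than the starting tour, no matter how many steps are taken.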

Incremental improvement is also known as hill-climbing. In this metaphor, the goal is to find the highest point in the local terrain. To complicate matters, a dense fog prevents you from seeing more than one or two steps in any direction. If you wander around for a while always selecting your next step so that it does not lead downhill, you will eventually come to a point where every direction leads downhill. At this point, you must be at the top of a hill (or perhaps a mesa).

Hill-climbing's main limitation is its susceptibility to becoming trapped at a local maximum. Once you start going up a hill, you are committed to reaching its top, and once you reach the top you cannot get down. You may have climbed a molehill when a mountain is close by. Of course, you might also have scaled K2 and have no desire to seek out Mt. Everest. A common enhancement avoids the worst local-maximum traps by iterating the hill-climbing algorithm. This approach repeatedly chooses a random starting point, keeps track of all the solutions found, and finally chooses the best of the group as the answer. Unfortunately, if many iterations are allowed, this approach can end up costing closer to the brute-force method than you can afford.
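The random-restart enhancement can be sketched by wrapping the basic climb in a loop. As before, the swap move and the caller-supplied distance matrix are illustrative assumptions, not part of any particular library.

```python
import random

def tour_length(tour, dist):
    """Total distance of the round trip visiting cities in 'tour' order."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def restart_hill_climb(dist, restarts=10, steps=2000, seed=0):
    """Iterated hill-climbing: start each climb from a random tour and
    keep the best solution found across all restarts."""
    rng = random.Random(seed)
    n = len(dist)
    best, best_len = None, float("inf")
    for _ in range(restarts):
        tour = list(range(n))
        rng.shuffle(tour)                      # random starting point
        cur_len = tour_length(tour, dist)
        for _ in range(steps):
            i, j = rng.sample(range(n), 2)     # random alteration: swap
            cand = list(tour)
            cand[i], cand[j] = cand[j], cand[i]
            cand_len = tour_length(cand, dist)
            if cand_len <= cur_len:            # not worse: accept
                tour, cur_len = cand, cand_len
        if cur_len < best_len:                 # remember the best climb so far
            best, best_len = tour, cur_len
    return best, best_len
```

The total work is roughly `restarts * steps` evaluations, which makes the trade-off concrete: each extra restart improves the odds of escaping a molehill, but enough of them approaches brute-force cost.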