Out of Hibernation?
I came across a fine, stirring, dramatic phrase the other day: "the rise and fall and gradual redemption of artificial intelligence." Redemption. Sounds nice, doesn't it? AI has been given the cold shoulder in certain quarters in recent years. Not that the chill wasn't forecast.
Peter Friedland, one of the pioneers of commercial AI, has pointed out that there probably wouldn't have been an "AI winter" if there hadn't been an AI summer. And it's clear enough that there was an AI summer: A few years ago, the phrase "artificial intelligence" worked magic with venture capitalists and potential customers. People threw money at practically any project and product that had "AI inside."
It's no less clear that this was followed by an AI winter: That same phrase became poison as it became apparent that having AI inside didn't magically cause a piece of software to meet needs, budgets, or deadlines. Companies and products that employed AI technology began to find it wise to hide that fact.
What's less clear is whether some spring is now breaking through. Despite various claims of recent successes and potential acceptance of AI, vendors of AI tools are few and far between, and this August one LISP vendor (Franz) announced that it was acquiring another (Lucid). There doesn't seem to be an AI groundhog to resolve this AI spring thing, but specific questions about the state of AI can be profitably asked, and were asked repeatedly at a particularly introspective Twelfth National Conference on Artificial Intelligence (AAAI-94) in Seattle this August.
Questions like: Is AI just a research area, or should we expect it to show commercial successes? Where are the commercial successes, if any? What are the hot areas in AI research? Should the AI community, if there is such a thing, direct its collective efforts toward some grand goal, and if so, what? If your product has AI inside, should you tell people?
Or this blunt question: "Does AI pay off?" For the past six years, AAAI has been presenting successful applications of AI technology in its Innovative Applications of Artificial Intelligence (IAAI) conference that runs in conjunction with the main conference. IAAI only presents successes, of course, so it only answers the question, "Can AI pay off?" It can, and the successes are not just implementations of trusty old expert-systems technology.
In the high summer of AI, rule-based systems were hot. One of the papers presented in the IAAI conference is probably representative of a lot of successful rule-based or expert systems today: a presentation by Robert Chalmers and colleagues of Lockheed Palo Alto Research Laboratories, describing a specialized rule-based system with some distinctive features.
This year, though, IAAI gave evidence of a spectral shift toward case-based reasoning (CBR) and model-based reasoning (MBR) approaches to problem solving. In another building at Lockheed's Palo Alto site, David Hinkle and Christopher Toomey recently fielded one of the first successful CBR systems, known as Clavier.
The problem was another specialized one. Lockheed produces aerospace parts from composite materials requiring curing in a sort of oven. Economics require packing as many parts as possible into one curing, but the parts have different thermodynamic characteristics. The result is a real-world packing problem that apparently has little in common with textbook problems of packing objects into containers.
So little, in fact, that the developers decided to use CBR. The experts in this particular domain appeared to have no rules, that is, no general principles, to offer. What they did have was knowledge of specific cases that had been successful (or not) in the past. That's the situation for which CBR was invented, and CBR proved to be a good choice for Clavier.
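The core move of CBR described above, retrieving the most similar past case and reusing it as a starting point, can be sketched in a few lines. This is a minimal illustration under assumed, hypothetical features (part type, thickness); Clavier's actual case representation of autoclave layouts is of course far richer.

```python
# A minimal sketch of case retrieval, the heart of a CBR system.
# Cases and features here are hypothetical, for illustration only.

def similarity(case_features, query_features):
    """Count matching attribute values (a simple nearest-neighbor measure)."""
    return sum(1 for key, value in query_features.items()
               if case_features.get(key) == value)

def retrieve(case_library, query_features):
    """Return the stored case most similar to the new problem."""
    return max(case_library,
               key=lambda case: similarity(case["features"], query_features))

# Hypothetical library of past oven loads and how they turned out.
case_library = [
    {"features": {"part_type": "spar", "thickness": "thin"},  "outcome": "good"},
    {"features": {"part_type": "skin", "thickness": "thick"}, "outcome": "scrapped"},
]

best = retrieve(case_library, {"part_type": "spar", "thickness": "thin"})
print(best["outcome"])  # the most similar past case is reused, then adapted
```

A real CBR system adds an adaptation step, modifying the retrieved case to fit the new problem, and a learning step, storing the adapted case and its outcome back in the library.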
Clavier demonstrated its success (and the usefulness of CBR) by virtually eliminating certain types of errors, saving Lockheed thousands of dollars per month. It is in the process of demonstrating its success in another way: Lockheed is negotiating to license the software to other aerospace firms.
There are also some significant wins in the practical application of natural-language processing and text-recognition techniques.
Jack Mostow and colleagues at Carnegie Mellon University reported on Project LISTEN, a reading coach that, yes, listens. The system presents text visually, prompts the user to read it aloud, and gives feedback and assistance based on what the user says. It's a clever application; unlike most speech-recognition applications, it knows in advance what the speaker is supposed to be saying, which, one would imagine, makes recognition a lot easier. Preliminary results suggest that it is an effective reading coach; more interestingly, post-test interviews suggest that kids prefer working with the system to working with a real teacher.
This is an "academic" success that could have real commercial potential.
There are some real advances in other areas of natural-language processing and speech recognition. As Sara Hedberg pointed out recently in AI Expert magazine, pure machine translation from one human language to another, without human intervention, is having remarkable success, particularly with restricted text--like company manuals.
AAAI-94 also presented advances in distributed AI, machine learning, and planning and scheduling, all "academic" advances that could have commercial payoffs.
And then there are the "quiet" successes.
In one revealing AAAI-94 panel discussion, a number of successful, AI-savvy software vendors and contractors stated in no uncertain terms that they did not use "AI inside" as a sales tool, even when they were, in fact, using AI technology. Some admitted that knowledge of AI techniques sometimes gave them an advantage over their competitors, but didn't see that talking about AI gave them any benefit at all.
Perhaps not, but didn't they see, asked Monte Zweben (whose Red Pepper Software is also coy about its AI insides), that if they played down their AI-based successes, they were injuring the reputation of AI? And possibly undermining funding for the very kind of academic AI research that was helping them to succeed?
Raj Reddy, in his keynote speech at AAAI-94, echoed this need to talk up AI's successes, but he also warned that if AI is to make a really significant contribution, it needs a map. Reddy argued for pooling efforts in areas where pooled efforts would pay off, where there are interesting problems large enough to demand pooled efforts, and where significant government funding was likely.
Like AI cruise control on all cars.
Like intelligent electronic tutors.
Like agents. Programs that interact with users, learn from the interaction, and thereby become better able to serve their users have a lot of promise, Reddy thinks, and pose a lot of challenges.
For example, there's the problem of accepting and later using a definition. ("You've used that term before. Would you care to define it?")
And where do we need these agents? Why, on the information superhighway, of course.
According to ARPA's Kirstie Bellman, the information superhighway requires an intelligent information infrastructure or it just won't go anywhere. She talked about the need for intelligent agents to help users (or other agents) select or modify or explain information resources on a huge, growing, ever-changing, heterogeneous system. A sufficiently messy problem to occupy many researchers for many years, and one that is likely to get significant government funding, to boot.
Agents, agents. The image of electronic agents negotiating out on the infobahn ran through many of the talks at AAAI-94. It all sounded so civilized and sophisticated that I was glad when I had the chance to go watch the robots play soccer.
There is such a thing as an overdose of intelligence.