The problem with software is that it takes too much time and money to develop, and is brittle when used in situations for which it was not explicitly designed. Various software design methodologies have been proposed to address this problem.
Of course, there have been other proposed methodologies for improving software. Some address the problem of managing change, but only adaptive programming is about anticipating change and automatically dealing with it within a running program, without the need for a programmer. As a definition, we can say:
"Adaptive software uses available information about changes in its environment to improve its behavior."
Software is expected to do more for us today, in more situations, than ever before. This is the challenge of complex environments. The complexity comes from three dimensions. First, there are more users. Now everyone, not just trained professionals, uses software. Second, there are more systems and more interactions among them. A company that once had a homogeneous mainframe now has a wide variety of desktop, server, and mainframe machines, running a wide variety of protocols. Third, there are more resources and goals. Programmers are accustomed to trading off time versus space. Now they also have to worry about bandwidth, security, money, completeness of results, quality of information, resolution of images and audio, and other factors, and they have to make the right trade-offs for a wide variety of users.
Together, these three dimensions make the designer's job harder. The designer can't foresee all the circumstances in which an application will be used, and thus can't always make the right design choices. This can shorten a product's lifetime, because upgrades will be needed for unforeseen situations, and it undermines the end user's confidence that the product will even perform as advertised. We can summarize the difficulties posed by complex environments with this table:
| Standard Environment | Complex Environment |
|---|---|
| All inputs observable | Some inputs hidden |
| No other players | Other players |
| Virtual world | Real world |
Traditional software development is based on principles that are no longer appropriate in this new world of complex environments. It is worth examining five of these myths.
Now we know what adaptive software does: it uses information from the environment to improve its behavior over time, as the program gains more experience. And we know why it is difficult to do this: because specifications are incomplete and changing, because the environment is constantly changing, because carefully crafted designs may rely on assumptions that become obsolete, and because there is never as much programmer time as you really need. In this section we survey some of the tools that help us overcome these difficulties. Here are five of the most important:
Static languages like C require the programmer to make a lot of decisions that nail down the structure of the program and the data it manipulates. Dynamic languages like Dylan and CLOS (Common Lisp Object System) allow these decisions to be delayed, and thus provide a more responsive programming environment in the face of changing specifications. Changes that would be too pervasive to even try in static languages can be easily explored with dynamic languages. Dynamic languages provide the interfaces by which a program can change or extend its performance. For example, in Dylan, a running program can add a method to an existing class without access to the original source code; it can define a brand-new class or function under program control; and it can debug another Dylan program, even one running remotely over the web. All this makes it possible to build, re-build, and modify complex programs using components. Java is currently the most popular language with dynamic features, although it is not as thoroughly and elegantly dynamic as Dylan and CLOS.
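Dylan environments are hard to come by today, so here is a minimal sketch of the same idea in Python, another dynamic language: extending an existing class at runtime, without access to its original source. The `Account` class and the patched-in `audit` method are invented for illustration.

```python
# Sketch (in Python, not Dylan) of adding a method to an existing
# class at runtime, without touching its original source code.

class Account:                 # imagine this came from a compiled library
    def __init__(self, balance):
        self.balance = balance

acct = Account(100)            # an instance created before the change

# Later, the running program defines a new method and attaches it:
def audit(self):
    """Hypothetical method patched in at runtime."""
    return f"balance={self.balance}"

Account.audit = audit          # the class is modified in place

print(acct.audit())            # even pre-existing instances pick it up
```

The same mechanism lets a program extend or repair itself while it runs, which is exactly the interface adaptive software needs.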
There has been a lot of talk about software agents, and not much agreement on what they are. The Oxford English Dictionary entry for "Agent" says: from Latin agere ("to do"), 1. One who acts or exerts power. 2. He ... who produces an effect. 3. Any natural force. (e.g. a chemical agent). 4. One who does the actual work - a deputy, substitute, representative.
So we see that essentially, an agent does something and optionally does it for someone. Of course, all programs are intended to do something for someone. Therefore, in order to make a distinction between "software agent" and just a "program," we stress that software agents should be able to immediately respond to the preferences of their sponsors, and act according to those preferences. In addition, we think of an agent as existing in a complex environment. It can perceive part of the environment, and take some actions, but it does not have complete control over the environment.
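As a toy illustration of this view of an agent, consider the Python sketch below: the agent perceives only part of the environment's state, and acts according to its sponsor's current preferences, which may change between decisions. All names and numbers here are invented.

```python
import random

# Toy sketch of the agent abstraction: partial perception of the
# environment, plus immediate responsiveness to sponsor preferences.

def agent(percept, preferences):
    """Choose the perceivable action the sponsor currently prefers most."""
    return max(percept["available_actions"],
               key=lambda a: preferences.get(a, 0.0))

# The environment has state the agent cannot observe ...
environment = {"hidden_load": random.random(),
               "available_actions": ["wait", "buy", "sell"]}
# ... and the percept exposes only part of it.
percept = {"available_actions": environment["available_actions"]}

# The sponsor's preferences may change at any moment; the agent
# responds on the very next decision.
preferences = {"buy": 0.9, "sell": 0.2, "wait": 0.5}
print(agent(percept, preferences))  # -> buy
```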
In short, we want the agent to do the right thing, where "right thing" is defined by the user or users at that point. There are three reasons why this is difficult:
Decision Theory is the combination of utility theory (a way of formally representing a user's preferences) and probability theory (a way of representing uncertainty). Decision Theory is the cornerstone for designing proper adaptive software in an uncertain, changing environment. It is "the" language for:
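To make the combination concrete, here is a small Python sketch of the core decision-theoretic calculation: weight each outcome's utility by its probability, and choose the action with the highest expected utility. The actions and numbers are invented for illustration.

```python
# Expected-utility decision making: utility theory (the numbers we
# care about) combined with probability theory (our uncertainty).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    # action: possible outcomes as (probability, utility) pairs
    "ship_now":  [(0.8, 100), (0.2, -50)],  # likely payoff, some risk
    "test_more": [(1.0, 60)],               # certain, but a costly delay
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # -> ship_now 70.0
```

Here shipping now wins (expected utility 0.8×100 − 0.2×50 = 70 versus a certain 60), but shift the failure probability upward and the decision flips; the framework makes that trade-off explicit.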
Reinforcement Learning is a powerful technique for automatically learning a utility function, given only a representation of the possible actions and an immediate reward function: an indication of how well the program is doing on each step. Given enough practice (either in the real world or in a simulated environment), reinforcement learning is guaranteed, under standard technical conditions, to converge to an optimal policy. Reinforcement learning problems could in principle be solved exactly as a system of nonlinear equations, but in practice the problems are much too large for an exact solution, so we rely on an approximation technique based on dynamic programming.
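As a toy illustration of this dynamic-programming-based approximation (here, tabular Q-learning, one standard such technique), consider an agent that must learn to walk down a five-state corridor to a rewarding goal. This is a generic sketch with invented parameters, not any production algorithm.

```python
import random

# Tabular Q-learning on a 5-state corridor: start at state 0,
# reward 1.0 only on reaching state 4. Each update is a small
# dynamic-programming backup: immediate reward plus the discounted
# value of the best next action.

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the current estimate,
        # occasionally explore a random action
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), GOAL)    # corridor walls clip movement
        r = 1.0 if s2 == GOAL else 0.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(GOAL)]
print(policy)  # the learned policy moves right, toward the goal
```

Only the immediate reward is ever supplied; the long-term value of each state-action pair is what the algorithm learns.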
As an example, consider assigning cellular phone calls to channels. You want to use the available channels wisely so that there is no interference and a minimum of dropped calls. Using reinforcement learning, Harlequin invented an algorithm that dynamically adapts to changes in calling patterns, and performs better than all existing published algorithms. Reinforcement learning has been applied to other domains, such as flying an airplane, scheduling elevators, and controlling robots.
Probabilistic networks (also known as Bayesian networks, Decision networks, and Influence diagrams) are a way of representing a complex joint probability distribution as a graphical model of nodes (representing random variables) and arcs (representing dependence between variables). There are algorithms for determining the value of any variable, for learning qualitative and quantitative dependencies from data, for quantifying the sensitivity of an answer to uncertainties in the data, and for computing value of information: the worth of acquiring a new piece of information.
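A tiny sketch makes the representation concrete. In the invented network below, two independent causes (rain and a sprinkler) both influence one effect (wet grass); exact inference by enumeration over the joint distribution answers a diagnostic query. The conditional probability numbers are made up for illustration.

```python
from itertools import product

# A three-node probabilistic network: Rain -> WetGrass <- Sprinkler.
# Nodes are random variables; the table P_wet encodes the dependence
# of the effect on its two parent causes.

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(wet | rain, sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    """Joint probability of one full assignment, from the graph."""
    p = P_rain[rain] * P_sprinkler[sprinkler]
    p_w = P_wet[(rain, sprinkler)]
    return p * (p_w if wet else 1 - p_w)

# Diagnostic query: how likely is rain, given the grass is wet?
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(round(num / den, 3))  # -> 0.645
```

Enumeration is exponential in the number of variables, which is why the algorithms mentioned above exploit the graph structure on realistic networks.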
Here are some real-world applications of adaptive software undertaken by the Adaptive Systems Group at Harlequin:
Crime Pattern Analysis and Fraud Detection: Harlequin makes a data visualization tool for fraud and crime analysis called Watson. For mid-size databases, trained users can easily navigate through the various displays to find the patterns they are looking for. However, for larger databases we needed to add data mining techniques to help spot the patterns. As new patterns in the data occur, we provide ways of visualizing them.
Financial: A credit card company wanted to be able to track purchase authorization requests, and decide which requests were likely to be from a stolen card, or were likely to be a credit risk. Harlequin supplies the software on which the authorization system is built. The key for the customer was to have an efficient system capable of processing large volumes of transactions in real time, but still have a flexible system, where changes could be made in minutes, not days, with no need to bring the system down to install upgrades.
Manufacturing: A multinational company faced the problem of needing to continually build new versions of their product to meet new industry standard specifications. Each test run of a new product variation costs about $10,000. Harlequin is actively consulting on this project to bring together all the prior test results and accumulated experience of the organization in one online tool, and to provide sophisticated statistical optimization techniques to recommend an optimal experimental test plan, based on past results and predicted future contingencies.
Electronic Publishing: Harlequin's raster image processing software for high-end commercial printing is the fastest on the market. But customers are interested in the speed of the total solution, so we are developing a dynamic workflow management system that will optimize the tracking and scheduling of the customer's complete set of job tasks. This requires monitoring the jobs and being able to predict how long each job will take at each processing stage.
Telecommunications: A telecommunications company wanted to build an experimental switching system for delivering video on demand to the home. This requires strict real-time response (30 frames per second, no matter what), but it also requires flexibility in the software. The customer required that they be able to add new functionality and even redefine existing functions while the switch is running, because it would not be acceptable to interrupt service. Harlequin worked with the customer to meet these requirements using a real-time version of CLOS. The result was a switching system that met all requirements, and was built with only 25 programmers, as compared to the 250 needed on the system it replaced. A corresponding 10-fold improvement was also seen in development costs and engineering change turn-around time.
Harlequin's Adaptive Systems Group brings added value to Harlequin's programming tools, web tools, and postscript printing software, and also offers consulting services for other companies. Some of the projects are shown in the accompanying side bar.
Microsoft's Decision Theory and Adaptive Systems Group was started by several experts in decision theory (David Heckerman, Jack Breese and Eric Horvitz), and was recently renamed to include "Adaptive Systems" in the title. The group contributed the Print Troubleshooter in Windows 95 and the Answer Wizard in Microsoft Office.
The Symposium on Flexible Computation from the AAAI Fall Symposium dealt with "procedures that allow a graceful tradeoff to be made between the quality of results and allocations of costly resources, such as time, memory, or information. Systems employing flexible computation gain the ability to adapt the quality of their response to dynamic changes in requirements."
The Adaptive Intelligent Systems group run by Barbara Hayes-Roth at Stanford is investigating "systems that coordinate perception, reasoning, and action to pursue multiple goals while functioning autonomously in dynamic environments."
The Demeter Group at Northeastern University says that "adaptations of the interactions of components can only be defined in terms of accumulated individual component adaptations. New techniques are starting to provide higher levels of abstractions for adaptability. Software architecting brings system level abstractions. The Adaptive Software family of languages allows this higher abstraction at the programming level."
The Open Implementation project at Xerox PARC provides the basis of the black box metaphor used in this paper. Xerox's Aspect Oriented Programming project goes one step further. "AOP works by allowing programmers to first express each of a system's aspects of concern in a separate and natural form, and then automatically combine those separate descriptions into a final executable form using a tool called an Aspect Weaver™."
The Adaptive Systems Research Group at GMD (the German counterpart of the NSF) deals with statistical learning and with adaptive robotics.
Some people use "adaptive" for software that helps people with disabilities. Other people use "adaptive" to refer to evolutionary, neural, or biological computation. See the book The Mind, the Brain, and Complex Adaptive Systems, edited by Harold Morowitz and Jerome Singer.