Introduction

TXA1: "simple interoperability" between ALP and Grid agents
TXA2: dynamic configuration
TXA3: mobility
TXA4: system control and user interface development
For each TXA there is a brief description, the motivation for each program to address the area, and some discussion. The discussion sections contain miscellaneous observations and comments related to the areas identified. They will be further amplified as additional details are investigated and issues are resolved. Although they are identified as separate TXAs, the areas are somewhat interrelated (as indicated in the discussion), and may become more so as details are fleshed out.
While this is currently a separate report, it will shortly be integrated into a revision of the design document, together with other material. Comments and feedback are welcome.
TXA1: "simple interoperability" between ALP and Grid agents

description:
This TXA involves providing simple "Grid presence" for ALP clusters, so that Grid agents can access them, and "ALP presence" for Grid agents, so that ALP clusters can access them. This TXA would support some simple TIEs to get things started, and could help test CoABS scaling and interoperability capabilities. It would also support tasking ALP from outside agents. This TXA would be the basis for other TXAs investigating further integration at more detailed levels, e.g., providing ALP access to the full range of Grid services, adding more Grid-like functionality to the ALP architecture, etc.
motivations:
From the ALP viewpoint, this would support experiments with external tasking in worked-out scenarios involving areas other than logistics planning, as well as addressing basic ALP-CoABS interoperability issues.
From the CoABS viewpoint, this provides a realistic system to include in scenarios, with additional heterogeneity, distribution, and access to real systems. It also provides a testbed in which CoABS agents can access/track a dynamically changing (and distributed) plan.
discussion:
This TXA basically provides access at the level of individual agents (from one agent/cluster to another). That is, CoABS agents become aware of individual ALP clusters. Similarly, ALP clusters become aware of individual Grid agents, and possibly some services, but not necessarily of the Grid as a first-class concept, or of the full range of Grid services. Via the current TXA, individual plugins could access some Grid services as if they were directly available either via the ALP infrastructure or via direct calls out to the Grid. The Grid Logging and Visualization Services would appear to be usable in this way. More substantial integration of CoABS and ALP ideas is required to support ALP access to the full range of Grid-like services, e.g., awareness of the potential need for Matchmaking or Brokering services. Further TXAs involve providing this type of access (either via the Grid or by including similar services directly in ALP).
Grid agents can task ALP clusters by sending directives to them (as if the Grid agents were other clusters, or their allocators). In this direction of access (Grid->ALP), a cluster wrapper (possibly making use of the Grid Access Framework, or some specialized variant of it) would be required. This wrapper would support a mapping between the FIPA ACL used by the Grid and the directive and tasking language used by ALP. An instance of this wrapper would be used for each cluster that should be known to the Grid. In the reverse direction (ALP->Grid), individual plugins would need to be provided with Grid access, since they are the components that call for agent/system services in handling their tasks. The most straightforward way to do this would appear to be to define an appropriate wrapper plugin (i.e., expander, allocator, or assessor, depending on the CoABS agent involved) for each CoABS agent or service that should be accessible to ALP. This wrapper would mediate between the language spoken by the agent and information to be added to the LogPlan, much as such plugins wrap external systems and integrate them into the ALP society today.
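A minimal Java sketch of where the Grid->ALP translation layer would sit is shown below. This is illustration only: the AclMessage and AlpTask types and the translate method are placeholders invented for this sketch, not part of the actual Grid Access Framework or ALP cluster APIs.

    import java.util.HashMap;
    import java.util.Map;

    public class ClusterWrapperSketch {

        /** Minimal stand-in for a FIPA ACL message received from the Grid. */
        static class AclMessage {
            String performative;                       // e.g., "request", "inform"
            String sender;
            Map<String, String> content = new HashMap<>();
        }

        /** Minimal stand-in for an ALP task directive added to a cluster's LogPlan. */
        static class AlpTask {
            String verb;                               // e.g., "TransportEquipment"
            String directObject;
            String requester;                          // remembered so a reply can be sent
        }

        /** The wrapper's core job: map an incoming ACL request onto an ALP task. */
        static AlpTask translate(AclMessage msg) {
            if (!"request".equals(msg.performative)) {
                return null;                           // only requests become tasks here
            }
            AlpTask task = new AlpTask();
            task.verb = msg.content.get("verb");
            task.directObject = msg.content.get("object");
            task.requester = msg.sender;
            return task;                               // would then be injected into the LogPlan
        }

        public static void main(String[] args) {
            AclMessage msg = new AclMessage();
            msg.performative = "request";
            msg.sender = "grid-agent-42";
            msg.content.put("verb", "TransportEquipment");
            msg.content.put("object", "3rd-BDE-Equipment");
            AlpTask task = translate(msg);
            System.out.println("Task: " + task.verb + " " + task.directObject);
        }
    }

One instance of such a wrapper per Grid-visible cluster, as noted above, keeps the translation logic outside the cluster itself.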
ALP clusters, rather than plugins, appear to be the components that should have "first-class" Grid presence (i.e., appear as agents to the Grid). Clusters are the components in ALP that are designed to accept messages, while plugins are only designed to function as parts of clusters; they communicate via changes to the LogPlan, rather than by explicit messages, and are designed to be independent of other plugins.
From the ALP side, in addition to simply providing generic access from ALP clusters to CoABS agents, it will also be necessary for some ALP clusters to "know about" those CoABS agents (have them represented as assets, "understand" their capabilities, and be set up to use those CoABS agents for specific purposes).
ALP currently plans to have operations plan information passed from an external system into the ALP society. This could be the basis of a TIE with a CoABS agent or set of agents representing a J3 activity. A straightforward TIE along these lines would be for something like the current CoABS NEO TIE to use ALP clusters in planning an operation, using the interoperability facilities discussed above.
Further investigation of the translation aspects of Grid-ALP wrapping might look at CoABS ACL ideas as a possible vehicle to extend ALP's directive and task vocabulary (e.g., using the ALP task language as a content language in an ACL). This would be a move in the direction of adapting ALP to provide more generic agent capabilities and to interoperate with ACL-based agent systems. There are probably existing ALP plugins that translate between ALP directives and various ACLs when interacting with agent-based systems; these could provide the basis for some of this work.
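As a rough illustration of the ALP-task-as-content-language idea (the content-language name and task syntax below are invented for this sketch and are defined by neither program), a FIPA ACL request carrying an ALP task expression might look like:

    (request
      :sender    (agent-identifier :name grid-agent-42)
      :receiver  (set (agent-identifier :name alp-transport-cluster))
      :language  alp-task
      :content   "(task :verb TransportEquipment :direct-object 3rd-BDE-Equipment)")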
TXA2: dynamic configuration

description:
Large-scale open agent systems must generally provide support for dynamically changing configurations, where "configuration" includes such aspects as which agents or other components are present in the system, and which agents are known to which other agents. Providing this support involves a number of different mechanisms in agent systems, including discovery and join mechanisms, matchmaking and brokering capabilities, and others. This TXA investigates issues relating to dynamic configuration that are common to CoABS and ALP.
motivations:
Dynamic configuration capabilities have many applications in agent systems. For example:
From the CoABS viewpoint, technology exchange in this area would provide a test of Grid capabilities that support dynamic configuration, such as the Directory Service, and CoABS technology that supports the creation of agent teams, in the context of a system with ALP's scale, distribution, and relative closeness to operational systems and users. This technology exchange could also provide a basis for extending CoABS technology to support agent models, such as the ALP cluster model, in which individual agents (clusters) can change their own configurations and capabilities (in ALP, by adding plugins). For example, service description languages must be capable of representing separate cluster and plugin capabilities, and directory services must be capable of dealing with changes in cluster capabilities as plugins are added or changed.
discussion:
"Configuration", as used here, covers a number of issues in CoABS and ALP, including:
Providing more dynamic configuration capabilities in ALP requires such things as:
Aspects of these issues have been discussed in the ALP program. For example, ALP documentation indicates that clusters could be defined to support roles such as "service manager" (what CoABS calls a "broker") or directory service (maintaining a directory of service offers for access by other clusters), and that other clusters could be designed to use such components. However, these roles are not inherently part of an ALP society, and, in general, clusters need not be built with the knowledge that such services are available. As a result, adding such components could be part of a "special-case" architecture which ALP could support; the current architecture simply does not mandate them.
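A minimal Java sketch of the directory-service role just described, assuming nothing beyond simple register/lookup behavior, might look like the following; the class and method names are illustrative and do not correspond to actual ALP components.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ServiceDirectorySketch {

        /** A service offer as a cluster might advertise it. */
        static class ServiceOffer {
            final String clusterName;
            final String capability;               // e.g., "SeaTransport"
            ServiceOffer(String clusterName, String capability) {
                this.clusterName = clusterName;
                this.capability = capability;
            }
        }

        private final Map<String, List<ServiceOffer>> offersByCapability = new HashMap<>();

        /** Called when a cluster (or a newly loaded plugin) advertises a capability. */
        public synchronized void register(ServiceOffer offer) {
            offersByCapability.computeIfAbsent(offer.capability, k -> new ArrayList<>())
                              .add(offer);
        }

        /** Called when a plugin is removed and a cluster's capabilities shrink. */
        public synchronized void unregister(String clusterName, String capability) {
            List<ServiceOffer> offers = offersByCapability.get(capability);
            if (offers != null) {
                offers.removeIf(o -> o.clusterName.equals(clusterName));
            }
        }

        /** Matchmaking-style lookup used by a cluster with no hard-wired target. */
        public synchronized List<ServiceOffer> lookup(String capability) {
            return offersByCapability.getOrDefault(capability, Collections.emptyList());
        }
    }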
In addition, an ALP cluster apparently can do a form of "Matchmaking", if only among those clusters that are modeled as cluster assets. For example, the Plugin Developer's Guide describes a scenario in which a task will not complete as scheduled (e.g., an asset is rendered inoperative or destroyed). In this case, the assessor plugin is supposed to update the corresponding asset object so that the allocator plugin will not use this asset in its next allocation cycle. Since clusters are considered assets, this would potentially allow a cluster to which an allocation had been made to become inoperative, and the assigning cluster to reallocate the task to another cluster (provided that the failure of the original cluster could somehow be detected). However, the problem remains that if the cluster has no direct knowledge of another cluster that can handle the same task, then, lacking a general matchmaking service, it has no way of finding an alternative.
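The reallocation pattern just described might be sketched as follows, with an assessor marking a cluster asset inoperative and the allocator skipping it on the next cycle; the types are again placeholders invented for illustration. The limitation noted above shows up as the case where no known asset remains, which is where a matchmaking lookup (as in the directory sketch above) would be needed.

    import java.util.Arrays;
    import java.util.List;

    public class ReallocationSketch {

        /** Stand-in for a cluster modeled as an asset by the allocating cluster. */
        static class ClusterAsset {
            final String name;
            boolean operational = true;
            ClusterAsset(String name) { this.name = name; }
        }

        /** Assessor's role in this scenario: record that the asset can no longer be used. */
        static void assessFailure(ClusterAsset asset) {
            asset.operational = false;
        }

        /** Allocator's role: pick an operational asset, or null if none is known. */
        static ClusterAsset allocate(List<ClusterAsset> knownAssets) {
            for (ClusterAsset a : knownAssets) {
                if (a.operational) return a;
            }
            // With no general matchmaking service there is nowhere else to turn;
            // a directory/matchmaker lookup would go here.
            return null;
        }

        public static void main(String[] args) {
            ClusterAsset sea = new ClusterAsset("SeaTransportCluster");
            ClusterAsset air = new ClusterAsset("AirTransportCluster");
            List<ClusterAsset> known = Arrays.asList(sea, air);
            assessFailure(sea);                        // original allocation target fails
            System.out.println(allocate(known).name);  // prints AirTransportCluster
        }
    }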
The idea of a plugin repository from which a cluster could retrieve plugins it needs for new tasks, or of downloading the required plugins along with the tasks, has also been raised. Downloading a required plugin with a new task would potentially require no fundamental architectural changes; e.g., a cluster could automatically load any plugin it received with a task, and continue to work as usual. However, the logic required to decide to do this would have to be built into the tasking clusters. Moreover, the changed capabilities of the tasked cluster due to the additional plugin would be unknown to other clusters. The use of a plugin repository would require plugins to have service descriptions describing their capabilities, and clusters to contain logic (not currently present) that could determine when the cluster needs an additional plugin and arrange to retrieve it from the repository.
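Since both programs are Java-based, a sketch of retrieving a plugin using standard Java class loading is straightforward. The PlugIn interface, repository URL, and class name below are invented for the sketch; actual plugin loading would go through the ALP cluster infrastructure.

    import java.net.URL;
    import java.net.URLClassLoader;

    public class PluginLoadingSketch {

        /** Minimal stand-in for the interface every plugin implements. */
        public interface PlugIn {
            void execute();
        }

        /**
         * Load a plugin class the cluster does not already have. A service-description
         * lookup would normally precede this, so the cluster knows which plugin it is
         * missing for the new task. Usage (values purely illustrative):
         *   PlugIn p = loadFromRepository("http://repository.example/plugins/",
         *                                 "org.example.SealiftAllocatorPlugIn");
         */
        public static PlugIn loadFromRepository(String repositoryUrl, String pluginClass)
                throws Exception {
            URLClassLoader loader = new URLClassLoader(
                    new URL[] { new URL(repositoryUrl) },
                    PluginLoadingSketch.class.getClassLoader());
            Class<?> cls = Class.forName(pluginClass, true, loader);
            return (PlugIn) cls.getDeclaredConstructor().newInstance();
        }
    }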
It has also been suggested that, via a user interface, a user might be able to direct a cluster to, e.g., load a given plugin. However, such manual changes to the configuration generally require other configuration changes as well. For example, unless the added plugin provides no new functionality that other clusters would care about, changing the capability of a given cluster would require corresponding changes to other clusters so that they recognize and use the new capability in the changed cluster.
Because of the flexibility of ALP's cluster architecture, there are potentially many ways of addressing some of these issues, e.g., changes might be made to either clusters or plugins, and there may be a choice of which specific type of plugin (expander, allocator, assessor) to use as well. As another alternative, the process of clusters calling a directory service, etc., could itself be explicitly made a part of the overall planning process in a reflective implementation.
One aspect of dealing with more dynamic plugin loading in ALP involves providing additional plugin control mechanisms. At present, ALP plugins somewhat resemble the rules in a rule-based system. They define predicates which indicate which sorts of tasks they are interested in, and when the predicate is matched by a task, they can begin working on it. Plugins are totally independent of each other, and there is no built-in mechanism to prevent two plugins from working on the same task, or to require that a given task be worked on by some plugin. Instead, the plugins must be designed so that they work together without such conflicts, and clusters must be designed so that they do not send tasks that the receiving cluster cannot handle. In a more dynamic architecture, where, e.g., the collection of plugins in a cluster might change at runtime, additional mechanisms are necessary to reduce semantic coupling between plugins, enable them to be developed more independently, and allow them to be reused in more flexible combinations. Explicit service descriptions for plugins would be part of addressing this problem. In addition, for example, some form of "conflict resolution" might be provided to handle cases where multiple plugins match the same task (or where no plugin wants to work on a task).
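The rule-like dispatch and conflict-resolution idea might be sketched as follows. The predicate-per-plugin subscription and the "first match wins" policy are illustrative choices made for this sketch, not ALP mechanisms.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.function.Predicate;

    public class PluginDispatchSketch {

        static class Task {
            final String verb;
            Task(String verb) { this.verb = verb; }
        }

        static class PlugIn {
            final String name;
            final Predicate<Task> interest;     // "subscription" predicate
            PlugIn(String name, Predicate<Task> interest) {
                this.name = name;
                this.interest = interest;
            }
        }

        /** Dispatch with an explicit policy for the no-match and multiple-match cases. */
        static void dispatch(Task task, List<PlugIn> plugins) {
            List<PlugIn> matches = new ArrayList<>();
            for (PlugIn p : plugins) {
                if (p.interest.test(task)) matches.add(p);
            }
            if (matches.isEmpty()) {
                System.out.println("No plugin wants task " + task.verb);   // would be flagged
            } else {
                // One possible conflict-resolution rule: only the first match works the task.
                System.out.println(matches.get(0).name + " works task " + task.verb);
            }
        }

        public static void main(String[] args) {
            List<PlugIn> plugins = Arrays.asList(
                new PlugIn("TransportExpander", t -> t.verb.startsWith("Transport")),
                new PlugIn("SupplyAllocator",  t -> t.verb.equals("SupplyFuel")));
            dispatch(new Task("TransportEquipment"), plugins);
            dispatch(new Task("DetermineRequirements"), plugins);
        }
    }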
The motivations section mentioned the CoABS NEO TIE as illustrating the use of a matchmaking capability as part of a fault-tolerance mechanism, in which an agent uses a matchmaking capability to find an alternative agent to replace another agent that has crashed. A problem in such scenarios (and in distributed systems in general) is actually determining that another agent/component has crashed, in order to make the decision to select another. This is an issue that has been discussed in the ALP context as well. While this is not an issue of "dynamic configuration" per se, CoABS technology such as the MIT exception handling work may be relevant in attempting to address this particular motivation for dynamically changing agent relationships.
Both ALP and CoABS are making extensive use of Java technology, and hence this could provide a vehicle for useful technology exchange. For example, Jini discovery and join capabilities could be used to support added cluster and/or plugin dynamics, just as these capabilities are being used in the CoABS Grid prototype. Similarly, since ALP plugins are defined as JavaBeans, they have explicit introspective interfaces already, which could be used as the basis of further development of service interfaces for these components.
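Since the JavaBeans introspection machinery is standard Java, a sketch of using it as a starting point for plugin service descriptions is straightforward. Introspector and PropertyDescriptor are standard java.beans classes; the SamplePlugIn bean below is an invented stand-in for an actual ALP plugin.

    import java.beans.BeanInfo;
    import java.beans.Introspector;
    import java.beans.PropertyDescriptor;

    public class PluginIntrospectionSketch {

        /** Illustrative plugin bean with a couple of exposed properties. */
        public static class SamplePlugIn {
            private String supportedVerb = "TransportEquipment";
            private int maxConcurrentTasks = 5;
            public String getSupportedVerb() { return supportedVerb; }
            public void setSupportedVerb(String v) { supportedVerb = v; }
            public int getMaxConcurrentTasks() { return maxConcurrentTasks; }
            public void setMaxConcurrentTasks(int n) { maxConcurrentTasks = n; }
        }

        public static void main(String[] args) throws Exception {
            BeanInfo info = Introspector.getBeanInfo(SamplePlugIn.class, Object.class);
            // Each bean property becomes a candidate element of a service description.
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                System.out.println(pd.getName() + " : " + pd.getPropertyType().getSimpleName());
            }
        }
    }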
TXA3: mobility

description:
Agent mobility can be an important facility in certain types of applications. This TXA investigates common issues relating to agent mobility in ALP and CoABS. The idea is to investigate both simple and complex types of mobility, e.g.:
motivations:
From the ALP viewpoint, ALP clusters and plugins are currently not mobile. Mobility in ALP could be used to support scenarios such as:
From the CoABS viewpoint, ALP provides a large-scale, highly-distributed testbed with realistic requirements for investigating mobility-related algorithms and facilities. These requirements can drive the identification of mobility-support services the Grid should provide. As with the dynamic configuration TXA, ALP also involves a distinct logical requirement, namely providing mobility for parts of agents (plugins). The type of support that is necessary for this requires further investigation.
discussion:
The associations between agents, and between plugins and clusters, are aspects of logical configuration that are to some extent independent of mobility. For example, mobility may be used for load-balancing or for optimizing message traffic even if the logical configuration, i.e., the association of a given plugin with a given cluster, is static. However, the two are not entirely independent. For example, mobility control requires a certain amount of configuration information (e.g., directory information on nodes, and on which agents are located at which nodes). Mobility control also requires interfaces to support moving the agents (clusters, plugins) and updating the configuration information. More detailed work will be required to define which aspects of mobility require which configuration (and other) information and related services.
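A minimal sketch of the bookkeeping side of mobility control, assuming that an agent-to-node map plus a per-node load metric is the configuration information involved, is shown below; all names are illustrative, and the actual state transfer of a moving cluster or plugin is outside the sketch.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    public class MobilityControlSketch {

        private final Map<String, String> nodeByAgent = new HashMap<>();   // agent -> node
        private final Map<String, Integer> loadByNode = new HashMap<>();   // node -> load metric

        public void register(String agent, String node) {
            nodeByAgent.put(agent, node);
            loadByNode.merge(node, 1, Integer::sum);
        }

        /** Move an agent and keep the configuration information consistent. */
        public void move(String agent, String targetNode) {
            String current = nodeByAgent.get(agent);
            if (current == null || current.equals(targetNode)) return;
            // Only the bookkeeping is shown; serializing the cluster and restarting it
            // on the target node would be the job of the mobility infrastructure.
            loadByNode.merge(current, -1, Integer::sum);
            loadByNode.merge(targetNode, 1, Integer::sum);
            nodeByAgent.put(agent, targetNode);
        }

        /** Example load-balancing decision driven by the system-level information. */
        public Optional<String> leastLoadedNode() {
            return loadByNode.entrySet().stream()
                    .min(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey);
        }
    }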
The CoABS Grid Prototype V1 does not currently contain a Mobility Service, but does include a Logging Service, which could possibly be used, e.g., to support message queuing for disconnected operations in support of some forms of mobility. The CoABS Robustness TIE appears to include investigation of aspects of mobility.
Control of some forms of mobility requires access to system-level information, e.g., information on node loads, so this is related to the system monitoring TXA as well. This access to system-level information may be provided only to users operating system-management interfaces (e.g., when doing remote installation), or it may be provided to the agents themselves.
Mobility for ALP plugins is a part of this TXA. As noted above, the extent of this support requires further investigation. Remote installation of plugins (and general support for moving plugins to clusters to add capabilities to those clusters, as discussed in the dynamic configuration TXA) seems a reasonable facility to investigate. ALP currently assumes that plugins are local to their clusters. Changing this assumption (to support mobility of plugins independently of the clusters with which they are associated) would require reworking the plugin interface definitions to support remote messaging between plugins and clusters. It is not clear that this level of mobility is really necessary. (However, this does not mean that, e.g., external systems wrapped by plugins cannot be remote from their clusters, since the plugins can use whatever remote access capabilities they want in accessing the systems they wrap.)
TXA4: system control and user interface development

description:
Large-scale agent systems require user interfaces for a number of purposes, including:
motivations:
From the ALP viewpoint, ALP could make use of CoABS user interface technologies developed by individual projects, as well as the Visualization and Logging Services and the XML-based Agent Activity Markup Language (AAML) being developed for the CoABS Grid, as general means for agents to record relevant information and for user interfaces to display it.
From the CoABS viewpoint, ALP already includes the idea of a User Interface plugin, which may be a mechanism CoABS agents should look at, as well as the idea of individual plugins recording information in the LogPlan for use by user interfaces. ALP is also developing an XML server to provide relevant information recorded by individual clusters to generalized user interface clients, and it has experience in designing user interfaces that support realistic user requirements. In addition, ALP plans to look at system management issues (e.g., using SNMP) that the CoABS Grid will also need to consider. Finally, ALP provides a large-scale system in which to explore these and related issues realistically.
In general, both ALP and CoABS have developed user interface technology, and appropriate data collection and interface mechanisms, for particular agents and agent systems, and joint technology development in this area might prove extremely fruitful. For example, both programs are currently investigating the use of XML in various aspects of their user interface development, e.g., in creating user interfaces that are able to adapt to openness in the agents that are incorporated in the system. Both programs could also track and possibly use XML-based system management protocols and data representations currently being developed by industry groups, e.g., the XML encoding of the Desktop Management Task Force's <http://www.dmtf.org> Common Information Model (CIM) data for Web-based system management (see in particular <http://www.dmtf.org/pres/rele/1998_10_16_1.html>). This will be an issue in integrating and monitoring the widest range of COTS components that might be attached to, or wrapped by, agents and plugins.
discussion:
The CoABS Grid prototype describes a Logging Service that will allow Grid members to log information they want to make persistent. The log will be stored in an XML-based language (Agent Activity Markup Language). The Logging Service also provides a query interface, and a trigger facility to allow agents to be notified when certain data is stored in the log. The Grid Visualization Service provides a Web browser interface to agent activity in the Grid. ALP provides somewhat similar facilities via the ability for plugins to record information in the LogPlan for debugging purposes, and for User Interface plugins to provide this information to clients supporting user displays. ALP also plans to build an XML-based server capability into each cluster to support user interface queries and commands; commands include those to start and stop clusters within nodes, and plugins within clusters. The similarity of these facilities suggests that technology exchange in this area would be highly promising. ALP is investigating the use of a CPOF visualization framework called LEIF (Lightweight Extensible Information Framework) <http://www.dtai.com/leif/execsumm.html>, which CoABS may also wish to consider using.
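The common log/query/trigger pattern shared by the Grid Logging Service and the ALP LogPlan-based facilities might be sketched as follows; the interfaces shown are invented for illustration and are not either program's actual API.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;
    import java.util.function.Predicate;

    public class LoggingServiceSketch {

        static class LogEntry {
            final String source;    // agent or cluster that logged the entry
            final String payload;   // in the Grid case this would be AAML (XML) text
            LogEntry(String source, String payload) {
                this.source = source;
                this.payload = payload;
            }
        }

        private final List<LogEntry> log = new ArrayList<>();
        private final Map<Predicate<LogEntry>, Consumer<LogEntry>> triggers = new HashMap<>();

        public synchronized void append(LogEntry entry) {
            log.add(entry);
            // Trigger facility: notify any registered listener whose condition matches.
            triggers.forEach((condition, listener) -> {
                if (condition.test(entry)) listener.accept(entry);
            });
        }

        /** Query interface: return all entries satisfying the caller's condition. */
        public synchronized List<LogEntry> query(Predicate<LogEntry> condition) {
            List<LogEntry> result = new ArrayList<>();
            for (LogEntry e : log) {
                if (condition.test(e)) result.add(e);
            }
            return result;
        }

        public synchronized void addTrigger(Predicate<LogEntry> condition,
                                            Consumer<LogEntry> listener) {
            triggers.put(condition, listener);
        }
    }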
The ALP plan this year calls for making cluster management more automatic, using SNMP to control clusters and report on status. System monitoring and management of this type is an issue that should also be explored in the context of the CoABS Grid. Some level of integration between the various facilities involved in recording/logging information at various levels (agent, system, etc.) in ALP and CoABS (Grid Logging Service, AAML, SNMP MIBs, parts of the ALP LogPlan) should also be explored. Work on system management could lead to further work on, e.g., obtaining qualities of service (e.g., by altering communications protocols, adjusting agent capabilities, or even selecting alternate agents), and on infrastructure developments that would help in this area. Such work will be important in interacting with certain types of agents or components (e.g., those providing real-time video feeds). The recording of agent and system-level information also provides the basis for a number of other facilities, such as the information required to control mobility.
A particular application of this technology is decision tracking, access to agent assumptions/policies, and diagnosis. This involves the definition of data and mechanisms for accessing (and possibly changing) details of agent operations and internal assumptions via user interfaces. This can be considered a form of high-level "debugging" of agents (at more of a "knowledge level"), as well as being relevant to ALP "policies". CoABS appears to have some technology that involves creating this type of information for access via higher-level agents (TEAMCORE may support this). ALP could use interaction in this area to get an idea of the types of data and agent capabilities needed to support this type of access. A question about access to agent-level policy information was raised by a potential user at one of the ALP demonstrations. From the CoABS viewpoint, ALP provides a testbed in which to try out this type of technology on a heterogeneous agent platform, and to investigate how to adapt it to a different kind of agent. CoABS could also study how "debugging" is done on the various heterogeneous ALP clusters/plugins and attached systems, what types of data they use, etc. Supporting this facility requires that individual agents (and in many cases attached systems) be written to provide the necessary control information. The same is true for providing user access to assumptions made by individual attached systems. This could presumably be considered part of the ability for users to specify policies.