Future AITS Architectures
Craig Thompson
Object Services and Consulting, Inc.
29 July 1997
The Homework Assignment (Tuesday, 29 July 1997)
Reviewers,
Dave Signori has put together a small tiger team to further work
out the AITS architecture and roadmap. We had our first meeting on
17 July and the next is scheduled for 12 August (with plenty of assignments
in between!). The goal is to have a good briefing on the architecture
and roadmap in October.
In this context, Dave asked me to put together some high-level
descriptions of what future AITS architectures might be. The
purpose is to have some idea of how the current architecture might change
(possibly radically) in the longer term so that some of the right "hooks"
can be put in it now. There are (at least) two things to be addressed
here:
-
The new architectural concepts themselves, e.g., composability of fine
grained elements or the use of intelligent agents. These don't necessarily
have to be new architectures in their entirety, but can also be new approaches
for some component of the architecture -- e.g., means for greater robustness
in stressing environments.
-
The framework for expressing these new ideas. That is, at the
highest level, the framework might be enabling technologies, the general
architectural concept, and implications from a user's perspective, all
expressed in a time-phased way. Then the trick is to figure out how
to bin the technologies and determine the right categories for expressing
the elements of the architecture (e.g., data distribution mechanism, method
for overall system control, approach for achieving semantic consistency
of exchanged information).
I would appreciate input from you on these subjects since I'm sure you
all have some definite ideas! So please pepper me with your thoughts
-- anything from a snippet or two to a well elaborated thesis! This
is a real chance to start coalescing some ideas that will affect the eventual
evolution of the AITS architecture.
I would appreciate it very much if you could start feeding me input
this week or early next week so I can begin to collect my thoughts for
the August meeting. Thanks in advance for your help.
Richard Ivanetich
The Solution
Requirements driven architecture
The AITS architecture, like all enterprise-level systems, will be built
to meet requirements. There are three kinds:
-
requirements that we can predict and that the initial system is built to
satisfy,
-
requirements that we can predict but that current resources cannot implement
but the design should not preclude, and
-
requirements that we cannot predict in advance and that
-
may be easy to satisfy
-
may be hard to satisfy and may cut across the current design in unforeseen
ways to force redesign
There is real value in trying to write down the requirements and review
them periodically since they represent a way to scope the system
and also avoid rework. The scope today is the requirements you can
check off as satisfied. A growth path is to assign dates to requirements
on a roadmap. Since AITS is a large, evolving architecture, one can
immediately predict that it involves programming in the large, which always
involves discovery of "the (previously) unknown requirement" -- whether
due to size or time-related scaling. We cannot fully protect against this.
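To make the scoping idea concrete, here is a minimal sketch (all names, dates, and fields are illustrative, not taken from any AITS roadmap) of requirements tagged by kind and roadmap date, where the current scope is simply the satisfied subset:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Requirement:
    """One roadmap entry (field names are illustrative)."""
    name: str
    kind: str                      # "initial" | "deferred" | "unanticipated"
    target: Optional[date] = None  # roadmap date, if one has been assigned
    satisfied: bool = False

def current_scope(reqs):
    """The scope today is the requirements you can check off as satisfied."""
    return [r.name for r in reqs if r.satisfied]

reqs = [
    Requirement("track exchange", "initial", date(1997, 10, 1), satisfied=True),
    Requirement("agent mobility", "deferred", date(1999, 1, 1)),
]
```

Assigning dates to the unsatisfied entries is the "growth path" described above; the unanticipated kind, by definition, only shows up later.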
AITS Scope TBD
A big question for me is, where does the scope of AITS end? Right
now, it is at least a command and control, crisis management, simulation,
logistics, and infrastructure architecture. Most of these architectures
are, at the moment, loosely coupled. There is much more that a DoD
architecture could cover and the architectures could be more tightly coupled.
What are the scope bounds, and how do you know when you have reached them?
What is an architecture really?
Along these lines, do you want to think of AITS as one architecture meeting
all needs or a composition of architectures? Hint: the latter.
If so, it makes sense to consider a requirement
that the multiple architectures be capable of interoperating. Given
these intuitions, how do we characterize architectures, treat them as units,
and characterize potential inter-architecture interfaces? What type of
information and mediation is needed to enable architectures based on different
architectural assumptions (of what type?) to interoperate? Along
these lines, how do we characterize stovepipe architectures and how do
we know AITS as implemented is not one? Is there evidence of module
replacement, mix and match, best-of-breed modules? Is there a notion
of a lightweight and heavyweight variant of the AITS?
AITS and the ilities
The ilities are generally cross-cutting meta-architecture constraints that
are hard to retrofit onto a conventional system designed as a stovepipe
or missing an ility. DARPA ITO has, or is planning, whole programs
devoted to some key ilities:
-
scalability
-
evolvability
-
survivability
-
security
-
composability
There are more ilities (e.g., understandability, affordability, agility,
performance) but focusing on just these, one could use them as a metric
to measure AITS against now, with the presumption that AITS must grow to
meet these predictable architectural framework requirements. For instance,
-
how can one replicate JTF architectural instances so that many command and
control instances can co-exist? How do they compose hierarchically to meet
the needs of hierarchical control? How do functions cooperate across
cell boundaries, so that an intelligence function can feed many command and
control cells and a tracked object can move smoothly between one tracking
DBMS and another? etc.
-
how can JTF/AITS evolve to add anchor desks, to provide more battlefield
info, to filter better, to take advantage of the latest trends in infrastructure
technology? How can the AITS architecture evolve consistently to
reflect (changing) DoD Joint doctrine (business rules?), which is currently
distributed across many documents and software programs?
-
what are the threat models, and how can the AITS survive attacks?
One can say much more about the ilities, including trying to list them
and characterize what exactly they are. To what extent do ilities have
to be built into all modules, and when can they be provided by a framework
that is imposed from outside, without requiring cooperation from the managed
components, which might be ility-unaware?
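As a toy illustration of imposing an ility from outside, here is a sketch (all class and method names are hypothetical) in which a retry-based survivability policy is wrapped around a component that knows nothing about it:

```python
import time

class UnreliableService:
    """Stand-in for an ility-unaware component (hypothetical)."""
    def __init__(self):
        self.calls = 0
    def fetch(self):
        self.calls += 1
        if self.calls < 3:
            raise ConnectionError("transient failure")
        return "payload"

class RetryWrapper:
    """Imposes a survivability policy (retry) from the outside,
    without the wrapped component cooperating or even knowing."""
    def __init__(self, target, attempts=5, delay=0.0):
        self.target = target
        self.attempts = attempts
        self.delay = delay
    def __getattr__(self, name):
        method = getattr(self.target, name)
        def retrying(*args, **kwargs):
            last = None
            for _ in range(self.attempts):
                try:
                    return method(*args, **kwargs)
                except Exception as exc:
                    last = exc
                    time.sleep(self.delay)
            raise last
        return retrying

service = RetryWrapper(UnreliableService())
result = service.fetch()  # "payload", after two transient failures
```

The point of the sketch is only that the policy lives entirely in the wrapper; whether real ilities (security, scalability) can be imposed this cleanly without module cooperation is exactly the open question above.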
AITS Business Model missing
Right now, AITS has a technical and application architecture and a rendezvous
path for taking ITO technologies, testing them in ISO programs, and then
migrating them into DISA/LES/COE. But there is less focus on how DARPA
work can affect industry standards or industry technology trends.
NIMA and HLA have aggressive models of getting industry to partner in the
adoption of their critical path architectures. How will DARPA do
this with AITS (which includes HLA and might also include NIMA's architecture
if DARPA pursues the Dynamic Database program)?
Technology Trends
Focusing together on infrastructure and evolution, we can predict several
industry trends the AITS architecture must take into account. These
are environmental in nature but can affect the core nature of the AITS
architecture. The following sample of Nostradamus-like predictions
will surely be true to some extent, so the issues are how to accelerate
these directions, given that there are technical and social barriers to each,
and how these trends will affect AITS.
-
Componentware will increasingly be an economic force
in software development, and programming will get easier
-
the OMG Object Management Architecture Next Generation
will include (my list): ilities, dynamic binding, mobile code, composition
containment model, exoskeleton, metadata repository, packaging technology,
policy managers, distributed system management, federation, web + object
integration, convergence with Java
-
Agents will replace objects (OMG and also OBJS will have to change our
names!). The agents that prevail will need to be simpler than the
intelligent agents of today so anyone can program
them; they will be mobile and massively distributed.
-
caveat: one can well ask if this is a prediction
about technology or terminology, particularly given the varied definitions
of what an "agent" is. A related question is what the relationship
is between "agents" and "components" (see below) -- are these competing
or complementary technologies?
-
how do we converge objects, agents, ontologies, KBMS
systems, rules systems, constraints?
-
how do we get pervasive adoption of a reasonable
agent model? (By analogy, having 20+ variations on object models was not
all that helpful for interoperability.)
-
how can we ensure mobility, i.e., the ability
to move, not necessarily whether or not there is actual movement? Movement
itself might need to be governed by load balancing, demand, free-market
economy schemes where agents have funds and pay for what they want, and/or
survivability considerations.
-
A corollary is that system management will be more
important than it is now, and more technology will need to be developed
to support it. This doesn't necessarily mean the centralized kind of management
practiced now, but it does mean more work on what metadata is required
to do the right kinds of management, what protocols are needed, and what
hooks need to be built into both lower level system components and higher
level application components to enable this to happen. System management
must itself be controllable from the outside. The mechanism by which
services bid for resources and migrate in response to attacks must itself
be situation dependent. Service priorities change as the real-world
situation changes. This is a value judgment that must be made from
the outside, since all applications think they are the most important.
-
The web digital library will become richer,
better organized, and easier for us all to augment. One corollary:
dynamic DBMS (GIS) will provide a pervasively available spatial index for
locating places in a 3D world. Another corollary: new annotation
technologies will make situation modeling easier for communities of experts
and better support pedigrees and web views.
-
Electronic commerce will make sharing code and software much easier, which
will break down some organizational barriers; for instance, virtual enterprises
and supply chains will change the information boundaries of organizations,
affecting ALP at least. A corollary: AITS will be harder to
distinguish from the rest of the information space, on which it will depend
more than is shown in the current architecture.
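The free-market scheme for agent placement mentioned above might be sketched as follows (a deliberately naive greedy auction; all names are illustrative, not a real agent platform API):

```python
def allocate(agents, hosts):
    """Greedy market allocation: the highest-bidding agents win host slots.

    agents: list of (agent_name, bid) pairs, where bid is the funds an
            agent is willing to pay to run.
    hosts:  dict mapping host name -> number of free slots (mutated).
    Returns a dict mapping placed agent -> host; unplaced agents (those
    outbid when capacity runs out) are simply absent.
    """
    placements = {}
    for name, bid in sorted(agents, key=lambda a: -a[1]):
        for host, free in hosts.items():
            if free > 0:
                hosts[host] = free - 1   # consume one slot on this host
                placements[name] = host
                break
    return placements

hosts = {"h1": 1, "h2": 1}
placements = allocate([("a", 5), ("b", 9), ("c", 1)], hosts)
```

A real scheme would also fold in load, demand, and survivability signals rather than bids alone, but even this toy version shows why movement should be governed by policy rather than hard-wired.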
Puzzles
Some of the technology puzzles that seem relevant to AITS that I hope to
understand more about in the next year or two are:
-
how can we converge web and distributed object technologies (including
agent technologies) so that the right infrastructure is in place for load
balancing, application partitioning, and other forms of distributed mobile
computing?
-
how can we build a new system from a bin of parts when we see all the functionality
we need in the bin, but a little is in part A and a little in part B and
so on? How can we routinely write software that can be deconstructed
and reconfigured?
-
how do we make components that have an extensible exoskeleton, so we can
expose new interfaces over the life of the component even if we are not
the component designer? That means exposing new metadata via new
interfaces.
-
how can we put in place system management interfaces to get end-to-end
managed object systems? Agent-based architectures look like part
of the solution.
-
how do we expose QoS information from parts and compose it for end-to-end
and bottom-to-top QoS solutions? And what do we do in resource-poor
environments where not all customers can be served? And what
happens when some components are opaque and do not expose QoS (or system
management) metadata?
-
how do we figure out what QoS (and other metadata)
we need to have in the first place? (We need to know that before we can
expose it.) What metadata is required, particularly at the higher levels,
to describe QoS and other important characteristics is far from clear right
now (e.g., how do I describe the "qualities" of various high-level services?).
Technology like the W3C's PICS, which allows people to develop their own
rating schemes in a self-describing way (so they can be understood by Web
clients without being built into them), extended to deal with arbitrary
types of ratings and other metadata (the focus of W3C's Resource Description
Framework (RDF) activity), seems like a key component for dealing with
some of these issues.
-
how can we build algorithms that are incremental or partial or ... that
provide useful answers in open-world-assumption environments where not
all is known?
-
how can we build broad-coverage ontologies ("poor man's ontologies") rather
than thin, narrow knowledge representations that take PhDs to develop for
small coverage areas? Such ontologies might be useful in broadening and
narrowing web searches, so queries can be augmented to return much more
appropriate related answer sets. Relatedly, how can we cover queries to
structured, semistructured, and unstructured information sources via an
augmentable query service?
-
what additional metadata must be stored about services
so services can be controlled from the outside by various policy managers?
How can that data be reasonably collected?
-
how can we scale by using a federation pattern to
build global systems from components? How can we end-to-end federate
systems that have made heterogeneous policy decisions, e.g., compose systems
that support linear versioning with others supporting branching versioning?
Compose systems with discretionary and mandatory access control?
Support optimistic and pessimistic transactions? etc.
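The end-to-end QoS composition puzzle above can be made concrete with a toy model (an assumption for illustration, not from any AITS specification) in which per-stage latencies add, availabilities multiply, and a single opaque component makes the end-to-end answer unknown:

```python
def compose_qos(stages):
    """Compose end-to-end QoS for a pipeline of stages.

    Each stage declares (latency_ms, availability), or None if the
    component is opaque and exposes no QoS metadata.
    Latencies add along the pipeline; availabilities multiply.
    Any opaque stage makes the end-to-end result unknowable (None).
    """
    total_latency, total_availability = 0.0, 1.0
    for qos in stages:
        if qos is None:                 # opaque component: no metadata
            return None
        latency_ms, availability = qos
        total_latency += latency_ms
        total_availability *= availability
    return total_latency, total_availability

pipeline = [(10, 0.99), (5, 0.999)]    # two well-behaved stages
end_to_end = compose_qos(pipeline)      # (15.0, ~0.989)
```

Even this trivial model shows the cost of opacity: one unannotated component destroys the end-to-end guarantee, which is an argument for making QoS metadata a required part of component packaging.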
AITS Future Architecture
Now, to be specific, and not knowing enough about the AITS architecture
yet, here is my current view of some components that could change in the
next few years:
-
The AITS infrastructure services layer can depend increasingly on
OMG services, Java libraries, and convergence of these.
-
Some experimenting makes sense with various kinds of mobile intelligent
agents, especially those compatible with Java. It might also make sense
to augment the C2 Schema, which is currently based on OO technology,
with KBMS features like rules and constraints, or with agent frameworks.
-
The web server could become more of a web knowledge base, and the
web data structures (HTML and MIME) could handle metadata much better,
augmenting the current syntactic web with much more
semantics to enable better searching and other operations. This could
happen not only within AITS but also within W3C, via the RDF activity mentioned
above, as well as other activities like XML (Extensible Markup Language),
a simple dialect of SGML that is richer and more powerful than
HTML. Microsoft seems to be paying a lot of attention to XML, as is Netscape;
a number of recently proposed technologies from both companies are
based on it. Also, it should be possible for third parties
to augment web-based knowledge via group annotations or public/community
annotations, since experts or agents other than the authors often have useful
augmentations. PICS/RDF-like technology would appear to be crucial
in supporting this. Along these lines, more work is needed on defining
web-based views so some can see a higher-level view and others a more detailed
view. And work is needed on webbase viewers that display the
level of information needed while still permitting expansion or drill-down to
see more.
-
At the same time, if this infrastructure can become a useful situation
server data model, then the rest of the web
can benefit from a richer information base than just HTML, and improvements
by a much broader community can accelerate the situation
server's representational capabilities, as well as the structure of information
that an AITS might find useful, both from within the intelligence community
and from the normal industrial and educational communities.
XML seems relevant here as well.
-
The query server architecture, whether SAIC's
or MCC Infosleuth's or I*3's, will need convergence and transition to OMG,
where we are starting up similar work.
-
Right now, HLA is enticing industry to adopt
its framework for federating simulations. This could lead
to a revolution in game playing on the web, which in turn could lead to rapid
improvements in HLA and simulation technology.
-
Similarly, if NIMA's commercialization strategy is successful, then industry
will be a better partner in making GIS backplane battlefield information
more broadly available. If OGIS and Dynamic DBMS do their jobs right,
this will not lead to stovepiped (though componentized, services-based)
GIS systems, and it will also be possible to represent thematic information
that is not geographically indexed. The current Map Server will
be a simple special case of the eventual capability.
-
The JTF architecture will need a view of itself as federated, like
HLA simulation and ALP logistics architectures.
-
Planning and Scheduling technologies are perhaps ready to
become OMG services.
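The third-party annotation idea above can be sketched as a minimal PICS/RDF-flavored triple store (the API, properties, and URLs below are invented for illustration):

```python
class AnnotationStore:
    """Minimal PICS/RDF-flavored annotation store (a sketch).

    Third parties attach (subject, property, value, author) statements
    to resources they did not author; clients filter by property. The
    statements are self-describing: nothing about "pedigree" or
    "rating" is built into the store itself.
    """
    def __init__(self):
        self.statements = []

    def annotate(self, subject, prop, value, author):
        self.statements.append((subject, prop, value, author))

    def about(self, subject, prop=None):
        """All statements on a resource, optionally for one property."""
        return [s for s in self.statements
                if s[0] == subject and (prop is None or s[1] == prop)]

store = AnnotationStore()
report = "http://example.org/sitrep/42"
store.annotate(report, "pedigree", "derived from sensor feed", "analyst-1")
store.annotate(report, "rating", "high-confidence", "analyst-2")
```

Since authorship travels with each statement, a viewer could present group annotations, community annotations, or only a trusted subset, which is the web-view/drill-down behavior the bullet above calls for.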
Conclusion
The above is a "getting started" list of AITS architectural futures that
might start more discussion or aggregate with other lists. The
long list of possible things to do provides some views of the future.
Prioritization helps pare this down to current resources. But seeing
the possible future helps defend against being blindsided.