Workshop on Compositional Software Architectures
Workshop Report
Monterey, California
January 6-8, 1998
Editor: Craig Thompson
February 15, 1998
[Workshop
homepage: http://www.objs.com/workshops/ws9801/index.html]
Contents
Sponsors and Organizers
The Object
Management Group (OMG) is the software industry's largest consortium
and is focused on open interoperable component software interface and framework
technology.
Workshop Committee
Objectives of the Workshop
The workshop focused on:
-
component software architectures
-
non-functional system-wide properties (-ilities)
-
web and ORB integration architectures.
Fundamental concerns face organizations developing and maintaining large,
enterprise-critical, distributed applications.
-
Application development teams spend too much time coping with the complexities
of their chosen middleware, services, tools, web, and programming environments.
An application's choices of underlying middleware products become pervasive,
irrevocable commitments for the application and organization.
-
Complex distributed application logic must be delivered via the web, and
today there are object model and service architecture mismatches.
-
The goal of assembling applications from reusable components is still elusive
because business applications require system-wide properties like reliability,
availability, maintainability, security, responsiveness, manageability,
and scalability (the "ilities"). Assembling components and also achieving
system-wide qualities is still an unsolved problem. As long as the code
that implements ilities has to be tightly interwoven with code that supports
business logic, new applications are destined to rapidly become as difficult
to maintain as legacy code.
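To make the interweaving problem concrete, here is a minimal sketch of one commonly
proposed remedy: routing calls through an interceptor so that an ility (here,
retry-based reliability) lives outside the business logic. The sketch uses Java's
dynamic proxies; the OrderService interface and the retry policy are hypothetical
illustrations, not drawn from any workshop paper.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // Hypothetical business interface; its implementation contains no ility code.
    interface OrderService {
        void placeOrder(String item);
    }

    // A reliability ility injected around the component: retry transient failures.
    class RetryInterceptor implements InvocationHandler {
        private final Object target;
        private final int maxAttempts;

        RetryInterceptor(Object target, int maxAttempts) {
            this.target = target;
            this.maxAttempts = maxAttempts;
        }

        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            Throwable last = null;
            for (int attempt = 0; attempt < maxAttempts; attempt++) {
                try {
                    return method.invoke(target, args);
                } catch (InvocationTargetException e) {
                    last = e.getCause();   // assume the failure may be transient
                }
            }
            throw last;
        }

        // Wrap any OrderService so callers get the ility without code changes.
        static OrderService wrap(OrderService impl) {
            return (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class[] { OrderService.class },
                new RetryInterceptor(impl, 3));
        }
    }

If the ility can be swapped in and out at this boundary, the business component
need not become legacy code when the reliability policy changes.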
Component software did not exactly set the world on fire five years ago.
Now we have new languages, maturing visions of compositional architectures
(CORBA, WWW, ActiveX, ...), the web as a distributed system with a low entry
barrier, and emerging middleware service architectures. Do we have the
critical mass to jump start the component software cottage industry? Even
if the technology enablers are there, what is needed to establish an effective
component software market? What are the remaining barriers?
The objective of the workshop is to bring together a mix of leading
industry, government, and university software architects, component software
framework developers, researchers, standards developers, vendors, and large
application customers to do the following:
-
better understand the state-of-practice of industry component software
initiatives (ActiveX, OMG's OMA/CORBA, Java, W3C) and how far they go in
solving problems of composability and plug-and-play.
-
better understand how software architectures play a role in integrating
web and object service architectures and in building systems that can maintain
architectural properties (e.g., composability, scalability, evolvability,
debuggability).
-
identify key technologies, directions and convergence approaches
and characterize open research problems and missing architectural notions.
Workshop Focus
The workshop consisted of a set of invited presentations and topic-centered
breakout sessions. Topics of interest listed in the Call for Participation
included (but were not limited to):
-
State of practice in component software and software architecture - e.g.,
views from Microsoft, Netscape, JavaSoft, OMG/ODP, and the software architecture
R&D community.
-
State of practice in web + distributed object integration - e.g., views
from Netscape, Visigenic, Iona, JavaSoft, W3C, Microsoft, web-objects architecture
R&D.
-
Characterizing the problem. What do large application developers
and enterprise software architects want? How do they avoid building more
unmaintainable legacy applications? How do they build applications with
fifteen year life cycles on middleware products that change annually? How
do they architect systems so that both functionality and architectural
-ilities can be upgraded over the application's life cycle? Approaches
to evolution of software.
-
Composing components. What are examples of minimal common infrastructures
that enable component composition? Are we there yet with respect to plug
and play? Problems with component software approaches (devil's advocate
positions) and solution approaches (counter-arguments) - e.g., footprint,
too many interfaces, uncontrolled evolution. Economy of component
software. Is component software a silver bullet, a nightmare, or
yet-another-technology?
-
Composing object services. How can we compose object services?
Could you make a competitive OODB from naming, persistence, transactions,
queries, etc.? Implicit interfaces and wrappers. What behavioral
extensions can be added implicitly to a system? Mechanisms like POA,
interceptors, and before-after rules to guard objects to ensure they are acted
on by implicit operations (a sketch of such guards appears after this list).
Terminology - e.g., loose or tight coupling, granularity, frameworks.
-
Architectural properties. What are ilities, i.e., properties
added to a system that are independent of the functionality of that system?
How do we insert them into component software architectures? Say you
had a system doing something. How would it look different if ility
X was added or removed? Is there some kind of architectural level
where the boundary between the visibility/hiddenness of the ility changes?
What is needed in the architecture in order to add ilities?
-
Scaling component software architectures. Frameworks, patterns, configurations,
inter-component protocols. Examples of composition involving heterogeneous
data sources. Federation - do we have to federate the services
when we have ORBs on 40,000,000 desktops? What can we say about the
federation pattern? End-to-end, top-to-bottom ilities like optimization,
QoS, security, ...
-
Adaptivity of component software architectures. Tailorability, evolvability,
assured services and graceful degradation, survivability.
-
Web object models, metadata and registry/repository in Internet/Web.
How do DOM, XML, PICS-NG, RDF, and the many metadata proposals relate to
object and agent definition languages?
-
Convergence of ORB and Web architectures. (Why) are both camps doing
the same things differently? How can we avoid recreating the same set of services,
like versioning, on both architectures?
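As promised in the composing-object-services item above, here is a minimal sketch
of the before-after guard idea: implicit operations that run around every call
without the guarded object knowing. The Guard interface and the access/audit
rules below are hypothetical illustrations, not a standard API.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    // A before-after rule: "before" can veto a call, "after" can observe its result.
    interface Guard {
        void before(String method, Object[] args);
        void after(String method, Object result);
    }

    // Example rules: an access check before each call, an audit record after it.
    class AccessControlGuard implements Guard {
        private final Set<String> allowed;

        AccessControlGuard(String... allowedMethods) {
            this.allowed = new HashSet<String>(Arrays.asList(allowedMethods));
        }

        public void before(String method, Object[] args) {
            if (!allowed.contains(method))
                throw new SecurityException("denied: " + method);
        }

        public void after(String method, Object result) {
            System.out.println("audit: " + method + " -> " + result);  // stand-in audit log
        }
    }

An interceptor in the communication path could invoke such guards around each
method call, so security checks and auditing become implicit operations rather
than code woven into the object itself.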
Planned Outcomes and Benefits
The explicit planned outcomes of the workshop were position papers and
this workshop report, which summarizes the breakout sessions. Implicit
benefits of the workshop are:
-
participants and those reading the workshop report will gain a general
understanding of the state-of-the-art and the state-of-practice in the
workshop focus area, and individuals will be able to make quicker connections
to relevant research projects.
-
(a little longer stretch) possible direction changes and convergence among
technologies as different groups (e.g., DARPA, OMG, W3C, ODP, IETF) understand
what others bring to the party.
Position Papers
Position papers (mostly around three pages long) on a topic related to
the workshop theme were solicited by November 21, 1997. Generally,
an accepted position paper was a prerequisite for attending the workshop
except for a small number of invited talks. The position papers were made
web-accessible by December 7, 1997, in various widely used formats (.html,
.doc, .ps, .pdf, .txt) -- see the List
of Position Papers arranged in the order received.
We originally expected around 40 position papers but received 112 and
accepted 93. This was a first indication that the workshop theme
was of broader interest than we originally expected. We decided to
scale up the workshop rather than severely restrict participation.
We solved the scaling problem by adding extra parallel breakout sessions.
Workshop Structure - Presentations and
Breakout Sessions
The workshop consisted of presentations and breakout
sessions.
Most presentations were based on position papers
but a few were invited talks (so there is no corresponding position paper
for these). Several invited talks were scheduled the first morning
to get many of the workshop ideas exposed early. Other talks were
scheduled for presentation in relevant breakout sessions to get the conversation
going. Because of time limitations, we could not schedule presentations
of all position papers. We did our best at matchmaking.
Breakout sessions were two-to-three hour working
sessions focused on a topic, led by a moderator and recorded by
a scribe. Most breakout sessions started with summaries of
a few relevant position papers skewed toward helping to introduce the session's
topic. Following a breakout session, in a plenary session, the moderator
presented a summary of the breakout session and some discussion followed.
The scribes were responsible for sending a summary of their breakout sessions
to the editor of the workshop report by January 16, 1998, for assembly
into a draft report.
There were four sets of half-day breakout sessions
(I-IV), each containing four parallel breakout sessions (1-4). In
order to partition the workshop into breakout sessions, we completed a
poor-man's Topic
Analysis on the workshop papers. This really just consisted of
keeping track of several topics per position paper and then making a large
outline of all the topics. The topic analysis was useful for several
purposes. As an outline for the many topics covered by the position
papers, it provides a way to scope and structure the topics covered.
To a lesser extent, it provided a way to locate (some) papers based on
topics (though it is pretty incomplete if used this way). Finally,
it provide the basis for partitioning the workshop into a collection of
breakout sessions.
The last step was to pre-plan the breakout session topics. This
involved identifying for each breakout session the title, topic description,
moderator, and relevant presentations (this was done before the workshop).
This information appears in the breakout session summaries below.
In addition, we did late binding in selecting scribes at the beginning
of each breakout session. The scribes authored the discussion summaries,
which form the main body of the breakout session descriptions
below.
Opening Session:
Problem Domain and Workshop Goals
[Thanks to Robert Seacord (SEI/CMU) for notes on
these presentations.]
This half-hour session consisted of presentations
by
-
Dave Curtis,
Object Management Group, Workshop Sponsor
We want to foster a marketplace for software based
on some underlying assumptions about connectivity. In the last two
years, there was an Oklahoma land rush (domain madness) when OMG bifurcated
the vertical, domain-specific areas from the horizontal platform services
and facilities. Now there is an increased demand for core services
that domain frameworks can rely on. OMG has active platform work
in security, quality of service, and real-time, and is beginning to think
about upgrading its Object Management Architecture to better cover ilities.
Our current pressing requirement is for some type of component architecture,
and there are a number of activities to support this. Part of the
goal of sponsoring the workshop is to separate the idea wheat from the
idea chaff.
-
Todd Carrico,
DARPA, Workshop Sponsor
The U.S. DoD's Defense Advanced Research Projects
Agency (DARPA) takes on high risk research. Its Information Systems
Office (ISO) and Information Technology Office (ITO) focus on software
advances that DoD needs, which now includes supporting peace-keeping and
crisis management missions. DARPA makes emergent near-term and revolutionary
long-term research investments. DARPA is building a comprehensive
Advanced Information Technology Services (AITS) software architecture that
combines command and control, planning, logistics, data dissemination,
and several other mission critical functions. We are building on
and extending the OMG Object Management Architecture as an open environment
and incorporating Java and DCOM technologies. We have significant
technical programs in security, survivability, quality of service, evolution,
adaptability, and agents. There is still a lot of work in front of
the research community to understand how to build large systems with component-based
technologies. We are seeking good ideas.
-
Ted Linden,
Microelectronics and Computer Technology Corporation,
Workshop Co-Organizer
MCC’s Object Infrastructure Project is sponsored
by several companies. The problem to be solved is that developing
and maintaining distributed business applications is complex, error-prone
and slow. Large applications with life cycles up to 15 years are built
on middleware products with much shorter life cycles. Many position
papers propose ideas for inserting ility support into component communications.
We want to know the minimum architectural assumptions needed to
develop, exchange, and market components. How can we compose components,
achieve functional requirements, achieve ilities with assemblies of components,
debug assemblies of components, and extend and adapt applications built
from components?
-
Craig Thompson,
Object Services and Consulting, Inc.,
Workshop Co-Organizer and Master of Ceremonies
OBJS is focused on combining ORB and web architectures
and object models, injecting ilities via an intermediary architecture,
and making component-based systems scalable and survivable. We seem
to see a common framework for ilities emerging. We want to see if
others see the same thing.
Presentations
-
Higher
Order Connectors, David Garlan,
Carnegie Mellon University (presentation canceled due to illness)
-
Component
Model, Umesh Bellur, Oracle
-
Distributed
Objects with Quality of Service, Richard
Schantz, BBN (.ppt presentation)
-
Large-Scale
Agent Architectures, Stephen Milligan,
GTE/BBN Technologies (.ppt presentation)
-
OLE DB, Michael
Pizzo, Microsoft
-
W3C
HTTP-NG, Bill Janssen, Xerox PARC
(further information)
-
Composing
Active Proxies to Extend the Web, Rohit
Khare, University of California at Irvine
-
Resource
Description Framework, Ora
Lassila, Nokia Research Center and W3C
-
Eco
System: An Internet Commerce Architecture, Jay
M. Tenenbaum, CommerceNet (.ppt
presentation)
-
A Comparison of Component Models, Dave
Curtis, OMG
-
Aspect-Oriented
Programming, Gregor Kiczales,
Xerox PARC
Breakout Sessions
Breakout Session Rationale
Breakout sessions were organized to encourage effective
discussions in a cross-disciplinary workshop where the attendees are coming
in with very different backgrounds, viewpoints, terminology and interests.
-
Breakout Session I (four parallel sessions on different
topics) allowed people with relatively similar interests to establish terminology
for communication and develop an understanding of their requirements, vision,
and key problem areas for compositional architectures. Separate sessions
dealt with problem definition from the viewpoints of large application
architects, middleware architects, middleware system developers, and architecture
theorists.
-
Breakout Session II (four parallel sessions on different
topics) focused on various aspects of solutions with separate sessions
dealing with the economics of components, composing components, achieving
system-wide properties or "ilities" with components, and architectural
views.
-
Breakout Session III (four parallel sessions on different
topics) examined the central issues involved in achieving various ilities
within the context of component-based development. Individual sessions
covered similar technical issues but focused on specific ilities (scalability,
adaptability, quality of service) and on ORB and Web integration
architectures.
-
Breakout Session IV (four parallel sessions on different
topics) examined various dimensions of propagating compositional architectures
into common practice. There were separate sessions on the problem of engineering
common solutions to problems that are being independently addressed in
diverse standards and product development activities, on moving toward
Web object models, on standardizing domain object models, and on technical
details of reifying communication paths.
Seen from another vantage point, there were collections
of sometimes sequenced sessions that covered:
-
understanding the problem from the points of view
of large application developers and middleware architects and developers,
-
software architecture, then components, composition,
and ilities, then specific ilities including scaling, adaptivity, QoS,
and specific mechanisms for inserting ilities,
-
web and object integration of architectures and data
models, and
-
transferring technology to industry and standards
to create a component software economy.
Breakout Session Structure
Each breakout session description has the format:
-
title - selected before the workshop
-
moderator - selected before the workshop
-
scribe - selected at the workshop breakout
session
-
topics - selected before the workshop
-
position papers - selected for presentation
in this session before the workshop
-
discussion - the scribe's summary of the session,
sometimes as edited by the session moderator and the workshop report editor.
Since they were authored by different people, we did not try to maintain
consistency in form or style among all breakouts.
I-1 Problem Definition by
Application Architects
Moderator: Craig
Thompson, Object Services and Consulting (OBJS)
Scribe: [notetaker, please contact report
editor]
Topics: From the large application
builder's perspective, component software is an enticing vision but there
are roadblocks in the way of realizing the benefits. Large
application architects and enterprise software architects will identify
the critical shortcomings they see in current technology, develop a vision
for future component-based development of enterprise-wide applications,
and identify key architectural concepts, tools, and processes needed to
realize their vision.
-
Characterizing the problem. What do
large application developers and enterprise software architects want? How
do they avoid building more unmaintainable legacy applications? How do
they build applications with fifteen year life cycles on middleware products
that change annually? How do they architect systems so that both functionality
and architectural -ilities can be upgraded over the application's life
cycle? Outcome is a characterization of the problem, requirements,
and list of R&D issues.
-
The vision. What do we hope to achieve?
rapid application development, understandability, reliability, ...
Might there be unexpected benefits or results?
-
Sample componentware* roadblocks - what are
they, how to remove them, assumptions limiting reuse
-
the same infrastructure for component software is
needed for deployment, debugging, microlicensing, ilities, fault tolerance,
... and the community will have to agree -- it's all interconnected!
How can small teams end up with interoperable frameworks for all these
capabilities?
-
choice of infrastructure foundation as irrevocable
commitment
-
are monolithic stovepipes always bad? Are we there
yet with respect to plug and play?
-
design for reuse requires thinking, too much knowledge
required, complexity, too many interfaces, jungles of specs, finding your
way, much less the feel of a single architecture.
-
expensive to retrofit, difficult to predict
-
uncontrolled evolution - in multi-vendor development,
if many product vendors evolve a component x.dll, how do we keep from breaking
the other user applications while managing many fielded versions?
-
distributed systems are less secure, rogue components
-
large footprint
-
IDL mappings create complexity, harden boundaries
-
need library, not executables
-
closed DBMS, compiler, workflow, etc. products need
to be more open
-
legacy constraints on solution
-
component compatibility
-
initial architecture clean but warts inevitably appear
during evolution - can we avoid this?
-
* Note: Componentware is a Registered Trademark of
I-Kinetics.
-
Roles - in session descriptions, terms like
"users" and "we" are used to cover different roles, like architect/designer,
component developer, system assembler, and customer. How do these different
roles vary in motivation, objectives, and value models?
Papers
-
paper069.doc
- Louis Coker, Rick Hayes-Roth, Services First, Components Second!,
Teknowledge
-
paper035.html
- Colin Ashford, The TINA Service Composition Architecture,
TINA Consortium
-
paper106.html
- Gabor Seymour, Compositional Software Architecture "ilities" for
Wireless Networks, Motorola, Inc.
Discussion
The purpose of this session was to look at the world of middleware choices
from the point of view of large application architects and to understand
their requirements and experiences to date with using component-based middleware.
There were three presentations.
Louis Coker - DARPA AITS Architecture
Louis Coker talked about the DARPA AITS architecture,
especially the experiences of the JTF-ATD command and control team in being
early ORB adopters who have evolved their application in parallel with
CORBA and the World Wide Web. Their problem involves users collaborating
in developing situation representations and in planning courses of action.
Some limitations of today's ORBs are:
-
to date, ORB implementations are not robust and performance
is poor above 10K objects, partly because object location and binding are
slow.
-
they find the transaction model too restrictive;
they need to manage inconsistency
-
they need to support bandwidth-sensitive applications
that adapt to varying performance levels
-
they use object webs, which are hierarchical graphs
of typed nodes, relationships, and collections - object webs depend on
loosely coupled services: relationships, persistence, replication, and
versioning. The objects are not a standard object model; the webs
are not WWW webs but rather graphs. IDL is used to define classes
in object webs. Object webs are used as a sharable data structure
representation that other JTF-ATD "servers" depend on (e.g., MapServer,
PlanServer, DataServer).
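A rough sketch of the object-web data structure just described, in Java rather
than IDL; the node types and relationship names are invented for illustration
and are not the JTF-ATD definitions.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // A typed node in a hierarchical graph; edges are named relationships.
    class WebNode {
        final String type;   // e.g., "Plan", "Unit", "Map" (invented examples)
        final Map<String, List<WebNode>> relationships =
            new HashMap<String, List<WebNode>>();

        WebNode(String type) { this.type = type; }

        void relate(String relation, WebNode target) {
            List<WebNode> targets = relationships.get(relation);
            if (targets == null) {
                targets = new ArrayList<WebNode>();
                relationships.put(relation, targets);
            }
            targets.add(target);
        }
    }

    // Usage: servers such as a MapServer or PlanServer could share this graph,
    // with persistence, replication, and versioning layered on as loose services.
    // WebNode plan = new WebNode("Plan");
    // plan.relate("covers", new WebNode("Map"));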
These lessons learned are not getting out widely
to the OMG community. Vendors do not seem to be addressing the needs
of the Wide Area Net (WAN) community.
Colin Ashford - The TINA Service Composition
Architecture
TINA is a consortium of telecommunication providers, manufacturers,
and researchers. Their mission is to develop an architecture for
distributed telco applications that is bandwidth sensitive and supports
multimedia. Colin Ashford talked about service composition in TINA.
A service can be created by combining service components, or through a temporary
merger in a session, e.g., a teleconferencing session. Service composition
may be static or dynamic. Services can be composed in parallel or
series (he did not mention weaving). The TINA architecture is based
on OMG's but has richer ORB+ capabilities, administration and management
services, and network resources, and it has a session model. The
business model is to allow retailers to re-sell services
to customers that the retailer has composed from more primitive
services supplied by third party providers, that run on an infrastructure
provided by a communication infrastructure provider, possibly with
an RM-ODP-like broker in the picture to locate services. They
need algebras for specifying composition, scripting languages, and toolkits
to make composition easier to perform.
Gabor Seymour - Compositional Software Architecture
"ilities" for Wireless Networks
Motorola's concerns include mobility and wireless communication.
Mobility impacts the ilities. Cellular topologies require smarter
cell-sites. The physical environment and geography force quality
concerns. There is a need for object migration at runtime and a desire
to migrate functionality to cell-sites rather than keep it central. With
respect to reliability, availability is location-dependent and replication
varies by site driven by cost. With respect to scalability, they
need upward scalability (LAN to WAN) and downward to move objects from
large central sites to less capable distributed sites. Network management
must work in the presence of external events and providers. They need
change management and performance. They need graceful degradation.
Discussion covered the following topics:
We asked why there are so many technology floors to stand on -- OMG,
ActiveX, the web, Java, other middleware, etc. One reason is that
there are so many dimensions of requirements, for instance, the need for
static versus dynamic solutions, the need for LAN-based and WAN-based solutions,
the wide variety of needs for thin or thick security solutions, the degree
of support provided by COTS technologies, their openness, granularity,
time scales (relative speed of change), and many more. Some felt
that the variety is needed because the solutions for the different combinations
require different mechanisms to be integrated. Others felt that maybe
over time we will be able to see how to untangle this so that not every
system is some unique manually coded combination of functions and non-functional
qualities (what DoD calls stovepipes). That is the promise of open
middleware, after all.
We briefly considered the distinction between applications and infrastructure.
This line is gray today because applications must reach down to meet the
middleware floors they are built on, and there is a gap in the form of
missing middleware (missing standards and missing implementations).
Also, even when there is less gap, applications often encode controls over
middleware policies; and sometimes quality considerations reach up into
applications. The move by OMG (and PDES and SEMATECH) to standardize
some domains means standards that reach into traditional application areas.
We still do not have a good way to insulate applications from their middleware
choices.
We discussed designing with evolution and adaptability in mind.
Craig Thompson mentioned that when one designs to meet requirements, it
is a good idea to distinguish different kinds of requirements:
-
requirements satisfied by the system as it currently exists
-
requirements that are foreseen in later versions of the system and not
precluded by the current design
-
unforeseen requirements that the design can evolve to accommodate
-
unforeseen requirements that the design is not protected against and will
require (sometimes radical) redesign to accommodate
What would be nice is to somehow guard against this final category of requirements.
Perhaps we do this by modularizing designs so cross-cutting requirements
only affect some portions of the design. In addition, maybe we can
learn how to add or change the binding glue and insert new capabilities
into systems via wrappers or other mechanisms that can side-effect the
communication paths. This would leave expansion slots for various
kinds of adapters. This would take us from a view of systems-as-designed
to a view of systems-as-continuously-evolving. In some sense, expansion
joints mean looser coupling though performance might be optimized back
in. To some extent, market conditions drive ility needs and change
rates. This also argues to avoid monolithic middleware, though there
is a tendency in the vendor community to produce just that -- today's ORB
vendor services lock you to a particular vendor; you can't often port services
to another ORB implementation. Rather, we want to compose middleware
components for a specific problem and then evolve the solution. There
is unlikely to be a "one size fits all architecture" at least as concretely
implemented (though there might be an abstract model like OMA possibly
augmented with an abstract ility architecture framework, which might be
used to generate and evolve specific concrete architectures.)
Todd Carrico showed a slide of a fat application containing many encapsulated
services on the left and a thin application dependent on many explicit
middleware services on the right. One way to interpret the picture
is to imagine that we are evolving toward thin applications and richer
available middleware so it is easier to build applications based on tried-and-true
middleware components and to mix-and-match lighter or heavier weight services
and ilities. Another interpretation of the picture is that it would
be nicest to be able to move up and down the spectrum of thick and thin
applications without redesigning systems.
We discussed some roadblocks.
-
missing architectural concepts and terminology. The component and
middleware areas are using words like component, service,
facility, and framework without providing operational definitions.
We are just now adding the notion of component models and inter-component
dependencies to the OMG suite of standards to define not just APIs but
wiring; we still do not have a framework for ilities or for architectural
constraints. A consistent layered architecture will need a modeling
approach that takes into account both vertical and horizontal consistency.
-
we do not yet have enough assurance that we can build competitive systems
from components, nor do we know how to evolve such systems, ensure various
ility contracts, or reason about ility tradeoffs.
-
we need a CORBA-lite. What can CORBA do to become as pervasive as
the web and reduce its apparent complexity? We seem to need something
easier to understand and use than CORBA now is, with its many "tree" standard
specifications minus a "forest" view. Java may bypass OMG if it can
fill the same needs, which it is starting to do.
-
W3C may reinvent OMG services (e.g., WebDAV is providing versioning; OMG
has deferred its versioning service for several years now). XML should
not go too long without an IDL interface. We can help OMG set roadmaps
to stay relevant.
I-2 Extending Current Middleware
Architectures
Moderator: Ted
Linden, Microelectronics and Computer Technology Corporation
Scribe: Diana
Lee, Microelectronics and Computer Technology Corporation
Topics: From the viewpoint of middleware
architects, where are the critical shortcomings in current technology,
what kinds of component-based development can be supported in the future,
and what are the additional key architectural concepts, tools, and processes
needed to realize this vision? Current middleware
architectures like CORBA and COM+ are a step toward compositional architectures,
but they do not fully support component-based development and maintenance
of large applications. Are current middleware architectures from OMG and
Microsoft steps in the right directions? What are the roadblocks in the
way of realizing greater benefits? Problem areas for discussion may
include:
-
Current middleware products are becoming huge, monolithic
systems that are themselves almost the antithesis of a component-based
system.
-
Is there any hope if you build your application on
the wrong middleware product and later need to move it to a different product?
-
When middleware products have a life cycle of six
months to a few years, how does one build applications with fifteen year
life cycles? How do you even migrate an application to new releases of
the same middleware product?
-
What about services as components? If you build on
an OMG compliant security service from one vendor, you currently cannot
expect to switch to a similar service from another vendor. It is similarly
hard to build services that port across multiple vendors' ORBs. Is
it feasible to specify services so they become replaceable components?
-
How can middleware help applications achieve system-wide
properties or ilities when the application is built from components?
-
How does middleware help one build applications in
which both functionality and architectural ilities can be upgraded over
the application's life cycle?
-
What can middleware do to help with debugging and
fault isolation?
Papers
-
paper082.html
- K. Mani Chandy, Adam Rifkin, and the Infospheres Group, Caltech
Infospheres Project, California Institute of Technology
The Infospheres project researches compositional ways of obtaining
high confidence in dynamically-re-configurable scalable distributed systems.
-
paper058.html
- Jeff Rees, Intermetrics' OWatch Debugging Technology for Distributed,
Component-Based Systems, Intermetrics, Inc.
OWatch is a practical analysis and debugging tool for distributed,
component-based software, with a particular emphasis on systems of components
in operation.
-
paper104.html
- Ted Linden, Component-based Development of Complex Applications,
Microelectronics and Computer Technology Corporation
MCC’s Object Infrastructure Project is prototyping distributed system
architectures that support ilities by injecting services into the component
communications while keeping the application components largely independent
of the injected services.
Discussion
This session identified requirements for middleware architectures capable
of fully supporting component-based development. The three introductory
papers approached the problem from complementary viewpoints and envisioned
similarly strong requirements for middleware architectures:
-
Joe Kiniry, representing the Caltech
Infospheres Project, sees the need to dynamically compose objects
selected from among the millions of objects soon to be available on the
Web. The project addresses the need to obtain high confidence in dynamically-reconfigurable,
scalable distributed systems.
-
Jeff Rees, representing Intermetrics’
OWatch Project, described an analysis and debugging tool for distributed,
component-based software with a particular emphasis on systems of components
in operation.
-
Ted Linden, representing MCC’s
Object Infrastructure Project , took the viewpoint of developers
of large, long-lived distributed applications having strong requirements
for reliability, security, quality of service, manageability, and other
"ilities." He argued that achieving these system-wide properties is a key
problem when composing components.
These presentations and the discussions argued that support for component-based
development requires more than methods for developing, exchanging, marketing,
and composing components. We also need well worked out methods to:
-
Achieve ilities with assemblies of components.
-
Debug large, dynamic assemblies of components.
-
Maintain high confidence while dynamically reconfiguring applications built
from components.
Relation between Architecture and Components
Which comes first, architecture or components? Currently components
fit within an architecture such as one defined by Java Beans, COM+, or
a browser or other product for its plug-ins. Architecture first is consistent
with the traditional approach of architecting a system before writing components.
However, component technology will be more economical if components can
be developed and used in multiple architectures. The ability to wrap a
CORBA object, Java Bean, or COM+ object so it can appear in one or another
of these architectures means that a component does not have to be totally
dependent on an architecture. There was a surprising amount of consensus
that components should not have to be strongly dependent on a specific
architecture. Components are written first, then architectures tie them
together. An application that uses components will have an architecture -- especially
to the extent that the application must support ilities, dynamic debugging,
and dynamic reconfiguration. Specific components may interoperate more
or less easily within a given architecture; i.e., the wrapping necessary
to make a component work within an architecture may be more or less easy.
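A hedged sketch of the wrapping idea in Java: a component written as a plain
bean acquires a second architecture's interface through an adapter, so the
component itself stays architecture-neutral. All names here are illustrative.

    // A "native" component written with no middleware dependencies.
    class TemperatureBean {
        private double celsius;
        public double getCelsius() { return celsius; }
        public void setCelsius(double c) { this.celsius = c; }
    }

    // The interface a second architecture expects (a stand-in for an
    // IDL-generated or COM-style interface).
    interface TemperatureSensor {
        double readCelsius();
    }

    // The wrapper supplies the architecture dependency; the bean never sees it.
    class TemperatureSensorAdapter implements TemperatureSensor {
        private final TemperatureBean bean;
        TemperatureSensorAdapter(TemperatureBean bean) { this.bean = bean; }
        public double readCelsius() { return bean.getCelsius(); }
    }

The cost of such wrapping, multiplied across all of a component's interfaces,
is exactly the "more or less easy" question raised above.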
We asked whether there is a minimum common architecture that can be
developed as a way to facilitate reuse of components. Components developed
to this minimum common architecture could then be used in a variety of
specific architectures. We concluded that it is unrealistic to search for
a "minimum common architecture." There are multiple dimensions involved
in interoperation, and no one dimension is always most significant. One
increases interoperability by increasing architectural specifications.
The question "what is the minimal architecture" is better described as
"how interoperable do you want the component to be?" and "how much wrapping
or rewriting will be needed to make it interoperate within a specific architecture."
Levels of Interoperability:
Compositional Software Architectures must deal with component interoperability
at several levels. Interoperability at all levels is especially important
for developers of large, long-lived applications that grow incrementally.
Development of new products and technologies may, over time, necessitate:
-
Interoperation of functional components that have been developed on different
middleware products.
-
Interoperation of different services, e.g., naming, transactions, replication,
and security services may interact with each other.
-
Interoperation of services developed for or with different architectures,
e.g., interoperation of COM+ and CORBA security services.
-
Changing middleware products and/or vendors over time. Because middleware
products as well as their vendors have limited life cycles, a long-lived
application may need to be developed so it can switch to a different middleware
product to take advantage of new products and technologies.
-
Interoperability of middleware for debugging purposes.
-
Support for dynamic reconfiguration.
While there are many interoperability requirements, the answer is not in
the direction of complex middleware architectures. In fact, there is a
desire to make the middleware as transparent to the application as possible.
A paper at the workshop, Middleware
as Underwear: Toward a More Mature Approach to Compositional Software Development
[Wileden and Kaplan], states that middleware "... should be kept hidden
from public view ... It should never dictate, limit or prevent changes
in what is publicly visible... In most circumstances, it should be as simple
as possible." But how does this apply to different middleware products
being interchangeable? Is it possible to change middleware in a transparent
fashion? Using the underwear analogy, one attendee rephrased the problem:
"transparent, yes, but it is awfully hard to change your underwear without
removing your clothes."
Solutions toward interoperability that were proposed and discussed include:
-
Consistent definition and use of meta-data to provide some degree
of commonality.
-
Development of a meta-model of middleware: this suggestion had a
couple of variations:
-
Look at current models of distributed middleware (COM, CORBA, etc.) to develop
a component semantic meta-model. Tool writers can use this model to make tools
that build and compose software components.
-
Consider the middleware as a machine for running distributed applications
and develop a language for programming the middleware.
These "meta-model" solutions fall into the trap of merely moving the
problem to the next higher level of abstraction: once one defines the meta-model,
how does one ensure there is only one meta-model? Or if there are
multiple meta-models, how does one ensure the meta-models interoperate?
-
Use of bridging technologies, based on method calls, which
are common to all middleware products.
-
Use of "cluster" technology (the IBM example: Virtual Enterprise).
Break the problem into smaller sections over some context and then have
interactions between the clusters to get the larger solution.
-
Necessary conditions: naming services on a well-defined port and
an interface repository standard.
-
Inject services into the component communication paths while keeping
the application components largely independent of the component interaction
architecture. This is the approach of several groups -- see breakout
session IV-1.
A question remains: don't the mechanisms above depend on some
common agreements, either implicit or explicit, and some amount of shared
metadata that constitutes a sort of component model even if none is explicit?
Obstacles to Component Technology:
-
Distributed component technologies are moving targets. COM, CORBA,
and Java are emerging technologies that are still being defined. It is
difficult to abstract their commonalities for an architecture to build upon.
-
Legal considerations. Copyright law favors rewriting over
reuse. There is a lack of protection for authors of components. Also,
there is no standard licensing approach for components so negotiation to
include these in products must occur on a per component basis.
Other Relevant Issues:
-
Services as components: Is it feasible to specify services so they
become replaceable components?
-
Ilities: How can middleware help applications achieve system-wide
properties or ilities when the application is built from components?
I-3 Challenging Problems
in Middleware Development
Moderator: Bob
Balzer, USC/ISI
Scribe: Kevin
Sullivan, University of Virginia
Topics: This session views component
composition from the point of view of middleware developers and system
programmers. The approach is to select one or two interesting system
software component composition challenge problems that can be used to identify
component software strengths and weaknesses. Hopefully
the challenge problem can be reconsidered from other perspectives in later
sessions of the workshop. Sample challenge problems:
-
validating the component approach -- can you
construct a workflow system (or OODB) from component services like persistence,
transactions, and queries? Will the result be as good as a conventional
workflow system? If not, why not?
-
new architectures from components -- if we
compose web, DBMS, and ORB architectures, can we identify novel, productive
architectures? e.g., a DBMS where the software lives in the web or
network, smart spaces where a spatial metaphor replaces a compound document
page-based web metaphor
-
scaling ORBs - if all the ORBs were connected
to be one ORB, what a great ORB that would be. If all the services
were replicated and federated to become one suite of services what a great
service that would be. If all the users were connected to become
one user ... nope, we don't compose people together to become huge people
though we do to become organizations. So how are we really going
to compose all the ORBs and services if they really become available on
all desktops? How do you fit ilities into this picture? How do you
make the resulting system of systems survivable?
-
Why can't I combine my system with yours?
A implements fault tolerance via replication; B implements cross-system
debugging; C implements distributed garbage collection. What prior
agreements do A, B, and C need to make to be able to compose these functionalities?
Do they have to agree on object model? ORB vendor? Component model?
What else? Is just the design pattern portable, or are APIs and implementations
composable?
Papers
-
paper063.html
- Dennis Heimbigner, Alexander Wolf, Richard Hall, Andre van der Hoek,
Deploying Component Software Systems, Department of Computer
Science, University of Colorado, Boulder
-
paper015.html
- Sung-Wook Ryu, B. Clifford Neuman, Garbage Collection for Distributed
Persistent Objects, University of Southern California/Information
Sciences Institute
Discussion
This session focused on the use of composition enablers and inhibitors
in the design of middleware systems. The questions that we addressed
included the following: What distributed middleware would be useful for
component-based system development? What information and mechanisms
are necessary to enable composition of components into systems, the automation
of such composition, and reasoning about such systems?
At a more detailed level, the questions we addressed included the following:
-
How should interfaces be described?
-
How can you ensure that incompatibilities are detected?
-
How can you exploit reification of intercomponent communications?
-
What are important inhibitors of compositionality?
-
What are differences in design and runtime composition issues?
Much of the discussion centered on the issue of metadata as a composition
enabler. Metadata is machine-readable, descriptive information
associated with components, connectors, and systems. A simple example
of metadata is the type library information that is often associated with
COM components. Such metadata describes component interfaces at a syntactic
level. An extension of that kind of metadata might include descriptions
of what interfaces are required and provided by a component, and how it
expects to interact with its environment. Metadata can be used by programs
to reason about composition, properties of components, and even about middleware
itself. What kinds of reasoning and manipulation are supported by
various metadata types?
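A minimal sketch of the extended metadata discussed here, assuming each component
advertises the interfaces it provides and requires; the class names and the
purely syntactic compatibility check are hypothetical.

    import java.util.HashSet;
    import java.util.Set;

    // Machine-readable description of a component's provisions and requirements.
    class ComponentMetadata {
        final String name;
        final Set<String> provided = new HashSet<String>();
        final Set<String> required = new HashSet<String>();

        ComponentMetadata(String name) { this.name = name; }
    }

    class Composer {
        // A purely syntactic check: each side's requirements must be met by
        // the other's provisions. Semantic properties (security levels,
        // reliability, QoS) would need richer metadata than interface names.
        static boolean compatible(ComponentMetadata a, ComponentMetadata b) {
            return b.provided.containsAll(a.required)
                && a.provided.containsAll(b.required);
        }
    }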
In that dimension, we discussed the following specific issues.
First, the position was taken that we need precise semantics for metadata.
Second, we might use metadata to describe component and system provisions
and requirements, e.g., what a component needs in the security area, and
what a system needs in the reliability area. It was noted that security,
reliability, etc. cannot be bound to individual objects. One reason
is that desired properties often change as knowledge is acquired over time.
Another reason is that opinions might differ as to when given qualities
are good enough. Third, it was observed that metadata can be attached
at many levels of a system. There is no particular place where metadata
annotations necessarily go. However, there has to be a mechanism
to propagate information, so as to enable desired or required levels of
control. Fourth, it was suggested that metadata can be organized
through views of complex systems, e.g., a security view, a reliability
view, etc. Fifth, it was suggested that automated "composers" (e.g.,
class factories, the Software Dock of Heimbigner and Wolf) might use metadata
such as figures of merit to compose components to meet given specifications.
Sixth, we discussed the need for type and constraint systems to enable
automated reasoning about systems and compositions from parts.
For example, it can be necessary to reason about which combinations of
actions and components are acceptable, and to have ways to name them. For
example, there are cryptographically insecure combinations of secure algorithms
and techniques. Another example is that adverse interactions at the level
of mechanism can have unintended semantic consequences, e.g., pinging for
fault detection can interfere with aging-based garbage collection in Internet-scale
distributed systems. We might also want to enable the automatic selection
of alternative abstractions and implementations, e.g., in the context of
management of quality of service. Finally, it was observed that it is critical
to detect mismatches between components and systems and their environments
and that metadata might facilitate detection and reasoning about such mismatches.
We also discussed relevant properties of both middleware and systems
based on it, although the discussion remained abstract in this area.
One middleware property that we discussed was usability. What can we do
to make the middleware itself easier to use? For example, what middleware
metadata would make it easier for developers or tools to understand and
evolve systems?
We also discussed inhibitors of composition in component-based systems.
First, it was suggested that we lack fundamental knowledge of what is useful,
not just in terms of the metadata descriptions of systems, but even in
what basic system properties are important. For example, what are
the key, distinct levels of security in a system? One person said
we have almost no engineering knowledge of what parameters and parameter
values are important. We lack clear definitions of key terms.
Much work in this area is vague and general, and people have inconsistent
views of what terms mean. Second, it was said that most engineering
advances are made when failures are analyzed and understood, but that
in software engineering failures tend to be hidden, and not analyzed. Third,
one person noted that there is little discussion of analysis and formal
foundations in this area. It was noted that Sullivan’s work on formal
modeling and analysis of compositionality problems in Microsoft’s COM represents
progress in the areas of failure analysis (of sorts) and the formal foundations
of component-based software development. Fourth, it was noted that
although the notion of decomposing systems into functional components
and ility aspects is seductive, it might not be possible to effect desired
behavioral enhancements without changes to functional components of a system.
Fifth, the competency of people using components and middleware is
(it was said) often questionable or poor. That idea led to the suggestion
that competency requirements for component use might be attached to components
as metadata. Sixth, computational complexity is an inherent
impediment whenever semantically rich metadata have to be processed.
Seventh, it was noted that computers are so much more capable than they
used to be, and they’re going to get even more powerful, so we need to
find ways to control complexity growth. A key strategy, it was said, is
to make systems simple so that they work. Finally, the issue of the
diversity of approaches in practice was raised as a practical impediment
to the use of metadata.
The participants also discussed the issue of where standards (e.g.,
for components and metadata) will come from: whether from de jure or
de facto standardization. We also discussed some key ways in which
software engineering is similar to or different from more traditional engineering
disciplines, such as bridge building. In particular, we discussed
whether the notion of tolerances (close enough), which is critical in the
design of physical artifacts, has an analog in the software realm.
Good points were made on both sides of this issue.
We ended the session with a discussion of quality of service.
First, it was noted that discovery of key properties happens at both design
and run time. Second, it was observed that it’s important to avoid
a combinatorial explosion in interface types, and so interface types are
used to discriminate objects down to a certain level of granularity in
Microsoft’s COM, below which properties are used as a mechanism for manipulating
quality of service parameters (in OLE DB). Third, QoS specifications
can differ, even for a single component. Fourth, there need to be
generic services for invoking components that provide services.
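The granularity point can be sketched as follows: one interface type discriminates
components, and a property bag below it carries the QoS parameters. This is a loose
Java analogy to the COM/OLE DB pattern described above, with invented property names.

    import java.util.HashMap;
    import java.util.Map;

    // One interface type for all data sources; QoS varies by properties,
    // avoiding a new interface type per QoS variant.
    interface DataSource {
        Object query(String expr);
        Map<String, Object> properties();   // e.g., "timeoutMs", "consistency"
    }

    class CachedDataSource implements DataSource {
        private final Map<String, Object> props = new HashMap<String, Object>();

        CachedDataSource() {
            props.put("timeoutMs", Integer.valueOf(500));  // invented property names
            props.put("consistency", "eventual");
        }

        public Object query(String expr) {
            return null;   // stub; a real source would evaluate expr
        }

        public Map<String, Object> properties() { return props; }
    }

    // A client discriminates by interface type first, then inspects properties:
    // if (((Integer) ds.properties().get("timeoutMs")).intValue() <= 100) { ... }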
I-4 Software Architecture and
Composition
Moderator: Gul
Agha, University of Illinois at Urbana Champaign
Scribe: Adam
Rifkin, Caltech
Topics: What is the vision of component
composition from the software architecture perspective? What does
software architecture explain and what does it not yet address?
-
architecture, design patterns, and frameworks
-
are the approaches from the architecture, design
patterns, OA&D, and frameworks communities different or the same using
different words? how much do architecture description languages help
us in building large-scale systems?
-
define architecture - components, connections, constraints.
Is this good enough? How do ilities fit in this definition?
OMG does not have a standard way to specify connections, constraints, or
patterns of interaction though the current Component
Model RFP submissions begin to address composition. What is needed
beyond a Component Model?
-
casting the problem of system design as an AI search
problem -- if the problem is underconstrained and not unique then which
solution should we choose? Which is most stable?
-
operations on architectures. Here are
some operators on architectures. Is this a useful way to proceed?
-
assembly - composing components to form a new component.
Can we talk about algebra-like operations on architectures or are scripting
languages destined to be the glue that binds most components?
-
decomposition, deconstruct operator
-
instantiating a general architecture, customization,
subsetting an architecture, specializing a component.
-
will frameworks that are customized still interoperate?
If FW1 = FW2 except that they vary with respect to service X, are they interoperable
modulo service X?
-
composing two heterogeneous architectures (e.g.,
web and ORB)
-
replication of components, federation of (homogeneous,
heterogeneous) replicated components to achieve scaling
-
are services like persistence, query, ... object
model independent?
-
deploying
-
locating gaps and stress points
-
architecture alignment and mismatch
-
validation - does the component-based solution do
what you want it to? Does it do it as well as the traditional monolithic
solution?
Papers
-
paper023.ps
- Gul Agha, Compositional Development from Reusable Components Requires
Connectors for Managing Both Protocols and Resources, University
of Illinois at Urbana-Champaign
-
paper050.html
- Cristina Gacek and Barry Boehm, Composing Components: How Does
One Detect Potential Architectural Mismatches?, Center for Software
Engineering, University of Southern California
Discussion
Many fundamental challenges exist when developing software applications
from components, among them:
-
reusability - how can components be developed for reuse?
-
composition - how can new components be assembled from existing
components?
-
correctness - how can the behavior of components and the composed
system be specified and guaranteed?
Many members of the software community have been researching solutions to address
these challenges. Among these efforts:
-
The Open Systems Laboratory [1]
uses the Actors model to define a system of connectors for managing the
protocols and resources needed in composing reusable components [2].
-
A formal description language can help detect mismatches of architectural
features when connecting components [3].
-
The Infospheres Project [4]
uses temporal logic to reason about the interactions between the dynamically
reconfigurable components communicating through RPC, messaging, and event-oriented
middleware [5].
What these systems and others have in common is the meta-model: customizable
components as actors running in given contexts (such as environments with
real-time scheduling and fairness constraints), interacting via connectors,
which themselves are also first-class actors. Constraints can be
imposed on components, on connectors, and on contexts as well. As
system designers, we can specify protocols over the connectors' interactions
with components, and we can specify policies managing the deployment of
resources.
As first-class actors, components and connectors are dynamically reconfigurable,
and they manifest observable behaviors (such as replication or encryption);
a component in isolation in a certain context has an observable behavior
that may differ from its behavior when it is composed into a new environment.
This may have a significant impact on the software "ilities" such as quality
of service (performance), reliability, and survivability. However,
the lesson of aspect-oriented programming is that some ilities cannot be
encapsulated entirely within connectors because they by nature cut across
components [6].
It would be ideal to have modular ility first-class connectors that
transparently provide appropriate behaviors for interacting components
assembled by application developers. In some cases, this is feasible
(for example, the addition of a transaction server to a system designed
to accommodate transactions); in other cases, it is not (for example, building
recovery into a system not designed to accommodate fault tolerance).
Ultimately, the software component architecture vision is to build a
notion of "compliance" on a component so it can work with arbitrary connectors
and behave as promised. Then, compliant components can be plugged
together using connectors to achieve a desired (feasible) ility.
For example, under compliance assumptions, a connector can provide a property
like passive replication.
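A minimal sketch of such a connector, assuming compliant components behave as
deterministic state machines with no hidden side channels. This simplified
primary-backup scheme has backups replay the primary's operations; a fuller
passive-replication connector would transfer state checkpoints instead. All
names are hypothetical.

    import java.util.List;

    // Compliance assumption: a component applies operations deterministically.
    interface Replica {
        Object apply(String operation);
    }

    // A first-class connector: callers see one component; the connector keeps
    // the backups in step with the primary.
    class ReplicationConnector implements Replica {
        private final List<Replica> replicas;   // replicas.get(0) is the primary

        ReplicationConnector(List<Replica> replicas) {
            this.replicas = replicas;
        }

        public Object apply(String operation) {
            Object result = replicas.get(0).apply(operation);
            for (int i = 1; i < replicas.size(); i++) {
                replicas.get(i).apply(operation);   // replay on each backup
            }
            return result;
        }
    }

The point of the compliance contract is that application developers can plug
components into such a connector and obtain the replication ility without
touching component code.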
The research challenge, then, is to provide formal methods and reasoning
models for making clear the semantics of the components both in isolation
and interacting through connectors, and for making clear the properties
of the aggregate system. In addition, developers can use tools for
performance modeling, runtime monitoring, system configuration, and component
feedback cycle tweaking.
We have already witnessed the utility of reasoning models in furnishing
specific ilities to specific applications. For databases, performance
is the desired ility (for example, "What will be the size of the query
result?"), and research has led to reasoning models for concurrency control
and transaction management to address that ility. On the other hand,
for Mathlab, accuracy is a desirable ility (for example, "How much error
exists in the answer?"), and research has led to reasoning models of composable
algebraic matrix operations for predicting the accuracy of results.
These models and others -- such as atomicity precedence constraints
[7] and modular interaction specifications [8]
-- demonstrate that research can provide useful models for distributed
component software architects. They also indicate that much more research
needs to be done -- for example, in the automatic checking of compliance
for component validation [9].
When CORBA-compliance alone is not enough to guarantee an ility, solutions
can be custom-made; for example, a consortium is working on an object framework
for payments [10].
Furthermore, distributed object communities continue to work on the problem
of common ontologies to pave the way toward common solutions [11].
In short, the challenges to software component architectures have no
generic solutions; however, the abstractions and models developed by specific
efforts have led to considerable gains in understanding and guaranteeing
properties of systems and their components.
[1] http://www-osl.cs.uiuc.edu/
[2] http://www.objs.com/workshops/ws9801/papers/paper023.ps
[3] http://www.objs.com/workshops/ws9801/papers/paper050.html
[4] http://www.infospheres.caltech.edu/
[5] http://www.objs.com/workshops/ws9801/papers/paper082.html
[6] http://www.parc.xerox.com/spl/projects/aop/default.shtml
[7] Svend Frolund, Coordinating Distributed Objects, MIT Press, 1997.
[8] Daniel Sturman, Modular Specification of Interaction in Distributed
Computing, available at http://www-osl.cs.uiuc.edu/
[9] Paolo Sivilotti, Specification, Composition, and Validation of
Distributed Components, abstracts available at http://www.cs.caltech.edu/~adam/papers/distributed-components.html
[10] http://www.objs.com/workshops/ws9801/papers/paper021.pdf
[11] http://www.infospheres.caltech.edu/mailing_lists/dist-obj/
II-1 Economy of Component Software
Moderator: Jay
M. Tenenbaum, CommerceNet
Scribe: Catherine
Tornabene, Stanford
Topics: The purpose of this session
is to develop a framework for thinking about what it will take to build
an economy of component software. Why? to accelerate coming
economy of components and possibly to level the playing field for multiple
vendors [avoid CORBA vs. DCOM as the central debate]
-
what is the state of practice? what are
the roadblocks? (from Breakout I-1)
-
choice of infrastructure as irrevocable commitment
...
-
will intellectual property legal barriers become
the real barriers? will microlicenses evolve?
-
what if all products were open and free, would that
solve all our problems?
-
winning the architecture wars
-
is a consistent architecture needed for component
composition, complete with standards like OMG services - if so, is there
an 80% solution that most will adopt? Can it be grown to a 95% solution
with openness add-ons?
-
does the marketplace favor one dominant vendor, many
middle-tier vendors, or millions of micro-vendors?
-
is there an analogy to many varieties of Unix fragmenting
the Unix marketplace?
-
is there an analogy to the many varieties of object models
forcing application builders to select one, which then locks in many or
most other choices and also obsoletes the technology when that object model
falls from favor?
-
is OMG behind? can it become more Java-friendly?
-
is there a danger of too few/many independent ORB
vendors?
-
discontinuities - what might change in the future
affecting a component economy?
-
will the OS absorb word processors, ORBs and middleware,
DBMSs, workflow, ... as it did file systems and TCP/IP?
-
will middleware component software become free like
the highway system?
-
will there be a few dominant McDonald's of software
or Mom and Pop diners?
-
will the density of available solutions on the Internet
increase until everything you can think of is out there? If so, how do we sum
the results?
Papers/Talks
-
paper027.html
- Robert Seacord, Duke ORB Walker, Software Engineering Institute,
Carnegie Mellon University
-
paper001.html
- Anup Ghosh, Certifying Security of Components used in Electronic
Commerce, Reliable Software Technologies (http://www.rstcorp.com)
-
paper077.html
- Arie Segev, Carrie Beam, Martin Bichler, Object Frameworks for
Electronic Commerce Using Distributed Objects for Brokerage on the Web,
Fisher Center for Management & Information Technology, Haas School
of Business, UC Berkeley (http://haas.berkeley.edu/~citm/OFFER)
Discussion
See session summary slides (.ppt4).
In this workshop session, the participants examined the issues surrounding
the development of an economy of component software.
The session had three presentations:
Robert Seacord - Duke ORB Walker
Robert Seacord's work on the Duke ORB Walker is based on a model of
the component software economy that was widely accepted by session participants;
namely, a marketplace which will house many heterogeneous components, some
publicly available, and some behind corporate firewalls. The Duke ORB Walker
is an automated Internet search engine that walks through this marketplace
to collect information about software components that are ORB compliant.
It is analogous to how current Internet search engines collect information
about web pages and web content. Questions to answer regarding the Duke
ORB Walker revolve around mechanisms for searching for other component
technologies such as COM, JavaBeans, etc. as well as whether the ORB walker
will need to rely on an implementation repository to find ORB compliant
components.
This talk led to further discussion regarding the essential infrastructure
of a software component market. Session participants considered a simple
method of finding usable components as a necessary part of the infrastructure
of a component-based software market.
Anup Ghosh - Certifying Security of Components
used in Electronic Commerce
Anup Ghosh's work examines a method of providing software certification
for software components. The underlying assumption of his work with regards
to a software component marketplace is that consumer confidence in the
-ilities of a software component is necessary to the widespread adoption
of the component marketplace. He examines the use of certification as a
viable business model for providing security assurance for software components.
Under this model, a potential component is put through a series of pre-planned
tests. If the component passes these tests, then it is considered security-certified,
and thus can be considered ready for the component marketplace.
This talk led to a further discussion of what sort of certification
services might exist in a component-based software market. It was largely
agreed that not only would certification of a component's -ilities
be essential, but that further semantic certification would be necessary
as well. There was great interest displayed in services that might test
a component's semantic correctness.
Martin Bichler - Object Frameworks for Electronic
Commerce Using Distributed Objects for Brokerage on the Web
Martin Bichler discussed the OFFER project, which is a CORBA-based object
framework for electronic (e) commerce. One of OFFER's key components is
the e-broker, which acts as an intermediary for e-procurement. The OFFER
e-broker assists consumers as they peruse heterogeneous e-catalogs and
also acts as a price negotiator using auction mechanisms. The OFFER group
is also studying what other functionality might be needed in an e-broker.
This talk led to a discussion about the research question the OFFER
group is studying regarding what features might be necessary in an e-broker.
Since OFFER is CORBA and Java compliant, there was discussion as to whether
the e-broker should be extended to COM, and whether that would be feasible.
Discussion covered these topics:
Market Issues
The fundamental issues surrounding a component software market were
effectively reduced to one question: what will people pay for? This
question established a framework for our discussion of market issues:
-
Component Granularity
The issue of granularity sparked discussion as to which granularity
would become predominant in a software component market. The point was
made that there is already a market for small granularity components such
as ActiveX controls, JavaBeans, and widgets. Others theorized that the
component market would probably have to begin with small, easily controlled
components, but that as the market matured, the component size would grow.
The point that 'ease-of-use' tends to lead to broader initial adoption
was raised to support the view that the preferred granularity would be
small at the beginning.
-
Domain
There was discussion over where (in which domain) the component marketplace
would develop first. Suggestions included software utility objects or financial
services.
-
Payment and Pricing Models
The issue of payment models was brought up. Discussion centered on
whether payment would tend to be made for design-time use of the component
(payment up front before the software is deployed), run-time usage (payment
made depending on how often the component is used, or a percentage of sales
made with that component), or some mixture of both models. We also discussed
pay-per-use services versus subscription services. There was no real consensus
on what sort of model might be adopted, and some pointed out that there
might not be a dominant payment model.
This led to a further broadening of the topic and we then discussed
whether the first widely marketed components would tend to be static software
or software services or some combination of both. The point was raised
again that there is already a market for static software components, albeit
small ones.
-
Third-party value added services
This topic was touched upon during the discussion of market issues
and during the talks. Ideas for possible third-party services included
licensing services, verification and security services, wrapper services,
software maintenance services, and intellectual property protection services.
Conceptual Issues & Barriers
After the discussion about the market issues, we then turned to broader
issues in the development of a component market that are not purely marketing
issues. (Some of the marketing issues were subsumed under these conceptual
issues.) We had originally intended to talk about barriers to the component
market as a separate discussion, but that discussion ended up merging with
conceptual issues, so they are listed here together.
-
Market Dynamics
We spent some time discussing the potential dynamics of a component
market. In particular, we looked at how the preferred granularity of a
software component might affect the market dynamics as a whole. The point
was raised that there may be room for both millions of microvendors and
an oligopoly of a few dominant vendors. The large vendors have the ability
to support complex and sophisticated software components that require a
lot of up-front effort and continued maintenance, while the microvendors
can more easily provide small, single-product components. There would be
economies of scale in between.
One participant mentioned that the size of the market may actually tip
the market towards the microvendors as the market may not be big enough
to support many large vendors. Those vendors that could be supported would
be highly specialized. An example might be travelocity.com.
Other participants observed that the size and character of vendors in
the market would probably be dependent on where in the software hierarchy
the component existed. This is an extension of how the market is structured
now (e.g., a few OS vendors but many application vendors and
even more web designers).
-
Consumer and Vendor Attitudes
Participants largely agreed that one of the biggest conceptual issues
to tackle (and one of the largest barriers) is the attitudes of both consumers
and vendors towards a component marketplace. For example, one participant
observed that while many online services market themselves as a place to
comparison shop, many of the services or goods that they advertise (e.g.,
airline tickets or car purchases) are produced by vendors who do not want
consumers to be able to easily comparison shop. These vendors will be reluctant
to support a technology that commoditizes their goods or services. In response
to that point, another participant pointed out that while that reluctance
certainly exists, it might be seen as a catalyst to the component marketplace
rather than an impediment as small startups will take advantage of the
reluctance of the big companies to enter the component market.
Another existing barrier is the lack of consumer confidence in the web
or other online technologies. This was touched upon during the talk by
Anup Ghosh (Reliable Software Technologies). Before a consumer marketplace
becomes large, consumers must feel confident and secure in their purchases,
so there is a great need for both consumer education as well as more reliable
and secure technology.
-
Architecture
One of the biggest conceptual (and marketing) issues to address is the
question of which architecture(s) will dominate. Participants debated over
whether a consistent architecture was, in fact, needed at all. Most participants
agreed that competing architectures created more market competition and
therefore more choice in components. One participant observed that a strong
component market was dependent on differing architectures to ensure that
the best components were developed. Another questioned whether the architectures
themselves could be considered components and therefore mixed and matched
just as component software products were expected to mix. An observation
was made, however, that differing architectures would eventually lead to
problems in scalability: the more complex a component becomes, the harder
it is to interface that component to competing architectures.
-
Legal Barriers
It was observed that one of the barriers to the component marketplace
is the lack of intellectual property protection for component designers.
Several issues, such as rights to derivative work, rights to source code,
protection from modification, and other related issues have not yet been
addressed by law.
Good quote: "Forget REUSE. Just aim for
USE." -- Marty Tenenbaum, CommerceNet
II-2 Component Model Representation
and Composition
Moderator: Umesh
Bellur, Oracle
Scribe: Kevin Sullivan, University
of Virginia
Topics: Component models expose information
about how a thing is constructed, which is needed for replacing parts and
for providing higher order services. What information should be exposed?
What is the nature of the glue? We are not
there yet with respect to mix-and-match, plug-and-play, or best-of-breed.
What are the technical reasons? One is that encapsulation hides implementations
(which components need to do) but component models expose some information
about how a thing is constructed, which is needed for replacing parts.
Component models are just now coming on the scene.
-
component models
-
What metadata do we need to collect to represent
components? Is this an open-ended collection of information, that
is, will we need other metadata for unforeseen purposes? How should
we represent the metadata? Are we assuming black-box assembly or
translucent components? How will this be done in Java, OMG, the web, ...
-
composition
-
Scripting. Scripting is the likely composition
glue for many purposes. What criteria do we use in selecting a good
scripting language? see recent OMG
Scripting RFP submissions.
-
Visual Assembly. How far can this take us?
important successes? are compound documents a variant of this?
-
algebras of composition
-
What are alternatives? self-assembly as when
agents might cooperate, ..., others?
-
binding
-
what are the main binding concepts and mechanisms?
tight versus loose coupling, static versus dynamic, events, detection,
notification, triggers, aperiodic, subscribe, push, pull, traders, mediation,
negotiation, wrappers, POA, interceptors, before-after methods, rules.
Do we need all of these to succeed? Do we need more? is there
a minimal set? is there a big book of bindings? how can we simplify
all this to make systems easier to build and maintain?
-
puzzles
-
Composing object services. Can we really compose
object services like persistence and queries just by knowing their APIs?
could you make a competitive OODB from naming, persistence, transactions,
queries, etc.? Could we remove one vendor's persistence service and
replace it with another?
-
what don't we know yet? should we learn from
agents? patterns? ...
Papers
-
paper070.html
- Guido van Rossum, Glue It All Together With Python, CNRI
-
paper061.html
- Jack Wileden (1), Alan Kaplan (2), Middleware as Underwear:
Toward a More Mature Approach to Compositional Software Development,
(1) Department of Computer Science, University of Massachusetts, (2) Department
of Computer Science, Clemson University
Discussion
This session focused on what components are, how they are changed, and
how they can be designed to ease the evolution of component-based systems.
It was proposed that we discuss what a component is in terms of a programming
model (how it is used in implementations), meta-information model
(how it is described in machine-readable form), and rules of composition.
It was emphasized that meta-information should be "reified" so as to be
machine-usable at runtime. It was also suggested that we consider
the issue in terms of the model of system evolution implicit in the definition
of a component.
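One reading of "reified" meta-information is that a component carries a machine-usable descriptor at runtime rather than only human-readable documentation. A minimal Java sketch under that reading, with invented names:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Machine-usable component descriptor: provided interfaces, dependencies,
// and quality-of-service claims, inspectable at runtime by tools or glue.
class ComponentDescriptor {
    final String name;
    final String version;
    final Map<String, String> provides = new LinkedHashMap<>(); // interface -> contract
    final Map<String, String> requires = new LinkedHashMap<>(); // dependency -> version range
    final Map<String, String> qos = new LinkedHashMap<>();      // ility -> claim

    ComponentDescriptor(String name, String version) {
        this.name = name;
        this.version = version;
    }
}

interface DescribedComponent {
    ComponentDescriptor descriptor();   // reified metadata, not just source comments
}

class OrderService implements DescribedComponent {
    public ComponentDescriptor descriptor() {
        ComponentDescriptor d = new ComponentDescriptor("OrderService", "1.2");
        d.provides.put("IOrderEntry", "idl:OrderEntry:1.0");
        d.requires.put("IPersistence", "[1.0,2.0)");
        d.qos.put("availability", "99.9%");   // a claim a certifier could test
        return d;
    }
}

public class IntrospectionDemo {
    public static void main(String[] args) {
        DescribedComponent c = new OrderService();
        ComponentDescriptor d = c.descriptor();
        // Composition glue can check dependencies before wiring components together.
        System.out.println(d.name + " " + d.version + " requires " + d.requires);
    }
}
```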
By way of scoping, it was also noted that composition is not always
merely a question of gluing components together; it requires reasoning
about system configurations. We distinguished between design and
execution time. We also asked whether substitutability fits under
the component model.
A significant part of the discussion focused on the question, What
is a component? Many useful definitions were offered. It was
clear that the question is one for which no single answer (other than
it depends) could suffice. The notion was offered that a component
is something that satisfies a given component standard, that there is no
one useful standard, that the purpose of a standard is to provide certain
defined assurances about the properties of components that actually
conform to the standard, and that the right question is, What assurances
do you want components, and the systems built from them, to have, and
what rules must a standard impose in order to ensure that such properties
are obtained?
Among the answers to the What is a component question
were the following: a reusable package of functionality; a unit of manufacture;
a unit of distribution; a unit of standardization; a reusable specification/implementation
pair (emphasizing the provision of a semantically rich specification along
with an implementation); and a unit of software obtained elsewhere (emphasizing
that a central purpose of component approaches is to enable construction
of systems from parts whose designs are not themselves controlled
by the system designer).
The discussion then turned to the categorization (in terms of programming
model, meta-model and composition model, primarily) of concrete aspects
of the modern concept of what a component is.
-
Programming Models
-
Properties
-
Constraints
-
Customization
-
Events/Dependencies
-
Aggregation
-
Introspection API - to be used by tools to display component meta information
-
Serialization
-
Versioning
-
Packaging - will be used by tools for packaging and installation of components
-
Declarative specification of concurrency control, persistence, transactions,
security etc.
-
Semantics
-
Meta information Models
-
Introspection
-
Component info
-
Properties, events, dependencies, constraints etc.
-
Quality of service information
-
Licensing and billing
-
Deployment information
-
Documentation
-
Resource utilization
-
Composition models
-
Substitutability and replaceability
-
Packaging
-
Connectivity using various gluing tools
-
scripting was discussed in a little detail but not much else.
Next we discussed the connection between components and objects.
Is every component an object? Is every object a component?
If not, then what is it that you have to add to an object to make it a
component? Finally, if a component has to do with packaging, then
what is it that’s inside the packaging—an object, or something else, e.g.,
something more complex than an object?
II-3 System-wide Properties
or Ilities
Moderator: Robert
Filman, Microelectronics and Computer Technology Corporation
Scribe: Diana
Lee, Microelectronics and Computer Technology Corporation
Topics: How does one achieve system-wide
properties when composing components? How can we separate application logic
from -ility implementation?
-
identify and define selected -ilities
-
Ilities are system-wide properties like performance,
affordability, usability, understandability, simplicity, stability, reliability,
availability, fault tolerance, scalability, evolvability, continuous evolution,
openness, seamlessness, flexibility, configurability, adaptability, security,
safety, trust, high confidence, information assurance, survivability, timeliness,
real-time, mobility, QoS, system monitoring. Is the number of ilities
open-ended?
-
Are some kinds of ilities harder to achieve than
others?
-
architectural properties
-
How to insert ilities into component software architectures?
Say you had a system doing something. How would it look different
if ility X was added or removed? Is there some kind of architectural
level where the boundary between the visibility/hiddenness of the ility
changes? What is needed in the architecture in order to add ilities?
what mechanisms might be aids?
-
Concepts for achieving system-wide ilities:
end-to-end composition, top-to-bottom composition, X-unaware vs. X-aware
components, centralized or de-centralized control, policies, boundaries
-
managing ility trade-offs
-
infrastructure for ilities
-
how to install ilities? is a common infrastructure
needed for some or all? metadata repository, policy manager, boundary
manager, ...?
Papers
-
paper080.ps
- Zhenyu Wang, Separate Application Logic from Architectural Concerns
-- Beyond Object Services and Frameworks, Department of Computer
Science, Carnegie Mellon University
-
paper046.doc
- Bob Filman, Achieving Ilities, Lockheed Martin Missiles
and Space and Microelectronics and Computer Technology Corporation
Discussion
The Problem: How does one achieve system-wide properties (also known
as: ilities, system qualities, qualities of service, and non-functional
requirements) when composing components? How can we separate application
logic from ility implementation and then weave them together to make a
complete system?
Proposals:
-
Specify software architecture templates, which are very generic with built-in
architectural functions and proven architectural properties. Develop application
logic independently and then unify it with these templates to produce a
complete system. Presentation
by Zhenyu Wang (see paper080.ps).
-
Intercept the communications between components and weave ilities into
the communication process. Security (access control, intrusion detection,
authentication, encryption), manageability (performance measurement, accounting,
failure analysis, intrusion detection, configuration management), reliability
(replication, transactions), and quality of service (soft-real-time priority
management) can be achieved using this approach. Presentation
by Robert Filman (see Paper046.doc). A minimal interception sketch
appears after this list.
-
Combine aspect-oriented components using a component weaver. The idea is
to transfer the concepts of aspect-separation from workflow management
to component-oriented systems. Presentation
by R. Schmidt. (see paper099.doc).
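The interception proposal can be illustrated with Java's dynamic proxies (a JDK facility that postdates this workshop; the names are invented and this is not any presenter's implementation): every call to a component passes through an interceptor that weaves in auditing, a stand-in access check, and timing, with the business logic untouched.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Account {
    void debit(int amount);
}

class PlainAccount implements Account {
    private int balance = 100;
    public void debit(int amount) {
        balance -= amount;
        System.out.println("balance=" + balance);
    }
}

// The interceptor weaves ility code into every call without touching
// the business logic in PlainAccount.
class IlityInterceptor implements InvocationHandler {
    private final Object target;
    IlityInterceptor(Object target) { this.target = target; }
    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        System.out.println("audit: " + m.getName());           // manageability
        if (System.getProperty("user.name", "").isEmpty()) {   // stand-in access check
            throw new SecurityException("denied");             // security
        }
        long t0 = System.nanoTime();
        Object result = m.invoke(target, args);
        System.out.println("took " + (System.nanoTime() - t0) + " ns"); // measurement
        return result;
    }
}

public class WeavingDemo {
    public static void main(String[] args) {
        Account raw = new PlainAccount();
        Account wrapped = (Account) Proxy.newProxyInstance(
                Account.class.getClassLoader(),
                new Class<?>[] { Account.class },
                new IlityInterceptor(raw));
        wrapped.debit(30);   // audited, access-checked, and timed transparently
    }
}
```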
Discussion:
What makes an ility?
Ilities have in common the property that they are not achievable by
any individual component; an ility cannot be implemented entirely
inside one component.
A Partial List of Ilities:
Reliability, security, manageability, administrability, evolvability,
flexibility, affordability, usability, understandability, availability,
scalability, performance, deployability, configurability, adaptability,
mobility, responsiveness, interoperability, maintainability, degradability,
durability, accessibility, accountability, accuracy, demonstrability, footprint,
simplicity, stability, fault tolerance, timeliness, schedulability.
See also Workshop
Topics - Ilities.
Which ilities can be achieved in a component system? What makes some
harder to achieve than others?
There are some properties of ilities that impact how easily the ility
is achieved in a composed system. For example, a security policy for mandatory
access-control (which is transitive) is easy to compose. However, security
based on discretionary access control uses a user or group id (which is
not transitive) and is more difficult.
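A hedged sketch of why transitivity matters (invented names): with lattice-style mandatory labels, hop-by-hop checks compose into a guarantee for a whole chain of components, while per-user discretionary ACLs admit no analogous transitive argument.

```java
// Mandatory access control with ordered sensitivity levels: the "dominates"
// relation is transitive, so checking each hop of a pipeline suffices.
enum Level { PUBLIC, CONFIDENTIAL, SECRET }

class LabeledComponent {
    final String name;
    final Level clearance;
    LabeledComponent(String name, Level clearance) {
        this.name = name;
        this.clearance = clearance;
    }
    boolean mayReceiveFrom(LabeledComponent source) {
        // The receiver must dominate the source's level (no leaks downward).
        return clearance.ordinal() >= source.clearance.ordinal();
    }
}

public class MacComposition {
    public static void main(String[] args) {
        LabeledComponent sensor = new LabeledComponent("sensor", Level.PUBLIC);
        LabeledComponent fuser = new LabeledComponent("fuser", Level.CONFIDENTIAL);
        LabeledComponent planner = new LabeledComponent("planner", Level.SECRET);
        // Hop-by-hop checks compose: if each adjacent pair is legal, the chain is.
        System.out.println(fuser.mayReceiveFrom(sensor));    // true
        System.out.println(planner.mayReceiveFrom(fuser));   // true
        // With discretionary per-user ACLs there is no analogous transitive
        // argument: "Alice may call B" and "B may call C" says nothing about
        // whether C's owner intended Alice's request to reach C.
    }
}
```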
Composability of policy, where implementation and system architecture
are dependent on one another, will also determine whether ilities are composable.
How do we go from the customer’s high-level description of an ility
to specifications that can be mapped to code? Are we really achieving the
ilities? For example, how do we go from a requirement that a system be "secure"
to a code specification for 64-bit encryption? Is this 64-bit encryption
really what is meant by "security"?
Conclusions:
-
There is no easy mapping from the high-level description of an ility
to particular algorithm or service specification.
-
This mapping is dynamic. 64-bit encryption may be secure today, but
a new specification might be needed tomorrow. Architectures that support
ilities must allow for this dynamic nature (see the sketch after this list).
-
Some ilities are context-dependent: it takes context to determine
whether they are fulfilled. Examples include "understandability" or "usability."
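The dynamic-mapping conclusion suggests a level of indirection: application code names the abstract ility and a replaceable policy resolves it to today's mechanism. A minimal sketch under that assumption (all names invented):

```java
import java.util.HashMap;
import java.util.Map;

// Application code names the abstract requirement ("secure"); the policy
// table maps it to a concrete mechanism that can be rebound without touching
// application logic when 64-bit keys stop being enough.
interface Mechanism {
    String apply(String payload);
}

class PolicyRegistry {
    private final Map<String, Mechanism> current = new HashMap<>();
    void bind(String ility, Mechanism m) { current.put(ility, m); }
    Mechanism resolve(String ility) {
        Mechanism m = current.get(ility);
        if (m == null) throw new IllegalStateException("no mechanism for " + ility);
        return m;
    }
}

public class PolicyIndirection {
    public static void main(String[] args) {
        PolicyRegistry policy = new PolicyRegistry();
        policy.bind("secure", p -> "[64-bit-encrypted]" + p);   // today's mapping
        System.out.println(policy.resolve("secure").apply("order#7"));

        policy.bind("secure", p -> "[128-bit-encrypted]" + p);  // tomorrow's remapping
        System.out.println(policy.resolve("secure").apply("order#7"));
    }
}
```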
Managing compatibility:
-
Ility Interactions: Some ilities will interact synergistically while
other combinations of ilities will have no impact on each other. Still
other combinations of ilities will be incompatible (one might have to make
trade-offs in reliability or accuracy in order to gain performance). On
a pair-wise basis, it is possible to study the impact of one ility on another.
However, groups of ilities will be a mess of trade-offs.
-
"Model Mismatches": each component may have its own concept and
model of services that support ilities; these models may not be compatible
from one component to the next. For example, CORBA and a DBMS each have their
own model of access control, and the two may be incompatible. Why?
-
Components are not dumb: each may have its own model of different
services to achieve ilities
-
Autonomy: vendors like having their own approaches
-
Policy Boundaries: policies regarding ility requirements
may be made at many levels (such as the corporate level versus the department
level). These policies may be incompatible or involve model mismatches.
Can we support hard real-time Quality of Service?
There seem to be two communities interested in hard real time:
-
Aerospace: an environment in which the architects control everything
and resources are shared
-
Telecommunications: an environment in which architects may depend on interoperability
with someone else’s resources. Resources are not shared; they are reserved.
Enabling hard real time: Tools such as Doug Schmidt’s TAO and OMG’s
Real Time RFP. There is a need to guarantee that components obey
certain rules that allow for things such as scheduling of services and
resources.
Some Other Related Papers:
-
Paper094.ps
- (1) Lawrence Chung, (2) Eric Yu, Achieving System-Wide Architectural
Qualities, (1) The University of Texas at Dallas, (2) University of Toronto:
Ilities are constraints on the architectural design space. Propose a process-oriented
approach to system design to achieve ilities.
-
Paper081.ps
- Nalini Venkatasubramanian (1), Gul Agha (1), Carolyn Talcott (2), Composable
QoS-Based Distributed Resource Management, (1) University of Illinois at
Urbana-Champaign, (2) Stanford University : Two-level model of distribution
where meta-level controls placement, scheduling and management.
-
Paper037.ps
- Aloysius Mok, The Objectization Problem - What It Is and What Can be
Done About It, Department of Computer Science, University of Texas at Austin:
Illustrates conflicts among ilities in architectural decomposition.
-
Paper023.ps
– Gul Agha, Compositional Development from Reusable Components Requires
Connectors for Managing Both Protocols and Resources, University of Illinois
at Urbana-Champaign: Component connectors need to enforce properties of
components.
-
Paper030.html
- Robert Balzer, Program Managers: An Infrastructure for Program Integration,
USC/ISI : Capture actions to add ilities.
-
Paper039.txt
- Clemens Szyperski, Rudi Vernik, Establishing System-Wide Properties of
Component-Based Systems: A Case for Tiered Component Frameworks, (1) School
of Computing Science, Queensland University of Technology, Brisbane, Australia,
(2) Software Engineering Group, Defence Science and Technology Organisation,
Salisbury, South Australia: proposes tiered component frameworks.
II-4 How do these fit in?
Moderator: Gio
Wiederhold, Stanford
Scribe: Craig
Thompson, OBJS
Topics: Few position papers covered
the following topics directly but they are challenging, and we'll need
to understand them to fully understand architectures and component technology.
-
optimization versus modularity - there is
a tradeoff, how can we have the best of both worlds
-
views - different views of views: DBMS
views, different views or abstractions of an architecture, multiple interfaces
-
managing inconsistency - how can systems operate
with data that is incomplete, inconsistent, or of bounded accuracy; how
can you make use of and propagate such information while keeping track
of information pedigree and quality? are there architectural patterns for
adding this ability to a system?
-
relation of wire protocols, APIs and grammars
- IETF favors protocols, OMG APIs. What is the relationship between
APIs and protocols? when should you use one and when the other?
-
agents - if we view objects as degenerate
agents that are less mobile, less intelligent, and less autonomous, and
view agents as needing similar infrastructure for security, replication, and the
ilities, then can that point of view help unify the object and agent technical
stories, or is there good reason to keep them separate?
Papers
-
paper037.ps
- Aloysius Mok, The Objectization Problem - What It Is and What Can
be Done About It, Department of Computer Science, University of
Texas at Austin
Discussion
We only discussed the first two topics listed above:
modularity and views.
Aloysius Mok - The Objectization Problem
The presentation by Al Mok covered what he called
the objectization problem: objects effectively set up encapsulation
boundaries, but different ilities trade off differently, which can often
lead to different object decompositions, so that there is not always one
best decomposition, or object factoring.
Mok described a problem in real-time scheduling of Boeing 777 AIMS data,
which includes process, rate, duration, and latency bounds. There were
155 application tasks and 950 communication tasks. The hardware architecture
is a bus plus 4 task processors and a spare, 4 I/O processors and a spare.
Such systems are built using repetitive Cyclic Executive processes. The
FAA likes this proven technique for building certified systems. Process
timings are rounded off. The problem is represented as inequalities and
many disjunctions representing many scenarios or choices. A paper
in the RTSS'96 proceedings describes the work. We noted similar problems
in other domains: In A7 aircraft, every unit is different and can
require constant reprogramming. Bill Janssen mentioned that the largest
Xerox copier has 23 processors and 3 ethernets with many paper paths to
coordinate.
Why show this? You can't solve these sorts of problems by adding more
processors. To add a new function to the above system, we need to recompute
schedules and possibly even rework the hardware architecture. We want to
make it much easier to build, maintain, and validate such systems! Object
technology is attractive because of the small address spaces per object,
with limited interactions walled off by encapsulation and methods. But
tasks participate in ways other than sending messages; they share resources
to ensure performance.
Mok's work indicates that requirements precede design and are not themselves
objectified, though design is. There seems to be a different view
for each ility.
Mok showed a use of objects as a unit for resource allocation, integrity
management, and task requirements capture. These three roles require different
granularities. We need the system to provide these views and automate consistency
maintenance among them. The three types of ilities conflict: resource vs.
task, integrity vs. task, resource vs. integrity.
Favoring different ilities leads to different object decomposition schemes.
One scheme is more amenable to incremental modification, another is more
resource efficient, another is more robust against object failures. There
is no best scheme due to tradeoffs. Once one objectifies a certain
solution, a change in system decomposition is needed to optimize for
another purpose. A conclusion is that OO technology
makes choices up front regarding ility tradeoffs. Sometimes we take objects
too seriously.
Discussion followed the talk.
We seem to agree that the right way to view a system is from the vantage
of multiple views (aspects), not a single view. The power is being
able to separate the views when you want to. A key advantage is separation
of concerns. This leads to a series of puzzles:
-
how to keep the views separate
-
how to generate code from the views
-
how to update the views or make changes (and so affect the code)
-
how to add new views
-
how to keep the code consistent with the view
-
how to effect tradeoffs
In practice today, much of this is manual but it can be automated by UML-like
notations, compiler-like technology, and other mechanisms. One of
the promises of middleware today is to use object wrappers around legacy
code to turn legacy code into an object. There are several problems:
it is difficult to make sure there are no interactions, internal modularity
that may be present may not be exposed, it is difficult to know the metadata
(component properties) of the opaque legacy code if that code was not built
to expose such information (which it was not, almost by definition), and
the legacy code does not make or expose explicit guarantees. Still, there
can be value in walling off in a standard way the parts of the system,
even if only to provide a place to hang metadata.
We are searching for ways to glue parts together that will result in
systems that are easy to maintain, more fault tolerant, etc. But the search
will likely require going beyond a black box components view of compositional
middleware to say more about aspects of the glue, that is, treating the
connectors and constraints as first class so we can reason about them.
Today this gluing is mostly done with scripting languages.
Applying this to middleware, what concrete things do we need to do to augment
the OMG OMA architecture? Add ilities via implicit invocation. The end
result is a pretty tightly coupled system. One comment was to treat views
as a constraint satisfaction system, and let the ultimate compiler put
them together. Another comment was that reflection is important.
Connectors are meta objects. An architecture is more than nodes and links;
it should also be reflective and connectors have state. Andrew Barry mentioned
he is working on a connector language where connectors enforce these constraints
and might reify the events into methods.
We discussed static versus dynamic aspects of systems. It is easier
to build static systems than dynamic ones, but the latter is the goal.
In the firm real time approach, you budget everything you do and know where
you are in the budget. Failure in hard real-time systems does not mean
the whole system collapses but that some constraints are missed. You want
to insure traceability. If you have done analysis at compile time, you
don't have to make decisions at run time.
But in several kinds of systems, we also need to accommodate evolution.
In the automobile industry, design lifetime is relatively well understood.
Car designers put in well-defined change points for upgrading to
the next model year. Malleability should be a design view. You construct
your software so what must be changed should be visible. This is predicated
on the assumption that subsystems will be stable. Gio Wiederhold noted
that universities do not teach design for maintenance, yet maintenance is
70% of the military budget and 85% of industry's. Thompson referenced his point
made in session I-1 that system design protects
against known and foreseen requirements, but some unforeseen ones
can cause radical redesign. One hope is that modularizing a system's
functionality and ility aspects can often make future systems more evolvable
and adaptable.
III-1 Scaling component software
architectures
Moderator: Stephen
Milligan, GTE/BBN Technologies
Scribe: John
Sebes, Trusted Information Systems
Topics: What family of design patterns
and mechanisms can be used to scale component software? What
will break as we move from ORBs in LANs to ORBs on 40M desktops?
We'll have to replicate services and connect 1000's of data sources, cache
at intermediate nodes, federate services, federate schemas, and worry about
end-to-end QoS. What can we say about federation, decentralized or
autonomous control, metadata, caching, traders, repositories, negotiation,
etc. in this environment? And don't forget that when we scale across
organization boundaries, we have security and firewalls to contend with.
Papers
-
paper107.doc
- E. John Sebes, Terry C. Vickers Benzel, Collaborative Computing,
Virtual Enterprises, and Component Software, Trusted Information
Systems, Inc.
-
paper041.html
- Venu Vasudevan, Trading-Based Composition for Component-Based Systems,
Object Services and Consulting, Inc.
Discussion
We started by having each person describe their definition of and interest
in scalability, which included design scalability, operational scalability,
number of components (as software grows in the maintenance cycle), amount
of capacity (as number of transactions grow at runtime), control scalability,
network management, and survivability.
There were two presentations.
John Sebes - Collaborative Computing, Virtual
Enterprises, and Component Software
John Sebes addressed the combination of security and scalability in
the context of distributed applications operating between multiple enterprises.
He asserted that the scaling factor of number of enterprises has a significant
potential for breaking other system properties. He described a technique
for cross-enterprise distributed object security, and asserted that component
software techniques (especially composability) are required for ordinary
mortal programmers to be able to integrate distributed security functions
into applications that require them. Scale factors include: number of applications
with security requirements, number of users with access privileges to applications,
number of rules relating users and privileges to applications, and number
of system elements (application servers, firewalls, etc.) that must enforce
security constraints.
In discussion, various respondents described related scalability factors:
-
number of transactions
-
number of objects to be controlled by security mechanisms
-
number of interfaces [to objects], and size of
interfaces (number of methods, number of parameters of methods) that could
compose a very large vocabulary for stating application-specific security
constraints
-
implementation complexity and resource usage of security mechanisms themselves:
as other scale factors grow, a security mechanism's computational or resource
requirements might grow to the point where the security mechanism can't
work effectively
-
number of users, single sign-on, certificate-based authentication, scaling
problems of public key infrastructure
Electronic commerce was identified as an area of new software development
where rapid scale-up will occur early in the software lifecycle.
Pitfalls include bad individual components, bad
integration of components: you can break the system at any level.
We also discussed that in scaling a component, one must also address the
interrelationships (reuse) of that component by other components, and the
effect scaling will have on components using the scaled component, resource
utilization requirements, and the impact on the configuration of the system.
There is a general issue of whether components themselves should be responsible
for scaling.
Venu Vasudevan, Trading-Based Composition for
Component-Based Systems
Venu Vasudevan addressed scalability in the context
of dynamic service location: scalability of number of service instances,
where there are multiple instances in order to provide service distribution,
availability and reliability. His example was annotation of World
Wide Web documents. Service discovery in this example is discovering
services that store and provide annotations to URLs. Multiple repositories
might publish information about the annotations that each has, so that
other repositories and/or users can find them. Federations of annotation
repositories would be related by a "trading service" (in the CORBA sense
of the term, which itself is based on the trader
standard from Reference Model for Open Distributed Processing (RM-ODP))
that allows users to find annotations throughout the federation of annotation
services. There are multiple possible approaches to implementing
such a trader. Some solutions are so simple as to be scalable, but
also not useful in large-scale systems. For example, a pure registry-based
trader is stateless (and hence does not have to store progressively larger
state as transaction scale grows) but can't refine previous request/results
because it doesn't remember them.
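A hedged sketch of the stateless, registry-style trader just described (illustrative names only, not the CORBA Trading Service API): services export offers with properties, clients query by constraint, and because the trader retains nothing between queries it scales easily but cannot refine earlier results.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// A service offer: a type name, a reference, and searchable properties.
class Offer {
    final String serviceType;
    final String reference;               // e.g., an IOR or URL in a real system
    final Map<String, String> properties;
    Offer(String type, String ref, Map<String, String> props) {
        serviceType = type; reference = ref; properties = props;
    }
}

// Stateless registry trader: export and query, nothing remembered between calls.
class RegistryTrader {
    private final List<Offer> offers = new ArrayList<>();
    void export(Offer o) { offers.add(o); }
    List<Offer> query(String type, Predicate<Map<String, String>> constraint) {
        List<Offer> result = new ArrayList<>();
        for (Offer o : offers)
            if (o.serviceType.equals(type) && constraint.test(o.properties))
                result.add(o);
        return result;   // no query state kept, so no later refinement possible
    }
}

public class TraderDemo {
    public static void main(String[] args) {
        RegistryTrader trader = new RegistryTrader();
        trader.export(new Offer("AnnotationRepository", "iiop://hostA/annot",
                Map.of("region", "public", "capacity", "high")));
        trader.export(new Offer("AnnotationRepository", "iiop://hostB/annot",
                Map.of("region", "intranet", "capacity", "low")));
        List<Offer> hits = trader.query("AnnotationRepository",
                p -> "public".equals(p.get("region")));
        hits.forEach(o -> System.out.println(o.reference));
    }
}
```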
Discussion followed.
Object composition/delegation was also identified as a scale factor
in terms of performance. If an object is composed of multiple other first-class
objects, this is inherently higher overhead than an object being composed
of components that are "mixed-in" to form the object, but which are not
objects themselves. Hence, component composition (in one object, rather
than componentified objects calling on componentified objects) may be a
promising way to increase the scale of software reuse without getting an
exponential growth of objects.
We also discussed scaling issues related to federation.
One sense of federation was explored in more detail than some others --
that of linking together organizations/domains in ways so that resources
are shared while still retaining the autonomy of the domains (without
relinquishing control over the resources). A definition was "A community
of autonomous, freely cooperating sub-communities. Within a federation,
one sub-community is not forced to perform an activity for another. Each
community in a federation determines resources it will share with its peers,
resources offered by its peers, how to combine information from other communities
with its own, and resources it will keep private." Autonomous control implies
fairly fine-grained control over what is shared and what is not, and with
whom, e.g., dynamically enabling a domain to offer resources for sharing,
and to relinquish those resources from sharing, as well as to enter or
leave the federation. Federations allow interfacing for resource
sharing among organizations in a way that slows the combinatorial
explosion of the N-squared bilateral relationships that would be
needed without federating (a more brittle, complex system of systems).
The work currently being done by the Defense Modeling and Simulation Office
(DMSO) on their High Level Architecture addresses federation among environments
of models; the work of NIIIP addresses federation among Virtual Enterprises;
the work of the Advanced Logistics Program on clusters addresses federation
of logistics domains.
One of the challenges of trading/brokering/warehousing
is the desire to centralize (or to create central repositories) in order
to "scale up" the amount of data that can be consistently stored and analyzed.
However, this should be virtualized and the ability to migrate (one more
ility) from a central repository to a distributed repository should be
transparent to the accessing component. When composing components, you
don't want to be faced with the choice of relying on centralized control
or implementing scalable consistency. Perhaps some part of the glue
between components can provide some of the scalability so that component
composers don't have to.
Metadata is critical to writing evolvable/scalable components. A component
should describe what it wants to use (from other components) rather
than referring directly to a specific provider. This seems counter-intuitive (why
would I look up my friend's phone number every time I call?) but it is an
isolation principle that is needed as systems get larger and change. Connectors
and their resultant interfacing mechanisms, along with the metadata about
components, are the key to enabling scalability. But in order to do this effectively,
the connectors between components must be defined well enough to include
not only syntax, parameters, naming, but also semantics, reusability, side-effects,
and other components upon which they are dependent. Further, connectors
must be efficient enough to find what is described very quickly in the
99 times out of 100 when the answer is the same as before. Hence, component
connectors seem to be critical in terms of making components general enough
to be scalable. Here we arrive at the idea of connectors as first
class objects. Caching and related techniques/issues (consistency cache
invalidation etc.) then become critical infrastructure parts of the glue
between components.
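The describe-then-resolve principle and the caching observation can be combined in one sketch, assuming a registry that exposes a change counter (all names invented): each call is logically a fresh lookup by description, but a memoized binding answers the common case where nothing has changed.

```java
import java.util.HashMap;
import java.util.Map;

// Resolves a capability description (not a hard-coded address) to a provider.
interface Registry {
    String lookup(String description);   // e.g., "printer; color; building-5"
    long version();                      // bumped whenever registrations change
}

// The connector: logically "look up my friend's number every time", but a
// cached binding answers the 99-in-100 case where nothing has changed.
class CachingConnector {
    private final Registry registry;
    private final Map<String, String> cache = new HashMap<>();
    private long cachedVersion = -1;

    CachingConnector(Registry registry) { this.registry = registry; }

    String resolve(String description) {
        if (registry.version() != cachedVersion) {   // invalidate on change
            cache.clear();
            cachedVersion = registry.version();
        }
        return cache.computeIfAbsent(description, registry::lookup);
    }
}

public class ConnectorDemo {
    public static void main(String[] args) {
        Map<String, String> table = new HashMap<>();
        table.put("printer; color", "ipp://bldg5/laser3");
        long[] ver = {1};
        Registry reg = new Registry() {
            public String lookup(String d) { return table.get(d); }
            public long version() { return ver[0]; }
        };
        CachingConnector conn = new CachingConnector(reg);
        System.out.println(conn.resolve("printer; color")); // miss, then cached
        System.out.println(conn.resolve("printer; color")); // served from cache
    }
}
```

The cache is exactly the kind of glue envisioned above: the component keeps the isolation benefit of late lookup while paying the lookup cost only when the registry actually changes.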
III-2 Adaptivity of component
software architectures
Moderator: Bob
Balzer, USC/ISI
Scribe: Kevin
Sullivan, University of Virginia
Topics: Is there a common component
software framework that allows us to build software that is
-
adaptable, dynamically configurable
-
tailorable, customizable
-
evolvable
-
secure
-
survivable
-
able to degrade gracefully
Papers
-
paper014.html
- David Wells, David Langworthy, Survivability in Object Services
Architectures, Object Services and Consulting, Inc.
-
paper034.ps
- L.E. Moser, P.M. Melliar-Smith, P. Narasimhan, V. Kalogeraki, L. Tewksbury,
The Eternal System, Department of Electrical and Computer
Engineering, University of California, Santa Barbara
Discussion
There were two presentations:
David Wells - Survivability in Object
Services Architectures
The project objective is to make OSA applications
far more robust than currently possible in order to survive software, hardware,
and network failures, enable physical and logical program reorganization,
support graceful degradation of application capabilities, and mediate amongst
competing resource demands. Survivability is added, not built in,
requiring minimal modifications to OSA application development and execution,
because survivability is orthogonal to conventional OSA application semantics.
Louise Moser - The Eternal System
The Eternal System, based on CORBA, exploits replication to build systems
that are dependable, adaptable and evolvable. Consistency of replicated
objects is maintained with multicast messages. All of the processors
receive the multicast messages in the same total order and perform the
same operations on the replicas in the same total order. The replicas
of an object are presented as a single object to the application programmer.
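The consistency argument is purely about ordering: if every replica applies the same deterministic operations in the same total order, the replicas remain identical. A minimal illustration of that invariant (a single sequencer stands in for the totally ordered multicast; this is not the Eternal implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// A replica is just deterministic state plus an apply() method.
class Replica {
    private int state = 0;
    void apply(UnaryOperator<Integer> op) { state = op.apply(state); }
    int state() { return state; }
}

// Stand-in for totally ordered multicast: one sequencer hands every
// operation to every replica in the same order.
class Sequencer {
    private final List<Replica> replicas = new ArrayList<>();
    void join(Replica r) { replicas.add(r); }
    void multicast(UnaryOperator<Integer> op) {
        for (Replica r : replicas) r.apply(op);   // same op, same order, everywhere
    }
}

public class TotalOrderDemo {
    public static void main(String[] args) {
        Sequencer seq = new Sequencer();
        Replica a = new Replica(), b = new Replica();
        seq.join(a); seq.join(b);
        seq.multicast(s -> s + 5);
        seq.multicast(s -> s * 2);
        // Determinism + identical order => identical state on all replicas.
        System.out.println(a.state() + " == " + b.state());   // 10 == 10
    }
}
```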
Discussion followed.
Discussion centered on identifying and characterizing various mechanisms
and approaches to adapting systems. By adaptivity in this
context we meant changing systems in ways that were not anticipated by
their designers, and for which the right "hooks" are not present in the
design.
The discussion focused on changes involving augmentation of systems
with desired non-functional properties (ilities). The approach
that the group took was roughly analogous to the taxonomic style of the
Object-Oriented Design Patterns work of Gamma et al. More
specifically, the group identified different adaptation mechanisms, and
then, for each, developed a description of it by giving it a name; referring
to one or more systems in which it is used; identifying the basic technique
involved; and listing key benefits and limitations. The following
mechanisms were identified during the session. Some mechanisms are
special cases of others.
Mechanism: Callback
Instance: Message/Event Handlers
Technique: User supplied handler
Benefits: Late binding
Limitations: Not composable or nestable; Long user callbacks
can starve the event loop
Mechanism: Type Extender
Instance: OLE DB
Technique: Enriched set of interfaces for supplied component
Benefits: Expanded standardized interface for alternative
implementations; Existing interfaces pass through
Limitations: Fixed extension
Mechanism: Binary Transformations
Instance: EEL from Wisconsin, Purify, Quantify
Technique: Rewriting
Benefits: Source not needed; Fault isolation guarantees
(Sandbox)
Limitations: Low level; Representation not adaptable (composability
hard)
Mechanism: Source Transformations
Instance: Refine, Polyspin
Technique: Rewrite
Benefits: Composable; Application level behavior mix-ins;
Simpler analysis
Limitations: Requires access to source; Confounds debugging
Mechanism: Target Intermediation
Instance: Proxy, Wrapper
Technique:
Benefits:
Limitations:
Mechanism: Communication Intermediation
Instance: Firewall, Proxy Server, CORBA interceptors
Technique: Communication Mediator
Benefits: Client and server unchanged; Overhead amortized
in high cost operation; Facilitates playback
Limitations: Only applies to explicit communications channel.
Mechanism: Aspect Oriented Programming
Instance: AspectJ
Technique: Weaver (transformation)
Benefits: Maintain modularity; Separate aspect specification
Limitations: Possible interactions between aspects; Complex
interactions debugging
Mechanism: Instrumented Connector
Instance: PowerPoint architecture editor, virtual file
system
Technique: Shared Library (DLL) mediator
Benefits: Finer granularity (by called, by interface)
Limitations: Platform dependent; Composability is difficult
Mechanism: Reified Communication
Instance: Higher Order Connectors; ORB Interceptors
Technique: Connector modification
Benefits: Locus is independent of participants; Mediation
at application level
Limitations:
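To ground the catalog, here is a minimal sketch of the first mechanism, Callback (invented names): the loop owner binds behavior late by accepting a user-supplied handler, and the listed limitation is visible in the code, since a long-running handler blocks every event queued behind it.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// Mechanism: Callback. The loop owner binds behavior late by accepting a
// user-supplied handler instead of hard-coding a response to each event.
class EventLoop {
    private final Queue<String> events = new ArrayDeque<>();
    private final Consumer<String> handler;
    EventLoop(Consumer<String> handler) { this.handler = handler; }
    void post(String event) { events.add(event); }
    void run() {
        while (!events.isEmpty()) {
            // Limitation from the table: the loop is single-threaded, so a
            // long user callback starves every event queued behind it.
            handler.accept(events.poll());
        }
    }
}

public class CallbackDemo {
    public static void main(String[] args) {
        EventLoop loop = new EventLoop(e -> System.out.println("handled " + e));
        loop.post("mouse-click");
        loop.post("key-press");
        loop.run();
    }
}
```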
The group found the discussion useful enough that it was decided to
continue the effort after the workshop. To that end, we decided to
develop a World Wide Web repository of these adaptation patterns.
See http://www.ics.uci.edu/~peymano/adaptation/
(contact: Peyman Oreizy <peymano@ics.uci.edu>) which contains
more detail on the above mechanisms. You are invited to contribute
to the discussion represented at that site.
III-3 Quality of Service
Moderator: Richard
Schantz, BBN
Scribe: Joseph
Loyall, BBN Technologies
Topics: What are the main concepts?
How do you insert QoS into an environment?
-
The broad definition of QoS is the same as the ilities;
the narrow definition is limited to network bandwidth.
-
What are the main concepts? timeliness, precision,
accuracy, resource limited situation, policy, reservations, scheduling,
trading, control, region-based QoS management, ...
-
Tradeoffs if applications are QoS-unaware versus
QoS-aware. Do QoS wrappers help make a QoS-unaware component QoS-aware?
-
Relation to other ilities like security and system
management. Are the same mechanisms useful?
Papers
-
paper081.ps
- Nalini Venkatasubramanian (1), Gul Agha (1), Carolyn Talcott (2), Composable
QoS-Based Distributed Resource Management, (1) University of Illinois
at Urbana-Champaign, (2) Stanford University
-
paper034.ps
- L.E. Moser, P.M. Melliar-Smith, P. Narasimhan, V. Kalogeraki, L. Tewksbury,
The Eternal System, Department of Electrical and Computer
Engineering, University of California, Santa Barbara
-
paper099.doc
- Richard Schantz, Distributed Objects with Quality of Service,
BBN
Discussion
This breakout session focused on Quality of Service (QoS) as an organizing
concept for integrated resource management, especially as it relates to
development by composition.
The session had four presentations:
Gul Agha - Composable QoS-Based Distributed Resource Management
Gul presented a research direction for addressing the composition and
management of QoS policies and mechanisms. His idea proposes enforcement
of QoS separate from application functionality and enforcement of QoS mechanisms
separate from one another. After presenting an overview of the Actor model,
a formal model of distributed objects, he proposed several ideas for managing
and enforcing QoS using actors, including connectors (objects representing
component interfaces); a two-level actor model for managing and enforcing
QoS, including base level actors that are simply functional objects
and meta level actors that watch system activities and resources;
a set of core resource management services, i.e., basic system services,
chosen by looking at patterns of system activities, where interactions
between an application and its system can occur; and QoS brokers
for coordinating multiple ilities.
Gul proposed that the set of services be made components themselves
with a compliance model. Then the actor model can be used to formally prove
properties, such as liveness. It would also enable services to be reused
and composed in managed ways.
Peter Krupp - Real-time ORB services in AWACS
Peter is co-chair of the OMG Real-time SIG. His talk discussed the work
that is in progress to include QoS in CORBA. This work came out of work
in evolvable real-time systems being developed for AWACS 4-5 years ago.
In the last year of that project, the team decided to use CORBA, but they
needed scalability, predictability, real-time response, and fault-tolerance.
OMG currently is developing a specification for real-time CORBA and is
soliciting a real-time ORB, i.e., one that does not get in the way of a
real-time operating system, has predictable QoS, real-time performance,
and fault-tolerance. In addition, OMG wants an ORB with an open middleware
architecture that is customizable.
Peter described some current real-time ORB work. A real-time ORB has
been developed for the AWACS program, but when everything was put together,
the database became a bottleneck. MITRE, the University of Rhode Island
(Victor Wolf), Sun, TINA, and Washington University (Doug Schmidt) have
been doing work in the area of real-time ORBs.
Michael Melliar-Smith - Fault-tolerance in the Eternal System
The objective of the Eternal system is to provide fault-tolerance with
replicated objects without using a custom ORB. Eternal uses a replication
manager sitting on the IIOP interface, soft real-time scheduling, and extensions
to QuO's QDL languages. The replication manager is a set of CORBA objects,
with a profiler that feeds it information about replicated objects. Eternal
tries to hide replication, distribution, consistency, and handling of faults.
Rick Schantz - QoS descriptions and contracts in QuO
QuO is a framework for incorporating QoS in distributed applications.
QuO provides a set of quality description languages (QDL) for describing
possible regions of desired and actual QoS, mechanisms for monitoring and
controlling QoS, and alternate behaviors for adapting to changing regions
of QoS. QDL can be used to describe aspects of an application's
QoS and a code generator (similar to the weaver in aspect-oriented
programming) creates a single application from all the description files
and application code.
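A hedged illustration of the contract idea in Java (invented names; QuO's actual QDL is a separate description language, and this is not it): desired and actual QoS are captured as regions, and a transition between regions selects an alternate behavior.

```java
// Regions of actual QoS, in the spirit of a QuO-style contract.
enum Region { NORMAL, DEGRADED, UNUSABLE }

class BandwidthContract {
    private Region current = Region.NORMAL;

    // Classify a measurement into a region; report transitions to the caller.
    Region observe(double kbps) {
        Region next = kbps >= 200 ? Region.NORMAL
                    : kbps >= 50  ? Region.DEGRADED
                    :               Region.UNUSABLE;
        if (next != current) {
            System.out.println("region change: " + current + " -> " + next);
            current = next;
        }
        return current;
    }
}

public class QosContractDemo {
    public static void main(String[] args) {
        BandwidthContract contract = new BandwidthContract();
        double[] samples = {450, 120, 30, 260};
        for (double s : samples) {
            // Alternate behaviors keyed to the region, e.g., lower image
            // fidelity when DEGRADED and fall back to text when UNUSABLE.
            switch (contract.observe(s)) {
                case NORMAL:   System.out.println("full-fidelity delivery"); break;
                case DEGRADED: System.out.println("compress images");        break;
                case UNUSABLE: System.out.println("text-only fallback");     break;
            }
        }
    }
}
```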
The moderator posed the following questions to the group for discussion:
What is and isn't meant by QoS? What do we mean to cover with the
term? Is it defined broadly or narrowly? What is the relation of QoS to
other ilities like security and system management: same, different, integrated?
We discussed whether QoS should be defined in the traditional network-centric,
narrow way as network throughput and bandwidth; or if it should be defined
in the broader sense, including QoS from the network level up to the application
level. As a group, we unanimously (or nearly so) agreed that QoS should
be defined to include "ilities", as well as network capacity. Thus, QoS
includes security, timeliness, accuracy, precision, availability, dependability,
survivability, etc.
Security and system management needs change over the lifecycle of systems,
and coordinating these changes is a part of providing QoS in the system.
Specifically, mediating the need for varying degrees and levels of security
at different times and in different situations is analogous to providing,
negotiating, and adapting to changing bandwidth needs in a system.
What are useful concepts toward introducing, achieving, integrating,
and composing QoS?
The moderator offered the following candidate concepts: adaptation,
reservation, scheduling, trading, control,
policy, region-based, specification, components,
abstractions, and aspects. Gul's talk stressed that composition
of QoS is necessary since more than one "ility" might be needed in an application.
It also stressed formal analysis of QoS mechanisms, since some might
interfere. The AWACS work relies on QoS being enforceable. The QuO
work, however, doesn't rely on QoS being enforced, but relies on adaptability,
i.e., the ability of mechanisms, objects, ORBs, managers, and applications
to adapt to changes in QoS.
How is QoS introduced into an environment and into which environments?
How aware should client applications be of QoS: unaware, awareness
without pain, immersion? How do we specify or measure it? Is there a common
framework? Mechanisms vs. Policies vs. Tools vs. Interfaces vs. Application
specific solutions?
Several of the speakers and participants mentioned specific examples
of ways in which QoS had been inserted into applications. AWACS embedded
QoS into the application and environment, i.e., the application provided
interfaces for specifying QoS parameters while the mechanisms and enforcement
were provided by the OS. Eternal placed QoS (i.e., replication) at the
IIOP interface, effectively hiding it from the application. Gul's approach
uses wrappers so that objects believe that they are actors and exhibit
actor properties. QuO uses a combination of wrappers (i.e., object delegates),
QoS contracts separated from the functional application code, and interfaces
to system resources and mechanisms; it supports insertion of QoS at many
different levels.
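A minimal sketch of the wrapper/delegate style of insertion (our own
illustration, with a hypothetical Stock interface standing in for an
IDL-generated stub): the client holds a local delegate with the same
interface as the remote object, so every call passes through a QoS
observation point without any change to the functional code:

    interface Stock { double price(String symbol); }

    class StockDelegate implements Stock {
        private final Stock remote;        // e.g., a CORBA stub
        private long lastLatencyMillis;    // fed to a contract or monitor

        StockDelegate(Stock remote) { this.remote = remote; }

        public double price(String symbol) {
            long start = System.currentTimeMillis();
            double result = remote.price(symbol);   // the functional call
            lastLatencyMillis = System.currentTimeMillis() - start;
            return result;                           // behavior is unchanged
        }

        public long lastLatencyMillis() { return lastLatencyMillis; }
    }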
Most session participants agreed that QoS concerns should be kept as
separate from functional concerns as possible. However, while some believed
that QoS could be provided by wrappers and middleware, others believed
that QoS could not be in middleware. Instead it needs to be somewhere,
like the OS, where it can be enforced. Others believed that, in many cases,
enforcement is not as important as notification and adaptation. That is,
instead of trying to guarantee QoS, the system does its best to provide
it and tries to adapt (or allow the application to adapt) when it is not
provided. It was mentioned that enforcing QoS requirements is more important
in some situations than in others (hard vs. soft QoS requirements).
Many session participants also agreed that QoS in distributed applications
creates the need for another role in software development, that
of the QoS engineer. In many cases, the lines between the roles will be
blurred, and it's possible that one person or set of persons will develop
both the functional and QoS part of applications. However, in many cases,
someone will require an ility, e.g., availability, and someone else will
decide what policies and mechanisms are needed to provide it, e.g., the
number of replicas and type of distribution.
The session participants disagreed on the idea of awareness of QoS.
In some situations, applications and users might want complete awareness
of QoS and many believed that some techniques, such as wrappers and embedding
QoS into the application (e.g., AWACS), provided it. In other situations,
applications and users want to be completely unaware of QoS. One person
argued that complete unawareness is seldom, if ever, wanted. He offered
the analogy that airline passengers don't want to worry about QoS, but
they want someone (e.g., the pilot, the mechanics) to worry about it. Someone
else offered the opinion that there are two kinds of being unaware: not
caring and not knowing. In some cases, one doesn't care how QoS is provided,
as long as it is provided. This might fit into awareness without pain.
Composition was a major concern when providing QoS. Everyone agreed
that many applications will need more than one ility at a time. However,
we believe that some will compose better than others, while some will not
compose at all. The concern was expressed that retrofitting applications
with QoS might lead to interoperability and composition problems. It might
not be possible to separate ilities in many cases, even though it is desirable
to change one without affecting the others. Designing QoS concerns or ilities
in so that they are maintainable and controllable might be all that we
can accomplish. The speakers provided different ideas about composition.
The actor model enables a certain amount of formal reasoning about the
composition of ilities. AWACS provided interfaces to QoS mechanisms so
that a trained expert could make tradeoff decisions. QuO recognizes the
need for composition of QoS contracts and needs to address it.
Where are we now, in which directions might we head, what are the
hard problems to overcome?
As the last part of the session, the moderator asked each participant
to summarize a major point, concern, problem or direction with relation
to QoS and the session discussions. The answers follow:
-
We need a system-level notion of QoS and need to build adaptive applications
that are aware of the quality they need and adapt to changes in it.
-
Providing QoS means striking a balance between conflicting non-functional
requirements and providing the tools to make tradeoffs. This creates a
new engineering role, that of the quality engineer with the expertise to
make these tradeoffs.
-
Building systems will include building QoS contracts and developing
code compliant with them.
-
Composition of ilities, contracts, and mechanisms is a key issue that
will need to be addressed.
-
There is no single definition of QoS yet, but examples suggest that
it can be addressed by a common framework. There is also no well-established
notation for describing QoS yet.
-
Another key issue is bridging the gap between the high-level notion
of QoS that applications need, i.e., ilities, and the low-level QoS that
mechanisms and resources can provide.
III-4 ORB and Web Integration
Architectures
Moderator: Rohit
Khare, University of California at Irvine
Scribe: Adam
Rifkin, CALTECH
Topics: ORB and web architectures
will increasingly overlap in function. How are ORB and web architectures
alike and how are they different? Are the differences accidental
and historical? How can we avoid recreating the same set of services,
like versioning, for both architectures?
Can we find ways to neatly splice them together?
-
Do we need the same openness extension mechanisms
for both architectures? PEP, filters, caching, dispatch, interceptors,
..., what else?
-
Do we need the same object models for each?
-
Do we need the same services for each?
-
Do we need the same service support infrastructure
for each? e.g., common way to handle metadata
-
Should they both take the same approach to ilities?
Papers
Discussion
Distributed object enterprise views were converging
nicely in the early 1990s, until the Web came along. Tim Berners-Lee
succeeded in modularizing systems, making information truly accessible
to the masses by combining universal thin clients with flexible back-end
servers, interacting through a gateway with third-tier applications
such as databases.
In the late 1990s, the question remains how to use the best of both
the Object and Web worlds when developing applications. Object Services
and Consulting, Inc., (OBJS) is investigating the scaling of ORB-like object
service architectures (for behaviors such as persistence, transactions,
and other middleware services) to the Web (for behaviors such as firewall
security and document caching, as well as rich structuring) by using intermediary
architectures [1].
They are also exploring data models that converge the benefits of emerging
Web structuring mechanisms and distributed object service architectures
[2].
Ultimately, the Holy Grail of application development using "Components
Plus Internet" can be realized in many different ways, including the
following (a sketch of the second option appears after the list):
-
By putting ORBs behind the gateway of a Web server's back-end.
-
By using Java; that is, by downloading an applet in the Web browser as
an ORB client that can communicate directly with ORBs.
-
By placing middleware between ORB clients and HTTP clients.
-
By using HTTP-NG, a binary distributed object protocol designed for use
with the Web, CORBA, and DCOM [3].
-
By using MIME typing with OMG's IDL, with the ORB as the server and Web
browsers as the connecting clients.
-
By composing active proxies that act on behalf of Web and ORB clients and
servers as needed [4].
-
By composing data from different sites using data fusion (for example,
collating information using WIDL [5]).
-
By performing compound transactions with ORBs and Web servers.
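As promised above, here is a minimal sketch of the second option, a Java
program acting directly as an ORB client; Quote and QuoteHelper stand in
for hypothetical IDL-generated stubs, and the stringified IOR might arrive
via a web page:

    import org.omg.CORBA.ORB;

    public class WebOrbClient {
        public static void main(String[] args) {
            ORB orb = ORB.init(args, null);           // bootstrap the ORB
            org.omg.CORBA.Object obj =
                orb.string_to_object(args[0]);        // stringified IOR
            Quote quote = QuoteHelper.narrow(obj);    // hypothetical stubs
            System.out.println(quote.price("OMG"));
        }
    }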
In prototyping an annotation service [6]
and a personal network weather service [7],
OBJS is developing an intermediary architecture that exploits the benefits
of both ORBs and the Web [8].
Web engines provide universal clients and web servers provide global access
to rich data streams; ORBs furnish middleware object services and open
the door to enterprise computing. In integrated, hybrid systems,
these roles are leveraged.
Fundamentally, is a Web server all that different from an ORB?
Perhaps. HTTP as a protocol makes provisions for error recovery, latency,
platform heterogeneity, cross-cultural issues, caching, and security, for
a specific type of application (the transport of documents), whereas ORBs
have a generic object architecture that allows for the selection of services
piecemeal as needed.
The Web has focused on providing a rich typing system, whereas ORBs
have focused on providing a rich set of APIs. As a result, the Web
is useful for describing data aspects (as opposed to operations),
whereas CORBA focuses on procedural aspects (as opposed to types).
This is manifested in the Web's document-centric nature -- and in CORBA's
loss of its compound document architecture.
It is also manifested in the Web's approach to heterogeneity: a single
common type indirection system, MIME, allowing new data types to be added
to the system as needed. By contrast, ORBs define data types strongly,
so that an IDL-described interface knows exactly what is going to hit it.
MIME is one example of how the Web was adopted using a strategy of incremental
deployment, starting with file system semantics and building up from there.
As a result, the Web in its nascent stage has not yet deployed "services"
(in the traditional "object" sense), but they are forthcoming shortly (WebDAV
for versioning, XML for querying, and so on).
One limitation of HTTP is that although in theory HTTP can be run for
communication in both directions (through proxies), in practice HTTP 1.x
must be initiated by a client, so two peers would need two open channels
(that is, two HTTP pipes). IIOP has hooks for avoiding this through
back and forth polling.
On the other hand, the Web has several strengths:
-
Caching - a bag of bits and an URL can live anywhere
-
Error processing
-
Proxies - a gateway is part of the Web architecture
-
Type system - Web data types are loose (MIME but no PEP) but rich (human-readable)
and dynamically discoverable
-
Scaling - synchronous point-to-point, distributed group services
-
Directory services - DNS, but no traders
These strengths could be applied to CORBA, adding real attributes to CORBA
interfaces. Perhaps some combination of Object and Web technologies
may ultimately prove useful (for example, IDL for interface specifications
and XML for data exchange and storage). The Web might assimilate
CORBA, and CORBA might assimilate the Web, and W3C's HTTP-NG might do both
[3].
And although commercially available Web technology is presently insufficient
for many object developers' needs, it is, as Marty Tenenbaum of CommerceNet
would say, simple enough that a fifth grader can use it.
For now, we continue to attempt to understand the commonalities and
the differences of ORBs and the Web in our application analyses and designs.
As of January 1998, CORBA remains a generic framework for developing many
applications, whereas the Web is a generic framework that has deployed
many applications.
[1] http://www.objs.com/workshops/ws9801/papers/paper103.html
[2] http://www.objs.com/OSA/wom.htm
[3] http://www.w3.org/Protocols/HTTP-NG/
[4] http://www.objs.com/workshops/ws9801/papers/paper102.html
[5] http://www.w3.org/Submission/1997/15/Overview.html
[6] http://www.objs.com/OSA/OSA-annotations.html
[7] http://www.objs.com/OSA/OSA-network-weather.html
[8] http://www.objs.com/OSA/OSA-intermediary-architecture.html
IV-1 Working Toward Common Solutions
Moderator: Dave
Curtis, Object Management Group
Scribe: Craig
Thompson, OBJS
Topics: The database, middleware,
Web, and other communities are expanding their territories and are coming
up with their own unique solutions to problems already addressed in other
communities. How do we prevent a proliferation of incompatible standards
from developing out of these separate communities?
-
Is component software a silver bullet, a nightmare,
or yet-another-technology?
-
What's missing to build an object/component software
economy? (breakout I-1)
-
What can we do to accelerate mass value from components?
-
Avoiding surprise - What are alternatives
to objects and components? agents, XML, ...
-
Directions for OMG, W3C, JavaSoft, RM-ODP, ...
What have we learned from this workshop that would benefit the communities
involved?
-
What communities might benefit from working more
closely? How should they proceed?
-
middleware and web, e.g., OMG and W3C, OMG and Java,
XML/DOM and Java
-
middleware and agents, e.g., OMG and FIPA
-
middleware and middleware, e.g., DCOM and OMG
-
DBMS and middleware, e.g., OMG and ODMG
-
enterprise and middleware, e.g., RM-ODP and OMG
-
research and development, e.g., OMG and DARPA (e.g.,
encourage DARPA to use OMG as a place to pin its research results and OMG
to use DARPA as a research arm, generally encourage closer industry-research
ties)
-
How important is it that convergence occurs in these
areas? What is the downside if it does not? What steps could
be taken to make one community more friendly to another?
-
How can we use the web to build a component marketplace?
for DoD COE? for everyone? should we build web repositories for components?
web spiders for finding components? microlicensing and zero-cost
maintenance deployment technologies?
Papers
-
paper017.doc
- Craig Thompson (1), Ted Linden (2), Bob Filman (2), Thoughts on
OMA-NG: The Next Generation OMG Object Management Architecture,
(1) Object Services and Consulting, Inc. and (2) Microelectronics and Computer
Technology Corporation.
Discussion
Dave Curtis began this session by renaming it "Can't We Just Get Along?"
He translated this to several questions:
-
Can we deter or live with incompatible standards?
-
Can we realistically forecast the software weather?
-
What role will the web play in these developments?
-
DARPA and OMG - what can we do together?
Where should the OMG Object Management Architecture go from here?
Craig Thompson presented some ideas for the next generation of the Object
Management Group's Object Management Architecture (OMA) -- see full
paper. The OMA Reference Architecture is the familiar
ORB bus with basic object services, common facilities, domain objects and
application objects accessible via the bus (see OMG
OMA tutorial). This was a radical experiment in 1991 and the
OMG has since populated the architecture with a fair number of middleware
services; in fact, OMG is working on a record number of RFPs for services,
facilities, and mechanisms at present. The OMA has been a serviceable
and successful middleware architecture that has provided a design pattern
for the middleware community. One strength is that it has provided
both technical expansion joints and parallel community organizations; for
instance, the ORB, basic object services, and common facilities have their
own sub-architecture documents that expand finally into leaf RFPs (and
there were organizational subdivisions of OMG along these lines until
a recent consolidation). Thompson pointed out that there are several
things the OMA neither explains nor precludes, and perhaps it is time
to give substantial attention to these (and others):
-
missing wiring and constraints. IEEE defines architecture
to include components, wiring, and constraints. The OMA really mainly
talks about the components. Traditionally, the wiring is handled
by CORBA, which provides a general dispatch mechanism (message-passing
bus) but does not specify which services call which other services, e.g.,
dependencies among components, the wiring details. As Dave
Curtis pointed out in his talk on component models, to date, OMG specifications
have provided an API but no floor or dependency interfaces, and so are
like Lego bricks that provide just top interfaces but no bottoms, and so
provide no way to say how the component connects to others (a sketch of
this provided/required distinction appears after this list). This means
that we really cannot yet claim the mix-and-match and plug-and-play
ability to replace components. The OMG Component Model RFP responses
address this deficiency. However, there will still be no standard way
to specify constraints or rules.
-
frameworks. The current draft of the OMA (the Madrid draft,
1996) defines {application, domain, facility, and service} frameworks to
be composed of {application and also {domain and also {facility and also
{service}}}} components. But so far neither OMG nor the larger OO community
has really operationally defined what we mean by framework and, until
recently, by component.
-
mobile state and mobile code. Until recently, OMA specifications
mainly focused on remote references but did not support mobile state and
mobile code in an equally first class way. But all three are primitives
in a very distributed environment. Java provides the latter two.
-
ilities. The OMA does not mention -ilities, though the OMG
Security specification does provide for almost general-purpose Interceptors
that are used to splice security behavior into communication
paths. And OMG is working on a general Quality of Service reference
model. But a concern for ilities needs to become a more central part
of the OMA because, without it, we may be able to connect components together
but may not have a way to guarantee end-to-end or top-to-bottom composition
of non-functional properties.
-
Java-friendliness. Thompson noted that some of the features
of OMA are available in Java and there is huge Java momentum; also, OMG
has defined an IDL-to-Java mapping (sketched after this list) and is defining
a Java-to-IDL not-quite-reverse mapping. Could Java and IDL grow together? Some noted that
JOE, NEO, beans, iBus, and servlets move Java toward (monolingual) services.
One question might be: is there something special about IDL, or could we
map all OMG services to Java and just use Java? This might avoid
the indirection of going through IDL whenever using distribution, something
that many Java programmers find painful and avoid. This raised a
natural controversy, which in microcosm is the one raised at OMG when
the OMG religion is questioned. Some view IDL as the preferred language
for integration and the price we pay to ensure interoperability among diverse
languages. Java is here today but something else will come tomorrow
(a contrarian view is Java is not the answer, Ada is). All very large
problems eventually map across language and/or distribution boundaries,
and IDL provides a way to do this, in fact to plan for it. Others
felt that IDL puts up a pretty large complexity barrier for programmers,
one that many will avoid, and further that it puts in place in applications
a sort of brittle IDL wall where the distribution boundaries are hard-coded.
One suggestion was to continue with IDL in RFPs but encourage Java APIs
as well, or even to use the Java-to-IDL mapping to produce IDL mappings
and allow spec writers to provide native Java APIs in response to OMG RFPs.
[A side issue was raised: can OMG make its specifications available in
HTML, not just PDF? The current answer is that OMG's staff is small and
Frame is the publishing vehicle at present.] A major reason to move
forward here is to try to capitalize on OMG's architecture (as a blueprint)
and Java's popularity as a language for portable applications. A
worry is that both are beginning to play overlapping roles and now that
both are PAS submitters (standards producers) there is nothing to
prevent scope creep and divergence, which won't help either community and
may hurt both. So far, the OMG and JavaSoft management teams have
not converged on any form of memoranda of agreement. Can we have
our cake and eat it too? or will we bifurcate communities and make choices?
The general subject of Java-friendliness remains controversial.
-
Web-friendly. Breakout session III-4
(ORB-Web Integration Architectures) and IV-2 (Towards
Web Object Models) cover some of the architectural aspects of making OMA
more web-friendly. So does the OMG
Internet Special Interest Group, which has made a series of recommendations
to OMG on this topic. Interestingly, not many in W3C or in OMG seem
to be very knowledgeable about the other's architecture (with some notable
exceptions like the W3C HTTP-NG effort).
-
agents. Though several at the workshop believe
that agents are vacuous, the term agent has been loosely associated with
a family of capabilities that will be needed in large evolving systems.
Whether it is "agents" or various kinds of expansion joint, ility, late-binding
glue, ..., we will need to have mechanisms for building large systems that
have properties like scalability, evolvability, adaptability, and survivability.
OMG has yet to seriously address these ilities. Agent technology
may help (though some at the workshop do not think so).
-
microlicensing. OMG only defines specifications, vendors provide
implementations, but until the Component Model RFPs result in a Component
Model specification, we still will not have enough standard packaging machinery
to begin to talk about OMG components that might be portable across environments.
When we do, we will still have other barriers. One is locating such
components; the Duke
ORB Walker provides one way to do this, or more conventionally a Gamelan-style
repository might help. Another hurdle will be semi-standard ways
to license components so that it becomes more affordable to assemble products
and applications from others' components rather than re-writing them.
See Breakout Session II-1 (Economy of Component Software).
-
implementation gap. A comment was made that the OMG implementation
gap leads to a credibility gap. Responses were: OMG is legally
limited to specifications. At OMG meetings, customers mix with providers
to meet their requirements. ORB vendors build what customers pay them to
build. Some OMG specs are flawed, some are not needed widely enough. There
is an occasional push to enforce OMG's implementation requirements (commercial
implementations within one year of specification) and to rescind specifications
if no commercial implementation is ready on time. But there has been
no action on this to date. The open systems community itself needs
to solve this problem or it may be solved by Microsoft or Java, both vendor
proprietary. OMG does not have a good current forum to identify Achilles'
heels and left-field issues that may smack it in the head. OMG does
have a formal branding program.
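Two minimal sketches, referenced in the wiring and Java-friendliness items
above, may make those points concrete. First, a component whose required
("bottom") interfaces are as explicit as its provided ("top") interface,
so that an assembler can check the wiring; all names here are this report's
illustration:

    interface Logger { void log(String message); }        // required (bottom)
    interface AccountService { void debit(int cents); }   // provided (top)

    class AccountComponent implements AccountService {
        private final Logger logger;                  // explicit dependency

        AccountComponent(Logger logger) {             // checkable wiring point
            this.logger = logger;
        }

        public void debit(int cents) {
            logger.log("debit " + cents);
            // ... business logic ...
        }
    }

Second, approximately what an IDL compiler emits under the IDL-to-Java
mapping for the IDL declaration interface Quote { double price(in string
symbol); }; details vary by mapping revision, and the generated Helper,
Holder, and stub classes are omitted:

    public interface Quote extends org.omg.CORBA.Object {
        double price(String symbol);   // IDL "in string" maps to java.lang.String
    }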
OMG-DARPA Bidirectional Technology Transition Opportunities.
The discussion turned to cross-community opportunities for technology transfer.
We focused on identifying specific opportunities between DARPA and OMG.
DARPA is developing an overarching architecture called the Advanced Information
Technology Services (AITS) architecture, which covers command and control,
logistics, planning, crisis management, and data dissemination. Todd
Carrico showed a complex foil covering one view of the AITS architecture,
which shows object services in the bottom right, and asked for a mapping
to OMG, that is, where are the technology transition opportunities.
-
Thompson mentioned the historic connection of the DARPA POB Open OODB project
that funded his work co-authoring the original OMA Reference Model (1991)
and Object Services Architecture (1992). Since then, the DARPA AITS
architecture, especially JTF-ATD, has made a number of commitments to
CORBA ORBs and services and continues to do so.
-
Thompson co-chairs the OMG Internet Special Interest Group (ISIG) with
DARPA support. In the past two years, in addition to the work of
ISIG to make OMG web-aware and make roadmap recommendations, we have invited
JTF WebServer, DataServer, and MCC InfoSleuth presentations at OMG Internet
SIG meetings, tried to get the DARPA NIIIP Consortium's Event-Condition-Action
Rules work to become an OMG RFP, and recently asked Sami Saydjari and David
Wells to brief OMG Security SIG on the DARPA Security and Information Assurance
Reference Architecture and on Survivability (respectively) at the next
OMG meeting.
-
But overall interactions where DARPA transfers technology to OMG have been
relatively few. To date, DARPA has tended to adopt OMG specifications
but, while DARPA has many leading edge services, it has not tended to actively
influence the OMG architecture directions (with some exceptions).
There is a general opportunity for DARPA to act as a research arm for OMG
(and W3C, IETF, WfMC, and possibly other proactive, consensus-building
industry open-systems communities) and OMG to act as a COTS clearing house
for DARPA. The advantage to each organization is real -- OMG gets more
immediate benefit of DARPA's considerable R&D expertise, and DARPA
gets a way to leverage industry direction. This is true even if OMG
just provides an industry middleware architecture blueprint, where details
eventually change in Java or DCOM form.
-
Specific examples of current DARPA-to-OMG technology transition opportunities
are:
-
lessons learned on the JTF-ATD project about using OMG implementations
should be transitioned to OMG.
-
DARPA is finding it desirable to extend some OMG services, like Triggering,
which extends the Event Service with filters. Alert OMG to the need
to extend the specification or champion a Trigger RFP.
-
DARPA's Object Model Working Group schema work should be shared with OMG's
C4I Working Group.
-
DARPA's considerable work on Quorum should be shared with OMG's QoS Reference
Model Group.
-
DARPA's Security Reference Model should be shared with OMG's Security SIG
(this will happen soon).
-
OMG is working on a facility called Object Transfer and Manipulation Facility
which would benefit from help from the various DARPA Information Access
architectures (e.g., I*3, DataServer, BADD) -- it is intended to provide
access to heterogeneous information resources, specifically to make other
data sources like file systems look like typed remote objects (providing
"objects for the masses").
-
OMG has the beginnings of work on agents (Mobile Agents spec) and traders
(Trader spec) but there is more work to be done here; similarly, many OMG
services need to be extended with federation interfaces. Insights
from the DARPA ALP and DMSO HLA communities might help a lot right now
-- DMSO has established a beachhead via the OMG Simulation SIG.
-
Some AITS domains might translate to become OMG Domain SIGs - logistics
is a candidate.
-
Some JTF servers might become OMG facilities, e.g., WebServer (though the
JTF notion of webs is not the same as WWW right now). [There are
opportunities here for an upgrade of the JTF WebServer to XML, for instance
-- and OMG needs a standard way to interface to XML.]
-
We concluded that a more careful mapping of AITS and OMG OMA might reveal
other technology transfer opportunities but that the list above represented
a good start. We also concluded that there might be major architecture-driven
opportunities for convergence if a careful study is made of AITS, OMG OMA,
W3C, DII/COE, and JTA. (Of course, as noted in the discussion, there
is active work ongoing to extend DII/COE via AITS. But the opportunity
is being missed to tie this directly back to industry.)
Complexity. How can newbies take part in middleware? One would
think that, since we are dealing with components, it would be easier
for small groups to contribute new components to middleware.
This may become true over time but there are still barriers. For
instance, right now, most services vended by middleware vendors are not
portable across different vendors' ORBs. We spent some time discussing
complexity and the ility of understandability. Where does the complexity
come from? ("Is it Satan?" asked the Church Lady) Or is it having to know
about OO, OMG, W3C, and all the little standards and subsystems, hundreds
of details like when to use public virtual inheritance? There is perceived
complexity in telling the healthcare community what CORBA is. There
are other roadblocks, for instance, OMG not having a programming metaphor,
the OMG community not providing many development tools, the need for
training and the difficulty of teaching students about CORBA, even the ready
availability of specifications in convenient formats. We need better ways
to facilitate the widespread use of middleware. Others noted in OMG's defense
that it is a fallacy to compare what VB is trying to do with what CORBA
is trying to do.
Interlanguage interoperability. Another strand of discussion
covered interlanguage interoperability. One comment: OMG language
bindings are a straitjacket. If you commit to CORBA, the IDL type system
pervades everything you do. IDL provides common ground over some domains
of heterogeneity. A counter argument: there is a tradeoff between programming
convenience and flexibility. If you are dealing with a multilingual
system, your type mismatches go up without IDL. So you are hedging against
an unknown future. There seems to be the presumption that choosing CORBA
is the right medicine to ward off later sicknesses, or said another way,
pay me now, not pay me later. But history has often sided with those
who would pay me later (Java). So a challenge for the OMG community
(and us all) is how to have our cake and eat it too -- get both the immediate
gratification of a single language solution (simplicity and tools) and
the flexibility of language-independence.
Summary of suggestions:
-
OMG: spend the time to consider your relationship to other communities
and technologies and how to deal with these to give best leverage.
Get an explicit vision of OMA-NG (next generation of the Object Management
Architecture). Consider how to make your specifications more useful,
including an HTML version and provide more connections to Java. Reduce
the gap between CORBA and programming languages and tools.
-
DARPA: consider how to leverage your expertise in middleware via
an explicit technology transfer plan that targets OMG, W3C, IETF, and other
industry targets.
IV-2 Towards
Web Object Models
Moderator: Ora
Lassila, Nokia Research Center and W3C
Scribe: Andre
Goforth, NASA Ames Research Center
Topics
-
How do XML, DOM, PICS-NG, RDF, and the many metadata
proposals relate to object models, IDL, Java, and agent definition languages?
Just what are all these representations? do we need them all?
will some converge? will we need the cross-product of mappings?
-
Metadata is useful for many purposes. Are the
representations useful for the web similar to those that are useful for
component models?
-
Objects might be viewed as degenerate agents.
Can we expect XML structural representations, objects, and agents all to
be popular web and middleware representations or do we expect some convergence?
Papers
Discussion
Frank Manola presented his paper, "Towards a Web Object Model". His central
point was that there is a need to increase the web's information structuring
power. The current web object model is "weak" because it is difficult to
extract object state out of HTML and to express "deeper" semantics (behavior).
He discussed how current efforts such as XML, RDF and DOM are addressing
this need. This led to a discussion of how well these standards provide
enhanced structuring power and of a comparison of the Web's technologies
with OMG's CORBA. The session ended with a summary of what the Web can
learn from OMG and OMG from the Web.
Here are some of the salient discussion points:
Ora commented that XML's contribution to strengthening the web's object
model is overblown; it is more of a transfer mechanism that addresses syntax
and not semantics. There was no major disagreement with this point of view.
Frank commented that users with large document repositories want a markup
language that will outlive today's web technology.
There were several comments that DOM provides sufficient features to
support object discovery and visualization. DOM provides a generic API
so a client can recreate an object and push it to the server and let the
server use it; this gives a "window" on the web. It was questioned why
the client/server distinction is important: you could support a peer-to-peer
interaction model.
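The "generic API" point can be seen in a few lines of Java against the DOM
interfaces (a sketch, using the org.w3c.dom package as it has since been
standardized): the same few calls walk any document, whatever its vocabulary.

    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    class DomWalker {
        // Print the name of every element below the given node.
        static void walk(Node node, int depth) {
            if (node.getNodeType() == Node.ELEMENT_NODE) {
                for (int i = 0; i < depth; i++) System.out.print("  ");
                System.out.println(((Element) node).getTagName());
            }
            NodeList children = node.getChildNodes();
            for (int i = 0; i < children.getLength(); i++) {
                walk(children.item(i), depth + 1);
            }
        }
    }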
There was a question about ways to reference objects on the web. Ora
replied that he thinks that RDF will be sufficient to provide object discovery.
Also, the issue of different type systems was raised; for example, DOM
has its own type hierarchy. Ora pointed out that RDF does not provide a
large built-in type hierarchy but gives the user the ability to create
one; he went on to point out that RDF does not care about DTDs. Someone
commented that DTDs may serve as XML schemas.
Someone commented that it is feasible to represent CORBA IDL in XML.
The response was that one might be able to define a DTD for OMG's IDL description.
Ora commented that he had suggested that RDF transfer syntax be done
in terms of S-expressions and was met with a lot of resistance. There were
some cries of derision and comments about religious syntax wars in the
breakout session, all in good jest.
Bill Janssen (Xerox PARC) asked why CORBA doesn't have a human-representation
"get" method for objects. There was discussion on what made the web so
popular. The consensus was that it provides instant object visualization.
It was pointed out that the web provides only one instance of an information
system whereas CORBA has the generality to support a broad range of systems.
At this point, the discussion turned to whether W3C was going to overtake
OMG. The rate of growth of the web is phenomenal, whereas it seems to take
OMG forever to come out with a new standard and even when it does it takes
a long time for it to be implemented. It was pointed out though that membership
in OMG is growing briskly and shows no signs of slowing down. It was also
pointed out that if working with CORBA was as "visual" as working with
the Web then OMG would experience the same popularity and widespread growth
that the Web is experiencing. One suggestion was that CORBA provide
a visual component for all objects. Current CORBA APIs are designed for
program instead of programmer visibility.
The discussion returned to W3C versus OMG. Somebody was of the opinion
that OMG will eventually be overwhelmed by W3C. Even with the limited object
model of the web, a large number of enterprising souls are building distributed
systems (using web technology) that typically would be considered material
for a CORBA implementation. Users are pushing the application of Web technology
harder and further than CORBA has ever been pushed.
The discussion then turned to the shortcomings of the Web. What do you
"show" as an object using the Web? Get and Put provide you with a sea of
documents, i.e. pages uniquely identified by their URLs. Someone pointed
out that the Web has limited ability to do reflection; HTML's "header"
does not cut it. It was pointed out that the web has to better address
the intersection of object management, object introspection and object
streaming.
To move the discussion along, Ora posed one of the breakout session's
suggested discussion topics: Is the Web's metadata useful for other
uses such as component models? The discussion was limited due to time.
It was pointed out that the combination of different metadata standards
may cause needless overhead and round trip messaging.
At this point, the consensus of the participants was to list what the
Web and CORBA could learn from each other. Here is the list that resulted:
Improvements for CORBA Inspired by Web Experience:
-
Possible GET method on Object that returns a human-presentable view of the
instance (a minimal sketch appears after this list)
-
Add "doc-string" facility to IDL that carries through to get-interface
-
Improved caching semantics for IDL for attributes, for return results (and
cache consistency algorithms?)
-
Well defined TCP ports for standard CORBA services
-
Human-friendly URL form
-
Distributed garbage collection
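A minimal sketch of the first suggestion above; the Presentable interface
and the Account class are this report's illustration, not an OMG
specification:

    interface Presentable {
        String getHtmlView();   // analogous to an HTTP GET on the object
    }

    class Account implements Presentable {
        private final String owner;
        private int cents;

        Account(String owner, int cents) {
            this.owner = owner;
            this.cents = cents;
        }

        public String getHtmlView() {   // human-presentable view of state
            return "<html><body><h1>Account: " + owner + "</h1><p>Balance: "
                   + (cents / 100.0) + "</p></body></html>";
        }
    }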
Improvements for the Web Inspired by the Distributed Object Experience:
-
Distributed garbage collection (there is a lot of garbage on the web :-)
-
Stronger notions of referential integrity
-
Standardized server and browser system management APIs
-
Stronger support for associating procedural semantics with whatever is
behind a URL
-
Standard type system for web objects with an interface definition language
(HTTP-NG?, OMG IDL?, MIDL?)
The final discussion topic was on agents. What are agents? There
was a consensus that the term means different things to different people
so broadly that any feature in an information system could be called an
agent or an artifact of an agent. Nobody really knows what they are. When
is something an agent and not a smart object? It was noted that agents
keep popping up all over the place and that there appears to be a good
deal of research funding for them. Ora commented that agents are postulated
when there is a need to fill the gap "...and then some magic happens" in
describing the functionality or behavior of a system.
Finally, Ora presented this additional summary in the final plenary
session of the workshop:
Lack of Semantics
-
The world seems to have a very strong focus on syntax (XML has been hyped
up etc.)
-
Models are (desperately) needed
-
the web lacks SEMANTICS!
-
RDF is an attempt to alleviate this situation
-
Type systems
-
what are web objects, what kinds of types do we need
-
DOM provides a set of types
-
Dublin Core as a "least common denominator" for libraries
-
extensibility is important
Procedural Semantics
-
Interfaces: Can we describe method signatures?
-
Programming languages
-
The importance of "context"
-
indexing, searching pulls objects (documents) out of their natural context
-
pass-by-value vs. pass-by-reference
IV-3 Standardized Domain Object
Models
Moderator: Fred
Waskiewicz, SEMATECH
Scribe: Gio
Wiederhold, Stanford
Topics: What criteria should be used
to partition and package domain spaces?
-
OMG, STEP, SEMATECH, and others are working on defining
standard object models for domains like business, manufacturing, transportation,
electronic commerce, finance, healthcare, and many others. What criteria
should be used to partition and package domain spaces? There seems
to be no best scheme for factoring. So how do we agree when it comes
to componentizing or packaging domain data?
-
Many large applications require representing an open
collection of attributes per entity (different scientific experiments collect
different observable data about entities under study). This situation
is not modeled well by objects. Is it the right solution to mix in a property
list, or to invent a new data model that handles property lists as first-class
objects? How do you mix in the methods? (A minimal sketch of the property-list
approach follows this list.)
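A minimal sketch of the property-list approach mentioned above (the Specimen
class is this report's illustration): a typed entity carries a fixed,
modeled part plus an open, per-instance collection of observations. The
unresolved part of the question is visible here, too: the map holds data,
not methods.

    import java.util.HashMap;
    import java.util.Map;

    class Specimen {
        private final String id;                        // fixed, modeled attribute
        private final Map<String, Object> properties =  // open attribute collection
                new HashMap<String, Object>();

        Specimen(String id) { this.id = id; }

        String getId() { return id; }
        void setProperty(String name, Object value) { properties.put(name, value); }
        Object getProperty(String name) { return properties.get(name); }
    }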
Papers
-
paper013.html
- Bob Hodges, Component Specification and Conformance: What Components
must Expose to Compose, Texas Instruments and SEMATECH (Bob had
to attend an OMG meeting instead)
Discussion
This session did not end up attracting a large enough
crowd for a full-length discussion.
IV-4 Reifying
Communication Paths
Moderator: David
Wells, OBJS
Scribe: Kevin
Sullivan, University of Virginia
Topics: Several workshop papers indicate
that one inserts ilities into architectures by reifying communication paths
and inserting ility behaviors there. But many
questions remain.
-
What behavioral extensions can be added implicitly,
transparently, seamlessly to a system?
-
What are the mechanisms involved? expansion
joints, interceptors, proxies, injectors, higher-order connectors, program
managers, mediators, wrappers, filters, before-after methods, rules, ...
-
Are all ilities the same complexity and handled by
insertion into communication paths?
-
Do we have what we need to do this in today's ORBs,
web, and Java? There is a need for before-after methods or rules or ... that
third parties can define. ORBs have filters, but not all web clients
or servers do, and Java does not support before-after methods, especially
not ones programmed by third parties. Should it?
-
What are the domain and scoping mechanisms that affect
boundaries for ility insertion?
-
Enforcement -- if an object is made persistent, plays
a role in a transaction, is versioned, ... then how do we guard it
so all these possibly implicit operations occur in some reasonable order
and there are no backdoors?
-
Composing multiple ilities, handling ility tradeoffs
Papers
-
paper101.doc
- Roger Le and Sharma Chakravarthy, Support for Composite Events
and Rules in Distributed Heterogeneous Environments, Computer
and Information Science and Engineering Department, University of Florida,
Gainesville
Discussion
This session focused on the use of event (implicit invocation) mechanisms
to extend the behaviors of systems after their initial design is complete,
so as to satisfy new functional and non-functional requirements.
The discussion ranged from basic design philosophy through articulating
the design space for implicit invocation mechanisms to a discussion of
tactics for exploiting observable events within specific middleware infrastructures.
We also surveyed participants for examples of successful uses of the approach.
A hypothesis was offered (Balzer): "Systems comprise communicating components
and the only way to 'add ilities' is with implicit invocation; moreover,
in many cases it has to be synchronous."
At the philosophical (design principles) level, the discussion focused
on the question of whether engineers should exploit only intentional events,
i.e., those that are observable in the system as a result of intentional architectural
decisions; or whether it is acceptable to depend on accidental events—those
that happen to be observable as a result of detailed decisions left to
detailed designers and implementers.
Sullivan’s Abstract Behavioral Types (ABTs) [cf. TOSEM 94, TSE 96]
elevate events to the level of architectural design abstractions, making
them dual to and as fully general as operations (e.g., having names, type
signatures, semantics). By contrast, procedure call activities that
can be intercepted within ORBs are accidentally observable events. The
following positions were taken: first, architectural boundaries are natural
places to monitor event occurrences and should always be monitorable; second,
some techniques provide full visibility of events occurring within the
underlying program interpreter.
On both sides of the discussion it was agreed that the set of events
observable in a system limits the space of feasible transparent behavioral
modifications. The crux of the matter was the question of whether system
architects can anticipate all events that a subsequent maintainer
might need in order to effect post facto desired behavioral extensions.
No one disagreed that the answer is no: Architects can’t anticipate all
future requirements. On the other hand, it was clear that neither
can a maintainer depend on accidental events being sufficient to enable
desired extensions. For example, the set of events visible as local
function calls is not the same as the set of events visible as "ORB crossings,"
and it’s possible that either of these sets, both, or neither is sufficient
to enable a given extension.
The dual view is that a given extension requirement implies the need
for observability of certain events. Then the question is how do
you get it? I.e., what architectural features or design details can
you exploit to gain necessary visibility to key event occurrences? One
design principle was offered: that you should go back and change the architecture
to incorporate the required events as architectural abstractions; the other
was that you can exploit accidentally observable events directly,
if they suffice to enable satisfaction of the new extension requirements.
The scribe has attached a post facto analysis of this issue below.
A key point was that attaching behavior to an event might compromise
the underlying system: in synchronization, real time, security, etc.
Another point was made that generally there are many useful views of
complex systems, e.g., one for base functionality, another for management
and operations, and that different views might have different observable
event sets permitting different kinds of extensions.
We devoted considerable time to elaborating the design space for detailed
aspects of event mechanisms, especially what parameters are passed with
event notifications. Suggestions included the following: a fixed
set of arguments obtained from the probed event (e.g., the caller and parameter
list for a procedure invocation); a user-specified subset of that fixed
set; a subset picked by a dynamically evaluated predicate; a set of parameters
generated by a program registered with events; events as dual to operations
(abstract behavioral types). It was noted that the duality between
operations and events doesn’t hold up in real-time systems because "it’s
all events" in such systems.
It was noted that events are usually used to effect behavioral extension
leaving the probed behavior undisturbed, but that sometimes it is useful
for a triggered procedure to change the state of the triggering entity.
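A minimal sketch of the before/after flavor of these mechanisms (this
report's illustration, not any particular system's API): third parties
register callbacks on an operation, and the probed behavior itself is
left undisturbed.

    import java.util.ArrayList;
    import java.util.List;

    interface Callback { void on(String operation, Object arg); }

    class InterceptedStore {
        private final List<Callback> before = new ArrayList<Callback>();
        private final List<Callback> after = new ArrayList<Callback>();

        void addBefore(Callback c) { before.add(c); }
        void addAfter(Callback c) { after.add(c); }

        void store(Object document) {
            for (Callback c : before) c.on("store", document);  // e.g., a security check
            // ... the undisturbed base behavior ...
            for (Callback c : after) c.on("store", document);   // e.g., logging, replication
        }
    }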
Next we turned to the question of experience reports. To get the
discussion going, the following questions were posed: What should we not
try to do with these mechanisms? What ilities have you added using such
mechanisms? How seamlessly? How much effort did it take? And what is the
status of the resulting system?
Responses included the following:
-
in the Open OODB project, Wells added persistence and transactions transparently
in CLOS (using its meta object protocol) and C++ (via a preprocessor),
intercepting the first method invocation on objects, causing them to be
"faulted in" from a data store. The system is licensable in source
code. Design ideas helped drive the OMG OMA and ORBservices architectures
in the early 1990s.
-
Wileden’s PolySPIN, which adds type interoperability to Open OODB C++ and
CLOS objects; also noted addition of persistence to Java, with a system
running under Java 1.2;
-
see the real-time CORBA 1.1 RFP; also see the OMG Security specification
-- its concept of interceptors adds security via an event mechanism
-
a set of papers is available from MCC on OIP, and replication in a repository,
with the work in the early prototyping stage; insertion of logging, queue
management and priority passing from request originators; insertion
of arbitrary functionality in source-recipient pairs in CORBA calls;
-
Contingency Airborne Reconnaissance System (CARS), an MLS reconnaissance
system that adjudicates information in the system based on security labels,
of which there can be very many; now deployed in two places; a single multilevel
component mediates, causing all prohibited operations to simply fail to
execute.
-
a custom ORB extension that implements some real time functions and transparent
replication for fault tolerance, about to be deployed on AWACS this summer;
-
from Mitre and University of Rhode Island, the mapping of timing constraints
into priorities added as parameters to calls for use by a scheduler;
-
Doug Schmidt’s TAO;
-
Moser’s and Melliar-Smith’s Eternal system at UCSB provides replication
transparently by intercepting calls at the TCP level and redirecting them
to the TOTEM replication manager; the system’s running, with a paper in
COOTS’97;
-
QuO has local delegates for remote objects which check for quality of service
issues, such as replication, communications management, etc.; system is
in prototype form;
-
Thompson's Intermediary Architecture and web proxies in general add behavior
between web client and server. The behavior can be managed by third
parties.
-
Taylor’s C2, with explicit connectors and communication mediators for interoperability
between C++ and Ada, encapsulating distribution, and based on message multicast;
-
Kaiser’s Oz and Exodus transaction manager, which use before and after
callbacks attached to operations;
-
Sullivan’s abstract behavioral types and mediator-based design approach
for integrating independent components; a radiation treatment planning
system was built using this technology and is in routine clinical use at
several major university hospitals; a paper on the basic approach is available
in ACM TOSEM 1994, and one on the production system in IEEE Transactions
on Software Engineering, 1996;
-
Sullivan’s Galileo fault tree analysis tool, in which multiple shrink-wrapped
packages are used as components and tightly integrated; paper in ICSE 1996;
prototype system being distributed and in use for evaluation at Lockheed
Martin Federal Systems;
-
Balzer’s enhancement to MS Visual C++ so that it exports error messages
that occur during system development, as well as an extension of PowerPoint
(using its Visual Basic engine) to turn it into an architectural diagram
editor.
Sullivan Commentary on Essential vs. Accidental Events
As to whether designers should depend only on events for which architects
anticipate needs, or on events that are observable owing to more or less
arbitrary design decisions, it appears to the scribe that the following
observations can be made. Exploiting accidental events, as for any
implementation details, provides opportunities for immediate gain but with
two costs: increased complexity owing to the breaking of abstraction boundaries;
and difficulty in long-term evolution, owing to increased complexity, but
also because system integrity comes to depend on implementation decisions
that, being the prerogative of the implementor, are subject to change without
notice. Yet often, the maintainer/user of a system has no way to
make architectural changes, and so can be left with the exploitation of
accidental events as the only real opportunity to effect desired behavioral
extensions.
The decision to exploit accidental events ends up as an engineering decision
that must account for both short- and long-term evolutionary benefits and
costs. The exploitation of accidental events procures a present benefit
with uncertain future costs. On the other hand, exploiting only those
events that are observable as a result of architectural design reflects
an assumption that it’s better to pay more now to avoid greater future
costs. Again, though, sometimes—perhaps especially in the worlds of
legacy systems and of commercial off-the-shelf componentry—architectural
changes just might not be feasible.
Finally, it is possible to elevate what are today accidentally observable
events to the level of architecturally sanctioned events, through standardization
(de jure or de facto). For example, if a standard stipulates
that all function invocations that pass through an ORB shall be observable,
then systems architects who choose to use such a standard are forced to
accept the observability of such events as a part of their system architectures,
and to reason about the implications of that being the case. One
implication in the given example is that maintainers have a right to use
procedure call events with architectural authority. This approach
imposes interesting constraints and obligations on system architects.
For example, the use of remote procedure calls comes to imply architectural
commitments.
Closing Remarks
Summary Statement from Dave Curtis, OMG
Dave Curtis commented that lots of OMG members participated in the workshop
and many have influence over OMG direction so we can expect some changes
from their actions. He told workshop participants that one specific and
timely way to be involved is to review the OMG
Component Model RFPs and send feedback to RFP authors including Umesh
Bellur from Oracle. The next OMG meeting is in Salt Lake City on
February 9-13, 1998.
Summary Statement from Todd Carrico, DARPA
Todd Carrico thanked all for coming. He stated that this workshop was
a "first of a kind" in pulling DARPA researchers and other industry researchers
and practitioners together. He cited as workshop benefits the wide community
represented by the participation, consensus building across communities,
and consequent increased understanding of fundamental issues. In the area
of achieving system-wide ilities, we now know more. Specifically, what can
we do? From the DARPA perspective, the workshop helps get DARPA more involved
in industry and helps transfer DARPA technologies to industry. There
are a number of ways DARPA and OMG might interact, some covered in session
IV-1.
Closing Remarks from Craig Thompson, OBJS
Craig Thompson thanked everyone for coming. He stated that just
as -ilities cut across the functional decomposition of a system, so too
has the workshop attracted several communities that have not traditionally
talked enough to each other -- but the workshop may have helped to form
a new community with some common understanding of the workshop problem
domains, ilities and web-object integration. At the very least, the workshop
has been educational, serving to alert everyone to a number of other interesting
projects related to their work. In fact, there seems to have been
surprising consensus in some areas, leading to the hope that a common framework
for ilities might be possible, and that some forms of object-web integration
might happen a little sooner. Several workshop participants have
asked about follow-on workshops -- it might make sense to do this again
in a year or so or to choose a related theme. Next time, we'll need
to find a good way to drill down on some specific architectural approaches
and understand them in much more detail. And we'll also have to provide
more hallway time between sessions -- the sessions were pretty densely packed
together.
Craig wished all a safe trip home -- and reminded any who had not yet
taken advantage of Monterey in January that it is the beginning of whale
watching season, the weather's nice, and Pt. Lobos is close by and beautiful.
Next Steps
Send ideas on next steps and concrete ways to accelerate progress in
workshop focus areas to
-
Dave Curtis <curtis@omg.org>
-
Todd Carrico <tcarrico@darpa.mil>
-
Ted Linden <linden@mcc.com>
-
Craig Thompson <thompson@objs.com>