A Position Paper
from
The Component-Based Systems Research Team
Department of Computer Science
Keele University
Newcastle, Staffs ST5 5BG
United Kingdom
Our mission
We are under contract to a major British telecommunications company
to explore the feasibility of combining automatic deduction technology
with (among other things) architecture description languages to address
issues in the development and maintenance of distributed object-based applications
such as:
- configuration and version management
- system integration
- visualisation and comprehension of configurations and changes
Snippets from our R&D manifesto
Component semantics:
An interface definition does not address component semantics.
Let's be pragmatic, but not defeatist, about formal specification:
- any specification of component semantics is better than none
- any degree of formality is better than none
Software composition:
Let's remember that there is more to software composition than procedures
calling other procedures:
- higher-order functions (procedures which control other procedures?)
- program transformation and partial evaluation (sketched after this list):
  - unrolling, unfolding, ...
  - factoring-out of common expressions and statements
  - eliminating unreachable code
  - structural optimisations
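By way of illustration, here is a toy partial evaluator in Python (our own
sketch, unrelated to DERIVE): a general procedure is specialised to a known
argument by unfolding its recursion, leaving residual code which neither
recurses nor tests n.

    def power(x, n):
        # general procedure: x raised to the non-negative integer n
        return 1 if n == 0 else x * power(x, n - 1)

    def specialise_power(n):
        # partial evaluation: with n known, unfold the recursion away
        # and emit a residual procedure specialised to that n
        body = " * ".join(["x"] * n) if n > 0 else "1"
        src = "def power_%d(x):\n    return %s\n" % (n, body)
        namespace = {}
        exec(src, namespace)
        return namespace["power_%d" % n]

    power_3 = specialise_power(3)     # residual body: return x * x * x
    print(power_3(2))                 # 8, with no run-time recursion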
Automatic programming:
History repeats itself: we need automatic programming technology for
the global distributed object environment; we have to develop high-level
programming languages and models for an emerging computer architecture
whose instruction set and storage model comprise components, objects, services,
middleware, etc.
Beyond craft:
Until we can mechanically compose the "ilities" with abstract
specifications of business logic, we will only be crafting distributed
applications, not engineering them.
A logical starting point
When seeking a rational basis for the management of complex structures,
classical logic is a good place to start.
In DERIVE (see next section),
we have shown how automatic deduction can be deployed as a rigorous framework,
not only for the construction of traditional software deliverables from
their traditional components, but also for exploring the combinatorial
space of actual and potential configurations.
Our technology base: where we're coming from
DERIVE is a powerful, expressive and purely deductive framework
embracing
- conventional software components
- configuration and version metadata
- build dependencies
- composition and derivation tools
which can mechanically perform hypothetical, abstract, partial and concrete
builds from its component repository.
In particular,
files are treated as values (instances of predicates), as sketched below:
- no destructive update once they are created
- named by a hash of their contents, hence "name equality"
  (same name implies same contents and vice versa)
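A minimal sketch of such a write-once, content-addressed store (our
illustration; the choice of SHA-256 as the hash is ours):

    import hashlib

    class ValueStore:
        """Write-once store: a file's name is the hash of its contents."""
        def __init__(self):
            self._blobs = {}                          # name -> contents

        def put(self, contents):
            name = hashlib.sha256(contents).hexdigest()
            self._blobs.setdefault(name, contents)    # never overwritten
            return name

        def get(self, name):
            return self._blobs[name]

    store = ValueStore()
    a = store.put(b"int main() { return 0; }")
    b = store.put(b"int main() { return 0; }")
    assert a == b    # same contents <=> same name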
variance is denoted by additional parameters within the predicates
which represent files (one plausible encoding is sketched below):
- no limit to the dimensions of variance which can be accommodated
- variance spaces can (more realistically) be hierarchical rather than orthogonal
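Our assumed encoding (not DERIVE's syntax): a file-denoting predicate
carries one parameter per dimension of variance, and a hierarchical space
nests a sub-variant inside its parent instead of adding an independent axis.

    def variant(path, **dims):
        # an instance of a file-denoting predicate: the path plus
        # one keyword parameter per dimension of variance
        return (path, tuple(sorted(dims.items())))

    # orthogonal dimensions: any language combines with any platform
    v1 = variant("ui.c", lang="fr", platform="win32")

    # hierarchical: 'solaris' exists only as a refinement of 'unix'
    v2 = variant("ui.c", lang="en", platform=("unix", "solaris"))

    assert v1 != v2    # distinct parameters denote distinct file values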
compilers (and all other composition and derivation tools) are
packaged as functions (sketched below):
- invoked in constrained environments to control side-effects and
  environmental influences
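A sketch of such packaging, assuming a Unix-style cc on the PATH (the
wrapper, working directory and environment shown are our invention):

    import os, subprocess, tempfile

    def compile_c(source):
        # cc as a function from source bytes to object-code bytes,
        # run in a fresh directory with a minimal, explicit environment
        with tempfile.TemporaryDirectory() as work:
            src = os.path.join(work, "unit.c")
            obj = os.path.join(work, "unit.o")
            with open(src, "wb") as f:
                f.write(source)
            subprocess.run(["cc", "-c", src, "-o", obj],
                           env={"PATH": "/usr/bin:/bin"},
                           cwd=work, check=True)
            with open(obj, "rb") as f:
                return f.read()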
build rules are genuine first-order logical implications (sketched below):
- variance of components combines and propagates correctly
- rules can be evaluated abstractly as well as concretely
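A toy reading of one such rule (our illustration, not DERIVE notation):
the shared variable V makes the source's variance combine into, and
propagate to, the derived object.

    RULE = (("object", "N", "V"), ("source", "N", "V"))   # head <- body

    def instantiate(rule, fact):
        # match the body literal against a ground fact and return the
        # head under the same bindings, so variance propagates via V
        head, body = rule
        if body[0] != fact[0]:
            return None
        binding = dict(zip(body[1:], fact[1:]))
        return (head[0],) + tuple(binding[v] for v in head[1:])

    print(instantiate(RULE, ("source", "ui.c", ("fr", "win32"))))
    # ('object', 'ui.c', ('fr', 'win32'))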
a build is a proof process (see the chaining sketch after this point):
- most traditional inference algorithms (and their variants) are applicable:
  - forward and backward chaining
  - depth- and breadth-first
  - cycle-detecting
  - non-deterministic
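A minimal backward chainer over ground rules (a sketch under our
assumptions; the rule base is hypothetical). Note that the tree it
returns is exactly a build history in the sense of the next point.

    RULES = {                          # goal -> prerequisites
        "app.exe": ["main.o", "ui.o"],
        "main.o":  ["main.c"],
        "ui.o":    ["ui.c"],
    }
    FACTS = {"main.c", "ui.c"}         # components already in the repository

    def prove(goal, visited=frozenset()):
        # return a proof tree for goal, or None if it cannot be built
        if goal in FACTS:
            return (goal, [])          # axiom: the component itself
        if goal in visited or goal not in RULES:
            return None                # cycle detected, or no rule applies
        subproofs = [prove(g, visited | {goal}) for g in RULES[goal]]
        return (goal, subproofs) if all(subproofs) else None

    print(prove("app.exe"))
    # ('app.exe', [('main.o', [('main.c', [])]), ('ui.o', [('ui.c', [])])])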
a build history is a proof:
- a directed-graph data structure which can be browsed, visualised, simplified,
  compared...
a built deliverable is an instantiation of the goal which specified it:
- an arbitrarily nested structure of object files, documentation, etc.
- readily converted to an archive file or a filestore subtree (sketched below)
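For illustration only, materialising such a deliverable as a filestore
subtree, assuming a nested-dict representation of our own devising:

    import os

    def materialise(deliverable, root):
        # dict values become subdirectories; bytes values become files
        os.makedirs(root, exist_ok=True)
        for name, item in deliverable.items():
            path = os.path.join(root, name)
            if isinstance(item, dict):
                materialise(item, path)
            else:
                with open(path, "wb") as f:
                    f.write(item)

    materialise({"bin": {"app": b"..."}, "doc": {"README": b"usage..."}},
                "deliverable")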
minimal recompilation is subsumed by memoing (sketched below):
- caching of intermediate inferences (lemmas)
- logically sound
- more effective than timestamp-based file reuse
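A sketch of memoing over file values (the cache shape, names and hash
choice are our assumptions): because inputs are named by their contents,
a cache hit is sound, and no timestamps are consulted.

    import hashlib

    lemmas = {}    # (tool, input hashes) -> cached result

    def derive(tool, inputs, run):
        # memoised derivation: 'run' is invoked only on a cache miss
        key = (tool, tuple(hashlib.sha256(i).hexdigest() for i in inputs))
        if key not in lemmas:
            lemmas[key] = run(*inputs)    # cache the intermediate inference
        return lemmas[key]

    first  = derive("cc", (b"int x;",), lambda s: b"<object code>")
    second = derive("cc", (b"int x;",), lambda s: b"<object code>")
    assert first is second    # unchanged input => no recompilation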
regression testing is achieved by:
- treating test results as parallel deliverables
- performing memoing (see above)
confidence reasoning can be embedded in the build rules (sketched below):
- to interpret and combine test results
- to provide a basis for optimisation over alternative configurations
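One conceivable scheme (entirely our sketch; the independence assumption
and the figures are illustrative): each derivation carries a confidence
in [0, 1], and a rule combines those of its prerequisites, so alternative
configurations can be ranked.

    def combine(confidences):
        # assume steps are independent: a configuration is only as
        # trustworthy as the product of its parts
        c = 1.0
        for x in confidences:
            c *= x
        return c

    config_a = ("A", combine([0.99, 0.95]))   # all suites pass, one flaky
    config_b = ("B", combine([0.99, 0.60]))   # a partially failing variant
    print(max([config_a, config_b], key=lambda p: p[1]))   # A ranks higher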
This work is being developed in two directions:
- microscopically: applying deductive version and configuration management
  techniques to finer-grained structures in a syntax-aware fashion
- macroscopically: generalising the model to embrace the radically different
  binding and interoperation schemes of distributed component architectures
Paul Singleton
21st November 1997