Austin, Texas
March 10, 1997
OMG Document internet/97-03-01
At the last OMG meeting in Tampa, the OMG Internet SIG concluded its analysis of responses to the OMG Internet Services RFI, which resulted in recommendations to the ORBOS and Common Facilities Task Forces for extensions to their roadmaps. We also recommended formation of a working group on Composable Architectures and decided to drill down on an Information Access Facility (aka OTAM) to wrap data sources so they are more uniformly accessible from inside a CORBA environment. These latter two ideas are both viewed as
The first two sessions of the meeting covered these latter topics. The remainder of the meeting consisted of two informational presentations, one on the MCC InfoSleuth project and the other on the proposed Firewalls RFP. On Tuesday morning, the Internet SIG met with the Common Facilities and ORBOS task forces to go over the RFI response recommendations and to review the status of two relevant RFPs.
Shel presented on OTAM and led the discussion. OTAM is a proposed Information Access Facility that would provide uniform access to information sources such as file systems and database records. It is a facility because it bundles several OMG services. It is called OTAM (Object Transfer and Management) by analogy with the ISO standard FTAM (File Transfer, Access, and Management). FTAM is an ISO specification in five volumes. Uyless Black's book on OSI has a section on FTAM.
The architecture of OTAM consists of
You do not have to know beforehand the schemata of the file store or DBMS. Metadata is fundamental; you operate on the metadata. There are four categories of metadata.
Another concept is the service regime, a period of time during which a common state is valid for the client and the server. Regimes provide protocols for object discovery, object selection, object access, data transfer, and recovery. Q: Is this similar to the idea of contexts to maintain state? A: These are regimes within an invocation.
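To make the regime idea concrete, here is a minimal, hypothetical sketch in Java. The names and the strict ordering rule are illustrative assumptions, not part of any OTAM proposal.

    // Hypothetical sketch: regimes as an ordered sequence of shared states
    // within a single invocation. Names and transition rule are illustrative only.
    enum Regime {
        OBJECT_DISCOVERY,   // locate candidate objects via metadata
        OBJECT_SELECTION,   // agree on the specific object(s) to operate on
        OBJECT_ACCESS,      // open the object for reading or writing
        DATA_TRANSFER,      // move records or byte ranges
        RECOVERY            // re-establish common state after a failure
    }

    // Shared state that stays valid for both client and server while a regime holds.
    final class ServiceRegimeContext {
        private Regime current = Regime.OBJECT_DISCOVERY;

        Regime current() { return current; }

        // Move forward one regime at a time; recovery may be entered from anywhere.
        void advanceTo(Regime next) {
            boolean forward = next.ordinal() == current.ordinal() + 1;
            boolean recover = next == Regime.RECOVERY;
            if (!forward && !recover) {
                throw new IllegalStateException("cannot move from " + current + " to " + next);
            }
            current = next;
        }
    }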
Built on all this is the concept of OTAM services:
How to pursue this idea?
Q: Will meta objects (the MOF) feed into this? A: Hopefully.
Q: Is this the semantic or object file system? A: Yes; you can make changes to a file or DBMS in place, addressed at the record level, without having to download information explicitly.
Q: Is this related to the Persistent Object Service? A: It probably builds on the persistence, trader, lifecycle, externalization, concurrency and transaction, query, security, and naming services, maybe more.
Q: Is FTAM exporting just the file abstraction, or also the object-collection-queryable-collection abstraction? A: It seems to be the former only.
Q: Is this similar to thinking about the web, where the object is by analogy a page on the web, however created? A: Similar, but the object is a blob (file), though it might have types (maybe MIME types or IDL types).
Q: How is it related to an OODB? A: It might be similar, but not as monolithic and more distributed.
After some discussion, we decided to draft a white paper to collect what we know about OTAM. We outlined the white paper and selected an editor and section authors (encoded as DC = Dave Chi, SS = Shel Sutton, CT = Craig Thompson).
White Paper (Editor - DC)
Issues with OTAM
Context for this presentation: one of the outcomes of the analysis of the OMG Internet Services RFI responses is that we at OMG need to better understand our OMA architecture if we are going to scale it to the Internet. In particular, we need to better understand the mix-and-match properties of compositions of services into facilities, and we need to understand how to federate services and facilities. At the Tampa meeting, we proposed that there should be a Working Group on Compositional Architectures to better understand these issues. It is understood by all that this is actually a group looking at the OMG OMA architecture in general: scaling the OMA to the Internet was just the motivation for getting us to think about these issues, but the charter of this working group is intended to be more general, so we invite anyone else at OMG to join this working group.
The outline of recommendations to form the Working Group on Compositional Architectures is repeated below, taken verbatim from the Tampa Internet SIG recommendations:
Composition and Architecture ISIG Working Group
White Paper [Editor: Craig Thompson -- expect a skeleton draft in a few weeks and then to ask people in OMG to comment and add detail.]
Reference: InfoSleuth: Semantic Integration of Information in Open and Dynamic Environments, SIGMOD '97. Also see the MCC InfoSleuth project web page.
Overview: InfoSleuth, a project in its third year at MCC, completing in June 1997 with an InfoSleuth II on the horizon, provides an agent-based framework for accessing heterogeneous data sources over the Internet. Most of InfoSleuth is implemented in Java, so think of an agent as a stylized Java process that follows certain protocols.
The problem and approach: most DBMS people have tried to solve the multi-database problem by mapping each database DBi to an Integrated_Schema (a data-centric approach). This is not very scalable; it gets unwieldy as you add DBn+1. InfoSleuth uses an ontology in place of the integrated schema. You first define the entities of interest (stocks, portfolio, …). To connect DBi, you define a resource agent only after the ontology has been defined (a user-centric approach). Note: the architecture looks the same except that an ontology replaces the integrated schema.
Ontologies: ontologies represent semantic concepts and are defined independently of the actual data. InfoSleuth uses a frame-slot data model with standard data types: integer, float, string, date, frames, and relationships. They have defined healthcare, stock market, and politics ontologies, as well as an ontology ontology, …, but not overlapping ontologies.
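To make the frame-slot model concrete, here is a minimal sketch in Java. The class and slot names are invented for illustration; InfoSleuth itself describes its ontologies in LDL, not Java.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical frame-slot model: a frame has named slots, and each slot has
    // one of the standard types listed above. Illustrative only.
    enum SlotType { INTEGER, FLOAT, STRING, DATE, FRAME, RELATIONSHIP }

    final class Slot {
        final String name;
        final SlotType type;
        Slot(String name, SlotType type) { this.name = name; this.type = type; }
    }

    final class Frame {
        final String name;
        final Map<String, Slot> slots = new LinkedHashMap<>();
        Frame(String name) { this.name = name; }
        Frame slot(String slotName, SlotType type) {
            slots.put(slotName, new Slot(slotName, type));
            return this;
        }
    }

    final class StockOntologyExample {
        public static void main(String[] args) {
            // e.g., a "stock" frame such as might appear in a stock market ontology
            Frame stock = new Frame("stock")
                .slot("name", SlotType.STRING)
                .slot("exchange", SlotType.STRING)
                .slot("closing_price", SlotType.FLOAT)
                .slot("closing_date", SlotType.DATE);
            System.out.println(stock.name + " has " + stock.slots.size() + " slots");
        }
    }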
Agents: What is an agent? An "object with an attitude." Agents are independent processes; each is a specialist, and they exist in a community. None exist solo. There are agents you know about that are in your community, and others you do not know about.
InfoSleuth's architecture: a web client interacts with a web server via HTTP, an RMI registry, and RMI-based user agents. These user agents use KQML to interact (send messages) with various kinds of agents (ontology agent, broker agent, task planning and execution agent, query decomposition agent, and data mining agent). These in turn talk to each other and to resource agents that effectively wrap data sources (SQL, LDL++, and WAIS). A monitor agent can be turned on to log messages.
More on each sort of agent:
Q: What can cause a resource agent to go away? A: Autonomy; they come and go at the whim of the person who put them there.
Q: Do query agents do query optimization? A: No. There are lots of issues; it is hard to do when you do not have a stable underlying DBMS. They try to process joins in some good order, but it is very slow. The broker is good at pruning off irrelevant parts of a query using semantic query optimization.
An example to give a feel for some of the issues, illustrating a periodic query: every evening at 5 PM, select name, exchange, and closing price from stocks where the stock is international and the closing price is up at least 2. Run it against London but not NYSE (since NYSE is not international), and against Warsaw if it is up or it is tomorrow.
Advertising information: domain information (name, host, port, and protocol); agent type (e.g., broker, execution, resource); and agent capabilities (e.g., ask, update, subscribe). All of this is described in LDL (MCC's Prolog-like language). Agents talk to each other via KQML. An agent specifies which languages it talks (SQL, KIF, …) and which ontologies it understands (the frames it knows about, the slots it knows about, and constraints on frames and slots, such as NYSE is not international, and closing prices are after January 1970).
Brokering: a broker's job is to find resources that contain information relevant to a query. The broker makes some kinds of inferences, such as NYSE is not international but London is. It looks at what the agents advertise: does the agent respond to queries in SQL? Does it know the stock exchange ontology? The closing price frame? It returns the names of all agents whose advertisements intersect the query constraints.
So, for our query: the applet periodically communicates with a user agent, which talks to an execution agent, which talks to a multi-resource query agent, which "asks" the broker and then "asks" the London stock market.
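The brokering step above can be sketched in a few lines of Java. The types, field names, and the all-constraints-must-intersect matching rule below are assumptions made for illustration; the real broker advertises in LDL and also applies semantic inferences such as "NYSE is not international."

    import java.util.Collections;
    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    // Hypothetical broker matching: return the names of all agents whose
    // advertisements intersect the query's constraints. Illustrative only.
    record Advertisement(String agentName, Set<String> languages,
                         Set<String> ontologies, Set<String> frames) {}

    record QueryConstraints(String language, String ontology, Set<String> frames) {}

    final class Broker {
        static List<String> match(List<Advertisement> ads, QueryConstraints q) {
            return ads.stream()
                .filter(ad -> ad.languages().contains(q.language()))          // speaks the query language?
                .filter(ad -> ad.ontologies().contains(q.ontology()))         // knows the ontology?
                .filter(ad -> !Collections.disjoint(ad.frames(), q.frames())) // shares at least one frame?
                .map(Advertisement::agentName)
                .collect(Collectors.toList());
        }
    }

In this sketch, an agent advertising SQL, the stock exchange ontology, and a closing-price frame would match the periodic stock query above, while NYSE's agent would be pruned by the broker's semantic inference rather than by this simple intersection.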
Agent interaction standards: standardize individual messages, what they say, what they mean. Standardize the flow of messages between agents (conversations). Standardize how communities of agents cooperate.
Layered architecture: (1) the agent application layer talks via conversations (requests and replies) to (2) the conversation layer, which talks via KQML performatives to (3) the comm/KQML layer, which talks via TCP/IP or HTTP to (4) the remote agent.
The idea behind conversations: agents send messages to each other, e.g., ask_all or other KQML performatives. There are legal and illegal sequences of performatives pertaining to a specific task. Conversations define and enforce legal performative sequences. The conversation layer defines a standard set of conversations used by all agents. Each conversation is a state machine with messages sent and received on transitions. Each conversation is implemented as an out-thread (initiator) and an in-thread (responder) in Java.
OutConversation: initiated by a call to initConversationOut(…); the remote agent responds via a call to addNewReply(CNVRemoteResponse), and the application can alter the conversation using interrupt(CNVRequest).
InConversation: initiated by the remote agent. The request comes to the application in the form of a process(CNVRequest) message, the application responds with a sendApplReply(CNVReply) message, and the application cannot interrupt the conversation.
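The sketch below casts the two roles as Java interfaces. The method and class names come from the presentation above; the signatures, return types, and empty message classes are assumptions, not the actual InfoSleuth API.

    // Hypothetical rendering of the two conversation roles. Method names are from
    // the presentation; everything else is assumed for illustration.
    class CNVRequest {}        // placeholder: a request performative plus content
    class CNVReply {}          // placeholder: the application's reply
    class CNVRemoteResponse {} // placeholder: a reply arriving from the remote agent

    // Out-thread (initiator) side of a conversation.
    interface OutConversation {
        void initConversationOut(CNVRequest request); // start the conversation
        void addNewReply(CNVRemoteResponse response); // remote agent's replies arrive here
        void interrupt(CNVRequest request);           // the application may alter the conversation
    }

    // In-thread (responder) side of a conversation.
    interface InConversation {
        void process(CNVRequest request);   // request delivered to the application
        void sendApplReply(CNVReply reply); // the application's reply goes back out
        // note: the application cannot interrupt an InConversation
    }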
Generic InfoSleuth Interfaces (in progress): every agent contains the broker interface (to handle generic advertising and queries to the broker), the ontology interface (to handle queries to ontologies and to parse and cache them), and the monitor interface, all operating on top of the conversation layer.
Interconnecting Agent Communities (in progress): with multiple peer brokers, an inter-broker protocol is needed (as with IIOP). Today, agents advertise their meta-information to their own broker; in the future, brokers will also advertise to other brokers.
The system is in a pre-alpha state. The sponsoring companies all have copies of InfoSleuth. There are no cases where it has spread across firewalls; it has not been used outside a firewall.
Q: Are you connected to OMG? A: No; we are active in the KQML community instead. CORBA was never semantically rich enough.
Q: Sun uses Java RMI and a Java ORB; why not use IIOP? A: We never go outside the Java world in the prototype, so RMI is convenient.
Q: Why not use cron jobs and objects; why agents? A: Agent flexibility is good since it is closer to the way we think, but it is also more complex, as is true with all higher-level abstractions.
See the briefing and draft ORBOS RFP security/97-02-07: CORBA/Firewall Security Request for Proposals.
What are the issues relative to the Internet? Why is this RFP needed?
Firewall requirements
The presenter feels that this is not a final answer in and of itself. SSL provides edge-to-edge security, which is another part of the answer. Secure IIOP allows third-party security, based on RSA cryptography.
A Java applet communicating from outside the firewall (single firewall): is this requirement included in the draft RFP?
Implementing a firewall that works with IIOP is easier than doing so with other technologies (e.g., DCE).
Thirty plus people attended this session. Peter Walker (ORBOS TF chair) and Bill Cox (CF TF chair) presided.
Some Java types cannot be mapped to IDL (e.g., threads). Round-trip mappings are an optional requirement: they are not natural, but might be done if some constraints are used in specifying the Java. Proposals should discuss compatibility with JDK 1.02. Timetable: initial submissions in Dublin, revised submissions in Princeton. The intent is to allow a Java programmer to start with Java interfaces, map to IIOP automatically, then map back on the server to Java, C++, etc. One comment: if you go Java to IIOP to Java, then you had better have a symmetric mapping, which is not required; so maybe we need programming guidelines? Also, if the mapping to IIOP adds complexity and the mapping back to C++ adds complexity, then you have complexity squared, which might be unusable.
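As a small, hypothetical illustration of the point about unmappable types (not taken from the RFP or any submission), consider two Java remote interfaces: the first uses only types with natural IDL equivalents, while the second exposes a java.lang.Thread, which has no IDL counterpart.

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Maps naturally: String and double have direct IDL equivalents.
    interface StockQuote extends Remote {
        double closingPrice(String symbol) throws RemoteException;
    }

    // Problem case for an automatic Java-to-IDL mapping: java.lang.Thread
    // has no IDL counterpart, so this operation cannot be expressed in IDL.
    interface Troublesome extends Remote {
        Thread worker() throws RemoteException;
    }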
There has been no progress since Tampa, but Bill Cox said to expect a completed draft by the Italy meeting.
The Internet SIG recommendations for the Common Facilities Task Force roadmap were reviewed by Shel Sutton. Bill Cox asked for volunteers for a second CFTF RFP on Internet Services.
Bill Cox then discussed some roadmap issues from the ORBOS roadmap meeting:
Jeff M. pointed out that we need extensions that would be useful for a given language (e.g., Java RMI) and that do not map to IDL. The idea is for OMG to move into language-specific environments.
Craig Thompson reviewed the recommendations from OMG Internet SIG to ORBOS. Peter Walker said the ORBOS roadmap committee will respond to each line item.