Service and Contract Developers Guide. Version 1.X
DRAFT!
Last Updated 5/13/2002
Contact: ncombs@bbn.com


Purpose
This is a detailed technical guide. Its purpose is to provide low-level description and
discussion of the Service and Contract Workflow language. It is aimed primarily at
Cougaar language designers and Service and Contract (S+C) developers.

Introduction
The Service and Contract (S+C) workflow mediates the interactions between software
components, idealized services, DASADA gauges, and tasks. Tasks are user or
application requests for service: they are the stimulus for creating/using dynamic service
workflows. The S+C language consists of a workflow Logical Data Model (LDM) plus
the publish/subscribe events related to its use in a distributed blackboard system.

Components may be localized to a single node or distributed across many nodes.
Component interactions are managed by a distributed blackboard infrastructure
(DARPA’s Cougaar agent-based infrastructure). However, the pattern or interaction
“language” should be portable to other event-based frameworks. The role of the Service
and Contract workflow is to coordinate invocation of software components and services
within a dynamic multi-node environment.

Service Providers are specialized Cougaar "Plugins". A Cougaar Plugin conforms to a
publish/subscribe interface against a local agent blackboard. The Service and Contract
infrastructure requires that the Service Provider Plugin also be capable of declaring its
service type (against a community ontology). A Service Provider can choose to extend a
convenience Cougaar Plugin base-class. A Service Provider Plugin may be a proxy for
an external process, a service, or an entire infrastructure.


The Foundations
For a basic understanding of the Cougaar computing model, consult documentation at:
http://www.cougaar.org

Before reading this document you should understand the high-level design and purpose of
the Service and Contract Workflow language. Please consult:
http://aai.bbn.com/cougaar/CDSA_final_3.pdf



Contents
Service and Contract Developers Guide. Version 1.X
Purpose
Introduction
The Foundations
Contents
   Service and Contract Workflow Developer Topics
   Introductory Questions
      Q: What is OPENZONE, what is S+C Workflow, what is BBN-DASADA?
      Q: Do you describe any examples?
      Q: What are Service Requests?
      Q: What are Concepts?
      Q: What are Infrastructure Plugins?
      Q: What is Assessment?
      Q: What are Contracts?
   Advanced Topics
      Q: What does an OPENZONE DAML Ontology look like?
      Q: The S+C Logical Data Model, what does it look like?
      Q: How do the Infrastructure Components interact?
      Q: What does the information “flow” among Service Providers look like?
      Q: What User Interfaces are available in the 1.X version?
      Q: How do you programmatically issue a “Request for Service”?
      Q: Why only one Concept per Service Request definition?
      Q: How do you “look up” Workflows?
      Q: What does it mean to Recycle Workflows?
      Q: Why is there an Acceptance step for Service Providers?
      Q: What are Streaming DataConnectors?
      Q: Why are multiple Requests allowed beneath each Acceptance?
      Q: What are substitute Service Providers?
      Q: Does the infrastructure “retry” new Service Providers in case of failures?
      Q: Is it true that the SCRouter may “Accept” a Request? What does this mean?
      Q: What are S+C Workflow Constraints?
      Q: What is “Governance” and how does this relate to how Constraints interact with distributed S+C Workflows?
      Q: What is a Service Provider Plugin?
      Q: How do I create Service Provider Plugins?
      Q: What is the relationship between a Service Provider Plugin’s execute() and the Service and Contract implementation methods?
      Q: How do I add new Service Provider Plugins to an agent?
      Q: How do I implement the Invoke method on Service Provider Plugins?
      Q: How do you obtain the results from a completed Workflow?
      Q: How does a Service Provider obtain and use the results from its dependents?
      Q: How does the SCRouter decide where to send Requests?
      Q: Can I specialize S+C Reference Infrastructure Plugins for my own needs?
      Q: Can I view logged execution events from an Agent/Node?
      Q: Does the infrastructure use “scripted” components? Can I modify these scripts?
      Citations
Service and Contract Workflow Developer Topics

Introductory Questions
Q: What is OPENZONE, what is S+C Workflow, what is BBN-
DASADA?

You may see all three names in the documentation. They refer to the same
implementation: a light-weight Cougaar workflow language for coupling distributed
services.

Q: Do you describe any examples?

This document will not detail, explore, or otherwise use any sample applications –
consult the “Service and Contract Workflow Samples Document” (forthcoming).
However, for clarity of discussion, this document will refer to one example: a simple
four-node/four-agent example called the “REGRESSION” example. It is
located in

\smartchannels\pub\configs\regression


Q: What are Service Requests?

Services are described within the system using a service ontology expressed in the
Resource Description Framework (RDF, a W3C XML format). The ontology description
language is the DARPA Agent Markup Language (DAML). A benefit of using DAML is
that services can be hierarchically related in the ontology. This is useful in matching
services at different levels of abstraction. So, for example, a request for a “Search
Engine” service might be matched with a “GOOGLE Search Engine” service.
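This hierarchical matching can be sketched as a walk up the subclass hierarchy. The following is a toy model, not the OPENZONE matcher; the concept names and parent links are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of hierarchical service matching (NOT the OPENZONE matcher;
// the concept names and parent links below are invented for illustration).
public class ConceptMatch {

    // child concept -> parent concept, mirroring rdfs:subClassOf links
    private static final Map<String, String> PARENT = new HashMap<>();
    static {
        PARENT.put("GoogleSearchEngine", "SearchEngine");
        PARENT.put("SearchEngine", "Service");
    }

    // A provider satisfies a request if its concept equals the requested
    // concept or is a transitive subclass of it.
    public static boolean matches(String provided, String requested) {
        for (String c = provided; c != null; c = PARENT.get(c)) {
            if (c.equals(requested)) {
                return true;
            }
        }
        return false;
    }
}
```

Note the asymmetry: a "GoogleSearchEngine" provider can answer a "SearchEngine" request, but not vice versa.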

Service Providers are discovered, assembled, and invoked by request. An incoming task
is the stimulus by which a distributed “chain of events” is started, leading to the
composition and invocation of a distributed workflow.


Q: What are Concepts?

Concepts are DAML resources. They are used to define what services are Requested,
Accepted, and so on. Each Concept is globally identified by a URI; however, a Concept can
be identified locally by a “Local Name”. A Local Name, such as “GOOGLE SERVICE”,
is ambiguous across a society of agents where each agent has its own local instantiation
of the ontology model. When Concepts flow across agents (e.g. a Request for service is
sent afield), Concepts are “re-bound” against the local model based on the Local Name.




This is important. An individual agent ultimately “owns” its own ontology model. To
the extent it chooses to re-map Local Names onto different DAML resources, it can
flexibly redefine terminology in terms of its own local resource set. While this can be a
dangerous step, the infrastructure provides this freedom for the sake of flexibility and scalability.
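The re-binding idea can be sketched minimally as a per-agent table from Local Name to URI. The class and the example names/URIs below are hypothetical stand-ins, not the OPENZONE API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-agent Concept re-binding (not the OPENZONE API).
// Each agent owns its own Local Name -> URI table; a Concept arriving from
// another agent keeps its Local Name but adopts the local agent's binding.
public class LocalBinding {

    private final Map<String, String> uriByLocalName = new HashMap<>();

    public void bind(String localName, String uri) {
        uriByLocalName.put(localName, uri);
    }

    // Re-bind an incoming Concept against the local ontology model.
    public String rebind(String localName) {
        String uri = uriByLocalName.get(localName);
        if (uri == null) {
            throw new IllegalArgumentException("Unknown local name: " + localName);
        }
        return uri;
    }
}
```

Two agents can thus resolve the same Local Name to different DAML resources, which is exactly the flexibility (and the danger) described above.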


Q: What are Infrastructure Plugins?

In the Service and Contract agent universe, there is a distinction between “domain” and
“infrastructure” Plugins. Both are Cougaar Plugins. However, Infrastructure Plugins are
used to implement the Service and Contract protocol whereas the Service and Contract
Domain Plugins implement Service Providers (do the work in the domain/application).

Below are the infrastructure Plugins that are loaded into the REGRESSION example
agents. The snippet below was taken from an agent initialization (“.ini”) file (see the
Cougaar Plugin documentation); it identifies a typical set of Infrastructure Plugins.
       #
       # SC INFRASTRUCTURE
       #
       plugin = com.bbn.openzone.base.plugins.DAMLConceptManager(daml=TEST.app.daml,implies_out=false)
       plugin = com.bbn.openzone.base.plugins.SCRouter(COMMS=USE_YP,MAX_SENDREQUESTS=1,CONCEPTNAME=ROUTER_SERVICE)
       plugin = com.bbn.openzone.base.plugins.SCConnector
       plugin = com.bbn.openzone.base.plugins.WorkflowRegulator
       plugin = com.bbn.openzone.base.plugins.WorkflowAssessor
       plugin = com.bbn.openzone.base.plugins.WorkflowExecutor




Q: What is Assessment?

Intrinsic to the Service and Contract language/ workflow protocol is an explicit
Assessment step or “Contracting” step. This step exists to enforce a distributed
workflow evaluation during the construction of a Service and Contract workflow. The
Workflow Assessor is an infrastructure Plugin that awards Contracts on Acceptances.
Only Contracted Acceptances can have their underlying Service Providers be invoked.
How a Workflow Assessor chooses to discriminate amongst all the service
Acceptances pledged against a Request is arbitrary, subject to these caveats:

   •     Only Acceptances from well-formed and complete Service Branches should be
         Contracted and have their underlying Services invoked. Otherwise, the Workflow
         Executor cannot be expected to successfully invoke the Contracts.

   •     The Workflow Assessor should, where applicable, reflect application concerns
         (domain logic). The Assessment/Contracting algorithm should consider the costs
         of either being too “optimistic” vs. too “pessimistic”.




In the current baseline, assessment logic can be expressed via a jscheme script or by
implementing a new Java version of the Workflow Assessor Plugin (e.g. subclass the
existing Workflow Assessor and re-implement its execute() method to access your logic engine).
Should an application decide to dispense with the assessment step, it can do so simply by
using the default openzone.basic.scm script: this awards Contracts only to
complete (well-formed) Service Chains, without any other consideration.
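The first caveat above (only Contract complete Service Branches) amounts to a recursive completeness check. The following is a toy model; the Acceptance and Request classes here are invented stand-ins, not the S+C LDM:

```java
import java.util.List;

// Toy model of the Assessor's "well-formed and complete branch" caveat
// (the Acceptance/Request classes are stand-ins, not the S+C LDM).
public class BranchCheck {

    public static class Acceptance {
        final List<Request> dependents;   // dependent Requests stipulated by the provider
        public Acceptance(List<Request> dependents) { this.dependents = dependents; }
    }

    public static class Request {
        final List<Acceptance> acceptances;  // Acceptances pledged against this Request
        public Request(List<Acceptance> acceptances) { this.acceptances = acceptances; }
    }

    // A branch rooted at an Acceptance is complete when every dependent
    // Request has at least one Acceptance that is itself complete.
    public static boolean complete(Acceptance a) {
        for (Request r : a.dependents) {
            boolean satisfied = false;
            for (Acceptance sub : r.acceptances) {
                if (complete(sub)) {
                    satisfied = true;
                    break;
                }
            }
            if (!satisfied) {
                return false;
            }
        }
        return true;
    }
}
```

An Assessor that awards Contracts only where `complete(...)` holds implements the minimal "no other consideration" policy described above.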


Q: What are Contracts?

When a service is Requested, a Service Provider may decide to Accept it. An
Acceptance represents a tentative commitment on behalf of that Service Provider to
perform the Requested service. It isn’t until the Assessor (infrastructure Plugin) awards
a Contract that these tentative commitments are “bound” and recognized by the Executor
(infrastructure Plugin) and the services are invoked. Furthermore, it isn’t until a Request is
Accepted and then Contracted to an SCRouter (see how SCRouters can “stand in” for
remote agents) that it can be sent out to remote agents.



Figure 1. Inter-relationship of "Contracting" vs. invocation in a distributed Service and Contract
environment. The Contracting process propagates "outward"; then, after successful completion,
invocation propagates in the reverse direction.




Advanced Topics
Q: What does an OPENZONE DAML Ontology look like?

The REGRESSION example uses a test ontology located in \smartchannels\pub\scripts.
The following textbox contains a fragment:

   <!--
   * INFRASTRUCTURE SERVICES
   -->
   <rdfs:Class rdf:ID="WORKFLOW_SERVICE">
     <rdfs:label>WORKFLOW_SERVICES</rdfs:label>
     <rdfs:comment>functions related to processing XML</rdfs:comment>
   </rdfs:Class>
   <rdfs:Class rdf:ID="ROUTER_SERVICE">
     <rdfs:subClassOf rdf:resource="#WORKFLOW_SERVICES"/>
     <rdfs:label>ROUTER_SERVICES</rdfs:label>
     <rdfs:comment>Encapsulates business rules for routing workflow events between AAI agents</rdfs:comment>
   </rdfs:Class>

   <!--
   * APPLICATION/DOMAIN SERVICES
   -->
   <rdfs:Class rdf:ID="TEST_SERVICES">
     <rdfs:label>TEST_SERVICES</rdfs:label>
     <rdfs:comment>TEST Services</rdfs:comment>
   </rdfs:Class>
   <rdfs:Class rdf:ID="GAUGE_SERVICES">
     <rdfs:label>GAUGE_SERVICES</rdfs:label>
     <rdfs:comment>TEST Services</rdfs:comment>
   </rdfs:Class>
   <rdfs:Class rdf:ID="TESTER1">
     <rdfs:subClassOf rdf:resource="#TEST_SERVICES"/>
     <rdfs:label>Tester PlugIn 1</rdfs:label>
     <rdfs:comment>Tester Service Provider PlugIn 1</rdfs:comment>
   </rdfs:Class>
   <rdfs:Class rdf:ID="TESTER11">
     <rdfs:subClassOf rdf:resource="#TESTER1"/>
     <rdfs:label>Tester PlugIn 11</rdfs:label>
     <rdfs:comment>Tester Service Provider PlugIn 11</rdfs:comment>
   </rdfs:Class>
   <rdfs:Class rdf:ID="GAUGE1">
     <rdfs:subClassOf rdf:resource="#GAUGE_SERVICES"/>
     <rdfs:label>Tester PlugIn 3</rdfs:label>
     <rdfs:comment>Tester Service Provider PlugIn 3</rdfs:comment>
   </rdfs:Class>
   </rdf:RDF>




BBN-DASADA uses a “vanilla” DAML ontology to define services within the system.
DAML supports hierarchical class definitions (via the rdfs:subClassOf property), so
services can be organized into hierarchical categories. In the REGRESSION example, only
two categories of services have been defined: TEST_SERVICES and WORKFLOW_SERVICES.


Q: The S+C Logical Data Model, what does it look like?



Figure 2. The core Service and Contract Logical Data Model interfaces.
The S+C workflow implementation uses Java™ interfaces to define its “core” definitions
(see Figure 2). “Simple” adapters (com.bbn.openzone.core.ldm.adapters.*) are provided
for this baseline implementation. The advantage of using interfaces is that new behaviors
can later be added to extend the language. For some ideas and examples, take a look at:

http://aai.bbn.com/cougaar/CDSA_CONF_PRESENTATION.pdf


Q: How do the Infrastructure Components interact?

The S+C workflow language is a Cougaar “language”: its primitives are Logical Data
Model objects and publish/subscribe events. Figure 3 illustrates a simple but
representative scenario - in this case, an initial Request requires assembling three services
across two agents to get the job done.




For this scenario, a user (UI) issues a Request; a Service Provider Plugin matches it and
stipulates a dependency (another service is required); that dependency is answered by a
second Service Provider, which in turn stipulates a third service dependency. The last
service request cannot be satisfied locally (e.g. the service doesn’t exist there), so the
SCRouter sends it on to another agent that has it. Consult [Combs01] for a detailed
discussion of the high-level pattern/design.

The service is invoked at the remote agent first (the invocation process starts at the edges
of the workflow and works inwards); a receipt is sent from the remote agent back to the
first agent, where invocation completes and results are integrated into the workflow.

Figure 3. High-level interactions between SC infrastructure components. In this scenario two local
Service Providers and one remote Service Provider are recruited. The message sequence shown: the UI
publishes a Request; the local Service Provider Plugin answers with an Acceptance and a dependent
Request; the dependent Service Provider Plugin answers with an Acceptance and a third Request; the
SCRouter issues a deferred Acceptance for that Request. The Workflow Assessor awards a
ContractGroup; the Workflow Executor stalls on the missing contract data while the SCRouter sends a
RequestGroup to the Remote Agent via the SC Connector. When the Remote Agent returns a
ReceiptGroup, the Contract is updated, the Workflow Executor continues, the ContractGroup is
updated, and the UI notes that the Workflow is done.




Q: What does the information “flow” among Service Providers look
like?

Logically, the relationship between Service Providers can be conceptualized as an
information flow:




Figure 4. Forward and backward information flows along service chains (fan-out not represented).
Pink represents the forward chain of data connectors; blue is the return data flow.




Q: What User Interfaces are available in the 1.X version?

Two developer HTML interfaces are available. These interfaces are implemented via
PSPs.

In comparison to the BBN-DASADA demonstration baseline (2001), the number (and
variety) of HTML PSP-based UIs has diminished considerably. A primary focus for
2002 is to transition the PSP UIs to ADL and other display tools. Note that the 9.X+
versions of Cougaar will no longer support “PSPs” – the baseline will be switching to a
Servlet + Tomcat based HTTP infrastructure.




          Figure 5. HTML form for injecting service Requests + Constraints. PSP path:
                          “openzone/core/TEST_SERVICES.PSP”




Figure 6. Basic S+C blackboard viewer. Supports drill-down into blackboard (S+C LDM) elements.
                           PSP path: “openzone/core/VIEW_BB.PSP”




Q: How do you programmatically issue a “Request for Service”?

Simple: publish an “External Request” object onto the local agent blackboard. If a
Service Provider at the local agent can reply to the request, it may do so (unless it has its
own reasons not to). If no Service Provider is available at the local agent, the infrastructure
will look for Service Providers elsewhere in the agent society.

Programmatically, publishing an External Request looks something like this:

           String uri = (String)nameAttrs.get(k);
           Concept c = conceptService.getConceptByName(k);
           if( c != null ) {
                ExternalRequest er = factory.newExternalRequestAndPublish(
                      c,
                      "[BBN SmartChannel Test Data Object]",
                      wfService,
                      constraints,
                      delegate);
           } else {
                // the throw was missing in earlier drafts of this snippet
                throw new RuntimeException("Concept does not exist!");
           }


Users should use the ExternalRequest interface to request services from an S+C system.
ExternalRequests, once placed into the S+C system, are translated into an internal
Request representation (com.bbn.openzone.base.ldm.Request.java). The “Request.java”
form can be created and injected directly, but this should be avoided in external/user
applications as it may hinder some infrastructure features. Furthermore, the
ExternalRequest API is much easier to use!

In a nutshell:

    •   Create a data connector (encapsulating the input data)
    •   Create a root-level Request (a top-level request – not part of a service dependency
        chain)
    •   Look up and attach a service “Concept” (DAML expression) to the Request. This
        defines the Service Provider type to which this Request is directed.
    •   Create and attach any PRE (“pre-condition”) Contract Constraints (optional)
    •   Place the Request in a RequestGroup and attach it to a Workflow
    •   Publish the Request and Workflow objects on the blackboard.
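The steps above can be sketched as follows. Every class here is an invented stand-in for illustration, not the com.bbn.openzone API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the "in a nutshell" steps (all types are hypothetical
// stand-ins, NOT the com.bbn.openzone API).
public class RequestSketch {

    public static class DataConnector {
        final Object data;
        public DataConnector(Object data) { this.data = data; }
    }

    public static class Request {
        final String concept;              // service Concept (by Local Name here)
        final DataConnector input;
        final List<String> preConstraints = new ArrayList<>();
        public Request(String concept, DataConnector input) {
            this.concept = concept;
            this.input = input;
        }
    }

    public static class RequestGroup { final List<Request> requests = new ArrayList<>(); }
    public static class Workflow { final List<RequestGroup> groups = new ArrayList<>(); }

    public static Workflow buildWorkflow(String conceptLocalName, Object inputData,
                                         List<String> constraints) {
        DataConnector dc = new DataConnector(inputData);          // 1. data connector
        Request root = new Request(conceptLocalName, dc);         // 2-3. root Request + Concept
        root.preConstraints.addAll(constraints);                  // 4. PRE constraints (optional)
        RequestGroup rg = new RequestGroup();
        rg.requests.add(root);                                    // 5. group the Request...
        Workflow wf = new Workflow();
        wf.groups.add(rg);                                        //    ...and attach to a Workflow
        return wf;                                                // 6. caller publishes both
    }
}
```

The final "publish" step is where the real infrastructure takes over; this sketch only models the object graph that gets published.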


Q: Why only one Concept per Service Request definition?

This was done for simplicity. Two enhancements are under consideration. One idea is to
introduce a more sophisticated Concept predicate language; the other is to allow
services to advertise, and Requests to match upon, different service “facets”. Each facet
would be composed of one or more Concepts under some logical relationship (AND,
OR).

The latter is favored at this time. While it is not fully implemented in the current
baseline, we’re heading there. Facets will map to “Roles” (Requests) and “Ports”
(Service Providers) – these notions are analogous to forms in many Architecture
Description Languages (ADLs).

It should also be noted that, because of the richness of the DAML/RDF languages, some of
the reasoning points that might be pushed into the Concept language can be captured in
the workgroup ontology. For example, by exploiting the ability to specify hierarchical
relationships amongst Concepts, we are able to align service “discussion”
(requests/acceptances) at different levels of abstraction. This was demonstrated in 2001
and 2002. See Figure 7.




Figure 7. Example of using ontology (RDF/DAML) structure to enrich the Concept language – in this
case, by exploiting hierarchical relationship definitions. A request such as “I want to query a
Search Engine” can be negotiated against providers described at different levels of abstraction
(“Search Engine”, “Google”, “Google, Advanced Query”): service negotiation is able to occur at
different levels of abstraction.




Q: How do you “look up” Workflows?

Using the OPENZONE factory, one can either create workflows or look up existing
ones by toggling a Boolean parameter. For lookup, workflows are matched on the basis of
their root Request(s) – do the Request(s) match up in terms of the services they request?


 // From within a Plugin…
 OpzFactory factory = (OpzFactory)this.getDelegate().getFactory("OPENZONE");
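The matching rule can be sketched as a set comparison over the service Concepts requested at the roots. This is a toy model of the rule as described, not the OPENZONE factory's actual lookup:

```java
import java.util.HashSet;
import java.util.List;

// Toy model of workflow lookup: two workflows "match" when their root
// Requests request the same set of service concepts (order-insensitive).
// This sketches the rule described above, not the OpzFactory internals.
public class WorkflowLookup {

    public static boolean rootsMatch(List<String> rootConceptsA, List<String> rootConceptsB) {
        return new HashSet<>(rootConceptsA).equals(new HashSet<>(rootConceptsB));
    }
}
```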




Q: What does it mean to Recycle Workflows?

The idea behind workflow recycling is attractive - once a workflow is created, why can’t
we just “re-use” it when we wish to “run it again”, say with new data? This
would be very useful in cases where you wished to repeatedly re-use service commitments
from a number of short-lived processes.

From the infrastructure perspective, the cost of creating workflows is mainly in the
remote service discovery step. The actual “wiring up” of local services is light-weight.
The overhead associated with “spinning up” service providers is specific to the Service
Provider Plugin and at this point unknowable to the infrastructure (and hence not
considered).

Because the actual cost of workflow composition within a single agent blackboard is
small, from the perspective of the infrastructure, the motivation for the infrastructure to
reuse existing workflow object graphs is minimal. Furthermore, because recycling has
costs of its own, we will instead opt for a different approach.

Note that an S+C workflow represents two points of commitment for each Service
Provider – the Acceptance and the Contract (read earlier reference design documents for
explanation). To recycle a workflow structure “in place” would require another context
for interpreting pub/sub events –with impact on infrastructure implementation. For
example, we would need to be able to differentiate and implement different types of
“acceptance” and “contracting.”

What we intend to do instead is provide an agent service that will keep track of the
actual names of previously used Services. In this scheme, Service Providers will also
offer a “Lease” (roughly, a “window of time they are available”) that is also tracked.
Then, when it comes time to create a workflow similar to one already used, service
lookup will be biased towards the original Service Provider participants, so long as
their Leases are still valid.
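The lease scheme might look roughly like this. It is a hypothetical sketch; the registry class and all names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the lease-tracking agent service described above.
// It remembers which provider served a concept and prefers that provider on
// the next lookup while its lease (an expiry timestamp) is still valid.
public class LeaseRegistry {

    private static class Entry {
        final String provider;
        final long expiresAtMillis;
        Entry(String provider, long expiresAtMillis) {
            this.provider = provider;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> byConcept = new HashMap<>();

    public void record(String concept, String provider, long expiresAtMillis) {
        byConcept.put(concept, new Entry(provider, expiresAtMillis));
    }

    // Returns the previously used provider if its lease is unexpired,
    // else null (meaning: fall back to ordinary service discovery).
    public String preferred(String concept, long nowMillis) {
        Entry e = byConcept.get(concept);
        return (e != null && nowMillis < e.expiresAtMillis) ? e.provider : null;
    }
}
```

An expired lease simply drops the bias; discovery proceeds as it would for a brand-new workflow.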


Q: Why is there an Acceptance step for Service Providers?

A Service Provider that matches a service Request may “accept” that Request.

A Service Provider is notified if a Request has been published (onto the local blackboard)
by way of a publish event (per Cougaar protocol – execute() is called).

The Service Provider Plugin then has the right to inspect these Requests and decide
whether to accept them. If it does, it publishes an Acceptance.

Besides acceptance, Service Providers have the right to inspect and potentially modify
data on the down-stream path (upon their acceptance). In this way Service Providers are
saying that they provisionally accept Requests subject to service and data dependencies
being satisfied.


Q: What are Streaming DataConnectors?

Distinct from the case where a workflow of “short-lived” processes is recycled is the
case where a number of long-lived processes are linked by streaming data. Arranging
such a configuration is not currently explicitly supported by the infrastructure.

Implicit solutions do exist, however. For example, the workflow can serve to “stitch” all
the services together, which then distribute URLs (as data) amongst themselves. The URLs
would reference an external resource that would channel the streaming data. In other
words, push the streaming data exchange to an external data bus.

The S+C infrastructure is heading in the direction of providing more support for
applications and services to negotiate varied data-delivery, protocol, and service time-
scale assumptions. We intend to enrich the service description language (DAML-S,
etc.); this would help in disambiguating the different service behavior and delivery
options when constructing workflows.


Q: Why are multiple Requests allowed beneath each Acceptance?

When a Service Provider “accepts” a Request, it may stipulate dependencies – other
services that it needs in order to do its job. A Service Provider can stipulate multiple
dependencies. These services are logically AND’ed or OR’ed.
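The AND/OR combination of dependent services can be sketched minimally (a toy model, not the infrastructure's implementation):

```java
import java.util.List;

// Toy model of combining a Service Provider's dependent Requests beneath
// one Acceptance: AND means all dependents must be satisfied, OR means
// any one suffices. Not the infrastructure's actual implementation.
public class DependencyLogic {

    public enum Mode { AND, OR }

    public static boolean satisfied(Mode mode, List<Boolean> dependentResults) {
        if (mode == Mode.AND) {
            return dependentResults.stream().allMatch(b -> b);
        }
        return dependentResults.stream().anyMatch(b -> b);
    }
}
```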


Q: What are substitute Service Providers?

There are two opportunities for Service Providers to contribute as substitutes.


The first opportunity is when an application decides to re-issue a Request (a new Request)
based on observation of the workflow. For example, an application may monitor the
return value of a workflow operation and, based on the result, issue a new Request. An
application may choose either to perform this monitoring “externally” (at the User
Interface, etc.) or to have a Service Provider somewhere along the workflow chain
perform this function.

Having the application decide how to handle service substitutions is the more strategic
option: the application is capable of deciding among many more options, as it has
greater knowledge about the services and how they interact. An application may, for
example, choose to implement a Service Provider that ORs a number of compatible
dependent service requests together.

An implementer of a Service Provider Plugin may decide to test the results of a Request
after it has returned (from a dependent service) and re-issue a new Request. On what
basis it determines failure (how long it waits in case of timeout, etc.) is an application
decision.

The second opportunity for service substitution is limited and is performed by the
infrastructure. Because the infrastructure knows less about the application, its range of
options is more limited.

A general infrastructure-driven solution for retries is difficult for two reasons: guaranteed
knowledge of remote service failure is not possible, and adapting to failure frequently
needs to be driven by application/domain knowledge (e.g. retry a whole new service
branch – find a substitute!).

Currently, services can be substituted by the infrastructure under the following
circumstances:

   •   If multiple Service Providers have Accepted a common Request, then all
       Acceptances beyond the initial Acceptance are considered potential substitutes
       by the infrastructure.

   •   Substitutes (in the sense here) must be local services (contained within the local
       Agent), as the SCRouter won’t forward Requests that are accepted locally.

   •   The Workflow Assessor may choose to Contract all Accepting (substitute)
       services simultaneously. The current Workflow Assessor does this.

   •   The Workflow Executor will then arbitrarily select and invoke a service from
       among the substitutes. The current Workflow Executor tries all Contracted
       substitutes serially, until the first successfully invoked service is found.
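The serial behavior described in the last bullet can be sketched as follows. This is a self-contained illustration, not the reference Workflow Executor; the class and method names are hypothetical, and each boolean stands in for the outcome of attempting to invoke one Contracted substitute.

```java
/**
 * Illustrative sketch only (not the reference Workflow Executor): try
 * the Contracted substitutes serially until the first one invokes
 * successfully. invocationSucceeded[i] stands in for the outcome of
 * invoking the i-th substitute.
 */
public class SerialSubstitutes {
    /** Returns the index of the first substitute that succeeded, or -1. */
    public static int invokeFirstSuccessful(boolean[] invocationSucceeded) {
        for (int i = 0; i < invocationSucceeded.length; i++) {
            if (invocationSucceeded[i]) {
                return i; // stop at the first successfully invoked service
            }
        }
        return -1; // no substitute could be invoked
    }
}
```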




Q: Does the infrastructure “retry” new Service Providers in case of
failures?

The current baseline does not allow for infrastructure-driven “retries” after assessment
has occurred. In other words, after the Workflow Assessor has Contracted an Acceptance,
a new Contract will not be “reinserted” after the fact. The infrastructure will not open a
search for a new Service Provider AFTER an Acceptance/Contract has failed.

An application may, however, decide to take action if it observes failure of a Contract.
For example, a Service Provider may choose to issue a new Request, etc. (See Q: What
are substitute Service Providers?)


Q: Is it true that the SCRouter may “Accept” a Request? What does
this mean?

The 1.0 SCRouter prefers to direct Requests that can't be satisfied locally to remote
Agents that have matching services (as advertised in the YP). If it can't find a matching
Service Provider at a specific remote agent, it will broadcast the Request to all.

Before the SCRouter can send a Request to a remote agent, the Request needs to have
been “Contracted”, meaning that it needs to have been Accepted by someone. The
SCRouter will accept Requests that have not been Accepted by a local service. This act
is considered a “good faith” hypothesis on the part of the SCRouter that there exists
some service out there that can be found. Once the SCRouter Accepts an unfulfilled
Request (as a remote service proxy), then if the Workflow Assessor Contracts it, the
SCRouter will proceed and actually send it out.

Thus, there are two important nuances in how the SCRouter works within the Service
and Contract framework.

   1.) It acts as a “good faith” proxy for hypothetical remote services
   2.) It is checked by the local Workflow Assessor. It cannot send a Request out until
       it is Contracted – and therefore completes the Assessment step.


Q: What are S+C Workflow Constraints?

Service and Contract Constraints specify a target service, a gauge service, and an
optional Constraint expression (String). See the example below.


 //
 // deprecated Plan Service Provider “servlet” model for accessing Agent Service (ConceptService)
 //
 conceptService = (ConceptService)pd.getServiceBroker().getService(this, ConceptService.class, null);
 Concept cAccepting = conceptService.getConceptByName(name);
 Concept cGauging = conceptService.getConceptByName(gaugingServiceName);
 Constraint con = factory.newConstraint(cAccepting, cGauging, parsedConstraintExpression);
Constraints are attached to Requests. In the current infrastructure, only “Contract
Constraints” are used. Contract Constraints are attached to a Request and govern all
Contracts associated with any Acceptance “beneath” that Request.

Currently only “PRE” (invocation-time) evaluation of Contract Constraints is implemented.
Implementation of “POST” evaluation of Contract Constraints is pending.

It works as follows: before service Contracts are invoked, they are checked for any
Constraints that govern them (matching the contracted service provider). For every
Constraint found, a Gauge service must be recruited (via the same
Request/Acceptance/Contract assembly paradigm) and invoked as a test.

Gauge services have access to the Contract data, the constraint expression carried by the
Constraint, and the normal Service Provider and agent runtime to draw upon when
deciding whether to pass a Contract. A Gauge service may, for example, evaluate the
data, consult an instrumentation substrate (DASADA RTI), or otherwise examine
evidence in its operating or network environment to decide.

If any of these Gauge services fails to respond with a Boolean true, the PRE-condition
test fails and the target Contract is failed before it is invoked.
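The PRE-condition semantics described above amount to a conjunction over the recruited Gauge replies. A minimal self-contained sketch (hypothetical names, not the shipped implementation):

```java
import java.util.List;

/**
 * Illustrative sketch only (not the shipped implementation): a Contract
 * passes its PRE check only if every recruited Gauge service answers
 * Boolean true. Names here are hypothetical.
 */
public class PreConstraintCheck {
    /** gaugeVerdicts holds each recruited Gauge service's Boolean reply. */
    public static boolean prePasses(List<Boolean> gaugeVerdicts) {
        for (Boolean verdict : gaugeVerdicts) {
            if (verdict == null || !verdict) {
                return false; // any non-true reply fails the target Contract
            }
        }
        return true; // all Gauges answered true (or no Constraints applied)
    }
}
```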




[Figure 8 image: two time-phased panels, A and B.]

Figure 8. Using the "Basic S+C blackboard viewer" – time-phased view of the runtime interaction of
the PRE Contract Constraint and gauge recruitment.


Figure 8 illustrates the time-phased interaction of Constraints and Gauges using the “basic
S+C blackboard viewer”. In this test scenario, a Constraint for application of a gauge
service “GAUGE1” is stipulated on the Contract of the “TESTER11” service.

        A. An initial service chain (Request = “TESTER11” service) is not registered as
           “invoked” until the Gauge service has been Requested, Accepted, and
           Contracted.

        B. Assuming the Gauge service responds with a successful evaluation, the
           Contract of TESTER11 continues/succeeds.




Q: What is “Governance” and how does this relate to how
Constraints interact with distributed S+C Workflows?


[Figure 9 diagram: four parallel service branches, each a Request (R) with its
Acceptance (A), Contract (C), and Service Provider (SP) beneath it.]

Figure 9. Constraints (as well as other Directives) flow "downstream" from their point of insertion
in an S+C workflow. Hence, a Constraint inserted at a point in a Service Branch (Request) will
“Govern” or influence the service branch beneath the point of insertion.



Q: What is a Service Provider Plugin?

Cougaar Plugins encapsulate the domain services within the system. Specifically,
Cougaar Plugins that implement the Service Provider interface are “service providers”.
A Service Provider Plugin may encapsulate the implementation itself (Java code, for
example), it may be a bridge to a native service (via JNI, JDBC, etc.), or it may be a
proxy for an external process or infrastructure (e.g. a JINI or JDBC lookup).
Notionally, it may choose to model a service to the system as a short-lived invocation, or
as a long-lived process.


Q: How do I create Service Provider Plugins?

Any Cougaar Plugin which implements the ServiceProvider interface can be a Service
Provider Plugin in an S+C system. So, for example, the base class ServiceProviderPlugIn
and all its specializations will be recognized as “service providers” by an S+C Cougaar
system.


package com.bbn.openzone.core.plugins;

/**
 * Base class that Service Provider PlugIns (domain) can
 * extend for basic behaviors.
 */
public class ServiceProviderPlugIn extends OpzSimplePlugIn implements ServiceProvider




The ServiceProvider interface primarily requires that a Cougaar Plugin be able to declare
and answer (when asked) its service type. Service types are defined by an agent
community ontology (DAML Concepts). To be useful, a ServiceProvider must also be
able to reply to Requests for service as well as do something – provide some function /
execution behavior.

A simple example of a ServiceProviderPlugIn that performs rudimentary
Request/Acceptance bookkeeping is provided in Figure 10.



     /**
      * A MOST BASIC SP PATTERN PLUGIN,
      * which launches dependency Requests when it notes a Request that
      * matches its service.
      */
     public class TestSPPlugIn extends ServiceProviderPlugIn
     {
           private List myDependencies = new ArrayList();

         public void setupSubscriptions() {
            // super.setupSubscriptions() must be called
            super.setupSubscriptions();

            myDependencies = getAllStringParameters(getParameters(), "DEPENDENCY=", "");

            // setup subscription for Requests, noArg = use default SP predicate
            setRequestsSubscription();
         }
         public void execute() {
             // super.execute() must be called
            super.execute();

            List myNewRequests = getOutstandingNewRequests();
            Iterator it = myNewRequests.iterator();
            while( it.hasNext() ) {
               Request req = (Request)it.next();
               //System.out.println("[SP, ID=" + this.getPlugInID() + "> received request: " + req + "]");
               BlackboardService bb = this.getSubscriber();

               //
               // accept() implicitly publishes Acceptance and any dependent Requests
               // if no dependencies, myDependencies is empty list...
               Acceptance ac = accept(req, myDependencies, Relationship.AND, bb, req.getData() );
            }
         }
         public void invoke(List importsBindings, Acceptance accept, DataConnector exportDC) throws NonInvocableContractException
         {
            System.out.println("--------------------------------------------------");
            System.out.println("[TestSPPlugIn] accept.getParent().getData().toString()=" + accept.getParent().getData() );
            System.out.println("--------------------------------------------------");
         }
     }




Figure 10. A minimalist example of a Service Provider Plugin – a Cougaar-styled Plugin that
accepts matching Requests and issues dependency Requests (from Plugin parameters). Its invoke()
method is stubbed – an actual domain Plugin would provide an implementation. This sample code
can be found (with some embellishment) at com.bbn.openzone.core.plugins.ServiceProviderPlugin




Q: What is the relationship between a Service Provider Plugin’s
execute() and the Service and Contract implementation methods?

Because a “Service Provider Plugin” is both a Service and Contract Plugin and a Cougaar
Plugin, there are two ways the Plugin may be activated. The first occurs when invoke()
is called – via the Service and Contract process [Combs01]. The second occurs when
the execute() method is called – via a Cougaar infrastructure mechanism [Cougaar01].

A Service Provider Plugin is invoked after it has Accepted a Request and has been
Contracted; furthermore, all of its dependencies need to have been successfully invoked
before it can be invoked. A Service Provider Plugin’s accept method is called after a
matching Request is found on the blackboard (publishAdd); this occurs via the Plugin’s
execute() method. In other words, the Plugin is stimulated by its Request subscription,
and the accept method is called by way of the execute() implementation on the
ServiceProviderPlugIn base class.


Q: How do I add new Service Provider Plugins to an agent?

Consult the Cougaar Plugin Developer's Guide [Cougaar01] for background and detailed
information about Cougaar Plugins. An S+C Service Provider Plugin is a Cougaar
Plugin with a few additional features (a DAML interface and an invocation method).

If you derive a Service Provider Plugin from the
com.bbn.openzone.core.plugins.ServiceProviderPlugIn base class, the following plugin
parameter options are available:

       CONCEPTNAME=XXXXXX
       ID=YYYYYYY

CONCEPTNAME is required; it defines the DAML Local Name of the Service this
Plugin will provide.

ID is optional; use it if you wish to identify your Service Provider Plugin with an ID
(String) of your own choosing. Otherwise, the agent will supply a UID.
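For example, a hypothetical ini declaration for a Service Provider Plugin supplying both parameters might look like the following (the concept name and ID shown are illustrative, not taken from a shipped configuration):

```
       plugin = com.bbn.openzone.core.plugins.ServiceProviderPlugIn(CONCEPTNAME=TESTER1,ID=MY_TESTER_SP)
```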


Q: How do I implement the Invoke method on Service Provider
Plugins?

After completing a workflow (all Requests (including dependencies) have been accepted
and Contracts have been awarded), the Workflow Executor will invoke the chain of



Service Provider Plugins in reverse order (from the leaves to the root). This invocation
process may cross agent boundaries.

The execution of services within an agent is synchronized – services are invoked in
order, one after another. The order of execution is arbitrary across parallel/independent
branches.

An agent waiting upon another remote agent to provide a result (service) is blocked until
a reply is received. Note that there can potentially be more than one remote agent
responding to a request for a service. Redundancy of potential providers may be a
strategy for risk mitigation (failure, poor service).

From an infrastructure perspective, the only obligation of a Service Provider Plugin is
not to block. If a Service Provider were implementing a long-lived process, it would need
to run in a different thread (or its controller would). A long-lived process would need to
co-exist in a workflow where the other services, through prior understanding/agreement,
would understand how to communicate data to and from it (see the answer to “Q: What
are Streaming DataConnectors?”).

An alternative is for the Service Provider Plugin to model an interface or a gateway to an
external service rather than the service itself. The external process may be “continuous”
and the Service Provider provides a single “session” access to that external service.


Q: How do you obtain the results from a completed Workflow?

For discussion, assume that the “root” Request for Service came from a Service Provider
Plugin – for example, a service provider reading from a database, or listening to a UI
channel. Once it publishes its Request for Service (see Q: How do you
programmatically issue a “Request for Service”?), the Service Provider may want to
wait for the return result.

There are a couple of ways of doing this. The simplest is to save a handle to the
Workflow object and occasionally poll it to see if it is “done”:

       RequestGroup rg = w.getRequestGroup();
       Boolean ok = w.isSuccessfullyInvoked();

From a Service and Contract Cougaar Plugin, one can periodically “wake up” and check
the Workflow handle using the “wakeAfter(Xms)” command in the execute() method of
the Plugin:

        execute(…){
           ….
           wakeAfter(2000); // Wake after 2 secs (repeatedly).
       }


For a more complete description of Cougaar Plugin services (e.g. “wakeAfter”, the
execute() method), see the Plugin Developer’s Guide at http://www.cougaar.org

Rather than polling, this example Plugin could wait for notification before inspecting
the Workflow. This may be especially useful when the workflow takes significant time
to complete.

To be notified, a Plugin needs to set up a standard Cougaar “publishChange”
subscription for Workflows. The execute() method will then be called each time a
Workflow instance receives a publishChange event. Workflow objects are “publish
changed” by the Workflow Executor after they are successfully invoked.

for( Enumeration e = myWorkflowSubscription.getChangedList(); e.hasMoreElements(); )
{
       Workflow w = (Workflow)e.nextElement();
       RequestGroup rg = w.getRequestGroup();
       List contracts = w.getSuccessfullyInvokedContracts();
       if( contracts.size() > 0 ) {
             // there must be at least one successfully invoked contract
             // for the workflow to have succeeded.
       }
}

It is important to note that because Workflow publish/change events may be redundant,
Workflows should be checked before committing to action. Note how the
w.isSuccessfullyInvoked() and w.getSuccessfullyInvokedContracts() tests are used in
the examples above.


Q: How does a Service Provider obtain and use the results from its
dependents?

When a Service Provider is invoked, the infrastructure (via the invoke interface) provides
data from both the dependent services and the original Request that the Service
Provider accepted. Under the Service and Contract mechanism, a Service Provider
Plugin will not be invoked until all its dependencies have been invoked. This data is
then available via the importsBindings list (an argument of the invoke interface).

The importsBindings list contains ServiceBindingReader instances – these are restrictive
interfaces onto the Contracts associated with the dependent services. Using a
ServiceBindingReader interface, an “upstream” Service Provider can obtain results and
information about the “downstream” Services it is relying upon for data.
ServiceBindingReaders correspond to “facets” or Ports of Service Providers.




Q: How does the SCRouter decide where to send Requests?

The SCRouter is an S+C infrastructure Plugin; it is responsible for determining whether
a Request can be matched against local services. If it cannot, the SCRouter is responsible
for sending the Request to other agents in hopes of routing the unfulfilled Request to
another Service Provider.

The reference SCRouter tries to send the Request to all agents that have registered that
type of Service Provider in the service registry. It will only do this if the
“COMMS=USE_YP” Plugin parameter is set. If no matching agents are found, the
SCRouter broadcasts to all agents it knows about. Below is a Cougaar ini file
[Cougaar01] declaration for a reference SCRouter that uses broadcast (note the missing
“COMMS=USE_YP”).

       plugin = com.bbn.openzone.base.plugins.SCRouter(CONCEPTNAME=ROUTER_SERVICE,BROADCAST=EXAMPLE1,BROADCAST=EXAMPLE2,BROADCAST=EXAMPLE3,BROADCAST=EXAMPLE4)

It is significant to note the following from the above ini file declaration.

   1.) The SCRouter is both an infrastructure Plugin and a Service Provider Plugin
       (hence the “CONCEPTNAME=” declaration). This is because SCRouters accept
       unfulfilled Requests on a deferred (future) basis – anticipating (optimistically)
       that a remote Service Provider can be found. If none is found, the Service
       Branch will time out and fail.
   2.) Use the “BROADCAST=” parameters (each followed by a node/agent name) to
       tune the broadcast range of the SCRouter. The default (no “BROADCAST=”
       params) is that the SCRouter broadcasts nowhere.

In the case where the SCRouter initialization is declared with the “COMMS=USE_YP”
parameter, the reference SCRouter will try to locate Service Providers through the
Cougaar Yellow Pages. All Service Provider Plugins register their service category (by
URI) in the Yellow Pages.
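The routing policy described above can be sketched as a simple lookup with a broadcast fallback. This is an illustrative simplification, not the reference SCRouter; the Map-based registry and all names here are hypothetical.

```java
import java.util.List;
import java.util.Map;

/**
 * Illustrative sketch only (not the reference SCRouter): send a Request
 * to the agents that registered a matching service type in the "yellow
 * pages"; if none did, fall back to the broadcast list. The Map-based
 * registry and all names are hypothetical simplifications.
 */
public class RoutingSketch {
    public static List<String> destinations(String serviceType,
                                            Map<String, List<String>> yellowPages,
                                            List<String> broadcastAgents) {
        List<String> matches = yellowPages.get(serviceType);
        if (matches != null && !matches.isEmpty()) {
            return matches; // route to agents advertising the service
        }
        return broadcastAgents; // none registered: broadcast
    }
}
```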

To view the contents of the service “yellow pages” registry - select the “Search Yellow
Pages” button on the “Workflow Tester” HTML form interface. This will enumerate all
the contents of the agent yellow pages. See Figure 11.

Examining the display under the Agents/AGENT-NAME/ Services branch will identify
all the Services that have been registered by Service Provider Plugins at an agent. In the
example of Figure 11., the agent names are “EXAMPLE1”, “EXAMPLE2”,
“EXAMPLE3” and the Service types are “TESTER1”…”TESTER6”, ”GAUGE1”



Figure 11. Agents register their services in the yellow pages. This registry assists in guiding where
to send Requests. A Request for a service that is not registered results in a broadcast.




Q: Can I specialize S+C Reference Infrastructure Plugins for my own
needs?

Absolutely. A good example is the provided
com.bbn.openzone.base.plugins.ExtendedSCConnector infrastructure Plugin (which
extends the reference SCConnector). Aside from the simple requirements below, it is a
simple matter:

   1.) Call super.setupSubscriptions() in setupSubscriptions()
   2.) Call super.execute() in execute()


Q: Can I view logged execution events from an Agent/Node?

BBN-DASADA Cougaar Agents use the Log4J (Apache Jakarta) logging tool to manage
internal logging. Consult http://jakarta.apache.org/log4j/docs/index.html for further
information on Log4J. The Open Source Lumbermill product (a contributor product to
Log4J on the Apache website) provides a useful visual presentation and display of log
output from an agent. See Figure 12.



From the working directory where a Cougaar S+C agent was launched, launch the
script runlogview.bat. This batch file simply establishes an environment and launches the
LumberMill product. In the “sample_agent” example, a batch file (runlogview.bat) is
already provided, as well as the “opz.lcf” client configuration file that defines the
link (socket) from the UI to the agent logging output.

You may run the viewer remotely; however, you may need to regenerate the opz.lcf file
to establish a different network IP address for the Lumbermill product (simple to do –
see the Lumbermill documentation).

   set SCBASE_WORKSPACE=\smartchannels\pub
   set COUGAAR_INSTALL_PATH=%SCBASE_WORKSPACE%\install\cougaar
   set MY_SCRIPTS=.;%SCBASE_WORKSPACE%\scripts
   call %SCBASE_WORKSPACE%\bin\env.bat
   java -jar %SCBASE_WORKSPACE%\install\lumbermill-1.0.1\lib\lumbermill.jar




Currently, the Log4J convention is simple: major classes are provided a DEBUG
Category that is class-scoped. Example:

Category DEBUG = Category.getInstance(this.getClass().getName() + ".DEBUG");

Categories will also be defined to “slice” different views into the infrastructure for
different purposes. For example, there exists a simple Category structure to capture
publish/subscribe events intended for visualization (later to be used for Architecture
Description Language generation), etc.
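For illustration only, a minimal log4j.properties along these lines could route all categories to a SocketAppender that a Lumbermill instance listens on. The host and port shown are assumptions for the sketch; the actual agent configuration and opz.lcf settings may differ.

```
# illustrative only - host and port are assumptions
log4j.rootCategory=DEBUG, SOCKET
log4j.appender.SOCKET=org.apache.log4j.net.SocketAppender
log4j.appender.SOCKET.RemoteHost=localhost
log4j.appender.SOCKET.Port=4445
```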




Figure 12. Lumbermill display for agent logging. Lumbermill is a Swing(tm) log processing and
distribution system for Log4j.



Q: Does the infrastructure use “scripted” components? Can I modify
these scripts?

Yes, and yes. Currently only the “Workflow Assessor” infrastructure Plugin is scripted.
It uses a Jscheme script to define its process logic. In the /sample_agent directory, for
example, there is an “openzone.basic.scm” file which contains a text Jscheme script. This
script is used by the Assessor to evaluate an Acceptance graph and decide whether to
award Contracts. The “openzone.basic.scm” script will evaluate an acceptance graph
successfully only if it is syntactically well-formed. One can, however, provide scripts
that include evaluation logic about the contents of the data model, etc.


Citations

    1. [Combs01] N. Combs. Reliable Recruitment and Assembly of Peer-to-peer
       Services and Distributed Workflow, In Working Conference on Complex and
       Dynamic Systems Architecture, Brisbane, Australia, December 2001.
       http://aai.bbn.com/cougaar/CDSA_final_3.pdf.
    2. [Cougaar01] The Open Source Cognitive Agent Architecture on the website at
       http://www.cougaar.org
    3. The Jscheme Web Programming Project on the website at
       http://jscheme.sourceforge.net/jscheme/mainwebpage.html




This work is sponsored by the Defense Advanced Research Projects Agency (DARPA). In
particular we acknowledge the support of Dr. John Salasin and the Dynamic Assembly for
System Adaptability, Dependability, and Assurance (DASADA) program. Contract Number:
F30602-00-C-0203



