Custom components

Warning

Custom components support is experimental, and compatibility even between minor platform versions is not guaranteed.

Tonomi can be extended with custom components implemented in any scripting language. The scripts should speak a simple YAML-based protocol described below and be packaged into a Docker image. In addition, two applications should be created: one for the component itself and one for the component “factory”. The latter can also provide discovery capabilities if needed.

Components

The Platform provides two special component types which are used to interface with external scripts. One component type stands for the scripted component itself, and the other, which we call a factory, is used for calling into the scripting service and for discovery.

Scripted component

The central component type in the custom components functionality is scripted.Component, which represents an entity in the external system that the user wishes to manage.

The state of this component is defined entirely by the external scripts: more concretely, the result of script execution is expected to be a set of update commands which are applied to the state of the respective component. These update commands may change the instance status (either of the instance as a whole or of each individual interface), change its published signals and (in the future) add items to its Activity Log.

The user may also configure custom commands (i.e. receive-command pins) which are passed through to a script.

Finally, the user may add configuration pins whose values will be passed to the launch script when the component is launched through the Portal.

Here is a simple example of what scripted.Component usage could look like:

application:
  interfaces:
    petclinic:
      "*": bind(petclinic#petclinic.*)
  components:
    petclinic:
      type: scripted.Component
      configuration:
        factory.name: PetClinic Factory
        factory.launchScript:          /petclinic/launch.py
        factory.destroyScript:         /petclinic/destroy.py
        factory.healthCheckScript:     /petclinic/healthCheck.py
      interfaces:
        petclinic:
          entrypoint: publish-signal(string)

One needs to specify paths to the launch, destroy and other scripts (see below for more information about scripts) and, most importantly, the factory name. The factory name is the Portal name of the application which contains a factory component. An instance of this application must be added as a service to the environment before managed components can be launched in it.

scripted.Component has the following configuration pins:

factory:
  name:                    configuration(string)
  launchScript:            configuration(string)
  destroyScript:           configuration(string)
  healthCheckScript:       configuration(string)
  reconfigurationScript:   configuration(string)
  commandScripts:          configuration(map<string, string>)

The configuration parameters are described below. You can find more information about the scripts in the following sections of this document.

factory.name (string)
    Name of the factory which should be used to execute scripts for this component and for discovery of instances of this component. Usually managed components and factories come in pairs: if the scripted component is called, for example, “Virtual Machine”, then the respective factory would be called “Virtual Machine Factory”, and “Virtual Machine Factory” should be specified in this parameter.

factory.launchScript (string)
    Path to the launch script inside a Docker container. This parameter is mandatory for all scripted components.

factory.destroyScript (string)
    Path to the destroy script inside a Docker container. This parameter is mandatory for all scripted components.

factory.healthCheckScript (string)
    Path to the health-check script inside a Docker container. This parameter is optional.

factory.reconfigurationScript (string)
    Path to the reconfiguration script inside a Docker container. This parameter is optional.

factory.commandScripts (map<string, string>)
    A map from command pin identifiers to the paths of their respective scripts inside a Docker container. This parameter is optional.
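
For example, mapping a custom control.reboot command pin to its script (as in the full example at the end of this document) would look like this:

configuration:
  factory.commandScripts:
    control.reboot: /vm/reboot.py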

It is possible to define custom interfaces on scripted components. Currently, pins of three types can be defined: configuration, publish-signal and receive-command.

publish-signal pins receive their values from the script output. configuration pin values are passed to the scripts as input. receive-command pin invocations lead to calling the configured scripts.
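
For illustration, a scripted component managing a virtual machine (as in the example at the end of this document) could declare all three pin types:

interfaces:
  configuration:
    instanceType: configuration(string)   # passed to the scripts as input
  compute:
    ip: publish-signal(string)            # value comes from the script output
  control:
    reboot: receive-command()             # invocation runs the configured command script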

Warning

Only one scripted.Component may be present in a component manifest. Validations for this will be added in the future, but for now the result of launching an instance with multiple scripted.Component components inside it is undefined.

Factory component

In order for scripted components to be usable, an instance of a factory component must be present in the environment as a service. Its component type is scripted.ComponentFactory. Here is an example of a manifest with this component:

application:
  interfaces:
    component:
      "*":      bind(factory#component.*)
    vms:
      vm-by-id: bind(factory#factory.instances)
    configuration:
      "*":      bind(factory#configuration.*)
  components:
    factory:
      type: scripted.ComponentFactory
      configuration:
        component.application: "PetClinic"
        component.discoveryScript: /petclinic/discover.py
        component.discoverySchedule: "0/30 * * * * ? *"
      interfaces:
        configuration:
          registry: configuration(string)
        vms:
          vm-by-id: consume-signal(map<string, string>)
    vm-factory:
      type: reference.Service
      interfaces:
        vms:
          vm-by-id: publish-signal(map<string, string>)
  bindings:
  - [factory, vm-factory]

A factory component requires a scripting service in the environment (see below for more information about the scripting service). A factory component must map its component interface to a top-level interface in order to be recognized as a factory when it is added as a service to an environment.

The factory component has only three configuration pins:

component:
  application:              configuration(string)
  discoveryScript:          configuration(string)
  discoverySchedule:        configuration(string)

component.application (string)
    Name of the scripted component which is managed by this factory. The factory uses it to load the latest manifest for freshly discovered components from the Portal. Usually scripted components and factories come in pairs: if the scripted component is called “Virtual Machine”, then its respective factory would be called “Virtual Machine Factory”, and “Virtual Machine” is the value that should be specified in this parameter. This parameter is mandatory if the other two parameters are set; otherwise it should not be set.

component.discoveryScript (string)
    Path to the discovery script inside a Docker container. This parameter is mandatory if the other two parameters are set; otherwise it should not be set.

component.discoverySchedule (string)
    A cron-like string which describes how often the discovery script should be executed. You can find the exact syntax here. This parameter is mandatory if the other two parameters are set; otherwise it should not be set.

In order for discovery to work, all of the above configuration options must be set. Additionally, the factory must be added as a service to one and only one environment; it will create discovered components in that environment. If a factory is added as a service to multiple environments or to none at all, discovery will be disabled.

Do not set the discovery interval to a small value; intervals larger than 1 minute are preferred. Because the discovery procedure involves running scripts through the Docker service, running them too often may overload the connection to the Docker daemon and cause overall slowdowns.
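
For example, assuming the Quartz-style cron syntax used in the manifests in this document, a schedule that runs discovery every five minutes could be written as:

component.discoverySchedule: "0 0/5 * * * ? *"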

The factory component also has another interface:

factory:
  instances:  publish-signal(map<string, string>)

The factory.instances pin exposes the current mapping between natural IDs and instance IDs for all scripted components managed by the factory; see below for more information about the different kinds of IDs. This interface is important in conjunction with the user-defined interfaces described in the next paragraph.

It is also possible to define additional interfaces on scripted.ComponentFactory; these interfaces must contain only pins of type consume-signal(map<string, string>). These interfaces can then be used to resolve component references returned by the scripts. The user may establish a binding between two factories through a reference.Service, with the factory.instances pin on one side and a user-defined pin on the other. References that use the respective mapping will then be resolved using the value received by the factory on this pin.

Note that for the above to work, one needs to add multiple services of the same type to the same environment. Normally this is prohibited, but specifically for factories this restriction is lifted: it is possible to add multiple factory instances as services to the same environment if this is done through the factory instance panel menu.

If a script invocation fails, the factory will log a message about it to its activity log. For most component scripts, execution failures will also be reported to the component's activity log when possible.

Docker image

You need to package all scripts into a Docker image. The Tonomi Platform will then start a container with these scripts and invoke them when needed. The base image for Python scripts is qubell/python-scripting. It is recommended to have a single image for all related components with the following directory structure:

component1/
  launch.py
  destroy.py
  healthCheck.py
  reconfigure.py
  discover.py
  custom.py
component2/
  ...

However, it is possible to write the actual implementation in any language. The only requirement is that the language of choice can work with YAML, because YAML is used as the message representation medium.

Scripting service

You need to have a running Scripting Service configured with a Docker image name. The manifest can be installed from Bazaar: Scripting Service. You will also need a configured Docker Service.

A scripting service, at the moment, is simply a Docker container component which has an execute: receive-command(string script, object arguments => object results) pin. All factories bind to the scripting service on the interface which contains this pin. Other components that provide the same interface, such as workflow.Instance, could in principle be used, but this is not supported as of now. Using Docker components is the preferred way to create scripting services.

Scripts

Custom component behavior should be implemented as a set of scripts speaking a simple YAML protocol. Scripts for component launch and destroy are mandatory; scripts for health check, reconfiguration and component discovery are optional. You may also have a number of scripts for handling custom component commands.

The scripts should read a YAML document from the standard input, and output a sequence of YAML documents to the standard output.

Note

At this moment only a single document on the standard output is supported.
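
For example, a minimal script skeleton in Python could look like the following sketch; it assumes the PyYAML library is available in the image, which is an assumption on our part:

#!/usr/bin/env python
# Minimal script skeleton: read one YAML document from stdin and
# write one YAML document of update commands to stdout.
# Assumes PyYAML is available (an assumption, not a platform guarantee).
import sys
import yaml

request = yaml.safe_load(sys.stdin)                # e.g. {configuration: {...}, instances: {...}}
factory_configuration = request.get('configuration', {})

updates = {'instances': {}}
for natural_id in (request.get('instances') or {}):
    # A real script would query the external system here; this sketch
    # simply marks every known instance as active.
    updates['instances'][natural_id] = {'$set': {'status.flags.active': True}}

yaml.safe_dump(updates, sys.stdout, default_flow_style=False)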

Data model

The YAML documents that scripts output are used to update component instances in the platform. The internal model that these updates operate on is the following:

instances:
  <id>:
    instanceId: <instance id>
    name: <string>
    status:
      flags:
        active:     <boolean>
        converging: <boolean>
        failed:     <boolean>
      message: <status details>
    interfaces:
      <interface>:
        signals:
          <pin>: <value>
        status:
          flags:
            active:     <boolean>
            converging: <boolean>
            failed:     <boolean>
          message: <status details>
    components:
      <component 1>:
        reference:
          mapping: <interface name>.<id to instance mapping pin name>
          key:     <linked component id>
      <component 2>:
        components:
          <component 3>:
            reference:
              mapping: <interface name>.<id to instance mapping pin name>
              key:     <linked component id>
    activityLog:
    - severity: <severity>
      message:  <log message>
    commands:
      <command id>:
      - $intermediate: true
        <result key>: <result value>
      - <result key>: <result value>

Instance ID

There are two identifiers associated with a scripted component instance: a globally unique synthetic ID used in the Portal, and a natural ID which is unique within a given component factory instance. The former is represented in the examples as <instance id>, the latter as <id>.

Instance name

The display name of an instance. You can provide an instance name for discovered component instances, but you cannot update it later.

Status

Every component instance has a status for the component as a whole and can also override this status for individual interfaces. The status is a set of boolean flags with an optional status message. The default value for all flags is false. The following flags are available:

active
    Whether the component is successfully performing its duties.

converging
    Whether the component is currently doing something to reach the desired state (provisioning or deprovisioning a resource, installing software, etc.).

failed
    Indicates a failure. Note that this flag is not mutually exclusive with active: for example, you may have a failure that affects the component's resilience while its SLA is not yet broken.

It is also possible to set an optional string message associated with the status.
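
For example, an update for a hypothetical instance that is being resized but is still serving traffic could set the following status:

status:
  flags:
    active:     true
    converging: true
    failed:     false
  message: "Resizing the instance to m3.large"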

Child references

Component instances can have references to other component instances. Scripts may return a tree of references, where each reference is a pair of a mapping name and a mapping key: the mapping name equals the respective interface.pin on the factory, and the mapping key is (usually) the natural ID of the referenced component.

Each factory exposes a mapping from natural IDs to instance IDs on its factory.instances signal. Component scripts provide natural IDs for referenced components, and these mappings are used to resolve instance IDs from natural IDs. For this to work, an interface must be declared on the factory of the referencing component and bound to the factory interface on the factory of the referenced components. This can be done with service references; see the example manifest to find out how this can be done.
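
For instance, a PetClinic script could reference a VM discovered by the VM factory like this (mirroring the reference structure of the data model above):

components:
  tomcat-vm:
    reference:
      mapping: vms.vm-by-id   # interface.pin declared on the referencing component's factory
      key:     i-123123       # natural ID of the referenced VM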

Activity log

Component instances may push entries to the activity log of their representation in the Portal using the activityLog section of their model. To append a message, one of the scripts must yield a $pushAll command targeting the activityLog field of the model.

The severity of an entry may be one of the following values: TRACE, DEBUG, INFO, WARNING, ERROR. Any case, even mixed, is acceptable: ERROR, error and ERrOr all mean the same severity level. If the severity value is absent or invalid, no error is produced and the INFO level is assumed. The message field, on the other hand, is required and must contain a string, which can be multiline.
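
For example, a hypothetical health-check output that appends a warning entry for an instance could look like this:

instances:
  i-789789:
    $pushAll:
      activityLog:
      - severity: WARNING
        message: "Disk usage is above 90%"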

Update commands

Scripts may issue the following update commands:

<id>:
  # replace field value
  <field>: <value>
  # replace value at path
  $set:
    <path>: <value>
  # remove path with its value
  $unset:
    <path>:
  # append all values, supported for activity log only
  $pushAll:
    <path>:
    - <value>
    - <value>

All update operators work in the scope of a single component instance. Note that not all operators are supported for every path inside the document. For example, it does not make sense to $unset one of the status flags, because they have boolean values and cannot be absent.
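
For instance, a single update for a hypothetical instance could combine a plain field update with a targeted $set (the $pushAll form is shown in the activity log example above):

instances:
  i-789789:
    # plain field update, as in the health-check example at the end of this document
    interfaces:
      compute:
        signals:
          ip: 203.0.113.7
    # replace a single value at a path
    $set:
      status.flags.active: true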

Launch script

This script is executed when one or more new components are launched by the user. It receives factory configuration and all component configurations as input and returns component states. Multiple components may be requested at once.

The launch script must return the instanceId field to correlate the created instances with the requested configuration sets. It may either wait until the component is fully initialized, or initiate the process, return a “converging” status and leave the rest to the health-check script.

Input

configuration: <factory configuration>
launch:
  <instance id>:
    configuration: <instance configuration>

Output

instances:
  <id>:
    instanceId: <instance id>
    <other updates>
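
For example, a launch script that only initiates provisioning and leaves the rest to the health-check script could return something like this:

instances:
  i-789789:
    instanceId: 56e6e3c8b171d3b57dd3f348
    $set:
      status.flags.converging: true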

Destroy script

This script is executed when one or more components are requested to be destroyed by the user. It receives the factory configuration and component identifiers as input and returns component states. Multiple components may be destroyed at once. A component is considered successfully destroyed when it has a status with all flags set to false; only after that will it be marked as destroyed in the Portal.

Input

configuration: <factory configuration>
instances:
  <id>:

Output

instances:
  <id>: <updates>

Health check script

This script is executed periodically to update component statuses and signals. It receives factory configuration and component identifiers as input and returns component states.

See also Discovery script for additional information.

Note

At the moment it is not possible to configure the interval between health checks; it is fixed at 1 minute. This may change in future versions.

Input

configuration: <factory configuration>
instances:
  <id>:

Output

instances:
  <id>: <updates>

Discovery script

This script is executed periodically to adopt externally created component instances. It receives the factory configuration as input and returns component states for the discovered components. It is important to return instance names for newly discovered instances because they cannot be changed afterwards. Other instance model properties can safely be omitted if they are going to be updated by the health-check script later.

Discovery logic (adopting new instances) and health-check logic (updating existing instance properties) often require different implementations, so the recommended approach is to keep them as separate scripts. This also allows different schedules for the discovery and health-check processes, because a health check is likely to result in “lighter” queries to the external system than a full discovery. In terms of model modification, however, discovery and health check are equivalent, except for the note about instance names above.

The discovery script is the only one configured on the factory component. All other scripts are configured on the managed component instead.

Input

configuration: <factory configuration>

Output

instances:
  <id 1>: <updates>
  <id 2>: <updates>
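
For example, a discovery output that assigns display names to newly adopted instances could look like this:

instances:
  i-123123:
    name: "web server (us-east-1a)"
  i-456446:
    name: "web server (us-east-1b)"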

Custom command scripts

These scripts are executed when the user invokes a custom command on a component instance. A command script receives command arguments as input and returns command results, which are returned to the caller or logged to the Activity Log if invoked from the Portal. It may also update the instance model if necessary, using the usual update commands.

In order to correlate command results with a command invocation, a special identifier is assigned to the invocation when it is started; it appears as <command id> in the examples below, and scripts should treat it as an arbitrary string with no inner structure. The respective <command id> must be used in $pushAll to return the command results.

Note that because $pushAll is just a regular model update command, it is possible to return command results not directly from the command script but later, for example from the health-check script. This allows asynchronous commands to be implemented, for example ones which start a long-running external process.

Input

configuration: <factory configuration>
instances:
  <id>:
    commands:
      <command id>:
        <interface>:
          <pin>:
            <argument key>: <argument value>

Output

instances:
  <id>:
    $pushAll:
      commands.<command id>:
      - $intermediate: true
        <intermediate result key>: <intermediate result value>
      - <result key>: <result value>

Reconfiguration script

This script is executed when a component instance is reconfigured from the Tonomi Portal. It receives the factory configuration and new instance configurations as input and returns the usual instance model updates. Configurations for multiple instances can be specified in its input.

Input

configuration: <factory configuration>
instances:
  <id>:
    configuration: <instance configuration>

Output

instances:
  <id>: <updates>

Example

Suppose that someone wants to create a custom “PetClinic on cloud VMs” component. They want to be able to launch PetClinic instances from Tonomi and to discover ones created externally. For the latter, they use a registry where all PetClinic instances (whether created from Tonomi or not) are listed.

Application manifests

They create a VM application:

application:
  interfaces:
    configuration:
      "*": bind(vm#configuration.*)
    compute:
      "*": bind(vm#compute.*)
    control:
      "*": bind(vm#control.*)
  components:
    vm:
      type: scripted.Component
      configuration:
        factory.name: VM Factory
        factory.launchScript:          /vm/launch.py
        factory.destroyScript:         /vm/destroy.py
        factory.healthCheckScript:     /vm/healthCheck.py
        factory.reconfigurationScript: /vm/reconfigure.py
        factory.commandScripts:
          control.reboot: /vm/reboot.py
      interfaces:
        configuration:
          instanceType: configuration(string)
        compute:
          ip: publish-signal(string)
        control:
          reboot: receive-command()

A VM Factory application:

application:
  interfaces:
    component:
      "*":      bind(factory#component.*)
    vms:
      vm-by-id: bind(factory#factory.instances)
    configuration:
      "*":      bind(factory#configuration.*)
  components:
    factory:
      type: scripted.ComponentFactory
      configuration:
        component.application: "VM"
        component.discoveryScript: /vm/discover.py
        component.discoverySchedule: "0/30 * * * * ? *"
      interfaces:
        configuration:
          awsAccessKey: configuration(string)
          awsSecretKey: configuration(string)

A PetClinic application:

application:
  interfaces:
    petclinic:
      "*": bind(petclinic#petclinic.*)
  components:
    petclinic:
      type: scripted.Component
      configuration:
        factory.name: PetClinic Factory
        factory.launchScript:          /petclinic/launch.py
        factory.destroyScript:         /petclinic/destroy.py
        factory.healthCheckScript:     /petclinic/healthCheck.py
      interfaces:
        petclinic:
          entrypoint: publish-signal(string)
And a PetClinic Factory application:

application:
  interfaces:
    component:
      "*":      bind(factory#component.*)
    vms:
      vm-by-id: bind(factory#factory.instances)
    configuration:
      "*":      bind(factory#configuration.*)
  components:
    factory:
      type: scripted.ComponentFactory
      configuration:
        component.application: "PetClinic"
        component.discoveryScript: /petclinic/discover.py
        component.discoverySchedule: "0/30 * * * * ? *"
      interfaces:
        configuration:
          registry: configuration(string)
        vms:
          vm-by-id: consume-signal(map<string, string>)
    vm-factory:
      type: reference.Service
      interfaces:
        vms:
          vm-by-id: publish-signal(map<string, string>)
  bindings:
  - [factory, vm-factory]

Environment preparation

They build a Docker image from the following Dockerfile and publish it to Docker Hub:

Dockerfile
FROM qubell/python-scripting:latest

ADD vm/launch.py        /vm/launch.py
ADD vm/destroy.py       /vm/destroy.py
ADD vm/healthCheck.py   /vm/healthCheck.py
ADD vm/discover.py      /vm/discover.py
ADD vm/reconfigure.py   /vm/reconfigure.py
ADD vm/reboot.py        /vm/reboot.py

ADD petclinic/launch.py      /petclinic/launch.py
ADD petclinic/destroy.py     /petclinic/destroy.py
ADD petclinic/healthCheck.py /petclinic/healthCheck.py
ADD petclinic/discover.py    /petclinic/discover.py

They add the Docker Service and Scripting Service manifests from Bazaar, launch an instance of the Docker Service and add it as a service in some environment, and then launch an instance of the Scripting Service configured with the name of the image they have just built and add it as a service to some environment too.

Then they launch instances of VM Factory and PetClinic Factory applications and add them as services to the same environment as the Scripting Service. Note that because PetClinic Factory depends on VM Factory, the latter must be launched and added as a service to the environment before the former.

Scripts

The Python scripts they add to the Docker image may vary depending on the cloud provider and software deployment method, but their input and output may look as follows:

vm/discover.py

Input:

configuration:
  configuration.awsAccessKey: XXX
  configuration.awsSecretKey: YYY

Output:

instances:
  i-123123:
  i-456446:

vm/launch.py

Input:

configuration:
  configuration.awsAccessKey: XXX
  configuration.awsSecretKey: YYY
launch:
  56e6e3c8b171d3b57dd3f348:
    configuration:
      configuration.instanceType: m1.small

Output:

instances:
  i-789789:
    instanceId: 56e6e3c8b171d3b57dd3f348
    $set:
      status.flags.active: true
    interfaces:
      compute:
        signals:
          ip: 203.0.113.1

vm/healthCheck.py

Input:

configuration:
  configuration.awsAccessKey: XXX
  configuration.awsSecretKey: YYY
instances:
  i-789789:

Output:

instances:
  i-789789:
    $set:
      status.flags.active: true
    interfaces:
      compute:
        signals:
          ip: 203.0.113.1

vm/reconfigure.py

Input:

configuration:
  configuration.awsAccessKey: XXX
  configuration.awsSecretKey: YYY
instances:
  i-789789:
    configuration:
      configuration.instanceType: m3.large

Output:

instances:
  i-789789:
    status:
      flags:
        active: false
        converging: true
        failed: false

vm/destroy.py

Input:

configuration:
  configuration.awsAccessKey: XXX
  configuration.awsSecretKey: YYY
instances:
  i-789789:

Output:

instances:
  i-789789:
    status:
      flags:
        active: false
        converging: false
        failed: false
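
vm/reboot.py

A hypothetical invocation of the reboot command script might look as follows; the command identifier, the empty argument map and the result key below are illustrative values, not prescribed by the platform.

Input:

configuration:
  configuration.awsAccessKey: XXX
  configuration.awsSecretKey: YYY
instances:
  i-789789:
    commands:
      cmd-0001:
        control:
          reboot: {}

Output:

instances:
  i-789789:
    $pushAll:
      commands.cmd-0001:
      - message: "Reboot initiated"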

petclinic/discover.py

Input:

configuration:
  configuration.registry: "http://petclinic.etcd.io/"

Output:

instances:
  pearl-white-petclinic:
  dark-blue-petclinic:

petclinic/launch.py

Input:

configuration:
  configuration.registry: "http://petclinic.etcd.io/"
launch:
  56e79f37954cc4440350814e:
    configuration: {}

Output:

instances:
  flaring-green-petclinic:
    instanceId: 56e79f37954cc4440350814e
    $set:
      status.flags.active: true
    interfaces:
      petclinic:
        signals:
          entrypoint: http://203.0.113.1/
    components:
      haproxy-vm:
        reference:
          mapping: vms.vm-by-id
          key:     i-789789
      tomcat-vm:
        reference:
          mapping: vms.vm-by-id
          key:     i-123123

petclinic/healthCheck.py

Input:

configuration:
  configuration.registry: "http://petclinic.etcd.io/"
instances:
  pearl-white-petclinic:

Output:

instances:
  pearl-white-petclinic:
    $set:
      status.flags.active: true
    interfaces:
      petclinic:
        signals:
          entrypoint: http://203.0.113.1/
    components:
      haproxy-vm:
        reference:
          mapping: vms.vm-by-id
          key:     i-789789
      tomcat-vm:
        reference:
          mapping: vms.vm-by-id
          key:     i-123123

petclinic/destroy.py

Input:

configuration:
  configuration.registry: "http://petclinic.etcd.io/"
instances:
  pearl-white-petclinic:

Output:

instances:
  pearl-white-petclinic:
    status:
      flags:
        active: false
        converging: false
        failed: false