Using command pins as actions

Note

This feature is available only in Tonomi Platform version 31.1 and above, and only in component-based manifests.

The generic commandCall step requires passing the command name and arguments as step parameters and adds extra wrapping around step outputs, resulting in more complex, harder to read manifests. It also does not allow overriding parameters for specific actions in the environment.

To solve this problem, the ability to use command pins as actions was introduced. To use a command as an action, the command pin must be declared in the interfaces section of the corresponding workflow.Instance:

workflow:
  type: workflow.Instance
  interfaces:
    resource-pool:
      allocate-resources: send-command(string type, int count => list<object> resources)

When a command is used as an action, its arguments are passed as action parameters and its return values become outputs, so you can call the action like this:

- allocate:
    action: resource-pool.allocate-resources
    parameters:
      type: vm
      count: 1
    output:
      resources: resources

The call above is equivalent to the following code, except for the output format:

- allocate:
    action: commandCall
    parameters:
      service: resource-pool.allocate-resources
      arguments:
        type: vm
        count: 1
    output:
      result: result  # will contain {"resources": <resources value here>}

You can also define environment policies for this particular call. To do that, set the action name (“When asked to execute” in the UI) to resource-pool.allocate-resources and the parameter name (“Override value of” in the UI) to, for example, count. After adding such a policy, all manifests referring to resource-pool.allocate-resources will be affected by the overridden value.

Note

Such policies do not affect direct commandCall step calls, and they require an exact match of both the interface and the pin name. In other words, a policy for vm-pool.allocate-resources does not affect calls to server-pool.allocate-resources.
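
For illustration, such an override might be expressed like this in an environment definition (a minimal sketch; the exact environment markup and the override value 5 are assumptions, not taken from the manifest above):

policies:
  - action: resource-pool.allocate-resources
    parameter: count
    value: 5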

Common Parameters

commandCallTimeout (int or string, default: 30)
    Timeout for the command to finish, with an optional time unit suffix. Supported time units are hour, minute, and second (both singular and plural forms). If no time unit is specified, seconds are used.

commandCallAggregation (map, default: {})
    A map from a result key to an aggregation strategy for intermediate command responses. Supported values are default, last, and concatenate. The default strategy is last.

commandCallMulti (boolean, default: false)
    If true, the command is executed on all connected peers, and a map from peer IDs to command results is returned. If false, the request is sent to a single peer; the command fails if there is more than one peer or none at all (unless commandCallPeerId is defined).

commandCallPeerId (string)
    Defines the peer the request is sent to when commandCallMulti is false and several peers are connected.

commandCallBatchSize (int, optional)
    Defines how many values are processed simultaneously. See Throttling below for details.

commandCallBatchBy (list, default: [ "commandCallPeerId" ])
    Defines the set of command arguments that participate in batching.
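
For example, several of these parameters can be combined in a single action call (a sketch; the timeout, multi, and aggregation values are illustrative):

- allocate:
    action: resource-pool.allocate-resources
    parameters:
      type: vm
      count: 1
      commandCallTimeout: 2 minutes      # optional time unit suffix
      commandCallMulti: true             # run the command on all connected peers
      commandCallAggregation:
        resources: concatenate           # aggregation strategy for the resources result key
    output:
      resources: resources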

Migrating existing manifests

Let’s start with the following snippet:

steps:
  - allocate:
      action: commandCall
      parameters:
        service: resource-pool.allocate-resources
        arguments:
          type: vm
          count: 1
        timeout: 10 seconds
      output:
        result: result
return:
  resource:
    value: '{$.result.resources[0]}'
To migrate it to the short syntax:

1. Use the value of the service parameter as the action name.
2. Unwrap the command parameters, previously passed in a separate arguments parameter.
3. The remaining commandCall parameters should have their first letter capitalized and be prefixed with commandCall, i.e. timeout becomes commandCallTimeout, aggregation becomes commandCallAggregation, and so on.
4. The output format also changes. Instead of a single result output containing the map of command results, you get each command result in its own output; see the example below.
5. Dynamic links to command results lose one level of nesting and have to be updated. Instead of {$.result.resources} you should use {$.resources}.

Finally, our example will look like this:

steps:
  - allocate:
      action: resource-pool.allocate-resources
      parameters:
        type: vm
        count: 1
        commandCallTimeout: 10 seconds
      output:
        resources: resources
return:
  resource:
    value: '{$.resources[0]}'

If you used the multi parameter, be careful when changing dynamic links that refer to step outputs. The old command call returns all outputs in a single map, with peer IDs as keys and maps from command output names to their values as values. Commands using the short syntax instead return each command output as a separate step output parameter, each containing a map from peer IDs to output values.

In other words, if the original multi-peer command call with signature send-command(=> string x, int y) returned this:

result:
  peer_1:
    x: "test"
    y: 42
  peer_2:
    x: "example"
    y: 0

The updated command call will return this:

x:
  peer_1: "test"
  peer_2: "example"
y:
  peer_1: 42
  peer_2: 0

This means that if you previously referred to a particular peer's output as {$.result_output[$.peer].x}, you should now use {$.x[$.peer]} instead.
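
For instance, assuming the old step mapped its command result to a result output as in the snippets above, a return value for one peer's output would change roughly like this (the peer-x name and the {$.peer} reference are illustrative):

# before migration: one extra level of nesting under the result map
return:
  peer-x:
    value: '{$.result[$.peer].x}'

# after migration: each command output is a separate step output
return:
  peer-x:
    value: '{$.x[$.peer]}'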

Throttling

It is possible to limit the level of parallelism of the commandCall step. You might need this when there is a limited external resource (CPU, memory, an HTTP API with a limited request rate) or when you need to do something in a rolling manner, for example when upgrading a cluster. By default, only the number of peers processing requests is limited, but you can split other list-typed arguments into batches as well by passing the commandCallBatchBy parameter. You can set commandCallBatchBy to a list of command arguments; there is also one special value, commandCallPeerId, which refers to the list of component peers. Command arguments referenced in the commandCallBatchBy parameter must be of type list.

For command arguments that are present in the commandCallBatchBy list, we limit the number of values sent in one request. There is no guaranteed processing order, but the product of the lengths of all list arguments referenced in commandCallBatchBy in one batch will not exceed commandCallBatchSize, and all possible combinations of argument values will be processed exactly once.

For example, if you have the following step executed on a single peer:

execute:
    action: pinger.ping
    parameters:
        commandCallBatchBy: [ hosts, ports ]
        commandCallBatchSize: 2
        hosts: [ "127.0.0.1", "127.0.0.2" ]
        ports: [ 8080, 8081, 8082 ]

then the request will be split into three batches:

Batch 1: hosts = [ "127.0.0.1" ],              ports = [ 8080, 8081 ]
Batch 2: hosts = [ "127.0.0.2" ],              ports = [ 8080, 8081 ]
Batch 3: hosts = [ "127.0.0.1", "127.0.0.2" ], ports = [ 8082 ]

An example of batching in a multi-peer scenario is:

execute:
    action: cluster.install
    parameters:
        commandCallMulti: true
        commandCallBatchBy: [ commandCallPeerId, package ]
        commandCallBatchSize: 4
        package: [ "a.rpm", "b.rpm" ]

If there are three peers (peer1, peer2, and peer3) in the cluster, the following batches will be formed:

Batch 1: peerId = [ "peer1", "peer2" ], package = [ "a.rpm", "b.rpm" ]
Batch 2: peerId = [ "peer3" ],          package = [ "a.rpm", "b.rpm" ]

Example

This is a full manifest containing both standard and multi-peer calls.

Download shortSyntax.yml

application:
  interfaces:
    result:
      data: bind(workflow#result.data)
      data-multi: bind(workflow#result.data-multi)
  components:
    pool:
      type: cobalt.common.ResourcePool
      configuration:
        configuration.resources:
          vm:
            - {ip: 127.0.0.1}
            - {ip: 127.0.0.2}
    workflow:
      type: workflow.Instance
      interfaces:
        result:
          data: publish-signal(map<string,object>)
          data-multi: publish-signal(map<string,object>)
        resource-pool:
          allocate-resources: send-command(string type, int count => list<object> resources)
      required: [resource-pool]
      configuration:
        configuration.workflows:
          launch:
            steps:
              - allocate:
                  action: resource-pool.allocate-resources
                  parameters:
                    type: vm
                    count: 1
                  output:
                    resources: resources
              - allocate-multi:
                  action: resource-pool.allocate-resources
                  parameters:
                    type: vm
                    count: 1
                    commandCallMulti: true
                  output:
                    resources-multi: resources
            return:
              data:
                value: '{$.resources[0]}'
              data-multi:
                value: '{$.resources-multi}'
  bindings:
    - [workflow, pool]