With the release of vRealize Automation 7, VMware introduced a new way of extending the capabilities of vRealize Automation: the Event Broker Service (EBS). EBS is an extensibility engine integrated with the internal (RabbitMQ) message bus, providing UI-driven extensibility options such as lifecycle state automation. It replaces the “legacy” .NET-based Cloud Development Kit (CDK) for complex extensibility and integration use cases.

With the Event Broker, extending vRealize Automation is a lot easier. You can now filter messages on the internal vRealize Automation message bus and attach Orchestrator workflows to those filtered events. So instead of having a limited set of workflow stubs, we can now create policies that define when to kick off a workflow. There are more than sixty different lifecycle events to which you can attach workflows at the pre-, during- or post-event phase.

For example, you can create a filter that selects only virtual machines whose name starts with ‘VM-DB’ in the post-provisioning stage, and attach a specific workflow that performs actions in this particular stage for this limited set of virtual machines.
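Conceptually, such a subscription condition is just a predicate over the event data. Here is a minimal plain-JavaScript sketch of that logic; the function name and payload fields are illustrative, not the actual EBS condition syntax, which you define clause-by-clause in the vRA UI:

```javascript
// Illustrative only: mimics the logic of the example filter
// (virtual machines named 'VM-DB*' in the post-provisioning phase).
function matchesSubscription(event) {
  return event.machineType === "Virtual Machine" &&
         event.machineName.indexOf("VM-DB") === 0 &&
         event.lifecycleState === "VMPSMasterWorkflow32.MachineProvisioned" &&
         event.phase === "POST";
}
```

Only events for which every clause is true would trigger the attached workflow.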

But while setting up a vRealize Automation 7 demo for the Dutch VMUG on 17 March, I ran into some trouble.

How do I forward payload information to Orchestrator?

First of all, what is payload?
Payload is information that is configured in a blueprint or becomes active/available during the provisioning process, for instance the operating system, hostname, number of CPUs, amount of memory or IP address used during the deployment of a blueprint.
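To make this concrete, once collected the payload is essentially a flat map of custom property names to values. A sketch in plain JavaScript (the property names match the ones used later in this post; the values are made up):

```javascript
// Illustrative payload map; property names follow vRA custom property
// naming, the values are example data.
var payload = {
  "VirtualMachine.CPU.Count": "2",
  "VirtualMachine.Memory.Size": "4096",
  "VirtualMachine.Network0.Address": "10.0.0.15"
};

// Reading a value works like the vRO Properties.get() used later on.
var cpuCount = payload["VirtualMachine.CPU.Count"];
```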

Why would you need this?
If you do proper automation and automate the whole process of deploying infrastructure or applications, one of the steps will probably be ‘Register this item in my configuration management database (CMDB)‘. This is an easy task to add to the provisioning process by calling an Orchestrator workflow.
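At its core, that registration step turns provisioning data into a request body for the CMDB's REST API. A hedged sketch in plain JavaScript; the field names here are hypothetical and not iTop's actual schema:

```javascript
// Hypothetical example: build the JSON body a CMDB-registration
// workflow might POST; the field names are illustrative.
function buildCmdbEntry(name, cpu, memoryMb, ip) {
  return JSON.stringify({
    name: name,
    cpu: cpu,
    ram: memoryMb,
    managementip: ip
  });
}
```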

How does this work?
First, create a blueprint that deploys a single- or multi-tier piece of infrastructure or an application. After that, create an Orchestrator workflow that registers the virtual machines, applications, etc. in the CMDB. Check that it works by manually entering the appropriate information. Now glue the two together using an EBS subscription, forwarding the payload information from vRealize Automation to Orchestrator.

The theory is easy; now let’s see how it works in real life.

For this example I use the sample LAMP stack blueprint which comes with vRealize Automation 7.

LAMP blueprint

This blueprint consists of four virtual machines: a load balancer, two application servers and a database. Next, I published this blueprint to the Catalog.

LAMP Catalog

Next, I imported a workflow (remove the .zip extension) that creates an entry in my CMDB, which in my case is iTop, an open-source ITSM/CMDB tool. The workflow is called ‘Create iTop virtual machine with additional values‘. This workflow communicates with iTop through its REST API, which was set up in advance.

iTop workflow

This workflow uses the following inputs to create an entry in the iTop CMDB, and when I run it manually, I can successfully add an entry to the CMDB.

iTop workflow inputs

Now here’s where it becomes difficult: we have to get the information out of vRealize Automation (the payload) to fill in the input values this workflow needs. To do this, we use a small script that collects all the information available during the provisioning phase. At the bottom of this default script I added a few lines to map the collected information to the inputs of the iTop workflow.

//COLLECT vRA INFO TO USE IN LOGGING
System.log("------List Properties------");
System.log("requestId: " + requestId);
System.log("machine.id: " + machine.get("id"));
System.log("machine.name: " + machine.get("name"));
System.log("machine.type: " + machine.get("type"));
System.log("machine.owner: " + machine.get("owner"));
System.log("machine.externalReference: " + machine.get("externalReference"));
System.log("virtualMachineEvent: " + virtualMachineEvent);
System.log("lifecycleState.event: " + lifecycleState.get("event"));
System.log("lifecycleState.phase: " + lifecycleState.get("phase"));
System.log("lifecycleState.state: " + lifecycleState.get("state"));
System.log("componentId: " + componentId);
System.log("blueprintName: " + blueprintName );
System.log("componentTypeId: " + componentTypeId);
System.log("endpointId: " + endpointId);
System.log("workflowNextState: " + workflowNextState);

//COLLECT ALL BLUEPRINT PAYLOAD
var properties = new Properties();
properties.put("VirtualMachineID", machine.get("id"));

var virtualMachineEntity = vCACEntityManager.readModelEntity(host.id, "ManagementModelEntities.svc", "VirtualMachines", properties, null);
var vmProperties = new Properties();

var virtualMachinePropertiesEntities = virtualMachineEntity.getLink(host, "VirtualMachineProperties");
for each (var virtualMachinePropertiesEntity in virtualMachinePropertiesEntities) {
	var propertyName = virtualMachinePropertiesEntity.getProperty("PropertyName");
	var propertyValue = virtualMachinePropertiesEntity.getProperty("PropertyValue");
	System.log("Found property " + propertyName + " = " + propertyValue);
	vmProperties.put(propertyName, propertyValue);
}

//MAP BLUEPRINT PAYLOAD to WORKFLOW INPUT VALUES
var nodeName = machine.get("name");
var cpuCount = vmProperties.get("VirtualMachine.CPU.Count");
var memory = vmProperties.get("VirtualMachine.Memory.Size");
var ipaddress = vmProperties.get("VirtualMachine.Network0.Address");

Note: You may notice that I did not map all inputs; ‘virtualHostName‘ and ‘orgName‘ are static values in my environment.

Note: More custom properties which can be used in this script can be found in the vRealize Automation 7 Custom Properties Reference document.

You need to add this script to the iTop workflow you created earlier. You can do this in two ways:

  1. Add a scriptable task to the original workflow and add the code from above;
  2. Create a new workflow with a scriptable task, add the code and add the original workflow as a Workflow element (see below).

Add workflow

I followed option 2, which results in the following workflow, named ‘EBS iTop VirtualMachine Create‘. Now define all the required input and output attributes and parameters.

Add script to workflow

You can download the complete workflow here. (replace .txt extension with .workflow, file name should be EBS iTOP VirtualMachine Create.workflow)

The next step is to setup an Event Subscription (vRA > Administration > Events > Subscription).

Event subscription

In this case the workflow is triggered when all of the following conditions are met:

  • Lifecycle state name = VMPSMasterWorkflow32.MachineProvisioned;
  • Lifecycle state phase = PRE;
  • Machine type = Virtual Machine.

So in plain text, the workflow is triggered just after a virtual machine has been provisioned in vRealize Automation (at the start of the MachineProvisioned state).

Now add the EBS iTOP VirtualMachine Create workflow to the Event Subscription.

Event subscription to workflow

Finish/Save the Event Subscription and publish it.

Now request the LAMP Stack blueprint we published in the Catalog.

vRA Request

Before the deployment, the iTop CMDB is not showing any virtual machines.

iTop Initial

The LAMP stack consists of four virtual machines: a load balancer, two application servers and a database. So when the deployment is finished, we should see four virtual machines in our iTop CMDB.

After a few minutes the deployment is finished and shows the virtual machines NLVMUG-034 to 037 under ‘Items > Deployments‘.

vRA Result

When we now check iTop, it shows four new registrations for NLVMUG-034 to 037.

iTop Result

With the following details.

iTop Details

We did the same with an Event Subscription that deletes the iTop registration when a virtual machine is deleted. You can download the workflow here (replace the .txt extension with .workflow; the file name should be EBS iTOP VirtualMachine Delete.workflow). The Event Subscription should trigger this workflow when all of the following conditions are met:

  • Lifecycle state name = VMPSMasterWorkflow32.Disposing;
  • Lifecycle state phase = PRE;
  • Machine type = Virtual Machine.

To get some practice, you could now create a workflow and an Event Subscription that updates the CMDB when CPU or memory is added or removed.
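As a starting point for that exercise: the update workflow would essentially diff the old and new property sets and push only the changed values to the CMDB. A plain-JavaScript sketch under that assumption (the function name is illustrative; removed properties are left out of this minimal version):

```javascript
// Return only the properties whose values differ between two payload
// maps; these are the fields an update workflow would push to the CMDB.
function changedProperties(oldProps, newProps) {
  var changes = {};
  for (var key in newProps) {
    if (newProps[key] !== oldProps[key]) {
      changes[key] = newProps[key];
    }
  }
  return changes;
}
```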

Good luck!

Note: Special thanks to Adam and Dimitri for helping out with this and solving the issues we encountered.