Automate Directory Sync in vRO 7.4

I took quite a break from blogging this past year – and now that it’s VMworld 2018 again, I figured I’d contribute some useful vRO content as it’s always my biggest gripe when I’m there!

One of the big gaps in vRA 7.x has been that you cannot synchronize your identity sources programmatically. Since the transition from PSC/SSO lookups to vIDM, this is a pretty big feature to miss when you want to automate management of your users and groups.

Thankfully, a new API is available in vRA 7.4+, exposed both by the REST API and by the CAFE plugin, making it easy to embed in your workflows.

It’s staggering to me that VMware didn’t include a pretty basic workflow to go along with this, but that’s OK. Here’s one for you that you can extend however you want! Given its relative simplicity, this is probably a good candidate for a vRO Action.

NOTE: This API only works for vRA 7.4 or higher!

First, you’ll need a couple of inputs:
cafeHost (Type: vCACCAFE:VCACHost) – the tenant where the identity source exists.
identitySource (Type: string) – the name of the identity source. The actual domain is extracted in the code.

Once you have those, bind them to a new Scriptable Task and put in the following code:

// Use tenant to create a connection to the Identity Store service.
var service = cafeHost.createAuthenticationClient().getAuthenticationIdentityStoreClientService()
var store = service.getIdentityStore(cafeHost.tenant,identitySource)
var domain = store.domain

// What's the current status of the identity source?
var status = service.getIdentityStoreStatus(cafeHost.tenant,domain).getSyncStatus().getStatus().value()
if(status !== "RUNNING") {
  // The source is currently not performing a sync, so begin one!
  try {
    service.syncIdentityStore(cafeHost.tenant,domain)
    System.debug("Sync of "+domain+" has started.")
  } catch(e) {
    throw("Failed at sync start: "+e)
  }

  // Poll the sync operation to completion, whether it succeeds or fails.
  var retries = 36
  for(var i=0; i<retries; i++) {
    try {
      status = service.getIdentityStoreStatus(cafeHost.tenant,domain).getSyncStatus().getStatus().value()
      if(status == "RUNNING") {
        System.debug("\tSync of domain "+domain+" is currently in progress...checking again in 10 seconds.")
        System.sleep(10000)
      } else {
        System.debug("Sync status of "+domain+" is now: "+status)
        break
      }
    } catch(e) {
      throw("Unable to query status: "+e)
    }
  }
}
A useful aside to this code: in case you ever need to know what possible values exist for any .value() method in the CAFE plugin, you can just output the .values() to the log, and it will return a list.

For this particular operation, these are your options: COMPLETED, FAILED, RUNNING, UNDEFINED

I’ve not run into an UNDEFINED situation before, but these values should help you in extending or mapping out your logic for the workflow. My retry logic is pretty simple and easily modified. Find out what works in your environment and adjust it accordingly!
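To make that branching concrete, here is a tiny plain-JavaScript sketch (no vRO APIs involved) of how you might map those four states to an action in your polling loop:

```javascript
// Map each possible sync status to what the polling loop should do next.
// "wait" keeps polling, "done" exits successfully, "error" triggers your error handling.
function nextAction(status) {
    switch (status) {
        case "RUNNING":
            return "wait";
        case "COMPLETED":
            return "done";
        case "FAILED":
        case "UNDEFINED":
        default:
            return "error";
    }
}
```

Treating UNDEFINED the same as FAILED is just a conservative default; adjust it to suit your environment.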

As an Action, I think the best approach would be to use the code above and, once it completes, return a boolean indicating whether the synchronization finished successfully; you can handle errors from there.

I hope to be bringing more stuff forward for the rest of the year! Happy orchestrating!


Coming soon on this blog, vRA pickup lines. Watch this space!

Deploying OVFs with vRealize Automation

vRA is good at only a few things out of the box – primarily VM template based blueprints.

One thing that doesn’t work at all out of the box is deploying OVF/OVA machines, and I will cover a way to do so in this post. There are many ways to do this thanks to vRO, but I think this way is fairly easy to digest, using appliances a vRA admin would typically have access to.

Use Case

For me it is simple – deploying a fully functional vSphere or vRealize environment requires a number of appliances. Being able to build a full reference VSAN environment with automation is pretty cool and nice to be able to hand back to other members of my team for testing or experimentation. Deploying vRO/vRA is a lot more complex, and with the 7.2 releases of these solutions, you can do the whole thing via API calls.

Caveats and Assumptions

Not all appliances are equal! IHV or ISV appliances often tend to only set an IP address and not enable post-configuration, so while this will showcase a pretty end-to-end solution, other stuff outside of VMware appliances may not work as well.

Also, this post assumes vRA is already set up and is working pointed to an existing vCenter Server for data collection.

This also assumes you have a method of allocating IP addresses in place, as well as a process to pre-create appropriate DNS records for the appliances you build.

Setting the stage

For this particular post, I will detail deployment of a vCenter Server Appliance, version 6.0U2. Some parameters may change between versions but not too much.

When you download the ISO for the vCenter Server Appliance, extract the contents to a temporary folder. Search inside the extracted content in the folder VCSA to find the actual OVA file called vmware-vcsa.ova.

Launch the vSphere Client and deploy the OVA to your choice of datastore/host.

You’ll be prompted during the wizard to populate numerous values that would normally be handled during the VCSA UI Installer – leave them alone and continue.

Do not power on the appliance as part of the process!

Once the OVA has been deployed, simply convert the new appliance into a template.

Build your VCSA blueprint

In vRA, execute a data collection against your vCenter Server resources.

Once that is successful, go to Design->Blueprints and create a new composite blueprint.

Drag a vSphere Machine object onto the canvas, and set the names and descriptions as you like.

In the Build Information tab, specify these values

  • Blueprint Type – Server
  • Action – Clone
  • Provisioning Workflow – CloneWorkflow
  • Clone From – <The template>
  • Customization Spec – <blank>

Lastly, in case it is not part of your existing Property Groups (aka Build Profiles), you will want to pass this property in:

  • Extensibility.Lifecycle.Properties.CloneWorkflow = *,__*

Most documents emphasize passing the VMPSMasterWorkflow32 properties in as part of setting up Event Broker, so this is just calling out an additional requirement.

From here, set your custom properties and build profiles as you like. One recommendation I have for vRA admins is to add a custom property on your endpoint that denotes the vCenter Server UUID – this helps identify which vCenter your machine deployed to, which is needed for steps coming up. If you have a single vCenter, this is optional, but you may want to put it in place now, just in case!

Publish and entitle your blueprint.

Create vApp properties workflow in vRO

The key to making this work is all about the vApp properties of the virtual machine once deployed. Once the VM is cloned from the OVF template you made earlier, but before the VM starts up, you need to be able to inject your custom properties at the vCenter VM level so that the appliance can bootstrap itself and initiate firstboot scripts successfully.

In vRO, create a new workflow. Add an input parameter named payload of type Properties. This will hold the content that the vRA Event Broker will use to populate your cloned VM with what it needs to build properly.

Create an attribute called vc of type VC:SDKConnection and set the value to your vCenter Server.
Create an attribute called iaasHost of type vCAC:VCACHost and set the value to your IaaS host.
Create an attribute called vcVM of type VC:VirtualMachine and leave it empty.
Create an attribute called iaasProperties of type Properties and leave it empty.

In the workflow schema, create a Scriptable Task and bind the payload input parameter to it, as well as the iaasHost, and vc attributes on the input side. Bind the vcVM and iaasProperties attributes on the output side of the task.

Insert the following code:

// query IaaS for the VM entity and its properties.
var properties = new Properties();
// key the lookup on the machine ID from the Event Broker payload
properties.put("VirtualMachineID", payload.machine.id);
var virtualMachineEntity = vCACEntityManager.readModelEntity(, "ManagementModelEntities.svc", "VirtualMachines", properties, null);
// now get the properties you need for the applianceType value
var vmProperties = new Properties()
var props = virtualMachineEntity.getLink(iaasHost, "VirtualMachineProperties");
for each (var prop in props) {
    vmProperties.put(prop.getProperty("PropertyName"), prop.getProperty("PropertyValue"));
}
// get your output to an attribute
iaasProperties = vmProperties

// construct the Managed Object Reference to find the VM object in vCenter
var object = new VcManagedObjectReference()
object.type = "VirtualMachine"
object.value = payload.machine.externalReference

// query the vCenter for the actual object
vcVM = VcPlugin.convertToVimManagedObject(vc,object)

The above code should be fairly straightforward – it is pulling in your IaaS custom properties into a vRO object, and it is also finding the vCenter Virtual Machine so that you can manipulate it. Next will be the code that updates the OVF Properties of the VM in question.

Updating the OVF Properties

Create a Scriptable Task in the schema of your workflow.

On the input side, bind the values iaasProperties and vcVM.

Paste the following code into the task:

// create the properties variable and populate it with the OVF property ids you want to set
// (mapped from the IaaS custom properties collected earlier - the mapping is environment-specific)
var ovfProps = new Properties()
// example: ovfProps.put("guestinfo.cis.appliance.net.addr", iaasProperties.get("VirtualMachine.Network0.Address"))

// construct the reconfiguration specification, which will add the vApp Properties from IaaS to your VM.
var vmspec = new VcVirtualMachineConfigSpec()
vmspec.vAppConfig = new VcVmConfigSpec()

var newOVFProperties = new Array()
// get the existing VM's OVF property list.
// this is needed to match the 'key' value to the 'id' and subsequently 'value' parameters. The 'key' is required.
var vmProperties =
for each(prop in vmProperties) {
    // get the key value
    var key = prop.key
    // define new property spec, which will edit/update the existing values.
    var newProp = new VcVAppPropertySpec()
    newProp.operation = VcArrayUpdateOperation.fromString("edit") = new VcVAppPropertyInfo()
    // assign the new key = key
    // get the values from ovfProps, keyed on the OVF property 'id' = = ovfProps.get( = true
    // add the new OVF Property to the array
    newOVFProperties.push(newProp)
}

// assign the properties to the spec = newOVFProperties
// now update the VM - it should be OFF or this will fail
try {
    // reconfigure the VM
    var task = vcVM.reconfigVM_Task(vmspec)
    System.log("OVF Properties updated for "".")
} catch(e) {
    throw("Error updating the OVF properties: "+e)
}

Save your workflow and exit.

With the workflow completed, now we need to go into the Event Broker and create a subscription so that this workflow will be executed at the correct time during the provisioning process.

Setting up the Event Broker Subscription

In vRA, go to Administration->Events->Subscriptions, and create a new subscription.

For the Event Topic, choose Machine Provisioning and click next.

For the Conditions, you will want to set it up with All of the following and at least these values:

  • Data->Lifecycle State->Name equals CloneWorkflow.CloneMachine
  • Data->Lifecycle State->Event equals CloneWorkflow.CloneMachine.EVENT.OnCloneMachineComplete
  • Data->Lifecycle State->Phase equals EVENT

For the Workflow, choose the workflow you created above.

Ensure this is a blocking subscription, otherwise it will execute asynchronously. Since this absolutely has to complete successfully to ensure a proper build, blocking on this one is important!

And finally, because it always seems to happen to me – make sure to PUBLISH the subscription! Friends don’t let friends troubleshoot an Event Subscription that is a draft…


At this point you can begin testing your deployments, and verify that the workflow is firing at the appropriate time.

Additionally, it may make sense to build an XaaS service that allows you to customize the names and sizes of the appliances, or change the default domain, VLAN, etc.

Thanks for reading! If you have any questions feel free to ping me on Twitter.

If you’re curious as to the other available states you can use, check out the Life Cycle Extensibility – vRealize Automation 7.0 document, starting around page 33. Each release has a version of the document with the latest information, so find the one for your version.



A love letter to vRO’s LockingSystem

Oh LockingSystem, how do I love thee? Let me count the infinite ways.

The LockingSystem scriptable objects have been around since at least the vCO 5.5 days. In later revisions VMware put a few demo workflows to show how you could take advantage of it, but for the most part it’s not talked about too much. I’ll say this though – when you really start building complex workflows with many interlocking systems, LockingSystem saves lives.

A use case that comes almost immediately to mind is vRA provisioning a large number of machines at once. Notably, in this environment we do not use the internal IPAM, but rather the Infoblox IPAM system and the Event Broker to allocate an IP in the PRE BuildingMachine phase.
In a situation where 20-30 VMs are requested for a build, you want to make sure there are no concurrency problems where multiple VMs get the same IP address allocated, right?

Thankfully, implementing the LockingSystem is extremely easy, but there are a few things to keep in mind to make sure it is implemented well.

The process you wish to lock should be 1 or more separate workflows
In the example above, I have created a separate workflow that calls Infoblox IPAM via the REST API. It takes a couple of inputs, and finds the next available IP on the correct subnet, allocates it, and then outputs the network information that I want for VM customization.

A high level workflow schema for LockingSystem.

Above is a very high-level schema of what a typical implementation of LockingSystem would look like.
Notice that if the nested workflow fails, you will want to ensure the lock is removed! Alternatively you could bind errors to continue ahead without an exception, though this gives you more flexibility to recover gracefully.

For the first element in the use case detailed above, it could be as simple as this code:
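A sketch of that snippet is below. The lock name is illustrative, and the two `var` stand-ins at the top only exist so the snippet runs outside vRO; inside a Scriptable Task, LockingSystem and workflow are built in, so you would drop them.

```javascript
// Stand-ins for vRO's built-in objects (remove these two inside vRO).
var LockingSystem = {
    _locks: {},
    lockAndWait: function (lockId, owner) { this._locks[lockId] = owner; }
};
var workflow = { id: "run-1234" };

// Block here until the named global lock is free, then take it.
// First argument: arbitrary lock name; second: this run's workflow token ID as the owner.
LockingSystem.lockAndWait("vRA-IPAM-Allocation", workflow.id);
```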

The first parameter in the lockAndWait method is an arbitrary name. This is a global value, so if you have other workflows that may call this same nested one, it is worth considering having a naming schema for your locks.
The second parameter is the current workflow token ID. I typically like to see which workflow token has the lock if I need to, but this is also a very easy way to lock a workflow without excessive binding of attributes. Alternatives to use would be the workflow runner’s account name if needed.

Similarly, it is simple to unlock the workflow so that the next process in line can proceed.
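The unlock call mirrors the lock, using the same name/owner pair that acquired it (again, the stand-ins at the top are only there so the snippet runs outside vRO):

```javascript
// Stand-ins for vRO's built-in objects (remove these two inside vRO).
var LockingSystem = {
    _locks: { "vRA-IPAM-Allocation": "run-1234" },
    unlock: function (lockId, owner) { delete this._locks[lockId]; }
};
var workflow = { id: "run-1234" };

// Release the lock so the next queued workflow run can proceed.
LockingSystem.unlock("vRA-IPAM-Allocation", workflow.id);
```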

Adjusting LockingSystem concurrency
A single unique locking mechanism is nice, but what if you want to tune the concurrency throughput? This would help limit requests to an external system, plugin, or API so that more than one flow can run, but not overwhelm it. Thankfully the LockingSystem has methods that can help you do this, with a little bit of setup.

Pick a line, any line
Let’s say you wanted to be able to limit your number of locks to a maximum of 5, because the plugin or external system can get overwhelmed.
You can request a lock between 1 and 5, such as “vRA-Machine-Build-1” through “vRA-Machine-Build-5.” All you have to do is generate that suffix number, either randomly or through a controlled loop.

A simple random inclusive number function can be found below, and it is probably useful enough to simply make into a vRO Action for future use!
In this sample, your return value would be a number type, and your two inputs, min and max would also be number type.
// Reference code from Mozilla
function getRandomIntInclusive(min, max) {
    min = Math.ceil(min);
    max = Math.floor(max);
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

Then in your Locking scriptable task, you can call the function or action to return a value from 1 to 5 (or whatever values you want!) and append it.

var lane = System.getModule("us.nsfv.shared.locking").getRandomNumber(1,5)
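Putting the two pieces together outside vRO (using a local function in place of the Action module):

```javascript
// Random inclusive integer, as in the Mozilla reference code above.
function getRandomIntInclusive(min, max) {
    min = Math.ceil(min);
    max = Math.floor(max);
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

// Build a lane-suffixed lock name, e.g. "vRA-Machine-Build-3".
var lane = getRandomIntInclusive(1, 5);
var lockName = "vRA-Machine-Build-" + lane;
```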

Find the first open lane
While the random inclusive number (or in this case, the “lane”) can solve the concurrency, you may want there to be more intelligence – i.e. have the LockingSystem grab the first available lane, whichever one it is. There is a method simply called .lock() that returns true/false depending on whether you successfully acquire a lock of that value.
Thus, you can simply loop through the available locks and wait for one to be true.

var attributeLock = "";
for(var i=1; i<=5; i++) {
    var lockAcquired = LockingSystem.lock("vRA-Machine-Build-"+i, workflow.id);
    if(!lockAcquired) {
        // no luck getting this one, move on to the next lane
    } else {
        // lock acquired, output the lock ID to an attribute so it can be unlocked later.
        System.log("Lock "+i+" acquired!");
        attributeLock = "vRA-Machine-Build-"+i;
        break;
    }

    // reset the loop if you've reached the end
    if(i == 5) {
        i = 0;
    }
}
When this loop runs, it will stay essentially in an infinite loop until a lock is freed up, at which point it will exit and continue with whatever process was locked, until it is unlocked afterward.
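If an infinite wait worries you, a bounded variant of the same sweep is easy. The cap and sleep values below are illustrative, and the three `var` stand-ins at the top (including a stub that pretends only lane 3 is free) only exist so the snippet runs outside vRO:

```javascript
// Stand-ins for vRO's built-in objects (remove these three inside vRO).
// This lock stub pretends only lane 3 is available.
var LockingSystem = { lock: function (lockId, owner) { return lockId === "vRA-Machine-Build-3"; } };
var System = { log: function (msg) {}, sleep: function (ms) {} };
var workflow = { id: "run-1234" };

var attributeLock = null;
var maxSweeps = 60; // illustrative cap on full passes over the lanes
for (var sweep = 0; sweep < maxSweeps && !attributeLock; sweep++) {
    for (var lane = 1; lane <= 5; lane++) {
        if (LockingSystem.lock("vRA-Machine-Build-" + lane, workflow.id)) {
            attributeLock = "vRA-Machine-Build-" + lane;
            System.log("Lock " + lane + " acquired!");
            break;
        }
    }
    if (!attributeLock) System.sleep(5000); // pause before sweeping again
}
if (!attributeLock) {
    throw "Timed out waiting for a build lane";
}
```

This way a stuck external system fails the workflow after a known wait instead of hanging it forever.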

The functions and scripts above are a good starting point in tuning your vRO workflow concurrency. I hope these help you as much as they have helped me out!

vSphere SSL and vCO Part 4 – Venafi API

In parts 1 through 3, we setup workflows to generate OpenSSL configuration files, private key and CSR for an ESXi host.

This portion will concentrate on creating REST API calls to Venafi, an enterprise system that can be used to issue and manage the lifecycle of PKI infrastructure. I find it pretty easy to work with, and know some colleagues who are interested in this capability, so I decided to blog it!

At a high level, this post will build workflows that do the following tasks in the API:

  • Acquire an authorization token
  • Send a new certificate request
  • Retrieve a signed certificate

Let’s get to it!

Setup REST Host and Operations for Venafi

Since there are 3 API calls above, there are 3 corresponding operations that will require setup in vCO.


Find the workflow under the HTTP-REST folder called Add a REST Host and execute it.

Setting up the Venafi REST Host.

Fill in the name and URL fields to match your environment.
The Venafi API endpoint ends with /vedsdk – don’t add a trailing slash!

For the Authentication section, choose Basic.
Then, for the credentials choose Shared Session, and your username and password.

If you haven’t already imported the SSL certificate for Venafi, you may be asked to do so for future REST operations.

REST Operation(s)

Find the workflow under the HTTP-REST folder called Add a REST Operation and execute it.

Setting up the Venafi REST Operation.

Choose the REST Host you just created above in the first field.
Set a name for the Operation – in this case I used Get Authorization.
For the Template URL, input /Authorize/ – remember the trailing slash!
Change the method to POST, and input application/json for the content type.

Submit the workflow, and the operation is now stored.

Repeat the above workflow, but swap out the Name and Template URL as needed with the below values. These will add the necessary operations to request and download signed certificates.

Name: Request Certificate
Template URL: /Certificates/Request

Name: Retrieve Certificate
Template URL: /Certificates/Retrieve

With the REST Host and Operations set up, let’s work on creating our basic workflows.

Workflow: Acquire your authorization token

There are a number of ways to tackle the creation of the workflow, with error handling and the like, so I will just focus on giving the code that is needed to construct the API call and send it out.

Create a new workflow.

In the workflow General tab, create an attribute named restOperation, of type REST:Operation. Click the value column, choose the Venafi operation you created earlier for /Authorize/, and click OK. Click the Output tab and create a new parameter called api_key. This is the resulting authorization key that you can then pass to other flows later.

Drag and drop a new Scriptable Task into the Schema, and bind the restOperation attribute on the IN tab, and the api_key output parameter on the OUT tab, like this:

Now, copy and paste this code into the Scripting tab.

// get the username/password from the Venafi REST Host.
// for a Basic auth host with Shared Session, rawAuthProperties holds [session mode, username, password]
var venafiUser = restOperation.host.authentication.rawAuthProperties[1]
var venafiPass = restOperation.host.authentication.rawAuthProperties[2]
// create Object with username/password to send to Venafi
var body = new Object()
body.Username = venafiUser
body.Password = venafiPass
// convert Object to JSON string for API request
var content = System.getModule("com.vmware.web.webview").objectToJson(body)

// create API request to Venafi
var inParametersValues = [];
var request = restOperation.createRequest(inParametersValues, content);
// set the request content type
request.contentType = "application\/json";
System.log("Request URL for Venafi Token: " + request.fullUrl);
// execute
var response = request.execute();
// handle response and output
if(response.statusCode != 200) {
	System.log("There was an error getting an API key. Verify username and password.")
} else {
	api_key = JSON.parse(response.contentAsString).APIKey
	System.log("Token received. Add header name \"X-Venafi-Api-Key\" with value \""+api_key+"\" to future workflows.")
}

Now you may be wondering, why am I pulling the Username and Password from the host in this way? Shouldn’t there be some input parameters?

While we did build the REST Host object to use Basic Authentication, Venafi actually doesn’t support it – you can only POST the credentials to the API with a simple object and get the API key back.

So this is just how I elected to store the username and password – you could set them as attributes in the workflow, as Input parameters, or link those attributes to ConfigurationElement values if you wanted to.

Assuming your service account used for authentication is enabled for API access to Venafi, you should be able to run the workflow and see the log return a message with your API key!

And since it is an output parameter, you can bind that value to the next couple of workflows we are about to create.

Workflow: Submit a new Certificate Request for your ESXi host

Create a new workflow.

In the workflow General tab, create an attribute named restOperation, of type REST:Operation. Click the value column and choose the Venafi operation you created earlier for /Certificates/Request and click OK.

Attributes for the Request Certificate workflow.

Click the Input tab and create 3 parameters:

  • api_key – type is String
  • inputHost – type is VC:HostSystem
  • inputCSR – type is String
Inputs for the Request Certificate workflow.

These values should be familiar for those who have been following along.
The first is the API key you created earlier so you can pass the key into this workflow and avoid having to login again. Each time you use the Venafi API key, the expiration time is extended, so you should be good to use a single API key for your entire workflow.

Click the Output tab and create a new parameter called outputDN. This value represents the Certificate object created in Venafi at the end of the workflow.

Outputs for the Request Certificate workflow.

Once again drop a new Scriptable Task into the Schema for the new workflow, and bind the restOperation attribute on the IN tab, along with the 3 other inputs api_key, inputHost, and inputCSR. On the OUT tab, bind the outputDN value into the task, so that your Visual Binding looks like this:

Visual Bindings for the Request Certificate workflow.

Now, it’s time to add the necessary code to make the magic happen.

var object = new Object()
object.PolicyDN = "\\VED\\Policy\\My_Folder"
object.CADN = "\\VED\\Policy\\My_CA_Template"
object.PKCS10 = inputCSR
object.ObjectName = // use the host's name for the certificate object in Venafi
var content = System.getModule("com.vmware.web.webview").objectToJson(object)
//prepare request
var inParametersValues = [];
var request = restOperation.createRequest(inParametersValues, content);
//set the request content type
request.contentType = "application\/json";
System.log("Request URL for Certificate Request: " + request.fullUrl);

//Customize the request here with your API key
request.setHeader("X-Venafi-Api-Key", api_key);

//execute request
var response = request.execute();
// handle response and output
if(response.statusCode != 200) {
	System.log("There was an error requesting a certificate. The response was: "+response.contentAsString)
} else {
	outputDN = JSON.parse(response.contentAsString).CertificateDN
}

The only things in this code you will want to change are the PolicyDN and CADN values, as those are unique to each environment. You’ll want to consult your Venafi admin, or look in the /VEDAdmin website running with Venafi for the proper value.

Workflow: Retrieve the signed Certificate

So you’ve gotten this far, and were able to post a new certificate request. The last step is to download it and get ready to upload it to the ESXi host.

Create a new workflow.

In the workflow, create an attribute named restOperation, of type REST:Operation. Click the value field and choose the Venafi operation you created earlier for /Certificates/Retrieve and click OK.

Attributes for the Retrieve Certificate workflow.

Click the Input tab and create 2 parameters:

  • api_key – type is String
  • inputDN – type is String
Inputs for the Retrieve Certificate workflow.

The first value api_key is back once again to re-use the existing API key created before.
The second value is related to the outputDN from the previous workflow. You will be feeding this value into the new workflow so that it knows which certificate object to query and work with.

Click the Output tab and create a new parameter called outputCertificateData. This value represents the signed certificate file encoded in Base64.

Outputs for the Retrieve Certificate workflow.

Now, drop a new Scriptable Task into the Schema for the new workflow, and bind the restOperation attribute on the IN tab, along with the 2 other inputs api_key and inputDN. On the OUT tab, bind the outputCertificateData value into the task, so that your Visual Binding looks like this:

Visual Bindings for the Retrieve Certificate workflow.

Now to execute the API call to get the certificate, using the below snippet of code:

// setup request object
var object = new Object()
object.CertificateDN = inputDN
object.Format = "Base64"
var content = System.getModule("com.vmware.web.webview").objectToJson(object)

//prepare request
var inParametersValues = [];
var request = restOperation.createRequest(inParametersValues, content);
//set the request content type
request.contentType = "application\/json";
System.log("Request URL for Retrieving Signed Certificate: " + request.fullUrl);

//Customize the request here
request.setHeader("X-Venafi-Api-Key", api_key);

//execute request
var response = request.execute();

// deal with response data
if(response.statusCode == 200) {
	// the result is in Base64, so decode it and assign to output parameter.
	var certBase64 = JSON.parse(response.contentAsString).CertificateData
	outputCertificateData = System.getModule("us.nsfv.lab").decodeBase64(certBase64)
	// done
} else {
	System.debug("Unhandled Error with response. JSON response: "+response.contentAsString)
}

Now, once these workflows run in sequence, you should have a Base64 decoded string that looks like a signed certificate file would. We haven’t strung these flows together yet, so don’t panic that it isn’t working! It will all make sense, I promise!

In the next post, we will write that content to disk, and then the fun part: uploading it!

vSphere SSL and vCO Part 2 – Appliance Setup

The first step to getting this process to work is to realize that ultimately under the hood, vCO is a Linux appliance, with a lot of functionality not exposed, or not immediately obvious. There are a lot of tweaks you can make to really enable great functionality, and this process may give you other interesting ideas!

NOTE: The below steps assume vCO Appliance 5.5+

You’ll need to start out by using PuTTY to SSH into the appliance. If SSH is turned off, you’ll either need to use the console, or enable Administrator SSH from the VAMI interface.

Once logged in, change directory to /etc/vco/app-server, and then type vi to open the file in a text editor.

Logging into vCO with SSH.

Inside, you will want to see if you have a line that looks like this:

com.vmware.js.allow-local-process=true

If it doesn’t exist, press i to change to Insert mode, add a new line, and put it in there. Once done, press ESC to exit Insert mode, and type :wq to write the file and quit the editor.

When vCO Server is restarted, you will be able to execute Linux commands against the VM from within your workflows. The catch is, you have to make sure that the vco account on the appliance has the ability to execute it.

To enable this, type vi js-io-rights.conf from the shell. This file may already exist and have some data in it. If not, you get to define explicit rights at this point. Here’s mine for reference:

Example JS-IO-RIGHTS.CONF file.

Add the below lines to the file by pressing i to enter Insert mode again, with each line corresponding to a specific executable on the appliance. The prefix characters add read and execute rights for the vco user.

+rx /usr/bin/openssl
+rx /usr/bin/curl

Press ESC, then :wq to save the file and exit.

With these tweaks enabled, you will need to restart vCO Server. You can do this a number of ways, but since you’re in the shell this is the fastest:

service vco-server restart

Now, you will be able to execute these commands in a workflow when you use the Command scripting object, which will run the command and return the standard output back as a variable, as well as the execution result, such as success or failure!

With that in mind, let’s do a quick experiment in a workflow to ensure it works as intended.

Proof of Concept Workflow Creation

  • Create a new Workflow as you normally would. No inputs are necessarily required for this test as we will get into those values in later posts.
  • Drag and drop a fresh Scriptable Task into the schema, and edit it.
  • Paste the code below into the scripting tab:
// Creates new Command object, with the command to run as your argument
var myCommand = new Command("/usr/bin/openssl version")
// Executes the command and waits for it to finish
myCommand.execute(true)
// Outputs the result code, and the output of the command
System.log("Exit Code: "+myCommand.result)
System.log("Output: "+myCommand.output)

Close the Scriptable Task, run the workflow and check the log – you should see something like this:

OpenSSL version running on the appliance.

If you were to type the same command in the shell, the result would be the same. So while we’re here, let’s update the code in the task to verify cURL also works. Change line 2 in the task to look like this (note the case-sensitive argument!) :

var myCommand = new Command("/usr/bin/curl -V")
cURL version running on the appliance.

You’ll probably note that the OpenSSL version installed on the VCO appliance is the same one that is required by VMware for the entire SSL implementation in the 5.x release! With this working, now we can do some really cool stuff.

In the next post, we will build out the workflow that will create the private key and CSR for any and all of your ESXi hosts! This same flow can be used as the basis for vCenter SSL, vROPS, or even vCO itself!

Working with Customization Specifications in vCO

I recently had a minor project to assist with refreshing a deployment of hundreds of remote backup servers, which were moving from a standard Windows 2008 R2 image to 2012 R2. The process was going to leverage some custom workflows I had written using the Deploy OVF functionality in vCO.

A pretty straightforward setup, so I asked how they planned to do the customization for each site. The answer was essentially: “We’re going to do the customization manually like always.”


I checked into it, and sure enough, that had always been the process. That process included fat-fingering host names, inputting incorrect IP address information, not installing the software properly, not updating the Infoblox IPAM system – the list goes on and on.

I’m an automation freak for a number of reasons, but most of all, this seemed like a perfect use case to leverage vCO capabilities I knew were possible out of the box, and also to do some digging on a few parts I didn’t know about.

The Ingredients

First, I did some REST API testing and querying of our internal Infoblox IPAM system to ensure I could get the correct or available IP addresses on the pertinent location and VLAN programmatically.

Second, I had to build a workflow that created the AD Computer object in advance in a particular OU, so that the company’s GPO would take effect immediately upon joining the machine to the domain. The interesting problem to solve here was using a particular service account to perform the work, rather than the default AD Host.

Finally, I had the team put together a Windows 2012 template that was not sysprepped, so that I could use vCenter’s Customization Specifications to do that for me, along with executing first boot scripts, configuring the Network, join the domain, and the like. This meant figuring out how to programmatically use the Customization Spec APIs, which I wasn’t familiar with.

1/4 Cup REST API – Getting the correct Network CIDR

The first piece I worked on – querying the Infoblox REST API for the relevant networks – was fairly simple after learning some syntax.
Each remote location had specific Extensible Attributes that contained the VLAN ID and the Location code. The location code corresponded to the Datacenter object in our vCenter Server, so creating a correlation just got a whole lot simpler.

Create a REST Host using the default Add a REST Host workflow, substituting your IPAM server URL and credentials appropriately.

Configuring IPAM REST Host.

Once that’s created, run Add a REST Operation workflow so that we can add the actual REST API URL to the host just created with a couple of parameters that can be invoked at runtime. This operation will be a GET operation, also.

For the URL Template field, you will want to put this:

/wapi/v1.6/network?_return_fields%2B=network&*SiteNumber~:={MY_LOCATION_ID}&*VLAN={VLAN_ID}

This URL may be a bit of a puzzle, so I’ll break it down:
/wapi/v1.6/network – the base portion, indicating we are querying available network objects in the API.
?_return_fields%2B=network – this limits the response to just the fields we want, rather than the full object. In this case, I just wanted the CIDR value.
&*SiteNumber~:={MY_LOCATION_ID} – the ampersand chains another parameter onto the query, and the asterisk preceding “SiteNumber” indicates an Extensible Attribute inside the IPAM database. Finally, the ~:= indicates a Regular Expression search.
&*VLAN={VLAN_ID} – another parameter of the query, using the VLAN Extensible Attribute.

NOTE: The Extensible Attributes used here were unique to this installation – you will have to create your own or ask your Network team for the attributes they use!

Of note, there is a pretty great example page full of Infoblox IPAM API calls you can find here:

Upon executing the Invoke a REST Operation workflow, you can plug in the values for MY_LOCATION_ID and VLAN_ID and assuming it exists, you will get a JSON string back with the network CIDR that can be used for further parsing. In my case, I only needed to replace the last octet with a specific value that was going to be used at all locations, as well as the gateway.

Now, how to connect to AD using a specific service account rather than the default?

1/4 Cup Active Directory – Connecting with a specific service account

I would start by getting version 2.0 or higher of the plugin, as even up to vRO 6.0.1 didn’t ship with it.
You can download it from this link:

This version enables multiple AD connections which is required for this use case. If you’re lucky enough to be on vRO 7.0 or higher, you’re good!

Run the Add an Active Directory Server workflow and configure it to use a Shared Session with your service account.

With that done, create a Scriptable Task in a workflow that has a single output binding, called adHost, and then copy in the following code:

var allAD = AD_HostManager.findAllHosts()
for each(ad in allAD) {
  if(ad.hostConfiguration.sharedUserName == "LAB\\service_account") {
    adHost = AD_HostManager.findHost(
    System.debug("AD Connection ID: "
  }
}

In this code, we’re essentially getting a list of all available AD Host connections, then looping through them to find the one assigned to our particular service account, and saving it to an attribute named adHost. This attribute can then be bound to another workflow or element to create the AD Computer Account, along with all other methods available in the plugin.

1/2 Cup Customization Spec Manager API – A Journey

This part was fairly new to me. I hadn’t tried using the API to tinker with Customization Specs before, so it was an interesting journey, and I learned both the hard and the subsequently easy way of accomplishing my task.

My initial thought on this was to create a specification for each location with the NIC1 interface configured using the IP settings gathered from the REST API call. After some tinkering and testing, I was able to create a Scriptable Task that did just that. Running my workflow to create a pile of hundreds of Customization Specs all named and IP’d properly was admittedly pretty cool to me. But, when I attempted to clone one of my test templates with the spec, it never completed. The reason?

The vCenter public key error.

As it turns out, when creating a Customization Spec the object is encrypted using the vCenter public key. This is to protect the stored credentials for the local Administrator account, as well as the account used for joining the system to the domain. Digging into the API Explorer shows that you can set a clearText property to true to bypass this, but it doesn’t help, as it seems the whole object is encrypted. Of note, you also get this error pop-up if you export a Customization Spec from one vCenter to another.

But once you re-input those credentials and save the Specification, the cloning works as expected. So, can I modify the Customization Spec during workflow runtime? Turns out, that is the best way to approach this problem.

In vCO there is no data type that corresponds to a Customization Spec, so to pass it around, you’ll need to use the wonderful Any type.

To begin, create a Scriptable Task with inputs for the IP Address, Subnet Mask, and Gateway variables, and a single output of type Any to hold the new version of the Customization Spec.

To get a Customization Spec into a variable you can manipulate, you can use the following bit of code using a sample input type of VC:Datacenter, named inputDC.

var customizationSpec = inputDC.sdkConnection.customizationSpecManager.getCustomizationSpec("My_Customization_Spec")

Now, we’ll populate the Default Gateway with the one you input, either from Infoblox IPAM, or through a normal string input.

// gateway - the input type must be an array of a single value.
// ('inputGW' is the workflow's gateway string input)
var staticGW = new Array()
staticGW.push(inputGW)
customizationSpec.spec.nicSettingMap[0].adapter.gateway = staticGW

Here, we declare a Fixed IP type, and set it to a value from your Infoblox IPAM query, or string input.

// Static IP - input type must be declared
var staticIP = new VcCustomizationFixedIp()
staticIP.ipAddress = inputIP
customizationSpec.spec.nicSettingMap[0].adapter.ip = staticIP

Finally, a simple string assignment for the subnet mask.

// Subnet Mask - can simply be a string value
customizationSpec.spec.nicSettingMap[0].adapter.subnetMask = inputNetmask

With the adjustments to the spec made, assign the modified specification to the output attribute.

// assign the updated spec to the output
outSpec = customizationSpec.spec

With that done, you can now use the CloneVM_Task API to create a clone of the VM template with the specification you just created. You’ll need to make sure you have a way of applying the target host, resource pool, datastore, as well as the Customization Spec to ensure this is successful.

It is worth noting that you did not actually modify the Customization Spec in vCenter directly – you just grabbed it, messed with it and sent it on its way! This makes the Specification useful for any environment, any location, any vCenter!

I hope you found this useful. If you have any questions or thoughts, hit me on Twitter or email!

Deploying OVF Templates with VCO

A feature that I have ended up using a lot lately, especially in delegating large deployments through workflows, is the vCenter Plugin’s ImportOVF feature.

No workflows come with the appliance that handle deploying OVFs, as there are a lot of variables involved. But once you know how to construct the request once, it’s really helpful to have handy!

To call the importOVF method, these are the arguments / types you will need to provide:

ovfUri (string) : This is the URI to the .OVF descriptor file. It can be either a valid HTTP link, or if you have a mounted share, it could be a local FILE:// location.
hostSystem (VC:HostSystem) : You’ll have to specify the ESXi target host with this argument.
importFolderName (string) : You can specify a VM folder name here that already exists – if you don’t have one, you should put in “Discovered virtual machine,” as it should always exist by default.
vmName (string) : This argument is the VM name as it will exist on the target host.
networks (array of VcOvfNetworkMapping): This argument is to map each virtual NIC interface from its source portgroup to the target portgroup.
datastore (VC:Datastore) : The datastore where the VM will reside.
props (array of VcKeyValue) : This is an optional parameter that can be used to provide inputs for the OVF settings, if they are being used.

Pointing to the OVF Files on a local share

It’s safe to say that most VCO appliances or installations won’t have direct internet access to pull OVFs from the Solution Exchange.
If you have mounted a CIFS share to the VCO appliance, you can access the OVF files there. The path requires additional escape slashes due to how VCO reads the attribute, so it would look something like this:


In this example, I have a CIFS share mounted on the appliance at mount point /mnt/cifsShare.
You will want to make sure that you set appropriate read/execute permissions in the js-io-rights.conf file to this location.
If you have no idea what that means, go HERE.

Setting up the OVF Network Mappings

The only thing that needs a little explanation on using this method is the VcOvfNetworkMapping portion of things.
If you think about it, during the deployment of an OVF in the vSphere Client, there is a window that asks you to map between the source and destination networks.

The vSphere Client OVF Network Mapping menu.

The ‘source’ network is whatever the portgroup was called when it was exported. You just need to map it to the destination portgroup on the target host. This requires you to know both sides so you can code for it, if you need to.

Below is some sample code you can use in a Scriptable Task to construct the network mapping values you need to execute the method.

// setup OVF to move the 'source_network' network to 'target_network'
var myVcOvfNetworkMapping = new VcOvfNetworkMapping() ; = "source_network"
// check for 'target_network' Portgroup on the ESX host bound to the task
for each(net in {
  if( == "target_network") { = net
// create empty array of Network Mappings
var ovfNets = []
// add the mapping to the array

If you have multiple NICs on different networks, simply repeat the steps above and adjust the mappings.
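
If you would rather drive that repetition from data, something like the following sketch works. Plain objects stand in here for the VcOvfNetworkMapping and host network objects used in vCO, and the function name is mine:

```javascript
// Build one mapping entry per NIC from a source->target lookup table.
// In vCO, 'target' would be resolved to the matching
// object rather than kept as a string.
function buildNetworkMappings(sourceToTarget) {
  var mappings = [];
  for (var sourceName in sourceToTarget) {
    mappings.push({ name: sourceName, network: sourceToTarget[sourceName] });
  }
  return mappings;
}
```

Feeding it {"source_network": "target_network", "source_dmz": "target_dmz"} would give you the two-entry array to pass as the networks argument.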

Optional: OVF Properties

If you are deploying OVFs that utilize OVF properties, you can create an object to specify them.
To ensure you get the right values, you may want to deploy an example OVF, and then check the OVF environment variables in the VM settings.

Here’s an example:

OVF Properties for a VM.

Given the above, notice the PropertySection which has the values you want. Here’s a bit of example code to populate these values:

// Create empty array of key/value pairs
var keys = []
// Create new key/value for OVF property
var myOVFValue01 = new VcKeyValue();
myOVFValue01.key = "OVF.OVF_Network_Settings.Network"
myOVFValue01.value = ""
// Create new key/value for OVF property (that totally doesn't exist in the shot above, but you get it)
var myOVFValue02 = new VcKeyValue();
myOVFValue02.key = "OVF.OVF_Network_Settings.Netmask"
myOVFValue02.value = ""
// add both of them to the array
keys.push(myOVFValue01, myOVFValue02)

Add the array to your method call and when the VM boots, depending on the underlying machine, it could auto configure! If you integrated this capability with an IP Management system, you could automate provisioning of OVFs top to bottom!

Troubleshooting the importOvf workflow
The VcPlugin.importOvf() method doesn’t give too much back. It is recommended to enable DEBUG level logging on the Configurator page for VCO during your testing of this plugin so you can narrow down what the problem could be. When the call fires, you will see a log entry like this:
[WrappedJavaMethod] Calling method : public com.vmware.vmo.plugin.vi4.model.VimVirtualMachine com.vmware.vmo.plugin.vi4.VMwareInfrastructure.importOvf(java.lang.String,com.vmware.vmo.plugin.vi4.model.VimHostSystem,java.lang.String,java.lang.String,com.vmware.vim.vi4.OvfNetworkMapping[],com.vmware.vmo.plugin.vi4.model.VimDatastore,com.vmware.vim.vi4.KeyValue[]) throws java.lang.Exception

The most likely cause of errors in the method is either a missing portgroup or an incorrect key value. Since just about everything is passed as strings, you need to make sure that the portgroup exists on the target host and that its name matches exactly.
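
A cheap pre-flight check can catch that mismatch before importOvf ever runs. This is a sketch – the function name is mine, and in vCO the second list would be built from the .name of each entry in

```javascript
// Return any source portgroup names that are NOT present on the target host.
// Comparison is exact and case-sensitive, just like the import expects.
function findMissingPortgroups(requiredNames, hostPortgroupNames) {
  var missing = [];
  for (var i = 0; i < requiredNames.length; i++) {
    if (hostPortgroupNames.indexOf(requiredNames[i]) === -1) {
      missing.push(requiredNames[i]);
    }
  }
  return missing;
}
```

If the returned array isn't empty, fail the workflow with a clear message instead of letting the import error out.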

Thanks for reading!

Workflowing with VUM and VCO – The Basics

If you’re using VCO/VRO as your automation engine, sooner or later you will probably add the vSphere Update Manager plugin.
Makes sense, right? The opportunity to schedule or delegate the ability to update and patch your hosts is pretty compelling.

It’s unfortunate that getting up and running with VUM isn’t quite as simple as it looks on the surface. In this particular case, PowerCLI wins for its ease of integration.
Hopefully, this post helps turn that tide back.

It’s a little bit behind

The installation of the plugin is just like any other – upload it from the Configuration interface on port 8283. I don’t think it really even needs a reboot.
From there, you have to punch in your vCenter URL – which may not be obvious to many as there is little help or documentation.

So just to be clear, add the URL like this:

You can also add others in the list if you have multiple vCenters in the same authentication scope in a similar way.

Next up, check your plugin status within the VCO client. Inevitably you will run into an error that simply says ERROR IN PLUGIN.
Unless you are the white knight of IT, this isn’t too helpful.
If you see this, I’m willing to bet that it’s because you didn’t import the SSL certificate to the trusted store.
How would you know to do that? You wouldn’t, unless you like staring at logs set to DEBUG level!

So, how do I import the certificate?
Easy – just point VCO to the SOAP endpoint of your VUM box. You can get the service port piece of this information from the Admin section of the Update Manager plugin through the vSphere Client. You can do this through the Configurator page too, but since Configurator is going away, this is probably the best way.

Locating the SOAP port for VUM.

Now, you can run a workflow to import the VUM certificate into the SSL Trust Store.
You can find the builtin workflow in Library->Configuration->SSL Trust Manager.
Choose to run the Import a certificate from URL workflow.

The Import Certificate workflow.

Execute the workflow, and for the input, use the FQDN to the server running the VUM service, and append port 8084 to the end, like you saw earlier.
The FQDN portion is important! If you don’t put it in there, you will likely have an error.

Importing the VUM SSL Certificate.

Once the certificates are imported, relaunch your VCO client. After you login, you should see some progress being made.


That wasn’t so hard

So next up, you just bind a HostSystem value to a workflow that remediates and you’re good right?
Unfortunately not quite yet. But we’ll get there!

VUM uses a completely different service and object model unrelated to vCenter, thus direct integration is not as simple. Out of the box, you have to write your own code and do some heavy debugging in the logs.

Connecting the dots

The first thing we will do is make a simple Action element that takes an input of type VC:HostSystem and extracts the necessary info out of it to create its VUM-equivalent object.

Digging into the API Explorer, look for the VUM:VIInventory Scriptable Action type. This will tell you what you need in order to construct the corresponding object.

The VUM:VIInventory Type.

Thankfully, this is a pretty simple type to instantiate – it only requires 4 arguments to construct.
VumVIInventory(string): The string to input here is the vCenter Server.
id: This is the “vSphere Object ID” – essentially the Managed Object Reference ID.
name: This is the host object name in vCenter, whether it’s the FQDN or if you have it connected by IP. (Shame on you!)
type: This is the asset type. Keep in mind, VUM has the ability to patch Virtual Appliances too, so specifying this is required.

So, let’s get some code in place to make the VC -> VUM conversion happen! Create a new Action, and use the below setup as your guide.

Setting up the conversion VCO Action.

Wait, why is the return type an Array?
The VUM plugin expects an array of VUM:VIInventory, even if that means it is an array of one object.

Here’s the Action code to use, with some notes.

// create the new VUM Inventory object and assign values.
// first define a new instance of the object, pointing to the vCenter Server of the host
var vumItem = new VumVIInventory(;

// get the Managed Object Reference = inputHostSystem.reference.value;
// get the Managed Object Type
vumItem.type = inputHostSystem.vimType;
// get the ESXi Host name =;

// the VumVIInventory object must be an array, so create a dummy one and push the host value in
var vumHosts = [];
vumHosts.push(vumItem);

// return the array of VUM objects (even if it is just one)
return vumHosts;

With this new instance of a VUM:VIInventory type, you can bind it to a Remediate Host workflow as you normally would for your patching pleasure, right?
Theoretically, yes. But you may want to check something else before celebrating.

java.lang.NullPointerException -or- Lessons in Error-Checking

One thing that you will want to verify prior to attempting to remediate using the VCO plugin is whether or not a set of VUM Baselines is attached.
If you have no baselines attached, or not specified, your Remediate workflow will error out and throw you a generally unhelpful message.
Here’s how you can check for, attach, and bind the necessary data to the ESX host you want to remediate.

There is an Action as part of the plugin that will attach Baselines to a host object, but you have to tell it which ones you want. Below is sample code you can use in a Scriptable Task (or Action) to output a list of Baselines in your vCenter Server. Since you may have specific Baselines you wish to use, you’ll have to modify it to your liking – but it should be pretty easy.

The input binding on this task can be any object – for this example, it is the ESX host.

// create a search query for VUM baselines whose name begins with "SuperCool"

// query all baselines on the VM Host's vCenter Server
// ('inputHostSystem' is the ESX host input binding)
var searchSpec = new VumBaselineSearchSpec(;
// define regex to search baselines
var expression = "^SuperCool"

// get the list of baselines
var baselineList = VumObjectManager.getFilteredBaselines(searchSpec)
// VumBaseline must be an array, so make a dummy one
var baselineArray = []

// Loop through the findings and if they match, add to the baseline list.
for each(baseline in baselineList) {
  if( {
    System.log("Baseline: "
    baselineArray.push(baseline)
  }
}

// assign the array values to the output binding, which is 'vumBaselines'
// if this were an Action, you would just return 'baselineArray'
vumBaselines = baselineArray

So after this, you’ll have your hosts to Remediate, and the Baselines you wish to use for the Remediation. Simply bind these arrays to the inputs of the Remediate workflow, and you’re off to the races!

As an aside, I’m hopeful the next iteration of the plugin, along with the 6.x release with VUM integrated into the VCSA will make life simple for us Orchestrator types. We will see!

VMware VSA Deepdive – Part 5 – Use SOAP!

Fake Edit: It has been a while! It’s been lots of great weather, people visits, and then VMworld happened. Time to get back on track!

In the last post about the VSA, we leveraged the SSH plugin in VCO to send the necessary command to the host that would force the VM to power off, as we can’t do it from vCenter. There are pros and cons to that approach, though.

First of all, it’s not particularly great from a security perspective – you have to have SSH open and the service started to make that happen. Depending on your environment, this may not be seen as a good thing.

You’re also using the root credentials to accomplish the feat. You could use another one, but that’s a whole lot of work just to set that whole thing up in preparation for this scenario. You’d have to take into account rolling the root password and how you could work that into the workflow and managing it over time.

And finally, related to the above: this process isn’t going to show up in a log very easily, since you’re bypassing both the API *AND* vCenter.

So, given these faults I needed to find another way to proceed. To be fair this process also required root to the host so it wasn’t that much better, but it would at least show up in host logging. The answer?

SOAP. The yardstick of all APIs.

Yep. I was desperate. But if it helps me to automate working with 1500+ hosts, it’s WORTH IT.

I want you to hit me as hard as you can

I haven’t had to make SOAP requests to anything in ages, so I was pretty rusty. Thankfully VCO to the rescue again with a built-in SOAP plugin.
First things first. Make a new Workflow and define two input Parameters – one for the ESXi host, and the VSA VM you wish to power down.

Inputs for the SOAP workflow.

For your attributes, define the following as type String:

  • inputHostWSDLUri – this is to specify the WSDL URI used to talk to the ESXi host.
  • inputHostPreferredEndpoint – this is for talking to the API endpoint.
  • inputHostName – this is just to hold a string value of the ESXi host for later.

The rest of the attributes in the workflow can simply be bound as you add the other workflows that come with VCO.

Preparing the SOAP Host entry

One thing to keep in mind – when adding an ESXi host as a SOAP Host, it sets some values automatically that do not allow this to complete as expected. The first problem is the SOAP Endpoint and the SOAP WSDL URI. Both of these, when enumerated by the SOAP plugin, point to https://localhost, which makes sense when the ESX host is making calls to itself, but not for VCO reaching out to it remotely. The first order of business is to fix these values.

Create a Scriptable Task element in your schema, and bind the input Parameter inputHost to it. Bind the 3 attributes you defined above for output. Then, input the code found below.

Setting up the WSDL and Endpoint URI.

This code is pretty straightforward. It is simply replacing the values with the correct ones to perform remote SOAP calls.

Next up, drop a copy of the Add a SOAP Host workflow into your schema. Bind the inputHostName and inputHostWSDLUri values to the name and wsdlUri parameters, and bind the rest to new attributes/values as you desire.

Binding new attributes to the Add a SOAP Host Workflow.

You’ll need to provide things such as the username/password to the host, timeout values and other values, all of which can be static values, or linked to a Configuration Element.

For the OUT parameter of this workflow element, bind it to a new attribute named soapHost so we can use it later.

Modify the SOAP Host

Before you proceed, you need to update the new SOAP Host with the new value of inputHostPreferredEndpoint.
Drop a copy of the Update a SOAP Host with an endpoint URL workflow into the schema.
Simply bind the attributes soapHost and inputHostPreferredEndpoint to their respective input parameters, and bind soapHost to the output parameter, so that it completes.

Using SOAP to find the VSA and shut it down

Add a Scriptable Task to your schema. On the inputs, bind the soapHost, inputVM, inputUser, and inputPassword attributes.

Below is the code you can copy/paste, with comments as needed.

// get the initial operation you want to go for from the SOAP host specified.
var operation = soapHost.getOperation("RetrieveServiceContent");
// Once you have the SOAP Operation, create the request object for it
var request = operation.createSOAPRequest();

// set Parameters and their attributes.
request.setInParameter("_this", "ServiceInstance"); // creating the input Parameter itself
request.addInParameterAttribute("_this", "type", "ServiceInstance");

// make the request, save as response variable
var response = operation.invoke(request);

// retrieved values to be passed on down the line
var searchIndex = response.getOutParameter("returnval.searchIndex")
var rootFolder = response.getOutParameter("returnval.rootFolder")
var sessionMgr = response.getOutParameter("returnval.sessionManager")

// get Login Session to add to future headers.
var hostLoginOp = soapHost.getOperation("Login")
// create Login request
var loginReq = hostLoginOp.createSOAPRequest()
loginReq.setInParameter("_this", sessionMgr) // using value from initial query
loginReq.addInParameterAttribute("_this", "type", "SessionManager")
loginReq.setInParameter("userName", inputUser)
loginReq.setInParameter("password", inputPassword)
var loginResp = hostLoginOp.invoke(loginReq)
var sessionKey = loginResp.getOutParameter("returnval.key")

// find the VSA VM on the host.
var vmoperation = soapHost.getOperation("FindChild")
System.log("VM Search Operation is: "
var vmreq = vmoperation.createSOAPRequest()
// define parameters
vmreq.setInParameter("_this", "ha-searchindex") // get the SearchIndex
vmreq.addInParameterAttribute("_this", "type", "SearchIndex")
vmreq.setInParameter("entity", "ha-folder-vm") // representing the root VM Folder
vmreq.setInParameter("name", // your search criteria

// send request, get the response in a variable
var vmresp = vmoperation.invoke(vmreq)
// assign moref to variable
var vmMoRef = vmresp.getOutParameter("returnval")
// this log shows the output value
System.log("MoRef of VM ["+"] on ["+"]: "+vmMoRef)

// now that you have the MoRef of the VSA VM, you can kick off the Power Off task with a decision/parameter.
var pwroffOp = soapHost.getOperation("PowerOffVM_Task") // get the Power Off operation
var pwroffOpReq = pwroffOp.createSOAPRequest() // create the Power Off request
// define parameters
pwroffOpReq.setInParameter("_this", vmMoRef) // assign the MoRef of the VM to power off
pwroffOpReq.addInParameterAttribute("_this", "type", "VirtualMachine")
// shut off the VM by executing the request.
var offresp = pwroffOp.invoke(pwroffOpReq)

And there you have it. If you direct connect to the ESXi host when you run this workflow, you will see a task for powering off the VM appear and you are good to go.

One thing I prefer to do at the end of this workflow is to drop in the Remove a SOAP Host workflow and bind appropriately so that my host list doesn’t get too large, but this is obviously optional.

A final note on SSL Certificates

If you run this as it is out of the gate, you will probably get a pop-up regarding whether to trust the SSL certificate of the host.
Of course in a perfect world, all of your certificates are trusted top to bottom and are maintained. But anyone who has tried to do this at scale has struggled and probably doesn’t bother. In order to bypass this, you’ll need to make a few adjustments and duplicate the stock workflows so you can make changes to them.

In the VCO workflow list, go down to Library -> SOAP -> Configuration and right-click Manage SSL Certificates.
Choose Duplicate Workflow.
Duplicate SSL Workflow
Save your copy of the workflow wherever you like. You may want to change the name a bit to reflect that it isn’t the standard workflow.

Now, you can edit the workflow and make a minor adjustment. Here is the workflow by default.

The default Manage SSL Certificates workflow schema.

You’ll notice the Accept SSL certificate schema element. Simply click it and delete it from the schema.

The "custom" Manage SSL Certificates workflow schema.

Finally, click on the General tab of your workflow, and look for an attribute named installCertificates. Inside of the value, input the text Install. The workflow element Install certificate does a simple check to see if the attribute is requesting to install, and continues from there.

Updating the new Manage SSL Certificates attributes.

As a final step, you will want to duplicate the Add a SOAP Host workflow, and replace the Manage SSL Certificates element with this new one you have created.
Ensure that the two attributes are rebound to the values the old one was using.

Re-adding the workflow bindings for SSL Certificates.

With these changes, you can Add a SOAP Host and not get stopped for SSL verification.

Of course, this isn’t really a best practice, but it gets the job done.

Next up, the final and maybe the most elegant solution for working with the VSA VM Power Off situation.


VMware VSA Deepdive – Part 3 – Enter the Orchestrator

Previously, we set up our VSA home lab and beefed up our knowledge of WSCLI.
There may have also been some general complaints about the limitations of the VSA solution from a management perspective.

Now, it’s time to bring things together with an unlikely savior.

(Just kidding, it’s VCO to the rescue as always.)

What do you mean?

VCO being the general swiss-army knife tool it is, you probably already know it has virtually unlimited possibilities.
So how can VCO and VSA coming together make magic happen? There’s no native plugin!

Factoid 1: WSCLI is a Java JAR – portable code that can run anywhere a JRE is available.
Factoid 2: VCO's server software is Tomcat, which itself runs on a JRE.

For me, reading this excellent post on the VCOTeam blog was a Eureka! moment. Surely, if I can find where the JRE is installed on the VCO Appliance, I can execute WSCLI commands programmatically, right?

I feel like the answer is Yes?

Nailed it.  Here’s what you need to set this up.

Orchestrator Appliance Setup

First, you need to grant VCO the ability to create local processes (i.e., execute local code).

To do this, SSH into the VCO Appliance as root.
Then, using VI as your text editor, type this in the SSH prompt:
vi /etc/vco/app-server/vmo/conf/

Once in the file, press I to enter Insert Mode, so you can add/edit content to the file.
Add this line anywhere in the file:
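On vCO appliances of this era, the property that enables local process creation is typically the following (verify this against the documentation for your vCO version before relying on it):

```
com.vmware.js.allow-local-process=true
```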

Once added, press ESC, then type :wq to save changes and exit.

NOTE: Once you have made this change, a reboot of the appliance is needed.

Next up: since SSH is enabled by default on the VCO Appliance, SCP is also available.
Using your favorite SCP Client, login to your appliance and upload WSCLI.jar to a folder of your choosing.
For purposes of this post we will put it in the folder /var/run/vco.

Now that you have the files there, the next thing to do is find the path to the JRE.
Thankfully, it’s extremely easy to find – simply type which java at the SSH prompt.

Finding the JRE on the VCO Appliance.

Next, you need to edit your js-io-rights.conf to allow VCO to read and execute content from both folders.

To edit the file, start by typing vi /etc/vco/app-server/js-io-rights.conf
Press I to change to Insert Mode. You can now edit the file appropriately.
Here is what my js-io-rights.conf file looks like, in case you want to do a comparison.


Specifically, my changes are these two lines, granting read/write/execute on the WSCLI folder and read/execute on the JRE folder:
+rwx /var/run/vco
+rx /usr/java/jre-vmware/bin

Once you have these lines in your file, press ESC to leave Insert Mode. Then type :wq and press Enter to write your changes and quit VI.

So, now you have the path to your JRE, and your WSCLI, and your permissions to the necessary folders. Let’s conduct a quick test!

Run this from the SSH prompt: /usr/java/jre-vmware/bin/java -jar /var/run/vco/WSCLI.jar

WSCLI running in VCO!


Create your Configuration Elements for WSCLI

Configuration Elements are your friend for using this tool. Since the path to the JRE and your WSCLI upload location shouldn't really change, why not make them global variables? That's essentially what Configuration Elements are for.

In the VCO Client, change to Design View if you aren’t already there. Then, go to the Configuration Elements Tab.

The VCO Configurations Tab.

Create a new Folder (or not), and create a new Element, as I have done in the above screenshot.
Once you have created it, you will go straight into the menu to customize the element.
Our goal here is to define two attributes: the path to JRE, and the path to WSCLI.jar.
Simply create two attributes, and populate them with the paths you have found earlier.
These paths are CASE-sensitive!

The WSCLI Configuration Element.

Save and Close your changes.

Using the Configuration Element in a Workflow

This one is almost TOO easy, and you’ll immediately see the value of Configuration Elements.

Create a new Workflow, and add two Attributes to it.  Once you have done so, you’ll see a small icon similar to Configuration Elements.


Click the small arrow highlighted above for the pathWSCLI attribute, and you will get a dialog window to find your Configuration Element.

Available ConfigurationElements are listed in this window.

You’ll find your previously created ConfigurationElement, and you can bind the attribute in your workflow to it, and use it however you want in the Workflow!
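If you'd rather skip the attribute binding and read the element directly in script, the vRO scripting API can fetch it at runtime. This is a hedged sketch: the "WSCLI" category and element names are assumed from the screenshots above, so adjust them to whatever you actually created.

```javascript
// Hedged sketch: reading the Configuration Element from script instead of
// binding attributes in the workflow editor. "WSCLI" is the folder (category)
// and element name assumed from this post -- adjust to your own names.
var category = Server.getConfigurationElementCategoryWithPath("WSCLI");
var element = null;
for each (var ce in category.configurationElements) {
    if (ce.name === "WSCLI") { element = ce; }
}
// Pull the two path attributes defined earlier
var pathJRE = element.getAttributeWithKey("pathJRE").value;
var pathWSCLI = element.getAttributeWithKey("pathWSCLI").value;
```

The trade-off: script lookups keep workflows free of bindings, but the attribute-binding approach shows the dependency explicitly in the workflow editor.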

Running WSCLI in a Workflow

The VCO Command Class.

The final step is using the Command class in VCO. This class is meant to perform the local execution of code and return the output. So if you run a command like ls -l you’ll get a list of files in a directory. But obviously it’s better when you can run WSCLI in a workflow.

In the Schema of the workflow, drop a Scriptable Task in, and bind the attributes you created to it.
To execute WSCLI commands from the appliance you will be using the Command Scripting Class, which is detailed below.

Not much there, but that’s OK! If you’re this far, I suspect you’re thinking of use cases for this, of which there are many!

Follow the code snippet below as an example. Note that pathJRE and pathWSCLI are the attributes you defined earlier.

// define the command line to execute
var cmdString = pathJRE + " -jar " + pathWSCLI + " help shutdownSvaServer"
// create a new command object with your command string
var myCommand = new Command(cmdString)
// execute the command and wait for it to complete
myCommand.execute(true)
// show the output in the Logging tab
System.log(myCommand.output)

At this point, you should see the results of WSCLI in the Logging tab.
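Once you move past help output, you'll likely want the workflow to fail when WSCLI returns an error. The Command class reports the exit code from execute(), so a hedged sketch of that check might look like this (cmdString is built as in the snippet above):

```javascript
// Run WSCLI and fail the workflow on a non-zero exit code.
var myCommand = new Command(cmdString)
var exitCode = myCommand.execute(true)
if (exitCode !== 0) {
    // Surface the failure, including whatever WSCLI printed
    throw "WSCLI exited with code " + exitCode + ": " + myCommand.output
}
System.log(myCommand.output)
```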

WSCLI - now in VCO!

You can now execute WSCLI in your workflows, but that is just part of the ongoing puzzle. As we know already, WSCLI can’t actually power down the VSA node. How do we tackle that when we’re denied that ability in vCenter?

In the next few posts, I’ll detail the various ways I was able to accomplish this, each with their pros and cons. In the meantime, I’ll leave you to create a few Actions or Workflows to add WSCLI functionality!