
Deploying OVFs with vRealize Automation

vRA is good at only a few things out of the box – primarily VM template based blueprints.

One thing that doesn’t work at all out of the box is deploying OVF/OVA machines, and in this post I will cover one way to do it. There are many ways to accomplish this thanks to vRO, but I think this approach is fairly easy to digest, using appliances a vRA admin would typically have access to.

Use Case

For me it is simple – deploying a fully functional vSphere or vRealize environment requires a number of appliances. Being able to build a full reference VSAN environment with automation is pretty cool and nice to be able to hand back to other members of my team for testing or experimentation. Deploying vRO/vRA is a lot more complex, and with the 7.2 releases of these solutions, you can do the whole thing via API calls.

Caveats and Assumptions

Not all appliances are equal! IHV or ISV appliances often only set an IP address and don’t enable further post-configuration, so while this showcases a fairly end-to-end solution, appliances from vendors other than VMware may not work as well.

Also, this post assumes vRA is already set up and is working pointed to an existing vCenter Server for data collection.

This also assumes you have a method of allocating IP addresses in place, as well as a process to pre-create appropriate DNS records for the appliances you build.

Setting the stage

For this particular post, I will detail deployment of a vCenter Server Appliance, version 6.0U2. Some parameters may change between versions, but not by much.

When you download the ISO for the vCenter Server Appliance, extract the contents to a temporary folder. Search inside the extracted content in the folder VCSA to find the actual OVA file called vmware-vcsa.ova.

Launch the vSphere Client and deploy the OVA to your choice of datastore/host.

You’ll be prompted during the wizard to populate numerous values that would normally be handled during the VCSA UI Installer – leave them alone and continue.

Do not power on the appliance as part of the process!

Once the OVA has been deployed, simply convert the new appliance into a template.

Build your VCSA blueprint

In vRA, execute a data collection against your vCenter Server resources.

Once that is successful, go to Design->Blueprints and create a new composite blueprint.

Drag a vSphere Machine object onto the canvas, and set the names and descriptions as you like.

In the Build Information tab, specify these values:

  • Blueprint Type – Server
  • Action – Clone
  • Provisioning Workflow – CloneWorkflow
  • Clone From – <The template>
  • Customization Spec – <blank>

Lastly, in case it is not part of your existing Property Groups (aka Build Profiles), you will want to pass this property in:

  • Extensibility.Lifecycle.Properties.CloneWorkflow = *,__*

Most documents emphasize passing the VMPSMasterWorkflow32 properties in as part of setting up Event Broker, so this is just calling out an additional requirement.

From here, set your custom properties and build profiles as you like. One recommendation I have for vRA admins is to add a custom property on your endpoint that denotes the vCenter Server UUID – this helps identify which vCenter your machine deployed to, which is needed for steps coming up. If you have a single vCenter, this is optional, but you may want to put it in place now, just in case!

Publish and entitle your blueprint.

Create vApp properties workflow in vRO

The key to making this work is all about the vApp properties of the virtual machine once deployed. Once the VM is cloned from the OVF template you made earlier, but before the VM starts up, you need to be able to inject your custom properties at the vCenter VM level so that the appliance can bootstrap itself and initiate firstboot scripts successfully.

In vRO, create a new workflow. Add an input parameter named payload of type Properties. This will hold the content that the vRA Event Broker will use to populate your cloned VM with what it needs to build properly.

Create an attribute called vc of type VC:SDKConnection and set the value to your vCenter Server.
Create an attribute called iaasHost of type vCAC:VCACHost and set the value to your IaaS host.
Create an attribute called vcVM of type VC:VirtualMachine and leave it empty.
Create an attribute called iaasProperties of type Properties and leave it empty.

In the workflow schema, create a Scriptable Task and bind the payload input parameter to it, as well as the iaasHost, and vc attributes on the input side. Bind the vcVM and iaasProperties attributes on the output side of the task.

Insert the following code:

// query IaaS for the VM and its properties, keyed by the machine ID from the payload
var properties = new Properties();
properties.put("VirtualMachineID",;
// query IaaS for the entity
var virtualMachineEntity = vCACEntityManager.readModelEntity(, "ManagementModelEntities.svc", "VirtualMachines", properties, null);
// now get the properties you need for the applianceType value
var vmProperties = new Properties()
var props = virtualMachineEntity.getLink(iaasHost, "VirtualMachineProperties");
for each (var prop in props) {
    vmProperties.put(prop.getProperty("PropertyName"), prop.getProperty("PropertyValue"));
}
// get your output to an attribute
iaasProperties = vmProperties

// construct the Managed Object Reference to find the VM object in vCenter
var object = new VcManagedObjectReference()
object.type = "VirtualMachine"
object.value = payload.machine.externalReference

// query vCenter for the actual object
vcVM = VcPlugin.convertToVimManagedObject(vc, object)

The above code should be fairly straightforward – it is pulling in your IaaS custom properties into a vRO object, and it is also finding the vCenter Virtual Machine so that you can manipulate it. Next will be the code that updates the OVF Properties of the VM in question.

Updating the OVF Properties

Create a Scriptable Task in the schema of your workflow.

On the input side, bind the values iaasProperties and vcVM.

Paste the following code into the task:

// properties to inject - populate this from iaasProperties, mapping your IaaS
// custom property names to the appliance's OVF property ids
var ovfProps = new Properties()

// construct the reconfiguration specification, which will add the vApp Properties from IaaS to your VM.
var vmspec = new VcVirtualMachineConfigSpec()
vmspec.vAppConfig = new VcVmConfigSpec()

var newOVFProperties = new Array()
// get the existing VM's OVF property list.
// this is needed to match the 'key' value to the 'id' and subsequently 'value' parameters. The 'key' is required.
var vmProperties = vcVM.config.vAppConfig.property
for each (prop in vmProperties) {
    // get the key value
    var key = prop.key
    // define a new property spec, which will edit/update the existing values.
    var newProp = new VcVAppPropertySpec()
    newProp.operation = VcArrayUpdateOperation.fromString("edit") = new VcVAppPropertyInfo()
    // assign the existing key = key
    // get the new value from ovfProps, matched by the property's id = = ovfProps.get( = true
    // add the new OVF Property to the array

// assign the properties to the spec = newOVFProperties
// now update the VM - it must be powered OFF or this will fail
try {
    // reconfigure the VM
    var task = vcVM.reconfigVM_Task(vmspec)
    System.log("OVF Properties updated for "".")
} catch(e) {
    throw("Error updating the OVF properties: " + e)
}
Save your workflow and exit.

With the workflow completed, now we need to go into the Event Broker and create a subscription so that this workflow will be executed at the correct time during the provisioning process.

Setting up the Event Broker Subscription

In vRA, go to Administration->Events->Subscriptions, and create a new subscription.

For the Event Topic, choose Machine Provisioning and click next.

For the Conditions, you will want to set it up with All of the following and at least these values:

  • Data->Lifecycle State->Name equals CloneWorkflow.CloneMachine
  • Data->Lifecycle State->Event equals CloneWorkflow.CloneMachine.EVENT.OnCloneMachineComplete
  • Data->Lifecycle State->Phase equals EVENT

For the Workflow, choose the workflow you created above.

Ensure this is a blocking subscription, otherwise it will execute asynchronously. Since this absolutely has to complete successfully to ensure a proper build, blocking on this one is important!

And finally, because it always seems to happen to me – make sure to PUBLISH the subscription! Friends don’t let friends troubleshoot an Event Subscription that is a draft…


At this point you can begin testing your deployments, and verify that the workflow is firing at the appropriate time.

Additionally, it may make sense to build an XaaS service that allows you to customize the names and sizes of the appliances, or change the default domain, VLAN, and so on.

Thanks for reading! If you have any questions feel free to ping me on Twitter.

If you’re curious as to the other available states you can use, check out the Life Cycle Extensibility – vRealize Automation 7.0 document, starting around page 33. Each release has a version of the document with the latest information, so find the one for your version.



vSphere SSL and vCO Part 4 – Venafi API

In parts 1 through 3, we set up workflows to generate OpenSSL configuration files, a private key, and a CSR for an ESXi host.

This portion will concentrate on creating REST API calls to Venafi, an enterprise system that can be used to issue and manage the lifecycle of PKI infrastructure. I find it pretty easy to work with, and know some colleagues who are interested in this capability, so I decided to blog it!

At a high level, this post will build workflows that do the following tasks in the API:

  • Acquire an authorization token
  • Send a new certificate request
  • Retrieve a signed certificate

Let’s get to it!

Setup REST Host and Operations for Venafi

Since there are three tasks above, there are thankfully three corresponding REST operations that require setup in vCO.


Find the workflow under the HTTP-REST folder called Add a REST Host and execute it.

Setting up the Venafi REST Host.

Fill in the name and URL fields to match your environment.
The Venafi API endpoint ends with /vedsdk – don’t add a trailing slash!

For the Authentication section, choose Basic.
Then, for the credentials choose Shared Session, and your username and password.

If you haven’t already imported the SSL certificate for Venafi, you may be asked to do so for future REST operations.

REST Operation(s)

Find the workflow under the HTTP-REST folder called Add a REST Operation and execute it.

Setting up the Venafi REST Operation.

Choose the REST Host you just created above in the first field.
Set a name for the Operation – in this case I used Get Authorization.
For the Template URL, input /Authorize/ – remember the trailing slash!
Change the method to POST, and input application/json for the content type.

Submit the workflow, and the operation is now stored.

Repeat the above workflow, but swap out the Name and Template URL as needed with the below values. These will add the necessary operations to request and download signed certificates.

Name: Request Certificate
Template URL: /Certificates/Request

Name: Retrieve Certificate
Template URL: /Certificates/Retrieve

With the REST Host and Operations set up, let’s work on creating our basic workflows.

Workflow: Acquire your authorization token

There are a number of ways to tackle the creation of the workflow, with error handling and the like, so I will just focus on giving the code that is needed to construct the API call and send it out.

Create a new workflow.

In the workflow General tab, create an attribute named restOperation, of type REST:Operation. Click the value column, choose the Venafi operation you created earlier for /Authorize/, and click OK.

Click the Output tab and create a new parameter called api_key. This is the resulting authorization key that you can then pass to other flows later.

Drag and drop a new Scriptable Task into the Schema, and bind the restOperation attribute on the IN tab, and the api_key output parameter on the OUT tab, like this:

Now, copy and paste this code into the Scripting tab.

// get the username/password from the Venafi REST Host.
// for a Shared Session host, rawAuthProperties[1] holds the username and [2] the password
var venafiUser = restOperation.host.authentication.rawAuthProperties[1]
var venafiPass = restOperation.host.authentication.rawAuthProperties[2]
// create Object with username/password to send to Venafi
var body = new Object()
body.Username = venafiUser
body.Password = venafiPass
// convert Object to JSON string for API request
var content = System.getModule("com.vmware.web.webview").objectToJson(body)

// create API request to Venafi
var inParametersValues = [];
var request = restOperation.createRequest(inParametersValues, content);
// set the request content type
request.contentType = "application\/json";
System.log("Request URL for Venafi Token: " + request.fullUrl);
// execute
var response = request.execute();
// handle response and output
if(response.statusCode != 200) {
	System.log("There was an error getting an API key. Verify username and password.")
} else {
	api_key = JSON.parse(response.contentAsString).APIKey
	System.log("Token received. Add header name \"X-Venafi-Api-Key\" with value \""+api_key+"\" to future workflows.")
}
Now you may be wondering, why am I pulling the Username and Password from the host in this way? Shouldn’t there be some input parameters?

While we did build the REST Host object to use Basic Authentication, Venafi actually doesn’t support it – you can only POST the credentials to the API with a simple object and get the API key back.

So this is just how I elected to store the username and password – you could set them as attributes in the workflow, as Input parameters, or link those attributes to ConfigurationElement values if you wanted to.

Assuming your service account used for authentication is enabled for API access to Venafi, you should be able to run the workflow and see the log return a message with your API key!

And since it is an output parameter, you can bind that value to the next couple of workflows we are about to create.
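In plain JavaScript terms, the exchange boils down to building a credentials body and pulling APIKey out of the JSON response — a sketch with an invented response body:

```javascript
// Build the /Authorize/ request body the same way the scriptable task does
function buildAuthBody(username, password) {
  return JSON.stringify({ Username: username, Password: password });
}

// Pull the API key out of a Venafi /Authorize/ response body
function extractApiKey(responseBody) {
  return JSON.parse(responseBody).APIKey;
}

// Illustrative response - a real one also carries a ValidUntil timestamp
var sample = '{"APIKey":"12345678-abcd-1234-abcd-1234567890ab","ValidUntil":"/Date(1473868800000)/"}';
console.log(extractApiKey(sample)); // 12345678-abcd-1234-abcd-1234567890ab
```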

Workflow: Submit a new Certificate Request for your ESXi host.

Create a new workflow.

In the workflow General tab, create an attribute named restOperation, of type REST:Operation. Click the value column and choose the Venafi operation you created earlier for /Certificates/Request and click OK.

Attributes for the Request Certificate workflow.

Click the Input tab and create 3 parameters:

  • api_key – type is String
  • inputHost – type is VC:HostSystem
  • inputCSR – type is String
Inputs for the Request Certificate workflow.

These values should be familiar for those who have been following along.
The first is the API key you created earlier so you can pass the key into this workflow and avoid having to login again. Each time you use the Venafi API key, the expiration time is extended, so you should be good to use a single API key for your entire workflow.

Click the Output tab and create a new parameter called outputDN. This value represents the Certificate object created in Venafi at the end of the workflow.

Outputs for the Request Certificate workflow.

Once again drop a new Scriptable Task into the Schema for the new workflow, and bind the restOperation attribute on the IN tab, along with the 3 other inputs api_key, inputHost, and inputCSR. On the OUT tab, bind the outputDN value into the task, so that your Visual Binding looks like this:

Visual Bindings for the Request Certificate workflow.

Now, it’s time to add the necessary code to make the magic happen.

var object = new Object()
object.PolicyDN = "\\VED\\Policy\\My_Folder"
object.CADN = "\\VED\\Policy\\My_CA_Template"
object.PKCS10 = inputCSR
// the name of the certificate object to create in Venafi - here, the ESXi host's name
object.ObjectName =
var content = System.getModule("com.vmware.web.webview").objectToJson(object)
//prepare request
var inParametersValues = [];
var request = restOperation.createRequest(inParametersValues, content);
//set the request content type
request.contentType = "application\/json";
System.log("Request URL for Certificate Request: " + request.fullUrl);

//Customize the request here with your API key
request.setHeader("X-Venafi-Api-Key", api_key);

//execute request
var response = request.execute();
// handle response and output
if(response.statusCode != 200) {
	System.log("There was an error requesting a certificate. The response was: "+response.contentAsString)
} else {
	outputDN = JSON.parse(response.contentAsString).CertificateDN
}
The only things in this code you will want to change are the PolicyDN and CADN values, as those are unique to each environment. You’ll want to consult your Venafi admin, or look in the /VEDAdmin website running with Venafi for the proper value.
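Note the doubled backslashes in those DN values — in JavaScript source they collapse to single backslashes in the actual string sent to Venafi. A quick check, using the placeholder paths from the snippet above:

```javascript
// Doubled backslashes in source become single ones in the string value,
// which is the form Venafi expects in a DN path
var object = { PolicyDN: "\\VED\\Policy\\My_Folder", CADN: "\\VED\\Policy\\My_CA_Template" };
console.log(object.PolicyDN);  // \VED\Policy\My_Folder
// ...and JSON.stringify re-escapes them on the wire
console.log(JSON.stringify({ PolicyDN: object.PolicyDN }));
```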

Workflow: Retrieve the signed Certificate

So you’ve gotten this far, and were able to post a new certificate request. The last step is to download it and get ready to upload it to the ESXi host.

Create a new workflow.

In the workflow, create an attribute named restOperation, of type REST:Operation. Click the value field and choose the Venafi operation you created earlier for /Certificates/Retrieve and click OK.

Attributes for the Retrieve Certificate workflow.

Click the Input tab and create 2 parameters:

  • api_key – type is String
  • inputDN – type is String
Inputs for the Retrieve Certificate workflow.

The first value api_key is back once again to re-use the existing API key created before.
The second value is related to the outputDN from the previous workflow. You will be feeding this value into the new workflow so that it knows which certificate object to query and work with.

Click the Output tab and create a new parameter called outputCertificateData. This value represents the signed certificate file encoded in Base64.

Outputs for the Retrieve Certificate workflow.

Now, drop a new Scriptable Task into the Schema for the new workflow, and bind the restOperation attribute on the IN tab, along with the 2 other inputs api_key and inputDN. On the OUT tab, bind the outputCertificateData value into the task, so that your Visual Binding looks like this:

Visual Bindings for the Retrieve Certificate workflow.

Now to execute the API call to get the certificate, using the below snippet of code:

// setup request object
var object = new Object()
object.CertificateDN = inputDN
object.Format = "Base64"
var content = System.getModule("com.vmware.web.webview").objectToJson(object)

//prepare request
var inParametersValues = [];
var request = restOperation.createRequest(inParametersValues, content);
//set the request content type
request.contentType = "application\/json";
System.log("Request URL for Retrieving Signed Certificate: " + request.fullUrl);

//Customize the request here
request.setHeader("X-Venafi-Api-Key", api_key);

//execute request
var response = request.execute();

// deal with response data
if(response.statusCode == 200) {
	// the result is in Base64, so decode it and assign to output parameter.
	var certBase64 = JSON.parse(response.contentAsString).CertificateData
	outputCertificateData = System.getModule("us.nsfv.lab").decodeBase64(certBase64)
	// done
} else {
	System.debug("Unhandled Error with response. JSON response: "+response.contentAsString)
}
Now, once these workflows run in sequence, you should have a Base64 decoded string that looks like a signed certificate file would. We haven’t strung these flows together yet, so don’t panic that it isn’t working! It will all make sense, I promise!
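The decodeBase64 action in the us.nsfv.lab module referenced above is the author’s own helper; outside vRO, the same decode is one line with Node’s Buffer — a stand-in sketch of what it presumably does:

```javascript
// Decode a Base64 string to UTF-8 text, as the decodeBase64 action presumably does
function decodeBase64(encoded) {
  return Buffer.from(encoded, "base64").toString("utf8");
}

// Round-trip check with the header line of a PEM certificate
var encoded = Buffer.from("-----BEGIN CERTIFICATE-----", "utf8").toString("base64");
console.log(decodeBase64(encoded)); // -----BEGIN CERTIFICATE-----
```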

In the next post, we will write that content to disk, and then the fun part: uploading it!

vSphere SSL and vCO Part 2 – Appliance Setup

The first step to getting this process to work is to realize that ultimately under the hood, vCO is a Linux appliance, with a lot of functionality not exposed, or not immediately obvious. There are a lot of tweaks you can make to really enable great functionality, and this process may give you other interesting ideas!

NOTE: The below steps assume vCO Appliance 5.5+

You’ll need to start out by using PuTTY to SSH into the appliance. If SSH is turned off, you’ll either need to use the console, or enable Administrator SSH from the VAMI interface.

Once logged in, change directory to /etc/vco/app-server, and then type vi to open the file in a text editor.

Logging into vCO with SSH.

Inside you will want to see if you have a line that looks like this:

com.vmware.js.allow-local-process=true

If it doesn’t exist, press i to change to Insert mode, add a new line, and put it in there. Once done, press ESC to exit Insert mode, and type :wq to write the file and quit the editor.

When vCO Server is restarted, you will be able to execute Linux commands against the VM from within your workflows. The catch is, you have to make sure that the vco account on the appliance has the ability to execute it.

To enable this, type vi js-io-rights.conf from the shell. This file may already exist and have some data in it. If not, you get to define explicit rights at this point. Here’s mine for reference:

Example JS-IO-RIGHTS.CONF file.

Add the below lines to the file by pressing i to enter Insert mode again and typing them in, with each line corresponding to a specific executable on the appliance. The prefix characters add read and execute rights for the vco user.

+rx /usr/bin/openssl
+rx /usr/bin/curl

Press ESC, then :wq to save the file and exit.

With these tweaks enabled, you will need to restart vCO Server. You can do this a number of ways, but since you’re in the shell this is the fastest:

service vco-server restart

Now, you will be able to execute these commands in a workflow when you use the Command scripting object, which will run the command and return the standard output back as a variable, as well as the execution result, such as success or failure!

With that in mind, let’s do a quick experiment in a workflow to ensure it works as intended.

Proof of Concept Workflow Creation

  • Create a new Workflow as you normally would. No inputs are necessarily required for this test as we will get into those values in later posts.
  • Drag and drop a fresh Scriptable Task into the schema, and edit it.
  • Paste the code below into the scripting tab:
// Creates new Command object, with the command to run as your argument
var myCommand = new Command("/usr/bin/openssl version")
// Executes the command, waiting for it to complete
// Outputs the result code, and the output of the command
System.log("Exit Code: "+myCommand.result)
System.log("Output: "+myCommand.output)

Close the Scriptable Task, run the workflow and check the log – you should see something like this:

OpenSSL version running on the appliance.

If you were to type the same command in the shell, the result would be the same. So while we’re here, let’s update the code in the task to verify cURL also works. Change line 2 in the task to look like this (note the case-sensitive argument!) :

var myCommand = new Command("/usr/bin/curl -V")
cURL version running on the appliance.

You’ll probably note that the OpenSSL version installed on the VCO appliance is the same one that is required by VMware for the entire SSL implementation in the 5.x release! With this working, now we can do some really cool stuff.

In the next post, we will build out the workflow that will create the private key and CSR for any and all of your ESXi hosts! This same flow can be used as the basis for vCenter SSL, vROPS, or even vCO itself!

vSphere SSL and vCO Part 1 – Thoughts

One of the biggest pains with vSphere is generating and replacing the SSL certificates with your own signed ones, at least in 5.1 and 5.5.

No doubt most of us have read Derek Seaman’s brilliant series a couple of years ago regarding how to get this to work correctly, both in vSphere 5.1 and vSphere 5.5. All the while, we were wishing for a better tool to manage this portion of the environment. Depending on your security department, you may not have a choice!

But then, there was the VMware-sanctioned Certificate Automation Tool! And LO, it was a batch file that was cleverly done.

Now, we have the VMware VMCA in the Platform Services Controller. Looks pretty good, if you’re able to run vSphere 6.0, and your security team allows you to create a subordinate CA for integration, and if you are running all hypervisors with version 6.0 or higher.

But in the real world it’s more complicated than that. 6.0 has had quite a few issues and I think many are still shy about upgrading, though that is changing.

In the meantime, we have to make do and in my opinion, all of this should have been offloaded to vCO in the 5.x releases.
Let’s consider why for a second.

  • vCO is pretty tightly connected to vCenter and the vSphere API – just about anything you can generally automate you can do in vCO.
  • There are also peripheral capabilities such as SOAP, SSH, and general HTTPS requests courtesy of the Axis2 engine running underneath.
  • Also, the appliance is Linux, with a plethora of tools that are built in, such as sed or awk, and as we will see later, openssl and curl.

A well constructed set of workflows that handled this functionality and integrated with a typical Microsoft CA could have easily been a great showcase of the product.

In this series I will present workflows that will:

  • Handle creation of the various components using OpenSSL running from within the vCO appliance, focusing on the ESXi host.
  • Use the HTTP-REST plugin to get a signed certificate – this is made possible primarily due to a tool called Venafi Trust Platform, a system that can integrate with many external PKI infrastructures and manage their lifecycle.
  • Use the vCO appliance to upload the signed certificates and update the vCenter database with the new values.

Hopefully you find it useful and insightful. Once the series is finished, I will release a package of workflows and logic I use for my use case, probably on FlowGrab.

Working with Customization Specifications in vCO

I recently had a minor project to assist with refreshing a deployment of hundreds of remote backup servers, which were moving from a standard Windows 2008 R2 image to 2012 R2. The process was going to leverage some custom workflows I had written using the Deploy OVF functionality in vCO.

A pretty straightforward setup, so I asked how they planned to do the customization for each site. The answer was essentially: “We’re going to do the customization manually like always.”


I checked into it, and sure enough, that had always been the process. That process included fat-fingering host names, inputting incorrect IP address information, not installing the software properly, not updating the Infoblox IPAM system – the list goes on and on.

I’m an automation freak for a number of reasons but most of all it seemed like this was a perfect use case to leverage vCO capabilities I knew were possible out of the box, and also do some digging on a few parts I didn’t know about.

The Ingredients

First, I did some REST API testing and querying of our internal Infoblox IPAM system to ensure I could get the correct or available IP addresses on the pertinent location and VLAN programmatically.

Second, I had to build a workflow that created the AD Computer object in advance in a particular OU, so that the company’s GPO would take effect immediately upon joining the machine to the domain. The interesting problem to solve here was using a particular service account to perform the work, rather than the default AD Host.

Finally, I had the team put together a Windows 2012 template that was not sysprepped, so that I could use vCenter’s Customization Specifications to do that for me, along with executing first-boot scripts, configuring the network, joining the domain, and the like. This meant figuring out how to programmatically use the Customization Spec APIs, which I wasn’t familiar with.

1/4 Cup REST API – Getting the correct Network CIDR

The first piece, querying the Infoblox REST API for the relevant networks, was fairly simple once I learned some syntax.
Each remote location had specific Extensible Attributes that contained the VLAN ID and the Location code. The location code corresponded to the Datacenter object in our vCenter Server, so creating a correlation just got a whole lot simpler.

Create a REST Host using the default Add a REST Host workflow, substituting your IPAM server URL and credentials appropriately.

Configuring IPAM REST Host.

Once that’s created, run the Add a REST Operation workflow so that we can add the actual REST API URL to the host just created, with a couple of parameters that can be invoked at runtime. This will be a GET operation.

For the URL Template field, you will want to put this:

/wapi/v1.6/network?_return_fields%2B=network&*SiteNumber~:={MY_LOCATION_ID}&*VLAN={VLAN_ID}
This URL may be a bit of a puzzle, so I’ll break it down:
/wapi/v1.6/network – the base portion, indicating we are querying available network objects in the API.
?_return_fields%2B=network – this indicates which values we want returned, rather than a bunch of extras. In this case, I just wanted the CIDR value.
&*SiteNumber~:={MY_LOCATION_ID} – the ampersand adds a parameter to the query, and the asterisk preceding “SiteNumber” indicates an Extensible Attribute inside the IPAM database. Finally, the ~:= indicates a Regular Expression search.
&*VLAN={VLAN_ID} – another parameter of the query, using the VLAN Extensible Attribute.

NOTE: The Extensible Attributes used here were unique to this installation – you will have to create your own or ask your Network team for the attributes they use!
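Stitching the pieces back together in plain JavaScript — the attribute names SiteNumber and VLAN are specific to this environment, so substitute your own:

```javascript
// Build the Infoblox network query URL from a location id and a VLAN id
function buildNetworkQuery(locationId, vlanId) {
  return "/wapi/v1.6/network" +
    "?_return_fields%2B=network" +     // only return the CIDR
    "&*SiteNumber~:=" + locationId +   // extensible attribute, regex match
    "&*VLAN=" + vlanId;                // extensible attribute, exact match
}

console.log(buildNetworkQuery("1234", "100"));
```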

Of note, there is a pretty great example page full of Infoblox IPAM API calls you can find here:

Upon executing the Invoke a REST Operation workflow, you can plug in the values for MY_LOCATION_ID and VLAN_ID and assuming it exists, you will get a JSON string back with the network CIDR that can be used for further parsing. In my case, I only needed to replace the last octet with a specific value that was going to be used at all locations, as well as the gateway.

Now, how to connect to AD using a specific service account rather than the default?

1/4 Cup Active Directory – Connecting with a specific service account

I would start by getting version 2.0 or higher of the plugin, as even vRO 6.0.1 didn’t ship with it.
You can download it from this link:

This version enables multiple AD connections which is required for this use case. If you’re lucky enough to be on vRO 7.0 or higher, you’re good!

Run the Add an Active Directory Server workflow and configure it to use a Shared Session with your service account.

With that done, create a Scriptable Task in a workflow that has a single output binding, called adHost, and then copy in the following code:

var allAD = AD_HostManager.findAllHosts()
for each(ad in allAD) {
  if(ad.hostConfiguration.sharedUserName == "LAB\\service_account") {
    adHost = AD_HostManager.findHost(
    System.debug("AD Connection ID: "
  }
}

In this code, we’re essentially getting a list of all available AD Host connections, then looping through them to find the one assigned to our particular service account, and saving it to an attribute named adHost. This attribute can then be bound to another workflow or element to create the AD Computer Account, along with all other methods available in the plugin.

1/2 Cup Customization Spec Manager API – A Journey

This part was fairly new to me. I hadn’t tried using the API to tinker with Customization Specs before, so it was an interesting journey, and I learned both the hard way and the subsequent easy way of accomplishing my task.

My initial thought on this was to create a specification for each location with the NIC1 interface configured using the IP settings gathered from the REST API call. After some tinkering and testing, I was able to create a Scriptable Task that did just that. Running my workflow to create a pile of hundreds of Customization Specs all named and IP’d properly was admittedly pretty cool to me. But, when I attempted to clone one of my test templates with the spec, it never completed. The reason?

The vCenter public key error.

As it turns out, when you create a Customization Spec, the object is encrypted using the vCenter public key. This protects the stored credentials for the local Administrator account, as well as the account used for joining the system to the domain. Digging into the API Explorer shows that you can set a clearText property to true to bypass this, but it doesn’t help, as it seems the whole object is encrypted. Of note, you also get this error if you import a Customization Spec from one vCenter into another.

But once you re-enter those credentials and save the Specification, the cloning works as expected. So, can I modify the Customization Spec at workflow runtime instead? It turns out that is the best way to approach this problem.

In vCO there is no data type that corresponds to a Customization Spec, so to pass it around, you’ll need to use the wonderful Any type.

To begin, create a Scriptable Task with inputs for the IP Address, Subnet Mask, and Gateway variables, and a single output of type Any to hold the new version of the Customization Spec.

To get a Customization Spec into a variable you can manipulate, you can use the following bit of code using a sample input type of VC:Datacenter, named inputDC.

var customizationSpec = inputDC.sdkConnection.customizationSpecManager.getCustomizationSpec("My_Customization_Spec")

Now, we’ll populate the Default Gateway with the one you input, either from Infoblox IPAM, or through a normal string input.

// gateway - the adapter expects an array, even for a single value
var staticGW = new Array()
staticGW.push(inputGateway)
customizationSpec.spec.nicSettingMap[0].adapter.gateway = staticGW

Here, we declare a Fixed IP type, and set it to a value from your Infoblox IPAM query, or string input.

// Static IP - input type must be declared
var staticIP = new VcCustomizationFixedIp()
staticIP.ipAddress = inputIP
customizationSpec.spec.nicSettingMap[0].adapter.ip = staticIP

Finally, a simple string assignment for the subnet mask.

// Subnet Mask - can simply be a string value
customizationSpec.spec.nicSettingMap[0].adapter.subnetMask = inputNetmask

With the adjustments to the spec made, assign the modified specification to the output attribute.

// assign the updated spec to the output
outSpec = customizationSpec.spec

With that done, you can now use the CloneVM_Task API to create a clone of the VM template with the specification you just created. You’ll need to make sure you have a way of supplying the target host, resource pool, and datastore, as well as the Customization Spec, to ensure this is successful.

It is worth noting that you did not actually modify the Customization Spec in vCenter directly – you just grabbed it, messed with it and sent it on its way! This makes the Specification useful for any environment, any location, any vCenter!
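A plain-JavaScript sketch of why this works: the workflow mutates a detached copy of the spec, while the stored Specification stays untouched. These are mock objects standing in for the vRO API types, not the real spec structure.

```javascript
// mock of a spec as stored in vCenter (structure simplified)
var storedSpec = {nicSettingMap: [{adapter: {ip: "dhcp", gateway: []}}]};

// simulate getCustomizationSpec() handing the workflow its own detached copy
var workingCopy = JSON.parse(JSON.stringify(storedSpec));

// mutate only the working copy, as the Scriptable Task does
workingCopy.nicSettingMap[0].adapter.ip = "";
workingCopy.nicSettingMap[0].adapter.gateway = [""];

console.log(storedSpec.nicSettingMap[0].adapter.ip);  // "dhcp" - stored spec unchanged
```

Because nothing is written back to the Customization Spec Manager, the same stored Specification can be reused for every deployment.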

I hope you found this useful. If you have any questions or thoughts, hit me on Twitter or email!

Deploying OVF Templates with VCO

A feature that I have ended up using a lot lately, especially in delegating large deployments through workflows, is the vCenter Plugin’s ImportOVF feature.

No workflows come with the appliance that handle deploying OVFs, as there are a lot of variables involved. But once you know how to construct the request once, it’s really helpful to have handy!

To call the importOVF method, these are the arguments / types you will need to provide:

ovfUri (string) : This is the URI to the .OVF descriptor file. It can be either a valid HTTP link, or if you have a mounted share, it could be a local FILE:// location.
hostSystem (VC:HostSystem) : You’ll have to specify the ESXi target host with this argument.
importFolderName (string) : You can specify a VM folder name here that already exists – if you don’t have one, you should put in “Discovered virtual machine,” as it should always exist by default.
vmName (string) : This argument is the VM name as it will exist on the target host.
networks (array of VcOvfNetworkMapping): This argument is to map each virtual NIC interface from its source portgroup to the target portgroup.
datastore (VC:Datastore) : The datastore where the VM will reside.
props (array of VcKeyValue) : This is an optional parameter that can be used to provide inputs for the OVF settings, if they are being used.
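Putting those arguments together, the final call might look something like the sketch below. This is hedged: VcPlugin is mocked here so the example runs standalone, and every name (URI, host, folder, datastore) is hypothetical; in vRO the real plugin object is available globally and returns the imported VC:VirtualMachine.

```javascript
// mock of the vRO vCenter plugin object, to illustrate the argument order
var VcPlugin = {
  importOvf: function (ovfUri, hostSystem, importFolderName, vmName, networks, datastore, props) {
    return {name: vmName};  // the real method returns the imported VM object
  }
};

var vm = VcPlugin.importOvf(
  "http://fileserver.lab.local/ovfs/appliance.ovf",  // ovfUri (hypothetical)
  {name: "esxi01.lab.local"},                        // hostSystem (mock VC:HostSystem)
  "Discovered virtual machine",                      // importFolderName
  "my-appliance",                                    // vmName
  [],                                                // networks (VcOvfNetworkMapping[])
  {name: "datastore1"},                              // datastore (mock VC:Datastore)
  []                                                 // props (VcKeyValue[], optional)
);
console.log(vm.name);
```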

Pointing to the OVF Files on a local share

It’s safe to say that most VCO appliances or installations won’t have direct internet access to pull OVFs from the Solution Exchange.
If you have mounted a CIFS share to the VCO appliance, you can access the OVF files there. The path requires additional escape slashes due to how VCO reads the attribute, so it would look something like this:


In this example, I have a CIFS share mounted on the appliance at mount point /mnt/cifsShare.
You will want to make sure that you set appropriate read/execute permissions in the js-io-rights.conf file to this location.
If you have no idea what that means, go HERE.

Setting up the OVF Network Mappings

The only thing that needs a little explanation on using this method is the VcOvfNetworkMapping portion of things.
If you think about it, during the deployment of an OVF in the vSphere Client, there is a window that asks you to map between the source and destination networks.

The vSphere Client OVF Network Mapping menu.

The ‘source’ network is whatever the portgroup was called when it was exported. You just need to map it to the destination portgroup on the target host. This requires you to know both sides so you can code for it, if you need to.

Below is some sample code you can use in a Scriptable Task to construct the network mapping values you need to execute the method.

// setup OVF to move the 'source_network' network to 'target_network'
var myVcOvfNetworkMapping = new VcOvfNetworkMapping()
myVcOvfNetworkMapping.name = "source_network"
// check for the 'target_network' portgroup on the ESXi host bound to the task
for each(net in hostSystem.network) {
  if(net.name == "target_network") {
    myVcOvfNetworkMapping.network = net
  }
}
// create empty array of Network Mappings
var ovfNets = []
// add the mapping to the array
ovfNets.push(myVcOvfNetworkMapping)
If you have multiple NICs on different networks, simply repeat the steps above and adjust the mappings.

Optional: OVF Properties

If you are deploying OVFs that utilize OVF properties, you can create an object to specify them.
To ensure you get the right values, you may want to deploy an example OVF, and then check the OVF environment variables in the VM settings.

Here’s an example:

OVF Properties for a VM.

Given the above, notice the PropertySection which has the values you want. Here’s a bit of example code to populate these values:

// Create empty array of key/value pairs
var keys = []
// Create new key/value for OVF property
var myOVFValue01 = new VcKeyValue();
myOVFValue01.key = "OVF.OVF_Network_Settings.Network"
myOVFValue01.value = ""
// Create new key/value for OVF property (that totally doesn't exist in the shot above, but you get it)
var myOVFValue02 = new VcKeyValue();
myOVFValue02.key = "OVF.OVF_Network_Settings.Netmask"
myOVFValue02.value = ""
// add both of them to the array
keys.push(myOVFValue01, myOVFValue02)

Add the array to your method call and when the VM boots, depending on the underlying machine, it could auto configure! If you integrated this capability with an IP Management system, you could automate provisioning of OVFs top to bottom!

Troubleshooting the importOvf workflow
The VcPlugin.importOvf() method doesn’t give much back when it fails. It is recommended to enable DEBUG level logging on the Configurator page for VCO while testing so you can narrow down the cause. With DEBUG enabled, you’ll see entries like:
[WrappedJavaMethod] Calling method : public com.vmware.vmo.plugin.vi4.model.VimVirtualMachine com.vmware.vmo.plugin.vi4.VMwareInfrastructure.importOvf(java.lang.String,com.vmware.vmo.plugin.vi4.model.VimHostSystem,java.lang.String,java.lang.String,com.vmware.vim.vi4.OvfNetworkMapping[],com.vmware.vmo.plugin.vi4.model.VimDatastore,com.vmware.vim.vi4.KeyValue[]) throws java.lang.Exception

The most likely cause of errors from the method is either a missing portgroup or an incorrect key value. Since just about everything passed in is a string, you need to make sure the portgroup exists on the target host and that the name matches exactly.
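One way to guard against this is to pre-validate your mapping targets against the host’s portgroup names before calling the method. A minimal sketch in plain JavaScript, with mock data and a hypothetical helper name:

```javascript
// return the target portgroup names that don't exist on the host
function missingPortgroups(mappings, hostNetworkNames) {
  return mappings
    .filter(function (m) { return hostNetworkNames.indexOf(m.target) === -1; })
    .map(function (m) { return m.target; });
}

var bad = missingPortgroups(
  [{source: "source_network", target: "VM Network"},
   {source: "mgmt", target: "Managment"}],   // note the typo
  ["VM Network", "Management"]               // portgroups actually on the host
);
console.log(bad);  // [ 'Managment' ]
```

Failing fast on a typo like this beats deciphering the plugin’s Java stack trace after the fact.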

Thanks for reading!

Workflowing with VUM and VCO – The Basics

If you’re using VCO/VRO as your automation engine, sooner or later you will probably add the vSphere Update Manager plugin.
Makes sense, right? The opportunity to schedule or delegate the ability to update and patch your hosts is pretty compelling.

It’s unfortunate that getting up and running with VUM isn’t quite as simple as it looks on the surface. In this particular case, PowerCLI wins for its ease of integration.
Hopefully, this post helps turn that tide back.

It’s a little bit behind

The installation of the plugin is just like any other – upload it from the Configuration interface on port 8283. I don’t think it really even needs a reboot.
From there, you have to punch in your vCenter URL – which may not be obvious to many as there is little help or documentation.

So just to be clear, add the URL like this:

You can also add others in the list if you have multiple vCenters in the same authentication scope in a similar way.

Next up, check your plugin status within the VCO client. Inevitably you will run into an error that simply says ERROR IN PLUGIN.
Unless you are the white knight of IT, this isn’t too helpful.
If you see this, I’m willing to bet that it’s because you didn’t import the SSL certificate to the trusted store.
How would you know to do that? You wouldn’t, unless you like staring at logs set to DEBUG level!

So, how do I import the certificate?
Easy – just point VCO to the SOAP endpoint of your VUM box. You can find the service port in the Admin section of the Update Manager plugin in the vSphere Client. You can do this through the Configurator page too, but since Configurator is going away, this is probably the best way.

Locating the SOAP port for VUM.

Now, you can run a workflow to import the VUM certificate into the SSL Trust Store.
You can find the builtin workflow in Library->Configuration->SSL Trust Manager.
Choose to run the Import a certificate from URL workflow.

The Import Certificate workflow.

Execute the workflow, and for the input, use the FQDN to the server running the VUM service, and append port 8084 to the end, like you saw earlier.
The FQDN portion is important! If you don’t put it in there, you will likely have an error.
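In other words, the workflow input is just the VUM server’s FQDN with the SOAP port appended. A trivial sketch of the expected format (the FQDN is hypothetical, and the https:// scheme is an assumption based on the workflow input):

```javascript
var vumFqdn = "vum01.lab.local";              // hypothetical VUM server FQDN
var certUrl = "https://" + vumFqdn + ":8084"; // FQDN + SOAP port from the Admin view
console.log(certUrl);
```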

Importing the VUM SSL Certificate.

Once the certificates are imported, relaunch your VCO client. After you login, you should see some progress being made.


That wasn’t so hard

So next up, you just bind a HostSystem value to a workflow that remediates and you’re good right?
Unfortunately not quite yet. But we’ll get there!

VUM uses a completely different service and object model unrelated to vCenter, thus direct integration is not as simple. Out of the box, you have to write your own code and do some heavy debugging in the logs.

Connecting the dots

The first thing we will do is make a simple Action element that takes an input of type VC:HostSystem and extracts the necessary info out of it to create its VUM-equivalent object.

Digging into the API Explorer, look for the VUM:VIInventory Scriptable Action type. This will tell you what you need in order to construct the corresponding object.

The VUM:VIInventory Type.

Thankfully, this is a pretty simple type to instantiate – it only requires 4 arguments to construct.
VumVIInventory(string): The string to input here is the vCenter Server.
id: This is the “vSphere Object ID” – essentially the Managed Object Reference ID.
name: This is the host object name in vCenter, whether it’s the FQDN or if you have it connected by IP. (Shame on you!)
type: This is the asset type. Keep in mind, VUM has the ability to patch Virtual Appliances too, so specifying this is required.

So, let’s get some code in place to make the VC -> VUM conversion happen! Create a new Action, and use the below setup as your guide.

Setting up the conversion VCO Action.

Wait, why is the return type an Array?
The VUM plugin expects an array of VUM:VIInventory, even if that means it is an array of one object.

Here’s the Action code to use, with some notes.

// create the new VUM Inventory object and assign values.
// first define a new instance of the object, pointing to the vCenter Server of the host
var vumItem = new VumVIInventory(inputHostSystem.sdkConnection.name);

// get the Managed Object Reference
vumItem.id = inputHostSystem.reference.value;
// get the Managed Object Type
vumItem.type = inputHostSystem.vimType;
// get the ESXi Host name
vumItem.name = inputHostSystem.name;

// the VUM:VIInventory object must be an array, so create a dummy one and push the host value in
var vumHosts = [];
vumHosts.push(vumItem);

// return the array of VUM objects (even if it is just one)
return vumHosts;

With this new instance of a VUM:VIInventory type, you can bind it to a Remediate Host workflow as you normally would for your patching pleasure, right?
Theoretically, yes. But you may want to check something else before celebrating.

java.lang.NullPointerException -or- Lessons in Error-Checking

One thing you will want to verify before attempting to remediate using the VCO plugin is whether a set of VUM Baselines is attached.
If no baselines are attached or specified, your Remediate workflow will error out and throw a generally unhelpful message.
Here’s how you can check for, attach, and bind the necessary data to the ESX host you want to remediate.

There is an Action as part of the plugin that will attach Baselines to a host object, but you have to tell it which ones you want. Below is sample code you can use in a Scriptable Task (or Action) to output a list of Baselines in your vCenter Server. Since you may have specific Baselines you wish to use, you’ll have to modify it to your liking – but it should be pretty easy.

The input binding on this task can be any object – for this example, it is the ESX host.

// create a search query for VUM baselines whose names begin with "SuperCool"

// query all baselines on the VM Host's vCenter Server
// (constructor argument assumed: the host's vCenter Server connection name)
var searchSpec = new VumBaselineSearchSpec(inputHost.sdkConnection.name)
// define regex to search baselines
var expression = "^SuperCool"

// get the list of baselines
var baselineList = VumObjectManager.getFilteredBaselines(searchSpec)
// VumBaseline must be an array, so make a dummy one
var baselineArray = []

// Loop through the findings and if they match, add to the baseline list.
for each(baseline in baselineList) {
  if(baseline.name.match(expression)) {
    System.log("Baseline: " + baseline.name)
    baselineArray.push(baseline)
  }
}

// assign the array values to the output binding, which is 'vumBaselines'
// if this were an Action, you would just return 'baselineArray'
vumBaselines = baselineArray

So after this, you’ll have your hosts to Remediate, and the Baselines you wish to use for the Remediation. Simply bind these arrays to the inputs of the Remediate workflow, and you’re off to the races!

As an aside, I’m hopeful the next iteration of the plugin, along with the 6.x release with VUM integrated into the VCSA will make life simple for us Orchestrator types. We will see!

VMware VSA Deepdive – Part 6 – Eventing So Hard Right Now

I sure have done a lot of blogging about how to power off a VM lately, don’t you think?

One more time, but this time it’s the best version of them all.

I’m Eventing SO HARD right now

I figured this out while investigating something related to VSA and connecting the dots. Here’s how it went down.

In my setup, we have the VSA Cluster Service running on a separate VM at the remote location to handle quorum. One problem that pops up about once a month is the ongoing struggle of Windows patching. The VMs patch and reboot as you would expect, which throws an alarm on the vCenter Datacenter object that the cluster service is offline. The other issue is that the alarm doesn’t clear itself once the box comes back up. I wanted to find a way to clear it automatically, so that my monitoring guys didn’t lose their minds when hundreds of these appeared at 2AM.

If you look at the settings for the Alarm in question, you’ll see that it uses Events rather than Conditions for these alerts. I hadn’t dug too hard into how these worked before, so time to get dirty.

The default VSA Cluster Service Alarm.

I realized then that the alarms in question were bubbling up from VSA Manager to vCenter using AMQP, and based on prior experience I knew that WSCLI would show events when I used startListener – so I started to do some more testing.

Running wscli [cluster IP] startListener from the command-line, I verified it was listening, and then rebooted the VSA cluster service machine in my lab. And then something neat happened – the listener showed an event called MemberOfflineEvent fire. I then waited for the node to come back online and sure enough, I saw MemberOnlineEvent fire.

After adding the MemberOnlineEvent trigger to the alarm and having it set the status back to normal, I performed another test. Sure enough, the alarm came up, and a couple of minutes later it disappeared when the service came back online. Problem solved!

It begs the question: why wasn’t this built into the standard alarm? I wish I knew. But at the very least, I can correct this one.

I started thinking about this newfound trick and wondered – could it apply elsewhere with the VSA solution? I know that custom alarms are created for each NFS datastore used by the VSA, and custom alarms for the VSA machines too. What could we do with those?

The default VSA VM offline alarm settings.

This may look familiar! You can make similar changes to this alarm and it will clear things up automatically there too. So what about datastores?

The default VSA Datastore Offline Alarm.

OK, the event is a little different, but otherwise pretty straightforward. We just need to find the corresponding event that indicates all is well again.

Below is the list of alarms created by the VSA Manager during an installation, by object context and name.
Datacenter – VSA Cluster Service Offline
Datacenter – VSA Storage Cluster Offline
Virtual Machine – VSA Member Offline
Datastore – VSA Storage Entity Offline
Datastore – VSA Storage Entity Degraded

All of these only have alarms for triggering, but none are set up by default to clear themselves. After using WSCLI and lots of testing, here are the settings I feel make the most sense to give the best idea of what is actually happening. This list only shows what would be added to the existing alarm to automatically clear it out.

Alarm: VSA Cluster Service Offline, VSA Member Offline
Event: MemberOnlineEvent
Status: Green

This clears up most of the alarms very easily. The datastore alarms are a little more interesting – you can actually have both the Offline and Degraded alarms fired during an outage, which isn’t necessarily helpful. The below changes will ensure only one is showing at a time.

Alarm: VSA Storage Entity Offline
Event: StorageEntityDegradedEvent
Status: Green

Alarm: VSA Storage Entity Degraded
Event: StorageEntityOnlineEvent
Status: Green

With the above changes, the datastore will go from green, to yellow, to red, and back up the list in the proper order.

Eureka Moment

This is fun and all, but this entry is actually about how to power off a VSA VM, isn’t it?
Once I had figured out the connection between the events sent to vCenter from VSA Manager, I remembered something from my many adventures in WSCLI regarding the shutdownSvaServer command:

This operation sends an SvaEnterMaintenanceModeEvent management event
when the node is marked for entering maintenance mode, and a
SvaShutdownEvent when the VSA service is ready to shut down.

I hope this is helping you catch on to where this is going.
Let’s do an experiment in our VSA Cluster – create a custom alarm on one of the VSA VMs like so:

Creating a custom Maintenance Mode Alarm.
Setting custom Maintenance Mode Alarm event values.
Setting custom Maintenance Mode Alarm action values.

This alarm object will power off the VSA VM automatically when the shutdownSvaServer command is sent through WSCLI. It’s truly a beautiful thing!

The alarm works!

And for those wanting to see what it looks like in the standard client:


There it is! It seems really simple in hindsight, but then again we did start out just trying to power off a VM.
NOTE: Disable HA before shutting the VSA down. If you don’t, your VSA will get restarted automatically. Obviously, once you are done patching and all of that, re-enable it!

This process is pretty easy to update for a single datacenter manually.

Next time, we’ll make a workflow that will go through every datacenter object and update all of the alarms.

Thanks for reading! The VSA onion continues to peel…

VMware VSA Deepdive – Part 5 – Use SOAP!

Fake Edit: It has been a while! There’s been lots of great weather, visits from people, and then VMworld happened. Time to get back on track!

In the last post about the VSA, we leveraged the SSH plugin in VCO to send the necessary command to the host that would force the VM to power off, as we can’t do it from vCenter. There are pros and cons to that approach, though.

First of all, it’s not particularly great from a security perspective – you have to have SSH open and the service started to make it happen. Depending on your environment, this may not be seen as a good thing.

You’re also using the root credentials to accomplish the feat. You could use another account, but that’s a whole lot of work just to prepare for this scenario. You’d have to account for rolling the root password, working that into the workflow, and managing it over time.

And finally, related to the above: this process isn’t going to show up in a log very easily, since you’re bypassing the API *AND* vCenter.

So, given these faults, I needed to find another way to proceed. To be fair, this process also required root on the host, so it wasn’t that much better, but it would at least show up in host logging. The answer?

SOAP. The yardstick of all APIs.

Yep. I was desperate. But if it helps me to automate working with 1500+ hosts, it’s WORTH IT.

I want you to hit me as hard as you can

I haven’t had to make SOAP requests to anything in ages, so I was pretty rusty. Thankfully VCO to the rescue again with a built-in SOAP plugin.
First things first. Make a new Workflow and define two input Parameters – one for the ESXi host, and the VSA VM you wish to power down.

Inputs for the SOAP workflow.

For your attributes, define the following as type String:

  • inputHostWSDLUri – this is to specify the WSDL URI used to talk to the ESXi host.
  • inputHostPreferredEndpoint – this is for talking to the API endpoint.
  • inputHostName – this is just to hold a string value of the ESXi host for later.

The rest of the attributes in the workflow can simply be bound as you add the other workflows that come with VCO.

Preparing the SOAP Host entry

One thing to keep in mind: when you add an ESXi host as a SOAP Host, some values are set automatically that will not allow this to work as expected. The first problem is the SOAP Endpoint and the SOAP WSDL URI. Both of these, when enumerated by the SOAP plugin, point to https://localhost, which makes sense when the ESXi host makes calls to itself, but not for VCO reaching out to it remotely. The first order of business is to fix these values.

Create a Scriptable Task element in your schema, and bind the input Parameter inputHost to it. Bind the 3 attributes you defined above for output. Then, input the code found below.

Setting up the WSDL and Endpoint URI.

This code is pretty straightforward. It is simply replacing the values with the correct ones to perform remote SOAP calls.
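Since that code only appears in the screenshot, here is a hedged approximation of what the Scriptable Task does: build remote URLs from the host name instead of the default localhost values. The exact WSDL path is an assumption; adjust it to whatever your host actually serves.

```javascript
// In vRO this would come from the bound input: inputHost.name
var hostName = "esxi01.lab.local";  // hypothetical ESXi host

// outputs bound to the three string attributes defined earlier
var inputHostName = hostName;
var inputHostWSDLUri = "https://" + hostName + "/sdk/vimService.wsdl";  // assumed WSDL path
var inputHostPreferredEndpoint = "https://" + hostName + "/sdk";        // SDK endpoint

console.log(inputHostWSDLUri, inputHostPreferredEndpoint);
```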

Next up, drop a copy of the Add a SOAP Host workflow into your schema. Bind the inputHostName and inputHostWSDLUri values to the name and wsdlUri parameters, and bind the rest to new attributes/values as you desire.

Binding new attributes to the Add a SOAP Host Workflow.

You’ll need to provide things such as the username/password to the host, timeout values and other values, all of which can be static values, or linked to a Configuration Element.

For the OUT parameter of this workflow element, bind it to a new attribute named soapHost so we can use it later.

Modify the SOAP Host

Before you proceed, you need to update the new SOAP Host with the new value of inputHostPreferredEndpoint.
Drop a copy of the Update a SOAP Host with an endpoint URL workflow into the schema.
Simply bind the attributes soapHost and inputHostPreferredEndpoint to their respective input parameters, and bind soapHost to the output parameter, so that it completes.

Using SOAP to find the VSA and shut it down

Add a Scriptable Task to your schema. On the inputs, bind the soapHost, inputVM, inputUser, and inputPassword attributes.

Below is the code you can copy/paste, with comments as needed.

// get the initial operation you want to go for from the SOAP host specified.
var operation = soapHost.getOperation("RetrieveServiceContent");
// Once you have the SOAP Operation, create the request object for it
var request = operation.createSOAPRequest();

// set Parameters and their attributes.
request.setInParameter("_this", "ServiceInstance"); // creating the input Parameter itself
request.addInParameterAttribute("_this", "type", "ServiceInstance");

// make the request, save as response variable
var response = operation.invoke(request);

// retrieved values to be passed on down the line
var searchIndex = response.getOutParameter("returnval.searchIndex")
var rootFolder = response.getOutParameter("returnval.rootFolder")
var sessionMgr = response.getOutParameter("returnval.sessionManager")

// get Login Session to add to future headers.
var hostLoginOp = soapHost.getOperation("Login")
// create Login request
var loginReq = hostLoginOp.createSOAPRequest()
loginReq.setInParameter("_this", sessionMgr) // using value from initial query
loginReq.addInParameterAttribute("_this", "type", "SessionManager")
loginReq.setInParameter("userName", inputUser)
loginReq.setInParameter("password", inputPassword)
var loginResp = hostLoginOp.invoke(loginReq)
var sessionKey = loginResp.getOutParameter("returnval.key")

// find the VSA VM on the host.
var vmoperation = soapHost.getOperation("FindChild")
System.log("VM Search Operation is: " + vmoperation)
var vmreq = vmoperation.createSOAPRequest()
// define parameters
vmreq.setInParameter("_this", "ha-searchindex") // get the SearchIndex
vmreq.addInParameterAttribute("_this", "type", "SearchIndex")
vmreq.setInParameter("entity", "ha-folder-vm") // representing the root VM Folder
vmreq.setInParameter("name", inputVM.name) // your search criteria

// send request, get the response in a variable
var vmresp = vmoperation.invoke(vmreq)
// assign moref to variable
var vmMoRef = vmresp.getOutParameter("returnval")
// this log shows the output value
System.log("MoRef of VM ["+inputVM.name+"] on ["+soapHost.name+"]: "+vmMoRef)

// now that you have the MoRef of the VSA VM, you can kick off the Power Off task with a decision/parameter.
var pwroffOp = soapHost.getOperation("PowerOffVM_Task") // get the Power Off operation
var pwroffOpReq = pwroffOp.createSOAPRequest() // create the Power Off request
// define parameters
pwroffOpReq.setInParameter("_this", vmMoRef) // assign the MoRef of the VM to power off
pwroffOpReq.addInParameterAttribute("_this", "type", "VirtualMachine")
// shut off the VM by executing the request.
var offresp = pwroffOp.invoke(pwroffOpReq)

And there you have it. If you connect directly to the ESXi host when you run this workflow, you will see a task for powering off the VM appear, and you are good to go.

One thing I prefer to do at the end of this workflow is to drop in the Remove a SOAP Host workflow and bind appropriately so that my host list doesn’t get too large, but this is obviously optional.

A final note on SSL Certificates

If you run this as it is out of the gate, you will probably get a pop-up regarding whether to trust the SSL certificate of the host.
Of course in a perfect world, all of your certificates are trusted top to bottom and are maintained. But anyone who has tried to do this at scale has struggled and probably doesn’t bother. In order to bypass this, you’ll need to make a few adjustments and duplicate the stock workflows so you can make changes to them.

In the VCO workflow list, go down to Library -> SOAP -> Configuration and right-click Manage SSL Certificates.
Choose Duplicate Workflow.
Duplicate SSL Workflow
Save your copy of the workflow wherever you like. You may want to change the name a bit to reflect that it isn’t the standard workflow.

Now, you can edit the workflow and make a minor adjustment. Here is the workflow by default.

The default Manage SSL Certificates workflow schema.

You’ll notice the Accept SSL certificate schema element. Simply click it and delete it from the schema.

The "custom" Manage SSL Certificates workflow schema.

Finally, click on the General tab of your workflow and look for an attribute named installCertificates. For its value, input the text Install. The Install certificate workflow element does a simple check to see if the attribute is requesting an install, and continues from there.

Updating the new Manage SSL Certificates attributes.

As a final step, you will want to duplicate the Add a SOAP Host workflow, and replace the Manage SSL Certificates element with this new one you have created.
Ensure that the two attributes are rebound to the values the old element was using.

Re-adding the workflow bindings for SSL Certificates.

With these changes, you can Add a SOAP Host and not get stopped for SSL verification.

Of course, this isn’t really a best practice, but it gets the job done.

Next up, the final and maybe the most elegant solution for working with the VSA VM Power Off situation.

VMware VSA Deepdive – Part 4 – Shutting Down VSA (SSH Edition)

The first way I elected to try to forcibly shut down the VSA VM was to do everything through the ESXi command line via SSH. ESXCLI itself is not implemented in VCO directly, so this will require some good old-fashioned text parsing with AWK.

Enabling SSH on the host through a VCO Action

Unfortunately out of the box, there is no workflow/action that manages ESXi services, so I needed to roll my own.
Below is the Action setup and script code I used to check for the SSH service and start it up, given an input of type VC:HostSystem.
Create the Action, and name it startSSHService. There is no return value necessary on this Action.
Setting up the SSH Service Action

// get the list of services
var hostServices = inputHost.configManager.serviceSystem.serviceInfo.service
var sshService = null

// loop the services, find the SSH service.
for each(svc in hostServices) {
  if(svc.key == "TSM-SSH") {
    sshService = svc
    System.log("Found SSH Service on host ["+inputHost.name+"]")
  }
}

if(sshService == null) {
  throw "Couldn't find SSH service on ["+inputHost.name+"]!"
}

// Enable the service
try {
  inputHost.configManager.serviceSystem.startService(sshService.key)
} catch(err) {
  System.log("ERROR--Couldn't start SSH service. Error Text: "+err)
}
// the end

So, once you have SSH started on your ESXi host, you can send commands through VCO to do what you need.

SSH Service Check Action

For a more robust workflow, you will probably want an Action that checks whether the service is running and returns a boolean value. That way you can build a bit more logic into the flow.
The setup for the ‘check’ Action is the same, with the exception of the return value being a boolean.

Setting up the Check SSH Action.

The code is similar as well, just doing a simple check at the end.

// get the list of services
var hostservices = inputHost.config.service.service
var sshSvc = null

// loop the services, find the SSH service
for each(svc in hostservices) {
  // System.log("Service: "+svc.label+", Status is: "+svc.running)
  if(svc.label == "SSH") {
    sshSvc = svc
  }
}

// check status, return true/false
if(sshSvc != null && sshSvc.running == true) {
  return true
} else {
  return false
}
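
If you want to sanity-check that lookup logic outside vRO, the core can be expressed as a plain function. The service objects below are stand-ins that mirror only the label and running properties of the real host service entries:

```javascript
// Returns true only when a service with the given label exists and is running.
function isServiceRunning(services, label) {
  for (var i = 0; i < services.length; i++) {
    if (services[i].label === label) {
      return services[i].running === true;
    }
  }
  return false; // service not present at all
}

// Stand-in data shaped like the host's service list
var services = [
  { label: "Direct Console UI", running: true },
  { label: "SSH", running: false }
];
console.log(isServiceRunning(services, "SSH")); // false until SSH is started
```

The null-safe fall-through (returning false when the service is missing entirely) is what keeps the workflow from choking on hosts where the SSH service was never registered.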

Where’s my Burrit-VSA?

Before you power off the VSA VM, you’ll want to make sure to vMotion your other guests to another node, or have a foolproof way of finding the VSA appliance on your host. Another Action to the rescue! Given an input ESXi host, this Action queries the VMs running on the host and checks their tags for a specific value found on all VSA appliances. Note that these tags actually live in the vCenter database, and are not the Inventory Service tags in the vSphere Web Client.

For purposes of this post, I’ll name the action getVSAonHost.

Setting up the VSA finder Action.
// for when you absolutely, positively need to make sure it's a VSA.
// vsaKey is a string attribute holding the tag key present on all VSA appliances
// check the VMs on the host for the tag through a loop
for each(vm in inputHost.vm) {
  if(vm.tag) {
    for each(tag in vm.tag) {
      if(tag.key == vsaKey) {
        return vm
      }
    }
  }
}

// no VSA appliance found on this host
return null
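
The tag-matching core works the same way as the service check and can be tested as a plain function. The VM objects and the tag key below are illustrative stand-ins, not the actual tag value from a VSA deployment:

```javascript
// Returns the first VM whose tag list contains the given key, or null if none match.
function findVMByTag(vms, tagKey) {
  for (var i = 0; i < vms.length; i++) {
    var tags = vms[i].tag || [];
    for (var j = 0; j < tags.length; j++) {
      if (tags[j].key === tagKey) {
        return vms[i];
      }
    }
  }
  return null;
}

// Stand-in inventory; the tag key here is purely illustrative
var vms = [
  { name: "web01", tag: [] },
  { name: "VSA-1", tag: [{ key: "SYSTEM/VSA" }] }
];
console.log(findVMByTag(vms, "SYSTEM/VSA").name); // VSA-1
```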

So now, you know you have the VM in question. You can then pass the VirtualMachine’s name property to your SSH command later.

Making a SSHandwich

With the ESXi host and the VSA VM in hand, you can execute the built-in Run SSH Command workflow to do the final step.

Here’s the SSH command to send, which will find the VSA VM ID and power it off in one line, no questions asked:

VMID=$(vim-cmd vmsvc/getallvms | grep -i <VSA Name> | awk '{ print $1 }') && vim-cmd vmsvc/power.off $VMID
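
To see what the pipeline is actually doing, here is the same grep/awk extraction run against simulated getallvms output (the column layout below is illustrative; on a real host the command runs over SSH):

```shell
# Simulated `vim-cmd vmsvc/getallvms` output (illustrative sample data)
getallvms_output='Vmid   Name    File                      Guest OS         Version
5      VSA-1   [ds1] VSA-1/VSA-1.vmx    sles11_64Guest   vmx-08
12     web01   [ds1] web01/web01.vmx    centos64Guest    vmx-08'

# Same pipeline as the one-liner: match the VM name, keep column 1 (the VM ID)
VMID=$(echo "$getallvms_output" | grep -i 'VSA-1' | awk '{ print $1 }')
echo "$VMID"   # prints 5
```

That extracted ID is what gets fed to the power-off subcommand in the second half of the one-liner.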

Begin by creating a new workflow, and create a single input parameter named inputHost, of type VC:HostSystem.
Then create three attributes in the General tab, naming them sshCommand, hostSystemName, and vmVSAName, all of type string.
Finally, create another attribute called vsaAppliance of type VC:VirtualMachine for use with the Action.

Next, drop your getVSAonHost Action into the schema, and bind the actionResult to vsaAppliance as seen below.

Binding Actions to the getVSAonHost Action.

Next, drop a Scriptable Task into the Schema and bind inputHost and vsaAppliance on the IN tab. On the OUT tab, bind the attributes of hostSystemName and vmVSAName. We are effectively going to write a small blurb of code that hands off the name properties of the input objects to the output attributes for use later, along with creating the SSH command string.

Binding values to the Scriptable Task.

In the Scripting Tab, we’ll use a few simple lines of code to perform the handoff of values.

// assign values to output attributes
hostSystemName = inputHost.name
vmVSAName = vsaAppliance.name

// create SSH command string using the input values
sshCommand = "VMID=$(vim-cmd vmsvc/getallvms | grep -i "+vmVSAName+" | awk '{ print $1 }') && vim-cmd vmsvc/power.off $VMID"
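
Since the only moving part in that string is the VM name, the concatenation is easy to verify outside vRO. This hypothetical helper (the function name is illustrative) rebuilds the same one-liner from a name:

```javascript
// Builds the power-off one-liner for a given VM name (helper name is illustrative)
function buildPowerOffCommand(vmName) {
  return "VMID=$(vim-cmd vmsvc/getallvms | grep -i " + vmName +
    " | awk '{ print $1 }') && vim-cmd vmsvc/power.off $VMID";
}

console.log(buildPowerOffCommand("VSA-1"));
```

Note that the single quotes around the awk program need no escaping inside a double-quoted string, which keeps the concatenation readable.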

Finally, drop a copy of the Run SSH Command workflow into the schema. There are a lot of inputs here, so you will have to do some more complicated bindings. You can either force them to be input each time the workflow is run, set a static value, or bind to existing attributes.

Here’s what it looks like by default.

Setup for the SSH Workflow.

How you approach this part is largely up to you, but here is how I did it for this example.

The updated SSH Workflow Setup.

You’ll notice I set a static value of root for the username along with its password, set passwordAuthentication to Yes, and changed the initial hostNameOrIP and cmd values to the attributes we created earlier. For the outputs, I created new local attributes to hold the results.

Run the workflow and you should see that VSA go down!

As an aside, if HA is enabled on your VSA HA Cluster object, it will immediately attempt to restart the machine – so make sure you build the capability to disable HA into your parent workflows first so that this doesn’t become a problem.

Sweet Reprieve

It’s a bit crazy how much effort it takes to work around a disabled method, but it does work. I don’t think it’s a particularly great approach, but if you’re the only one who cares about these systems and don’t have audit requirements, it may be enough.

But for my purposes this was just the first step of the journey. I did not end up actually using this way to do things, but figured it may be a good exercise to document the process.

The next post will take it in a different direction altogether, and a little bit closer to an API based solution that can also be audited.