A love letter to vRO’s LockingSystem

Oh LockingSystem, how do I love thee? Let me count the infinite ways.

The LockingSystem scriptable objects have been around since at least the vCO 5.5 days. In later revisions VMware included a few demo workflows showing how you could take advantage of it, but for the most part it isn’t talked about much. I’ll say this though – when you really start building complex workflows with many interlocking systems, LockingSystem saves lives.

A use case that comes almost immediately to mind is that of vRA provisioning a large number of machines at once. A notable difference in this environment is that we do not use the internal IPAM; instead, the Infoblox IPAM system and the Event Broker allocate an IP during the PRE BuildingMachine phase.
In a situation where 20-30 VMs are requested for a build, you want to make sure there are no concurrency problems where multiple VMs get the same IP address allocated, right?

Thankfully, implementing the LockingSystem is extremely easy, but there are a few things to keep in mind to make sure it is implemented well.

The process you wish to lock should be one or more separate workflows
In the example above, I have created a separate workflow that calls Infoblox IPAM via the REST API. It takes a couple of inputs, and finds the next available IP on the correct subnet, allocates it, and then outputs the network information that I want for VM customization.

A high level workflow schema for LockingSystem.

Above is a very high-level schema of what a typical implementation of LockingSystem would look like.
Notice that if the nested workflow fails, you will want to ensure the lock is removed! You could instead bind the error to continue ahead without throwing an exception, but handling the failure and releasing the lock explicitly gives you more flexibility to recover gracefully.

For the first element in the use case detailed above, it could be as simple as this code:
LockingSystem.lockAndWait("vRA-IP-Allocation",workflow.id)

The first parameter of the lockAndWait method is an arbitrary lock name. Lock names are global, so if you have other workflows that may call this same nested one, it is worth establishing a naming scheme for your locks.
The second parameter is the current workflow token ID. I typically like to see which workflow token holds the lock if I need to, and this is also a very easy way to lock a workflow without excessive binding of attributes. An alternative would be the workflow runner’s account name, if needed.

Similarly, it is simple to unlock the workflow so that the next process in line can proceed.
LockingSystem.unlock("vRA-IP-Allocation",workflow.id)
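
If the locked work happens inside a single scriptable task rather than a nested workflow, a try/finally block is an easy way to guarantee the lock is always released. Below is a minimal sketch of that pattern; the allocateNextIp() call is purely illustrative and stands in for whatever work you are protecting.

var lockName = "vRA-IP-Allocation";
var lockOwner = workflow.id;
// block until this workflow token owns the lock
LockingSystem.lockAndWait(lockName, lockOwner);
try {
	// do the protected work (hypothetical helper, replace with your own logic)
	allocateNextIp();
} finally {
	// always release the lock, even if the work above throws
	LockingSystem.unlock(lockName, lockOwner);
	System.log("Released lock " + lockName + " held by " + lockOwner);
}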

Adjusting LockingSystem concurrency
A single unique locking mechanism is nice, but what if you want to tune the concurrency throughput? This would help limit requests to an external system, plugin, or API so that more than one flow can run, but not overwhelm it. Thankfully the LockingSystem has methods that can help you do this, with a little bit of setup.

Pick a line, any line
Let’s say you wanted to be able to limit your number of locks to a maximum of 5, because the plugin or external system can get overwhelmed.
You can request a lock between 1 and 5, such as “vRA-Machine-Build-1” through “vRA-Machine-Build-5.” All you have to do is generate that suffix number, either randomly or through a controlled loop.

A simple random inclusive number function can be found below, and it is probably useful enough to simply make into a vRO Action for future use!
In this sample, your return value would be of type number, and your two inputs, min and max, would also be of type number.
// Reference code from Mozilla
// https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/random
min = Math.ceil(min);
max = Math.floor(max);
return Math.floor(Math.random() * (max - min + 1)) + min

Then in your Locking scriptable task, you can call the function or action to return a value from 1 to 5 (or whatever values you want!) and append it.

var lane = System.getModule("us.nsfv.shared.locking").getRandomNumber(1,5)
LockingSystem.lockAndWait("vRA-Machine-Build-"+lane,workflow.id)

Find the first open lane
While the random number (or in this case, the “lane”) solves the concurrency problem, you may want more intelligence – i.e. have the LockingSystem grab the first available lane, regardless of which one it is. There is a method called .lock() that returns true or false depending on whether you successfully acquired the lock with that name.
Thus, you can simply loop through the available locks and wait for one to return true.

for(var i = 1; i <= 5; i++) {
	var lockAcquired = LockingSystem.lock("vRA-Machine-Build-"+i, workflow.id);
	if(lockAcquired) {
		// lock acquired, output the lock ID to an attribute so it can be unlocked later.
		System.log("Lock "+i+" acquired!");
		var attributeLock = "vRA-Machine-Build-"+i;
		// exit the loop now that we hold a lock
		break;
	}
	// no luck getting this one; reset the loop if you've reached the end
	if(i == 5) {
		i = 0;
		// brief pause so the loop doesn't hammer the LockingSystem while waiting
		System.sleep(1000);
	}
}

When this loop runs, it essentially stays in an infinite loop until a lock frees up, at which point it exits and continues with whatever process was locked, until it is unlocked afterward.

The functions and scripts above are a good starting point in tuning your vRO workflow concurrency. I hope these help you as much as they have helped me out!

vSphere SSL and vCO Part 4 – Venafi API

In parts 1 through 3, we setup workflows to generate OpenSSL configuration files, private key and CSR for an ESXi host.

This portion will concentrate on creating REST API calls to Venafi, an enterprise system that can be used to issue and manage the lifecycle of PKI infrastructure. I find it pretty easy to work with, and know some colleagues who are interested in this capability, so I decided to blog it!

At a high level, this post will build workflows that do the following tasks in the API:

  • Acquire an authorization token
  • Send a new certificate request
  • Retrieve a signed certificate

Let’s get to it!

Setup REST Host and Operations for Venafi

Since there are 3 API calls above, there are 3 corresponding REST operations that will require setup in vCO.

REST Host

Find the workflow under the HTTP-REST folder called Add a REST Host and execute it.

Setting up the Venafi REST Host.

Fill in the name and URL fields to match your environment.
The Venafi API endpoint ends with /vedsdk – don’t add a trailing slash!

For the Authentication section, choose Basic.
Then, for the credentials choose Shared Session, and your username and password.

If you haven’t already imported the SSL certificate for Venafi, you may be asked to do so for future REST operations.

REST Operation(s)

Find the workflow under the HTTP-REST folder called Add a REST Operation and execute it.

Setting up the Venafi REST Operation.

Choose the REST Host you just created above in the first field.
Set a name for the Operation – in this case I used Get Authorization.
For the Template URL, input /Authorize/ – remember the trailing slash!
Change the method to POST, and input application/json for the content type.

Submit the workflow, and the operation is now stored.

Repeat the above workflow, but swap out the Name and Template URL as needed with the below values. These will add the necessary operations to request and download signed certificates.

Name: Request Certificate
Template URL: /Certificates/Request

Name: Retrieve Certificate
Template URL: /Certificates/Retrieve

With the REST Host and Operations set up, let’s work on creating our basic workflows.

Workflow: Acquire your authorization token

There are a number of ways to tackle the creation of the workflow, with error handling and the like, so I will just focus on giving the code that is needed to construct the API call and send it out.

Create a new workflow.

In the workflow General tab, create an attribute named restOperation, of type REST:Operation. Click the value column, choose the Venafi operation you created earlier for /Authorize/, and click OK.

Click the Output tab and create a new parameter called api_key. This is the resulting authorization key that you can then pass to other flows later.

Drag and drop a new Scriptable Task into the Schema, and bind the restOperation attribute on the IN tab, and the api_key output parameter on the OUT tab, like this:

Now, copy and paste this code into the Scripting tab.

// get the username/password from the Venafi REST Host.
var venafiUser = restOperation.host.authentication.getRawAuthProperty(1)
var venafiPass = restOperation.host.authentication.getRawAuthProperty(2)
// create Object with username/password to send to Venafi
var body = new Object()
body.Username = venafiUser
body.Password = venafiPass
// convert Object to JSON string for API request
var content = System.getModule("com.vmware.web.webview").objectToJson(body)

// create API request to Venafi
var inParametersValues = [];
var request = restOperation.createRequest(inParametersValues, content);
// set the request content type
request.contentType = "application\/json";
System.log("Request URL for Venafi Token: " + request.fullUrl);
// execute
var response = request.execute();
// handle response and output
if(response.statusCode != 200) {
	System.log("There was an error getting an API key. Verify username and password.")
} else {
	api_key = JSON.parse(response.contentAsString).APIKey
	System.log("Token received. Add header name \"X-Venafi-Api-Key\" with value \""+api_key+"\" to future workflows.")
}

Now you may be wondering, why am I pulling the Username and Password from the host in this way? Shouldn’t there be some input parameters?

While we did build the REST Host object to use Basic Authentication, Venafi actually doesn’t support it – you can only POST the credentials to the API with a simple object and get the API key back.

So this is just how I elected to store the username and password – you could set them as attributes in the workflow, as Input parameters, or link those attributes to ConfigurationElement values if you wanted to.

Assuming your service account used for authentication is enabled for API access to Venafi, you should be able to run the workflow and see the log return a message with your API key!

And since it is an output parameter, you can bind that value to the next couple of workflows we are about to create.

Workflow: Submit a new Certificate Request for your ESXi host.

Create a new workflow.

In the workflow General tab, create an attribute named restOperation, of type REST:Operation. Click the value column and choose the Venafi operation you created earlier for /Certificates/Request and click OK.

Attributes for the Request Certificate workflow.

Click the Input tab and create 3 parameters:

  • api_key – type is String
  • inputHost – type is VC:HostSystem
  • inputCSR – type is String
Inputs for the Request Certificate workflow.

These values should be familiar for those who have been following along.
The first is the API key you created earlier so you can pass the key into this workflow and avoid having to login again. Each time you use the Venafi API key, the expiration time is extended, so you should be good to use a single API key for your entire workflow.

Click the Output tab and create a new parameter called outputDN. This value represents the Certificate object created in Venafi at the end of the workflow.

Output for the Request Certificate workflow.

Once again drop a new Scriptable Task into the Schema for the new workflow, and bind the restOperation attribute on the IN tab, along with the 3 other inputs api_key, inputHost, and inputCSR. On the OUT tab, bind the outputDN value into the task, so that your Visual Binding looks like this:

Visual Bindings for the Request Certificate workflow.

Now, it’s time to add the necessary code to make the magic happen.

var object = new Object()
object.PolicyDN = "\\VED\\Policy\\My_Folder"
object.CADN = "\\VED\\Policy\\My_CA_Template"
object.PKCS10 = inputCSR
object.ObjectName = inputHost.name
var content = System.getModule("com.vmware.web.webview").objectToJson(object)
//prepare request
var inParametersValues = [];
var request = restOperation.createRequest(inParametersValues, content);
//set the request content type
request.contentType = "application\/json";
System.log("Request URL for Certificate Request: " + request.fullUrl);

//Customize the request here with your API key
request.setHeader("X-Venafi-Api-Key", api_key);

//execute request
var response = request.execute();
// handle response and output
if(response.statusCode != 200) {
	System.log("There was an error requesting a certificate. The response was: "+response.contentAsString)
} else {
	outputDN = JSON.parse(response.contentAsString).CertificateDN
}

The only things in this code you will want to change are the PolicyDN and CADN values, as those are unique to each environment. You’ll want to consult your Venafi admin, or look in the /VEDAdmin website running with Venafi, for the proper values.

Workflow: Retrieve the signed Certificate

So you’ve gotten this far, and were able to post a new certificate request. The last step is to download it and get ready to upload it to the ESXi host.

Create a new workflow.

In the workflow, create an attribute named restOperation, of type REST:Operation. Click the value field and choose the Venafi operation you created earlier for /Certificates/Retrieve and click OK.

Attributes for the Retrieve Certificate workflow.

Click the Input tab and create 2 parameters:

  • api_key – type is String
  • inputDN – type is String
Inputs for the Retrieve Certificate workflow.

The first value api_key is back once again to re-use the existing API key created before.
The second value is related to the outputDN from the previous workflow. You will be feeding this value into the new workflow so that it knows which certificate object to query and work with.

Click the Output tab and create a new parameter called outputCertificateData. This value represents the signed certificate file encoded in Base64.

Outputs for the Retrieve Certificate workflow.

Now, drop a new Scriptable Task into the Schema for the new workflow, and bind the restOperation attribute on the IN tab, along with the 2 other inputs api_key and inputDN. On the OUT tab, bind the outputCertificateData value into the task, so that your Visual Binding looks like this:

Visual Bindings for the Retrieve Certificate workflow.

Now to execute the API call to get the certificate, using the below snippet of code:

// setup request object
var object = new Object()
object.CertificateDN = inputDN
object.Format = "Base64"
var content = System.getModule("com.vmware.web.webview").objectToJson(object)

//prepare request
var inParametersValues = [];
var request = restOperation.createRequest(inParametersValues, content);
//set the request content type
request.contentType = "application\/json";
System.log("Request URL for Retrieving Signed Certificate: " + request.fullUrl);

//Customize the request here
request.setHeader("X-Venafi-Api-Key", api_key);

//execute request
var response = request.execute();

// deal with response data
if(response.statusCode == 200) {
	// the result is in Base64, so decode it and assign to output parameter.
	var certBase64 = JSON.parse(response.contentAsString).CertificateData
	outputCertificateData = System.getModule("us.nsfv.lab").decodeBase64(certBase64)
	// done
} else {
	System.debug("Unhandled Error with response. JSON response: "+response.contentAsString)
}

Now, once these workflows run in sequence, you should have a Base64 decoded string that looks like a signed certificate file would. We haven’t strung these flows together yet, so don’t panic that it isn’t working! It will all make sense, I promise!

In the next post, we will write that content to disk, and then the fun part: uploading it!

vSphere SSL and vCO Part 2 – Appliance Setup

The first step to getting this process to work is to realize that ultimately under the hood, vCO is a Linux appliance, with a lot of functionality not exposed, or not immediately obvious. There are a lot of tweaks you can make to really enable great functionality, and this process may give you other interesting ideas!

NOTE: The below steps assume vCO Appliance 5.5+

You’ll need to start out by using PuTTY to SSH into the appliance. If SSH is turned off, you’ll either need to use the console, or enable Administrator SSH from the VAMI interface.

Once logged in, change directory to /etc/vco/app-server, and then type vi vmo.properties to open the file in a text editor.

Logging into vCO with SSH.

Inside you will want to see if you have a line that looks like this:

com.vmware.js.allow-local-process=true

If it doesn’t exist, press i to change to Insert mode, and then add a new line and put it in there. Once done, press ESC to exit Insert mode, and type :wq to write the file and quit the editor.

When vCO Server is restarted, you will be able to execute Linux commands against the VM from within your workflows. The catch is, you have to make sure that the vco account on the appliance has the ability to execute it.

To enable this, type vi js-io-rights.conf from the shell. This file may already exist and have some data in it. If not, you get to define explicit rights at this point. Here’s mine for reference:

Example JS-IO-RIGHTS.CONF file.

Add the lines below to the file by pressing i to enter Insert mode again, with each line corresponding to a specific executable on the appliance. The prefix characters add read and execute rights for the vco user.

+rx /usr/bin/openssl
+rx /usr/bin/curl

Press ESC, then :wq to save the file and exit.

With these tweaks enabled, you will need to restart vCO Server. You can do this a number of ways, but since you’re in the shell this is the fastest:

service vco-server restart

Now, you will be able to execute these commands in a workflow when you use the Command scripting object, which will run the command and return the standard output back as a variable, as well as the execution result, such as success or failure!

With that in mind, let’s do a quick experiment in a workflow to ensure it works as intended.

Proof of Concept Workflow Creation

  • Create a new Workflow as you normally would. No inputs are necessarily required for this test as we will get into those values in later posts.
  • Drag and drop a fresh Scriptable Task into the schema, and edit it.
  • Paste the code below into the scripting tab:
// Creates new Command object, with the command to run as your argument
var myCommand = new Command("/usr/bin/openssl version")
// Executes the command
myCommand.execute(true)
// Outputs the result code, and the output of the command
System.log("Exit Code: "+myCommand.result)
System.log("Output: "+myCommand.output)

Close the Scriptable Task, run the workflow and check the log – you should see something like this:

OpenSSL version running on the appliance.

If you were to type the same command in the shell, the result would be the same. So while we’re here, let’s update the code in the task to verify cURL also works. Change line 2 in the task to look like this (note the case-sensitive argument!):

var myCommand = new Command("/usr/bin/curl -V")
cURL version running on the appliance.

You’ll probably note that the OpenSSL version installed on the VCO appliance is the same one that is required by VMware for the entire SSL implementation in the 5.x release! With this working, now we can do some really cool stuff.

In the next post, we will build out the workflow that will create the private key and CSR for any and all of your ESXi hosts! This same flow can be used as the basis for vCenter SSL, vROPS, or even vCO itself!

vSphere SSL and vCO Part 1 – Thoughts

One of the biggest pains with vSphere is generating and replacing the SSL certificates with your own signed ones, at least in 5.1 and 5.5.

No doubt most of us have read Derek Seaman’s brilliant series a couple of years ago regarding how to get this to work correctly, both in vSphere 5.1 and vSphere 5.5. All the while, we were wishing for a better tool to manage this portion of the environment. Depending on your security department, you may not have a choice!

But then, there was the VMware-sanctioned Certificate Automation Tool! And LO, it was a batch file that was cleverly done.

Now, we have the VMware VMCA in the Platform Services Controller. Looks pretty good, if you’re able to run vSphere 6.0, if your security team allows you to create a subordinate CA for integration, and if all of your hypervisors are running version 6.0 or higher.

But in the real world it’s more complicated than that. 6.0 has had quite a few issues and I think many are still shy about upgrading, though that is changing.

In the meantime, we have to make do and in my opinion, all of this should have been offloaded to vCO in the 5.x releases.
Let’s consider why for a second.

  • vCO is pretty tightly connected to vCenter and the vSphere API – just about anything you can generally automate you can do in vCO.
  • There are also peripheral capabilities such as SOAP, SSH, and general HTTPS requests courtesy of the Axis2 engine running underneath.
  • Also, the appliance is Linux, with a plethora of tools that are built in, such as sed or awk, and as we will see later, openssl and curl.

A well constructed set of workflows that handled this functionality and integrated with a typical Microsoft CA could have easily been a great showcase of the product.

In this series I will present workflows that will:

  • Handle creation of the various components using OpenSSL running from within the vCO appliance, focusing on the ESXi host.
  • Use the HTTP-REST plugin to get a signed certificate – this is made possible primarily due to a tool called Venafi Trust Platform, a system that can integrate with many external PKI infrastructures and manage their lifecycle.
  • Use the vCO appliance to upload the signed certificates and update the vCenter database with the new values.

Hopefully you find it useful and insightful. Once the series is finished, I will release a package of workflows and logic I use for my use case, probably on FlowGrab.

Working with Customization Specifications in vCO

I recently had a minor project to assist with refreshing a deployment of hundreds of remote backup servers, which were moving from a standard Windows 2008 R2 image to 2012 R2. The process was going to leverage some custom workflows I had written using the Deploy OVF functionality in vCO.

A pretty straightforward setup, so I asked how they planned to do the customization for each site. The answer was essentially: “We’re going to do the customization manually like always.”

#recordscratch

I checked into it, and sure enough that had always been the process. That process included fat-fingering the host names, inputting incorrect IP address information, not installing the software properly, not updating the Infoblox IPAM system; the list goes on and on.

I’m an automation freak for a number of reasons, but most of all this seemed like a perfect use case to leverage vCO capabilities I knew were possible out of the box, and to do some digging on a few parts I didn’t know about.

The Ingredients

First, I did some REST API testing and querying of our internal Infoblox IPAM system to ensure I could get the correct or available IP addresses on the pertinent location and VLAN programmatically.

Second, I had to build a workflow that created the AD Computer object in advance in a particular OU, so that the company’s GPO would take effect immediately upon joining the machine to the domain. The interesting problem to solve here was using a particular service account to perform the work, rather than the default AD Host.

Finally, I had the team put together a Windows 2012 template that was not sysprepped, so that I could use vCenter’s Customization Specifications to do that for me, along with executing first-boot scripts, configuring the network, joining the domain, and the like. This meant figuring out how to programmatically use the Customization Spec APIs, which I wasn’t familiar with.

1/4 Cup REST API – Getting the correct Network CIDR

The first piece I worked on, querying the Infoblox REST API for the relevant networks, was fairly simple after learning some syntax.
Each remote location had specific Extensible Attributes containing the VLAN ID and the location code. The location code corresponded to the Datacenter object in our vCenter Server, so creating a correlation just got a whole lot simpler.

Create a REST Host using the default Add a REST Host workflow, substituting your IPAM server URL and credentials appropriately.

Configuring IPAM REST Host.

Once that’s created, run the Add a REST Operation workflow so that we can add the actual REST API URL to the host just created, with a couple of parameters that can be supplied at runtime. This will be a GET operation.

For the URL Template field, you will want to put this:
/wapi/v1.6/network?_return_fields%2B=network&*SiteNumber~:={MY_LOCATION_ID}&*VLAN={VLAN_ID}

This URL may be a bit of a puzzle, so I’ll break it down:
/wapi/v1.6/network – the base portion, indicating we are querying available network objects in the API.
?_return_fields%2B=network – this indicates what values we want returned, rather than a bunch of values. In this case, I just wanted the CIDR value.
&*SiteNumber~:={MY_LOCATION_ID} – The ampersand separates query parameters, and the asterisk preceding “SiteNumber” indicates an Extensible Attribute inside the IPAM database. Finally, the ~:= indicates a Regular Expression search.
&*VLAN={VLAN_ID} – another parameter of the query, using the VLAN Extensible Attribute.

NOTE: The Extensible Attributes used here were unique to this installation – you will have to create your own or ask your Network team for the attributes they use!

Of note, there is a pretty great example page full of Infoblox IPAM API calls you can find here: https://community.infoblox.com/t5/API-Integration/The-definitive-list-of-REST-examples/td-p/1214

Upon executing the Invoke a REST Operation workflow, you can plug in the values for MY_LOCATION_ID and VLAN_ID and assuming it exists, you will get a JSON string back with the network CIDR that can be used for further parsing. In my case, I only needed to replace the last octet with a specific value that was going to be used at all locations, as well as the gateway.
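
As a rough sketch of that parsing step, assuming the Infoblox response body is a JSON array of objects with a network property holding the CIDR (and treating the final host and gateway octets as fixed, illustrative values):

// response is the RESTResponse returned by the Invoke a REST Operation workflow
var results = JSON.parse(response.contentAsString);
// e.g. "10.20.30.0/24"
var cidr = results[0].network;
System.log("Network CIDR returned: " + cidr);

// drop the prefix length and split the network address into octets
var octets = cidr.split("/")[0].split(".");
// replace the last octet with the fixed values used at every location
var hostIp = octets[0] + "." + octets[1] + "." + octets[2] + ".10";
var gatewayIp = octets[0] + "." + octets[1] + "." + octets[2] + ".1";
System.log("Host IP: " + hostIp + ", Gateway: " + gatewayIp);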

Now, how to connect to AD using a specific service account rather than the default?

1/4 Cup Active Directory – Connecting with a specific service account

I would start by getting version 2.0 or higher of the plugin, as even up to vRO 6.0.1 didn’t ship with it.
You can download it from this link: https://my.vmware.com/group/vmware/get-download?downloadGroup=VRO_AD_PLUGIN_200

This version enables multiple AD connections which is required for this use case. If you’re lucky enough to be on vRO 7.0 or higher, you’re good!

Run the Add an Active Directory Server workflow and configure it to use a Shared Session with your service account.

With that done, create a Scriptable Task in a workflow that has a single output binding, called adHost, and then copy in the following code:

var allAD = AD_HostManager.findAllHosts()
for each(ad in allAD) {
 if(ad.hostConfiguration.sharedUserName == "LAB\\service_account") {
   adHost = AD_HostManager.findHost(ad.hostConfiguration.id)
   System.debug("AD Connection ID: "+ad.hostConfiguration.id)
 }
}

In this code, we’re essentially getting a list of all available AD Host connections, then looping through them to find the one assigned to our particular service account, and saving it to an attribute named adHost. This attribute can then be bound to another workflow or element to create the AD Computer Account, along with all other methods available in the plugin.

1/2 Cup Customization Spec Manager API – A Journey

This part of the journey was fairly new to me. I hadn’t tried using the API to tinker with Customization Specs before so it was an interesting journey, and I learned both the hard and subsequent easy way of accomplishing my task.

My initial thought on this was to create a specification for each location with the NIC1 interface configured using the IP settings gathered from the REST API call. After some tinkering and testing, I was able to create a Scriptable Task that did just that. Running my workflow to create a pile of hundreds of Customization Specs all named and IP’d properly was admittedly pretty cool to me. But, when I attempted to clone one of my test templates with the spec, it never completed. The reason?

The vCenter public key error.

As it turns out, when creating a Customization Spec the object is encrypted using the vCenter public key. This is to protect the stored credentials for the local Administrator account, as well as the account used for joining the system to the domain. Digging into the API Explorer shows that you can set a clearText property to true and bypass this, but it doesn’t help, as it seems the whole object is encrypted. Of note, you also get this error pop-up if you export a Customization Spec from one vCenter to another.

But once you re-input those credentials and save the Specification, the cloning works as expected. So, can I modify the Customization Spec during workflow runtime? Turns out, that is the best way to approach this problem.

In vCO there is no data type that corresponds to a Customization Spec, so to pass it around, you’ll need to use the wonderful Any type.

To begin, create a Scriptable Task with inputs for the IP Address, Subnet Mask, and Gateway variables, and a single output of type Any to hold the new version of the Customization Spec.

To get a Customization Spec into a variable you can manipulate, you can use the following bit of code using a sample input type of VC:Datacenter, named inputDC.

var customizationSpec = inputDC.sdkConnection.customizationSpecManager.getCustomizationSpec("My_Customization_Spec")

Now, we’ll populate the Default Gateway with the one you input, either from Infoblox IPAM, or through a normal string input.

// gateway - the input type must be an array of a single value.
var staticGW = new Array()
staticGW.push(inputGW)
customizationSpec.spec.nicSettingMap[0].adapter.gateway = staticGW

Here, we declare a Fixed IP type, and set it to a value from your Infoblox IPAM query, or string input.

// Static IP - input type must be declared
var staticIP = new VcCustomizationFixedIp()
staticIP.ipAddress = inputIP
customizationSpec.spec.nicSettingMap[0].adapter.ip = staticIP

Finally, a simple string assignment for the subnet mask.

// Subnet Mask - can simply be a string value
customizationSpec.spec.nicSettingMap[0].adapter.subnetMask = inputNetmask

With the adjustments to the spec made, assign the modified specification to the output attribute.

// assign the updated spec to the output
outSpec = customizationSpec.spec

With that done, you can now use the CloneVM_Task API to create a clone of the VM template with the specification you just created. You’ll need to make sure you have a way of applying the target host, resource pool, datastore, as well as the Customization Spec to ensure this is successful.
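
As a rough sketch, a scriptable task making that call might look like the following, assuming inputTemplate (VC:VirtualMachine), inputFolder (VC:VmFolder), inputPool (VC:ResourcePool), and inputDatastore (VC:Datastore) attributes are bound in alongside the outSpec value from above; the VM name is illustrative.

// placement for the clone
var relocateSpec = new VcVirtualMachineRelocateSpec();
relocateSpec.pool = inputPool;
relocateSpec.datastore = inputDatastore;

// clone spec ties together placement and the modified customization spec
var cloneSpec = new VcVirtualMachineCloneSpec();
cloneSpec.location = relocateSpec;
cloneSpec.customization = outSpec;
cloneSpec.powerOn = true;
cloneSpec.template = false;

// kick off the clone task against the template
var task = inputTemplate.cloneVM_Task(inputFolder, "my-new-vm", cloneSpec);
System.log("Clone task submitted for my-new-vm");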

It is worth noting that you did not actually modify the Customization Spec in vCenter directly – you just grabbed it, messed with it and sent it on its way! This makes the Specification useful for any environment, any location, any vCenter!

I hope you found this useful. If you have any questions or thoughts, hit me on Twitter or email!

Deploying OVF Templates with VCO

A feature that I have ended up using a lot lately, especially in delegating large deployments through workflows, is the vCenter Plugin’s ImportOVF feature.

No workflows come with the appliance that handle deploying OVFs, as there are a lot of variables involved. But once you know how to construct the request once, it’s really helpful to have handy!

To call the importOVF method, these are the arguments / types you will need to provide:

ovfUri (string) : This is the URI to the .OVF descriptor file. It can be either a valid HTTP link, or if you have a mounted share, it could be a local FILE:// location.
hostSystem (VC:HostSystem) : You’ll have to specify the ESXi target host with this argument.
importFolderName (string) : You can specify a VM folder name here that already exists – if you don’t have one, you should put in “Discovered virtual machine,” as it should always exist by default.
vmName (string) : This argument is the VM name as it will exist on the target host.
networks (array of VcOvfNetworkMapping): This argument is to map each virtual NIC interface from its source portgroup to the target portgroup.
datastore (VC:Datastore) : The datastore where the VM will reside.
props (array of VcKeyValue) : This is an optional parameter that can be used to provide inputs for the OVF settings, if they are being used.

Pointing to the OVF Files on a local share

It’s safe to say that most VCO appliances or installations won’t have direct internet access to pull OVFs from the Solution Exchange.
If you have mounted a CIFS share to the VCO appliance, you can access the OVF files there. The path requires additional escape slashes due to how VCO reads the attribute, so it would look something like this:

file:////mnt//cifsShare//ovf_templates//MySuperCoolApp//MySuperCoolApp.ovf

In this example, I have a CIFS share mounted on the appliance at mount point /mnt/cifsShare.
You will want to make sure that you set appropriate read/execute permissions in the js-io-rights.conf file to this location.
If you have no idea what that means, go HERE.

Setting up the OVF Network Mappings

The only thing that needs a little explanation on using this method is the VcOvfNetworkMapping portion of things.
If you think about it, during the deployment of an OVF in the vSphere Client, there is a window that asks you to map between the source and destination networks.

The vSphere Client OVF Network Mapping menu.

The ‘source’ network is whatever the portgroup was called when it was exported. You just need to map it to the destination portgroup on the target host. This requires you to know both sides so you can code for it, if you need to.

Below is some sample code you can use in a Scriptable Task to construct the network mapping values you need to execute the method.

// setup OVF to move the 'source_network' network to 'target_network'
var myVcOvfNetworkMapping = new VcOvfNetworkMapping() ;
myVcOvfNetworkMapping.name = "source_network"
// check for 'target_network' Portgroup on the ESX host bound to the task
for each(net in inputHost.network) {
  if(net.name == "target_network") {
    myVcOvfNetworkMapping.network = net
    break
  }
}
// create empty array of Network Mappings
var ovfNets = []
// add the mapping to the array
ovfNets.push(myVcOvfNetworkMapping)

If you have multiple NICs on different networks, simply repeat the steps above and adjust the mappings.

Optional: OVF Properties

If you are deploying OVFs that utilize OVF properties, you can create an object to specify them.
To ensure you get the right values, you may want to deploy an example OVF, and then check the OVF environment variables in the VM settings.

Here’s an example:

OVF Properties for a VM.

Given the above, notice the PropertySection which has the values you want. Here’s a bit of example code to populate these values:

// Create empty array of key/value pairs
var keys = []
// Create new key/value for OVF property
var myOVFValue01 = new VcKeyValue() ;
myOVFValue01.key = "OVF.OVF_Network_Settings.Network"
myOVFValue01.value = "10.10.10.5"
// Create new key/value for OVF property (that totally doesn't exist in the shot above, but you get it)
var myOVFValue02 = new VcKeyValue();
myOVFValue02.key = "OVF.OVF_Network_Settings.Netmask"
myOVFValue02.value = "255.255.255.0"
// add both of them to the array
keys.push(myOVFValue01, myOVFValue02)

Add the array to your method call and when the VM boots, depending on the underlying machine, it could auto configure! If you integrated this capability with an IP Management system, you could automate provisioning of OVFs top to bottom!
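
To tie it all together, the actual method call could look something like this rough sketch, where inputHost and inputDatastore are assumed attributes, ovfNets and keys are the arrays built above, and the VM name is illustrative:

// deploy the OVF using the mappings and properties built earlier
var newVM = VcPlugin.importOvf(
	"file:////mnt//cifsShare//ovf_templates//MySuperCoolApp//MySuperCoolApp.ovf", // ovfUri
	inputHost,                      // VC:HostSystem target
	"Discovered virtual machine",   // existing VM folder name
	"MySuperCoolApp-01",            // name for the new VM
	ovfNets,                        // array of VcOvfNetworkMapping
	inputDatastore,                 // VC:Datastore
	keys);                          // optional array of VcKeyValue OVF properties
System.log("Imported VM: " + newVM.name);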

Troubleshooting the importOvf workflow
The VcPlugin.importOvf() method doesn’t give too much back when it fails. It is recommended to enable DEBUG level logging on the Configurator page for VCO during your testing of this plugin so you can narrow down where a failure occurs. In the logs you will see the underlying method call:
[WrappedJavaMethod] Calling method : public com.vmware.vmo.plugin.vi4.model.VimVirtualMachine com.vmware.vmo.plugin.vi4.VMwareInfrastructure.importOvf(java.lang.String,com.vmware.vmo.plugin.vi4.model.VimHostSystem,java.lang.String,java.lang.String,com.vmware.vim.vi4.OvfNetworkMapping[],com.vmware.vmo.plugin.vi4.model.VimDatastore,com.vmware.vim.vi4.KeyValue[]) throws java.lang.Exception

The most likely cause of errors in the method is either a missing portgroup or incorrect key value. Since just about everything is just strings, you need to make sure that on the target host, the portgroup is there and matches exactly.

Thanks for reading!

Workflowing with VUM and VCO – The Basics

If you’re using VCO/VRO as your automation engine, sooner or later you will probably add the vSphere Update Manager plugin.
Makes sense, right? The opportunity to schedule or delegate the ability to update and patch your hosts is pretty compelling.

It’s unfortunate that getting up and running with VUM isn’t quite as simple as it looks on the surface. In this particular case, PowerCLI wins for its ease of integration.
Hopefully, this post helps turn that tide back.

It’s a little bit behind

The installation of the plugin is just like any other – upload it from the Configuration interface on port 8283. I don’t think it really even needs a reboot.
From there, you have to punch in your vCenter URL – which may not be obvious to many as there is little help or documentation.

So just to be clear, add the URL like this : 
https://myvcenterserver.lab.local/sdk

You can also add others in the list if you have multiple vCenters in the same authentication scope in a similar way.

Next up, check your plugin status within the VCO client. Inevitably you will run into an error that simply says ERROR IN PLUGIN.
Unless you are the white knight of IT, this isn’t too helpful.
If you see this, I’m willing to bet that it’s because you didn’t import the SSL certificate to the trusted store.
How would you know to do that? You wouldn’t, unless you like staring at logs set to DEBUG level!

So, how do I import the certificate?
Easy – just point VCO to the SOAP endpoint of your VUM box. You can get the service port piece of this information from the Admin section of the Update Manager plugin through the vSphere Client. You can do this through the Configurator page too, but since Configurator is going away, this is probably the best way.

Locating the SOAP port for VUM.

Now, you can run a workflow to import the VUM certificate into the SSL Trust Store.
You can find the builtin workflow in Library->Configuration->SSL Trust Manager.
Choose to run the Import a certificate from URL workflow.

The Import Certificate workflow.

Execute the workflow, and for the input, use the FQDN to the server running the VUM service, and append port 8084 to the end, like you saw earlier.
The FQDN portion is important! If you don’t put it in there, you will likely have an error.
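
For example, the input would look something like this (substituting the FQDN of your own VUM server):
https://vumserver.lab.local:8084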

Importing the VUM SSL Certificate.

Once the certificates are imported, relaunch your VCO client. After you login, you should see some progress being made.

It’s ALIVE!

That wasn’t so hard

So next up, you just bind a HostSystem value to a workflow that remediates and you’re good right?
Unfortunately not quite yet. But we’ll get there!

VUM uses a completely different service and object model unrelated to vCenter, thus direct integration is not as simple. Out of the box, you have to write your own code and do some heavy debugging in the logs.

Connecting the dots

The first thing we will do is make a simple Action element that takes an input of type VC:HostSystem and extracts the necessary info out of it to create its VUM-equivalent object.

Digging into the API Explorer, look for the VUM:VIInventory scripting type. This will tell you what you need in order to construct the corresponding object.

The VUM:VIInventory Type.

Thankfully, this is a pretty simple type to instantiate – it only requires four pieces of information to construct.
VumVIInventory(string): The string to input here is the vCenter Server name.
id: This is the “vSphere Object ID” – essentially the Managed Object Reference ID.
name: This is the host object name in vCenter, whether it’s the FQDN or if you have it connected by IP. (Shame on you!)
type: This is the asset type. Keep in mind, VUM has the ability to patch Virtual Appliances too, so specifying this is required.

So, let’s get some code in place to make the VC -> VUM conversion happen! Create a new Action, and use the below setup as your guide.

Setting up the conversion VCO Action.

Wait, why is the return type an Array?
The VUM plugin expects an array of VUM:VIInventory, even if that means it is an array of one object.

Here’s the Action code to use, with some notes.

// create the new VUM Inventory object and assign values.
// first define a new instance of the object, pointing to the vCenter Server of the host
var vumItem = new VumVIInventory(inputHostSystem.sdkConnection.name);

// get the Managed Object Reference
vumItem.id = inputHostSystem.reference.value;
// get the Managed Object Type
vumItem.type = inputHostSystem.vimType;
// get the ESXi Host name
vumItem.name = inputHostSystem.name;

// the VUMVIInventory object must be an array, so create a dummy one and push the host value in
var vumHosts = [];
vumHosts.push(vumItem);

// return the array of VUM objects (even if it is just one)
return vumHosts;

With this new instance of a VUM:VIInventory type, you can bind it to a Remediate Host workflow as you normally would for your patching pleasure, right?
Theoretically, yes. But you may want to check something else before celebrating.

java.lang.NullPointerException -or- Lessons in Error-Checking

One thing that you will want to verify prior to attempting to remediate using the VCO plugin is whether or not a set of VUM Baselines is attached.
If you have no baselines attached, or not specified, your Remediate workflow will error out and throw you a generally unhelpful message.
Here’s how you can check for, attach, and bind the necessary data to the ESX host you want to remediate.

There is an Action as part of the plugin that will attach Baselines to a host object, but you have to tell it which ones you want. Below is sample code you can use in a Scriptable Task (or Action) to output a list of Baselines in your vCenter Server. Since you may have specific Baselines you wish to use, you’ll have to modify it to your liking – but it should be pretty easy.

The input binding on this task can be any object – for this example, it is the ESX host.

// create a search query for VUM baselines whose name begins with "SuperCool"

// query all baselines on the VM Host's vCenter Server
var searchSpec = new VumBaselineSearchSpec(inHost.sdkConnection.name) ;
// define regex to search baselines
var expression = "^SuperCool"

// get the list of baselines
var baselineList = VumObjectManager.getFilteredBaselines(searchSpec)
// VumBaseline must be an array, so make a dummy one
var baselineArray = []

// Loop through the findings and if they match, add to the baseline list.
for each(baseline in baselineList) {
  if(baseline.name.match(expression)) {
    System.log("Baseline: "+baseline.name)
    baselineArray.push(baseline)
  }
}

// assign the array values to the output binding, which is 'vumBaselines'
// if this were an Action, you would just return 'baselineArray'
vumBaselines = baselineArray

So after this, you’ll have your hosts to Remediate, and the Baselines you wish to use for the Remediation. Simply bind these arrays to the inputs of the Remediate workflow, and you’re off to the races!

As an aside, I’m hopeful the next iteration of the plugin, along with the 6.x release with VUM integrated into the VCSA will make life simple for us Orchestrator types. We will see!

VMware VSA Deepdive – Part 5 – Use SOAP!

Fake Edit: It has been a while! It’s been lots of great weather, people visits, and then VMworld happened. Time to get back on track!

In the last post about the VSA, we leveraged the SSH plugin in VCO to send the necessary command to the host that would force the VM to power off, as we can’t do it from vCenter. There are pros and cons to that approach, though.

First of all, not particularly great from a security perspective – you have to have SSH open and the service started to make that happen. Depending on your environment, this may not be seen as a good thing.

You’re also using the root credentials to accomplish the feat. You could use another one, but that’s a whole lot of work just to set that whole thing up in preparation for this scenario. You’d have to take into account rolling the root password and how you could work that into the workflow and managing it over time.

And finally, related to the above: this process isn’t going to pop up in a log very easily, since you’re bypassing the API *AND* vCenter.

So, given these faults I needed to find another way to proceed. To be fair this process also required root to the host so it wasn’t that much better, but it would at least show up in host logging. The answer?

SOAP. The yardstick of all APIs.

Yep. I was desperate. But if it helps me to automate working with 1500+ hosts, it’s WORTH IT.

I want you to hit me as hard as you can

I haven’t had to make SOAP requests to anything in ages, so I was pretty rusty. Thankfully VCO to the rescue again with a built-in SOAP plugin.
First things first. Make a new Workflow and define two input Parameters – one for the ESXi host, and the VSA VM you wish to power down.

Inputs for the SOAP workflow.

For your attributes, define the following as type String:

  • inputHostWSDLUri – this is to specify the WSDL URI used to talk to the ESXi host.
  • inputHostPreferredEndpoint – this is for talking to the API endpoint.
  • inputHostName – this is just to hold a string value of the ESXi host for later.

The rest of the attributes in the workflow can simply be bound as you add the other workflows that come with VCO.

Preparing the SOAP Host entry

One thing to keep in mind – when adding an ESXi host as a SOAP Host, some values are set automatically that do not allow this to complete as expected. The first problem is the SOAP Endpoint and the SOAP WSDL URI. Both of these, when enumerated by the SOAP plugin, point to https://localhost, which makes sense when the ESXi host is making calls to itself, but not for VCO reaching out to it remotely. The first order of business is to fix these values.

Create a Scriptable Task element in your schema, and bind the input Parameter inputHost to it. Bind the 3 attributes you defined above for output. Then, input the code found below.

Setting up the WSDL and Endpoint URI.

This code is pretty straightforward. It is simply replacing the values with the correct ones to perform remote SOAP calls.
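
As a rough sketch, the scriptable task can build those URIs from the host name, assuming the standard /sdk and /sdk/vimService paths exposed by an ESXi host:

// keep the plain host name for naming the SOAP host later
inputHostName = inputHost.name;
// WSDL served by the host itself
inputHostWSDLUri = "https://" + inputHost.name + "/sdk/vimService?wsdl";
// endpoint VCO should actually send its requests to
inputHostPreferredEndpoint = "https://" + inputHost.name + "/sdk";
System.log("WSDL URI: " + inputHostWSDLUri);
System.log("Preferred endpoint: " + inputHostPreferredEndpoint);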

Next up, drop a copy of the Add a SOAP Host workflow into your schema. Bind the inputHostName and inputHostWSDLUri values to the name and wsdlUri parameters, and bind the rest to new attributes/values as you desire.

Binding new attributes to the Add a SOAP Host Workflow.

You’ll need to provide things such as the username/password to the host, timeout values and other values, all of which can be static values, or linked to a Configuration Element.

For the OUT parameter of this workflow element, bind it to a new attribute named soapHost so we can use it later.

Modify the SOAP Host

Before you proceed, you need to update the new SOAP Host with the new value of inputHostPreferredEndpoint.
Drop a copy of the Update a SOAP Host with an endpoint URL workflow into the schema.
Simply bind the attributes soapHost and inputHostPreferredEndpoint to their respective input parameters, and bind soapHost to the output parameter, so that it completes.

Using SOAP to find the VSA and shut it down

Add a Scriptable Task to your schema. On the inputs, bind the soapHost and inputVM attributes, along with the inputUser and inputPassword attributes holding the host credentials.

Below is the code you can copy/paste, with comments as needed.

// get the initial operation you want to go for from the SOAP host specified.
var operation = soapHost.getOperation("RetrieveServiceContent");
// Once you have the SOAP Operation, create the request object for it
var request = operation.createSOAPRequest();

// set Parameters and their attributes.
request.setInParameter("_this", "ServiceInstance"); // creating the input Parameter itself
request.addInParameterAttribute("_this", "type", "ServiceInstance");

// make the request, save as response variable
var response = operation.invoke(request);

// retrieved values to be passed on down the line
var searchIndex = response.getOutParameter("returnval.searchIndex")
var rootFolder = response.getOutParameter("returnval.rootFolder")
var sessionMgr = response.getOutParameter("returnval.sessionManager")

// get Login Session to add to future headers.
var hostLoginOp = soapHost.getOperation("Login")
// create Login request
var loginReq = hostLoginOp.createSOAPRequest()
loginReq.setInParameter("_this", sessionMgr) // using value from initial query
loginReq.addInParameterAttribute("_this", "type", "SessionManager")
loginReq.setInParameter("userName", inputUser)
loginReq.setInParameter("password", inputPassword)
var loginResp = hostLoginOp.invoke(loginReq)
var sessionKey = loginResp.getOutParameter("returnval.key")

// find the VSA VM on the host.
var vmoperation = soapHost.getOperation("FindChild")
System.log("VM Search Operation is: "+vmoperation.name)
var vmreq = vmoperation.createSOAPRequest()
// define parameters
vmreq.setInParameter("_this", "ha-searchindex") // get the SearchIndex
vmreq.addInParameterAttribute("_this", "type", "SearchIndex")
vmreq.setInParameter("entity", "ha-folder-vm") // representing the root VM Folder
vmreq.setInParameter("name", inputVM.name) // your search criteria

// send request, get the response in a variable
var vmresp = vmoperation.invoke(vmreq)
// assign moref to variable
var vmMoRef = vmresp.getOutParameter("returnval")
// this log shows the output value
System.log("MoREF of VM ["+inputVM.name+"] on ["+soapHost.name+"]: "+vmMoRef )

// now that you have the MoRef of the VSA VM, you can kick off the Power Off task with a decision/parameter.
var pwroffOp = soapHost.getOperation("PowerOffVM_Task") // get the Power Off operation
var pwroffOpReq = pwroffOp.createSOAPRequest() // create the Power Off request
// define parameters
pwroffOpReq.setInParameter("_this", vmMoRef) // assign the MoRef of the VM to power off
pwroffOpReq.addInParameterAttribute("_this", "type", "VirtualMachine")
// shut off the VM by executing the request.
var offresp = pwroffOp.invoke(pwroffOpReq)

And there you have it. If you direct connect to the ESXi host when you run this workflow, you will see a task for powering off the VM appear and you are good to go.

One thing I prefer to do at the end of this workflow is to drop in the Remove a SOAP Host workflow and bind appropriately so that my host list doesn’t get too large, but this is obviously optional.

A final note on SSL Certificates

If you run this as it is out of the gate, you will probably get a pop-up regarding whether to trust the SSL certificate of the host.
Of course in a perfect world, all of your certificates are trusted top to bottom and are maintained. But anyone who has tried to do this at scale has struggled and probably doesn’t bother. In order to bypass this, you’ll need to make a few adjustments and duplicate the stock workflows so you can make changes to them.

In the VCO workflow list, go down to Library -> SOAP -> Configuration and right-click Manage SSL Certificates.
Choose Duplicate Workflow.
Duplicate SSL Workflow
Save your copy of the workflow wherever you like. You may want to change the name a bit to reflect that it isn’t the standard workflow.

Now, you can edit the workflow and make a minor adjustment. Here is the workflow by default.

The default Manage SSL Certificates workflow schema.

You’ll notice the Accept SSL certificate schema element. Simply click it and delete it from the schema.

The "custom" Manage SSL Certificates workflow schema.
The “custom” Manage SSL Certificates workflow schema.

Finally, click on the General tab of your workflow, and look for an attribute named installCertificates. Inside of the value, input the text Install. The workflow element Install certificate does a simple check to see if the attribute is requesting to install, and continues from there.

Updating the new Manage SSL Certificates attributes.

As a final step, you will want to duplicate the Add a SOAP Host workflow, and replace the Manage SSL Certificates element with this new one you have created.
Ensure that the two attributes are rebound to the values the old one were using.

Re-adding the workflow bindings for SSL Certificates.

With these changes, you can Add a SOAP Host and not get stopped for SSL verification.

Of course, this isn’t really a best practice, but it gets the job done.

Next up, the final and maybe the most elegant solution for working with the VSA VM Power Off situation.

VMware VSA Deepdive – Part 4 – Shutting Down VSA (SSH Edition)

The first way I elected to try and forcibly shut down the VSA VM was to do everything through the ESXi command line via SSH. ESXCLI itself is not implemented in VCO directly, so this will require some good old fashioned text parsing with AWK.

Enabling SSH on the host through a VCO Action

Unfortunately out of the box, there is no workflow/action that manages ESXi services, so I needed to roll my own.
Below is the Action setup and script code I used to check for the SSH service and start it up, given an input of type VC:HostSystem.
Create the Action, and name it startSSHService. There is no return value necessary on this Action.
Setting up the SSH Service Action

// get the list of services
var hostServices = inputHost.configManager.serviceSystem.serviceInfo.service
var sshService = null
// loop the services, find the SSH service.
for each(svc in hostServices) {
  if(svc.key == "TSM-SSH") {
    sshService = svc
    System.log("Found SSH Service on host ["+inputHost.name+"]")
    break
  }
}
if(sshService == null) {
  throw "Couldn't find SSH service on ["+inputHost.name+"]!"
}

// Enable the service
try {
  inputHost.configManager.serviceSystem.startService(sshService.key)
} catch(err) {
  System.log("ERROR--Couldn't start SSH service. Error Text: "+err)
}
// the end

So, once you have SSH started on your ESXi host, you can send commands through VCO to do what you need.

SSH Service Check Action

For a more robust workflow, you will probably want an Action that will check to see if the service is running, and return a boolean value. That way you can build in a little bit more into the flow.
The setup for the ‘check’ Action is the same, with the exception of the return value being a boolean.

Setting up the Check SSH Action.

The code is similar as well, just doing a simple check at the end.

// get the list of services on the host
var hostServices = inputHost.config.service.service
var sshSvc = null
for each(svc in hostServices) {
  // System.log("Service: "+svc.label+", Status is: "+svc.running)
  if(svc.label == "SSH") {
    sshSvc = svc
    break
  }
}

// if no SSH service was found at all, treat it as not running
if(sshSvc == null) {
  return false
}

// check status, return true/false
return (sshSvc.running == true)
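
With both Actions in place, a parent workflow can check first and only start SSH when it actually needs to. Below is a minimal sketch for a scriptable task, assuming the Actions live in a module named com.example.vsa and that the check Action was named checkSSHService (both names are placeholders, so substitute your own):

// check whether SSH is already running before trying to start it
// (module path and Action names are assumptions - adjust to your environment)
var vsaModule = System.getModule("com.example.vsa")
if (!vsaModule.checkSSHService(inputHost)) {
  System.log("SSH is not running on ["+inputHost.name+"] - starting it")
  vsaModule.startSSHService(inputHost)
} else {
  System.log("SSH is already running on ["+inputHost.name+"]")
}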

Where’s my Burrit–VSA?

Before you power off the VSA VM, you’ll want to make sure to vMotion your other guests to another node, or have a foolproof way of finding the VSA appliance on your host. Another Action to the rescue! Given an input ESXi host, this Action will query the VMs running on the host and check each one’s tags to see if any of them match a specific value found on all VSA appliances. Note that these Tags are actually in the vCenter database, and not the Inventory Service Tags in the vSphere Web Client.

For purposes of this post, I’ll name the action getVSAonHost.

Setting up the VSA finder Action.

// for when you absolutely, positively need to make sure it's a VSA.
var vsaKey = "SYSTEM/COM.VMWARE.VIM.SVA"
// check the VMs on the host for the tag through a loop
for each(vm in inputHost.vm) {
  if(vm.tag) {
    for each(tag in vm.tag) {
      if(tag.key == vsaKey) {
        // found the VSA appliance on this host
        return vm
      }
    }
  }
}
// no VSA appliance found on this host
return null

So now you know you have the VM in question, and you can pass the VirtualMachine’s name property to your SSH command later.

Making a SSHandwich

With the ESXi host and the VSA VM in hand, you can execute the built-in Run SSH Command workflow to do the final step.

Here’s the SSH command to send, which will find the VSA VM ID and power it off in one line, no questions asked:

VMID=$(vim-cmd vmsvc/getallvms | grep -i <VSA Name> | awk '{ print $1 }') && vim-cmd vmsvc/power.off $VMID
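
Reading it left to right: vim-cmd vmsvc/getallvms lists every VM registered on the host with its VM ID in the first column, grep -i narrows that list down to the VSA by name, awk '{ print $1 }' keeps just that first column, and vim-cmd vmsvc/power.off then powers off the resulting ID.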

Begin by creating a new workflow, and create a single input parameter named inputHost, of type VC:HostSystem.
Then create three attributes in the General tab, naming them sshCommand, hostSystemName, and vmVSAName, all of type string.
Finally, create another attribute called vsaAppliance of type VC:VirtualMachine for use with the Action.

Next, drop your getVSAonHost Action into the schema, and bind the actionResult to vsaAppliance as seen below.

Binding Actions to the getVSAonHost Action.

Next, drop a Scriptable Task into the Schema and bind inputHost and vsaAppliance on the IN tab. On the OUT tab, bind the attributes of hostSystemName and vmVSAName. We are effectively going to write a small blurb of code that hands off the name properties of the input objects to the output attributes for use later, along with creating the SSH command string.

Binding values to the Scriptable Task.

In the Scripting Tab, we’ll use a few simple lines of code to perform the handoff of values.

// assign values to output attributes
hostSystemName = inputHost.name
vmVSAName = vsaAppliance.name
// create SSH command string using the input values
sshCommand = "VMID=$(vim-cmd vmsvc/getallvms | grep -i "+vmVSAName+" | awk \'{ print $1 }\') && vim-cmd vmsvc/power.off $VMID"
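
One thing to watch for: if your VSA VM’s name contains spaces, the unquoted grep pattern above will fall apart. A slightly more defensive version of that same assignment simply wraps the name in quotes:

// same command string, but with the VM name quoted in case it contains spaces
sshCommand = "VMID=$(vim-cmd vmsvc/getallvms | grep -i \""+vmVSAName+"\" | awk '{ print $1 }') && vim-cmd vmsvc/power.off $VMID"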

Finally, drop a copy of the Run SSH Command workflow into the schema. There are a lot of inputs here, so you will have to do some more complicated bindings. You can either force them to be input each time the workflow is run, set a static value, or bind to existing attributes.

Here’s what it looks like by default.

Setup for the SSH Workflow.

How you approach this part is largely up to you, but here is how I did it for this example.

The updated SSH Workflow Setup.

You’ll notice that for the username/password I set a static value of root, set passwordAuthentication to Yes, and changed the initial hostNameOrIP and cmd values to the attributes we created earlier. For the outputs, I created new local attributes to show the results.

Run the workflow and you should see that VSA go down!

As an aside, if HA is enabled in your VSA HA Cluster object, it will immediately attempt to restart the machine, so make sure your parent workflows disable HA first so that this doesn’t end up being a problem.
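
If you want to handle that from Orchestrator as well, one option is a scriptable task that turns the cluster’s das (HA) configuration off before the power-off. The snippet below is only a rough sketch using the vCenter plugin’s cluster reconfigure call; the clusterObj input of type VC:ClusterComputeResource is an assumption you would need to bind yourself, and you should test the behavior in a lab before trusting it:

// Sketch: disable HA (das) on the cluster before powering off the VSA VM.
// clusterObj is assumed to be a workflow input/attribute of type VC:ClusterComputeResource.
var clusterSpec = new VcClusterConfigSpecEx()
clusterSpec.dasConfig = new VcClusterDasConfigInfo()
clusterSpec.dasConfig.enabled = false
// 'true' = modify the existing cluster configuration rather than replace it
var reconfigTask = clusterObj.reconfigureComputeResource_Task(clusterSpec, true)
System.log("Submitted reconfigure task to disable HA on ["+clusterObj.name+"]")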

Sweet Reprieve

It’s a bit crazy how much effort it takes to work around a disabled method, but it does work. I don’t think it’s a particularly great approach, but if you’re the only one who cares about these systems and you don’t have audits to worry about, it may be enough.

But for my purposes this was just the first step of the journey. I did not end up using this approach, but figured it would be a good exercise to document the process.

The next post will take it in a different direction altogether, and a little bit closer to an API based solution that can also be audited.

VMware VSA Deepdive – Part 3 – Enter the Orchestrator

Previously, we set up our VSA home lab and beefed up our knowledge of WSCLI.
There may have also been some general complaints about the limitations of the VSA solution from a management perspective.

Now, it’s time to bring things together with an unlikely savior.

(Just kidding, it’s VCO to the rescue as always.)

What do you mean?

VCO being the general Swiss Army knife it is, you probably already know it has virtually unlimited possibilities.
So how can VCO and VSA come together to make magic happen? There’s no native plugin!

Factoid 1: WSCLI is a Java JAR, portable code that can run directly on the JRE.
Factoid 2: VCO runs Tomcat for its server software, which is executed using the JRE.

For me, reading this excellent post on the VCOTeam blog was like a Eureka! moment. Surely if I can find where JRE is installed on the VCO Appliance, I can execute WSCLI commands in a programmatic way, right?

I feel like the answer is Yes?

Nailed it.  Here’s what you need to set this up.

Orchestrator Appliance Setup

First, you need to give VCO the ability to create local processes (i.e. execute local code).

To do this, SSH into the VCO Appliance as root.
Then, using vi as your text editor, type this at the SSH prompt:
vi /etc/vco/app-server/vmo/conf/vmo.properties

Once in the file, press i to enter Insert Mode, so you can add or edit content in the file.
Add this line anywhere in the file:
com.vmware.js.allow-local-process=true

Once added, press ESC, then type :wq to save changes and exit.

NOTE: Once you have made this change, a reboot of the appliance is needed.

Next up: since SSH is enabled by default on the VCO Appliance, SCP is also available.
Using your favorite SCP Client, login to your appliance and upload WSCLI.jar to a folder of your choosing.
For purposes of this post we will put it in the folder /var/run/vco.
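From a Linux or macOS machine, for example, the upload can look something like this (substitute your own appliance hostname or IP):
scp WSCLI.jar root@your-vco-appliance:/var/run/vco/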

Now that you have the file there, the next thing to do is find the path to the JRE.
Thankfully, it’s extremely easy to find – simply type which java at the SSH prompt.

Finding the JRE on the VCO Appliance.

Next, you need to edit your js-io-rights.conf to allow VCO to read and execute content from both folders.

To edit the file, start by typing vi /etc/vco/app-server/js-io-rights.conf
Press i to change to Insert Mode. You can now edit the file appropriately.
Here is what my js-io-rights.conf file looks like, in case you want to do a comparison.

The edited js-io-rights.conf file.

My changes are specifically these two lines, granting the necessary read/write/execute rights:
+rwx /var/run/vco
+rx /usr/java/jre-vmware/bin

Once you have these lines in your file, press ESC to leave Insert Mode. Then type :wq and press Enter to write your changes and quit VI.

So now you have the path to your JRE, your WSCLI, and permissions to the necessary folders. Let’s conduct a quick test!

Run this from the SSH prompt: /usr/java/jre-vmware/bin/java -jar /var/run/vco/WSCLI.jar

WSCLI running in VCO!

BOOM!

Create your Configuration Elements for WSCLI

Configuration Elements are your friend for using this tool. Since the path to the JRE and the location of your WSCLI upload shouldn’t really change, why not make them global variables? That’s essentially what a Configuration Element is for.

In the VCO Client, change to Design View if you aren’t already there. Then, go to the Configuration Elements Tab.

The VCO Configurations Tab.

Create a new Folder (or not), and create a new Element, as I have done in the above screenshot.
Once you have created it, you will go straight into the menu to customize the element.
Our goal here is to define two attributes: the path to JRE, and the path to WSCLI.jar.
Simply create two attributes, and populate them with the paths you have found earlier.
These paths are CASE-sensitive!
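
Based on the locations used earlier in this post, and assuming you give the attributes the same names the workflow will use later (pathJRE and pathWSCLI), the values would look something like this:
pathJRE = /usr/java/jre-vmware/bin/java
pathWSCLI = /var/run/vco/WSCLI.jar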

The WSCLI Configuration Element.

Save and Close your changes.

Using the Configuration Element in a Workflow

This one is almost TOO easy, and you’ll immediately see the value of Configuration Elements.

Create a new Workflow and add two attributes to it, pathJRE and pathWSCLI. Once you have done so, you’ll see a small icon next to each attribute, similar to the one in Configuration Elements.

The empty workflow attributes.

Click the small arrow highlighted above for the pathWSCLI attribute, and you will get a dialog window to find your Configuration Element.

Available ConfigurationElements are listed in this window.

You’ll find your previously created Configuration Element there; bind the attribute in your workflow to it, and use it however you want in the workflow!

Running WSCLI in a Workflow

The VCO Command Class.

The final step is using the Command class in VCO. This class is meant to perform the local execution of code and return the output. So if you run a command like ls -l you’ll get a list of files in a directory. But obviously it’s better when you can run WSCLI in a workflow.

In the Schema of the workflow, drop a Scriptable Task in, and bind the attributes you created to it.
To execute WSCLI commands from the appliance you will be using the Command Scripting Class, which is detailed below.

Not much there, but that’s OK! If you’re this far, I suspect you’re thinking of use cases for this, of which there are many!

Follow the code snippet below as an example. Note that pathJRE and pathWSCLI are the attributes you defined earlier.

// define command line to execute
var cmdString = pathJRE+" -jar "+pathWSCLI+" help shutdownSvaServer"
// create a new command object with your command string
var myCommand = new Command(cmdString)
// execute the command and wait for it to complete
myCommand.execute(true)
// show the output
System.log(myCommand.output)
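
If you want the workflow to fail when WSCLI itself errors out, the Command object also exposes a numeric result property holding the exit code of the process. Assuming that property is available in your version, a small guard after execute() turns a non-zero exit into an exception:

// fail the workflow if WSCLI returned a non-zero exit code
// (assumes the Command object's 'result' property is available in your version)
if (myCommand.result != 0) {
  throw "WSCLI command failed with exit code "+myCommand.result+": "+myCommand.output
}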

At this point, you should see the results of WSCLI in the Logging tab.

WSCLI – now in VCO!

You can now execute WSCLI in your workflows, but that is just part of the ongoing puzzle. As we know already, WSCLI can’t actually power down the VSA node. How do we tackle that when we’re denied that ability in vCenter?

In the next few posts, I’ll detail the various ways I was able to accomplish this, each with its pros and cons. In the meantime, I’ll leave you to create a few Actions or Workflows to add WSCLI functionality!