Automate Directory Sync in vRO 7.4

I took quite a break from blogging this past year – and now that it’s VMworld 2018 again, I figured I’d contribute some useful vRO content as it’s always my biggest gripe when I’m there!

One of the big gaps in vRA 7.x has been that you cannot synchronize your identity sources programmatically. Since the transition from PSC/SSO lookups to vIDM, this is a pretty big feature to be missing when you want to automate management of your users and groups.

Thankfully, a new API is available in vRA 7.4+, exposed both through the REST API and the CAFE plugin, making it easy to embed in your workflows.

It’s staggering to me that VMware didn’t include a basic workflow to go along with this, but that’s OK. Here’s one for you that you can extend however you want! Given its relative simplicity, this is probably a good candidate for a vRO Action.

NOTE: This API only works for vRA 7.4 or higher!

First, you’ll need a couple of inputs:
cafeHost (Type: vCACCAFE:VCACHost) – the tenant where the identity source exists.
identitySource (Type: string) – the name of the identity source. The actual domain is extracted in the code.

Once you have those, bind them to a new Scriptable Task and put in the following code:

// Use tenant to create a connection to the Identity Store service.
var service = cafeHost.createAuthenticationClient().getAuthenticationIdentityStoreClientService()
var store = service.getIdentityStore(cafeHost.tenant,identitySource)
var domain = store.domain

// What's the current status of the identity source?
var status = service.getIdentityStoreStatus(cafeHost.tenant,domain).getSyncStatus().getStatus().value()
if(status !== "RUNNING") {
  // The source is currently not performing a sync, so begin one!
  try {
    service.syncIdentityStore(cafeHost.tenant,domain)
    System.debug("Sync of "+domain+" has started.")
  } catch(e) {
    throw("Failed at sync start: "+e)
  }

  // Poll the sync operation to completion, whether it succeeds or fails.
  var retries = 36
  for(var i = 0; i < retries; i++) {
    try {
      status = service.getIdentityStoreStatus(cafeHost.tenant,domain).getSyncStatus().getStatus().value()
      if(status == "RUNNING") {
        System.debug("\tSync of domain "+domain+" is currently in progress...checking again in 10 seconds.")
        System.sleep(10000)
      } else {
        System.debug("Sync status of "+domain+" is now: "+status)
        break
      }
    } catch(e) {
      throw("Unable to query status: "+e)
    }
  }
}

A useful aside to this code: if you ever need to know what possible values exist behind any .value() method in the CAFE plugin, you can simply output .values() to the log, and it will return the full list.

For this particular operation, these are your options: COMPLETED, FAILED, RUNNING, UNDEFINED
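For example, a quick sketch of that trick using the same service and domain variables from the workflow above:

// log every possible enum value behind the sync status - handy when mapping out logic
var statusEnum = service.getIdentityStoreStatus(cafeHost.tenant,domain).getSyncStatus().getStatus()
System.log(statusEnum.values())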

I’ve never run into an UNDEFINED status before, but these values should help you extend or map out the logic of your workflow. My retry logic is pretty simple and easily modified – find out what works in your environment and adjust it accordingly!

As an Action, I think the best approach would be to use the code above and, once the sync completes, return a boolean indicating whether the synchronization finished successfully; you can handle errors from there.
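Here is a minimal sketch of what that Action body could look like, assuming the same cafeHost and identitySource inputs and a boolean return type:

// kick off a sync and poll until it leaves the RUNNING state, then report success
var service = cafeHost.createAuthenticationClient().getAuthenticationIdentityStoreClientService()
var domain = service.getIdentityStore(cafeHost.tenant,identitySource).domain
service.syncIdentityStore(cafeHost.tenant,domain)
var status = "RUNNING"
for(var i = 0; i < 36 && status == "RUNNING"; i++) {
  System.sleep(10000)
  status = service.getIdentityStoreStatus(cafeHost.tenant,domain).getSyncStatus().getStatus().value()
}
return (status == "COMPLETED")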

I hope to be bringing more stuff forward for the rest of the year! Happy orchestrating!


Coming soon on this blog, vRA pickup lines. Watch this space!

Deploying OVFs with vRealize Automation

vRA is good at only a few things out of the box – primarily VM template based blueprints.

One thing that doesn’t work at all out of the box is deploying OVF/OVA machines, and in this post I will cover a way to do it. There are many ways to accomplish this thanks to vRO, but I think this approach is fairly easy to digest, using appliances a vRA admin would typically have access to.

Use Case

For me it is simple – deploying a fully functional vSphere or vRealize environment requires a number of appliances. Being able to build a full reference VSAN environment with automation is pretty cool and nice to be able to hand back to other members of my team for testing or experimentation. Deploying vRO/vRA is a lot more complex, and with the 7.2 releases of these solutions, you can do the whole thing via API calls.

Caveats and Assumptions

Not all appliances are equal! IHV or ISV appliances often tend to only set an IP address and not enable post-configuration, so while this will showcase a pretty end-to-end solution, other stuff outside of VMware appliances may not work as well.

Also, this post assumes vRA is already set up and is working pointed to an existing vCenter Server for data collection.

This also assumes you have a method of allocating IP addresses in place, as well as a process to pre-create appropriate DNS records for the appliances you build.

Setting the stage

For this particular post, I will detail deployment of a vCenter Server Appliance, version 6.0U2. Some parameters may change between versions but not too much.

When you download the ISO for the vCenter Server Appliance, extract the contents to a temporary folder. Search inside the extracted content in the folder VCSA to find the actual OVA file called vmware-vcsa.ova.

Launch the vSphere Client and deploy the OVA to your choice of datastore/host.

You’ll be prompted during the wizard to populate numerous values that would normally be handled during the VCSA UI Installer – leave them alone and continue.

Do not power on the appliance as part of the process!

Once the OVA has been deployed, simply convert the new appliance into a template.

Build your VCSA blueprint

In vRA, execute a data collection against your vCenter Server resources.

Once that is successful, go to Design->Blueprints and create a new composite blueprint.

Drag a vSphere Machine object onto the canvas, and set the names and descriptions as you like.

In the Build Information tab, specify these values:

  • Blueprint Type – Server
  • Action – Clone
  • Provisioning Workflow – CloneWorkflow
  • Clone From – <The template>
  • Customization Spec – <blank>

Lastly, in case it is not part of your existing Property Groups (aka Build Profiles), you will want to pass this property in:

  • Extensibility.Lifecycle.Properties.CloneWorkflow = *,__*

Most documents emphasize passing the VMPSMasterWorkflow32 properties in as part of setting up Event Broker, so this is just calling out an additional requirement.

From here, set your custom properties and build profiles as you like. One recommendation I have for vRA admins is to add a custom property on your endpoint that denotes the vCenter Server UUID – this helps identify which vCenter your machine deployed to, which is needed for steps coming up. If you have a single vCenter, this is optional, but you may want to put it in place now, just in case!

Publish and entitle your blueprint.

Create vApp properties workflow in vRO

The key to making this work is all about the vApp properties of the virtual machine once deployed. Once the VM is cloned from the OVF template you made earlier, but before the VM starts up, you need to be able to inject your custom properties at the vCenter VM level so that the appliance can bootstrap itself and initiate firstboot scripts successfully.

In vRO, create a new workflow. Add an input parameter named payload of type Properties. This will hold the content that the vRA Event Broker will use to populate your cloned VM with what it needs to build properly.

Create an attribute called vc of type VC:SDKConnection and set the value to your vCenter Server.
Create an attribute called iaasHost of type vCAC:VCACHost and set the value to your IaaS host – the code below references it.
Create an attribute called vcVM of type VC:VirtualMachine and leave it empty.
Create an attribute called iaasProperties of type Properties and leave it empty.

In the workflow schema, create a Scriptable Task and bind the payload input parameter to it, as well as the iaasHost, and vc attributes on the input side. Bind the vcVM and iaasProperties attributes on the output side of the task.

Insert the following code:

// create search query IaaS for the VM and its properties.
var properties = new Properties();
properties.put("VirtualMachineID", payload.machine.id);
// query IaaS for the entity
var virtualMachineEntity = vCACEntityManager.readModelEntity(iaasHost.id, "ManagementModelEntities.svc", "VirtualMachines", properties, null);
// now collect the custom properties attached to the machine
var vmProperties = new Properties();
var props = virtualMachineEntity.getLink(iaasHost, "VirtualMachineProperties");
for each (var prop in props) {
  vmProperties.put(prop.getProperty("PropertyName"), prop.getProperty("PropertyValue"));
}
// expose the collection on the output attribute
iaasProperties = vmProperties;

// construct the Managed Object Reference to find the VM object in vCenter
var object = new VcManagedObjectReference();
object.type = "VirtualMachine";
object.value = payload.machine.externalReference;

// query the vCenter for the actual object
vcVM = VcPlugin.convertToVimManagedObject(vc, object);

The above code should be fairly straightforward – it is pulling in your IaaS custom properties into a vRO object, and it is also finding the vCenter Virtual Machine so that you can manipulate it. Next will be the code that updates the OVF Properties of the VM in question.
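One aside before moving on: if you added the vCenter UUID custom property to your endpoint as suggested earlier, you could resolve the vc attribute dynamically instead of hardcoding it. A rough sketch, where the property name is hypothetical:

// find the SDK connection whose instance UUID matches the endpoint property
// "VMware.Endpoint.vCenterUUID" is a made-up name - use whatever property you defined
var vcUuid = iaasProperties.get("VMware.Endpoint.vCenterUUID")
for each(var sdk in VcPlugin.allSdkConnections) {
  if(sdk.about.instanceUuid == vcUuid) {
    vc = sdk
    break
  }
}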

Updating the OVF Properties

Create a Scriptable Task in the schema of your workflow.

On the input side, bind the values iaasProperties and vcVM.

Paste the following code into the task:

// create the properties variable to be called later
var ovfProps = new Properties()
ovfProps.put("guestinfo.cis.appliance.net.addr.family","ipv4")
ovfProps.put("guestinfo.cis.appliance.net.mode","static")
ovfProps.put("guestinfo.cis.appliance.net.addr",iaasProperties.get("VirtualMachine.Network0.Address"))
ovfProps.put("guestinfo.cis.appliance.net.prefix",iaasProperties.get("VirtualMachine.Network0.SubnetMask"))
ovfProps.put("guestinfo.cis.appliance.net.gateway",iaasProperties.get("VirtualMachine.Network0.Gateway"))
ovfProps.put("guestinfo.cis.appliance.net.dns.servers",iaasProperties.get("VirtualMachine.Network0.PrimaryDns"),iaasProperties.get("VirtualMachine.Network0.SecondaryDns"))
ovfProps.put("guestinfo.cis.appliance.net.pnid",iaasProperties.get("Hostname")+".domain.lab")
ovfProps.put("guestinfo.cis.appliance.net.ports","{}")
ovfProps.put("guestinfo.cis.vmdir.password","VMware1!")
ovfProps.put("guestinfo.cis.vmdir.domain-name","vsphere.local")
ovfProps.put("guestinfo.cis.vmdir.site-name","vRA-Provisioned-Default-Site")
ovfProps.put("guestinfo.cis.vmdir.first-instance","True")
ovfProps.put("guestinfo.cis.db.type","embedded")
ovfProps.put("guestinfo.cis.appliance.root.passwd","VMware1!")
ovfProps.put("guestinfo.cis.appliance.ssh.enabled","True")
ovfProps.put("guestinfo.cis.appliance.time.tools-sync","False")
ovfProps.put("guestinfo.cis.appliance.ntp.servers","ntp.domain.lab")
ovfProps.put("guestinfo.cis.upgrade.silent","True")
ovfProps.put("guestinfo.cis.kv.new","True")
ovfProps.put("guestinfo.cis.silentinstall","True")
ovfProps.put("guestinfo.cis.clientlocale","en")

// construct the reconfiguration specification, which will add the vApp Properties from IaaS to your VM.
var vmspec = new VcVirtualMachineConfigSpec()
vmspec.vAppConfig = new VcVmConfigSpec()

var newOVFProperties = new Array()
// get the existing VM's OVF property list.
// this is needed to match the 'key' value to the 'id' and subsequently 'value' parameters. The 'key' is required.
var vmProperties = vcVM.config.vAppConfig.property
for each(prop in vmProperties) {
  // get the key value
  var key = prop.key
  // define new property spec, which will edit/update the existing values.
  var newProp = new VcVAppPropertySpec()
  newProp.operation = VcArrayUpdateOperation.fromString("edit")
  newProp.info = new VcVAppPropertyInfo()
  // assign the existing key - the 'key' is required to match the property being edited
  newProp.info.key = key
  // look up the new value from ovfProps by the property's id
  newProp.info.id = prop.id
  newProp.info.value = ovfProps.get(prop.id)
  newProp.info.userConfigurable = true
  // add the new OVF Property to the array
  newOVFProperties.push(newProp)
}

// assign the properties to the spec
vmspec.vAppConfig.property = newOVFProperties
// now update the VM - it must be powered OFF or this will fail
try {
  // reconfigure the VM
  var task = vcVM.reconfigVM_Task(vmspec)
  System.log("OVF Properties updated for "+vcVM.name+".")
} catch(e) {
  throw("Error updating the OVF properties: "+e)
}

Save your workflow and exit.

With the workflow completed, now we need to go into the Event Broker and create a subscription so that this workflow will be executed at the correct time during the provisioning process.

Setting up the Event Broker Subscription

In vRA, go to Administration->Events->Subscriptions, and create a new subscription.

For the Event Topic, choose Machine Provisioning and click next.

For the Conditions, you will want to set it up with All of the following and at least these values:

  • Data->Lifecycle State->Name equals CloneWorkflow.CloneMachine
  • Data->Lifecycle State->Event equals CloneWorkflow.CloneMachine.EVENT.OnCloneMachineComplete
  • Data->Lifecycle State->Phase equals EVENT

For the Workflow, choose the workflow you created above.

Ensure this is a blocking subscription, otherwise it will execute asynchronously. Since this absolutely has to complete successfully to ensure a proper build, blocking on this one is important!

And finally, because it always seems to happen to me – make sure to PUBLISH the subscription! Friends don’t let friends troubleshoot an Event Subscription that is a draft…

Wrap-Up

At this point you can begin testing your deployments, and verify that the workflow is firing at the appropriate time.

Additionally, it may make sense to build an XaaS service that allows you to customize appliance names and sizes, or change the default domain, VLAN, and so on.

Thanks for reading! If you have any questions feel free to ping me on Twitter.

If you’re curious as to the other available states you can use, check out the Life Cycle Extensibility – vRealize Automation 7.0 document, starting around page 33. Each release has a version of the document with the latest information, so find the one for your version.


A love letter to vRO’s LockingSystem

Oh LockingSystem, how do I love thee? Let me count the infinite ways.

The LockingSystem scriptable objects have been around since at least the vCO 5.5 days. In later revisions VMware put a few demo workflows to show how you could take advantage of it, but for the most part it’s not talked about too much. I’ll say this though – when you really start building complex workflows with many interlocking systems, LockingSystem saves lives.

A use case that comes almost immediately to mind is vRA provisioning a large number of machines at once. In this environment, we do not use the internal IPAM; instead, we use the Infoblox IPAM system and the Event Broker to allocate an IP in the PRE BuildingMachine phase.
In a situation where 20-30 VMs are requested for a build, you want to make sure there are no concurrency problems where multiple VMs get the same IP address allocated, right?

Thankfully, implementing the LockingSystem is extremely easy, but there are a few things to keep in mind to make sure it is implemented well.

The process you wish to lock should be 1 or more separate workflows
In the example above, I have created a separate workflow that calls Infoblox IPAM via the REST API. It takes a couple of inputs, and finds the next available IP on the correct subnet, allocates it, and then outputs the network information that I want for VM customization.

A high level workflow schema for LockingSystem.

Above is a very high-level schema of what a typical implementation of LockingSystem would look like.
Notice that if the nested workflow fails, you will want to ensure the lock is removed! You could alternatively bind the error to continue ahead without an exception, but handling the failure explicitly gives you more flexibility to recover gracefully.

For the first element in the use case detailed above, it could be as simple as this code:
LockingSystem.lockAndWait("vRA-IP-Allocation",workflow.id)

The first parameter in the lockAndWait method is an arbitrary name. This is a global value, so if you have other workflows that may call this same nested one, it is worth considering having a naming schema for your locks.
The second parameter is the current workflow token ID. I typically like to see which workflow token has the lock if I need to, but this is also a very easy way to lock a workflow without excessive binding of attributes. Alternatives to use would be the workflow runner’s account name if needed.

Similarly, it is simple to unlock the workflow so that the next process in line can proceed.
LockingSystem.unlock("vRA-IP-Allocation",workflow.id)
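If you keep the lock, the critical work, and the unlock inside a single Scriptable Task, one way to guarantee the unlock even when the nested call throws is a try/finally – a simple sketch:

LockingSystem.lockAndWait("vRA-IP-Allocation",workflow.id)
try {
  // ...call the IPAM allocation logic here...
} finally {
  // runs whether the allocation succeeded or failed
  LockingSystem.unlock("vRA-IP-Allocation",workflow.id)
}

In a schema-based layout, the error binding described above achieves the same effect.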

Adjusting LockingSystem concurrency
A single unique locking mechanism is nice, but what if you want to tune the concurrency throughput? This would help limit requests to an external system, plugin, or API so that more than one flow can run, but not overwhelm it. Thankfully the LockingSystem has methods that can help you do this, with a little bit of setup.

Pick a line, any line
Let’s say you wanted to be able to limit your number of locks to a maximum of 5, because the plugin or external system can get overwhelmed.
You can request a lock between 1 and 5, such as “vRA-Machine-Build-1” through “vRA-Machine-Build-5.” All you have to do is generate that suffix number, either randomly or through a controlled loop.

A simple random inclusive number function can be found below, and it is probably useful enough to make into a vRO Action for future use!
In this sample, your return value would be a number type, and your two inputs, min and max, would also be number types.
// Reference code from Mozilla
// https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/random
min = Math.ceil(min);
max = Math.floor(max);
return Math.floor(Math.random() * (max - min + 1)) + min

Then in your Locking scriptable task, you can call the function or action to return a value from 1 to 5 (or whatever values you want!) and append it.

var lane = System.getModule("us.nsfv.shared.locking").getRandomNumber(1,5)
LockingSystem.lockAndWait("vRA-Machine-Build-"+lane,workflow.id)

Find the first open lane
While the random inclusive number (or in this case, the “lane”) can solve the concurrency, you may want there to be more intelligence – i.e. have the LockingSystem grab the first available lock, regardless of which lane it is. There is a method simply called .lock() that returns true/false depending on whether you successfully acquire a lock of that value.
Thus, you can simply loop through the available locks and wait for one to be true.

for(var i = 1; i <= 5; i++) {
  var lockAcquired = LockingSystem.lock("vRA-Machine-Build-"+i, workflow.id);
  if(lockAcquired) {
    // lock acquired - output the lock ID to an attribute so it can be unlocked later.
    System.log("Lock "+i+" acquired!");
    var attributeLock = "vRA-Machine-Build-"+i;
    break;
  }
  // no luck getting this one - reset the loop if we've reached the end
  if(i == 5) {
    // brief pause so the sweep doesn't hammer the locking database
    System.sleep(5000);
    i = 0;
  }
}

When this loop runs, it essentially spins until a lock frees up, at which point it breaks out and continues with whatever process was locked, until you unlock it afterward.

The functions and scripts above are a good starting point in tuning your vRO workflow concurrency. I hope these help you as much as they have helped me out!

vSphere SSL and vCO Part 4 – Venafi API

In parts 1 through 3, we set up workflows to generate the OpenSSL configuration files, private key, and CSR for an ESXi host.

This portion will concentrate on creating REST API calls to Venafi, an enterprise system that can be used to issue and manage the lifecycle of PKI infrastructure. I find it pretty easy to work with, and know some colleagues who are interested in this capability, so I decided to blog it!

At a high level, this post will build workflows that do the following tasks in the API:

  • Acquire an authorization token
  • Send a new certificate request
  • Retrieve a signed certificate

Let’s get to it!

Setup REST Host and Operations for Venafi

Since there are three API calls above, there are thankfully three corresponding REST Operations to set up in vCO, along with the REST Host itself.

REST Host

Find the workflow under the HTTP-REST folder called Add a REST Host and execute it.

Setting up the Venafi REST Host.

Fill in the name and URL fields to match your environment.
The Venafi API endpoint ends with /vedsdk – don’t add a trailing slash!

For the Authentication section, choose Basic.
Then, for the credentials choose Shared Session, and your username and password.

If you haven’t already imported the SSL certificate for Venafi, you may be asked to do so for future REST operations.

REST Operation(s)

Find the workflow under the HTTP-REST folder called Add a REST Operation and execute it.

Setting up the Venafi REST Operation.

Choose the REST Host you just created above in the first field.
Set a name for the Operation – in this case I used Get Authorization.
For the Template URL, input /Authorize/ – remember the trailing slash!
Change the method to POST, and input application/json for the content type.

Submit the workflow, and the operation is now stored.

Repeat the above workflow, but swap out the Name and Template URL as needed with the below values. These will add the necessary operations to request and download signed certificates.

Name: Request Certificate
Template URL: /Certificates/Request

Name: Retrieve Certificate
Template URL: /Certificates/Retrieve

With the REST Host and Operations set up, let’s work on creating our basic workflows.

Workflow: Acquire your authorization token

There are a number of ways to tackle the creation of the workflow, with error handling and the like, so I will just focus on giving the code that is needed to construct the API call and send it out.

Create a new workflow.

In the workflow General tab, create an attribute named restOperation, of type REST:Operation. Click the value column, choose the Venafi operation you created earlier for /Authorize/, and click OK.

Click the Output tab and create a new parameter called api_key. This is the resulting authorization key that you can then pass to other workflows later.

Drag and drop a new Scriptable Task into the Schema, and bind the restOperation attribute on the IN tab, and the api_key output parameter on the OUT tab, like this:

Now, copy and paste this code into the Scripting tab.

// get the username/password from the Venafi REST Host.
var venafiUser = restOperation.host.authentication.getRawAuthProperty(1)
var venafiPass = restOperation.host.authentication.getRawAuthProperty(2)
// create Object with username/password to send to Venafi
var body = new Object()
body.Username = venafiUser
body.Password = venafiPass
// convert Object to JSON string for API request
var content = System.getModule("com.vmware.web.webview").objectToJson(body)

// create API request to Venafi
var inParamtersValues = [];
var request = restOperation.createRequest(inParamtersValues, content);
// set the request content type
request.contentType = "application\/json";
System.log("Request URL for Venafi Token: " + request.fullUrl);
// execute
var response = request.execute();
// handle response and output
if(response.statusCode != 200) {
	System.log("There was an error getting an API key. Verify username and password.")
} else {
	api_key = JSON.parse(response.contentAsString).APIKey
	System.log("Token received. Add header name \"X-Venafi-Api-Key\" with value \""+api_key+"\" to future workflows.")
}

Now you may be wondering, why am I pulling the Username and Password from the host in this way? Shouldn’t there be some input parameters?

While we did build the REST Host object to use Basic Authentication, Venafi actually doesn’t support it – you can only POST the credentials to the API with a simple object and get the API key back.

So this is just how I elected to store the username and password – you could set them as attributes in the workflow, as Input parameters, or link those attributes to ConfigurationElement values if you wanted to.
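If you go the ConfigurationElement route, here is a sketch of reading the values back – the category path and attribute names are hypothetical:

// look up a Configuration Element and pull the stored credentials from it
var category = Server.getConfigurationElementCategoryWithPath("Lab/Venafi")
for each(var element in category.configurationElements) {
  if(element.name == "credentials") {
    var venafiUser = element.getAttributeWithKey("username").value
    var venafiPass = element.getAttributeWithKey("password").value
  }
}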

Assuming your service account used for authentication is enabled for API access to Venafi, you should be able to run the workflow and see the log return a message with your API key!

And since it is an output parameter, you can bind that value to the next couple of workflows we are about to create.

Workflow: Submit a new Certificate Request for your ESXi host.

Create a new workflow.

In the workflow General tab, create an attribute named restOperation, of type REST:Operation. Click the value column and choose the Venafi operation you created earlier for /Certificates/Request and click OK.

Attributes for the Request Certificate workflow.

Click the Input tab and create 3 parameters:

  • api_key – type is String
  • inputHost – type is VC:HostSystem
  • inputCSR – type is String
Inputs for the Request Certificate workflow.

These values should be familiar for those who have been following along.
The first is the API key you created earlier, so you can pass the key into this workflow and avoid having to log in again. Each time you use the Venafi API key, its expiration time is extended, so you should be good to use a single API key for your entire workflow.

Click the Output tab and create a new parameter called outputDN. This value represents the Certificate object created in Venafi at the end of the workflow.

Output for the Request Certificate workflow.

Once again drop a new Scriptable Task into the Schema for the new workflow, and bind the restOperation attribute on the IN tab, along with the 3 other inputs api_key, inputHost, and inputCSR. On the OUT tab, bind the outputDN value into the task, so that your Visual Binding looks like this:

Visual Bindings for the Request Certificate workflow.

Now, it’s time to add the necessary code to make the magic happen.

var object = new Object()
object.PolicyDN = "\\VED\\Policy\\My_Folder"
object.CADN = "\\VED\\Policy\\My_CA_Template"
object.PKCS10 = inputCSR
object.ObjectName = inputHost.name
var content = System.getModule("com.vmware.web.webview").objectToJson(object)
//prepare request
var inParametersValues = [];
var request = restOperation.createRequest(inParametersValues, content);
//set the request content type
request.contentType = "application\/json";
System.log("Request URL for Certificate Request: " + request.fullUrl);

//Customize the request here with your API key
request.setHeader("X-Venafi-Api-Key", api_key);

//execute request
var response = request.execute();
// handle response and output
if(response.statusCode != 200) {
	System.log("There was an error requesting a certificate. The response was: "+response.contentAsString)
} else {
	outputDN = JSON.parse(response.contentAsString).CertificateDN
}

The only things in this code you will want to change are the PolicyDN and CADN values, as those are unique to each environment. Consult your Venafi admin, or look in the /VEDAdmin website running alongside Venafi, for the proper values.

Workflow: Retrieve the signed Certificate

So you’ve gotten this far, and were able to post a new certificate request. The last step is to download it and get ready to upload it to the ESXi host.

Create a new workflow.

In the workflow, create an attribute named restOperation, of type REST:Operation. Click the value field and choose the Venafi operation you created earlier for /Certificates/Retrieve and click OK.

Attributes for the Retrieve Certificate workflow.

Click the Input tab and create 2 parameters:

  • api_key – type is String
  • inputDN – type is String
Inputs for the Retrieve Certificate workflow.

The first value api_key is back once again to re-use the existing API key created before.
The second value is related to the outputDN from the previous workflow. You will be feeding this value into the new workflow so that it knows which certificate object to query and work with.

Click the Output tab and create a new parameter called outputCertificateData. This value represents the signed certificate file encoded in Base64.

Outputs for the Retrieve Certificate workflow.

Now, drop a new Scriptable Task into the Schema for the new workflow, and bind the restOperation attribute on the IN tab, along with the 2 other inputs api_key and inputDN. On the OUT tab, bind the outputCertificateData value into the task, so that your Visual Binding looks like this:

Visual Bindings for the Retrieve Certificate workflow.

Now to execute the API call to get the certificate, using the below snippet of code:

// setup request object
var object = new Object()
object.CertificateDN = inputDN
object.Format = "Base64"
var content = System.getModule("com.vmware.web.webview").objectToJson(object)

//prepare request
var inParametersValues = [];
var request = restOperation.createRequest(inParametersValues, content);
//set the request content type
request.contentType = "application\/json";
System.log("Request URL for Retrieving Signed Certificate: " + request.fullUrl);

//Customize the request here
request.setHeader("X-Venafi-Api-Key", api_key);

//execute request
var response = request.execute();

// deal with response data
if(response.statusCode == 200) {
	// the result is in Base64, so decode it and assign to output parameter.
	var certBase64 = JSON.parse(response.contentAsString).CertificateData
	outputCertificateData = System.getModule("us.nsfv.lab").decodeBase64(certBase64)
	// done
} else {
	System.debug("Unhandled Error with response. JSON response: "+response.contentAsString)
}

Now, once these workflows run in sequence, you should have a Base64 decoded string that looks like a signed certificate file would. We haven’t strung these flows together yet, so don’t panic that it isn’t working! It will all make sense, I promise!

In the next post, we will write that content to disk, and then the fun part: uploading it!

vSphere SSL and vCO Part 2 – Appliance Setup

The first step to getting this process to work is to realize that ultimately under the hood, vCO is a Linux appliance, with a lot of functionality not exposed, or not immediately obvious. There are a lot of tweaks you can make to really enable great functionality, and this process may give you other interesting ideas!

NOTE: The below steps assume vCO Appliance 5.5+

You’ll need to start out by using PuTTY to SSH into the appliance. If SSH is turned off, you’ll either need to use the console, or enable Administrator SSH from the VAMI interface.

Once logged in, change directory to /etc/vco/app-server, and then type vi vmo.properties to open the file in a text editor.

Logging into vCO with SSH.

Inside you will want to see if you have a line that looks like this:

com.vmware.js.allow-local-process=true

If it doesn’t exist, press i to change to Insert mode, and then add a new line and put it in there. Once done, press ESC to exit Insert mode, and type :wq to write the file and quit the editor.

When vCO Server is restarted, you will be able to execute Linux commands against the VM from within your workflows. The catch is, you have to make sure that the vco account on the appliance has the ability to execute it.

To enable this, type vi js-io-rights.conf from the shell. This file may already exist and have some data in it. If not, you get to define explicit rights at this point. Here’s mine for reference:

Example JS-IO-RIGHTS.CONF file.

Add the lines below to the file by pressing i to enter Insert mode again, with each line corresponding to a specific executable on the appliance. The +rx prefix grants read and execute rights to the vco user.

+rx /usr/bin/openssl
+rx /usr/bin/curl

Press ESC, then :wq to save the file and exit.

With these tweaks enabled, you will need to restart vCO Server. You can do this a number of ways, but since you’re in the shell this is the fastest:

service vco-server restart

Now, you will be able to execute these commands in a workflow when you use the Command scripting object, which will run the command and return the standard output back as a variable, as well as the execution result, such as success or failure!

With that in mind, let’s do a quick experiment in a workflow to ensure it works as intended.

Proof of Concept Workflow Creation

  • Create a new Workflow as you normally would. No inputs are necessarily required for this test as we will get into those values in later posts.
  • Drag and drop a fresh Scriptable Task into the schema, and edit it.
  • Paste the code below into the scripting tab:
// Creates new Command object, with the command to run as your argument
var myCommand = new Command("/usr/bin/openssl version")
// Executes the command
myCommand.execute(true)
// Outputs the result code, and the output of the command
System.log("Exit Code: "+myCommand.result)
System.log("Output: "+myCommand.output)

Close the Scriptable Task, run the workflow and check the log – you should see something like this:

OpenSSL version running on the appliance.

If you were to type the same command in the shell, the result would be the same. So while we’re here, let’s update the code in the task to verify cURL also works. Change line 2 in the task to look like this (note the case-sensitive argument!):

var myCommand = new Command("/usr/bin/curl -V")
cURL version running on the appliance.

You’ll probably note that the OpenSSL version installed on the VCO appliance is the same one that is required by VMware for the entire SSL implementation in the 5.x release! With this working, now we can do some really cool stuff.

In the next post, we will build out the workflow that will create the private key and CSR for any and all of your ESXi hosts! This same flow can be used as the basis for vCenter SSL, vROPS, or even vCO itself!

vSphere SSL and vCO Part 1 – Thoughts

One of the biggest pains with vSphere is generating and replacing the SSL certificates with your own signed ones, at least in 5.1 and 5.5.

No doubt most of us have read Derek Seaman’s brilliant series a couple of years ago regarding how to get this to work correctly, both in vSphere 5.1 and vSphere 5.5. All the while, we were wishing for a better tool to manage this portion of the environment. Depending on your security department, you may not have a choice!

But then, there was the VMware-sanctioned Certificate Automation Tool! And LO, it was a batch file that was cleverly done.

Now, we have the VMware VMCA in the Platform Services Controller. Looks pretty good – if you’re able to run vSphere 6.0, your security team allows you to create a subordinate CA for integration, and all of your hypervisors are running version 6.0 or higher.

But in the real world it’s more complicated than that. 6.0 has had quite a few issues and I think many are still shy about upgrading, though that is changing.

In the meantime, we have to make do and in my opinion, all of this should have been offloaded to vCO in the 5.x releases.
Let’s consider why for a second.

  • vCO is pretty tightly connected to vCenter and the vSphere API – just about anything you can generally automate you can do in vCO.
  • There are also peripheral capabilities such as SOAP, SSH, and general HTTPS requests courtesy of the Axis2 engine running underneath.
  • Also, the appliance is Linux, with a plethora of tools that are built in, such as sed or awk, and as we will see later, openssl and curl.

A well-constructed set of workflows that handled this functionality and integrated with a typical Microsoft CA could easily have been a great showcase for the product.

In this series I will present workflows that will:

  • Handle creation of the various components using OpenSSL running from within the vCO appliance, focusing on the ESXi host.
  • Use the HTTP-REST plugin to get a signed certificate – this is made possible primarily due to a tool called Venafi Trust Platform, a system that can integrate with many external PKI infrastructures and manage their lifecycle.
  • Use the vCO appliance to upload the signed certificates and update the vCenter database with the new values.

Hopefully you find it useful and insightful. Once the series is finished, I will release a package of workflows and logic I use for my use case, probably on FlowGrab.

Working with Customization Specifications in vCO

I recently had a minor project to assist with refreshing a deployment of hundreds of remote backup servers, which were moving from a standard Windows 2008 R2 image to 2012 R2. The process was going to leverage some custom workflows I had written using the Deploy OVF functionality in vCO.

A pretty straightforward setup, so I asked how they planned to do the customization for each site. The answer was essentially: “We’re going to do the customization manually like always.”

#recordscratch

I checked into it, and sure enough that had always been the process. That process included fat fingering the host names, inputting incorrect IP address information, not installing the software properly, not updating the Infoblox IPAM system, the list goes on and on.

I’m an automation freak for a number of reasons but most of all it seemed like this was a perfect use case to leverage vCO capabilities I knew were possible out of the box, and also do some digging on a few parts I didn’t know about.

The Ingredients

First, I did some REST API testing and querying of our internal Infoblox IPAM system to ensure I could get the correct or available IP addresses on the pertinent location and VLAN programmatically.

Second, I had to build a workflow that created the AD Computer object in advance in a particular OU, so that the company’s GPO would take effect immediately upon joining the machine to the domain. The interesting problem to solve here was using a particular service account to perform the work, rather than the default AD Host.

Finally, I had the team put together a Windows 2012 template that was not sysprepped, so that I could use vCenter’s Customization Specifications to do that for me – along with executing first-boot scripts, configuring the network, joining the domain, and the like. This meant figuring out how to use the Customization Spec APIs programmatically, which I wasn’t familiar with.

1/4 Cup REST API – Getting the correct Network CIDR

The first piece I worked on – querying the Infoblox REST API for the relevant networks – was fairly simple once I learned the syntax.
Each remote location had specific Extensible Attributes that contained the VLAN ID and the location code. The location code corresponded to the Datacenter object in our vCenter Server, so creating a correlation got a whole lot simpler.

Create a REST Host using the default Add a REST Host workflow, substituting your IPAM server URL and credentials appropriately.

Configuring IPAM REST Host.

Once that’s created, run the Add a REST Operation workflow to add the actual REST API URL to the host you just created, with a couple of parameters that can be filled in at runtime. This will be a GET operation.

For the URL Template field, you will want to put this:
/wapi/v1.6/network?_return_fields%2B=network&*SiteNumber~:={MY_LOCATION_ID}&*VLAN={VLAN_ID}

This URL may be a bit of a puzzle, so I’ll break it down:
/wapi/v1.6/network – the base portion, indicating we are querying available network objects in the API.
?_return_fields%2B=network – this indicates what values we want returned, rather than a bunch of values. In this case, I just wanted the CIDR value.
&*SiteNumber~:={MY_LOCATION_ID} – The ampersand is a parameter of the query, and the asterisk preceding “SiteNumber” indicates an Extensible Attribute inside the IPAM database. Finally the ~:= indicates a Regular Expression search.
&*VLAN={VLAN_ID} – another parameter of the query, using the VLAN Extensible Attribute.

NOTE: The Extensible Attributes used here were unique to this installation – you will have to create your own or ask your Network team for the attributes they use!

Of note, there is a pretty great example page full of Infoblox IPAM API calls you can find here: https://community.infoblox.com/t5/API-Integration/The-definitive-list-of-REST-examples/td-p/1214

Upon executing the Invoke a REST Operation workflow, you can plug in the values for MY_LOCATION_ID and VLAN_ID and assuming it exists, you will get a JSON string back with the network CIDR that can be used for further parsing. In my case, I only needed to replace the last octet with a specific value that was going to be used at all locations, as well as the gateway.
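To give you an idea of that parsing step, here is a rough sketch of pulling the CIDR out of the response and swapping in fixed host and gateway octets – the response variable and octet values are illustrative:

// the API returns an array of network objects; grab the CIDR from the first hit
var results = JSON.parse(response.contentAsString)
var cidr = results[0].network // e.g. "10.20.30.0/24"
var firstThree = cidr.split("/")[0].split(".").slice(0,3).join(".")
var hostIP = firstThree + ".50" // fixed host octet used at every location
var gateway = firstThree + ".1" // fixed gateway octet
System.log("Host IP: "+hostIP+", Gateway: "+gateway)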

Now, how to connect to AD using a specific service account rather than the default?

1/4 Cup Active Directory – Connecting with a specific service account

I would start by getting version 2.0 or higher of the plugin, as even up to vRO 6.0.1 didn’t ship with it.
You can download it from this link: https://my.vmware.com/group/vmware/get-download?downloadGroup=VRO_AD_PLUGIN_200

This version enables multiple AD connections which is required for this use case. If you’re lucky enough to be on vRO 7.0 or higher, you’re good!

Run the Add an Active Directory Server workflow and configure it to use a Shared Session with your service account.

With that done, create a Scriptable Task in a workflow that has a single output binding, called adHost, and then copy in the following code:

var allAD = AD_HostManager.findAllHosts()
for each(ad in allAD) {
 if(ad.hostConfiguration.sharedUserName == "LAB\\service_account") {
   adHost = AD_HostManager.findHost(ad.hostConfiguration.id)
   System.debug("AD Connection ID: "+ad.hostConfiguration.id)
 }
}

In this code, we’re essentially getting a list of all available AD Host connections, then looping through them to find the one assigned to our particular service account, and saving it to an attribute named adHost. This attribute can then be bound to another workflow or element to create the AD Computer Account, along with all other methods available in the plugin.

1/2 Cup Customization Spec Manager API – A Journey

This part of the journey was fairly new to me. I hadn’t tried using the API to tinker with Customization Specs before so it was an interesting journey, and I learned both the hard and subsequent easy way of accomplishing my task.

My initial thought on this was to create a specification for each location with the NIC1 interface configured using the IP settings gathered from the REST API call. After some tinkering and testing, I was able to create a Scriptable Task that did just that. Running my workflow to create a pile of hundreds of Customization Specs all named and IP’d properly was admittedly pretty cool to me. But, when I attempted to clone one of my test templates with the spec, it never completed. The reason?

The vCenter public key error.

As it turns out, when you create a Customization Spec, the object is encrypted using the vCenter public key. This protects the stored credentials for the local Administrator account, as well as the account used to join the system to the domain. Digging into the API Explorer shows that you can set a clearText property to true to bypass this, but it doesn’t help, as it seems the whole object is encrypted. Of note, you also get this error pop-up if you export a Customization Spec from one vCenter to another.

But once you re-input those credentials and save the Specification, the cloning works as expected. So, can I modify the Customization Spec during workflow runtime? Turns out, that is the best way to approach this problem.

In vCO there is no data type that corresponds to a Customization Spec, so to pass it around, you’ll need to use the wonderful Any type.

To begin, create a Scriptable Task with inputs for the IP Address, Subnet Mask, and Gateway variables, and a single output of type Any to hold the new version of the Customization Spec.

To get a Customization Spec into a variable you can manipulate, you can use the following bit of code using a sample input type of VC:Datacenter, named inputDC.

var customizationSpec = inputDC.sdkConnection.customizationSpecManager.getCustomizationSpec("My_Customization_Spec")

Now, we’ll populate the Default Gateway with the one you input, either from Infoblox IPAM, or through a normal string input.

// gateway - the input type must be an array of a single value.
var staticGW = new Array()
staticGW.push(inputGW)
customizationSpec.spec.nicSettingMap[0].adapter.gateway = staticGW

Here, we declare a Fixed IP type, and set it to a value from your Infoblox IPAM query, or string input.

// Static IP - input type must be declared
var staticIP = new VcCustomizationFixedIp()
staticIP.ipAddress = inputIP
customizationSpec.spec.nicSettingMap[0].adapter.ip = staticIP

Finally, a simple string assignment for the subnet mask.

// Subnet Mask - can simply be a string value
customizationSpec.spec.nicSettingMap[0].adapter.subnetMask = inputNetmask

With the adjustments to the spec made, assign the modified specification to the output attribute.

// assign the updated spec to the output
outSpec = customizationSpec.spec

With that done, you can now use the CloneVM_Task API to create a clone of the VM template with the specification you just created. You’ll need to make sure you have a way of applying the target host, resource pool, datastore, as well as the Customization Spec to ensure this is successful.
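As a rough sketch of that final step – the vmTemplate, targetFolder, targetHost, targetPool, and targetDatastore attributes here are assumptions you would bind yourself:

// build the relocation portion of the clone: host, resource pool, and datastore
var relocateSpec = new VcVirtualMachineRelocateSpec()
relocateSpec.host = targetHost.reference
relocateSpec.pool = targetPool.reference
relocateSpec.datastore = targetDatastore.reference
// attach the modified Customization Spec from the previous task
var cloneSpec = new VcVirtualMachineCloneSpec()
cloneSpec.location = relocateSpec
cloneSpec.customization = outSpec
cloneSpec.powerOn = true
cloneSpec.template = false
// kick off the clone - this returns a task you can monitor
var task = vmTemplate.cloneVM_Task(targetFolder, "my-new-server", cloneSpec)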

It is worth noting that you did not actually modify the Customization Spec in vCenter directly – you just grabbed it, messed with it and sent it on its way! This makes the Specification useful for any environment, any location, any vCenter!

I hope you found this useful. If you have any questions or thoughts, hit me on Twitter or email!

Deploying OVF Templates with VCO

A feature that I have ended up using a lot lately, especially in delegating large deployments through workflows, is the vCenter Plugin’s ImportOVF feature.

No workflows come with the appliance that handle deploying OVFs, as there are a lot of variables involved. But once you know how to construct the request, it’s really helpful to have handy!

To call the importOVF method, these are the arguments / types you will need to provide:

ovfUri (string) : This is the URI to the .OVF descriptor file. It can be either a valid HTTP link, or if you have a mounted share, it could be a local FILE:// location.
hostSystem (VC:HostSystem) : You’ll have to specify the ESXi target host with this argument.
importFolderName (string) : You can specify a VM folder name here that already exists – if you don’t have one, you should put in “Discovered virtual machine,” as it should always exist by default.
vmName (string) : This argument is the VM name as it will exist on the target host.
networks (array of VcOvfNetworkMapping): This argument is to map each virtual NIC interface from its source portgroup to the target portgroup.
datastore (VC:Datastore) : The datastore where the VM will reside.
props (array of VcKeyValue) : This is an optional parameter that can be used to provide inputs for the OVF settings, if they are being used.

Pointing to the OVF Files on a local share

It’s safe to say that most VCO appliances or installations won’t have direct internet access to pull OVFs from the Solution Exchange.
If you have mounted a CIFS share to the VCO appliance, you can access the OVF files there. The path requires additional escape slashes due to how VCO reads the attribute, so it would look something like this:

file:////mnt//cifsShare//ovf_templates//MySuperCoolApp//MySuperCoolApp.ovf

In this example, I have a CIFS share mounted on the appliance at mount point /mnt/cifsShare.
You will want to make sure that you set appropriate read/execute permissions in the js-io-rights.conf file to this location.
If you have no idea what that means, go HERE.

Setting up the OVF Network Mappings

The only thing that needs a little explanation on using this method is the VcOvfNetworkMapping portion of things.
If you think about it, during the deployment of an OVF in the vSphere Client, there is a window that asks you to map between the source and destination networks.

The vSphere Client OVF Network Mapping menu.

The ‘source’ network is whatever the portgroup was called when it was exported. You just need to map it to the destination portgroup on the target host. This requires you to know both sides so you can code for it, if you need to.

Below is some sample code you can use in a Scriptable Task to construct the network mapping values you need to execute the method.

// setup OVF to move the 'source_network' network to 'target_network'
var myVcOvfNetworkMapping = new VcOvfNetworkMapping() ;
myVcOvfNetworkMapping.name = "source_network"
// check for 'target_network' Portgroup on the ESX host bound to the task
for each(net in inputHost.network) {
  if(net.name == "target_network") {
    myVcOvfNetworkMapping.network = net
    break
  }
}
// create empty array of Network Mappings
var ovfNets = []
// add the mapping to the array
ovfNets.push(myVcOvfNetworkMapping)

If you have multiple NICs on different networks, simply repeat the steps above and adjust the mappings.

Optional: OVF Properties

If you are deploying OVFs that utilize OVF properties, you can create an object to specify them.
To ensure you get the right values, you may want to deploy an example OVF, and then check the OVF environment variables in the VM settings.

Here’s an example:

OVF Properties for a VM.

Given the above, notice the PropertySection which has the values you want. Here’s a bit of example code to populate these values:

// Create empty array of key/value pairs
var keys = []
// Create new key/value for OVF property
var myOVFValue01 = new VcKeyValue() ;
myOVFValue01.key = "OVF.OVF_Network_Settings.Network"
myOVFValue01.value = "10.10.10.5"
// Create new key/value for OVF property (that totally doesn't exist in the shot above, but you get it)
var myOVFValue02 = new VcKeyValue();
myOVFValue02.key = "OVF.OVF_Network_Settings.Netmask"
myOVFValue02.value = "255.255.255.0"
// add both of them to the array
keys.push(myOVFValue01, myOVFValue02)

Add the array to your method call and when the VM boots, depending on the underlying machine, it could auto configure! If you integrated this capability with an IP Management system, you could automate provisioning of OVFs top to bottom!
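Putting it all together, a minimal sketch of the actual call, using the argument names from the list above plus the ovfNets and keys arrays built in the earlier snippets:

// importOvf returns the newly created VC:VirtualMachine object
var importedVM = VcPlugin.importOvf(ovfUri, hostSystem, importFolderName, vmName, ovfNets, datastore, keys)
System.log("Imported VM: "+importedVM.name)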

Troubleshooting the importOvf workflow
The VcPlugin.importOvf() method doesn’t give much back when things go wrong. It is recommended to enable DEBUG level logging on the Configurator page for VCO while testing so you can narrow down the cause. The call shows up in the logs like this:
[WrappedJavaMethod] Calling method : public com.vmware.vmo.plugin.vi4.model.VimVirtualMachine com.vmware.vmo.plugin.vi4.VMwareInfrastructure.importOvf(java.lang.String,com.vmware.vmo.plugin.vi4.model.VimHostSystem,java.lang.String,java.lang.String,com.vmware.vim.vi4.OvfNetworkMapping[],com.vmware.vmo.plugin.vi4.model.VimDatastore,com.vmware.vim.vi4.KeyValue[]) throws java.lang.Exception

The most likely cause of errors in the method is either a missing portgroup or an incorrect key value. Since just about everything here is strings, you need to make sure the portgroup exists on the target host and that the name matches exactly.

Thanks for reading!

Workflowing with VUM and VCO – The Basics

If you’re using VCO/VRO as your automation engine, sooner or later you will probably add the vSphere Update Manager plugin.
Makes sense, right? The opportunity to schedule or delegate the ability to update and patch your hosts is pretty compelling.

It’s unfortunate that getting up and running with VUM isn’t quite as simple as it looks on the surface. In this particular case, PowerCLI wins for its ease of integration.
Hopefully, this post helps turn that tide back.

It’s a little bit behind

The installation of the plugin is just like any other – upload it from the Configuration interface on port 8283. I don’t think it really even needs a reboot.
From there, you have to punch in your vCenter URL – which may not be obvious to many as there is little help or documentation.

So just to be clear, add the URL like this : 
https://myvcenterserver.lab.local/sdk

You can also add others to the list in a similar way if you have multiple vCenters in the same authentication scope.

Next up, check your plugin status within the VCO client. Inevitably you will run into an error that simply says ERROR IN PLUGIN.
Unless you are the white knight of IT, this isn’t too helpful.
If you see this, I’m willing to bet that it’s because you didn’t import the SSL certificate to the trusted store.
How would you know to do that? You wouldn’t, unless you like staring at logs set to DEBUG level!

So, how do I import the certificate?
Easy – just point VCO to the SOAP endpoint of your VUM box. You can get the service port piece of this information from the Admin section of the Update Manager plugin through the vSphere Client. You can do this through the Configurator page too, but since Configurator is going away, this is probably the best way.

Locating the SOAP port for VUM.

Now, you can run a workflow to import the VUM certificate into the SSL Trust Store.
You can find the builtin workflow in Library->Configuration->SSL Trust Manager.
Choose to run the Import a certificate from URL workflow.

The Import Certificate workflow.

Execute the workflow, and for the input, use the FQDN of the server running the VUM service with port 8084 appended, like you saw earlier (for example, vum.lab.local:8084 – the hostname here is illustrative).
The FQDN portion is important! If you don’t put it in there, you will likely have an error.

Importing the VUM SSL Certificate.

Once the certificates are imported, relaunch your VCO client. After you login, you should see some progress being made.

It’s ALIVE!

That wasn’t so hard

So next up, you just bind a HostSystem value to a workflow that remediates and you’re good right?
Unfortunately not quite yet. But we’ll get there!

VUM uses a completely different service and object model unrelated to vCenter, thus direct integration is not as simple. Out of the box, you have to write your own code and do some heavy debugging in the logs.

Connecting the dots

The first thing we will do is make a simple Action element that takes an input of type VC:HostSystem and extracts the necessary info out of it to create its VUM-equivalent object.

Digging into the API Explorer, look for the VUM:VIInventory Scriptable Action type. This will tell you what you need in order to construct the corresponding object.

The VUM:VIInventory Type.

Thankfully, this is a pretty simple type to instantiate – it only requires 4 arguments to construct.
VumVIInventory(string): The string to input here is the vCenter Server.
id: This is the “vSphere Object ID” – essentially the Managed Object Reference ID.
name: This is the host object name in vCenter, whether it’s the FQDN or if you have it connected by IP. (Shame on you!)
type: This is the asset type. Keep in mind, VUM has the ability to patch Virtual Appliances too, so specifying this is required.

So, let’s get some code in place to make the VC -> VUM conversion happen! Create a new Action, and use the below setup as your guide.

Setting up the conversion VCO Action.

Wait, why is the return type an Array?
The VUM plugin expects an array of VUM:VIInventory, even if that means it is an array of one object.

Here’s the Action code to use, with some notes.

// create the new VUM Inventory object and assign values.
// first define a new instance of the object, pointing to the vCenter Server of the host
var vumItem = new VumVIInventory(inputHostSystem.sdkConnection.name);

// get the Managed Object Reference
vumItem.id = inputHostSystem.reference.value;
// get the Managed Object Type
vumItem.type = inputHostSystem.vimType;
// get the ESXi Host name
vumItem.name = inputHostSystem.name;

// the VUMVIInventory object must be an array, so create a dummy one and push the host value in
var vumHosts = [];
vumHosts.push(vumItem);

// return the array of VUM objects (even if it is just one)
return vumHosts;
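
If you save this as an Action, calling it from a Scriptable Task is a one-liner. Here’s a minimal sketch, assuming the Action is named createVumInventory and lives in a hypothetical module called com.yourcompany.vum:

// hypothetical module and Action names - adjust to wherever you saved yours
// 'myHost' is a VC:HostSystem object from your workflow bindings
var vumHosts = System.getModule("com.yourcompany.vum").createVumInventory(myHost)
System.log("Converted "+vumHosts.length+" host(s) into VUM inventory objects.")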

With this new instance of a VUM:VIInventory type, you can bind it to a Remediate Host workflow as you normally would for your patching pleasure, right?
Theoretically, yes. But you may want to check something else before celebrating.

java.lang.NullPointerException -or- Lessons in Error-Checking

One thing that you will want to verify prior to attempting to remediate using the VCO plugin is whether or not a set of VUM Baselines is attached.
If no baselines are attached or specified, your Remediate workflow will error out and throw a generally unhelpful message.
Here’s how you can check for, attach, and bind the necessary data to the ESX host you want to remediate.

There is an Action as part of the plugin that will attach Baselines to a host object, but you have to tell it which ones you want. Below is sample code you can use in a Scriptable Task (or Action) to output a list of Baselines in your vCenter Server. Since you may have specific Baselines you wish to use, you’ll have to modify it to your liking – but it should be pretty easy.

The input binding on this task can be any object – for this example, it is the ESX host.

// create a search query for VUM baselines whose name begins with "SuperCool"

// scope the baseline search to the VM Host's vCenter Server
var searchSpec = new VumBaselineSearchSpec(inHost.sdkConnection.name)
// define regex to search baselines
var expression = "^SuperCool"

// get the list of baselines
var baselineList = VumObjectManager.getFilteredBaselines(searchSpec)
// VumBaseline must be an array, so make a dummy one
var baselineArray = []

// Loop through the findings and if they match, add to the baseline list.
for each(baseline in baselineList) {
  if(baseline.name.match(expression)) {
    System.log("Baseline: "+baseline.name)
    baselineArray.push(baseline)
  }
}

// assign the array values to the output binding, which is 'vumBaselines'
// if this were an Action, you would just return 'baselineArray'
vumBaselines = baselineArray
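
One gotcha worth noting: the string handed to match() is treated as a regular expression, so special characters in your baseline names (parentheses, dots, and so on) need escaping. If you’d also like the match to be case-insensitive, a RegExp object does the trick – a small sketch using the same hypothetical "SuperCool" prefix:

// case-insensitive variant of the filter expression above; the rest of the loop is unchanged
var expression = new RegExp("^supercool", "i")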

So after this, you’ll have your hosts to Remediate, and the Baselines you wish to use for the Remediation. Simply bind these arrays to the inputs of the Remediate workflow, and you’re off to the races!

As an aside, I’m hopeful that the next iteration of the plugin, along with the 6.x release that integrates VUM into the VCSA, will make life simpler for us Orchestrator types. We will see!

VMware VSA Deepdive – Part 4 – Shutting Down VSA (SSH Edition)

The first way I elected to try to forcibly shut down the VSA VM was to do everything through the ESXi command line via SSH. ESXCLI itself is not implemented in VCO directly, so this will require some good old-fashioned text parsing with AWK.

Enabling SSH on the host through a VCO Action

Unfortunately out of the box, there is no workflow/action that manages ESXi services, so I needed to roll my own.
Below is the Action setup and script code I used to check for the SSH service and start it up, given an input of type VC:HostSystem.
Create the Action, and name it startSSHService. There is no return value necessary on this Action.
Setting up the SSH Service Action

// get the list of services
var hostServices = inputHost.configManager.serviceSystem.serviceInfo.service
var sshService = null
// loop the services, find the SSH service.
for each(svc in hostServices) {
  if(svc.key == "TSM-SSH") {
    sshService = svc
    System.log("Found SSH Service on host ["+inputHost.name+"]")
    break
  }
}
if(sshService == null) {
  throw "Couldn't find SSH service on ["+inputHost.name+"]!"
}

// Enable the service
try {
  inputHost.configManager.serviceSystem.startService(sshService.key)
} catch(err) {
  System.log("ERROR--Couldn't start SSH service. Error Text: "+err)
}
// the end

So, once you have SSH started on your ESXi host, you can send commands through VCO to do what you need.

SSH Service Check Action

For a more robust workflow, you will probably want an Action that checks whether the service is running and returns a boolean value. That way you can build a bit more logic into the flow.
The setup for the ‘check’ Action is the same, with the exception of the return value being a boolean.

Setting up the Check SSH Action.

The code is similar as well, just doing a simple check at the end.

var hostServices = inputHost.config.service.service
var sshSvc = null
// loop the services, find the SSH service by its label.
for each(svc in hostServices) {
  // System.log("Service: "+svc.label+", Status is: "+svc.running)
  if(svc.label == "SSH") {
    sshSvc = svc
    break
  }
}

// guard against a missing service so we don't hit a null reference below
if(sshSvc == null) {
  throw "Couldn't find SSH service on ["+inputHost.name+"]!"
}

// check status, return true/false
return (sshSvc.running == true)
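
Putting the two Actions together gives you a handy guard before doing any SSH work. Here’s a rough sketch, assuming both Actions live in a hypothetical module named com.yourcompany.esx and the check Action is called isSSHServiceRunning:

// module and Action names here are illustrative - adjust to your own setup
var esxModule = System.getModule("com.yourcompany.esx")
if(!esxModule.isSSHServiceRunning(inputHost)) {
  esxModule.startSSHService(inputHost)
  // give the service a few seconds to come up, then verify
  System.sleep(5000)
  if(!esxModule.isSSHServiceRunning(inputHost)) {
    throw "SSH service failed to start on ["+inputHost.name+"]"
  }
}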

Where’s my Burrit-VSA?

Before you power off the VSA VM, you’ll want to make sure to vMotion your other guests to another node, or have a foolproof way of finding the VSA appliance on your host. Another Action to the rescue! Given an input ESXi host, this Action will query the VMs running on the host and check their tags to see if one matches a specific value found on all VSA appliances. Note that these Tags actually live in the vCenter database, and are not the Inventory Service Tags found in the vSphere Web Client.

For purposes of this post, I’ll name the action getVSAonHost.

Setting up the VSA finder Action.

// for when you absolutely, positively need to make sure it's a VSA.
var vsaKey = "SYSTEM/COM.VMWARE.VIM.SVA"
// check the VMs on the host for the tag through a loop
for each(vm in inputHost.vm) {
  if(vm.tag) {
    for each(tag in vm.tag) {
      if(tag.key == vsaKey) {
        // found it - the return exits the Action, so no break is needed
        return vm
      }
    }
  }
}
// no VSA appliance found on this host
return null

So now, you know you have the VM in question. You can then pass the VirtualMachine’s name property to your SSH command later.

Making a SSHandwich

With the ESXi host and the VSA VM in hand, you can execute the built-in Run SSH Command workflow to do the final step.

Here’s the SSH command to send, which will find the VSA VM ID and power it off in one line, no questions asked:

VMID=$(vim-cmd vmsvc/getallvms | grep -i <VSA Name> | awk '{ print $1 }') && vim-cmd vmsvc/power.off $VMID

Begin by creating a new workflow, and create a single input parameter named inputHost, of type VC:HostSystem.
Then create three attributes in the General tab, naming them sshCommand, hostSystemName, and vmVSAName, all of type string.
Finally, create another attribute called vsaAppliance of type VC:VirtualMachine for use with the Action.

Next, drop your getVSAonHost Action into the schema, and bind the actionResult to vsaAppliance as seen below.

Binding Actions to the getVSAonHost Action.

Next, drop a Scriptable Task into the Schema and bind inputHost and vsaAppliance on the IN tab. On the OUT tab, bind the attributes of hostSystemName and vmVSAName. We are effectively going to write a small blurb of code that hands off the name properties of the input objects to the output attributes for use later, along with creating the SSH command string.

Binding values to the Scriptable Task.

In the Scripting Tab, we’ll use a few simple lines of code to perform the handoff of values.

// assign values to output attributes
hostSystemName = inputHost.name
vmVSAName = vsaAppliance.name
// create SSH command string using the input values
sshCommand = "VMID=$(vim-cmd vmsvc/getallvms | grep -i "+vmVSAName+" | awk \'{ print $1 }\') && vim-cmd vmsvc/power.off $VMID"

Finally, drop a copy of the Run SSH Command workflow into the schema. There are a lot of inputs here, so you will have to do some more complicated bindings. You can either force them to be input each time the workflow is run, set a static value, or bind to existing attributes.

Here’s what it looks like by default.

Setup for the SSH Workflow.

How you approach this part is largely up to you, but here is how I did it for this example.

The updated SSH Workflow Setup.

You’ll notice that for the username/password, I set a static value of root, set passwordAuthentication to Yes, and changed the initial hostNameOrIP and cmd values to the attributes we created earlier. For the outputs, I created new local attributes to hold the results.

Run the workflow and you should see that VSA go down!

As an aside, if HA is enabled on your VSA HA Cluster object, it will immediately attempt to restart the machine – so make sure you build the ability to disable HA into your parent workflows first so that this doesn’t become a problem.
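
If you’d rather handle that in code, the vCenter plugin exposes the cluster reconfigure method. Below is a rough sketch rather than a drop-in solution – the cluster input and the task-wait call are assumptions you’d adapt to your environment:

// sketch: disable HA on the cluster before powering off the VSA
// 'cluster' is assumed to be an input of type VC:ClusterComputeResource
var spec = new VcClusterConfigSpecEx()
spec.dasConfig = new VcClusterDasConfigInfo()
spec.dasConfig.enabled = false
// reconfigure the cluster, modifying only the fields set in the spec
var task = cluster.reconfigureComputeResource_Task(spec, true)
// wait for the task using the built-in library action
System.getModule("com.vmware.library.vc.basic").vim3WaitTaskEnd(task, true, 2)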

Sweet Reprieve

It’s a bit crazy how much effort it takes to work around a disabled method, but it does work. I don’t think it’s particularly great, and if you’re the only one who cares about the systems and don’t have audits, this may be enough.

But for my purposes this was just the first step of the journey. I did not end up actually using this approach, but figured it would be a good exercise to document the process.

The next post will take it in a different direction altogether, and a little bit closer to an API based solution that can also be audited.