Category Archives: SolidFire

SFCollector version .5 is now available

I’ve had some time to make extensive improvements to the SFCollector over the past couple of weeks. The major changes for the v.5 release include
  • Extensively reworked dashboards that now cover ESXi as well as SolidFire components. These dashboards are also now published on grafana.com
  • Added more metrics to the collections
  • Updated to Trident 18.01 for persistent storage
  • Tons of other small under the covers changes
Everything you need to get rolling is available at github.com/jedimt/sfcollector so head on over and take it for a spin!

Updates to the sfcollector

I’ve made some updates to the sfcollector to help improve the accuracy of graphs and make the delivery of stats more timely. If you have the prior version deployed, consider this a mandatory upgrade 🙂 It has also been tested against the new NetApp HCI product.

Future work will include improving the collector for scale, as well as moving to a new metric set that steps away from point-in-time measurements and should be more representative of the actual IO demands on the system.

Head on over to https://github.com/jedimt/sfcollector to pick up the new version!

SolidFire stats collection w/Grafana and Graphite

For the past two weeks I’ve been putting together a completely container-based Grafana+Graphite dashboard for SolidFire, and I now have something I’m ready to call an early beta.

At a high level, there are three containers built using docker-compose that make up the application stack. 
  1. SFCollector -> runs the actual Python collection script that makes API calls to one or more SolidFire clusters to collect metrics, parse them and then push them into the Graphite container.
  2. Graphite -> stores the time series data pushed from the collector
  3. Grafana -> graphs the data from Graphite



If you are a SolidFire customer and you want to give it a shot, head over to https://github.com/jedimt/sfcollector, clone the repo and let me know what you think. There is a detailed install PDF included in the repo, but I’ve also cut a quick video of me doing an install. 
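If you just want a feel for the moving parts before digging into the PDF, the bring-up from a PowerShell prompt looks roughly like this. This is a sketch, not the official procedure — the exact compose file layout and the collector configuration you need to edit (cluster MVIP and credentials) are covered in the install doc and may differ between releases.

git clone https://github.com/jedimt/sfcollector.git
Set-Location sfcollector
# Edit the collector configuration for your cluster(s) per the install PDF, then:
docker-compose up -d    # builds and starts the collector, Graphite and Grafana containers
docker-compose ps       # verify all three containers are running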

Please note, this IS a beta and there are some rough edges that need to be smoothed out. 

vSphere Replication w/SolidFire VVols

I just realized I had written up a quick ‘how to’ on configuring vSphere replication 6 with SolidFire VVols and then forgot to ever publish it. So yeah… without further ado…

Initial Install and Configuration for vSphere Replication

This link has a fairly detailed walkthrough for deploying the vSphere replication appliances.

http://www.settlersoman.com/how-to-install-and-configure-vmware-vsphere-replication-hypervisor-based-replication-6-0/

If the VMware self-signed certificates are being replaced by new certificates from a CA, then the following KB article should be followed to ensure the correct SSL certificates are being used by the VMware solutions:

https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2109074

Configure a SPBM Policy for Replicated VMs

1. Connect to the vCenter that will host the replicated VM

2. Navigate to Home -> Policies and Profiles -> VM Storage Policies

3. Create a new storage policy as follows

a. Step 1 Name and Description -> Select the target vCenter server and enter a name for the policy

b. Step 2a Rule-Set 1 -> Select ‘com.solidfire.vasa.capabilities’ from the Rule based on data services drop-down box. Set the data VVol minimum, maximum and burst IOPS to 1000/100000/100000. (A PowerCLI sketch of the same policy follows these steps.)

c. Step 3 Storage Compatibility -> Ensure the VVol datastore to be used for replication shows as compatible and then click Finish.
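For reference, the same policy can be scripted with PowerCLI’s SPBM cmdlets. This is a minimal sketch, assuming PowerCLI is installed, you are connected to vCenter (Connect-VIServer) and the SolidFire VASA provider is registered. The exact capability names under the ‘com.solidfire.vasa.capabilities’ namespace vary, so list them first and substitute the real names for the wildcard matches below; the policy name is a placeholder.

# Discover the SolidFire VASA capabilities advertised to vCenter
$sfCaps = Get-SpbmCapability | Where-Object { $_.Name -like 'com.solidfire*' }
$sfCaps | Select-Object Name

# Build the QoS rules (the wildcard name matches below are assumptions -- use the names listed above)
$rules = @(
    New-SpbmRule -Capability ($sfCaps | Where-Object { $_.Name -like '*min*' }) -Value 1000
    New-SpbmRule -Capability ($sfCaps | Where-Object { $_.Name -like '*max*' }) -Value 100000
    New-SpbmRule -Capability ($sfCaps | Where-Object { $_.Name -like '*burst*' }) -Value 100000
)

# Create the policy used for replicated VMs ('VR-VVol-Policy' is a placeholder name)
New-SpbmStoragePolicy -Name 'VR-VVol-Policy' -Description 'QoS for replicated VMs' -AnyOfRuleSets (New-SpbmRuleSet -AllOfRules $rules)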

Configuring a VM for Replication with PIT Instances

1. Right-click the VM to be replicated and select All vSphere Replication Actions -> Configure Replication

2. In the Configure Replication for <VM Name> wizard that appears, configure as follows

a. Step 1 Replication Type -> Select Replicate to a vCenter Server

b. Step 2 Target Site -> Select the target vCenter instance

c. Step 3 Replication Server -> Select “Auto-assign vSphere Replication server”

d. Step 4 Target Location -> Click the ‘Edit’ link and choose the target VVol datastore

e. Step 5 Replication Options -> Enable Guest OS quiescing and Network Compression if desired.

f. Step 6 Recovery Settings -> Set the desired RPO for the virtual machine and enable Point in time instances with the desired number of instances to keep for the VM. These will be converted to snapshots during recovery of the virtual machine. The maximum number of PIT copies for a VM is 24.

g. Click Next and then Finish to complete configuration of replication

3. The replication will now start an initial sync after a couple of minutes. After the initial sync the replication should show a status of OK.

4. Navigate to Home -> vSphere Replication and select the target vCenter server. Click ‘Monitor’

5. Select ‘Incoming Replications’ and then select the virtual machine. Select the Replication Details tab to verify details of the replication relationship for the VM.

6. Select the Point in Time tab to show PIT instances of the VM on the replicated site

Recovering a VM at the Replication Target Site

1. On the recovery vCenter navigate to Home -> vSphere Replication. Highlight the target vCenter instance and then click the monitor tab.

2. Highlight the VM to recover under “Incoming Replications” and then click the red ‘play’ button to start the recovery process

3. In the recovery wizard perform the following actions

a. Step 1 Recovery Options -> Select ‘Synchronize recent changes’ if the source VM is unavailable or shut down. If the source VM is still powered on and reachable, this will not be an option. In this case, select “Use latest available data” and click Next.

b. Step 2 Folder -> Select a folder on the target vCenter to receive the recovered VM

c. Step 3 Resource -> Select a host or cluster to host the restored VM. Click Next then Finish to complete the recovery of the VM.

4. The Recover Virtual Machine task will now run and register the restored VM on the target vCenter server. After a successful recovery, the status of the VM will be ‘Recovered’.

5. The recovered VM will be powered on with no network connectivity to avoid potential IP conflicts.

6. If the VM needs to be restored to a previous PIT instance, right-click the VM and navigate to Snapshots -> Manage Snapshots to select the PIT image to restore (a PowerCLI equivalent for this step and the next is sketched after step 7)

7. To restore networking for the recovered VM (assuming the primary VM is offline), edit the settings of the VM and check the ‘Connected’ box for the network adapter.
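For steps 6 and 7, the PowerCLI equivalent is roughly the following. This is a sketch, assuming an existing Connect-VIServer session to the recovery vCenter; the VM name is a placeholder.

$vm = Get-VM -Name 'recovered-vm'                # placeholder name

# Step 6: list the PIT snapshots and revert to the one you want
Get-Snapshot -VM $vm | Select-Object Name, Created
$pit = Get-Snapshot -VM $vm | Sort-Object Created | Select-Object -Last 1
Set-VM -VM $vm -Snapshot $pit -Confirm:$false

# Step 7: reconnect the network adapter once you are sure the primary VM is offline
Get-NetworkAdapter -VM $vm | Set-NetworkAdapter -Connected:$true -StartConnected:$true -Confirm:$false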

Re-protecting a Recovered VM

1. Right-click the recovered VM and select Configure vSphere Replication.

a. Step 1 Replication Type -> Select Replicate to a vCenter Server

b. Step 2 Target Site -> Select the vCenter to replicate the VM back to

c. Step 3 Replication Server -> Accept the default unless more than one replication appliance is deployed

d. Step 4 Target Location -> Ensure the source VM is powered off. Then click the Edit link and select the original storage location for the VM

i. Click OK. This will bring up a pop-up stating the target folder already exists. Click the Use Existing button to replace the original VM with the contents of the recovered VM

ii. Select the “Use all seeds” link to tell vSphere Replication to use the old VMDK files as a seed for the reverse replication (to save bandwidth during the recovery), and click Next

e. Step 5 Replication Options -> Select replication options that fit the requirements for re-seeding the original production VM

f. Step 6 Recovery Settings -> Set the RPO and PIT options as required. Then click Finish to start the re-seeding of the recovered VM to the production vCenter.

Element OS 9.2


Earlier this month SolidFire released Element OS 9 Patch 2 (Fluorine build 9.2.0.43). It includes support for all platforms and is a valid upgrade for any SolidFire cluster running Element OS 8.2 (Oxygen) or higher.

Some of the new features in patch 2 include

  • The MDSS feature (more metadata) is now supported on all platforms. It is a support-enabled feature at this time.
  • The vCenter Plugin has been updated to version 3. This was a complete rewrite of the plugin, and it now includes support for vSphere 6.5 and VVols 1/VASA 2. The new plugin focuses on pulling much more functionality from the SF GUI into vCenter and includes about 75% coverage of all storage tasks. I particularly like the VVols piece 🙂

There are also some very notable bug fixes included with this patch that will go a long way toward improving the behavior of the cluster in a number of areas.

  • Fixed some issues when doing VM level replication to a VVol datastore
  • Resolved issues with tagging the primary storage VLAN and other general network improvements related to VLANs
  • Better performance under conditions with a very high number of iSCSI sessions
  • Better behavior when adding Element OS 8.x nodes to a 9.x cluster
  • Fix to allow faster loading of the SF GUI in ‘dark sites’
  • Lots of other under the covers fixes for general improvements in performance and stability

Get your update on!

Capacity and Efficiency per VM

This (probably poorly written) PowerShell code will grab all the VVols on a SolidFire system, group them by VM, and then give you a readout of allocated and consumed capacity. It also reports on the overall efficiency of each VM.


#Connect to SolidFire system if no connection already exists
if(!$sfconnection){
Connect-SFCluster 172.27.40.200 -Username admin -Password solidfire | Out-Null
}

#Misc stuff
$separator = ".","_"

#Get a list of all the VVols and their stats
$VVols = Get-SFVirtualVolume -Details $true

#Group VVols by VM
#The VMW_VmID uniquely identifies the VVols associated with a VM
#Get list of unique VMW_VmID

foreach ($VmID in $VVols.metadata.VMW_VmID | Sort | Get-Unique) {

    #Get a unique VM object
    $VMObject = $VVols | select VirtualVolumeID,VolumeInfo,Metadata | where-object {$_.Metadata.VMW_VmID -match $VmID}
    $VMName = $VMObject.metadata.VMW_VVolName 
    $NumVVols = $VMObject.count
    
    $VMVVolStats = $VMObject.VolumeInfo.VirtualVolumeID | Get-SFVolumeStatsByVirtualVolume
    $VMVolEfficiencies = $VMObject.VolumeInfo.VolumeID | Get-SFVolumeEfficiency
      
    #Do some math for aggregate capacity
    foreach ($VMVVolStat in $VMVVolStats) {
        $VMAllocatedCapacity += $VMVVolStat.VolumeSize
        $VMConsumedBlocks += $VMVVolStat.NonZeroBlocks
    }    
        
    #Create estimated efficiency per volume
    For ($i=0; $i -lt $VMVVolStats.Count; $i++) {

        #Catch if there is a 0 compression or deduplication number, if so set to 1
        If ($VMVolEfficiencies[$i].Deduplication -eq "0") {
            $VMVolEfficiencies[$i].Deduplication = 1
        }
        If ($VMVolEfficiencies[$i].Compression -eq "0") {
            $VMVolEfficiencies[$i].Compression = 1
        }

        $VMVVolEfficiency = $VMVolEfficiencies[$i].Deduplication * $VMVolEfficiencies[$i].Compression
        $VMDiskEffectiveSize = (($VMVVolStats[$i].NonZeroBlocks * 4096) /1gb) / $VMVVolEfficiency
        $TotalVMEffective += [math]::Round($VMDiskEffectiveSize,2)
    }

    #Turn blocks into bytes (not quite as exciting as water to wine)
    $VMConsumedCapacity = [math]::Round($VMConsumedBlocks * 4096 /1gb,2)
    $VMEffectiveEfficiency = [math]::Round($VMConsumedCapacity / $TotalVMEffective,2)

    #Total Counters
    $AllVMConsumed += $VMConsumedCapacity
    $AllVMEffective += $TotalVMEffective
    $AllEfficiency = [math]::Round(($AllVMConsumed / $AllVMEffective),2)

    #Note, all the capacity values are in GiB, not GB
    Write-Host "`nVM Name: $($VMName.split($separator) | select -first 1)" -ForegroundColor DarkGray
    Write-Host "Number of VVols: $NumVVols" -ForegroundColor Green
    Write-Host "VM Allocated GiB: $($VMAllocatedCapacity /1gb)" -ForegroundColor Green
    Write-Host "VM Consumed GiB: $VMConsumedCapacity" -ForegroundColor Green
    Write-Host "VM Effective GiB: $TotalVMEffective" -ForegroundColor Green
    Write-Host "VM Effective Efficiency: $VMEffectiveEfficiency" -ForegroundColor Green

    #Clear out counters
    $VMAllocatedCapacity = 0
    $VMConsumedBlocks = 0
    $TotalVMEffective = 0
    $VMDiskEffectiveSize = 0
    $VMConsumedCapacity = 0
    $VMEffectiveEfficiency = 0
}
Write-Host "`nTotal Consumed GiB Used by VMs:"  $AllVMConsumed
Write-Host "Total Effective GiB Used by VMs:" $AllVMEffective
Write-Host "Total Effective Efficiency:" $AllEfficiency

I’ll just come right out and say that the efficiency number really isn’t all that useful by itself. The reason is that the GetVolumeEfficiency call looks at a volume in isolation. This means it only reports the efficiency within the individual volumes, not across volumes. Practically, that means the reported number is a worst-case scenario.

However, it is useful in the sense that it provides a pretty decent expected ratio among a set of VMs. In the example above, I can see that my vra-2012r2 template reduced 1.91x and my vra-sql2014 template reduced 1.62x – a difference of ~ 15%. Generally speaking that means I can expect to get roughly 15% better efficiency rates with the vra-2012r2 VMs vs the vra-SQL2014 VMs. That might be useful information to have.

To test if this holds in a deployment, I created two storage containers (vra-2012r2 and vra-SQL2014) and deployed 50 VMs to each from the respective VM templates.


This allows SolidFire to report the efficiency for all the VMs in those two storage containers independently so we can see how well we dedupe/compress across the VMs in the same storage container. Given the numbers reported per-VM earlier I would expect the vra-2012r2 storage container to get roughly 15% better numbers.
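To pull the per-container numbers without clicking through the UI, a quick loop over the storage containers works. This is a sketch, assuming the SolidFire PowerShell module and an existing Connect-SFCluster session; the Name property on the storage container object is an assumption.

foreach ($sc in Get-SFStorageContainer) {
    $eff = Get-SFStorageContainerEfficiency -StorageContainerID $sc.StorageContainerID
    "{0}: {1}x" -f $sc.Name, [math]::Round($eff.Deduplication * $eff.Compression, 2)
}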


So we did, in fact, get better efficiency for the vra-2012r2 VMs, and the difference in efficiency is ~20%, which is in the ballpark 🙂 After 24 hours the difference was ~17%, almost exactly matching the original difference in the templates. So, not an exact science, but useful for general sizing estimates.

Measuring Efficiencies on VVols

Something that always comes up in discussions around VVols is “how do I report on how much I’ve allocated and consumed?”. This is usually expressed in one of the following ways:

  • Can I know total capacity allocated and consumed for the system and what efficiencies am I getting?
  • Can I know capacity allocated/consumed and efficiencies for each storage container?
  • Can I know capacity allocated/consumed and efficiencies for each VM?

Fortunately, the answer is yes to all three 🙂 There are a number of ways to get at this, but for the sake of simplicity I’ll be showing examples using the SolidFire PowerShell module. If you don’t have the PSM installed you can grab it and the installation instructions here: https://github.com/solidfire/powershell
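If you are starting from scratch, getting the module loaded and connected typically looks like the following. This is a sketch; it assumes the module is published to the PowerShell Gallery under the name SolidFire, and the MVIP/credentials shown are the lab values used throughout this post.

Install-Module -Name SolidFire
Import-Module SolidFire
Connect-SFCluster 172.27.40.200 -Username admin -Password solidfire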

Total System Capacity and Efficiencies

Total system metrics can be viewed by running Get-SFClusterCapacity

C:\> $ClusterCapacity = Get-SFClusterCapacity
C:\> $ClusterCapacity

ActiveBlockSpace : 48349817348
ActiveSessions : 16
AverageIOPS : 4
ClusterRecentIOSize : 5602
CurrentIOPS : 4
MaxIOPS : 200000
MaxOverProvisionableSpace : 276546135777280
MaxProvisionedSpace : 55309227155456
MaxUsedMetadataSpace : 432103337164
MaxUsedSpace : 8642066743296
NonZeroBlocks : 52358440
PeakActiveSessions : 16
PeakIOPS : 502
ProvisionedSpace : 662498705408
SnapshotNonZeroBlocks : 0
Timestamp : 2017-02-17T18:05:17Z
TotalOps : 3586717
UniqueBlocks : 16370852
UniqueBlocksUsedSpace : 49468393216
UsedMetadataSpace : 876314624
UsedMetadataSpaceInSnapshots : 876314624
UsedSpace : 49468393216
ZeroBlocks : 271127256

Let me decode the key entries for you:

  • NonZeroBlocks – The number of 4K blocks in the system that have been ‘touched’. Basically, there has been data written to them.
  • ProvisionedSpace – The number of bytes that have been provisioned. In the VVol case, this will line up with the aggregate size of all the VMDKs. For example, adding an 80GB VMDK to a VM will increase this counter by 80GB.
  • UsedSpace – The actual amount of consumed block space on SolidFire for all NonZeroBlocks post deduplication/ compression savings. 

So the answers to our questions are:

  • Total Capacity Allocated = ProvisionedSpace (662498705408 bytes; 662GB)
  • Total Capacity Consumed [Pre-Efficiencies] = NonZeroBlocks * 4096 = 52358440 * 4096 (214460170240 bytes; 214GB)
  • Total Capacity Consumed [Post Efficiencies] = UsedSpace (49468393216 bytes; 49GB) –> This implies a 4.6 cluster efficiency metric (deduplication * compression)

This lines up nicely with the numbers reported on the cluster reporting overview screen.


 

So if you wanted to pull those numbers into some PowerShell variables, you could do this. Note the values will be in GiB, not GB, when using the built-in “/1gb” PowerShell constant.

$ClusterStats = Get-SFClusterCapacity
$ClusterAllocated = $ClusterStats.ProvisionedSpace /1gb
$ClusterConsumedPre = $ClusterStats.NonZeroBlocks * 4096 /1gb
$ClusterConsumedPost = $ClusterStats.UsedSpace /1gb
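And if you also want the implied overall efficiency ratio, it falls out of the same two counters (a small addition to the snippet above):

$ClusterEfficiency = [math]::Round($ClusterConsumedPre / $ClusterConsumedPost, 2)   # deduplication * compression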

Capacity and Efficiency per Storage Container

[Note: If this section seems a little convoluted it is. There is a bug in the SDK that leaves out some details needed to match up VVols and storage containers, so I had to follow a bit of a circuitous route to get at what I wanted. I’ll amend this post when the hotfix is available.]

OK, so let’s take a look at how this breaks down with storage containers in the mix. There is no API call to get all the capacity-related counters for a storage container directly. We have to get at it by summing up the values for all VVols in the storage container.

There are a couple ways to go about this, but (until the SDK is fixed) I settled on the following logic.


#First, get storage containers and account objects
$StorageContainers = Get-SFStorageContainer
$StorageContainerAccounts = Get-SFAccount -AccountID $StorageContainers.AccountID

foreach ($StorageContainerAccount in $StorageContainerAccounts){

    #Get VVols for the Storage Container
    $VVols = Get-SFVolume -accountID $StorageContainerAccount.AccountID
    $VVolStats = $VVols | Get-SFVolumeStatsByVirtualVolume

    #Add up capacity stats for all VVols under a single storage container
    foreach ($vvol in $VVolStats) {
        $StorageContainerAllocatedBytes += $vvol.VolumeSize
        $StorageContainerNonZeroblocks += $vvol.NonZeroBlocks
    }

    #Do some math
    $StorageContainerConsumedGB = [math]::Round(($StorageContainerNonZeroblocks * 4096) /1gb,2)
    $StorageContainerAllocatedGB = $StorageContainerAllocatedBytes /1gb
    $TempSC = $storagecontainers | select AccountID,StoragecontainerID | where-object {$_.AccountID -match $StorageContainerAccount.AccountID}
    $StorageContainerEfficiency = Get-SFStorageContainerEfficiency -StorageContainerID $TempSC.StorageContainerID 
    $scdedupe = [math]::Round($StorageContainerEfficiency.Deduplication,2)
    $sccompres = [math]::Round($StorageContainerEfficiency.Compression,2)
    $sctotalefficiency = [math]::Round($StorageContainerEfficiency.Deduplication * $StorageContainerEfficiency.Compression,2)

    #Write out results to console on each pass
    Write-Host "`nStorage Container $($StorageContainerAccount.Username) VVol Stats (Account ID $($StorageContainerAccount.AccountID)):" -ForegroundColor DarkGray
    Write-Host "Allocated GB: $($StorageContainerAllocatedGB)" -ForegroundColor Green
    Write-Host "Consumed GB: $($StorageContainerConsumedGB)" -ForegroundColor Green
    Write-Host "Storage Container Efficiency: " -ForegroundColor Green
    Write-Host "Deduplication: $($scdedupe)" -ForegroundColor Green
    Write-Host "Compression: $($sccompres)" -ForegroundColor Green
    Write-Host "Total: $($sctotalefficiency)" -ForegroundColor Green

    #Clear out variables
    $StorageContainerAllocatedBytes = 0
    $StorageContainerNonZeroblocks = 0
    $StorageContainerConsumedGB = 0
    $StorageContainerAllocatedGB = 0
    $TempSC = $null
    $StorageContainerEfficiency = 0
    $scdedupe = $null
    $sccompres = $null
    $sctotalefficiency = $null
}


That produces the following results:


Note that the cluster-wide efficiency numbers will look better than the individual storage container metrics because the deduplication domain is larger when you look at the full cluster.

Now to get to work on that per-VM reporting…

Capturing API SSL Traffic with Wireshark

I ran into an issue with Postman today that required me to examine the API responses from my SolidFire cluster. However, since the API payload is encrypted, it takes a little extra work. When I first captured the event in question, I couldn’t find the API traffic I was expecting in the Wireshark trace.

To decrypt the traffic I needed the private key used to secure the exchange. Once I had that, in Wireshark I navigated to Edit -> Preferences -> Protocols -> SSL -> RSA Keys List to get to the SSL Decrypt dialog. Here I added the management IP address (MVIP) of the SolidFire cluster, specified http as the protocol and loaded the private key file.

Once that was done I could see the HTTP JSON POST and response calls I was interested in. Not something I think I’ll do often, but it is a neat tool to have in your back pocket. 

Incidentally, this helped me verify that some garbage characters I was seeing in an API response for GetStorageContainerEfficiency were an issue with Postman itself, not the API. Reinstalling Postman cleared up the issue.

vRA + SPBM + VVols = Awesome

When I first joined SolidFire in June of 2014 I remember sitting in my first ‘deep dive’ on the product and hearing Dave Cahill (@dcahill8) say something along the lines of “SolidFire + an automation/orchestration tool = NGDC”. That sounded cool, but personally I’ve never done much with automation and orchestration tools. Frankly, I found them overly complex and intimidating. I always had a full plate anyway, so why invest the time?

Fast forward a couple of years and it certainly has become apparent that the saying ‘evolve or die’ applies fairly well to your job skills 🙂 I’ve found the best way to learn something is to find something you are passionate about and extend that. So in my case, that means ‘how do I use this with VVols/SPBM?’.

I should note, this is a spiritual extension of the “Project Magic” work that Josh Atwell, Rawlinson Rivera and others developed back in 2015. While that was focused on VMFS implementations, this work is centered on VVols. I also wanted to give more details on how one would actually implement this since it was mostly PoC material. If you are interested, you can find an example of Project Magic + SolidFire here: 

 


Getting Started

I won’t lie, it took me a week of intermittent work to get vRA to the point it would deploy a VM. That being said, it would have been longer if not for the excellent work by Erik Shanks at theithollow.com and Michael Rudloff at open902.com. Both have very good guides on getting vRA 7.x off the ground the right way. For my purposes, I’m setting up a configuration that aligns with the “Multi-tenant Example with Infrastructure Configuration Only in Default Tenant” example from this webinar. More to follow on that in future posts.



vRealize Orchestrator vRA SPBM Integration Plug-in

vRA has no integrated support for SPBM yet, so this plugin gives us the next best thing. The first step is to grab the plugin files from the Solutions Exchange. Getting the plugin working is pretty straightforward if you follow the installation guide, with a couple of exceptions. In my particular case, I had to make the following changes:
  • On page 14 “Enable the Set Storage Policy for Virtual Machine Provisioning” I had to make two changes:
    • Your endpoint should be the name of your actual vCenter endpoint, not named “endpoint”. 
    • The “bind” checkbox for your endpoint should not be checked. This resolved an “Unable to refresh request form from the server. Required field with ID: vCenter  missing” error I was getting when requesting a blueprint.
  • On page 16, step 9c when creating the event subscription, I ended up removing the blueprint name conditional because I didn’t want to create a new subscription for each blueprint I created. This will make the plugin attempt to apply SPBM policies to all VMs stamped out through vRA, which is what I wanted in my environment.

I will be adding a couple of disks to some of my blueprints, so I defined five property definitions under Administration -> Property Dictionary -> Property Definitions.


Creating SPBM Policies

So what do SPBM policies buy you on SolidFire? Well, there is only a single published capability, and that is QoS. “Why only one?” I hear you say. We don’t have to advertise things that the platform takes care of for us. So even though you are taking advantage of the deduplication, compression, data protection, QoS (and other) features, we don’t have to call them out specifically via an SPBM policy since they are always on and you get them automatically by simply using the platform. The below graphic helps tease that out a bit, and you can also confirm it yourself with PowerCLI (see the snippet after the policy list below).



That being the case, our SPBM policies are there simply to configure QoS on a per VMDK level. In this case there are five policies I’m going to leverage:
  • ServerOS-Base – Sets QoS to 1000/2500/10000 IOPS for Min/Max/Burst respectively
  • ServerOS-SQL – Sets QoS to 2000/4000/8000 IOPS for Min/Max/Burst respectively
  • SQL-Backup – Sets QoS to 2500/5000/10000 IOPS for Min/Max/Burst respectively
  • SQL-DB – Sets QoS to 2500/5000/10000 IOPS for Min/Max/Burst respectively
  • SQL-Logs – Sets QoS to 1000/4000/8000 IOPS for Min/Max/Burst respectively
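To confirm that QoS really is the only capability the SolidFire VASA provider publishes (as noted above), you can ask PowerCLI. A sketch, assuming PowerCLI and an existing Connect-VIServer session:

# Should return only the QoS-related entries under the com.solidfire.vasa.capabilities namespace
Get-SpbmCapability | Where-Object { $_.Name -like 'com.solidfire*' } | Select-Object Name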

Creating vRA Blueprints with a Single Disk/Policy

In my environment I have created three blueprints as shown below. The two datacenter blueprints have a single disk configured. These blueprints will assign a single SPBM policy shared by VMHome and “Hard Disk 1” of the VM. Pretty straightforward, even for me.

To configure the VM Home storage policy, open the blueprint, navigate to Properties -> Custom Properties, add “VMHomeStoragePolicy” and set the value to “Datastore Default”. I also set the VMware.VirtualCenter.OperatingSystem and Hostname custom properties to aid in naming and getting the VM type right.

 

The next step is to configure disk0. To do that, head over to the storage tab and add a disk if you haven’t done so already. Then set any options you want, such as label and storage reservations.

Now click the custom properties edit symbol and add a new property called “VirtualMachine.Disk0.DiskStoragePolicy” and set the value to “Use VM Home Storage Policy”. Note: If you have a 2nd disk that should get the same policy, then add the “VirtualMachine.Disk1.DiskStoragePolicy” custom property to the 2nd disk with the value “Use VM Home Storage Policy”. Repeat as needed if you have multiple disks that will share the same policy.

 

Now publish the blueprint and add it to the catalog (Administration -> Catalog Items). If you don’t see the blueprint, make sure you have created a service under Administration -> Services and that your catalog item shows as being part of the service.

 

Request the blueprint you just created. Under the “vSphere Machine” General tab you will see the particulars of your blueprint. If everything is set up right you will see a dropdown box listing the SPBM policies available for the “VM Home Storage Policy”.

 

Submit the request and, assuming everything is good, a couple of minutes later you will be the proud owner of a new VM with your specified SPBM policy applied to the VM Home and Hard Disk 1.
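A quick way to double-check what vRA/vRO actually applied is PowerCLI’s SPBM entity cmdlet. A sketch, assuming an existing Connect-VIServer session; the VM name is a placeholder:

$vm = Get-VM -Name 'vra-deployed-vm-01'                        # placeholder name
Get-SpbmEntityConfiguration -VM $vm                            # VM Home policy
Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm)   # per-VMDK policies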

 

Did it work? Great! Have a drink and celebrate! Didn’t work? You are going to need more drinks.

As an aside, the timing of the SPBM policy application is a bit different if you are creating a VM from scratch vs. cloning from a template. For newly created VMs, the SPBM policy is applied very quickly after the VM is created. For cloned VMs, if you look at the VM right after it is created, it will still have whatever SPBM policy was applied to the template. This is because the vRO workflow that sets SPBM policy is executed as the last step before powering on the VM. If the policy is still wrong by the time the VM powers up, the SPBM policies didn’t get applied.

Creating vRA Blueprints with Multiple Disks and Policies

Creating blueprints where multiple disks will have disparate policies starts off the same way. For each additional disk after ID 0, add the custom property “VirtualMachine.DiskN.DiskStoragePolicy”, where “N” is the number of the disk. 
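For example, a three-disk blueprint where the data and log disks get their own policies might carry custom properties along these lines (illustrative only; the policy names are the ones defined earlier in this post, and the values are chosen from the SPBM drop-downs at request time):

VirtualMachine.Disk0.DiskStoragePolicy   (e.g. Use VM Home Storage Policy)
VirtualMachine.Disk1.DiskStoragePolicy   (e.g. SQL-DB)
VirtualMachine.Disk2.DiskStoragePolicy   (e.g. SQL-Logs)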

 

Request the blueprint and you will see the request prompts for multiple policies to be applied to the VM.

 

Once the VM is deployed, check out all those policies! 😉

And what does that get us? Custom QoS for all our mount points 🙂 

 

Hopefully this is useful and you learned something along the way. I’m looking forward to posting more on vRO and vRA stuff in the near future. I’m just getting started, so if you have tips to share, I’m all ears.