How to deploy a Windows Virtual Desktop host pool using Infrastructure as Code from Azure DevOps

If, like me, you come from an infrastructure background and have always built servers, virtual machines and the like manually, then the thought of doing all that hard work via code does not always come naturally.

Building out a gold image was a time-intensive process with lots of manual steps, and deploying it at scale needed tools such as Citrix MCS, tooling within the hypervisor, or similar.

But if you have done anything in the public cloud you will have heard of "Infrastructure as Code", and if you have done anything in Azure then you will likely have heard of Azure DevOps, which gives people continuous integration, testing and delivery: the ability to deploy code, or whole applications, at the press of a button.

But how can this apply to a virtual desktop capability when you are deploying hundreds or thousands of Azure IaaS VMs as part of a Windows Virtual Desktop deployment? I don't come from a development background, but working at Microsoft as a Windows Virtual Desktop Global Black Belt, and constantly hearing about Azure DevOps, I thought it was time to see how one could use ADO to automate the steps that have always been somewhat manual but were nonetheless always required when doing large virtual desktop deployments.

There must be a way to bring the power of DevOps to the virtual desktop capability, I thought. It turns out there is, and in fact there are multiple ways. Hence this article is designed for traditional infrastructure people to learn the basics (genuinely the basics - there is a lot more to learn) of deploying that infrastructure via code, repeatedly.

This is not exclusively my own work but rather a community effort, with lots of information compiled from various sources. In particular, thanks to Jack Rudlin, whose article on this subject is really the main basis for it.

This article makes the massive assumption that you are an EUC expert with an Infrastructure background and you already know WVD and the ways that you can deploy a host pool, and the plumbing required for this, including having a WVD Service Principal. However, it assumes that you don't know much about DevOps.

If you already know Azure DevOps you won’t learn anything here! However, if you are a traditional IT Pro doing EUC then hopefully you will, and you will be able to deploy or update your WVD host pools at the press of a button, saving you massive amounts of time and error.

By the end of this article you will have:
1. Two golden images: Windows 10 multi-session 1903 and 1909, plus the knowledge of how to easily create a 20nn image when it becomes available. This will use an ADO Build Pipeline.
2. An automated method to update an existing host pool - the main use case here, as we are talking about making the ongoing life of a WVD admin as easy as possible - but also how to build a new host pool based upon one of your images. This will use an ADO Release Pipeline. We will then enable it so that when you build a new image, your existing WVD host pool is automatically updated with it. This takes care of our need to regularly update the session host VMs in a host pool with an updated image, in order to take advantage of new applications, patches or anything else we have added to the image.

Within Azure DevOps we will create:

  • A code repository to store the build templates as well as the WVD Deploy and Update ARM Templates. 
  • A Build Pipeline. The pipeline uses that repository, as well as an Azure Files SMB file share where we store the application install media for the apps in our golden image, and a Key vault where we store the passwords for some sensitive services. This Build Pipeline builds an Azure Managed Image with those apps installed, using HashiCorp Packer.
  • A Release Pipeline to take that image and use the existing WVD ARM templates to either build a new host pool or update an existing one. This will be configured so that we kick off a new build and ADO will not only complete the build but automatically deploy or update the host pool with no further interaction.

This all looks like this:

The pre-reqs for this are an Azure Subscription, a WVD tenant and an Azure DevOps account.
Throughout this guide you will also need several files already created. These are available in my GitHub Repo here. Download all these files to a local location.

This article has two main sections:

Section 1 – Creating the Azure DevOps Repo where we store all of the JSON templates. It also goes through the process of creating the required Azure Infrastructure that we use in the build.

Section 2 – Creating the Azure DevOps Build and Release Pipelines that build an image and create or update WVD host pools with the new image.

Section 1

To start you need an Azure DevOps account. Go to this site to create your ADO account. This will take you through the process to get your own ADO account set up so that you can start building things.

So, if you have your ADO account let’s start.

The first thing to do is create a New Project.

Step 1. In the ADO Homepage click on + New Project:

Step 2. Give your project a meaningful name. Then depending on your ADO setup choose the appropriate visibility. Click on Advanced and then select "Git" in Version Control and "Basic" in Work Item process:

Click on Create and you will have your new ADO project ready to go:

Step 3.  Create a Repo.
On the left Click on Repos.

Leave the defaults and click on Initialize at the bottom.

This has created an empty Repo and initialized it ready to receive some content.

Step 4. Create some folders to store your files and then upload them

On the top right click on the vertical ellipsis and select + New > Folder:

Name this folder ARM Templates and enter a placeholder name in the New file name field (Git cannot commit an empty folder, so ADO requires a file here):

In the top right click on the Commit button.

In the Commit window just click on the Commit button at the bottom.

Step 5. Click on WVD-Auto-HostPool (or your Project name) at the top of the Repo screen and repeat these steps and create a folder named - Packer Build - Win 10 1903 EVD

Step 6. Repeat the above and create a folder named - Packer Build - Win 10 1909 EVD

You should now have a folder structure that looks like this:

Step 7. Now we will upload the two "deployment" ARM templates for WVD host pool deployments that we will use right at the end, in the "Release Pipeline" that builds or updates a host pool.
Click on the ARM Templates folder and then click on the ellipsis icon at the top right and click on Upload Files.

Then browse to the mainTemplate.json file you downloaded earlier, then click on Commit in the bottom right.

Delete the placeholder file as it is no longer required. Select the file, then click on the ellipsis and Delete.

Then click on Commit in the bottom right.

Step 8. Repeat this step and upload the UpdateTemplate.json file.

This should leave you with a folder structure like this.

Step 9. Now we will upload the two Packer Templates that build the Windows 10 1903 EVD image and the Windows 10 1909 EVD image.

Click on the "Packer Build - Win 10 1903 EVD" folder, click the ellipsis in the top right, and upload and commit the packer-win10_1903.json file you downloaded previously.

Also delete the placeholder file in this folder. You can also delete a file by clicking the ellipsis to the right of the file and selecting Delete, then clicking on Commit.

Now let’s have a look at this template file. Click on the packer-win10_1903.json file.

The variables section has several variables that are set either in Key vault or in the Pipeline which we will set a little later.

The builders section is where the action happens and builds the Azure Managed Image.

It is in here that we specify which build of Windows 10 we want. At line 32 we call 19h1-evd:

This is the 1903 build. If you change this to 19h2-evd it will build 1909.
In the future, this is where you would enter 20h1-evd to build the 20nn release.

Also note that lines 41-44 reference a Virtual Network. This vNet is the one that the VM used to create the Image is placed on. We will need to create that later.
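As an abridged sketch only (your copy of the template will differ, and the vNet, subnet and resource group names below are illustrative placeholders), the builders section of a Packer azure-arm builder looks roughly like this:

```json
"builders": [{
    "type": "azure-arm",
    "client_id": "{{user `client_id`}}",
    "client_secret": "{{user `client_secret`}}",
    "subscription_id": "{{user `subscription_id`}}",
    "tenant_id": "{{user `tenant_id`}}",

    "os_type": "Windows",
    "image_publisher": "MicrosoftWindowsDesktop",
    "image_offer": "Windows-10",
    "image_sku": "19h1-evd",

    "managed_image_name": "{{user `managed_image_name`}}",
    "managed_image_resource_group_name": "{{user `managed_image_resource_group_name`}}",

    "virtual_network_name": "Packer-vNet",
    "virtual_network_subnet_name": "PackerSubnet",
    "virtual_network_resource_group_name": "Packer-Build",

    "temp_resource_group_name": "PackerBuild-Temp",
    "location": "North Europe",
    "vm_size": "Standard_D2s_v3"
}]
```

The image_sku field is the line to swap between 19h1-evd and 19h2-evd, and the virtual_network_* fields are the ones that must match the permanent vNet we create later.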

The provisioners section does the application installs. Now in the file I have uploaded I have entered two methods for installing applications. You should select only the one that is the most appropriate for you. I have added both methods in the one file just to show the choices (at least two choices as there are many more) you have. You will need to select one and delete the other in a moment.

The application installs are done in this section from lines 79 - 86:

The first method has two examples in lines 79-83.

These two examples install two apps (Notepad++ and FSLogix).

This depends on having an Azure Files SMB share available (which we will create a little later) where we have placed all the app installers. It also needs the "I" drive mapped, which we do in lines 71-74.

Each app then has a cmd file that silently installs it.
For reference the FSlogix.cmd just has this command within it:
i:\fslogix\x64\Release\FSLogixAppsSetup.exe /install /quiet /norestart

This approach allows the flexibility of including or removing apps as required.
If you are doing app installs this way remember to remove the "," at the end of line 83 and then delete lines 85 & 86.
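As a sketch of what method one's entries can look like (the share name, account name and cmd file paths below are illustrative placeholders, and your template's exact JSON will differ), the drive mapping plus per-app provisioners might resemble:

```json
"provisioners": [
    {
        "type": "powershell",
        "inline": [
            "net use I: \\\\mystorageacct.file.core.windows.net\\appinstalls /user:Azure\\mystorageacct {{user `storage_account_key`}}"
        ]
    },
    {
        "type": "windows-shell",
        "inline": [ "I:\\npp\\npp.cmd" ]
    },
    {
        "type": "windows-shell",
        "inline": [ "I:\\fslogix\\fslogix.cmd" ]
    }
]
```

Adding or removing an app is then just a matter of adding or removing one windows-shell provisioner entry and its cmd file on the share.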

However, lines 85 & 86 show the second way to deploy apps, which is to create a PowerShell script that has all the apps you want to install within the one file:

This calls another cmd file which itself calls a PS1 file with all the app installs done within the one file.
The cmd file contains just this line:
powershell.exe -executionpolicy bypass -windowstyle hidden -noninteractive -nologo -file "I:\GoldImage\Installapps.ps1"

The Installapps.ps1 file has all the PowerShell commands to download and install all the apps you require within your Golden Image. This method downloads the app installers directly from the internet locally and then installs them. This saves the need to download the app installers and place on the Azure Files share.

The file looks like this:

If you would rather deploy apps this way, then delete lines 79-83.
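For illustration only - the actual Installapps.ps1 in the repo will differ, and the download URLs and silent switches below are assumptions to verify against each vendor's documentation - such a script might look like:

```powershell
# Installapps.ps1 - illustrative sketch only; URLs and switches are assumptions
$ErrorActionPreference = "Stop"
$tmp = "C:\Temp\Apps"
New-Item -Path $tmp -ItemType Directory -Force | Out-Null

# Notepad++ (NSIS installer; /S = silent). URL is a placeholder.
Invoke-WebRequest -Uri "https://example.com/npp.Installer.x64.exe" -OutFile "$tmp\npp.exe"
Start-Process "$tmp\npp.exe" -ArgumentList "/S" -Wait

# FSLogix - the aka.ms link redirects to the current release zip
Invoke-WebRequest -Uri "https://aka.ms/fslogix_download" -OutFile "$tmp\fslogix.zip"
Expand-Archive -Path "$tmp\fslogix.zip" -DestinationPath "$tmp\fslogix" -Force
Start-Process "$tmp\fslogix\x64\Release\FSLogixAppsSetup.exe" -ArgumentList "/install /quiet /norestart" -Wait
```

The trade-off versus method one is that the VM must have internet access during the build, but you never have to maintain installers on the file share.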

Step 10. Click on the Packer Build - Win 10 1909 EVD folder and upload the packer-win10_1909.json file as above.

Once uploaded, note that line 32 has 19h2-evd. Also note the app installation section between lines 76-83. Delete the sections that you do not want.

Delete the placeholder file.

You will now be left with a Repo structure that looks like this:

Our Repo is now complete, and we can move on to the Build Pipeline, which will use HashiCorp's Packer to do the actual build.

Section 2

Step 11. The first thing we need is a Service Principal for Packer.

Packer, like every other service that integrates with your Azure subscription, needs a secure mechanism to authenticate. WVD itself is another example of this type of access. Packer creates some infrastructure components in your Azure subscription, as does WVD, and thus needs this SPN.

There are two ways to create a Service Principal:
1. Via the Azure portal.
2. Via PowerShell.

Let’s start by doing this via the Azure Portal.

In the Azure Portal go to your Azure Active Directory:

Select App Registrations.

Click on + New Registration.

Step 12. Give it a sensible name, select "Accounts in this organizational directory only ...." and in the Redirect URI enter a fictitious URL as this is actually not required in this context.

Click on Register at the bottom.

Step 13. Assign a Role to this SPN.

Go to your "Subscription" blade.

Select your subscription, and then Access Control (IAM).

On the right, select Add in the Add a role assignment section.

Select the appropriate role (I have selected Contributor), then under Assign access to choose Azure AD user, group or service principal. In the Select field type the first few characters of your SPN and then select your SPN.

Click on Save at the bottom.

Step 14. Get the sign in values for this SPN.

Go back to the SPN in the App Registrations section of AAD

On the Overview tab copy the Application (client) ID and store it somewhere. I suggest you store this temporarily in Notepad and keep Notepad open, as there will be a number of items we will need access to later on. We will need this in step 18, when we add it to Key vault for secure storage.

Click on Certificates & secrets.

Click on + New client secret.

Enter a description and select an expiration.

Copy the Value and temporarily store it in Notepad. You will need to copy this now as it won’t be shown again. If you don’t copy it, you will need to create a new one. We need this and the App ID later, and we will securely store both in Azure Key vault in step 18.

The other way, which is quicker, is to use PowerShell.

Step 15. Run these PowerShell commands to create the SPN:

$aadContext = Connect-AzureAD
$svcPrincipal = New-AzureADApplication -AvailableToOtherTenants $true -DisplayName "Packer-ServicePrincipal"
$svcPrincipalCreds = New-AzureADApplicationPasswordCredential -ObjectId $svcPrincipal.ObjectId

#Get Password (secret):
$svcPrincipalCreds.Value

#Get App id:
$svcPrincipal.AppId

Repeat the steps above to give this SPN at least Contributor access to your subscription.
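This role assignment can also be scripted. As a sketch (it assumes the Az module and an existing Connect-AzAccount session; the subscription ID is a placeholder):

```powershell
# Assign the Contributor role to the SPN at subscription scope.
# <your-subscription-id> is a placeholder - substitute your own.
New-AzRoleAssignment -ApplicationId $svcPrincipal.AppId `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<your-subscription-id>"
```

Note that a newly created SPN can take a short while to replicate, so if the assignment fails immediately after creation, wait a minute and retry.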

Step 16. Create a share for storing the application installers.

We will use Azure Files to create a simple SMB file share. This share is what we map to in the Packer build and is defined in the Packer Build templates that we uploaded to the Repo and edited earlier.

In the Azure Portal create a new Resource Group. Call it Packer-Build. This resource group will remain and will end up containing a few services such as the Storage account and Key vault. It will also be where we permanently store the resulting Images.
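If you prefer to script this, the resource group can be created with one Az module command (the region is an example; use your own):

```powershell
# Create the permanent resource group for the build assets
New-AzResourceGroup -Name "Packer-Build" -Location "northeurope"
```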

Create a new storage account.

In your Resource Group Click on + Add and Type Storage account.

Click on Create.

Give it a name and make sure you select the same Azure Region as you would like to deploy your new session hosts.

Either click on Review + Create or go through the other sections.

Create the Storage account.
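The same storage account can be created in PowerShell. A sketch (the account name must be globally unique; name and region below are examples):

```powershell
# Create the storage account that will hold the app installers share
New-AzStorageAccount -ResourceGroupName "Packer-Build" `
    -Name "packerappinstalls" `
    -Location "northeurope" `
    -SkuName "Standard_LRS" `
    -Kind "StorageV2"
```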

Step 17. Create the Azure File Share.

Now we just need to create the Azure File share in the storage account. This is where we will place the file for the app installs.

Go to your newly created storage account and on the Overview tab, click on File Shares.

Click on + File Share:

Give the File share a name and a quota:

Click on Create at the bottom.
Go into the file share and click on Upload.

Now upload the application install files and the install cmd files for each.
Also note that if you are going to deploy apps via the PowerShell file you don’t need to upload the installers. All you need to do is create a folder such as "GoldImage" and place the "InstallApps.cmd" and "InstallApps.ps1" files in it.

Either way (or both) you should end up with something that looks like:

Also, we need to be able to map a drive to this share a little later.

With the File Share selected, click on Connect at the top:

Select Windows and copy out the whole text:

Paste it into Notepad or something similar and then copy out just the path to the share and store that as we will need it later. This is the line that starts with New-PSDrive....

i.e. in my example it is:


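For illustration, the script the Connect blade generates looks roughly like the following ("mystorageacct" is a placeholder; the UNC path in the New-PSDrive line is the share path to copy out and keep):

```powershell
# Roughly what the portal's Connect blade generates for a Windows client
$connectTestResult = Test-NetConnection -ComputerName "mystorageacct.file.core.windows.net" -Port 445
if ($connectTestResult.TcpTestSucceeded) {
    # Cache the storage account credentials for the SMB connection
    cmd.exe /C "cmdkey /add:mystorageacct.file.core.windows.net /user:Azure\mystorageacct /pass:<storage-account-key>"
    # The -Root value is the share path we need later
    New-PSDrive -Name Z -PSProvider FileSystem -Root "\\mystorageacct.file.core.windows.net\appinstalls" -Persist
} else {
    Write-Error "Could not reach the storage account over port 445."
}
```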
On the storage account, go back to the Access Keys section:

Copy out and store the Storage Account name and the Key. We will need them in Step 18.

Step 18. Create an Azure Key vault.

Go back to the Resource Group you just created and click on +Add

Search for Key vault and click on Create

Give the Key Vault a name and click on Create at the bottom:

You will now have a new Key vault. We will now add the various details you have stored from earlier as secrets in key vault. Later on, we will retrieve these secrets from the Build Pipeline so that we are not storing them anywhere in code.
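As an aside, the vault itself could also have been created with one PowerShell command (the vault name is a placeholder and must be globally unique; the region is an example):

```powershell
# Create the Key vault in the permanent Packer-Build resource group
New-AzKeyVault -Name "packer-wvd-kv" -ResourceGroupName "Packer-Build" -Location "northeurope"
```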

Step 19. Add secrets to Key vault

Open the new Key vault and click on Secrets and then click on + Generate/Import:

Now add the following eight secrets. This requires the items you have saved from above, plus some of the credentials used within your existing WVD deployment:

 Name of the Azure Storage Account
 Storage account Access Key
 AD account for doing domain joins
 Password for above account
 DevOps SPN App ID
 Secret for above
 WVD Service Principal Account
 Secret for above
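If you prefer to script this step, each secret can be added with Set-AzKeyVaultSecret. A sketch (the vault and secret names are examples; use whatever names your pipeline will reference):

```powershell
# Add one secret to the vault - repeat for each of the eight items
$key = ConvertTo-SecureString "<storage-account-key>" -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName "packer-wvd-kv" -Name "StorageAccountKey" -SecretValue $key
```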

You should now have a list like this:

Step 20. Add an Access Policy.

We need the Service Principal to be able to at least List and Get these secrets in order to pull them back into DevOps.

Click on Access policies:

Then click on + Add Access Policy:

In the Configure from template (optional) drop down you can select a preconfigured set of permissions. The easiest is Key & Secret Management. You can optionally select more granular permissions in the Key permissions drop down.

We will select that template.

In the Select Principal search for and add your Service Principal created earlier and click on Select at the bottom.

Make sure you click on Save at the top.

Step 21. Get those secrets from Key vault and place them in DevOps.

To be able to grab these values from within DevOps we need to add them to a Library inside the Pipeline.

Back in Azure DevOps click on Pipelines and Library:

Then click on + Variable Group.

Give the Variable group a name and enable the "Link Secrets from an Azure key vault as variables" button. This is required to connect to your Key vault to retrieve the secrets. Without this it will only allow you to enter variables manually.

Step 22. In the Azure Subscription drop down, select the subscription you have been using and into which you want DevOps to deploy.

In the blue Authorize button drop down select Advanced options.
We will now create a DevOps Service Connection to the Azure subscription.

Step 23. Click on the "use the full version of the service connection dialog" link.

Enter the Packer Service Principal client ID and the Service principal key or secret that you previously created and stored in Key vault:

Click on Verify Connection, and you should get a green Verified tick:

If this fails, go back to the Subscription blade and into the Access Control blade and check that the Packer Service Principal you created does have at least Contributor access to your subscription.

Click on OK.

Step 24.  In the Key vault name drop down, select your newly created Key vault and click on the blue Authorize button.

This will open the standard AAD authentication dialog. Enter your Azure admin credentials.

Once connected, the red text will disappear (if not, click the refresh button on the right) and it should look like this:

Click on Save at the top.

Now in the Variables section click on + Add. This will list all the Secrets in your Key vault.

Select them all and click on OK.

Your Variables section will now list all your secrets within it.

Click on Save at the top.

Back in the DevOps Library section your Azure Key Vault Variables is now listed.

The next step is to create another variable group, for values that do not need to be stored in Key vault.

Step 25. Create a Variable Group.

In DevOps Go to Pipelines > Library.

At the Top click on + Variable group.

Give your Variable group a name and click on + Add four times to create four new variables.

Now add in four new variables:
Your subscription ID.
Your Azure AD tenant ID.
The name of the resource group where the build pipeline will place the image it builds. Use the same resource group that has your Key vault and storage account.
The path to the Azure Files SMB file share. You should have copied this in step 17. It will look like \\\appinstalls

Click on Save at the top.

The next requirement is to create a Service Connection so that DevOps can authenticate itself seamlessly during the build of the image.

Step 26.  DevOps – Azure Service Connection.

In your DevOps project in the bottom left Click on Project settings > Service connections in the Pipelines Section. Click on New service connection and Select Azure Resource Manager:

Click on Next.

Select Service principal (automatic) and Click on Next.

In the New Azure service connection section, select your Azure subscription (you may be prompted for your Azure credentials again).
Select the Resource Group previously created

Give your connection a name and optionally a description.

Click on Save at the bottom, this will create your DevOps Azure subscription service connection.

So now you have two sets of variables. One stored in your Key vault, the other in the project itself.

This now means that the variables section of the packer-win10_1903.json file in the repo makes sense.

These are the variables we have already defined, and it should now be obvious what they relate to.

Within the builders section are the values for the temporary resource group where the image is compiled, as well as the permanent vNet that needs to exist to place the VM on.

Step 27. Create a Virtual Network for the image template VM to reside upon.
Very importantly, we need to create a vNet on which the VM from which we take the image will reside. As you will likely build many of these images, the vNet needs to be permanent (it has no cost), so we keep it in the resource group with the other items that are also permanent.
The vNet and subnet defined in lines 41 and 42 need to be created now, in the resource group defined on line 44.
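As a sketch, this can be done in PowerShell (names, region and address ranges below are examples; they must match the vNet, subnet and resource group referenced in your Packer template):

```powershell
# Create the permanent vNet and subnet for the Packer build VM
$subnet = New-AzVirtualNetworkSubnetConfig -Name "PackerSubnet" -AddressPrefix "10.0.1.0/24"
New-AzVirtualNetwork -Name "Packer-vNet" -ResourceGroupName "Packer-Build" `
    -Location "northeurope" -AddressPrefix "10.0.0.0/16" -Subnet $subnet
```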

So, as a recap, before starting a build pipeline:

  • We have created a DevOps project, into which we have uploaded the standard WVD ARM deployment and update templates.
  • We have also uploaded two JSON templates for the building of a Windows 10 1903 and 1909 image.
  • We have created an Azure Files share with software located in it for use in that build.
  • We have created a key vault to store sensitive secrets, that we call from the build templates, as well as storing other variables within the project. 

Create a Build Pipeline

So, the next thing to do is to create a Build Pipeline in order to build our first image. From this image we will create a host pool with multiple session hosts based upon this image. This image is your corporate image.

A Build Pipeline is a collection of build-type tasks combined to produce some code, an app, infrastructure - or, in our case, a Windows 10 VM that is converted to an image. The build actually happens on a Hosted Agent, which is a generic VM behind the scenes that runs your pipeline and produces the output.

Step 28. Back in Azure DevOps select Pipelines > Pipelines.

Click on Create Pipeline

Select Use the classic editor at the bottom.

Select Azure Repos Git and click on Continue.

Select start with an empty job.

Step 29. Give your job a short but meaningful name. This name will also be used for the Image name that gets created at the end of this build process.

Select windows-2019 as the Agent Specification.

In the Agent Job 1 section click on +

On the right, type packer into the search field, and then select "Packer Tool Installer".

Step 30. Now repeat these steps and add the following four remaining tasks:

A Build Machine Image task:

A Copy Files Task:

A Variable Save Task:
Finally, a Publish Build Artifacts Task:

You should now have a list of build Tasks in your Pipeline that looks like this:

Now we need to apply some settings for some of these tasks.

Step 31. Click on Agent job 1.
On the right change the Display name to something more appropriate such as Packer Build.

Step 32. Click on the Use Packer task.

Modify the Display name to append 1.3.4, and in the Version field enter 1.3.4.

Step 33. Move to the Build immutable image task.
Change the Packer Template to User Provided:

In the Packer template location browse to the packer-win10_1903.json file from your repo.

Click on OK.

In Template Parameters delete the existing {} and paste in the below.


Now click on the ellipsis on the right to confirm in a more readable format the variables you just pasted in. 

Move down to the Output section and in the Image URL or Name field enter BuildImage.

Step 34. Select the Copy Files to: task.

Change the display name to Copy Files to: $(build.artifactstagingdirectory)

In the Source Folder Browse to the ARM Templates in the Repo.

In the Target Folder paste in $(build.artifactstagingdirectory)

Your Copy Files to: task should now look like this:

Step 35. Select the Save Build Variables task.

Change the Display name to: Save Build Variables BuildImage
Change the Prefixes to: BuildImage.

This task will now look like:

Step 36. Select the Publish Artifact task.

Change the Display name to: Publish Artifact: Build Image and WVD Template.

Change the Artifact name to: Build Image.

It will now look like:

Step 37. We also need to add some further variables to this specific Pipeline.

At the top click on Variables:

Click on the Link variable group:

Click the radio button to the left of the Azure Key Vault Variables.

Click on Link at the bottom.

Repeat this for the My Variables group:

Step 38. Importantly save this Pipeline.

At the top click on the drop down on the Save & Queue button and then click on Save.

Pipeline Recap

So, we have just built a DevOps Build Pipeline. This pipeline:

  • Installs Packer 1.3.4.
  • It then builds an Azure VM using Packer. This build uses four secure variables from Key vault. It creates an output variable of BuildImage.
  • It copies the contents of the ARM Templates folder which are the build and update ARM Templates from our Repo into the build artifact directory. They can then be used in our Release pipelines to build a new host pool or update an existing one.
  • It then converts our variables into artifacts, so that they are preserved and available for use later in the Release Pipeline.
  • It also uses the other variables needed in this pipeline that are not stored in Key vault.

You will more than likely need to edit this pipeline. To do that, go back to the Pipelines section and click on Pipelines.
The view has now changed, and it shows your "Windows 10 1903 build" pipeline. Click on the pipeline, and the resulting screen is where you would run it. At the top right there is an Edit button; clicking on this takes you back to the edit section where we just created the pipeline.

Now you are ready to run this Windows 10 1903 Multi Session Build.

Step 39. Run the Build Pipeline.

If you are editing the Pipeline the easiest way to start the build is to Click on the Queue button at the top right.

On the Run pipeline screen click on Run at the bottom:

Your build will now commence.
Click on Packer Build in the Jobs section to see the progress:

The progress and any errors will be displayed in the Job run:

This will now create a new resource group called PackerBuild-Temp and start deploying some infrastructure into it (VM, public IP, NIC, disk) in order to build an image. The resulting image will be placed back into the PackerBuild resource group, so it is kept permanently.

The PackerBuild-Temp resource group will be deleted at the end of this Pipeline.

This will now leave you with an Image in your PackerBuild Resource Group called:

Windows-10-1903-Build-"Date and time"-"Build No." i.e.

Your Build Pipeline Job will have finished and should take approximately 10 minutes - you will also receive an email to this effect from Azure DevOps Notifications.

You can confirm the artifacts have been published correctly by clicking on the Packer Build section at the top and then, on the right, clicking on the "1 Artifact Produced" link:

That link will take you to the artifacts which will look like this:

Troubleshooting. Things that could go wrong:

  • You have typos in the names of your Azure Key vault variables or DevOps variables.
  • You have errors in the name and or paths for your Azure File Share. Make sure the names and paths are as defined in the JSON template.
  • The Install cmd files or the PS1 files are not correct.

How do you build a Windows 10 1909 build image?

All you have to do is repeat steps 27-38 and replace references to 1903 with 1909, and select the packer-win10_1909.json file in the Packer Template location, in the Build Immutable image task within the Build Pipeline:

This is the exact same process to build a 20nn (20h1) build when it is released.

Build a Release Pipeline

We now need to build a Release Pipeline that will take our newly created image and the artifacts from the build and then build a new host pool, or update an existing one.

My use case here, just as a reminder, is that you are an IT Pro who knows WVD but not ADO. In that case you will likely already have done many host pool deployments, but now you want a simple, quick, repeatable and reliable method to update the session hosts in the host pool.

In this process we will build a new host pool just to show the process and then update using continuous deployment.

Step 40. In ADO go to Pipelines > Releases.

Then click on New Pipeline

Select Empty Job at the top.

Step 41. Give your host pool a name.

Click the Save button at the top and save it into the default folder.

Click on OK.

Step 42. Click on the task link in the Update Stage just created.

Click on Agent Job and then in the Agent Specification select windows-2019.

Click on Save at the top and OK

Step 43. Go to the Options tab at the top

Replace the Release name format with: REL$(rev:r). This will be used in the session host naming, and it is important to replace it because otherwise the VM name will be too long.

Click on Save at the top and OK.

Step 44. Go to the Pipelines tab at the top.

We will now add an artifact.
Click on + Add

The Build Source type is the default; your ADO project will be selected in the Project field.
In the Source (build pipeline) drop down select your Windows 10 1903 Build pipeline.
Default version should be Latest.

In Source alias give it an appropriate name: Latest Windows 10 1903 Build Artifacts.

Click on Add.

Step 45. Go back and edit the task again. This is similar to the Build Pipeline creation process.

Add a Variable Load task. Search for Load variable.

Select Variable Load Task and click on Add.

Step 46. Add an Azure Resource Group Deployment task.

Click on Add.

Step 47. The Load Build Variables doesn't need any changes. This will copy the variables from the build into the release.

In the Azure Resource Group Deployment task, we need to make some updates.

Come back to the Azure Resource Group Deployment task and update the following sections:

  • The Azure Subscription.
  • The Action - the default of Create or update resource group is fine.
  • The Resource Group you want the Host Pool resources to be created within.
  • The desired Location.

A little further in the Template section, the Template Location is "Linked artifact".

Then browse within the Template field and select your mainTemplate.json file:

In the Override template parameters paste in the following text:

-_artifactsLocation "" -_artifactsLocationSasToken "" -rdshImageSource CustomImage -vmImageVhdUri "" -rdshGalleryImageSKU "Windows-10-Enterprise-multi-session-with-Office-365-ProPlus" -rdshCustomImageSourceName $(BuildImage) -rdshCustomImageSourceResourceGroup $(wvd_goldimage_rg) -rdshNamePrefix VM-WVD-$(Release.ReleaseName) -rdshNumberOfInstances 1 -rdshVMDiskType Premium_LRS -rdshVmSize Standard_D2s_v3 -enableAcceleratedNetworking false -rdshUseManagedDisks true -storageAccountResourceGroupName "" -domainToJoin LOCALAD -existingDomainUPN $(DomainJoinAccountUpn) -existingDomainPassword $(DomainJoinAccountPassword) -ouPath OU=WVD,DC=LOCALAD,DC=COM -existingVnetName YOURVNET -newOrExistingVnet existing -existingSubnetName YOURSUBNET -virtualNetworkResourceGroupName YOURRG -rdBrokerURL -existingTenantGroupName "Default Tenant Group" -existingTenantName YOURTENANT -hostPoolName YOURHOSTPOOL -serviceMetadataLocation United-States -enablePersistentDesktop false -defaultDesktopUsers AUSERACCOUNT -tenantAdminUpnOrApplicationId $(WVDServicePrincipalAppID) -tenantAdminPassword $(WVDServicePrincipalSecret) -isServicePrincipal false -aadTenantId $(az_tenant_id) -location "North Europe"

Then, at the right of this field, click on the ellipsis button; this will read in those variables.
This shows the variables in an easier-to-read format and flags any errors if there are some.
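The override string is just a flat list of ARM template parameter flags and values, so a single flag missing its value will throw the whole line off when the task parses it. If you want to sanity-check an edited string before pasting it in, a small local helper along these lines (a throwaway sketch, not part of the pipeline, and it assumes no parameter values themselves begin with a dash) splits it the same way the grid does:

```python
import shlex

def parse_overrides(override_string):
    """Split an ARM override string into {parameter: value} pairs.

    Every -flag should be followed by exactly one value, which may be
    an empty quoted string such as -_artifactsLocation "".
    Assumes no values start with '-'.
    """
    tokens = shlex.split(override_string)
    params = {}
    i = 0
    while i < len(tokens):
        flag = tokens[i]
        if not flag.startswith("-"):
            raise ValueError(f"Expected a -parameter flag, got: {flag!r}")
        if i + 1 >= len(tokens) or tokens[i + 1].startswith("-"):
            raise ValueError(f"Parameter {flag} is missing a value")
        params[flag.lstrip("-")] = tokens[i + 1]
        i += 2
    return params

# Abbreviated example using a few of the parameters from above
sample = '-rdshImageSource CustomImage -rdshNumberOfInstances 1 -domainToJoin LOCALAD'
print(parse_overrides(sample))
```

If any flag in your pasted string has lost its value, the helper raises an error naming the offending parameter, which is the same class of problem the grid view surfaces.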

You will notice some items are in BOLD. These are values you need to change to reflect your environment. Updating them is slightly easier in this UI.

The values that will need updating are:

  • domainToJoin - your FQDN domain name.
  • existingVnetName - the vNet the WVD session hosts need to reside on, i.e. one that has access to your AD DC.
  • existingSubnetName - the subnet in the above vNet.
  • virtualNetworkResourceGroupName - the resource group the above vNet is in.
  • existingTenantName - your WVD tenant's name.
  • hostPoolName - your WVD host pool name.
  • defaultDesktopUsers - a user account to present this published desktop to.
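All of those per-environment values end up inside one long line, which is easy to get wrong by hand. One option (purely a local convenience sketch; the placeholder values are the same ones used above, not anything ADO requires) is to keep them in a dictionary and render the override string from it:

```python
def build_overrides(params):
    """Render {parameter: value} pairs as an ARM override string.

    Empty values and values containing spaces are double-quoted so the
    deployment task parses each one as a single token.
    """
    parts = []
    for name, value in params.items():
        value = str(value)
        if value == "" or " " in value:
            value = f'"{value}"'
        parts.append(f"-{name} {value}")
    return " ".join(parts)

# The environment-specific values called out in the list above
environment = {
    "domainToJoin": "LOCALAD",                     # your FQDN domain name
    "existingVnetName": "YOURVNET",                # vNet with sight of your AD DC
    "existingSubnetName": "YOURSUBNET",
    "virtualNetworkResourceGroupName": "YOURRG",
    "existingTenantName": "YOURTENANT",
    "hostPoolName": "YOURHOSTPOOL",
    "defaultDesktopUsers": "AUSERACCOUNT",
    "existingTenantGroupName": "Default Tenant Group",
}
print(build_overrides(environment))
```

You would then paste the generated string straight into the Override template parameters field, which keeps the environment-specific values in one easy-to-review place.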

In Deployment Mode we will select Incremental, which deploys the new infrastructure and leaves untouched any existing resources that are not in the template. Note you can also validate the template without deploying it by selecting Validate mode.
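As a rough mental model of the deployment modes (a toy sketch in terms of resource-name sets, not Azure code): Incremental creates or updates what the template describes and leaves everything else in the resource group alone, whereas Complete mode, which this guide does not use, would also remove anything absent from the template.

```python
# Resources already in the resource group vs. resources the template defines
existing = {"vnet", "old-session-host"}
template = {"new-session-host", "availability-set", "vnet"}

# Incremental: template resources are created/updated, the rest is left alone
after_incremental = existing | template

# Complete (not used in this guide): anything absent from the template is removed
after_complete = set(template)

print(sorted(after_incremental))
print(sorted(after_complete))
```

This is why Incremental is the safe choice here: the old session hosts survive the deployment until an update stage explicitly deals with them.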

Click on Save at the top and OK.

Step 48. Remove the Pre-deployment condition trigger. Currently this stage will automatically start when a new build completes and a new artifact is created; we will change that to a manual trigger.

Go back to the Pipeline at the top.

Click on the Pre-deployment conditions button.

Then Select the Manual only Trigger.

Click on Save at the top and OK.

Step 49. Add the Variables to the Release.

Click on Variables at the top and select Variable Groups.

Click on Link variable.

Select Azure Key Vault Variables and then Link at the bottom.

Repeat this for the My Variables group, and click on Link at the bottom.

Click on Save at the top and OK.

Step 50. Build the new Host Pool using this Pipeline.

Go back to the Pipeline tab at the top and then select Create release on the right-hand side.

The Deploy Host Pool stage is now manual, so it won't be triggered from here; just click on Create at the bottom and we will start it manually.

A release has now been created but nothing is yet running.

Click on the “Release-no." link.

Now in the Stages section hover over Deploy Host Pool and you will see a Deploy button beneath it.

Click on the Deploy button and then Deploy.

The stage changes to Queued and then to In Progress.

Clicking on In Progress takes you to the processing of the release.

The ARM Template is now running and deploying a host pool as per normal but with the details we have specified in the variables and text sections earlier.
When this completes you will have a new Azure Resource group with all the resources required for this new WVD Host pool.

Plus, it will also have created the new host pool and presented the Default Desktop group to the user you specified in the defaultDesktopUsers parameter in step 47.

Step 51. Create an Update host pool task.

Now we will create the Update a Host Pool Task and then enable Continuous Deployment for this Release, such that whenever we build a new image via the Build Pipeline it will automatically start the release Pipeline to update the existing host pool.

We will use the new host pool just created to update.

Go back to your Release Pipelines and click on Edit.

In Stages click once on the Deploy host pool task so it is selected.

Click on the +Add drop down button and select Clone.

This creates a stage that runs after the Deploy a new host pool stage, which is not what we want.

Click on the Pre-Deployment conditions button:

Change the Trigger to After Release:

This option kicks off this new stage as soon as a release is created; note the release itself still does not have continuous deployment enabled yet.

Your Pipeline should now look like this:

Step 52. We need to modify this task to do an update.

Click on the link in the task.

Change the name of this task to "Update Host Pool".

Click on Save at the top and OK.

The Load Build Variables task is OK as is.

Click on the Azure Deployment task.

Modify the Display name to remove "Create or".

The other settings are the same as in the Deploy task; however, we do need to change the template we are using from mainTemplate.json to the update template.

In the Template field click on the ellipsis on the right and browse to the UpdateTemplate.json file.

Click on OK

In the Override template parameters paste in the following:

-_artifactsLocation "" -_artifactsLocationSasToken "" -rdshImageSource CustomImage -vmImageVhdUri "" -rdshGalleryImageSKU "Windows-10-Enterprise-multi-session-with-Office-365-ProPlus" -rdshCustomImageSourceName $(BuildImage) -rdshCustomImageSourceResourceGroup $(wvd_goldimage_rg) -rdshNamePrefix VM-WVD-$(Release.ReleaseName) -rdshNumberOfInstances 1 -rdshVMDiskType Premium_LRS -rdshVmSize Standard_D2s_v3 -enableAcceleratedNetworking false -rdshUseManagedDisks true -storageAccountResourceGroupName "" -domainToJoin LOCALAD -existingDomainUPN $(DomainJoinAccountUpn) -existingDomainPassword $(DomainJoinAccountPassword) -ouPath OU=WVD,DC=LOCALAD,DC=com -existingVnetName YOURVNET -newOrExistingVnet existing -existingSubnetName YOURSUBNET -virtualNetworkResourceGroupName YOURRG -rdBrokerURL "" -existingTenantGroupName "Default Tenant Group" -existingTenantName YOURTENANT -existingHostpoolName YOURHOSTPOOL -serviceMetadataLocation United-States -enablePersistentDesktop false -tenantAdminUpnOrApplicationId $(WVDServicePrincipalAppID) -tenantAdminPassword $(WVDServicePrincipalSecret) -isServicePrincipal true -aadTenantId $(az_tenant_id) -actionOnPreviousVirtualMachines "Delete" -userLogoffDelayInMinutes 1 -userNotificationMessage "Scheduled maintenance, please save your work and logoff as soon as possible" -location "North Europe"

Again, click on the ellipsis on the right-hand side; this will read your variables in, and you will notice some items are in bold. These are values you need to change to reflect your environment, and updating them is slightly easier in this UI. Most values are the same as before, but because we are now running the update template rather than the deployment template there are a few differences.
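If you want to see precisely which flags differ between the deploy and update override strings, a quick set comparison of the flag names makes it obvious. The sketch below uses deliberately abbreviated versions of the two strings; you could paste in the full ones from above:

```python
import re

# Abbreviated versions of the deploy and update override strings
deploy = ('-hostPoolName YOURHOSTPOOL -defaultDesktopUsers AUSERACCOUNT '
          '-rdshVmSize Standard_D2s_v3')
update = ('-existingHostpoolName YOURHOSTPOOL -rdshVmSize Standard_D2s_v3 '
          '-actionOnPreviousVirtualMachines "Delete" -userLogoffDelayInMinutes 1')

def flags(override_string):
    # A flag is a '-name' token at the start of the string or after whitespace
    return set(re.findall(r'(?<!\S)-([A-Za-z_]\w*)', override_string))

print("Only in deploy:", sorted(flags(deploy) - flags(update)))
print("Only in update:", sorted(flags(update) - flags(deploy)))
```

Run against the full strings, this highlights the update-only parameters such as existingHostpoolName, actionOnPreviousVirtualMachines, userLogoffDelayInMinutes and userNotificationMessage, and the deploy-only ones such as hostPoolName and defaultDesktopUsers.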

Replace the bold items with the values for your environment.

Click on Save at the top and OK.

Step 53. Now we will deploy this Release Pipeline to update the VM's in our existing host pool.

Click on Create Release at the top.

The Update host task is now highlighted to show that this task is an automatic one and as soon as we click on Create the Release Pipeline will start. This is different to the Deploy task which is manual.

Click on Create at the bottom.
The release task has now started.

Click on the "ReleaseNo." link

This will show you the progress of this task:

You can also see the progress by going back to the Azure portal and going to the Resource Group and then clicking on Deployments. You will see this deployment in progress.

Clicking on the deployment will show you more details; this is the standard ARM template output that you are no doubt used to if you have done WVD deployments before.

The final step is to enable the Continuous Deployment Trigger that will mean that when you build a new image, it will automatically update your host pool.

Go back to your Release Pipeline

On the Artifacts section click on the Continuous Deployment Trigger button and then switch on the Enabled button:

Click on Save at the top and OK.

The next time you update your Image by running the Build Pipeline ADO will automatically update your host pool with this new image.

You’re done, well done for sticking with it.

This shows the power that Azure DevOps can bring to a WVD deployment. It saves you deployment time, errors and troubleshooting and gives you a reliable and repeatable method for creating an Image and for updating all of your existing host pools as well as creating new ones.

This guide only scratches the surface of the capability but hopefully is enough to learn the basics and to use it to a fuller extent, as have I.
There are plenty of other features and capabilities that could be used within this process such as automated testing, but that might get added to a "Part 2" article.



  1. Hello Tom,

    Excellent article!!! I'm afraid I got stuck though on Step 29. I couldn't find the Packer Tool Installer in my instance of Azure Devops. I see you are running the Enterprise version, which I'm unable to get access to and I'm wondering if that's where the difference lies.

    1. I think you need to install the Packer tool from the Marketplace first; then you can select the Packer tool installer.

  2. Hi Tom, The explanation is very clear, thank you very much for that. Can you also please check and try to link the images in this article again? They are not getting populated, and that would also help us understand properly. Thank you so much.

  3. Hi Tom,

    Can you please let me know the purpose of public IP, I have disabled the Public IP and observed that winRM is failing

  4. Hi Tom, I get this error during the build immutable image stage - "Not waiting for Resource Group delete as requested by user. Resource Group Name is PackerBuild-Temp". It installs Notepad++ and FSLogix and then it fails. It doesn't produce any artifacts.

  5. Hi Tom, excellent step by step.
    I'm having a problem with the build failing with the following azure-arm: ERROR: -> ResourceNotFound : The Resource 'Microsoft.Compute/images/Windows-10-1903-Build-2020-12-06-2323-Build7' under resource group 'Packer-Build' was not found. For more details please go to

    It is looking for an image that has not been created.
    What am I missing?

  6. Thank you for the great post.
    Prancer is a pre-deployment and post-deployment multi-cloud validation framework for your Infrastructure as Code (IaC) pipeline and continuous compliance in the cloud.

  7. Hi Tom, Thanks so much for sharing this excellent step-by-step guide.

    It is successfully mapping a drive but having an issue with executing the script from C:\Windows\Temp. I think it is getting confused with the CD $Version in my script.

    vsphere-iso: DESKTOP-MCHCSCF restarted.
    ==> vsphere-iso: Machine successfully restarted, moving on
    ==> vsphere-iso: Provisioning with Powershell...
    ==> vsphere-iso: Provisioning with powershell script: C:\Users\vi-admin\AppData\Local\Temp\powershell-provisioner673298167
    vsphere-iso: Defender RealTime scanning temporarily disabled
    ==> vsphere-iso: Provisioning with Powershell...
    ==> vsphere-iso: Provisioning with powershell script: C:\Users\vi-admin\AppData\Local\Temp\powershell-provisioner696842375
    vsphere-iso: Preparing to establish connection to File Share
    vsphere-iso: \\\MDTProduction$\Applications -Username vanMDT01\Administrator -Password P@ssw0rd
    vsphere-iso: 'S:' drive has been successfully mapped, preparing for application installation
    vsphere-iso: Defender RealTime scanning temporarily disabled
    vsphere-iso: VERBOSE: Setting Arguments
    ==> vsphere-iso: CD : Cannot find path 'C:\Users\Administrator\2100120145' because it does not exist.
    ==> vsphere-iso: At S:\Adobe\DC\install.ps1:18 char:1
    ==> vsphere-iso: + CD $Version
    ==> vsphere-iso: + ~~~~~~~~~~~
    ==> vsphere-iso: + CategoryInfo : ObjectNotFound: (C:\Users\Administrator\2100120145:String) [Set-Location], ItemNotFoundE
    ==> vsphere-iso: xception
    ==> vsphere-iso: + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand
    ==> vsphere-iso:
    ==> vsphere-iso: Provisioning step had errors: Running the cleanup provisioner, if present...
    ==> vsphere-iso: Power off VM...
    ==> vsphere-iso: Deleting Floppy image ...
    ==> vsphere-iso: Destroying VM...
    Build 'vsphere-iso' errored after 34 minutes 34 seconds: Script exited with non-zero exit status: 1.Allowed exit codes are: [0]

    ==> Wait completed after 34 minutes 34 seconds

    ==> Some builds didn't complete successfully and had errors:
    --> vsphere-iso: Script exited with non-zero exit status: 1.Allowed exit codes are: [0]

    ==> Builds finished but no artifacts were created.

  9. Hi Tom, thank you so much for sharing this excellent post.

    I seem to be having trouble with the app installs.

    Tried using my existing Install.ps1 and the Install.cmd method you described with separate folders for each application but it keeps failing.

    Could you please share what your install.ps1 script looks like?


  11. Yep, all the files mentioned in this article are stored in my GitHub here:

    1. Thanks! What I was trying to do is use my existing network file share and powershell/batch files for installing the apps. I managed to get it working for the simple MSI apps. But ran into an issue with managing the root drive particularly with installing Office 365 onto the image using packer? I am using the downloaded binaries and having issues with managing the path of the install.xml config file. Do you have a working method?

  12. Hi Tom, thank you so much for sharing. This is amazing work. I have a doubt about the secrets information in the Key Vault.

    I can't identify all the information. The manual only mentions one SPN, but the table mentions two different secrets.

    DevOps SPN App ID
    Secret for above
    WVD Service Principal Account
    Secret for above

    Thanks !

