Azure

How to deploy Linux in WVD when you can't deploy Linux in WVD

In my role as a Windows Virtual Desktop Global Black Belt at Microsoft, working with customers deploying WVD, I often get asked: can we deploy Linux in WVD?

The answer is no, but yes.

The official answer is we don't support Linux. In fact, this is the official list of operating systems that we support:




















So how can you run Linux in WVD?
Well this is a feature of Azure and Windows 10, not specifically WVD.

There are two options:

  1. Nested Virtualisation - i.e. running Hyper-V in an Azure VM
  2. Running Windows Subsystem for Linux on Windows 10. 
Let's have a look at the first option.

Nested Virtualisation

This assumes you know a bit about Windows Virtual Desktop.
You can do this on either Windows Server or Windows 10.

Windows Server

Deploy a standard WVD host pool using Windows Server, let's say Server 2016. Once you have logged into this VM, run the following commands in an elevated PowerShell session:

Install the Hyper-V role:
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

Create a new network switch:
New-VMSwitch -Name "InternalNAT" -SwitchType Internal

Get the Interface Index number - take a note of this number to use in the next step:
Get-NetAdapter

Set an IP address and create the NAT network:
New-NetIPAddress -IPAddress 192.168.0.1 -PrefixLength 24 -InterfaceIndex 13   # replace 13 with the InterfaceIndex you noted above
New-NetNat -Name "InternalNat" -InternalIPInterfaceAddressPrefix 192.168.0.0/24

Once this is complete you can then open up Hyper-V manager and create your "nested" VM, and this is where you create your Linux VM. This can be any distro.
Just go and download the ISO file from the relevant website and create a new VM as per normal. 
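If you would rather script the nested VM than use Hyper-V Manager, here is a minimal, hedged sketch using the standard Hyper-V cmdlets - the VM name, sizes and ISO path are placeholders to adjust for your own download:

# Create a Generation 2 VM attached to the InternalNAT switch created above
New-VM -Name "Ubuntu-Nested" -Generation 2 -MemoryStartupBytes 4GB -NewVHDPath "C:\VMs\Ubuntu-Nested.vhdx" -NewVHDSizeBytes 64GB -SwitchName "InternalNAT"
Set-VMProcessor -VMName "Ubuntu-Nested" -Count 2
# Attach the distro ISO you downloaded (placeholder path)
Add-VMDvdDrive -VMName "Ubuntu-Nested" -Path "C:\ISOs\ubuntu-desktop.iso"
# Generation 2 Linux guests typically need Secure Boot off (or the Microsoft UEFI CA template)
Set-VMFirmware -VMName "Ubuntu-Nested" -EnableSecureBoot Off
Start-VM -Name "Ubuntu-Nested"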

In my example I have a WVD session host aptly called "WVD-HyperV", and when I connect to the desktop I have Hyper-V running with an Ubuntu distro of Linux as my nested VM:


























The beauty here is that you have two VMs for the price of one. We only charge per second that the first VM is powered on. This also applies regardless of how many nested VMs you run.

You can also install Hyper-V on Windows 10 by just enabling the Hyper-V feature (a one-line command for this is shown a little further down), and then installing a nested VM in a similar manner to the above. The benefit of Windows 10 and Hyper-V is that you get the "Hyper-V Quick Create" feature:

















Which, as the name suggests, will create one of four templated VMs for you automatically:
























Once this Quick Create has completed you will again have an Ubuntu 19.10 nested VM ready to go.
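As an aside, if the Hyper-V feature is not yet enabled on your Windows 10 session host, a minimal way to turn it on from an elevated PowerShell session (a restart will be required) is:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All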

However, if you are doing this on Windows 10, you could also use the Windows Subsystem for Linux. This won't give you a full VM, but it does give you the Linux command-line interface.

Windows Subsystem for Linux
The second option is to install the Windows Subsystem for Linux on a Windows 10 session host.
This is even simpler.
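One note before you start: the Windows Subsystem for Linux optional feature needs to be enabled on the session host before a Store distro will run. A minimal, hedged way to do that from an elevated PowerShell session (a restart is required) is:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux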

Go to the Microsoft Store and search for Linux:


Click on "Get the apps" or click on one of the items below:


Then select your Linux Distribution - in my case Ubuntu 18.04 LTS:




Then click on Get or Install to install your choice:

Once installed you can run your distro from the published desktop, where you will likely need to complete some final installation configuration steps. Once complete you can run your distro (Ubuntu in my case) in the Windows Subsystem for Linux:

























Once those final configuration tasks are complete you could also publish this as a RemoteApp.
Trying to launch the app as a RemoteApp before completing the configuration steps fails.

Obviously you will need to find the executable for the distro that has been installed, as well as potentially creating your own icon if one has not been provided.
This is a command to publish Ubuntu via PowerShell:

New-RdsRemoteApp -tenantname "YourTenant" -hostpoolname "YourHostPool" -AppGroupName "YourRemoteAppGroup" -Name Ubuntu -FilePath "C:\Program Files\WindowsApps\CanonicalGroupLimited.Ubuntu18.04onWindows_2020.1804.7.0_x64__79rhkp1fndgsc\ubuntu1804.exe" -IconPath "C:\Program Files\WindowsApps\CanonicalGroupLimited.Ubuntu18.04onWindows_2020.1804.7.0_x64__79rhkp1fndgsc\Assets\Square44x44Logo.altform-unplated_targetsize-256.png"
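If you need to find that install path yourself, one hedged way to locate where the Store package landed (the package name shown is the Ubuntu 18.04 one from my example and will differ for other distros) is:

Get-AppxPackage -Name "CanonicalGroupLimited.Ubuntu18.04onWindows" | Select-Object -ExpandProperty InstallLocation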










How to deploy a Windows Virtual Desktop host pool using Infrastructure as Code from Azure DevOps


If like me you have come from an infrastructure background and always built servers, virtual machines etc. manually, then the thought of doing all that hard work via code does not always come naturally.

Building out a gold image was a time intensive process with lots of manual steps plus deploying that at scale needed tools like Citrix MCS, or tools within the Hypervisor or others.

But if you have done anything in the public cloud you will have heard of "Infrastructure as Code", and if you have done anything in Azure then you will likely have heard of Azure DevOps, which gives you continuous integration, testing and delivery: the ability to deploy code, or whole applications, at the press of a button.

But how can this apply to a Virtual Desktop capability when you are deploying hundreds or thousands of Azure IaaS VMs as part of a Windows Virtual Desktop deployment? I don't come from a development background, but working at Microsoft as a Windows Virtual Desktop Global Black Belt, and constantly hearing about Azure DevOps, I thought it was time to see how one could use ADO to automate the steps that have always been somewhat manual but were nonetheless always required when doing large virtual desktop deployments.

There must be a way to bring the power of DevOps to the Virtual Desktop capability, I thought. It turns out there is, and in fact there are multiple ways. Hence this article is designed for traditional Infrastructure people to learn the basics (genuinely the basics here - there is a lot more to learn) of deploying that infrastructure via code, repeatably.

This is not an exclusive work but rather a community effort with lots of information I have compiled from various sources, in particular, thanks to Jack Rudin whose article on this is really the main basis for this.

This article makes the massive assumption that you are an EUC expert with an Infrastructure background and you already know WVD and the ways that you can deploy a host pool, and the plumbing required for this, including having a WVD Service Principal. However, it assumes that you don't know much about DevOps.


If you already know Azure DevOps you won't learn anything here! However, if you are a traditional IT Pro doing EUC then hopefully you will, and you will be able to deploy or update your WVD host pools at the press of a button, saving you massive amounts of time and errors.

By the end of this article you will have:
1. Two golden images: Windows 10 Multi-Session 1903 and 1909, plus the knowledge of how to easily create a 20nn image when it becomes available. This will use an ADO Build Pipeline.
2. An automated method to update an existing host pool, which is the main use case here, as we are talking about making the ongoing life of a WVD admin as easy as possible; but also how to build a new Host Pool based upon one of your images. This will use an ADO Release Pipeline. We will then enable it so that when you build a new image, your existing WVD host pool is automatically updated with it. This takes care of our need to regularly update the session host VMs in a host pool with an updated image, in order to take advantage of new applications, patches or anything else we have added to the image.


Within Azure DevOps we will create:


  • A code repository to store the build templates as well as the WVD Deploy and Update ARM Templates. 
  • Then use a Build Pipeline. The Pipeline uses that repository, as well as an Azure Files SMB file share where we store the application install media for the apps in our golden image. It also uses Key vault, where we store the passwords for some sensitive services. This Build Pipeline builds an Azure Managed Image with those apps installed, using HashiCorp Packer to do the build.
  • The second step uses a Release Pipeline to take that image and use the existing WVD ARM Templates to either build a new Host Pool or update an existing one. This will be configured so that when we kick off a new build, ADO will not only complete the build but automatically deploy or update the host pool with no further interaction.


    This all looks like this:



    The pre-reqs for this are an Azure Subscription, a WVD tenant and an Azure DevOps account.
    Throughout this guide you will also need several files already created. These are available in my GitHub Repo here. Download all these files to a local location.

    This article has two main sections:

    Section 1 – Creating the Azure DevOps Repo where we store all of the JSON templates. It also goes through the process of creating the required Azure Infrastructure that we use in the build.

    Section 2 – Creating the Azure DevOps Build and Release Pipelines that build an image and create or update WVD host pools with the new image.

    Section 1

    To start you need an Azure DevOps account. Go to this site to create your ADO account. This will take you through the process to get your own ADO account set up so that you can start building things.


    So, if you have your ADO account let’s start.

    The first thing to do is create a New Project.


    Step 1. In the ADO Homepage click on + New Project:






    Step 2. Give your project a meaningful name. Then depending on your ADO setup choose the appropriate visibility. Click on Advanced and then select "Git" in Version Control and "Basic" in Work Item process:



















    Click on Create and you will have your new ADO project ready to go:










    Step 3.  Create a Repo.
    On the left Click on Repos.








    Leave the defaults and click on Initialize at the bottom.










    This has created an empty Repo and initialized it ready to receive some content.

    Step 4. Create some folders to store your files and then upload them

    On the top right click on the vertical ellipsis and select + New > Folder:











    Name this folder ARM Templates and enter README.md as the New File name:













    In the top right click on the Commit button.









    In the Commit window just click on the Commit button at the bottom.

















    Step 5. Click on WVD-Auto-HostPool (or your Project name) at the top of the Repo screen and repeat these steps and create a folder named - Packer Build - Win 10 1903 EVD

    Step 6. Repeat the above and create a folder named - Packer Build - Win 10 1909 EVD


    You should now have a folder structure that looks like this:
































    Step 7. Now we will upload the two "deployment" ARM templates for WVD host pool deployments, which we will use right at the end in the "Release Pipeline" that builds or updates a Host Pool.
    Click on the ARM Templates folder and then click on the ellipsis icon at the top right and click on Upload Files.




















    Then browse to the mainTemplate.json file you downloaded earlier, then click on Commit in the bottom right.




















    Delete the README.md file as that is no longer required. Select the file then click on the ellipsis and Delete.








    Then click on Commit in the bottom right.

    Step 8. Repeat this step and upload the UpdateTemplate.json file.

    This should leave you with a folder structure like this.




















    Step 9. Now we will upload the two Packer Templates that build the Windows 10 1903 EVD image and the Windows 10 1909 EVD image.

    Click on the "Packer Build - Win 10 1903 EVD" folder and click the ellipses in the top right and upload and commit the packer-win10_1903.json file you downloaded previously.

    Also delete the README.md file in this folder. You can also delete a file by clicking the ellipsis to the right of the file and selecting Delete - then clicking on Commit.


















    Now let’s have a look at this template file. Click on the packer-win10_1903.json file.


    The variables section has several variables that are set either in Key vault or in the Pipeline which we will set a little later.


    The builders section is where the action happens and builds the Azure Managed Image.




    It is in here that we specify which build of Windows 10 we want. At line 32 we call 19h1-evd:








    This is the 1903 build. If you change this to 19h2-evd it will build 1909.
    In the future this is where you would enter 20h1-evd to build the 20nn release.
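    If you want to double-check which EVD SKUs are currently published before editing the template, a quick, hedged check with the Az PowerShell module is shown below. The publisher and offer names are the ones typically used for the Windows 10 multi-session images; verify them for your region:

    Get-AzVMImageSku -Location "North Europe" -PublisherName "MicrosoftWindowsDesktop" -Offer "Windows-10" | Select-Object Skus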

    Also note that lines 41-44 reference a Virtual Network. This vNet is the one that the VM used to create the Image is placed on. We will need to create that later.


    The provisioners section does the application installs. In the file I have uploaded, I have included two methods for installing applications; you should select only the one that is most appropriate for you. I have added both methods to the one file just to show the choices you have (at least two choices, as there are many more). You will need to select one and delete the other in a moment.




    The application installs are done in this section from lines 79 - 86:


    The first method has two examples in lines 79-83.




    These two examples install two apps (Notepad++ and FSLogix).

    This depends on having an Azure Files SMB share available (which we will create a little later) where we have placed all the app installers. It also needs the "I" drive mapped, which we do in lines 71-74.

    We then create a cmd file for each app to silently install it.
    For reference, the FSLogix.cmd just has this command within it:
    i:\fslogix\x64\Release\FSLogixAppsSetup.exe /install /quiet /norestart

    This approach allows the flexibility of including or removing apps as required.
    If you are doing app installs this way remember to remove the "," at the end of line 83 and then delete lines 85 & 86.


    However, lines 85 & 86 show the second way to deploy apps, which is to create a PowerShell script that has all the apps you want to install within the one file:


    This calls another cmd file which itself calls a PS1 file with all the app installs done within the one file.
    The cmd file contains just this line:
    powershell.exe -executionpolicy bypass -windowstyle hidden -noninteractive -nologo -file "I:\GoldImage\Installapps.ps1"

    The Installapps.ps1 file has all the PowerShell commands to download and install all the apps you require within your golden image. This method downloads the app installers directly from the internet and then installs them locally, which saves the need to download the installers and place them on the Azure Files share.

    The file looks like this:














    If you would rather deploy apps this way, then delete lines 79-83.
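    In case the screenshot is hard to read, here is a minimal, hedged sketch of the pattern such a script follows. The download URL and silent-install switch are placeholders to replace for each of your own apps:

    # Download an installer to a temporary location, then run it silently
    $installer = "$env:TEMP\npp-installer.exe"
    Invoke-WebRequest -Uri "https://example.com/downloads/npp-installer.exe" -OutFile $installer    # placeholder URL
    Start-Process -FilePath $installer -ArgumentList "/S" -Wait    # /S suits NSIS installers such as Notepad++; check each app's own silent switch
    Remove-Item $installer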

    Step 10. Click on the Packer Build - Win 10 1909 EVD folder and upload the packer-win10_1909.json file as above.

    Once uploaded note that line 32 has 19h2-evd. Plus, also note the app installation section between lines 76-83. Delete the sections that you do not want.

    Delete the README.md file

    You will now be left with a Repo structure that looks like this:


    Our Repo is now complete, and we can move on to the Build Pipeline which will use Hashicorp's Packer to do the actual build.

    Section 2

     Step 11. First thing we need is a Service Principal for Packer.

    Packer, like every other service that integrates with your Azure subscription, needs a secure mechanism for doing so. WVD itself is another example of this type of access. Packer creates some infrastructure components in your Azure subscription, as does WVD, and thus needs this SPN.

    There are two ways to create a Service Principal:
    1. Via the Azure portal.
    2. PowerShell.

    Let’s start by doing this via the Azure Portal.


    In the Azure Portal go to your Azure Active Directory:








    Select App Registrations.










    Click on + New Registration.






    Step 12. Give it a sensible name, select "Accounts in this organizational directory only ...." and in the Redirect URI enter a fictitious URL as this is actually not required in this context.

























    Click on Register at the bottom.

    Step 13. Assign a Role to this SPN.


    Go to your "Subscription" blade.






    Select your subscription, and then Access Control (IAM).







    On the right select Add in the Add a role assignment section.











    Select the appropriate Role (I have selected Contributor), then in Assign access to choose Azure AD user, group or service principal. In the Select field type the first few characters of your SPN and then select your SPN.

    Click on Save at the bottom.

    Step 14. Get the sign in values for this SPN.

    Go back to the SPN in the App Registrations section of AAD

    On the overview tab copy the Application (client) ID and store it somewhere. I suggest you store this temporarily in Notepad and keep Notepad open, as there will be a number of items we will need access to later on. We will need this later in step 18 when we add it to Key vault for secure storage.







    Click on Certificates & secrets.






    Click on + New client secret.










    Enter a description and select an expiration.



















    Copy the Value and temporarily store it in Notepad. You will need to copy this now as it won't be shown again; if you don't copy it, you will need to create a new one. We need this and the App ID later, and will securely store them in Azure Key vault in step 18.











    The other way, which is quicker, is to use PowerShell.

    Step 15. Run these PowerShell commands to create the SPN:

    $aadContext = Connect-AzureAD
    $svcPrincipal = New-AzureADApplication -AvailableToOtherTenants $true -DisplayName "Packer-ServicePrincipal"
    $svcPrincipalCreds = New-AzureADApplicationPasswordCredential -ObjectId $svcPrincipal.ObjectId

    #Get Password (secret)
    $svcPrincipalCreds.Value

    #Get App id:
    $svcPrincipal.AppId

    Repeat the steps above to give this SPN at least Contributor access to your subscription.
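    If you would rather stay in PowerShell for that role assignment too, a hedged sketch is below. It assumes the Az module is installed alongside AzureAD, and that you substitute your own subscription ID:

    # Create the service principal object for the application, then grant it Contributor on the subscription
    $sp = New-AzureADServicePrincipal -AppId $svcPrincipal.AppId
    New-AzRoleAssignment -ObjectId $sp.ObjectId -RoleDefinitionName "Contributor" -Scope "/subscriptions/<your-subscription-id>"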

    Step 16. Create a share for storing the application installers.

    We will use Azure Files to create a simple SMB file share. This share is what we map to in the Packer build and is defined in the Packer Build templates that we uploaded to the Repo and edited earlier.

    In the Azure Portal create a new Resource Group. Call it Packer-Build. This resource group will remain and will end up containing a few services such as the Storage account and Key vault. It will also be where we permanently store the resulting Images.

    Create a new storage account.

    In your Resource Group Click on + Add and Type Storage account.









    Click on Create.














    Give it a name and make sure you select the same Azure Region as you would like to deploy your new session hosts.



    Either click on Review + Create or go through the other sections.

    Create the Storage account.

    Step 17. Create the Azure File Share.

    Now we just need to create the Azure File share in the storage account. This is where we will place the file for the app installs.

    Go to your newly created Storage Account and on the Overview tab click on File Shares.





    Click on + File Share:







    Give the File share a name and a quota:













    Click on Create at the bottom.
    Go into the File share and click on Upload.







    Now upload the application install files and the install cmd files for each.
    Also note that if you are going to deploy apps via the PowerShell file you don't need to upload the installers. All you need to do is create a folder such as "GoldImage" and place the "InstallApps.cmd" and "InstallApps.ps1" files in it.


    Either way (or both) you should end up with something that looks like:




















    Also, we need to be able to map a drive to this share a little later.

    With the File Share selected, click on Connect at the top:






    Select Windows and copy out the whole text:




















    Paste it into Notepad or something similar and then copy out just the path to the share and store that, as we will need it later. This is in the line that starts with New-PSDrive...

    i.e. in my example it is:

    \\packerbuildstore.file.core.windows.net\appinstalls
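    For reference, a hedged sketch of what mapping the I: drive to this share looks like in PowerShell is below, using the example storage account name from this walkthrough; the key is the storage account access key you copy in the next step:

    $key  = ConvertTo-SecureString -String "<storage-account-key>" -AsPlainText -Force
    $cred = New-Object System.Management.Automation.PSCredential ("Azure\packerbuildstore", $key)
    New-PSDrive -Name I -PSProvider FileSystem -Root "\\packerbuildstore.file.core.windows.net\appinstalls" -Credential $cred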


    On the Storage Account go back to the Access Keys:







    Copy out and store the Storage Account name and the Key. We will need them in Step 18.


    Step 18. Create an Azure Key vault.

    Go back to the Resource Group you just created and click on +Add

    Search for Key vault and click on Create


    Give the Key Vault a name and click on Create at the bottom:






































    You will now have a new Key vault. We will now add the various details you have stored from earlier as secrets in key vault. Later on, we will retrieve these secrets from the Build Pipeline so that we are not storing them anywhere in code.

    Step 19. Add secrets to Key vault


    Open the new Key vault and click on Secrets and then click on + Generate/Import:






















    Now add the following eight secrets. This requires the items you have saved from above, plus some of the credentials used within your existing WVD deployment:

    • AppInstallsStorageAccountName - Name of the Azure Storage Account
    • AppInstallsStorageAccountKey - Storage account Access Key
    • DomainJoinAccountUPN - AD account used for domain joins
    • DomainJoinAccountPassword - Password for the above account
    • PackerDevOpsVMProvisioningServicePrincipalAppID - DevOps SPN App ID
    • PackerDevOpsVMProvisioningServicePrincipalSecret - Secret for the above
    • WVDServicePrincipalAppID - WVD Service Principal App ID
    • WVDServicePrincipalSecret - Secret for the above
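    If you prefer to add the secrets from PowerShell rather than the portal, a hedged single-secret example with the Az module is below; repeat it for each of the eight secrets, substituting the name, value and your vault name:

    $value = ConvertTo-SecureString -String "<secret-value>" -AsPlainText -Force
    Set-AzKeyVaultSecret -VaultName "<your-key-vault-name>" -Name "AppInstallsStorageAccountName" -SecretValue $value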


    You should now have a list like this:















    Step 20. Add an Access Policy.

    We need the Service Principal to be able to at least List and Get these secrets in order to pull them back into DevOps.


    Click on Access policies:







    Then click on + Add Access Policy:















    In the Configure from template (optional) drop down you can select a preconfigured set of permissions. The easiest is Key & Secret Management. You can optionally select more granular permissions in the Key permissions drop down.

    We will select that template.

    In the Select Principal search for and add your Service Principal created earlier and click on Select at the bottom.

























    Make sure you click on Save at the top.
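    As an aside, the same access policy can be granted from PowerShell; a hedged sketch with the Az module, scoped to just the secret permissions the pipeline needs (substitute your vault name and the Packer SPN App ID), is:

    Set-AzKeyVaultAccessPolicy -VaultName "<your-key-vault-name>" -ServicePrincipalName "<packer-spn-app-id>" -PermissionsToSecrets get,list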







    Step 21. Get those secrets from Key vault and place them in DevOps.

    To be able to grab these values from with DevOps we need to add them to a Library inside the Pipeline.


    Back in Azure DevOps click on Pipelines and Library:















    Then click on + Variable Group.

















    Give the Variable group a name and enable the "Link Secrets from an Azure key vault as variables" button. This is required to connect to your Key vault to retrieve the secrets. Without this it will only allow you to enter variables manually.


































    Step 22. In the Azure Subscription drop down select the subscription you have been using and into which you want DevOps to deploy into.










    In the Blue Authorize button drop down select Advanced options
    We will now create a DevOps Service Connection to the Azure subscription.


    Step 23. Click on the "...use the full version of the service connection dialog" link.




































    Enter the Packer Service Principal client ID and the Service principal key or secret that you previously created and stored in Key vault:






































    Click on Verify Connection, and you should get a green Verified tick:
















    If this fails, go back to the Subscription blade and into the Access Control blade and check that the Packer Service Principal you created does have at least Contributor access to your subscription.

    Click on OK.

    Step 24.  In the Key vault name drop down, select your newly created Key vault and click on the blue Authorize button.











    This will open the standard AAD authentication dialog. Enter your Azure admin credentials.


    Once connected the red text will disappear (if not, click the refresh button on the right), and it should look like this:

















    Click on Save at the top.

    Now in the Variables section click on + Add. This will list all the Secrets in your Key vault.

    Select them all and click on OK.

































    Your Variables section will now list all your secrets within it.

    Click on Save at the top.


    Back in the DevOps Library section your Azure Key Vault Variables is now listed.














    The next step is to create another Variable group, for values that do not need to be stored in Key vault.

    Step 25. Create a Variable Group.

    In DevOps Go to Pipelines > Library.

    At the Top click on + Variable group.
























    Give your Variable group a name and click on + Add four times to create four new variables.

































    Now add in four new variables:

    • ARM_Subscription_ID - Your Subscription ID
    • AZ_Tenant_ID - Your Azure AD Tenant ID
    • wvd_goldimage_rg - The name of the resource group where the build pipeline will place the Image it builds. Use the same Resource Group that has your Key vault and storage account.
    • packaged_app_installs_path - The path to the Azure Files SMB file share. You should have copied this in step 17. It will look like \\packerbuildstore.file.core.windows.net\appinstalls













    Click on Save at the top.

    The next requirement is to create a Service Connection so that DevOps can authenticate itself seamlessly during the build of the image.

    Step 26.  DevOps – Azure Service Connection.


    In your DevOps project in the bottom left Click on Project settings > Service connections in the Pipelines Section. Click on New service connection and Select Azure Resource Manager:
















    Click on Next.

    Select Service principal (automatic) and Click on Next.












    In the New Azure service connection section, select your Azure subscription (you may be prompted for your Azure credentials again).
    Select the Resource Group previously created

    Give your connection a name and optionally a description.






































    Click on Save at the bottom, this will create your DevOps Azure subscription service connection.


    So now you have two sets of variables. One stored in your Key vault, the other in the project itself.

    This now means that the variables section of the packer-win10_1903.json file in the repo makes sense.





















    These are the variables we have already defined, and it should now be obvious what they relate to.


    Within the builders section are the values for the temporary resource group where the image is compiled, as well as the permanent vNet that needs to exist to place the VM on.




    Step 27. Create a Virtual Network for the Image Template VM to reside on.
    Very importantly, we need to create a vNet on which the VM from which we take the image will reside. As you will likely build many of these images, the vNet needs to be permanent (it has no cost), so we keep it in the Resource Group with the other items that are also permanent.
    The vNet and Subnet defined in lines 41 and 42 need to be created now, in the Resource Group defined in line 44 above.
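    If you want to create that vNet from PowerShell, a hedged sketch with the Az module is below; the names and address ranges are placeholders and must match the values in lines 41-44 of the Packer template:

    # Create a small, permanent vNet and subnet for the Packer build VM
    $subnet = New-AzVirtualNetworkSubnetConfig -Name "PackerBuildSubnet" -AddressPrefix "10.10.1.0/24"
    New-AzVirtualNetwork -Name "PackerBuildVnet" -ResourceGroupName "Packer-Build" -Location "North Europe" -AddressPrefix "10.10.0.0/16" -Subnet $subnet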


    So, as a recap before starting a build pipeline:

    • We have created a DevOps project, into which we have uploaded the standard WVD ARM deployment and update templates.
    • We have also uploaded two JSON templates for the building of a Windows 10 1903 and 1909 image.
    • We have created an Azure Files share with software located in it for use in that build.
    • We have created a key vault to store sensitive secrets, that we call from the build templates, as well as storing other variables within the project. 

    Create a Build Pipeline

    So, the next thing to do is to create a Build Pipeline in order to build our first image. From this image we will create a host pool with multiple session hosts. This image is your corporate image.

    A Build Pipeline is a collection of build-type tasks combined to produce some output: code, an app, infrastructure, or in our case a Windows 10 VM that is converted to an Image. The build actually happens on a Hosted Agent, which is a generic VM behind the scenes that runs your pipeline and produces the output.


    Step 28. Back in Azure DevOps select Pipelines > Pipelines.




















    Click on Create Pipeline














    Select Use the classic editor at the bottom.





















    Select Azure Repos Git and click on Continue.


    Select start with an empty job.












    Step 29. Give your job a short but meaningful name. This name will also be used for the Image name that gets created at the end of this build process.

    Select windows-2019 as the Agent Specification.



















    In the Agent Job 1 section click on +












    On the right type packer into the search field, and then select "Packer Tool Installer".


    Step 30. Now repeat these steps and add the following four remaining tasks:

    A Build Machine Image task:












    A Copy Files Task:





    A Variable Save Task.
    Finally, a Publish Build Artifacts Task.

    You should now have a list of build Tasks in your Pipeline that looks like this:


    Now we need to apply some settings for some of these tasks.

    Step 31. Click on Agent job 1.
    On the right change the Display name to something more appropriate such as Packer Build.




    Step 32. Click on the Use Packer task.

    Modify the Display name and append 1.3.4, and in the Version field enter 1.3.4.


    Step 33. Move to the Build immutable image task.
    Change the Packer Template to User Provided:

    In the Packer template location browse to the packer-win10_1903.json file from your repo.














    Click on OK.

    In Template Parameters delete the existing {} and paste in the below.

    {"client_id":"$(PackerDevopsVMProvisioningServicePrincipalAppID)","client_secret":"$(PackerDevopsVMProvisioningServicePrincipalSecret)","AppInstallsStorageAccountName":"$(AppInstallsStorageAccountName)","AppInstallsStorageAccountKey":"$(AppInstallsStorageAccountKey)"}

    Now click on the ellipsis on the right to confirm in a more readable format the variables you just pasted in. 

















    Move down to the Output section and in the Image URL or Name field enter BuildImage.














    Step 34. Select the Copy Files to: task.

    Change the display name to Copy Files to: $(build.artifactstagingdirectory)

    In the Source Folder Browse to the ARM Templates in the Repo.


    In the Target Folder paste in $(build.artifactstagingdirectory)

    Your Copy Files to: task should now look like this:



















    Step 35. Select the Save Build Variables task.

    Change the Display name to: Save Build Variables BuildImage
    Change the Prefixes to: BuildImage.

    This task will now look like:


    Step 36. Select the Publish Artifact task.

    Change the Display name to: Publish Artifact: Build Image and WVD Template.

    Change the Artifact name to: Build Image.

    It will now look like:


    Step 37. We also need to add some further variables to this specific Pipeline.


    At the top click on Variables:












    Click on the Link variable group:











    Click the radio button to the left of the Azure Key Vault Variables.














    Click on Link at the bottom.

    Repeat this for the My Variables group:











    Step 38. Importantly save this Pipeline.


    At the top click on the drop down on the Save & Queue button and then click on Save.













    Pipeline Recap

    So, we have just built a DevOps Build Pipeline. This pipeline:

    • Installs Packer 1.3.4.
    • It then builds an Azure VM using Packer. This build uses four secure variables from Key vault. It creates an output variable of BuildImage.
    • It copies the contents of the ARM Templates folder which are the build and update ARM Templates from our Repo into the build artifact directory. They can then be used in our Release pipelines to build a new host pool or update an existing one.
    • It then converts our variables into artifacts, so that they are preserved and available for use later in the Release Pipeline.
    • It also uses the other variables needed in this pipeline that are not stored in Key vault.

    You will more than likely need to edit this pipeline. To do that, go back to the Pipelines section and click on Pipelines.
    The view has now changed, and it shows your "Windows 10 1903 build" pipeline. Click on the pipeline and the resulting screen is where you would run it. At the top right there is an Edit button; clicking on this takes you back to the edit view where we just created the pipeline.

    Now you are ready to run this Windows 10 1903 Multi Session Build.

    Step 39. Run the Build Pipeline.


    If you are editing the Pipeline the easiest way to start the build is to Click on the Queue button at the top right.







    On the Run pipeline screen click on Run at the bottom:






































    Your build will now commence.
    Click on Packer Build in the Jobs section to see the progress:













    The progress and any errors will be displayed in the Job run:



































    This will now create a new Resource Group called PackerBuild-Temp and start deploying some infrastructure into it (VM, Public IP, NIC, Disk) in order to build an Image. The resulting image will be placed back into the PackerBuild Resource Group, so it is kept permanently.

    The PackerBuild-Temp Resource Group will be deleted at the end of this Pipeline.

    This will now leave you with an Image in your PackerBuild Resource Group called:


    Windows-10-1903-Build-"Date and time"-"Build No." i.e.







    Your Build Pipeline Job will have finished and should take approximately 10 minutes - you will also receive an email to this effect from Azure DevOps Notifications.


    You can confirm the Artifacts have been published correctly by clicking on the Packer Build section at the top and on the Right click on the 1 Artifact Produced link:





















    That link will take you to the artifacts which will look like this:





















    Troubleshooting. Things that could go wrong:

    • You have typos in the names of your Azure Key vault variables or DevOps variables.
    • You have errors in the name and or paths for your Azure File Share. Make sure the names and paths are as defined in the JSON template.
    • The Install cmd files or the PS1 files are not correct.


    How do you build a Windows 10 1909 build image?


    All you have to do is repeat steps 27-38 and replace references to 1903 with 1909, and select the packer-win10_1909.json file in the Packer Template location, in the Build Immutable image task within the Build Pipeline:





















    This is the exact same process to build a 20nn (20h1) build when it is released.


    Build a Release Pipeline

    We now need to build a Release Pipeline that will take our newly created image and the artifacts from the build and then build a new host pool, or update an existing one.

    My use case here, just as a reminder is that you are an IT Pro who knows WVD but not ADO. In which case you will likely already have done many Host Pool deployments but now you want a simple, quick, repeatable and reliable method to update the session hosts in the host pool.

    In this process we will build a new host pool just to show the process and then update using continuous deployment.


    Step 40. In ADO go to Pipelines > Releases.


    Then click on New Pipeline















    Select Empty Job at the top.








    Step 41. Give your host pool a name.

    Click the Save button at the top and save it into the default folder.


    Click on OK.


    Step 42. Click on the task link in the Update Stage just created.

    Click on Agent Job and then in the Agent Specification select windows-2019.


    Click on Save at the top and OK

    Step 43. Go to the Options tab at the top

    Replace the Release name format with: REL$(rev:r). This will be used in the session host naming, and it is important this is replaced because otherwise the VM name will be too long.

















    Click on Save at the top and OK.

      Step 44. Add an Artifact.

      Go back to the Pipeline tab and, in the Artifacts section, click on + Add an artifact.



















    The Build Source type is the default, your ADO project will be selected in the Project field.
    In the Source (build pipeline) drop down select your Windows 10 1903 Build pipeline.
    Default version should be Latest.

    In Source alias give it an appropriate name: Latest Windows 10 1903 Build Artifacts.






































    Click on Add.


    Step 45. Go back and edit the task again. This is similar to the Build Pipeline creation process.

    Add a Variable Load task. Search for Load variable.












    Select Variable Load Task and click on Add.


    Step 46. Add an Azure Resource group Deployment.












    Click on Add.

    Step 47. The Load Build Variables doesn't need any changes. This will copy the variables from the build into the release.

    In the Azure Resource Group, we need to make some updates.

    Come back to the Azure Resource Group Deployment Task. Now update the following sections


    • The Azure Subscription.
    • The default action of Create or update resource group is fine.
    • The Resource Group you want the Host Pool resources to be created within.
    • The desired Location.



























      A little further in the Template section, the Template Location is "Linked artifact".

      Then browse within the Template field and select your mainTemplate.json file:


      In the Override template parameters paste in the following text:


      -_artifactsLocation "https://raw.githubusercontent.com/Azure/RDS-Templates/master/wvd-templates/" -_artifactsLocationSasToken "" -rdshImageSource CustomImage -vmImageVhdUri "" -rdshGalleryImageSKU "Windows-10-Enterprise-multi-session-with-Office-365-ProPlus" -rdshCustomImageSourceName $(BuildImage) -rdshCustomImageSourceResourceGroup $(wvd_goldimage_rg) -rdshNamePrefix VM-WVD-$(Release.ReleaseName) -rdshNumberOfInstances 1 -rdshVMDiskType Premium_LRS -rdshVmSize Standard_D2s_v3 -enableAcceleratedNetworking false -rdshUseManagedDisks true -storageAccountResourceGroupName "" -domainToJoin LOCALAD -existingDomainUPN $(DomainJoinAccountUpn) -existingDomainPassword $(DomainJoinAccountPassword) -ouPath OU=WVD,DC=LOCALAD,DC=COM -existingVnetName YOURVNET -newOrExistingVnet existing -existingSubnetName YOURSUBNET -virtualNetworkResourceGroupName YOURRG -rdBrokerURL https://rdbroker.wvd.microsoft.com -existingTenantGroupName "Default Tenant Group" -existingTenantName YOURTENANT -hostPoolName YOURHOSTPOOL -serviceMetadataLocation United-States -enablePersistentDesktop false -defaultDesktopUsers AUSERACCOUNT -tenantAdminUpnOrApplicationId $(WVDServicePrincipalAppID) -tenantAdminPassword $(WVDServicePrincipalSecret) -isServicePrincipal false -aadTenantId $(az_tenant_id) -location "North Europe"


      Then at the right of this field click on the ellipsis; this will read in those variables and show them in an easier to read format, along with any errors if there are any.

      You will notice some items are in BOLD. These are values you need to change to reflect your environment. Updating them is slightly easier in this UI.


      The values that will need updating are:


      • domainToJoin - this is your FQDN domain name
      • existingVnetName - this is your vnet that the WVD Session hosts need to reside upon, i.e. one that has access to your AD DC.
      • existingSubnetName - the subnet in the above vNet
      • VirtualNetworkResourceGroupName - the RG the above vNet is in
      • existingTenantName - your WVD tenants name
      • HostPoolName - Your WVD Host Pool Name
      • defaultDesktopUsers - a user account to present this published desktop to.


      In Deployment Mode we will select Incremental, which will deploy all new infrastructure and leave untouched anything not in the template. Note that you can validate the template by selecting Validate mode.

      Click on Save at the top and OK.

      Step 48. Remove the Pre-deployment condition trigger. Currently this task will automatically start when a new build completes and a new Artifact is created. We will remove that.

      Go back to the Pipeline at the top.

      Click on the Pre-deployment conditions button.














      Then Select the Manual only Trigger.


















      Click on Save at the top and OK.

      Step 49. Add the Variables to the Release.

      Click on Variables at the top and select Variable Groups.


      Click on Link variable.


      Select Azure Key Vault Variables and then Link at the bottom.












      Repeat this for the My Variables group, and click on Link at the bottom.













      Click on Save at the top and OK.


      Step 50. Build the new Host Pool using this Pipeline.


      Go back to the Pipeline tab at the top and then select Create release on the right-hand side.







      The Deploy host pool is now manual so can’t be triggered here, so just click on Create at the bottom and we will manually start it.

      A release has now been created but nothing is yet running.

      Click on the “Release-no." link.
















      Now in the Stages section hover over Deploy Host Pool and you will see a Deploy button beneath it.















      Click on the Deploy button and then Deploy.





















      The stage changes to Queued and then to In Progress.















      Clicking on In Progress takes you to the processing of the release.





















      The ARM Template is now running and deploying a host pool as per normal but with the details we have specified in the variables and text sections earlier.
      When this completes you will have a new Azure Resource group with all the resources required for this new WVD Host pool.

      Plus, it will also have created the new Host Pool and presented the Default Desktop group to the user you specified in step 46.





















      Step 51. Create an Update host pool task.

      Now we will create the Update a Host Pool Task and then enable Continuous Deployment for this Release, such that whenever we build a new image via the Build Pipeline it will automatically start the release Pipeline to update the existing host pool.

      We will use the new host pool just created to update.

      Go back to your Release Pipelines and click on Edit.

      In Stages click once on the Deploy host pool task so it is selected.


      Click on the +Add drop down button and select Clone.





      This will create a task that follows the Deploy a new host pool task, which is what we don’t want.









      Click on the Pre-Deployment conditions button:


      Change the Trigger to After Release:

















      This option kicks off this new task after the build completes, which itself still does not have continuous deployment enabled - yet.

      Your Pipeline should now look like this:

































      Step 52. We need to modify this task to do an update


      Click on the link in the task.











      Change the name of this task to "Update Host Pool".


















      Click on Save at the top and OK.

      The Load Build Variables is OK as is.

      Click on the Azure Deployment task.


      Modify the Display name to remove "Create or"












      The other settings are the same as in the Deploy task. However, we do need to change the template we are using from the mainTemplate to the Update Template.


      In the Template field click on the ellipsis on the right and browse to the UpdateTemplate.json file.






































      Click on OK

      In the Override template parameters paste in the following:

      -_artifactsLocation "https://raw.githubusercontent.com/Azure/RDS-Templates/master/wvd-templates/" -_artifactsLocationSasToken "" -rdshImageSource CustomImage -vmImageVhdUri "" -rdshGalleryImageSKU "Windows-10-Enterprise-multi-session-with-Office-365-ProPlus" -rdshCustomImageSourceName $(BuildImage) -rdshCustomImageSourceResourceGroup $(wvd_goldimage_rg) -rdshNamePrefix VM-WVD-$(Release.ReleaseName) -rdshNumberOfInstances 1 -rdshVMDiskType Premium_LRS -rdshVmSize Standard_D2s_v3 -enableAcceleratedNetworking false -rdshUseManagedDisks true -storageAccountResourceGroupName "" -domainToJoin LOCALAD -existingDomainUPN $(DomainJoinAccountUpn) -existingDomainPassword $(DomainJoinAccountPassword) -ouPath OU=WVD,DC=LOCALAD,DC=com -existingVnetName YOURVNET -newOrExistingVnet existing -existingSubnetName YOURSUBNET -virtualNetworkResourceGroupName YOURRG -rdBrokerURL https://rdbroker.wvd.microsoft.com -existingTenantGroupName "Default Tenant Group" -existingTenantName YOURTENANT -existingHostpoolName WYOURHOSTPOOL -serviceMetadataLocation United-States -enablePersistentDesktop false -tenantAdminUpnOrApplicationId $(WVDServicePrincipalAppID) -tenantAdminPassword $(WVDServicePrincipalSecret) -isServicePrincipal false -aadTenantId $(az_tenant_id) -actionOnPreviousVirtualMachines "Delete" -userLogoffDelayInMinutes 1 -userNotificationMessage "Scheduled maintenance, please save your work and logoff as soon as possible" -location "North Europe"


      Again, click on the ellipsis on the right-hand side; this will read your variables in, and you will notice some items are in BOLD. These are values you need to change to reflect your environment, and updating them is slightly easier in this UI. They are mostly the same as previously, but remember we are now running the Update Template rather than the Deployment template, so there are a few differences.

      Replace the bold items with the values for your environment.

      Click on Save at the top and OK.


      Step 53. Now we will deploy this Release Pipeline to update the VM's in our existing host pool.


      Click on Create Release at the top.








      The Update host task is now highlighted to show that this task is an automatic one and as soon as we click on Create the Release Pipeline will start. This is different to the Deploy task which is manual.

























      Click on Create at the bottom
      The release task has now started.



      Click on the "ReleaseNo." link










      This will show you the progress of this task:


























      You can also see the progress by going back to the Azure portal and going to the Resource Group and then clicking on Deployments. You will see this deployment in progress.












      Clicking on the deployment will show you more details and this is the standard ARM template output, that you are no doubt used to if you have done WVD deployments before.





























      The final step is to enable the Continuous Deployment Trigger that will mean that when you build a new image, it will automatically update your host pool.

      Go back to your Release Pipeline

      On the Artifacts section click on the Continuous Deployment Trigger button and then switch on the Enabled button:















      Click on Save at the top and OK.

      The next time you update your Image by running the Build Pipeline ADO will automatically update your host pool with this new image.

      You’re done, well done for sticking with it.

      This shows the power that Azure DevOps can bring to a WVD deployment. It saves you deployment time, errors and troubleshooting and gives you a reliable and repeatable method for creating an Image and for updating all of your existing host pools as well as creating new ones.

      This guide only scratches the surface of the capability, but hopefully it is enough to learn the basics and then go on to use it to a fuller extent, as I have.


      Create a corporate URL for the Windows Virtual Desktop Website,
      Part 2 Azure Front Door

      In Part One of this topic I showed how you could redirect a corporate URL to the WVD URL, so that your users would only need to remember or bookmark a familiar URL. That was done using a few lines of code and an Azure Function app: http://xenithit.blogspot.com/2020/02/create-corporate-url-for-windows.html

      This post shows how to achieve the same thing, but using Azure Front Door.

      To set this up in Azure Front Door, follow these steps.

      The first requirement is to have a Web App. If you're just starting, follow this simple guide and create yourself a free F1 App Service Plan.

      Once that is created, copy the URL for your web app; you can then create your Azure Front Door and URL redirect rule.

      In Add a Resource in the Azure portal search for Front Door
      Click on Create, choose or create a Resource Group


      Step 1 is to create a Frontend, click on the + sign

      Give your Frontend a name and click on Add
      Now you create a Backend pool, click on the + sign
      Give your Backend a name and click on + Add a backend
      Select Custom host and enter the URL for your WebApp without the https://
      Click on Add

      On Routing Rules click on the + sign

      Give your Redirect Rule a name.
      In Route Type select Redirect
      In Destination Host select Replace and enter: rdweb.wvd.microsoft.com
      In Destination path select Replace and enter: /webclient/index.html
      Click on Add

      Then Click on Review and Create and then the Create button to create your AFD.

      Now go back to your Frontend hosts and copy your Frontend URL, enter that into a browser and again you will be redirected to the Remote Desktop Web Application.

      Now you will need to add a Custom domain so that you can use a corporate URL to access this redirection service and get to the RDWeb URL. That requires creating a new CNAME record that points your desired URL to this Azure Front Door frontend URL.

      With Azure Front Door the CNAME record needs to already exist, and as it may take some time to propagate, do that now. As with the Function App, go to your DNS manager and add a new CNAME record for your domain i.e. myapps.contoso.com mapping to your Azure Front Door URL, i.e. wvd.azurefd.net

      Whilst that propagates, the next thing we need to check is which Tier your App Service plan is on.

      Custom Domains are not a feature of the F1 plan, so you will need to upgrade to a minimum of the B1 Plan.

      Go to your App Service. This is the App service the Azure Front door is running within.

      Go to Scale Up:

      Select the B1 plan (or higher), ensure the plan you want is surrounded in blue. Click on Apply


      Now go to your Azure Front Door designer and click on the + sign on the right hand side of your Frontend hosts:
      As we already have a Frontend pool this will default to the adding of a custom domain.
      In the Custom host name enter the URL you want to use which is the CNAME record you just created.
      This will automatically check your DNS for the existence of this record.
      Click on Add.

      That has added the custom domain, now we need to add a routing rule for this custom domain.
      In fact we only need to add it to our existing routing rule.

      Back in the Azure Front Door designer in the green section for Routing rules Click on the existing rule you created earlier
      In the resulting "Update Routing Rule" blade put a tick in the Frontend hosts and click on Update

      Finally click on Save at the top on the left:

      You have now created an Azure Front Door redirector for the RDWeb URL.

      To test just enter your corporate CNAME record into your browser. It will get redirected to your Azure Front Door Frontend Host Pool, where your routing rule will redirect it to the RDWeb URL.



      Create a corporate URL for the Windows Virtual Desktop Website,
       Part 1 Azure Function App


      The Windows Virtual Desktop HTML5 client is currently accessed using a Microsoft URL which is reasonably long and is also exactly the same for every customer:

      https://rdweb.wvd.microsoft.com/webclient/index.html

      This isn't ideal if you work for an enterprise and want your users to be using a corporate URL in order to access what are corporate applications and data from the Windows Virtual Desktop service.

      What would be better is to use a URL that looks something like myapps.contoso.com.

      Well using an Azure Function App makes this very simple, and the guide below walks through how to set this up.

      First thing you will need is an Azure Subscription and have access to your corporate DNS in order to create a CNAME record.

      A Function App is a serverless application that allows you to run small pieces of code without the need for any infrastructure.

      Create your Function App

      In the Azure portal click on Create a Resource.


      Search for Function App.

      Give your app a name and select .NET Core as your runtime stack.
      On the hosting tab, create or select an existing Storage Account. For the Plan type as this is such a tiny Function App select Consumption. See this article for details on pricing.


      I switch off Application Insights as this app only does one thing.

      Create your Function App.

      In your Resource Group you will have an App Service, an App Service Plan and a Storage Account. Click on the App Service. 

      Now you will need to add some code to your new Function App
      Click on the New Function button.

      Select the Azure portal to be your development environment, and click on Continue.
      Click on More Templates, and Finish and view templates.
      From the resulting templates select HTTP Trigger, give it a name and click on Create.
      Now you will need to enter your code. Delete what is there and enter your own code.
      This is the code that I use:
      using static System.Environment;
      using System.Net;
      using System.Net.Http;
      using System.Net.Http.Headers;

      public static async Task<HttpResponseMessage> Run (HttpRequestMessage req, TraceWriter log) {
          // The host name the request originally came in on is forwarded in the DISGUISED-HOST header
          string OriginUrl = req.Headers.GetValues ("DISGUISED-HOST").FirstOrDefault ();
          log.Info ("RequestURI org: " + OriginUrl);

          // Create a 301 redirect response pointing at the WVD web client
          var response = req.CreateResponse (HttpStatusCode.MovedPermanently);
          if (OriginUrl.Contains ("myapps.contoso.com")) {
              response.Headers.Location = new Uri ("https://rdweb.wvd.microsoft.com/webclient");
          } else {
              // Any other host name is unexpected, so log it and return an error
              log.Info ("error RequestURI org: " + OriginUrl);
              return req.CreateResponse (HttpStatusCode.InternalServerError);
          }
          return response;
      }

      All you will need to do is replace myapps.contoso.com with your corporate URL that you want your users to consume.

      Click on Save.
      Click on </> Get function URL and then click on Copy.

      The next thing you will need is a Proxy.
      On the left-hand side click on Proxies and click on the + to create a new one.

      In the New Proxy window enter a name, for the Route template enter a /, accept All methods, and then in the Backend URL paste in the URL copied from above.
      Click on Create.

      Final thing for the function app is to add your corporate domain as a custom domain.

      On the left select your Function App name, and then on the right click on Platform Features
      Click on Custom Domains.
      In the Custom domain blade click on Add Domain.
      Enter your domain name and click on Validate.

      The next blade will provide you the CNAME record that you will need to create in your DNS service.

Copy the Value text in the Domain Ownership section, go to your DNS service and create a new CNAME record that maps "myapps" for your domain to that Value.
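If your public zone happens to be hosted in Azure DNS, the record can also be created with the Az.Dns PowerShell cmdlets. A minimal sketch, where the zone name, resource group and Function App host name below are placeholders for your own values:

# Map myapps.contoso.com to the Function App host name provided in the Custom domain blade
$cname = New-AzDnsRecordConfig -Cname "myappsfunction.azurewebsites.net"
New-AzDnsRecordSet -Name "myapps" -RecordType CNAME -ZoneName "contoso.com" -ResourceGroupName "dns-rg" -Ttl 3600 -DnsRecords $cname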
Once that has been created successfully, come back and click on the Validate button once again. (This may take some time to propagate.)
      The Domain ownership icon will change to a Green Tick.
      Then click on Add Custom domain.

      This will add your custom domain into this Function App.
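Before testing it in a browser, you can sanity-check the redirect from PowerShell. A quick sketch; reading the final URI this way is Windows PowerShell 5.1 behaviour and differs slightly in PowerShell 7:

# Follow the 301 redirect and confirm you end up at the WVD web client
$response = Invoke-WebRequest -Uri "https://myapps.contoso.com" -UseBasicParsing
$response.BaseResponse.ResponseUri   # should show https://rdweb.wvd.microsoft.com/webclient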
      Now all you need to do is test it. It should look like this:




      How to dynamically update session hosts in an existing host pool


      Do you have an existing Windows Virtual Desktop where you want to replace the session host virtual machines with new images? 

You could build a new host pool with the new session host VM's, present the new icon to users, get them to test, and then switch them over. That's not ideal, as users need to launch this new desktop, which could lead to confusion.

However, what you can now do is use an "update" ARM template to dynamically replace the VM's in the existing host pool, with no other changes being made to the pool and little or no user interaction.

This template builds any number of new session hosts into the same host pool, and then either deallocates or deletes the existing VM's. If you choose delete, it will remove the VM's, their related storage and the other infrastructure components, which means there will be no residual costs related to the old VM's. It will also send a message to all connected user sessions before doing so.
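As a rough idea of what kicking this off from PowerShell looks like, the sketch below deploys a template into the host pool's resource group. The template URI and parameter names here are placeholders only; take the real ones from the repo linked below.

# Hypothetical parameter names - use the actual ones defined in the template in the repo
New-AzResourceGroupDeployment -ResourceGroupName "wvd-hostpool-rg" `
    -TemplateUri "https://raw.githubusercontent.com/<repo>/update-hostpool.json" `
    -TemplateParameterObject @{
        hostPoolName        = "MyHostPool"
        vmNumberOfInstances = 4
        deleteOrDeallocate  = "Delete"
    }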

      Have a look at this video which goes through the process: 




      The ARM template is stored in this Github repo.


      Drinking our own VDI champagne.

So my laptop packed up over the weekend, and that gave me the opportunity to actually walk the walk rather than just talk about Windows Virtual Desktop and Azure: to actually drink our own champagne.
Hence for the last couple of days I have been using my own Windows 10 VDI that is in Azure, delivered via Windows Virtual Desktop, and in fact I am writing this post from that very Virtual Machine.

      So first off I have an Azure subscription - in to that subscription I have provisioned a Windows Virtual Desktop hostpool (see the document links at the end for deployment guidance).

A hostpool is a collection of similar VM's hosting a desktop or set of applications for an in-scope group of users.

In my case, to make my demos slightly less boring, I have chosen an N-Series VM from Azure, which has Tesla M60 GPU cards within the hosts. This allows me to run some graphical applications that look interesting rather than standard apps. It also demonstrates the power and flexibility of Azure, as even the largest VM in Azure takes around five minutes to deploy, so users can be up and running and productive far faster than doing this on-premises.

To make things slightly more interesting I was using a Samsung Galaxy S10e mobile phone running Samsung DeX. DeX allows you to connect your mobile phone via an HDMI cable to a monitor or TV, and launch a full desktop-like experience. In that desktop you can launch any apps that run on the phone, which is great in itself.

However the real power comes when you connect to your VDI session running in Azure. Samsung DeX supports both the Microsoft Remote Desktop Client and Citrix Workspace. I have my VM running in Windows Virtual Desktop in Azure, and I also have a number of other VM's in Azure being orchestrated by Citrix Cloud that I could just as easily connect to. So in essence I am using a device that I always have with me, no matter where I am and at any time, as a thin client allowing me to connect into my corporate VDI session to use my line of business apps and data.

Once on the VDI the user experience is very good, with no noticeable delay, and as I am typing this blog there have been no pauses or delays in characters appearing as I type them. The actual experience is indistinguishable from having a full local device. This is particularly impressive seeing as I am located in the UK and accessing a VM in our South Central US Azure region, nearly 5,000 miles away.

Now if you have worked in this space you may well remember two other devices that did the same thing - definitely in the past tense! The first was the Motorola Atrix, which required a dock to place the phone into, with cables connected for the monitor, keyboard and mouse. Whilst it was pretty small, it was still too clunky to carry around, which meant the dock was left on the desk. This in turn meant that whilst the mobile phone was a mobile device, accessing your VDI from it required you to be back at your desk, which worked fine but tied you to that desk, so you weren't able to be fully mobile and productive.

Later, Microsoft released Continuum for Windows 10 Mobile, which was an improvement on the Atrix but still suffered the same issue: people did not want to carry the dock hardware around with them.

Whereas the Samsung DeX requirement is just one HDMI cable, which is about the same size as a power cable and thus no problem to put in a bag and carry around. This means that you can be fully mobile and able to connect into your VDI session anywhere at any time - you just need to have a monitor or TV available.
Now you do also need a Bluetooth keyboard and mouse, but I'm sure everyone who carries a laptop also carries a Bluetooth mouse, and it takes all of about 20 seconds to connect a mouse to the phone, so people could quite easily carry the one mouse and connect it to their phone when needed. As for the Bluetooth keyboard, there is a plethora of them about, and in terms of size Microsoft have a Universal Foldable Keyboard that is so small anyone can chuck it into their work bag.

The result is that all I need to carry around is the phone that I would have with me anyway, a mouse that I always have in my bag, an additional cable, which is nothing, and this small keyboard. My bag doesn't notice the difference.

      Which when in use looks like:


The other item that is worth noting is the price. In my case I picked an N-Series VM, which is one of the more expensive VM's due to the NVIDIA Tesla M60 GPU within the host.
The specific VM is an NV6 in the South Central US Azure region. The NV6 VM has six vCPU's, 56 GB of RAM and one M60 GPU.

With a three-year Reserved Instance and Azure Hybrid Use Benefit applied, the VM costs £0.4523 per hour to run. This VM should support up to 36 "light" users, which gives a price per user per hour of just £0.013.

As mentioned, the N-Series are quite bespoke for specific workloads and probably not all that common in VDI environments. The D-series VM's are better suited to this workload. The generally recommended VM for multi-session Windows 10 is the D8s_v3. This VM has 8 vCPU's and 32 GB of RAM, and again with a three-year Reserved Instance and Azure Hybrid Use Benefit applied it costs £0.1346/hour in the South Central US region. This VM supports 48 "light" users, which gives a price per user per hour of just £0.0028.
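The arithmetic behind those per-user figures is simply the hourly VM rate divided by the expected user density:

# Price per user per hour = VM hourly rate / users per VM (rates as quoted above)
$nv6PerUser = 0.4523 / 36    # ~ £0.0126 per user per hour on the NV6
$d8sPerUser = 0.1346 / 48    # ~ £0.0028 per user per hour on the D8s_v3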

      Pricing for these is listed here in the context of Windows Virtual Desktop providing typical scenarios: https://azure.microsoft.com/en-us/pricing/details/virtual-desktop/


      I created a little video of this whole user experience.



      To get started in the Windows Virtual Desktop preview you can follow this documentation:  https://docs.microsoft.com/en-us/azure/virtual-desktop/overview

      Windows Virtual Desktop pre-requisites - everything in the right place to enable you to deploy without errors

Windows Virtual Desktop is a newly announced service for managing VDI and RDSH as a service from Azure. It went into public preview in March of 2019, with many successful test deployments having been completed. However, we have seen a large number of Azure Resource Manager deployment failures from a set of customers, all caused by very similar, quite simple errors entered into the Azure portal at deployment time.

Hence this guide is just to explain clearly what prerequisites are required, where to get the relevant information, and exactly where to put these details in the Windows Virtual Desktop HostPool creation process in the Azure portal. This is to ensure the deployment process will complete successfully. This is not a full deployment guide; full deployment instructions already exist.

This guide will enable you to collect all the relevant pieces of information in one place so that you can put them into the Azure portal at deployment time. You will either create and record, or just record, the information needed by the WVD deployment process and keep it in one place in Notepad to use later in the full deployment.

      Use this in conjunction with the existing deployment guide from Microsoft docs.

From a high level you will require the following items before you can deploy Windows Virtual Desktop:
      1. An Azure Active Directory
      2. An Active Directory
      3. Azure Active Directory Connect
      4. An Azure Virtual Network updated with your DNS server(s)
      5. An Azure subscription and its associated ID.
      6. A Windows Virtual Desktop tenant
      Why do you need this?
      1. The Azure Active Directory is your identity provider in the cloud and users authenticate against AAD to get access to the Windows Virtual Desktop service
      2. When launching published Desktops and Applications - Windows still requires Active Directory authentication.
      3. Azure AD Connect is the tool that will provision accounts from AD to AAD to enable 1. above.
4. The Virtual Machines all need to be located on a Virtual Network. That vNet needs access to Active Directory, which can be located either in Azure or on-premises as long as there is connectivity. When Azure deploys new VM's it joins them to your Active Directory domain, and as such the VM's need to locate the Domain Controller via DNS; without this DNS server setting being set, the VM's have no name resolution for the local AD.
The high-level deployment process for a WVD hostpool (and the reason you need these pre-requisites already in place) automates all of the following actions:
      • Deploy a Virtual Machine (or multiples)
      • Join the Virtual Machine to your Active Directory
      • Install the local WVD Client agents and join to the WVD hostpool specified 
      • Publish the default published desktop to the user specified.

1. So let's get item 1 - your Azure Active Directory Tenant ID.

      If you don't already have an AAD then you will need to create one. To do so follow this guide: https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-access-create-new-tenant

If you do have an AAD then we just need to copy the Azure Active Directory Tenant ID. To do this open the Azure portal and on the left click on All Services.

      Either go down to the Identity section and select Azure Active Directory

Alternatively, in the Search field at the top of the All Services blade, start typing Azure Active Directory and the resulting list of services will filter down to display AAD.

Once you have the AAD blade open, on the left go down to Properties and then on the right look for the Directory ID field.

Click on the copy button at the right of this field.

      Now open Notepad and paste this in as Item 1.
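If you prefer PowerShell to the portal, the same value can be read with the Az module (a quick sketch, assuming the Az.Accounts module is installed):

# The Tenant ID is the same value as the Directory ID shown in the portal
Connect-AzAccount
(Get-AzContext).Tenant.Id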

Whilst you are in AAD, create an admin account that will be used as the Windows Virtual Desktop admin account. Go back to the top on the left, click on Users, then click on + New User and create an account such as wvdadmin@contoso.onmicrosoft.com.

      Copy this user ID into your Notepad file as Item 1.1

      Notepad should look like this:
      1. abc4-5a45-4533-8ab4-8991abc98abc
      1.1 wvdadmin@contoso.onmicrosoft.com

      2. Onto Item 2 - your Active Directory

      If you don't have AD already, the easiest way to deploy Active Directory in Azure is to use this Azure Resource Manager template: https://azure.microsoft.com/en-us/resources/templates/active-directory-new-domain/

Or alternatively deploy it manually on a Virtual Machine. Record the IP address of your Domain Controller VM, as you will need it for the vNet DNS setting in step 4.

Once you have Active Directory deployed, create an admin account that you can use in the WVD deployment process to automatically join the host pool VM's that get created to this AD, e.g. "domainjoin@contoso.com".

      Go back to Notepad and enter your AD domain UPN for this user account in full as Item 2

You will also need an account in AD to test with, as a user logging into and launching an app from Windows Virtual Desktop. Create a user account in AD, e.g. test1@contoso.com.
In section 3 below we deploy AAD Connect, which will sync this account with AAD where it will in addition have the full UPN of test1@contoso.onmicrosoft.com.
      Enter this account in Notepad as Item 2.1

Notepad should look like this now:
      1. abc4-5a45-4533-8ab4-8991abc98abc
      1.1 wvdadmin@contoso.onmicrosoft.com
      2. domainjoin@contoso.com
      2.1 test1@contoso.onmicrosoft.com

      3. Now we need to deploy Azure Active Directory Connect to provision your AD users up into AAD.


During the install you will need an AAD global admin account and an AD admin account. For simplicity you can use Items 1.1 and 2 from your Notepad file.

      Once completed AAD connect will provision your test account up into AAD, which we can later use for testing.

4. Now we need to update your Virtual Network with the IP address(es) of your AD domain controllers, so that when new VM's are placed on this vNet and joined to your domain they can locate the Domain Controller using DNS for local name resolution.

In the Azure portal go to the Virtual Network that your domain controller was deployed onto. In the Settings section click on DNS Servers, enter the IP address of your domain controller and click on Save:
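Alternatively, this can be done with the Az.Network PowerShell cmdlets. A minimal sketch, where the vNet name, resource group and domain controller IP address are placeholders for your own values:

# Point the vNet's DNS at the domain controller so new VM's can resolve and join the domain
$vnet = Get-AzVirtualNetwork -Name "wvd-vnet" -ResourceGroupName "wvd-rg"
$vnet.DhcpOptions.DnsServers += "10.0.0.4"
$vnet | Set-AzVirtualNetwork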


5. Finally let's grab your Azure Subscription ID.
Back in the Azure portal open the Subscriptions blade:

      Click on the subscription you want to deploy your hostpools into. Then in the Overview section copy the Subscription ID

      Paste this into your Notepad file as Item 3
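Again, PowerShell will give you this too, assuming you are already signed in with Connect-AzAccount:

# Lists each subscription name with its ID - copy the one you will deploy into
Get-AzSubscription | Select-Object Name, Id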

      Now your Notepad file should look like this:
      1. abc4-5a45-4533-8ab4-8991abc98abc
      1.1. wvdadmin@contoso.onmicrosoft.com
      2. domainjoin@contoso.com
      2.1. test1@contoso.onmicrosoft.com
      3. 12345a125-1234-12a1-123af-123456abc123

If it does, you have all the information you need and are now ready to follow the rather good Windows Virtual Desktop deployment documentation starting at the link below. Open that document and have the two documents open in two tabs side by side.

If you follow the above guide accurately you will successfully deploy your first WVD hostpool. Below are the points where you need to paste the information from your Notepad file into the relevant steps of the process in that guide; follow both guides step by step.

      1. In this document you are asked to provide consent for WVD to use your AAD. 
      From your Notepad file paste Item 1 into the "AAD Tenant GUID or name" field:



      In the next section "Assign the TenantCreator application role to a user in your Azure Active Directory tenant" section you can use the user account in section 1.1 from Notepad.

      In the next section "Create a Windows Virtual Desktop Preview tenant" section in the second PowerShell command: 

      New-RdsTenant -Name <TenantName> -AadTenantId <DirectoryID> -AzureSubscriptionId <SubscriptionID>

      Replace <DirectoryID> with item 1. from Notepad and <SubscriptionID> with item 3, i.e.

      New-RdsTenant -Name <TenantName> -AadTenantId abc4-5a45-4533-8ab4-8991abc98abc -AzureSubscriptionId 12345a125-1234-12a1-123af-123456abc123
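For context, before you can run New-RdsTenant you need to be signed in to the Windows Virtual Desktop management plane. Roughly, it looks like the sketch below, assuming the Microsoft.RDInfra.RDPowerShell module from the main guide is installed:

# Sign in to the WVD management plane with the TenantCreator account from item 1.1,
# then run the New-RdsTenant command above in the same session
Import-Module -Name Microsoft.RDInfra.RDPowerShell
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"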

The resulting PowerShell output will confirm the tenant name that has been created. Copy this into item 4 in your Notepad file, which should now look like:

      1. abc4-5a45-4533-8ab4-8991abc98abc
      1.1 wvdadmin@contoso.onmicrosoft.com
      2. domainjoin@contoso.com
      2.1 test1@contoso.onmicrosoft.com
      3. 12345a125-1234-12a1-123af-123456abc123
      4. YourTenantName


2. In the second stage, "Tutorial: Create a host pool with Azure Marketplace", which deploys the hostpool from within the Azure portal, you will need to enter the remaining items from Notepad. You are directed to the Azure portal and will deploy a Windows Virtual Desktop host pool.

In the first "Basics" section, within "Default desktop users", enter your test user UPN, which is item 2.1 from Notepad:
      In the third "VM Settings" section enter your:
      • AD admin account to do the VM domain join which is item 2. from your Notepad file.
      • As well as entering the vNet that your AD domain controller is on that has the DNS server set correctly providing name resolution for the VM's to then locate the Domain Controller to complete the domain join.






      In the 4th "Windows Virtual Desktop" information section enter your:
      • "Windows Virtual Tenant Name" - which is item 4 in your Notepad File
      • "Windows Virtual Desktop Tenant RDS owner UPN" - which is item 1 in your Notepad file



Finish the remaining screens and the instructions from the main deployment guide, and your WVD hostpool deployment will commence and, if you have followed this pre-guide correctly, will complete successfully.
      The main guide will have the steps to test user access.

       How to deploy Citrix XenApp Essentials on Azure

Citrix have recently released a new addition to the XenApp family called XenApp Essentials.
But what is this new flavour of XenApp and why has it been created?

      So, a brief history lesson.
Microsoft have had for a number of years an Azure service called Azure RemoteApp. This was essentially Remote Desktop Services "as a service" from the Azure cloud.
      It was cost effective, simple to deploy and had many of the great capabilities that only the cloud provides, such as auto scaling, usage based pricing, etc.

      It allowed organisations to publish applications (no desktops) directly to the users, without the need to manage an estate of virtual machines, and the need to build these out to meet the peak user load.
However, on the 31st of August 2017 this product will be switched off - in fact this will be the first ever Azure product to be switched off. The reason it is being switched off is that there simply are not enough users on the service, and the reason there are not enough users is the same reason that enterprises haven't historically used RDS on its own in their on-premises app delivery and virtual desktop estates. These enterprises have been willing to pay Citrix for the additional management capabilities and access to the many tools that they have developed over the last 25 years of extending upon RDS.

So where did that leave this capability and the existing RemoteApp users? Well, Microsoft approached Citrix and asked if they could produce a replacement for RemoteApp. Citrix duly produced XenApp Express, subsequently renamed XenApp Essentials.

      This "Essential" moniker is there to represent only the fact that it is the Essential services of XenApp, because the edition has been designed to replace not another edition of XenApp but rather RemoteApp which itself was a simple solution. Hence Citrix have simplified XenApp Essentials by removing some of the more advanced capabilities to try to draw it closer to what RemoteApp was.
      For example, it only allows application publishing - no desktops which is what RemoteApp provided, there are a limited amount of Policies, RemoteApp had none.


So essentially, Essentials is marketed at RemoteApp replacement, SME's, and greenfield projects inside larger organisations. Enterprises may well be better off choosing one of the other, higher editions of Citrix Cloud running on Azure.

So how do you deploy this?
At the top level, there are only two requirements: 1. an Azure Subscription and 2. a Citrix Cloud account.
This guide assumes you have an Azure subscription. If you don't have one - go and get one now.
What is the Citrix Cloud? Well, it's not a public cloud in the same sense that Azure is. Rather, this is Citrix "cloudifying" their management services, plus NetScaler and StoreFront. It provides Studio and Director as a service via a browser, and optional use of NetScaler and StoreFront as a service.


      Hence, the second requirement is a Citrix Cloud account. As the name suggests this is just an account in the Citrix Cloud. You then have services enabled for that account that can then be consumed by your organisation.
      Easiest way to create a Citrix Cloud Account is by completing the form at https://onboarding.cloud.com/ where you can also optionally register for a trial of the full Citrix Cloud suite.
      Or if you are a RemoteApp user you can register for a XenApp Essentials trial here: https://www.citrix.com/products/citrix-cloud/form/xenapp-essentials-trial/


      So now to deploying XenApp Essentials.
This depends on what Citrix Cloud enablement you have. If you have specifically requested a XenApp Essentials trial account as per the second option above, you don't have to deploy the XenApp Essentials product in Azure as per section 1 below; you can move straight to section 2. You will already have this enablement in your Citrix Cloud account and all you need to deploy is your Azure infrastructure.


      If, however you have a Citrix Cloud account without XenApp Essentials then you will need to deploy XenApp Essentials from the Azure marketplace, so you will need to complete both sections 1 and 2.

      Section 1 - Deploying XenApp Essentials via the Azure Marketplace.

      Step 1:  Go to the Azure Portal and Click on New



Step 2: Type Citrix… and select Citrix XenApp Essentials


      Step 3: Then Select Citrix XenApp Essentials

Then click the Create button on the next blade.
Step 4: In the resulting blade, start by providing a name for the resource itself, choose the Azure subscription in which you would like to place this resource, then either create a new Resource Group or use an existing one, and choose an Azure region to locate it in:
Step 5: Now you will need to connect this resource back to your Citrix Cloud account. Click on the Connect button.




Step 6: This will bring up another browser window for you to enter your Citrix Cloud credentials.




Step 7: If there is an existing Citrix Cloud account, this will return and show you the customer name you chose when you created your Citrix Cloud account.



      Then you just need to select the number of users you want with a minimum of 25, and any additional Data Transfer. 
      Then just click on Create

Step 8: This will take a couple of hours to complete. You can see the progress by looking at the Overview of this resource and checking the Status; you will be ready to proceed when the status changes to "Ready".

Step 9: This doesn't actually deploy anything at this point; it just sets up the billing and related services in the Citrix backend.
      As this takes some time to complete, it’s worth doing two things in the interim.
1. View the Citrix Cloud. Click on "Manage through Citrix Cloud". This will take you directly to your Citrix Cloud account, which, if you are only enabled for XenApp Essentials, will look like this:



This look is unique to XenApp Essentials, as it is designed to be as simple as Azure RemoteApp was. All the other editions have a different, albeit similar, appearance: they show you the standard Studio console, whilst XenApp Essentials is a very simple point-and-click user interface.
      We will come back here to configure our catalogues.
You can achieve the same thing directly by logging into citrix.cloud.com.

2. Start considering the core Azure services that you need, such as VM size, Storage Accounts and storage type, and connectivity; generally the recommendation is to use ExpressRoute for enterprise-grade connectivity.

You will need to have a Virtual Network available with at least one subnet when you come to deploy a catalogue later. This virtual network will need DNS configured using the Custom option, adding the IP address(es) of your DNS server(s), so that the deployment of the master images works correctly, as they need to reach your domain controllers in order to join your domain.


      Section 2 - Deploying Workloads into your Azure Subscription

Step 10: Once the status is Ready you are, well, ready to go. It's worthwhile reading the information here, and once you're ready you can click on I'm ready to start!


      The first thing we will do is create a new Catalog




      Step 11: Give the Catalog a name and Save it.




      Step 12: Now link this to your Azure Subscription. Click on the Subscription Name Drop Down and click on Link an Azure subscription.






      Click on Sign in





      Step 13: You will be redirected to the Azure Sign in page, note it will have "XenApp Essentials" at the top of the page. Log in with the Azure credentials for the subscription you will use to host your VDA’s.


      Then Click on Accept to allow XenApp Essentials to have permissions to the Subscription.



      Step 14: You will be returned to your Citrix Cloud configuration. If you have more than one Azure subscription this will list them all. Select the one(s) you would like to use.


Now ensure the subscription is selected, then choose a Resource Group to deploy into, then select a Virtual Network and Subnet that you will need to have created previously.





      Click on Save

      Step 15: Enter the details of the domain you want these VM’s to join, and click on Save



      Step 16: Now you need to link to a master image that includes your apps for this catalogue. You have three choices:

      1. Select an existing image – Use this if you already have a custom image.

      2. Import a new image - this will require you to have uploaded your VHD to an Azure Storage account.

3. Use a Citrix prepared image – this is mainly for demo purposes.

      For this we will use option three to demonstrate how it works.

      Click on Use a Citrix Prepared Image, and select an image name, then click on Save






      Step 17: Now select the disk type, the capacity on the VM’s that get deployed, and scaling settings.

You can choose Standard disks, which are magnetic disks, or, for better performance, Premium disks, which are SSD backed. Then you can select the user load expected per VM, either by selecting a preconfigured value or by entering your own custom value in accordance with your known app/user workload figures.






      Step 18: Now select your scaling settings.






      Step 19: Once you have everything selected correctly you can click on Start deployment:


      and the Citrix Cloud will start provisioning.



      So what gets deployed?


This will take a few hours to complete, but if you are watching, the Citrix Cloud will first deploy two Virtual Machines that host the Cloud Connector client, with the associated Azure infrastructure components such as NIC's, Storage accounts, Network Security Groups and a Key Vault.

      These components will be deployed in to the same Resource Group that you selected in Step 14. Refresh your Resource Group and you will start to see these resources being deployed.

The Cloud Connector VM's will be named something like XAE60xxx-Edge1 and 2, and will be a Standard A2 v2, but you can scale them up or down manually by going to the Size blade and changing the size to something you would prefer.
After this is complete it will move on to create the Catalog. It will create a new Resource Group using the name of the Catalog as the Resource Group name, and in here it will create the infrastructure required for the Catalog.
Go back to the Resource Groups and refresh, and you will see a new Resource Group called XenApp-"Your catalog name", with components starting to appear in this new Resource Group.




      Section 3 - Publishing Applications and Assigning users



      You will notice that the Catalog has been successfully created when the Citrix Cloud console moves to Section 2



      Now we need to add some applications and some users.


      Step 20: Click on + Apps



Step 21: You will now have two options for finding your application: 1. from the Start menu, or 2. from a path.


      Select Publish from Start menu on the left and then click on the drop down on the right to select your app:
This screen seems a little unintuitive, as the app is published as soon as you click on it; there is no "Publish" type of button. Selecting the radio button to the left of the app just gives you the option to Unpublish the app. Then you just click the X at the top right, which feels as if you are cancelling the action, but you are just closing the window.

Step 22: Now we need to add users. Click on Add Users. In the search field on the right type in your user name:
Here you do need to put a tick in the box and make sure you click the "Assign Users" button at the top.

      You should now get two green ticks at the right of the rows.

      Step 23: Now all that is left is to launch the app. Go to Section 3 which will show you your StoreFront URL. Click on the link and you can log in and launch your applications.

That's it - you now have an app published from Azure, orchestrated by Citrix XenApp Essentials.

      Notes: 
You can easily deploy a global Citrix farm by deploying a Catalog into any of the 30 Azure regions around the world. This is a compelling capability that is next to impossible without the power of Azure. You just need to deploy two Citrix Cloud Connectors in a region, and then deploy a Catalog to that region.

      You can also integrate multiple domains simply by deploying Cloud Connectors that are connected to other domains, and then the standard domain drop down in StoreFront will display the domain names.

You can also add multiple Azure subscriptions, just by going to Subscriptions, clicking on Add Subscription and signing in with credentials for another subscription. This will then allow you to deploy infrastructure into other subscriptions.




      Using PowerShell to create a Windows VM Hosted AD and join VM's to that Domain - in Windows Azure

There seem to be numerous PowerShell script snippets out there to create an AD and to join VM's to that domain. However this Microsoft article (bit.ly/LEOSoc) seems to suggest that people are still having difficulties doing so, myself included - it took hours of trial and error to get the scripts correct.

      So I thought that I would document exactly what I have got that now works every time (for me at least). My intention is that there would be nothing missing to get this to work.

      So we are going to use two PowerShell scripts, one to create a VM that will have AD installed on it, and a second to create a VM that is automatically joined to that domain all hosted in Azure.

There are numerous prerequisites that are still needed in order to get these scripts to work, i.e. they depend on you having an Azure subscription with the following items already created: Storage Account, Certificate, publish settings file, Affinity Group, Virtual Network and subnet. These can all be created by following the link above.

You also need to have Azure PowerShell installed, and to use the PowerShell ISE (Integrated Scripting Environment). Microsoft have released the October version with new features. One of those new features is Azure AD authentication support for configuring PowerShell to integrate with your Azure subscription (see
http://michaelcollier.wordpress.com/2013/10/28/windows-azure-ad-authentication-support-for-powershell), which is great as it avoids having to download and specify a .publishsettings file and to create, upload and install a certificate and specify it in your scripts.

However this version (0.7.0) of the Azure PowerShell cmdlets also has a bug that breaks the function that joins VM's to a domain, as per bit.ly/1b3tf94. This means that you need an older version of the Azure PowerShell cmdlets, namely June's version, which is 0.6.19. This should get resolved in early November.


      You can get this from the Microsoft download site or from here.


      So the scripts are also here. You need two. The first creates a base VM that you will then need to deploy AD on. The second creates a VM that is joined to the domain created by running the first script. You will need some basic scripting knowledge and knowledge of the values that are required in relation to your subscription - only you will know these!


      The brief overall instructions are:

      1. Install Powershell 0.6.19 (until the issue mentioned above is fixed).
      2. Create, upload to your Azure subscription and install locally a suitable certificate (not required if the above issue is resolved)
      3. Get your Azure publish.settings file by following this link:
      https://manage.windowsazure.com/publishsettings/Index?client=vs&SchemaVersion=1.0

      Once you have all of this you are ready to configure your PowerShell scripts. The first script is split into three sections. The first is where you will need to configure:
1. The path to the Azure.psd1 file (if it's not in the default location).
2. Your certificate and its thumbprint (you get this from the Azure portal in SETTINGS > MANAGEMENT CERTIFICATES). You also need to have installed the certificate locally in the personal certificate store.
      3. The path to your publish.settings file you got in step 3 above.
      4. Define your subscription name and storage account using the $Storage variable

      The second section is where you will define all of the variables related to your Azure subscription and the VM your are creating, these include:

      1. DNS Service name
      2. VMName
3. Azure Windows OS image (these change frequently and a current list can be retrieved by running Get-AzureVMImage; paste the image name you want into the script - see the snippet after this list)
      4. Affinity Group
      5. Virtual Network and Subnet
      6. Cloud Service
      7. Password for a local user account
      8. VMSize
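
For item 3, a quick way to find a current image name is to filter the output of Get-AzureVMImage; the Label filter below is just an example:

# List current Windows Server images and copy the ImageName you want into the script
Get-AzureVMImage | Where-Object { $_.Label -like "*Windows Server 2012*" } | Select-Object ImageName, PublishedDate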

      The third section has the PowerShell commands used in conjunction with the variable settings to build your VM.


Once you have specified all of these variables you can paste the script into the PowerShell ISE and press Enter. If the text turns white it is all good to go, and you will be prompted to enter a username. This username is for a local account on the VM that you will use to connect over RDP, with the password specified in the script. PowerShell will then connect to the Azure API and configure your VM. You then need to follow the instructions in bit.ly/LEOSoc to configure AD.


      Once you have done this then you are ready to add VM's to that domain as part of the provisioning process. This requires the second script.


This script requires many of the same variables as script one, so just copy them in where appropriate. The one different variable that you need to set is $myDNS; this needs to point to the VM you created with the first script, on which you manually deployed AD and DNS.

Once you have the variables configured correctly, again just copy and paste the script into the PowerShell ISE and it will create a VM that is automatically joined to your Windows VM hosted Active Directory. If you want to deploy additional domain-joined VM's, just change the variables that need to be unique, such as VMName, and run it again, saving you a lot of time.
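For illustration, the domain-join part of the second script is built around the -WindowsDomain parameter set of Add-AzureProvisioningConfig. The sketch below uses placeholder names and values throughout, and parameters vary slightly between versions of the classic Azure module, so treat the downloadable scripts above as the source of truth:

# DNS settings pointing at the domain controller created by the first script
$myDNS = New-AzureDns -Name "ADDNS" -IPAddress "10.0.0.4"

# Build a VM configuration that joins the domain during provisioning
# (newer versions of the cmdlets also require an -AdminUsername parameter here)
$vmConfig = New-AzureVMConfig -Name "MEMBER01" -InstanceSize "Small" -ImageName $image |
    Add-AzureProvisioningConfig -WindowsDomain -Password $localPwd `
        -JoinDomain "contoso.local" -Domain "CONTOSO" `
        -DomainUserName "Administrator" -DomainPassword $domPwd |
    Set-AzureSubnet -SubnetNames "Subnet-1"

# Create the VM in the cloud service, on the virtual network, with the custom DNS setting
New-AzureVM -ServiceName "MyCloudService" -AffinityGroup "MyAffinityGroup" -VNetName "MyVNet" -DnsSettings $myDNS -VMs $vmConfig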

