RDS 2016 farm: RDS Final configuration

This article is the final topic about how to deploy Remote Desktop Services in Microsoft Azure with Windows Server 2016. In this topic, we will apply the final RDS configuration, such as the certificates, the collection and some custom settings. Then we will try to open a remote application from the portal.

Certificates

Before creating the collection, we can configure the certificates for RD Web Access, RD Gateway and the brokers. You can request a public certificate for this, or you can use your own PKI. If you do not use your own PKI, you have to distribute the certificate authority certificates to all clients. You also have to provide the CRL/OCSP responder. If you use a public certificate, there is almost no client-side configuration. You can get more information about the required certificates here.

Once you have your certificate(s), you can open the properties of the RDS farm from the Server Manager. Then navigate to certificates. In this interface, you can add the certificate(s) for each role.
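
If you prefer PowerShell, the RemoteDesktop module can assign a certificate to each role. Below is a minimal sketch, assuming the certificate was exported to a PFX file (the path is a placeholder); repeat for the RDGateway, RDWebAccess and RDRedirector roles:

$Password = Read-Host -AsSecureString -Prompt "PFX password"
# Bind the PFX to the publishing role on the broker
Set-RDCertificate -Role RDPublishing `
                  -ImportPath C:\Certs\rds-homecloud.pfx `
                  -Password $Password `
                  -ConnectionBroker azrdb0.homecloud.net `
                  -Force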

On the client side, you should add a setting by GPO or with the local policy editor. Get the RD Connection Broker – Publishing thumbprint and copy it. Then edit the setting Specify SHA1 thumbprints of certificates representing trusted .rdp publishers and add the certificate thumbprint without spaces. This setting removes a warning pop-up for the clients.
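
The publishing thumbprint can also be retrieved with PowerShell instead of the certificate console; a small sketch:

# Get the certificate bound to the publishing role
$Cert = Get-RDCertificate -Role RDPublishing -ConnectionBroker azrdb0.homecloud.net
# The GPO setting expects the thumbprint without spaces
$Cert.Thumbprint -replace ' ', ''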

Create and configure the collection

To create the collection, I use the following PowerShell cmdlet:

New-RDSessionCollection -CollectionName RemoteApps `
                        -SessionHost azrdh0.homecloud.net, azrdh1.homecloud.net `
                        -CollectionDescription "Remote application collection" `
                        -ConnectionBroker azrdb0.homecloud.net

Once you have created the collection, the RDS farm should indicate a new collection:

Now we can configure the User Profile Disks location:

Set-RDSessionCollectionConfiguration -CollectionName RemoteApps `
                                     -ConnectionBroker azrdb0.homecloud.net `
                                     -EnableUserProfileDisk `
                                     -MaxUserProfileDiskSizeGB 10 `
                                     -DiskPath \\SOFS\UPD$

If you edit the properties of the collection, you should have this User Profile Disk configuration:

In the \\sofs\upd$ folder, you can check that you have new VHDX files as below:

From the Server Manager, you can configure the collection properties as below:

Add applications to the collection

The collection that we have created is used to publish applications. So, you can install each application you need on all RD Host servers. Once the applications are installed, you can publish them. Open the collection properties and click on add applications in the RemoteApp Programs part.

Then select the applications you want to publish. If the application you want to publish is not available in the list, you can click on add.

Then the wizard confirms the applications that will be published.
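
The same publication can be scripted with New-RDRemoteApp; a minimal sketch that publishes the calculator (the display name and path are just examples):

New-RDRemoteApp -CollectionName RemoteApps `
                -DisplayName "Calculator" `
                -FilePath "C:\Windows\System32\calc.exe" `
                -ConnectionBroker azrdb0.homecloud.net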

Test

Now that applications are published, you can browse to the RD Web Access portal. In my configuration, I have added a DNS record which is bound to the Azure Load Balancer public IP. Specify your credentials and click on Sign In.

Click on the application of your choice.

I have chosen the calculator. As you can see in the Task Manager, the calculator runs through a Remote Desktop connection. Great, it is working.

Conclusion

This series of topics about Remote Desktop Services has shown you how to deploy the farm in Azure. We saw that Windows Server 2016 brings a lot of new features that ease the deployment in Azure. However, you can also deploy the RDS farm on-premises if you wish.

RDS 2016 Farm: Deploy the Microsoft Azure VM

This topic is part of a series about how to deploy a Windows Server 2016 RDS farm in Microsoft Azure. Previously, we created the network resources, the storage account for diagnostics and the Windows image. In this topic, we will create all the Azure VMs required for the solution. The deployment will be processed from a JSON template.

Github

The templates for this series are located in my GitHub. I have created a folder called RDSFarm that contains the JSON templates. For this topic, I have used RDS-VMs.json.

JSON template explanation

In this template, I create an availability set for each kind of service, so I have 5 availability sets (Domain Controllers, File Servers, RD Host, RD Broker and RD Gateway). The following code block is repeated for each availability set:

{
      "type": "Microsoft.Compute/availabilitySets",
      "sku": {
        "name": "Aligned"
      },
      "name": "[parameters('ASDomainControllersName')]",
      "apiVersion": "[variables('computeResouresApiVersion')]",
      "location": "[variables('ResourcesLocation')]",
      "tags": {
        "displayName": "AS_DomainControllers"
      },
      "properties": {
        "platformUpdateDomainCount": 5,
        "platformFaultDomainCount": 2
      }
    }

Then I create the virtual network adapters. Each VM has one network adapter, except the File Servers which have two (cluster and management). Each vNIC is connected to the right subnet. You can also see that I have created a loop (copy section): because each kind of service has at least two VMs, the loop avoids duplicating the same code block several times.

{
      "type": "Microsoft.Network/networkInterfaces",
      "name": "[concat(parameters('PrefixNameDC'), copyindex())]",
      "apiVersion": "[variables('NetworkResouresApiVersion')]",
      "location": "[variables('ResourcesLocation')]",
      "tags": {
        "displayName": "vNIC_DomainControllers"
      },
      "copy": {
        "name": "DCnicLoop",
        "count": "[parameters('numberOfDC')]"
      },
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "subnet": {
                "id": "[Variables('vNetSubIntRef')]"
              }
            }
          }
        ],
        "dnsSettings": {
          "dnsServers": []
        },
        "enableIPForwarding": false
      }
    }

Next, I create the data disks. File Servers have four data disks each (for Storage Spaces Direct). Each Domain Controller has one data disk to host the AD database, and RD Hosts have a data disk for applications. All these disks are managed disks. I have also made a loop for each kind of data disk:

{
      "type": "Microsoft.Compute/disks",
      "name": "[concat(parameters('PrefixNameDC'), copyindex(),'-Data01')]",
      "apiVersion": "[variables('computeResouresApiVersion')]",
      "location": "[variables('ResourcesLocation')]",
      "tags": {
        "displayName": "Disks_DomainControllers"
      },
      "copy": {
        "name": "DCDskLoop",
        "count": "[parameters('numberOfDC')]"
      },
      "properties": {
        "creationData": {
          "createOption": "Empty"
        },
        "accountType": "Standard_LRS",
        "diskSizeGB": 10
      }
    }

I have also created a public IP for the RD Web Access load balancer:

{
      "type": "Microsoft.Network/publicIPAddresses",
      "name": "[parameters('PublicIPName')]",
      "apiVersion": "[variables('NetworkResouresApiVersion')]",
      "location": "[variables('ResourcesLocation')]",
      "tags": {
        "displayName": "Public IP Address"
      },
      "properties": {
        "publicIPAllocationMethod": "Static",
        "idleTimeoutInMinutes": 4
      },
      "dependsOn": []
    }

To finish, the following JSON code block creates the VMs. I have a code block for each kind of VM, and I use a loop to deploy the same VM several times with a different name. I use the Windows image to deploy the VMs. Credentials are provided from parameters. Boot diagnostics are enabled and the logs are stored in the storage account. Each vNIC is bound to the right VM, and the VMs are added to their availability set and connected to the right data disks.

{
      "name": "[concat(parameters('PrefixNameDC'), copyindex())]",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "[variables('computeResouresApiVersion')]",
      "location": "[variables('ResourcesLocation')]",
      "tags": {
        "displayName": "VM_DomainControllers"
      },
      "copy": {
        "name": "DCVMLoop",
        "count": "[parameters('NumberOfDC')]"
      },
      "dependsOn": [
        "[resourceId('Microsoft.Compute/availabilitySets', parameters('ASDomainControllersName'))]",
        "[resourceId('Microsoft.Network/networkInterfaces', concat(parameters('PrefixNameDC'), copyindex()))]"
      ],
      "properties": {
        "osProfile": {
          "computerName": "[concat(parameters('PrefixNameDC'), copyindex())]",
          "adminUsername": "[parameters('adminUser')]",
          "adminPassword": "[parameters('adminPassword')]",
          "windowsConfiguration": {
            "provisionVmAgent": "true"
          }
        },
        "hardwareProfile": {
          "vmSize": "Standard_DS1_v2"
        },
        "storageProfile": {
          "imageReference": {
            "id": "[parameters('OSDiskMasterPath')]"
          },
          "osDisk": {
            "name": "[concat(parameters('PrefixNameDC'), copyindex(),'-OS')]",
            "createOption": "FromImage",
            "managedDisk": {
              "storageAccountType": "Standard_LRS"
            }
          },
          "dataDisks": [
            {
              "lun": 2,
              "name": "[concat(parameters('PrefixNameDC'), copyindex(),'-Data01')]",
              "createOption": "Attach",
              "managedDisk": {
                "id": "[resourceId('Microsoft.Compute/disks', concat(parameters('PrefixNameDC'), copyindex(),'-Data01'))]"
              }
            }
          ]
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "id": "[resourceId('Microsoft.Network/networkInterfaces', concat(parameters('PrefixNameDC'), copyindex()))]"
            }
          ]
        },
        "diagnosticsProfile": {
          "bootDiagnostics": {
            "enabled": true,
            "storageUri": "[reference(resourceId('rdsfarm', 'Microsoft.Storage/storageAccounts', parameters('Sto_LogsAccount')), '2015-06-15').primaryEndpoints['blob']]"
          }
        },
        "availabilitySet": {
          "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('ASDomainControllersName'))]"
        }
      }
    }

Template deployment

To run the deployment with the JSON template, go to the marketplace and search for Template Deployment.

Then, copy/paste the template. You should have something like this:

Next, change the parameters as you wish and click on Purchase.
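
The same deployment can also be run from PowerShell with the AzureRM module; a minimal sketch (the parameters file name is an assumption):

# Deploy the JSON template into the RDSFarm resource group
New-AzureRmResourceGroupDeployment -ResourceGroupName RDSFarm `
                                   -TemplateFile .\RDS-VMs.json `
                                   -TemplateParameterFile .\RDS-VMs.parameters.json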

After the deployment, I stopped all the VMs to avoid paying immediately for VMs that are not used.
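
For example, all the VMs of the resource group can be deallocated in one pass (a sketch using the AzureRM cmdlets):

# Deallocate every VM of the resource group to stop compute billing
Get-AzureRmVM -ResourceGroupName RDSFarm | ForEach-Object {
    Stop-AzureRmVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force
}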

Result

Once the deployment is finished, you should have several Azure VMs depending on the loop settings. On my side, I have 10 Azure VMs.

If I select a VM such as a file server, you can see that the managed disks are bound to the Azure VM.

The network interfaces are also connected to the server and associated with the right subnet.

The Azure VMs are also placed in availability sets.

To finish, boot diagnostics are enabled and the logs are stored in the storage account.

Next topic

In the next topic, I will configure the domain controllers. I'll set the AD site and I'll promote the Azure domain controllers.

RDS 2016 Farm: Create Microsoft Azure networks, storage and Windows image

This topic is part of a series about how to deploy a Windows Server 2016 RDS farm in Microsoft Azure. In this topic, we will see how to deploy the Microsoft Azure network resources and the storage account, and how to prepare a Windows image.

Github

I have published the complete JSON template on my GitHub. You can copy it and make your own modifications.

JSON template explanation

The JSON template consists of parameters, variables and resources. Parameters and variables are easy to understand. However, it is a little more complicated for resources. The resource below is a virtual network, which takes its settings from parameters and variables. The JSON code below creates a virtual network with four subnets (Internal, DMZ, Cluster and Gateway).

{
      "apiVersion": "[variables('API-Version')]",
      "location": "[variables('ResourcesLocation')]",
      "name": "[parameters('vNETName')]",
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "[parameters('vNETPrefix')]"
          ]
        },
        "subnets": [
          {
            "name": "[parameters('vNETSubIntName')]",
            "properties": {
              "addressPrefix": "[parameters('vNETSubIntPrefix')]"
            }
          },
          {
            "name": "[parameters('vNETSubExtName')]",
            "properties": {
              "addressPrefix": "[parameters('vNETSubExtPrefix')]"
            }
          },
          {
            "name": "[parameters('vNETSubCluName')]",
            "properties": {
              "addressPrefix": "[parameters('vNETSubCluPrefix')]"
            }
          },
          {
            "name": "[Parameters('vNETSubGtwName')]",
            "properties": {
              "addressPrefix": "[Parameters('vNETSubGtwPrefix')]"
            }
          }
        ]
      },
      "tags": {
        "displayName": "Virtual Network"
      },
      "type": "Microsoft.Network/virtualNetworks"
    },

The following block code creates a Public IP address for the Azure Gateway.

{
      "apiVersion": "[variables('API-Version')]",
      "location": "[variables('ResourcesLocation')]",
      "name": "[parameters('S2SPIPName')]",
      "properties": {
        "publicIPAllocationMethod": "Dynamic"
      },
      "tags": {
        "displayName": "Public IP Address"
      },
      "type": "Microsoft.Network/publicIPAddresses"
    }

The following JSON code deploys the local gateway. S2SGtwOnPremPIP specifies the public IP address of the on-premises gateway, and S2SLocalIPSubnet specifies the on-premises routed IP subnets.

{
      "apiVersion": "[variables('API-version')]",
      "location": "[variables('ResourcesLocation')]",
      "name": "[parameters('S2SGtwOnPremName')]",
      "properties": {
        "localNetworkAddressSpace": {
          "addressPrefixes": [
            "[parameters('S2SLocalIPSubnet')]"
          ]
        },
        "gatewayIpAddress": "[parameters('S2SGtwOnPremPIP')]"
      },
      "tags": {
        "displayName": "Local Gateway"
      },
      "type": "Microsoft.Network/localNetworkGateways"
    }

The following JSON code deploys the Microsoft Azure gateway, using the previously created public IP address. The Microsoft Azure gateway is located in the gateway subnet.

{
      "apiVersion": "[variables('API-version')]",
      "dependsOn": [
        "[concat('Microsoft.Network/publicIPAddresses/', parameters('S2SPIPName'))]",
        "[concat('Microsoft.Network/virtualNetworks/', parameters('vNETName'))]"
      ],
      "location": "[Variables('Resourceslocation')]",
      "name": "[parameters('S2SGtwAzureName')]",
      "properties": {
        "enableBgp": false,
        "gatewayType": "Vpn",
        "ipConfigurations": [
          {
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "publicIPAddress": {
                "id": "[resourceId('Microsoft.Network/publicIPAddresses',parameters('S2SPIPName'))]"
              },
              "subnet": {
                "id": "[variables('vNETSubGtwRef')]"
              }
            },
            "name": "vnetGatewayConfig"
          }
        ],
        "vpnType": "[parameters('S2SGtwVPNType')]"
      },
      "tags": {
        "displayName": "Azure Gateway"
      },
      "type": "Microsoft.Network/virtualNetworkGateways"
    }

To finish, the following block code creates a storage account. This storage account will be used for VM diagnostic logs.

{
      "name": "[parameters('StoAcctLogName')]",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2016-05-01",
      "tags": {
        "displayName": "Log Storage Account"
      },
      "sku": {
        "name": "[parameters('StoAcctLogType')]"
      },
      "kind": "Storage",
      "location": "[variables('ResourcesLocation')]"
    }

Import the template

To import the template, connect to Microsoft Azure and search for Template Deployment. Copy/paste the template. You should have something as below:

Then change the parameters as you wish and click on Purchase (don't worry, it's free :p).
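
Before clicking Purchase, you can also validate the template from PowerShell; a small sketch (the file name is an assumption):

# Validate the template and its parameters without deploying anything
Test-AzureRmResourceGroupDeployment -ResourceGroupName RDSFarm `
                                    -TemplateFile .\RDS-Network.json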

Once the template is deployed, you should have 5 resources as below. So the virtual network, the gateways and the storage account are created.

You can review the virtual network configuration as the following screenshot:

The public IP is also created:

Create the VPN connection

Now I create the VPN connection between on-premises and Microsoft Azure. Select the on-premises gateway and click on Configuration. Check that the local gateway IP address is correct.

Then select Connections and create a new connection. Provide a name, select Site-to-site, and specify the virtual network gateway and the local network gateway. To finish, provide a shared key.
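
The same connection can be created with PowerShell; a minimal sketch, where the gateway names, the location and the key are placeholders:

$AzureGtw = Get-AzureRmVirtualNetworkGateway -Name AzureGateway -ResourceGroupName RDSFarm
$LocalGtw = Get-AzureRmLocalNetworkGateway -Name OnPremGateway -ResourceGroupName RDSFarm

# Create the site-to-site IPsec connection with a pre-shared key
New-AzureRmVirtualNetworkGatewayConnection -Name S2S-HomeCloud `
                                           -ResourceGroupName RDSFarm `
                                           -Location "West Europe" `
                                           -VirtualNetworkGateway1 $AzureGtw `
                                           -LocalNetworkGateway2 $LocalGtw `
                                           -ConnectionType IPsec `
                                           -SharedKey '<Shared Key>'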

Now, you have to configure your local gateway. I have a Ubiquiti gateway and I have set it up with the following command lines:

set vpn ipsec auto-firewall-nat-exclude disable
set vpn ipsec disable-uniqreqids
set vpn ipsec esp-group esp-azure compression disable
set vpn ipsec esp-group esp-azure lifetime 3600
set vpn ipsec esp-group esp-azure mode tunnel
set vpn ipsec esp-group esp-azure pfs disable
set vpn ipsec esp-group esp-azure proposal 1 encryption aes256
set vpn ipsec esp-group esp-azure proposal 1 hash sha1
set vpn ipsec ike-group ike-azure ikev2-reauth no
set vpn ipsec ike-group ike-azure key-exchange ikev2
set vpn ipsec ike-group ike-azure lifetime 28800
set vpn ipsec ike-group ike-azure proposal 1 dh-group 2
set vpn ipsec ike-group ike-azure proposal 1 encryption aes256
set vpn ipsec ike-group ike-azure proposal 1 hash sha1
set vpn ipsec ipsec-interfaces interface pppoe0
set vpn ipsec nat-traversal enable
set vpn ipsec site-to-site peer <Azure Gateway Public IP> authentication mode pre-shared-secret
set vpn ipsec site-to-site peer <Azure Gateway Public IP> authentication pre-shared-secret <Shared Key>
set vpn ipsec site-to-site peer <Azure Gateway Public IP> connection-type initiate
set vpn ipsec site-to-site peer <Azure Gateway Public IP> default-esp-group esp-azure
set vpn ipsec site-to-site peer <Azure Gateway Public IP> ike-group ike-azure
set vpn ipsec site-to-site peer <Azure Gateway Public IP> ikev2-reauth inherit
set vpn ipsec site-to-site peer <Azure Gateway Public IP> local-address any
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 allow-nat-networks disable
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 allow-public-networks disable
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 esp-group esp-azure
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 local prefix 10.10.0.0/16
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 protocol all
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 remote prefix 10.11.0.0/16

Once the VPN is connected, you should have a Succeeded status as below:

Create the Windows Server 2016 image

To create the Windows Server 2016 image, I first deploy a new Azure VM. I call it zTemplate.

Then I choose a VM size.

I choose to use managed disks and I connect the VM to the Internal subnet. I don't need a Network Security Group for this VM. I enable boot diagnostics and I choose the previously created storage account to store the logs.

Once the Azure VM is started, I customize the operating system and apply updates. Then I run sysprep as below:
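
For reference, the equivalent command line uses the standard sysprep switches:

# Generalize the OS and shut the VM down so it can be captured
& "$env:SystemRoot\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown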

Once the VM is stopped, I click on Capture:

Then I specify an image name and the resource group. I also choose to automatically delete the VM after creating the image.
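
The capture can also be done with the AzureRM managed-disk image cmdlets; a sketch, assuming the VM is named zTemplate and the image name is a placeholder (note that, unlike the portal option above, this does not delete the source VM):

Stop-AzureRmVM -ResourceGroupName RDSFarm -Name zTemplate -Force
Set-AzureRmVM -ResourceGroupName RDSFarm -Name zTemplate -Generalized

# Build an image object from the generalized VM and create the image
$VM = Get-AzureRmVM -ResourceGroupName RDSFarm -Name zTemplate
$ImageCfg = New-AzureRmImageConfig -Location "West Europe" -SourceVirtualMachineId $VM.Id
New-AzureRmImage -Image $ImageCfg -ImageName WS2016-Image -ResourceGroupName RDSFarm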

At the end of this topic, I have the following resources in the resource group:

Next topic

In the next topic, we will deploy all the Azure VMs for the Remote Desktop farm. The VMs will be deployed from the Windows image with a JSON template.

Deploy a Windows Server 2016 RDS Farm in Microsoft Azure

Remote Desktop Services (RDS) has been improved in Windows Server 2016. RDS farm deployment has been simplified, especially for the cloud. For example, you can now leverage Azure SQL to host the RD Broker database. This series of topics aims to show you how to deploy a highly available RDS farm in Microsoft Azure. I'll describe how to deploy the Microsoft Azure resources and how to configure the operating systems.

This topic introduces the RDS farm architecture overview.

Azure SQL for RD Broker database

Up to Windows Server 2012 R2, when you built a highly available RDS farm, you had at least two RD Brokers. This mode required a highly available SQL Server database to avoid a single point of failure. If you deployed the RDS farm in Azure, you had two possibilities to deploy the RD Broker database in HA:

  • SQL Server with database mirroring (almost deprecated)
  • SQL Server with AlwaysOn Availability Group (2x SQL Server Enterprise = uber expensive).

In Windows Server 2016, you can host the RD Broker database in Azure SQL (PaaS). You save money because you no longer need two additional Azure VMs and SQL Server licenses. In this series, we will leverage this new feature.
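
For reference, broker high availability against an Azure SQL database is configured with the Set-RDConnectionBrokerHighAvailability cmdlet; a minimal sketch, where the server, database, credentials and client access name are placeholders (the exact ODBC driver name depends on what is installed on the brokers):

Set-RDConnectionBrokerHighAvailability -ConnectionBroker azrdb0.homecloud.net `
    -ClientAccessName rdbroker.homecloud.net `
    -DatabaseConnectionString ("DRIVER={ODBC Driver 13 for SQL Server};" +
        "SERVER=tcp:<server>.database.windows.net,1433;DATABASE=<database>;" +
        "UID=<user>;PWD=<password>;Encrypt=yes")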

Personal Session Desktops

In Windows Server 2012 R2, there are two kinds of deployment in a Remote Desktop farm. The first is the session-based desktop deployment, which enables the user to connect to a remote server and get a desktop. All users share the same servers and so the same settings. Users can customize applications, and settings are stored in User Profile Disks. You can also provide a RemoteApp RDP file to users; when the RDP session is opened, only the applications are shown to users. They don't see any desktop, and they can customize the applications. Users think that the application runs on their computer, but it is in fact an RDP session.

The second model is the virtual machine-based deployment. This solution is based on virtual machines and Hyper-V. When a user opens a virtual desktop, a VM is deployed and each user is isolated in their own virtual machine. This solution requires dedicated Hyper-V hosts and so can't be deployed in Microsoft Azure.

Windows Server 2016 brings a new model called Personal Session Desktops. We can see this new model as a mix between virtual machine-based and session-based desktop deployment. Each user has his own personal desktop with specific user rights. It is great for cloud deployments, and you can provide Desktop as a Service.

In this series, I will implement a classic session collection based on RemoteApps.

Architecture Overview

In this design, I have two sites: the on-premises site and Microsoft Azure. On the on-premises site, I have an Active Directory forest called homecloud.net. I will deploy a site-to-site VPN between both sites to extend the on-premises AD forest to Azure.

From the Microsoft Azure perspective, I will have two kinds of servers: those reachable through a public IP and those that are not. This is why I have two "zones": DMZ and Internal. I'll deploy all the Remote Desktop roles:

  • RD Web Access and RD Gateway share two Azure VMs
  • RD Broker and RD Licensing share two Azure VMs. The RD Broker database will be located in Microsoft Azure SQL.
  • Two RD Hosts that will host Personal Session Desktops
  • Two File Servers with Storage Spaces Direct for User Profile Disks
  • Two Domain Controllers of homecloud.net.

Microsoft Azure resource requirements

To host this solution, several components are required. In the next topics, I'll show you how I have deployed all these resources. These resources belong to a resource group called RDSFarm. The following resources will be deployed:

  • Virtual network: to interconnect resources
    • DMZ subnet: 10.11.1.0/24
    • Internal subnet: 10.11.0.0/24
    • Cluster subnet: 10.11.100.0/24
    • Gateway subnet: 10.11.255.0/24
  • Azure Gateway + Site-to-Site VPN connection
  • Local Gateway
  • Storage account for VM diagnostics
  • 2x Azure VM Domain Controllers with vNIC + OS Disk + 1x Data Disk each + Availability Set
  • 2x Azure VM RD Web Access (+ RD Gateway) with vNIC + OS Disk + Load Balancer (Public IP) + Availability Set
  • 2x Azure VM File Servers with 2x vNICs (1x Internal + 1x Cluster) + OS Disk + 4x Data Disks each + Availability Set
  • 2x Azure VM RD Broker (+ RD Licensing) with vNIC + OS Disk + Availability Set
  • 2x Azure VM RD Host with vNIC + OS Disk + 1x Data Disk each + Availability Set
  • 1x Azure SQL
  • 1x Azure VM Image (Windows Server 2016 image)

Each type of Azure VM is in an availability set to ensure a 99.95% SLA. Regarding the VM disks (OS + data), I use the new managed disks, so I don't create a storage account for the Azure VM disks. The storage account will be used for VM diagnostic logs. The on-premises site will be connected to Microsoft Azure through a site-to-site VPN, so an Azure gateway and a local gateway are required to connect both sites.

Active Directory design

On the on-premises site, I have two Active Directory sites. Each site has two domain controllers, and the sites are connected by a replication link. This forest is extended to Azure through the site-to-site VPN. The domain controllers in Azure will be in a specific AD site called Azure. The Azure AD site will replicate only with Lyon-HyperV.

File Server design

To store the User Profile Disks, I'll deploy a Storage Spaces Direct cluster in Azure. Thanks to S2D, I'm able to leverage Scale-Out File Server (SOFS), which provides distributed shares. Storage Spaces Direct enables file servers to be deployed in high availability. The User Profile Disks will be stored in this file server cluster.
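
For reference, the S2D/SOFS stack boils down to a few cmdlets; a minimal sketch, where the cluster name, node names, volume size and share permissions are assumptions (the SOFS name and UPD$ share match the ones used in this series):

# Create the cluster without storage, then enable Storage Spaces Direct
New-Cluster -Name AzFSClu -Node azfls0, azfls1 -NoStorage
Enable-ClusterStorageSpacesDirect

# Create a CSV volume and expose it through a Scale-Out File Server share
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName UPD -FileSystem CSVFS_ReFS -Size 100GB
Add-ClusterScaleOutFileServerRole -Name SOFS
New-Item -Path C:\ClusterStorage\Volume1\UPD -ItemType Directory
# The RD Session Host computer accounts need full control on the UPD share
New-SmbShare -Name UPD$ -Path C:\ClusterStorage\Volume1\UPD -FullAccess "homecloud\Domain Computers"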

Remote Desktop farm design

All the Remote Desktop components will be deployed. The RD Web Access and RD Gateway roles share two virtual machines located in the DMZ subnet. These VMs will be connected to a load balancer with a public IP. In the internal network, there are two servers for RD Broker and RD Licensing, and two RD Hosts (or more depending on the needs). A Microsoft Azure SQL database will be deployed to host the RD Broker database. The isolation between DMZ and Internal can be implemented with Network Security Groups.

Next topic

In the next topic, I'll show you how to deploy the network resources in Azure thanks to a JSON template. Then I will create a Windows Server 2016 Datacenter image to deploy all the VMs from this image.

Make a Veeam backup copy to Microsoft Azure

Veeam can make a backup copy from on-premises to Microsoft Azure. This is possible thanks to an appliance available on Microsoft Azure called Veeam Cloud Connect. Thanks to Veeam Cloud Connect, you can make a backup copy to Microsoft Azure. This makes it easy to follow the 3-2-1 backup rule (3 copies, on 2 different media, with 1 copy on a remote site). This topic shows you how to make this backup copy from on-premises to Microsoft Azure.

On-Premise architecture overview

I have deployed Veeam Backup & Replication 9.5 Update 1 in a Hyper-V virtual machine. This VM is located on a 2-node cluster based on Storage Spaces Direct. The backups are stored in a Synology NAS connected through SMB. I have already set a backup job to protect the domain controllers. I will make a backup copy of this job to Microsoft Azure.

Deploy Veeam Cloud Connect

First, Veeam Cloud Connect must be deployed in Microsoft Azure. Log on to the Azure portal and look for Veeam Cloud Connect for Enterprise.

Then configure the VM as you wish. Keep in mind that some data disks must be added to the VM for the backup repositories, so select a VM size that allows the right number of disks. For this topic, I deploy all Veeam Cloud Connect services in the same VM, but for production you can spread the services across several VMs. For example, you can dedicate some VMs to the backup repositories and others to the gateway. For my lab needs, I have deployed a DS2_V2 VM.

Once your VM is deployed, we can add some disks for the backup repositories. To add a disk, navigate to the VM settings and select disks.

Once you have added the additional disks, we have to configure the public IP address statically. To set the static IP, navigate to the public IP resource and click on configuration. Then change the assignment setting to static.

You can now connect to the VM through RDP.

Configure Veeam Cloud Connect

The first time you connect to the VM, you have to do the following tasks:

  • Add the Veeam Cloud Connect license
  • Upgrade Veeam Backup & Replication to the same version as on-premises

Once these tasks are done, you can format the additional disks as below:
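
A quick way to initialize and format all the raw data disks at once is PowerShell (a sketch; the volume label is arbitrary and ReFS is an assumption, NTFS works as well):

# Initialize every raw disk, create one partition per disk and format it
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "VeeamRepo"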

Add a backup repository

Now you can open the Veeam Cloud Connect console (which is in fact a Veeam Backup & Replication console). Navigate to backup infrastructure and select Add Backup Repository.

Give a name and a description for your backup repository.

Next, specify the type of backup repository. Because the backup will be located on disks directly attached to the VM, I choose Microsoft Windows Server.

Then specify the repository server. You can add a remote VM if you want. For this topic, I choose to store backups locally.

Next I specify the drive letter of my additional disk.

In the next screen, I don’t enable the vPower NFS because Hyper-V doesn’t need it.

Configure the Cloud Gateway

Now that the backup repository is set, we can configure the Cloud Gateways. The on-premises Veeam Backup & Replication connects to Veeam in Microsoft Azure through the Cloud Gateways. You can deploy this role to other servers (with, for example, a load balancer). For this topic, the cloud gateway is the same server as the other roles. To configure the Cloud Gateways, navigate to Cloud Connect and select the default cloud gateway. Right click on it and choose Properties.

Select the server and click on next. If you have configured a Network Security Group, don’t forget to allow the external port.

Select This server is located behind NAT, and specify the static public IP of the Azure VM.

Add a tenant

To finish the Veeam Cloud Connect configuration, we should create a tenant. Navigate to the Cloud Connect tab and right click on Tenants. Then select Add tenant.

Specify credentials for this tenant and choose which resources are assigned to it.

In the next screen, you can define the number of concurrent tasks and limit the bandwidth for this tenant.

You can also define a quota associated with this tenant. With the setting below, the tenant can use 1000 GB on the backup repositories.

To finish, specify which backup repository the tenant can use.

At this moment, we have finished configuring Veeam Cloud Connect. We can now connect to Veeam Cloud Connect from the on-premises Veeam Backup & Replication.

Add Cloud repository to Veeam Backup & Replication

Open your On-Premise Veeam Backup & Replication and navigate to backup infrastructure. Click on Add Service Provider.

Next, specify the static Public IP address of the Veeam Cloud Connect.

In the credentials screen, I add the credentials that I set when I added the tenant in Veeam Cloud Connect.

If Veeam Backup & Replication can connect to Veeam Cloud Connect, you should see the available cloud repositories.

Once you have finished, you should have the Veeam Cloud Connect listed in service providers.

Make the backup copy to Microsoft Azure

Now that the on-premises Veeam Backup & Replication is connected to Veeam Cloud Connect, we can make a backup copy. Select a job and click on Backup Copy.

Give a name and a description for this backup copy job. Then choose when the backup copies are created.

Next, add virtual machines to the backup copy job.

In the next screen, you can choose the backup repository, the number of restore points to keep and the archival policy. After all, the cloud can replace LTO libraries for long-term backups.

Then choose if you want to transfer data through the WAN accelerators or directly. For this topic, I choose direct.

Because the backup to the Cloud can take a lot of bandwidth, you can schedule when the data can be transferred.

Once the backup copy job is created, I run it to copy the VM backups to Azure.

During the copy, a new job is created on the Veeam Cloud Connect side to receive the data.

Once the backup copy job is finished, I open the backup file and, as you can see, both backed-up VMs are now externalized to Microsoft Azure.

Conclusion

The Veeam Cloud Connect feature enables you to externalize some backups to Microsoft Azure. Thanks to this feature, you can leverage Microsoft Azure for long-term backups and archival. Moreover, the 3-2-1 rule can be applied easily.

Protect Hyper-V VM in Microsoft Azure with Azure Site Recovery

Azure Site Recovery is a Microsoft Azure feature that enables you to replicate virtual machines (VM) from one site to another and orchestrate the failover in case of disaster. It is a great tool to implement a Disaster Recovery Plan (DRP) for your Hyper-V or VMware VMs, or for physical machines.

There are several scenarios available with Azure Site Recovery to protect your workloads. The first two concern the use of two on-premises datacenters:

In the first scenario, you have two on-premises sites where Hyper-V hosts and Virtual Machine Manager are deployed. Virtual machines are replicated between both sites with Hyper-V Replica or SAN array replication. Replication health monitoring and orchestration management are located in an Azure Site Recovery vault in Microsoft Azure. On the VMware side, InMage Scout has to be downloaded and deployed on both datacenters. Then you will be able to protect your servers.

The three other scenarios concern the use of Microsoft Azure as the DRP site:

In the first scenario, you have Hyper-V hosts and Virtual Machine Manager. In this scenario, an agent is deployed on the VMM server and on the Hyper-V hosts. Then Azure Site Recovery protects the VMs in VMM clouds. The second scenario is the same without Virtual Machine Manager: an agent is deployed on the Hyper-V hosts and the VMs are protected and replicated in Microsoft Azure. To finish, Azure Site Recovery supports protecting VMware VMs and/or physical servers in Microsoft Azure. It can also be a great way to migrate your VMware VMs or your physical servers to Hyper-V VMs :)

In this topic, I will present the scenario where you use Microsoft Azure as the DRP site and where you have deployed Hyper-V and Virtual Machine Manager on-premises.

Common Azure Site Recovery scenario

Usually your applications leverage other services, such as SQL Server for the databases or Active Directory for the authentication. These services have built-in replication processes to support high availability. So instead of using Azure Site Recovery to protect these services, we can use their own replication processes. For the Active Directory case, we will deploy VMs in Azure that run domain controllers. It will be necessary to create an Active Directory site for the domain controllers in Azure and create a replication link to manage the weight.

On the SQL side, we will deploy VMs in Azure where SQL Server will be installed. Then an asynchronous replication will be set between the SQL Server on-premises and the SQL Server in Microsoft Azure.

Then the VMs in the application tier will be replicated with Azure Site Recovery. When a disaster occurs, only the servers in the application tier will fail over to Microsoft Azure.

Requirements

To use Microsoft Azure as the DRP site with Virtual Machine Manager, you need:

  • Virtual Machine Manager 2012 R2 with at least Update Rollup 5
  • Hyper-V hosts under Windows Server 2012 R2
  • The protected VM must be supported in Microsoft Azure
  • A Microsoft Azure Account
  • An Azure Site Recovery vault
  • A virtual Network in the same region as the Azure Site Recovery Vault
  • A geo-redundant storage account in the same region as the Azure Site Recovery vault

Deploy requirements in Microsoft Azure

Virtual Network configuration

I have created a virtual network in Central US called POC-ASR-Exakis.

This virtual network contains two subnets called Subnet-LAN and Subnet-DMZ.

Storage Account creation

Then I have created a geo-redundant storage account called pocasrexakis in Central US.

Azure Site Recovery vault creation

Next, I navigate to Recovery Services to create a new vault. I select Site Recovery Vault and specify ASR-Exakis as the name. Then I choose the Central US location.

Once the Site Recovery vault is created, I choose the scenario that I want to implement: Between an on-premises VMM site and Azure.

First of all, we need to prepare the VMM server. Download the registration key and the Microsoft Azure Site Recovery Provider for installation on the VMM server.

On-Premises configuration

Now that the Site Recovery vault is created, we have to deploy the agents on the VMM server and on the Hyper-V hosts.

Prepare VMM servers

Once you have downloaded the registration key and the ASR provider binaries, you should have both files on your VMM server.

Then run the AzureSiteRecoveryProvider executable. When you are in the vault settings screen, specify the registration key file.

Then you have to specify a location to save a certificate. VMs protected in Azure will be encrypted. If you ever have to decrypt the data, this certificate will be required, so keep copies of this certificate in several safe places!

To finish, specify a friendly name for your VMM Server.

If the registration has worked, your server should be connected to the site recovery vault as below.

VMM configuration

On VMM side, I have created a cloud called MyApps. Three VMs belong to this cloud.

If you edit the properties of a VM, you should have something as below in Microsoft Azure Site Recovery tab.

Deploy agent on Hyper-V hosts

Now that VMM is ready, we are at step 2. Download the Microsoft Azure Recovery Services agent on the Hyper-V hosts and run the executable.

Specify an installation folder and a cache location. In the real world, the cache location should be on a separate disk.

Then specify the registration key file that you have downloaded on the VMM server.

Azure Site Recovery configuration

Now that the on-premises configuration is finished, we can configure the Site Recovery vault to protect and replicate your VMs. Then we will create a recovery plan to orchestrate the failover in case of disaster.

Map network resource

First of all, we have to bind the on-premises networks to the virtual networks created in Microsoft Azure. Navigate to resources and networks as below. Without any configuration, you should have the list of your on-premises networks marked as Unmapped. To bind an on-premises network to a virtual network in Microsoft Azure, select the network and click on map.

Then select the target Azure network and click on ok.

Protect virtual machines

To protect VMs, navigate to protected items and select VMM Clouds. In this view, you should see all the clouds that you have created in Virtual Machine Manager. Below you can see that I have the cloud MyApps.

When you select the cloud, you can configure it as below. You can select the storage account, whether you want to encrypt stored data, the copy frequency and so on.

Once you have configured the cloud protection, we can enable protection on the VMs. Just select Enable Protection.

Select the VM that you want to protect and specify the storage account.

Once the protection is enabled, the replication should start. Below you can find a screenshot of the throughput on my router and the Synchronizing state on the VMs.

Once the replication is finished, the status is protected.

If you click on a protected VM, you can configure its name, its size and its network for when it fails over to Microsoft Azure.

Create a recovery plan

Now that VMs are protected, we can create a recovery plan to orchestrate the failover in case of disaster. Navigate to recovery plans tab and select create recovery plan.

Give a name to your recovery plan then choose the source and the target.

Select the VMs that will be included into the recovery plan.

Then you can create groups. Each VM in a single group will be started simultaneously. You can add manual tasks or scripts between groups. To use scripts, you need an Azure Automation account. Below I have a recovery plan with three groups and a single manual task.

Test the plan

Once you have created your recovery plan, you can test it or make a real failover. When you test a failover, the source VM is not stopped and the VM is started in Azure in a specific network so production is not disturbed. When you run a real failover, you can choose an unplanned failover or a planned failover. With the planned failover, the source VM is stopped and a final synchronization is executed. To try my recovery plan, I choose Test Failover.

When I click on Test Failover, Microsoft Azure asks me which network the VMs will be connected to. Then the recovery plan is executed.

After group 1, I have added a manual task, so I have to click on complete manual action to continue.

The VMs are created in Microsoft Azure and started according to the recovery plan.

When the plan is finished, Microsoft Azure asks me to complete the test. When you have finished verifying that everything is OK, you can click on test completed and all the VMs will be deleted in Microsoft Azure (the VMs only, not the VHDs).

Monitor the virtual machine health

Azure Site Recovery is able to monitor the state of the VMs. For example, I stopped my Hyper-V host to apply some updates, and Azure Site Recovery detected an issue on the VMs.

Understand Failover Cluster Quorum

This topic aims to explain the quorum configuration in a failover cluster. As part of my job, I work with Hyper-V clusters where the quorum is not well configured, and so my customers do not get the expected behavior when an outage occurs. I work especially on Hyper-V clusters, but the following topic applies to most failover cluster configurations.

What’s a Failover Cluster Quorum

A failover cluster quorum configuration specifies the number of failures that a cluster can support and keep working. Once the threshold is reached, the cluster stops working. The most common failures in a cluster are nodes that stop working or nodes that can't communicate anymore.

Imagine that quorum doesn't exist and you have a two-node cluster. Now there is a network problem and the two nodes can't communicate. If there is no quorum, what prevents both nodes from operating independently and taking ownership of the disks on each side? This situation is called split-brain. Quorum exists to avoid split-brain and prevent corruption on disks.

The quorum is based on a voting algorithm. Each node in the cluster has a vote. The cluster keeps working while more than half of the voters are online: this is the quorum (the majority of votes). When there are too many failures and not enough online voters to constitute a quorum, the cluster stops working.

Below is a two-node cluster configuration:

The majority of votes is 2 votes. So a two-node cluster as above is not really resilient, because if you lose a node, the cluster is down.

Below is a three-node cluster configuration:

Now you add a node to your cluster, so you are in a three-node cluster. The majority of votes is still 2 votes, but because there are three nodes, you can lose a node and the cluster keeps working.

Below is a four-node cluster configuration:

Despite its four nodes, this cluster can support only one node failure before losing the quorum. The majority of votes is 3 votes, so you can lose only one node.

On a five-node cluster, the majority of votes is still 3 votes, so you can lose two nodes before the cluster stops working, and so on. As you can see, the majority of nodes must remain online for the cluster to keep working, and this is why it is recommended to have an odd total number of votes. But sometimes we want only a two-node cluster for applications that don't require more nodes (such as Virtual Machine Manager, SQL AlwaysOn and so on). In this case, we add a disk witness, a file share witness or, in Windows Server 2016, a cloud witness.
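
You can check the current vote distribution from PowerShell (on Windows Server 2012 R2 and above, DynamicWeight shows the vote actually used by the cluster):

# Show the configured and dynamic vote of each node
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight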

Failover Cluster Quorum Witness

As said before, it is recommended to have an odd total number of votes. But sometimes we don't want an odd number of nodes. In this case, a disk witness, a file share witness or a cloud witness can be added to the cluster. This witness also has a vote. So when there is an even number of nodes, the witness makes the total number of votes odd. Below are the requirements and recommendations for each witness type (except the cloud witness):

Witness type: Disk witness

Description:

  • Dedicated LUN that stores a copy of the cluster database
  • Most useful for clusters with shared (not replicated) storage

Requirements and recommendations:

  • Size of LUN must be at least 512 MB
  • Must be dedicated to cluster use and not assigned to a clustered role
  • Must be included in clustered storage and pass storage validation tests
  • Cannot be a disk that is a Cluster Shared Volume (CSV)
  • Basic disk with a single volume
  • Does not need to have a drive letter
  • Can be formatted with NTFS or ReFS
  • Can be optionally configured with hardware RAID for fault tolerance
  • Should be excluded from backups and antivirus scanning

Witness type: File share witness

Description:

  • SMB file share that is configured on a file server running Windows Server
  • Does not store a copy of the cluster database
  • Maintains cluster information only in a witness.log file
  • Most useful for multisite clusters with replicated storage

Requirements and recommendations:

  • Must have a minimum of 5 MB of free space
  • Must be dedicated to the single cluster and not used to store user or application data
  • Must have write permissions enabled for the computer object for the cluster name

The following are additional considerations for a file server that hosts the file share witness:

  • A single file server can be configured with file share witnesses for multiple clusters.
  • The file server must be on a site that is separate from the cluster workload. This allows equal opportunity for any cluster site to survive if site-to-site network communication is lost. If the file server is on the same site, that site becomes the primary site, and it is the only site that can reach the file share.
  • The file server can run on a virtual machine if the virtual machine is not hosted on the same cluster that uses the file share witness.
  • For high availability, the file server can be configured on a separate failover cluster.

So below you can find again a two-node cluster, this time with a witness:


Now that there is a witness, you can lose a node and keep the quorum: even if a node is down, the cluster keeps working. So when you have an even number of nodes, a quorum witness is required. But to keep an odd total number of votes, when you have an odd number of nodes, you should not implement a quorum witness.

Quorum configuration

Below you can find the four possible quorum configurations (taken from TechNet):

  • Node Majority (recommended for clusters with an odd number of nodes)
    • Can sustain failures of half the nodes (rounding up) minus one. For example, a seven node cluster can sustain three node failures.
  • Node and Disk Majority (recommended for clusters with an even number of nodes).
    • Can sustain failures of half the nodes (rounding up) if the disk witness remains online. For example, a six node cluster in which the disk witness is online could sustain three node failures.
    • Can sustain failures of half the nodes (rounding up) minus one if the disk witness goes offline or fails. For example, a six node cluster with a failed disk witness could sustain two (3-1=2) node failures.
  • Node and File Share Majority (for clusters with special configurations)
    • Works in a similar way to Node and Disk Majority, but instead of a disk witness, this cluster uses a file share witness.
    • Note that if you use Node and File Share Majority, at least one of the available cluster nodes must contain a current copy of the cluster configuration before you can start the cluster. Otherwise, you must force the starting of the cluster through a particular node. For more information, see “Additional considerations” in Start or Stop the Cluster Service on a Cluster Node.
  • No Majority: Disk Only (not recommended)
    • Can sustain failures of all nodes except one (if the disk is online). However, this configuration is not recommended because the disk might be a single point of failure.

Stretched Cluster Scenario

Unfortunately (I don't like stretched clusters in Hyper-V scenarios), some customers have a stretched cluster between two datacenters. And the most common money-saving mistake I see is the scenario below:

So the customer tells me: OK, I've followed the recommendation because I have four nodes in my cluster, but I have added a witness to obtain an odd total number of votes. So let's start the production. The cluster runs for a while, and then one day room 1 is underwater. So you lose room 1:

In this scenario, you should also have stretched storage, and so if you have implemented a disk witness, it should move to room 2. But in the above case, you have lost the majority of votes and so the cluster stops working (sometimes, with some luck, the cluster keeps working because the disk witness had time to fail over, but that is luck). So when you implement a stretched cluster, I recommend the scenario below:

In this scenario, even if you lose a room, the cluster keeps working. Yes I know, three rooms are expensive, but I never recommended you to build a stretched cluster :) (in the Hyper-V case). Fortunately, in Windows Server 2016, the quorum witness can be hosted in Microsoft Azure (cloud witness).

Dynamic Quorum (Windows Server 2012 feature)

Dynamic Quorum assigns votes to nodes dynamically to avoid losing the majority of votes, so the cluster can run down to a single node (known as last-man standing). Let's take the above example of a four-node cluster without a quorum witness. I said that the quorum is 3 votes, so without Dynamic Quorum, if you lose two nodes, the cluster is down.

Now I enable Dynamic Quorum. The majority of votes is recomputed automatically based on the running nodes. Let's take the four-node example again:

So, why implement a witness, especially for a stretched cluster? Because Dynamic Quorum works great when failures are sequential, not simultaneous. In the stretched cluster scenario, if you lose a room, the failure is simultaneous and Dynamic Quorum does not have time to recalculate the majority of votes. Moreover, I have seen strange behavior with Dynamic Quorum, especially with two-node clusters. This is why, in Windows Server 2012, I always disabled Dynamic Quorum when I didn't use a quorum witness.

Dynamic Quorum has been enhanced in Windows Server 2012 R2, which introduces the Dynamic Witness. This feature calculates whether the quorum witness has a vote. There are two cases:

  • If there is an even number of nodes in the cluster with Dynamic Quorum enabled, the witness has a vote.
  • If there is an odd number of nodes in the cluster with Dynamic Quorum enabled, the witness does not have a vote.

So, since Windows Server 2012 R2, Microsoft recommends always implementing a witness in a cluster and letting Dynamic Quorum decide for you.

Dynamic Quorum is enabled by default since Windows Server 2012. The example below uses a four-node cluster on Windows Server 2016, but the behavior is the same.

I check whether Dynamic Quorum and Dynamic Witness are enabled:
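
From PowerShell, a quick way to check is to query the cluster common properties (a minimal sketch; the property names are those exposed by the FailoverClusters module):

# DynamicQuorum: 1 = Dynamic Quorum enabled
# WitnessDynamicWeight: 1 = the witness currently has a vote
Get-Cluster | Select-Object DynamicQuorum, WitnessDynamicWeight

# Per node: NodeWeight is the configured vote, DynamicWeight the vote currently assigned
Get-ClusterNode | Select-Object Name, NodeWeight, DynamicWeight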

Dynamic Quorum and Dynamic Witness are indeed enabled. Because I have four nodes, the witness has a vote, and this is why Dynamic Witness is enabled. If you want to disable Dynamic Quorum, you can run this command:

(Get-Cluster).DynamicQuorum = 0

To finish, Microsoft has enhanced Dynamic Quorum by adjusting the votes of the online nodes to keep an odd number of votes. First the cluster plays with the Dynamic Witness to keep an odd majority of votes. Then, if it cannot adjust the number of votes with the Dynamic Witness, it removes a vote from a running node.

For example, suppose you have a four-node stretched cluster and you have lost your quorum witness. You now have two nodes in Room 1 and two nodes in Room 2. The cluster will remove a vote from one node to keep a majority in one room. In this way, even if you then lose a node, the cluster keeps working.
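
On Windows Server 2012 R2, you can even influence which room keeps the majority in such a 50/50 situation with the LowerQuorumPriorityNodeID cluster property. A minimal sketch, assuming a node called Node3 (a hypothetical name) in the room you prefer to sacrifice:

# The node with this ID loses its vote first when the cluster must break a 50/50 tie
(Get-Cluster).LowerQuorumPriorityNodeID = (Get-ClusterNode -Name "Node3").Id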

Cloud Quorum Witness (Windows Server 2016 feature)

By implementing a Cloud Quorum Witness, you avoid spending money on a third room in the stretched cluster case. Below is the scenario:

The Cloud Witness, hosted in Microsoft Azure, also has one vote, so you again have an odd majority of votes. For that, you need an existing storage account in Microsoft Azure, as well as an access key.

Now you just have to configure the quorum as you would for a standard witness. Select Configure a Cloud Witness when asked.

Then specify the Azure Storage Account and a storage key.
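
The same configuration can be done from PowerShell (a minimal sketch; the account name and the access key are placeholders for your own storage account):

Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccountName>" `
                                -AccessKey "<StorageAccountAccessKey>"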

At the end of the configuration, the Cloud Witness should be online.

Conclusion

In conclusion, here is what I recommend when you configure the quorum of a failover cluster:

  • Prior to Windows Server 2012 R2, always keep an odd majority of votes:
    • In case of an even number of nodes, implement a witness;
    • In case of an odd number of nodes, do not implement a witness.
  • Since Windows Server 2012 R2, always implement a quorum witness:
    • Dynamic Quorum manages the votes assigned to the nodes;
    • Dynamic Witness manages the vote assigned to the quorum witness.
  • In case of a stretched cluster, implement the witness in a third room or use Microsoft Azure.

The post Understand Failover Cluster Quorum appeared first on Tech-Coffee.

]]>
//www.tech-coffee.net/understand-failover-cluster-quorum/feed/ 28 4274
Deploy Azure Resources with JSON template //www.tech-coffee.net/deploy-azure-resources-with-json-template/ //www.tech-coffee.net/deploy-azure-resources-with-json-template/#comments Sat, 25 Jul 2015 19:32:46 +0000 //www.tech-coffee.net/?p=3702 If you are using Microsoft Azure, you may have noticed that currently there are two Portals: Standard Azure Portal: https://manage.windowsazure.com Preview Portal: https://portal.azure.com The Standard Azure Portal is based on the REST API called Service Management while the Preview Portal is based on Azure Resource Manager (ARM). Microsoft introduces ARM to simplify the deployment in ...

The post Deploy Azure Resources with JSON template appeared first on Tech-Coffee.

]]>
If you are using Microsoft Azure, you may have noticed that currently there are two Portals:

  • Standard Azure Portal: https://manage.windowsazure.com
  • Preview Portal: https://portal.azure.com

The Standard Azure Portal is based on the REST API called Service Management, while the Preview Portal is based on Azure Resource Manager (ARM). Microsoft introduced ARM to simplify deployment in its public cloud thanks to reusable templates written in JSON. We will see in the next section that such a template is declarative: it describes the resources and the properties you want to deploy. So it is easy to deploy your development, validation and production environments with the same template, which avoids mistakes and configuration drift. To finish, these templates will be reusable in the Azure Stack solution. With ARM and templates, you enter the DevOps world :).

This topic is not intended to teach you everything about Azure Resource Manager templates; it is a quick overview. To go deeper, I recommend checking the links referenced in the documentation section.

Documentation

Before getting to the heart of the matter, I want to share with you some resources that may interest you:

Recommended stuff

Nothing is mandatory to create and edit your JSON templates, but some software can make your life easier. However, to deploy resources in Azure, you need an Azure subscription. Below is the recommended software:

  • Azure PowerShell module: it enables you to manage Azure resources by using PowerShell. You can download it here;
  • Visual Studio 2015: I use Visual Studio 2015 Community. It is an Integrated Development Environment (IDE). You can download it here;
  • Azure SDK for .NET: it is the development kit for Microsoft Azure. Be sure to download the Azure SDK for Visual Studio 2015. You can download it here.

Azure Resource Manager template

Structure

The template structure looks like this:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "",
    "parameters": { },
    "variables": { },
    "resources": [ ],
    "outputs": { }
}

Below is the description of each part of the JSON structure, based on the Authoring Azure Resource Manager Templates topic:

  • $schema: location of the JSON schema file that describes the version of the template language (required);
  • contentVersion: version of the template, such as 1.0.0.0 (required);
  • parameters: values that are provided when the deployment is executed to customize it (optional);
  • variables: values that are reused as JSON fragments to simplify the template (optional);
  • resources: the types of resources that are deployed or updated (required);
  • outputs: values that are returned after the deployment (optional).

Parameters part

In the parameters part, you define the settings that the user will be asked for when the deployment is executed. Below is an example of a simple parameter:

"StoAccountName": {
        "type": "string"
}

This parameter is called StoAccountName and its type is string. When the deployment is executed, the user will be asked to set the StoAccountName parameter. You can add a default value for this parameter as below. However, if you specify another value during the deployment, the specified value replaces the defaultValue.

"StoAccountName": {
       "type": "string",
       "defaultValue": "techcoffeevmsto"
}

To finish, you can specify a list of allowed values by using allowedValues. You may have noticed that in the example below, the allowed values are between square brackets because it is an array.

"StorageAccountType": {
        "type": "string",
        "allowedValues": [
            "Standard_LRS",
            "Standard_GRS",
            "Standard_ZRS"
        ]
}

The allowed parameter types are:

  • string
  • secureString (usually for passwords)
  • int (integer)
  • bool (Boolean)
  • object
  • array

You can create a file that contains the value of each parameter, to avoid specifying them each time you execute a deployment. Below is an example of a parameter file:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "StorageAccountType": {
            "value": "Standard_LRS"
        },
        "VirtualNetworkName": {
            "value": "TechCoffeevNet"
        },
        "StoAccountName": {
            "value": "techcoffeevmstorage"
        }    
    }
}

To call a parameter in the JSON template, you use parameters('<ParameterName>'), as in "[parameters('StoAccountName')]".

Variables part

Variables are used to improve the readability of the template and to reuse, in several places, a value that is specified only once. You can use parameters to construct variables. Below are some examples:

"VMWEBImageOffer": "WindowsServer",
"VMWEBOSDiskName": "[concat(parameters('VMWEBName'), '_OSDisk')]",
"ResourcesLocation": "[resourceGroup().location]"

VMWEBImageOffer is a variable containing the string WindowsServer.

The VMWEBOSDiskName variable contains a concatenation of the value of the VMWEBName parameter and the string _OSDisk. For example, if the VMWEBName parameter is VMWEB01, the VMWEBOSDiskName variable contains VMWEB01_OSDisk.

To finish, ResourcesLocation contains the location of the resource group where the resources will be deployed.

You can call a variable in the JSON template by using variables('<VariableName>'), as in "[variables('ResourcesLocation')]".

Resources part

In this part you define the resources that will be deployed in Microsoft Azure (Virtual Machines, vNICs, Storage Accounts and so on). If an object already exists, it is updated with the settings specified in the template.

I really recommend using Visual Studio with the Azure SDK, because you can add a resource to the template in a few clicks. Once you have created an Azure Resource Group project (Templates, Visual C# and Cloud), right-click on resources in the JSON Outline and click on Add New Resource.

Select the resource that you want to add and click on Add. For example, below I add a Storage Account:

Once you have clicked on Add, you should have additional parameters, variables and resources as below:

{
    "name": "[parameters('mystoaccountName')]",
    "type": "Microsoft.Storage/storageAccounts",
    "location": "[parameters('mystoaccountLocation')]",
    "apiVersion": "2015-05-01-preview",
    "dependsOn": [ ],
    "tags": {
           "displayName": "mystoaccount"
     },
     "properties": {
           "accountType": "[parameters('mystoaccountType')]"
     }
}

Now you just have to replace the properties of the resource with your variables and parameters, and the resource configuration is finished!

Make a loop

I know I said earlier that I would not go deep into this topic, but I think loops are important to simplify your templates. Loops enable you to declare a resource once and deploy several instances of it. For example, you can declare a Virtual Machine once and use a loop to create several instances with the same settings.

To make a loop, first you should create an integer parameter as below:

"WebInstanceCount": {
        "defaultValue": 2,
        "type": "int"
}

Now I’ll take an example of creating several vNICs by using a loop.

{
       "name": "[concat('vNIC_', parameters('VMWEBName'), copyindex(1))]",
       "type": "Microsoft.Network/networkInterfaces",
       "location": "[variables('ResourcesLocation')]",
       "apiVersion": "2015-05-01-preview",
       "copy": {
           "name": "VMWEBNicLoop",
           "count": "[parameters('WebInstanceCount')]"
       }
}

First, you can see the copy element, which specifies a loop name and a counter. In the count element, I specify my WebInstanceCount parameter, which has a default value of 2, so two vNICs will be created by this loop.

Now I want to use the counter index to name my vNICs (vNIC1, vNIC2 and so on), so I use the copyindex() function; you can find it in the name element of the above example. The number 1 passed to the copyindex() function shifts the index by 1. I do that because the index starts from 0 and I don’t want a vNIC called vNIC0, so I shift the index by 1 to start from vNIC1.

Deploy the template

You can deploy the template directly from Visual Studio, by using PowerShell or from the Azure Marketplace (template deployment).

Deploy from Visual Studio

To deploy your resources from Visual Studio, right-click on your project and select Deploy.

Next, select your account, your subscription and so on. You can fill in your parameter file automatically by clicking on Edit Parameters.

In Edit Parameters, you can specify the parameter values. When you have specified the allowedValues element for a parameter, you get a drop-down menu instead of a free-text field.

Deploy from Azure Portal

You can also deploy the JSON directly from the Azure Portal. Navigate to the marketplace and find template deployment.

Now you just have to paste your JSON template and set the parameters, the resource group and so on:

Deploy from PowerShell

Here is my favorite method. You can deploy the template from PowerShell by using the New-AzureResourceGroup cmdlet. First, connect to your subscription.
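
With the Azure PowerShell module of this era, the connection looks like the sketch below (a minimal sketch; Switch-AzureMode is required by the 0.9.x module to reach the Azure Resource Manager cmdlets):

# Authenticate against the Azure subscription
Add-AzureAccount

# Switch the module to Azure Resource Manager mode (Azure PowerShell 0.9.x)
Switch-AzureMode -Name AzureResourceManager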

Next run the following command:

New-AzureResourceGroup -Name TechCoffeeLab `
                       -Location "West US" `
                       -DeploymentName TechCoffeeDep `
                       -TemplateParameterFile C:\temp\TechCoffeeLab.param.json `
                       -TemplateFile C:\temp\TechCoffeeLab.json `
                       -verbose

You can find my JSON files on my GitHub repository: https://github.com/SerreRom/TechCoffee

Now that the deployment is finished, you can open the Azure Portal to see your deployed resources:

Conclusion

This topic gave you a quick overview of Azure Resource Manager templates. To go deeper, I recommend checking the links referenced in the documentation section. As you have seen in this topic, templates enable consistent deployments, even across several environments such as testing, validation and production. Moreover, you can quickly update settings just by changing values in the template. To finish, you can leverage Azure VM Extensions to configure your Virtual Machines as you want (run scripts, Desired State Configuration and so on) during the deployment. And JSON templates will be compatible with Azure Stack :).

The post Deploy Azure Resources with JSON template appeared first on Tech-Coffee.

]]>
//www.tech-coffee.net/deploy-azure-resources-with-json-template/feed/ 2 3702
Manage Azure VM from Virtual Machine Manager 2012 R2 //www.tech-coffee.net/manage-azure-vm-from-virtual-machine-manager-2012-r2/ //www.tech-coffee.net/manage-azure-vm-from-virtual-machine-manager-2012-r2/#respond Tue, 14 Jul 2015 19:55:38 +0000 //www.tech-coffee.net/?p=3684 Since Update Rollup 6 of Virtual Machine Manager 2012 R2, it is possible to manage Azure VM from the VMM console. You can do simple actions as stop or start the machine, establish an RDP connection. In this topic I’ll describe how to add the Azure Subscription to manage Azure VM from Virtual Machine Manager 2012R2. ...

The post Manage Azure VM from Virtual Machine Manager 2012 R2 appeared first on Tech-Coffee.

]]>
Since Update Rollup 6 of Virtual Machine Manager 2012 R2, it is possible to manage Azure VMs from the VMM console. You can perform simple actions such as stopping or starting a machine or establishing an RDP connection. In this topic I’ll describe how to add an Azure subscription to manage Azure VMs from Virtual Machine Manager 2012 R2.

Requirements

To follow this topic you need:

  • A working Virtual Machine Manager with at least Update Rollup 6;
  • An Azure Subscription.

Moreover, Azure VMs created with Azure Resource Manager are currently not manageable from VMM.

Create and import in Azure a management certificate

Create from an enterprise PKI

First, you need to create a management certificate. You can use your enterprise Public Key Infrastructure to issue the certificate. This certificate must be in the personal user store, as below.

Next, export this certificate as CER.

Create from MakeCert

The other method consists of using MakeCert from Visual Studio to create a self-signed certificate (for further information, read this topic):

makecert -sky exchange -r -n "CN=<CertificateName>" -pe -a sha1 -len 2048 -ss My "<CertificateName>.cer"

Once the certificate is generated, you can find it in your personal certificate store.

Import the management certificate in Azure

Now that your CER file is generated, you can navigate to the Azure Portal and select Settings. Click on Management certificates and choose Upload a Management Certificate.

Next, select your CER file and click on OK.

After a couple of minutes, you should see the certificate as below.

Add the Azure Subscription to Virtual Machine Manager

First, you need your subscription ID. You can use the Add-AzureAccount cmdlet as below.
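
A minimal sketch with the service management cmdlets:

# Authenticate, then list the subscriptions linked to the account
Add-AzureAccount
Get-AzureSubscription | Select-Object SubscriptionName, SubscriptionId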

Next, open your Virtual Machine Manager console and select Add Subscription as below:

Then specify a Display Name and your Subscription ID. To finish, select the certificate (you can compare the thumbprint with the CER previously imported in Azure).

After the initial synchronization (it can take a few minutes), you should see your Azure VMs as below.

It’s a great feature but …

Thanks to this feature, you can see and manage the power state of your VMs. You can also connect to your Azure VMs by using RDP from the VMM console.

But I think this feature is not finished. For example, only Azure VMs created from the Azure Portal are visible in the VMM console; Azure VMs created with Azure Resource Manager are not manageable from VMM. For example, below I have some resources created from an Azure Resource Manager template (JSON file):

The Azure VMs circled in red were created with Azure Resource Manager. If you compare the last two screenshots, you can see that the VMs circled in red are not manageable from VMM.

Next, I think that not enough actions are available in VMM to manage Azure VMs. For example, I would like to manage the size of a VM, its availability set, or VM creation. But that is not yet possible :).

However, Azure VM management from VMM has only just been released in the latest Update Rollup (UR6). I trust the team responsible for VMM to improve this feature :).

The post Manage Azure VM from Virtual Machine Manager 2012 R2 appeared first on Tech-Coffee.

]]>
//www.tech-coffee.net/manage-azure-vm-from-virtual-machine-manager-2012-r2/feed/ 0 3684
Deploy Azure VM from a generalized image //www.tech-coffee.net/deploy-azure-vm-from-a-generalized-image/ //www.tech-coffee.net/deploy-azure-vm-from-a-generalized-image/#comments Wed, 08 Jul 2015 19:06:50 +0000 //www.tech-coffee.net/?p=3668 To deploy a large amount of consistent Virtual Machines, generalized images are often used. You can upload a generalized image that you deploy usually on your On-Premise datacenter or you can also create a generalized image directly from Microsoft Azure. In this topic I’ll explain how to capture an image directly from Azure and how ...

The post Deploy Azure VM from a generalized image appeared first on Tech-Coffee.

]]>
To deploy a large number of consistent Virtual Machines, generalized images are often used. You can upload a generalized image that you usually deploy in your on-premises datacenter, or you can create a generalized image directly from Microsoft Azure. In this topic I’ll explain how to capture an image directly from Azure and how to upload an existing generalized image from your on-premises datacenter.

What is a generalized image

A generalized image is a capture of an already installed Operating System stripped of its machine-specific settings and user settings. For example, the machine name, the SID, the administrator password and so on are not retained when capturing the image. This enables you to customize your own image before deploying it at scale. You can, for example, install the IIS role in the image before capturing it; each server deployed from this image will then have IIS pre-installed, while still getting its own server name, SID and so on.

To create a generalized image of a Windows Server, you have to use Sysprep. For a Linux machine you can use WAAgent (Windows Azure Agent).

When you use Sysprep, you have to specify the settings above to create your generalized image. For more information about creating a generalized image, you can read this topic.
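
On Windows Server, the typical Sysprep command line looks like this (a minimal sketch):

# Remove machine-specific settings, show OOBE at next boot, and shut the machine down
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown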

Upload your own generalized image to Azure

N.B: To follow this guide, the Azure PowerShell module must be installed and the settings profile must be imported. For further information, please read this topic.

Once you have created your generalized image in your datacenter, you can upload it to Azure. Be careful: currently, Azure supports only VHD files. If you have created a VHDX, you can convert it to VHD by using the Hyper-V GUI or the Convert-VHD PowerShell cmdlet. To convert a VHDX to VHD, the disk must not be attached to a running Virtual Machine.
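
A minimal sketch of the conversion from PowerShell (the paths are hypothetical):

# Convert the VHDX to a fixed-size VHD, the format expected by Azure
Convert-VHD -Path "D:\VHD\Image.vhdx" `
            -DestinationPath "D:\VHD\Image.vhd" `
            -VHDType Fixed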

Once you have your generalized image in VHD format, open PowerShell. The Add-AzureVhd cmdlet enables you to upload a local VHD to page blob storage. For further information about blob storage, you can read this topic.
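
A minimal sketch of the upload; the storage account name and the paths are placeholders:

Add-AzureVhd -LocalFilePath "D:\VHD\Image.vhd" `
             -Destination "https://<StorageAccount>.blob.core.windows.net/vhds/Image.vhd"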

I uploaded a VHD file to Azure over my home internet connection at an awesome 70 KB/s… and five days later, my VHD was stored in Azure :).

The next step is to create the VM image from the VHD previously uploaded to Azure. I open PowerShell again to run the Add-AzureVMImage cmdlet.
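
A minimal sketch, reusing the image name from the next step; the blob URL is a placeholder:

Add-AzureVMImage -ImageName "W2012R2-Datacenter-1.4" `
                 -MediaLocation "https://<StorageAccount>.blob.core.windows.net/vhds/Image.vhd" `
                 -OS Windows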

Once the VM image is created, I can deploy a Virtual Machine from it. So I open the Azure Portal and select New Virtual Machine. Then I select My Image as below.

I select the W2012R2-Datacenter-1.4 VM image and click Next. Then I configure my VM as usual.

When you have finished creating the virtual machine, provisioning should start using your generalized image :).

Create a generalized image from Azure

To create a generalized image from Azure, you first have to create an Azure VM. When the VM is deployed, you can make any customization you want; on my side, I installed the IIS role. Next, run the Sysprep utility as below:

Once the Azure VM is shut down, you can click on Capture as below.

Next, give your image a name and a description, and don’t forget to check the box I have run Sysprep on the Virtual Machine. When the capture process finishes, the source Azure VM is deleted.

Next, you can check in the Images tab that your new VM image is available, as below.

Now you can create an Azure VM from the My Image repository and select your new image :).

Conclusion

In this topic we have seen how to upload an existing generalized image to Azure and how to capture a generalized image from an Azure VM. Thanks to these images, it is possible to deploy a large number of Azure VMs with a consistent base installation.

The post Deploy Azure VM from a generalized image appeared first on Tech-Coffee.

]]>
//www.tech-coffee.net/deploy-azure-vm-from-a-generalized-image/feed/ 3 3668