High availability – Tech-Coffee

Deploy Windows Admin Center in HA through Kemp Load Balancer
Windows Admin Center (formerly Project Honolulu) was released in April 2018 by Microsoft. WAC is a web-based management tool that helps administrate Windows Server and hyperconverged clusters. In my job, I primarily use Windows Admin Center for Storage Spaces Direct clusters and to manage Windows Server in Core edition, especially drivers. Since its release, Windows Admin Center can be deployed in high availability, and in this topic we'll see how to deploy it in this manner. Moreover, some customers want to connect to WAC through a load balancer such as Kemp, to avoid private certificate management and to be able to connect from the Internet. So we'll also see how to connect to WAC through a Kemp load balancer.

Requirements

To follow this topic, you need the following:

  • 2x virtual machines
    • I set 2vCPU, 4GB of memory, a dynamic OS disk of 60GB
    • I deployed Windows Server 2016 in Core edition
    • 1x Network Adapter for management
    • 1x Network Adapter for cluster
    • The VM must be joined to the Active Directory domain
  • 1x shared disk of 10GB for these two VMs. You can use traditional iSCSI, FC LUN or shared VHDX / VHD Set
  • 1x IP in management network for the cluster
  • 1x IP in management network for Windows Admin Center cluster resource
  • 1x Name for the cluster (in this example: Cluster-WAC01.SeromIT.local)
  • 1x Name for Windows Admin Center cluster resource (in this example: WAC.SeromIT.local)

You also need to download the latest Windows Admin Center build from this link and the script to deploy WAC in high availability from this link.

Deploy the cluster

First of all, we have to deploy features on both virtual machines. I install Failover Clustering and its PowerShell module with these cmdlets:

Install-WindowsFeature RSAT-Clustering-PowerShell, Failover-Clustering -ComputerName "Node1"
Install-WindowsFeature RSAT-Clustering-PowerShell, Failover-Clustering -ComputerName "Node2"

Then I initialize the shared disk. First, I list the disks connected to the VM: disk 0 holds the operating system and disk 1 is the shared disk. Then I initialize the disk and create an NTFS volume:

Get-Disk # disk 0 = OS, disk 1 = shared disk
Initialize-Disk -Number 1
New-Volume -DiskNumber 1 -FriendlyName Data -FileSystem NTFS

Once the volume is created, I run a cluster validation to check that the nodes are able to be part of a cluster. To execute this validation, I run the following cmdlet:

Test-Cluster -Node Node1,Node2

N.B: My test reports an issue related to software update levels: this is because one node does not have the latest Windows Defender signatures.

Once you have validated the report, you can create the cluster by running the following cmdlet. I specify the NoStorage option to prevent the cluster from taking my shared disk for witness usage.

New-Cluster -Node Node1,Node2 -Name ClusterName -StaticAddress ClusterIPAddress -NoStorage

Once the cluster is created, I move the Cluster Name Object (CNO) to a specific OU. Then I grant this CNO the permission to create computer objects in this OU.
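
The post does not show the commands for this step; here is a minimal sketch, assuming the RSAT ActiveDirectory module and an example OU named Clusters (adapt the DNs to your domain):

Import-Module ActiveDirectory
# Move the CNO into the target OU (OU name is an example)
Get-ADComputer "Cluster-WAC01" |
    Move-ADObject -TargetPath "OU=Clusters,DC=SeromIT,DC=local"
# Grant the CNO the right to create computer objects in that OU (CC = Create Child)
dsacls "OU=Clusters,DC=SeromIT,DC=local" /G "SeromIT\Cluster-WAC01$:CC;computer"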

Next I rename the cluster networks to Management and Cluster. The cluster network with the Cluster and Client role is renamed Management, and the one with the Cluster-only role is called… Cluster.

(Get-Cluster -Name ClusterName | Get-ClusterNetwork -Name "Cluster Network 1").Name="Management"
(Get-Cluster -Name ClusterName | Get-ClusterNetwork -Name "Cluster Network 2").Name="Cluster"

Then I add a file share witness. For that, I have created a share called Cluster-WAC$ on my domain controller:

Get-Cluster -Name ClusterName | Set-ClusterQuorum -FileShareWitness "\\path\to\the\file\share\witness"

To finish, I add the Cluster Shared Volume (CSV):

# Add the shared disk to the cluster, then convert it to a CSV and rename it
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
(Get-ClusterSharedVolume -Name "Cluster Disk 1").Name="Data"
Rename-Item C:\ClusterStorage\Volume1\ Data

As you can see in the Failover Cluster Manager console, the file share witness is correctly configured.

The cluster networks are renamed to Management and Cluster.

The CSV is present in the cluster and it’s called Data.

(Optional) Get a certificate from an enterprise PKI

If you want to use your own enterprise PKI, you can follow these steps. Connect to an enterprise CA and manage the templates. Duplicate the Web Server template. In Subject Name, choose Supply in the request. Also allow the private key to be exportable.

Then request a certificate from the MMC or from the web interface and specify the following information:

  • Subject Name: Common Name as the Windows Admin Center cluster resource Name
  • Subject Alternative Name:
    • DNS: Windows Admin Center Cluster resource name
    • DNS: first node FQDN
    • DNS: second node FQDN

Then export the certificate and its private key in a PFX file.
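
As an alternative to the MMC or the web interface, here is a minimal PowerShell sketch of the same request; the template name WACWebServer is an example, and the DNS names are the ones used in this post:

# Request a certificate from the enterprise CA (template name is an example)
$cert = Get-Certificate -Template "WACWebServer" `
    -SubjectName "CN=WAC.SeromIT.local" `
    -DnsName "WAC.SeromIT.local","Node1.SeromIT.local","Node2.SeromIT.local" `
    -CertStoreLocation Cert:\LocalMachine\My

# Export the certificate and its private key to a PFX file
$PfxPassword = Read-Host -AsSecureString
Export-PfxCertificate -Cert $cert.Certificate -FilePath C:\temp\WAC.pfx -Password $PfxPassword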

Deploy Windows Admin Center

In a folder on a node of the cluster, you should have the following files (WAC.pfx only if you have created your own certificate from the enterprise PKI):

Run the following cmdlets to deploy Windows Admin Center in the cluster:

$CertPassword = Read-Host -AsSecureString
.\Install-WindowsAdminCenterHA.ps1 -ClusterStorage c:\ClusterStorage\Data -ClientAccessPoint WACClusterResourceName -MSIPath c:\path\to\WAC\build.msi -CertPath c:\path\to\pfx\file.pfx -CertPassword $CertPassword -StaticAddress IPAddressForWAC

N.B: If you have no enterprise PKI, you can deploy the service by running the following cmdlet:

.\Install-WindowsAdminCenterHA.ps1 -ClusterStorage c:\ClusterStorage\Data -ClientAccessPoint WACClusterResourceName -MSIPath c:\path\to\WAC\build.msi -StaticAddress IPAddressForWAC -GenerateSSLCert

After some time, the service is deployed in the failover cluster and you now have Windows Admin Center in high availability.

If you specify the name of the WAC cluster resource as below, you can connect to Windows Admin Center.

Configure Kemp Load Balancer

First of all, I create a rule to redirect the traffic to the right service. Because this is a reverse proxy, a single IP address is used for several web services. In this configuration I use the web service URL to redirect traffic to the right web server. To make it work, a rule such as the following must be created.

Then I create a Sub Virtual Service in my reverse proxy virtual service. I name it Windows Admin Center and I specify the name of the WAC cluster resource.

Then I map the rule I have previously created with the Windows Admin Center Sub Virtual Service:

To finish, verify that the SSL Acceleration is activated with the right public certificate as below:

Then I connect to Windows Admin Center through the Kemp Load Balancer. As you can see, the certificate is validated without any warning and I can access WAC. Thanks to these settings, you can access WAC through the Internet.

Fault Domain Awareness with Storage Spaces Direct
Fault Domain Awareness is a new feature of Failover Clustering in Windows Server 2016. It brings a new approach to high availability which is more flexible and Cloud oriented. In previous editions, high availability was based only on nodes: if a node failed, the resources were moved to another node. With Fault Domain Awareness, the point of failure can be a node (as previously), a chassis, a rack or a site. This enables greater flexibility and a modern approach to high availability. Cloud-oriented datacenters require this kind of flexibility to change the cluster's point of failure from a single node to an entire rack containing several nodes.

In Microsoft's definition, a fault domain is a set of hardware that shares the same point of failure. The default fault domain in a cluster is the node. You can also create fault domains based on chassis, racks and sites. Moreover, a fault domain can belong to another fault domain. For example, you can create rack fault domains and configure them to specify that their parent is a site.

Storage Spaces Direct (S2D) can leverage Fault Domain Awareness to spread block replicas across fault domains (unfortunately it is not yet possible to spread block replicas across sites, because Storage Spaces Direct doesn't support stretched clusters with Storage Replica). Think about a three-way mirroring implementation of S2D: this means we have the data three times (the original and two replicas). S2D is able, for example, to write the original data on a first rack and each replica on other racks. In this way, even if you lose a rack, the storage keeps working.

In the S2D documentation, Microsoft no longer states the number of nodes required for each resilience type, but rather the number of fault domains:

  • 2-Way Mirroring: two fault domains
  • 3-way Mirroring: three fault domains
  • Erasure Coding: four or more fault domains.

These statements are really important for design considerations. If you plan to use fault domain awareness with racks and you plan to use erasure coding, you also need at least four racks. Each rack must have the same number of nodes. So, in the case of four racks, the number of nodes in the cluster can be 4, 8, 12 or 16. By using fault domain awareness, you lose some deployment flexibility, but you increase the availability capabilities.

Configure Fault Domain Awareness

This section introduces how to configure fault domains in the cluster. It is strongly recommended to complete this configuration before you enable Storage Spaces Direct!

By using PowerShell

In this example, I show you how to configure fault domains in a two-node cluster. Creating fault domains is not really useful for a two-node cluster, but I just want to show you how to create them in the cluster configuration.

Before running the below cmdlets, I have initialized the $CIM variable by using the following command (Cluster-Hyv01 is the name of my cluster):

$CIM = New-CimSession -ComputerName Cluster-Hyv01

Then I gather fault domain information by using the Get-ClusterFaultDomain cmdlet:

As you can see above, a fault domain is automatically created for each node. To create an additional fault domain, you can use the New-ClusterFaultDomain cmdlet as below.
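
The screenshot is not reproduced here; a minimal sketch of these commands, using the site, rack and chassis names from the rest of this post, could look like this:

# Create the site, rack and chassis fault domains through the remote CIM session
New-ClusterFaultDomain -CimSession $CIM -FaultDomainType Site -Name "Lyon"
New-ClusterFaultDomain -CimSession $CIM -FaultDomainType Rack -Name "Rack-22U"
New-ClusterFaultDomain -CimSession $CIM -FaultDomainType Chassis -Name "Chassis-Fabric"

# List all fault domains, including the ones created automatically for each node
Get-ClusterFaultDomain -CimSession $CIM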

If I run the Get-ClusterFaultDomain cmdlet again, you can see each fault domain.

Then I run the following cmdlets to set the fault domain parents:

Set-ClusterFaultDomain -CimSession $CIM -Name Rack-22U -Parent Lyon
Set-ClusterFaultDomain -CimSession $CIM -Name Chassis-Fabric -Parent Rack-22U
Set-ClusterFaultDomain -CimSession $CIM -Name pyhyv01 -Parent Chassis-Fabric
Set-ClusterFaultDomain -CimSession $CIM -Name pyhyv02 -Parent Chassis-Fabric

In Failover Cluster Manager, you can see the result by opening the Nodes tab. As you can see below, each node belongs to Rack-22U and the site Lyon.

By using XML

You can also declare your physical infrastructure by using an XML file as below:

<Topology>
    <Site Name="Lyon" Location="Lyon 8e">
        <Rack Name="Rack-22U" Location="Restroom">
            <Node Name="pyhyv01" Location="Rack 6U" />
            <Node Name="pyhyv02" Location="Rack 12U" />
        </Rack>
    </Site>
</Topology>

Once your topology is written, you can configure your cluster with the XML File:

$xml = Get-Content <XML File> | Out-String
Set-ClusterFaultDomainXML -XML $xml

Conclusion

Fault Domain Awareness is a great feature to improve the availability of your infrastructure, especially with Storage Spaces Direct. Fault domains can be based on racks instead of nodes. This means that you can lose a higher number of nodes and keep the service running. On the other hand, be careful during the design phase, because an equivalent number of nodes must be installed in each rack. If you need erasure coding, you require at least four racks.

Deploy highly available IaaS service in Azure Resource Manager
When you deploy production VMs, and so production services, in Azure, you often want high availability. Sometimes Microsoft performs operations in Azure datacenters that can impact the availability of your service. Some prerequisites are required to get a 99.95% SLA on VMs in Azure. Moreover, you may need some load-balancers to route the traffic to healthy servers and to spread the load.

In this topic, I will address the following resources in Azure Resource Manager (ARM):

  • Azure VMs
  • Availability Sets
  • Load-Balancers

Lab overview for Highly Available IaaS 3-tier service

N.B: In this topic, I use PowerShell cmdlets to manage Azure resources. You can find further information here.

The goal of this lab is to deploy a 3-tier service:

  • First tier: Web Servers
  • Second tier: Application Servers
  • Third tier: Database Servers

The user will connect to the Web Servers' load-balancer. Then the Web Servers will connect to the application servers through the application load-balancer. Then the application servers will send requests to the SQL Servers. An availability set will be configured for each server role to support the 99.95% SLA.

Regarding the network, the virtual network is split into two subnets called external and internal. All VMs are stored in the same storage account.

I have created the resource groups, the storage account and the virtual network. It only remains to create the availability sets, the Azure VMs and the load-balancers.

Availability Set

Usually, to support high availability, we use two servers that host the same role and/or application. These servers are spread across several racks, rooms or hypervisors (in the case of VMs). In this way, even if an outage occurs, the other servers continue to deliver the service. In Azure, we use an Availability Set to spread across the datacenter the Azure VMs which deliver the same service.

With Availability Sets come two concepts:

  • Fault Domain: a physical unit of deployment for an application. Thanks to fault domains, VMs are deployed on different servers, racks and switches to avoid a single point of failure.
  • Update Domain: a logical unit of deployment for an application. Servers associated with the same availability set are spread across update domains, so that only one update domain is unavailable at a time when Microsoft performs an update. The servers in the remaining update domains continue to deliver the service.

To support the 99.95% SLA, I will create an availability set for each tier. To create an Availability Set from the portal, go to the Marketplace and select Availability Set. You can then specify the availability set name, the number of fault and update domains, and the resource group.

You can do the same thing with PowerShell.

New-AzureRmAvailabilitySet -ResourceGroupName LabHAIaaS -Name AppTier -Location "West Europe" -PlatformUpdateDomainCount 2 -PlatformFaultDomainCount 2

Once I have created the availability sets, I have three new resources in the resource group:

Azure VMs creation

N.B: At this moment, you can't associate an availability set with an already-created VM (in Azure Resource Manager), either from PowerShell or from the portal.

Now I will create the Azure VMs with the availability set association. You can create them by using the portal:

Below you can find the PowerShell cmdlets to create an external virtual machine (the public IP is needed to connect to VMs from the portal; if you have a Site-to-Site VPN, you shouldn't need the public IP):

# Set values for existing resource group and storage account names
$rgName="LabHAIaaS"
$locName="West Europe"
$saName="labhaiaasvm"
$AVName = "WebTier"
# Ask for VM credential
$cred=Get-Credential -Message "Type the name and password of the local administrator account."

# Set the existing virtual network and subnet index
$vnetName="LabHAIaasNetwork"
$subnetIndex=1
$vnet=Get-AzureRMVirtualNetwork -Name $vnetName -ResourceGroupName $rgName

# Create the NIC.
$nicName="ExtVM06-NIC"
$pip=New-AzureRmPublicIpAddress -Name $nicName -ResourceGroupName $rgName -Location $locName -AllocationMethod Dynamic
$nic=New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $rgName -Location $locName -SubnetId $vnet.Subnets[$subnetIndex].Id -PublicIpAddressId $pip.Id

# Get the availability set ID
$AvID = (Get-AzureRmAvailabilitySet -ResourceGroupName $rgName -Name $AVName).Id

# Specify the name, size, and existing availability set
$vmName="ExtVM06"
$vmSize="Standard_A0"
$vm=New-AzureRmVMConfig -VMName $vmName -VMSize $vmSize -AvailabilitySetId $AvID

# Specify the image and local administrator account, and then add the NIC
$pubName="MicrosoftWindowsServer"
$offerName="WindowsServer"
$skuName="2012-R2-Datacenter"
$vm=Set-AzureRmVMOperatingSystem -VM $vm -Windows -ComputerName $vmName -Credential $cred -ProvisionVMAgent -EnableAutoUpdate
$vm=Set-AzureRmVMSourceImage -VM $vm -PublisherName $pubName -Offer $offerName -Skus $skuName -Version "latest"
$vm=Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id

# Specify the OS disk name and create the VM
$diskName="OSDisk"
$storageAcc=Get-AzureRmStorageAccount -ResourceGroupName $rgName -Name $saName
$osDiskUri=$storageAcc.PrimaryEndpoints.Blob.ToString() + "vhds/" + $vmName + $diskName + ".vhd"
$vm=Set-AzureRmVMOSDisk -VM $vm -Name $diskName -VhdUri $osDiskUri -CreateOption fromImage
New-AzureRmVM -ResourceGroupName $rgName -Location $locName -VM $vm

Once all Azure VMs are created, I have 6 VMs in the resource group with their own network interfaces.

In the below example, you can see that Azure VMs that belong to the WebTier availability set are spread between two fault and update domains.

Implement the external load-balancer

Now that the Azure VMs are created and are in availability sets, we can create the Load-Balancer. First, I create the external Load-Balancer for the Web servers (WebTier). Open the marketplace and type Load-Balancer. Then create it and choose the Public scheme. Create a static public IP as below and select the resource group.

Once the load-balancer is created, open settings and select Backend Pools.

Then create a backend address pool, and choose the WebTier availability set and the Azure VMs as below.

Now you can create a probe to verify the health of your application. In the below example I create a probe for a web service which listens on HTTP/80.

Once the probe is created, we can create a load-balancing rule related to the probe health. If a server is not healthy, the load-balancer will not route traffic to this server.
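
If you prefer scripting these portal steps, here is a hedged sketch with the same AzureRM cmdlets used earlier in this post; all resource names are examples, and the VMs' NICs still have to be associated with the backend pool afterwards:

# Public IP and front-end configuration for the external load-balancer
$pip = New-AzureRmPublicIpAddress -Name "WebTierLB-PIP" -ResourceGroupName "LabHAIaaS" -Location "West Europe" -AllocationMethod Static
$fe = New-AzureRmLoadBalancerFrontendIpConfig -Name "FrontEnd" -PublicIpAddress $pip

# Backend pool, HTTP/80 health probe and load-balancing rule
$pool = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "WebTierPool"
$probe = New-AzureRmLoadBalancerProbeConfig -Name "HttpProbe" -Protocol Http -Port 80 -RequestPath "/" -IntervalInSeconds 15 -ProbeCount 2
$rule = New-AzureRmLoadBalancerRuleConfig -Name "HttpRule" -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe -Protocol Tcp -FrontendPort 80 -BackendPort 80

# Create the load-balancer itself
New-AzureRmLoadBalancer -Name "WebTierLB" -ResourceGroupName "LabHAIaaS" -Location "West Europe" -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe -LoadBalancingRule $rule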

Implement internal Load Balancer

As with the external Load-Balancer, create a load-balancer again, but this time select the Internal scheme. Then select the virtual network and the internal subnet (where the application servers are). To finish, select the resource group and set a static IP address.

Next, open the settings of this load-balancer and select Backend Pools.

Then create a backend pool and select the AppTier availability set and its Azure VMs.

Then I create a probe to verify the health of the application on port TCP/1234.

To finish, I create the load-balancing rule based on the previous probe to route the traffic to healthy servers.

Windows Azure Pack and VM Clouds in High Availability
When Windows Azure Pack is installed for production, the cloud services should be accessible 99.9% of the time. To reach this service level, Windows Azure Pack has to be deployed with high availability mechanisms such as load-balancing, SQL AlwaysOn and so on.

The below schema shows the Windows Azure Pack deployment in high availability that I have made on my mockup.

The Cluster-THUB hosts the below tenant web services:

  • Tenant public API (DNS alias: api.home.net);
  • Tenant authentication site (DNS Alias: auth.home.net);
  • Management portal for tenants (DNS alias: www.home.net).

The Cluster-AHUB hosts the below privileged web services:

  • Admin API (DNS Alias: wapadminapi.home.net);
  • Tenant API (DNS Alias: waptenantapi.home.net);
  • Admin authentication site (DNS Alias: wapadminauth.home.net);
  • Management portal for administrators (DNS Alias: wapadmin.home.net).

The SQL Server Availability Group AAGWAP01 hosts the databases for Windows Azure Pack, while AAGWAP02 hosts the databases for SPF, SMA and Websites. In this topic I don't cover the installation and configuration of SQL Server AlwaysOn. For more information about AlwaysOn, please read this article.

To implement load-balancing, I use the NLB feature included in Windows Server. Companies that already have a load-balancing appliance such as F5 can of course use it instead of NLB (and I recommend a dedicated load-balancing appliance for intensive environments).

To deploy the infrastructure described above, I first installed SQL AlwaysOn. Then I installed Windows Azure Pack on the servers.

Public services installation and configuration

Installation

To install the public services, please read the part “Public Services Installation” of this topic. Follow this procedure for each node that hosts public services. When you are on Database Server Setup screen, specify the same database information on each node. The database server name should be an AlwaysOn Availability Group (AAG) Listener. For me, the database server name is AAGWAP01.home.net.

NLB feature configuration

In the below screenshot you can see the Cluster-THUB. VMWAP01-THUB01 and VMWAP02-THUB02 are part of the cluster. These servers host public services of the Windows Azure Pack.

The Cluster-THUB is bound to the IP address 10.10.3.100.

I have configured the cluster operation mode to Multicast. Be careful: if you use unicast with virtual machines, don't forget to enable MAC spoofing in your virtual machine configuration.

To finish with NLB, I have configured the port rules to load-balance traffic equally across all members when it comes in on port TCP 443.
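
For reference, a hedged sketch of the same NLB configuration in PowerShell, run from the first node; the interface name Ethernet is an assumption, the other names and IPs are the ones used in this post:

# Create the NLB cluster on the first node and join the second one
Import-Module NetworkLoadBalancingClusters
New-NlbCluster -InterfaceName "Ethernet" -ClusterName "Cluster-THUB" -ClusterPrimaryIP 10.10.3.100 -OperationMode Multicast
Get-NlbCluster | Add-NlbClusterNode -NewNodeName "VMWAP02-THUB02" -NewNodeInterface "Ethernet"

# Replace the default port rule with one that balances TCP 443 equally across members
Get-NlbCluster | Get-NlbClusterPortRule | Remove-NlbClusterPortRule -Force
Get-NlbCluster | Add-NlbClusterPortRule -StartPort 443 -EndPort 443 -Protocol Tcp -Affinity None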

DNS aliases

Next I have configured DNS aliases pointing to the cluster cluster-thub.home.net. So I have just added entries in the DNS as below:
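
The DNS screenshots are not reproduced here; assuming the DnsServer module on the DNS server, the equivalent records could be created like this:

# CNAME records pointing the tenant web service aliases at the NLB cluster name
Add-DnsServerResourceRecordCName -ZoneName "home.net" -Name "api" -HostNameAlias "cluster-thub.home.net"
Add-DnsServerResourceRecordCName -ZoneName "home.net" -Name "auth" -HostNameAlias "cluster-thub.home.net"
Add-DnsServerResourceRecordCName -ZoneName "home.net" -Name "www" -HostNameAlias "cluster-thub.home.net"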

Certificates

Regarding certificates, one is needed per node that is a member of the load-balancing cluster. I have duplicated the Web Server certificate template in Active Directory Certificate Services. The Issued To field contains the FQDN of the server. Next I have added each DNS alias to the Subject Alternative Name as below. In this way, the certificate can be used for all web services. For further information about certificate templates, you can read this topic.

IIS binding configuration

Next, on each node I have reconfigured the site bindings for the tenant public API, the tenant authentication site and the management portal for tenants. As a reminder, below you can find the DNS aliases related to the web services that I have set on my infrastructure:

  • Tenant public API (DNS alias: api.home.net);
  • Tenant authentication site (DNS Alias: auth.home.net);
  • Management portal for tenants (DNS alias: www.home.net).

Repeat this procedure for each node with the same configuration except the SSL certificate.

Privileged services installation and configuration

Installation

To install the privileged services, please read the part “Privileged services Installation” of this topic. Follow this procedure for each node that hosts privileged services. When you are on the Database Server Setup screen, specify the same database information on each node. The database server name should be an AlwaysOn Availability Group (AAG) Listener. For me, the database server name is AAGWAP01.home.net.

NLB feature configuration

In the below screenshot you can see the Cluster-AHUB. VMWAP03-AHUB01 and VMWAP04-AHUB02 are part of the cluster. These servers host privileged services of the Windows Azure Pack.

The Cluster-AHUB is bound to the IP address 10.10.0.100.

I have configured the cluster operation mode to Multicast. Be careful: if you use unicast with virtual machines, don't forget to enable MAC spoofing in your virtual machine configuration.

DNS Aliases

Next I have configured DNS aliases pointing to the cluster cluster-ahub.home.net. So I have just added entries in the DNS as below:

Certificates

The certificates for privileged services are similar to the certificates for public services. I have used the same certificate template that I used to enroll certificates for the public services. The Issued To field contains the FQDN of the server. Next I have added each DNS alias to the Subject Alternative Name as below. In this way, the certificate can be used for all web services.

IIS binding configuration

Next, on each node I have reconfigured the site bindings for the admin API, the tenant API, the admin authentication site and the management portal for administrators. As a reminder, below you can find the DNS aliases related to the web services that I have set on my infrastructure:

  • Admin API (DNS Alias: wapadminapi.home.net);
  • Tenant API (DNS Alias: waptenantapi.home.net);
  • Admin authentication site (DNS Alias: wapadminauth.home.net);
  • Management portal for administrators (DNS Alias: wapadmin.home.net).

Repeat this procedure for each node with the same configuration except the SSL certificate.

Windows Azure Pack Configuration

N.B: I used the scripts described in this Hyper-v.nu topic for this part. You can find the documentation about these scripts in this TechNet topic.

Before the configuration works, it is necessary to reconfigure Windows Azure Pack. First, the WAP components have to be reconfigured to point to the load-balancers. Next, we have to re-establish trust between the authentication sites and the management portals. To finish, the FQDNs of the resource providers must be updated. These scripts must be run from a node that hosts privileged services.

Reconfigure WAP components to point to Load-Balancers

To reconfigure the WAP components, use the below script. You can change the variables to match your environment. First, the script updates the federation endpoints to the load-balancers. These federation endpoints are used by the admin and tenant sites to know the location of their authentication sites, and vice versa. Once the federation endpoints are updated, the endpoints of each web service are updated in the Windows Azure Pack database.

Import-Module MgmtSvcAdmin
### VARIABLES
## Environment settings
# SQL Server AlwaysOn DNS Listener containing the Windows Azure Pack databases
$server="AAGWAP01.home.net"

## Define the desired FQDNs and Ports
# Admin Site
$AdminSiteLB = "wapadmin.home.net"
$AdminSitePort = "443"
# Admin Authentication Site
$WinAuthSiteLB = "wapadminauth.home.net"
$WinAuthSitePort = "443"
# Tenant Site
$TenantSiteLB ="www.home.net"
$TenantSitePort = "443"
# Tenant Auth Site
$TenantAuthSiteLB ="auth.home.net"
$TenantAuthSitePort = "443"
# Admin API
$AdminApiLB ="wapadminapi.home.net"
$AdminApiPort = "443"
# Tenant API
$TenantApiLB = "waptenantapi.home.net"
$TenantApiPort = "443"
# Tenant Public API
$TenantPublicApiLB = "api.home.net"
$TenantPublicApiPort = "443"

### MAIN CODE
# Define the federation endpoints
$TenantMetadataEndpoint="https://${TenantAuthSiteLB}:$TenantAuthSitePort/federationMetaData/2007-06/FederationMetadata.xml"
$AdminMetadataEndpoint="https://${WinAuthSiteLB}:$WinAuthSitePort/federationMetaData/2007-06/FederationMetadata.xml"
$AdminSiteMetadataEndpoint="https://${AdminSiteLB}:$AdminSitePort/federationMetaData/2007-06/FederationMetadata.xml"
$TenantSiteMetadataEndpoint="https://${TenantSiteLB}:$TenantSitePort/federationMetaData/2007-06/FederationMetadata.xml"

$adminApiUri = "https://${AdminApiLB}:$AdminApiPort"
$windowsAuthSite = "https://${WinAuthSiteLB}:$WinAuthSitePort"

# Reconfigure Windows Azure Pack components to point to load balancers
Set-MgmtSvcFqdn -Namespace AdminSite -FQDN $AdminSiteLB -Server $server -Port $AdminSitePort
Set-MgmtSvcFqdn -Namespace AuthSite -FQDN $TenantAuthSiteLB -Port $TenantAuthSitePort -Server $server
Set-MgmtSvcFqdn -Namespace AdminAPI -FQDN $AdminApiLB -Port $AdminApiPort -Server $server
Set-MgmtSvcFqdn -Namespace TenantSite -FQDN $TenantSiteLB -Port $TenantSitePort -Server $server
Set-MgmtSvcFqdn -Namespace WindowsAuthSite -FQDN $WinAuthSiteLB -Port $WinAuthSitePort -Server $server
Set-MgmtSvcFqdn -Namespace TenantApi -FQDN $TenantApiLB -Port $TenantApiPort -Server $server
Set-MgmtSvcFqdn -Namespace TenantPublicApi -FQDN $TenantPublicApiLB -Port $TenantPublicApiPort -Server $server

Re-establish trust between the authentication sites and the management portals

Once you have run the above script, you have to execute the below commands. They re-establish trust between the authentication sites and the management portals.


Set-MgmtSvcRelyingPartySettings -Target Tenant -MetadataEndpoint $TenantMetadataEndpoint -DisableCertificateValidation -Server $server
Set-MgmtSvcRelyingPartySettings -Target Admin -MetadataEndpoint $AdminMetadataEndpoint -Server $server
Set-MgmtSvcIdentityProviderSettings -Target Membership -MetadataEndpoint $TenantSiteMetadataEndpoint -Server $server
Set-MgmtSvcIdentityProviderSettings -Target Windows -MetadataEndpoint $AdminSiteMetadataEndpoint -Server $server

Update FQDNs for resource providers

To finish, the FQDNs for the resource providers must be updated to the load-balancers. There are three resource providers to update: marketplace, monitoring and usageservice. For that you can use the below script:


Import-Module MgmtSvcAdmin

## Environment settings
# SQL Server AlwaysOn DNS Listener containing the Windows Azure Pack databases
$server="AAGWAP01.home.net"

# Admin Authentication Site
$WinAuthSiteLB = "wapadminauth.home.net"
$WinAuthSitePort = "443"
# Admin API
$AdminApiLB ="wapadminapi.home.net"
$AdminApiPort = "443"

$adminApiUri = "https://${AdminApiLB}:$AdminApiPort"
$windowsAuthSite = "https://${WinAuthSiteLB}:$WinAuthSitePort"

# credentials for performing actions
$password = ConvertTo-SecureString "password" -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ("home\rserre",$password)
$token = Get-MgmtSvcToken -Type Windows -AuthenticationSite $windowsAuthSite -ClientRealm "https://azureservices/AdminSite" -User $credential -DisableCertificateValidation

# Get a list of resource providers with the current configured endpoint values
$rp = Get-MgmtSvcResourceProvider -IncludeSystemResourceProviders -AdminUri $adminApiUri -Token $token -DisableCertificateValidation
$rp | Select Name, @{e={$_.AdminEndPoint.ForwardingAddress}}, @{e={$_.TenantEndpoint.ForwardingAddress}}

# new fqdn for resource provider marketplace
$resourceProviderName = "marketplace"
$adminEndpoint = "https://${AdminApiLB}:30018/"
$tenantEndpoint = "https://${AdminApiLB}:30018/subscriptions"
$usageEndpoint = $null
$healthCheckEndpoint = $null
$notificationEndpoint = $null

# Select the resource provider to update
$rp = Get-MgmtSvcResourceProvider -Name $resourceProviderName -AdminUri $adminApiUri -Token $token -DisableCertificateValidation

if ($rp.AdminEndpoint -and $adminEndpoint) {
    # update admin endpoint
    $rp.AdminEndpoint.ForwardingAddress = New-Object System.Uri($adminEndpoint)
}
if ($rp.TenantEndpoint -and $tenantEndpoint) {
    # update tenant endpoint
    $rp.TenantEndpoint.ForwardingAddress = New-Object System.Uri($tenantEndpoint)
}
if ($rp.UsageEndpoint -and $usageEndpoint) {
    # update usage endpoint
    $rp.UsageEndpoint.ForwardingAddress = New-Object System.Uri($usageEndpoint)
}
if ($rp.HealthCheckEndpoint -and $healthCheckEndpoint) {
    # update health check endpoint
    $rp.HealthCheckEndpoint.ForwardingAddress = New-Object System.Uri($healthCheckEndpoint)
}
if ($rp.NotificationEndpoint -and $notificationEndpoint) {
    # update notification endpoint
    $rp.NotificationEndpoint.ForwardingAddress = New-Object System.Uri($notificationEndpoint)
}
Set-MgmtSvcResourceProvider -ResourceProvider $rp -AdminUri $adminApiUri -Token $token -DisableCertificateValidation -Force

Re-run the above script with the below variables changed:


$resourceProviderName = "monitoring"
$adminEndpoint = "https://${AdminApiLB}:30020/"
$tenantEndpoint = "https://${AdminApiLB}:30020/"
$usageEndpoint = $null
$healthCheckEndpoint = $null
$notificationEndpoint = $null

And re-run the script once again with the below variables changed:


$resourceProviderName = "usageservice"
$adminEndpoint = "https://${AdminApiLB}:30022/"
$tenantEndpoint = "https://${AdminApiLB}:30022/"
$usageEndpoint = $null
$healthCheckEndpoint = $null
$notificationEndpoint = $null

To verify that the configuration is updated, run these PowerShell commands:


$rp = Get-MgmtSvcResourceProvider -IncludeSystemResourceProviders -AdminUri $adminApiUri -Token $token -DisableCertificateValidation
$rp | Select Name, @{e={$_.AdminEndPoint.ForwardingAddress}}, @{e={$_.TenantEndpoint.ForwardingAddress}}

SQL AlwaysOn configuration

I’m not a SQL Server guy, so I have followed the TechNet documentation regarding the AlwaysOn configuration. For each instance that hosts a Windows Azure Pack database, you have to run the below script:

sp_configure 'contained database authentication', 1
RECONFIGURE
GO

Next you have to add the Windows Azure Pack databases to the Availability Group (see the sketch after this list). For that you have to:

  • Check that the database recovery model is set to Full;
  • Make a full backup of each Windows Azure Pack database;
  • Add the WAP databases to the AAG.
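
Here is a hedged sketch of those three steps for a single database with the SqlServer PowerShell module, run on the primary replica; the instance name SQL01, the database name Microsoft.MgmtSvc.Store and the backup path are examples:

# 1. Set the recovery model to Full
Invoke-Sqlcmd -ServerInstance "SQL01" -Query "ALTER DATABASE [Microsoft.MgmtSvc.Store] SET RECOVERY FULL"

# 2. Take a full backup of the database
Backup-SqlDatabase -ServerInstance "SQL01" -Database "Microsoft.MgmtSvc.Store" -BackupFile "\\backup\share\MgmtSvc.Store.bak"

# 3. Add the database to the availability group (restore the backup on the secondary, then join it there too)
Add-SqlAvailabilityDatabase -Path "SQLSERVER:\SQL\SQL01\DEFAULT\AvailabilityGroups\AAGWAP01" -Database "Microsoft.MgmtSvc.Store"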

If you want information from someone who knows what he is doing with SQL Server, you can read this series.

Once the databases are in the AAG, you have to copy each security login from the first instance to the second. For that you can use this topic.

VM Clouds in High Availability

Architecture overview

Now that Windows Azure Pack is installed in high availability, we can install the VM Clouds part in the same way. For that we can use the same SQL Servers as for the WAP installation. We also need a Virtual Machine Manager deployed in high availability and two more servers to host Service Provider Foundation.

NLB configuration

As with the other load-balancing clusters, I have used NLB. VMWAP08-WEB01 and VMWAP08-WEB02 are in this cluster. These servers host Service Provider Foundation.

The Cluster-WEB is bound to the IP address 10.10.0.101. I have configured the cluster operation mode to Multicast. Be careful: if you use unicast with virtual machines, don't forget to enable MAC spoofing in your virtual machine configuration.

DNS Aliases

Next I have created DNS aliases for each role installed on the nodes of the cluster. So for example, I will use spf.home.net to connect to SPF from Windows Azure Pack.

Certificates

Two certificates have been created from a duplicate of the Web Server certificate template. The Issued To field contains the FQDN of the related server. Next I have added each DNS alias to the Subject Alternative Name as below. In this way, the certificate can be used for all web services.

Service Provider Foundation installation

To install Service Provider Foundation, please read this topic. However, please be careful about the below information:

  • On each node, the SQL Server information provided during the SPF installation should be the same. For me, the database server name is AAGWAP02.home.net;
  • Select the server certificate related to the node;
  • For security reasons, use different service accounts for the Admin, Provider, VMM and Usage web services on each node;
  • Create the same local account (with the same password) on each node.

After the SPF installation, don’t forget to add the SPF database to the AAG and to copy the security logins to the other instance (cf. the above part called SQL AlwaysOn configuration).

Connect to SPF from Windows Azure Pack

Open the administrator portal of Windows Azure Pack and click on VM Clouds. Register the SPF as below:

Install Virtual Machine Manager in high availability

To install Virtual Machine Manager in High availability, please read this topic.

Connect to Virtual Machine Manager from Windows Azure Pack

Now you can add the VMM cluster to Windows Azure Pack as below:

Implement VMM highly available
When Virtual Machine Manager is used, it is the management point of your virtual infrastructure. So VMM is a critical component and usually needs to be implemented in high availability. Moreover, when Windows Azure Pack is used in production, Virtual Machine Manager must be installed in high availability so that your tenants almost always have access to their VM management.

Design VMM highly available

To be installed in high availability, VMM uses the Windows Failover Clustering feature. One node is active while the other is passive. If the active node goes down, the passive node becomes active. No shared volume is needed in the cluster, except if you use a witness disk. The VMM binaries are installed on each machine. The VMM configuration is stored in the database, and the encryption key is stored in Active Directory.

To avoid a single point of failure, the SQL database should also be highly available. I recommend the AlwaysOn solution described by my friend Gilles Monville in this topic.

So in the below example, the VMM connection point is VMMConnector.home.net and the SQL connection point is AAGWAP02.home.net. From the Hyper-V host’s side, the VMM cluster is seen as a single VMM.

Implement VMM highly available

Install SQL AlwaysOn Availability group

The first thing to do to install VMM in high availability is to prepare the database. I'm not going to cover that in this topic. However, you can read this article to help you install your SQL AlwaysOn Availability Group.

Install VMM prerequisites on VMM nodes

Next, on each node, we have to install the VMM prerequisites. For that you need to download the Windows Assessment and Deployment Kit for Windows 8.1. You have to install:

  • Deployment Tools
  • Windows Preinstallation Environment (Windows PE)

Next, we are going to install the Failover Clustering feature:

Install-WindowsFeature Failover-Clustering

Create the failover cluster

Open the failover clustering console and select Validate Configuration as below:

Add the nodes that will be part of your cluster. On my side, I have added the two below servers:

Select Run all tests and click next.

To run the validation of the configuration, click on next.

I have some warnings because there is no eligible storage for my cluster. In my example I do not use a witness disk because I use dynamic quorum. Click on Finish to create the cluster.

Specify a Cluster Name and type an IP Address. Then click on next.

Click on Next to confirm settings and create the cluster.

As I said before, I have no eligible storage so no witness disk will be created.

Configure Active Directory objects

Now I pre-create the Active Directory objects for the VMM cluster role. VMMConnector is the name of the VMM role in the cluster. So open dsa.msc and create a computer object as below:

Then grant a full control permission on the above object. The trustee has to be the cluster account.

Now open the DNS console (dnsmgmt.msc) and add an A record as below:

To finish, add a full control permission for the cluster account:
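
These console steps can also be scripted; here is a hedged sketch assuming the ActiveDirectory and DnsServer modules, where the OU, the cluster account name Cluster-VMM$ and the IP address are examples:

# Pre-stage the VMM role computer object, disabled so the cluster can claim it
New-ADComputer -Name "VMMConnector" -Path "OU=Clusters,DC=home,DC=net" -Enabled $false

# Grant the cluster account full control (GA = generic all) on the object
dsacls "CN=VMMConnector,OU=Clusters,DC=home,DC=net" /G "home\Cluster-VMM$:GA"

# Create the A record for the VMM connection point (example IP); the full control
# permission on the record still has to be granted to the cluster account
Add-DnsServerResourceRecordA -ZoneName "home.net" -Name "VMMConnector" -IPv4Address 10.10.0.50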

Install VMM on the first node

Now everything is ready to install VMM on the first node. Mount the Virtual Machine Manager media on your server and run the installation. Once you select the VMM management server feature, you should see the below message. Click on Yes.

On the database screen, specify the information about your SQL Server. Click on Next.

Next, type the VMM role name and its IP address. It should be the same information that you set in the previous part.

When VMM is installed in high availability, you have to use a domain account as the service account. The encryption key has to be stored in Active Directory.

Choose your port configuration and click on next.

When VMM is installed in high availability, the library has to be created manually later.

Once the installation is finished, you can see the VMM role in the cluster and its associated IP address:

Install VMM on a second node

Now we can add an additional VMM node. For that, connect to the second node and mount the Virtual Machine Manager 2012 R2 media. Run the installation. As on the first node, when you click on the VMM management server feature, you should see the below message. Click on Next.

On the database configuration screen, all fields should be greyed out, because the configuration is collected from the first node.

As with the database configuration, everything is greyed out. Specify the account password and click Next.

Ports are also in read-only mode. Click on Next.

As on the first node, you can't create the library now. You have to create it manually later.

Checking

The first thing to check is the failover of the VMM service from the first node to the second node:

So everything works fine on the cluster side.

Next, open the Virtual Machine Manager console and specify the VMM connection point instead of the server name:

If you open the fabric and navigate to the servers, you should see your two VMM servers. :)

If you use SQL AlwaysOn, don't forget to integrate the VMM database into the AlwaysOn Availability Group. :)
