Register Windows Admin Center in Microsoft Azure

With Windows Server 2019 and Windows Admin Center, we are able to build a hybrid cloud in an easy way. First, Windows Admin Center provides a GUI to configure features such as Azure Backup, Azure Site Recovery or Azure File Sync. With Windows Server 2019, we can interconnect an On-Prem host to an Azure virtual network thanks to the Azure Network Adapter. Finally, Storage Migration Service enables you to migrate a file server to an Azure file service such as Azure File Sync. But to be able to leverage all these features from Windows Admin Center, it must be registered in Microsoft Azure. In this topic, I’ll show you step by step how to register Windows Admin Center in Microsoft Azure.

Requirements

To be able to follow this topic, you need the following:

  • An Azure subscription
  • A running Windows Admin Center (version 1809 at least).

Register Windows Admin Center in Microsoft Azure

From a web browser (Edge or Chrome), open Windows Admin Center and click on the gear icon in the top right corner. Then click on Azure and then Register.

Then copy the code, click on Device Login and paste the code you just copied. A Microsoft login pop-up should appear: enter your Azure credentials.

If you have several tenants, choose the right one. You can find the tenant ID in the Azure Portal by clicking on Switch Directory. If you have already registered a Windows Admin Center before, you can reuse the existing Azure AD app by selecting the corresponding option.
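
If you prefer the command line, you can list the tenant IDs available to your account with the Az PowerShell module; a minimal sketch, assuming the Az module is installed:

# Sign in, then list the tenants visible to this account
Connect-AzAccount
Get-AzTenant | Select-Object Id, Name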

Now you are asked to grant permissions to the Azure AD app. Open the Azure Portal in the browser of your choice.

Then navigate to App registrations and select your Windows Admin Center app. Edit its settings and click on Required permissions. Finally, click on Grant permissions.

If the registration succeeded, Windows Admin Center should show the following information.

Now you can enjoy Azure Hybrid features such as Azure Backup from Windows Admin Center.

If you wish, you can also use Azure Active Directory to authenticate users and administrators on Windows Admin Center.

Conclusion

Windows Server 2019 and Windows Admin Center promise to simplify hybrid scenarios. Thanks to Windows Admin Center, we are able to configure On-Prem hosts for Azure Site Recovery and Azure Backup. The “hybrid” extensions of Windows Admin Center are still in preview, and simply by upgrading the extensions we’ll get more features. This is why Windows Admin Center is a good product (and it’s free!).

Don’t worry, Storage Spaces Direct is not dead!

Usually, I don’t write about news but only about technical details. But with the release of Windows Server 2016 version 1709, a lot of misinformation has been written and I’d like to offer another way to look at this.

First of all, it is important to understand what happened to Windows Server 2016. Microsoft has modified how Windows Server is distributed to customers. There are two kinds of release channels:

  • LTSC (Long-Term Servicing Channel): This is Windows Server with 5 years of mainstream support and 5 years of extended support. You’ll get security and quality updates but no new features. Both Server Core and Server with a GUI are supported in this channel. Microsoft expects to release a new LTSC version every 2 or 3 years.
  • SAC (Semi-Annual Channel): In this channel, Microsoft releases a new version every 6 months. Each release is supported for 18 months from its initial release and should bring new features. Only Server Core is supported in this channel.

So this month’s release, called 1709 (2017 = 17, September = 09: 1709), is part of the SAC. In 6 months, a new Semi-Annual Channel release called 1803 should follow.

But where is Storage Spaces Direct?

Storage Spaces Direct is the main piece for running Software-Defined Storage with the Microsoft solution. This feature was released with Windows Server 2016, now referred to as version 1607 in the same year-month naming scheme. Windows Server 1607 is an LTSC release. Storage Spaces Direct (S2D for friends) works great with this release, and I have deployed plenty of S2D clusters which are currently running in production without issues (yes, I had some issues, but they were resolved quickly).

This month, Microsoft released Windows Server 1709, which is a SAC release. It mainly contains container improvements and, the reason for this topic, no support for S2D. This is a SAC release, not a service pack: you can no longer compare a service pack with a SAC release. SAC is a continuous system upgrade, while a service pack is mainly an aggregate of updates. Are you running S2D? Don’t install the 1709 release and wait 6 months… you’ll see 🙂

Why was S2D support removed?

Storage is a complicated component. We need stable and reliable storage, because today a company’s data lives there. If the storage is gone, the company can close down.

I can tell you that Microsoft works hard on S2D to bring you the best Software-Defined Storage solution. But the level of validation required for production was not reached in time to ship S2D with Windows Server 1709. What do you prefer: a buggy S2D release, or waiting 6 months for a high-quality product? For my part, I prefer to wait 6 months for a better product.

Why Storage Spaces Direct is not dead

Over the last two days, I have read some posts saying that Microsoft is pushing Azure / Azure Stack and doesn’t care about On-Prem solutions. Yes, it’s true that today Microsoft talks mostly about Azure / Azure Stack, and I think it is a shame. But the Azure Stack solution is based on Storage Spaces Direct. Microsoft needs to improve this feature to deploy more and more Azure Stack.

Secondly, Microsoft has presented a new GUI tool called Project Honolulu, including a module to manage hyperconverged solutions. You may have seen the presentation at Ignite. Why would Microsoft develop a product for a technology it wants to give up?

Finally, I sometimes work with the product group in charge of S2D. I can tell you they work hard to make the product even better. I have the good fortune of being able to see and try the next new features of S2D.

Conclusion

If you are running S2D on Windows Server 2016, keep Windows Server 1607 and wait for the next release in SAC or LTSC. If you want to run S2D but the announcement worries you, rest assured that Microsoft will not leave S2D behind. You can deploy S2D with Windows Server 1607 or maybe wait for Windows Server 1803 (next March). Be sure of one thing: Storage Spaces Direct is not dead!

Update Mellanox network adapter firmware

Like any others, Mellanox network adapters should be updated to the latest firmware to solve issues. When you read the Mellanox firmware release notes, you can see how many reported issues have been solved (usually, a lot :)). Mellanox provides tools to update and manage the firmware from Linux, FreeBSD, VMware ESXi, Windows and Windows PE. Thanks to this set of tools, you can update Mellanox network adapter firmware from a running operating system. In this topic, we will see how to manage the firmware from Windows Server 2016 Datacenter Core and from VMware ESXi 6.5u1.

This topic shows you how to update a Mellanox network adapter with firmware from Mellanox. If you have a vendor-branded Mellanox network adapter (Dell, IBM, etc.), please check the related documentation from the vendor.

Requirements

First, you need to identify which Mellanox network adapter is installed in your system. You can retrieve this information from a sticker on the network adapter or from the invoice. In my lab, I have two Mellanox models:

  • ConnectX3-Pro (MCX312B-XCCT)
  • ConnectX3 (MCX312A-XCBT)

N.B.: If you can’t get the model of the network adapter, run mlxfwmanager once the Mellanox tools are deployed. Then retrieve the PSID information and search for it on Google.
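
For instance, once the tools described below are installed on Windows, mlxfwmanager can report the PSID directly; a minimal sketch, assuming the default WinMFT install path:

# Query the installed adapters; the output includes the PSID
cd 'C:\Program Files\Mellanox\WinMFT'
.\mlxfwmanager.exe --query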

Once you have identified the network adapter model, you can download the firmware from Mellanox. Usually, I type “Firmware MCX312B-XCCT” in Google, for example. Then I can reach the firmware webpage on the Mellanox site, and download and unzip the firmware.

You also need the Mellanox toolset called Mellanox Firmware Tools (MFT). You can get the documentation from this location. If you need to update firmware from VMware ESXi 6.5u1, download the two VIB files as below:

If you plan to update firmware from Windows Server, download the following executable:

Update firmware from VMware ESXi

First, we need to install MFT on ESXi. A reboot is required, so I recommend placing your server in maintenance mode. From vCenter or the ESXi web interface, upload the firmware and VIB files to a datastore.

Then open an SSH session and navigate to /vmfs/volumes/<your datastore>. Copy the path with the datastore ID as below:

Then install both VIB files with the command esxcli software vib install -v <path to vib file>:

Then reboot the server. Once it has rebooted, you can navigate to /opt/mellanox/bin. The command /opt/mellanox/bin/mst status gives you the installed Mellanox devices.

Then you can flash your device by running /opt/mellanox/bin/flint -d <device> -i <path to firmware file> burn

After the firmware is updated on the Mellanox network adapter, you need to reboot your server again. If you open FlexBoot (Ctrl+B at startup), you can see the new version. You can also use /opt/mellanox/bin/mlxfwmanager to get this information.

Update firmware from Windows Server

I recommend placing your node in a paused state because a reboot is required. On the MFT web page, download the Windows executable as indicated in the following screenshot:

Copy the executable to the server and run the installation. Once MFT is installed, navigate to C:\Program Files\Mellanox\WinMFT. You can run mst status to get information about the installed Mellanox network adapters:

Then you can run flint -d <device> -i <path to firmware> burn
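
For illustration, a complete burn sequence can look like the sketch below; the device name and firmware file name are hypothetical examples, so replace them with the value returned by mst status and with your downloaded image:

cd 'C:\Program Files\Mellanox\WinMFT'
# List devices first; the device name below is an example only
.\mst status
# The firmware file name is also an example - use your own image
.\flint.exe -d mt4103_pci_cr0 -i C:\Firmware\fw-ConnectX3Pro-MCX312B-XCCT.bin burn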

As you can see, after the firmware is updated, the new version is not yet active (you can use mlxfwmanager.exe to get this information). You need to reboot the server in order to use the new firmware version.

After a reboot, you can see that the new version is the only one.

Force a PSID change to remove vendor custom firmware

There is a chance of bricking your network adapter! I’m not responsible in case of hardware damage.

Recently, I bought a Mellanox ConnectX3 with an IBM PSID on eBay. I wanted to flash the firmware with a stock Mellanox image. To make this change, I ran /opt/mellanox/bin/flint -d <device> -i <path to firmware> -allow_psid_change burn. flint then asked me if I really wanted to change the PSID, because it is not recommended.

After a reboot, I checked with /opt/mellanox/bin/mlxfwmanager and the PSID had changed.

Host Veeam backups on Storage Spaces Direct

Storage Spaces Direct (S2D) is well known for hosting virtual machines in the disaggregated or hyperconverged model. But S2D can also be used as a backup repository. You can implement an S2D cluster in the disaggregated model to host virtual machines but also to store backups. Because Veeam Backup & Replication can leverage a repository based on SMB shares, Veeam backups can be hosted on an S2D cluster through a Scale-Out File Server (SOFS).

Veeam Backup & Replication 9.5 provides advanced ReFS integration for faster synthetic full backup creation, reduced storage requirements, and improved reliability and backup/restore performance. With Storage Spaces Direct, Microsoft mainly recommends ReFS as the file system. This is why, if you have an S2D cluster (or plan to deploy one) and Veeam, it can be a great opportunity to host backups on the S2D cluster.

An S2D cluster provides three resiliency models: mirroring, parity and mixed resiliency. A mirrored volume is not a good option for storing backups because too much space is used for resiliency (50% of raw capacity in 2-way mirroring, 67% in 3-way mirroring); mirroring is better suited to storing virtual machines. Parity is a good option for backups: the more storage you add, the more efficient it becomes. However, it requires at least a 4-node S2D cluster. Mixed resiliency is also a good option because it mixes mirroring and parity, and so performance and efficiency, but it requires careful design.
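
To see how much raw capacity a volume really consumes, you can compare its size with its footprint on the pool. A minimal sketch, run on a cluster node (Size and FootprintOnPool are standard properties returned by Get-VirtualDisk):

# Show storage efficiency per virtual disk: usable size vs raw pool footprint
Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName,
    @{n='SizeGB';e={$_.Size/1GB}},
    @{n='FootprintGB';e={$_.FootprintOnPool/1GB}},
    @{n='Efficiency';e={'{0:P0}' -f ($_.Size/$_.FootprintOnPool)}}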

In this topic, I’ll implement an S2D cluster with a dual parity volume to store Veeam backups.

4-node S2D cluster deployment

First of all, you have to deploy an S2D cluster. You can follow this topic to implement the cluster. For this article, I have deployed a 4-node cluster. After the operating system and drivers were installed, I ran the following PowerShell script. It installs the required features on all nodes and enables RDMA on network adapters whose name contains Cluster.

$Nodes = "VMSDS01","VMSDS02","VMSDS03","VMSDS04"

Foreach ($Node in $Nodes){
    Try {
        $Cim = New-CimSession -ComputerName $Node -ErrorAction Stop
        Install-WindowsFeature Failover-Clustering, FS-FileServer -IncludeManagementTools -Restart ComputerName $Node -ErrorAction Stop | Out-Null

        Enable-NetAdapterRDMA -CimSession $Node -Name Cluster* -ErrorAction Stop | Out-Null
    }
    Catch {
        Write-Host $($Error[0].Exception.Message) -ForegroundColor Red -BackgroundColor Green
        Exit
    }
}

Then, from a node of the cluster, I ran the following cmdlets:

$Nodes = "VMSDS01","VMSDS02","VMSDS03","VMSDS04"
$ClusIP = "10.10.0.44"
$ClusNm = "Cluster-BCK01"

Test-Cluster -Node $Nodes -Include "Storage Spaces Direct", Inventory,Network,"System Configuration"
New-Cluster -Node $Nodes -StaticAddress $ClusIP -NoStorage
Enable-ClusterS2D

New-Volume -StoragePoolFriendlyName "*Cluster-BCK01" -FileSystem CSVFS_ReFS -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -Size 100GB

Rename-Item c:\ClusterStorage\Volume1 BCK01

At this point, the cluster is created and Storage Spaces Direct is enabled. The cluster is called Cluster-BCK01 (IP: 10.10.0.44) and a dual parity volume has been created. Then open the permissions of the OU where the Cluster Name Object (CNO) of the cluster is located, and grant the CNO the permission to create computer objects.
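
If you prefer the command line, dsacls can grant this right; a sketch where the OU distinguished name and domain are hypothetical examples to adapt to your directory:

# Hypothetical OU and domain - allow the CNO to create child computer objects
dsacls "OU=Clusters,DC=mydomain,DC=net" /G "MYDOMAIN\Cluster-BCK01$:CC;computer"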

Open Failover Cluster Manager and rename the networks to ease management.

You can also check that you have all the enclosures and physical disks.

When S2D was enabled, a storage pool containing all physical disks was automatically created. I renamed it Backup Pool.

You can also check that the Cluster Shared Volume has been created properly.

Next, run the following cmdlets to create the SOFS role, create a folder in the volume and share this folder.

Add-ClusterScaleOutFileServerRole -Name BCK-REPO
New-Item -ItemType Directory -Path '\\vmsds01\c$\ClusterStorage\BCK01' -Name VMBCK01
New-SmbShare -Name 'VMBCK01' -Path C:\ClusterStorage\BCK01\VMBCK01 -FullAccess everyone

If you go back to the cluster, you can see that the Scale-Out File Server role has been created, as well as the share.

You can edit the permissions of the folder to grant specific rights to the account that will be used by Veeam Backup & Replication.
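
For example, to replace the wide-open Everyone access with a dedicated account, you can run something like the following on a cluster node; the service account name is a hypothetical example:

# Hypothetical service account - adjust to your environment
$Account = 'MYDOMAIN\svc-veeam'

# Replace the Everyone share permission with the service account
Revoke-SmbShareAccess -Name 'VMBCK01' -ScopeName 'BCK-REPO' -AccountName 'Everyone' -Force
Grant-SmbShareAccess  -Name 'VMBCK01' -ScopeName 'BCK-REPO' -AccountName $Account -AccessRight Full -Force

# Grant NTFS modify rights on the folder (inherited by files and subfolders)
icacls 'C:\ClusterStorage\BCK01\VMBCK01' /grant "${Account}:(OI)(CI)M"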

Veeam Backup & Replication configuration

First of all, I create a new backup repository in Veeam Backup & Replication.

Then choose the shared folder backup repository.

Next, specify the shared folder where you want to store the backups. My SOFS role is called BCK-REPO and the share is called VMBCK01, so the path is \\BCK-REPO\VMBCK01. Also specify credentials that have permissions on the shared folder.

In the next window, you can specify advanced properties.

Then I choose not to enable the vPower NFS service because I back up only Hyper-V VMs.

To finish the backup repository creation, review the properties and click on Apply.

Run a backup on the S2D repository

To test the new backup repository, I choose to create a backup copy job.

Then choose the VM that will be in the backup copy job.

In the next screen, choose the S2D backup repository, the number of restore points and the archival settings.

Next, choose whether or not you want to use a WAN accelerator.

When the wizard finishes, the backup copy job starts processing. You can see data arriving in the shared folder.

When the backup is finished, you can see that the amount of data processed and the size on disk are different. This is because Veeam leverages ReFS to reduce storage usage.

Conclusion

Microsoft Storage Spaces Direct can be used to store not only your virtual machines but also your backups. If you plan an S2D deployment in the disaggregated model, you can design the solution to store both VM data and backup jobs. The main constraint is that backups should be located on a parity volume (or a mixed resiliency volume), which requires at least a 4-node S2D cluster.

RDS 2016 Farm: Configure File Servers for User Profile Disks

In the previous topics of this series, we deployed the RDS farm in Azure. Now we need a highly available file service to manage user profile disks (UPD). To support the high availability, I leverage Storage Spaces Direct (S2D) and Scale-Out File Server (SOFS). For more information about the deployment of S2D, you can read this topic (based on the hyperconverged model). For Remote Desktop usage, I’ll deploy S2D in the disaggregated model. In this topic, I’ll configure the file servers for User Profile Disks. This series consists of the following topics:

I’ll deploy this file service by using only PowerShell. Before following this topic, be sure that your Azure VMs have joined the Active Directory and that they have two network adapters in two different subnets (one for the cluster and the other for management). I have also set static IP addresses from the Azure portal.

Deploy the cluster

First of all, I install these features on both file server nodes:

Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools

Then I install the Failover Clustering RSAT on the management VM.

Install-WindowsFeature RSAT-Clustering

Next, I test whether the cluster nodes support Storage Spaces Direct:

Test-Cluster -Node "AZFLS0","AZFLS1" -Include "Storage Spaces Direct", Inventory,Network,"System Configuration"

If the test passes successfully, you can run the following cmdlet to deploy the cluster with the name UPD-Sto and the IP 10.11.0.29.

New-Cluster -Node "AZFLS0","AZFLS1" -Name UPD-Sto -StaticAddress 10.11.0.29 -NoStorage

Once the cluster is created, grant the Cluster Name Object (UPD-Sto) the right to create computer objects in the OU where it is located. This permission is required so that the cluster can create the computer object for the SOFS role.

Enable and configure S2D and SOFS

Now that the cluster is created, you can enable S2D (I ran the following cmdlet on a file server node through PowerShell remoting).

Enable-ClusterS2D

Then I create a new 100GB volume formatted with ReFS. This volume uses 2-way mirroring resiliency.

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName UPD01 -FileSystem CSVFS_REFS -Size 100GB

Now I rename the Volume1 folder in ClusterStorage to UPD-01:

Rename-Item C:\ClusterStorage\Volume1 UPD-01

Then I add the Scale-Out File Server role to the cluster and call it SOFS:

Add-ClusterScaleOutFileServerRole -Name SOFS

To finish, I create a folder called Profiles in the volume and share it to everyone (not recommended in production) under the name UPD$:

New-Item -Path C:\ClusterStorage\UPD-01\Profiles -ItemType Directory
New-SmbShare -Name 'UPD$' -Path C:\ClusterStorage\UPD-01\Profiles -FullAccess everyone

Now my storage is ready and I am able to reach \\SOFS.homecloud.net\UPD$.
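
A quick way to validate the share from the management VM is a sketch like this (assuming TCP port 445 is open between the subnets):

# Check SMB connectivity to the SOFS name, then the share itself
Test-NetConnection -ComputerName SOFS.homecloud.net -Port 445
Test-Path '\\SOFS.homecloud.net\UPD$'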

Next topic

In the next topic, I will deploy a session collection and configure it. Then I will add the certificates for the Remote Desktop components.

RDS 2016 Farm: Deploy RDS 2016 farm in Azure

This topic is part of a series about how to deploy a Windows Server 2016 RDS farm in Microsoft Azure. In previous topics, we saw how to deploy networks, storage and virtual machines in Azure. We also added the domain controller to the On-Prem forest across the Site-to-Site VPN. In this topic, we will deploy the RDS 2016 farm in Azure, running on Windows Server 2016. This series consists of the following topics:

Deploy the Azure SQL database

In the previous topics, we did not deploy the Azure SQL database. In this part, I will deploy this component. In Microsoft Azure, open the Marketplace and look for SQL Database. Create a blank database and a new SQL server. I have called the SQL server sql-rds and the database DBA-Broker.
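
The same deployment can be scripted. Here is a minimal sketch using the current Az PowerShell module; the resource group name and location are assumptions to adapt to your environment:

# Hypothetical resource group and location - adjust to your environment
New-AzSqlServer -ResourceGroupName 'RG-RDS' -ServerName 'sql-rds' `
                -Location 'westeurope' -SqlAdministratorCredentials (Get-Credential)

New-AzSqlDatabase -ResourceGroupName 'RG-RDS' -ServerName 'sql-rds' -DatabaseName 'DBA-Broker'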

Deploy RDS 2016 Farm

Once all your VMs have joined the Active Directory, you can create a new session-based Remote Desktop deployment. The first broker server is AZRDB0, the first RD Session Host server is AZRAH0 and the first RD Web Access server is AZRDA0. From AZRDB0, I run the following cmdlet:

New-RDSessionDeployment -ConnectionBroker AZRDB0.homecloud.net `
                        -SessionHost AZRAH0.homecloud.net `
                        -WebAccessServer AZRDA0.homecloud.net

Next, in the Server Manager of AZRDB0, add all servers of the RDS farm.

Then I add additional servers to the RDS farm. First, I add two license servers. Each server will hold some of the licenses, so even if one server is down, a license server is still available.

Add-RDServer -ConnectionBroker AZRDB0.homecloud.net -Server AZRDB0.homecloud.net -Role RDS-LICENSING
Add-RDServer -ConnectionBroker AZRDB0.homecloud.net -Server AZRDB1.homecloud.net -Role RDS-LICENSING

Then I add an additional RD Session Host server:

Add-RDServer -ConnectionBroker AZRDB0.homecloud.net -Server AZRAH1.homecloud.net -Role RDS-RD-SERVER

And I add an additional RD Web Access server:

Add-RDServer -ConnectionBroker AZRDB0.homecloud.net -Server AZRDA1.homecloud.net -Role RDS-WEB-ACCESS

In Server Manager, if you browse the Remote Desktop deployment, you should see the following diagram.

Configure the RD Broker in High Availability

Before configuring the RD Broker in High Availability mode, go back to the Azure Portal and open the SQL database settings. Click on the Connection strings link.

Then create two DNS records for the broker client access name, each pointing to one RD Broker.

N.B.: you can use an Azure Load Balancer instead of DNS round-robin for the RD Broker. For more information, you can read this topic.
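
On a domain DNS server, the two round-robin records can be created as in this sketch; the broker IP addresses are hypothetical, so replace them with the real addresses of AZRDB0 and AZRDB1:

# Hypothetical broker IPs - two A records for the same name enable DNS round-robin
Add-DnsServerResourceRecordA -ZoneName 'homecloud.net' -Name 'broker' -IPv4Address '10.11.0.10'
Add-DnsServerResourceRecordA -ZoneName 'homecloud.net' -Name 'broker' -IPv4Address '10.11.0.11'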

Then install the SQL Server Native Client on each RD Broker server. Next, run the following cmdlet, replacing the SQL server name, database name and password in the connection string.

Set-RDConnectionBrokerHighAvailability -ConnectionBroker 'azrdb0.homecloud.net' `
                                       -DatabaseConnectionString 'Driver={SQL Server Native Client 11.0};Server=tcp:sql-rds.database.windows.net,1433;Database=DBA-Broker;Uid=master@sql-rds;Pwd={DATABASE PASSWORD};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;' `
                                       -ClientAccessName 'broker.homecloud.net'

To finish, run the following cmdlet to add an additional RD Broker server:

Add-RDServer -ConnectionBroker AZRDB0.homecloud.net -Server AZRDB1.homecloud.net -Role RDS-CONNECTION-BROKER

If you come back to the deployment overview in Server Manager, the RD Connection Broker should be marked as being in High Availability mode.

Configure RD Gateway

To add RD Gateways, click on the + symbol in the deployment overview. Then select both RD Gateway servers and add them to the Selected box.

Provide an SSL certificate name, which should be the FQDN used to reach the RD Gateway servers.

Then click on Add to start the RD Gateway deployment.

Now the deployment overview should look like this:

On each RD Gateway server, open the RD Gateway console and edit the server properties. Then navigate to Transport Settings and disable UDP.

In the Server Farm tab, add both servers and click on Apply.

Repeat these steps for each RD Gateway server.

Deploy the Load Balancer

A load balancer is required for the RD Web Access and RD Gateway servers. You can also use an Azure Load Balancer for the RD Broker, but in this example I deploy an Azure Load Balancer only for RD Web Access and RD Gateway. Open the Marketplace and type load balancer in the search box.

Give the load balancer a name and select Public. Select the public IP address previously created from the JSON template.

Once the Azure Load Balancer is created, open the Backend Pools settings. Then click on Add.

Specify a name for the backend pool and associate it with an availability set. Select the RD Access availability set and add both virtual machines.

Next, add a health probe based on TCP 443 (HTTP on 443 is currently not supported).

Also add a load balancing rule based on TCP. Specify the public TCP port and the backend port, then select the health probe.
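
For reference, the equivalent probe and rule can also be created with the Az PowerShell module; the load balancer, resource group and backend pool names below are hypothetical examples:

# Hypothetical names - retrieve the load balancer and add the probe
$lb = Get-AzLoadBalancer -Name 'LB-RDS' -ResourceGroupName 'RG-RDS'
$lb | Add-AzLoadBalancerProbeConfig -Name 'Probe443' -Protocol Tcp -Port 443 `
      -IntervalInSeconds 15 -ProbeCount 2 | Set-AzLoadBalancer

# Re-read the updated object, then bind the frontend, pool and probe in a rule
$lb    = Get-AzLoadBalancer -Name 'LB-RDS' -ResourceGroupName 'RG-RDS'
$fe    = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb
$pool  = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name 'RDWeb-Pool'
$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name 'Probe443'
$lb | Add-AzLoadBalancerRuleConfig -Name 'Rule443' -FrontendIpConfiguration $fe[0] `
      -BackendAddressPool $pool -Probe $probe -Protocol Tcp -FrontendPort 443 -BackendPort 443 |
      Set-AzLoadBalancer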

Now you can browse to the public IP (https://<IP>/rdweb). You should get the RD Web Access authentication page.

What is missing?

For the moment, no certificate has been deployed, so you will get security alerts in the web browser and the RD Gateway will not work yet. We will configure the certificates in another topic.

Next topic

In the next topic, I’ll deploy a SOFS cluster based on Storage Spaces Direct to store the User Profile Disks.
