Storage Spaces Direct: performance tests between 2-Way Mirroring and Nested Resiliency

Microsoft has released Windows Server 2019 with a new resiliency mode called nested resiliency. This mode enables a two-node S2D cluster to handle two failures. Nested resiliency comes in two flavors: nested two-way mirroring and nested mirror-accelerated parity. I'm certain that nested two-way mirroring is faster than nested mirror-accelerated parity, but the first one provides only 25% usable capacity while the second one provides roughly 40%. After discussing with some customers, most of them prefer to improve usable capacity rather than performance. Therefore, I expect to deploy more nested mirror-accelerated parity than nested two-way mirroring.

Before Windows Server 2019, two-way mirroring (which provides 50% usable capacity) was mandatory in a two-node S2D cluster. Now, with Windows Server 2019, we have the choice. So I wanted to compare performance between two-way mirroring and nested mirror-accelerated parity. Moreover, I wanted to know whether compression and deduplication have an impact on performance and CPU usage.

N.B.: I executed these tests in my lab, which is composed of do-it-yourself servers. What I want to show is a trend: what could be the bottleneck in some cases and whether nested resiliency has an impact on performance. So please, don't blame me in the comment section 🙂

Test platform

I ran my tests on the following platform, composed of two nodes with the following configuration each:

  • CPU: 1x Xeon 2620v2
  • Memory: 64GB of DDR3 ECC Registered
  • Storage:
    • OS: 1x Intel SSD 530 128GB
    • S2D HBA: Lenovo N2215
    • S2D storage: 6x SSD Intel S3610 200GB
  • NIC: Mellanox ConnectX-3 Pro (firmware 5.50)
  • OS: Windows Server 2019 GA build

Both servers are connected to two Ubiquiti ES-16-XG switches. Even though these switches don't support PFC/ETS and so on, RDMA is working (I tested it with the Test-RDMA script). I don't have enough traffic in my lab to disturb RDMA without a proper configuration. Even though I implemented it this way in my lab, it is not supported and you should not use such a configuration in production. On the Windows Server side, I added both Mellanox network adapters to a SET (Switch Embedded Teaming) and created three virtual network adapters (a sketch of this configuration follows the list below):

  • 1x Management vNIC for RDP, AD and so on (routed)
  • 2x SMB vNICs for live migration and SMB traffic (not routed). Each vNIC is mapped to a pNIC.
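
For reference, below is a minimal sketch of that network configuration. The switch name (SW-RDMA) and adapter names (Mellanox-01/Mellanox-02) are placeholders from my setup; adapt them to yours.

# Create the SET switch on both Mellanox ports, without a default management vNIC
New-VMSwitch -Name SW-RDMA -NetAdapterName "Mellanox-01","Mellanox-02" -EnableEmbeddedTeaming $True -AllowManagementOS $False

# Create the three host vNICs
Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName SW-RDMA
Add-VMNetworkAdapter -ManagementOS -Name SMB-01 -SwitchName SW-RDMA
Add-VMNetworkAdapter -ManagementOS -Name SMB-02 -SwitchName SW-RDMA

# Map each SMB vNIC to one physical NIC and enable RDMA on them
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName SMB-01 -PhysicalNetAdapterName "Mellanox-01"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName SMB-02 -PhysicalNetAdapterName "Mellanox-02"
Enable-NetAdapterRdma -Name "vEthernet (SMB-01)","vEthernet (SMB-02)"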

To test the solution, I use VMFleet. First, I created volumes in two-way mirroring without deduplication, then I enabled deduplication. Next, I deleted and recreated the volumes in nested mirror-accelerated parity, without deduplication. Finally, I enabled compression and deduplication.
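
For illustration, this is roughly how deduplication is enabled on an S2D CSV; the volume path below is an example from my lab, and the feature must be installed on every node first.

# Install the deduplication feature on each node
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication with the Hyper-V profile on the CSV
Enable-DedupVolume -Volume "C:\ClusterStorage\CSV-01" -UsageType HyperV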

I ran VMFleet with a block size of 4 KB, 30 outstanding I/Os and 2 threads per VM.
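
For illustration, the sweeps I ran correspond roughly to the following calls; the parameter letters are those of the start-sweep.ps1 script shipped with VMFleet, so check your copy before reusing them.

# 4 KB blocks, 2 threads per VM, 30 outstanding I/Os, 5-minute runs
.\start-sweep.ps1 -b 4 -t 2 -o 30 -w 0 -d 300    # 100% read run
.\start-sweep.ps1 -b 4 -t 2 -o 30 -w 30 -d 300   # 70/30 read/write run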

Two-Way Mirroring without deduplication results

First, I ran the test without write workloads to see the "maximum" performance I can get. My cluster is able to deliver 140K IOPS with a CPU usage of 82%.

In the following test, I added 30% write workloads. The total IOPS is almost 97K, with 87% CPU usage.

As you can see, RSS and VMMQ are set correctly because all cores are used.

Two-Way Mirroring with deduplication

First, you can see that deduplication is efficient: I saved 70% of the total storage.
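
If you want to check the savings yourself, the deduplication status cmdlet reports them per volume; a quick verification sketch:

Get-DedupStatus | Select-Object Volume, SavingsRate, SavedSpace, UsedSpace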

Then I ran a VMFleet test and, as you can see, there is a huge drop in performance. By looking closely at the screenshot below, you can see it's because of my CPU, which reaches almost 97%. I'm sure that with a better CPU I could get better performance. So, first trend: deduplication has an impact on CPU usage, and if you plan to use this feature, don't choose a low-end CPU.

By adding 30% write, I can't expect better performance. The CPU still limits the overall cluster performance.

Nested Mirror-Accelerated Parity without deduplication

After I recreated the volumes, I ran a test with 100% read. Compared to two-way mirroring, there is a slight drop: I lost "only" 17K IOPS, reaching 123K IOPS. The CPU usage is 82%. You can also see that the latency is great (2 ms).

Then I added 30% write, and we can see the performance drop compared to two-way mirroring. My CPU usage reached 95%, which limits performance (but the latency is contained to 6 ms on average). So nested mirror-accelerated parity requires more CPU than two-way mirroring.

Nested Mirror-Accelerated Parity with deduplication

First, deduplication also works great on a nested mirror-accelerated parity volume: I saved 75% of the storage.

As with two-way mirroring with deduplication, I get poor performance because of my CPU (97% usage).

Conclusion

First, deduplication works great if you need to save space, at the cost of higher CPU usage. Secondly, nested mirror-accelerated parity requires more CPU than two-way mirroring, especially when there are write workloads. The following charts illustrate the CPU bottleneck. In the case of deduplication, latency always increases, and I think it is because of the CPU bottleneck. This is why I recommend being careful about the CPU choice.

Another interesting thing is that nested mirror-accelerated parity produces only a slight performance drop compared to 2-way mirroring, but brings the ability to support two failures in the cluster. With deduplication enabled, we can save space and increase the usable capacity. In a two-node configuration, I would recommend nested mirror-accelerated parity to customers, while paying attention to the CPU.

Support two failures in 2-node S2D cluster with nested resiliency

Microsoft just released Windows Server 2019 with a lot of improvements for Storage Spaces Direct. One of these improvements is nested resiliency, which is specific to 2-node S2D clusters. Thanks to this feature, a 2-node S2D cluster can now support two failures, at the cost of storage dedicated to resiliency. Nested resiliency comes in two flavors:

  • Nested two-way mirroring: it's more or less a four-way mirror that provides 25% usable capacity
  • Nested mirror-accelerated parity: it's a volume composed of a mirror tier and a parity tier, which provides better capacity efficiency

The following slide comes from a deck presented at Ignite.

To support two failures, a huge amount of storage is consumed for resiliency. Fortunately, Windows Server 2019 allows you to run deduplication on ReFS volumes. But be careful about CPU usage and storage device performance. I'll talk about that in a future topic.

Create a Nested Two-Way Mirror volume

To create a nested two-way mirroring volume, you have to create a storage tier and then a volume. Below you can find an example from my lab (all-flash solution), where the storage pool is called VMPool:

New-StorageTier -StoragePoolFriendlyName VMPool -FriendlyName Nested2wMirroringTier -ResiliencySettingName Mirror -NumberOfDataCopies 4 -MediaType SSD

New-Volume -StoragePoolFriendlyName VMPool -FriendlyName CSV-01 -StorageTierFriendlyNames Nested2wMirroringTier -StorageTierSizes 500GB

Create a Nested Mirror-Accelerated Parity volume

To create a nested mirror-accelerated parity volume, you need to create two tiers and then a volume composed of these tiers. In the example below, I create the two tiers and then two nested mirror-accelerated parity volumes:

New-StorageTier -StoragePoolFriendlyName VMPool -FriendlyName NestedMirror -MediaType SSD -ResiliencySettingName Mirror -NumberOfDataCopies 4

New-StorageTier -StoragePoolFriendlyName VMPool -FriendlyName NestedParity -ResiliencySettingName Parity -NumberOfDataCopies 2 -PhysicalDiskRedundancy 1 -NumberOfGroups 1 -FaultDomainAwareness StorageScaleUnit -ColumnIsolation PhysicalDisk -MediaType SSD

New-Volume -StoragePoolFriendlyName VMPool -FriendlyName PYHYV01 -StorageTierFriendlyNames NestedMirror,NestedParity -StorageTierSizes 80GB,150GB

New-Volume -StoragePoolFriendlyName VMPool -FriendlyName PYHYV02 -StorageTierFriendlyNames NestedMirror,NestedParity -StorageTierSizes 80GB,150GB
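
To check that the tiers and volumes were created as expected (number of data copies, footprint on the pool), something like the following can help; this is just a verification sketch run on one of the nodes:

Get-StorageTier | Select-Object FriendlyName, ResiliencySettingName, NumberOfDataCopies
Get-VirtualDisk | Select-Object FriendlyName, Size, FootprintOnPool, HealthStatus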

Conclusion

Some customers didn't want to deploy a 2-node S2D cluster in a branch office because of the lack of support for two failures. Thanks to nested resiliency, we can now support two failures in a 2-node cluster. However, be careful about the storage consumed by resiliency, and about the overall cluster performance if you enable deduplication.

Deploy a Windows Server 2019 RDS farm with HTML5 client

These days I'm trying Windows Server 2019 in depth. Today I chose to pay attention to Remote Desktop Services. The goal of my lab is to deploy an RDS farm with all components and with the new HTML5 Remote Desktop client. Even though I'm running my lab on Windows Server 2019, you can also deploy the HTML5 client on Windows Server 2016. In this topic, I share the steps I followed to deploy the Windows Server 2019 RDS farm.

Requirements

For this lab, I have deployed four virtual machines running Windows Server 2019:

  • RDS-APP-01: RD Session Host that hosts the RemoteApp collection
  • RDS-DKP-01: RD Session Host that hosts the Remote Desktop collection
  • RDS-BRK-01: hosts the RD Broker and RD Licensing roles
  • RDS-WEB-01: hosts the RD Web Access and RD Gateway roles

Then I have a public certificate for the RD Web Access and RD Gateway roles:

I also have a private certificate for RD Broker publishing and RD Broker connection. To create this certificate, I duplicated the Workstation Authentication ADCS template as described in this topic.

I have exported both certificates in PFX format (with the private key) and in CER format (public certificate only).
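
In case it helps, this is roughly how both formats can be exported from the local certificate store; the thumbprint and file paths below are placeholders.

# Replace <thumbprint> with the thumbprint of your certificate
$Cert = Get-ChildItem Cert:\LocalMachine\My\<thumbprint>
$Password = Read-Host -AsSecureString
Export-PfxCertificate -Cert $Cert -FilePath C:\temp\RDS\Broker.pfx -Password $Password
Export-Certificate -Cert $Cert -FilePath C:\temp\RDS\Broker.cer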

Finally, I have two DNS zones:

  • SeromIT.local: Active Directory forest zone
  • SeromIT.com: split zone, hosted by the local domain controllers and by a public provider. I use this zone to connect from the Internet. In this zone I created two records (see the sketch after this list):
    • Apps.SeromIT.com: pointing to RDS-WEB-01 (CNAME)
    • RDS-GW.SeromIT.com: pointing to RDS-BRK-01 (CNAME) for the gateway
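
The sketch below shows how these CNAME records can be created on the domain controllers with the DnsServer module; the alias targets mirror the list above and the internal FQDNs are assumptions, while the public provider side is managed in its own portal.

Add-DnsServerResourceRecordCName -ZoneName "SeromIT.com" -Name "Apps" -HostNameAlias "rds-web-01.seromit.local"
Add-DnsServerResourceRecordCName -ZoneName "SeromIT.com" -Name "RDS-GW" -HostNameAlias "rds-brk-01.seromit.local"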

RDS farm deployment

To deploy the RDS farm, I use only PowerShell. This way, I can reproduce the deployment for other customers. First of all, I run a Remote Desktop deployment to configure an RD Web Access, an RD Broker and an RD Session Host server:


New-RDSessionDeployment -ConnectionBroker RDS-BRK-01.SeromIT.local `
                        -SessionHost RDS-DKP-01.SeromIT.local `
                        -WebAccessServer RDS-WEB-01.SeromIT.local

Then I run PowerShell cmdlets to add another RD Session Host server, the RD Licensing role and the RD Gateway role.


Add-RDServer -Server RDS-APP-01.SeromIT.local `
             -Role RDS-RD-SERVER `
             -ConnectionBroker RDS-BRK-01.SeromIT.local

Add-RDServer -Server RDS-BRK-01.SeromIT.local `
             -Role RDS-Licensing `
             -ConnectionBroker RDS-BRK-01.SeromIT.local

Add-RDServer -Server RDS-WEB-01.SeromIT.local `
             -Role RDS-Gateway `
             -ConnectionBroker RDS-BRK-01.SeromIT.local `
             -GatewayExternalFqdn RDS-GW.SeromIT.com

Once these commands have run, the role deployment is finished:
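
A quick way to verify which role is deployed on which server is to query the deployment from the broker; a small verification sketch:

Get-RDServer -ConnectionBroker RDS-BRK-01.SeromIT.local | Select-Object Server, Roles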

Now we can configure the certificates.

Certificate configuration

To configure each certificate, I use PowerShell again. Remember, I have stored both certificates in PFX format in C:\temp\RDS on my broker server.

$Password = Read-Host -AsSecureString
Set-RDCertificate -Role RDGateway `
                  -ImportPath C:\temp\RDS\wildcard_SeromIT_com.pfx `
                  -Password $Password `
                  -ConnectionBroker RDS-BRK-01.SeromIT.local `
                  -Force

Set-RDCertificate -Role RDWebAccess `
                  -ImportPath C:\temp\RDS\wildcard_SeromIT_com.pfx `
                  -Password $Password `
                  -ConnectionBroker RDS-BRK-01.SeromIT.local `
                  -Force

Set-RDCertificate -Role RDPublishing `
                  -ImportPath C:\temp\RDS\Broker.pfx `
                  -Password $Password `
                  -ConnectionBroker RDS-BRK-01.SeromIT.local `
                  -Force

Set-RDCertificate -Role RDRedirector `
                  -ImportPath C:\temp\RDS\Broker.pfx `
                  -Password $Password `
                  -ConnectionBroker RDS-BRK-01.SeromIT.local `
                  -Force

Once these commands are executed, the certificates are installed for each role:
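
You can also verify the certificate assigned to each role with PowerShell; a quick check sketch:

Get-RDCertificate -ConnectionBroker RDS-BRK-01.SeromIT.local | Select-Object Role, Level, Subject, ExpiresOn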

Collection creation

Now I create a collection to add resources to the RD Web Access portal:

New-RDSessionCollection -CollectionName Desktop `
                        -CollectionDescription "Desktop Publication" `
                        -SessionHost RDS-DKP-01.SeromIT.local `
                        -ConnectionBroker RDS-BRK-01.SeromIT.local

Then, from Server Manager, you can configure the settings of this collection:
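
The same settings can also be scripted. For example, a hypothetical configuration that restricts access to an AD group and limits idle and disconnected sessions (the group name is an example):

Set-RDSessionCollectionConfiguration -CollectionName Desktop `
                                     -UserGroup "SEROMIT\RDS-Users" `
                                     -IdleSessionLimitMin 60 `
                                     -DisconnectedSessionLimitMin 120 `
                                     -ConnectionBroker RDS-BRK-01.SeromIT.local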

Enable HTML 5 Remote Desktop client

In this lab, I don't want to use the legacy portal; I'd like to use the new HTML5 RD client. To enable this client, I connect to the server hosting the RD Web Access role and run the following cmdlet:

Install-Module -Name PowerShellGet -Force -Confirm:$False

After that, close and reopen the PowerShell window, then execute this command:

Install-Module -Name RDWebClientManagement -Confirm:$False

Then copy the RD Broker certificate (in CER format) to the RD Web Access server and run the following cmdlets:

Import-RDWebClientBrokerCert c:\temp\broker.cer

Install-RDWebClientPackage
Publish-RDWebClientPackage -Type Production -Latest
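
To check which client versions are installed and which one is published, the same RDWebClientManagement module provides a query cmdlet; a quick sketch:

Get-RDWebClientPackage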

Now you can connect to the RD Web client by using the following URL: https://<RD Web Access FQDN>/RDWeb/WebClient/Index.html. In my example, I connect to https://apps.SeromIT.com/RDWeb/WebClient/Index.html.

Conclusion

I like the RD Web client for several reasons. First, you can connect to an RDS session from any HTML5-capable web browser: you no longer need a compatible RD client, and you can connect from many devices such as a Mac, a Linux machine, or even a tablet or smartphone. Secondly, the HTML5 client doesn't require SSO settings as the legacy portal did, so the deployment is easier than before. Finally, I find this client more user-friendly than the legacy portal. The only thing missing is the ability to enable the HTML5 client with a single click or a single PowerShell cmdlet, or to have it enabled by default.
