Deploy a Windows Server 2019 RDS farm with HTML5 client

These days I’m trying Windows Server 2019 in depth. Today I chose to pay attention to Remote Desktop Services. The goal of my lab is to deploy an RDS farm with all components and with the new HTML5 Remote Desktop client. Even though I’m running my lab on Windows Server 2019, you can also deploy the HTML5 client on Windows Server 2016. In this topic, I share the steps I followed to deploy the Windows Server 2019 RDS farm.

Requirements

For this lab, I have deployed four virtual machines running Windows Server 2019:

  • RDS-APP-01: RD Host Server that hosts the RemoteApp collection
  • RDS-DKP-01: RD Host Server that hosts the Remote Desktop collection
  • RDS-BRK-01: Hosts RD Broker and RD Licensing
  • RDS-WEB-01: Hosts RD Web Access and RD Gateway

I have a public certificate for the RD Web Access and RD Gateway roles:

I also have a private certificate for RD Broker publishing and RD Broker connection. To create this certificate, I duplicated the Workstation Authentication ADCS template as described in this topic.

I have exported both certificates in PFX format (with the private key) and in CER format (just the public certificate).

Finally, I have two DNS zones:

  • SeromIT.local: Active Directory forest zone
  • SeromIT.com: split zone, hosted by the local domain controllers and by a public provider. I use this zone to connect from the Internet. In this zone I have created two records:
    • Apps.SeromIT.com: leading to RDS-WEB-01 (CNAME)
    • RDS-GW.SeromIT.com: leading to RDS-BRK-01 (CNAME) for the gateway

RDS farm deployment

To deploy the RDS farm, I use only PowerShell, so that I can reproduce the deployment for other customers. First of all, I run a Remote Desktop deployment to configure an RD Web Access, an RD Broker and an RD Host Server:


New-RDSessionDeployment -ConnectionBroker RDS-BRK-01.SeromIT.local `
                        -SessionHost RDS-DKP-01.SeromIT.local `
                        -WebAccessServer RDS-WEB-01.SeromIT.local

Then I run PowerShell cmdlets to add another RD Host Server, the RD Licensing role and the RD Gateway role.


Add-RDServer -Server RDS-APP-01.SeromIT.local `
             -Role RDS-RD-SERVER `
             -ConnectionBroker RDS-BRK-01.SeromIT.local

Add-RDServer -Server RDS-BRK-01.SeromIT.local `
             -Role RDS-Licensing `
             -ConnectionBroker RDS-BRK-01.SeromIT.local

Add-RDServer -Server RDS-WEB-01.SeromIT.local `
             -Role RDS-Gateway `
             -ConnectionBroker RDS-BRK-01.SeromIT.local `
             -GatewayExternalFqdn RDS-GW.SeromIT.com

Once these commands are run, the role deployment is finished:

Now we can configure the certificates.

Certificate configuration

To configure each certificate, I use PowerShell again. Remember, I have stored both certificates in PFX format in C:\temp\RDS on my broker server.

$Password = Read-Host -AsSecureString
Set-RDCertificate -Role RDGateway `
                  -ImportPath C:\temp\RDS\wildcard_SeromIT_com.pfx `
                  -Password $Password `
                  -ConnectionBroker RDS-BRK-01.SeromIT.local `
                  -Force

Set-RDCertificate -Role RDWebAccess `
                  -ImportPath C:\temp\RDS\wildcard_SeromIT_com.pfx `
                  -Password $Password `
                  -ConnectionBroker RDS-BRK-01.SeromIT.local `
                  -Force

Set-RDCertificate -Role RDPublishing `
                  -ImportPath C:\temp\RDS\Broker.pfx `
                  -Password $Password `
                  -ConnectionBroker RDS-BRK-01.SeromIT.local `
                  -Force

Set-RDCertificate -Role RDRedirector `
                  -ImportPath C:\temp\RDS\Broker.pfx `
                  -Password $Password `
                  -ConnectionBroker RDS-BRK-01.SeromIT.local `
                  -Force

Once these commands are executed, the certificates are installed for each role:

Collection creation

Now I create a collection to add resources inside the RD Web Access portal:

New-RDSessionCollection -CollectionName Desktop `
                        -CollectionDescription "Desktop Publication" `
                        -SessionHost RDS-DKP-01.SeromIT.local `
                        -ConnectionBroker RDS-BRK-01.SeromIT.local

Then, from Server Manager, you can configure the settings of this collection.
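
The same settings can also be changed from PowerShell with Set-RDSessionCollectionConfiguration. A minimal sketch, assuming a hypothetical user group and idle session limit:

Set-RDSessionCollectionConfiguration -CollectionName Desktop `
                                     -UserGroup "SeromIT\RDS-Users" `
                                     -IdleSessionLimitMin 60 `
                                     -ConnectionBroker RDS-BRK-01.SeromIT.local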

Enable the HTML5 Remote Desktop client

In this lab, I don’t want to use the legacy portal. I’d like to use the super cool new HTML5 RD client. To enable this client, I connect to the server hosting the RD Web Access role and run the following cmdlet:

Install-Module -Name PowerShellGet -Force -Confirm:$False

Afterward, close and reopen the PowerShell window. Then execute this command:

Install-Module -Name RDWebClientManagement -Confirm:$False

Then copy the RD Broker certificate in CER format to the RD Web Access server and run the following cmdlets:

Import-RDWebClientBrokerCert c:\temp\broker.cer

Install-RDWebClientPackage
Publish-RDWebClientPackage -Type Production -Latest

Now you can connect to the RD Web client by using the following URL: https://&lt;RD Web Access FQDN&gt;/RDWeb/WebClient/Index.html. In my example, I connect to https://apps.SeromIT.com/RDWeb/WebClient/Index.html.

Conclusion

I like the RD Web client for several reasons. First, you can connect to an RDS session from any HTML5-ready web browser. You no longer need a compatible RD client and you can connect from many devices such as a Mac, a Linux machine, a tablet or a smartphone. Secondly, the HTML5 client doesn’t require SSO settings like the legacy portal did, so the deployment is easier than before. Finally, I find this client more user-friendly than the legacy portal. The only thing missing is the ability to enable the HTML5 client with a single click or PowerShell cmdlet, or to enable it by default.

Deploy Windows Admin Center in HA through Kemp Load Balancer

Windows Admin Center (formerly Project Honolulu) was released in April 2018 by Microsoft. WAC is a web-based management tool that helps administer Windows Server and hyperconverged clusters. As part of my job, I primarily use Windows Admin Center for Storage Spaces Direct clusters and to manage Windows Server in Core edition, especially drivers. Since the release of Windows Admin Center, Microsoft provides the capability to deploy it in high availability. In this topic we’ll see how to deploy Windows Admin Center in this manner. Moreover, some customers want to connect to WAC through a load balancer such as Kemp to avoid private certificate management and to be able to connect from the Internet. So we’ll also see how to connect to WAC through a Kemp load balancer.

Requirements

To follow this topic, you need the following:

  • 2x virtual machines
    • I set 2vCPU, 4GB of memory, a dynamic OS disk of 60GB
    • I deployed Windows Server 2016 in Core edition
    • 1x Network Adapter for management
    • 1x Network Adapter for cluster
    • The VM must be joined to the Active Directory domain
  • 1x shared disk of 10GB for these two VMs. You can use traditional iSCSI, FC LUN or shared VHDX / VHD Set
  • 1x IP in management network for the cluster
  • 1x IP in management network for Windows Admin Center cluster resource
  • 1x Name for the cluster (in this example: Cluster-WAC01.SeromIT.local)
  • 1x Name for Windows Admin Center cluster resource (in this example: WAC.SeromIT.local)

You also need to download the latest Windows Admin Center build from this link and the script to deploy WAC in high availability from this link.

Deploy the cluster

First of all, we have to deploy features on both virtual machines. I install Failover Clustering and its PowerShell module with these cmdlets:

Install-WindowsFeature RSAT-Clustering-PowerShell, Failover-Clustering -ComputerName "Node1"
Install-WindowsFeature RSAT-Clustering-PowerShell, Failover-Clustering -ComputerName "Node2"

Then I initialize the shared disk. First, I list the disks connected to the VM: disk 0 is the operating system disk and disk 1 is the shared disk. Then I initialize the disk and create an NTFS volume:

Initialize-Disk -Number 1
New-Volume -DiskNumber 1 -FriendlyName Data -FileSystem NTFS

Once the volume is created, I run a cluster validation test to check whether the nodes are compliant to be part of a cluster. To execute this validation, I run the following cmdlet:

Test-Cluster -Node Node1,Node2

N.B: My test reports an issue related to software update levels: this is because one node does not have the latest Windows Defender signatures.

Once you have validated the report, you can create the cluster by running the following cmdlet. I specify the -NoStorage option so that the cluster does not take my shared disk for witness usage.

New-Cluster -Node Node1, Node2 -Name ClusterName -StaticAddress ClusterIPAddress -NoStorage

Once the cluster is created, I move the Cluster Name Object (CNO) to a specific OU. Then I grant this CNO the permission to create computer objects in this OU.
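
If you want to script the move as well, here is a sketch with the ActiveDirectory module (the target OU distinguished name is an example, not the one from my lab; the permission on the OU is still granted from dsa.msc):

# Move the cluster CNO to a dedicated OU (example DN, adjust to your domain)
Import-Module ActiveDirectory
Get-ADComputer -Identity "Cluster-WAC01" |
    Move-ADObject -TargetPath "OU=Clusters,DC=SeromIT,DC=local"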

Next I rename the cluster networks to Management and Cluster. The cluster network with the Cluster and Client role is renamed Management and the one with the Cluster-only role is called … Cluster.

(Get-Cluster -Name ClusterName | Get-ClusterNetwork -Name "Cluster Network 1").Name="Management"
(Get-Cluster -Name ClusterName | Get-ClusterNetwork -Name "Cluster Network 2").Name="Cluster"

Then I add a file share witness. For that, I have created a share called Cluster-WAC$ on my domain controller:

Get-Cluster -Name ClusterName | Set-ClusterQuorum -FileShareWitness "\\path\to\the\file\share\witness"

To finish, I add the Cluster Shared Volume (CSV):

Get-Disk -Number 1 | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
(Get-ClusterSharedVolume -Name "Cluster Disk 1").Name="Data"
Rename-Item C:\ClusterStorage\Volume1\ Data

As you can see in the failover clustering console, the file share witness is well configured.

The cluster networks are renamed to Management and Cluster.

The CSV is present in the cluster and it’s called Data.

(Optional) Get a certificate from the enterprise PKI

If you want to use your own enterprise PKI, you can follow these steps. Connect to an enterprise CA and manage the templates. Duplicate the Web Server template. In the Subject Name tab, choose Supply in the request. Also allow the private key to be exportable.

Then request a certificate from the MMC, from the web interface or from PowerShell (see the sketch after this list), and specify the following information:

  • Subject Name: Common Name as the Windows Admin Center cluster resource Name
  • Subject Alternative Name:
    • DNS: Windows Admin Center Cluster resource name
    • DNS: first node FQDN
    • DNS: second node FQDN
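
If you prefer to request the certificate from PowerShell, here is a sketch with the built-in PKI module (the template name WACWebServer is hypothetical; use the name of your duplicated template):

Get-Certificate -Template "WACWebServer" `
                -SubjectName "CN=WAC.SeromIT.local" `
                -DnsName WAC.SeromIT.local, Node1.SeromIT.local, Node2.SeromIT.local `
                -CertStoreLocation Cert:\LocalMachine\My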

Then export the certificate and its private key in a PFX file.
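
For example, a sketch of the export from PowerShell (replace the thumbprint placeholder with the thumbprint of the issued certificate):

$Password   = Read-Host -AsSecureString
$Thumbprint = "<thumbprint of the issued certificate>"
Get-Item "Cert:\LocalMachine\My\$Thumbprint" |
    Export-PfxCertificate -FilePath C:\temp\WAC.pfx -Password $Password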

Deploy Windows Admin Center

In a folder on a node of the cluster, you should have the following files: (WAC.pfx only if you have created your own certificate from the enterprise PKI)

Run the following cmdlets to deploy Windows Admin Center in the cluster:

$CertPassword = Read-Host -AsSecureString
.\Install-WindowsAdminCenterHA.ps1 -ClusterStorage c:\ClusterStorage\Data -ClientAccessPoint WACClusterResourceName -MSIPath c:\path\to\WAC\build.msi -CertPath c:\path\to\pfx\file.pfx -CertPassword $CertPassword -StaticAddress IPAddressForWAC

N.B: If you have no enterprise PKI, you can deploy the service by running the following cmdlet:

.\Install-WindowsAdminCenterHA.ps1 -ClusterStorage c:\ClusterStorage\Data -ClientAccessPoint WACClusterResourceName -MSIPath c:\path\to\WAC\build.msi -StaticAddress IPAddressForWAC -GenerateSSLCert

After some time, the service is deployed in the failover cluster and Windows Admin Center is now highly available.

If you specify the name of the WAC cluster resource as below, you can connect to Windows Admin Center.

Configure Kemp Load Balancer

First of all, I create a rule to redirect the traffic to the right service. Because this is a reverse proxy, a single IP address is used for several web services. In this configuration I use the web service URL to redirect traffic to the right web server. To make it work, a rule like the following must be created.

Then I create a Sub Virtual Service in my reverse proxy virtual service. I name it Windows Admin Center and I specify the name of the WAC cluster resource.

Then I map the rule I have previously created with the Windows Admin Center Sub Virtual Service:

To finish, verify that the SSL Acceleration is activated with the right public certificate as below:

Then I connect to Windows Admin Center through the Kemp Load Balancer. As you can see, the certificate is validated without any warning and I can access WAC. Thanks to these settings, you can access WAC through the Internet.

Real Case: Implement Storage Replica between two S2D clusters

This week, as part of my job, I deployed Storage Replica between two S2D clusters. I’d like to share with you the steps I followed to implement storage replication between two S2D hyperconverged clusters. Storage Replica enables you to replicate volumes at the block level. For my customer, Storage Replica is part of a Disaster Recovery Plan in case the first room goes down.

Architecture overview

The customer has two rooms. In each room, a four-node S2D cluster has been deployed. Each node has a Mellanox ConnectX-3 Pro (dual 10Gb ports) and an Intel network adapter for VMs. Currently the Mellanox network adapter is used for SMB traffic such as S2D and Live Migration. This network adapter supports RDMA, and Storage Replica can leverage SMB Direct (RDMA). So the goal is to also use the Mellanox adapters for Storage Replica.

In each room, two Dell S4048S switches are deployed in VLT. The switches in both rooms are connected by two fiber optic links of around 5 km. The latency is less than 5 ms, so we can implement synchronous replication. The Storage Replica traffic must use the fiber optic links. Currently the storage traffic is in a VLAN (ID: 247); we will use the same VLAN for Storage Replica.

Each S2D cluster has several Cluster Shared Volumes (CSV). Among all these CSVs, two CSVs will be replicated in each S2D cluster. Below you can find the names of the volumes that will be replicated:

  • (S2D Cluster Room 1) PERF-AREP-01 -> (S2D Cluster Room 2) PERF-PREP-01
  • (S2D Cluster Room 1) PERF-AREP-02 -> (S2D Cluster Room 2) PERF-PREP-02
  • (S2D Cluster Room 2) PERF-AREP-03 -> (S2D Cluster Room 1) PERF-PREP-03
  • (S2D Cluster Room 2) PERF-AREP-04 -> (S2D Cluster Room 1) PERF-PREP-04

For the replication to work, each volume pair (source and destination) must be strictly identical (same capacity, same resiliency, same file system, etc.). I will create one log volume per replicated volume, so I’m going to deploy four log volumes per S2D cluster.

Create log volumes

First of all, I create the log volumes by using the following cmdlet. The log volumes must not be converted to Cluster Shared Volumes and a drive letter must be assigned:

New-Volume -StoragePoolFriendlyName "<storage pool name>" `
           -FriendlyName "<volume name>" `
           -FileSystem ReFS `
           -DriveLetter "<drive letter>" `
           –Size <capacity> 

As you can see in the following screenshots, I created four log volumes per cluster. The volumes are not CSV.

In the following screenshot, you can see that for each volume, there is a log volume.

Grant Storage Replica Access

You must grant security access between both clusters to implement Storage Replica. To grant the access, run the following cmdlets:

Grant-SRAccess -ComputerName "<Node cluster 1>" -Cluster "<Cluster 2>"
Grant-SRAccess -ComputerName "<Node cluster 2>" -Cluster "<Cluster 1>"

Test Storage Replica Topology

/!\ I didn’t succeed in running the Storage Replica topology test. It seems there is a known issue with this cmdlet.

N.B: To run this test, you must move the CSV to the node that hosts the core cluster resources. In the example below, I moved the CSV to replicate onto HyperV-02.
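
A sketch of that move from PowerShell (note that on an S2D cluster the CSV resource name may appear as "Cluster Virtual Disk (PERF-AREP-01)" rather than the friendly name):

Move-ClusterSharedVolume -Name "PERF-AREP-01" -Node HyperV-02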


To run the test, you have to run the following cmdlet:

Test-SRTopology -SourceComputerName "<Cluster room 1>" `
                -SourceVolumeName "c:\clusterstorage\PERF-AREP-01\" `
                -SourceLogVolumeName "R:" `
                -DestinationComputerName "<Cluster room 2>" `
                -DestinationVolumeName "c:\ClusterStorage\Perf-PREP-01\" `
                -DestinationLogVolumeName "R:" `
                -DurationInMinutes 10 `
                -ResultPath "C:\temp" 

As you can see in the below screenshot, the test is not successful because of a path issue. Even though the test didn’t work, I was able to enable Storage Replica between the clusters. So if you have the same issue, try to enable the replication anyway (check the next section).

Enable the replication between two volumes

To enable the replication between the volumes, you can run the following cmdlets. With these cmdlets, I created the four replications.

New-SRPartnership -SourceComputerName "<Cluster room 1>" `
                  -SourceRGName REP01 `
                  -SourceVolumeName c:\ClusterStorage\PERF-AREP-01 `
                  -SourceLogVolumeName R: `
                  -DestinationComputerName "<Cluster Room 2>" `
                  -DestinationRGName REP01 `
                  -DestinationVolumeName c:\ClusterStorage\PERF-PREP-01 `
                  -DestinationLogVolumeName R:

New-SRPartnership -SourceComputerName "<Cluster room 1>" `
                  -SourceRGName REP02 `
                  -SourceVolumeName c:\ClusterStorage\PERF-AREP-02 `
                  -SourceLogVolumeName S: `
                  -DestinationComputerName "<Cluster Room 2>" `
                  -DestinationRGName REP02 `
                  -DestinationVolumeName c:\ClusterStorage\PERF-PREP-02 `
                  -DestinationLogVolumeName S:

New-SRPartnership -SourceComputerName "<Cluster Room 2>" `
                  -SourceRGName REP03 `
                  -SourceVolumeName c:\ClusterStorage\PERF-AREP-03 `
                  -SourceLogVolumeName T: `
                  -DestinationComputerName "<Cluster room 1>" `
                  -DestinationRGName REP03 `
                  -DestinationVolumeName c:\ClusterStorage\PERF-PREP-03 `
                  -DestinationLogVolumeName T:

New-SRPartnership -SourceComputerName "<Cluster Room 2>" `
                  -SourceRGName REP04 `
                  -SourceVolumeName c:\ClusterStorage\PERF-AREP-04 `
                  -SourceLogVolumeName U: `
                  -DestinationComputerName "<Cluster room 1>" `
                  -DestinationRGName REP04 `
                  -DestinationVolumeName c:\ClusterStorage\PERF-PREP-04 `
                  -DestinationLogVolumeName U: 

Now that replication is enabled, if you open the Failover Cluster Manager, you can see that some volumes are sources or destinations. A new tab called Replication is added and you can check the replication status. The destination volume is no longer accessible until you reverse the replication direction.

Once the initial synchronization is finished, the replication status is Continuously replicating.
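
You can also check the replication status from PowerShell; a short sketch for the first replication group:

# Show the state of the replicas in the REP01 replication group
(Get-SRGroup -Name "REP01").Replicas | Select-Object DataVolume, ReplicationMode, ReplicationStatus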

Network adapters used by Storage Replica

In the overview section, I said that I want to use the Mellanox network adapters for Storage Replica (for RDMA). So I check that Storage Replica is using the Mellanox network adapters.
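
A hedged sketch of how this can be checked (and, if needed, constrained) from PowerShell; the interface index values below are hypothetical:

# List the current network constraints for the REP01 replication group
Get-SRNetworkConstraint -SourceComputerName "<Cluster room 1>" -SourceRGName REP01 `
                        -DestinationComputerName "<Cluster Room 2>" -DestinationRGName REP01

# SMB Multichannel shows which interfaces carry the (SMB-based) replication traffic
Get-SmbMultichannelConnection

# Optionally restrict the replication group to the Mellanox (RDMA) interfaces only
Set-SRNetworkConstraint -SourceComputerName "<Cluster room 1>" -SourceRGName REP01 -SourceNWInterface 5 `
                        -DestinationComputerName "<Cluster Room 2>" -DestinationRGName REP01 -DestinationNWInterface 5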

Reverse the Storage Replica direction

To reverse the replication direction, you can use the following cmdlet:

Set-SRPartnership -NewSourceComputerName "<Cluster room 2>" `
                  -SourceRGName REP01 `
                  -DestinationComputerName "<Cluster room 1>" `
                  -DestinationRGName REP01   

Conclusion

Storage Replica enables you to replicate a volume to another volume at the block level. In this case, I have two S2D clusters and each cluster hosts two source volumes and two destination volumes. Storage Replica helps the customer implement a Disaster Recovery Plan.

Storage Spaces Direct dashboard

Today I release a special Christmas gift for you. For some time, I have been writing a PowerShell script to generate a Storage Spaces Direct dashboard. This dashboard enables you to validate each important setting of an S2D cluster.

I decided to write this PowerShell script to avoid running hundreds of PowerShell cmdlets and checking the returned values manually. With this dashboard, you get almost all the information you need.

Where can I download the script

The script is available on GitHub. You can download the documentation and the script from this link. Please read the documentation before running the script.

Storage Spaces Direct dashboard

The below screenshot shows a dashboard example. This dashboard has been generated from my 2-node lab cluster.

Roadmap

I plan to improve the script next year by adding support for the disaggregated S2D deployment model and information such as the cache/capacity ratio and the reserved space.

Special thanks

I’d like to thank Dave Kawula, Charbel Nemnom, Kristopher Turner and Ben Thomas for helping me resolve most of the issues by running the script on their S2D infrastructures.

Next gen Microsoft management tool: Honolulu

Since the beginning of the year, Microsoft has been working on a new management tool based on modern web technologies such as HTML5, Angular and so on. This tool is called Honolulu. Honolulu is a user-friendly web interface that enables you to manage Windows Server, failover clusters and hyperconverged clusters. Currently, to manage a hyperconverged cluster, Honolulu requires a Windows Server Semi-Annual Channel release.

Honolulu is currently a public preview release, which means that the product is under construction :). Honolulu is built in a modular way where you can add or remove extensions. Each management feature is included in an extension that you can add or remove. Microsoft expects vendors to develop third-party extensions later. To be honest with you, this is the set of tools I have been waiting for, for a while. Microsoft was behind on management tools compared to other companies such as VMware. I hope that Honolulu will close the gap with VMware vCenter and Nutanix Prism.

Microsoft listens to customers and feedback to improve this product. You can download the product here and report feedback in this UserVoice.

In this topic, we will see an overview of Honolulu. I’ll dedicate a separate topic to Honolulu and the Microsoft hyperconverged solution because Honolulu requires the Windows Server 2016 RS3 release (in the Semi-Annual Channel) to work with it, and I have not yet upgraded my lab.

Getting started with Honolulu

In the below screenshot, you can see the Honolulu home page. You get all your connections (and their type) and you can add more of them.

By clicking on the arrow next to Project Honolulu, you can filter the connection type between Server Manager, Failover Cluster Manager and Hyper-Converged Cluster Manager.

By clicking on the gear (top right), you can access the extension manager and see the installed extensions. For example, there are extensions for firewall management, Hyper-V, failover clustering and so on. You can remove the extensions you don’t want.

Server Manager

As you have seen before, you can manage a single server from Honolulu. I will not show you all the management tools, just an overview of Honolulu. By adding and connecting to a server, you get the following dashboard. In this dashboard you can see real-time metrics (CPU, memory and network) and system information, and you can restart or shut down the system or edit RDP access and environment variables. For the moment you can’t resize columns and tables; I think Microsoft will add this feature in the near future.

An interesting module is Events. In this pane, you get the same thing as the good old Event Viewer: you can retrieve all the events of your system and filter them. Maybe a checkbox enabling real-time events could be interesting :).

The Devices pane is also available. In a single view, you have all the hardware installed in the system. If Microsoft adds the ability to install drivers from there, Honolulu can replace DevCon for Core servers.

You can also browse the system files and manage files and folders.

Another pane enables you to manage the network adapters, as you can see below. For the moment this pane is limited because it doesn’t allow you to manage advanced features such as RDMA, RSS, VMMQ and so on.

You can also add or remove roles and features from Honolulu. It is really cool that you can manage this from a web service.

If you use Hyper-V, you can manage VMs from Honolulu. The dashboard is really nice as well because there are counters about VMs and the latest events.

Another super cool feature is the ability to manage updates from Honolulu. I hope Microsoft will add WSUS configuration from this pane with some scheduling.

Failover Cluster management

Honolulu also enables you to manage failover clusters. You can add a failover cluster connection from the Honolulu home page. Just click on Add.

Then specify the cluster name. Honolulu asks if you also want to add the servers that are members of the cluster.

Once it is added, you can select it and you get this dashboard. You get the cluster core resource states and some information about the cluster such as the number of roles, networks and disks.

By clicking on disks, you can get a list of Cluster Shared Volumes in the cluster and information about them.

If your cluster hosts Hyper-V VMs (not in a hyperconverged way), you can manage the VMs from there. You get the same pane as in the Honolulu server manager. The VMs and related metrics are shown and you can create or delete virtual machines. A limited set of options is currently available.

You can also see the vSwitches deployed in each node. It’s a pity that Switch Embedded Teaming is not yet supported, but I think the support will be added later.

Hyperconverged cluster management

As I said earlier, hyperconverged clusters are supported but only on the Windows Server Semi-Annual Channel (for the moment). I’ll dedicate a topic to Honolulu and hyperconverged clusters once I upgrade my lab.

Update Honolulu

When a Honolulu update is released, you are notified by an Update Available mention. Currently, the update process is not really user-friendly because when you click on Update Available, an executable is downloaded and you have to run the Honolulu installation again (specify the installation path, certificate thumbprint, etc.). I hope that in the future the update process will be self-updating.

When I downloaded the executable, I checked the package size and it is amazing: only 31 MB.

Conclusion

Finally, they did it! A true modern management tool. I have been trying this tool for Microsoft for three months and I can tell you that the developers work really quickly and do a great job. Features are added quickly and Microsoft listens to customers. I recommend you post the features you want in the UserVoice. The tool is currently not perfect and some features are missing, but Honolulu is still a preview release! Microsoft is heading in the right direction with Honolulu and I hope this tool will be massively used. I also hope that Honolulu will help drive more Windows Server installations in Core edition, especially for Hyper-V and storage servers.

Deploy Veeam Cloud Connect for large environments in Microsoft Azure

Veeam Cloud Connect is a solution to store backups and archives in a second datacenter such as Microsoft Azure. Thanks to this technology, we can easily follow the 3-2-1 backup rule (3 backups; 2 different media; 1 offsite). Last time I talked about Veeam Cloud Connect, I deployed all Veeam roles within a single VM. This time I’m going to deploy Veeam Cloud Connect in Microsoft Azure with the roles allocated across different Azure VMs. Moreover, some roles such as the Veeam Cloud Gateway will be deployed in a high availability setup.

Before I begin, I’d like to thank Pierre-Francois Guglielmi, Veeam Alliances System Engineer (@pfguglielmi), for his time. Thank you for your review, your English corrections and your help.

What is Veeam Cloud Connect

Veeam Cloud Connect provides an easy way to copy your backups to an offsite location that can be based on a public cloud (such as Microsoft Azure) or used for archival purposes. Instead of investing money in another datacenter to store backup copies, you can choose to leverage Veeam Cloud Connect (VCC) to send these backup copies to Microsoft Azure. VCC exists in the form of two templates that you can find in the Microsoft Azure Marketplace:

  • Veeam Cloud Connect for Service Providers
  • Veeam Cloud Connect for the Enterprise

The first one is for service providers with several customers who want to deliver Backup-as-a-Service offerings using the Veeam Cloud Connect technology. This provider can deploy the solution in a public cloud and deliver the service to clients. The second version is dedicated to companies willing to build similar Backup-as-a-Service offerings internally, leveraging the public cloud to send backup copies offsite. For this topic, I’ll work with Veeam Cloud Connect for the Enterprise, but the technology is the same.

Veeam Cloud Connect is a Veeam Backup & Replication server with Cloud Connect features unlocked by a specific license file. When deploying this kind of solution, you have the following roles:

  • Microsoft Active Directory Domain Controller (optional)
  • Veeam Cloud Connect server
  • Veeam Cloud Gateway
  • Veeam backup repositories
  • Veeam WAN Accelerator (optional)

Microsoft Active Directory Domain Controller

A domain controller is not a mandatory role for the Veeam Cloud Connect infrastructure, but it can make server and credential management easier. If you plan to establish a site-to-site VPN from your on-premises environment to Microsoft Azure, you can deploy domain controllers within Azure, in the same forest as the existing domain controllers, and add all Azure VMs to a domain. In this way, you can use your existing credentials to manage servers, apply existing GPOs and create specific service accounts for Veeam managed by Active Directory. It is up to you: if you don’t deploy a domain controller within Azure, you can still deploy the VCC infrastructure, but then you’ll have to manage servers one by one.

Veeam Cloud Connect server

Veeam Cloud Connect server is a Veeam Backup & Replication server with Cloud Connect features. This is the central point to manage and deploy Veeam Cloud Connect infrastructure components. From this component, you can deploy Veeam Cloud Gateways, WAN Accelerators and backup repositories, and manage backup copies.

Veeam Cloud Gateway

The Veeam Cloud Gateway component is the entry point of your Veeam Cloud Connect infrastructure. When you choose to send a backup copy to this infrastructure, you specify the public IP or DNS name of the Veeam Cloud Gateway server(s). This service is based on Azure VM(s) running Windows Server with a public IP address to allow secure inbound and outbound connections to the on-premises environment. If you choose to deploy several Veeam Cloud Gateway servers for high availability, you have two ways to provide a single entry point:

  • A round-robin record at your public DNS registrar: one DNS name with A records bound to the Veeam Cloud Gateways’ public IP addresses.
  • A Traffic Manager in front of all Veeam Cloud Gateway servers

Because Veeam Cloud Gateway has its own load balancing mechanism, you can’t deploy an Azure Load Balancer, an F5 appliance or other kinds of load balancers in front of Veeam Cloud Gateways.

Veeam Backup repositories

This is the storage system that stores backups. It can be a single Windows Server with a single disk or a storage space. Don’t forget that in Azure, the maximum size of a single data disk is 4TB (as of June 2017). You can also leverage the Scale-Out Backup Repository functionality, where several backup repositories are managed by Veeam as a single logical repository. Finally, and this is the scenario I’m going to present later in this topic, you can store backups on a Scale-Out File Server based on a Storage Spaces Direct cluster. This solution provides SMB 3.1.1 access to the storage.

Veeam WAN Accelerator

Veeam WAN Accelerator is the same component already available in Veeam Backup & Replication. This service optimizes the traffic between source and destination by sending only new unique blocks not already known at the destination. To leverage this feature, you need a pair of WAN Accelerator servers. The source WAN Accelerator creates digests for data blocks, and the target synchronizes these digests and populates a global cache. During the next transfer, the source WAN Accelerator compares the digests of the blocks in the new incremental backup file with the already known digests. If nothing has changed, the block is not copied over the network and the data is taken from the global cache on the target, or from the target backup repositories, which in such a case act as an infinite cache.

Architecture Overview

For this topic, I decided to separate the roles on different Azure VMs. I’ll have five kinds of Azure VMs:

  • Domain Controllers
  • Veeam Cloud Gateways
  • Veeam Cloud Connect
  • Veeam WAN Accelerator
  • File Servers (Storage Spaces Direct)

First, I deploy two domain controllers to ease management. This is completely optional. Both domain controllers are members of an Azure Availability Set.

The Veeam Cloud Gateway servers are located behind a Traffic Manager profile. Each Veeam Cloud Gateway has its own public IP address. The Traffic Manager profile distributes the traffic across the public IP addresses of the Veeam Cloud Gateway servers. The JSON template provided below allows you to deploy from 1 to 9 Cloud Gateway servers depending on your needs. All Veeam Cloud Gateways are added to an Availability Set to support a 99.95% SLA.

Then I deploy two Veeam Cloud Connect VMs: one active and one passive. I add both Azure VMs to an Availability Set. If the first VM crashes, the backup configuration is restored to the second VM.

The WAN Accelerator is not in an Availability Set because you can bind only one WAN Accelerator per tenant. You can deploy as many WAN Accelerators as required.

Finally, the backup repository is based on Storage Spaces Direct. I deploy 4 Azure VMs to leverage parity. I chose parity because my S2D managed disks are based on SSD (premium disks). If you want more performance or if you choose standard disks, I recommend mirroring instead of parity. You can use a single VM to store backups to save money, but for this demonstration I’d like to use Storage Spaces Direct just to show that it is possible. However, there is one limitation with S2D in Azure: for better performance, managed disks are recommended, and an Availability Set with Azure VMs using managed disks supports only three fault domains. That means that in a four-node S2D cluster, two nodes will be in the same fault domain, so there is a chance that two nodes fail simultaneously. But dual parity (or 3-way mirroring) supports two fault domain failures.

Azure resources: Github

I have published in my GitHub repository a JSON template to deploy the infrastructure described above. You can use this template to deploy the infrastructure for your lab or production environment. In this example, I won’t explain how to deploy the Azure resources because this template does it automatically.

Active Directory

Active Directory is not mandatory for this kind of solution. I have deployed domain controllers to make the management of servers and credentials easier. To configure the domain controllers, I started the Azure VMs where the domain controller roles will be deployed. On the first VM, I run the following PowerShell cmdlets to deploy the forest:

# Initialize the Data disk
Initialize-Disk -Number 2

#Create a volume on disk
New-Volume -DiskNumber 2 -FriendlyName Data -FileSystem NTFS -DriveLetter E

#Install DNS and ADDS features
Install-windowsfeature -name AD-Domain-Services, DNS -IncludeManagementTools

# Forest deployment
Import-Module ADDSDeployment
Install-ADDSForest -CreateDnsDelegation:$false `
                   -DatabasePath "E:\NTDS" `
                   -DomainMode "WinThreshold" `
                   -DomainName "VeeamCloudConnect.net" `
                   -DomainNetbiosName "HOMECLOUD" `
                   -ForestMode "WinThreshold" `
                   -InstallDns:$true `
                   -LogPath "E:\NTDS" `
                   -NoRebootOnCompletion:$false `
                   -SysvolPath "E:\SYSVOL" `
                   -Force:$true

Then I run these cmdlets for additional domain controllers:

# Initialize data disk
Initialize-Disk -Number 2

# Create a volume on disk
New-Volume -DiskNumber 2 -FriendlyName Data -FileSystem NTFS -DriveLetter E

# Install DNS and ADDS features
Install-windowsfeature -name AD-Domain-Services, DNS -IncludeManagementTools

# Add domain controller to forest
Import-Module ADDSDeployment
Install-ADDSDomainController -NoGlobalCatalog:$false `
                             -CreateDnsDelegation:$false `
                             -Credential (Get-Credential) `
                             -CriticalReplicationOnly:$false `
                             -DatabasePath "E:\NTDS" `
                             -DomainName "VeeamCloudConnect.net" `
                             -InstallDns:$true `
                             -LogPath "E:\NTDS" `
                             -NoRebootOnCompletion:$false `
                             -SiteName "Default-First-Site-Name" `
                             -SysvolPath "E:\SYSVOL" `
                             -Force:$true

Once the Active Directory is ready, I add each Azure VM to the domain by using the following cmdlet:

Add-Computer -Credential homecloud\administrator -DomainName VeeamCloudConnect.net -Restart

Configure Storage Spaces Direct

I have written several topics on Tech-Coffee about Storage Spaces Direct. You can find, for example, this topic or this one. These topics give more details about Storage Spaces Direct if you need more information.

To configure Storage Spaces Direct in Azure, I started all the file server VMs. Then in each VM I ran the following cmdlets:

# Rename the vNIC connected to the internal subnet to Management
rename-netadapter -Name "Ethernet 3" -NewName Management

# Rename the vNIC connected to the cluster subnet to Cluster
rename-netadapter -Name "Ethernet 2" -NewName Cluster

# Disable DNS registration for cluster vNIC
Set-DNSClient -InterfaceAlias *Cluster* -RegisterThisConnectionsAddress $False

# Install required features
Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools -Restart

Once you have run these commands on each server, you can deploy the cluster:

# Validate cluster prerequisites
Test-Cluster -Node AZFLS00, AZFLS01, AZFLS02, AZFLS03 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"

#Create the cluster
New-Cluster -Node AZFLS00, AZFLS01, AZFLS02, AZFLS03 -Name Cluster-BCK01 -StaticAddress 10.11.0.160

# Set the cluster quorum to Cloud Witness (choose another Azure location)
Set-ClusterQuorum -CloudWitness -AccountName StorageAccount -AccessKey "AccessKey"

# Change the CSV cache to 1024MB per CSV
(Get-Cluster).BlockCacheSize=1024

# Rename network in the cluster
(Get-ClusterNetwork "Cluster Network 1").Name="Management"
(Get-ClusterNetwork "Cluster Network 2").Name="Cluster"

# Enable Storage Spaces Direct
Enable-ClusterS2D -Confirm:$False

# Create a volume and rename the folder Volume1 to Backup
New-Volume -StoragePoolFriendlyName "*Cluster-BCK01*" -FriendlyName Backup -FileSystem CSVFS_ReFS -ResiliencySettingName parity -PhysicalDiskRedundancy 2 -Size 100GB
Rename-Item C:\ClusterStorage\Volume1 Backup
new-item -type directory C:\ClusterStorage\Backup\HomeCloud

Then open the Active Directory console (dsa.msc) and edit the permissions of the OU where the Cluster Name Object is located. Grant the CNO (in this example Cluster-BCK01) the permission to create computer objects in the OU.

Next, run the following cmdlets to complete the file server’s configuration:

# Add Scale-Out File Server to cluster
Add-ClusterScaleOutFileServerRole -Name BackupEndpoint

# Create a share
New-SmbShare -Name 'HomeCloud' -Path C:\ClusterStorage\Backup\HomeCloud -FullAccess everyone

First start of the Veeam Cloud Connect VM

The first time you connect to the Veeam Cloud Connect VM, you should see the following screen. Just specify the license file for Veeam Cloud Connect and click Next. The next screen shows the requirements to run a Veeam Cloud Connect infrastructure.

Deploy Veeam Cloud Gateway

The first component I deploy is the Veeam Cloud Gateway. In the Veeam Backup & Replication console (in the Veeam Cloud Connect VM), navigate to Cloud Connect. Then select Add Gateway.

In the first screen, just click on Add New…

Then specify the name of the first gateway and provide a description.

In the next screen, enter credentials that have administrative permissions in the Veeam Cloud Gateway VM. For that, I created an account in Active Directory and I added it to local administrators of the VM.

Then Veeam tells you that it has to deploy a component on the target host. Just click Apply.

The following screen shows a successful deployment:

Next you have a summary of the operations applied to the target server and what has been installed.

Now you are back to the first screen. This time select the host you just added. You can change the external port. For this test I kept the default value.

Then choose “This server is located behind NAT” and specify the public IP address of the machine. You can find this information in the Azure Portal on the Azure VM blade. Here again I left the default internal port.

This time, Veeam tells you that it has to install Cloud Gateway components.

The following screenshot shows a successful deployment:

Repeat these steps for each Cloud Gateway. In this example, I have two Cloud Gateways:

To complete the Cloud Gateway configuration, open up the Azure Portal and edit the Traffic Manager profile. Add an endpoint for each Cloud Gateway you deployed and select the right public IP address. (Sorry, I didn’t find how to loop the creation of endpoints in the JSON template.)
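
If you prefer to script this step, here is a sketch with the AzureRM PowerShell module (the resource group, profile and public IP names are hypothetical):

$rg        = "VCC-RG"
$tmProfile = "VCC-TrafficManager"

# One Traffic Manager endpoint per Cloud Gateway public IP address
1..2 | ForEach-Object {
    $pip = Get-AzureRmPublicIpAddress -ResourceGroupName $rg -Name "PIP-CloudGW0$_"
    New-AzureRmTrafficManagerEndpoint -Name "CloudGW0$_" `
                                      -ProfileName $tmProfile `
                                      -ResourceGroupName $rg `
                                      -Type AzureEndpoints `
                                      -TargetResourceId $pip.Id `
                                      -EndpointStatus Enabled `
                                      -Weight 1
}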

Because I have two Cloud Gateways, I have two Traffic Manager endpoints with the same weight.

Add the backup repository

In this step, we add the backup repository. Open the Veeam Backup & Replication console (in Veeam Cloud Connect VM) and navigate to Backup Infrastructure. Then select Add Repository.

Enter a name and a description for your backup repository.

Next select Shared folder, because Storage Spaces Direct with SOFS is based on … a shared folder.

Then specify the UNC path to the share that you previously created (see the Storage Spaces Direct section) and provide credentials with the required privileges.

In the next screen you can limit the maximum number of concurrent tasks, the data rates and set some advanced parameters.

Then I choose not to enable vPower NFS because it’s only used in VMware vSphere environments.

The following steps are not mandatory; I just clean up the default configuration. First I remove the default tenant.

Then I change the Configuration Backup task’s repository to the one created previously. For that I navigate to Configuration Backup:

Then I specify that I want to store the configuration backups on my S2D cluster. It is highly recommended to encrypt the configuration backup so that credentials are saved in it.

Finally, I remove the default backup repository.

Deploy Veeam WAN Accelerator (Optional)

To add a Veeam WAN Accelerator, navigate to Backup Infrastructure and select Add WAN Accelerator.

In the next screen, click Add New…

Specify the FQDN of the target host and type in a description.

Then select credentials with administrative permissions on the target host.

In the next screen, Veeam tells you that a component has to be installed.

This screen shows a successful deployment.

Next you have a summary screen which provides a summary of the configuration of the target host.

Now you are back at the first screen. Just select the server that you just added and provide a description. I chose to leave the default traffic port and number of streams.

Select a cache device with enough capacity for your needs.

Finally you can review your settings. If all is ok, just click Apply.

You can add as many WAN Accelerators as needed. One WAN Accelerator can be used by several tenants, but only one WAN Accelerator can be bound to a given tenant.

Prepare the tenant

Now you can add a tenant. Navigate to Cloud Connect tab and select Add tenant.

Provide a user name, a password and a description to your tenant. Then choose Backup storage (cloud backup repository).

In the next screen you can define the maximum number of concurrent tasks and a bandwidth limit.

Then click Add to bind a backup repository to the tenant.

Specify the cloud repository name, the backup repository, the capacity of the cloud repository and the WAN Accelerator.

Once the cloud repository is configured, you can review the settings in the last screen.

Now the Veeam Cloud Connect infrastructure is ready. The enterprise can now connect to Veeam Cloud Connect in Azure.

Connect On-Premises to Veeam Cloud Connect

To connect to the Veeam Cloud Connect infrastructure from On-Premises, open your Veeam Backup & Replication console. Then in Backup infrastructure, navigate to Service Providers. Click Add Service Provider.

Type in the FQDN of your Traffic Manager profile and provide a description. Select the external port you chose during the Veeam Cloud Gateways configuration (I left mine at the default 6180).

In the next screen, enter the credentials to connect to your tenant.

If the credentials are correct, you should see the available cloud repositories.

Now you can create a backup copy job to Microsoft Azure.

Enter a job name and description and configure the copy interval.

Add virtual machine backups to copy to Microsoft Azure and click Next.

In the next screen you can set archival settings and how many restore points you want to keep. You can also configure some advanced settings.

If you have a WAN Accelerator on-premises, you can select the source WAN Accelerator.

Then you can configure scheduling options for the backup copy job.

When the backup copy job configuration is complete, the job starts and you should see backup copies being created in the Veeam Cloud Connect infrastructure.

Conclusion

This topic introduced a “large” Veeam Cloud Connect infrastructure within Azure. All components can be deployed in a single VM (or two) for small environments, or as described in this post for large infrastructures. If you have several branch offices and want to send backup data to an offsite location, it can be the right solution instead of a tape library.

Deploy a SMB storage solution for Hyper-V with StarWind VSAN free

StarWind VSAN free provides a free Software-Defined Storage (SDS) solution for two nodes. With this solution, you are able to deliver highly available storage based on Direct-Attached Storage devices. On top of StarWind VSAN free, you can deploy Microsoft Failover Clustering with Scale-Out File Server (SOFS). So you can deploy a converged SDS solution with Windows Server 2016 Standard Edition and StarWind VSAN free. It is an affordable solution for your Hyper-V VM storage.

In this topic, we’ll see how to deploy StarWind VSAN free on two nodes running Windows Server 2016 Standard Core edition. Then we’ll deploy a failover cluster with SOFS to deliver storage to Hyper-V nodes.

Architecture overview

This solution should be deployed on physical servers with physical disks (NVMe, SSD or HDD etc.). For the demonstration, I have used two virtual machines. Each virtual machine has:

  • 4 vCPU
  • 4GB of memory
  • 1x OS disk (60GB dynamic) – Windows Server 2016 Standard Core edition
  • 1x Data disk (127GB dynamic)
  • 3x vNIC (1x Management / iSCSI, 1x Heartbeat, 1x Synchronization)

Both nodes are deployed and joined to the domain.

Node preparation

On both nodes, I run the following cmdlets to install the features and prepare a volume for StarWind:

# Install FS-FileServer, Failover Clustering and MPIO
install-WindowsFeature FS-FileServer, Failover-Clustering, MPIO -IncludeManagementTools -Restart

# Set the iSCSI service startup to automatic
get-service MSiSCSI | Set-Service -StartupType Automatic

# Start the iSCSI service
Start-Service MSiSCSI

# Create a volume with disk
New-Volume -DiskNumber 1 -FriendlyName Data -FileSystem NTFS -DriveLetter E

# Enable automatic claiming of iSCSI devices
Enable-MSDSMAutomaticClaim -BusType iSCSI

StarWind installation

Because I have installed the nodes in Core edition, I install and configure the components from PowerShell and the command line. You can download StarWind VSAN free from this link. To install StarWind from the command line, you can use the following parameters:

Starwind-v8.exe /SILENT /COMPONENTS="comma separated list of component names" /LICENSEKEY="path to license file"

The current list of components:

  • Service: StarWind iSCSI SAN server.
  • service\haprocdriver: HA Processor Driver, used to support devices that have been created with older versions of the software.
  • service\starflb: Loopback Accelerator, used with Windows Server 2012 and later versions to accelerate iSCSI operations when the client resides on the same machine as the server.
  • service\starportdriver: StarPort driver, required for the operation of Mirror devices.
  • Gui: Management Console.
  • StarWindXDll: StarWindX COM object.
  • StarWindXDll\powerShellEx: StarWindX PowerShell module.

To install StarWind, I have run the following command:

C:\temp\Starwind-v8.exe /SILENT /COMPONENTS="Service,service\starflb,service\starportdriver,StarWindxDll,StarWindXDll\powerShellEx" /LICENSEKEY="c:\temp\StarWind_Virtual_SAN_Free_License_Key.swk"

I run this command on both nodes. After this command is run, StarWind is installed and ready to be configured.

StarWind configuration

StarWind VSAN free provides a 30-day trial of the management console. After the 30 days, you have to manage the solution from PowerShell. So I decided to configure the solution from PowerShell:

Import-Module StarWindX

try
{
    $server = New-SWServer -host 10.10.0.54 -port 3261 -user root -password starwind

    $server.Connect()

    $firstNode = new-Object Node

    $firstNode.ImagePath = "My computer\E"
    $firstNode.ImageName = "VMSTO01"
    $firstNode.Size = 65535
    $firstNode.CreateImage = $true
    $firstNode.TargetAlias = "vmsan01"
    $firstNode.AutoSynch = $true
    $firstNode.SyncInterface = "#p2=10.10.100.55:3260"
    $firstNode.HBInterface = "#p2=10.10.100.55:3260"
    $firstNode.CacheSize = 64
    $firstNode.CacheMode = "wb"
    $firstNode.PoolName = "pool1"
    $firstNode.SyncSessionCount = 1
    $firstNode.ALUAOptimized = $true
    
    #
    # device sector size. Possible values: 512 or 4096(May be incompatible with some clients!) bytes. 
    #
    $firstNode.SectorSize = 512
	
    #
    # 'SerialID' should be between 16 and 31 symbols. If it is not specified, the StarWind service will generate it.
    # Note: the second node always has the same serial ID. You do not need to specify it for the second node.
    #
    $firstNode.SerialID = "050176c0b535403ba3ce02102e33eab"
    
    $secondNode = new-Object Node

    $secondNode.HostName = "10.10.0.55"
    $secondNode.HostPort = "3261"
    $secondNode.Login = "root"
    $secondNode.Password = "starwind"
    $secondNode.ImagePath = "My computer\E"
    $secondNode.ImageName = "VMSTO01"
    $secondNode.Size = 65535
    $secondNode.CreateImage = $true
    $secondNode.TargetAlias = "vmsan02"
    $secondNode.AutoSynch = $true
    $secondNode.SyncInterface = "#p1=10.10.100.54:3260"
    $secondNode.HBInterface = "#p1=10.10.100.54:3260"
    $secondNode.ALUAOptimized = $true
        
    $device = Add-HADevice -server $server -firstNode $firstNode -secondNode $secondNode -initMethod "Clear"
    
    $syncState = $device.GetPropertyValue("ha_synch_status")

    while ($syncState -ne "1")
    {
        #
        # Refresh device info
        #
        $device.Refresh()

        $syncState = $device.GetPropertyValue("ha_synch_status")
        $syncPercent = $device.GetPropertyValue("ha_synch_percent")

        Start-Sleep -m 2000

        Write-Host "Synchronizing: $($syncPercent)%" -foreground yellow
    }
}
catch
{
    Write-Host "Exception $($_.Exception.Message)" -foreground red 
}

$server.Disconnect()

Once this script is run, two HA images are created and synchronized. Now we have to connect to these devices through iSCSI.

iSCSI connection

To connect to the StarWind devices, I use iSCSI. I choose to configure iSCSI from PowerShell to automate the deployment. On the first node, I run the following cmdlets:

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.55 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.54
Get-IscsiTarget | Connect-IscsiTarget -isMultipathEnabled $True

On the second node, I run the following cmdlets:

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.54 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.55
Get-IscsiTarget | Connect-IscsiTarget -isMultiPathEnabled $True

You can run the iscsicpl command on a Server Core installation to display the iSCSI GUI. You should have something like this:
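
If you prefer to stay in PowerShell instead of the GUI, the following hedged check lists the iSCSI sessions and the disks presented by StarWind. The disk friendly name filter is an assumption based on how StarWind devices usually appear; adapt it to what Get-Disk returns in your environment.

# List the iSCSI sessions and verify that they are connected
Get-IscsiSession | Format-Table TargetNodeAddress, IsConnected, IsPersistent -AutoSize

# Make the connections persistent across reboots
Get-IscsiSession | Register-IscsiSession

# Check that the StarWind device is visible as a disk (friendly name pattern is an assumption)
Get-Disk | Where-Object FriendlyName -like 'STARWIND*'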

PS: If you have a 1Gb/s network, set the load balancing policy to Failover and leave the 127.0.0.1 path active. If you have a 10Gb/s network, choose the Round Robin policy.
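
The MPIO default load-balancing policy can also be set from PowerShell. This is only a sketch: it sets the global default for iSCSI-claimed devices rather than a per-device setting, FOO stands for Fail Over Only and RR for Round Robin, and you should pick the value matching your network speed as explained above.

# 1 Gb/s network: Fail Over Only
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO

# 10 Gb/s network: Round Robin
# Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Verify the current default policy
Get-MSDSMGlobalDefaultLoadBalancePolicy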

Configure Failover Clustering

Now that a shared volume is available to both nodes, you can create the cluster:

Test-Cluster -Node VMSAN01, VMSAN02

Review the report and, if everything is OK, create the cluster:

New-Cluster -Node VMSAN01, VMSAN02 -Name Cluster-STO01 -StaticAddress 10.10.0.65 -NoStorage

Navigate to Active Directory (dsa.msc) and locate the OU where the Cluster Name Object is located. Edit the permissions on this OU to allow the Cluster Name Object to create computer objects:
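
If you prefer to script this delegation instead of using the GUI, dsacls can grant the Create Computer Objects right to the CNO from a machine that has the AD DS tools. The OU distinguished name and the domain NetBIOS name below are placeholders; replace them with your own values.

# Allow the cluster CNO (Cluster-STO01$) to create computer objects in the OU hosting the cluster
# The OU DN and domain name are examples only
dsacls "OU=Servers,DC=contoso,DC=local" /G "CONTOSO\Cluster-STO01$:CC;computer"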

Now we can create the Scale-Out File Server role:

Add-ClusterScaleOutFileServerRole -Name VMStorage01

Then we can initialize the StarWind disk and convert it to a Cluster Shared Volume (CSV). Finally, we can create an SMB share:

# Initialize the disk
Get-Disk | Where-Object OperationalStatus -like 'Offline' | Initialize-Disk

# Create a CSVFS NTFS partition
New-Volume -DiskNumber 3 -FriendlyName VMSto01 -FileSystem CSVFS_NTFS

# Rename the link in C:\ClusterStorage
Rename-Item C:\ClusterStorage\Volume1 VMSTO01

# Create a folder
New-Item -Type Directory -Path C:\ClusterStorage\VMSto01 -Name VMs

# Create a share
New-SmbShare -Name 'VMs' -Path C:\ClusterStorage\VMSto01\VMs -FullAccess everyone

The cluster looks like this:

Now, from Hyper-V, I am able to store VMs on this cluster:
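
As an illustration, a VM can be created directly on the SOFS share from a Hyper-V host. This is only a sketch: the VM name, memory size, VHDX size and virtual switch name are assumptions, and it supposes the Hyper-V host has access to the \\VMStorage01\VMs share created above.

# Create a VM whose configuration and VHDX are stored on the SOFS share
New-VM -Name 'TestVM' `
       -MemoryStartupBytes 2GB `
       -Generation 2 `
       -Path '\\VMStorage01\VMs' `
       -NewVHDPath '\\VMStorage01\VMs\TestVM\TestVM.vhdx' `
       -NewVHDSizeBytes 40GB `
       -SwitchName 'Management'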

Conclusion

StarWind VSAN free and Windows Server 2016 Standard Edition provide an affordable SDS solution. Thanks to this solution, you can deploy a 2-node storage cluster which provides SMB 3.1.1 shares, and Hyper-V can use these shares to host virtual machines.

Host Veeam backup on Storage Spaces Direct

Storage Spaces Direct (S2D) is well known for hosting virtual machines in the disaggregated or hyperconverged model. But S2D can also be used for backup purposes as a backup repository. You can plan to implement a S2D cluster in the disaggregated model to host virtual machines but also to store backups. Because Veeam Backup & Replication can leverage a repository based on SMB shares, the Veeam backups can be hosted on a S2D cluster through Scale-Out File Server (SOFS).

Veeam Backup & Replication 9.5 provides advanced ReFS integration that delivers faster synthetic full backup creation, reduces storage requirements and improves reliability as well as backup and restore performance. With Storage Spaces Direct, Microsoft mainly recommends ReFS as the file system. This is why, if you have a S2D cluster (or plan to deploy one) and Veeam, it can be a great opportunity to host backups on the S2D cluster.

A S2D cluster provides three resilience models: mirroring, parity and mixed resiliency. A mirrored volume is not a good option to store backups because too much space is used for resilience (50% with 2-way mirroring, 67% with 3-way mirroring). Mirroring is good for storing virtual machines. Parity is a good option to store backups: the more storage you add, the more efficient it becomes. However, you need at least a 4-node S2D cluster. Mixed resiliency is also a good option because you mix mirroring and parity, and therefore performance and efficiency, but it requires a careful design.

In this topic, I’ll implement a S2D cluster with a dual parity volume to store Veeam backups.

4-node S2D cluster deployment

First of all, you have to deploy a S2D cluster. You can follow this topic to implement the cluster. For this topic, I have deployed a 4-node cluster. After the operating system and drivers were installed, I ran the following PowerShell script. This script installs the required features on all nodes and enables RDMA for the network adapters whose name contains Cluster.

$Nodes = "VMSDS01","VMSDS02","VMSDS03","VMSDS04"

Foreach ($Node in $Nodes){
    Try {
        $Cim = New-CimSession -ComputerName $Node -ErrorAction Stop
        Install-WindowsFeature Failover-Clustering, FS-FileServer -IncludeManagementTools -Restart -ComputerName $Node -ErrorAction Stop | Out-Null

        Enable-NetAdapterRDMA -CimSession $Cim -Name Cluster* -ErrorAction Stop | Out-Null
    }
    Catch {
        Write-Host $($Error[0].Exception.Message) -ForegroundColor Red -BackgroundColor Green
        Exit
    }
}

Then, from a node of the cluster, I ran the following cmdlets:

$Nodes = "VMSDS01","VMSDS02","VMSDS03","VMSDS04"
$ClusIP = "10.10.0.44"
$ClusNm = "Cluster-BCK01"

Test-Cluster -Node $Nodes -Include "Storage Spaces Direct", Inventory,Network,"System Configuration"
New-Cluster -Node $Nodes -StaticAddress $ClusIP -NoStorage
Enable-ClusterS2D

New-Volume -StoragePoolFriendlyName "*Cluster-BCK01" -FriendlyName BCK01 -FileSystem CSVFS_ReFS -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -Size 100GB

Rename-Item c:\ClusterStorage\Volume1 BCK01

At this point, the cluster is created and Storage Spaces Direct is enabled. The cluster is called Cluster-BCK01 (IP: 10.10.0.44) and a dual parity volume is created. Then open the permissions of the OU where the Cluster Name Object of the cluster is located, and add a permission to allow the Cluster Name Object to create computer objects.

Open Failover Cluster Manager and rename the networks to ease management.
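
The same renaming can be done from PowerShell. The original network names ("Cluster Network 1", "Cluster Network 2") and the target names below are assumptions; adapt them to what Get-ClusterNetwork returns in your environment.

# List the cluster networks and rename them to something meaningful
Get-ClusterNetwork -Cluster Cluster-BCK01
(Get-ClusterNetwork -Cluster Cluster-BCK01 -Name 'Cluster Network 1').Name = 'Management'
(Get-ClusterNetwork -Cluster Cluster-BCK01 -Name 'Cluster Network 2').Name = 'Storage'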

You can also check that you have all the enclosures and physical disks.

When S2D was enabled, a storage pool containing all the physical disks was automatically created. I renamed it Backup Pool.
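
The pool can also be renamed from PowerShell. The default S2D pool name follows the pattern "S2D on <cluster name>", so the friendly name below is an assumption based on this cluster being called Cluster-BCK01.

# Rename the storage pool created by Enable-ClusterS2D
Get-StoragePool -FriendlyName 'S2D on Cluster-BCK01' | Set-StoragePool -NewFriendlyName 'Backup Pool'

# Check the physical disks that are part of the pool
Get-StoragePool -FriendlyName 'Backup Pool' | Get-PhysicalDisk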

You can also check that a Cluster Shared Volume has been created.
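
A quick way to verify the volume resiliency and the CSV from PowerShell is sketched below; it assumes your build exposes the FootprintOnPool property, which gives an idea of the space consumed by the dual parity layout.

# Verify the virtual disk resiliency and its footprint on the pool
Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, @{N='SizeGB';E={$_.Size/1GB}}, @{N='FootprintGB';E={$_.FootprintOnPool/1GB}}

# Verify that the Cluster Shared Volume is online
Get-ClusterSharedVolume -Cluster Cluster-BCK01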

Next, run the following cmdlets to create the SOFS role, create a folder in the volume and create a share on this folder.

Add-ClusterScaleOutFileServerRole -Name BCK-REPO
New-Item -Type Directory -Path '\\vmsds01\c$\ClusterStorage\BCK01' -Name VMBCK01
New-SmbShare -Name 'VMBCK01' -Path C:\ClusterStorage\BCK01\VMBCK01 -FullAccess everyone

If you go back to the cluster, you can see that the Scale-Out File Server role has been created, as well as the share.

You can edit the permissions of the folder to give specific permissions to the account that will be used in Veeam Backup & Replication.
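
For example, NTFS permissions can be granted to a dedicated service account with icacls, run on the node that owns the CSV. The account name (svc-veeam) and the domain name below are purely assumptions for illustration.

# Grant Modify rights on the backup folder to the Veeam service account (account and domain are examples)
icacls 'C:\ClusterStorage\BCK01\VMBCK01' /grant 'MYDOMAIN\svc-veeam:(OI)(CI)M'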

Veeam Backup & Replication configuration

First of all, I create a new backup repository in Veeam Backup & Replication.

Then choose the shared folder backup repository.

Next, specify the shared folder where you want to store the backups. My SOFS role is called BCK-REPO and the share is called VMBCK01, so the path is \\BCK-REPO\VMBCK01. Also specify credentials that have permissions on the shared folder.

In the next window, you can specify advanced properties.

Then I choose not to enable the vPower NFS service because I back up only Hyper-V VMs.

To finish the backup repository creation, review the properties and click on Apply.

Run a backup on the S2D repository

To test the new backup repository, I choose to create a backup copy job.

Then choose the VM that will be in the backup copy job.

In the next screen, choose the S2D backup repository, the number of restore points and the archival settings.

Next, choose whether you want to use a WAN accelerator.

When the wizard is finished, the backup copy job starts processing. You can see data arriving in the shared folder.

When the backup is finished, you can see that the data processed and the size on disk are different. This is because Veeam leverages ReFS to reduce storage usage.

Conclusion

Microsoft Storage Spaces Direct can be used to store your virtual machines but also your backups. If you plan a S2D cluster in the disaggregated model, you can design the solution to store both VM data and backup jobs. The main disadvantage is that backups should be located on a parity volume (or a mixed-resiliency volume), which requires at least a 4-node S2D cluster.

RDS 2016 farm: RDS Final configuration

This article is the final topic about how to deploy Remote Desktop Services in Microsoft Azure with Windows Server 2016. In this topic, we will apply the final RDS configuration, such as the certificates, the collection and some custom settings. Then we will try to open a remote application from the portal.

Certificates

Before creating the collection, we can configure the certificates for RD Web Access, RD Gateway and the brokers. You can request a public certificate for this or you can use your own PKI. If you use your own PKI, you have to distribute the certificate authority certificates to all clients and provide a CRL/OCSP responder. If you use a public certificate, there is almost no client-side configuration. You can get more information about the required certificates here.

Once you have your certificate(s), you can open the properties of the RDS farm from Server Manager and navigate to Certificates. In this interface, you can add the certificate(s) for each role.
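
If you prefer PowerShell over Server Manager, the RemoteDesktop module provides Set-RDCertificate for each role. The PFX path below is an assumption; repeat the cmdlet with the RDWebAccess, RDGateway, RDPublishing and RDRedirector roles.

$Password = Read-Host -AsSecureString -Prompt "PFX password"
Set-RDCertificate -Role RDWebAccess `
                  -ImportPath 'C:\temp\rds-cert.pfx' `
                  -Password $Password `
                  -ConnectionBroker azrdb0.homecloud.net `
                  -Force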

On the client side, you should add a setting by GPO or with the local policy editor. Get the RD Connection Broker – Publishing certificate thumbprint and copy it. Then edit the setting Specify SHA1 thumbprints of certificates representing trusted .rdp publishers and add the certificate thumbprint without spaces. This setting removes a pop-up for the clients.

Create and configure the collection

To create the collection, I use the following PowerShell cmdlet:

New-RDSessionCollection -CollectionName RemoteApps `
                        -SessionHost azrdh0.homecloud.net, azrdh1.homecloud.net `
                        -CollectionDescription "Remote application collection" `
                        -ConnectionBroker azrdb0.homecloud.net

Once you have created the collection, the RDS farm should indicate a new collection:

Now we can configure the User Profile Disks location:

Set-RDSessionCollectionConfiguration -CollectionName RemoteApps `
                                     -ConnectionBroker azrdb0.homecloud.net `
                                     -EnableUserProfileDisk `
                                     -MaxUserProfileDiskSizeGB 10 `
                                     -DiskPath \\SOFS\UPD$

If you edit the properties of the collection, you should have this User Profile Disk configuration:

In the \\sofs\upd$ folder, you can check that you have new VHDX files as below:

From the Server Manager, you can configure the collection properties as below:

Add applications to the collection

The collection that we have created is used to publish applications. So, you can install each application you need on all RD Host servers. Once the applications are installed, you can publish them. Open the collection properties and click on Add applications in the RemoteApp Programs section.

Then select the applications you want to publish. If an application you want to publish is not available in the list, you can click on Add.

Then the wizard confirms the applications that will be published.
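
Publishing can also be scripted. As a hedged example, the calculator used later in this topic could be published with New-RDRemoteApp; the display name and alias below are assumptions.

New-RDRemoteApp -CollectionName RemoteApps `
                -DisplayName 'Calculator' `
                -FilePath 'C:\Windows\System32\calc.exe' `
                -Alias 'calc' `
                -ConnectionBroker azrdb0.homecloud.net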

Test

Now that applications are published, you can browse to the RD Web Access portal. In my configuration, I have added a DNS record which is bound to the Azure Load Balancer public IP. Specify your credentials and click on Sign In.

Click on the application of your choice.

I have chosen the calculator. As you can see in Task Manager, the calculator is run through a Remote Desktop Connection. Great, it is working.

Conclusion

This series of topics about Remote Desktop Services has shown you how to deploy the farm in Azure. We saw that Windows Server 2016 brings a lot of new features that ease the deployment in Azure. However, you can also deploy the RDS farm on-premises if you wish.

RDS 2016 Farm: Configure File Servers for User Profile Disks

In the previous topics of this series, we have deployed the RDS farm in Azure. Now we need a highly available file service to manage user profile disks (UPD). To support the high availability, I leverage Storage Spaces Direct (S2D) and Scale-Out File Server (SOFS). For more information about the deployment of S2D, you can read this topic (based on the hyperconverged model). For Remote Desktop usage, I'll deploy a disaggregated model of S2D. In this topic, I'll configure the file servers for User Profile Disks. This series consists of the following topics:

I’ll deploy this file service by using only PowerShell. Before following this topic, be sure that your Azure VMs have joined Active Directory and that they have two network adapters in two different subnets (one for the cluster and the other for management). I have also set static IP addresses from the Azure portal.

Deploy the cluster

First of all, I install these features on both file server nodes:

install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools

Then I install the Failover Clustering RSAT on the management VM.

Install-WindowsFeature RSAT-Clustering

Next, I test whether the cluster nodes can support Storage Spaces Direct:

Test-Cluster -Node "AZFLS0","AZFLS1" -Include "Storage Spaces Direct", Inventory,Network,"System Configuration"

If the test passes successfully, you can run the following cmdlet to deploy the cluster with the name UPD-Sto and the IP 10.11.0.29.

New-Cluster -Node "AZFLS0","AZFLS1" -Name UPD-Sto -StaticAddress 10.11.0.29 -NoStorage

Once the cluster is created, grant the Cluster Name Object (UPD-Sto) the right to create computer objects on the OU where it is located. This permission is required to create the CNO for SOFS.

Enable and configure S2D and SOFS

Now that the cluster is created, you can enable S2D (I run the following PowerShell cmdlet on a file server node by using PowerShell remoting).

Enable-ClusterS2D

Then I create a new volume formatted with ReFS and with a capacity of 100GB. This volume uses 2-way mirroring resilience.

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName UPD01 -FileSystem CSVFS_REFS -Size 100GB

Now I rename the Volume1 folder in ClusterStorage to UPD-01:

rename-item C:\ClusterStorage\Volume1 UPD-01

Then I add the Scale-Out File Server role to the cluster and call it SOFS.

Add-ClusterScaleOutFileServerRole -Name SOFS

To finish, I create a folder called Profiles in the volume and share it with full access for Everyone (not recommended in production). I call the share UPD$:

New-Item -Path C:\ClusterStorage\UPD-01\Profiles -ItemType Directory
New-SmbShare -Name 'UPD$' -Path C:\ClusterStorage\UPD-01\Profiles -FullAccess everyone
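
Since granting Everyone full access is not recommended in production, a tighter sketch restricts the share to the RD Session Host computer accounts instead. The NetBIOS domain name HOMECLOUD is an assumption, and on a clustered share you may also need to pass -ScopeName with the SOFS name.

# Grant the session host computer accounts full access and remove Everyone (domain name is an assumption)
Grant-SmbShareAccess -Name 'UPD$' -AccountName 'HOMECLOUD\AZRDH0$','HOMECLOUD\AZRDH1$' -AccessRight Full -Force
Revoke-SmbShareAccess -Name 'UPD$' -AccountName 'Everyone' -Force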

Now my storage is ready and I am able to reach \\SOFS.homecloud.net\UPD$

Next topic

In the next topic, I will deploy a session collection and configure it. Then I will add the certificates for each Remote Desktop component.
