Windows Server – Tech-Coffee

Register Windows Admin Center in Microsoft Azure

With Windows Server 2019 and Windows Admin Center, we are able to build a hybrid cloud in an easy way. First, Windows Admin Center provides a GUI to configure features such as Azure Backup, Azure Site Recovery or Azure File Sync. With Windows Server 2019, we can interconnect an on-premises host to an Azure virtual network thanks to the Azure Network Adapter. Finally, the Storage Migration Service enables you to migrate a file server to Azure file services such as Azure File Sync. But to be able to leverage all these features from Windows Admin Center, it must be registered in Microsoft Azure. In this topic, I’ll show you step-by-step how to register Windows Admin Center in Microsoft Azure.

Requirements

To be able to follow this topic, you need the following:

  • An Azure subscription
  • A running Windows Admin Center (1809 at least).

Register Windows Admin Center in Microsoft Azure

From a web browser (Edge or Chrome), open Windows Admin Center and click on the wheel at the top right corner. Then click on Azure and Register.

Then copy the code, click on Device Login and paste the code you just copied. A Microsoft login pop-up should appear: enter your Azure credentials.

If you have several tenants, choose the right one. You can find the tenant ID in the Azure Portal by clicking on Switch Directory. If you have already registered a Windows Admin Center before, you can reuse the Azure AD app by selecting the option.

Now you are asked to grant permissions to the Azure AD app. Open the Azure Portal from the browser of your choice.

Then navigate to App Registrations and select your Windows Admin Center App. Edit its settings and click on Required permissions. Finally click on Grant Permissions.

If the registration succeeded, you should see the following information.

Now you can enjoy Azure Hybrid features such as Azure Backup from Windows Admin Center.

If you wish, you can also use Azure Active Directory to authenticate users and administrators on Windows Admin Center.

Conclusion

Windows Server 2019 and Windows Admin Center promise to simplify hybrid scenarios. Thanks to Windows Admin Center, we are able to configure on-premises hosts in Azure Site Recovery and Azure Backup. The “hybrid” extensions of Windows Admin Center are still in preview. Just by upgrading extensions, we’ll get more features. This is why Windows Admin Center is a good product (and it’s free!)

Deploy a Windows Server 2019 RDS farm with HTML5 client

These days I’m testing Windows Server 2019 in depth. Today I chose to pay attention to Remote Desktop Services. The goal of my lab is to deploy an RDS farm with all components and with the new HTML5 Remote Desktop client. Even though I’m running my lab on Windows Server 2019, you can also deploy the HTML5 client on Windows Server 2016. In this topic, I wanted to share with you the steps I followed to deploy the Windows Server 2019 RDS farm.

Requirements

For this lab, I have deployed four virtual machines running Windows Server 2019:

  • RDS-APP-01: RD Session Host that hosts the RemoteApp collection
  • RDS-DKP-01: RD Session Host that hosts the Remote Desktop collection
  • RDS-BRK-01: hosts the RD Broker and RD Licensing roles
  • RDS-WEB-01: hosts the RD Web Access and RD Gateway roles

Then I have a public certificate for the RD Web Access and RD Gateway roles:

I also have a private certificate for RD Broker publishing and RD Broker connection. To create this certificate, I duplicated the Workstation Authentication ADCS template as described in this topic.

I have exported both certificates in PFX format (with the private key) and in CER format (just the public certificate).

Finally, I have two DNS zones:

  • SeromIT.local: Active Directory forest zone
  • SeromIT.com: split zone, hosted by the local domain controllers and by a public provider. I use this zone to connect from the Internet. In this zone I have created two records:
    • Apps.SeromIT.com: leading to RDS-WEB-01 (CNAME)
    • RDS-GW.SeromIT.com: leading to RDS-WEB-01 (CNAME) for the gateway

RDS farm deployment

To deploy the RDS farm, I use only PowerShell. This way, I can reproduce the deployment for other customers. First of all, I run a Remote Desktop deployment to configure the RD Web Access, RD Broker and RD Session Host roles:


New-RDSessionDeployment -ConnectionBroker RDS-BRK-01.SeromIT.local `
                        -SessionHost RDS-DKP-01.SeromIT.local `
                        -WebAccessServer RDS-WEB-01.SeromIT.local

Then I run PowerShell cmdlets to add another RD Session Host, the RD Licensing role and the RD Gateway role.


Add-RDServer -Server RDS-APP-01.SeromIT.local `
             -Role RDS-RD-SERVER `
             -ConnectionBroker RDS-BRK-01.SeromIT.local

Add-RDServer -Server RDS-BRK-01.SeromIT.local `
             -Role RDS-Licensing `
             -ConnectionBroker RDS-BRK-01.SeromIT.local

Add-RDServer -Server RDS-WEB-01.SeromIT.local `
             -Role RDS-Gateway `
             -ConnectionBroker RDS-BRK-01.SeromIT.local `
             -GatewayExternalFqdn RDS-GW.SeromIT.com

Once these commands are run, the role deployment is finished:

Now we can configure the certificates.

Certificate configuration

To configure each certificate, I again use PowerShell. Remember, I have stored both certificates in PFX format in C:\temp\RDS on my broker server.

$Password = Read-Host -AsSecureString
Set-RDCertificate -Role RDGateway `
                  -ImportPath C:\temp\RDS\wildcard_SeromIT_com.pfx `
                  -Password $Password `
                  -ConnectionBroker RDS-BRK-01.SeromIT.local `
                  -Force

Set-RDCertificate -Role RDWebAccess `
                  -ImportPath C:\temp\RDS\wildcard_SeromIT_com.pfx `
                  -Password $Password `
                  -ConnectionBroker RDS-BRK-01.SeromIT.local `
                  -Force

Set-RDCertificate -Role RDPublishing `
                  -ImportPath C:\temp\RDS\Broker.pfx `
                  -Password $Password `
                  -ConnectionBroker RDS-BRK-01.SeromIT.local `
                  -Force

Set-RDCertificate -Role RDRedirector `
                  -ImportPath C:\temp\RDS\Broker.pfx `
                  -Password $Password `
                  -ConnectionBroker RDS-BRK-01.SeromIT.local `
                  -Force

Once these commands are executed, the certificates are installed for each role:

Collection creation

Now I create a collection to add resources inside the RD Web Access portal:

New-RDSessionCollection -CollectionName Desktop `
                        -CollectionDescription "Desktop Publication" `
                        -SessionHost RDS-DKP-01.SeromIT.local `
                        -ConnectionBroker RDS-BRK-01.SeromIT.local

Then from Server Manager, you can configure settings of this collection:
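The lab also defines RDS-APP-01 to host a RemoteApp collection. As a sketch (the published application and its path are illustrative, not from the original post), the second collection and an app could be created like this:

# Sketch: create the RemoteApp collection on RDS-APP-01 and publish Notepad
New-RDSessionCollection -CollectionName RemoteApp `
                        -CollectionDescription "RemoteApp Publication" `
                        -SessionHost RDS-APP-01.SeromIT.local `
                        -ConnectionBroker RDS-BRK-01.SeromIT.local

New-RDRemoteApp -CollectionName RemoteApp `
                -DisplayName "Notepad" `
                -FilePath "C:\Windows\System32\notepad.exe" `
                -ConnectionBroker RDS-BRK-01.SeromIT.local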

Enable the HTML5 Remote Desktop client

In this lab, I don’t want to use the legacy portal. I’d like to use the super cool new HTML5 RD client. To enable this client, I connect to the server hosting the RD Web Access role and run the following cmdlet:

Install-Module -Name PowerShellGet -Force -Confirm:$False

Afterwards, close and reopen the PowerShell window. Then execute this command:

Install-Module -Name RDWebClientManagement -Confirm:$False

Then copy the RD Broker certificate in cer format into the RD Web Access server and run the following cmdlets:

Import-RDWebClientBrokerCert c:\temp\broker.cer

Install-RDWebClientPackage
Publish-RDWebClientPackage -Type Production -Latest

Now you can connect to the RD Web client by using the following URL: https://<RD Web Access FQDN>/RDWeb/WebClient/Index.html. In my example, I connect to https://apps.SeromIT.com/RDWeb/WebClient/Index.html.

Conclusion

I like the RD Web client for several reasons. First, you can connect to an RDS session from any HTML5-ready web browser. You no longer need a compatible RD client and you can connect from several devices such as a Mac, a Linux machine or maybe a tablet or smartphone. Secondly, the HTML5 client doesn’t require SSO settings like the legacy portal did. The deployment is easier than before. And finally, I find this client more user-friendly than the legacy portal. The only thing missing is the ability to enable the HTML5 client with a single click or PowerShell cmdlet, or to have it enabled by default.

Introduction to System Insights

For a while now, Microsoft has been releasing new Windows Server 2019 builds regularly. The latest public build holds a new feature called System Insights. It analyzes the usage of some system resources such as CPU, storage and network, and builds forecasts. Moreover, in specific situations, a script can be run to resolve the incident. System Insights is included in Windows Server 2019 without any extra cost and can be managed through Windows Admin Center. In this topic we will see what System Insights looks like.

Enable System Insights

To work with System Insights, you need at least the build 17709 of Windows Server 2019. When the OS is installed, you can enable System Insights in two different ways:

  • Through PowerShell
  • With Windows Admin Center

For this topic, I decided to enable System Insights the “old way”, meaning from PowerShell. Because I didn’t know the feature name, I searched for it by using the following PowerShell cmdlet:
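# Sketch: a wildcard search against the available features (the feature is named System-Insights)
Get-WindowsFeature -Name *Insights*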

So next I ran the following cmdlet to install System Insights:

Install-WindowsFeature -Name System-Insights -IncludeManagementTools -Restart

PowerShell Cmdlets

Several PowerShell cmdlets are available with System Insights. By running the following cmdlet, we are able to list the commands available to manage System Insights:
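# Sketch: list the cmdlets shipped in the SystemInsights module
Get-Command -Module SystemInsights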

For example, the cmdlet Get-InsightsCapability shows what is analyzed by System Insights.

With these commands you can disable capabilities, force System Insights to run an analysis, configure the scheduling and so on. It’s good that we can manage System Insights from PowerShell for automation, but for day-to-day administration, I think Windows Admin Center is better.
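For example, a minimal sketch of running an analysis on demand (the capability name is one of the four built-in capabilities):

# Run a capability immediately, then read its latest result
Invoke-InsightsCapability -Name "CPU capacity forecasting"
Get-InsightsCapabilityResult -Name "CPU capacity forecasting"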

Manage System Insights through Windows Admin Center

This topic has been written with the latest Windows Admin Center build (1806). In this build, Microsoft provides an extension for System Insights. To install the extension, open the wheel (top right) and select Extensions. In the Available extensions tab, select the Windows Server System Insights extension and click on Install.

Once the extension is installed, you get a new tab in standalone server management. As you can see in the following screenshot, we have the same capabilities as those we get through PowerShell. Even though the capabilities are enabled, the status indicates None. I had to wait 6 days to get the first information about CPU capacity and volume consumption forecasting.

If you select a capability and then Settings, you can schedule it:

If you click on Actions, you can specify a script to run if a warning or a critical status is raised. For example, if System Insights predicts that CPU workloads will consume over 100%, you can send an e-mail to your boss to buy new servers through a PowerShell script. Some pre-written scripts are available on the Microsoft website. You can discover them by clicking on the link in the Actions tab.
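The same binding can be done from PowerShell. A sketch (the script path and credential are illustrative, not from the original post):

# Register a script to run when this capability raises a Warning status
$Cred = Get-Credential
Set-InsightsCapabilityAction -Name "CPU capacity forecasting" -Type Warning -Action "C:\Scripts\Notify-Admin.ps1" -ActionCredential $Cred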

When System Insights has collected enough recent data, you get the following graph. The grey area is the forecast.

If the forecast reaches a specific limit, the status of the capability changes to warning or critical. Finally, you have a history of the status:

Conclusion

System Insights is an interesting feature. Even though it is still in preview, we can see how it can help to forecast the future consumption of the infrastructure. I think it can be especially useful in hyperconverged solutions to know in advance when a node should be added to the cluster or when we have to add some storage devices. For the moment we have only four capabilities, but I’m sure we will get more later. Moreover, Microsoft provides some pre-written scripts for actions in case of warning or critical status. I suggest you keep an eye on this feature 🙂

Create a custom SUU to update Dell firmware

Dell provides a smart utility to update firmware and drivers on their servers. This utility is called Server Update Utility, or SUU for short. SUU is an ISO that holds all drivers and firmware for all supported hardware. When you run SUU on a Windows Server, it detects the hardware, firmware versions and driver versions. Then SUU asks you whether you wish to upgrade the components. When I deploy hyperconverged solutions such as Storage Spaces Direct with scripts, it helps me automate the deployment. However, because SUU contains a lot of firmware and drivers, the ISO is really huge (almost 8GB). Thanks to Dell Repository Manager, you can create your own SUU based on the hardware you need to upgrade. This results in a lightweight SUU and reduces the upgrade time because you no longer need to copy 8GB. In this topic, we’ll see how to create a custom SUU.

Dell Repository Manager

To follow this topic, you need to install Dell Repository Manager. The installation is pretty easy: Next, Next, Install. This application enables you to connect to an online repository to download drivers and firmware and to create custom bundles. Dell Repository Manager is able to connect to an iDRAC to detect the hardware. You can also choose the server reference from a list. When you open Dell Repository Manager for the first time, you can only add a repository.

Then provide a name for your repository and choose Enterprise Server Catalog. Next I choose the repository type called Integration and I select iDRAC.

Specify the IP address of the iDRAC and the credentials.

Then your server is detected (the service tag as well).

Now that the repository is added, you should get two bundles: one for Linux and the other one for Windows. I select the Windows bundle and I click on Export.

Create the custom SUU

Once you have clicked on Export, select SUU ISO. If you use Dell Repository Manager for the first time, the application warns you that plugins are required. Just install the plugins to be able to export as an SUU ISO. Select SUU ISO and specify a location. Click on Export to start the process.

If you click on Repository Manager (at the top of the application), you can select Jobs. From this view, you are able to monitor the job status.

When the export process is finished, you should see the SUU ISO.

Now that the SUU ISO is created, you can copy it to the server you want to upgrade. When you mount the ISO on a Windows Server, you can run SUU.cmd -e and SUU will take care of upgrading your drivers and firmware by itself.
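If you want to script that step too, here is a minimal sketch (the ISO path is illustrative):

# Mount the custom SUU ISO, resolve its drive letter and run it unattended
Mount-DiskImage -ImagePath "C:\Temp\CustomSUU.iso"
$Drive = (Get-DiskImage -ImagePath "C:\Temp\CustomSUU.iso" | Get-Volume).DriveLetter
& "$($Drive):\SUU.cmd" -e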

Conclusion

When you have dozens of servers, the server upgrade process can be a pain and take a lot of time. SUU helps to automate firmware upgrades, but the full ISO can take a long time to copy because of its size. Thanks to Dell Repository Manager, you can create a custom SUU with just enough firmware and drivers for your systems. It’s free, enjoy 🙂

Deploy Windows Admin Center in HA through Kemp Load Balancer

Windows Admin Center (formerly Project Honolulu) was released in April 2018 by Microsoft. WAC is a web-based management tool that helps administer Windows Server and hyperconverged clusters. As part of my job, I primarily use Windows Admin Center for Storage Spaces Direct clusters and to manage Windows Server in Core edition, especially drivers. Since the release of Windows Admin Center, Microsoft provides the capability to deploy it in high availability. In this topic we’ll see how to deploy Windows Admin Center in this manner. Moreover, some customers want to connect to WAC through a load balancer such as Kemp to avoid private certificate management and to be able to connect from the Internet. So we’ll also see how to connect to WAC through a Kemp load balancer.

Requirements

To follow this topic, you need the following:

  • 2x virtual machines
    • I set 2 vCPUs, 4GB of memory and a dynamic OS disk of 60GB
    • I deployed Windows Server 2016 in Core edition
    • 1x Network Adapter for management
    • 1x Network Adapter for cluster
    • The VM must be joined to the Active Directory domain
  • 1x shared disk of 10GB for these two VMs. You can use traditional iSCSI, FC LUN or shared VHDX / VHD Set
  • 1x IP in management network for the cluster
  • 1x IP in management network for Windows Admin Center cluster resource
  • 1x Name for the cluster (in this example: Cluster-WAC01.SeromIT.local)
  • 1x Name for Windows Admin Center cluster resource (in this example: WAC.SeromIT.local)

You need also to download the latest Windows Admin Center build from this link and the script to deploy WAC in high availability from this link.

Deploy the cluster

First of all, we have to deploy features on both virtual machines. I install Failover Clustering and its PowerShell module with these cmdlets:

Install-WindowsFeature RSAT-Clustering-PowerShell, Failover-Clustering -ComputerName "Node1"
Install-WindowsFeature RSAT-Clustering-PowerShell, Failover-Clustering -ComputerName "Node2"

Then I initialize the shared disk. First, I list the disks connected to the VM. Disk 0 is for the operating system and disk 1 is the shared disk. Then I initialize the disk and create an NTFS volume:

Initialize-Disk -Number 1
New-Volume -DiskNumber 1 -FriendlyName Data -FileSystem NTFS

Once the volume is created, I run a cluster validation to check whether the nodes are compliant to be part of a cluster. To execute this validation, I run the following cmdlet:

Test-Cluster -Node Node1,Node2

N.B: My test reports an issue related to software update levels: it is because I don’t have the latest Windows Defender signatures on one node.

Once you have validated the report, you can create the cluster by running the following cmdlet. I specify the -NoStorage option to prevent the cluster from claiming my shared disk for witness usage.

New-Cluster -Node Node1,Node2 -Name ClusterName -StaticAddress ClusterIPAddress -NoStorage

Once the cluster is created, I move the Cluster Name Object (CNO) to a specific OU. Then I grant the CNO permission to create computer objects in this OU.

Next I rename the cluster networks to Management and Cluster. The network with the Cluster and Client role is renamed Management and the one with the Cluster-only role is called … Cluster.

(Get-Cluster -Name ClusterName | Get-ClusterNetwork -Name "Cluster Network 1").Name="Management"
(Get-Cluster -Name ClusterName | Get-ClusterNetwork -Name "Cluster Network 2").Name="Cluster"

Then I add a file share witness. For that I have created a share on my domain controller server called Cluster-WAC$:

Get-Cluster -Name ClusterName | Set-ClusterQuorum -FileShareWitness "\\path\to\the\file\share\witness"

To finish, I add the Cluster Shared Volume (CSV):

# Add the shared disk to the cluster, convert it to a CSV and rename it
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
(Get-ClusterSharedVolume -Name "Cluster Disk 1").Name="Data"
Rename-Item C:\ClusterStorage\Volume1\ Data

As you can see in the failover clustering console, the file share witness is well configured.

The cluster networks are renamed to Management and Cluster.

The CSV is present in the cluster and it’s called Data.

(Optional) Get a certificate from an enterprise PKI

If you want to use your own enterprise PKI, you can follow these steps. Connect to an enterprise CA and manage the templates. Duplicate the Web Server template. In the Subject Name tab, choose Supply in the request. Also allow the private key to be exportable.

Then request a certificate from the MMC or from the web interface and specify the following information:

  • Subject Name: Common Name as the Windows Admin Center cluster resource Name
  • Subject Alternative Name:
    • DNS: Windows Admin Center Cluster resource name
    • DNS: first node FQDN
    • DNS: second node FQDN

Then export the certificate and its private key in a PFX file.
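If you prefer PowerShell over the MMC for the export, here is a sketch (the subject filter matches the WAC resource name used in this lab; the paths are illustrative):

# Find the issued certificate in the local machine store and export it with its private key
$Cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*WAC.SeromIT.local*" }
$PfxPassword = Read-Host -AsSecureString
Export-PfxCertificate -Cert $Cert -FilePath C:\temp\WAC.pfx -Password $PfxPassword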

Deploy Windows Admin Center

In a folder on a node of the cluster, you should have the following files: (WAC.pfx only if you have created your own certificate from the enterprise PKI)

Run the following cmdlets to deploy Windows Admin Center in the cluster:

$CertPassword = Read-Host -AsSecureString
.\Install-WindowsAdminCenterHA.ps1 -ClusterStorage c:\ClusterStorage\Data -ClientAccessPoint WACClusterResourceName -MSIPath c:\path\to\WAC\build.msi -CertPath c:\path\to\pfx\file.pfx -CertPassword $CertPassword -StaticAddress IPAddressForWAC

N.B: If you have no enterprise PKI, you can deploy the service by running the following cmdlet:

.\Install-WindowsAdminCenterHA.ps1 -ClusterStorage c:\ClusterStorage\Data -ClientAccessPoint WACClusterResourceName -MSIPath c:\path\to\WAC\build.msi -StaticAddress IPAddressForWAC -GenerateSSLCert

After some time, the service is deployed in the failover cluster and you now have Windows Admin Center in high availability.

If you specify the name of the WAC cluster resource in the browser as below, you can connect to Windows Admin Center.

Configure Kemp Load Balancer

First of all, I create a rule to redirect the traffic to the right service. Because this is a reverse proxy, a single IP address is used for several web services. In this configuration, I use the web service URL to redirect traffic to the right web server. To make it work, a rule like the following must be created.

Then I create a Sub Virtual Service in my reverse proxy virtual service. I name it Windows Admin Center and I specify the name of the WAC cluster resource.

Then I map the rule I previously created to the Windows Admin Center Sub Virtual Service:

To finish, verify that SSL Acceleration is activated with the right public certificate as below:

Then I connect to Windows Admin Center through the Kemp load balancer. As you can see, the certificate is validated without any warning and I can access WAC. Thanks to these settings, you can access WAC from the Internet.

The cluster resource could not be deleted since it is a core resource

Last month, I wanted to change the witness of a cluster from a Cloud Witness to a File Share Witness. The cluster is a 2-node S2D cluster and, as discussed with Microsoft, a Cloud Witness should not be used with a 2-node cluster. If the Cloud Witness fails (for example when the subscription has expired), the cluster can crash. I’ve experienced this in production. So, for all my customers, I decided to change the Cloud Witness to a File Share Witness.

Naively, I tried to change the Cloud Witness to a File Share Witness. But it doesn’t work 🙂 You’ll get this message: The cluster resource could not be deleted since it is a core resource. In this topic I’ll show you the issue and the resolution.

Issue

As you can see below, my cluster is using a Cloud Witness to add an additional vote.

So, I decide to replace the Cloud Witness with a File Share Witness. I right-click on the cluster | More Actions | Configure Cluster Quorum Settings.

Then I choose Select the quorum witness.

I select Configure a file share witness.

I specify the file share path as usual.

To finish, I click Next and it should replace the Cloud Witness with the File Share Witness.

Actually no, you should get the following error message.

If you check the witness state, it is offline.

This issue occurs because we can’t remove cluster core resources. So how do I remove the Cloud Witness?

Resolution

To remove the Cloud Witness, choose again to configure the cluster quorum and this time select Advanced Quorum Configuration.

Then select Do not configure a quorum witness.

Voilà, the Cloud Witness is gone. Now you can add the File Share Witness.

In the below screenshot you can see the configuration to add a file share witness to the cluster.

Now the file share witness is added to the cluster.
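The whole swap can also be scripted with the FailoverClusters module. A minimal sketch (the share path is illustrative):

# Remove the witness first (it is a core resource), then add the new one
Set-ClusterQuorum -NoWitness
Set-ClusterQuorum -FileShareWitness "\\DC01\Cluster-Witness$"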

Conclusion

If you want to remove a Cloud Witness or a File Share Witness, you first have to configure the quorum with no witness. Then you can add the witness type you want.

Dell R730XD bluescreen with S130 adapter and S2D

This week I worked for a customer who had an issue with his Storage Spaces Direct (S2D) cluster. When he restarted a node, Windows Server didn’t start and a bluescreen appeared. This is because the operating system disks were plugged into the S130 while the Storage Spaces Direct devices were connected to the HBA330mini. It is an unsupported configuration, especially with the Dell R730XD. In this topic, I’ll describe how I changed the configuration to make it supported.

This topic was co-written with my colleague Frederic Stefani (Dell – Solution Architect). Thanks for the HBA330 image 🙂

Symptom

You have several Dell R730XD servers added to a Windows Server 2016 failover cluster where S2D is enabled. Moreover, the operating system is installed on two storage devices connected to an S130 in software RAID. When you reboot a node, you get the following bluescreen.

How to resolve the issue

This issue occurs because running S2D with the operating system connected to the S130 is not supported. You have to connect the operating system to the HBA330mini. This means that the operating system will not be installed on a RAID 1. But an S2D node can be redeployed quickly if you have written proper PowerShell scripts.

To make the hardware change, you need a 50cm SFF-8643 cable to connect the operating system to the HBA330mini. Moreover, you have to reinstall the operating system (sorry about that). Then an HBA330mini firmware image must be applied, otherwise the enclosure will not be visible in the operating system.

Connect the operating system disk to HBA330mini

First, place the node into maintenance mode. In the cluster manager, right-click on the node and select Pause and Drain Roles.

Then stop the node. When the node is shut down, you can evict the node from the cluster and delete the Active Directory computer object related to the node (you have to reinstall the operating system).
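These steps can also be done with PowerShell. A sketch (the node name is illustrative):

# Drain the roles, then evict the node from the cluster once it is shut down
Suspend-ClusterNode -Name "Node01" -Drain
Remove-ClusterNode -Name "Node01"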

First we have to remove the cable whose connectors are circled in red in the picture below. This cable connects both operating system storage devices to the S130.

To connect the operating system device, you need an SFF-8643 cable as below.

So disconnect the SAS cable between operating system devices and S130.

Then we need to remove these fans to be able to plug the SFF-8643 into the backplane. To remove the fans, turn the blue items which are circled in red in the picture below.

Then connect the operating system device to the backplane with the SFF-8643 cable. Plug the cable into the SAS A1 port on the backplane. Also remove the left operating system device from the server (the top left one in the picture below). This is now your spare device.

Put the fans back in the server and turn the blue items.

Start the server and open the BIOS settings. Navigate to SATA Settings and set Embedded SATA to Off. Restart the server.

Start the BIOS settings again and open the device settings. Check that the S130 is no longer in the menu and select the HBA330mini device.

Check that another physical disk is connected to the HBA as below.

Reinstall the operating system and apply the HBA330mini image

Now that the operating system disk is connected to the HBA330mini, it’s time to reinstall the operating system. So use your favorite way to install Windows Server 2016. Once the OS is installed, open the virtual media from iDRAC and mount this image as the removable virtual media:

Next change the next boot to Virtual floppy.

On the next boot, the image is loaded and applied to the system. The server thinks that the HBA has been changed.

Add the node to the cluster

Now you can add the node to the cluster again. You should see the enclosure.

Conclusion

When you build an S2D solution based on the Dell R730XD, don’t connect your operating system disks to the S130: you will get a bluescreen on reboot. If you have already bought servers with the S130, you can follow this topic to resolve the issue. If you plan to deploy S2D on the R740XD, you can connect your operating system disks to BOSS and the S2D devices to the HBA330+.

Next gen Microsoft management tool: Honolulu

Since the beginning of the year, Microsoft has been working on a new management tool based on modern web languages such as HTML5, Angular and so on. This tool is called Honolulu. Honolulu is a user-friendly web interface that enables you to manage Windows Server, failover clusters and hyperconverged clusters. Currently, to manage a hyperconverged cluster, Honolulu requires a Semi-Annual Channel Windows Server release.

Honolulu is currently in public preview, which means that the product is under construction :). Honolulu is built in a modular way where you can add or remove extensions. Each management feature is included in an extension that you can add or remove. Microsoft expects vendors to develop third-party extensions later. To be honest with you, this is the set of tools I have been waiting for, for a while. Microsoft was lagging behind in management tools compared to other companies such as VMware. I hope that Honolulu will close the gap with VMware vCenter and Nutanix Prism.

Microsoft listens to customers and feedback to improve this product. So you can download the product here and report feedback in this user voice.

In this topic, we will see an overview of Honolulu. I’ll dedicate a topic to Honolulu and the Microsoft hyperconverged solution because Honolulu requires the Windows Server 2016 RS3 release (in the Semi-Annual Channel) to work with it, and I have not yet upgraded my lab.

Getting started with Honolulu

In the screenshot below, you can see the Honolulu home page. You get all your connections (and their type) and you can add more of them.

By clicking on the arrow next to Project Honolulu, you can filter the connection type on Server Manager, Failover Cluster Manager and Hyper-Converged Cluster Manager.

By clicking on the wheel (top right), you can access the extension manager and see the installed extensions. For example, you have extensions for firewall management, Hyper-V, failover clustering and so on. You can remove the extensions you don’t want.

Server Manager

As you have seen before, you can manage a single server from Honolulu. I will not show you all the management tools, just an overview of Honolulu. By adding and connecting to a server, you get the following dashboard. In this dashboard you can retrieve real-time metrics (CPU, memory and network) and information, and you can restart or shut down the system or edit RDP access and environment variables. For the moment you can’t resize columns and tables, and I think Microsoft will add this feature in the near future.

An interesting module is Events. In this pane, you get the same thing as the good old Event Viewer. You can retrieve all the events of your system and you can filter them. Maybe a checkbox enabling real-time events could be interesting :).

The Devices pane is also available. In a single view, you have all the hardware installed in the system. If Microsoft adds the ability to install drivers from there, Honolulu could replace DevCon for Core servers.

You can also browse the system files and manage files and folders.

Another pane enables you to manage the network adapters as you can see below. For the moment this pane is limited because it doesn’t allow you to manage advanced features such as RDMA, RSS, VMMQ and so on.

You can also add or remove roles and features from Honolulu. It is really cool that you can manage this from a web service.

If you use Hyper-V, you can manage VMs from Honolulu. The dashboard is really nice too because there are counters about VMs and the last events.

Another super cool feature is the ability to manage updates from Honolulu. I hope Microsoft will add WSUS configuration to this pane with some scheduling.

Failover Cluster management

Honolulu also enables you to manage failover clusters. You can add a failover cluster connection from the Honolulu home page. Just click on Add.

Then specify the cluster name. Honolulu asks if you also want to add the servers that are members of the cluster.

Once it is added, you can select it and you get this dashboard. You get the cluster core resource states and some information about the cluster, such as the number of roles, networks and disks.

By clicking on disks, you can get a list of Cluster Shared Volumes in the cluster and information about them.

If your cluster hosts Hyper-V VMs (not in a hyperconverged way), you can manage the VMs from there. You get the same pane as in the Honolulu server manager. The VMs and related metrics are shown and you can create or delete virtual machines. A limited set of options is currently available.

You can also see the vSwitches deployed on each node. It’s a pity that Switch Embedded Teaming is not yet supported, but I think support will be added later.

Hyperconverged cluster management

As I said earlier, hyperconverged clusters are supported but only on the Windows Server Semi-Annual Channel (for the moment). I’ll dedicate a topic to Honolulu and hyperconverged clusters once I upgrade my lab.

Update Honolulu

When a Honolulu update is released, you get notified by an Update Available notice. Currently, the update process is not really user-friendly because when you click on Update Available, an executable is downloaded and you have to run the Honolulu installation again (specify the installation path, certificate thumbprint etc.). I hope that in the future the update process will be a self-update.

When I downloaded the executable, I checked the package size and it is amazing: only 31MB.

Conclusion

Finally, they did it! A true modern management tool. I have been trying this tool for Microsoft for 3 months and I can tell you that the developers work really quickly and do a great job. Features are added quickly and Microsoft listens to customers. I recommend you post the features you want in the user voice. The tool is currently not perfect and some features are missing, but Honolulu is still a preview release! Microsoft is heading in the right direction with Honolulu and I hope this tool will be massively used. I also hope that Honolulu will help to install more Windows Server in Core edition, especially for Hyper-V and storage servers.

Cluster Health Service in Windows Server 2016

Before Windows Server 2016, the alerting and monitoring of a cluster were managed by monitoring tools such as System Center Operations Manager (SCOM). The monitoring tool used WMI, PowerShell scripts, performance counters or whatever else to get the health of the cluster. In Windows Server 2016, Microsoft has added the Health Service to the cluster, which provides metrics and fault information. Currently, the Health Service is enabled only when using Storage Spaces Direct and in no other scenario. When you enable Storage Spaces Direct in the cluster, the Health Service is enabled automatically as well.

The Health Service aggregates monitoring information (faults and metrics) from all nodes in the cluster. This information is available from a single point and can be consumed with PowerShell or through an API. The Health Service can raise alerts in real time regarding events in the cluster. These alerts contain the severity, the description, the recommended action and the location information related to the fault domain. The Health Service raises alerts for several faults, as you can see below:

The rollup monitors can help to find the root cause of a fault. For example, for the server monitor to be healthy, all underlying monitors must also be healthy. If an underlying monitor is not healthy, the parent monitor shows an alert.

In the example above, a drive is down in a node. So, the Health Service raises an alert for the drive and the parent node monitor is in an error state.

In the next version, the Health Service will be smarter. The cluster monitor will be “only” in a warning state because the cluster still has enough nodes to run the service and, after all, a single drive down is not a severe issue for the service. This feature should be called severity masking.

The Health Service is also able to gather metrics about the cluster, such as IOPS, capacity, CPU usage and so on.

Use Cluster Health Service

Show Metrics

To show the metrics gathered by the Health Service, run the cmdlet Get-StorageHealthReport as below:

Get-StorageSubSystem *Cluster* | Get-StorageHealthReport

As you can see, you have consolidated information such as the available memory, the IOPS, the capacity, the average CPU usage and so on. We can imagine a tool that gathers information from the API several times per minute to show charts or pies with this information.
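As a sketch of such a tool (the 30-second interval is arbitrary):

# Poll the Health Service and print a consolidated report every 30 seconds
while ($true) {
    Get-StorageSubSystem *Cluster* | Get-StorageHealthReport -Count 1
    Start-Sleep -Seconds 30
}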

Show Alerts

To get current alerts in the cluster, run the following cmdlet:

Get-StorageSubSystem *Cluster* | Debug-StorageSubSystem

For these screenshots, I ran the cmdlet against my lab Storage Spaces Direct cluster, which doesn’t follow best practices. The following alert is raised because I don’t have enough reserve capacity:

Then I stop a node in my cluster:

I have several issues in my cluster! The Health Service has detected that the node is down and that some cables are disconnected. This is because my Mellanox adapters are direct-attached to the other node.

SCOM Dashboard

This dashboard is not yet available at the time of writing, but in the future Microsoft should release the SCOM dashboard below, which leverages the Cluster Health Service.

Another example: DataOn Must

DataOn is a company that provides hardware which is compliant with Storage Spaces (Direct). DataOn has also released dashboards called DataOn Must which are based on the Health Service. DataOn Must is currently only available when you buy DataOn hardware. Thanks to the Health Service API, we can have fancy and readable charts and pies about the health of the Storage Spaces Direct cluster.

I would like to thank Cosmos Darwin for reviewing this topic and for giving me the opportunity to talk about severity masking.

Fault Domain Awareness with Storage Spaces Direct

Fault Domain Awareness is a new feature in Failover Clustering since Windows Server 2016. Fault Domain Awareness brings a new approach to high availability which is more flexible and Cloud oriented. In previous editions, high availability was based only on nodes: if a node failed, the resources were moved to another node. With Fault Domain Awareness, the point of failure can be a node (as previously), a chassis, a rack or a site. This enables greater flexibility and a modern approach to high availability. Datacenters which are Cloud oriented require this kind of flexibility to change the point of failure of the cluster from a single node to an entire rack which contains several nodes.

In the Microsoft definition, a fault domain is a set of hardware that shares the same point of failure. The default fault domain in a cluster is the node. You can also create fault domains based on chassis, rack and site. Moreover, a fault domain can belong to another fault domain. For example, you can create rack fault domains and configure them to specify that their parent is a site.

Storage Spaces Direct (S2D) can leverage Fault Domain Awareness to spread block replicas across fault domains (unfortunately it is not yet possible to spread block replicas across sites because Storage Spaces Direct doesn’t support stretched clusters with Storage Replica). Let’s think about a three-way mirroring implementation of S2D: this means that we have the data three times (the original and two replicas). S2D is able, for example, to create the original data on a first rack, while each replica is copied to another rack. In this way, even if you lose a rack, the storage keeps working.

In the S2D documentation, Microsoft no longer states the number of nodes required for each resilience type, but the number of fault domains:

  • 2-Way Mirroring: two fault domains
  • 3-way Mirroring: three fault domains
  • Erasure Coding: four fault domains or more.

These statements are really important for design considerations. If you plan to use Fault Domain Awareness with racks and you plan to use erasure coding, you need at least four racks. Each rack must have the same number of nodes. So, in the case of four racks, the number of nodes in the cluster can be 4, 8, 12 or 16. By using Fault Domain Awareness, you lose some deployment flexibility, but you increase the availability capabilities.

Configure Fault Domain Awareness

This section introduces how to configure fault domains in the cluster. It is strongly recommended to do this configuration before you enable Storage Spaces Direct!

By using PowerShell

In this example, I show you how to configure fault domains in a two-node cluster. It is not really useful to create fault domains for a two-node cluster, but I just want to show you how to create them in the cluster configuration.

Before running the cmdlets below, I initialized the $CIM variable by using the following command (Cluster-Hyv01 is the name of my cluster):

$CIM = New-CimSession -ComputerName Cluster-Hyv01

Then I gather fault domain information by using the Get-ClusterFaultDomain cmdlet:
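# Sketch: list the current fault domains, using the CIM session created above
Get-ClusterFaultDomain -CimSession $CIM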

As you can see above, a fault domain is automatically created for each node. To create additional fault domains, you can use the New-ClusterFaultDomain cmdlet as below.
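A sketch, assuming the site, rack and chassis names used in the rest of this topic:

# Create the site, rack and chassis fault domains
New-ClusterFaultDomain -Name Lyon -Type Site -CimSession $CIM
New-ClusterFaultDomain -Name Rack-22U -Type Rack -CimSession $CIM
New-ClusterFaultDomain -Name Chassis-Fabric -Type Chassis -CimSession $CIM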

If I run the Get-ClusterFaultDomain cmdlet again, you can see each fault domain.

Then I run the following cmdlet to set the Fault Domain parents:

Set-ClusterFaultDomain -Name Rack-22U -Parent Lyon
Set-ClusterFaultDomain -Name Chassis-Fabric -Parent Rack-22U
Set-ClusterFaultDomain -Name pyhyv01 -Parent Chassis-Fabric
Set-ClusterFaultDomain -Name pyhyv02 -Parent Chassis-Fabric

In the Failover Clustering manager, you can see the result by opening the node tab. As you can see below, each node belongs to Rack-22U and the site Lyon.

By using XML

You can also declare your physical infrastructure by using an XML file as below:

<Topology>
    <Site Name="Lyon" Location="Lyon 8e">
        <Rack Name="Rack-22U" Location="Restroom">
            <Node Name="pyhyv01" Location="Rack 6U" />
            <Node Name="pyhyv02" Location="Rack 12U" />
        </Rack>
    </Site>
</Topology>

Once your topology is written, you can configure your cluster with the XML File:

$xml = Get-Content <XML File> | Out-String
Set-ClusterFaultDomainXML -XML $xml

Conclusion

Fault Domain Awareness is a great feature to improve the availability of your infrastructure, especially with Storage Spaces Direct. The fault domain can be based on racks instead of nodes. This means that you can lose a higher number of nodes and keep the service running. On the other hand, you need to be careful during the design phase because an equivalent number of nodes must be installed in each rack. If you need erasure coding, you require at least four racks.
