Romain Serre – Tech-Coffee – www.tech-coffee.net

Don’t do it: enable performance history in an Azure Stack HCI mixed mode cluster
Lately I worked for a customer to add two nodes to an existing 2-node Storage Spaces Direct cluster. The existing nodes run Windows Server 2016 while the new ones run Windows Server 2019. So when I added the new nodes, the cluster entered mixed operating system mode, because two different versions of Windows Server were present in the cluster. For further information about this process, you can read this topic.

After the integration of the new nodes, I left the customer site while the data in the cluster was being replicated and rebalanced onto them. During this period, the customer ran this command:

Start-ClusterPerformanceHistory

This command starts performance history in a Storage Spaces Direct cluster. In a native Windows Server 2019 cluster, a cluster shared volume called ClusterPerformanceHistory is created to store the performance metrics. Because the cluster was in mixed operating system mode and not in native Windows Server 2019 mode, it resulted in unexpected behavior: several ClusterPerformanceHistory CSVs were created. Even when they were deleted, new ClusterPerformanceHistory volumes kept being created indefinitely.

The customer tried to run the following cmdlet without success:

Stop-ClusterPerformanceHistory -DeleteHistory

How to resolve the performance history issue

To solve this issue, the customer ran these cmdlets:

$StorageSubSystem = Get-StorageSubSystem Cluster*
$StorageSubSystem | Set-StorageHealthSetting -Name "System.PerformanceHistory.AutoProvision.Enabled" -Value "False"

The option System.PerformanceHistory.AutoProvision.Enabled is set to True when the cmdlet Start-ClusterPerformanceHistory is run. However, the cmdlet Stop-ClusterPerformanceHistory doesn’t disable this setting.
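
You can check the value of this setting before and after disabling it with Get-StorageHealthSetting. This is a minimal sketch based on the cmdlets above; once the setting returns False, the leftover performance history volumes still have to be removed, for example by running Stop-ClusterPerformanceHistory -DeleteHistory again:

# Check the current value of the auto-provision setting
Get-StorageSubSystem Cluster* | Get-StorageHealthSetting -Name "System.PerformanceHistory.AutoProvision.Enabled"

# Once it returns False, clean up the remaining ClusterPerformanceHistory volumes
Stop-ClusterPerformanceHistory -DeleteHistory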

Keep Dell Azure Stack HCI hardware up to date with WSSD Catalog

Firmware and driver management can be a pain during the lifecycle of an Azure Stack HCI solution. Some firmware versions are not supported, others must be installed to solve an issue. In the case of Dell hardware, a support matrix is available here. If you look at that matrix, you’ll see firmware and drivers for storage devices, Host Bus Adapters, switches, network adapters and so on. It’s nice to have that support matrix, but should I find and download each driver or firmware manually? Of course not.

For a few months now, Dell has provided a WSSD catalog that lets you download only the latest supported firmware and drivers for Azure Stack HCI and for your hardware. You can use this catalog from OpenManage (OME) or from Dell EMC Repository Manager. I prefer the second option because not all my customers have deployed OME. Dell EMC Repository Manager can be downloaded from this link.

Download the WSSD Catalog

The best way to download the WSSD Catalog is from this webpage. Download the file and unzip it. You should get two files: the catalog and its signature file.

Add the catalog to Dell EMC Repository Manager

Now that you have the WSSD catalog file, you can add it to Dell EMC Repository Manager. When you open it, just click on Add Repository.

Specify a repository name and click on Choose File under Base Catalog. Then select the WSSD catalog file.

Then you have to choose the Repository Type: either Manual or Integration. Integration is nice because you can specify an iDRAC name or IP address; then only the firmware and drivers matching that hardware are downloaded. You can also choose Manual for a new infrastructure, to prepare your deployment. In this example, I choose Manual and select the 740XD model and Windows Server 2019. When you have finished, click on Add.

Create a custom SUU

Once the repository is added, you should see firmware and drivers. Select them and click on Export.

Then select the SUU ISO tab and choose the location where the SUU file will be exported.

Once the export job is finished, you get an SUU image file to update your Azure Stack HCI servers. You just have to copy it to each server, mount the ISO and run suu.cmd -e. Or you can create a script to deploy firmware and drivers automatically, as in the sketch below.
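
Here is a minimal sketch of such a script. The node names and the ISO path are assumptions, and in production you would pause and drain each cluster node before updating it:

$Nodes = "Node01","Node02","Node03","Node04"
foreach ($Node in $Nodes) {
    # Copy the SUU ISO to the node
    Copy-Item -Path "C:\Temp\SUU.iso" -Destination "\\$Node\C$\Temp\SUU.iso"
    Invoke-Command -ComputerName $Node -ScriptBlock {
        # Mount the ISO, run SUU in unattended mode, then unmount
        $Image = Mount-DiskImage -ImagePath "C:\Temp\SUU.iso" -PassThru
        $Drive = ($Image | Get-Volume).DriveLetter
        Start-Process -FilePath "$($Drive):\suu.cmd" -ArgumentList "-e" -Wait
        Dismount-DiskImage -ImagePath "C:\Temp\SUU.iso"
    }
}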

Conclusion

The WSSD Catalog provided by Dell eases the management of firmware and drivers in an Azure Stack HCI solution. They have to be updated several times a year and, before, that was time consuming. Now it’s straightforward and you have no excuse not to update your platform.

Archive Rubrik backup in Microsoft Azure

The last time I talked about Rubrik, I presented the first steps to protect vSphere VMs and store backups within the appliance. Today, I would like to present how to archive Rubrik backups in Microsoft Azure.

Rubrik provides a turnkey interface, and it’s the same for archival. All you need is a blob storage account in Microsoft Azure. Then you configure Rubrik to talk to this storage account and to send part of the retention there. We’ll see in this topic how to do these steps.

Prepare Microsoft Azure resources

First, you need to create a blob storage account in Microsoft Azure. From the marketplace, select Storage account and create a StorageV2 account. I recommend choosing the Cool access tier.

Once the storage account is created, navigate to the blob storage and create a container. I called mine rubrik.
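
If you prefer PowerShell, the same resources can be created with the Az module. This is a minimal sketch; the resource group, account name and region are assumptions:

# Create the resource group, the storage account with the Cool access tier and the rubrik container
New-AzResourceGroup -Name "RG-Rubrik" -Location "westeurope"
$Account = New-AzStorageAccount -ResourceGroupName "RG-Rubrik" -Name "rubrikarchive01" -Location "westeurope" -SkuName Standard_LRS -Kind StorageV2 -AccessTier Cool
New-AzStorageContainer -Name "rubrik" -Context $Account.Context

# Retrieve the primary key that Rubrik will need later
(Get-AzStorageAccountKey -ResourceGroupName "RG-Rubrik" -Name "rubrikarchive01")[0].Value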

Configure the Rubrik Appliance

First of all, you need the OpenSSL toolkit. A private key is required and OpenSSL is used to generate it. Run the following command to create the private key:

openssl genrsa -out rubrik_encryption_key.pem 2048

Then open the Rubrik dashboard and click on the wheel located at the top right corner. Then select Archival Locations.

Select the Azure Archival Type and specify the storage account name. Then copy the primary key of the storage account and specify the container name. Finally, copy-paste the RSA key you have just generated with OpenSSL.

If Rubrik is connected successfully to the storage account, you should see a new archival location like the following screenshot:

A screenshot of a social media post

Description automatically generated

Archive Rubrik backup in Microsoft Azure

The archiving configuration is set in the SLA Domain. Click on the SLA Domain you want to configure.

At this moment, the Archival Policy is not configured. So, edit the properties of this SLA Domain.

In the edit SLA domain window, click on Remote Settings (lower right corner).

Now that the archival location is added, you can change the retention in Rubrik. By moving the slider from right to left, you split the retention between the Brik and Microsoft Azure. If you enable Instant Archive, every snapshot is also transferred to Azure as an archive.

Now that the Archival Policy is set, you get this information in the SLA Domain Policy window.

After some time, you should get some data in your archival location.

Now when you look at a snapshot, a cloud logo is added on top of the Brik icon: that means the snapshot is stored both on the Brik and in Microsoft Azure. If the snapshot is archived only, just the cloud logo appears, and you can restore data from this view exactly like data stored on the Brik.

If you look at the storage account in Microsoft Azure, you should now see additional data.

Getting started with Azure Update Management to handle Windows updates

For most companies, patch management is a challenge. Not all customers have SCCM, and WSUS is aging and not agile (you have to create several GPOs to handle different patch windows). This is why Azure Update Management is a welcome replacement for this tool. If you run only Azure Update Management in your Automation account, the solution is nearly free (as long as you don’t exceed 500 minutes of usage per month).

For most usages, Azure Update Management helps to improve your patch management. However, clusters are not handled for the moment (a shame for my S2D clusters). Some features are missing, such as running an update process “now”, and the compliance information is not reassessed immediately after an update. Despite these gaps, I use only Azure Update Management to handle Windows Update in my lab, and I try to convince my customers to use this product instead of WSUS. In this topic I’ll show you how to deploy and use Azure Update Management.

Azure resources creation

The following Azure resources are required to deploy Azure Update Management:

  • Log Analytics workspace
  • Azure Automation Account

So I create these resources from the Azure Marketplace.
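
If you prefer to script it, both resources can also be created with the Az module. This is a minimal sketch; the names, resource group and region are assumptions, and the workspace and Automation account must be in regions that support being linked together for Update Management:

New-AzResourceGroup -Name "RG-UpdateMgmt" -Location "westeurope"
New-AzOperationalInsightsWorkspace -ResourceGroupName "RG-UpdateMgmt" -Name "law-updatemgmt" -Location "westeurope" -Sku "PerGB2018"
New-AzAutomationAccount -ResourceGroupName "RG-UpdateMgmt" -Name "aa-updatemgmt" -Location "westeurope"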

Then, once you have created the Azure Automation account and the Log Analytics workspace, open the Azure Automation account blade and navigate to Update Management. Select the Log Analytics workspace and click on Enable.

Connect on-prem machines to Azure Update Management

Open the Log Analytics workspace blade. In the Overview pane, locate Connect a data source, then click on Windows, Linux and other sources.

Then download the Windows agent. Copy the workspace ID and the primary key: you need this information to complete the agent installation.

Once you have downloaded the agent binaries, run the installation. Check the box saying Connect the agent to Azure Log Analytics (OMS).

Next specify the workspace ID and key. Select Azure Commercial.

N.B: You can also install the agent by using a command line:

setup.exe /qn NOAPM=1 ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_AZURE_CLOUD_TYPE=0 OPINSIGHTS_WORKSPACE_ID=<your workspace ID> OPINSIGHTS_WORKSPACE_KEY=<your workspace key> AcceptEndUserLicenseAgreement=1

It can take a while before information shows up in Azure. Once the agent is detected in Azure Update Management, you should get a message saying that a machine does not have “Update Management” enabled. Click on the link beside it.

Choose the option you want and click on OK.

Once you have enabled Update Management for the machines, you should get information about the update state of your on-prem computers.

Create an update deployment

Now that the machines are reported in the Update Management portal, we can create an update deployment to install the updates. Click on Schedule update deployment. First provide a name for this update deployment. Then click on Machines and select the machines you want to update.

Then configure the schedule. For this rule I chose to run it only once. As you can also see in the screenshot below, you can specify a pre- and post-script.

Finally, specify the maintenance window and the reboot options as specified in the following screenshot.

Once the scheduled update is created, you can find it in the Scheduled update deployments tab.

Create a recurring update deployment

You can also create a recurring update deployment to install updates automatically each month. Create a new update deployment and, this time, choose Recurring in the schedule settings.

Several scheduled update deployments can be created as you can see in the following screenshot.

When an update deployment is running, you can see its progress in the Update Deployments tab.

Finally, when the update process is finished, you have to wait about 30 minutes to get the new assessment from the on-prem machines. After the updates are installed, all your machines should be compliant.

Getting started with Rubrik to backup VMware VMs

Rubrik is a new competitor on the backup market. Instead of providing only software, as most other products such as Veeam or Commvault do, Rubrik also provides the hardware: it is a backup appliance. It’s a turnkey solution where you don’t need to manage the underlying hardware and software. All you have to do is rack the appliance in the datacenter and use the service. Rubrik provides a modern HTML5 interface and the product has been designed for ease of management. In this topic, I will introduce Rubrik to back up VMware VMs.

Add vCenter Server

First of all, a vCenter must be added in order to protect VMs. Click on the wheel located at the top right corner and select vCenter Servers.

In the next view, click on the “+” button.

In the pop-up, specify the vCenter name or IP address. Then provide credentials with administrator privileges.

When the vCenter is added, the data is fetched and Rubrik makes an inventory of VMs, hosts and folders.

Once the inventory is finished, you can navigate to vSphere VMs to list them.

And what about physical servers and Hyper-V?

Rubrik is able to back up Hyper-V VMs, Nutanix AHV or physical servers. For Hyper-V VMs, you can add an SCVMM instance or Hyper-V hosts. Unfortunately, in the lab kindly provided by Rubrik, no Hyper-V hosts are available. If I have enough time, I’ll try to create a guest Hyper-V cluster.

When you want to back up physical servers, you have to install the Rubrik Backup Service on the servers, whether Windows or Linux.

You can protect all these kinds of physical machines.

Protect virtual machines

In the Rubrik world, VM protection is set via SLA Domains. An SLA Domain is a set of parameters where you define the retention policy and the backup frequency. A restore point is called a snapshot. To create an SLA Domain, navigate to Local Domains and click on the “+” button.

First, specify an SLA Domain name. Then specify when you want to take snapshots and the retention. Snapshots can be taken on an hourly, daily, monthly or yearly basis and the retention can be set for each time base. In the snapshot window section, you can configure the backup window and when the first full backup is taken. Rubrik takes a full backup the first time; then each snapshot is an incremental. There’s no synthetic backup. In a later topic, we will look at the advanced configuration, especially cloud archiving.

Once you have created your SLA Domain, you can apply it to VMs. Navigate to vSphere VMs (for VMware) and select the VMs you want to protect. You can also protect a folder or a cluster / node. Then select Manage Protection.

Then select the SLA Domain to apply protection.

Once the SLA domain is applied to the VM, you should see its name in the SLA Domain column.
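
Rubrik also publishes a PowerShell module, so the same assignment can be scripted. This is a minimal sketch assuming the Rubrik module from the PowerShell Gallery; the Brik address, VM name and SLA Domain name are examples:

Install-Module -Name Rubrik
Connect-Rubrik -Server "rubrik.mydomain.tld" -Credential (Get-Credential)

# Apply the "Gold" SLA Domain to one VM
Get-RubrikVM -Name "MyVM01" | Protect-RubrikVM -SLA "Gold"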

Manage the protection

By clicking on the SLA Domain, you can review its settings and usage. In the Storage part, you can see the capacity used on the Brik. In vSphere VMs, all VMs protected by this SLA Domain are listed.

If you click on a VM, you get an overview of the snapshots taken and how the VM is protected. The calendar on the right shows the restore points. By clicking on a restore point you can restore your data. We will see the restore process in a further topic.

In the same place, you can review activities to troubleshoot backup issues.

By selecting an activity, you can review its details and download logs.

Rubrik Dashboard

The Rubrik GUI has many dashboards. This one provides information about hardware, storage consumption and throughput.

The following dashboard provides information about SLA Domains, tasks and protection. You can also get information about local snapshot storage and the archive repository.

Create a Hub-and-Spoke topology with Azure Virtual Network Peering

Currently I’m working on the AZ-102 certification and I wanted to share with you a small lab I created to try Azure virtual networks and especially remote gateways. In a Hub-and-Spoke topology, each spoke virtual network communicates with the others through the hub virtual network. To implement this kind of solution, you need several virtual networks and peerings. I would like to implement the following solution:

All VMs must be able to communicate through NE01-VMProject1, which is the hub. Peerings will be established between NE01-NET and NE02-NET, and between NE01-NET and NE03-NET. To prepare this topic, I’ve already created the following resources:

  • Resource groups
  • Virtual machines
  • Virtual networks

As you can see below, the VM NE01VM1 is connected to NE01-NET virtual network with the IP 10.11.0.4.

The VM NE02VM1 is connected to NE02-NET virtual network with the IP 10.12.0.4.

Because no peering exists yet, the VMs cannot ping each other:

Create the peering

First, I edit Peerings from NE02-NET.

I call it NE02-NET-NE01-NET and I select the virtual network NE01-NET. For the moment, I leave the default configuration.

From the NE01-NET virtual network, I do the same thing to peer it to NE02-NET. I also leave the default configuration for the moment.

When both peerings are created, the peering status should change to Connected.
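
The same peerings can be created with the Az PowerShell module. This is a minimal sketch; I assume the virtual networks live in resource groups named after the projects:

$NE01 = Get-AzVirtualNetwork -Name "NE01-NET" -ResourceGroupName "NE01-VMProject1"
$NE02 = Get-AzVirtualNetwork -Name "NE02-NET" -ResourceGroupName "NE02-VMProject2"

# One peering in each direction
Add-AzVirtualNetworkPeering -Name "NE01-NET-NE02-NET" -VirtualNetwork $NE01 -RemoteVirtualNetworkId $NE02.Id
Add-AzVirtualNetworkPeering -Name "NE02-NET-NE01-NET" -VirtualNetwork $NE02 -RemoteVirtualNetworkId $NE01.Id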

Now, the VMs from NE01-VMProject1 and NE02-VMProject2 are able to communicate:

So, I create the peerings between NE03-VMProject3 and NE01-VMProject1, repeating the same steps as before. I create a peering from NE01-NET to connect to NE03-NET.

Then I create a peering from NE03-NET to connect to NE01-NET.

From this point, the VMs from NE03-VMProject3 are able to communicate with the NE01-VMProject1 VMs, and the VMs from NE02-VMProject2 can ping the VMs from NE01-VMProject1. However, the VMs from NE03-VMProject3 can’t communicate with NE02-VMProject2 because the gateway and routes are missing:

Create virtual gateway and route tables

First, create a virtual network gateway in your hub network (NE01-NET) with the following settings. The gateway takes the first available IP address in the gateway subnet (the .4 address, since Azure reserves the first ones). You need this information for later: in this example, the internal IP address of this virtual network gateway is 10.11.1.4.

Then in NE02-VMProject2 and NE03-VMProject3, create a route table resource with the following settings:

Now, navigate to the route table resource and click on Routes, then on Add.

Configure the routes as follows:

Route table       Route name     Address prefix   Next hop type       Next hop address
NE02-NET-ROUTE    To-NE03-NET    10.13.0.0/16     Virtual appliance   10.11.1.4
NE03-NET-ROUTE    To-NE02-NET    10.12.0.0/16     Virtual appliance   10.11.1.4

Now, click on Subnets and then on Associate.

Associate NE02-NET-ROUTE with the subnet of the NE02-NET virtual network and NE03-NET-ROUTE with the subnet of NE03-NET.
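
For reference, the equivalent configuration with the Az module looks like this. It is only a sketch for the NE02 side; the subnet name, address prefix, resource group and region are assumptions, and the same has to be repeated for NE03-NET-ROUTE:

$Route = New-AzRouteConfig -Name "To-NE03-NET" -AddressPrefix "10.13.0.0/16" -NextHopType VirtualAppliance -NextHopIpAddress "10.11.1.4"
$RouteTable = New-AzRouteTable -Name "NE02-NET-ROUTE" -ResourceGroupName "NE02-VMProject2" -Location "northeurope" -Route $Route

# Associate the route table with the subnet hosting the VMs
$VNet = Get-AzVirtualNetwork -Name "NE02-NET" -ResourceGroupName "NE02-VMProject2"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $VNet -Name "default" -AddressPrefix "10.12.0.0/24" -RouteTable $RouteTable
$VNet | Set-AzVirtualNetwork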

Configure hub peers

Now we need to allow gateway transit on each hub peering. Open each peering configuration in NE01-NET and enable Allow gateway transit as shown below.

Configure spoke peers

On each spoke peering (NE02-NET and NE03-NET), enable the Use remote gateways option.

Wait a few minutes and then all VMs should be able to communicate.

Design the network for a Storage Spaces Direct cluster

In a Storage Spaces Direct cluster, the network is the most important part. If the network is not well designed or implemented, you can expect poor performance and high latency. All software-defined storage solutions rely on a healthy network, whether it is Nutanix, VMware vSAN or Microsoft S2D. When I audit S2D configurations, most of the time the issue comes from the network. This is why I wrote this topic: how to design the network for a Storage Spaces Direct cluster.

Network requirements

The following statements come from the Microsoft documentation:

Minimum (for small scale 2-3 node)

  • 10 Gbps network interface
  • Direct-connect (switchless) is supported with 2-nodes

Recommended (for high performance, at scale, or deployments of 4+ nodes)

  • NICs that are remote-direct memory access (RDMA) capable, iWARP (recommended) or RoCE
  • Two or more NICs for redundancy and performance
  • 25 Gbps network interface or higher

As you can see, for a 4-node S2D cluster or more, Microsoft recommends a 25 Gbps network. I think it is a good recommendation, especially for an all-flash configuration or when NVMe devices are implemented. Because S2D uses SMB for communication between nodes, RDMA can be leveraged (SMB Direct).

RDMA: iWARP and RoCE

Do you remember DMA (Direct Memory Access)? This feature allows a device attached to a computer (like an SSD) to access memory without going through the CPU. Thanks to this feature, we achieve better performance and reduce CPU usage. RDMA (Remote Direct Memory Access) is the same thing but across the network: it allows a remote device to access local memory directly. Thanks to RDMA, CPU usage and latency are reduced while throughput is increased. RDMA is not mandatory for S2D but it is recommended. Last year Microsoft stated that RDMA increases S2D performance by about 15% on average. So I strongly recommend implementing it if you deploy an S2D cluster.

Two RDMA implementations are supported by Microsoft: iWARP (Internet Wide-Area RDMA Protocol) and RoCE (RDMA over Converged Ethernet). And I can tell you one thing about these implementations: this is war! Microsoft recommends iWARP while a lot of consultants prefer RoCE. In fact, Microsoft recommends iWARP because it requires less configuration than RoCE, and RoCE misconfigurations generated a high number of Microsoft support cases. But consultants prefer RoCE because Mellanox is behind this implementation. Mellanox provides valuable switches and network adapters with great firmware and drivers. Each time a new Windows Server build is released, a supported Mellanox driver / firmware is also released.

If you want more information about RoCE and iWARP, I suggest this series of topics from Didier Van Hoye.

Switch Embedded Teaming

Before choosing the right switches, cables and network adapters, it’s important to understand the software story. In Windows Server 2012 R2 and earlier, you had to create an LBFO team. When the team was created, a tNIC was created: a sort of virtual NIC bound to the team. Then you could create the virtual switch connected to the tNIC. After that, the virtual NICs for management, storage, VMs and so on were added.

In addition to the complexity, this solution prevents the use of RDMA on virtual network adapters (vNICs). This is why Microsoft improved this part in Windows Server 2016. Now you can implement Switch Embedded Teaming (SET):

This solution reduces the network complexity and vNICs can support RDMA. However, there are some limitations with SET:

  • Each physical network adapter (pNIC) must be the same (same firmware, same drivers, same model)
  • Maximum of 8 pNICs in a SET
  • The following load-balancing modes are supported: Hyper-V Port (specific cases) and Dynamic. This limitation is a good thing because Dynamic is the appropriate choice in most cases.

For more information about load-balancing modes, Switch Embedded Teaming and its limitations, you can read this documentation. Switch Embedded Teaming brings another great advantage: you can create an affinity between a vNIC and a pNIC. Let’s take a SET where two pNICs are members of the team. On this vSwitch, you create two vNICs for storage. You can map the first vNIC to the first pNIC and the second vNIC to the second pNIC. This ensures that both pNICs are used.

The designs presented below are based on Switch Embedded Teaming.

Network design: VM traffic and storage separated

Some customers want to separate the VM traffic from the storage traffic. The first reason is that they want to connect VMs to a 1 Gbps network; because the storage network requires 10 Gbps, the two must be separated. The second reason is that they want to dedicate devices, such as switches, to storage. The following schema introduces this kind of design:

If you have 1 Gbps network ports for VMs, you can connect them to 1 Gbps switches while the network adapters for storage are connected to 10 Gbps switches.

Whatever you choose, the VMs are connected to the Switch Embedded Teaming (SET) and you have to create a vNIC for management on top of it. So, when you connect to the nodes through RDP, you go through the SET. The physical NICs (pNICs) dedicated to storage (those on the right of the schema) are not in a team. Instead, we leverage SMB Multichannel, which allows multiple network connections to be used simultaneously. So, both network adapters will be used to establish SMB sessions.

Thanks to Simplified SMB MultiChannel, both pNICs can belong to the same network subnet and VLAN. Live-Migration is configured to use this network subnet and to leverage SMB.

Network Design: Converged topology

The following picture introduces my favorite design: a fully converged network. For this kind of topology, I recommend a 25 Gbps network at least, especially with NVMe or all-flash. In this case, only one SET is created with two or more pNICs. Then we create the following vNICs:

  • 1x vNIC for host management (RDP, AD and so on)
  • 2x vNIC for Storage (SMB, S2D and Live-Migration)

The storage vNICs can belong to the same network subnet and VLAN thanks to Simplified SMB Multichannel. Live Migration is configured to use this network and the SMB protocol. RDMA is enabled on these vNICs, as well as on the pNICs if they support it. Then an affinity is created between the vNICs and the pNICs, as in the sketch below.
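
This is roughly what such a converged configuration looks like in PowerShell. It is a minimal sketch: the adapter names, vNIC names and the fact that the pNICs support RDMA are assumptions:

# Create the SET vSwitch on top of the two physical adapters
New-VMSwitch -Name "SETswitch" -NetAdapterName "pNIC1","pNIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Host vNICs: one for management, two for storage
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "SMB01"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "SMB02"

# Enable RDMA on the storage vNICs and map each one to a pNIC
Enable-NetAdapterRDMA -Name "vEthernet (SMB01)","vEthernet (SMB02)"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB01" -PhysicalNetAdapterName "pNIC1"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB02" -PhysicalNetAdapterName "pNIC2"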

I love this design because it is really simple. You have one network adapter for the BMC (iDRAC, iLO, etc.) and only two network adapters for S2D and the VMs. So, the physical installation in the datacenter and the software configuration are easy.

Network Design: 2-node S2D cluster

Because we can direct-attach both nodes in a 2-node configuration, you don’t need switches for storage. However, the virtual machines and the host management vNIC still require connectivity, so switches are required for these usages. But they can be 1 Gbps switches, which drastically reduces the solution cost.

S2D Real case: detect a lack of cache

Last week I worked for a customer who was facing a performance issue on an S2D cluster. The customer’s infrastructure is composed of one compute cluster (Hyper-V) and one 4-node S2D cluster. First, I checked whether the issue was related to the network and then whether a hardware failure was producing this performance drop. Then I ran the watch-cluster.ps1 script from VMFleet.

The following screenshot comes from the watch-cluster.ps1 script. As you can see, one CSV has almost 25 ms of latency. High latency impacts overall performance, especially when IO-intensive applications are hosted. If we look at the cache, a lot of misses per second are registered, especially on the high-latency CSV. But why does a high miss rate produce high latency?
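
If you don’t want to deploy VMFleet just to check this, the cache behavior can also be observed with the S2D performance counters on a node. This is a sketch only; the counter set name comes from the hybrid disks counters and may differ depending on the build:

Get-Counter -Counter "\Cluster Storage Hybrid Disks(*)\Cache Hit Reads/sec","\Cluster Storage Hybrid Disks(*)\Cache Miss Reads/sec" -SampleInterval 2 -MaxSamples 10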

What happens in case of lack of cache?

The solution I troubleshooted is composed of 2 SSDs and 8 HDDs per node. The cache ratio is 1:4 and the cache capacity is almost 6.5% of the raw capacity. The IO path in normal operation is depicted in the following schema:

Now, in the current situation, I have a lot of misses per second, which means the SSDs cannot absorb these IOs because there is not enough cache. The schema below depicts the IO path for a cache miss:

You can see that in case of a miss, the IO goes to the HDDs directly without being cached on the SSDs. HDDs are really slow compared to SSDs, and each time IOs hit this kind of storage device directly, the latency increases. When the latency increases, the overall performance decreases.

How to resolve that?

To resolve this issue, I told the customer to add two SSDs in each node. These SSDs should be equivalent (or nearly so) to those already installed in the nodes. By adding SSDs, I improve the cache ratio to 1:2 and the cache capacity to 10% of the raw capacity.

It’s really important to size the cache tier generously when you design your solution, to avoid this issue. As a fellow MVP says: storage is cheap, downtime is expensive.

Getting started with Azure File Sync

Azure File Sync is a Microsoft feature released in July 2018. It enables you to synchronize multiple on-premises file servers with Azure. In other words, we can replace DFS-R for branch offices. Azure File Sync also brings a cloud tiering feature that caches the most used files (based on access date) on the on-prem servers and keeps the others in Azure. The data can be protected with Azure Backup, which avoids managing backups on each on-prem file server, and in case of disaster the data remains in Azure. In this topic, I’ll show you how to implement Azure File Sync.

Requirements

To follow this topic, you need:

  • An on-prem file server (physical or virtual) running Windows Server 2012 R2, 2016 or 2019.
  • An Azure account

Azure side configuration

First, create a storage account. I don’t need high performance, so I choose a standard performance account with the Cool access tier. Regarding the replication, choose according to the SLA you require.

Once the storage account is created, open its properties and create a file share. I called mine branch1.
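
The same storage account and file share can also be created with the Az PowerShell module. A minimal sketch; the resource group, account name and region are assumptions:

New-AzResourceGroup -Name "RG-AFS" -Location "westeurope"
$Account = New-AzStorageAccount -ResourceGroupName "RG-AFS" -Name "afsbranch01" -Location "westeurope" -SkuName Standard_LRS -Kind StorageV2 -AccessTier Cool
New-AzStorageShare -Name "branch1" -Context $Account.Context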

Open the Azure marketplace and look for Azure File Sync.

Create the resource in the same location as the storage account. Usually I put Azure File Sync in the same resource group as the storage account.

Once Azure File Sync is created, you can browse Registered servers and click on the Azure File Sync agent link.

Download the agent for your Windows Server version; I took the Windows Server 2019 version because my on-prem server runs Windows Server 2019. Then copy the file to the on-prem server.

Implement the agent on the On-Prem server

Connect to the On-Prem server and run the following cmdlet to install the AzureRM PowerShell module.
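
Presumably this is the standard module installation from the PowerShell Gallery, something like:

Install-Module -Name AzureRM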

Then run the Azure File Sync agent setup.

Once the agent is installed, the following window appears. Specify your tenant ID and click on Sign in. Another pop-up asks for your credentials.

Next choose the Azure Subscription, the resource group and the Storage Sync Service.

Once you are registered, your server should appear in Azure File Sync (Registered servers tab). My server is running Windows Server 2019, but the operating system reported in Azure File Sync is Windows Server 2016 :).

To finish, I create a folder on P:\ called AFS. This folder will be synchronized with Azure File Sync. I copy some files into this folder.

Manage Azure File Sync

Now that Azure File Sync is deployed, the agent is ready and files are present on the on-prem server, we can sync data between on-prem and Azure. To create the synchronization job, navigate to Sync groups in Azure File Sync.

Provide a name for this Sync Group and select the storage account and the Azure File Share that you created at the beginning.

Now that the cloud endpoint is created, we can add servers to the sync group. So, click on Add server endpoint.

Select the on-prem server, the path to synchronize (P:\AFS) and enable cloud tiering if you wish.

Once the synchronization has run, you should see the files in the storage account.

Conclusion

In large companies with branch offices, DFS-R is often implemented to replicate branch office data to the main datacenter (one way). Now Microsoft provides a new solution to replace DFS-R: Azure File Sync. Thanks to cloud tiering, your on-prem file servers don’t require plenty of storage. Data can be accessed from everywhere because it is stored in Azure. It’s a nice hybrid cloud scenario.

Register Windows Admin Center in Microsoft Azure

With Windows Server 2019 and Windows Admin Center, we are able to build a hybrid cloud in an easy way. First, Windows Admin Center provides a GUI to configure features such as Azure Backup, Azure Site Recovery or Azure File Sync. With Windows Server 2019, we can interconnect an on-prem host to an Azure virtual network thanks to the Azure Virtual Network Adapter. Finally, Storage Migration Service enables you to migrate a file server to an Azure file service such as Azure File Sync. But to leverage all these features from Windows Admin Center, it must be registered in Microsoft Azure. In this topic, I’ll show you step by step how to register Windows Admin Center in Microsoft Azure.

Requirements

To be able to follow this topic, you need the following:

  • An Azure subscription
  • A running Windows Admin Center (1809 at least).

Register Windows Admin Center in Microsoft Azure

From a web browser (Edge or Chrome), open Windows Admin Center and click on the wheel at the top right corner. Then click on Azure and Register.

Then copy the code, click on Device Login and paste the code you just copied. A Microsoft login pop-up should appear: enter your Azure credentials.

If you have several tenants, choose the right one. You can find the tenant ID in the Azure portal by clicking on Switch Directory. If you have already registered a Windows Admin Center before, you can reuse the Azure AD app by selecting that option.

Now you are asked to grant permissions to the Azure AD app. Open the Azure portal in the browser of your choice.

Then navigate to App Registrations and select your Windows Admin Center App. Edit its settings and click on Required permissions. Finally click on Grant Permissions.

If Windows Admin Center is registered correctly, you should see the following information.

Now you can enjoy Azure Hybrid features such as Azure Backup from Windows Admin Center.

If you wish, you can also use Azure Active Directory to authenticate users and administrators on Windows Admin Center.

Conclusion

Windows Server 2019 and Windows Admin Center promise to simplify hybrid scenarios. Thanks to Windows Admin Center, we are able to configure on-prem hosts for Azure Site Recovery and Azure Backup. The “hybrid” extensions of Windows Admin Center are still in preview; just by upgrading the extensions, we’ll get more features. This is why Windows Admin Center is a good product (and it’s free!)
