Deploy a Software-Defined Storage solution with StarWind Virtual SAN

StarWind Virtual SAN is a Software-Defined Storage solution that replicates data across several nodes to ensure availability. The data are mirrored between two or more nodes. The hypervisor can be installed on the StarWind Virtual SAN nodes (hyperconverged) or on separate hosts (converged). StarWind Virtual SAN is easy to use and provides high performance. Moreover, StarWind provides proactive support. In this topic, I'll show you how to deploy a 3-node StarWind VSAN to use with Hyper-V or ESXi.

Lab overview

To write this topic, I have deployed three VMware VMs running Windows Server 2016. Each VM has the following configuration:

  • 2 vCPU
  • 8GB of memory
  • 1x VMXNET3 NIC in management network (for Active Directory, RDP, VM management)
  • 1x VMXNET3 NIC in cluster network (synchronization and heartbeat)
  • 1x VMXNET3 NIC in Storage network (iSCSI with hypervisor)
  • 1x 100GB Data disk

If you plan to deploy StarWind VSAN in production, you need physical servers with enough storage and enough network adapters.

StarWind Virtual SAN installation

First, download StarWind VSAN from the StarWind website. Once you have downloaded the installer, execute it on each StarWind VSAN node. Then accept the license agreement.

In the next screen, click on Next.

Specify a folder location where StarWind Virtual SAN will be installed.

Select StarWind Virtual SAN Server in the drop down menu.

Specify the start menu folder and click on Next.

If you want a desktop icon, enable the checkbox.

If you already have a license key, select Thank you, I do have a key already and click on Next.

Specify the location of the license file and click on Next.

Review the license information and click on Next.

If the Microsoft iSCSI Initiator service is disabled and not started, you'll get this pop-up. Click on OK to enable and start the service.

Once you have installed StarWind Virtual SAN on each node, you can move on to the next step.

Create an iSCSI target and a storage device

Open StarWind Management Console and click on Add Server.

Then add each node and click on OK. In the screenshot below, I clicked on Scan StarWind Servers to discover the nodes automatically.

When you connect to each node, you get this warning. Choose the default location of the storage pool (storage devices).

Right click on the first node and select Add Target.

Specify a target alias and be sure to allow multiple concurrent iSCSI connections.

Once the target has been created, you get the following screen:

Now, right click on the target and select Add new Device to Target.

Select Hard Disk Device and click on Next.

Choose the option which applies to your configuration. In my case, I chose a virtual disk.

Specify a name and a size for the virtual disk.

Choose thick-provisioned or Log-Structured File System (LSFS). LSFS is designed for virtual machine workloads because this file system mitigates the I/O blender effect. With LSFS you can also enable deduplication. Also choose the appropriate block cache size.

In the next screen, you can choose where the metadata is held and how many worker threads you want.

Choose the device RAM cache parameters.

You can also specify a flash cache capacity if you have installed SSDs in your nodes.

Then click on Create to create the storage device.

Once the storage device is created, you get the following screen:

At this point, you have a virtual disk on the first node. This virtual disk can store your data, but it has no resiliency yet. In the next steps, we will replicate this storage device to the two other nodes.

Replicate the storage device in other StarWind VSAN nodes

Right click on the storage device and select Replication Manager.

In the replication manager, select Add Replica.

Select Synchronous Two-Way Replication to replicate data across StarWind Virtual SAN nodes.

Specify the hostname and the port of the partner and click on Next.

Then select the failover strategy: Heartbeat or Node Majority. In my case, I chose Node Majority. This mode requires that the majority of nodes be online; in a three-node configuration, you can tolerate the loss of only one node.

Then choose to create a new partner device.

Specify the target name and the location of the storage device on the partner node.

Select the network for synchronization. In my case, I select the cluster network.

Then choose to synchronize from the existing device.

To start the creation of the replication, click on Create Replica.

Repeat the previous steps for the third node. At the end, the configuration should be similar to the following screenshot:

In StarWind Management Console, if you click on a target, you can see each iSCSI session: each node has two iSCSI sessions because there are three nodes.

iSCSI connection

Now that StarWind Virtual SAN is ready, you can connect your favorite hypervisor to it by using iSCSI. Don't forget to configure MPIO to support multipath. For ESXi, you can read this topic.
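
This post configures the hypervisor side from the GUI. As a rough sketch, the equivalent PowerShell on a Hyper-V host could look like the lines below; the target portal addresses (10.10.2.x) are placeholders for your iSCSI storage network, not values taken from this lab.

# Install MPIO on the Hyper-V host (a reboot may be required)
Install-WindowsFeature Multipath-IO -IncludeManagementTools

# Claim iSCSI devices with MPIO and start the iSCSI initiator service
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI

# Declare one target portal per StarWind VSAN node (placeholder addresses)
New-IscsiTargetPortal -TargetPortalAddress 10.10.2.11
New-IscsiTargetPortal -TargetPortalAddress 10.10.2.12
New-IscsiTargetPortal -TargetPortalAddress 10.10.2.13

# Connect every discovered target with multipath enabled
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true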

Deploy a SMB storage solution for Hyper-V with StarWind VSAN free

StarWind VSAN free provides a free Software-Defined Storage (SDS) solution for two nodes. With this solution, you are able to deliver highly available storage based on Direct-Attached Storage devices. On top of StarWind VSAN free, you can deploy a Microsoft failover cluster with Scale-Out File Server (SOFS). So you can deploy a converged SDS solution with Windows Server 2016 Standard Edition and StarWind VSAN free. It is an affordable solution for your Hyper-V VM storage.

In this topic, we'll see how to deploy StarWind VSAN free on two nodes running Windows Server 2016 Standard Core edition. Then we'll deploy a failover cluster with SOFS to deliver storage to Hyper-V nodes.

Architecture overview

This solution should be deployed on physical servers with physical disks (NVMe, SSD or HDD etc.). For the demonstration, I have used two virtual machines. Each virtual machine has:

  • 4 vCPU
  • 4GB of memory
  • 1x OS disk (60GB dynamic) – Windows Server 2016 Standard Core edition
  • 1x Data disk (127GB dynamic)
  • 3x vNIC (1x Management / iSCSI, 1x Heartbeat, 1x Synchronization)

Both nodes are deployed and joined to the domain.

Node preparation

On both nodes, I run the following cmdlets to install the features and prepare a volume for StarWind:

# Install FS-FileServer, Failover Clustering and MPIO
install-WindowsFeature FS-FileServer, Failover-Clustering, MPIO -IncludeManagementTools -Restart

# Set the iSCSI service startup to automatic
get-service MSiSCSI | Set-Service -StartupType Automatic

# Start the iSCSI service
Start-Service MSiSCSI

# Create a volume with disk
New-Volume -DiskNumber 1 -FriendlyName Data -FileSystem NTFS -DriveLetter E

# Enable automatic claiming of iSCSI devices
Enable-MSDSMAutomaticClaim -BusType iSCSI

StarWind installation

Because I have installed nodes in Core edition, I install and configure components from PowerShell and command line. You can download StarWind VSAN free from this link. To install StarWind from command line, you can use the following parameters:

Starwind-v8.exe /SILENT /COMPONENTS="comma separated list of component names" /LICENSEKEY="path to license file"

Current list of components:

  • Service: the StarWind iSCSI SAN server.
  • service\haprocdriver: HA Processor Driver, used to support devices that were created with older versions of the software.
  • service\starflb: Loopback Accelerator, used with Windows Server 2012 and later to accelerate iSCSI operations when the client resides on the same machine as the server.
  • service\starportdriver: StarPort driver, required for the operation of Mirror devices.
  • Gui: the Management Console.
  • StarWindXDll: the StarWindX COM object.
  • StarWindXDll\powerShellEx: the StarWindX PowerShell module.

To install StarWind, I have run the following command:

C:\temp\Starwind-v8.exe /SILENT /COMPONENTS="Service,service\starflb,service\starportdriver,StarWindXDll,StarWindXDll\powerShellEx" /LICENSEKEY="C:\temp\StarWind_Virtual_SAN_Free_License_Key.swk"

I run this command on both nodes. After this command is run, StarWind is installed and ready to be configured.

StarWind configuration

StarWind VSAN free provides a 30-day trial of the management console. After 30 days, you have to manage the solution from PowerShell, so I decided to configure the solution from PowerShell from the start:

Import-Module StarWindX

try
{
    $server = New-SWServer -host 10.10.0.54 -port 3261 -user root -password starwind

    $server.Connect()

    $firstNode = new-Object Node

    $firstNode.ImagePath = "My computer\E"
    $firstNode.ImageName = "VMSTO01"
    $firstNode.Size = 65535
    $firstNode.CreateImage = $true
    $firstNode.TargetAlias = "vmsan01"
    $firstNode.AutoSynch = $true
    $firstNode.SyncInterface = "#p2=10.10.100.55:3260"
    $firstNode.HBInterface = "#p2=10.10.100.55:3260"
    $firstNode.CacheSize = 64
    $firstNode.CacheMode = "wb"
    $firstNode.PoolName = "pool1"
    $firstNode.SyncSessionCount = 1
    $firstNode.ALUAOptimized = $true
    
    #
    # Device sector size. Possible values: 512 or 4096 bytes (4096 may be incompatible with some clients!).
    #
    $firstNode.SectorSize = 512

    #
    # 'SerialID' should be between 16 and 31 symbols. If it is not specified, the StarWind service will generate it.
    # Note: the second node always gets the same serial ID; you do not need to specify it for the second node.
    #
    $firstNode.SerialID = "050176c0b535403ba3ce02102e33eab"
    
    $secondNode = new-Object Node

    $secondNode.HostName = "10.10.0.55"
    $secondNode.HostPort = "3261"
    $secondNode.Login = "root"
    $secondNode.Password = "starwind"
    $secondNode.ImagePath = "My computer\E"
    $secondNode.ImageName = "VMSTO01"
    $secondNode.Size = 65535
    $secondNode.CreateImage = $true
    $secondNode.TargetAlias = "vmsan02"
    $secondNode.AutoSynch = $true
    $secondNode.SyncInterface = "#p1=10.10.100.54:3260"
    $secondNode.HBInterface = "#p1=10.10.100.54:3260"
    $secondNode.ALUAOptimized = $true
        
    $device = Add-HADevice -server $server -firstNode $firstNode -secondNode $secondNode -initMethod "Clear"
    
    $syncState = $device.GetPropertyValue("ha_synch_status")

    while ($syncState -ne "1")
    {
        #
        # Refresh device info
        #
        $device.Refresh()

        $syncState = $device.GetPropertyValue("ha_synch_status")
        $syncPercent = $device.GetPropertyValue("ha_synch_percent")

        Start-Sleep -m 2000

        Write-Host "Synchronizing: $($syncPercent)%" -foreground yellow
    }
}
catch
{
    Write-Host "Exception $($_.Exception.Message)" -foreground red 
}

$server.Disconnect()

Once this script is run, two HA images are created and they are synchronized. Now we have to connect to this device through iSCSI.

iSCSI connection

To connect to the StarWind devices, I use iSCSI. I chose to configure iSCSI from PowerShell to automate the deployment. On the first node, I run the following cmdlets:

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.55 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.54
Get-IscsiTarget | Connect-IscsiTarget -isMultipathEnabled $True

On the second node, I run the following cmdlets:

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.54 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.55
Get-IscsiTarget | Connect-IscsiTarget -isMultiPathEnabled $True

You can run the command iscsicpl on a Server Core installation to show the iSCSI GUI. You should have something like this:

PS: If you have a 1Gb/s network, set the load balance policy to Fail Over Only and leave the 127.0.0.1 path active. If you have a 10Gb/s network, choose the Round Robin policy.
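
If you prefer to set the policy from PowerShell, the MPIO module exposes a global default load balance policy. This is only a sketch: FOO stands for Fail Over Only and RR for Round Robin, and the per-path choice (keeping the 127.0.0.1 path active) still has to be done in iscsicpl.

# 1Gb/s network: fail over only
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO

# 10Gb/s network: round robin across all paths
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR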

Configure Failover Clustering

Now that a shared volume is available to both nodes, you can validate the future cluster:

Test-Cluster -node VMSAN01, VMSAN02

Review the report and if all is ok, you can create the cluster:

New-Cluster -Node VMSAN01, VMSAN02 -Name Cluster-STO01 -StaticAddress 10.10.0.65 -NoStorage

Navigate to Active Directory Users and Computers (dsa.msc) and locate the OU where the Cluster Name Object is located. Edit the permissions on this OU to allow the Cluster Name Object to create computer objects:
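
If you prefer the command line, dsacls can grant the right to create computer objects to the CNO. The OU distinguished name and the domain prefix below are hypothetical; replace them with your own values.

# Allow the Cluster Name Object (CNO) to create computer objects in the OU
dsacls "OU=Clusters,DC=int,DC=homecloud,DC=net" /G "INTHOMECLOUD\Cluster-STO01$:CC;computer"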

Now we can create the Scale-Out File Server role:

Add-ClusterScaleOutFileServerRole -Name VMStorage01

Then we initialize the StarWind disk and convert it to a Cluster Shared Volume (CSV). Finally, we create an SMB share:

# Initialize the disk
Get-Disk | Where-Object OperationalStatus -like Offline | Initialize-Disk

# Create a CSVFS_NTFS volume on the StarWind disk
New-Volume -DiskNumber 3 -FriendlyName VMSto01 -FileSystem CSVFS_NTFS

# Rename the link in C:\ClusterStorage
Rename-Item C:\ClusterStorage\Volume1 VMSTO01

# Create a folder
New-Item -Type Directory -Path C:\ClusterStorage\VMSto01 -Name VMs

# Create a share
New-SmbShare -Name 'VMs' -Path C:\ClusterStorage\VMSto01\VMs -FullAccess everyone

The cluster looks like this:

Now, from Hyper-V, I am able to store VMs on this cluster as shown below:

Conclusion

StarWind VSAN free and Windows Server 2016 Standard Edition provide an affordable SDS solution. Thanks to this combination, you can deploy a 2-node storage cluster which provides SMB 3.1.1 shares, and Hyper-V can use these shares to host virtual machines.

Storage Quality of Service in Windows Server 2016

To manage the storage performance priority of virtual machines, Microsoft introduced Storage Quality of Service (Storage QoS) in Windows Server 2012 R2. This feature enables you to set a maximum number of input/output operations per second (IOPS) on a virtual hard disk. In a multi-tenant environment, you can set the maximum IOPS according to a service level, or to prevent one tenant's VM from degrading the storage performance of other tenants' VMs. This feature can also notify an administrator when the minimum IOPS defined on a virtual hard disk is not met.

The Storage QoS of Windows Server 2012 R2 is great when you have a standalone Hyper-V host. In a cluster, it is another story. In a cluster configuration, nodes usually share a storage solution, so several nodes can store virtual hard disks on a single storage solution. With Storage QoS you can set the maximum IOPS (or the minimum IOPS) of the virtual hard disks (VHD) on each node. But because Storage QoS is not centralized, the total maximum IOPS of all the VHDs hosted on a single storage solution can reach and exceed the maximum performance of that storage.

So with Windows Server 2016, Microsoft has enhanced Storage QoS. In Windows Server 2016, we are able to centrally manage Storage QoS policies for a group of virtual machines, and these policies are applied at the cluster level since they are stored in the cluster database. Because a policy can be applied to a group of VMs, you can easily create service levels at the cluster scale. Moreover, Microsoft has added PowerShell cmdlets to configure Storage QoS and monitor the related performance.

Currently, Storage QoS supports two scenarios:

  • Hyper-V using Scale-Out File Server
  • Hyper-V using Cluster Shared Volume

So Storage QoS can be applied to both Software-Defined Storage scenarios provided in Windows Server 2016: the disaggregated deployment and hyper-convergence.

Manage Storage Quality of Service

In the example below, I have implemented a hyper-converged cluster consisting of four Nano Server nodes, so I will manage Storage QoS remotely by using a CIM session. A VM is deployed on the cluster, in which IOMeter is running.

First of all, it is necessary to open a CIM session to a cluster node, because we are managing the cluster from a management machine using the RSAT tools:

$CimSession = New-CimSession -Credential inthomecloud\rserre -ComputerName HC-Nano01

Then I use the below command to show the IOPS for each flow.

Get-StorageQosFlow -CimSession $CimSession |
Sort-Object StorageNodeIOPs -Descending |
ft InitiatorName, @{Expression={$_.InitiatorNodeName.Substring(0,$_.InitiatorNodeName.IndexOf('.'))};Label="InitiatorNodeName"}, StorageNodeIOPs, Status, @{Expression={$_.FilePath.Substring($_.FilePath.LastIndexOf('\')+1)};Label="File"} -AutoSize

You can find the result of this command in the screenshot below: the VM name, the Hyper-V host which hosts the VM, the IOPS and the name of the file.

To get more information, such as the PolicyID and the minimum and maximum IOPS, you can run the command below. In this example, no Storage QoS policy is applied to the flow.

Get-StorageQoSFlow -InitiatorName VM01 -CimSession $CimSession | Format-List

You can also get a summary of the Storage QoS settings applied to a volume:

Get-StorageQoSVolume -CimSession $CimSession | fl *

You can see in the above screenshot that no reservation is applied on this volume.

For the test, I ran IOMeter in VM01. Below you can find the result with no policy applied.

Then I create a new policy called bronze with a minimum IOPS of 50 and a maximum IOPS of 150.

New-StorageQosPolicy -Name bronze -MinimumIops 50 -MaximumIops 150 -CimSession $CimSession

Next I apply the Storage QoS policy to the VM01 disks.

Get-VM -Name VM01 -ComputerName HC-Nano02|
Get-VMHardDiskDrive |
Set-VMHardDiskDrive -QoSPolicyID (Get-StorageQosPolicy -Name Bronze -CimSession $CimSession).PolicyId

If I run Get-StorageQoSVolume again, you can see that a reservation is now applied to the volume (50, as defined in the policy).

Then I run Get-StorageQoSFlow again. You can see that the StorageNodeIOPs value is reduced to 169 (instead of 435).

Then, if you show all the details, you can see that a policy is applied to the flow and that minimum and maximum IOPS values are set.

To finish, you can see in the screenshot below that the IOPS measured inside the VM are also reduced compared to before the Storage QoS policy was applied.

Storage Replica

Storage Replica is a new feature in Windows Server 2016 that enables you to replicate data at the storage level when you are using a Windows storage solution. Storage Replica uses SMB3 for replication and can leverage RDMA to increase throughput and decrease CPU utilization.

Storage Replica implementation

Currently Storage Replica supports four different scenarios:

  • Stretch cluster implemented by using Storage Replica
  • Cluster-To-Cluster storage replication
  • Server-To-Server storage replication
  • Server-To-Self replication (to replicate between volumes in a single machine)

In the stretch cluster scenario, a cluster is stretched between two rooms. The machines in Room 1 use one storage solution while the servers in Room 2 use another one. Storage Replica can be used to synchronize the data between the two storage solutions. In this way, the cluster can be stretched because the stored data are the same on both sides. Even if Storage Replica makes stretched clusters easier to build, I still don't like this kind of solution, especially for Hyper-V clusters: it brings complexity at a time when hyper-convergence allows us to simplify the infrastructure.

The second scenario is Cluster-To-Cluster replication. Think of two rooms (in two different locations) with one cluster in each. The first cluster is active while the other is passive, and the storage is replicated from the active cluster to the passive one. Now suppose the first cluster goes down: the passive cluster can start the applications because all the data have been replicated. This is a good disaster recovery (DRP) solution.

Server-To-Server replication works the same way as Cluster-To-Cluster replication, except that instead of clusters we have single servers. We have two servers in two different locations: the first server is active while the second is passive. Even if server 1 goes down, server 2 can start the applications. Thanks to Storage Replica, we have implemented a DRP easily.

To finish, it is also possible to replicate volumes within a single server. Thanks to this, you can replace your Robocopy process.

Replication mode

Storage Replica requires at least two volumes per server to work: a log volume and a data volume. The data volume contains the data that you want to replicate. The log volume is used for the replication log and should be on high-speed devices (such as SATA SSD or NVMe SSD).
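
Before creating a partnership, the requirements (volume layout, log disk speed, network latency) can be validated with the Test-SRTopology cmdlet. A minimal sketch, with assumed computer names, drive letters and result path:

# Validate the source/destination volumes and measure IO and latency for 10 minutes
Test-SRTopology -SourceComputerName SRV01 -SourceVolumeName D: -SourceLogVolumeName L: `
                -DestinationComputerName SRV02 -DestinationVolumeName D: -DestinationLogVolumeName L: `
                -DurationInMinutes 10 -ResultPath C:\Temp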

Storage Replica supports two different replication modes:

  • Synchronous
  • Asynchronous (Server-To-Server only)

Synchronous replication is near real-time replication, which offers zero data loss. This mode should be chosen for HA and DR solutions. However, be careful: there is a risk of degraded application performance, because the application write is only acknowledged once the replication has been done. This is why the log volume should be on SSD devices.

  1. Application writes data
  2. Log data is written and the data is replicated to the remote site
  3. Log data is written at the remote site
  4. Acknowledgement from the remote site
  5. Application write acknowledged

t & t1 : Data flushed to the volume, logs always write through

Asynchronous replication brings less risk of degraded application performance, because the application write is acknowledged as soon as the local log data is written. However, because the replication is not near real-time, there is a risk of data loss (near zero data loss). On the other hand, latency and distance matter less than with synchronous replication.

  1. Application writes data
  2. Log data written
  3. Application write acknowledged
  4. Data replicated to the remote site
  5. Log data written at the remote site
  6. Acknowledgement from the remote site

t & t1 : Data flushed to the volume, logs always write through

Implementation Example

In this implementation example, I will replicate data from a first cluster to a second cluster (Cluster-To-Cluster). The storage solution of each cluster is based on Storage Spaces Direct.

Both of my clusters are implemented on Nano Server nodes. If I follow the documentation, I need these packages:

  • File Server role and other storage components
  • Reverse forwarders
  • Deploy Nano Server image with EnableRemoteManagementPort option

For more information about Nano Server, you can read this topic.

Once your nodes are deployed, you need to install the Storage Replica feature. To do that, you can run the command below:

Install-WindowsFeature Storage-Replica -ComputerName <ComputerName>

On each cluster, I have two disks: VMStorage01 and a Logs volume. I have assigned the letter L to the Logs volume. It is important that the volumes are identical between the source and destination clusters; otherwise, it won't work.

From the cluster view, it looks like this:

Both clusters are configured the same way. Then I run the commands below to grant Storage Replica access from the first cluster to the second, and vice versa.

Grant-SRAccess -ComputerName <a Node of the first cluster> -Cluster <second cluster name>
Grant-SRAccess -ComputerName <a Node of the second cluster> -Cluster <first cluster name>

To finish, I run the New-SRPartnership command as below:
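
The exact command only appears in the screenshot; a sketch of a Cluster-To-Cluster partnership, with assumed cluster names, replication group names and volume paths, could look like this:

New-SRPartnership -SourceComputerName HCCluster1 -SourceRGName RG01 `
                  -SourceVolumeName C:\ClusterStorage\VMStorage01 -SourceLogVolumeName L: `
                  -DestinationComputerName HCCluster2 -DestinationRGName RG02 `
                  -DestinationVolumeName C:\ClusterStorage\VMStorage01 -DestinationLogVolumeName L: `
                  -ReplicationMode Synchronous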

And below is the result in the disk view of the first cluster: the Logs disk has the source log replication role and VMStorage01 has the source replication role.

On the destination side, the destination volume is no longer accessible. Once the initial replication is finished, the replication status is "continuously replicating". The Logs disk gets the destination log replication role and the VMStorage01 disk gets the destination replication role.

Build a HyperConverged infrastructure with NanoServer

Thanks to Windows Server 2016, we will be able to implement a hyperconverged infrastructure. This marketing term means that the storage, network and compute components are installed locally on the servers. So in this solution, there is no need for a SAN. Instead, the Storage Spaces Direct solution is used (for further information, please read this topic). In this topic, I'll describe how to deploy a hyperconverged infrastructure on Nano Servers with Windows Server 2016 TP4. Almost all the configuration will be done with PowerShell.

You said HyperConverged?

A hyperconverged infrastructure is based on servers whose disks are Direct-Attached Storage (DAS), connected internally or through a JBOD tray. Each server (at least four to implement Storage Spaces Direct) has its own storage devices, so there are no shared disks or shared JBODs.

A hyperconverged infrastructure relies on well-known features such as Failover Clustering, Cluster Shared Volumes and Storage Spaces. However, because the storage devices are not shared between the nodes, we need something more to create a clustered storage space with DAS devices. This is called Storage Spaces Direct. Below you can find the Storage Spaces Direct stack.

On the network side, Storage Spaces Direct leverages at least 10Gb/s RDMA-capable networks. This is because the replication that occurs through the Software Storage Bus needs the low latency that RDMA provides.

Requirements

Because I don't have enough hardware in my lab, I deploy the hyperconverged infrastructure on virtual machines. Now that we have nested Hyper-V, we can do this. To follow this topic, you need these requirements:

  • Windows Server 2016 Technical Preview 4 ISO
  • A Hyper-V host installed with Windows Server 2016 Technical Preview 4
  • This script to enable nested Hyper-V

Create Nano Server VHDX image

To create a Nano Server image, you have to copy Convert-WindowsImage.ps1 and NanoServerImageGenerator.psm1 to a folder.

Then I have written a short script to create the four Nano Server VHDX images:

Import-Module C:\temp\NanoServer\NanoServerImageGenerator.psm1
# Nano Server Name
$NanoServers = "HCNano01", "HCNano02", "HCNano03", "HCNano04"
$IP = 170
Foreach ($HCNano in $NanoServers){
	New-NanoServerImage -MediaPath "D:" `
			    -BasePath C:\temp\NanoServer\Base `
			    -TargetPath $("C:\temp\NanoServer\Images\" + $HCNano + ".vhdx")`
			    -ComputerName $HCNano `
			    -InterfaceNameOrIndex Ethernet `
			    -Ipv4Address 10.10.0.$IP `
			    -Ipv4SubnetMask 255.255.255.0 `
			    -Ipv4Gateway 10.10.0.1 `
			    -DomainName int.homecloud.net `
			    -Clustering `
			    -GuestDrivers `
			    -Storage `
			    -Packages Microsoft-NanoServer-Compute-Package, Microsoft-Windows-Server-SCVMM-Compute-Package, Microsoft-Windows-Server-SCVMM-Package `
			    -EnableRemoteManagementPort
	 $IP++
}

This script creates a Nano Server VHDX image for each machine, called HCNano01, HCNano02, HCNano03 and HCNano04. I also set the domain and the IP address. I add the clustering feature, the guest drivers, the storage and Hyper-V features, and the SCVMM agent in case you need to add your cluster to VMM later. For more information about Nano Server image creation, please read this topic.

Then I launch this script. Sorry, you can't grab a coffee while the VHDX images are created, because you have to enter the administrator password manually for each image.

Once the script is finished, you should have four VHDX files as below.

OK, now we have our four images. The next step is creating the virtual machines.

Virtual Machine configuration

Create Virtual Machines

To create the virtual machines, connect to your Hyper-V host running Windows Server 2016 TP4. To create and configure the virtual machines, I have written this script:

$NanoServers = "HCNano01", "HCNano02", "HCNano03", "HCNano04"
Foreach ($HCNano in $NanoServers){
	New-VM -Name $HCNano `
	       -Path D: `
               -NoVHD `
	       -Generation 2 `
	       -MemoryStartupBytes 8GB `
	       -SwitchName LS_VMWorkload

	Set-VM -Name $HCNano `
	       -ProcessorCount 4 `
	       -StaticMemory

	Add-VMNetworkAdapter -VMName $HCNano -SwitchName LS_VMWorkload
	Set-VMNetworkAdapter -VMName $HCNano -MacAddressSpoofing On -AllowTeaming On
}

This script creates four virtual machines called HCNano01, HCNano02, HCNano03 and HCNano04. These virtual machines are stored on D:\ and no VHDX is attached yet. They are Generation 2 VMs with 4 vCPU and 8GB of static memory. Then I add a second network adapter to build a teaming inside the virtual machines (with Switch Embedded Teaming), so I enable MAC spoofing and teaming on the virtual network adapters. Below you can find the result.

Now, copy each Nano Server VHDX image inside its related virtual machine folder. Below is an example:

Then I run this script to add this VHDX to each virtual machine:

$NanoServers = "HCNano01", "HCNano02", "HCNano03", "HCNano04"
Foreach ($HCNano in $NanoServers){
    
    # Add the virtual disk to VMs
    Add-VMHardDiskDrive -VMName $HCNano `
                        -Path $("D:\" + $HCNano + "\" + $HCNano + ".vhdx")

    
    $VirtualDrive = Get-VMHardDiskDrive -VMName $HCNano `
                                        -ControllerNumber 0
    # Change the boot order
    Set-VMFirmware -VMName $HCNano -FirstBootDevice $VirtualDrive
}

This script adds the VHDX to each virtual machine and changes the boot order to boot from the hard drive first. OK, our virtual machines are ready. Now we have to add storage for Storage Spaces usage.

Create virtual disks for storage

Now I’m going to create 10 disks for each virtual machine. These disks are dynamic and their sizes are 10GB. (oh come-on, it’s a lab :p).

$NanoServers = "HCNano01", "HCNano02", "HCNano03", "HCNano04"
Foreach ($HCNano in $NanoServers){
	$NbrVHDX = 10
	For ($i = 1 ;$i -le $NbrVHDX; $i++){
		New-VHD -Path $("D:\" + $HCNano + "\" + $HCNano + "-Disk" + $i + ".vhdx") `
			-SizeBytes 10GB `
			-Dynamic

		Add-VMHardDiskDrive -VMName $HCNano `
				    -Path $("D:\" + $HCNano + "\" + $HCNano + "-Disk" + $i + ".vhdx")
	}
	Start-VM -Name $HCNano
}

This script creates 10 virtual disks for each virtual machine and mounts them on the VMs. Then each virtual machine is started.

At this point, we have four VMs with the hardware properly configured. Now we have to configure the software part.

Hyper-V host side configuration

Enable trunk on Nano Server virtual network adapters

Each Nano Server will have four different virtual NICs, with four different subnets and four different VLANs. These virtual NICs are connected to the Hyper-V host's virtual switch (yes, nested virtualization really is Inception). Because we need four different VLANs, we have to configure trunking on the virtual network adapters at the Hyper-V host level. By running the Get-VMNetworkAdapterVlan PowerShell command, you should have something like this:

For each network adapter that you have created on the virtual machines, you can see the allowed VLANs and the mode (Trunk, Access or Untagged), depending on the configuration you made for the virtual machine. For the hyperconverged virtual machines, we need four allowed VLANs and therefore trunk mode. To configure the virtual network adapters in trunk mode, I run the script below on my Hyper-V host:

Set-VMNetworkAdapterVlan -VMName HCNano01 -Trunk -NativeVlanId 0 -AllowedVlanIdList "10,100,101,102"
Set-VMNetworkAdapterVlan -VMName HCNano02 -Trunk -NativeVlanId 0 -AllowedVlanIdList "10,100,101,102"
Set-VMNetworkAdapterVlan -VMName HCNano03 -Trunk -NativeVlanId 0 -AllowedVlanIdList "10,100,101,102"
Set-VMNetworkAdapterVlan -VMName HCNano04 -Trunk -NativeVlanId 0 -AllowedVlanIdList "10,100,101,102"

So for each network adapter, I allow VLANs 10, 100, 101 and 102. The NativeVlanId is set to 0 to leave all other traffic untagged. After running the above script, I run Get-VMNetworkAdapterVlan again and get the configuration below.

Enable nested Hyper-V

Now we have to enable nested Hyper-V to be able to run Hyper-V inside Hyper-V (oh my god, I have a headache. Where is DiCaprio?). Microsoft provides a script to enable nested Hyper-V; you can find it here. I have copied the script to a file called nested.ps1. Next, just run nested.ps1 -VMName <VMName> for each VM; the VM will be stopped in the process. You should then have something like below.


Then start the four Nano Server VMs again.

Configure Nano Server system

To configure the Nano Servers, I will sometimes leverage PowerShell Direct. To use it, just run Enter-PSSession -VMName <VMName> -Credential <VMName>\Administrator. Once you are connected to the system, you can configure it. Because I'm a lazy guy, I have written a single script to configure each server. The script below changes the time zone, creates a Switch Embedded Teaming, sets the IP addresses, enables RDMA and installs the required features. This time, once you have launched the script, you can grab a coffee.

$Credential = Get-Credential
$IPMgmt = 170
$IPSto = 10
$IPLM = 170
$IPClust = 170
$NanoServers = "HCNano01", "HCNano02", "HCNano03", "HCNano04"

Foreach ($HCNano in $NanoServers){
    Enter-PSSession -VMName $HCNano -Credential $Credential
    #Change the TimeZone to Romance Standard Time
    tzutil /s "Romance Standard Time"
    # Create a Switch Embedded Teaming with both network adapters
    New-VMSwitch -Name Management -EnableEmbeddedTeaming $True -AllowManagementOS $True -NetAdapterName "Ethernet", "Ethernet 2"
    # Add Virtual NICs for Storage, Cluster and Live-Migration
    Add-VMNetworkAdapter -ManagementOS -Name "Storage" -SwitchName Management
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName Management
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName Management
    # Set the IP Address for each virtual NIC
    netsh interface ip set address "vEthernet (Management)" static 10.10.0.$IPMgmt 255.255.255.0 10.10.0.1
    netsh interface ip set dns "vEthernet (Management)" static 10.10.0.20
    netsh interface ip set address "vEthernet (Storage)" static 10.10.102.$IPSto 255.255.255.192
    netsh interface ip set address "vEthernet (LiveMigration)" static 10.10.101.$IPLM 255.255.255.0
    netsh interface ip set address "vEthernet (Cluster)" static 10.10.100.$IPClust 255.255.255.0
    # Enable RDMA on Storage and Live-Migration
    Enable-NetAdapterRDMA -Name "vEthernet (Storage)"
    Enable-NetAdapterRDMA -Name "vEthernet (LiveMigration)"
    # Add DNS on Management vNIC
    netsh interface ip set dns "vEthernet (Management)" static 10.10.0.20
    Exit
    # Install File Server and Storage Replica feature
    install-WindowsFeature FS-FileServer, Storage-Replica -ComputerName $HCNano
    # Restarting the VM
    Restart-VM -Name $HCNano
    $IPMgmt++
    $IPSto++
    $IPLM++
    $IPClust++
}

At the end, I have four virtual NICs, as you can see with the Get-NetAdapter command:

Then I have a Switch Embedded Teaming called Management composed of two Network Adapters.

Finally, RDMA is enabled on the Storage and Live-Migration networks.

So the network is ready on each node. Now we just have to create the cluster.

Create and configure the cluster

First of all, I run Test-Cluster to verify that my nodes are ready to be part of a Storage Spaces Direct cluster. So I run the command below:

Test-Cluster -Node "HCNano01", "HCNano02", "HCNano03", "HCNano04" -Include "Storage Spaces Direct", Inventory,Network,"System Configuration"

I get a warning about the network configuration because I use private networks for storage and cluster, so no ping is possible across these two networks. I ignore this warning. So let's start the cluster creation:

New-Cluster -Name HCCluster -Node HCNano01, HCNano02, HCNano03, HCNano04 -NoStorage -StaticAddress 10.10.0.174

Then the cluster is formed:

Now we can configure the networks. I start by renaming them and setting the Storage network's role to Cluster and Client.

(Get-ClusterNetwork -Cluster HCCluster -Name "Cluster Network 1").Name="Management"
(Get-ClusterNetwork -Cluster HCCluster -Name "Cluster Network 2").Name="Storage"
(Get-ClusterNetwork -Cluster HCCluster -Name "Cluster Network 3").Name="Cluster"
(Get-ClusterNetwork -Cluster HCCluster -Name "Cluster Network 4").Name="Live-Migration"
(Get-ClusterNetwork -Cluster HCCluster -Name "Storage").Role="ClusterAndClient"

Then I change the Live Migration settings so that the cluster uses the Live-Migration network for... Live Migration. I don't use PowerShell for this step because it is easier to do it from the GUI.
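
For reference, the same result can be obtained from PowerShell by excluding every cluster network except Live-Migration from live migration. This is a sketch of one common approach, not the exact steps I used:

# Restrict live migration to the Live-Migration cluster network
Get-ClusterResourceType -Cluster HCCluster -Name "Virtual Machine" |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value `
        ([String]::Join(";", (Get-ClusterNetwork -Cluster HCCluster | Where-Object { $_.Name -ne "Live-Migration" }).ID))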

Then I configure a witness by using the new feature in Windows Server 2016: the Cloud Witness.

Set-ClusterQuorum -Cluster HCCluster -CloudWitness -AccountName homecloud -AccessKey <AccessKey>

Then I enable Storage Spaces Direct on the cluster. Please read this topic to find the right command to enable Storage Spaces Direct; it can change depending on the storage devices you use (NVMe + SSD, SSD + HDD, and so on).
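
On recent Windows Server 2016 builds, this boils down to a single cmdlet; on TP4 the exact name and parameters may differ, so treat the following as a sketch:

# Run on one of the cluster nodes (or in a remote session to one of them)
Enable-ClusterStorageSpacesDirect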

Ok, the cluster is now ready. We can create some Storage Spaces for our Virtual Machines.

Create Storage Spaces

Because I run the commands remotely, we first have to connect to the storage provider with the Register-StorageSubSystem PowerShell cmdlet.

Register-StorageSubSystem -ComputerName HCCluster.int.homecloud.net -ProviderName *

Now when I run Get-StorageSubSystem as below, I can see the storage provider on the cluster.

Now I verify that the disks are well recognized and can be added to a storage pool. As you can see below, I have my 40 disks.
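
From the management machine, the disks can be listed through the registered subsystem; a quick sketch to count the disks that can be pooled:

Get-StorageSubSystem -Name HCCluster.int.homecloud.net | Get-PhysicalDisk |
    Where-Object CanPool -eq $true |
    Measure-Object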

Next I create a Storage Pool with all disks available.

New-StoragePool -StorageSubSystemName HCCluster.int.homecloud.net `
                -FriendlyName VMPool `
                -WriteCacheSizeDefault 0 `
                -ProvisioningTypeDefault Fixed `
                -ResiliencySettingNameDefault Mirror `
                -PhysicalDisk (Get-StorageSubSystem -Name HCCluster.int.homecloud.net | Get-PhysicalDisk)

If I go back to the Failover Cluster Manager GUI, I have a new storage pool called VMPool.

Then I create two mirrored volumes: one has 4 columns and the other has 2 columns. Each volume is formatted with ReFS (CSVFS) and has a size of 50GB.

New-Volume -StoragePoolFriendlyName VMPool `
           -FriendlyName VMStorage01 `
           -NumberOfColumns 4 `
           -PhysicalDiskRedundancy 2 `
           -FileSystem CSVFS_REFS `
           -Size 50GB

New-Volume -StoragePoolFriendlyName VMPool `
           -FriendlyName VMStorage02 `
           -NumberOfColumns 2 `
           -PhysicalDiskRedundancy 2 `
           -FileSystem CSVFS_REFS `
           -Size 50GB

Now I have two Cluster Virtual Disks and they are mounted on C:\ClusterStorage\VolumeX.

Host VMs

So I create a virtual machine and I use C:\ClusterStorage to host the VM files.

Then I try to start the virtual machine. If it runs, everything is working, including nested Hyper-V.

I try a ping to my Active Directory and to Google. It's working, oh yeah.

Conclusion

Hyperconverged infrastructure is a great, flexible solution. As you have seen above, it is not so hard to install. However, I think sizing this solution takes real work: strong knowledge of Storage Spaces, networking and Hyper-V is necessary. In my opinion, Nano Server is a great fit for this kind of deployment because of its small footprint on disk and compute resources. Moreover, it is less exposed to Microsoft updates and therefore to reboots. Finally, Nano Servers reboot quickly, so we can follow the trend of "fail hard but fail fast". I hope that Microsoft will not kill its hyperconverged solution with the Windows Server 2016 licensing model…

Manage Storage Space Direct from Virtual Machine Manager

In a previous topic, I showed how to implement Storage Spaces Direct on Windows Server 2016 TP2 (it is almost the same in Technical Preview 3). In that topic, I created a storage pool, a storage space and some shares from the Failover Clustering console. In this topic, I'll show you how to do the same operations from Virtual Machine Manager Technical Preview 3.

Requirements

To follow this topic you need:

  • A Scale-Out File Server implementation. In this topic I use storage space direct;
  • A Virtual Machine Manager 2012R2 Update Rollup 8 installation (on my side I’m in Technical Preview 3).

Storage Space Direct implementation

For this topic, I have deployed four virtual machines running Windows Server 2016 Technical Preview 3. These machines are in a cluster called HyperConverged.int.homecloud.net. I have installed the Hyper-V and File Server roles on these servers because it is a PoC for hyper-convergence (I'm waiting for nested Hyper-V on Windows Server). Each virtual machine is connected to 5 disks of 40GB.

Each server is connected to four networks.

  • Cluster: cluster communication;
  • Management: AD, RDP, MMC and so on;
  • Storage: dedicated network between Hyper-V and cluster for storage workloads;
  • Live-Migration: dedicated network to migrate VM from one host to another.

The Scale-Out File Server role is deployed in the cluster. I called it VMSto. VMSto is reachable from the storage network.

To finish, I have added a VMM Run As account to the local Administrators group on each server.

Manage Storage Space Direct

Now I connect to Virtual Machine Manager and go to the Fabric workspace. I add a storage device (right-click on Arrays, then Add Storage Devices).

Next select the provider type. With Scale-Out File Server, select Windows-Based File Server.

Next, type the cluster name and select the Run As account related to the account that you added to the local Administrators group on each server.

Then the Scale-Out File Server should be discovered with a capacity of 0GB. This is because no storage pool has been created yet. Just click on Next.

Then select the Scale-Out File Server to place under management and click on next.

Once the storage device is added to VMM, you can navigate to File Servers and right click on the device. Select Manage Pools.

In the next window, you see the list of storage pools. Because no storage pool has been created yet, nothing appears. So click on New to create a storage pool.

Give a name to the storage pool and select a classification.

Then select disks that will be in this storage pool.

To finish you can specify the Interleave.

Once the storage pool is created, you should see it in Storage Pools window as below.

Next run a rescan on the provider and navigate to Array. Now a pool is managed by VMM.

Moreover it has been added to the right classification.

Now I create a file share to store my VMs. So I select Create File Share, give a name to the share and select a storage pool.

Then I specify a size for the volume, a file system, the resiliency and an allocation unit size. If you have SSDs and HDDs in the pool, VMM will ask you whether you want to enable storage tiers.

Once the share is created, a new LUN (in fact, it is a Cluster Shared Volume) is added under the storage pool.

In File Share view, you should have your new File Share.

Now you just have to add the share to the Hyper-V host configuration as below.

Now you can deploy VMs in this share as you can see below.

Overview in Failover Clustering console

If we go back to the Failover Clustering console, you should find the storage pool, the CSV and the share that we created from VMM. First, if you navigate to Pools, you should see a new storage pool called Bronze-Tier01.

Then in Disks, you should have a new CSV belonging to your storage pool.

To finish, if you navigate to the Scale-Out File Server role and select the Shares tab, you should see the new file share.

Manage using PowerShell

Create the storage Pool

You can list the disks available from VMM to add them to a storage pool. For that, I used the Get-SCStoragePhysicalDisk cmdlet.
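
For example (the default output columns may vary between VMM versions, so this is just a sketch):

# List the physical disks known to VMM; the ID values are reused below to build the pool
Get-SCStoragePhysicalDisk | Format-Table -AutoSize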

Then I use the below script to create a storage pool with the selected physical disk.

$storageArray = Get-SCStorageArray -Name "Clustered Windows Storage on HyperConverged"
$disksToAdd = @()
$disksToAdd += Get-SCStoragePhysicalDisk -ID "69d0702d-5de1-4ac4-82f2-224d1b47676c"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "a77c70bd-96df-482c-87e2-314f288e7142"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "cb94acd1-4269-4db5-bab9-42aeea1897dd"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "97dd5243-7502-48cc-9302-433288a487f3"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "e44d24ab-9e47-44bd-94ea-5d57f25d8d66"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "c77e6d97-e7c7-4d88-abd8-72ffe468418d"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "90d7408f-d7be-4aaf-b88c-7cb3c0860c2e"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "5b6217ce-5eff-489c-b074-c97c64c9d1c6"
$classification = Get-SCStorageClassification -Name "Bronze"
$pool_0 = New-SCStoragePool -Name "Bronze-Tier01" -StoragePhysicalDisk $disksToAdd -StorageArray $storageArray -StorageClassification $classification

Create the file share

To create the file share in a storage pool, I use the New-SCStorageFileShare cmdlet as below.

$storageFileServer = Get-SCStorageFileServer -Name VMSto.int.HomeCloud.net
$storagePool = Get-SCStoragePool -name "Bronze-Tier01"
$storageClassification = Get-SCStorageClassification -Name "Bronze"
$storageFileShare = New-SCStorageFileShare -StorageFileServer $storageFileServer -StoragePool $storagePool -Name "Bronze01" -Description "" -SizeMB 102400 -RunAsynchronously -FileSystem "CSVFS_ReFS" -ResiliencySettingName "Mirror" -PhysicalDiskRedundancy "2" -AllocationUnitSizeKB "64" -StorageClassification $storageClassification

Hyper-V converged networking and storage design

Since Windows Server 2012, converged networking has been supported by Microsoft. This concept enables several network traffic types to share an Ethernet adapter. Before that, it was recommended to dedicate a network adapter to each traffic type (backup, cluster and so on).

So thanks to converged networking, we can use a single Ethernet adapter (or a teaming) to carry several traffic types. However, if the design is not good, the link can quickly reach its bandwidth limit. So when designing converged networking, keep the QoS (Quality of Service) settings in mind: these settings ensure that each traffic type gets the appropriate bandwidth.

When you implement converged networking, you can play with a setting called the QoS weight. You can assign a value from 1 to 100: the higher the value, the higher the priority of the traffic associated with it.
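
As a sketch of how these weights are applied (the switch, team and adapter names below are examples, not taken from a specific lab), the Hyper-V switch has to be created in Weight mode and each host vNIC then receives a minimum bandwidth weight:

# Create the converged switch in relative-weight QoS mode on top of a team
New-VMSwitch -Name ConvergedSwitch -NetAdapterName "Team01" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Host vNICs for each traffic type
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName ConvergedSwitch
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName ConvergedSwitch

# Apply the QoS weights from the tables below
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40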

When you design networks for Hyper-V/VMM, you usually have four networks for the hosts: host fabric management, Live Migration, cluster and backup. I have detailed some examples in the next part, Common network requirements. The other traffic types are related to virtual machines; usually you have at least one network for the fabric virtual machines.

Common network requirements

Host Management networks

In the table below, you can find an example of networks for the Hyper-V hosts. I have also specified the VLAN and the QoS weight. The Host Fabric Management network has its VLAN set to 0 because the packets are untagged. In this way, even if my Hyper-V host has no VLAN configuration, it can answer DHCP requests. This is useful for deploying hosts with Bare-Metal deployment from Virtual Machine Manager.

| Network Name | VLAN | Subnet | Description | QoS weight |
|---|---|---|---|---|
| Host Fabric Management | 0 | 10.10.0.0/24 | LAN for host management (AD, RDP …) | 10 |
| Live Migration | 100 | 10.10.100.0/24 | Live Migration network | 40 |
| Host Cluster | 101 | 10.10.101.0/24 | Cluster heartbeat network | 10 |
| Host Backup | 102 | 10.10.102.0/24 | Backup network | 40 |

In the above configuration, the Live Migration and backup traffic have a higher priority than the host fabric management and cluster traffic, because Live Migration and backup require more bandwidth.

VM Workloads

In the table below, you can find an example of VM networks. In this example, I have isolated the networks for the fabric VMs, the DMZ VMs, and their cluster and backup traffic. In this way, I can apply a QoS setting to each type of traffic. Here, the backup traffic has a higher weight than the other networks because backups use more bandwidth.

| Network Name | VLAN | Subnet | Description | QoS weight |
|---|---|---|---|---|
| VM Fabric | 1 | 10.10.1.0/24 | Network for the fabric VMs | 10 |
| VM DMZ | 2 | 10.10.2.0/24 | Network for VMs in the DMZ | 10 |
| VM Fabric Cluster | 50 | 10.10.50.0/24 | Cluster network for fabric VMs | 10 |
| VM DMZ Cluster | 51 | 10.10.51.0/24 | Cluster network for DMZ VMs | 10 |
| VM Fabric Backup | 60 | 10.10.60.0/24 | Backup network for fabric VMs | 30 |
| VM DMZ Backup | 61 | 10.10.61.0/24 | Backup network for DMZ VMs | 30 |

Hyper-V converged networking and storage designs

Now that you have your network requirements on paper, we can work on the storage part. First you have to choose the storage solution: FC SAN, iSCSI SAN or Software-Defined Storage?

To choose a storage solution, you must look at your needs and your existing environment. If you already have an FC SAN with good performance, keep it to save money. If you are starting a new infrastructure and want to store only VMs on the storage solution, you could implement Software-Defined Storage.

In the next sections, I have drawn a schema for each commonly implemented storage solution. They certainly do not suit all needs, but they help to understand the principle.

Using Fibre Channel storage

Fibre Channel (not to be confused with fiber-optic cabling) is a protocol used to connect a server to the storage solution (SAN: Storage Area Network) over a high-speed network. Usually, fiber-optic cables are used to interconnect the SAN with the server. The adapters on the server where the fiber-optic cables are connected are called HBAs (Host Bus Adapters).

In the schema below, the parent partition traffic is represented by green links while the VM traffic is orange.

On the Ethernet side, I implement two dynamic teamings with two physical NICs each:

  • Host Management traffics (Live-Migration, Cluster, Host Backup, host management);
  • VM Workloads (VM Fabric, VM DMZ, VM Backup and so on).

On the storage side, I also split the parent partition traffic and the VM traffic:

  • The parent partition traffic is mainly related to the Cluster Shared Volumes that store the virtual machines;
  • The VM traffic concerns LUNs mounted directly in VMs for guest cluster usage (witness disk), database servers and so on.

To mount LUNs directly in VMs, you need HBAs with NPIV enabled and you also need to create a virtual SAN (vSAN) on the Hyper-V host. Then you have to deploy MPIO inside the VMs. For more information, you can read this TechNet topic.

To support multipath in the parent partition, it is also necessary to enable MPIO on the Hyper-V host.
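
A sketch of both steps, with hypothetical WWN values and VM name (the virtual SAN part is only needed when LUNs are presented directly to VMs):

# On the Hyper-V host: install MPIO for the parent partition paths
Install-WindowsFeature Multipath-IO -IncludeManagementTools

# Create a virtual Fibre Channel SAN and attach a virtual HBA to a VM
New-VMSan -Name "FC-Fabric-A" -WorldWideNodeName "C003FF0000FFFF00" -WorldWidePortName "C003FF5778E50002"
Add-VMFibreChannelHba -VMName VM01 -SanName "FC-Fabric-A"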

For a production environment, you need four 10Gb/s Ethernet NICs and four HBAs. This is the most expensive solution.

Using iSCSI storage

iSCSI (Internet Small Computer System Interface) is a protocol that carries SCSI commands over IP networks from the server to the SAN. This solution is less efficient than Fibre Channel, but it is also less expensive.

The network design is the same as in the previous solution. Regarding the storage, I isolate the parent partition traffic and the VM workloads. MPIO is implemented for the CSVs to support multipath. When VMs need direct access to the storage, I deploy two vNICs bound to the VM Volumes physical NICs, and then I deploy MPIO inside the VMs. Finally, I prefer to use dedicated switches between the hosts and the SAN.

For each Hyper-V host, you need eight 10Gb/s Ethernet adapters.

Using Software-Defined Storage

This solution is based on a software storage solution (such as Scale-Out File Servers).

The network is the same as in the previous solutions. On the storage side, at least two RDMA-capable NICs are required for better performance. SMB3 over RDMA (Remote Direct Memory Access) increases throughput and decreases the CPU load; this is also called SMB Direct. To support multipath, SMB Multichannel must be enabled (not teaming!).
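
A quick way to check that SMB Multichannel and RDMA are actually used between the Hyper-V hosts and the file servers (a sketch; run it on a Hyper-V host while storage traffic is flowing):

# Client interfaces seen by SMB; the "RDMA Capable" column should be True on the storage NICs
Get-SmbClientNetworkInterface

# Active multichannel connections towards the file server
Get-SmbMultichannelConnection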

When a VM needs a witness disk or another shared volume for guest clustering, it is possible to use a shared VHDX to share a virtual hard drive between virtual machines.

This solution is less expensive because software-defined storage is cheaper than a SAN.

What about Windows Server 2016

In Windows Server 2016, you will be able to converge tenant and RDMA traffic on the same NICs to optimize costs, enabling high performance and network fault tolerance with only 2 NICs instead of 4.
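
This relies on Switch Embedded Teaming (SET) with RDMA enabled directly on the host vNICs. A sketch with assumed adapter names:

# Switch Embedded Teaming over two physical RDMA-capable NICs
New-VMSwitch -Name SETSwitch -NetAdapterName "NIC1", "NIC2" -EnableEmbeddedTeaming $true

# Host vNICs dedicated to SMB traffic, with RDMA enabled on the vNICs themselves
Add-VMNetworkAdapter -ManagementOS -Name "SMB01" -SwitchName SETSwitch
Add-VMNetworkAdapter -ManagementOS -Name "SMB02" -SwitchName SETSwitch
Enable-NetAdapterRdma -Name "vEthernet (SMB01)", "vEthernet (SMB02)"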
