iSCSI – Tech-Coffee

Connect StarWind Virtual Tape Library to a Windows Server 2016

StarWind Virtual Tape Library (VTL) is a feature included in StarWind Virtual SAN. StarWind VTL provides a virtual LTO library to store your backup archives, eliminating both the heavy process of tape management and the cost of a physical LTO library. The connection between StarWind Virtual Tape Library and a server is made over iSCSI, and StarWind VTL emulates a Hewlett Packard tape library. In this topic we'll see how to connect StarWind Virtual Tape Library to a Windows Server 2016. In a next topic, I'll use the StarWind VTL to archive backups with Veeam Backup & Replication Update 3.

Requirements

To install StarWind VTL, you need a server with at least the following hardware:

  • 2x (v)CPU
  • 4GB of RAM
  • 2x Network Adapters (one for management and the other for iSCSI)
  • 1x hard drive for OS
  • 1x hard drive for virtual tape

For this topic, I have deployed a virtual machine to host StarWind VTL.

StarWind Virtual Tape Library installation

To deploy StarWind VTL, first download StarWind Virtual SAN from the StarWind website. Then run the installer and, when you are asked to choose the components, specify the settings as follows:

Once the product is installed, run the StarWind management console.

StarWind Virtual Tape Library configuration

First, I change the management interface so that it binds only to the management IP address. In this way, the product cannot be managed from the iSCSI network adapter. To change the management interface, navigate to Configuration | Management Interface.

Next, in the General pane, click on Add VTL Device.

Then, specify a name for your Virtual Tape Library and a location.

Then leave the default option and click on Next.

On the next screen, you are asked to create the iSCSI target. Choose to Create new target. Then specify a target alias.

If the creation is successful, you should get the following information:

Now in StarWind Management Console, you have a VTL device.

If you wish, you can add another virtual tape library to your iSCSI target.

Connect Windows Server to StarWind VTL

On the Windows Server, open the iSCSI Initiator properties. You are asked whether the Microsoft iSCSI service should start automatically: choose Yes. Then, in the Target field, enter the iSCSI IP address of the StarWind VTL server and connect to the target.
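
For reference, the same connection can be scripted with the Microsoft iSCSI PowerShell cmdlets. This is only a minimal sketch; the IP address below is a placeholder for your StarWind VTL iSCSI address.

# Placeholder IP address: replace with the iSCSI IP of your StarWind VTL server
Start-Service -Name MSiSCSI
Set-Service -Name MSiSCSI -StartupType Automatic
New-IscsiTargetPortal -TargetPortalAddress '10.10.0.60'
# Connect every discovered target that is not yet connected and make it persistent across reboots
Get-IscsiTarget | Where-Object { -not $_.IsConnected } | Connect-IscsiTarget -IsPersistent $true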

Once connected, you can open Device Manager. As you can see in the below screenshot, the emulated Hewlett Packard tape devices (medium changer and tape drives) are listed.

Conclusion

If you don't want to invest in a physical tape library, you can use StarWind Virtual Tape Library to archive your data. In the real world, you usually use a physical machine with a lot of SATA drives: instead of writing to tape, you write to SATA storage. In addition to being cheaper, this solution doesn't require a tape rotation process. However, the StarWind VTL should be located in another datacenter to protect against a datacenter disaster.

Deploy a Software-Defined Storage solution with StarWind Virtual SAN

StarWind Virtual SAN is a Software-Defined Storage solution which replicates data across several nodes to ensure availability. The data is mirrored between two or more nodes. The hypervisor can be installed on the StarWind Virtual SAN nodes (hyperconverged) or kept on separate hosts (converged). StarWind Virtual SAN is easy to use and provides high performance. Moreover, StarWind provides proactive support. In this topic I'll show you how to deploy a 3-node StarWind VSAN to use with Hyper-V or ESXi.

Lab overview

To write this topic, I have deployed three VMware VMs running Windows Server 2016. Each VM has the following configuration:

  • 2 vCPU
  • 8GB of memory
  • 1x VMXNET3 NIC in management network (for Active Directory, RDP, VM management)
  • 1x VMXNET3 NIC in cluster network (synchronization and heartbeat)
  • 1x VMXNET3 NIC in Storage network (iSCSI with hypervisor)
  • 1x 100GB Data disk

If you plan to deploy StarWind VSAN in production, you need physical servers with enough storage and network adapters.

StarWind Virtual SAN installation

First, download StarWind VSAN from the StarWind website. Once you have downloaded the installer, execute it on each StarWind VSAN node and accept the license agreement.

In the next screen, click on Next.

Specify a folder location where StarWind Virtual SAN will be installed.

Select StarWind Virtual SAN Server in the drop down menu.

Specify the start menu folder and click on Next.

If you want a desktop icon, enable the checkbox.

If you already have a license key, select Thank you, I do have a key already and click on Next.

Specify the location of the license file and click on Next.

Review the license information and click on Next.

If the iSCSI service is disabled and not started, you'll get this pop-up. Click on OK to enable and start the Microsoft iSCSI Initiator service.

Once you have installed StarWind Virtual SAN on each node, you can move on to the next step.

Create an iSCSI target and a storage device

Open StarWind Management Console and click on Add Server.

Then add each node and click on OK. In the below screenshot, I clicked on Scan StarWind Servers to discover the nodes automatically.

When you connect to each node, you get this warning. Choose the default location of the storage pool (storage devices).

Right click on the first node and select Add Target.

Specify a target alias and be sure to allow multiple concurrent iSCSI connections.

Once the target has been created, you get the following screen:

Now, right click on the target and select Add new Device to Target.

Select Hard Disk Device and click on Next.

Choose the option which applies to your configuration. In my case, I choose Virtual disk.

Specify a name and a size for the virtual disk.

Choose thick-provisioned or Log-Structured File System (LSFS). LSFS is designed for virtual machine workloads because this file system eliminates the I/O blender effect. With LSFS you can also enable deduplication. Also choose the right block cache size.

In the next screen, you can choose where the metadata is held and how many worker threads you want.

Choose the device RAM cache parameters.

You can also specify a flash cache capacity if you have installed SSDs in your nodes.

Then click on Create to create the storage device.

Once the storage device is created, you get the following screen:

At this point, you have a virtual disk on the first node. This virtual disk can store your data, but it has no resiliency. In the next steps, we will replicate this storage device to the two other nodes.

Replicate the storage device in other StarWind VSAN nodes

Right click on the storage device and select Replication Manager.

In the replication manager, select Add Replica.

Select Synchronous Two-Way Replication to replicate data across StarWind Virtual SAN nodes.

Specify the hostname and the port of the partner and click on Next.

Then select the failover strategy: Heartbeat or Node Majority. In my case I choose Node Majority. This mode requires that a majority of nodes be online: in a three-node configuration, only the loss of a single node can be tolerated.

Then choose to create a new partner device.

Specify the target name and the location of the storage device on the partner node.

Select the network for synchronization. In my case, I select the cluster network.

Then select to synchronize from existing device.

To start the creation of the replication, click on Create Replica.

Repeat the previous steps for the third node. At the end, the configuration should be similar to the following screenshot:

In StarWind Management Console, if you click on a target, you can see each iSCSI session: each node has two iSCSI sessions because there are three nodes.

iSCSI connection

Now that StarWind Virtual SAN is ready, you can connect your favorite hypervisor by using iSCSI. Don't forget to configure MPIO to support multipathing. For ESXi you can read this topic.
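
For a Windows/Hyper-V host, a minimal PowerShell sketch to prepare MPIO for the StarWind iSCSI devices could look like the following (a reboot is required after installing the feature; the round-robin policy is an assumption, align it with your own design):

# Install the MPIO feature and automatically claim iSCSI devices (reboot required)
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
# Optionally set round robin as the default load-balancing policy for claimed devices
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR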

Enable Direct Storage Access in Veeam Backup & Replication

In environments where the virtualization infrastructure is connected to a SAN, you may want to connect your Veeam proxies (usually physical servers) to the SAN to collect data. This configuration avoids using the production network to retrieve data and reduces the backup window by increasing the speed of the backup process. It requires a direct connection between the Veeam proxies and the SAN, usually over iSCSI or FC. In this topic, we'll see how to configure the Veeam proxies to enable Direct Storage Access and to back up VMware VMs.

Design overview

The Veeam server is a physical server (Dell R630) running Windows Server 2016 (July cumulative updates). The Veeam version is 9.5 Update 2. This server has two network adapters in a team for the production network and two network adapters for iSCSI. I'd like to collect VM data across the iSCSI network adapters: Veeam collects VM information and processes the VM snapshot over the production network, and data is copied across the backup network. In this topic, I'll connect VMFS LUNs (VMware) to the Veeam server. In this configuration, the Veeam proxy is deployed on the Veeam server.

N.B: If you plan to dedicate servers for Veeam proxies, you must connect each proxy to the production storage.

Configure MPIO

First of all, you need to install MPIO if you have several network links connected to the SAN. MPIO ensures that all paths to a LUN are managed, providing high availability and bandwidth aggregation (unless you have set MPIO to failover only). Install MPIO from Windows features and run mpiocpl (it also works on Core edition) to configure MPIO:

After you have added iSCSI and/or SAS support, a reboot is required.

Disable automount

This option prevents Windows from automatically mounting any new basic or dynamic volumes that are added to the system. The volumes must not be mounted on the Veeam server: we just need access to the block storage. This is why this option is set.
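
As a minimal sketch, automount can also be disabled from an elevated PowerShell session; the SAN policy line is an optional extra, not something required by this article.

# Disable automatic mounting of new volumes (same effect as "automount disable" in DISKPART)
mountvol.exe /N
# Optionally keep newly discovered shared disks offline as well (SAN policy)
Set-StorageSetting -NewDiskPolicy OfflineShared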

iSCSI configuration

First, you need to start the iSCSI service. This service should also be set to start automatically:

Next, open iscsicpl (it also works on Server Core) and add the portal address. Once the targets are discovered, you can connect to them.

Once you are connected, open Disk Management and check that the LUNs are presented to the system:
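
If the server runs Core edition, a quick PowerShell check can replace Disk Management; this is just a sketch listing the iSCSI-attached disks.

# The LUNs should appear as iSCSI disks; they remain unmounted because automount is disabled
Get-Disk | Where-Object BusType -eq 'iSCSI' |
    Select-Object Number, FriendlyName, Size, OperationalStatus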

Veeam configuration

Now that you have connected your Veeam proxies to the production storage, you can change the transport mode. In the Veeam Backup & Replication console, navigate to Backup Infrastructure and edit the proxies. Change the transport mode and choose Direct Storage Access.

Now, when you backup a VM located in a SAN datastore, you should have something like this:

Conclusion

If you have an iSCSI or FC SAN and you want to dedicate these networks to backup, Direct Storage Access can be the way to go. The servers must be connected to the SAN, and Veeam is then able to copy data at block level from the presented LUNs. If your production network is 1 Gb/s and your storage network is 10 Gb/s, you can also save a lot of time and reduce the backup window.

Deploy a 2-node StarWind VSAN Free for VMware ESXi 6.5

StarWind VSAN Free provides a storage solution for several purposes, such as storing virtual machines. With VSAN Free, you can deploy a 2-node cluster to present a highly available storage solution to hypervisors such as Hyper-V or ESXi, for free. When you deploy multiple VSAN Free nodes, the data is synchronized between nodes across the network.

You can deploy StarWind VSAN Free in a hyperconverged model, where VSAN Free is installed on the hypervisor nodes, or in a disaggregated model, where compute and storage are separated. In the below table, you can find the differences between StarWind VSAN Free and the paid edition.

As you can see, the big differences between both editions are technical support and management capabilities. With StarWind VSAN Free, you can manage the product only with PowerShell (plus a 30-day trial of the StarWind Management Console) and you have no access to technical support.

Thanks to this free version, we are able to deploy a highly available storage solution. In this topic, we will see how to deploy StarWind VSAN Free in a 2-node configuration on Windows Server 2016 Core edition. Then we will connect the storage to VMware ESXi.

Requirements

To write this topic, I have deployed the following virtual machines. In production, I recommend implementing the solution on physical servers. Each VM has the following hardware:

  • 2 vCPU
  • 4GB of memory
  • 1x OS disk (60GB dynamic)
  • 4x 100GB disks (100GB dynamic)
  • 1x vNIC for management (and heartbeat)
  • 1x vNIC for storage sync

At the end of the topic, I will connect the storage to VMware ESXi. So if you want to follow along, you need a running vSphere environment.

You can download StarWind VSAN Free here.

Architecture overview

Both StarWind VSAN Free nodes will be deployed with Windows Server 2016 Core edition. Each node has two network adapters. One network is used for synchronization between the nodes (a non-routed network); the other is used for iSCSI and management. Ideally, you should isolate management and iSCSI traffic on two separate vNICs.

Configure the data disks

Once the operating system is deployed, I run the following script to create a storage pool and a volume to host Starwind image files.

# Initialize data disks
get-disk |? OperationalStatus -notlike "Online" | Initialize-Disk

#Create a storage pool with previously initialized data disks
New-StoragePool -StorageSubSystemFriendlyName "*VMSAN*" `
                -FriendlyName Pool `
                -PhysicalDisks (get-physicaldisk |? canpool -like $True)

#Create a NTFS volume in 2-Way mirroring with maximum space. Letter: D:\
New-Volume -StoragePoolFriendlyName Pool `
           -FriendlyName Storage `
           -FileSystem NTFS `
           -DriveLetter D `
           -PhysicalDiskRedundancy 1 `
           -UseMaximumSize

#Create a folder on D: called Starwind
new-item -type Directory -Path D:\ -Name Starwind 

Install StarWind VSAN Free

I have copied the StarWind VSAN Free binaries to both nodes. Then I run the installer from the command line. On the welcome screen, just click on Next.

In the next screen, accept the license agreement and click on next.

The next window introduces the new features and improvements of StarWind Virtual SAN v8. Once you have read them, just click on next.

Next, choose the folder where the StarWind VSAN Free binaries will be installed.

Then choose which features you want to install. You can install powerful features such as the SMI-S provider to connect to Virtual Machine Manager, the PowerShell management library or the cluster service.

In the next screen choose the start menu folder and click on next.

In the next screen, you can request the free version key. StarWind has already kindly given me a license file so I choose Thank you, I do have a key already.

Then I specify the license file and I click on next.

Next you should have information about the provided license key. Just click on next.

To finish, click on install to deploy the product.

You have to repeat these steps for each node.

Deploy the 2-node configuration

StarWind provides some PowerShell script samples to configure the product from the command line. To create the 2-node configuration, we will leverage the script CreateHA(two nodes).ps1. You can find the script samples in <InstallPath>\StarWind Software\StarWind\StarWindX\Samples\PowerShell.

Copy scripts CreateHA(two nodes).ps1 and enumDevicesTargets.ps1 and edit them.

Below you can find my edited CreateHA(two nodes).ps1:

Import-Module StarWindX

try
{
    #specify the IP address and credential (this is default cred) of a first node
    $server = New-SWServer -host 10.10.0.46 -port 3261 -user root -password starwind

    $server.Connect()

    $firstNode = new-Object Node

    # Specify the path where image file is stored
    $firstNode.ImagePath = "My computer\D\Starwind"
    # Specify the image name
    $firstNode.ImageName = "VMSto1"
    # Size of the image
    $firstNode.Size = 65536
    # Create the image
    $firstNode.CreateImage = $true
    # iSCSI target alias (lower case only supported because of RFC)
    $firstNode.TargetAlias = "vmsan01"
    # Automatic synchronization?
    $firstNode.AutoSynch = $true
    # partner synchronization interface (second node)
    $firstNode.SyncInterface = "#p2=10.10.100.47:3260"
    # partner heartbeat interface (second node)
    $firstNode.HBInterface = "#p2=10.10.0.47:3260"
    # cache size
    $firstNode.CacheSize = 64
    # cache mode (write-back cache)
    $firstNode.CacheMode = "wb"
    # storage pool name
    $firstNode.PoolName = "pool1"
    # synchronization session count. Leave this value to 1
    $firstNode.SyncSessionCount = 1
    # ALUA enable or not
    $firstNode.ALUAOptimized = $true
    
    #
    # device sector size. Possible values: 512 or 4096(May be incompatible with some clients!) bytes. 
    #
    $firstNode.SectorSize = 512
	
    #
    # 'SerialID' should be between 16 and 31 symbols. If it is not specified, the StarWind service will generate it.
    # Note: the second node always has the same serial ID. You do not need to specify it for the second node.
    #
    $firstNode.SerialID = "050176c0b535403ba3ce02102e33eab"
    
    $secondNode = new-Object Node

    $secondNode.HostName = "10.10.0.47"
    $secondNode.HostPort = "3261"
    $secondNode.Login = "root"
    $secondNode.Password = "starwind"
    $secondNode.ImagePath = "My computer\D\Starwind"
    $secondNode.ImageName = "VMSto1"
    $secondNode.Size = 65536
    $secondNode.CreateImage = $true
    $secondNode.TargetAlias = "vmsan02"
    $secondNode.AutoSynch = $true
    # First node synchronization IP address
    $secondNode.SyncInterface = "#p1=10.10.100.46:3260"
    # First node heartbeat IP address
    $secondNode.HBInterface = "#p1=10.10.0.46:3260"
    $secondNode.ALUAOptimized = $true
        
    $device = Add-HADevice -server $server -firstNode $firstNode -secondNode $secondNode -initMethod "Clear"
    
    $syncState = $device.GetPropertyValue("ha_synch_status")

    while ($syncState -ne "1")
    {
        #
        # Refresh device info
        #
        $device.Refresh()

        $syncState = $device.GetPropertyValue("ha_synch_status")
        $syncPercent = $device.GetPropertyValue("ha_synch_percent")

        Start-Sleep -m 2000

        Write-Host "Synchronizing: $($syncPercent)%" -foreground yellow
    }
}
catch
{
    Write-Host "Exception $($_.Exception.Message)" -foreground red 
}

$server.Disconnect() 

Next, I run the script. An image file will be created on both nodes, and these image files will be synchronized.
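
For reference, here is how I invoke the sample; the path below assumes the default installation location mentioned earlier, so adjust it to your environment. The parentheses in the file name require quoting and the call operator.

# Assumption: default installation path
Set-Location 'C:\Program Files\StarWind Software\StarWind\StarWindX\Samples\PowerShell'
& '.\CreateHA(two nodes).ps1'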

Thanks to the 30-day trial of the management console, you can get graphical information about the configuration. As you can see below, you have information about the image files.

You can also review the configuration of network interfaces:

If you browse the Starwind folder on each node, you should see the image files.

Now you can edit and run the script enumDevicesTargets.ps1:

Import-Module StarWindX

# Specify the IP address and credential of the node you want to enum
$server = New-SWServer 10.10.0.46 3261 root starwind

$server.Connect()

if ( $server.Connected )
{
    write-host "Targets:"
    foreach($target in $server.Targets)
    {
        $target
    }
    
    write-host "Devices:"
    foreach($device in $server.Devices)
    {
        $device
    }
    
    $server.Disconnect()
}

By running this script, you should have the following result:

If I run the same script against 10.10.0.47, I get this information:

Connect to vSphere environment

Now that the storage solution is ready, I can connect it to vSphere. So, I connect to the vSphere Web Client and edit the targets of my software iSCSI adapter. I add the following static target servers.

Next, if I navigate to the Paths tab, both paths should be marked as Active.

Now you can create a new datastore and use the previously created StarWind image file.

Conclusion

StarWind VSAN Free provides an inexpensive software storage solution for a POC or a small environment. You only need to buy hardware and deploy the product as we've seen in this topic. If you use Hyper-V, you can deploy StarWind VSAN Free on the Hyper-V nodes to get a hyperconverged solution. Just don't forget that the StarWind VSAN Free edition doesn't provide technical assistance (except on the StarWind forum) or a management console (just a 30-day trial).

Connect vSphere 6.5 to iSCSI storage NAS

When you implement an ESXi cluster, you also need shared storage to store the virtual machine files. It can be a NAS, a SAN or vSAN. When using a NAS or SAN, you can connect vSphere by using Fibre Channel (FC), FC over Ethernet (FCoE) or iSCSI. In this topic, I'd like to share with you how to connect vSphere to iSCSI storage such as a NAS/SAN.

The NAS used for this topic is a Synology RS815. But from the vSphere perspective, the configuration is the same for other NAS/SAN models.

Understand the types of iSCSI adapters

Before deploying an iSCSI solution, it is important to understand that several types of iSCSI network adapters exist:

  • Software iSCSI adapters
  • Hardware iSCSI adapters

The software iSCSI adapter is managed by the VMkernel. This solution binds to standard network adapters without requiring additional adapters dedicated to iSCSI. However, because this type of iSCSI adapter is handled by the VMkernel, it can increase CPU overhead on the host.

On the other hand, hardware iSCSI adapters are dedicated physical iSCSI adapters that can offload the iSCSI and related network workloads from the host. There are two kinds of hardware iSCSI adapters:

  • Independent hardware iSCSI adapters
  • Dependent hardware iSCSI adapters

The independent hardware iSCSI adapter is a third-party adapter that doesn't depend on the vSphere network. It implements its own networking, iSCSI configuration and management interfaces. This kind of adapter is able to offload the iSCSI workloads from the host. In other words, this is a Host Bus Adapter (HBA).

The dependent hardware iSCSI adapter is a third-party adapter that depends on the vSphere network and management interfaces. This kind of adapter is also able to offload the iSCSI workloads from the host. In other words, this is a hardware-accelerated adapter.

For this topic, I’ll implement a Software iSCSI adapter.

Architecture overview

Before writing this topic, I created a vNetwork Distributed Switch (vDS). You can review the vDS implementation in this topic. The NAS is connected to two switches with VLAN 10 and VLAN 52 (VLAN 10 is also used for SMB and NFS for vacation movies, but it is a lab, right :)). From the vSphere perspective, I'll create one software iSCSI adapter with two iSCSI paths.

The vSphere environment is composed of two ESXi nodes in a cluster and vCenter (VCSA) 6.5. Each host has two standard network adapters on which all traffic is converged. From the NAS perspective, there are three LUNs: two for datastores and one for the content library.

NAS configuration

In the Synology NAS, I have created three LUNs called VMStorage01, VMStorage02 and vSphereLibrary.

Then I have created four iSCSI targets (two for each ESXi host). This ensures that each node connects to the NAS with two iSCSI paths. Each iSCSI target is mapped to all the LUNs previously created.

Connect vSphere to iSCSI storage

Below you can find the vDS schema of my configuration. At this point, I have one port group dedicated to iSCSI. I also create a second port group for iSCSI.

Configure iSCSI port group

Once you have created your port groups, you need to change the teaming and failover configuration. In the above configuration, each node has two network adapters, and each network adapter is attached to an uplink.

Edit the settings of the port group and navigate to Teaming and failover. In the Failover order list, set one uplink to unused. For the first port group, I set Uplink 2 to unused.

For the second port group, I set Uplink 1 to unused.

Add VMKernel adapters

From the vDS summary pane, click on Add and Manage Hosts. Then edit the VMKernel network adapters for all hosts. Next, click on new adapter.

Next, select the first iSCSI network and click on next.

On the next screen, just click on next.

Then specify the IP address of the VMKernel adapter.

Repeat these steps for the other nodes.

You can repeat this section for the second VMKernel iSCSI adapter. When you have finished your configuration, you should have something like that:

Add and configure the software iSCSI adapter

In the host's Storage Adapters view, add a software iSCSI adapter if one does not already exist. Then select it and navigate to Network Port Binding. Click on the Add button and select both VMKernel network adapters.

Next, navigate to Targets and select dynamic or static discovery depending on your needs. I choose Static Discovery and click on Add. Create one entry for each path with the right IP address and target name.

When the configuration is finished, I have two targets as below. Run a rescan before continuing.
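
If you prefer to script this step, a PowerCLI sketch could look like the following; the host name, portal IP and IQN are examples for illustration, not values from this lab.

# Add a static target to the software iSCSI adapter and rescan (names and IPs are examples)
$vmhost = Get-VMHost -Name 'esx01.lab.local'
$hba = Get-VMHostHba -VMHost $vmhost -Type IScsi | Where-Object { $_.Model -like '*Software*' }
New-IScsiHbaTarget -IScsiHba $hba -Address '10.10.10.20' -Type Static -IScsiName 'iqn.2000-01.com.synology:target-1'
Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null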

If you come back to Network Port Binding, both VMKernel network adapters should be marked as Active.

In the Paths tab, you should have two paths for each LUN.

Create a datastore

Now that the hosts have visibility of the LUNs, we can create datastores. In vCenter, navigate to the Storage tab and right-click on the datacenter (or folder). Then select New Datastore.

Select the datastore type. I create a VMFS datastore.

Then specify a name for the datastore and select the right LUN.

Next, choose the VMFS version. I choose VMFS 6.

Next specify the partition configuration as the datastore size, the block size and so on.

Once the wizard is finished, you should have your first datastore to store VM files.

Change the multipath algorithm

By default, the multipathing policy is set to Most Recently Used, so only the last used path carries I/O. To leverage both VMKernel adapters simultaneously, you have to change the multipathing policy. To change the policy, click on the datastore, select the host and choose Edit Multipathing.

Then select Round Robin to use both links. Once it is done, all paths should be marked as Active (I/O).
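
To apply Round Robin to every disk LUN of a host in one pass, a PowerCLI sketch along these lines can help; the host name is an example, and you should check the policy against your storage vendor's recommendation first.

# Set Round Robin on all disk LUNs of the host (this also touches local disks; filter further if needed)
Get-VMHost -Name 'esx01.lab.local' |
    Get-ScsiLun -LunType disk |
    Where-Object { $_.MultipathPolicy -ne 'RoundRobin' } |
    Set-ScsiLun -MultipathPolicy RoundRobin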

Repeat this step for each datastore.

Create a datastore cluster

To get access to the Storage DRS feature, you can create a datastore cluster. If you have several datastores dedicated to VMs, you can add them to a datastore cluster and use Storage DRS to optimize resource usage. In the below screenshot, I have two datastores for VMs (VMStorage01 and VMStorage02) and the content library. So, I'm going to create a datastore cluster that uses VMStorage01 and VMStorage02.

Navigate to Datastore Clusters pane and click on New Datastore Cluster.

Give a name to the datastore cluster and choose whether you want to enable Storage DRS.

Choose the Storage DRS automation level and options.

In the next screen, you can enable the I/O metric so that SDRS recommendations take I/O workloads into consideration. Then you set the space utilization and I/O latency thresholds.

Next select ESXi hosts that need access to the datastore cluster.

Choose the datastores that will be used in the datastore cluster.

Once the datastore cluster is created, you should have something like that:

Now, when you create a virtual machine, you can choose the datastore cluster, and vSphere automatically stores the VM files on the least used datastore (according to the Storage DRS policy).

Hyper-V converged networking and storage design

Since Windows Server 2012, converged networking has been supported by Microsoft. This concept enables an Ethernet adapter to be shared by several network traffics. Before that, it was recommended to dedicate a network adapter per traffic type (backup, cluster and so on).

So thanks to converged networking, we can use a single Ethernet adapter (or a teaming) to carry several network traffics. However, if the design is not good, the link can quickly reach its bandwidth limit. So when designing converged networking, keep the QoS (Quality of Service) settings in mind: they ensure that each traffic gets the appropriate bandwidth.

When you implement converged networking, you can play with a setting called QoS weight. You can assign a value from 1 to 100: the higher the value, the higher the priority of the associated traffic.

When you design networks for Hyper-V/VMM, you usually have four networks for hosts: Host Fabric Management, Live Migration, Cluster and Backup. I have detailed some examples in the next part, Common network requirements. The other network traffics are related to virtual machines; usually you have at least one network for the fabric virtual machines.

Common network requirements

Host Management networks

In the below table, you can find an example of networks for the Hyper-V hosts. I have also specified the VLAN and the QoS weight. The Host Fabric Management network has its VLAN set to 0 because packets will be untagged. In this way, even if my Hyper-V host has no VLAN configuration, it can answer DHCP requests. This is useful when deploying hosts with Bare-Metal deployment from Virtual Machine Manager.

| Network Name | VLAN | Subnet | Description | QoS weight |
| --- | --- | --- | --- | --- |
| Host Fabric Management | 0 | 10.10.0.0/24 | LAN for host management (AD, RDP …) | 10 |
| Live Migration | 100 | 10.10.100.0/24 | Live Migration network | 40 |
| Host Cluster | 101 | 10.10.101.0/24 | Cluster heartbeat network | 10 |
| Host Backup | 102 | 10.10.102.0/24 | Backup network | 40 |

In the above configuration, Live Migration and Backup traffics have a higher priority than Host Fabric Management and Cluster traffics, because Live Migration and Backup require more bandwidth.
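
As an illustration, a minimal PowerShell sketch of how these weights could be applied on a Hyper-V host with a converged virtual switch might look like this; the team name and switch name are assumptions, and only two of the four host vNICs are shown.

# Create a converged vSwitch on top of the team, using weight-based minimum bandwidth
New-VMSwitch -Name 'SW-Converged' -NetAdapterName 'HostTeam' -MinimumBandwidthMode Weight -AllowManagementOS $false

# Host Fabric Management vNIC: untagged (VLAN 0), weight 10
Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName 'SW-Converged'
Set-VMNetworkAdapter -ManagementOS -Name 'Management' -MinimumBandwidthWeight 10

# Live Migration vNIC: VLAN 100, weight 40
Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName 'SW-Converged'
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'LiveMigration' -Access -VlanId 100
Set-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -MinimumBandwidthWeight 40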

VM Workloads

In the below table, you can find an example of VM networks. In this example, I have isolated the networks for the fabric VMs, the DMZ VMs, and their cluster and backup traffics. In this way I can apply a QoS setting to each type of traffic. Here, backup traffics have a higher weight than the other networks because they use more bandwidth.

| Network Name | VLAN | Subnet | Description | QoS weight |
| --- | --- | --- | --- | --- |
| VM Fabric | 1 | 10.10.1.0/24 | Network for the fabric VM | 10 |
| VM DMZ | 2 | 10.10.2.0/24 | Network for VM in DMZ | 10 |
| VM Fabric Cluster | 50 | 10.10.50.0/24 | Cluster network for fabric VM | 10 |
| VM DMZ Cluster | 51 | 10.10.51.0/24 | Cluster network for DMZ VM | 10 |
| VM Fabric Backup | 60 | 10.10.60.0/24 | Backup network for fabric VM | 30 |
| VM DMZ Backup | 61 | 10.10.61.0/24 | Backup network for DMZ VM | 30 |

Hyper-V converged networking and storage designs

Now that you have your network requirements on paper, we can work on the storage part. First you have to choose the storage solution: FC SAN, iSCSI SAN or Software-Defined Storage?

To choose the storage solution, you must look at your needs and your existing infrastructure. If you already have an FC SAN with good performance, keep this solution to save money. If you are starting a new infrastructure and you only want to store VMs on the storage solution, maybe you can implement Software-Defined Storage.

In the next sections, I have drawn a schema for each commonly implemented storage solution. They certainly won't suit all needs, but they help to understand the principles.

Using Fibre Channel storage

Fibre Channel (the protocol, not the fiber-optic cables) is used to connect a server to the storage solution (SAN: Storage Area Network) over a high-speed network. Usually, fiber-optic cables are used to interconnect the SAN with the server. The server adapters to which the fiber-optic cables are connected are called HBAs (Host Bus Adapters).

In the below schema, the parent partition traffics are represented by green links while VM traffics are orange.

On the Ethernet side, I implement two dynamic teamings with two physical NICs each:

  • Host Management traffics (Live-Migration, Cluster, Host Backup, host management);
  • VM Workloads (VM Fabric, VM DMZ, VM Backup and so on).

On the storage side, I also split the parent partition traffics and the VM traffics:

  • The parent partition traffics are mainly related to Cluster Shared Volumes that store virtual machines;
  • The VM traffics can be LUNs mounted directly in VMs for guest cluster usage (witness disk), database servers and so on.

To mount LUNs directly in VMs, you need HBAs with NPIV enabled and you also need to create a virtual SAN on the Hyper-V host. Then you have to deploy MPIO inside the VMs. For more information, you can read this TechNet topic.

To support multipathing on the parent partition, it is also necessary to enable MPIO on the Hyper-V host.

For a production environment, you need four 10Gb Ethernet NICs and four HBAs. This is the most expensive solution.

Using iSCSI storage

iSCSI (Internet Small Computer System Interface) is a protocol that carries SCSI commands over IP networks from the server to the SAN. This solution is less performant than Fibre Channel, but it is also less expensive.

The network design is the same as in the previous solution. Regarding the storage, I isolate the parent partition traffics and the VM workloads. MPIO is implemented for the CSV to support multipathing. When VMs need direct access to storage, I deploy two vNICs per VM, bound to the physical NICs dedicated to VM volumes, and then deploy MPIO inside the VMs. Finally, I prefer to use dedicated switches between the hosts and the SAN.

For each Hyper-V host, you need eight 10Gb Ethernet adapters.

Using Software-Defined Storage

This solution is based on a software storage solution (such as Scale-Out File Servers).

The network is the same as in the previous solutions. On the storage side, at least two RDMA-capable NICs are required for better performance. SMB 3 over RDMA (Remote Direct Memory Access) increases throughput and decreases the CPU load; this solution is also called SMB Direct. To support multipathing, SMB Multichannel must be enabled (not teaming!).
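
As a quick sanity check, the following PowerShell sketch confirms that RDMA-capable interfaces are detected and shows the SMB Multichannel connections in use; run it on a Hyper-V host while storage traffic is flowing.

# List client interfaces that report RDMA capability
Get-SmbClientNetworkInterface | Where-Object RdmaCapable
# Show the SMB Multichannel connections currently established toward the file servers
Get-SmbMultichannelConnection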

When a VM needs a witness disk or another shared volume for guest clustering, it is possible to use Shared VHDX to share a virtual hard drive between virtual machines.

This solution is less expensive because software-defined storage is cheaper than a SAN.

What about Windows Server 2016

In Windows Server 2016, you will be able to converge tenant and RDMA traffic on the same NICs to optimize costs, enabling high performance and network fault tolerance with only two NICs instead of four.
