Connect vSphere 6.5 to iSCSI storage NAS

When you implement an ESXi cluster, you also need shared storage to store the virtual machine files. It can be a NAS, a SAN, or vSAN. When using a NAS or SAN, you can connect vSphere by using Fibre Channel (FC), FC over Ethernet (FCoE), or iSCSI. In this topic, I’d like to share with you how to connect vSphere to iSCSI storage such as a NAS or SAN.

The NAS model used for this topic is a Synology RS815 NAS. But from a vSphere perspective, the configuration is the same for other NAS/SAN models.

Understand the types of iSCSI adapters

Before deploying an iSCSI solution, it is important to understand that several types of iSCSI network adapters exist:

  • Software iSCSI adapters
  • Hardware iSCSI adapters

The software iSCSI adapter is managed by the VMkernel. This solution lets you bind to standard network adapters without buying additional adapters dedicated to iSCSI. However, because this type of iSCSI adapter is handled by the VMkernel, it can increase CPU overhead on the host.

On the other hand, hardware iSCSI adapters are dedicated physical iSCSI adapters that can offload the iSCSI and related network workloads from the host. There are two kinds of hardware iSCSI adapters:

  • Independent hardware iSCSI adapters
  • Dependent hardware iSCSI adapters

The independent hardware iSCSI adapter is a third-party adapter that doesn’t depend on vSphere networking. It implements its own networking, iSCSI configuration, and management interfaces. This kind of adapter is able to offload the iSCSI workloads from the host. In other words, this is a Host Bus Adapter (HBA).

The dependent hardware iSCSI adapter is a third-party adapter that depends on the vSphere network and management interfaces. This kind of adapter is also able to offload the iSCSI workloads from the host. In other words, this is a hardware-accelerated adapter.

For this topic, I’ll implement a Software iSCSI adapter.
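If you prefer the command line, the software iSCSI adapter can also be enabled from the ESXi Shell with esxcli. This is a minimal sketch; the adapter name (for example, vmhba64) is assigned by the host and may differ on yours:

```shell
# Enable the software iSCSI adapter on the host
# (a vmhba, e.g. vmhba64, is created if one does not already exist)
esxcli iscsi software set --enabled=true

# Confirm it is enabled and list the iSCSI adapters
esxcli iscsi software get
esxcli iscsi adapter list
```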

Architecture overview

Before writing this topic, I created a vNetwork Distributed Switch (vDS). You can review the vDS implementation in this topic. The NAS is connected to two switches with VLAN 10 and VLAN 52 (VLAN 10 is also used for SMB and NFS for vacation movies, but it’s a lab, right? :)). From the vSphere perspective, I’ll create one software iSCSI adapter with two iSCSI paths.

The vSphere environment is composed of two ESXi nodes in a cluster and vCenter (VCSA) 6.5. Each host has two standard network adapters where all traffic is converged. On the NAS side, there are three LUNs: two for datastores and one for a content library.

NAS configuration

In the Synology NAS, I have created three LUNs called VMStorage01, VMStorage02 and vSphereLibrary.

Then I created four iSCSI targets (two for each ESXi host). This ensures that each node connects to the NAS with two iSCSI paths. Each iSCSI target is mapped to all the LUNs previously created.

Connect vSphere to iSCSI storage

Below you can find the vDS schema of my configuration. At this point, I have one port group dedicated to iSCSI. I also created a second port group for iSCSI.

Configure iSCSI port group

Once you have created your port groups, you need to change the teaming and failover configuration. In the above configuration, each node has two network adapters, and each network adapter is attached to an uplink.

Edit the settings of the port group and navigate to Teaming and failover. In the Failover order list, set one uplink to unused. For the first port group, I set Uplink 2 to unused.

For the second port group, I set Uplink 1 to unused.
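Note that on a vDS this teaming configuration is done through the Web Client as shown above; esxcli can only set the failover order for standard vSwitch port groups. If you were using a standard vSwitch, a scripted sketch would look like this (port group and vmnic names are examples):

```shell
# Pin each iSCSI port group to a single active uplink; the other
# physical NIC is left out of the active list
# (port group and vmnic names are examples from this lab)
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name "iSCSI-01" --active-uplinks vmnic0
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name "iSCSI-02" --active-uplinks vmnic1
```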

Add VMKernel adapters

From the vDS summary pane, click Add and Manage Hosts. Then choose to manage the VMkernel network adapters for all hosts. Next, click New adapter.

Next, select the first iSCSI port group and click Next.

On the next screen, just click on next.

Then specify the IP address of the VMkernel adapter.

Repeat these steps for the other nodes.

You can repeat this section for the second iSCSI VMkernel adapter. When you have finished the configuration, you should have something like this:
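For reference, the same VMkernel adapters can be created from the ESXi Shell. This sketch assumes a free port ID on the iSCSI distributed port group; the vDS name, port ID, and IP addresses are examples:

```shell
# Create a VMkernel adapter attached to a distributed switch port
# (dvs name and port ID are examples)
esxcli network ip interface add --interface-name vmk1 \
    --dvs-name dvSwitch --dvport-id 16

# Assign a static IPv4 address to the new adapter (example address)
esxcli network ip interface ipv4 set --interface-name vmk1 \
    --type static --ipv4 10.10.10.11 --netmask 255.255.255.0
```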

Add and configure the software iSCSI adapter

From the Storage Adapters pane of the host, add a software iSCSI adapter if one does not already exist. Then select the software iSCSI adapter and navigate to Network Port Binding. Click the Add button and select both VMkernel network adapters.
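The port binding step can also be scripted with esxcli. The adapter and VMkernel interface names below are examples:

```shell
# Bind both iSCSI VMkernel adapters to the software iSCSI adapter
# (vmhba64, vmk1 and vmk2 are examples)
esxcli iscsi networkportal add --adapter vmhba64 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba64 --nic vmk2

# Verify the bindings
esxcli iscsi networkportal list --adapter vmhba64
```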

Next, navigate to Targets and select dynamic or static discovery, depending on your needs. I chose Static Discovery and clicked Add. Create one entry for each path with the right IP address and target name.

When the configuration is finished, I have two targets, as below. Run a rescan before continuing.
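As a sketch, the static targets and the rescan can also be done from the command line. The IP addresses and IQNs below are placeholders; use the target names published by your NAS:

```shell
# Add one static target entry per iSCSI path
# (addresses and IQNs are examples)
esxcli iscsi adapter discovery statictarget add --adapter vmhba64 \
    --address 10.10.10.100:3260 --name iqn.2000-01.com.synology:nas.Target-1
esxcli iscsi adapter discovery statictarget add --adapter vmhba64 \
    --address 10.10.52.100:3260 --name iqn.2000-01.com.synology:nas.Target-2

# Rescan the adapter so the new LUNs are detected
esxcli storage core adapter rescan --adapter vmhba64
```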

If you come back to Network Port Binding, both VMkernel network adapters should be marked as Active.

In the Paths tab, you should have two paths for each LUN.

Create a datastore

Now that the hosts have visibility of the LUNs, we can create a datastore. In vCenter, navigate to the storage tab and right-click the datacenter (or a folder). Then select New Datastore.

Select the datastore type. I create a VMFS datastore.

Then specify a name for the datastore and select the right LUN.

Next, choose the VMFS version. I choose VMFS 6.

Next, specify the partition configuration, such as the datastore size, the block size, and so on.

Once the wizard is finished, you should have your first datastore to store VM files.

Change the multipath algorithm

By default, the multipath policy is set to Most Recently Used, so only one path carries I/O at a time. To leverage both VMkernel adapters simultaneously, you have to change the multipath policy. To change it, click the datastore, select the host, and choose Edit Multipathing.

Then select Round Robin to use both links. Once it is done, all paths should be marked as Active (I/O).

Repeat this step for each datastore.
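The path selection policy can also be changed per device with esxcli. The NAA identifier below is a placeholder; list your devices first to find the real ones:

```shell
# List NMP devices to find the NAA identifier of each iSCSI LUN
esxcli storage nmp device list

# Set the path selection policy to Round Robin for a LUN (example NAA ID)
esxcli storage nmp device set \
    --device naa.60014050000000000000000000000001 --psp VMW_PSP_RR
```

Alternatively, `esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR` makes Round Robin the default policy for that SATP, so newly discovered LUNs pick it up automatically.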

Create a datastore cluster

To get access to the Storage DRS feature, you can create a datastore cluster. If you have several datastores dedicated to VMs, you can add them to a datastore cluster and use Storage DRS to optimize resource usage. In the below screenshot, I have two datastores for VMs (VMStorage01 and VMStorage02) and the content library. So, I’m going to create a datastore cluster using VMStorage01 and VMStorage02.

Navigate to Datastore Clusters pane and click on New Datastore Cluster.

Give a name to the datastore cluster and choose whether to enable Storage DRS.

Choose the Storage DRS automation level and options.

On the next screen, you can enable the I/O metric so that SDRS recommendations take the I/O workloads into consideration. Then set the thresholds for the free space to leave on the datastores and for latency.

Next select ESXi hosts that need access to the datastore cluster.

Choose the datastores that will be used in the datastore cluster.

Once the datastore cluster is created, you should have something like that:

Now, when you create a virtual machine, you can choose the datastore cluster, and vSphere automatically stores the VM files on the least-used datastore (according to the Storage DRS policy).

About Romain Serre

Romain Serre works in Lyon as a Senior Consultant. He focuses on Microsoft technologies, especially Hyper-V, System Center, storage, networking, and Cloud OS technologies such as Microsoft Azure and Azure Stack. He is an MVP and a Microsoft Certified Solutions Expert (MCSE Server Infrastructure & Private Cloud), certified on Hyper-V and Microsoft Azure (Implementing a Microsoft Azure Solution).



