Monitor and troubleshoot VMware vSAN performance issue
When you deploy VMware vSAN in a vSphere environment, the solution comes with several tools to monitor, find performance bottlenecks and troubleshoot vSAN issues. All the information that I'll introduce in this topic is built into vCenter. Unfortunately, not all vSAN configuration, metrics and alerts are available yet from the HTML5 client, so the screenshots were taken from the VMware vCenter Flash-based web client.

Check the overall health of VMware vSAN

A lot of information is available from the vSAN cluster pane. VMware has added a dedicated tab for vSAN and some performance counters. In the below screenshot, I show the overall vSAN health. VMware has included several tests to validate the cluster health, such as hardware compatibility, the network, the physical disks, the cluster and so on.

The hardware compatibility list is downloaded from VMware to validate whether vSAN is supported on your hardware. If you take a look at the below screenshot, you can see that my lab is not really supported because my HBAs are not referenced by VMware. Regarding the network, several tests are also run, such as the IP configuration, the MTU, whether ping is working and so on. Thanks to this single pane, we are able to check whether the cluster is healthy or not.
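
If you prefer to check this from a script rather than from the Flash client, the same kind of information can be pulled from the vSphere API. Below is a minimal pyVmomi sketch that asks each host of a cluster for its vSAN node status; the vCenter address, credentials and cluster name are placeholders, and I assume the QueryHostStatus() call of the host vsanSystem, so treat it as a starting point rather than as the official health check (the full tests shown above are run by the dedicated vSAN health service).

```python
# Minimal sketch: query the vSAN node status of every host in a cluster with pyVmomi.
# Placeholders: vCenter address, credentials and cluster name.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "vSAN-Cluster")
    for host in cluster.host:
        status = host.configManager.vsanSystem.QueryHostStatus()
        # 'health' is normally 'healthy' when the node is a happy cluster member
        print(f"{host.name}: vSAN node health = {status.health}")
finally:
    Disconnect(si)
```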

In the capacity section, you get information about the storage consumption and the deduplication ratio.

In the same pane you also get a chart which gives you the storage usage by object type (before deduplication and compression).

The next pane is useful when a node has been down because of an outage or for updates. When you restart a node in a vSAN cluster, it must resynchronize its data from the other nodes: while the node was down, a lot of data was changed on the storage, and the node must resync this data. This pane indicates which vSAN objects must be resynced to comply with the chosen RAID level and FTT (Failures To Tolerate). During a resync, this pane shows how many components must be resynced, the remaining bytes and an estimated time for the process. You can also manage the resync throttling.

In the Virtual Objects pane, you can get the health state of each vSAN object. You can also check whether the object is compliant with the VM storage policy that you have defined (FTT, RAID level, cache pinning, etc.). Moreover, in the physical disk placement tab, you also get the component placement and which components are active or not. In my lab, I have a two-node vSAN cluster and my storage policy defines RAID 1 with FTT=1. So for each object, I have three components: two copies of the data and a witness.

In the Physical Disks pane, you can list the physical disks involved in vSAN for each node. You can also see which components are stored on which physical disks.

In the Proactive Tests pane, you can test a VM creation to validate that everything is working. For example, this test once helped me troubleshoot an MTU issue between hosts and switches.

vSAN performance counters

Sometimes you get poorer performance than expected, so you need to find the performance bottleneck. The performance counters can help you troubleshoot the issue. In the Performance tab you get the classic performance counters for CPU, memory and so on.
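
The vSAN-specific charts are served by the vSAN performance service, but the classic CPU and memory counters mentioned above can also be collected with pyVmomi through the standard PerformanceManager. The sketch below reads the real-time cpu.usage.average counter of a VM; it reuses the connection 'si' from the previous sketch and the VM name is a placeholder.

```python
# Minimal sketch: read a real-time performance counter of a VM with pyVmomi.
# Assumes an existing ServiceInstance 'si' (see the previous sketch); "MyVM" is a placeholder.
from pyVmomi import vim

content = si.RetrieveContent()
perf = content.perfManager

# Build a lookup of "group.name.rollup" -> counter id
counters = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
            for c in perf.perfCounter}

vm_view = content.viewManager.CreateContainerView(content.rootFolder,
                                                  [vim.VirtualMachine], True)
vm = next(v for v in vm_view.view if v.name == "MyVM")

spec = vim.PerformanceManager.QuerySpec(
    entity=vm,
    metricId=[vim.PerformanceManager.MetricId(
        counterId=counters["cpu.usage.average"], instance="")],
    intervalId=20,      # real-time sampling interval (20 seconds)
    maxSample=5)
for result in perf.QueryPerf(querySpec=[spec]):
    for series in result.value:
        print("cpu.usage.average (hundredths of %):", series.value)
```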

VMware has also added two sections dedicated to vSAN performance counters: vSAN – Virtual Machine Consumption and vSAN – Backend. The below screenshot shows you the first section. It is useful because it indicates the throughput, the latency and the congestion.

The other section presents performance counters related to the backend. You can see the throughput consumed by resync jobs, and the IOPS and latency of vSAN.

Upgrade VMware vSAN to 6.6
Yesterday VMware released vSAN 6.6. vSAN 6.6 brings a lot of new features and improvements such as encryption, increased performance and simplified management. You can get the release notes here. Currently my lab is running vSAN 6.5 and I have decided to upgrade to vSAN 6.6. In this topic I'll show you how to upgrade VMware vSAN from 6.5 to 6.6.

Step 1: upgrade your vCenter Server Appliance

In my lab, I have deployed a vCenter Server Appliance (VCSA). To update the VCSA, I connect to the Appliance Management interface (https://<IP or DNS of VCSA>:5480). Then I navigate to Update and click on Check Updates from repository.

Once the update is installed, click on the Summary tab and reboot the VCSA. You should then be running the new version.

Step 2: Update ESXi nodes

Manage patch baseline in Update Manager

My configuration consists of two ESXi 6.5 nodes and one vSAN witness appliance 6.5. To update these hosts, I use Update Manager. To create or edit a baseline, open Update Manager from the "hamburger" menu.

I have created an update baseline called ESXi 6.5 updates.

This baseline is dynamic, which means that patches are added automatically according to the defined criteria.

The criteria are any patches for the product VMware ESXi 6.5.0.

Update nodes

Once the baseline is created, you can attach it to the nodes. Navigate to Hosts and Clusters, select the cluster (or a node) and open the Update Manager tab. In this tab, you can attach the baseline. Then you can click on Scan for Updates to verify whether the node is compliant with the baseline (in other words, whether the node has the latest patches).

My configuration is specific because it is a lab. I run a configuration which is absolutely not supported, because the witness appliance is hosted on the same vSAN cluster. To avoid issues, I manually put the node I want to update into maintenance mode and I move the VMs to the other node. Then I click on Remediate in the Update Manager tab.

Next I select the baseline and I click on next.

Then I select the target node.

Two patches are not installed on the node. These patches are related to vSAN 6.6.

I don't want to schedule this update for later, so I just click on Next.

In the host remediation options tab, you can change the VM power state. I prefer not to change the VM power state and to run a vMotion instead.

In the next screen, I choose to disable the HA admission control as recommended by the wizard.

Next, you can run a pre-check remediation. Once you have validated the options, you can click on Finish to install the updates on the node.

The node will be rebooted, and when the update is finished you can exit maintenance mode. I repeat these steps for the second node and the witness appliance.

Note: in a production infrastructure, you just have to run Update Manager at the cluster level and not on each node individually. I put the nodes into maintenance mode and move the VMs manually only because my configuration is specific and not supported.
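
To quickly verify after remediation that every node really runs the expected build, you can read the product information of each host through the API. This is a small pyVmomi sketch; it assumes an existing connection 'si' (as in the monitoring sketches above) and the cluster name is a placeholder.

```python
# Minimal sketch: print the ESXi version and build of every host in a cluster.
# Assumes an existing pyVmomi connection 'si'; the cluster name is a placeholder.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vSAN-Cluster")
for host in cluster.host:
    about = host.config.product      # vim.AboutInfo of the host
    print(f"{host.name}: {about.fullName} (build {about.build})")
```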

Step 3: Upgrade disk configurations

Now that the nodes and vCenter are updated, we have to upgrade the on-disk format version. To upgrade the disks, select your cluster and navigate to Configure | General. Then run a Pre-check Upgrade to validate the configuration.

If the pre-check is successful, you should see something like below. Then click on Upgrade.

Then the disk format upgrade runs…

Once the upgrade is finished, all disks should be on on-disk format version 5.0.
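
If you want to double-check the on-disk format from the command line, esxcli vsan storage list on each host lists the claimed devices and their vSAN metadata. Below is a rough sketch that runs the command over SSH with paramiko; host names and credentials are placeholders, SSH must be enabled on the hosts, and I simply print the raw output instead of guessing at field names.

```python
# Rough sketch: run 'esxcli vsan storage list' on each node over SSH.
# Placeholders: host names, user and password. SSH must be enabled on the hosts.
import paramiko

hosts = ["esxi01.lab.local", "esxi02.lab.local"]
for host in hosts:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="root", password="password")
    try:
        _, stdout, _ = client.exec_command("esxcli vsan storage list")
        print(f"===== {host} =====")
        print(stdout.read().decode())   # look for the disk format version in the output
    finally:
        client.close()
```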

That’s all. Now you can enjoy VMware vSAN 6.6.

Deploy a 2-node vSAN cluster
A 2-node hyperconverged cluster is useful in a branch office where you need high availability, or for a small infrastructure. With a 2-node hyperconverged solution, you don't need to leverage a NAS or a SAN for the shared storage. So, the hardware footprint is reduced and the manageability is improved, because hyperconverged solutions are easier to use than a standard infrastructure with a SAN. VMware provides a Software-Defined Storage solution called vSAN. vSAN can be deployed from 2 nodes up to 64 nodes. A 2-node cluster should be used for ROBO (Remote Office and Branch Office) scenarios.

A 2-node cluster requires a Witness Appliance, which VMware provides for free as a virtual appliance. The Witness Appliance is based on ESXi; this is the first time that VMware supports a nested ESXi scenario in production. This topic describes how to deploy a 2-node vSAN cluster and its witness appliance.

Why you need a witness appliance

vSAN is something like RAID over the network. vSAN currently supports RAID 1 and RAID 5/6. When you deploy a 2-node vSAN cluster, only RAID 1 is available. When a VM object such as a VMDK is stored in vSAN, the data is written to one node and replicated to another (like classic RAID 1 across two physical disks). So, two components will be created: the original data and the replica.

In a vSAN environment, a storage object such as a VMDK needs more than half of its components alive to be ready. So, in the above vSAN cluster, if a node is down, you lose half of the VMDK components and the VMDK is not ready anymore. Not really a resilient solution :).

To solve this issue, VMware has introduced the vSAN Witness Appliance. Thanks to this appliance, in addition to the two data components, a witness component will be created. So even if you lose a node or the witness appliance, more than half of the components remain available.

The Witness Appliance must not be located in the 2-node vSAN cluster itself; that is not supported by VMware. You can deploy a third ESXi host and run the Witness Appliance on it, but the witness appliance must have access to the vSAN network.

The witness appliance is provided by VMware as an OVA file. It is free, and a special license is provided with the appliance. So, it is really easy to deploy.

Requirements

To deploy this infrastructure, you need two nodes (physical or virtual), each with at least one storage device for the cache and one storage device for the capacity. If you deploy an all-flash solution, a 10Gb/s network is recommended for the vSAN traffic. On my side, I have deployed the 2-node vSAN cluster on the following hardware for each node:

  • 1x Asrock D1520D4i (Xeon 1520) (NIC: 2x 1Gb Intel i210 for VM and management traffic)
  • 4x16GB DDR4 ECC Unregistered
  • 1x Intel NVMe 600T 128GB (Operating System)
  • 1x Intel S3610 400GB (Cache)
  • 1x Samsung SM863 480GB (Capacity)
  • 1x Intel x520-DA2 for the vSAN traffic and vMotion

Both nodes are already in a cluster and connected to a Synology NAS. Currently, all VMs are stored on the Synology NAS. The two nodes are directly connected via 10Gb adapters.

The storage adapter provided by the D1520D4i motherboard is not in the vSAN HCL. I strongly recommend checking the HCL before buying hardware for production.

To compute the memory required by vSAN, you can use this formula provided by VMware:

BaseConsumption + (NumDiskGroups x ( DiskGroupBaseConsumption + (SSDMemOverheadPerGB x SSDSize)))

  • BaseConsumption: This is the fixed amount of memory consumed by vSAN per ESXi host. This is currently 3 GB. This memory is mostly used to house the vSAN directory, per host metadata, and memory caches.
  • NumDiskGroups: This is the number of disk groups in the host, should range from 1 to 5.
  • DiskGroupBaseConsumption: This is the fixed amount of memory consumed by each individual disk group in the host. This is currently 500 MB. This is mainly used to allocate resources used to support inflight operations on a per disk group level.
  • SSDMemOverheadPerGB: This is the fixed amount of memory we allocate for each GB of SSD capacity. This is currently 2 MB in hybrid systems and is 7 MB for all flash systems. Most of this memory is used for keeping track of blocks in the SSD used for write buffer and read cache.
  • SSDSize: The size of the SSD in GB. (cache)

So, in my case:

3 GB + (1 x (0.5 GB + (0.007 x 400 GB))) = 6.3 GB

My node requires at least 6.3 GB of free memory for vSAN.
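
To avoid redoing this arithmetic by hand for each design, the formula translates directly into a small helper. The sketch below is simply the published formula in Python; the values of my node are used as a check.

```python
# vSAN host memory overhead, straight from the VMware formula above.
BASE_CONSUMPTION_GB = 3.0        # fixed per-host consumption
DISKGROUP_BASE_GB = 0.5          # fixed per-disk-group consumption
SSD_OVERHEAD_PER_GB = {"hybrid": 0.002, "allflash": 0.007}   # 2 MB / 7 MB per GB of cache

def vsan_memory_gb(num_disk_groups, ssd_size_gb, mode="allflash"):
    per_group = DISKGROUP_BASE_GB + SSD_OVERHEAD_PER_GB[mode] * ssd_size_gb
    return BASE_CONSUMPTION_GB + num_disk_groups * per_group

# My node: one disk group with a 400 GB cache device, all-flash
print(vsan_memory_gb(1, 400, "allflash"))   # -> 6.3
```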

Regarding the vSAN witness appliance (version 6.2), you can download the OVA here. In my deployment, I will do something that is not supported: I will place the witness appliance in the 2-node vSAN cluster itself. This is absolutely not supported in production, so don't reproduce this in your production environment. Deploy the witness appliance on a third ESXi host instead.

I also recommend the following PDF:

Deploy the vSAN witness appliance

To deploy the witness appliance, open the vSphere Web Client and right-click on the cluster or node where you want to host the appliance. Select Deploy OVF Template.

Next choose a host or a cluster to run the witness appliance.

In the next screen, you can review the details of the OVF that you deploy. As indicated in the below screenshot, the product is VMware Virtual SAN Witness Appliance.

Next accept the license agreements and click on next.

The OVA provides three deployment configurations. Choose one of them according to your environment. In the description, you can review the supported environment for each deployment configuration.

Then choose a datastore where you want to store the witness appliance files.

Next choose the network to connect the witness appliance.

To finish, specify a root password. Then click on Next and run the deployment.

Configure the witness appliance network

Once the witness appliance is deployed, you can start it. Then open a remote console.

When the appliance has started, you can configure the network like on any ESXi node.

So, I configure the network with a static IP. I also configure the name of the appliance and disable IPv6.

When I have finished the settings, my appliance looks like this:

Add appliance to vCenter

The witness appliance can be added to vCenter like any ESXi node. Just right-click on a datacenter or folder and select Add Host.

Next, provide the connection settings and credentials. When you are in the Assign License screen, select the license related to the witness appliance.

When you have finished the wizard, the witness appliance should be added to vCenter.

Once you have added the witness appliance, navigate to Configure | VMKernel Adapters and check if vmk1 has vSAN traffic enabled.
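
The same check can be scripted: each host exposes a virtualNicManager that reports which VMkernel adapters are selected for a given traffic type. Here is a minimal pyVmomi sketch; it assumes an existing connection, a HostSystem object named host (pick the one of the witness appliance) and the "vsan" nic type.

```python
# Minimal sketch: list the VMkernel adapters selected for vSAN traffic on a host.
# Assumes an existing pyVmomi connection and a HostSystem object 'host'
# (for the witness appliance, pick the corresponding HostSystem).
cfg = host.configManager.virtualNicManager.QueryNetConfig("vsan")
selected = set(cfg.selectedVnic or [])
for vnic in cfg.candidateVnic or []:
    state = "ENABLED" if vnic.key in selected else "disabled"
    print(f"{vnic.device}: vSAN traffic {state} ({vnic.spec.ip.ipAddress})")
```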


Deploy 2-Node vSAN Cluster

Because my two nodes are already in a DRS cluster, I have to turn off vSphere HA: you can't enable vSAN in a cluster where vSphere HA is enabled. To turn off vSphere HA, select the cluster and go to Configure | vSphere Availability.


Next navigate to Virtual SAN and select General. Then click on Configure.


Then I enable Deduplication and Compression and I choose Configure two host Virtual SAN cluster.


Next, the wizard checks whether vSAN adapters are available.


Then the wizard claims disks for the cache tier and the capacity tier.


Next choose the witness appliance and click on next.


Next, you should have a disk for the cache tier and another for the capacity tier. Just click on next.


To enable vSAN, just click on finish.


When vSAN is enabled successfully, you should see three hosts and at least three disk groups (the two nodes and the witness appliance).


In Fault Domains & Stretched Cluster you should have something like this screenshot. The witness host should be enabled. You can see that the 2-node configuration is handled the same way as a stretched cluster.

Now you can re-enable vSphere HA as below.

After moving a virtual machine to vSAN, you can see the below configuration. The VMDK has two components and a witness. Even if I lose one of the components or the witness, the VMDK will still be ready.

Final configuration

In this section, you can find some recommendations provided by VMware for vSAN. These recommendations concern the configuration of the cluster, especially vSphere Availability. First, I change the heartbeat datastores setting to Use datastores only from the specified list and select no datastore. This is a VMware recommendation for vSAN when the vSAN nodes are also connected to another VMFS or NFS datastore. Heartbeat datastores are disabled to leave only the network heartbeat. If you leave heartbeat datastores enabled and the network fails, vSphere HA will not restart the VMs on another node. If you don't want VMs to be restarted on another node in case of network failure, keep this setting enabled.

To avoid the warning raised because datastore heartbeating is disabled ("The number of vSphere HA heartbeat datastores for this host is 0, which is less than required: 2"), you can add the following line in the advanced options:

das.ignoreInsufficientHbDatastore = true
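
If you prefer to push this advanced option by script, it can be set through the cluster HA (DAS) configuration. Below is a hedged pyVmomi sketch; it assumes an existing connection 'si', the usual lowercase form of the option key, and a placeholder cluster name.

```python
# Sketch: add the HA advanced option discussed above to a cluster with pyVmomi.
# Assumes an existing connection 'si'; the cluster name is a placeholder.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vSAN-Cluster")

spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        option=[vim.option.OptionValue(key="das.ignoreInsufficientHbDatastore",
                                       value="true")]))
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```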

For the vSAN configuration, VMware recommends enabling Host Monitoring and changing the response for host isolation to Power off and restart VMs. Thanks to Host Monitoring, the network is used for heartbeating to determine the state of a host. The responses for datastore with PDL (Permanent Device Loss) and datastore with APD (All Paths Down) should be disabled (for further information, read this documentation). To finish, configure VM Monitoring as you wish.

Conclusion

VMware vSAN provides an easy way to get highly available VM storage in a branch office. Compared with Microsoft Storage Spaces Direct, the 2-node vSAN cluster is more complex to deploy because of the Witness Appliance: this appliance requires a third ESXi host in the same site or in another datacenter. With Storage Spaces Direct, I can use a simple file share or Microsoft Azure as a witness. Apart from this point, vSAN is a great solution for your hyperconverged infrastructure.

Working with VM Storage Policy in a VMware vSAN cluster
In a previous topic, I described how to deploy a VMware vSAN cluster. VMware vSAN enables you to create a hyperconverged cluster in a vSphere environment. Last time, we saw how to deploy this cluster by configuring the ESXi hosts, the distributed switch and the storage. Now that the cluster is working, we can deploy VMs on the vSAN datastore. To ensure the resilience of the VM storage in case of a fault (a storage device issue, an ESXi host down and so on) and to ensure performance, we have to leverage VM Storage Policies.

To be deployed on the vSAN datastore, a VM must be bound to a VM Storage Policy. This policy enables you to configure the following settings per VM:

  • IOPS limit for object: This setting enables you to configure the IOPS limit of an object such as a VMDK.
  • Flash read cache reservation (%): This setting is useful only in a hybrid vSAN configuration (SSD + HDD). It enables you to define the amount of flash capacity reserved for read I/O of a storage object such as a VMDK.
  • Disable object checksum: vSAN provides a mechanism to check whether an object such as a VMDK is corrupted and to resolve the issue automatically. By default, this verification runs once a year. For performance reasons, you can disable it with this setting (not recommended).
  • Force provisioning: The VM Storage Policy verifies that the datastore is compliant with the rules defined in the policy. If the datastore is not compliant, the VM is not deployed. If you enable Force provisioning, the VM is deployed even if the datastore is not compliant with the policy.
  • Object space reservation: By default, the VMDK is deployed thin provisioned on the vSAN datastore because this value is set to 0%. If you change this value to 100%, the VMDK is deployed thick provisioned. If you set the value to 50%, half of the VMDK capacity is reserved. If deduplication is enabled, you can set this value only to 0% or 100%, nothing in between.
  • Number of disk stripes per object: This setting defines the number of stripes for a VM object such as the VMDK. By default, the value is set to 1. If you choose, for example, two stripes, the VMDK is striped across two physical disks. Change this value only if you have performance issues and you have identified that they come from a lack of IOPS on the physical disk. This setting has no impact on resilience.
  • Number of failures to tolerate (FTT): This setting specifies the resilience of the object inside vSAN, i.e. the number of faults tolerated. If the value is 1, the object can survive a single failure. If the value is set to 2, the object can survive two failures and so on. The higher the number of failures to tolerate, the more storage the object consumes for resilience.
  • Failure tolerance method: This setting enables you to specify either RAID-1 (mirroring) or RAID-5/6 (erasure coding). The RAID set is built through the network and spread across the vSAN nodes. This setting works in conjunction with FTT: if you set FTT to 1, a RAID 1 configuration requires at least three nodes and RAID 5 requires four nodes (we will go deeper into this aspect in the next section; see also the small calculator after this list).
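
To get a feel for what a policy costs before applying it, the two rules of thumb above (capacity overhead and minimum host count) fit in a small helper. The sketch below only covers the common cases discussed in this post (RAID 1 mirroring and RAID 5/6 erasure coding) and is a simplification, not the exact vSAN placement logic.

```python
# Rough policy calculator for the rules of thumb above (not the exact vSAN logic).
def policy_footprint(vmdk_gb, ftt=1, method="RAID-1"):
    """Return (raw capacity consumed in GB, minimum number of hosts)."""
    if method == "RAID-1":
        raw = vmdk_gb * (ftt + 1)      # one full replica per tolerated failure + original
        hosts = 2 * ftt + 1            # replicas and witnesses need 2*FTT+1 hosts
    elif method == "RAID-5/6":
        if ftt == 1:                   # RAID 5: 3 data segments + 1 parity
            raw, hosts = vmdk_gb * 4 / 3, 4
        elif ftt == 2:                 # RAID 6: 4 data segments + 2 parity
            raw, hosts = vmdk_gb * 1.5, 6
        else:
            raise ValueError("RAID-5/6 supports FTT=1 or FTT=2 only")
    else:
        raise ValueError("unknown failure tolerance method")
    return raw, hosts

print(policy_footprint(100, ftt=1, method="RAID-1"))    # (200, 3)
print(policy_footprint(100, ftt=1, method="RAID-5/6"))  # (133.33..., 4)
```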

RAID Level, FTT and stripes

In a vSAN environment, a storage object such as a VMDK is ready when more than half of its components are alive (this is the quorum). This means that in a RAID 1 configuration with only two replicas, if I lose a replica, the VMDK is not ready (50% of the components are down). That is not good for high availability :). This is why VMware has introduced a witness system. The witness has a vote like the components, so the witness takes part in the quorum. The below schema presents a VMDK deployed in RAID 1 with FTT = 1.

In this example, even if I lose a node, the VMDK is still ready. If I lose two nodes, the VMDK is not ready anymore. So the VMDK is resilient to one failure (FTT = 1). If I choose to set the FTT to 2, an additional component will be deployed. Each component's size is equal to the VMDK size. This is why, the more you increase the FTT, the higher the storage consumption dedicated to resilience will be.

The number of disk stripes also affects the number of components, and potentially the number of nodes required. Each stripe is also a component. If I take the above example again and this time set the number of stripes to 2, it has no impact on the required number of nodes:

In the above example, I have six components and one witness. If I lose node 1 or node 2, I still have three components and the witness remaining (the majority). So the VMDK will be ready.

Now I’m implementing a VM Storage Policy with RAID 1, FTT = 2 and the number of stripes set to 2:

Even though I have added a fourth node to the vSAN cluster for the third replica (component 3), the above solution is not good. In this solution I have a total of nine components. What happens if I lose nodes 1 and 2? Only three components and the witness remain, so the quorum is not reached and the VMDK is not ready. Below, the schema shows the right design for RAID 1 with FTT = 2 and the number of stripes set to 2. And yes, you need five nodes to achieve this configuration.
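
The node-failure reasoning above can be checked with a tiny quorum simulation: give each node the number of votes (components and witnesses) it hosts, remove the failed nodes and test whether strictly more than half of the votes survive. The placement below is only an illustration of a five-node layout for RAID 1, FTT = 2 and two stripes; the real component and witness placement is decided by vSAN itself.

```python
# Quorum check: a vSAN object is ready when strictly more than half of its
# votes (components + witnesses) remain available.
def is_ready(placement, failed_nodes):
    total = sum(placement.values())
    alive = sum(votes for node, votes in placement.items() if node not in failed_nodes)
    return alive > total / 2

# Illustrative five-node layout: nodes 1-3 each hold the two stripes of one
# replica, nodes 4-5 hold witness components (vote counts are an assumption).
placement = {"node1": 2, "node2": 2, "node3": 2, "node4": 2, "node5": 2}
print(is_ready(placement, {"node1"}))                     # True  - 1 failure tolerated
print(is_ready(placement, {"node1", "node2"}))            # True  - 2 failures tolerated (FTT=2)
print(is_ready(placement, {"node1", "node2", "node3"}))   # False - quorum lost
```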

Play with VM Storage Policy

Last time, I implemented a vSAN cluster composed of three nodes. Each node has four disks (one flash device and three HDDs).

The default VM Storage policy is called Virtual SAN Default Storage Policy. But you are encouraged to create your own and not use this one. To create a VM Storage Policy, navigate to Policies and Profiles and click on VM Storage Policies. Next, click on the button circled in red in the below screenshot.

Then give a name to the VM Storage Policy. I have called mine Bronze VMs.

The below screenshot shows that a VM Storage Policy can include multiple rule sets to establish the storage requirements.

For this example, I set the failure tolerance method to RAID 1, the FTT to 1 and the number of disk stripes to 1 (this will deploy the same example described in the first schema of the last section).

Then the wizard shows me the datastores which are compatible with the VM storage policy.

To finish, the wizard shows you the summary of your VM storage policy. Click on finish to create the policy.

Deploy a VM in vSAN

Once the VM storage policy is created, you can deploy a VM. When you create the VM, select the vSAN cluster as shown in the below screenshot.

Then select the VM Storage policy and a compatible datastore as below.

Once the VM is deployed, you can navigate to the VM and open Monitor, then select Policies. As you can see below, I have two components spread across two nodes and one witness hosted on another node.

Create a VMware vSAN cluster step-by-step
Like Microsoft, VMware has a Software-Defined Storage solution. It is called vSAN and is currently in version 6.2. This solution enables you to aggregate local storage devices such as mechanical disks or SSDs and create a highly available datastore. There are two deployment models: the hybrid solution and the all-flash solution.

In the hybrid solution, you have flash devices for the cache and mechanical disks (SAS or SATA) for the capacity. In the all-flash solution, you have only flash devices for both cache and capacity. The disks, whether cache or capacity, are aggregated into disk groups. In each disk group, you can have 1 cache device and up to 7 capacity devices. Moreover, each host can handle a maximum of 5 disk groups (35 capacity devices per host).
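
Those per-host limits are easy to keep in a small sanity check when sketching a design. The helper below only encodes the limits quoted above (one cache device and up to 7 capacity devices per disk group, at most 5 disk groups per host); it is a sketch, not an official sizing tool.

```python
# Sanity check of a per-host disk layout against the vSAN limits quoted above.
MAX_DISK_GROUPS_PER_HOST = 5
MAX_CAPACITY_DEVICES_PER_GROUP = 7

def check_host_layout(disk_groups):
    """disk_groups: one capacity-device count per disk group
    (each disk group implicitly contains exactly one cache device)."""
    if len(disk_groups) > MAX_DISK_GROUPS_PER_HOST:
        raise ValueError("too many disk groups on this host")
    for index, capacity_devices in enumerate(disk_groups, start=1):
        if not 1 <= capacity_devices <= MAX_CAPACITY_DEVICES_PER_GROUP:
            raise ValueError(f"disk group {index}: invalid capacity device count")
    return len(disk_groups), sum(disk_groups)

# The lab built later in this post: 3 disk groups of 4 capacity devices per host
print(check_host_layout([4, 4, 4]))   # -> (3, 12)
```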

In this topic, I will describe how to implement a hybrid solution in a three-node cluster. For the demonstration, the ESXi nodes are virtual machines hosted by VMware Workstation. Unfortunately, Hyper-V on Windows Server 2016 does not handle ESXi very well: only IDE controllers and legacy network adapters are supported. So I can't use my Storage Spaces Direct lab to host a vSAN cluster 🙂.

VMware vSAN lab overview

To run this lab, I have installed VMware Workstation 12.x Pro on a traditional machine (a gaming computer) running Windows 10 version 1607. Each ESXi virtual machine is configured as below:

  • ESXi 6.0 update 2
  • 2x CPU with 2x Cores each
  • 16GB of memory (6GB required, more than 8GB recommended)
  • 1x OS disk (40GB)
  • 15x hard disks (10GB each)

Then I deployed vCenter Server 6.0 Update 2 in a single Windows Server 2012 R2 virtual machine.

I have deployed the following networks:

  • Management: 10.10.0.0/24 (VLAN ID: 10) – Native VLAN
  • vSAN traffic: 10.10.101.0/24 (VLAN ID: 12)
  • vMotion traffic: 10.10.102.0/24 (VLAN ID: 13)

In this topic, I assume that you have already installed your ESXi hosts and your vCenter Server. I also assume that each server is reachable on the network and that you have created at least one datacenter in the inventory. All the screenshots have been taken from the vSphere Web Client.

Add ESXi host to the inventory

First of all, connect to your vSphere Web Client and navigate to Hosts and Clusters. As you can see in the following screenshot, I have already created several datacenters and folders. To add the host to the inventory, right click on a folder and select Add Host.

Next specify the host name or IP address of the ESXi node.

Then specify the credentials to connect to the host. Once the connection is made, a permanent account is created and used for management; the specified account is no longer used.

Then select the license to assign to the ESXi node.

On the next screen, choose whether you want to prevent users from logging in directly to this host.

To finish, choose the VM location.

Repeat these steps to add more ESXi nodes to the inventory. For vSAN usage, I will add two additional nodes.

Create and configure the distributed switch

When you buy a vSAN license, support for a single distributed switch is included. To carry the vSAN, vMotion and management traffic, I'm going to create a distributed switch with three VMkernel adapters. To create the distributed switch, navigate to Networking, right-click on VM Network in a datacenter and choose New Distributed Switch as below.

Specify a distributed switch name and click on Next.

Choose a distributed switch version. Because I only have ESXi version 6.0 hosts, I choose the latest version of the distributed switch.

Next change the number of uplinks as needed and specify the name of the port group. This port group will contain VMKernel adapters for vMotion, vSAN and management traffic.

Once the distributed switch is created, click on it and navigate to Manage and Topology. Click on the button encircled in red in the below screenshot to add physical NICs to uplink port group and to create VMKernel adapters.

In the first screen of the wizard, select Add hosts.

Specify each host name and click on Next.

Leave the default selection and click on Next. By selecting the following tasks to perform, I’ll add physical adapters to uplink port group and I’ll create VMKernel adapters.

In the next screen, assign the physical adapter (vmnic0) to the uplink port group of the distributed switch which has just been created. Once you have assigned all physical adapters, click on Next.

On the next screen, I’ll create the VMKernel adapters. To create them, just click on New adapter.

Select the port group associated to the distributed switch and click on Next.

Then select the purpose of the VMKernel adapter. For this one I choose Virtual SAN traffic.

Then specify an IP address for this virtual adapter. Click on Next to finish the creation of VMKernel adapter.

Then I create another VMkernel adapter for the vMotion traffic.

Repeat the creation of VMKernel adapters for each ESXi host. At the end, you should have something like below:

Before applying the configuration, the wizard analyzes the impact. Once everything is OK, click on Next.

When the distributed switch is configured, it looks like this:

Create the cluster

Now that the distributed switch and the VMkernel adapters are set, we can create the cluster. Go back to Hosts and Clusters in the navigator, right-click on your folder and select New Cluster.

Give a name to your cluster and, for the moment, just turn on Virtual SAN. I choose manual disk claiming because I have to set manually which disks are flash and which disks are HDD. This is because the ESXi nodes are VMs and all the hard disks are detected as flash.
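
The same cluster can also be created by script. The sketch below is a hedged pyVmomi example that creates a cluster with Virtual SAN turned on and automatic disk claiming disabled (the manual claiming chosen here); the datacenter name is a placeholder and I assume the pre-6.6 vsanConfig property of the cluster configuration spec.

```python
# Sketch: create a cluster with Virtual SAN enabled and manual disk claiming.
# Assumes an existing pyVmomi connection 'si'; the datacenter name is a placeholder
# and vsanConfig is the pre-6.6 way of enabling vSAN through the vSphere API.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.Datacenter], True)
dc = next(d for d in view.view if d.name == "Lab-Datacenter")

spec = vim.cluster.ConfigSpecEx(
    vsanConfig=vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
            autoClaimStorage=False)))    # manual disk claiming
cluster = dc.hostFolder.CreateClusterEx(name="vSAN-Cluster", spec=spec)
```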

Next, move the nodes into the cluster (drag and drop). Once all nodes are in the cluster, you should have an alert saying that there is no capacity. This is because we have selected manual claiming and no disks have been claimed for vSAN yet.

Claim storage devices into vSAN

To claim a disk, select the cluster where vSAN is enabled and navigate to Disk Management. Click on the button encircled in red in the below screenshot:

As you can see in the below screenshot, all the disks are marked as flash. In this topic, I want to implement a hybrid solution. The vSphere Web Client offers the ability to manually mark a disk as HDD. This exists because, in production, some hardware is not detected correctly; in that case, you can set it manually. For this lab, I leave three disks as flash and I mark 12 disks as HDD on each node. With this configuration, I will create three disk groups per node, each composed of one cache device and four capacity devices.

Then you have to claim the disks. For each node, select the three flash disks and claim them for the cache tier. All the disks that you have marked as HDD can be claimed for the capacity tier.

Once the claiming wizard is finished, you should have three disk groups per node.
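
Disk claiming can also be inspected (or automated) from the API. This sketch only lists, for one host, the disks that vSAN reports as eligible or not, using the QueryDisksForVsan() call of the host vsanSystem; it assumes an existing connection and a HostSystem object and does not claim anything.

```python
# Sketch: list the disks a host reports as eligible (or not) for vSAN.
# Assumes an existing pyVmomi connection and a HostSystem object 'host';
# this only reads information, it does not claim any disk.
for result in host.configManager.vsanSystem.QueryDisksForVsan():
    disk = result.disk                   # vim.host.ScsiDisk
    size_gb = disk.capacity.block * disk.capacity.blockSize / (1024 ** 3)
    kind = "flash" if disk.ssd else "HDD"
    print(f"{disk.canonicalName}: {size_gb:.0f} GB {kind} -> {result.state}")
```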

If you want to assign the license to your vSAN, navigate to Licensing and select the license.

Final configuration

Now that vSAN is enabled, you can turn on vSphere HA and vSphere DRS to distribute virtual machines across the nodes.

Some vSphere HA settings must be changed in a vSAN environment. You can read these recommendations in this post.

VM Storage policy

vSAN relies on VM Storage Policies to configure the storage capabilities. This configuration is applied on a per-VM basis through the VM Storage Policy. We will discuss VM Storage Policies in another topic. For the moment, just verify that the Virtual SAN Default Storage Policy exists in the VM Storage Policies store.

Conclusion

In this topic, we have seen how to create a vSAN cluster. There is no real challenge in this, but it is just the beginning. To use vSAN, you have to create VM Storage Policies, and some of the capability concepts are not easy. We will discuss VM Storage Policies later. If you are interested in the equivalent Microsoft solution, you can read this whitepaper.
