Migrate VMs from VMware to Nutanix AHV with Nutanix Xtract

Nutanix AHV is a custom KVM hypervisor integrated into the Nutanix ecosystem alongside Prism. It is an enterprise-class hypervisor and an alternative to VMware ESXi or Microsoft Hyper-V when deploying Nutanix. Nutanix AHV is fully integrated into Nutanix Prism and there is no other GUI to manage this hypervisor. To ease the migration from VMware to the Nutanix hypervisor, Nutanix has released a web appliance called Nutanix Xtract. In this topic, we’ll see how to deploy the appliance and how to migrate virtual machines from VMware vSphere to Nutanix AHV.

Requirements

VMs with the following configurations are not supported by Nutanix Xtract:

  • Guest OSes not supported by AHV (see Supported Guest VM Types for AHV in the Nutanix Support Portal)
  • VM names with non-English characters
  • Custom vCenter ports
  • Selecting individual ESXi hosts as source of VMs
  • PCIE pass-through (only certain devices)
  • Independent disks
  • Physical RDM based disks
  • VMs with multi-writer disks attached
  • VMs with 2 GB sparse disk attached
  • VMs with SCSI controllers with a SCSI bus sharing attached

The following operating systems are fully supported:

  • Windows 2016 Standard, 2016 Datacenter
  • Windows 7, 8, 8.1, 10
  • Windows Server 2008 R2, 2012, 2012 R2, 2016
  • CentOS 6.4, 6.5, 6.6, 6.7, 6.8, 7.0, 7.1, 7.2, 7.3
  • Ubuntu 12.04.5, 14.04.x, 16.04.x, 16.10, Server, Desktop (32-bit and 64-bit)
  • FreeBSD 9.3, 10.0, 10.1,10.2, 10.3, 11.0
  • SUSE Linux Enterprise Server 11 SP3 / SP4
  • SUSE Linux Enterprise Server 12
  • Oracle Linux 6.x, 7.x
  • RHEL 6.4, 6.5, 6.6, 6.7, 6.8, 7.0, 7.1, 7.2, 7.3

The following operating systems are partially supported:

  • Windows 32-bit operating systems
  • Windows with UAC enabled
  • RHEL 4.0, 5.x
  • CentOS Linux 4.0, 5.x
  • Ubuntu 12.x or lower
  • VMs using UEFI
  • VMs requiring a PCI or IDE bus

The following configurations are required by Nutanix Xtract:

  • Supported browsers: Google Chrome
  • VMware Tools must be installed and up to date on the guest VMs for migration
  • The virtual hardware version of the VM must be 7 or higher.
  • Source VMs must support Changed Block Tracking (CBT). See https://kb.vmware.com/kb/1020128
  • CBT-based snapshots are supported for certain VMs.
  • Disks must be either sparse or flat format and must have a minimum version of 2.
  • ESXi version must be 5.5 minimum.
  • Hosts must not be in maintenance mode.
  • vCenter must be reachable from the Xtract appliance on TCP port 443.
  • ESXi hosts must be reachable from the Xtract appliance on TCP ports 443 and 902.
  • Every VM must have a UUID.
  • ESXi hosts must have complete configuration details of the VMs.
  • VMs must have multiple compatible snapshots.
  • Allow port 2049 and port 111 between the Xtract for VM network and the AHV cluster network (CVMs).
  • Accounts used for performing in-guest operations require the Log on as a batch job right in the local security policy on Windows or through group policy (see https://technet.microsoft.com/en-us/library/cc957131.aspx). Being a member of the Administrators group alone does not grant this right.

Before a migration, VMware Tools must be installed and running in the guest and any existing snapshots must be deleted. You can quickly check these prerequisites with PowerCLI, as in the sketch below.
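This is only an illustration, assuming PowerCLI is installed and that the vCenter name is an example to replace with yours. It lists the VMware Tools status, hardware version, CBT state and snapshot count for each VM.

# Minimal PowerCLI sketch to review the Xtract prerequisites (vCenter name is an example)
Connect-VIServer -Server vcenter.mydomain.local

Get-VM | Select-Object Name,
    @{N='ToolsStatus'; E={$_.ExtensionData.Guest.ToolsStatus}},
    @{N='HwVersion';   E={$_.ExtensionData.Config.Version}},
    @{N='CbtEnabled';  E={$_.ExtensionData.Config.ChangeTrackingEnabled}},
    @{N='Snapshots';   E={(Get-Snapshot -VM $_ | Measure-Object).Count}} |
    Format-Table -AutoSize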

Deploy Nutanix Xtract

First of all, download the appliance image from the Nutanix portal. Then log on to Nutanix Prism and navigate to Home | VM.

Next click on the wheel and select Image Configuration.

Then click on Create Image and specify a name and an annotation. Choose the Disk image type and upload the qcow2 file of Nutanix Xtract that you previously downloaded.

The image upload takes a moment and you can check the progress in the Tasks menu.

Then create a VM with the following settings:

  • 2 vCPUs
  • 2 Cores per vCPU
  • 4GB of Memory

If you scroll down to the Disks settings, you’ll get the message below. Click on Add New Disk.

Configure the disk as follows and select the Nutanix Xtract image you’ve just uploaded. Then click on Add.

In the Network Adapters section, specify the VLAN to which Nutanix Xtract will be connected.

To finish, enable Custom Script and upload the script called xtract-vm-cloudinit-script located in the Nutanix Xtract archive that you previously downloaded from the Nutanix portal.

Then start the VM, connect to the console and wait a while. In my case, the appliance was ready after 30 minutes.

Configure the appliance

When the appliance is ready, you can log in with the admin credentials (admin / nutanix/4u).

Once logged in as the admin user, run the rs command and enter the admin password again.

Edit the file /etc/sysconfig/network-scripts/ifcfg-eth0 and set a static IP address as in the configuration below.

Then restart the network service by running service network restart.

Next, edit the file /etc/resolv.conf and specify your DNS suffix and DNS server(s):

  • search mydomain.local
  • nameserver 10.10.201.2

Restart the Nutanix Xtract appliance. Then connect to the appliance through HTTPS by using the static IP you set previously. Accept the license agreement and click on Continue.

Next specify a password for the nutanix account.

Now you can log on Nutanix Xtract with the nutanix account.

Configure Nutanix Xtract

Now that you are connected to the appliance, you have to add the source and the target environments. First click on Add Source Environment.

Then enter the source name, the vCenter Server address and admin credentials.

Next click on Add target environment and specify your Nutanix Prism.

Now you have the source and the target environment. You are ready to migrate VMware VM to Nutanix AHV.

Migrate a VMware VM to Nutanix AHV

To migrate VMs, we have to create a migration plan. To create one, click on Create a Migration Plan.

Provide a name for the migration plan and click on OK.

Next select the target environment and the target container where you want to store VMs.

Next you can look for VMs you want to migrate by using the search field. Then click on the “+” button to add VM into the migration plan.

The guest credentials are used to run guest operations on the source VMs, such as installing the VirtIO drivers. I recommend not bypassing guest operations on the source VMs, so that VirtIO is installed automatically: many of the VMs I migrated without these operations did not boot. You can also map the source networks to the target networks here.

Next, check the migration plan summary and click on Save And Start to start the migration immediately. The data will be copied, but the cutover will be done manually later.

Then you can monitor the migration progress.

When you are ready to cut over a VM, click on Cutover. The source VM is shut down, a final incremental data copy is executed and then the target VM is started.

When the copy is finished, the migration status should be Completed. Congratulations, you have easily migrated VMware VMs to Nutanix AHV :).

Conclusion

Nutanix provides a powerful tool to migrate VMware VMs to Nutanix AHV. Everything you need to plan the migration is included, and you can trigger the cutover when you are ready. I had some issues with Windows UAC, but overall the tool works great.

Deploy a 2-node StarWind VSAN Free for VMware ESXi 6.5

StarWind VSAN Free provides a storage solution for several purposes, such as storing virtual machines. With VSAN Free, you can deploy a 2-node cluster to present a highly available storage solution to hypervisors such as Hyper-V or ESXi, for free. When you deploy multiple VSAN Free nodes, the data is synchronized across the network between the nodes.

You can deploy StarWind VSAN Free in a hyperconverged model, where VSAN Free is installed on the hypervisor nodes, or in a disaggregated model, where compute and storage are separated. In the table below, you can find the differences between StarWind VSAN Free and the paid edition.

As you can see, the big differences between both editions are technical support and management capabilities. With StarWind VSAN Free, you can manage the product only with PowerShell (plus a 30-day trial of the StarWind Management Console) and you have no access to technical support.

Thanks to this free version, we are able to deploy a highly available storage solution. In this topic, we will see how to deploy StarWind VSAN Free in a 2-node configuration on Windows Server 2016 Core edition. Then we will connect the storage to VMware ESXi.

Requirements

To write this topic, I deployed the following virtual machines. In production, I recommend implementing the solution on physical servers. Each VM has the following hardware:

  • 2 vCPU
  • 4GB of memory
  • 1x OS disk (60GB dynamic)
  • 4x data disks (100GB dynamic)
  • 1x vNIC for management (and heartbeat)
  • 1x vNIC for storage sync

At the end of the topic, I will connect the storage to a VMware ESXi host. So if you want to follow along, you need a running vSphere environment.

You can download StarWind VSAN Free here.

Architecture overview

Both StarWind VSAN Free nodes are deployed with Windows Server 2016 Core Edition and have two network adapters each. One network is used for the synchronization between both nodes (a non-routed network). The other is used for iSCSI and management. Ideally, you should isolate management and iSCSI traffic on two separate vNICs.

Configure the data disks

Once the operating system is deployed, I run the following script to create a storage pool and a volume to host the StarWind image files.

# Initialize data disks
get-disk |? OperationalStatus -notlike "Online" | Initialize-Disk

#Create a storage pool with previously initialized data disks
New-StoragePool -StorageSubSystemFriendlyName "*VMSAN*" `
                -FriendlyName Pool `
                -PhysicalDisks (get-physicaldisk |? canpool -like $True)

#Create a NTFS volume in 2-Way mirroring with maximum space. Letter: D:\
New-Volume -StoragePoolFriendlyName Pool `
           -FriendlyName Storage `
           -FileSystem NTFS `
           -DriveLetter D `
           -PhysicalDiskRedundancy 1 `
           -UseMaximumSize

#Create a folder on D: called Starwind
new-item -type Directory -Path D:\ -Name Starwind 

Install StarWind VSAN Free

I copied the StarWind VSAN Free binaries to both nodes. Then I run the installer from the command line. On the welcome screen, just click on Next.

In the next screen, accept the license agreement and click on next.

The next window introduces the new features and improvements of StarWind Virtual SAN v8. Once you have read them, just click on next.

Next, choose the folder where the StarWind VSAN Free binaries will be installed.

Then choose which features you want to install. You can install powerful features such as SMI-S to connect to Virtual Machine Manager, the PowerShell management library or the cluster service.

In the next screen choose the start menu folder and click on next.

In the next screen, you can request the free version key. StarWind kindly gave me a license file, so I choose Thank you, I do have a key already.

Then I specify the license file and I click on next.

Next you should have information about the provided license key. Just click on next.

To finish, click on install to deploy the product.

You have to repeat these steps for each node.

Deploy the 2-node configuration

StarWind provides some PowerShell script samples to configure the product from the command line. To create the 2-node cluster, we will leverage the script CreateHA(two nodes).ps1. You can find the script samples in <InstallPath>\StarWind Software\StarWind\StarWindX\Samples\PowerShell.

Copy scripts CreateHA(two nodes).ps1 and enumDevicesTargets.ps1 and edit them.

Below you can find my edited CreateHA(two nodes).ps1:

Import-Module StarWindX

try
{
    #specify the IP address and credential (this is default cred) of a first node
    $server = New-SWServer -host 10.10.0.46 -port 3261 -user root -password starwind

    $server.Connect()

    $firstNode = new-Object Node

    # Specify the path where image file is stored
    $firstNode.ImagePath = "My computer\D\Starwind"
    # Specify the image name
    $firstNode.ImageName = "VMSto1"
    # Size of the image
    $firstNode.Size = 65536
    # Create the image
    $firstNode.CreateImage = $true
    # iSCSI target alias (lower case only supported because of RFC)
    $firstNode.TargetAlias = "vmsan01"
    # Synchro auto ?
    $firstNode.AutoSynch = $true
    # partner synchronization interface (second node)
    $firstNode.SyncInterface = "#p2=10.10.100.47:3260"
    # partner heartbeat interface (second node)
    $firstNode.HBInterface = "#p2=10.10.0.47:3260"
    # cache size
    $firstNode.CacheSize = 64
    # cache mode (write-back cache)
    $firstNode.CacheMode = "wb"
    # storage pool name
    $firstNode.PoolName = "pool1"
    # synchronization session count. Leave this value to 1
    $firstNode.SyncSessionCount = 1
    # ALUA enable or not
    $firstNode.ALUAOptimized = $true
    
    #
    # device sector size. Possible values: 512 or 4096(May be incompatible with some clients!) bytes. 
    #
    $firstNode.SectorSize = 512
	
    #
    # 'SerialID' should be between 16 and 31 symbols. If it is not specified, the StarWind service will generate it.
    # Note: the second node always has the same serial ID. You do not need to specify it for the second node.
    #
    $firstNode.SerialID = "050176c0b535403ba3ce02102e33eab"
    
    $secondNode = new-Object Node

    $secondNode.HostName = "10.10.0.47"
    $secondNode.HostPort = "3261"
    $secondNode.Login = "root"
    $secondNode.Password = "starwind"
    $secondNode.ImagePath = "My computer\D\Starwind"
    $secondNode.ImageName = "VMSto1"
    $secondNode.Size = 65536
    $secondNode.CreateImage = $true
    $secondNode.TargetAlias = "vmsan02"
    $secondNode.AutoSynch = $true
    # First node synchronization IP address
    $secondNode.SyncInterface = "#p1=10.10.100.46:3260"
    # First node heartbeat IP address
    $secondNode.HBInterface = "#p1=10.10.0.46:3260"
    $secondNode.ALUAOptimized = $true
        
    $device = Add-HADevice -server $server -firstNode $firstNode -secondNode $secondNode -initMethod "Clear"
    
    $syncState = $device.GetPropertyValue("ha_synch_status")

    while ($syncState -ne "1")
    {
        #
        # Refresh device info
        #
        $device.Refresh()

        $syncState = $device.GetPropertyValue("ha_synch_status")
        $syncPercent = $device.GetPropertyValue("ha_synch_percent")

        Start-Sleep -m 2000

        Write-Host "Synchronizing: $($syncPercent)%" -foreground yellow
    }
}
catch
{
    Write-Host "Exception $($_.Exception.Message)" -foreground red 
}

$server.Disconnect() 

Next, I run the script. An image file is created on both nodes and these image files are then synchronized.

Thanks to the 30-day trial of the management console, you can get graphical information about the configuration. As you can see below, you have information about the image files.

You can also review the configuration of network interfaces:

If you browse the StarWind folder on each node, you should see the image files.

Now you can edit and run the script enumDevicesTargets.ps1:

Import-Module StarWindX

# Specify the IP address and credential of the node you want to enum
$server = New-SWServer 10.10.0.46 3261 root starwind

$server.Connect()

if ( $server.Connected )
{
    write-host "Targets:"
    foreach($target in $server.Targets)
    {
        $target
    }
    
    write-host "Devices:"
    foreach($device in $server.Devices)
    {
        $device
    }
    
    $server.Disconnect()
}

By running this script, you should get the following result:

If I run the same script against 10.10.0.47, I get this information:

Connect to vSphere environment

Now that the storage solution is ready, I can connect it to vSphere. I connect to the vSphere Web Client and edit the targets of my software iSCSI adapter, adding the following static target servers.

Next, if you navigate to the Paths tab, both paths should be marked as Active.

Now you can create a new datastore on the previously created StarWind image file. The same steps can also be scripted with PowerCLI, as in the sketch below.
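This is a hedged example only: the host name, the StarWind target IQNs and the vendor string are assumptions taken from my lab, so adapt them to the values shown in your StarWind console.

# PowerCLI sketch: add the StarWind static targets, rescan and create a VMFS datastore
# (host name, IPs, IQNs and datastore name are examples)
Connect-VIServer -Server vcsa.mydomain.local
$vmhost = Get-VMHost -Name esxi01.mydomain.local
$hba    = Get-VMHostHba -VMHost $vmhost -Type IScsi | Where-Object { $_.Model -match 'Software' }

# One static target per StarWind node
New-IScsiHbaTarget -IScsiHba $hba -Address 10.10.0.46 -Port 3260 -Type Static `
    -IScsiName 'iqn.2008-08.com.starwindsoftware:vmsan01-vmsto1'
New-IScsiHbaTarget -IScsiHba $hba -Address 10.10.0.47 -Port 3260 -Type Static `
    -IScsiName 'iqn.2008-08.com.starwindsoftware:vmsan02-vmsto1'

# Rescan, find the StarWind LUN and format it as VMFS
Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null
$lun = Get-ScsiLun -VmHost $vmhost -LunType disk | Where-Object { $_.Vendor -match 'STARWIND' }
New-Datastore -VMHost $vmhost -Name 'StarWind-DS01' -Path $lun.CanonicalName -Vmfs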

Conclusion

StarWind VSAN Free provides an inexpensive software storage solution for a POC or a small environment. You only need to buy the hardware and deploy the product as we’ve seen in this topic. If you use Hyper-V, you can deploy StarWind VSAN Free on the Hyper-V nodes to get a hyperconverged solution. Just don’t forget that the StarWind VSAN Free edition doesn’t provide any technical assistance (except on the StarWind forum) or management console (just a 30-day trial).

Step-By-Step: Deploy Veeam 9.5 and backup VMware VM

The following topics describe how to deploy Veeam 9.5 Backup & Replication and how to back up and restore your VMware VMs. The process is presented step-by-step in three topics:

  • Deploy Veeam 9.5 Backup & Replication
  • Connect Veeam to vCenter and add a backup repository
  • Backup and restore your first VMware VM

Connect vSphere 6.5 to iSCSI storage NAS

When you implement an ESXi cluster, you also need shared storage to store the virtual machine files. It can be a NAS, a SAN or vSAN. When using a NAS or SAN, you can connect vSphere by using Fibre Channel (FC), FC over Ethernet (FCoE) or iSCSI. In this topic, I’d like to share with you how to connect vSphere to iSCSI storage such as a NAS/SAN.

The NAS model used for this topic is a Synology RS815. But from a vSphere perspective, the configuration is the same for other NAS/SAN models.

Understand type of iSCSI adapters

Before deploying an iSCSI solution, it is important to understand that several types of iSCSI network adapters exist:

  • Software iSCSI adapters
  • Hardware iSCSI adapters

The software iSCSI adapter is managed by the VMkernel. It binds to standard network adapters, so you don’t have to buy additional network adapters dedicated to iSCSI. However, because this adapter is handled by the VMkernel, it can increase CPU overhead on the host.

On the other hand, hardware iSCSI adapters are dedicated physical iSCSI adapters that can offload iSCSI and related network workloads from the host. There are two kinds of hardware iSCSI adapters:

  • Independent hardware iSCSI adapters
  • Dependent hardware iSCSI adapters

The independent hardware iSCSI adapter is a third-party adapter that does not depend on the vSphere network. It implements its own networking, iSCSI configuration and management interfaces. This kind of adapter is able to offload the iSCSI workloads from the host. In other words, this is a Host Bus Adapter (HBA).

The dependent hardware iSCSI adapter is a third-party adapter that depends on the vSphere network and management interfaces. This kind of adapter is also able to offload the iSCSI workloads from the host. In other words, this is a hardware-accelerated adapter.

For this topic, I’ll implement a Software iSCSI adapter.

Architecture overview

Before writing this topic, I created a vNetwork Distributed Switch (vDS). You can review the vDS implementation in this topic. The NAS is connected to two switches with VLAN 10 and VLAN 52 (VLAN 10 is also used for SMB and NFS for vacation movies, but it is a lab, right? :)). From a vSphere perspective, I’ll create one software iSCSI adapter with two iSCSI paths.

The vSphere environment is composed of two ESXi nodes in a cluster and vCenter (VCSA) 6.5. Each host has two standard network adapters where all traffic is converged. From the NAS perspective, there are three LUNs: two for datastores and one for a content library.

NAS configuration

In the Synology NAS, I have created three LUNs called VMStorage01, VMStorage02 and vSphereLibrary.

Then I created four iSCSI targets (two for each ESXi host). This ensures that each node connects to the NAS with two iSCSI paths. Each iSCSI target is mapped to all the LUNs previously created.

Connect vSphere to iSCSI storage

Below you can find the vDS schema of my configuration. At this time, I have one port group dedicated to iSCSI. I also create a second port group for iSCSI.

Configure iSCSI port group

Once you have created your port groups, we need to change the teaming and failover configuration. In the above configuration, each node has two network adapters and each network adapter is attached to an uplink.

Edit the settings of the first port group and navigate to Teaming and failover. In the Failover order list, set Uplink 2 to Unused uplinks.

For the second port group, I set Uplink 1 to unused. The same teaming configuration can be scripted with PowerCLI, as in the sketch below.
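The port group names iSCSI-A and iSCSI-B are assumptions used for illustration; replace them with your own port group and uplink names.

# Pin each iSCSI port group to a single uplink (port group and uplink names are examples)
Get-VDPortgroup -Name 'iSCSI-A' | Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort 'Uplink 1' -UnusedUplinkPort 'Uplink 2'

Get-VDPortgroup -Name 'iSCSI-B' | Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort 'Uplink 2' -UnusedUplinkPort 'Uplink 1'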

Add VMKernel adapters

From the vDS summary pane, click on Add and Manage Hosts. Then edit the VMKernel network adapters for all hosts. Next, click on new adapter.

Next, select the first iSCSI network and click on next.

On the next screen, just click on next.

Then specify the IP address of the VMKernel adapter.

Repeat these steps for the other nodes.

You can repeat this section for the second iSCSI VMKernel adapter. When you have finished your configuration, you should have something like this:

Add and configure the software iSCSI adapter

Then select the software iSCSI adapter in the storage adapters list and navigate to Network Port Binding. Click on the Add button and select both VMKernel network adapters.

Next, navigate to Targets and select dynamic or static discovery depending on your needs. I choose Static Discovery and click on Add. Create one entry for each path with the right IP and target name.

When the configuration is finished, I have two targets as below. Run a rescan before continuing.

If you come back to Network Port Binding, both VMKernel network adapters should be marked as Active.

In the Paths tab, you should have two paths for each LUN. The network port binding can also be scripted, as in the sketch below.
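The sketch uses the esxcli interface exposed by PowerCLI. The host name and the vmk1/vmk2 adapter names are assumptions matching the VMKernel adapters created earlier; adapt them to your environment.

# Bind the iSCSI VMKernel adapters to the software iSCSI adapter (names are examples)
$vmhost  = Get-VMHost -Name esxi01.mydomain.local
$hbaName = (Get-VMHostHba -VMHost $vmhost -Type IScsi | Where-Object { $_.Model -match 'Software' }).Device
$esxcli  = Get-EsxCli -VMHost $vmhost -V2

foreach ($vmk in 'vmk1','vmk2') {
    $esxcli.iscsi.networkportal.add.Invoke(@{ adapter = $hbaName; nic = $vmk })
}

# Rescan so that the new paths are discovered
Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null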

Create a datastore

Now that the hosts have visibility on the LUNs, we can create a datastore. In vCenter, navigate to the Storage tab and right-click on the datacenter (or folder). Then select New datastore.

Select the datastore type. I create a VMFS datastore.

Then specify a name for the datastore and select the right LUN.

Next, choose the VMFS version. I choose VMFS 6.

Next, specify the partition configuration such as the datastore size, the block size and so on.

Once the wizard is finished, you should have your first datastore to store VM files.

Change the multipath algorithm

By default, the multipath policy is set to Most Recently Used, so only one path is used at a time. To leverage both VMKernel adapters simultaneously, you have to change the multipath policy. To change the policy, click on the datastore, select the host and choose Edit multipathing.

Then select Round Robin to use both links. Once it is done, all paths should be marked as Active (I/O).

Repeat this step for each datastore, or script it with PowerCLI as in the sketch below.
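The vendor string SYNOLOGY is an assumption matching my RS815; check the Vendor column returned by Get-ScsiLun on your hosts before reusing this sketch.

# Set Round Robin on every Synology LUN of every host (vendor string is an example)
Get-VMHost | Get-ScsiLun -LunType disk |
    Where-Object { $_.Vendor -match 'SYNOLOGY' -and $_.MultipathPolicy -ne 'RoundRobin' } |
    Set-ScsiLun -MultipathPolicy RoundRobin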

Create a datastore cluster

To get access to the Storage DRS feature, you can create a datastore cluster. If you have several datastores dedicated to VMs, you can add them to a datastore cluster and use Storage DRS to optimize resource usage. In the below screenshot, I have two datastores for VMs (VMStorage01 and VMStorage02) and the content library. So, I’m going to create a datastore cluster containing VMStorage01 and VMStorage02.

Navigate to Datastore Clusters pane and click on New Datastore Cluster.

Give a name to the datastore cluster and choose whether you want to enable Storage DRS.

Choose the Storage DRS automation level and options.

In the next screen, you can enable the I/O metric for SDRS recommendations so that I/O workloads are taken into consideration. Then you set the thresholds for free space on the datastores and for latency.

Next select ESXi hosts that need access to the datastore cluster.

Choose the datastore that will be used in the datastore cluster.

Once the datastore cluster is created, you should have something like that:

Now, when you create a virtual machine, you can choose the datastore cluster and vSphere automatically stores the VM files on the least used datastore (according to the Storage DRS policy). The same datastore cluster can also be created with PowerCLI, as in the sketch below.
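This is a sketch only; the datacenter and datastore cluster names are assumptions, while the datastore names are the ones used in this topic.

# Create the datastore cluster, enable Storage DRS and move the datastores into it
# (datacenter and datastore cluster names are examples)
$dsc = New-DatastoreCluster -Name 'VMStorageCluster' -Location (Get-Datacenter -Name 'MyDatacenter')
Set-DatastoreCluster -DatastoreCluster $dsc -SdrsAutomationLevel FullyAutomated
Get-Datastore -Name 'VMStorage01','VMStorage02' | Move-Datastore -Destination $dsc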

Deploy ESXi 6.5 from USB stick and unattended file

VMware ESXi 6.5 was released last month and I decided to share with you how I deployed ESXi 6.5 from a USB stick with an unattended file. There is no major new feature in ESXi 6.5 related to deployment from an unattended file, but I decided to build a vSphere lab and deploy the ESXi nodes without a single click.

This topic shows you how to prepare a USB stick and an unattended file to deploy ESXi 6.5 nearly automatically.

Architecture overview

For the following deployment to work, I have done some configuration from a network perspective. I have configured the following:

  • DHCP Server
  • DNS Server (forward and reverse lookup zone)

The network address space is 10.10.50.0/24. In the DHCP server configuration, I have created a reservation (static IP) for both ESXi hosts:

Below you can find the forward lookup zone configuration in the Synology:

And below, you can find the reverse lookup zone:

Thanks to this configuration, the ESXi host obtains its production IP address (from DHCP) and its hostname (from the reverse lookup zone) during deployment. Then a script only has to switch the IP address from DHCP to static. If your DHCP and DNS roles run on Windows Server, the reservation and records can be created as in the sketch below.
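This is a hypothetical example for a Windows Server DHCP/DNS infrastructure rather than the Synology used here; the MAC address and IP are placeholders, and the zone name matches the vsphere.lab suffix used later in the kickstart file.

# Hypothetical example for Windows Server DHCP/DNS (MAC address and IP are placeholders)
Add-DhcpServerv4Reservation -ScopeId 10.10.50.0 -IPAddress 10.10.50.21 `
    -ClientId 'AA-BB-CC-DD-EE-01' -Name 'esx01'

# A record plus the matching PTR record used by the kickstart script to find the hostname
Add-DnsServerResourceRecordA -ZoneName 'vsphere.lab' -Name 'esx01' `
    -IPv4Address 10.10.50.21 -CreatePtr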

Requirements

To follow this topic, you need the following:

  • A USB stick with at least 8GB
  • Rufus to prepare the USB stick
  • ISO of VMware ESXi 6.5

Prepare the USB stick

To prepare the USB stick, plug it into your computer and run Rufus (this software is portable). Select the ISO image of ESXi 6.5 and set Rufus as in the following screenshot:

If you get the following message when you start the format, just click on Yes.

Build the unattend file

To deploy my ESXi hosts, I have used the following script. You can find explanations in the comments. This script can be reused for each ESXi host to deploy, as long as the static reservation in DHCP and the DNS records are set.


#Accept VMware License agreement

accepteula


# Set the root password

rootpw MyPassword


# Install ESXi on the first disk (Local first, then remote then USB)

install --firstdisk --overwritevmfs


# Set the keyboard

keyboard French


# Set the network

network --bootproto=dhcp


# reboot the host after installation is completed

reboot


# run the following command only on the firstboot

%firstboot --interpreter=busybox


# enable & start remote ESXi Shell (SSH)

vim-cmd hostsvc/enable_ssh

vim-cmd hostsvc/start_ssh


# enable & start ESXi Shell (TSM)

vim-cmd hostsvc/enable_esx_shell

vim-cmd hostsvc/start_esx_shell


# suppress the ESXi Shell warning - Thanks to Duncan (https://www.yellow-bricks.com/2011/07/21/esxi-5-suppressing-the-localremote-shell-warning/)

esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1


# Get Network adapter information

NetName="vmk0"


# Get the IP address assigned by DHCP

IPAddress=$(localcli network ip interface ipv4 get | grep "${NetName}" | awk '{print $2}')


#Get the netmask assigned by DHCP

NetMask=$(localcli network ip interface ipv4 get | grep "${NetName}" | awk '{print $3}')


# Get the gateway provided by DHCP

Gateway=$(localcli network ip interface ipv4 get | grep "${NetName}" | awk '{print $6}')

DNS="10.10.0.229"

VlanID="50"


# Get the hostname assigned thanks to reverse lookup zone

HostName=$(hostname -s)

SuffixDNS="vsphere.lab"

FQDN="${HostName}.${SuffixDNS}"


# set static IP + default route + DNS

esxcli network ip interface ipv4 set --interface-name=vmk0 --ipv4=${IPAddress} --netmask=${NetMask} --type=static --gateway=${Gateway}

esxcli network ip dns server add --server ${DNS}


# Set VLAN ID

esxcli network vswitch standard portgroup set --portgroup-name "Management Network" --vlan-id ${VlanID}


#Disable ipv6

esxcli network ip set --ipv6-enabled=0


# set suffix and FQDN host configuration

esxcli system hostname set --fqdn=${FQDN}

esxcli network ip dns search add --domain=${SuffixDNS}


# NTP Configuration (thanks to https://www.virtuallyghetto.com)

cat > /etc/ntp.conf << __NTP_CONFIG__

restrict default kod nomodify notrap noquery nopeer

restrict 127.0.0.1

server 0.fr.pool.ntp.org

server 1.fr.pool.ntp.org

__NTP_CONFIG__

/sbin/chkconfig ntpd on


# rename local datastore to something more meaningful

vim-cmd hostsvc/datastore/rename datastore1 "Local - $(hostname -s)"


# restart a last time

reboot

Save the file and name it ks.cfg. Copy the file to the root of the USB stick.

Use the unattend file during deployment

Now we have to configure the boot loader to load ks.cfg automatically during deployment. Open the USB stick and edit boot.cfg. Replace the line kernelopt=runweasel with kernelopt=ks=usb:/ks.cfg.

Unplug the USB stick and plug it into the server. You can then boot from the USB stick to run the installer.

Deployment

During deployment, the installer will load the ks.cfg config file.

It starts by checking that the config file is correct.

After the first reboot, the installer configures the ESXi host as specified in the config file.

Once the system has rebooted a second time, the configuration is finished.

For example, SSH and the ESXi Shell are enabled as expected.

Conclusion

VMware provides a way to quickly deploy standardized ESXi hosts. If your infrastructure is not ready and you do not yet have Auto Deploy, deployment with an unattended file can be a good option.

Create a VMware vSAN cluster step-by-step

Like Microsoft, VMware has a Software-Defined Storage solution, called vSAN, which is currently in version 6.2. This solution aggregates local storage devices, such as mechanical disks or SSDs, to create a highly available datastore. There are two deployment models: the hybrid solution and the all-flash solution.

In the hybrid solution, you must have flash devices for the cache and mechanical devices (SAS or SATA) for the capacity. In the all-flash solution, you have only flash devices for both cache and capacity. The disks, whether cache or capacity, are aggregated into disk groups. In each disk group, you can have 1 cache device and up to 7 capacity devices. Moreover, each host can handle at most 5 disk groups (35 capacity devices per host).

In this topic I will describe how to implement a hybrid solution in a three-node cluster. For the demonstration, the ESXi nodes are located in virtual machines hosted by VMware Workstation. Unfortunately, Hyper-V on Windows Server 2016 does not handle ESXi very well: only IDE controllers and legacy network adapters are supported. So I can’t use my Storage Spaces Direct lab to host a vSAN cluster 🙂.

VMware vSAN lab overview

To run this lab, I have installed VMware Workstation 12.x Pro on a traditional machine (a gaming computer) running Windows 10 version 1607. Each ESXi virtual machine is configured as below:

  • ESXi 6.0 update 2
  • 2x CPU with 2x Cores each
  • 16GB of memory (6GB required and more than 8GB recommended)
  • 1x OS disk (40GB)
  • 15x hard disks (10GB each)

Then I deployed vCenter Server 6.0 update 2 in a single Windows Server 2012 R2 virtual machine.

I have deployed the following networks:

  • Management: 10.10.0.0/24 (VLAN ID: 10) – Native VLAN
  • vSAN traffic: 10.10.101.0/24 (VLAN ID: 12)
  • vMotion traffic: 10.10.102.0/24 (VLAN ID: 13)

In this topic, I assume that you have already installed your ESXi hosts and vCenter Server. I also assume that each server is reachable on the network and that you have created at least one datacenter in the inventory. All the screenshots have been taken from the vSphere Web Client.

Add ESXi host to the inventory

First of all, connect to your vSphere Web Client and navigate to Hosts and Clusters. As you can see in the following screenshot, I have already created several datacenters and folders. To add a host to the inventory, right-click on a folder and select Add Host.

Next specify the host name or IP address of the ESXi node.

Then specify the credentials to connect to the host. Once the connection is made, a permanent account is created and used for management instead of the specified account.

Then select the license to assign to the ESXi node.

On the next screen, choose whether you want to prevent users from logging in directly to this host.

To finish, choose the VM location.

Repeat these steps to add more ESXi nodes to the inventory. For vSAN usage, I will add two additional nodes. If you prefer the command line, the hosts can also be added with PowerCLI, as in the sketch below.
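This is a sketch under the assumption that the folder is named vSAN-Lab and the hosts esx01 to esx03; adapt the names and credentials to your lab.

# Add the three ESXi nodes to the inventory (names, folder and credentials are examples)
Connect-VIServer -Server vcenter.vsphere.lab
'esx01.vsphere.lab','esx02.vsphere.lab','esx03.vsphere.lab' | ForEach-Object {
    Add-VMHost -Name $_ -Location (Get-Folder -Name 'vSAN-Lab') -User root -Password 'MyPassword' -Force
}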

Create and configure the distributed switch

When you buy a vSAN license, support for a single distributed switch is included. To support the vSAN, vMotion and management traffic, I’m going to create a distributed switch with three VMKernel adapters. To create the distributed switch, navigate to Networking, right-click on the datacenter and choose New Distributed Switch as below.

Specify a distributed switch name and click on Next.

Choose a distributed switch version. Because I only have ESXi 6.0 hosts, I choose the latest version of the distributed switch.

Next, change the number of uplinks as needed and specify the name of the port group. This port group will contain the VMKernel adapters for vMotion, vSAN and management traffic.

Once the distributed switch is created, click on it and navigate to Manage and Topology. Click on the button circled in red in the below screenshot to add physical NICs to the uplink port group and to create VMKernel adapters.

In the first screen of the wizard, select Add hosts.

Specify each host name and click on Next.

Leave the default selection and click on Next. By selecting these tasks, I’ll add physical adapters to the uplink port group and create the VMKernel adapters.

In the next screen, assign the physical adapter (vmnic0) to the uplink port group of the distributed switch that has just been created. Once you have assigned all physical adapters, click on Next.

On the next screen, I’ll create the VMKernel adapters. To create them, just click on New adapter.

Select the port group associated to the distributed switch and click on Next.

Then select the purpose of the VMKernel adapter. For this one I choose Virtual SAN traffic.

Then specify an IP address for this virtual adapter. Click on Next to finish the creation of VMKernel adapter.

I then create another VMKernel adapter for the vMotion traffic.

Repeat the creation of VMKernel adapters for each ESXi host. At the end, you should have something like below:

Before making the configuration, the wizard analyzes the impact. Once everything is OK, click on Next.

When the distributed switch is configured, it looks like this. The whole networking part can also be scripted with PowerCLI, as in the sketch below.
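This is an assumption-heavy outline rather than an exact equivalent of every wizard step: the switch, port group and host names are examples, and the IP plan follows the lab addressing described above.

# Create the distributed switch, add the hosts and create vSAN/vMotion VMKernel adapters
# (switch, port group, host names and IPs are examples)
$dc  = Get-Datacenter -Name 'MyDatacenter'
$vds = New-VDSwitch -Name 'DSwitch-Lab' -Location $dc -NumUplinkPorts 2
$pg  = New-VDPortgroup -VDSwitch $vds -Name 'Converged'

$i = 0
foreach ($name in 'esx01.vsphere.lab','esx02.vsphere.lab','esx03.vsphere.lab') {
    $i++
    $vmhost = Get-VMHost -Name $name

    # Attach the host and one physical NIC to the vDS
    Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds `
        -VMHostPhysicalNic (Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic0) -Confirm:$false

    # vSAN traffic VMKernel adapter
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vds -PortGroup $pg.Name `
        -IP "10.10.101.1$i" -SubnetMask 255.255.255.0 -VsanTrafficEnabled $true | Out-Null

    # vMotion traffic VMKernel adapter
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vds -PortGroup $pg.Name `
        -IP "10.10.102.1$i" -SubnetMask 255.255.255.0 -VMotionEnabled $true | Out-Null
}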

Create the cluster

Now the distributed switch and virtual network adapters are set, we can create the cluster. Come back to Hosts and Clusters in the navigator. Right click on your folder and select New Cluster.

Give a name to your cluster and, for the moment, just turn on Virtual SAN. I choose manual disk claiming because I have to set manually which disks are flash and which are HDD. This is because the ESXi nodes are in VMs and all hard disks are detected as flash.

Next, move the nodes into the cluster (drag and drop). Once all nodes are in the cluster, you should have an alert saying that there is no capacity. This is because we have selected manual claiming and no disks are claimed for vSAN yet. The cluster creation can also be done with PowerCLI, as in the sketch below.
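This is only a sketch; the folder, cluster and host names are the examples used earlier in this topic.

# Create the vSAN cluster with manual disk claiming, then move the hosts into it
# (folder, cluster and host names are examples)
$cluster = New-Cluster -Name 'vSAN-Cluster' -Location (Get-Folder -Name 'vSAN-Lab') `
    -VsanEnabled:$true -VsanDiskClaimMode Manual
Get-VMHost -Name 'esx01.vsphere.lab','esx02.vsphere.lab','esx03.vsphere.lab' |
    Move-VMHost -Destination $cluster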

Claim storage devices into vSAN

To claim disks, select the cluster where vSAN is enabled and navigate to Disk Management. Click on the button circled in red in the below screenshot:

As you can see in the below screenshot, all the disks are marked as flash. In this topic I want to implement a hybrid solution. The vSphere Web Client offers the opportunity to mark a disk manually as HDD. This is possible because, in production, some hardware is not detected correctly; in that case, you can set it manually. For this lab, I leave three disks as flash and I mark 12 disks as HDD on each node. With this configuration, I will create three disk groups, each composed of one cache device and four capacity devices.

Then you have to claim the disks. For each node, select the three flash disks and claim them for the cache tier. All the disks that you have marked as HDD can be claimed for the capacity tier.

Once the claiming wizard is finished, you should have three disk groups per node. The disk groups can also be created with PowerCLI, as in the sketch below.
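The canonical device names below are placeholders; retrieve the real ones on each host with Get-ScsiLun before adapting this sketch.

# Create one disk group (1 cache device + 4 capacity devices) on a host
# Canonical names are placeholders: list them with Get-ScsiLun -VmHost $vmhost -LunType disk
$vmhost = Get-VMHost -Name 'esx01.vsphere.lab'
New-VsanDiskGroup -VMHost $vmhost `
    -SsdCanonicalName 'mpx.vmhba1:C0:T1:L0' `
    -DataDiskCanonicalName 'mpx.vmhba1:C0:T2:L0','mpx.vmhba1:C0:T3:L0','mpx.vmhba1:C0:T4:L0','mpx.vmhba1:C0:T5:L0'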

If you want to assign the license to your vSAN, navigate to Licensing and select the license.

Final configuration

Now that vSAN is enabled, you can turn on vSphere HA and vSphere DRS to distribute virtual machines across the nodes.

Some vSphere HA settings must be changed in a vSAN environment. You can read these recommendations in this post.

VM Storage policy

vSAN relies on VM Storage Policies to configure the storage capabilities. This configuration is applied on a per-VM basis through the VM Storage Policy. We will discuss VM Storage Policies in another topic, but for the moment, just verify that the Virtual SAN Default Storage Policy exists in the VM Storage Policies store.

Conclusion

In this topic we have seen how to create a vSAN cluster. There is no real challenge in this, but it is just the beginning: to use vSAN you have to create VM Storage Policies, and some of the capability concepts are not easy. We will discuss VM Storage Policies later. If you are interested in the equivalent Microsoft solution, you can read this whitepaper.
