vSphere – Tech-Coffee

Getting started with Rubrik to backup VMware VMs

Rubrik is a new competitor in the backup market. Instead of providing only software, as most other products such as Veeam or Commvault do, Rubrik also provides the hardware: it is a backup appliance. It is a turnkey solution where you don't need to handle the underlying hardware and software. All you have to do is rack the appliance in the datacenter and use the service. Rubrik provides a modern HTML5 interface and the product has been designed for ease of management. In this topic, I will introduce Rubrik to back up VMware VMs.

Add vCenter Server

First of all, a vCenter must be added in order to protect VMs. Click on the wheel located in the top right corner and select vCenter Servers.

In the next view, click on the "+" button.

In the pop-up, specify the vCenter name or IP address. Then provide credentials with administrator privileges.
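If you prefer automation, Rubrik also exposes a REST API that can do the same job. Below is a minimal PowerShell sketch; the appliance name is hypothetical and the endpoint and body fields are assumptions based on the CDM v1 API, so check the API explorer of your Rubrik version.

# Hypothetical sketch: register a vCenter through the Rubrik CDM REST API (v1 endpoint assumed)
$rubrik = "https://rubrik.lab"                      # hypothetical appliance FQDN
$cred   = Get-Credential                            # Rubrik admin account
$body   = @{
    hostname = "vcenter.vsphere.lab"                # vCenter FQDN or IP address
    username = "administrator@vsphere.local"        # account with administrator privilege
    password = "MyPassword"
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "$rubrik/api/v1/vmware/vcenter" `
                  -Credential $cred -ContentType "application/json" -Body $body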

When the vCenter is added, the data is fetched and Rubrik makes an inventory of VMs, hosts and folders.

Once the inventory is finished, you can navigate to vSphere VMs to list them.

And what about physical servers and Hyper-V?

Rubrik is able to back up Hyper-V VMs, Nutanix AHV VMs and physical servers. For Hyper-V VMs, you can add an SCVMM instance or Hyper-V hosts. Unfortunately, in the lab kindly provided by Rubrik, no Hyper-V hosts are available. If I have enough time, I'll try to create a guest Hyper-V cluster.

When you want to back up physical servers, you have to install the Rubrik Backup Service on the servers, whether Windows or Linux.

You can protect all these kinds of physical machines.

Protect virtual machines

In the Rubrik world, VM protection is configured via SLA Domains. An SLA Domain is a set of parameters where you define the retention policy and the backup frequency. A restore point is called a snapshot. To create an SLA Domain, navigate to Local Domains and click on the "+" button.

First, specify an SLA Domain name. Then specify when you want to take snapshots and the retention. Snapshots can be scheduled hourly, daily, monthly and yearly, and the retention can be set for each time base. In the snapshot window section, you can configure the backup window and when the first full backup runs. Rubrik makes a full backup the first time; then each snapshot is an incremental. There is no synthetic full backup. In a later topic, we will see the advanced configuration, especially how to configure cloud archiving.

Once you have created your SLA Domain, you can apply it to a VM. Navigate to vSphere VMs (for VMware) and select the VMs you want to protect. You can also protect a folder or a cluster / node. Then select Manage Protection.

Then select the SLA Domain to apply protection.

Once the SLA domain is applied to the VM, you should see its name in the SLA Domain column.

Manage the protection

By clicking on the SLA Domain, you can review its settings and usage. In the Storage part, you can see the capacity used on the Brik. In vSphere VMs, all the VMs protected by this SLA Domain are listed.

If you click on a VM, you can get an overview of the snapshots taken and of how the VM is protected. The calendar on the right shows the restore points. By clicking on a restore point, you can restore your data. We will see the restore process in a later topic.

In the same place, you can review activities to troubleshoot backup issues.

By selecting an activity, you can review its details and download logs.

Rubrik Dashboard

The Rubrik GUI has many dashboards. This one provides information about the hardware, the storage consumption and the throughput.

The following dashboard provides information about SLA Domains, tasks and protection. You can also get information about the local snapshot storage and the archive repository.

Monitor and troubleshoot VMware vSAN performance issue

When you deploy VMware vSAN in a vSphere environment, the solution comes with several tools to monitor, find performance bottlenecks and troubleshoot VMware vSAN issues. All the information that I'll introduce in this topic is built into vCenter. Unfortunately, not all vSAN configuration, metrics and alerts are available yet from the HTML5 client, so the screenshots were taken from the Flash-based vSphere Web Client.

Check the overall health of VMware vSAN

A lot of information is available from the vSAN cluster pane. VMware has added a dedicated tab for vSAN and some performance counters. In the below screenshot, I show the overall vSAN health. VMware has included several tests to validate the cluster health, such as hardware compatibility, the network, the physical disks, the cluster and so on.

The hardware compatibility list is downloaded from VMware to validate whether vSAN is supported on your hardware. If you take a look at the below screenshot, you can see that my lab is not really supported, because my HBAs are not referenced by VMware. Regarding the network, several tests are also run, such as correct IP configuration, the MTU, whether ping is working and so on. Thanks to this single pane, we are able to check whether the cluster is healthy or not.

In the capacity section, you get information about the storage consumption and the deduplication ratio.

In the same pane, you also get a chart which gives you the storage usage by object type (before deduplication and compression).

The next pane is useful when a node was down because of an outage or for updates. When you restart a node in a vSAN cluster, it must resync data from its partner. While the node was down, a lot of data was changed on the storage, and the node must resync this data. This pane indicates which vSAN objects must be resynced to honor the chosen RAID level and FTT (Failures To Tolerate). In case of a resync, this pane indicates how many components must be resynced, the remaining bytes and an estimated time for the process. You can also manage the resync throttling.

In the Virtual Objects pane, you can get the health state of each vSAN object. You can also check whether the object is compliant with the VM storage policy that you have defined (FTT, RAID level, cache pinning, etc.). Moreover, in the physical disk placement tab, you also get the component placement and which components are active or not. In my lab, I have a two-node vSAN cluster and I have defined RAID 1 with FTT=1 in my storage policy. So for each object, I have three components: two copies of the data and a witness.

In the physical disks pane, you can list the physical disks involved in vSAN for each node. You can also see which components are stored on which physical disks.

In the proactive tests, you can test a VM creation to validate that everything is working. For example, this test once helped me troubleshoot an MTU issue between hosts and switches.
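Most of these checks can also be scripted. Below is a minimal PowerCLI sketch, assuming a recent PowerCLI release that ships the vSAN cmdlets (6.5 R1 or later) and a hypothetical cluster name; check Get-Command -Module VMware.VimAutomation.Storage for availability in your version.

# Minimal sketch: vSAN health and resync info from PowerCLI (cmdlet availability depends on your PowerCLI version)
Connect-VIServer -Server vcenter.vsphere.lab        # hypothetical vCenter FQDN
$cluster = Get-Cluster -Name "vSAN-Cluster"         # hypothetical cluster name

Get-VsanSpaceUsage -Cluster $cluster                # capacity usage, similar to the capacity section
Get-VsanResyncingComponent -Cluster $cluster        # components currently resyncing
Test-VsanVMCreation -Cluster $cluster               # the same proactive VM creation test as in the GUI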

vSAN performance counters

Sometimes you get poor performance when you expected better, so you need to find the performance bottleneck. The performance counters can help you troubleshoot the issue. In the performance tab, you get the classical performance counters about CPU, memory and so on.

VMware has also added two sections dedicated to vSAN performance counters: vSAN – Virtual Machine Consumption and vSAN – Backend. The below screenshot shows you the first section. It is useful because it shows the throughput, the latency and the congestion.

The other section presents performance counters related to the backend. You can get the throughput taken by resync jobs, and the IOPS and latency of vSAN.

Step-by-Step: Upgrade VMware vCenter Server Appliance 5.5 to 6.5u1

With the release of vSphere 6.5(u1), a lot of customers are upgrading or migrating their vCenter to 6.5(u1) from older versions such as vSphere 5.5 or 6.0. In this topic, I'll show you how to upgrade VMware vCenter Server Appliance (vCSA) 5.5 to vCSA 6.5. To follow this topic, you need to download the vCSA 6.5(u1) ISO from VMware. Then mount the ISO on a machine. From my side, I have mounted the ISO on my laptop running Windows 10 1607.

The VMware vCSA upgrade is done in two steps:

  • The vCSA deployment
  • The data migration from source to destination

Before beginning you need the following:

  • A new name for the new VM, or rename the old vCenter VM (with an _old suffix, for example; this rename can be scripted from PowerCLI, as shown after this list)
  • A temporary IP address
  • Enough storage for the appliance
  • Enough compute resources to run the appliance
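These prerequisites can be prepared from PowerCLI. Below is a small sketch; the host, VM and credential names are hypothetical.

# Connect to the ESXi host that runs the current vCSA (hypothetical names)
Connect-VIServer -Server esxi01.vsphere.lab -User root

# Rename the old appliance VM so the new appliance can reuse its name
Get-VM -Name "vcsa" | Set-VM -Name "vcsa_old" -Confirm:$false

# Check that a datastore has enough free space for the new appliance
Get-Datastore | Select-Object Name, FreeSpaceGB, CapacityGB

Disconnect-VIServer -Confirm:$false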

Step 1: Deploy a new appliance

Once you have mounted the ISO, run <ISO Drive Letter>\vcsa-ui-installer\win32\installer.exe. Then choose Upgrade.


The next screen introduces the steps to follow to upgrade your appliance from vCSA 5.5 or 6.0 to vCSA 6.5u1. Just click on Next.


On the next screen, just accept the license agreement and click on Next.

In the next window, specify the FQDN or IP address of the source vCenter and the password to connect to it. Then specify the name of the ESXi host which hosts the vCenter appliance. I specify the ESXi host instead of the vCenter because I want to upgrade this very vCenter Server: when the upgrade occurs, the current vCSA will be shut down.


Then choose the deployment type and click on next.


Then specify an ESXi or vCenter name. Because I am migrating the only vCenter I have, I choose to specify the ESXi name and the credentials to connect to it.


Next choose a destination VM folder and click on Next.


Then choose an ESXi in the list.


Next specify a VM name and the root password for the target vCSA.


In the next window, choose the appliance size that fits your needs. The table gives information about the supported number of hosts and VMs.

Next, choose the datastore where you want to store the vCSA VM files. You can also deploy the appliance with thin provisioning.


Next specify the temporary IP address. This IP is used only during the data migration step.


In the next screen, you can review the settings you applied previously. When you have reviewed the settings, just click on Finish to run the vCSA deployment.

Once the appliance deployment is finished, you can click on Continue to proceed to step 2.


Step 2: Migrate configuration from vCSA 5.5 to vCSA 6.5

The next screen introduces step 2, which consists of copying data from the source vCenter Server Appliance to the new appliance.

The next step runs some verifications to check whether the configuration can be migrated. For example, the below screenshot indicates that a plugin cannot be migrated, and warns to check that DRS is not enabled for the ESXi host which runs the new appliance. If DRS were enabled, the new appliance could be moved to another host, and the wizard would no longer be able to contact this VM (we specified the ESXi host in step 1).


In the next screen, the wizard asks you which data you want to migrate.


Then you can choose to join the CEIP or not.


Next, you can review the settings before running the data copy. To run the migration, just click on Finish.

Once the migration is finished, you can connect to the vCenter by using the web client and enjoy the new web interface (either Flash or HTML5). The source appliance should be shut down.



Deploy a 2-node StarWind VSAN Free for VMware ESXi 6.5

StarWind VSAN Free provides a storage solution for several purposes, such as storing virtual machines. With VSAN Free, you can deploy a 2-node cluster to present a highly available storage solution to hypervisors such as Hyper-V or ESXi, for free. When you deploy multiple VSAN Free nodes, the data is synchronized across the network between the nodes.

You can deploy StarWind VSAN Free in a hyperconverged model, where VSAN Free is installed on the hypervisor nodes, or in a disaggregated model, where compute and storage are separated. In the below table, you can find the differences between StarWind VSAN Free and the paid edition.

As you can see, the big differences between both editions are the technical support and the management capabilities. With StarWind VSAN Free, you can manage the product only with PowerShell (plus a 30-day trial of the StarWind Management Console) and you have no access to technical support.

Thanks to this free version, we are able to deploy a highly available storage solution. In this topic, we will see how to deploy StarWind VSAN Free in a 2-node configuration on Windows Server 2016 Core edition. Then we will connect the storage to VMware ESXi.

Requirements

To write this topic, I have deployed the following virtual machines. In production, I recommend implementing the solution on physical servers. Each VM has the following hardware:

  • 2 vCPU
  • 4GB of memory
  • 1x OS disk (60GB, dynamic)
  • 4x data disks (100GB, dynamic)
  • 1x vNIC for management (and heartbeat)
  • 1x vNIC for storage sync

At the end of the topic, I will connect the storage to a VMware ESXi host. So if you want to follow this topic, you need a running vSphere environment.

You can download StarWind VSAN Free here.

Architecture overview

Both StarWind VSAN Free nodes will be deployed with Windows Server 2016 Core edition. Both nodes have two network adapters each. One network is used for the synchronization between the nodes (a non-routed network). The other is used for iSCSI and management. Ideally, you should isolate management and iSCSI traffic on two separate vNICs.

Configure the data disks

Once the operating system is deployed, I run the following script to create a storage pool and a volume to host the StarWind image files.

# Initialize the data disks
Get-Disk | Where-Object OperationalStatus -NotLike "Online" | Initialize-Disk

# Create a storage pool with the previously initialized data disks
New-StoragePool -StorageSubSystemFriendlyName "*VMSAN*" `
                -FriendlyName Pool `
                -PhysicalDisks (Get-PhysicalDisk | Where-Object CanPool -Like $True)

# Create an NTFS volume in 2-way mirroring with maximum size. Letter: D:\
New-Volume -StoragePoolFriendlyName Pool `
           -FriendlyName Storage `
           -FileSystem NTFS `
           -DriveLetter D `
           -PhysicalDiskRedundancy 1 `
           -UseMaximumSize

# Create a folder on D: called Starwind
New-Item -ItemType Directory -Path D:\ -Name Starwind

Install StarWind VSAN Free

I have copied the StarWind VSAN Free binaries to both nodes. Then I run the installer from the command line. On the welcome screen, just click on Next.

In the next screen, accept the license agreement and click on next.

The next window introduces the new features and improvements of StarWind Virtual SAN v8. Once you have read them, just click on next.

Next, choose the folder where the StarWind VSAN Free binaries will be installed.

Then choose which features you want to install. You can install powerful features such as SMI-S to connect to Virtual Machine Manager, the PowerShell management library or the cluster service.

In the next screen choose the start menu folder and click on next.

In the next screen, you can request the free version key. StarWind had already kindly given me a license file, so I choose Thank you, I do have a key already.

Then I specify the license file and click on Next.

Next, you should see information about the provided license key. Just click on Next.

To finish, click on install to deploy the product.

You have to repeat these steps for each node.

Deploy the 2-node configuration

StarWind provides some PowerShell script samples to configure the product from the command line. To create the 2-node cluster, we will leverage the script CreateHA(two nodes).ps1. You can find the script samples in <InstallPath>\StarWind Software\StarWind\StarWindX\Samples\PowerShell.

Copy scripts CreateHA(two nodes).ps1 and enumDevicesTargets.ps1 and edit them.

Below you can find my edited CreateHA(two nodes).ps1:

Import-Module StarWindX

try
{
    #specify the IP address and credential (this is default cred) of a first node
    $server = New-SWServer -host 10.10.0.46 -port 3261 -user root -password starwind

    $server.Connect()

    $firstNode = new-Object Node

    # Specify the path where image file is stored
    $firstNode.ImagePath = "My computer\D\Starwind"
    # Specify the image name
    $firstNode.ImageName = "VMSto1"
    # Size of the image
    $firstNode.Size = 65536
    # Create the image
    $firstNode.CreateImage = $true
    # iSCSI target alias (lower case only supported because of RFC)
    $firstNode.TargetAlias = "vmsan01"
    # Synchro auto ?
    $firstNode.AutoSynch = $true
    # partner synchronization interface (second node)
    $firstNode.SyncInterface = "#p2=10.10.100.47:3260"
    # partner heartbeat interface (second node)
    $firstNode.HBInterface = "#p2=10.10.0.47:3260"
    # cache size
    $firstNode.CacheSize = 64
    # cache mode (write-back cache)
    $firstNode.CacheMode = "wb"
    # storage pool name
    $firstNode.PoolName = "pool1"
    # synchronization session count. Leave this value to 1
    $firstNode.SyncSessionCount = 1
    # ALUA enable or not
    $firstNode.ALUAOptimized = $true
    
    # Device sector size. Possible values: 512 or 4096 bytes (4096 may be incompatible with some clients!)
    $firstNode.SectorSize = 512

    # 'SerialID' should be between 16 and 31 symbols. If it is not specified, the StarWind service generates it.
    # Note: the second node always gets the same serial ID. You do not need to specify it for the second node.
    $firstNode.SerialID = "050176c0b535403ba3ce02102e33eab"
    
    $secondNode = new-Object Node

    $secondNode.HostName = "10.10.0.47"
    $secondNode.HostPort = "3261"
    $secondNode.Login = "root"
    $secondNode.Password = "starwind"
    $secondNode.ImagePath = "My computer\D\Starwind"
    $secondNode.ImageName = "VMSto1"
    $secondNode.Size = 65536
    $secondNode.CreateImage = $true
    $secondNode.TargetAlias = "vmsan02"
    $secondNode.AutoSynch = $true
    # First node synchronization IP address
    $secondNode.SyncInterface = "#p1=10.10.100.46:3260"
    # First node heartbeat IP address
    $secondNode.HBInterface = "#p1=10.10.0.46:3260"
    $secondNode.ALUAOptimized = $true
        
    $device = Add-HADevice -server $server -firstNode $firstNode -secondNode $secondNode -initMethod "Clear"
    
    $syncState = $device.GetPropertyValue("ha_synch_status")

    while ($syncState -ne "1")
    {
        #
        # Refresh device info
        #
        $device.Refresh()

        $syncState = $device.GetPropertyValue("ha_synch_status")
        $syncPercent = $device.GetPropertyValue("ha_synch_percent")

        Start-Sleep -m 2000

        Write-Host "Synchronizing: $($syncPercent)%" -foreground yellow
    }
}
catch
{
    Write-Host "Exception $($_.Exception.Message)" -foreground red 
}

$server.Disconnect() 

Next, I run the script. An image file is created on both nodes, and these image files are then synchronized.

Thanks to the 30-day trial of the management console, you can get graphical information about the configuration. As you can see below, you have information about the image files.

You can also review the configuration of the network interfaces:

If you browse the Starwind folder on each node, you should see the image files.

Now you can edit and run the script enumDevicesTargets.ps1:

Import-Module StarWindX

# Specify the IP address and credential of the node you want to enum
$server = New-SWServer 10.10.0.46 3261 root starwind

$server.Connect()

if ( $server.Connected )
{
    write-host "Targets:"
    foreach($target in $server.Targets)
    {
        $target
    }
    
    write-host "Devices:"
    foreach($device in $server.Devices)
    {
        $device
    }
    
    $server.Disconnect()
}

By running this script, you should get the following result:

If I run the same script against 10.10.0.47, I get this information:

Connect to vSphere environment

Now that the storage solution is ready, I can connect it to vSphere. So, I connect to the vCenter Web Client and edit the target servers on my software iSCSI adapter. I add the following static target servers.

Next, if you navigate to the Paths tab, you should see both paths marked as Active.

Now you can create a new datastore and use the previously created StarWind image file.

Conclusion

StarWind VSAN Free provides an inexpensive software storage solution for a POC or a small environment. You only need to buy the hardware and deploy the product as we've seen in this topic. If you use Hyper-V, you can deploy StarWind VSAN Free on the Hyper-V nodes to get a hyperconverged solution. Just don't forget that the StarWind VSAN Free edition doesn't provide any technical support (except on the StarWind forum) or a management console (just a 30-day trial).

Connect vSphere 6.5 to iSCSI storage NAS

When you implement an ESXi cluster, you also need shared storage to store the virtual machine files. It can be a NAS, a SAN or vSAN. When using a NAS or a SAN, you can connect vSphere by using Fibre Channel (FC), FC over Ethernet (FCoE) or iSCSI. In this topic, I'd like to share with you how to connect vSphere to iSCSI storage such as a NAS/SAN.

The NAS model used for this topic is a Synology RS815. But from a vSphere perspective, the configuration is the same for other NAS/SAN models.

Understand the types of iSCSI adapters

Before deploying an iSCSI solution, it is important to understand that several types of iSCSI adapters exist:

  • Software iSCSI adapters
  • Hardware iSCSI adapters

The software iSCSI adapter is managed by the VMkernel. This solution binds to standard network adapters without requiring additional network adapters dedicated to iSCSI. However, because this model of iSCSI adapter is handled by the VMkernel, it can increase the CPU overhead on the host.

On the other hand, hardware iSCSI adapters are dedicated physical iSCSI adapters that can offload iSCSI and related network processing from the host. There are two kinds of hardware iSCSI adapters:

  • Independent hardware iSCSI adapters
  • Dependent hardware iSCSI adapters

The independent hardware iSCSI adapter is a third-party adapter that doesn't depend on the vSphere network. It implements its own networking and its own iSCSI configuration and management interfaces. This kind of adapter is able to offload the iSCSI workloads from the host. In other words, this is a Host Bus Adapter (HBA).

The dependent hardware iSCSI adapter is a third-party adapter that depends on the vSphere network and management interfaces. This kind of adapter is also able to offload the iSCSI workloads from the host. In other words, this is a hardware-accelerated adapter.

For this topic, I'll implement a software iSCSI adapter.

Architecture overview

Before writing this topic, I created a vNetwork Distributed Switch (vDS). You can review the vDS implementation in this topic. The NAS is connected to two switches with VLAN 10 and VLAN 52 (VLAN 10 is also used for SMB and NFS for vacation movies, but it is a lab, right :)). From a vSphere perspective, I'll create one software iSCSI adapter with two iSCSI paths.

The vSphere environment is composed of two ESXi nodes in a cluster and a vCenter Server Appliance (VCSA) 6.5. Each host has two standard network adapters where all traffic is converged. From the NAS perspective, there are three LUNs: two for datastores and one for a content library.

NAS configuration

On the Synology NAS, I have created three LUNs called VMStorage01, VMStorage02 and vSphereLibrary.

Then I created four iSCSI targets (two for each ESXi host). This ensures that each node connects to the NAS with two iSCSI paths. Each iSCSI target is mapped to all the LUNs previously created.

Connect vSphere to iSCSI storage

Below you can find the vDS schema of my configuration. At this time, I have one port group dedicated to iSCSI. I also create a second port group for iSCSI.

Configure iSCSI port group

Once you have created your port groups, you need to change the teaming and failover configuration. In the above configuration, each node has two network adapters. Each network adapter is attached to an uplink.

Edit the settings of the port group and navigate to Teaming and failover. In the Failover order list, set an uplink to unused. For the first port group, I set Uplink 2 to unused uplinks.

For the second port group, I set Uplink 1 to unused.

Add VMKernel adapters

From the vDS summary pane, click on Add and Manage Hosts. Then edit the VMKernel network adapters for all hosts. Next, click on new adapter.

Next, select the first iSCSI network and click on next.

On the next screen, just click on next.

Then specify the IP address of the VMKernel adapter.

Repeat these steps for the other nodes.

You can repeat this section for the second VMKernel iSCSI adapter. When you have finished your configuration, you should have something like this:

Add and configure the software iSCSI adapter

Then select the software iSCSI adapter and navigate to Network Port Binding. Click on the Add button and select both VMKernel network adapters.

Next, navigate to Targets and select dynamic or static discovery, depending on your needs. I choose Static Discovery and click on Add. Create one entry for each path with the right IP and target name.

When the configuration is finished, I have two targets, as below. Run a rescan before continuing.
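The same static targets can be declared from PowerCLI. In this sketch, the host name, the target IPs and the IQN are hypothetical; reuse the target names defined on your NAS:

# Get the software iSCSI adapter of the host (hypothetical host name)
$hba = Get-VMHost "esxi01.vsphere.lab" | Get-VMHostHba -Type IScsi |
       Where-Object { $_.Model -like "*Software*" }

# Declare one static target per iSCSI path (hypothetical IPs and IQN)
"10.10.10.10", "10.10.52.10" | ForEach-Object {
    New-IScsiHbaTarget -IScsiHba $hba -Address $_ -Port 3260 `
                       -Type Static -IScsiName "iqn.2000-01.com.synology:RS815.Target-1"
}

# Rescan the HBAs so the LUNs become visible
Get-VMHostStorage -VMHost "esxi01.vsphere.lab" -RescanAllHba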

If you come back to Network Port Binding, both VMKernel network adapters should be marked as Active.

In Paths tab, you should have two paths for each LUN.

Create a datastore

Now that the hosts have visibility of the LUNs, we can create datastores. In vCenter, navigate to the storage tab and right-click on the datacenter (or a folder). Then select New datastore.

Select the datastore type. I create a VMFS datastore.

Then specify a name for the datastore and select the right LUN.

Next, choose the VMFS version. I choose VMFS 6.

Next, specify the partition configuration, such as the datastore size, the block size and so on.

Once the wizard is finished, you should have your first datastore to store VM files.
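If you prefer scripting, the datastore creation looks like this from PowerCLI; the host name and the LUN canonical name are hypothetical, and -FileSystemVersion requires PowerCLI 6.5 or later:

# Find the LUN by its canonical name (hypothetical values)
$vmhost = Get-VMHost "esxi01.vsphere.lab"
$lun    = Get-ScsiLun -VmHost $vmhost -LunType disk |
          Where-Object { $_.CanonicalName -eq "naa.600140500000000000000000000000001" }

# Create a VMFS 6 datastore on that LUN
New-Datastore -VMHost $vmhost -Name "VMStorage01" -Path $lun.CanonicalName `
              -Vmfs -FileSystemVersion 6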

Change the multipath algorithm

By default, the multipath policy is set to Most Recently Used, so only the last used path is used. To leverage both VMKernel adapters simultaneously, you have to change the multipath policy. To change the policy, click on the datastore, select the host and choose Edit multipathing.

Then select Round Robin to use both links. Once done, all paths should be marked as Active (I/O).

Repeat this step for each datastore.
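Changing the policy host by host is tedious; PowerCLI can set Round Robin on all the NAS LUNs in one shot. A sketch, where the vendor filter is an assumption you should adjust to your array:

# Set Round Robin on every LUN presented by the NAS, for every host
Get-VMHost | Get-ScsiLun -LunType disk |
    Where-Object { $_.Vendor -eq "SYNOLOGY" } |    # adjust the filter to your array
    Set-ScsiLun -MultipathPolicy RoundRobin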

Create a datastore cluster

To get access to the Storage DRS feature, you can create a datastore cluster. If you have several datastores dedicated to VMs, you can add them to a datastore cluster and use Storage DRS to optimize resource usage. In the below screenshot, I have two datastores for VMs (VMStorage01 and VMStorage02) and the content library. So, I'm going to create a datastore cluster where VMStorage01 and VMStorage02 are used.

Navigate to Datastore Clusters pane and click on New Datastore Cluster.

Give a name to the datastore cluster and choose if you want to enable the Storage DRS.

Choose the Storage DRS automation level and options.

In the next screen, you can enable the I/O metric for SDRS recommendations, so that I/O workloads are taken into consideration. Then you set the thresholds for the space to leave free on the datastores and for the latency.

Next select ESXi hosts that need access to the datastore cluster.

Choose the datastore that will be used in the datastore cluster.

Once the datastore cluster is created, you should have something like that:

Now, when you create a virtual machine, you can choose the datastore cluster, and vSphere automatically stores the VM files on the least used datastore (according to the Storage DRS policy).
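The datastore cluster can also be built from PowerCLI. A minimal sketch with hypothetical names:

# Create the datastore cluster and enable fully automated Storage DRS
$dsc = New-DatastoreCluster -Name "DSC-VMStorage" -Location (Get-Datacenter "Datacenter")
Set-DatastoreCluster -DatastoreCluster $dsc -SdrsAutomationLevel FullyAutomated

# Move both VM datastores into the cluster
Move-Datastore -Datastore (Get-Datastore "VMStorage01", "VMStorage02") -Destination $dsc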

Deploy a converged network with vSphere 6.5

With the increased network card speeds, we are now able to let several traffic flows share the same network links. We can find network adapters on the market with 10Gb/s, 25Gb/s or 100Gb/s! So, there is no reason anymore to dedicate a network adapter to a specific traffic. Thanks to converged networking, we can deploy VMware ESXi nodes with fewer network adapters. This brings flexibility and software-oriented network management. Once you have configured the VLANs on the switches, you just have to create some port groups from vCenter. In this topic, I'll show you how to deploy a converged network in vSphere 6.5. For this topic, I leverage the vNetwork Distributed Switch, which enables you to deploy a consistent network configuration across nodes.

Network configuration overview

To write this topic, I’ve worked on two VMware ESXi 6.5 nodes. Each node has two network adapters (1Gb/s). Each network adapter is plugged on a separate switch. The switch ports are configured in trunk mode where VLAN 50, 51, 52, 53 and 54 are allowed. The VLAN 50 is untagged. I’ve not set any LACP configuration.

The following network will be configured:

  • Management – VLAN 50 (untagged) – 10.10.50.0/24: will be used to manage ESXi nodes
  • vMotion – VLAN 51 – 10.10.51.0/24: used for vMotion traffic
  • iSCSI – VLAN 52 – 10.10.52.0/24: network dedicated for iSCSI
  • DEV – VLAN 53 – 10.10.53.0/24: testing VM will be connected to this network
  • PROD – VLAN 54 – 10.10.54.0/24: production VM will be connected to this network

I’ll call the vNetwork Distributed Switch (vDS) with the following name: vDS-CAN-1G. To implement the following design I will need:

  • 1x vNetwork Distributed Switch
  • 2x Uplinks
  • 5x distributed port groups

So, let’s go 🙂

vNetwork Distributed Switch creation

To create a distributed switch, open the vSphere Web Client and navigate to the network menu (in the navigator). Right-click on your datacenter and select Distributed Switch | New Distributed Switch.

Then specify a name for the distributed switch. I call mine vDS-CNA-1G.

Next, choose a distributed switch version. Depending on the version, you get access to more features. I choose the latest version: Distributed switch 6.5.0.

Next, you can specify the number of uplinks. In this example, only two uplinks are required, but I leave the default value of 4. You can also choose whether to enable Network I/O Control (NIOC); this feature provides QoS management. Then I choose to create a default port group called Management. This port group will contain the VMKernel adapters used to manage the ESXi nodes.

Once you have reviewed the settings, you can click on finish to create the vNetwork distributed switch.

Now that the vDS is created, we can add hosts to it.
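The vDS creation can also be scripted from PowerCLI. A minimal sketch, assuming a hypothetical datacenter name:

# Create the distributed switch with 4 uplinks in version 6.5.0
$vds = New-VDSwitch -Name "vDS-CNA-1G" -Location (Get-Datacenter "Datacenter") `
                    -Version "6.5.0" -NumUplinkPorts 4

# Create the default Management port group
New-VDPortgroup -VDSwitch $vds -Name "Management"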

Add hosts to vNetwork distributed switch

To add hosts to the vDS, click on the Add and Manage Hosts icon (at the top of the vDS summary page). Next, choose Add hosts.

Next click on New hosts and add each host you want.

Check the following tasks:

  • Manage physical adapters: association of physical network adapters to uplinks
  • Manage VMKernel adapters: manage VMKernel adapters (host virtual NICs).

In the next screen, for each node, add the physical network adapters to the uplinks. In this example, I have added vmnic0 of both nodes to Uplink 1 and vmnic1 to Uplink 2.

When you deploy ESXi, a vSwitch0 is created by default with one VMKernel adapter for management. This vSwitch is a standard switch. To move the VMKernel adapter to the vDS without losing connectivity, we can reassign it to the vDS. To make this operation, select the VMKernel adapters and click on Assign port group. Then select the Management port group.

The next screen presents the impact of the network configuration. When you have reviewed the impacts, you can click on Next to add the hosts to the vDS and assign the adapters.
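From PowerCLI, adding a host and migrating vmnic0 together with vmk0 can be done in one call, which also avoids losing the connection. A sketch with a hypothetical host name:

$vds    = Get-VDSwitch -Name "vDS-CNA-1G"
$vmhost = Get-VMHost "esxi01.vsphere.lab"           # hypothetical host name

# Add the host to the vDS
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost

# Migrate the physical NIC and the management VMKernel adapter in one operation
$vmnic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic0"
$vmk   = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name "vmk0"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $vmnic `
    -VMHostVirtualNic $vmk -VirtualNicPortgroup (Get-VDPortgroup "Management" -VDSwitch $vds)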

Add additional distributed port group

Now that the hosts are associated with the vDS, we can add more distributed port groups. In this section, I add a distributed port group for vMotion. From the vDS summary pane, click on the New Distributed Port Group icon (at the top of the pane). Give a name to the distributed port group.

In the next screen, you can configure the port binding and the port allocation. You can find more information about port binding in this topic. The recommended port binding for general use is Static binding. I set the number of ports to 8, but because I configure the port allocation to Elastic, the number of ports is increased or decreased as needed. To finish, I set the VLAN ID to 51.
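The equivalent port group creation from PowerCLI could look like this (the name and values come from the steps above):

# Create the vMotion port group with VLAN ID 51 and 8 ports
New-VDPortgroup -VDSwitch (Get-VDSwitch "vDS-CNA-1G") -Name "vMotion" `
                -VlanId 51 -NumPorts 8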

Add additional VMKernel adapters

Now that the distributed port group is created, we can add VMKernel adapters to this port group. Click on Add and Manage Hosts from the vDS summary pane. Then select Manage host networking.

Next click on Attached hosts and select hosts you want.

In the next screen, just check Manage VMKernel adapters.

Then click on New adapter.

In Select an existing network area, click on Browse and choose vMotion.

In the next screen, check the vMotion service. In this way, the vMotion traffic will use this VMKernel adapter.

To finish, specify TCP/IP settings and click on finish.
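The VMKernel adapter creation can be scripted as well; a sketch with a hypothetical host name and IP plan:

# Create a vMotion-enabled VMKernel adapter on the new port group
New-VMHostNetworkAdapter -VMHost (Get-VMHost "esxi01.vsphere.lab") `
    -VirtualSwitch (Get-VDSwitch "vDS-CNA-1G") -PortGroup "vMotion" `
    -IP 10.10.51.11 -SubnetMask 255.255.255.0 -VMotionEnabled $true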

When this configuration is finished, the vDS schema looks like this:

So we have two port groups and two uplinks. In this configuration, we have converged the management and vMotion traffic. Note that the Management network has no VLAN ID because I've set VLAN 50 as untagged on the switches.

Final result

By repeating the above steps, I have created more distributed port groups. I have not yet created the VMkernel iSCSI adapters (that's for a next topic about storage :)), but I think you get the idea. If you compare the below schema with the one in the network overview, they are very similar.

The final job concerns QoS, to leave enough bandwidth for specific traffic such as vMotion. You can set the QoS thanks to Network I/O Control (NIOC).

Deploy ESXi 6.5 from USB stick and unattended file

VMware ESXi 6.5 was released last month, and I decided to share with you how I deployed ESXi 6.5 from a USB stick with an unattended file. There is no major new feature in ESXi 6.5 related to deployment from an unattended file, but I decided to build a vSphere lab and deploy the ESXi nodes without a single click.

This topic shows you how to prepare a USB stick and an unattended file to deploy ESXi 6.5 almost automatically.

Architecture overview

In order for the following deployment to work, I have done some configuration from a network perspective. I have configured the following:

  • DHCP Server
  • DNS Server (forward and reverse lookup zone)

The network address space is 10.10.50.0/24. In the DHCP server configuration, I have set a static IP address (a reservation) for both ESXi hosts:

Below you can find the forward lookup zone configuration in the Synology:

And below, you can find the reverse lookup zone:

Thanks to this configuration, each ESXi host obtains its production IP address (through DHCP) and its hostname (through the reverse lookup zone) during deployment. Then, by using a script, the IP address just has to be switched from DHCP to static.

Requirements

To follow this topic, you need the following:

  • A USB stick with at least 8GB
  • Rufus to prepare the USB stick
  • ISO of VMware ESXi 6.5

Prepare the USB stick

To prepare the USB stick, plug it into your computer and run Rufus. This software is portable. Select the ESXi 6.5 ISO image and set Rufus as in the following screenshot:

If you get the following message when you start the format, just click on Yes.

Build the unattend file

To deploy my ESXi hosts, I have used the following script. You can find explanations in the comments. This script can be used for each ESXi host to deploy, as long as the static IP in DHCP and the DNS records are set.


# Accept the VMware license agreement
accepteula

# Set the root password
rootpw MyPassword

# Install ESXi on the first disk (local first, then remote, then USB)
install --firstdisk --overwritevmfs

# Set the keyboard
keyboard French

# Set the network
network --bootproto=dhcp

# Reboot the host after the installation is completed
reboot

# Run the following commands only on the first boot
%firstboot --interpreter=busybox

# Enable & start the remote ESXi Shell (SSH)
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

# Enable & start the ESXi Shell (TSM)
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell

# Suppress the ESXi Shell warning - Thanks to Duncan (https://www.yellow-bricks.com/2011/07/21/esxi-5-suppressing-the-localremote-shell-warning/)
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

# Network adapter to configure
NetName="vmk0"

# Get the IP address assigned by DHCP
IPAddress=$(localcli network ip interface ipv4 get | grep "${NetName}" | awk '{print $2}')

# Get the netmask assigned by DHCP
NetMask=$(localcli network ip interface ipv4 get | grep "${NetName}" | awk '{print $3}')

# Get the gateway provided by DHCP
Gateway=$(localcli network ip interface ipv4 get | grep "${NetName}" | awk '{print $6}')

DNS="10.10.0.229"
VlanID="50"

# Get the hostname assigned thanks to the reverse lookup zone
HostName=$(hostname -s)
SuffixDNS="vsphere.lab"
FQDN="${HostName}.${SuffixDNS}"

# Set a static IP + default route + DNS
esxcli network ip interface ipv4 set --interface-name=${NetName} --ipv4=${IPAddress} --netmask=${NetMask} --type=static --gateway=${Gateway}
esxcli network ip dns server add --server ${DNS}

# Set the VLAN ID on the management port group
esxcli network vswitch standard portgroup set --portgroup-name "Management Network" --vlan-id ${VlanID}

# Disable IPv6
esxcli network ip set --ipv6-enabled=0

# Set the DNS suffix and the FQDN of the host
esxcli system hostname set --fqdn=${FQDN}
esxcli network ip dns search add --domain=${SuffixDNS}

# NTP configuration (thanks to https://www.virtuallyghetto.com)
cat > /etc/ntp.conf << __NTP_CONFIG__
restrict default kod nomodify notrap noquery nopeer
restrict 127.0.0.1
server 0.fr.pool.ntp.org
server 1.fr.pool.ntp.org
__NTP_CONFIG__

/sbin/chkconfig ntpd on

# Rename the local datastore to something more meaningful
vim-cmd hostsvc/datastore/rename datastore1 "Local - $(hostname -s)"

# Restart one last time
reboot

Save the file and name it ks.cfg. Copy the file to the root of the USB stick.

Use the unattend file during deployment

Now we have to configure the boot process to load ks.cfg automatically during the deployment. Open the USB stick and edit boot.cfg. Replace the line kernelopt=runweasel with kernelopt=ks=usb:/ks.cfg, as shown below.
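For reference, here is the relevant line of boot.cfg before and after the change:

# boot.cfg (at the root of the USB stick), before the change
kernelopt=runweasel

# boot.cfg, after the change
kernelopt=ks=usb:/ks.cfg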

Unplug the USB stick and plug it into the server. You can then boot from the USB key to run the installer.

Deployment

During the deployment, the installer loads the ks.cfg config file.

It starts by checking that the config file is correct.

After the first reboot, the installer configures the ESXi host as specified in the config file.

Once the system has rebooted a second time, the configuration is finished.

For example, SSH and the ESXi Shell are enabled as expected.

Conclusion

VMware provides a way to quickly deploy standardized ESXi hosts. If your infrastructure is not ready and you don't have Auto Deploy yet, deployment with an unattended file can be a good option.

Step-by-Step: Deploy vCenter Server Appliance (VCSA) 6.5

VMware vCenter is the management software for your vSphere environment. It enables you to manage all of your VMware virtual infrastructure from a single pane of glass. Last month, VMware released vSphere 6.5, which includes vCenter. vCenter comes in two versions:

  • Software to be deployed on a Windows Server (physical or virtual)
  • A virtual appliance that is based on Linux (vCenter Server Appliance: VCSA)

Since vSphere 6, the VCSA can manage more hosts and more VMs, and it is more robust and scalable. With vSphere 6.5, the VCSA supports the new native vCenter High Availability, which is available only for the VCSA (not for Windows).

The below table compares the Windows vCenter and VCSA scalability (vSphere 6.0 information):

As you can see, there is no advantage anymore to using the Windows vCenter. Moreover, with vSphere 6.5, Update Manager is integrated into vCenter: you don't need Windows for that anymore. The VCSA is free, whereas you have to pay for a Windows license for the Windows vCenter. The only con of the VCSA is that it is a black box.

In this topic, I’ll show you how to deploy a standalone VCSA 6.5 from a client computer.

Requirements

To deploy your VCSA 6.5 you need the following:

  • A running ESXi host reachable from the network
  • The ISO of VCSA 6.5 (you can download it from here)
  • At least 4GB on your host and 20GB on a datastore

Step 1: Deploy the VCSA on an ESXi

Once you have downloaded the VCSA 6.5 ISO, you can run vcsa-ui-installer\win32\installer.exe.

When you run the installer, you can see that you have several options:

  • Install: to run the VCSA installation (I choose this option)
  • Upgrade: if you want to upgrade an existing VCSA to 6.5 version
  • Migrate: to migrate a Windows vCenter Server to vCenter Server Appliance
  • Restore: to recover the VCSA from a previous backup

In the next screen, the wizard explains that there are two steps to deploy the VCSA. In the first step, we will deploy the appliance, and in the second one, we will configure it.

Next, you have to accept the license agreement and click on Next.

Then choose the deployment model. You can embed the Platform Services Controller (PSC) with the vCenter Server, or you can separate the roles as explained in the below schema. The PSC manages SSO, the certificate stores, the licensing service and so on. The second deployment model is recommended when you want to share these services between multiple vCenter Server instances. For this example, I choose the first one and click on Next.

Then specify the ESXi host or the vCenter Server where the appliance will be deployed. I specify a running ESXi host, the management port and the root credentials.

Next I specify the VM Name and the root password for the VCSA.

In the next screen, you can choose the appliance size. The bigger the virtual infrastructure, the more vCPU, RAM and storage the VCSA needs.

Then choose a datastore where the VM will be deployed and click on next.

In the next screen, specify the network configuration of the VCSA. If you specify an FQDN as the system name, be sure that the entry exists (with the right IP address) in the DNS server; otherwise you will get an error message.
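You can verify the DNS records from your client machine before starting the deployment; the name and IP address below are hypothetical:

# Check the forward and reverse lookups (Windows PowerShell 3.0 and later)
Resolve-DnsName -Name vcsa.vsphere.lab -Type A
Resolve-DnsName -Name 10.10.50.20 -Type PTR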

To run the appliance deployment, click on finish in the below screen.

While the deployment runs, a progress bar shows you where you are in the deployment process.

If you connect to the ESXi host from the web interface, you can see that the VM has been deployed.

When the deployment is completed, you should see the below screen.

Click on Continue to proceed to step 2.

Step 2: Configure the appliance

In step 2, we will configure the appliance. In the first screen, just click on Next.

Then, specify some NTP servers to synchronize the time.

In the next screen, provide SSO information to manage your vSphere infrastructure.

Next, you can choose whether to join the VMware Customer Experience Improvement Program (CEIP).

To finish, click on finish to run the configuration.

During the configuration, a progress bar informs you where you are in the process.

Once the configuration is finished, you should have the below screen.

You can now connect to the vSphere Web Client. The URL is indicated in the above screenshot.

Appliance monitoring

The VCSA provides an interface for monitoring. You can connect at https://<SystemName>:5480 using the root credentials.

As you can see in the below screenshot, you can have the overall health status from this interface.

You can also monitor the CPU and memory of the appliance.

And you can also update the appliance from this interface.

Conclusion

Since vSphere 6.0, the VCSA has really been highlighted by VMware. Moreover, since vSphere 6.5, Update Manager (VUM) is integrated into vCenter. From my point of view, there is no advantage to using the Windows vCenter Server anymore compared to the VCSA. As you have seen in this topic, the VCSA deployment is really turnkey and easy.
