Network – Tech-Coffee

Configure Dell S4048 switches for Storage Spaces Direct

When we deploy Storage Spaces Direct (S2D), either hyperconverged or disaggregated, we have to configure the networking part. We usually work with Dell hardware to deploy Storage Spaces Direct, and one of the switches supported by the Dell reference architectures is the Dell S4048 (Force10). In this topic, we will see how to configure this switch from scratch.

This topic has been co-written with Frederic Stefani, Dell solution architect.

Stack or not

Customers usually know the stacking feature, which is common to all network vendors such as Cisco, Dell, HP and so on. This feature combines several identical switches into a single configuration managed by a master switch. Because all switches share the same configuration, network administrators see them as a single switch: they connect to the master switch and edit the configuration of every member of the stack from there.

While stacking looks attractive on paper, there is a major issue, especially with a storage solution such as S2D. With an S4048 stack, when you run an update, all switches reload at the same time. Because S2D relies heavily on the network, your storage solution will crash. This is why the Dell reference architecture for S2D recommends deploying a VLT (Virtual Link Trunking).

With stacking you have a single control plane (you configure all switches from a single switch) and a single data plane in a loop-free topology. In a VLT configuration, you also have a single data plane in a loop-free topology, but several control planes, which allows you to reboot the switches one by one.

For this reason, the VLT (or MLAG) technology is the preferred way for Storage Spaces Direct.

S4048 overview

An S4048 switch has 48x 10Gb/s SFP+ ports, 6x 40Gb/s QSFP+ ports, a management port (1Gb/s) and a serial port. The management and serial ports are located on the back. In the below diagram, there are three kinds of connections:

  • Connection for S2D (in this example from port 1 to 16, but you can connect nodes up to port 48)
  • VLTi connection
  • Core connection: the uplink to connect to core switches

In the below architecture schema, you can find both S4048 switches interconnected through the VLTi ports, and several S2D nodes (hyperconverged or disaggregated, it doesn't matter) connected to ports 1 to 16. In this topic, we will configure the switches according to this design.

Switches initial configuration

When you start the switch for the first time, you have to configure the initial settings such as the switch name, IP address and so on. Plug a serial cable from the switch to your computer and connect with a terminal emulator using the following settings:

  • Baud Rate: 115200
  • No Parity
  • 8 data bits
  • 1 stop bit
  • No flow control

Then you can run the following configuration:

Enable
Configure

# Configure the hostname
hostname SwitchName-01

# Set the IP address to the management ports, to connect to switch through IP
interface ManagementEthernet 1/1
ip address 192.168.1.1/24
no shutdown
exit

# Set the default gateway
ip route 0.0.0.0/0 192.168.1.254

# Enable SSH
ip ssh server enable

# Create a user and a password to connect to the switch
username admin password 7 MyPassword privilege 15

# Disable Telnet through IP
no ip telnet server enable
Exit

# We leave Rapid Spanning Tree Protocol enabled.
protocol spanning-tree rstp
no disable
Exit

Exit

# Write the configuration in memory
Copy running-configuration startup-configuration

After this configuration is applied, you can connect to the switch through SSH. Apply the same configuration to the other switch (except for the name and IP address).

Configure switches for RDMA (RoCEv2)

N.B.: For this part we assume that you know how RoCE v2 works, especially DCB, PFC and ETS.

Because we implement these switches for S2D, we have to configure them for RDMA (RDMA over Converged Ethernet v2). Don't forget that with RoCE v2, you have to configure DCB and PFC end to end, on the server side and on the switch side. In this configuration, we assume that you use priority 3 for SMB traffic.
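As a reminder of what "end to end" means here, the S2D nodes themselves need a matching DCB configuration. The following PowerShell is only a sketch of the host side (priority 3 for SMB and 50% of the bandwidth through ETS, mirroring the dcb-map configured below); the adapter names are assumptions:

# Tag SMB Direct (TCP 445) traffic with dot1p priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable PFC for priority 3 only
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# ETS: reserve 50% of the bandwidth for the SMB traffic class
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply DCB to the RDMA adapters (assumed names) and ignore the switch DCBX advertisements
Enable-NetAdapterQos -Name "Storage-101", "Storage-102"
Set-NetQosDcbxSetting -Willing $false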

# By default the queue value is 0 for all dot1p (QoS) traffic. We enable this command globally to change this behavior (see https://www.dell.com/support/manuals/fr/fr/frbsdt1/force10-s4810/s4810_9.7.0.0_cli_pub-v1/service-class-dynamic-dot1p?guid=guid-6bbc7b99-4dde-433c-baf2-98a614eb665e&lang=en-us).
service-class dynamic dot1p

# Enable Data Center Bridging. This allows lossless, latency-sensitive traffic to be handled in a Priority Flow Control (PFC) queue.
dcb enable

# Provide a name to the DCB buffer threshold
dcb-buffer-threshold RDMA
priority 3 buffer-size 100 pause-threshold 50 resume-offset 35
exit

# Create a dcb map to configure the PFC and ETS (Enhanced Transmission Selection) rules
dcb-map RDMA

# For priority group 0, we allocate 50% of the bandwidth and PFC is disabled
priority-group 0 bandwidth 50 pfc off

# For priority group 3, we allocate 50% of the bandwidth and PFC is enabled
priority-group 3 bandwidth 50 pfc on

# Priority group 3 contains traffic with dot1p priority 3.
priority-pgid 0 0 0 3 0 0 0 0

Exit

Exit
Copy running-configuration startup-configuration

Repeat this configuration on the other switch.

VLT domain implementation

First of all, we have to create a port channel with two QSFP+ ports (ports 1/49 and 1/50):

Enable
Configure

# Configure the port-channel 100 (make sure it is not used)
interface Port-channel 100

# Provide a description
description VLTi

# Do not apply an IP address to this port channel
no ip address

#Set the maximum MTU to 9216
mtu 9216

# Add port 1/49 and 1/50
channel-member fortyGigE 1/49,1/50

# Enable the port channel
no shutdown

Exit

Exit
Copy Running-Config Startup-Config

Repeat this configuration on the second switch. Then we have to create the VLT domain and use this port channel. Below is the configuration on the first switch:

# Configure the VLT domain 1
vlt domain 1

# Specify the port-channel number which will be used by this VLT domain
peer-link port-channel 100

# Specify the IP address of the other switch
back-up destination 192.168.1.2

# Specify the priority of each switch
primary-priority 1

# Assign an unused MAC address to the VLT
system-mac mac-address 00:01:02:01:02:05

# Give an ID for each switch
unit-id 0

# Wait 10s after a switch reload or peer-link restore before the saved configuration is applied
delay-restore 10

Exit

Exit
Copy Running-Configuration Startup-Configuration

On the second switch, the configuration looks like this:

vlt domain 1
peer-link port-channel 100
back-up destination 192.168.1.1
primary-priority 2
system-mac mac-address 00:01:02:01:02:05
unit-id 1
delay-restore 10

Exit

Exit
Copy Running-Configuration Startup-Configuration

Now the VLT is working. You don't have to specify VLAN IDs on this link: the VLT handles tagged and untagged traffic by itself.

S2D port configuration

To finish the switch configuration, we have to configure the ports and VLANs for the S2D nodes:

Enable
Configure
Interface range Ten 1/1-1/16

# No IP address assigned to these ports
no ip address

# Set the maximum MTU to 9216
mtu 9216

# Enable the management of untagged and tagged traffic
portmode hybrid

# Enable Layer 2 switchport mode; the port is added to the default VLAN to carry untagged traffic.
Switchport

# Configure the port to Edge-Port
spanning-tree 0 portfast

# Enable BPDU guard on these ports
spanning-tree rstp edge-port bpduguard

# Apply the DCB buffer-threshold policy to these ports
dcb-policy buffer-threshold RDMA

# Apply the DCB map to these ports
dcb-map RDMA

# Enable port
no shutdown

Exit

Exit
Copy Running-Configuration Startup-Configuration

You can copy this configuration to the other switch. Now only the VLANs are missing. To create the VLANs and assign them to ports, you can run the following configuration:

Interface VLAN 10
Description "Management"
Name "VLAN-10"
Untagged TenGigabitEthernet 1/1-1/16
Exit

Interface VLAN 20
Description "SMB"
Name "VLAN-20"
tagged TenGigabitEthernet 1/1-1/16
Exit

[etc.]
Exit
Copy Running-Config Startup-Config

Once you have finished, copy this configuration on the second switch.

RDS 2016 Farm: Create Microsoft Azure networks, storage and Windows image

This topic is part of a series about how to deploy a Windows Server 2016 RDS farm in Microsoft Azure. In this topic, we will see how to deploy the Microsoft Azure network resources and the storage account, and how to prepare a Windows image. You can find the other topics of the series on the blog.

Github

I have published the complete JSON template on my GitHub. You can copy it and modify it as you wish.

JSON template explanation

The JSON template consists of parameters, variables and resources. Parameters and variables are easy to understand; resources are a little more complicated. The below resource is a virtual network which takes its settings from parameters and variables. The below JSON code creates a virtual network with four subnets (Internal, DMZ, Cluster and Gateway).

{
      "apiVersion": "[variables('API-Version')]",
      "location": "[variables('ResourcesLocation')]",
      "name": "[parameters('vNETName')]",
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "[parameters('vNETPrefix')]"
          ]
        },
        "subnets": [
          {
            "name": "[parameters('vNETSubIntName')]",
            "properties": {
              "addressPrefix": "[parameters('vNETSubIntPrefix')]"
            }
          },
          {
            "name": "[parameters('vNETSubExtName')]",
            "properties": {
              "addressPrefix": "[parameters('vNETSubExtPrefix')]"
            }
          },
          {
            "name": "[parameters('vNETSubCluName')]",
            "properties": {
              "addressPrefix": "[parameters('vNETSubCluPrefix')]"
            }
          },
          {
            "name": "[Parameters('vNETSubGtwName')]",
            "properties": {
              "addressPrefix": "[Parameters('vNETSubGtwPrefix')]"
            }
          }
        ]
      },
      "tags": {
        "displayName": "Virtual Network"
      },
      "type": "Microsoft.Network/virtualNetworks"
    },

The following block code creates a Public IP address for the Azure Gateway.

{
      "apiVersion": "[variables('API-Version')]",
      "location": "[variables('ResourcesLocation')]",
      "name": "[parameters('S2SPIPName')]",
      "properties": {
        "publicIPAllocationMethod": "Dynamic"
      },
      "tags": {
        "displayName": "Public IP Address"
      },
      "type": "Microsoft.Network/publicIPAddresses"
    }

The following JSON code deploys the local gateway. The S2SGtwOnPremPIP specifies the public IP address of the On-Prem Gateway. The S2SLocalIPSubnet specifies the On-Prem routed IP subnets.

{
      "apiVersion": "[variables('API-version')]",
      "location": "[variables('ResourcesLocation')]",
      "name": "[parameters('S2SGtwOnPremName')]",
      "properties": {
        "localNetworkAddressSpace": {
          "addressPrefixes": [
            "[parameters('S2SLocalIPSubnet')]"
          ]
        },
        "gatewayIpAddress": "[parameters('S2SGtwOnPremPIP')]"
      },
      "tags": {
        "displayName": "Local Gateway"
      },
      "type": "Microsoft.Network/localNetworkGateways"
    }

The following JSON code deploys the Microsoft Azure Gateway by taking the previously created Public IP address. The Microsoft Azure Gateway is located in the gateway subnet.

{
      "apiVersion": "[variables('API-version')]",
      "dependsOn": [
        "[concat('Microsoft.Network/publicIPAddresses/', parameters('S2SPIPName'))]",
        "[concat('Microsoft.Network/virtualNetworks/', parameters('vNETName'))]"
      ],
      "location": "[Variables('Resourceslocation')]",
      "name": "[parameters('S2SGtwAzureName')]",
      "properties": {
        "enableBgp": false,
        "gatewayType": "Vpn",
        "ipConfigurations": [
          {
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "publicIPAddress": {
                "id": "[resourceId('Microsoft.Network/publicIPAddresses',parameters('S2SPIPName'))]"
              },
              "subnet": {
                "id": "[variables('vNETSubGtwRef')]"
              }
            },
            "name": "vnetGatewayConfig"
          }
        ],
        "vpnType": "[parameters('S2SGtwVPNType')]"
      },
      "tags": {
        "displayName": "Azure Gateway"
      },
      "type": "Microsoft.Network/virtualNetworkGateways"
    }

To finish, the following block of code creates a storage account. This storage account will be used for VM diagnostic logs.

{
      "name": "[parameters('StoAcctLogName')]",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2016-05-01",
      "tags": {
        "displayName": "Log Storage Account"
      },
      "sku": {
        "name": "[parameters('StoAcctLogType')]"
      },
      "kind": "Storage",
      "location": "[variables('ResourcesLocation')]"
    }

Import the template

To import the template, connect to Microsoft Azure and search for Template Deployment. Copy and paste the template. You should have something like below:

Then change the parameters as you wish and click on Purchase (don’t worry, it’s free :p).
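Alternatively, the same template can be deployed from PowerShell with the AzureRM module. This is only a sketch: the resource group name, location and file path below are assumptions.

# Sign in and create the target resource group (placeholder names)
Login-AzureRmAccount
New-AzureRmResourceGroup -Name "RG-RDS2016" -Location "West Europe"

# Deploy the JSON template published on GitHub (local copy, placeholder path)
New-AzureRmResourceGroupDeployment -ResourceGroupName "RG-RDS2016" `
                                   -TemplateFile ".\RDSFarm.json" `
                                   -Verbose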

Once the template is deployed, you should have 5 resources as below: the virtual network, the gateways and the storage account are created.

You can review the virtual network configuration as the following screenshot:

The public IP is also created:

Create the VPN connection

Now I create the VPN connection between On-Prem and Microsoft Azure. Select the On-Prem gateway and click on Configuration. Verify that the local gateway IP address is correct.

Then select Connections, and create a new connection. Provide a name, select Site-to-Site and specify the virtual network gateway and the local network gateway. To finish, provide a shared key.
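For reference, the same Site-to-Site connection can also be created from PowerShell. This is only a sketch: the resource names, location and shared key are placeholders.

$rg      = "RG-RDS2016"                                                                 # placeholder resource group
$azureGw = Get-AzureRmVirtualNetworkGateway -Name "GTW-Azure"  -ResourceGroupName $rg   # Azure gateway (placeholder name)
$localGw = Get-AzureRmLocalNetworkGateway   -Name "GTW-OnPrem" -ResourceGroupName $rg   # local gateway (placeholder name)

New-AzureRmVirtualNetworkGatewayConnection -Name "S2S-HomeCloud" `
    -ResourceGroupName $rg `
    -Location "West Europe" `
    -VirtualNetworkGateway1 $azureGw `
    -LocalNetworkGateway2 $localGw `
    -ConnectionType IPsec `
    -SharedKey "<Shared Key>"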

Now, you have to configure your local gateway. I have an Ubiquiti gateway and I have set it with the following command lines:

set vpn ipsec auto-firewall-nat-exclude disable
set vpn ipsec disable-uniqreqids
set vpn ipsec esp-group esp-azure compression disable
set vpn ipsec esp-group esp-azure lifetime 3600
set vpn ipsec esp-group esp-azure mode tunnel
set vpn ipsec esp-group esp-azure pfs disable
set vpn ipsec esp-group esp-azure proposal 1 encryption aes256
set vpn ipsec esp-group esp-azure proposal 1 hash sha1
set vpn ipsec ike-group ike-azure ikev2-reauth no
set vpn ipsec ike-group ike-azure key-exchange ikev2
set vpn ipsec ike-group ike-azure lifetime 28800
set vpn ipsec ike-group ike-azure proposal 1 dh-group 2
set vpn ipsec ike-group ike-azure proposal 1 encryption aes256
set vpn ipsec ike-group ike-azure proposal 1 hash sha1
set vpn ipsec ipsec-interfaces interface pppoe0
set vpn ipsec nat-traversal enable
set vpn ipsec site-to-site peer <Azure Gateway Public IP> authentication mode pre-shared-secret
set vpn ipsec site-to-site peer <Azure Gateway Public IP> authentication pre-shared-secret <Shared Key>
set vpn ipsec site-to-site peer <Azure Gateway Public IP> connection-type initiate
set vpn ipsec site-to-site peer <Azure Gateway Public IP> default-esp-group esp-azure
set vpn ipsec site-to-site peer <Azure Gateway Public IP> ike-group ike-azure
set vpn ipsec site-to-site peer <Azure Gateway Public IP> ikev2-reauth inherit
set vpn ipsec site-to-site peer <Azure Gateway Public IP> local-address any
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 allow-nat-networks disable
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 allow-public-networks disable
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 esp-group esp-azure
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 local prefix 10.10.0.0/16
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 protocol all
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 remote prefix 10.11.0.0/16

Once the VPN is connected, you should have a Succeeded status as below:
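The status can also be checked from PowerShell (the connection and resource group names below are placeholders):

Get-AzureRmVirtualNetworkGatewayConnection -Name "S2S-HomeCloud" -ResourceGroupName "RG-RDS2016" |
    Select-Object Name, ConnectionStatus, EgressBytesTransferred, IngressBytesTransferred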

Create the Windows Server 2016 image

To create the Windows Server 2016 image, first I deploy a new Azure VM. I call it zTemplate.

Then I choose a VM size.

I choose to use managed disks and I connect the VM to the Internal subnet. I don't need a Network Security Group for this VM. I enable the boot diagnostics and I choose the previously created storage account to store the logs.

Once the Azure VM is started, I customize the operating system and I apply updates. Then I run sysprep as below:
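For reference, the standard sysprep command for this scenario generalizes the image and shuts the VM down:

# Run from an elevated prompt inside the VM
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown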

Once the VM is stopped, I click on Capture:

Then I specify an image name and the resource group. I also choose to automatically delete the VM after creating the image.
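The capture can also be scripted with the AzureRM module when the VM uses managed disks. This is a sketch with placeholder resource group, location and image names:

$rg = "RG-RDS2016"      # placeholder resource group

# Deallocate and mark the VM as generalized
Stop-AzureRmVM -ResourceGroupName $rg -Name "zTemplate" -Force
Set-AzureRmVM  -ResourceGroupName $rg -Name "zTemplate" -Generalized

# Create a managed image from the generalized VM
$vm          = Get-AzureRmVM -ResourceGroupName $rg -Name "zTemplate"
$imageConfig = New-AzureRmImageConfig -Location "West Europe" -SourceVirtualMachineId $vm.Id
New-AzureRmImage -Image $imageConfig -ImageName "WS2016-RDS-Image" -ResourceGroupName $rg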

At the end of this topic, I have the following resources in the resource group:

Next topic

In the next topic, we will deploy all the Azure VMs for the Remote Desktop farm. The VMs will be deployed from the Windows image and from a JSON template.

Deploy a converged network with vSphere 6.5

With the increase in network card speeds, we are now able to let several traffic flows share the same network links. We can find network adapters on the market at 10Gb/s, 25Gb/s or even 100Gb/s! So, there is no longer a reason to dedicate a network adapter to a specific traffic. Thanks to converged networking, we can deploy VMware ESXi nodes with fewer network adapters. This brings flexibility and software-oriented network management. Once you have configured the VLANs on the switch side, you just have to create some port groups from the vCenter side. In this topic, I'll show you how to deploy a converged network in vSphere 6.5. For this topic, I leverage the vNetwork Distributed Switch, which enables you to deploy a consistent network configuration across nodes.

Network configuration overview

To write this topic, I’ve worked on two VMware ESXi 6.5 nodes. Each node has two network adapters (1Gb/s). Each network adapter is plugged on a separate switch. The switch ports are configured in trunk mode where VLAN 50, 51, 52, 53 and 54 are allowed. The VLAN 50 is untagged. I’ve not set any LACP configuration.

The following network will be configured:

  • Management – VLAN 50 (untagged) – 10.10.50.0/24: will be used to manage ESXi nodes
  • vMotion – VLAN 51 – 10.10.51.0/24: used for vMotion traffic
  • iSCSI – VLAN 52 – 10.10.52.0/24: network dedicated for iSCSI
  • DEV – VLAN 53 – 10.10.53.0/24: test VMs will be connected to this network
  • PROD – VLAN 54 – 10.10.54.0/24: production VMs will be connected to this network

I’ll call the vNetwork Distributed Switch (vDS) with the following name: vDS-CAN-1G. To implement the following design I will need:

  • 1x vNetwork Distributed Switch
  • 2x Uplinks
  • 5x distributed port groups

So, let’s go 🙂

vNetwork Distributed Switch creation

To create a distributed switch, open vSphere Web Client and navigate to network menu (in navigator). Right click on your datacenter and select Distributed Switch | New Distributed Switch

Then specify a name for the distributed switch. I call mine vDS-CNA-1G.

Next choose a distributed switch version. Depending on the version, you can access more features. I choose the latest version: Distributed switch 6.5.0.

Next you can specify the number of uplinks. In this example, only two uplinks are required, but I leave the default value of 4. You can choose whether to enable Network I/O Control (NIOC); this feature provides QoS management. Then I choose to create a default port group called Management. This port group will contain the VMKernel adapters used to manage the ESXi nodes.

Once you have reviewed the settings, you can click on finish to create the vNetwork distributed switch.

Now that the vDS is created, we can add hosts to it.

Add hosts to vNetwork distributed switch

To add hosts to the vDS, click on the Add and Manage Hosts icon (at the top of the vDS summary page). Next choose Add hosts.

Next click on New hosts and add each host you want.

Check the following tasks:

  • Manage physical adapters: association of physical network adapters to uplinks
  • Manage VMKernel adapters: manage VMKernel adapters (host virtual NICs).

In the next screen, for each node, add the physical network adapters to the uplinks. In this example, I have added the vmnic0 of both nodes to Uplink 1 and the vmnic1 to Uplink 2.

When you deploy ESXi, a vSwitch0 is created by default with one VMKernel adapter for management. This vSwitch is a standard switch. To move the VMKernel adapter to the vDS without losing connectivity, we can reassign it to the vDS. To do this, select the VMKernel adapters and click on Assign port group. Then select the Management port group.

The next screen presents the impact of the network configuration. When you have reviewed the impacts, you can click on next to add and assign hosts to vDS.

Add additional distributed port group

Now that the hosts are associated with the vDS, we can add more distributed port groups. In this section, I add a distributed port group for vMotion. From the vDS summary pane, click on the New Distributed Port Group icon (at the top of the pane). Give a name to the distributed port group.

In the next screen, you can configure the port binding and port allocation. You can find more information about port binding in this topic. The recommended port binding for general use is Static binding. I set the number of ports to 8, but because I configure the port allocation to Elastic, the number of ports is increased or decreased as needed. To finish, I set the VLAN ID to 51.

Add additional VMKernel adapters

Now that the distributed port group is created, we can add VMKernel adapters to it. Click on Add and Manage Hosts from the vDS summary pane. Then select Manage host networking.

Next click on Attached hosts and select the hosts you want.

In the next screen, just check Manage VMKernel adapters.

Then click on New adapter.

In Select an existing network area, click on Browse and choose vMotion.

In the next screen, select the vMotion service so that vMotion traffic will use this VMKernel adapter.

To finish, specify TCP/IP settings and click on finish.

When this configuration is finished, the vDS schema looks like this:

So we have two port groups and two uplinks. In this configuration we have converged the management and vMotion traffic. Note that the Management network has no VLAN ID because I've set VLAN 50 as untagged on the switch side.

Final result

By repeating the above steps, I have created more distributed port groups. I have not yet created the iSCSI VMkernel adapters (that will be for a next topic about storage :)) but you get the idea. If you compare the below schema with the one in the network overview, they are very similar.

The final job concerns QoS, to leave enough bandwidth for specific traffic such as vMotion. You can set the QoS thanks to Network I/O Control (NIOC).

2-node hyperconverged cluster with Windows Server 2016

Last week, Microsoft announced the final release of Windows Server 2016 (the bits can be downloaded here). In addition, Microsoft has announced that Windows Server 2016 now supports a 2-node hyperconverged cluster configuration. I can now publish the setup of my lab configuration, which is almost a production platform: only the SSDs are not enterprise grade and one Xeon is missing per server. But to show you how easy it is to implement a hyperconverged solution, it is fine. In this topic, I will show you how to deploy a 2-node hyperconverged cluster from the beginning with Windows Server 2016. But before running some PowerShell cmdlets, let's take a look at the design.

Design overview

In this part I’ll talk about the implemented hardware and how are connected both nodes. Then I’ll introduce the network design and the required software implementation.


Hardware consideration

First of all, it is necessary to present the design. I have bought two nodes that I have built myself; they are not provided by a manufacturer. Below you can find the hardware that I have implemented in each node:

  • CPU: Xeon 2620v2
  • Motherboard: Asus Z9PA-U8 with ASMB6-iKVM for KVM-over-Internet (Baseboard Management Controller)
  • PSU: Fortron 350W FSP FSP350-60GHC
  • Case: Dexlan 4U IPC-E450
  • RAM: 128GB DDR3 registered ECC
  • Storage devices:
    • 1x Intel SSD 530 128GB for the Operating System
    • 1x Samsung NVMe SSD 950 Pro 256GB (Storage Spaces Direct cache)
    • 4x Samsung SATA SSD 850 EVO 500GB (Storage Spaces Direct capacity)
  • Network Adapters:
    • 1x Intel 82574L 1Gb for VM workloads (two controllers), integrated to the motherboard
    • 1x Mellanox ConnectX-3 Pro 10Gb for storage and live-migration workloads (two controllers). The Mellanox adapters are connected with two passive copper cables with SFP connectors provided by Mellanox
  • 1x Switch Ubiquiti ES-24-Lite 1Gb

If I were in production, I'd replace the SSDs with enterprise-grade SSDs and I'd add an NVMe SSD for the caching. To finish, I'd buy servers with two Xeons. Below you can find the hardware implementation.

Network design

To support this configuration, I have created five network subnets:

  • Management network: 10.10.0.0/24 – VID 10 (Native VLAN). This network is used for Active Directory, management through Remote Desktop or PowerShell and so on. Fabric VMs will also be connected to this subnet.
  • DMZ network: 10.10.10.0/24 – VID 11. This network is used by DMZ VMs such as web servers, AD FS etc.
  • Cluster network: 10.10.100.0/24 – VID 100. This is the cluster heartbeat network.
  • Storage01 network: 10.10.101.0/24 – VID 101. This is the first storage network. It is used for SMB 3.1.1 traffic and for Live-Migration.
  • Storage02 network: 10.10.102.0/24 – VID 102. This is the second storage network. It is used for SMB 3.1.1 traffic and for Live-Migration.

I can’t leverage Simplified SMB MultiChannel because I don’t have a 10GB switch. So each 10GB controller must belong to separate subnets.

I will deploy a Switch Embedded Teaming with the 1Gb network adapters. I will not implement a Switch Embedded Teaming for the 10Gb adapters because a switch is missing.

Logical design

I will have two nodes called pyhyv01 and pyhyv02 (Physical Hyper-V).

The first challenge concerns the failover cluster. Because I have no other physical server, the domain controllers will be virtual. If I place the domain controller VMs in the cluster, how can the cluster start? So the DC VMs must not be in the cluster and must be stored locally. To support high availability, both nodes will host a domain controller locally on the system volume (C:\). In this way, the node boots, the DC VM starts and then the failover cluster can start.

Both nodes are deployed in Core mode because I really don't like graphical user interfaces for hypervisors. I don't deploy Nano Server because I don't like the Current Branch for Business model for Hyper-V and storage usage. The following features will be deployed on both nodes:

  • Hyper-V + PowerShell management tools
  • Failover Cluster + PowerShell management tools
  • Storage Replica (this is optional, only if you need the storage replica feature)

The storage configuration will be easy: I'll create a single Storage Pool with all the SATA and NVMe SSDs. Then I will create two Cluster Shared Volumes that will be distributed across both nodes. The CSVs will be called CSV-01 and CSV-02.

Operating system configuration

I show how to configure a single node. You have to repeat these operations on the second node in the same way. This is why I recommend you put the commands in a script: the script will help avoid human errors.

BIOS configuration

The BIOS settings vary depending on the manufacturer and the motherboard, but I always do the same things on each server:

  • Check that the server boots in UEFI
  • Enable virtualization technologies such as VT-d, VT-x, SLAT and so on
  • Configure the server for high performance (so that the CPUs run at their maximum available frequency)
  • Enable HyperThreading
  • Disable all unwanted hardware (audio card, serial/com port and so on)
  • Disable PXE boot on unwanted network adapters to speed up the boot of the server
  • Set the date/time

Next I check that all the memory is detected and all storage devices are present. When I have time, I run a memtest on the server to validate the hardware.

OS first settings

I have deployed my nodes from a USB stick prepared with Easy2Boot. Once the system is installed, I have deployed the drivers for the motherboard and for the Mellanox network adapters. Because I can't connect to Device Manager with a remote MMC, I use the following commands to check which drivers are installed:

gwmi Win32_SystemDriver | select name,@{n="version";e={(gi $_.pathname).VersionInfo.FileVersion}}
gwmi Win32_PnPSignedDriver | select devicename,driverversion

After all drivers are installed, I configure the server name, the updates, the remote connection and so on. For this, I use sconfig.

This tool is easy to use but doesn't provide automation. You can do the same thing with PowerShell cmdlets, but I have only two nodes to deploy and I find this easier. All you have to do is navigate through the menus and set the parameters. Here I have changed the computer name, enabled Remote Desktop, and downloaded and installed all updates. I strongly recommend installing all updates before deploying Storage Spaces Direct.

Then I set the power plan to High Performance by using the below command:

POWERCFG.EXE /S SCHEME_MIN

Once the configuration is finished, you can install the required roles and features. You can run the following cmdlet on both nodes:

Install-WindowsFeature Hyper-V, Data-Center-Bridging, Failover-Clustering, RSAT-Clustering-Powershell, Hyper-V-PowerShell, Storage-Replica

Once you have run this cmdlet the following roles and features are deployed:

  • Hyper-V + PowerShell module
  • Datacenter Bridging
  • Failover Clustering + PowerShell module
  • Storage Replica

Network settings

Once the OS configuration is finished, you can configure the network. First, I rename network adapters as below:

get-netadapter |? Name -notlike vEthernet* |? InterfaceDescription -like Mellanox*#2 | Rename-NetAdapter -NewName Storage-101

get-netadapter |? Name -notlike vEthernet* |? InterfaceDescription -like Mellanox*Adapter | Rename-NetAdapter -NewName Storage-102

get-netadapter |? Name -notlike vEthernet* |? InterfaceDescription -like Intel*#2 | Rename-NetAdapter -NewName Management01-0

get-netadapter |? Name -notlike vEthernet* |? InterfaceDescription -like Intel*Connection | Rename-NetAdapter -NewName Management02-0

Next I create the Switch Embedded Teaming with both 1Gb network adapters, called SW-1G:

New-VMSwitch -Name SW-1G -NetAdapterName Management01-0, Management02-0 -EnableEmbeddedTeaming $True -AllowManagementOS $False

Now we can create two virtual network adapters for the management and the heartbeat:

Add-VMNetworkAdapter -SwitchName SW-1G -ManagementOS -Name Management-0
Add-VMNetworkAdapter -SwitchName SW-1G -ManagementOS -Name Cluster-100

Then I configure the VLANs on the vNICs and on the storage NICs:

Set-VMNetworkAdapterVLAN -ManagementOS -VMNetworkAdapterName Cluster-100 -Access -VlanId 100
Set-NetAdapter -Name Storage-101 -VlanID 101 -Confirm:$False
Set-NetAdapter -Name Storage-102 -VlanID 102 -Confirm:$False

The below screenshot shows the VLAN configuration on the physical and virtual adapters.

Next I disable VM Queue (VMQ) on the 1Gb network adapters and I configure RSS on the 10Gb network adapters. When I set the processor ranges, I use multiples of 2 because Hyper-Threading is enabled. I start with a base processor number of 2 because it is recommended to leave the first core (core 0) for other processes.

Disable-NetAdapterVMQ -Name Management*

# Core 1, 2 & 3 will be used for network traffic on Storage-101
Set-NetAdapterRSS Storage-101 -BaseProcessorNumber 2 -MaxProcessors 2 -MaxProcessorNumber 4

#Core 4 & 5 will be used for network traffic on Storage-102
Set-NetAdapterRSS Storage-102 -BaseProcessorNumber 6 -MaxProcessors 2 -MaxProcessorNumber 8

Next I configure Jumbo Frame on each network adapter.

Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 9014

Now we can enable RDMA on storage NICs:

Get-NetAdapter *Storage* | Enable-NetAdapterRDMA

The below screenshot is the result of Get-NetAdapterRDMA.

Even if it is not really useful here, because I have no switch and no other connections on the 10Gb network adapters, I configure DCB:

# Turn on DCB
Install-WindowsFeature Data-Center-Bridging

# Set a policy for SMB-Direct
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Turn on Flow Control for SMB
Enable-NetQosFlowControl -Priority 3

# Make sure flow control is off for other traffic
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Apply policy to the target adapters
Enable-NetAdapterQos -InterfaceAlias "Storage-101"
Enable-NetAdapterQos -InterfaceAlias "Storage-102"

# Give SMB Direct 30% of the bandwidth minimum
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 30 -Algorithm ETS

Ok, now that network adapters are configured, we can configure IP addresses and try the communication on the network.

New-NetIPAddress -InterfaceAlias "vEthernet (Management-0)" -IPAddress 10.10.0.5 -PrefixLength 24 -DefaultGateway 10.10.0.1 -Type Unicast | Out-Null
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management-0)" -ServerAddresses 10.10.0.20 | Out-Null

New-NetIPAddress -InterfaceAlias "vEthernet (Cluster-100)" -IPAddress 10.10.100.5 -PrefixLength 24 -Type Unicast | Out-Null

New-NetIPAddress -InterfaceAlias "Storage-101" -IPAddress 10.10.101.5 -PrefixLength 24 -Type Unicast | Out-Null

New-NetIPAddress -InterfaceAlias "Storage-102" -IPAddress 10.10.102.5 -PrefixLength 24 -Type Unicast | Out-Null

#Disable DNS registration of Storage and Cluster network adapter (Thanks to Philip Elder :))

Set-DNSClient -InterfaceAlias Storage* -RegisterThisConnectionsAddress $False
Set-DNSClient -InterfaceAlias *Cluster* -RegisterThisConnectionsAddress $False

Then I test Jumbo Frames: they are working.
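A quick way to test them is a ping with the don't-fragment flag and a payload that nearly fills the 9000-byte IP MTU; the peer address below assumes the second node uses 10.10.101.6 on the first storage network.

# 8972 bytes of ICMP payload + 28 bytes of IP/ICMP headers = 9000 bytes, sent with DF set
ping -f -l 8972 10.10.101.6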

Now my nodes can communicate with other friends through the network. Once you have reproduced these steps on the second node, we can deploy the domain controller.

Connect to Hyper-V remotely

To perform the next actions, I work from my laptop with remote PowerShell. To display the Hyper-V VM consoles, I have installed RSAT on my Windows 10 laptop. Then I have installed the Hyper-V console:

Before being able to connect to Hyper-V remotely, some configuration is required on both the server and the client side. On both nodes, run the following cmdlet:

Enable-WSManCredSSP -Role server

In your laptop, run the following cmdlets (replace fqdn-of-hyper-v-host by the future Hyper-V hosts FQDN):

# -Concatenate appends to the TrustedHosts list instead of overwriting it
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "10.10.0.5"
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "fqdn-of-hyper-v-host" -Concatenate
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "10.10.0.6" -Concatenate
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "fqdn-of-hyper-v-host" -Concatenate

Enable-WSManCredSSP -Role client -DelegateComputer "10.10.0.5"
Enable-WSManCredSSP -Role client -DelegateComputer "fqdn-of-hyper-v-host"
Enable-WSManCredSSP -Role client -DelegateComputer "10.10.0.6"
Enable-WSManCredSSP -Role client -DelegateComputer "fqdn-of-hyper-v-host"

Then, run gpedit.msc and configure the following policy:

Now you can leverage the new Hyper-V Manager capability which enables you to use alternate credentials to connect to Hyper-V.

Domain controller deployment

Before deploying the VM, I have copied the Windows Server 2016 ISO in c:\temp of both nodes. Then I have run the following script from my laptop:

# Create the first DC VM
Enter-PSSession -ComputerName 10.10.0.5 -Credential pyhyv01\administrator

$VMName = "VMADS01"
# Create Gen 2 VM with dynamic memory, autostart action to 0s and auto stop action set. 2vCPU
New-VM -Generation 2 -Name $VMName -SwitchName SW-1G -NoVHD -MemoryStartupBytes 2048MB -Path C:\VirtualMachines
Set-VM -Name $VMName -ProcessorCount 2 -DynamicMemory -MemoryMinimumBytes 1024MB -MemoryMaximumBytes 4096MB -MemoryStartupBytes 2048MB -AutomaticStartAction Start -AutomaticStopAction ShutDown -AutomaticStartDelay 0 -AutomaticCriticalErrorAction None -CheckpointType Production

# Create and add a 60GB dynamic VHDX to the VM
New-VHD -Path C:\VirtualMachines\$VMName\W2016-STD-1.0.vhdx -SizeBytes 60GB -Dynamic
Add-VMHardDiskDrive -VMName $VMName -Path C:\VirtualMachines\$VMName\W2016-STD-1.0.vhdx

# Rename the network adapter
Get-VMNetworkAdapter -VMName $VMName | Rename-VMNetworkAdapter -NewName Management-0

# Add a DVD drive with W2016 ISO
Add-VMDvdDrive -VMName $VMName
Set-VMDvdDrive -VMName $VMName -Path C:\temp\14393.0.160715-1616.RS1_RELEASE_SERVER_EVAL_X64FRE_EN-US.ISO

# Set the DVD drive as first boot
$VD = Get-VMDvdDrive -VMName $VMName
Set-VMFirmware -VMName $VMName -FirstBootDevice $VD

# Add a data disk to the VM (10GB dynamic)
New-VHD -Path C:\VirtualMachines\$VMName\data.vhdx -SizeBytes 10GB -Dynamic
Add-VMHardDiskDrive -VMName $VMName -Path C:\VirtualMachines\$VMName\Data.vhdx

# Start the VM
Start-VM -Name $VMName
Exit

# Create the second DC VM with the same settings as the first one
Enter-PSSession -ComputerName 10.10.0.6 -Credential pyhyv02\administrator
$VMName = "VMADS02"

New-VM -Generation 2 -Name $VMName -SwitchName SW-1G -NoVHD -MemoryStartupBytes 2048MB -Path C:\VirtualMachines

Set-VM -Name $VMName -ProcessorCount 2 -DynamicMemory -MemoryMinimumBytes 1024MB -MemoryMaximumBytes 4096MB -MemoryStartupBytes 2048MB -AutomaticStartAction Start -AutomaticStopAction ShutDown -AutomaticStartDelay 0 -AutomaticCriticalErrorAction None -CheckpointType Production

New-VHD -Path C:\VirtualMachines\$VMName\W2016-STD-1.0.vhdx -SizeBytes 60GB -Dynamic
Add-VMHardDiskDrive -VMName $VMName -Path C:\VirtualMachines\$VMName\W2016-STD-1.0.vhdx
Get-VMNetworkAdapter -VMName $VMName | Rename-VMNetworkAdapter -NewName Management-0
Add-VMDvdDrive -VMName $VMName
Set-VMDvdDrive -VMName $VMName -Path C:\temp\14393.0.160715-1616.RS1_RELEASE_SERVER_EVAL_X64FRE_EN-US.ISO
$VD = Get-VMDvdDrive -VMName $VMName
Set-VMFirmware -VMName $VMName -FirstBootDevice $VD
New-VHD -Path C:\VirtualMachines\$VMName\data.vhdx -SizeBytes 10GB -Dynamic
Add-VMHardDiskDrive -VMName $VMName -Path C:\VirtualMachines\$VMName\Data.vhdx
Start-VM -Name $VMName
Exit

Deploy the first domain controller

Once the VMs are created, you can connect to their consoles from Hyper-V Manager to install the OS. A better way would be to use a sysprep'd image, but because this is a from-scratch infrastructure, I don't have a gold master. By using sconfig, you can install updates and enable Remote Desktop. Once the operating systems are deployed, you can connect to the VMs through PowerShell Direct.

Below you can find the configuration of the first domain controller:

# Remote connection to first node
Enter-PSSession -ComputerName 10.10.0.5 -Credential pyhyv01\administrator

# Establish a PowerShell direct session to VMADS01
Enter-PSSession -VMName VMADS01 -Credential VMADS01\administrator

# Rename network adapter
Rename-NetAdapter -Name Ethernet -NewName Management-0

# Set IP Addresses
New-NetIPAddress -InterfaceAlias "Management-0" -IPAddress 10.10.0.20 -PrefixLength 24 -Type Unicast | Out-Null

# Set the DNS (this IP is my DNS server for internet in my lab)
Set-DnsClientServerAddress -InterfaceAlias "Management-0" -ServerAddresses 10.10.0.229 | Out-Null

# Initialize and mount the data disk
initialize-disk -Number 1
New-Volume -DiskNumber 1 -FileSystem NTFS -FriendlyName Data -DriveLetter E

# Install required feature
install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools

# Deploy the forest
Import-Module ADDSDeployment

# "WinThreshold" is the Windows Server 2016 functional level (the value should soon be named Win2016)
Install-ADDSForest `
    -CreateDnsDelegation:$false `
    -DatabasePath "E:\NTDS" `
    -DomainMode "WinThreshold" `
    -DomainName "int.HomeCloud.net" `
    -DomainNetbiosName "INTHOMECLOUD" `
    -ForestMode "WinThreshold" `
    -InstallDns:$true `
    -LogPath "E:\NTDS" `
    -NoRebootOnCompletion:$false `
    -SysvolPath "E:\SYSVOL" `
    -Force:$true

Promote the second domain controller

Once the first domain controller is deployed and the forest is ready, you can promote the second domain controller:

Enter-PSSession -ComputerName 10.10.0.6 -Credential pyhyv02\administrator

# Establish a PowerShell direct session to VMADS02
Enter-PSSession -VMName VMADS02 -Credential VMADS02\administrator

# Rename network adapter
Rename-NetAdapter -Name Ethernet -NewName Management-0

# Set IP Addresses
New-NetIPAddress -InterfaceAlias "Management-0" -IPAddress 10.10.0.21 -PrefixLength 24 -Type Unicast | Out-Null

# Set the DNS to the first DC
Set-DnsClientServerAddress -InterfaceAlias "Management-0" -ServerAddresses 10.10.0.20 | Out-Null

# Initialize and mount the data disk
initialize-disk -Number 1
New-Volume -DiskNumber 1 -FileSystem NTFS -FriendlyName Data -DriveLetter E

# Install required feature
install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools

# Promote the additional domain controller in the existing domain
Import-Module ADDSDeployment
Install-ADDSDomainController `
    -NoGlobalCatalog:$false `
    -CreateDnsDelegation:$false `
    -Credential (Get-Credential) `
    -CriticalReplicationOnly:$false `
    -DatabasePath "E:\NTDS" `
    -DomainName "int.HomeCloud.net" `
    -InstallDns:$true `
    -LogPath "E:\NTDS" `
    -NoRebootOnCompletion:$false `
    -SiteName "Default-First-Site-Name" `
    -SysvolPath "E:\SYSVOL" `
    -Force:$true

Configure the directory

Once the second server has rebooted, we can configure the directory as below:

Enter-PSSession -computername VMADS01.int.homecloud.net
#Requires -version 4.0
$DN = "DC=int,DC=HomeCloud,DC=net"

# New Default OU
New-ADOrganizationalUnit -Name "Default" -Path $DN
$DefaultDN = "OU=Default,$DN"
New-ADOrganizationalUnit -Name "Computers" -Path $DefaultDN
New-ADOrganizationalUnit -Name "Users" -Path $DefaultDN

# Redir container to OU
cmd /c redircmp "OU=Computers,OU=Default,$DN"
cmd /c redirusr "OU=Users,OU=Default,$DN"

# Create Accounts tree
New-ADOrganizationalUnit -Name "Accounts" -Path $DN
$AccountOU = "OU=Accounts,$DN"
New-ADOrganizationalUnit -Name "Users" -Path $AccountOU
New-ADOrganizationalUnit -Name "Groups" -Path $AccountOU
New-ADOrganizationalUnit -Name "Services" -Path $AccountOU

# Create Servers tree
New-ADOrganizationalUnit -Name "Servers" -Path $DN
$ServersOU = "OU=Servers,$DN"
New-ADOrganizationalUnit -Name "Computers" -Path $ServersOU
New-ADOrganizationalUnit -Name "Groups" -Path $ServersOU
New-ADOrganizationalUnit -Name "CNO" -Path $ServersOU

# New User's groups
$GroupAcctOU = "OU=Groups,$AccountOU"
New-ADGroup -Name "GG-FabricAdmins" -Path $GroupAcctOU -GroupScope DomainLocal -Description "Fabric Server's administrators"
New-ADGroup -Name "GG-SQLAdmins" -Path $GroupAcctOU -GroupScope DomainLocal -Description "SQL Database's administrators"

# New Computer's groups
$GroupCMPOU = "OU=Groups,$ServersOU"
New-ADGroup -Name "GG-Hyperv" -Path $GroupCMPOU -GroupScope DomainLocal -Description "Hyper-V Servers"
New-ADGroup -Name "GG-FabricServers" -Path $GroupCMPOU -GroupScope DomainLocal -Description "Fabric servers"
New-ADGroup -Name "GG-SQLServers" -Path $GroupCMPOU -GroupScope DomainLocal -Description "SQL Servers"
Exit

Ok, our Active Directory is ready, we can now add Hyper-V nodes to the domain 🙂

Add nodes to domain

To add both nodes to the domain, I run the following cmdlets from my laptop:

Enter-PSSession -ComputerName 10.10.0.5 -Credential pyhyv01\administrator
$domain = "int.homecloud.net"
$password = "P@$$w0rd" | ConvertTo-SecureString -asPlainText -Force
$username = "$domain\administrator"
$credential = New-Object System.Management.Automation.PSCredential($username,$password)
Add-Computer -DomainName $domain -Credential $credential -OUPath "OU=Computers,OU=Servers,DC=int,DC=HomeCloud,DC=net" -Restart

Wait until pyhyv01 has rebooted, then run the same cmdlets on pyhyv02. Now you can log on to pyhyv01 and pyhyv02 with domain credentials. You can install the Domain Services RSAT on the laptop to browse the Active Directory.

2-node hyperconverged cluster deployment

Now that the Active Directory is available, we can deploy the cluster. First, I test the cluster to verify that all is ok:

Enter-PSSession -ComputerName pyhyv01.int.homecloud.net -credential inthomecloud\administrator
Test-Cluster pyhyv01, pyhyv02 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"

Check the report to see whether there are issues with the configuration. If the report is good, run the following cmdlets:

# Create the cluster
New-Cluster -Name Cluster-Hyv01 -Node pyhyv01,pyhyv02 -NoStorage -StaticAddress 10.10.0.10

Once the cluster is created, I set a Cloud Witness so that Azure has a vote for the quorum.

# Add a cloud Witness (require Microsoft Azure account)
Set-ClusterQuorum -CloudWitness -Cluster Cluster-Hyv01 -AccountName "<StorageAccount>" -AccessKey "<AccessKey>"

Then I configure the network name in the cluster:

#Configure network name
(Get-ClusterNetwork -Name "Cluster Network 1").Name="Storage-102"
(Get-ClusterNetwork -Name "Cluster Network 2").Name="Storage-101"
(Get-ClusterNetwork -Name "Cluster Network 3").Name="Cluster-100"
(Get-ClusterNetwork -Name "Cluster Network 4").Name="Management-0"

Next I configure Node Fairness to run each time a node is added to the cluster and every 30 minutes. When the CPU of a node is utilized at 70%, Node Fairness will balance the VMs across the other nodes.

# Configure Node Fairness
(Get-Cluster).AutoBalancerMode = 2
(Get-Cluster).AutoBalancerLevel = 2

Then I configure Fault Domain Awareness to provide fault tolerance based on racks. It is useless in this configuration, but it can become useful if you add nodes to the cluster later. I enable it now because it is recommended to do this configuration before enabling Storage Spaces Direct.

# Configure the Fault Domain Awareness
New-ClusterFaultDomain -Type Site -Name "Lyon"
New-ClusterFaultDomain -Type Rack -Name "Rack-22U-01"
New-ClusterFaultDomain -Type Rack -Name "Rack-22U-02"
New-ClusterFaultDomain -Type Chassis -Name "Chassis-Fabric-01"
New-ClusterFaultDomain -Type Chassis -Name "Chassis-Fabric-02"

Set-ClusterFaultDomain -Name Lyon -Location "France, Lyon 8e"
Set-ClusterFaultDomain -Name Rack-22U-01 -Parent Lyon
Set-ClusterFaultDomain -Name Rack-22U-02 -Parent Lyon
Set-ClusterFaultDomain -Name Chassis-Fabric-01 -Parent Rack-22U-01
Set-ClusterFaultDomain -Name Chassis-Fabric-02 -Parent Rack-22U-02
Set-ClusterFaultDomain -Name pyhyv01 -Parent Chassis-Fabric-01
Set-ClusterFaultDomain -Name pyhyv02 -Parent Chassis-Fabric-02

To finish with the cluster, we have to enable Storage Spaces Direct and create the volumes. But before that, I run the following script to clean up the disks:

icm (Get-Cluster -Name Cluster-Hyv01 | Get-ClusterNode) {
    Update-StorageProviderCache

    Get-StoragePool |? IsPrimordial -eq $false | Set-StoragePool -IsReadOnly:$false -ErrorAction SilentlyContinue

    Get-StoragePool |? IsPrimordial -eq $false | Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false -ErrorAction SilentlyContinue

    Get-PhysicalDisk | Reset-PhysicalDisk -ErrorAction SilentlyContinue

    Get-Disk |? Number -ne $null |? IsBoot -ne $true |? IsSystem -ne $true |? PartitionStyle -ne RAW |% {

        $_ | Set-Disk -isoffline:$false

        $_ | Set-Disk -isreadonly:$false

        $_ | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false

        $_ | Set-Disk -isreadonly:$true

        $_ | Set-Disk -isoffline:$true

    }

    Get-Disk |? Number -ne $null |? IsBoot -ne $true |? IsSystem -ne $true |? PartitionStyle -eq RAW | Group -NoElement -Property FriendlyName

} | Sort -Property PsComputerName,Count

Now we can enable Storage Spaces Direct and create volumes:

Enable-ClusterStorageSpacesDirect

New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName CSV-01 -FileSystem CSVFS_ReFS -Size 922GB

New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName CSV-02 -FileSystem CSVFS_ReFS -Size 922GB

To finish, I rename the volumes in C:\ClusterStorage to match their names in the cluster:

Rename-Item -Path C:\ClusterStorage\volume1\ -NewName CSV-01
Rename-Item -Path C:\ClusterStorage\volume2\ -NewName CSV-02

Final Hyper-V configuration

First, I set default VM and virtual disk folders:

Set-VMHost -ComputerName pyhyv01 -VirtualHardDiskPath 'C:\ClusterStorage\CSV-01'
Set-VMHost -ComputerName pyhyv01 -VirtualMachinePath 'C:\ClusterStorage\CSV-01'
Set-VMHost -ComputerName pyhyv02 -VirtualHardDiskPath 'C:\ClusterStorage\CSV-02'
Set-VMHost -ComputerName pyhyv02 -VirtualMachinePath 'C:\ClusterStorage\CSV-02'

Then I configure the Live-Migration protocol and the number of simultaneous migrations allowed:

Enable-VMMigration -ComputerName pyhyv01, pyhyv02
Set-VMHost -MaximumVirtualMachineMigrations 4 `
           -MaximumStorageMigrations 4 `
           -VirtualMachineMigrationPerformanceOption SMB `
           -ComputerName pyhyv01, pyhyv02

Next I add Kerberos delegation to configure Live-Migration in Kerberos mode:

Enter-PSSession -ComputerName VMADS01.int.homecloud.net
$HyvHost = "pyhyv01"
$Domain = "int.homecloud.net"

Get-ADComputer pyhyv02 | Set-ADObject -Add @{"msDS-AllowedToDelegateTo"="Microsoft Virtual System Migration Service/$HyvHost.$Domain", "cifs/$HyvHost.$Domain","Microsoft Virtual System Migration Service/$HyvHost", "cifs/$HyvHost"}

$HyvHost = "pyhyv02"

Get-ADComputer pyhyv01 | Set-ADObject -Add @{"msDS-AllowedToDelegateTo"="Microsoft Virtual System Migration Service/$HyvHost.$Domain", "cifs/$HyvHost.$Domain","Microsoft Virtual System Migration Service/$HyvHost", "cifs/$HyvHost"}
Exit

Then I set the Live-Migration authentication to Kerberos.

Set-VMHost -ComputerName pyhyv01, pyhyv02 `
           -VirtualMachineMigrationAuthenticationType Kerberos

Next, I configure the Live-Migration network priority:
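One way to express this in PowerShell is to restrict the hosts' live-migration networks to the two storage subnets; this is only a sketch of that approach, not necessarily the exact setting used here:

# Prefer the storage subnets for live migration (sketch)
Set-VMHost -ComputerName pyhyv01, pyhyv02 -UseAnyNetworkForMigration $false
Add-VMMigrationNetwork -ComputerName pyhyv01, pyhyv02 -Subnet 10.10.101.0/24
Add-VMMigrationNetwork -ComputerName pyhyv01, pyhyv02 -Subnet 10.10.102.0/24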

To finish I configure the cache size of the CSV to 512MB:

(Get-Cluster).BlockCacheSize = 512

Try a node failure

Now I’d like to shut down a node to verify if the cluster is always up. Let’s see what happening if I shutdown a node:

As you have seen in the above video, even if I stop a node, the workloads keep working. When the second node starts up again, the virtual disks will enter the Regenerating state, but you will still be able to access the data.

You can visualize the storage job with the below cmdlet:
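For reference, the standard cmdlet that lists the running rebuild/regeneration jobs is Get-StorageJob, run from one of the cluster nodes:

# Shows the progress of the repair/regeneration jobs
Get-StorageJob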

Conclusion

A 2-node configuration is really a great scenario for a small office or branch office. Without the cost of an expensive 10Gb switch and a SAN, you can have high availability with Storage Spaces Direct. This kind of cluster is not really hard to deploy, but I strongly recommend leveraging PowerShell for the implementation. Currently I'm also working on VMware vSAN, and I can confirm that Microsoft has a better solution for 2-node configurations. In the vSAN scenario, you need a third ESXi host in a third room. In the Microsoft environment, you only need a witness in another room, such as Microsoft Azure with the Cloud Witness.

Deploy and add Network Controller to Virtual Machine Manager

Network Controller is a new feature that will be available with Windows Server 2016. It enables you to centrally manage the virtual and physical network infrastructure in order to automate management, configuration, monitoring and troubleshooting. After a quick overview of Network Controller, I'll explain how to deploy it and how to connect it to Virtual Machine Manager.

Network Controller overview

The information and schemas of this section come from here.

Network Controller is a Windows Server 2016 server role which is highly available and scalable. This feature comes with two APIs:

  • The Southbound API enables the Network Controller to discover devices, detect service configurations and gather network information
  • The Northbound API enables you to configure, monitor, troubleshoot and deploy new devices (through a REST endpoint or a management application such as VMM)

Network Controller is able to manage the following network devices and features:

  • Hyper-V VMs and virtual switches
  • Physical network switches
  • Physical network routers
  • Firewall software
  • VPN gateways (including RRaS)
  • Load Balancers

For more information about Network Controller features you can read this topic (section network controller features)

Deploy Network Controller

Requirements

  • A server (physical or virtual) running Windows Server 2016 Technical Preview 3 Datacenter;
  • A valid certificate for this server (Server Authentication);

Create Security groups

First, two security groups are required (a sketch of how to create them with PowerShell follows the list):

  • The first gives permission to configure Network Controller (GG-NetControllerAdmin);
  • The second gives permission to configure and manage the network through Network Controller's REST API (GG-NetControllerRESTAdmin)
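
A hedged sketch of how to create them (requires the ActiveDirectory module; the groups land in the default Users container unless you pass -Path):

# Create the two security groups in Active Directory
New-ADGroup -Name "GG-NetControllerAdmin" -GroupScope Global
New-ADGroup -Name "GG-NetControllerRESTAdmin" -GroupScope Global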

Install Network controller feature

To install network controller features, run the following commands:

Install-WindowsFeature -Name NetworkController -IncludeManagementTools
Install-WindowsFeature -Name Windows-Fabric -IncludeManagementTools
Restart-Computer

Once the computer has rebooted, you can open Server Manager and check that Network Controller is present:
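
You can also check it from PowerShell:

# Confirm that the Network Controller role is installed
Get-WindowsFeature -Name NetworkController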


Configure Network Controller

To understand commands and parameters, I recommend you to read this topic.

Currently, in Technical Preview 3, the Network Controller role doesn't support a multi-node cluster. This is why, in the following configuration, only one node is added to the cluster. First, I create a node object by using the New-NetworkControllerNodeObject cmdlet.
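
A hedged example of what this can look like (the node name, FQDN, fault domain and interface name are assumptions from a lab setup, not the original values):

# Create the node object that describes the Network Controller node
# (all values below are assumptions; adapt them to your environment)
$Node = New-NetworkControllerNodeObject -Name "NC01" `
            -Server "nc01.lab.local" `
            -FaultDomain "fd:/rack1/host1" `
            -RestInterface "Ethernet"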

Next, I configure the Network Controller cluster by using the Install-NetworkControllerCluster cmdlet. I specify the node object, an authentication method and the security group that will be able to manage Network Controller.
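
A hedged example, reusing the node object created above and the management group from the previous section (the domain prefix is an assumption):

# Create a single-node Network Controller cluster with Kerberos authentication
Install-NetworkControllerCluster -Node $Node `
    -ClusterAuthentication Kerberos `
    -ManagementSecurityGroup "LAB\GG-NetControllerAdmin"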

Then I configure Network Controller by using the Install-NetworkController cmdlet. I also specify the node object, the authentication method for the clients and the security group that will be able to configure and manage the network from Network Controller (by using REST).
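
Again a hedged example (the group prefix is an assumption; depending on the build, you may also have to pass the server certificate with the -ServerCertificate parameter):

# Install the Network Controller application on the cluster node
Install-NetworkController -Node $Node `
    -ClientAuthentication Kerberos `
    -ClientSecurityGroup "LAB\GG-NetControllerRESTAdmin"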

To finish, I verify that Network Controller is well configured by running the following commands:
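
These are most likely the Get-* counterparts of the installation cmdlets:

# Check the cluster, the Network Controller application and the node state
Get-NetworkControllerCluster
Get-NetworkController
Get-NetworkControllerNode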

Now that Network Controller is set up, we can connect it to Virtual Machine Manager.

Add network controller to Virtual Machine Manager

To add Network Controller to VMM, you need VMM Technical Preview 3.

Open the VMM console and navigate to Fabric. Right click on Network Services and select Add Network Service. Then specify the network service name.

Next select Microsoft as Manufacturer and Microsoft Network Controller as Model.

Then select your RunAs account.

Next, specify ServerURL= followed by the REST endpoint address as the connection string. When Network Controller supports multi-node clusters, the Southbound API address parameter will become mandatory.
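
For illustration, the connection string looks something like this (the FQDN is a placeholder for your own REST endpoint):

ServerURL=https://<Network Controller REST endpoint FQDN>/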

Then select the certificate and check the box to specify that certificates have been reviewed.

Next, run Scan provider and verify that information can be gathered as below.

Next select host groups for which the network controller will be available.

When the network controller is added successfully, it should be listed in network services as below.

The post Deploy and add Network Controller to Virtual Machine Manager appeared first on Tech-Coffee.

Connect Azure Virtual Networks to On-Premise Networks //www.tech-coffee.net/connect-azure-virtual-networks-to-on-premise-networks/ //www.tech-coffee.net/connect-azure-virtual-networks-to-on-premise-networks/#comments Thu, 18 Jun 2015 09:25:18 +0000 //www.tech-coffee.net/?p=3596 Microsoft Azure provides a virtual networks solution to connect Virtual Machines for example. If you plan to implement a Hybrid Cloud for your IaaS solution, you should connect your On-Premise networks with Microsoft Azure Virtual Networks. In this topic we will see how to interconnect these networks. When creating a Virtual Networks in Microsoft Azure, ...

The post Connect Azure Virtual Networks to On-Premise Networks appeared first on Tech-Coffee.

Microsoft Azure provides a virtual network solution to connect virtual machines, for example. If you plan to implement a Hybrid Cloud for your IaaS solution, you should connect your On-Premise networks with Microsoft Azure Virtual Networks. In this topic, we will see how to interconnect these networks.

When creating a Virtual Networks in Microsoft Azure, you can deploy a gateway. This gateway can manage two kinds of VPN connections:

  • Point-To-Site: a classic client-to-server connection, like the one you use to connect to your company network when you travel;
  • Site-To-Site: a gateway-to-gateway connection that interconnects the networks of two sites.

So, to connect your On-Premise networks to a Microsoft Azure Virtual Network, we will implement a Site-To-Site connection, which is an IPsec VPN. Site-To-Site can also use ExpressRoute, which interconnects networks by using an MPLS VPN. For further information about ExpressRoute, you can read this topic.

Architecture Overview

To write this topic, I used my home lab connected to Microsoft Azure. My router is a Ubiquiti EdgeRouter Lite. For your lab, I recommend this hardware because it is a really awesome router and it's cheap (about 100€). Using the CLI, you can do almost anything a business Cisco router/firewall can do. However, network knowledge is required (this is not a "next, next, finish" router). Finally, there is a strong community on the Ubiquiti forum.

I have implemented several VLANs in my lab for different needs. For example, the 10.10.0.0/24 network is my LAN network and 10.10.1.0/24 is my DMZ network. I will create the same networks in Microsoft Azure by changing the second octet (from 10 to 11). Then I will create a gateway in Microsoft Azure and configure my router to establish an IPsec VPN connection.

Create Azure Virtual Networks

First of all, we have to create the Virtual Networks in Microsoft Azure. So connect to the portal and navigate to networks. Then click on Create a Virtual Network.

Then give a name to your Virtual Network and choose the location. I have called my Virtual Network PublicHomeCloud.

Next, I specify the DNS servers. The DNS servers set in the below screenshot are the Domain Controllers on my On-Premise networks. In this way, the virtual machines created in Azure can join my Active Directory. Then I select Configure a Site-To-Site VPN.

On the next screen, I give a name to identify the On-Premise networks (OnPremHomeCloud) and I specify the public IP of my On-Premise gateway. Then I declare my On-Premise networks.

Next, I specify the virtual network address space and declare my subnets. Don't forget to click on add gateway subnet. When you click on the tick, the Virtual Network is created.

Once the Virtual Network is created, you can create the gateway by clicking on Create Gateway. My router supports only static routing, so I have chosen this option. If your router supports dynamic routing, you can choose that one instead. The gateway creation can take a while; on my side, the gateway was created in about 30 minutes.

Once the gateway is created, you should have something as below. Copy the gateway IP address and open Manage Key to copy the pre-shared key.
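
If you prefer PowerShell to the portal, the classic (Service Management) Azure module exposes equivalent cmdlets. A hedged sketch, using the virtual network and local network names from this lab:

# Create a static-routing gateway for the PublicHomeCloud virtual network,
# then retrieve its public IP address and the pre-shared key
New-AzureVNetGateway -VNetName "PublicHomeCloud" -GatewayType StaticRouting
Get-AzureVNetGateway -VNetName "PublicHomeCloud"
Get-AzureVNetGatewayKey -VNetName "PublicHomeCloud" -LocalNetworkSiteName "OnPremHomeCloud"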

Router configuration

If you have a standard router such as a Cisco or a Juniper, you can click on Download VPN Device Script to configure your On-Premise gateway automatically (or almost). On my side, I had to configure the gateway manually, so I used the below commands to configure my Ubiquiti router (source: the Ubiquiti forum).

set vpn ipsec disable-uniqreqids
set vpn ipsec esp-group esp-edgemax
set vpn ipsec esp-group esp-edgemax lifetime 3600
set vpn ipsec esp-group esp-edgemax pfs disable
set vpn ipsec esp-group esp-edgemax mode tunnel
set vpn ipsec esp-group esp-edgemax proposal 1
set vpn ipsec esp-group esp-edgemax proposal 1 encryption aes256
set vpn ipsec esp-group esp-edgemax proposal 1 hash sha1
set vpn ipsec esp-group esp-edgemax compression disable
set vpn ipsec ike-group ike-edgemax
set vpn ipsec ike-group ike-edgemax lifetime 28800
set vpn ipsec ike-group ike-edgemax proposal 1
set vpn ipsec ike-group ike-edgemax proposal 1 dh-group 2
set vpn ipsec ike-group ike-edgemax proposal 1 encryption aes256
set vpn ipsec ike-group ike-edgemax proposal 1 hash sha1
set vpn ipsec ipsec-interfaces interface <WAN Interface>
set vpn ipsec logging log-modes all
set vpn ipsec nat-traversal enable
set vpn ipsec site-to-site peer <azure gateway ip address>
set vpn ipsec site-to-site peer <azure gateway ip address> local-ip any
set vpn ipsec site-to-site peer <azure gateway ip address> authentication mode pre-shared-secret
set vpn ipsec site-to-site peer <azure gateway ip address> authentication pre-shared-secret <azure shared key>
set vpn ipsec site-to-site peer <azure gateway ip address> connection-type initiate
set vpn ipsec site-to-site peer <azure gateway ip address> default-esp-group esp-edgemax
set vpn ipsec site-to-site peer <azure gateway ip address> ike-group ike-edgemax
set vpn ipsec site-to-site peer <azure gateway ip address> tunnel 1
set vpn ipsec site-to-site peer <azure gateway ip address> tunnel 1 esp-group esp-edgemax
set vpn ipsec site-to-site peer <azure gateway ip address> tunnel 1 local subnet <subnet for lan>
set vpn ipsec site-to-site peer <azure gateway ip address> tunnel 1 remote subnet <subnet for azure virtual address space>
set vpn ipsec site-to-site peer <azure gateway ip address> tunnel 1 allow-nat-networks disable
set vpn ipsec site-to-site peer <azure gateway ip address> tunnel 1 allow-public-networks disable
commit
save

Once the configuration was applied, I waited about 5 minutes and then ran the below commands:
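
Most likely these were the EdgeOS IPsec status commands (an assumption), run from operational mode:

show vpn ipsec sa
show vpn ipsec status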

So the VPN is up. Next, I go back to Microsoft Azure to verify the connection. If you see something like the screenshot below, the connection is established.

Below you can find screenshots that come from the new Microsoft Azure portal.

Test the connection

Next, I create a virtual machine to test the connection. I call it VMTEST01. When I created this VM, I chose the Internal subnet in PublicHomeCloud.

So I open an RDP connection to VMTEST01 and run a ping to a domain controller. Hey, it's working :)

Conclusion

In this topic we have seen how to connect Microsoft Azure Virtual Networks to On-Premise networks. This is great for Hybrid Cloud scenarios. Now, if you have deployed Windows Azure Pack in your datacenter and you use network virtualization, your tenants can also connect their virtual networks to Microsoft Azure by using a Site-To-Site connection!

The post Connect Azure Virtual Networks to On-Premise Networks appeared first on Tech-Coffee.
