Networking – Tech-Coffee //www.tech-coffee.net

Design the network for a Storage Spaces Direct cluster

In a Storage Spaces Direct cluster, the network is the most important part. If the network is not well designed or implemented, you can expect poor performance and high latency. All Software-Defined Storage solutions rely on a healthy network, whether it is Nutanix, VMware vSAN or Microsoft S2D. When I audit S2D configurations, most of the time the issue comes from the network. This is why I wrote this topic: how to design the network for a Storage Spaces Direct cluster.

Network requirements

The following statements come from the Microsoft documentation:

Minimum (for small scale 2-3 node)

  • 10 Gbps network interface
  • Direct-connect (switchless) is supported with 2-nodes

Recommended (for high performance, at scale, or deployments of 4+ nodes)

  • NICs that are remote-direct memory access (RDMA) capable, iWARP (recommended) or RoCE
  • Two or more NICs for redundancy and performance
  • 25 Gbps network interface or higher

As you can see, for a 4-node S2D cluster or more, Microsoft recommends a 25 Gbps network. I think it is a good recommendation, especially for an all-flash configuration or when NVMe devices are implemented. Because S2D uses SMB to establish communication between nodes, RDMA can be leveraged (SMB Direct).

RDMA: iWARP and RoCE

Do you remember DMA (Direct Memory Access)? This feature allows a device attached to a computer (like an SSD) to access memory without going through the CPU. Thanks to this feature, we achieve better performance and reduce CPU usage. RDMA (Remote Direct Memory Access) is the same thing but across the network: it allows a remote device to access the local memory directly. Thanks to RDMA, CPU usage and latency are reduced while throughput is increased. RDMA is not a mandatory feature for S2D, but it is recommended. Last year Microsoft stated that RDMA increases S2D performance by about 15% on average. So, I strongly recommend implementing it if you deploy an S2D cluster.

Two RDMA implementations are supported by Microsoft: iWARP (Internet Wide-Area RDMA Protocol) and RoCE (RDMA over Converged Ethernet). And I can tell you one thing about these implementations: this is war! Microsoft recommends iWARP while a lot of consultants prefer RoCE. In fact, Microsoft recommends iWARP because less configuration is required compared to RoCE: misconfigured RoCE deployments generated a high number of Microsoft support cases. But consultants prefer RoCE because Mellanox is behind this implementation. Mellanox provides solid switches and network adapters with great firmware and drivers. Each time a new Windows Server build is released, a supported Mellanox driver / firmware is also released.

If you want more information about RoCE and iWARP, I suggest you this series of topics from Didier Van Hoye.

Switch Embedded Teaming

Before choosing the right switches, cables and network adapters, it is important to understand the software side. In Windows Server 2012 R2 and prior, you had to create a teaming. When the teaming was implemented, a tNIC was created. The tNIC is a sort of virtual NIC connected to the teaming. Then you were able to create the virtual switch connected to the tNIC. After that, the virtual NICs for management, storage, VMs and so on were added.

In addition to the complexity, this solution prevents the use of RDMA on virtual network adapters (vNICs). This is why Microsoft has improved this part in Windows Server 2016. Now you can implement Switch Embedded Teaming (SET):

This solution reduces the network complexity and vNICs can support RDMA. However, there are some limitations with SET:

  • Each physical network adapter (pNIC) must be the same (same firmware, same drivers, same model)
  • Maximum of 8 pNIC in a SET
  • The following load balancing modes are supported: Hyper-V Port (specific cases) and Dynamic. This limitation is a good thing because Dynamic is the appropriate choice for most cases.

For more information about load balancing modes, Switch Embedded Teaming and its limitations, you can read this documentation. Switch Embedded Teaming brings another great advantage: you can create an affinity between a vNIC and a pNIC. Let's take a SET where two pNICs are members of the team. On this vSwitch, you create two vNICs for storage purposes. You can create an affinity between one vNIC and one pNIC, and another between the second vNIC and the second pNIC. It ensures that both pNICs are used.
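
To illustrate, here is a minimal PowerShell sketch of a SET with vNIC/pNIC affinity. The adapter and vNIC names (NIC1, NIC2, Storage1, Storage2) are assumptions for the example, not values taken from a real deployment:

# Create a Switch Embedded Teaming vSwitch with two physical members
New-VMSwitch -Name "SW-S2D" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Create two storage vNICs on top of the SET
Add-VMNetworkAdapter -ManagementOS -Name "Storage1" -SwitchName "SW-S2D"
Add-VMNetworkAdapter -ManagementOS -Name "Storage2" -SwitchName "SW-S2D"

# Map each storage vNIC to one pNIC so that both physical adapters are used
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "Storage1" -PhysicalNetAdapterName "NIC1"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "Storage2" -PhysicalNetAdapterName "NIC2"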

The designs presented below are based on Switch Embedded Teaming.

Network design: VM and storage traffic separated

Some customers want to separate the VM traffic from the storage traffic. The first reason is that they want to connect VMs to a 1Gbps network; because the storage network requires at least 10Gbps, you need to separate them. The second reason is that they want to dedicate devices, such as switches, to storage. The following schema introduces this kind of design:

If you have 1Gbps network ports for VMs, you can connect them to 1Gbps switches, while the network adapters for storage are connected to 10Gbps switches.

Whatever you choose, the VMs will be connected to the Switch Embedded Teaming (SET) and you have to create a vNIC for management on top of it. So, when you connect to the nodes through RDP, you go through the SET. The physical NICs (pNICs) that are dedicated to storage (those on the right of the schema) are not in a teaming. Instead, we leverage SMB MultiChannel, which allows multiple network connections to be used simultaneously. So, both network adapters will be used to establish SMB sessions.

Thanks to Simplified SMB MultiChannel, both pNICs can belong to the same network subnet and VLAN. Live-Migration is configured to use this network subnet and to leverage SMB.

Network Design: Converged topology

The following picture introduces my favorite design: a fully converged network. For this kind of topology, I recommend at least a 25Gbps network, especially with NVMe or all-flash. In this case, only one SET is created with two or more pNICs. Then we create the following vNICs:

  • 1x vNIC for host management (RDP, AD and so on)
  • 2x vNIC for Storage (SMB, S2D and Live-Migration)

The vNICs for storage can belong to the same network subnet and VLAN thanks to Simplified SMB MultiChannel. Live-Migration is configured to use this network and the SMB protocol. RDMA is enabled on these vNICs, as well as on the pNICs if they support it. Then an affinity is created between the vNICs and the pNICs.
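
As a hedged sketch of those last steps, assuming the two storage vNICs are named Storage1 and Storage2 as in the earlier example and that the storage subnet is 10.10.20.0/24 (an example value):

# Enable RDMA on the storage vNICs (the underlying pNICs must also be RDMA capable)
Enable-NetAdapterRdma -Name "vEthernet (Storage1)","vEthernet (Storage2)"

# Configure Live-Migration to use SMB, so it benefits from SMB Direct and SMB MultiChannel
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

# Restrict Live-Migration to the storage subnet
Add-VMMigrationNetwork "10.10.20.0/24"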

I love this design because it is really simple. You have one network adapter for the BMC (iDRAC, iLO, etc.) and only two network adapters for S2D and the VMs. So, the physical installation in the datacenter and the software configuration are easy.

Network Design: 2-node S2D cluster

Because we are able to direct-attach both nodes in a 2-node configuration, you don't need switches for storage. However, virtual machines and the host management vNIC require connectivity, so switches are still required for these usages. But they can be 1Gbps switches, which drastically reduces the solution cost.

Configure Dell S4048 switches for Storage Spaces Direct

When we deploy Storage Spaces Direct (S2D), either hyperconverged or disaggregated, we have to configure the networking part. We usually work with Dell hardware to deploy Storage Spaces Direct, and one of the switches supported by the Dell reference architectures is the Dell S4048 (Force10). In this topic, we will see how to configure this switch from scratch.

This topic has been co-written with Frederic Stefani, solution architect at Dell.

Stack or not

Customers usually know the stack feature, which is common to all network vendors such as Cisco, Dell, HP and so on. This feature enables you to combine several identical switches into a single configuration managed by a master switch. Because all switches share the same configuration, the network administrators see all these switches as a single one. So, the administrators connect to the master switch and then edit the configuration on all switches that are members of the stack.

Even if stacking looks attractive on paper, there is a major issue, especially with a storage solution such as S2D. With an S4048 stack, when you run an update, all switches reload at the same time. Because S2D relies heavily on the network, your storage solution will crash. This is why the Dell reference architecture for S2D recommends deploying a VLT (Virtual Link Trunking).

With stacking, you have a single control plane (you configure all switches from a single switch) and a single data plane in a loop-free topology. In a VLT configuration, you also have a single data plane in a loop-free topology but several control planes, which allows you to reboot the switches one by one.

For this reason, the VLT (or MLAG) technology is the preferred way for Storage Spaces Direct.

S4048 overview

An S4048 switch has 48x 10Gbps SFP+ ports, 6x 40Gbps QSFP+ ports, a management port (1Gbps) and a serial port. The management and serial ports are located on the back. In the diagram below, there are three kinds of connections:

  • Connections for S2D (in this example from port 1 to 16, but you can connect up to port 48)
  • VLTi connection
  • Core connection: the uplink to connect to core switches

In the architecture schema below, you can find both S4048 switches interconnected through the VLTi ports and several S2D nodes (hyperconverged or disaggregated, it doesn't matter) connected to ports 1 to 16. In this topic, we will configure the switches according to this design.

Switches initial configuration

When you start the switch for the first time, you have to configure the initial settings such as the switch name, IP address and so on. Plug a serial cable from the switch to your computer and connect with a terminal emulator using the following serial settings:

  • Baud Rate: 115200
  • No Parity
  • 8 data bits
  • 1 stop bit
  • No flow control

Then you can run the following configuration:

Enable
Configure

# Configure the hostname
hostname SwitchName-01

# Set the IP address to the management ports, to connect to switch through IP
interface ManagementEthernet 1/1
ip address 192.168.1.1/24
no shutdown
exit

# Set the default gateway
ip route 0.0.0.0/0 192.168.1.254

# Enable SSH
ip ssh server enable

# Create a user and a password to connect to the switch
username admin password 7 MyPassword privilege 15

# Disable Telnet through IP
no ip telnet server enable
Exit

# We leave enabled Rapid Spanning Tree Protocol.
protocol spanning-tree rstp
no disable
Exit

Exit

# Write the configuration in memory
Copy running-configuration startup-configuration

After this configuration is applied, you can connect to the switch through SSH. Apply the same configuration to the other switch (except the name and IP address).

Configure switches for RDMA (RoCEv2)

N.B.: For this part, we assume that you know how RoCE v2 works, especially DCB, PFC and ETS.

Because we implement the switches for S2D, we have to configure them for RDMA (here the RDMA over Converged Ethernet v2 implementation). Don't forget that with RoCE v2, you have to configure DCB and PFC end to end (on the server side and on the switch side). In this configuration, we assume that you use priority ID 3 for SMB traffic.
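
For reference, the server-side counterpart could look like the following PowerShell sketch (assuming RoCE-capable adapters named NIC1 and NIC2 and priority 3 for SMB; adapt the names and values to your environment):

# Do not accept DCB settings pushed by the switch (not willing)
Set-NetQosDcbxSetting -Willing $false

# Tag SMB Direct traffic (port 445) with priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable PFC only for priority 3 and disable it for all other priorities
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve 50% of the bandwidth for SMB with ETS, to match the dcb-map below
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply DCB/QoS on the physical adapters
Enable-NetAdapterQos -Name "NIC1","NIC2"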

# By default the queue value is 0 for all dot1p (QoS) traffic. We enable this command globally to change this behavior.
service-class dynamic dot1p

# Enable Data Center Bridging (DCB). This allows lossless, latency-sensitive traffic to be handled in a Priority Flow Control (PFC) queue.
dcb enable

# Provide a name to the DCB buffer threshold
dcb-buffer-threshold RDMA
priority 3 buffer-size 100 pause-threshold 50 resume-offset 35
exit

# Create a dcb-map to configure the PFC and ETS (Enhanced Transmission Selection) rules
dcb-map RDMA

# For priority group 0, we allocate 50% of the bandwidth and PFC is disabled
priority-group 0 bandwidth 50 pfc off

# For priority group 3, we allocate 50% of the bandwidth and PFC is enabled
priority-group 3 bandwidth 50 pfc on

# Priority group 3 contains traffic with dot1p priority 3.
priority-pgid 0 0 0 3 0 0 0 0

Exit

Exit
Copy running-configuration startup-configuration

Repeat this configuration on the other switch.

VLT domain implementation

First of all, we have to create a port-channel with two QSFP+ ports (ports 1/49 and 1/50):

Enable
Configure

# Configure the port-channel 100 (make sure it is not used)
interface Port-channel 100

# Provide a description
description VLTi

# Do not apply an IP address to this port channel
no ip address

#Set the maximum MTU to 9216
mtu 9216

# Add port 1/49 and 1/50
channel-member fortyGigE 1/49,1/50

# Enable the port channel
no shutdown

Exit

Exit
Copy Running-Config Startup-Config

Repeat this configuration on the second switch. Then we have to create the VLT domain and use this port-channel. Below is the configuration of the first switch:

# Configure the VLT domain 1
vlt domain 1

# Specify the port-channel number which will be used by this VLT domain
peer-link port-channel 100

# Specify the IP address of the other switch
back-up destination 192.168.1.2

# Specify the priority of each switch
primary-priority 1

# Give an unused MAC address to the VLT
system-mac mac-address 00:01:02:01:02:05

# Give an ID for each switch
unit-id 0

# Wait 10s after a switch reload or a peer-link restoration before the saved configuration is applied
delay-restore 10

Exit

Exit
Copy Running-Configuration Startup-Configuration

On the second switch, the configuration looks like this:

vlt domain 1
peer-link port-channel 100
back-up destination 192.168.1.1
primary-priority 2
system-mac mac-address 00:01:02:01:02:05
unit-id 1
delay-restore 10

Exit

Exit
Copy Running-Configuration Startup-Configuration

Now the VLT is working. You don't have to specify VLAN IDs on this link: the VLT handles tagged and untagged traffic by itself.

S2D port configuration

To finish the switch configuration, we have to configure ports and VLAN for S2D nodes:

Enable
Configure
Interface range Ten 1/1-1/16

# No IP address assigned to these ports
no ip address

# Enable the maximum MTU to 9216
mtu 9216

# Enable the management of untagged and tagged traffic
portmode hybrid

# Enable Layer 2 switchport mode; the port is added to the default VLAN to send untagged traffic.
Switchport

# Configure the port to Edge-Port
spanning-tree 0 portfast

# Enable BPDU guard on these ports
spanning-tree rstp edge-port bpduguard

# Apply the DCB buffer-threshold policy to these ports
dcb-policy buffer-threshold RDMA

# Apply the DCB map to these ports
dcb-map RDMA

# Enable port
no shutdown

Exit

Exit
Copy Running-Configuration Startup-Configuration

You can copy this configuration to the other switch. Now only the VLANs are missing. To create VLANs and assign them to ports, you can run the following configuration:

Interface VLAN 10
Description "Management"
Name "VLAN-10"
Untagged TenGigabitEthernet 1/1-1/16
Exit

Interface VLAN 20
Description "SMB"
Name "VLAN-20"
tagged TenGigabitEthernet 1/1-1/16
Exit

[etc.]
Exit
Copy Running-Config Startup-Config

Once you have finished, copy this configuration to the second switch.

Update Mellanox network adapter firmware

Like any other network adapter, Mellanox network adapters should be updated to the latest firmware to solve issues. When you download a Mellanox firmware release note, you can see how many reported issues have been solved (usually, a lot :)). Mellanox provides tools to update and manage the firmware from Linux, FreeBSD, VMware ESXi, Windows and Windows PE. Thanks to this set of tools, you can update the Mellanox network adapter firmware from a running operating system. In this topic, we will see how to manage the firmware from Windows Server 2016 Datacenter Core and from VMware ESXi 6.5u1.

This topic shows you how to update a Mellanox network adapter with firmware provided by Mellanox. If you have a vendor-branded Mellanox network adapter (Dell, IBM, etc.), please check the related documentation from that vendor.

Requirements

First, you need to identify which Mellanox network adapter is installed on your system. You can retrieve this information from a sticker on the network adapter or from the invoice. In my lab, I have two Mellanox models:

  • ConnectX3-Pro (MCX312B-XCCT)
  • ConnectX3 (MCX312A-XCBT)

N.B.: If you can’t get the model of the network adapter, run mlxfwmanager once the Mellanox tools are deployed. Then retrieve the PSID information and search for it on Google.

Once you have identified the network adapter model, you can download the firmware from Mellanox. I usually type "Firmware MCX312B-XCCT" in Google, for example. Then I can get the related webpage from Mellanox, and download and unzip the firmware.

You also need the Mellanox toolset called Mellanox Firmware Tools (MST). You can get the documentation from this location. If you need to update the firmware from VMware ESXi 6.5u1, download the two VIB files as below:

If you plan to update firmware from Windows Server, download the following executable:

Update firmware from VMware ESXi

First, we need to install MST on ESXi. A reboot is required, so I recommend placing your server in maintenance mode. From the vCenter or ESXi web interface, upload the firmware and the VIB files to a datastore.

Then open an SSH session and navigate to /vmfs/volumes/<your datastore>. Copy the path with the datastore ID as below:

Then install both VIB files with the command esxcli software vib install -v <path to vib file>:

Then reboot the server. Once it has rebooted, you can navigate to /opt/mellanox/bin. The command /opt/mellanox/bin/mst status gives you the installed Mellanox devices.

Then you can flash your device by running /opt/mellanox/bin/flint -d <device> -i <path to firmware file> burn

After the firmware is updated on the Mellanox network adapter, you need to reboot your server again. If you open FlexBoot (Ctrl+B at startup), you can see the new version. You can also use /opt/mellanox/bin/mlxfwmanager to get this information.

Update firmware from Windows Server

I recommend placing your node in the paused state because a reboot is required. On the MST web page, download the Windows executable as indicated in the following screenshot:

Copy the executable to the server and run the installation. Once MST is installed, navigate to C:\Program Files\Mellanox\WinMFT. You can run mst status to get information about the installed Mellanox network adapters:

Then you can run flint -d <Device> -i <path to firmware> burn

As you can see, after the firmware is updated, the new version is not yet active (you can use mlxfwmanager.exe to get this information). You need to reboot the server in order to use the new firmware version.
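
Since the original screenshots are not reproduced here, the whole Windows sequence looks roughly like this. The device name (mt4103_pci_cr0) and the firmware path are examples only; use the values returned by mst status and your own downloaded image:

cd "C:\Program Files\Mellanox\WinMFT"

# List the installed Mellanox devices
.\mst status

# Burn the new firmware image on the adapter
.\flint -d mt4103_pci_cr0 -i C:\Firmware\fw-ConnectX3Pro.bin burn

# Check the current and next (pending) firmware versions
.\mlxfwmanager

# The new firmware only becomes active after a reboot
Restart-Computer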

After a reboot, you can see that the new version is the only one.

Force a PSID change to remove a vendor custom firmware

There is a chance to brick your network adapter! I’m not responsible in case of hardware degradation.

Recently, I bought on eBay a ConnectX3 from Mellanox with an IBM PSID. I wanted to flash the firmware with a Mellanox image. To make this change, I ran /opt/mellanox/bin/flint -d <device> -i <path to firmware> -allow_psid_change burn. Then flint asked me if I really wanted to change the PSID, because it is not recommended.

After a reboot, I checked with /opt/mellanox/bin/mlxfwmanager that the PSID had been changed.

Switch Embedded Teaming

Switch Embedded Teaming (SET) is a new feature in the Software-Defined Networking stack that will be included in Windows Server 2016. It enables you to group several physical network adapters (from one to eight) into a single virtual switch in a Hyper-V environment. In fact, SET is an alternative to standard NIC teaming.

The main advantage of SET compared to standard NIC teaming is RDMA convergence and RDMA support on host vNICs. In this way, storage traffic using SMB Direct can be converged with other traffic. Below you can find a Microsoft diagram which compares the network convergence in Windows Server 2012 R2 and Windows Server 2016:

As you can see above, in Windows Server 2016 you need fewer physical NICs for the same job! Obviously, at least 10Gbps NICs and QoS management are recommended in production.

Network technologies supported and unsupported

Switch Embedded Teaming is compatible with the following network technologies:

  • Datacenter bridging (DCB)
  • Hyper-V Network Virtualization – NV-GRE and VxLAN are both supported in Windows Server 2016 Technical Preview.
  • Receive-side Checksum offloads (IPv4, IPv6, TCP) – These are supported if any of the SET team members support them.
  • Remote Direct Memory Access (RDMA)
  • SDN Quality of Service (QoS)
  • Transmit-side Checksum offloads (IPv4, IPv6, TCP) – These are supported if all of the SET team members support them.
  • Virtual Machine Queues (VMQ)
  • Virtual Receive Side Scaling (vRSS)

However, SET is not compatible with the following network technologies:

  • 802.1X authentication
  • IPsec Task Offload (IPsecTO)
  • QoS in host or native OSs
  • Receive side coalescing (RSC)
  • Receive side scaling (RSS)
  • Single root I/O virtualization (SR-IOV)
  • TCP Chimney Offload
  • Virtual Machine QoS (VM-QoS)

Physical NICs requirements

First, the physical NICs that will be in the SET must pass the Windows Hardware Qualification and Logo tests. Secondly, each physical NIC must be identical (same manufacturer, model, firmware and driver). Finally, you can group from one to eight physical NICs in the same team.

SET settings

First of all, when you use SET, all physical network adapters are in active mode. You can’t implement a physical NIC in standby mode in the team.

Next, the only teaming mode supported is Switch Independent. So you can't implement LACP or other switch-dependent modes with SET. Sorry, network administrators.

Finally, you can configure the load balancing mode as with standard NIC teaming, but you have only two options: Hyper-V Port or Dynamic.

About Live Migration

In Windows Server 2016 Technical Preview 3, SET does not support Live-Migration (but it should be supported in the final release, because official Microsoft diagrams say the opposite).

SET, VM Queues and RSS

Because the only mode supported by SET is Switch Independent, the total number of VM queues in the team is the sum of the VM queues of each NIC in the team. This is also called Sum-of-Queues mode.

As with standard NIC teaming in Windows Server 2012 R2, it is necessary to set the RSS base processor number to at least 2 and to define the maximum number of processors (PowerShell cmdlet: Set-NetAdapterVmq). By setting the base processor number to at least 2, logical cores 0 and 1 are not used for networking purposes, leaving these cores for system processing.
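
For example, something like the following (the adapter name and processor counts are assumptions to adapt to your hardware):

# Exclude logical cores 0 and 1 from VMQ processing and cap the number of processors used
Set-NetAdapterVmq -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 8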

Play with Switch Embedded Teaming

In this TechNet topic, Microsoft recommends using Virtual Machine Manager to manage SET. However, SET can also be managed from PowerShell, and you can find the commands in that topic.

To test SET management, I add two unused NICs to the team. I run New-VMSwitch with the EnableEmbeddedTeaming option, then Get-NetAdapterRdma to verify that I have RDMA-capable virtual NICs, then I enable RDMA on VMSwitchSET. To finish, I remove the VMSwitch. The full sequence is sketched below.
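
This is a minimal sketch, assuming two unused adapters named NIC3 and NIC4 (the names are examples):

# Create the SET vSwitch with the EnableEmbeddedTeaming option
New-VMSwitch -Name "VMSwitchSET" -NetAdapterName "NIC3","NIC4" -EnableEmbeddedTeaming $true

# Check that the host virtual NICs are RDMA capable
Get-NetAdapterRdma

# Enable RDMA on the vNIC created on top of VMSwitchSET
Enable-NetAdapterRdma -Name "vEthernet (VMSwitchSET)"

# Remove the vSwitch at the end of the test
Remove-VMSwitch -Name "VMSwitchSET" -Force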

Deploy and add Network Controller to Virtual Machine Manager

Network Controller is a new feature which will be available with Windows Server 2016. It enables you to centrally manage the virtual and physical network infrastructure in order to automate management, configuration, monitoring and troubleshooting. After a quick overview of Network Controller, I'll explain how to deploy it and how to connect it to Virtual Machine Manager.

Network Controller overview

The information and schemas of this section come from here.

Network Controller is a Windows Server 2016 server role which is highly available and scalable. This feature comes with two APIs:

  • The Southbound API enables it to discover devices, detect service configurations and gather network information
  • The Northbound API enables you to configure, monitor, troubleshoot and deploy new devices (through a REST endpoint or a management application such as VMM)

Network Controller is able to manage the following network devices and features:

  • Hyper-V VMs and virtual switches
  • Physical network switches
  • Physical network routers
  • Firewall software
  • VPN gateways (including RRAS)
  • Load Balancers

For more information about Network Controller features, you can read this topic (section Network Controller features).

Deploy Network Controller

Requirements

  • A server (physical or virtual) running Windows Server 2016 Technical Preview 3 Datacenter;
  • A valid certificate for this server (Server Authentication);

Create Security groups

First, two security groups are required:

  • The first gives permissions to configure Network Controller (GG-NetControllerAdmin);
  • The second gives permissions to configure and manage the network through the Network Controller REST API (GG-NetControllerRESTAdmin)

Install Network controller feature

To install the Network Controller feature, run the following commands:

Install-WindowsFeature -Name NetworkController -IncludeManagementTools
Install-WindowsFeature -Name Windows-Fabric -IncludeManagementTools
Restart-Computer

Once the computer has rebooted, you can open the Server Manager and check if Network Controller is present:


Configure Network Controller

To understand the commands and parameters, I recommend reading this topic.

Currently, in Technical Preview 3, the Network Controller role doesn't support multi-node clusters. This is why, in the following configuration, only one node is added to the cluster. First, I create a node object by using the New-NetworkControllerNodeObject cmdlet.

Next I configure the network controller cluster by using the Install-NetworkControllerCluster cmdlet. I specify the node object, an authentication method and the security group that will be able to manage the network controller.

Then I configure Network Controller by using the Install-NetworkController cmdlet. I specify again the node object, the authentication method for the clients and the security group that will be able to configure and manage the network from Network Controller (by using REST).

To finish, I verify that my Network Controller is well configured by running the commands sketched below.
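
Since the original screenshots are not included here, a rough PowerShell sketch of the whole deployment sequence; the node name, FQDN, fault domain, interface name and domain/security group names are assumptions for the example:

# Create the node object for this single-node deployment
$node = New-NetworkControllerNodeObject -Name "Node01" -Server "nc01.lab.local" -FaultDomain "fd:/rack1/host1" -RestInterface "Ethernet"

# Create the Network Controller cluster with Kerberos authentication and the management security group
Install-NetworkControllerCluster -Node $node -ClusterAuthentication Kerberos -ManagementSecurityGroup "LAB\GG-NetControllerAdmin"

# Deploy the Network Controller application and define who can use the REST API
Install-NetworkController -Node $node -ClientAuthentication Kerberos -ClientSecurityGroup "LAB\GG-NetControllerRESTAdmin"

# Verify the configuration
Get-NetworkControllerCluster
Get-NetworkController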

Now that Network Controller is set up, we can connect it to Virtual Machine Manager.

Add network controller to Virtual Machine Manager

To add Network Controller to VMM, you need VMM Technical Preview 3.

Open the VMM console and navigate to Fabric. Right click on Network Services and select Add Network Service. Then specify the network service name.

Next select Microsoft as Manufacturer and Microsoft Network Controller as Model.

Then select your RunAs account.

Next, specify ServerURL= followed by the REST endpoint address. When Network Controller supports multi-node clusters, the Southbound API address parameter will be mandatory.

Then select the certificate and check the box to specify that certificates have been reviewed.

Next, run Scan provider and verify that information can be gathered as below.

Next select host groups for which the network controller will be available.

When the network controller is added successfully, it should be listed in network services as below.

Hyper-V converged networking and storage design

Since Windows Server 2012, converged networking has been supported by Microsoft. This concept enables you to share an Ethernet adapter among several network traffics. Before that, it was recommended to dedicate a network adapter to each network traffic (backup, cluster and so on).

So, thanks to converged networking, we can use a single Ethernet adapter (or teaming) to carry several network traffics. However, if the design is not good, the link can quickly reach its bandwidth limit. So when designing converged networking, keep in mind the QoS (Quality of Service) settings. These settings ensure that each traffic gets the appropriate bandwidth.

When you implement converged networking, you can play with a setting called the QoS weight. You can assign a value from 1 to 100: the higher the value, the higher the priority of the traffic associated with it.
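
As an example, here is a hedged PowerShell sketch of a converged vSwitch using weight-based QoS; the team, switch and vNIC names, as well as the VLAN ID, are assumptions for the example:

# Create a team of two physical NICs (Windows Server 2012 R2 LBFO teaming)
New-NetLbfoTeam -Name "Team-Hosts" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Create the vSwitch on top of the team with the Weight minimum bandwidth mode
New-VMSwitch -Name "SW-Hosts" -NetAdapterName "Team-Hosts" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Create the host vNICs and assign a QoS weight to each of them
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "SW-Hosts"
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10

Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "SW-Hosts"
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40

# Tag the Live-Migration vNIC with its VLAN (VLAN 100 in the table below)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 100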

When you design networks for Hyper-V/VMM, you usually have four networks for the hosts: host fabric management, Live Migration, cluster and backup. I have detailed some examples in the next part, Common network requirements. The other network traffics are related to virtual machines; usually you have at least one network for the fabric virtual machines.

Common network requirements

Host Management networks

In the table below, you can find an example of networks for the Hyper-V hosts. I have also specified the VLAN and the QoS weight. The Host Fabric Management network has its VLAN set to 0 because packets are untagged. In this way, even if my Hyper-V host has no VLAN configuration, it can answer DHCP requests. This is useful to deploy hosts with Bare-Metal deployment from Virtual Machine Manager.

Network Name             VLAN   Subnet           Description                           QoS weight
Host Fabric Management   0      10.10.0.0/24     LAN for host management (AD, RDP …)   10
Live Migration           100    10.10.100.0/24   Live Migration network                40
Host Cluster             101    10.10.101.0/24   Cluster heartbeat network             10
Host Backup              102    10.10.102.0/24   Backup network                        40

In the above configuration, the Live-Migration and Backup traffics have a higher priority than the Host Fabric Management and Cluster traffics, because Live-Migration and Backup require more bandwidth.

VM Workloads

In the table below, you can find an example of VM networks. In this example, I have isolated the networks for the fabric VMs, the DMZ VMs and their cluster and backup traffics. In this way, I can apply a QoS setting to each type of traffic. Here, the backup networks have a higher weight than the other networks because backup traffic uses more bandwidth.

Network Name        VLAN   Subnet          Description                    QoS weight
VM Fabric           1      10.10.1.0/24    Network for the fabric VM      10
VM DMZ              2      10.10.2.0/24    Network for VM in DMZ          10
VM Fabric Cluster   50     10.10.50.0/24   Cluster network for fabric VM  10
VM DMZ Cluster      51     10.10.51.0/24   Cluster network for DMZ VM     10
VM Fabric Backup    60     10.10.60.0/24   Backup network for fabric VM   30
VM DMZ Backup       61     10.10.61.0/24   Backup network for DMZ VM      30

Hyper-V converged networking and storage designs

Now that you have your network requirements on paper, we can work on the storage part. First you have to choose the storage solution: FC SAN, iSCSI SAN or Software-Defined Storage?

To choose the storage solution, you must look at your needs and your history. If you already have an FC SAN with good performance, keep this solution to save money. If you start a new infrastructure and you want to store only VMs on the storage solution, maybe you can implement Software-Defined Storage.

In the next sections, I have drawn a schema for each storage solution usually implemented. They certainly do not suit all needs, but they help to understand the principle.

Using Fibre Channel storage

Fibre Channel (the protocol, not the fiber-optic cables) is used to connect a server to the storage solution (SAN: Storage Area Network) over a high-speed network. Usually, fiber-optic cables are used to interconnect the SAN with the server. The server adapters to which the fiber-optic cables are connected are called HBAs (Host Bus Adapters).

In the schema below, the parent partition traffics are represented by green links while the VM traffics are orange.

On the Ethernet side, I implement two dynamic teamings with two physical NICs each:

  • Host Management traffics (Live-Migration, Cluster, Host Backup, host management);
  • VM Workloads (VM Fabric, VM DMZ, VM Backup and so on).

On the storage side, I also split the parent partition traffics and the VM traffics:

  • The parent partition traffics are mainly related to the Cluster Shared Volumes that store the virtual machines;
  • The VM traffics can be LUNs mounted in VMs for guest cluster usage (witness disk), database servers and so on.

To mount LUNs directly in VMs, you need HBAs with NPIV enabled and you also need to create a virtual SAN (vSAN) on the Hyper-V host. Then you have to deploy MPIO inside the VMs. For more information, you can read this TechNet topic.

To support multipath on the parent partition, it is also necessary to enable MPIO on the Hyper-V host.

For a production environment, you need four 10Gbps Ethernet NICs and four HBAs. This is the most expensive solution.

Using iSCSI storage

iSCSI (Internet Small Computer System Interface) is a protocol that carries SCSI commands over IP networks from the server to the SAN. This solution is less effective than Fibre Channel, but it is also less expensive.

The network design is the same as in the previous solution. Regarding storage, I isolate the parent partition traffics from the VM workloads. MPIO is implemented for the CSV traffic to support multipath. When VMs need direct access to storage, I deploy two vNICs bound to the physical NICs dedicated to VM volumes, and then I deploy MPIO inside the VMs. Finally, I prefer to use dedicated switches between the hosts and the SAN.

For each Hyper-V host, you need eight 10Gbps Ethernet adapters.

Using Software-Defined Storage

This solution is based on a software storage solution (such as Scale-Out File Servers).

The network is the same as in the previous solutions. On the storage side, at least two RDMA-capable NICs are required for better performance. SMB3 over RDMA (Remote Direct Memory Access) increases throughput and decreases the CPU load; this is also called SMB Direct. To support multipath, SMB Multichannel must be used (not teaming!).
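
To check that SMB Direct and SMB Multichannel are actually in use between the Hyper-V hosts and the Scale-Out File Server, a few read-only cmdlets can help (a sketch, to be run on a Hyper-V host):

# Check that the NICs are RDMA capable and enabled
Get-NetAdapterRdma

# List the client network interfaces seen by SMB, with their RDMA capability and speed
Get-SmbClientNetworkInterface

# Once traffic flows to the SOFS shares, verify that several channels are established
Get-SmbMultichannelConnection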

When a VM needs a witness disk or another shared volume for guest clustering, it is possible to use a Shared VHDX to share a virtual hard disk between virtual machines.

This solution is less expensive because software-defined storage is cheaper than a SAN.

What about Windows Server 2016

In Windows Server 2016, you will be able to converge tenant and RDMA traffic on the same NICs to optimize costs, enabling high performance and network fault tolerance with only two NICs instead of four.
