Windows Server 2016 – Tech-Coffee

Configure Dell S4048 switches for Storage Spaces Direct

When we deploy Storage Spaces Direct (S2D), either hyperconverged or disaggregated, we have to configure the networking part. We usually work with Dell hardware to deploy Storage Spaces Direct, and one of the switches supported by the Dell reference architectures is the Dell S4048 (Force 10). In this topic, we will see how to configure this switch from scratch.

This topic has been co-written with Frederic Stefani, solution architect at Dell.

Stack or not

Customers usually know the stack feature, which is common to all network vendors such as Cisco, Dell, HP and so on. This feature lets you join several identical switches into a single configuration managed by a master switch. Because all switches share the same configuration, network administrators see them as a single switch: they connect to the master switch and edit the configuration of all members of the stack from there.

While stacking looks attractive on paper, it has a major drawback, especially with a storage solution such as S2D. With an S4048 stack, when you run an update, all switches reload at the same time. Because S2D relies heavily on the network, your storage solution will go down. This is why the Dell reference architecture for S2D recommends deploying a VLT (Virtual Link Trunking) configuration instead.

With stacking you have a single control plane (you configure all switches from a single switch) and a single data plane in a loop-free topology. In a VLT configuration, you also have a single data plane in a loop-free topology, but several control planes, which allows you to reboot the switches one by one.

For this reason, the VLT (or MLAG) technology is the preferred way for Storage Spaces Direct.

S4048 overview

An S4048 switch has 48x 10Gb/s SFP+ ports, 6x 40Gb/s QSFP+ ports, a management port (1Gb/s) and a serial port. The management and serial ports are located on the back. In the below diagram, there are three kinds of connections:

  • Connections for S2D (in this example ports 1 to 16, but you can use ports up to 48)
  • VLTi connection
  • Core connection: the uplink to connect to core switches

In the below architecture schema, you can see both S4048 switches interconnected through the VLTi ports and several S2D nodes (hyperconverged or disaggregated, it doesn't matter) connected to ports 1 to 16. In this topic, we will configure the switches according to this design.

Switches initial configuration

When you start the switch for the first time, you have to configure the initial settings such as the switch name, IP address and so on. Plug a serial cable from the switch to your computer and connect with a terminal emulator using the following settings:

  • Baud Rate: 115200
  • No Parity
  • 8 data bits
  • 1 stop bit
  • No flow control

Then you can run the following configuration:

enable
configure

# Configure the hostname
hostname SwitchName-01

# Set the IP address on the management port, to connect to the switch over IP
interface ManagementEthernet 1/1
ip address 192.168.1.1/24
no shutdown
exit

# Set the default gateway
ip route 0.0.0.0/0 192.168.1.254

# Enable SSH
ip ssh server enable

# Create a user and a password to connect to the switch
username admin password 7 MyPassword privilege 15

# Disable Telnet through IP
no ip telnet server enable

# We leave Rapid Spanning Tree Protocol enabled.
protocol spanning-tree rstp
no disable
exit

exit

# Write the configuration to memory
copy running-config startup-config

After this configuration is applied, you can connect to the switch through SSH. Apply the same configuration to the other switch (except for the hostname and IP address).

Configure switches for RDMA (RoCEv2)

N.B.: For this part we assume that you know how RoCE v2 works, especially DCB, PFC and ETS.

Because we implement these switches for S2D, we have to configure them for RDMA (RDMA over Converged Ethernet v2 implementation). Don't forget that with RoCE v2, you have to configure DCB and PFC end to end (on the server side and on the switch side). In this configuration, we assume that you use priority ID 3 for SMB traffic.

# By default the queue value is 0 for all dot1p (QoS) traffic. We enable this command globally to change this behavior.
service-class dynamic dot1p

# Enable Data Center Bridging. This allows configuring lossless, latency-sensitive traffic in a Priority Flow Control (PFC) queue.
dcb enable

# Provide a name to the DCB buffer threshold
dcb-buffer-threshold RDMA
priority 3 buffer-size 100 pause-threshold 50 resume-offset 35
exit

# Create a DCB map to configure the PFC and ETS (Enhanced Transmission Selection) rules
dcb-map RDMA

# For priority group 0, we allocate 50% of the bandwidth and PFC is disabled
priority-group 0 bandwidth 50 pfc off

# For priority group 3, we allocate 50% of the bandwidth and PFC is enabled
priority-group 3 bandwidth 50 pfc on

# Priority group 3 contains traffic with dot1p priority 3.
priority-pgid 0 0 0 3 0 0 0 0

Exit

Exit
copy running-config startup-config

Repeat this configuration on the other switch.
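
The same DCB and PFC settings must exist on the S2D nodes themselves (the "end to end" part mentioned above). A minimal host-side sketch using the DataCenterBridging PowerShell module could look like the following; the adapter names and the 50% bandwidth value are assumptions that simply mirror the dcb-map above.

# Install the Data Center Bridging feature on each S2D node
Install-WindowsFeature Data-Center-Bridging

# Tag SMB traffic (TCP 445) with priority 3, matching the switch configuration
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable PFC for priority 3 only
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve 50% of the bandwidth for SMB with ETS, like priority-group 3 on the switch
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply DCB on the RDMA adapters (names are placeholders) and ignore the switch DCBX advertisement
Enable-NetAdapterQos -Name "SMB1","SMB2"
Set-NetQosDcbxSetting -Willing $false -Confirm:$false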

VLT domain implementation

First of all, we have to create a port channel with two QSFP+ ports (ports 1/49 and 1/50):

Enable
Configure

# Configure the port-channel 100 (make sure it is not used)
interface Port-channel 100

# Provide a description
description VLTi

# Do not apply an IP address to this port channel
no ip address

#Set the maximum MTU to 9216
mtu 9216

# Add port 1/49 and 1/50
channel-member fortyGigE 1/49,1/50

# Enable the port channel
no shutdown

Exit

Exit
Copy Running-Config Startup-Config

Repeat this configuration on the second switch. Then we have to create the VLT domain that uses this port-channel. Below is the configuration on the first switch:

# Configure the VLT domain 1
vlt domain 1

# Specify the port-channel number which will be used by this VLT domain
peer-link port-channel 100

# Specify the IP address of the other switch
back-up destination 192.168.1.2

# Specify the priority of each switch
primary-priority 1

# Give an unused MAC address to the VLT
system-mac mac-address 00:01:02:01:02:05

# Give an ID for each switch
unit-id 0

# Wait 10 seconds after a switch reload or a peer-link restore before the saved configuration is applied
delay-restore 10

Exit

Exit
copy running-config startup-config

On the second switch, the configuration looks like this:

vlt domain 1
peer-link port-channel 100
back-up destination 192.168.1.1
primary-priority 2
system-mac mac-address 00:01:02:01:02:05
unit-id 1
delay-restore 10

Exit

Exit
copy running-config startup-config

Now the VLT is working. You don't have to specify VLAN IDs on this link: the VLT handles tagged and untagged traffic by itself.

S2D port configuration

To finish the switch configuration, we have to configure the ports and VLANs for the S2D nodes:

Enable
Configure
interface range ten 1/1-1/16

# No IP address assigned to these ports
no ip address

# Set the maximum MTU to 9216
mtu 9216

# Enable the handling of untagged and tagged traffic
portmode hybrid

# Enable Layer 2 switching; the ports are added to the default VLAN to carry untagged traffic
switchport

# Configure the ports as edge ports
spanning-tree 0 portfast

# Enable BPDU guard on these ports
spanning-tree rstp edge-port bpduguard

# Apply the DCB buffer-threshold policy to these ports
dcb-policy buffer-threshold RDMA

# Apply the DCB map to these ports
dcb-map RDMA

# Enable the ports
no shutdown

exit

exit
copy running-config startup-config

You can copy this configuration to the other switch. Now only the VLANs are missing. To create the VLANs and assign them to the ports, run the following configuration:

interface vlan 10
description Management
name VLAN-10
untagged TenGigabitEthernet 1/1-1/16
exit

interface vlan 20
description SMB
name VLAN-20
tagged TenGigabitEthernet 1/1-1/16
exit

[etc.]
Exit
Copy Running-Config Startup-Config

Once you have finished, copy this configuration on the second switch.

Real Case: Implement Storage Replica between two S2D clusters

This week, as part of my job, I deployed Storage Replica between two S2D clusters. I'd like to share with you the steps I followed to implement storage replication between two S2D hyperconverged clusters. Storage Replica replicates volumes at the block level. For my customer, Storage Replica is part of a Disaster Recovery Plan in case the first room goes down.

Architecture overview

The customer has two rooms. In each room, a four-node S2D cluster has been deployed. Each node has a Mellanox ConnectX-3 Pro (dual 10Gb ports) and an Intel network adapter for VMs. Currently the Mellanox network adapter is used for SMB traffic such as S2D and Live Migration. This network adapter supports RDMA, and Storage Replica can leverage SMB Direct (RDMA). So, the goal is to also use the Mellanox adapters for Storage Replica.

In each room, two Dell S4048S switches are deployed in VLT. The switches in both rooms are then connected by two optical fiber links of around 5 km. The latency is less than 5 ms, so we can implement synchronous replication. The Storage Replica traffic must use these fiber links. Currently the storage traffic is in a VLAN (ID: 247); we will use the same VLAN for Storage Replica.

Each S2D cluster has several Cluster Shared Volume (CSV). Among all these CSV, two CSV will be replicated in each S2D cluster. Below you can find the name of volumes that will be replicated:

  • (S2D Cluster Room 1) PERF-AREP-01 -> (S2D Cluster Room 2) PERF-PREP-01
  • (S2D Cluster Room 1) PERF-AREP-02 -> (S2D Cluster Room 2) PERF-PREP-02
  • (S2D Cluster Room 2) PERF-AREP-03 -> (S2D Cluster Room 1) PERF-PREP-03
  • (S2D Cluster Room 2) PERF-AREP-04 -> (S2D Cluster Room 1) PERF-PREP-04

In order for this to work, each volume pair (source and destination) must be strictly identical (same capacity, same resiliency, same file system, etc.). I will create one log volume per replicated volume, so I'm going to deploy 4 log volumes per S2D cluster.

Create log volumes

First of all, I create the log volumes by using the following cmdlet. The log volumes must not be converted to Cluster Shared Volume and a drive letter must be assigned:

New-Volume -StoragePoolFriendlyName "<storage pool name>" `
           -FriendlyName "<volume name>" `
           -FileSystem ReFS `
           -DriveLetter "<drive letter>" `
           –Size <capacity> 

As you can see in the following screenshots, I created four log volumes per cluster. The volumes are not CSV.

In the following screenshot, you can see that for each volume, there is a log volume.

Grant Storage Replica Access

You must grant security access between both cluster to implement Storage Replica. To grant the access, run the following cmdlets:

Grant-SRAccess -ComputerName "<Node cluster 1>" -Cluster "<Cluster 2>"
Grant-SRAccess -ComputerName "<Node cluster 2>" -Cluster "<Cluster 1>"

Test Storage Replica Topology

/!\ I didn't manage to run the Storage Replica topology test successfully. It seems there is a known issue with this cmdlet.

N.B.: To run this test, you must move the CSVs to the node which hosts the core cluster resources. In the below example, I moved the CSVs to replicate to HyperV-02.


To run the test, you have to run the following cmdlet:

Test-SRTopology -SourceComputerName "<Cluster room 1>" `
                -SourceVolumeName "c:\clusterstorage\PERF-AREP-01\" `
                -SourceLogVolumeName "R:" `
                -DestinationComputerName "<Cluster room 2>" `
                -DestinationVolumeName "c:\ClusterStorage\Perf-PREP-01\" `
                -DestinationLogVolumeName "R:" `
                -DurationInMinutes 10 `
                -ResultPath "C:\temp" 

As you can see in the below screenshot, the test is not successful because of a path issue. Even though the test didn't work, I was able to enable Storage Replica between the clusters. So if you hit the same issue, try to enable the replication anyway (check the next section).

Enable the replication between two volumes

To enable the replication between the volumes, you can run the following cmdlets. With these cmdlets, I created the four replications.

New-SRPartnership -SourceComputerName "<Cluster room 1>" `
                  -SourceRGName REP01 `
                  -SourceVolumeName c:\ClusterStorage\PERF-AREP-01 `
                  -SourceLogVolumeName R: `
                  -DestinationComputerName "<Cluster Room 2>" `
                  -DestinationRGName REP01 `
                  -DestinationVolumeName c:\ClusterStorage\PERF-PREP-01 `
                  -DestinationLogVolumeName R:

New-SRPartnership -SourceComputerName "<Cluster room 1>" `
                  -SourceRGName REP02 `
                  -SourceVolumeName c:\ClusterStorage\PERF-AREP-02 `
                  -SourceLogVolumeName S: `
                  -DestinationComputerName "<Cluster Room 2>" `
                  -DestinationRGName REP02 `
                  -DestinationVolumeName c:\ClusterStorage\PERF-PREP-02 `
                  -DestinationLogVolumeName S:

New-SRPartnership -SourceComputerName "<Cluster Room 2>" `
                  -SourceRGName REP03 `
                  -SourceVolumeName c:\ClusterStorage\PERF-AREP-03 `
                  -SourceLogVolumeName T: `
                  -DestinationComputerName "<Cluster room 1>" `
                  -DestinationRGName REP03 `
                  -DestinationVolumeName c:\ClusterStorage\PERF-PREP-03 `
                  -DestinationLogVolumeName T:

New-SRPartnership -SourceComputerName "<Cluster Room 2>" `
                  -SourceRGName REP04 `
                  -SourceVolumeName c:\ClusterStorage\PERF-AREP-04 `
                  -SourceLogVolumeName U: `
                  -DestinationComputerName "<Cluster room 1>" `
                  -DestinationRGName REP04 `
                  -DestinationVolumeName c:\ClusterStorage\PERF-PREP-04 `
                  -DestinationLogVolumeName U: 

Now that replication is enabled, if you open Failover Cluster Manager, you can see that some volumes are marked as source or destination. A new tab called Replication is added where you can check the replication status. The destination volume is no longer accessible until you reverse the replication direction.

Once the initial synchronization is finished, the replication status is Continuously replicating.
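
If you prefer checking this from PowerShell rather than from the console, a sketch along these lines shows the state of each replication group and the progress of the initial copy; the group name REP01 comes from the example above.

# List the replication groups and the status of each replicated volume
Get-SRGroup | Select-Object Name, ComputerName
(Get-SRGroup).Replicas | Select-Object DataVolume, ReplicationMode, ReplicationStatus

# Follow the initial synchronization of one group (bytes remaining to copy)
(Get-SRGroup -Name REP01).Replicas | Select-Object DataVolume, NumOfBytesRemaining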

Network adapters used by Storage Replica

In the overview section, I said that I want to use the Mellanox network adapters for Storage Replica (for RDMA). So I ran a cmdlet to check that Storage Replica is indeed using the Mellanox network adapters.
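
The cmdlet output was shown as a screenshot; an equivalent way to do the same check (and optionally pin a replication group to specific interfaces) is sketched below. The interface indexes are placeholders.

# List the active SMB connections and verify that the RDMA-capable Mellanox interfaces are selected
Get-SmbMultichannelConnection

# Optionally, constrain a replication group to specific interface indexes on each cluster
Set-SRNetworkConstraint -SourceComputerName "<Cluster room 1>" -SourceRGName REP01 -SourceNWInterface 3 `
                        -DestinationComputerName "<Cluster room 2>" -DestinationRGName REP01 -DestinationNWInterface 3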

Reverse the Storage Replica way

To reverse the Storage Replica, you can use the following cmdlet:

Set-SRPartnership -NewSourceComputerName "<Cluster room 2>" `
                  -SourceRGName REP01 `
                  -DestinationComputerName "<Cluster room 1>" `
                  -DestinationRGName REP01   

Conclusion

Storage Replica replicates a volume to another volume at the block level. In this case, I have two S2D clusters where each cluster hosts two source volumes and two destination volumes. Storage Replica helps the customer implement a Disaster Recovery Plan.

Connect StarWind Virtual Tape Library to a Windows Server 2016

StarWind Virtual Tape Library (VTL) is a feature included in StarWind Virtual SAN. StarWind VTL provides a virtual LTO library to store your backup archives, eliminating the heavy process of tape management as well as the cost of a physical LTO library. The connection between StarWind Virtual Tape Library and a server is made by using iSCSI, and StarWind VTL emulates a Hewlett Packard tape library. In this topic we'll see how to connect StarWind Virtual Tape Library to a Windows Server 2016. In a future topic, I'll use the StarWind VTL to archive backups with Veeam Backup & Replication U3.

Requirements

To install StarWind VTL, you need a server with at least the following hardware:

  • 2x (v)CPU
  • 4GB of RAM
  • 2x Network Adapters (one for management and the other for iSCSI)
  • 1x hard drive for OS
  • 1x hard drive for virtual tape

For my part, I have deployed a virtual machine to host StarWind VTL.

StarWind Virtual Tape Library installation

To deploy StarWind VTL, you first need to download StarWind Virtual SAN from this link. Then run the installation and, when asked to choose the components, specify the settings as follows:

Once the product is installed, run the StarWind management console.

StarWind Virtual Tape Library configuration

First, I change the management interface to bind only to the management IP address. This way, the product can't be managed from the iSCSI network adapter. To change the management interface, navigate to Configuration | Management Interface.

Next in General pane, click on Add VTL Device.

Then, specify a name and a location for your Virtual Tape Library.

Then leave the default option and click on Next.

On the next screen, you are asked to create the iSCSI target. Choose to Create new target. Then specify a target alias.

If the creation is successful, you should get the following information:

Now in StarWind Management Console, you have a VTL device.

If you wish, you can add another virtual tape library to your iSCSI target.

Connect Windows Server to StarWind VTL

On the Windows Server, open the iSCSI initiator properties. You are asked whether to start the MS iSCSI service automatically: choose Yes. Then, in Targets, enter the iSCSI IP address of the StarWind VTL server and connect to the target.
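
If you prefer PowerShell over the GUI, the same steps can be scripted roughly as follows; the portal IP address is an example.

# Start the Microsoft iSCSI initiator service and make it start automatically
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI

# Register the StarWind VTL iSCSI portal and connect to the discovered target persistently
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.20
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true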

Once connected, you can open Device Manager. As you can see in the below screenshot, the tape library and tape drives presented by StarWind appear as new devices.

Conclusion

If you don't want to invest in a tape library, you can use StarWind Virtual Tape Library to archive your data. In the real world, you usually use a physical machine with a lot of SATA drives: instead of tapes, the archives are stored on SATA disks. Besides being cheaper, this removes the need for a tape rotation process. However, the StarWind VTL should be located in another datacenter in case of a datacenter disaster.

Storage Spaces Direct dashboard

Today I release a special Christmas gift for you. For some time now, I have been writing a PowerShell script to generate a Storage Spaces Direct dashboard. This dashboard enables you to validate each important setting of an S2D cluster.

I decided to write this PowerShell script to avoid running hundreds of PowerShell cmdlets and manually checking the returned values. With this dashboard, you get almost all the information you need.

Where can I download the script?

The script is available on GitHub. You can download the documentation and the script from this link. Please read the documentation before running the script.

Storage Spaces Direct dashboard

The below screenshot shows you a dashboard example. This dashboard has been generated from my 2-node cluster in lab.

Roadmap

I plan to improve the script next year by adding support for the disaggregated S2D deployment model and by adding information such as the cache/capacity ratio and the reserved capacity.

Special thanks

I'd like to thank Dave Kawula, Charbel Nemnom, Kristopher Turner and Ben Thomas for helping me resolve most of the issues by running the script on their S2D infrastructures.

Monitor S2D with Operations Manager 2016

Storage Spaces Direct (S2D) is the Microsoft Software-Defined Storage solution. Thanks to S2D, we can deploy hyperconverged infrastructure based on Microsoft technologies such as Hyper-V. This feature is included in Windows Server 2016 Datacenter edition. You can find a lot of blog posts about S2D on this website. In this topic, I’ll talk about how to monitor S2D.

S2D is a storage solution and therefore a critical component: its availability directly affects the virtual machines and applications running on it. We therefore have to monitor S2D to ensure availability as well as performance. When you enable Storage Spaces Direct, a new cluster role is also enabled: the Health Service. This cluster role gathers metrics and alerts from all cluster nodes and provides them through a single pane of glass (an API). This API is accessible from PowerShell, .NET and so on. Even though the Health Service is a good idea, it is not usable for day-to-day administration on its own, because it provides only real-time metrics with no history, and there is no GUI for it.
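
For example, you can query the Health Service directly from a cluster node with PowerShell; these cmdlets return only the real-time values mentioned above:

# Current health and performance metrics reported by the Health Service
Get-StorageSubSystem Cluster* | Get-StorageHealthReport

# Current faults (alerts) reported by the Health Service
Get-StorageSubSystem Cluster* | Debug-StorageSubSystem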

Microsoft has written a management pack for Operations Manager which gets information from the Health Service API on a regular basis. In this way, we are able to build charts based on this information over time. Moreover, SCOM is able to raise alerts based on the S2D state. If you are using SCOM and S2D in your IT, I suggest installing the Storage Spaces Direct management pack 🙂

Requirements

The below requirements come from the management pack documentation. To install the Storage Spaces Direct management pack you need:

  • System Center Operations Manager 2016
  • A S2D cluster based on Windows Server 2016 Datacenter with KB3216755 (Nano Server not supported)
  • Enable agent proxy settings on all S2D nodes
  • A working S2D cluster (hyperconverged or disaggregated)

You can download the management pack from this link.

Management pack installation

After you have downloaded and installed the management pack, you get the following files:

  • Storage Spaces Direct 2016: the Microsoft Windows Server 2016 Storage Spaces Direct management pack itself.
  • Storage Spaces Direct 2016 Presentation: adds views and dashboards for the management pack.
  • Microsoft System Center Operations Manager Storage Visualization Library: basic visual components required for the management pack dashboards.
  • Microsoft Storage Library: a set of common classes for Microsoft Storage management packs.

Then, open an Operations Manager console and navigate to Administration. Right click on Management Packs and select Import Management Packs. Then select Add from disk.

If you have Internet on your server, you can select Yes in the following pop-up to resolve dependencies with the online catalog.

In the next window, click on Add and select the Storage Spaces Direct management pack files. Then click on Install.

In the Monitoring pane, you should get a Storage Spaces Direct 2016 “folder” as below. You may also get the following error: this is because the management pack is not yet fully initialized and you have to wait a few minutes.

Operations Manager configuration

First, be sure that the agent proxy is enabled on each S2D node. Navigate to Administration | Agent Managed, right click on the node and select Properties. In Security, be sure that Allow this agent to act as a proxy and discover managed objects on other computers is enabled.
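
If you have many nodes, the same setting can be applied in bulk from the Operations Manager Shell; the display-name filter below is only an example:

# Enable the agent proxy setting on every S2D node at once
Get-SCOMAgent | Where-Object { $_.DisplayName -like "*hyperv*" } | Enable-SCOMAgentProxy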

Now I need to create a group for the S2D nodes. I'd like this group to be dynamic so I don't have to populate it manually. To create a group, navigate to Authoring, right click on Groups and select Create a new Group.

Provide a name for this group then select a destination management pack. I have created a dedicated management pack for overrides. I have called this management pack Storage Spaces Direct – Custom.

In the next window, I just click next because I don’t want to provide explicit group members.

Next I create a query for dynamic inclusion. The rule is simple: each server which has an Active Directory DN containing the word Hyper-V is added to the group.

As you can see in the below screenshot, my S2D nodes are located in a specific OU called Hyper-V. Each time I add a node, it is moved to this OU, so my Operations Manager group is populated automatically.

In the next screen of the group wizard, I just click on next.

Then I click again on next because I don’t want to exclude objects from this group.

Then your group is added and should be populated with S2D nodes. Now navigate to Monitoring | Storage | Storage Spaces Direct 2016 | Storage Spaces Direct 2016. Click on the “hamburger” menu on the right and select Add Group.

Then select the group you have just created.

From this point, the configuration is finished. Now you have to wait a while (I waited 2 or 3 hours) before getting all the information.

Monitor S2D

After a few hours, you should get information as below. You can get information about the storage subsystem, volumes, nodes and file shares (for disaggregated infrastructures). You can click on each square to get more information.

In the below screenshot, you can see the information about volumes. It is really valuable because you get the state, the total capacity, IOPS, throughput and so on. Active alerts on volumes are also displayed.

The below screenshot shows information about the storage subsystem:

Conclusion

If you are already using Operations Manager 2016 and Storage Spaces Direct, you can easily monitor your storage solution. The management pack is free, so I really suggest installing it. If you are not using Operations Manager, you should find another solution to monitor S2D, because the storage layer is a critical component.

Don't worry, Storage Spaces Direct is not dead!

Usually, I don't write about news, only about technical details. But with the release of Windows Server 2016 version 1709, a lot of misinformation has been written and I'd like to offer another way to look at this.

First of all, it is important to understand what happened to Windows Server 2016. Microsoft has changed how Windows Server is delivered to customers. There are now two release channels:

  • LTSC (Long-Term Servicing Channel): This is Windows Server with 5 years of mainstream support and 5 years of extended support. You'll get security and quality updates but no new features. Both Server Core and Server with a GUI are supported in this channel. Microsoft expects to release a new LTSC version every 2 or 3 years.
  • SAC (Semi-Annual Channel): In this channel, Microsoft releases a new version every 6 months. Each release is supported for 18 months from its initial release and should bring new features. Only Server Core is supported in this channel.

So this month's release, called 1709 (2017 = 17, September = 09: 1709), is part of the SAC. In 6 months, a new Semi-Annual Channel release, called 1803, should follow.

But where is Storage Spaces Direct

Storage Spaces Direct is the main building block of the Microsoft Software-Defined Storage solution. This feature was released with Windows Server 2016, the LTSC release from October 2016 (version 1607). Storage Spaces Direct (S2D for friends) works great with this release, and I have deployed plenty of S2D clusters which are currently running in production without issues (yes, I had some issues, but they were resolved quickly).

This month Microsoft released Windows Server 1709, which is a SAC release. It mainly contains container improvements and, the reason for this topic, no support for S2D. This is a SAC release, not a service pack; you can no longer compare a SAC release with a service pack. SAC is a continuous upgrade model, while a service pack is mainly an aggregate of updates. Are you running S2D? Don't install the 1709 release and wait 6 months… you'll see 🙂

Why removing the support of S2D ?

Storage is a complicated component. We need stable and reliable storage because companies' data lives there. If the storage is gone, the company can close down.

I can tell you that Microsoft works hard on S2D to bring you the best Software-Defined Storage solution. But the level of validation required for production had not been reached in time to ship S2D with Windows Server 1709. What do you prefer: a buggy S2D release, or waiting 6 months for a high-quality product? For my part, I prefer to wait 6 months for a better product.

Why Storage Spaces Direct is not dead ?

Over the last two days, I have read some posts saying that Microsoft is pushing Azure / Azure Stack and doesn't care about on-premises solutions. Yes, it's true that today Microsoft talks mostly about Azure / Azure Stack, and I think that's a shame. But the Azure Stack solution is based on Storage Spaces Direct: Microsoft needs to keep improving this feature to deploy more and more Azure Stack.

Secondly, Microsoft has presented a new GUI tool called Project Honolulu and has developed a module to manage hyperconverged solutions. You may have seen the presentation at Ignite. Why would Microsoft develop a product for a technology it wants to give up?

Finally, I sometimes work with the product group in charge of S2D. I can tell you they work hard on the product to make it even better. I have the good fortune of being able to see and try the next new features of S2D.

Conclusion

If you are running S2D on Windows Server 2016, keep the 2016 LTSC release (version 1607) and wait for the next release in SAC or LTSC. If you want to run S2D but you are worried about this announcement, be sure that Microsoft will not leave S2D behind: you can deploy S2D with Windows Server 2016 today, or maybe wait for Windows Server 1803 (next March). Be sure of one thing: Storage Spaces Direct is not dead!

Storage Spaces Direct: plan the storage for hyperconverged

When a customer calls me to design or validate a hardware configuration for a hyperconverged infrastructure with Storage Spaces Direct, there is often a misunderstanding about the remaining usable capacity, the required cache capacity and ratio, and the different resiliency modes. With this topic, I'll try to help you plan the storage for hyperconverged deployments and clarify some points.

Hardware consideration

Before sizing the storage devices, you should be aware of some limitations. First, you can't exceed 26 storage devices per node: Windows Server 2016 can't handle more than 26 storage devices, so if you deploy your operating system on two storage devices, 24 are available for Storage Spaces Direct. However, storage devices are getting bigger and bigger, so 24 storage devices per node is enough (I have never seen a deployment with more than 16 storage devices for Storage Spaces Direct).

Secondly, you have to pay attention to your HBA (Host Bus Adapter). With Storage Spaces Direct, the operating system is in charge of resiliency and cache; this is a software-defined solution after all. So, there is no reason for the HBA to manage RAID and cache. In the Storage Spaces Direct case, the HBA is mainly used to add more SAS ports. So, don't buy an HBA with RAID and cache, because you will not use these features: Storage Spaces Direct storage devices will be configured in JBOD (pass-through) mode. If you choose Lenovo servers, you can buy the N2215 HBA; if you choose Dell, you can select the HBA330. The HBA must provide the following features:

  • Simple pass-through SAS HBA for both SAS and SATA drives
  • SCSI Enclosure Services (SES) for SAS and SATA drives
  • Any direct-attached storage enclosures must present Unique ID
  • Not Supported: RAID HBA controllers or SAN (Fibre Channel, iSCSI, FCoE) devices

Thirdly, there are requirements regarding the storage devices themselves. Only NVMe, SAS and SATA devices are supported; if you have old SCSI storage devices, you can drop them :). These storage devices must be physically attached to only one server (locally attached devices). If you choose to implement SSDs, they must be enterprise-grade with power-loss protection, so please don't build a hyperconverged solution with Samsung 850 Pro drives. If you plan to install cache storage devices, these SSDs must be rated for at least 3 DWPD (Drive Writes Per Day), which means each device can be entirely written three times per day.
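
As a quick sanity check on each node, you can verify that the drives are presented the way S2D expects (SAS, SATA or NVMe bus type, poolable, no RAID abstraction) with a one-liner such as:

# List local drives with the properties that matter for S2D eligibility
Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, CanPool, CannotPoolReason, Size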

Finally, you have to respect a minimum number of storage devices. You must implement at least 4 capacity storage devices per node, and if you plan to install cache storage devices, you have to deploy at least two of them per node. Every node in the cluster must have the same kinds of storage devices: if you choose to deploy NVMe in one server, all servers must have NVMe. As much as possible, keep the same configuration across all nodes. The below table gives the minimum number of storage devices per node for each configuration:

  • All NVMe (same model): 4 NVMe
  • All SSD (same model): 4 SSD
  • NVMe + SSD: 2 NVMe + 4 SSD
  • NVMe + HDD: 2 NVMe + 4 HDD
  • SSD + HDD: 2 SSD + 4 HDD
  • NVMe + SSD + HDD: 2 NVMe + 4 others

Cache ratio and capacity

The cache ratio and capacity are an important part of the design when you choose to deploy a cache. I have seen a lot of wrong designs because of the cache mechanism. The first thing to know is that the cache is not mandatory: as shown in the above table, you can implement an all-flash configuration without a cache. However, if you choose to deploy a solution based on HDDs, you must implement a cache. When the capacity devices behind the cache are HDDs, the cache works in read/write mode; otherwise, it works in write-only mode.

The cache capacity must be at least 10% of the raw capacity: if each node has 10TB of raw capacity, you need at least 1TB of cache per node. Moreover, if you deploy a cache, you need at least two cache storage devices per node to ensure the high availability of the cache. When Storage Spaces Direct is enabled, capacity devices are bound to cache devices in a round-robin manner; if a cache device fails, its capacity devices are bound to another cache device.

Finally, you must respect a ratio between cache devices and capacity devices: the number of capacity devices must be a multiple of the number of cache devices. This ensures that each cache device serves the same number of capacity devices.

Reserved capacity

When you design the storage pool capacity and choose the number of storage devices, keep in mind that you need some unused capacity in the storage pool. This is the reserved capacity used by the repair process: if a capacity device fails, the storage pool rebuilds the blocks that were written on this device to restore the resiliency, and this process requires free space. Microsoft recommends leaving the equivalent of one capacity device per node unallocated, up to four drives.

For example, with 6 nodes and 4x 4TB HDD per node, I leave 4x 4TB (one per node, up to four drives) empty in the storage pool as reserved capacity.

Example of storage design

You should know that in a hyperconverged infrastructure, storage and compute are related because both components reside in the same box. So before calculating the required raw capacity, you should have evaluated two things: the number of nodes you plan to deploy and the usable storage capacity required. For this example, let's say that we need four nodes and 20TB of usable capacity.

First, you have to choose a resiliency mode. In hyperconverged deployments, 2-way mirroring or 3-way mirroring is usually implemented. If you choose 2-way mirroring (tolerates 1 fault), you get 50% usable capacity. If you choose 3-way mirroring (recommended, tolerates 2 faults), you get only 33% usable capacity.

PS: At the time of writing, Microsoft has announced deduplication for ReFS volumes in the next Windows Server release.

So, if you need 20TB of usable capacity and you choose 3-way mirroring, you need at least 60TB (20 x 3) of raw storage capacity. That means each of the four nodes needs 15TB of raw capacity.

Now that you know you need 15TB of raw storage per node, you need to define the number of capacity storage devices. If you need maximum performance, you can choose only NVMe devices, but this solution will be very expensive. For this example, I choose SSDs for the cache and HDDs for the capacity.

Next, I need to define which kind of HDD to select. If I choose 4x 4TB HDD per node, I have 16TB of raw capacity per node, and I need to add an additional 4TB HDD for the reserved capacity. But this solution is not good regarding the cache ratio: no valid ratio is possible with five capacity devices. In this case, I need to add another 4TB HDD to get a total of 6x 4TB HDD per node (24TB raw capacity), which allows a cache ratio of 1:2 or 1:3.

The other solution is to select 2TB HDDs. I need 8x 2TB HDD to get the required raw capacity, plus an additional 2TB HDD for the reserved capacity. That gives 9x 2TB HDD per node, and I can respect the cache ratio with 1:3. I prefer this solution because it is closest to the requirements.

Now we need to design the cache devices. For this solution, we need 3 cache devices per node with a total capacity of at least 1.8TB (10% of the raw capacity per node). So I choose 800GB SSDs (because my favorite cache SSD, the Intel S3710, exists in 400GB or 800GB :)). 800GB x 3 = 2.4TB of cache capacity per node.

So, each node will be installed with 3x 800GB SSD and 9x 2TB HDD with a cache ratio of 1:3. The total raw capacity is 72TB and the reserved capacity is 8TB. The usable capacity will be 21.12TB ((72 - 8) x 0.33).
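
To make the arithmetic easier to reproduce, here is a minimal PowerShell sketch of the same sizing calculation; the variable names, the 0.33 efficiency factor and the 10% cache rule are simply taken from this example, not from an official sizing tool.

# Sizing helper reproducing the example above (4 nodes, 9x 2TB HDD and 3x 800GB SSD per node)
$NodeCount             = 4
$CapacityDrivesPerNode = 9
$CapacityDriveTB       = 2
$Efficiency            = 0.33                 # 3-way mirroring
$ReservePerNodeTB      = $CapacityDriveTB     # one capacity device per node, up to four drives

$RawTB      = $NodeCount * $CapacityDrivesPerNode * $CapacityDriveTB   # 72 TB
$ReserveTB  = [Math]::Min($NodeCount, 4) * $ReservePerNodeTB           # 8 TB
$UsableTB   = ($RawTB - $ReserveTB) * $Efficiency                      # 21.12 TB
$CacheMinTB = $CapacityDrivesPerNode * $CapacityDriveTB * 0.10         # 1.8 TB per node

"Raw: $RawTB TB / Reserved: $ReserveTB TB / Usable: $UsableTB TB / Minimum cache per node: $CacheMinTB TB"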

About Fault Domain Awareness

I have made this demonstration with Fault Domain Awareness at the node level. If you configure Fault Domain Awareness at the chassis or rack level, the calculation is different. For example, if you configure Fault Domain Awareness at the rack level, you need to divide the total raw capacity across the number of racks, and you need exactly the same number of nodes per rack. With this configuration and the above example, you need 15TB of raw capacity per rack.

Deploy Veeam Cloud Connect for large environments in Microsoft Azure

Veeam Cloud Connect is a solution to store backups and archives in a second datacenter such as Microsoft Azure. Thanks to this technology, we can easily follow the 3-2-1 backup rule (3 backups; 2 different media; 1 offsite). Last time I talked about Veeam Cloud Connect, I deployed all Veeam roles within a single VM. This time I'm going to deploy Veeam Cloud Connect in Microsoft Azure, with the roles spread across different Azure VMs. Moreover, some roles such as the Veeam Cloud Gateway will be deployed in a high-availability setup.

Before I begin, I’d like to thank Pierre-Francois Guglielmi – Veeam Alliances System Engineer (@pfguglielmi) for his time. Thank you for your review, your English correction and your help.

What is Veeam Cloud Connect

Veeam Cloud Connect provides an easy way to copy your backups to an offsite location, which can be a public cloud (such as Microsoft Azure), for example for archival purposes. Instead of investing money in another datacenter to store backup copies, you can choose to leverage Veeam Cloud Connect (VCC) to send these backup copies to Microsoft Azure. VCC exists in the form of two templates that you can find in the Microsoft Azure Marketplace:

  • Veeam Cloud Connect for Service Providers
  • Veeam Cloud Connect for the Enterprise

The first one is for service providers with several customers who want to deliver Backup-as-a-Service offerings using the Veeam Cloud Connect technology. This provider can deploy the solution in a public cloud and deliver the service to clients. The second version is dedicated to companies willing to build similar Backup-as-a-Service offerings internally, leveraging the public cloud to send backup copies offsite. For this topic, I’ll work on Veeam Cloud Connect for Enterprise, but the technology is the same.

Veeam Cloud Connect is a Veeam Backup & Replication server with Cloud Connect features unlocked by a specific license file. When deploying this kind of solution, you have the following roles:

  • Microsoft Active Directory Domain Controller (optional)
  • Veeam Cloud Connect server
  • Veeam Cloud Gateway
  • Veeam backup repositories
  • Veeam WAN Accelerator (optional)

Microsoft Active Directory Domain Controller

A domain controller is not a mandatory role for the Veeam Cloud Connect infrastructure, but it can make server and credential management easier. If you plan to establish a site-to-site VPN from your on-premises environment to Microsoft Azure, you can deploy domain controllers within Azure, in the same forest as the existing domain controllers, and join all Azure VMs to the domain. In this way, you can use your existing credentials to manage servers, apply existing GPOs and create specific service accounts for Veeam managed by Active Directory. It is up to you: if you don't deploy a domain controller within Azure, you can still deploy the VCC infrastructure, but then you'll have to manage the servers one by one.

Veeam Cloud Connect server

Veeam Cloud Connect server is a Veeam Backup & Replication server with Cloud Connect features. This is the central point to manage and deploy Veeam Cloud Connect infrastructure components. From this component, you can deploy Veeam Cloud Gateway, WAN accelerator, backup repositories and manage backup copies.

Veeam Cloud Gateway

The Veeam Cloud Gateway component is the entry point of your Veeam Cloud Connect infrastructure. When you choose to send a backup copy to this infrastructure, you specify the public IP or DNS name of the Veeam Cloud Gateway server(s). This service is based on Azure VM(s) running Windows Server with a public IP address to allow secure inbound and outbound connections to the on-premises environment. If you choose to deploy several Veeam Cloud Gateway servers for high availability, you have two ways to provide a single entry point:

  • A round-robin record at your public DNS registrar: one DNS name with an A record for each Veeam Cloud Gateway public IP address.
  • A Traffic Manager in front of all Veeam Cloud Gateway servers

Because Veeam Cloud Gateway has its own load-balancing mechanism, you can't deploy an Azure Load Balancer, an F5 appliance or any other kind of load balancer in front of the Veeam Cloud Gateways.

Veeam Backup repositories

This is the storage system that stores the backups. It can be a single Windows Server with a single disk or a storage space. Don't forget that in Azure, the maximum size of a single data disk is 4TB (as of June 2017). You can also leverage the Scale-Out Backup Repository functionality, where several backup repositories are managed by Veeam as a single logical repository. Finally, and this is the scenario I'm going to present later in this topic, you can store backups on a Scale-Out File Server based on a Storage Spaces Direct cluster. This solution provides SMB 3.1.1 access to the storage.

Veeam WAN Accelerator

The Veeam WAN Accelerator is the same component already available in Veeam Backup & Replication. This service optimizes the traffic between source and destination by sending only new unique blocks not already known at the destination. To leverage this feature, you need a pair of WAN Accelerator servers. The source WAN Accelerator creates digests for data blocks, and the target synchronizes these digests and populates a global cache. During the next transfer, the source WAN Accelerator compares the digests of the blocks in the new incremental backup file with the already known digests. If nothing has changed, the block is not copied over the network and the data is taken from the global cache on the target, or from the target backup repositories, which in that case act as an infinite cache.

Architecture Overview

For this topic, I decided to separate roles on different Azure VMs. I’ll have 5 kinds of Azure VMs:

  • Domain Controllers
  • Veeam Cloud Gateways
  • Veeam Cloud Connect
  • Veeam WAN Accelerator
  • File Servers (Storage Spaces Direct)

First, I deploy two Domain Controllers to ease management. This is completely optional. All domain controllers are members of an Azure Availability Set.

The Veeam Cloud Gateway servers are located behind a Traffic Manager profile. Each Veeam Cloud Gateway has its own public IP address, and the Traffic Manager profile distributes the traffic across the public IP addresses of the Veeam Cloud Gateway servers. The JSON template provided below allows deploying from 1 to 9 Cloud Gateway servers depending on your needs. All Veeam Cloud Gateways are added to an Availability Set to support a 99.95% SLA.

Then I deploy two Veeam Cloud Connect VMs: one active and one passive. I add both of these Azure VMs to an Availability Set. If the first VM crashes, the backup configuration is restored to the second VM.

The WAN Accelerator is not in an Availability Set because you can add only one WAN Accelerator per tenant. You can deploy as many WAN accelerators as required.

Finally, the backup repository is based on Storage Spaces Direct. I deploy 4 Azure VMs to leverage parity. I choose parity because my S2D managed disks are based on SSD (premium disks); if you want more performance, or if you choose standard disks, I recommend mirroring instead of parity. You could use a single VM to store backups to save money, but for this demonstration I'd like to use Storage Spaces Direct just to show that it is possible. However, there is one limitation with S2D in Azure: managed disks are recommended for better performance, and an Availability Set with Azure VMs using managed disks supports only three fault domains. That means that in a four-node S2D cluster, two nodes will be in the same fault domain, so there is a chance that two nodes fail simultaneously. Dual parity (or 3-way mirroring) tolerates two such failures.

Azure resources: Github

I have published in my Github repository a JSON template to deploy the infrastructure described above. You can use this template to deploy the infrastructure for your lab or production environment. In this example, I won’t explain how to deploy the Azure Resources because this template does it automatically.

Active Directory

Active Directory is not mandatory for this kind of solution; I have deployed domain controllers to make the management of servers and credentials easier. To configure the domain controllers, I started the Azure VMs where the domain controller roles will be deployed. On the first VM, I ran the following PowerShell cmdlets to deploy the forest:

# Initialize the Data disk
Initialize-Disk -Number 2

#Create a volume on disk
New-Volume -DiskNumber 2 -FriendlyName Data -FileSystem NTFS -DriveLetter E

#Install DNS and ADDS features
Install-windowsfeature -name AD-Domain-Services, DNS -IncludeManagementTools

# Forest deployment
Import-Module ADDSDeployment
Install-ADDSForest -CreateDnsDelegation:$false `
                   -DatabasePath "E:\NTDS" `
                   -DomainMode "WinThreshold" `
                   -DomainName "VeeamCloudConnect.net" `
                   -DomainNetbiosName "HOMECLOUD" `
                   -ForestMode "WinThreshold" `
                   -InstallDns:$true `
                   -LogPath "E:\NTDS" `
                   -NoRebootOnCompletion:$false `
                   -SysvolPath "E:\SYSVOL" `
                   -Force:$true

Then I run these cmdlets for additional domain controllers:

# Initialize data disk
Initialize-Disk -Number 2

# Create a volume on disk
New-Volume -DiskNumber 2 -FriendlyName Data -FileSystem NTFS -DriveLetter E

# Install DNS and ADDS features
Install-windowsfeature -name AD-Domain-Services, DNS -IncludeManagementTools

# Add domain controller to forest
Import-Module ADDSDeployment
Install-ADDSDomainController -NoGlobalCatalog:$false `
                             -CreateDnsDelegation:$false `
                             -Credential (Get-Credential) `
                             -CriticalReplicationOnly:$false `
                             -DatabasePath "E:\NTDS" `
                             -DomainName "VeeamCloudConnect.net" `
                             -InstallDns:$true `
                             -LogPath "E:\NTDS" `
                             -NoRebootOnCompletion:$false `
                             -SiteName "Default-First-Site-Name" `
                             -SysvolPath "E:\SYSVOL" `
                             -Force:$true

Once the Active Directory is ready, I add each Azure VM to the domain by using the following cmdlet:

Add-Computer -Credential homecloud\administrator -DomainName VeeamCloudConnect.net -Restart

Configure Storage Spaces Direct

I have written several topics on Tech-Coffee about Storage Spaces Direct, for example this topic or this one. These topics give more details about Storage Spaces Direct if you need more information.

To configure Storage Spaces Direct in Azure, I started all file servers VMs. Then in each VM I ran the following cmdlet:

# Rename vNIC connected in Internal subnet by Management
rename-netadapter -Name "Ethernet 3" -NewName Management

# Rename vNIC connected in cluster subnet by cluster
rename-netadapter -Name "Ethernet 2" -NewName Cluster

# Disable DNS registration for cluster vNIC
Set-DNSClient -InterfaceAlias *Cluster* -RegisterThisConnectionsAddress $False

# Install required features
Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools -Restart

Once you have run these commands on each server, you can deploy the cluster:

# Validate cluster prerequisites
Test-Cluster -Node AZFLS00, AZFLS01, AZFLS02, AZFLS03 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"

#Create the cluster
New-Cluster -Node AZFLS00, AZFLS01, AZFLS02, AZFLS03 -Name Cluster-BCK01 -StaticAddress 10.11.0.160

# Set the cluster quorum to Cloud Witness (choose another Azure location)
Set-ClusterQuorum -CloudWitness -AccountName StorageAccount -AccessKey "AccessKey"

# Change the CSV cache to 1024MB per CSV
(Get-Cluster).BlockCacheSize=1024

# Rename network in the cluster
(Get-ClusterNetwork "Cluster Network 1").Name="Management"
(Get-ClusterNetwork "Cluster Network 2").Name="Cluster"

# Enable Storage Spaces Direct
Enable-ClusterS2D -Confirm:$False

# Create a volume and rename the folder Volume1 to Backup
New-Volume -StoragePoolFriendlyName "*Cluster-BCK01*" -FriendlyName Backup -FileSystem CSVFS_ReFS -ResiliencySettingName parity -PhysicalDiskRedundancy 2 -Size 100GB
Rename-Item C:\ClusterStorage\Volume1 Backup
new-item -type directory C:\ClusterStorage\Backup\HomeCloud

Then open the Active Directory console (dsa.msc) and edit the permissions of the OU where the Cluster Name Object is located. Grant the CNO (Cluster-BCK01 in this example) the permission to create computer objects on this OU.
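If you prefer the command line to the GUI, the same permission can be granted with dsacls; a sketch, assuming a placeholder OU distinguished name (single quotes prevent PowerShell from expanding the $ in the CNO name):

# Grant the CNO (Cluster-BCK01$) the right to create computer objects on the OU (placeholder DN)
dsacls "OU=Servers,DC=VeeamCloudConnect,DC=net" /G 'HOMECLOUD\Cluster-BCK01$:CC;computer'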

Next, run the following cmdlets to complete the file server’s configuration:

# Add Scale-Out File Server to cluster
Add-ClusterScaleOutFileServerRole -Name BackupEndpoint

# Create a share
New-SmbShare -Name 'HomeCloud' -Path C:\ClusterStorage\Backup\HomeCloud -FullAccess everyone

First start of the Veeam Cloud Connect VM

The first time you connect to the Veeam Cloud Connect VM, you should see the following screen. Just specify the license file for Veeam Cloud Connect and click Next. The next screen shows the requirements to run a Veeam Cloud Connect infrastructure.

Deploy Veeam Cloud Gateway

The first component I deploy is the Veeam Cloud Gateway. In the Veeam Backup & Replication console (in the Veeam Cloud Connect VM), navigate to Cloud Connect. Then select Add Gateway.

In the first screen, just click on Add New…

Then specify the name of the first gateway and provide a description.

In the next screen, enter credentials that have administrative permissions on the Veeam Cloud Gateway VM. For that, I created an account in Active Directory and added it to the local administrators group of the VM.

Then Veeam tells you that it has to deploy a component on the target host. Just click Apply.

The following screen shows a successful deployment:

Next you have a summary of the operations applied to the target server and what has been installed.

Now you are back to the first screen. This time select the host you just added. You can change the external port. For this test I kept the default value.

Then choose “This server is located behind NAT” and specify the public IP address of the machine. You can find this information in the Azure Portal on the Azure VM blade. Here again I left the default internal port.

This time, Veeam tells you that it has to install Cloud Gateway components.

The following screenshot shows a successful deployment:

Repeat these steps for each Cloud Gateway. In this example, I have two Cloud Gateways:

To complete the Cloud Gateway configuration, open the Azure Portal and edit the Traffic Manager profile. Add an endpoint for each Cloud Gateway you deployed and select the right public IP address. (Sorry, I didn't find how to loop the creation of endpoints in the JSON template.)

Because I have two Cloud Gateways, I end up with two Traffic Manager endpoints with the same weight, so incoming connections are distributed evenly across both gateways.
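If you want to script this step anyway, the endpoints can be added with the AzureRM Traffic Manager cmdlets; a sketch, assuming placeholder resource group, profile and public IP names:

# Assumption: placeholder names for the resource group, the profile and the gateway public IPs
$tmProfile = Get-AzureRmTrafficManagerProfile -Name VeeamCloudConnect -ResourceGroupName RG-VeeamCC

foreach ($pipName in "AZVCG0-pip", "AZVCG1-pip")
{
    $pip = Get-AzureRmPublicIpAddress -Name $pipName -ResourceGroupName RG-VeeamCC
    Add-AzureRmTrafficManagerEndpointConfig -EndpointName $pipName `
                                            -TrafficManagerProfile $tmProfile `
                                            -Type AzureEndpoints `
                                            -TargetResourceId $pip.Id `
                                            -EndpointStatus Enabled `
                                            -Weight 1
}

# Push the new endpoints to Azure
Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $tmProfile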

Add the backup repository

In this step, we add the backup repository. Open the Veeam Backup & Replication console (in Veeam Cloud Connect VM) and navigate to Backup Infrastructure. Then select Add Repository.

Enter a name and a description for your backup repository.

Next, select Shared folder, because Storage Spaces Direct with SOFS is based on… shared folders.

Then specify the UNC path to the share that you previously created (see the Storage Spaces Direct section) and provide credentials with the required privileges.

In the next screen you can limit the maximum number of concurrent tasks, the data rates and set some advanced parameters.

Then I chose not to enable vPower NFS because it is only used in VMware vSphere environments.

The following steps are not mandatory. I just clean up the default configuration. First I remove the default tenant.

Then I change the Configuration Backup task’s repository to the one created previously. For that I navigate to Configuration Backup:

Then I specify that I want to store the configuration backups on my S2D cluster. It is highly recommended to encrypt the configuration backup so that the stored credentials are included in it.

Finally, I remove the default backup repository.

Deploy Veeam WAN Accelerator (Optional)

To add a Veeam WAN Accelerator, navigate to Backup Infrastructure and select Add WAN Accelerator.

In the next screen, click Add New…

Specify the FQDN of the target host and type in a description.

Then select credentials with administrative permissions on the target host.

In the next screen, Veeam tells you that a component has to be installed.

This screen shows a successful deployment.

Next, a summary screen recaps the configuration applied to the target host.

Now you are back to the first screen. Just select the server that you just added and provide a description. I chose to leave the default traffic port and number of streams.

Select a cache device with enough capacity for your needs.

Finally you can review your settings. If all is ok, just click Apply.

You can add as many WAN Accelerators as needed. One WAN Accelerator can be used by several tenants, but only one WAN Accelerator can be bound to a given tenant.

Prepare the tenant

Now you can add a tenant. Navigate to Cloud Connect tab and select Add tenant.

Provide a user name, a password and a description for your tenant. Then choose Backup storage (cloud backup repository).

In the next screen you can define the maximum number of concurrent tasks and a bandwidth limit.

Then click Add to bind a backup repository to the tenant.

Specify the cloud repository name, the backup repository, the capacity of the cloud repository and the WAN Accelerator.

Once the cloud repository is configured, you can review the settings in the last screen.

Now the Veeam Cloud Connect infrastructure is ready. The enterprise can now connect to Veeam Cloud Connect in Azure.

Connect On-Premises to Veeam Cloud Connect

To connect to the Veeam Cloud Connect infrastructure from On-Premises, open your Veeam Backup & Replication console. Then in Backup infrastructure, navigate to Service Providers. Click Add Service Provider.

Type in the FQDN of your Traffic Manager profile and provide a description. Select the external port you chose during the Veeam Cloud Gateway configuration (I left mine at the default 6180).

In the next screen, enter the credentials to connect to your tenant.

If the credentials are correct, you should see the available cloud repositories.

Now you can create a backup copy job to Microsoft Azure.

Enter a job name and description and configure the copy interval.

Add virtual machine backups to copy to Microsoft Azure and click Next.

In the next screen you can set archival settings and how many restore points you want to keep. You can also configure some advanced settings.

If you have a WAN Accelerator on-premises, you can select it as the source WAN Accelerator.

Then you can configure scheduling options for the backup copy job.

When the backup copy job configuration is complete, the job starts and you should see backup copies being created in the Veeam Cloud Connect infrastructure.

Conclusion

This topic introduced a “large” Veeam Cloud Connect infrastructure within Azure. All components can be deployed in a single VM (or two) for small environments, or spread across several VMs as described in this post for large infrastructures. If you have several branch offices and want to send backup data to an offsite location, this can be the right solution instead of a tape library.

The post Deploy Veeam Cloud Connect for large environments in Microsoft Azure appeared first on Tech-Coffee.

Deploy a SMB storage solution for Hyper-V with StarWind VSAN free //www.tech-coffee.net/deploy-a-smb-storage-solution-for-hyper-v-with-starwind-vsan-free/ //www.tech-coffee.net/deploy-a-smb-storage-solution-for-hyper-v-with-starwind-vsan-free/#comments Wed, 14 Jun 2017 14:29:22 +0000 //www.tech-coffee.net/?p=5543 StarWind VSAN free provides a free Software-Defined Storage (SDS) solution for two nodes. With this solution, you are able to deliver a highly available storage based on Direct-Attached Storage devices. On top of StarWind VSAN free, you can deploy a Microsoft Failover Clustering with Scale-Out File Server (SOFS). So you can deploy a converged SDS ...

StarWind VSAN free provides a free Software-Defined Storage (SDS) solution for two nodes. With this solution, you are able to deliver highly available storage based on Direct-Attached Storage devices. On top of StarWind VSAN free, you can deploy a Microsoft failover cluster with Scale-Out File Server (SOFS). So you can deploy a converged SDS solution with Windows Server 2016 Standard Edition and StarWind VSAN free. It is an affordable solution for your Hyper-V VM storage.

In this topic, we'll see how to deploy StarWind VSAN free on two nodes running Windows Server 2016 Standard Core edition. Then we'll deploy a failover cluster with SOFS to deliver storage to Hyper-V nodes.

Architecture overview

This solution should be deployed on physical servers with physical disks (NVMe, SSD or HDD etc.). For the demonstration, I have used two virtual machines. Each virtual machine has:

  • 4 vCPU
  • 4GB of memory
  • 1x OS disk (60GB dynamic) – Windows Server 2016 Standard Core edition
  • 1x Data disk (127GB dynamic)
  • 3x vNIC (1x Management / iSCSI, 1x Heartbeat, 1x Synchronization)

Both nodes are deployed and joined to the domain.

Node preparation

On both nodes, I run the following cmdlets to install the features and prepare a volume for StarWind:

# Install FS-FileServer, Failover Clustering and MPIO
install-WindowsFeature FS-FileServer, Failover-Clustering, MPIO -IncludeManagementTools -Restart

# Set the iSCSI service startup to automatic
get-service MSiSCSI | Set-Service -StartupType Automatic

# Start the iSCSI service
Start-Service MSiSCSI

# Create a volume with disk
New-Volume -DiskNumber 1 -FriendlyName Data -FileSystem NTFS -DriveLetter E

# Enable automatic claiming of iSCSI devices
Enable-MSDSMAutomaticClaim -BusType iSCSI

StarWind installation

Because I have installed the nodes in Core edition, I install and configure the components from PowerShell and the command line. You can download StarWind VSAN free from this link. To install StarWind from the command line, you can use the following parameters:

Starwind-v8.exe /SILENT /COMPONENTS="comma separated list of component names" /LICENSEKEY="path to license file"

Current list of components:

  • Service: StarWind iSCSI SAN server.
  • service\haprocdriver: HA Processor Driver, used to support devices created with older versions of the software.
  • service\starflb: Loopback Accelerator, used with Windows Server 2012 and later to accelerate iSCSI operations when the client resides on the same machine as the server.
  • service\starportdriver: StarPort driver, required for the operation of Mirror devices.
  • Gui: Management Console.
  • StarWindXDll: StarWindX COM object.
  • StarWindXDll\powerShellEx: StarWindX PowerShell module.

To install StarWind, I have run the following command:

C:\temp\Starwind-v8.exe /SILENT /COMPONENTS="Service,service\starflb,service\starportdriver,StarWindXDll,StarWindXDll\powerShellEx" /LICENSEKEY="C:\temp\StarWind_Virtual_SAN_Free_License_Key.swk"

I run this command on both nodes. Once it has completed, StarWind is installed and ready to be configured.

StarWind configuration

StarWind VSAN free provides a 30-day trial of the management console. After the 30 days, you have to manage the solution from PowerShell, so I decided to configure the solution from PowerShell directly:

Import-Module StarWindX

try
{
    $server = New-SWServer -host 10.10.0.54 -port 3261 -user root -password starwind

    $server.Connect()

    $firstNode = new-Object Node

    $firstNode.ImagePath = "My computer\E"
    $firstNode.ImageName = "VMSTO01"
    $firstNode.Size = 65535
    $firstNode.CreateImage = $true
    $firstNode.TargetAlias = "vmsan01"
    $firstNode.AutoSynch = $true
    $firstNode.SyncInterface = "#p2=10.10.100.55:3260"
    $firstNode.HBInterface = "#p2=10.10.100.55:3260"
    $firstNode.CacheSize = 64
    $firstNode.CacheMode = "wb"
    $firstNode.PoolName = "pool1"
    $firstNode.SyncSessionCount = 1
    $firstNode.ALUAOptimized = $true
    
    #
    # device sector size. Possible values: 512 or 4096(May be incompatible with some clients!) bytes. 
    #
    $firstNode.SectorSize = 512
	
    #
    # 'SerialID' should be between 16 and 31 symbols. If it is not specified, the StarWind service will generate it.
    # Note: the second node always has the same serial ID. You do not need to specify it for the second node.
    #
    $firstNode.SerialID = "050176c0b535403ba3ce02102e33eab"
    
    $secondNode = new-Object Node

    $secondNode.HostName = "10.10.0.55"
    $secondNode.HostPort = "3261"
    $secondNode.Login = "root"
    $secondNode.Password = "starwind"
    $secondNode.ImagePath = "My computer\E"
    $secondNode.ImageName = "VMSTO01"
    $secondNode.Size = 65535
    $secondNode.CreateImage = $true
    $secondNode.TargetAlias = "vmsan02"
    $secondNode.AutoSynch = $true
    $secondNode.SyncInterface = "#p1=10.10.100.54:3260"
    $secondNode.HBInterface = "#p1=10.10.100.54:3260"
    $secondNode.ALUAOptimized = $true
        
    $device = Add-HADevice -server $server -firstNode $firstNode -secondNode $secondNode -initMethod "Clear"
    
    $syncState = $device.GetPropertyValue("ha_synch_status")

    while ($syncState -ne "1")
    {
        #
        # Refresh device info
        #
        $device.Refresh()

        $syncState = $device.GetPropertyValue("ha_synch_status")
        $syncPercent = $device.GetPropertyValue("ha_synch_percent")

        Start-Sleep -m 2000

        Write-Host "Synchronizing: $($syncPercent)%" -foreground yellow
    }
}
catch
{
    Write-Host "Exception $($_.Exception.Message)" -foreground red 
}

$server.Disconnect()

Once this script is run, two HA images are created and they are synchronized. Now we have to connect to this device through iSCSI.

iSCSI connection

To connect to the StarWind devices, I use iSCSI. I choose to set iSCSI from PowerShell to automate the deployment. In the first node, I run the following cmdlets:

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.55 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.54
Get-IscsiTarget | Connect-IscsiTarget -isMultipathEnabled $True

In the second node, I run the following cmdlets:

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.54 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.55
Get-IscsiTarget | Connect-IscsiTarget -isMultiPathEnabled $True

You can run the iscsicpl command on a Server Core installation to open the iSCSI GUI. You should see something like this:

PS: If you have a 1Gb/s network, set the load balancing policy to Failover Only and leave the 127.0.0.1 path active. If you have a 10Gb/s network, choose the Round Robin policy.
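If you want to avoid the GUI here as well, the global default MPIO policy for iSCSI devices can be set from PowerShell. Note that this only sets the default for newly claimed devices; it does not change the per-path active/standby selection described above:

# Check and set the default MSDSM load balancing policy (FOO = Fail Over Only, RR = Round Robin)
Get-MSDSMGlobalDefaultLoadBalancePolicy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR    # use FOO instead on a 1Gb/s network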

Configure Failover Clustering

Now that a shared volume is available on both nodes, you can create the cluster:

Test-Cluster -node VMSAN01, VMSAN02

Review the report and, if everything is OK, create the cluster:

New-Cluster -Node VMSAN01, VMSAN02 -Name Cluster-STO01 -StaticAddress 10.10.0.65 -NoStorage

Navigate to Active Directory (dsa.msc) and locate the OU where the Cluster Name Object is located. Edit the permissions on this OU to allow the Cluster Name Object to create computer objects:

Now we can create the Scale-Out File Server role:

Add-ClusterScaleOutFileServerRole -Name VMStorage01

Then we can initialize the StarWind disk and create a CSV on it. Finally, we create an SMB share:

# Initialize the disk
get-disk |? OperationalStatus -like Offline | Initialize-Disk

# Create a CSVFS NTFS partition
New-Volume -DiskNumber 3 -FriendlyName VMSto01 -FileSystem CSVFS_NTFS

# Rename the link in C:\ClusterStorage
Rename-Item C:\ClusterStorage\Volume1 VMSTO01

# Create a folder
new-item -Type Directory -Path C:\ClusterStorage\VMSto01 -Name VMs

# Create a share
New-SmbShare -Name 'VMs' -Path C:\ClusterStorage\VMSto01\VMs -FullAccess everyone

The cluster looks like this:

Now, from Hyper-V, I am able to store VMs on this cluster:
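For example, a test VM can be created directly on the SOFS share from a Hyper-V host; a minimal sketch (the VM name and virtual switch name are placeholders):

# Create a generation 2 VM whose configuration and VHDX live on the SOFS share
# (TestVM01 and the External switch are placeholder names)
New-VM -Name TestVM01 `
       -Generation 2 `
       -MemoryStartupBytes 2GB `
       -SwitchName External `
       -Path \\VMStorage01\VMs `
       -NewVHDPath \\VMStorage01\VMs\TestVM01\TestVM01.vhdx `
       -NewVHDSizeBytes 60GB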

Conclusion

StarWind VSAN free and Windows Server 2016 Standard Edition provide an affordable SDS solution. Thanks to this solution, you can deploy a 2-node storage cluster which provides SMB 3.1.1 shares. Hyper-V can then use these shares to host virtual machines.

The post Deploy a SMB storage solution for Hyper-V with StarWind VSAN free appeared first on Tech-Coffee.

RDS 2016 farm: RDS Final configuration //www.tech-coffee.net/rds-2016-farm-rds-final-configuration/ //www.tech-coffee.net/rds-2016-farm-rds-final-configuration/#comments Wed, 24 May 2017 11:32:49 +0000 //www.tech-coffee.net/?p=5497 This article is the final topic about how to deploy a Remote Desktop Service in Microsoft Azure with Windows Server 2016. In this topic, we will apply the RDS Final configuration, such as the certificates, the collection and some custom settings. Then we will try to open a remote application from the portal. Deploy a ...

This article is the final topic about how to deploy a Remote Desktop Service in Microsoft Azure with Windows Server 2016. In this topic, we will apply the final RDS configuration, such as the certificates, the collection and some custom settings. Then we will try to open a remote application from the portal.

Certificates

Before creating the collection, we can configure the certificates for RD Web Access, RD Gateway and the brokers. You can request a public certificate for this, or you can use your own PKI. If you use your own PKI, you have to distribute the certificate authority certificates to all clients and also provide a CRL/OCSP responder. If you use a public certificate, there is almost no client-side configuration. You can get more information about the required certificates here.

Once you have your certificate(s), you can open the properties of the RDS farm from Server Manager. Then navigate to Certificates. In this interface, you can assign the certificate(s) to each role.
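The same assignment can be scripted with the RemoteDesktop module; a sketch, assuming a single certificate covering all roles, exported as a PFX (the PFX path is a placeholder, the broker name is the one used in this lab):

# Assign the same PFX to every RDS role (C:\temp\rds-homecloud.pfx is a placeholder path)
$pfxPwd = Read-Host -AsSecureString -Prompt "PFX password"
foreach ($role in "RDGateway", "RDWebAccess", "RDRedirector", "RDPublishing")
{
    Set-RDCertificate -Role $role `
                      -ImportPath C:\temp\rds-homecloud.pfx `
                      -Password $pfxPwd `
                      -ConnectionBroker azrdb0.homecloud.net `
                      -Force
}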

On the client side, you should add a setting by GPO or with the local policy editor. Get the RD Connection Broker – Publishing certificate thumbprint and copy it. Then edit the setting (Specify SHA1 thumbprints of certificates representing trusted .rdp publishers) and add the certificate thumbprint without spaces. This setting removes a certificate warning pop-up for the clients.
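To retrieve that thumbprint without opening the deployment properties, you can query the broker from PowerShell; a small sketch using the RemoteDesktop module:

# Get the publishing certificate thumbprint to paste into the GPO setting (no spaces)
(Get-RDCertificate -Role RDPublishing -ConnectionBroker azrdb0.homecloud.net).Thumbprint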

Create and configure the collection

To create the collection, I use the following PowerShell cmdlet:

New-RDSessionCollection -CollectionName RemoteApps `
                        -SessionHost azrdh0.homecloud.net, azrdh1.homecloud.net `
                        -CollectionDescription "Remote application collection" `
                        -ConnectionBroker azrdb0.homecloud.net

Once you have created the collection, the RDS farm should show the new collection:

Now we can configure the User Profile Disks location:

Set-RDSessionCollectionConfiguration -CollectionName RemoteApps `
                                     -ConnectionBroker azrdb0.homecloud.net `
                                     -EnableUserProfileDisk `
                                     -MaxUserProfileDiskSizeGB 10 `
                                     -DiskPath \\SOFS\UPD$

If you edit the properties of the collection, you should have this User Profile Disk configuration:

In the \\sofs\upd$ folder, you can check that new VHDX files have been created, as below:

From the Server Manager, you can configure the collection properties as below:
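These collection properties can also be set from PowerShell instead of Server Manager; a sketch with example values (session limits are in minutes):

# Example values only: adjust the session limits to your own requirements
Set-RDSessionCollectionConfiguration -CollectionName RemoteApps `
                                     -ConnectionBroker azrdb0.homecloud.net `
                                     -IdleSessionLimitMin 120 `
                                     -DisconnectedSessionLimitMin 60 `
                                     -AutomaticReconnectionEnabled $true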

Add applications to the collection

The collection that we have created is used to publish applications. So, you can install each application you need on all RD Session Host servers. Once the applications are installed, you can publish them. Open the collection properties and click Add applications in the RemoteApp Programs section.

Then select the applications you want to publish. If the application you want to publish is not available in the list, you can click Add.

Then the wizard confirms the applications that will be published.
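Publishing can also be scripted; for example, a sketch that publishes the calculator used in the test below:

# Publish calc.exe in the RemoteApps collection
New-RDRemoteApp -CollectionName RemoteApps `
                -DisplayName Calculator `
                -FilePath "C:\Windows\System32\calc.exe" `
                -ConnectionBroker azrdb0.homecloud.net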

Test

Now that the applications are published, you can browse to the RD Web Access portal. In my configuration, I have added a DNS record which is bound to the Azure Load Balancer public IP. Specify your credentials and click Sign In.

Click on the application of your choice.

I have chosen the calculator. As you can see in the task manager, the calculator is running through a Remote Desktop Connection. Great, it is working.

Conclusion

This series of topics about Remote Desktop Services has shown you how to deploy the farm in Azure. We saw that Windows Server 2016 brings a lot of new features that ease the deployment in Azure. However, you can also deploy the RDS farm on-premises if you wish.

The post RDS 2016 farm: RDS Final configuration appeared first on Tech-Coffee.
