Storage – Tech-Coffee

S2D Real case: detect a lack of cache (4 December 2018)

Last week I worked for a customer who went through a performance issue on an S2D cluster. The customer's infrastructure is composed of one compute cluster (Hyper-V) and one 4-node S2D cluster. First, I checked whether the issue was related to the network, then whether a hardware failure was causing the performance drop. Then I ran the watch-cluster.ps1 script from VMFleet.

The following screenshot comes from the watch-cluster.ps1 script. As you can see, one CSV has almost 25ms of latency. High latency hurts overall performance, especially when IO-intensive applications are hosted. If we look at the cache, a lot of misses per second are recorded, especially on the high-latency CSV. But why do cache misses produce high latency?

What happens in case of a lack of cache?

The solution I troubleshooted is composed of 2 SSDs and 8 HDDs per node. The cache ratio is 1:4 and the cache capacity is roughly 6.5% of the raw capacity. The IO path in normal operation is depicted in the following schema:

Now, in the current situation, I have a lot of misses per second, which means that the SSDs cannot handle these IOs because there is not enough cache. The schema below depicts the IO path for missed IOs:

You can see that in case of a miss, the IO goes to the HDDs directly without being cached on the SSDs. HDDs are really slow compared to SSDs, and each time IO hits this kind of storage device directly, latency increases. When latency increases, overall performance decreases.
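
If you want to keep an eye on the cache behaviour outside of VMFleet, the same kind of counters can be sampled with Get-Counter. This is a minimal sketch and the counter paths are an assumption from my lab (the Cluster Storage Hybrid Disks set); verify the exact names on your nodes with Get-Counter -ListSet before relying on them.

# Counter paths are assumptions; check them with: Get-Counter -ListSet "*Storage Hybrid*"
$counters = @(
    "\Cluster Storage Hybrid Disks(*)\Cache Hit Reads/sec"
    "\Cluster Storage Hybrid Disks(*)\Cache Miss Reads/sec"
)

# Sample every 5 seconds for one minute to see whether misses keep climbing
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12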

How to resolve that?

To resolve this issue, I told the customer to add two SSDs in each node. These SSDs should be equivalent (or almost) to those already installed in the nodes. By adding SSDs, I improve the cache ratio to 1:2 and the cache capacity to 10% of the raw capacity.
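
To check the ratio on your own cluster, you can derive it from the physical disk inventory. This is a minimal sketch, assuming the cache devices report MediaType SSD, the capacity devices HDD, and the pool name starts with S2D; adjust the filters if you use NVMe cache devices.

# Inventory the pool disks by media type (assumes SSD = cache, HDD = capacity)
$disks    = Get-StoragePool -FriendlyName "S2D*" | Get-PhysicalDisk
$cache    = $disks | Where-Object MediaType -eq 'SSD'
$capacity = $disks | Where-Object MediaType -eq 'HDD'

# Cache-to-capacity device ratio and cache size as a percentage of raw capacity
"Cache ratio   : 1:{0}" -f [math]::Round($capacity.Count / $cache.Count)
"Cache percent : {0:P1}" -f (($cache | Measure-Object Size -Sum).Sum /
                             ($capacity | Measure-Object Size -Sum).Sum)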

It's really important to size the cache tier carefully when you design your solution, in order to avoid this kind of issue. As a fellow MVP says: storage is cheap, downtime is expensive.

Storage Spaces Direct: Parallel rebuild (22 June 2018)

Parallel rebuild is a Storage Spaces feature that repairs a storage pool even if the failed disk has not been replaced. This feature is not new to Storage Spaces Direct: it has existed since Windows Server 2012 with Storage Spaces. It is an automatic process which runs if you have enough free space in the storage pool. This is why Microsoft recommends leaving some free space in the storage pool to allow parallel rebuild. This amount of free space is often forgotten when designing a Storage Spaces Direct solution, which is why I wanted to write this theoretical topic.

How parallel rebuild works

Parallel rebuild needs some free space to work; think of it as spare free space. When you create a RAID 6 volume, a spare disk is often kept aside in case of failure. In Storage Spaces (Direct), instead of a spare disk, we reserve spare free space. Parallel rebuild occurs when a disk fails: if enough capacity is available, it runs automatically and immediately to restore the resiliency of the volumes. In practice, Storage Spaces Direct creates a new copy of the data that was hosted by the failed disk.

When you receive the new disk (4 hours later, because you bought a 4-hour support contract :p), you can replace the failed one. The disk is automatically added to the storage pool if the auto-pool option is enabled. Once the disk is added to the storage pool, an automatic rebalance process runs to spread data across all disks for the best efficiency.

How to calculate the amount of free space

Microsoft recommends leaving free space equal to one capacity drive per node, up to four drives:

  • 2-node configuration: leave free the capacity of 2 capacity devices
  • 3-node configuration: leave free the capacity of 3 capacity devices
  • 4-node and more configuration: leave free the capacity of 4 capacity devices

Let's think about a 4-node S2D cluster where each node has the following storage configuration. I plan to deploy 3-way mirroring:

  • 3x SSD of 800GB (Cache)
  • 6x HDD of 2TB (Capacity). Total for the 4-node cluster: 48TB of raw storage (12TB per node).

Because I deploy a 4-node configuration, I should leave free space equivalent to four capacity drives. So, in this example, 8TB should be reserved for parallel rebuild, which leaves 40TB available. Because I want to implement 3-way mirroring, I divide the available capacity by 3: the usable storage is 13.3TB.

Now suppose I add a node to this cluster. I don't need to reserve additional space for parallel rebuild (per the Microsoft recommendation, the reserve stays at four capacity drives). So the 12TB of new capacity (6x HDD of 2TB) goes entirely into the available capacity, for a total of 52TB.
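
Here is a minimal PowerShell sketch of that sizing rule. The function and parameter names are mine (not an official tool) and it only covers the simple case of identical capacity drives and mirror resiliency.

# Rough usable-capacity estimate for an S2D cluster (hypothetical helper, mirror resiliency only)
function Get-S2DUsableCapacity {
    param(
        [int]$Nodes,            # number of nodes
        [int]$DrivesPerNode,    # capacity drives per node
        [double]$DriveSizeTB,   # size of one capacity drive in TB
        [int]$MirrorCopies = 3  # 2 for two-way, 3 for three-way mirroring
    )
    $raw     = $Nodes * $DrivesPerNode * $DriveSizeTB
    $reserve = [math]::Min($Nodes, 4) * $DriveSizeTB   # one drive per node, capped at four drives
    [pscustomobject]@{
        RawTB       = $raw
        ReserveTB   = $reserve
        AvailableTB = $raw - $reserve
        UsableTB    = [math]::Round(($raw - $reserve) / $MirrorCopies, 1)
    }
}

# The 4-node example above: 48TB raw, 8TB reserve, 40TB available, 13.3TB usable
Get-S2DUsableCapacity -Nodes 4 -DrivesPerNode 6 -DriveSizeTB 2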

Conclusion

Parallel rebuild is an interesting feature because it restores resiliency even if the failed disk has not yet been replaced. But parallel rebuild has a cost in terms of storage usage. Don't forget the reserved capacity when you plan the capacity.

Storage Spaces Direct and deduplication in Windows Server 2019 (5 June 2018)

When Windows Server 2016 was released, data deduplication was not available for the ReFS file system. With Storage Spaces Direct, volumes should be formatted with ReFS to get the latest features (accelerated VHDX operations) and the best performance. So, for Storage Spaces Direct, data deduplication was not available. Data deduplication reduces storage usage by removing duplicated blocks and replacing them with metadata.

Since Windows Server 1709, data deduplication is supported on ReFS volumes. That means it will also be available in Windows Server 2019. I have updated my S2D lab to Windows Server 2019 to show you how easy it will be to enable deduplication on your S2D volumes.

Requirements

To implement data deduplication on an S2D volume in Windows Server 2019, you need the following:

  • An up-and-running S2D cluster running Windows Server 2019
  • (Optional) Windows Admin Center: it will help to implement deduplication
  • Install the deduplication feature on each node: Install-WindowsFeature FS-Data-Deduplication

Enable deduplication

Storage Spaces Direct in Windows Server 2019 will be fully manageable from Windows Admin Center (WAC). That means you can also enable deduplication and compression from WAC. Connect to your hyperconverged cluster from WAC and navigate to Volumes. Select the volume you want and enable Deduplication and compression.

WAC raises an information pop-up to explain what deduplication and compression are. Click on Start.

Select Hyper-V as deduplication mode and click on Enable deduplication.

Once it is activated, WAC should show you the percentage of space saved. Currently this is not working in WAC 1804.
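
If you prefer PowerShell, or while the WAC display is not working, deduplication can also be enabled from the command line. This is a minimal sketch; the CSV path is an example, so replace it with your own volume.

# Install the feature on every node (same cmdlet as listed in the requirements)
Invoke-Command -ComputerName (Get-ClusterNode).Name -ScriptBlock {
    Install-WindowsFeature FS-Data-Deduplication
}

# Enable deduplication on a CSV with the Hyper-V usage profile (path is an example)
Enable-DedupVolume -Volume "C:\ClusterStorage\Volume01" -UsageType HyperV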

Get information by using PowerShell

To get deduplication information on an S2D node, open a PowerShell prompt. You can get the list of data deduplication commands by running Get-Command *Dedup*.

If you run Get-DedupStatus, you get the following data deduplication summary. As you can see in the following screenshot, I have saved some space on my CSV volume.

By running Get-DedupVolume, you can get the savings rate of data deduplication. In my lab, data deduplication helps me save almost 50% of the storage space. Not bad.

Conclusion

Data deduplication on S2D was expected by many customers. With Windows Server 2019, the feature will be available. Currently, when you deploy 3-way mirroring for VMs, only 33% of the raw storage is usable. With data deduplication, we can expect closer to 50%. Thanks to this feature, the average cost of an S2D solution will be reduced.

Real Case: Implement Storage Replica between two S2D clusters (6 April 2018)

This week, as part of my job, I deployed Storage Replica between two S2D clusters. I'd like to share with you the steps I followed to implement storage replication between two S2D hyperconverged clusters. Storage Replica replicates volumes at the block level. For my customer, Storage Replica is part of a Disaster Recovery Plan in case the first room goes down.

Architecture overview

The customer has two rooms. In each room, a four-node S2D cluster has been deployed. Each node has a Mellanox ConnectX-3 Pro (dual 10Gb ports) and an Intel network adapter for VMs. Currently, the Mellanox network adapter is used for SMB traffic such as S2D and Live Migration. This network adapter supports RDMA, and Storage Replica can leverage SMB Direct (RDMA). So, the goal is to also use the Mellanox adapters for Storage Replica.

In each room, two Dell S4048S switches are deployed in VLT. The switches in both rooms are connected by two optical fiber links of around 5 km. The latency is less than 5ms, so we can implement synchronous replication. The Storage Replica traffic must use the fiber links. Currently the storage traffic is in a VLAN (ID: 247); we will use the same VLAN for Storage Replica.

Each S2D cluster has several Cluster Shared Volumes (CSVs). Among all these CSVs, two CSVs in each S2D cluster will be replicated. Below you can find the names of the volumes that will be replicated:

  • (S2D Cluster Room 1) PERF-AREP-01 -> (S2D Cluster Room 2) PERF-PREP-01
  • (S2D Cluster Room 1) PERF-AREP-02 -> (S2D Cluster Room 2) PERF-PREP-02
  • (S2D Cluster Room 2) PERF-AREP-03 -> (S2D Cluster Room 1) PERF-PREP-03
  • (S2D Cluster Room 2) PERF-AREP-04 -> (S2D Cluster Room 1) PERF-PREP-04

For this to work, each volume pair (source and destination) must be strictly identical (same capacity, same resiliency, same file system, etc.). I will create one log volume per replicated volume, so I'm going to deploy four log volumes per S2D cluster.

Create log volumes

First of all, I create the log volumes by using the following cmdlet. The log volumes must not be converted to Cluster Shared Volumes, and a drive letter must be assigned:

New-Volume -StoragePoolFriendlyName "<storage pool name>" `
           -FriendlyName "<volume name>" `
           -FileSystem ReFS `
           -DriveLetter "<drive letter>" `
           -Size <capacity>

As you can see in the following screenshots, I created four log volumes per cluster. These volumes are not CSVs.

In the following screenshot, you can see that for each volume, there is a log volume.

Grant Storage Replica Access

You must grant security access between both clusters to implement Storage Replica. To grant the access, run the following cmdlets:

Grant-SRAccess -ComputerName "<Node cluster 1>" -Cluster "<Cluster 2>"
Grant-SRAccess -ComputerName "<Node cluster 2>" -Cluster "<Cluster 1>"

Test Storage Replica Topology

/!\ I didn't manage to run the Storage Replica topology test successfully. It seems there is a known issue with this cmdlet.

N.B.: To run this test, you must move the CSV to the node which hosts the core cluster resources. In the example below, I moved the CSV to replicate onto HyperV-02.


To run the test, you have to run the following cmdlet:

Test-SRTopology -SourceComputerName "<Cluster room 1>" `
                -SourceVolumeName "c:\clusterstorage\PERF-AREP-01\" `
                -SourceLogVolumeName "R:" `
                -DestinationComputerName "<Cluster room 2>" `
                -DestinationVolumeName "c:\ClusterStorage\Perf-PREP-01\" `
                -DestinationLogVolumeName "R:" `
                -DurationInMinutes 10 `
                -ResultPath "C:\temp" 

As you can see in the screenshot below, the test is not successful because of a path issue. Even though the test didn't work, I was able to enable Storage Replica between the clusters. So, if you have the same issue, try to enable the replication anyway (check the next section).

Enable the replication between two volumes

To enable replication between the volumes, you can run the following cmdlets. With these cmdlets, I created the four replications.

New-SRPartnership -SourceComputerName "<Cluster room 1>" `
                  -SourceRGName REP01 `
                  -SourceVolumeName c:\ClusterStorage\PERF-AREP-01 `
                  -SourceLogVolumeName R: `
                  -DestinationComputerName "<Cluster Room 2>" `
                  -DestinationRGName REP01 `
                  -DestinationVolumeName c:\ClusterStorage\PERF-PREP-01 `
                  -DestinationLogVolumeName R:

New-SRPartnership -SourceComputerName "<Cluster room 1>" `
                  -SourceRGName REP02 `
                  -SourceVolumeName c:\ClusterStorage\PERF-AREP-02 `
                  -SourceLogVolumeName S: `
                  -DestinationComputerName "<Cluster Room 2>" `
                  -DestinationRGName REP02 `
                  -DestinationVolumeName c:\ClusterStorage\PERF-PREP-02 `
                  -DestinationLogVolumeName S:

New-SRPartnership -SourceComputerName "<Cluster Room 2>" `
                  -SourceRGName REP03 `
                  -SourceVolumeName c:\ClusterStorage\PERF-AREP-03 `
                  -SourceLogVolumeName T: `
                  -DestinationComputerName "<Cluster room 1>" `
                  -DestinationRGName REP03 `
                  -DestinationVolumeName c:\ClusterStorage\PERF-PREP-03 `
                  -DestinationLogVolumeName T:

New-SRPartnership -SourceComputerName "<Cluster Room 2>" `
                  -SourceRGName REP04 `
                  -SourceVolumeName c:\ClusterStorage\PERF-AREP-04 `
                  -SourceLogVolumeName U: `
                  -DestinationComputerName "<Cluster room 1>" `
                  -DestinationRGName REP04 `
                  -DestinationVolumeName c:\ClusterStorage\PERF-PREP-04 `
                  -DestinationLogVolumeName U: 

Now that replication is enabled, if you open Failover Cluster Manager, you can see that volumes are marked as source or destination. A new tab called Replication is added, where you can check the replication status. The destination volume is no longer accessible until you reverse the replication direction.

Once the initial synchronization is finished, the replication status is Continuously replicating.
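
You can also follow the replication from PowerShell with the Storage Replica cmdlets. This is a sketch; the property names come from my lab and may differ slightly between builds.

# List the partnerships defined on this cluster
Get-SRPartnership

# Replication status and remaining bytes for each replicated volume
(Get-SRGroup).Replicas |
    Select-Object DataVolume, ReplicationMode, ReplicationStatus, NumOfBytesRemaining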

Network adapters used by Storage Replica

In the overview section, I said that I wanted to use the Mellanox network adapters for Storage Replica (to benefit from RDMA). So I ran the following cmdlet to check that Storage Replica is using the Mellanox network adapters.
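
The exact command is not shown here, but because Storage Replica travels over SMB, one way to check is to look at the SMB multichannel connections; if needed, the replication can also be pinned to specific interfaces with a network constraint. This is a sketch with placeholder names and interface indexes.

# Show which interfaces carry the SMB (and therefore Storage Replica) traffic and whether they are RDMA capable
Get-SmbMultichannelConnection

# Optionally pin Storage Replica to the Mellanox interfaces (interface indexes are placeholders)
Set-SRNetworkConstraint -SourceComputerName "<Cluster room 1>" -SourceRGName REP01 -SourceNWInterface "<ifIndex 1>","<ifIndex 2>" `
                        -DestinationComputerName "<Cluster room 2>" -DestinationRGName REP01 -DestinationNWInterface "<ifIndex 1>","<ifIndex 2>"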

Reverse the Storage Replica way

To reverse the replication direction, you can use the following cmdlet:

Set-SRPartnership -NewSourceComputerName "<Cluster room 2>" `
                  -SourceRGName REP01 `
                  -DestinationComputerName "<Cluster room 1>" `
                  -DestinationRGName REP01   

Conclusion

Storage Replica replicates a volume at the block level to another volume. In this case, I have two S2D clusters where each cluster hosts two source volumes and two destination volumes. Storage Replica helps the customer implement a Disaster Recovery Plan.

Deploy a Software-Defined Storage solution with StarWind Virtual SAN (26 January 2018)

StarWind Virtual SAN is a Software-Defined Storage solution which replicates data across several nodes to ensure availability. The data is mirrored between two or more nodes. The hypervisor can be installed on the StarWind Virtual SAN nodes (hyperconverged) or kept separate from them (converged). StarWind Virtual SAN is easy to use and provides high performance. Moreover, StarWind provides proactive support. In this topic I'll show you how to deploy a 3-node StarWind VSAN to use with Hyper-V or ESXi.

Lab overview

To write this topic, I deployed three VMware VMs running Windows Server 2016. Each VM has the following configuration:

  • 2 vCPU
  • 8GB of memory
  • 1x VMXNET3 NIC in management network (for Active Directory, RDP, VM management)
  • 1x VMXNET3 NIC in cluster network (synchronization and heartbeat)
  • 1x VMXNET3 NIC in Storage network (iSCSI with hypervisor)
  • 1x 100GB Data disk

If you plan to deploy StarWind VSAN in production, you need physical servers with enough storage and enough network adapters.

StarWind Virtual SAN installation

First, download StarWind VSAN from the StarWind website. Once you have downloaded the installer, execute it on each StarWind VSAN node. Start by accepting the license agreement.

In the next screen, click on Next.

Specify a folder location where StarWind Virtual SAN will be installed.

Select StarWind Virtual SAN Server in the drop down menu.

Specify the start menu folder and click on Next.

If you want a desktop icon, enable the checkbox.

If you already have a license key, select Thank you, I do have a key already and click on Next.

Specify the location of the license file and click on Next.

Review the license information and click on Next.

If the iSCSI service is disabled and not started, you'll get this pop-up. Click on OK to enable and start the Microsoft iSCSI Initiator service.

Once you have installed StarWind Virtual SAN on each node, you can start the next step.

Create an iSCSI target and a storage device

Open StarWind Management Console and click on Add Server.

Then add each node and click on OK. In the screenshot below, I clicked on Scan StarWind Servers to discover the nodes automatically.

When you connect to each node, you get this warning. Choose the default location of the storage pool (where the storage devices will be stored).

Right click on the first node and select Add Target.

Specify a target alias and be sure to allow multiple concurrent iSCSI connections.

Once the target has been created, you get the following screen:

Now, right click on the target and select Add new Device to Target.

Select Hard Disk Device and click on Next.

Choose the option which applies to your configuration. In my case, I chose Virtual disk.

Specify a name and a size for the virtual disk.

Choose thick-provisioned or Log-Structured File System (LSFS). LSFS is designed for virtual machines because this file system eliminates the IO blender effect. With LSFS you can also enable deduplication. Also choose the right block cache size.

In the next screen, you can choose where the metadata is held and how many worker threads you want.

Choose the device RAM cache parameters.

You can also specify a flash cache capacity if you have installed SSD in your nodes.

Then click on Create to create the storage device.

Once the storage device is created, you get the following screen:

At this point, you have a virtual disk on the first node which can store your data. But this storage device has no resiliency yet. In the next steps, we will replicate this storage device to the two other nodes.

Replicate the storage device to the other StarWind VSAN nodes

Right click on the storage device and select Replication Manager.

In the replication manager, select Add Replica.

Select Synchronous Two-Way Replication to replicate data across StarWind Virtual SAN nodes.

Specify the hostname and the port of the partner and click on Next.

Then select the failover strategy: Heartbeat or Node Majority. In my case, I chose Node Majority. This mode requires that a majority of the nodes are online: in a three-node configuration, you can tolerate the loss of only one node.

Then choose to create a new partner device.

Specify the target name and the location of the storage device in partner node.

Select the network for synchronization. In my case, I select the cluster network.

Then select to synchronize from existing device.

To start the creation of the replication, click on Create Replica.

Repeat the previous steps for the third node. At the end, the configuration should be similar to the following screenshot:

In StarWind Management Console, if you click on a target, you can see each iSCSI session: each node has two iSCSI sessions because there are three nodes.

iSCSI connection

Now that StarWind Virtual SAN is ready, you can connect your favorite hypervisor by using iSCSI. Don't forget to configure MPIO to support multipath. For ESXi you can read this topic; for a Hyper-V host, see the sketch below.
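
For a Windows/Hyper-V host, the iSCSI connection and the MPIO claim can be scripted. This is a minimal sketch with placeholder IP addresses for the three StarWind nodes; it assumes the MPIO feature is already installed on the host.

# Start the iSCSI initiator service and let MPIO claim iSCSI devices
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Register each StarWind node as a target portal (IP addresses are placeholders)
"10.10.20.51", "10.10.20.52", "10.10.20.53" | ForEach-Object {
    New-IscsiTargetPortal -TargetPortalAddress $_ -TargetPortalPortNumber 3260
}

# Connect to every discovered target with multipath enabled
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true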

Don't worry, Storage Spaces Direct is not dead! (20 October 2017)

Usually, I don't write about news but only about technical details. But with the release of Windows Server 2016 version 1709, a lot of misinformation has been written and I'd like to offer another way to look at it.

First of all, it is important to understand what happened to Windows Server 2016. Microsoft has changed how Windows Server is delivered to customers. There are two kinds of deployment channels:

  • LTSC (Long-Term Servicing Channel): this is Windows Server with 5 years of mainstream support and 5 years of extended support. You'll get security and quality updates but no new features. Both Windows Server Core and the edition with a GUI are available in this channel. Microsoft expects to release a new LTSC every 2 or 3 years.
  • SAC (Semi-Annual Channel): with this channel, Microsoft releases a new version every 6 months. Each release is supported for 18 months from its initial release and should bring new features. Only Windows Server Core is available in this channel.

So this month's release, called 1709 (2017 = 17, September = 09: 1709), is part of the SAC. In 6 months, a new Semi-Annual Channel release called 1803 should follow.

But where is Storage Spaces Direct?

Storage Spaces Direct is the main building block for running Software-Defined Storage with the Microsoft solution. This feature was released with Windows Server 2016, now also identified by its version number, 1607, and part of the LTSC. Storage Spaces Direct (S2D for friends) works great with this release, and I have deployed plenty of S2D clusters which are currently running in production without issues (yes, I had some issues, but they were resolved quickly).

This month, Microsoft released Windows Server 1709, which is a SAC release. It mainly contains container improvements and, the reason for this topic, no support for S2D. This is a SAC release, not a service pack; you can no longer compare a SAC release with a service pack. SAC is a continuous upgrade of the system, while a service pack is mainly an aggregate of updates. Are you running S2D? Don't install the 1709 release and wait 6 months… you'll see 🙂

Why remove the support for S2D?

Storage is a complicated component. We need stable and reliable storage because today a company's data lives in that storage. If the storage is gone, the company can close down.

I can tell you that Microsoft works hard on S2D to bring you the best Software-Defined Storage solution. But the level of validation required for production was not reached in time to ship S2D with Windows Server 1709. What do you prefer: a buggy S2D release, or waiting 6 months for a high-quality product? For my part, I prefer to wait 6 months for a better product.

Why Storage Spaces Direct is not dead?

Over the last two days, I have read some posts saying that Microsoft is pushing Azure / Azure Stack and doesn't care about on-premises solutions. Yes, it is true that today Microsoft talks mostly about Azure / Azure Stack, and I think that is a shame. But the Azure Stack solution is based on Storage Spaces Direct. Microsoft needs to keep improving this feature to deploy more and more Azure Stack.

Secondly, Microsoft has presented a new GUI tool called Honolulu, including a module to manage hyperconverged solutions. You may have seen the presentation at Ignite. Why would Microsoft develop a product for a technology it wants to give up?

Finally, I sometimes work with the product group in charge of S2D. I can tell you they work hard to make the product even better. I have the good fortune of being able to see and try the next new features of S2D.

Conclusion

If you are running S2D on Windows Server 2016: keep the LTSC release (1607) and wait for the next release in SAC or LTSC. If you want to run S2D but are worried about this announcement: be sure that Microsoft will not leave S2D behind. You can deploy S2D with Windows Server 2016 (1607), or maybe wait for Windows Server 1803 (next March). Be sure of one thing: Storage Spaces Direct is not dead!

Get Storage Spaces Direct insights from StarWind Manager (27 September 2017)

Earlier in the week, I published a blog post about the Honolulu project and how, in the future, this tool can ease Windows Server management. Today I introduce another management tool for Storage Spaces Direct (hyperconverged or disaggregated). This tool is called StarWind Manager and it is developed by … StarWind.

StarWind Manager is currently a preview version and, for the moment, it is free. You can download it from this link. This tool provides real-time metrics such as bandwidth, IOPS, CPU usage and so on. You can also get insights about Storage Spaces Direct, such as the physical disks, the Cluster Shared Volumes, the running jobs, etc. In this topic, we'll see how to deploy StarWind Manager and what kind of information you can retrieve.

StarWind Manager roles

StarWind Manager comes with two roles: StarWind Manager Core and StarWind Manager Agent. The agent must be deployed on the Storage Spaces Direct (S2D) cluster nodes, while the core can be deployed in a VM. The core role provides a web interface that presents information about your cluster, collected from the agents. Currently, StarWind Manager only lets you add individual nodes; you can't add an entire cluster with a single click.

Deploy StarWind Manager Core role

After you have downloaded StarWind Manager, copy the executable to your VM. I created a VM with 2 vCPU and 4GB of dynamic memory for this. Then run the executable to start the setup wizard. The installation process is quick because very little information is asked for.

Select to install StarWind Manager Core and do not install StarWind Manager agent.

That’s all. StarWind Manager Core is installed after the wizard and it is ready to use.

Deploy StarWind Manager Agent role

To install StarWind Manager Agent on your S2D cluster nodes, copy the installer to the servers and run the wizard. It works on Windows Server 2016 Core: I have deployed the agents on the Core edition in my lab. In the wizard, select StarWind Manager Agent and do not install the StarWind Manager Core.

Repeat the agent installation for each S2D cluster node you have.

Connect to StarWind Manager

To connect to StarWind Manager, open a browser and type https://<VM hostname>:8100/client. Default credentials are root / Starwind.

For the moment, StarWind Manager only provides the ability to add S2D cluster nodes. To add nodes, click on … Add New Node.

After you've added your nodes, you can see information about them on the dashboard pane. You get the status, the IP, the name, the uptime, and information about software and hardware.

On the Performance tab, you can see real-time metrics about your node, such as CPU utilization, memory usage, IOPS and bandwidth.

On the Storage Spaces Direct tab, you get information about S2D. This pane provides a cluster overview: the nodes in the cluster, the storage capacity and space allocation, and the health.

In the same tab, information about storage pools and virtual volumes is provided.

You can also get information about physical disks and running jobs.

Conclusion

I'm more than happy that a lot of GUI tools are under development for Storage Spaces Direct. The main disadvantage of the Microsoft solution compared to VMware vSAN or Nutanix is the user experience. But currently Microsoft is working on Honolulu and StarWind is working on this product. Even though both products are under development, they already provide clear information about S2D. Now I hope both products will, in the near future, provide easy access to complex day-to-day S2D administration tasks, such as physical disk replacement (place the disk in retired mode, enable the LED on the front of the disk, change the disk, then disable the LED). From my point of view, this kind of product can greatly help the adoption of Storage Spaces Direct, whether in the hyperconverged or the disaggregated model.

Deploy a SMB storage solution for Hyper-V with StarWind VSAN free (14 June 2017)

StarWind VSAN free provides a free Software-Defined Storage (SDS) solution for two nodes. With this solution, you are able to deliver highly available storage based on Direct-Attached Storage devices. On top of StarWind VSAN free, you can deploy Microsoft Failover Clustering with Scale-Out File Server (SOFS). So you can deploy a converged SDS solution with Windows Server 2016 Standard Edition and StarWind VSAN free. It is an affordable solution for your Hyper-V VM storage.

In this topic, we'll see how to deploy StarWind VSAN free on two nodes running Windows Server 2016 Standard Core edition. Then we'll deploy Failover Clustering with SOFS to deliver storage to Hyper-V nodes.

Architecture overview

This solution should be deployed on physical servers with physical disks (NVMe, SSD or HDD, etc.). For the demonstration, I used two virtual machines. Each virtual machine has:

  • 4 vCPU
  • 4GB of memory
  • 1x OS disk (60GB dynamic) – Windows Server 2016 Standard Core edition
  • 1x Data disk (127GB dynamic)
  • 3x vNIC (1x Management / iSCSI, 1x Heartbeat, 1x Synchronization)

Both nodes are deployed and joined to the domain.

Node preparation

On both nodes, I run the following cmdlets to install the features and prepare a volume for StarWind:

# Install FS-FileServer, Failover Clustering and MPIO
install-WindowsFeature FS-FileServer, Failover-Clustering, MPIO -IncludeManagementTools -Restart

# Set the iSCSI service startup to automatic
get-service MSiSCSI | Set-Service -StartupType Automatic

# Start the iSCSI service
Start-Service MSiSCSI

# Create a volume with disk
New-Volume -DiskNumber 1 -FriendlyName Data -FileSystem NTFS -DriveLetter E

# Enable automatic claiming of iSCSI devices
Enable-MSDSMAutomaticClaim -BusType iSCSI

StarWind installation

Because I installed the nodes in Core edition, I install and configure the components from PowerShell and the command line. You can download StarWind VSAN free from this link. To install StarWind from the command line, you can use the following parameters:

Starwind-v8.exe /SILENT /COMPONENTS="comma separated list of component names" /LICENSEKEY="path to license file"

Current list of components:

  • Service: StarWind iSCSI SAN server.
  • service\haprocdriver: HA Processor Driver, used to support devices created with older versions of the software.
  • service\starflb: Loopback Accelerator, used with Windows Server 2012 and later to accelerate iSCSI operations when the client resides on the same machine as the server.
  • service\starportdriver: StarPort driver, required for the operation of mirror devices.
  • Gui: Management Console.
  • StarWindXDll: StarWindX COM object.
  • StarWindXDll\powerShellEx: StarWindX PowerShell module.

To install StarWind, I ran the following command:

C:\temp\Starwind-v8.exe /SILENT /COMPONENTS="Service,service\starflb,service\starportdriver,StarWindXDll,StarWindXDll\powerShellEx" /LICENSEKEY="c:\temp\StarWind_Virtual_SAN_Free_License_Key.swk"

I ran this command on both nodes. Once the command has completed, StarWind is installed and ready to be configured.

StarWind configuration

StarWind VSAN free provides a 30-day trial of the management console. After the 30 days, you have to manage the solution from PowerShell. So I decided to configure the solution from PowerShell:

Import-Module StarWindX

try
{
    $server = New-SWServer -host 10.10.0.54 -port 3261 -user root -password starwind

    $server.Connect()

    $firstNode = new-Object Node

    $firstNode.ImagePath = "My computer\E"
    $firstNode.ImageName = "VMSTO01"
    $firstNode.Size = 65535
    $firstNode.CreateImage = $true
    $firstNode.TargetAlias = "vmsan01"
    $firstNode.AutoSynch = $true
    $firstNode.SyncInterface = "#p2=10.10.100.55:3260"
    $firstNode.HBInterface = "#p2=10.10.100.55:3260"
    $firstNode.CacheSize = 64
    $firstNode.CacheMode = "wb"
    $firstNode.PoolName = "pool1"
    $firstNode.SyncSessionCount = 1
    $firstNode.ALUAOptimized = $true
    
    #
    # device sector size. Possible values: 512 or 4096(May be incompatible with some clients!) bytes. 
    #
    $firstNode.SectorSize = 512
	
	#
	# 'SerialID' should be between 16 and 31 symbols. If it not specified StarWind Service will generate it. 
	# Note: Second node always has the same serial ID. You do not need to specify it for second node
	#
	$firstNode.SerialID = "050176c0b535403ba3ce02102e33eab" 
    
    $secondNode = new-Object Node

    $secondNode.HostName = "10.10.0.55"
    $secondNode.HostPort = "3261"
    $secondNode.Login = "root"
    $secondNode.Password = "starwind"
    $secondNode.ImagePath = "My computer\E"
    $secondNode.ImageName = "VMSTO01"
    $secondNode.Size = 65535
    $secondNode.CreateImage = $true
    $secondNode.TargetAlias = "vmsan02"
    $secondNode.AutoSynch = $true
    $secondNode.SyncInterface = "#p1=10.10.100.54:3260"
    $secondNode.HBInterface = "#p1=10.10.100.54:3260"
    $secondNode.ALUAOptimized = $true
        
    $device = Add-HADevice -server $server -firstNode $firstNode -secondNode $secondNode -initMethod "Clear"
    
    $syncState = $device.GetPropertyValue("ha_synch_status")

    while ($syncState -ne "1")
    {
        #
        # Refresh device info
        #
        $device.Refresh()

        $syncState = $device.GetPropertyValue("ha_synch_status")
        $syncPercent = $device.GetPropertyValue("ha_synch_percent")

        Start-Sleep -m 2000

        Write-Host "Synchronizing: $($syncPercent)%" -foreground yellow
    }
}
catch
{
    Write-Host "Exception $($_.Exception.Message)" -foreground red 
}

$server.Disconnect()

Once this script has run, two HA images are created and synchronized. Now we have to connect to this device through iSCSI.

iSCSI connection

To connect to the StarWind devices, I use iSCSI. I chose to configure iSCSI from PowerShell to automate the deployment. On the first node, I run the following cmdlets:

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.55 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.54
Get-IscsiTarget | Connect-IscsiTarget -isMultipathEnabled $True

On the second node, I run the following cmdlets:

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.54 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.55
Get-IscsiTarget | Connect-IscsiTarget -isMultiPathEnabled $True

You can run the command iscsicpl on a Server Core installation to show the iSCSI GUI. You should have something like this:

PS: If you have a 1Gb/s network, set the load balance policy to Fail Over Only and leave the 127.0.0.1 path active. If you have a 10Gb/s network, choose the Round Robin policy.
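
If you prefer to set the MPIO policy from PowerShell rather than from iscsicpl, a possible sketch is below. FOO (Fail Over Only) and RR (Round Robin) are the values used by the MPIO module; check them against your version.

# Set the default MSDSM load balance policy for iSCSI LUNs
# FOO = Fail Over Only (1Gb/s networks), RR = Round Robin (10Gb/s networks)
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO

# Check the current default policy
Get-MSDSMGlobalDefaultLoadBalancePolicy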

Configure Failover Clustering

Now that a shared volume is available on both nodes, you can validate the cluster configuration:

Test-Cluster -node VMSAN01, VMSAN02

Review the report and, if everything is OK, create the cluster:

New-Cluster -Node VMSAN01, VMSAN02 -Name Cluster-STO01 -StaticAddress 10.10.0.65 -NoStorage

Navigate to Active Directory (dsa.msc) and locate the OU where the Cluster Name Object is located. Edit the permissions on this OU to allow the Cluster Name Object to create computer objects:

Now we can create the Scale-Out File Server role:

Add-ClusterScaleOutFileServerRole -Name VMStorage01

Then we initialize the StarWind disk so that it can be converted to a CSV, and we create an SMB share:

# Initialize the disk
get-disk |? OperationalStatus -like Offline | Initialize-Disk

# Create a CSVFS NTFS partition
New-Volume -DiskNumber 3 -FriendlyName VMSto01 -FileSystem CSVFS_NTFS

# Rename the link in C:\ClusterStorage
Rename-Item C:\ClusterStorage\Volume1 VMSTO01

# Create a folder
new-item -Type Directory -Path C:\ClusterStorage\VMSto01 -Name VMs

# Create a share
New-SmbShare -Name 'VMs' -Path C:\ClusterStorage\VMSto01\VMs -FullAccess everyone

The cluster looks like this:

Now, from Hyper-V, I am able to store VMs in this cluster, like this:
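
As an illustration, creating a VM directly on the new SOFS share could look like the sketch below; the VM name and the share path follow my lab naming and are only examples.

# Create a VM whose configuration and VHDX live on the SOFS share (names and paths are examples)
New-VM -Name "VM01" `
       -MemoryStartupBytes 2GB `
       -Generation 2 `
       -Path "\\VMStorage01\VMs" `
       -NewVHDPath "\\VMStorage01\VMs\VM01\VM01.vhdx" `
       -NewVHDSizeBytes 60GB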

Conclusion

StarWind VSAN free and Windows Server 2016 Standard Edition provide an affordable SDS solution. Thanks to this solution, you can deploy a 2-node storage cluster which provides SMB 3.1.1 shares, and Hyper-V can use these shares to host virtual machines.

Patch management of Storage Spaces Direct cluster (2 March 2017)

A Storage Spaces Direct cluster is based on Windows Server 2016. Whether you use Windows Server with Desktop Experience, Windows Server Core or Nano Server, you have to patch your operating system. Patching is important for security, for stability and to improve features. Storage Spaces Direct (S2D) is a new feature, and it is really important to patch the operating system to resolve some issues. But in the disaggregated or hyperconverged model, S2D hosts sensitive data such as virtual machines. So, to avoid service interruption, the patching of all nodes must be orchestrated. Microsoft provides a solution to update the nodes of a failover cluster with orchestration: it is called Cluster-Aware Updating. This topic describes how to use Cluster-Aware Updating to handle the patch management of Storage Spaces Direct cluster nodes.

Prepare Active Directory

Because we will configure self-updating in Cluster-Aware Updating (CAU), a computer object will be added in Active Directory in the same organizational unit as the Cluster Name Object (CNO). It is the CNO account that adds the computer object for CAU, so the CNO must have permission on the OU to create computer objects. In my example, the cluster is called Cluster-Hyv01, so on the OU where this CNO is located, I granted the Cluster-Hyv01 account the right to create computer objects.

N.B.: you can prestage the computer object for CAU and skip this step.

Configure Self-Updating

To configure CAU, open Failover Cluster Manager and right-click on the cluster. Then choose More Actions | Cluster-Aware Updating.

In the CAU GUI, click on Configure cluster self-updating options.

In the first window of the wizard, just click on Next.

Next, select the option Add the CAU clustered role, with self-updating mode enabled, to this cluster.

In the next screen, you can specify the frequency of the self-updating runs. CAU will check for updates according to this schedule.

Next you can change options for the updating run. You can specify pre- and post-update scripts, or require that all nodes be online to run the updating process.

In the next screen, you can choose to receive recommended updates in the same way as important updates.

Then review the configuration you have specified and click on Apply.

If the CAU clustered role is added successfully, you should have something as below:

In Active Directory, you should see a new computer object whose name begins with CAU:

Validate the CAU configuration

You can verify that CAU is correctly configured by clicking on Analyse cluster updating readiness.

You should get the result Passed for each test.
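
The same configuration and readiness check can also be done from PowerShell with the ClusterAwareUpdating module. This is a sketch; the schedule below is only an example, not the one used in my deployment.

# Add the CAU clustered role with self-updating enabled (example schedule: third Sunday of each month)
Add-CauClusterRole -ClusterName "Cluster-Hyv01" `
                   -DaysOfWeek Sunday `
                   -WeeksOfMonth 3 `
                   -EnableFirewallRules `
                   -Force

# PowerShell equivalent of the Analyse cluster updating readiness check
Test-CauSetup -ClusterName "Cluster-Hyv01"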

Run manual updates to the cluster

Self-updating lets you schedule the cluster updates, but you can also apply updates manually. In the CAU interface, click on Apply updates to this cluster.

In the next screen, just click on next.

Next, specify options for the updating process. As previously, you can specify pre- and post-update scripts and other settings, such as the node update order or a requirement that all nodes be online to run the updates.

Next, choose whether you want to apply recommended updates.

To finish, review the settings. If the configuration is good, click on Update.

While updating, you get information in the Log of Updates in Progress. You can see which node is currently updating, which node is in maintenance mode and which updates are applied.

When the updates are finished, you should have a Succeeded status for each node.
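
A manual updating run can also be started and monitored from PowerShell. This is a sketch with conservative failure settings, typically launched from a remote update coordinator machine.

# Trigger a CAU updating run, stopping if a single node fails to update
Invoke-CauRun -ClusterName "Cluster-Hyv01" `
              -MaxFailedNodes 0 `
              -MaxRetriesPerNode 3 `
              -RequireAllNodesOnline `
              -EnableFirewallRules `
              -Force

# Watch the current run from another console
Get-CauRun -ClusterName "Cluster-Hyv01"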

Real case: Storage Spaces Direct physical disk replacement (11 February 2017)

This week I was in Stockholm to build a Storage Spaces Direct cluster (hyperconverged model). While implementing the cluster, I saw that a physical disk was failing. I've written this topic to show you how I replaced this disk.

Identify the failed physical disk

I was deploying VMFleet when I saw that both virtual disks were in a degraded state. So, I checked the jobs by running Get-StorageSubSystem *Cluster* | Get-StorageJob. Then I opened the storage pool and saw the following:

So, it seemed that this physical disk was not healthy, and I decided to change it. First, I ran the following cmdlet, because my trust in Failover Cluster Manager is limited:

Get-StoragePool *S2D* | Get-PhysicalDisk

Then I put the physical disk object into a PowerShell variable (called $Disk) to manipulate the disk. You can change the OperationalStatus filter to something else, as long as it returns the right disk.

$Disk = Get-PhysicalDisk |? OperationalStatus -Notlike ok

Retire and physically identify storage device

Next, I set the usage of this disk to Retired to stop writes to it and avoid data loss.

Set-PhysicalDisk -InputObject $Disk -Usage Retired

Next, I tried to remove the physical disk from the storage pool. It seems the physical disk was in a really bad state: I couldn't remove it from the pool. So, I decided to change it anyway.
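
The removal command itself isn't shown in this post; it would typically be something like the sketch below, where the pool name is a placeholder.

# Try to remove the retired disk from the pool (pool name is a placeholder)
Remove-PhysicalDisk -PhysicalDisks $Disk -StoragePoolFriendlyName "<storage pool name>"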

I ran the following cmdlet to turn on the storage device LED, so I could identify it easily in the datacenter:

Get-PhysicalDisk |? OperationalStatus -Notlike OK | Enable-PhysicalDiskIdentification

Next, I went to the server room and, as you can see in the photo below, the LED is turned on. So, I changed this disk.

Once the disk is replaced, you can turn off the LED:

Get-PhysicalDisk |? OperationalStatus -like OK | Disable-PhysicalDiskIdentification

Add physical disk to storage pool

Until the server is rebooted, the new physical disk cannot report its enclosure name. The disk automatically joined the storage pool, but without the enclosure information. So, you have to reboot the server to get the right information.

Storage Spaces Direct automatically spreads the data across the new disk. This process took almost 30 minutes.

Sometimes the physical disk doesn't join the storage pool automatically. In that case, you can run the following cmdlet to add the physical disk to the storage pool.
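
The exact cmdlet isn't reproduced here; a sketch that adds any poolable disk to the pool would look like this, with the pool name as a placeholder.

# Add any disk that is eligible for pooling to the storage pool (pool name is a placeholder)
$newDisk = Get-PhysicalDisk -CanPool $true
Add-PhysicalDisk -PhysicalDisks $newDisk -StoragePoolFriendlyName "<storage pool name>"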

Conclusion

With storage solutions, you can be sure that a physical disk, whether SSD or HDD, will fail some day. With Storage Spaces Direct, Microsoft provides all the required tools to replace failed disks properly and easily: set the physical disk as retired, remove it from the storage pool (if you can), and then replace the disk.
