Storage – Tech-Coffee

S2D Real case: detect a lack of cache

Last week I worked with a customer who was facing a performance issue on an S2D cluster. The customer's infrastructure consists of one compute cluster (Hyper-V) and one 4-node S2D cluster. First, I checked whether the issue was related to the network, and then whether a hardware failure was producing this performance drop. Then I ran the watch-cluster.ps1 script from VMFleet.

The following screenshot comes from the watch-cluster.ps1 script. As you can see, one CSV shows almost 25ms of latency. High latency hurts overall performance, especially when IO-intensive applications are hosted. If we look at the cache, a lot of misses per second are recorded, especially on the high-latency CSV. But why do misses per second produce high latency?
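
If you don't have VMFleet at hand, you can get a similar view from the performance counters exposed on each S2D node. This is a minimal sketch, assuming the counter sets shipped with Windows Server 2016 (names can vary slightly between builds):

# CSV latency as seen by this node
Get-Counter -Counter "\Cluster CSVFS(*)\Avg. sec/Read", "\Cluster CSVFS(*)\Avg. sec/Write" |
    Select-Object -ExpandProperty CounterSamples |
    Sort-Object CookedValue -Descending |
    Select-Object InstanceName, CookedValue -First 10

# Cache behavior of the capacity drives bound to this node's cache
Get-Counter -Counter "\Cluster Storage Hybrid Disks(*)\Cache Miss Reads/sec" |
    Select-Object -ExpandProperty CounterSamples |
    Sort-Object CookedValue -Descending |
    Select-Object InstanceName, CookedValue -First 10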

What happens when there is a lack of cache?

The solution I troubleshooted is composed of 2 SSDs and 8 HDDs per node. The cache ratio is 1:4 and the cache capacity is roughly 6.5% of the raw capacity. The IO path in normal operation is depicted in the following schema:

Now, in the current situation, there are a lot of misses per second, which means the SSDs cannot absorb these IOs because there is not enough cache. The schema below depicts the IO path for missed IOs:

You can see that in case of a miss, the IO goes directly to the HDDs without being cached in the SSDs. HDDs are really slow compared to SSDs, and each time IOs hit this kind of storage device directly, latency increases. When latency increases, overall performance decreases.

How to resolve that?

To resolve this issue, I told the customer to add two SSDs to each node. These SSDs should be equivalent (or nearly so) to those already installed in the nodes. By adding SSDs, I improve the cache ratio to 1:2 and the cache capacity to 10% of the raw capacity.
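
Before and after adding the drives, you can check the ratio and the cache percentage from any node of the cluster. A small sketch, assuming the cache devices are the SSDs and the capacity devices are the HDDs:

# Count and total size of the pool drives per media type
Get-PhysicalDisk | Group-Object MediaType |
    Select-Object Name, Count, @{n="TotalTB";e={[math]::Round(($_.Group | Measure-Object Size -Sum).Sum / 1TB, 2)}}

# Cache capacity as a percentage of the HDD raw capacity
$ssd = (Get-PhysicalDisk | Where-Object MediaType -eq "SSD" | Measure-Object Size -Sum).Sum
$hdd = (Get-PhysicalDisk | Where-Object MediaType -eq "HDD" | Measure-Object Size -Sum).Sum
[math]::Round($ssd / $hdd * 100, 1)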

It's really important to size the cache tier properly when you design your solution to avoid this issue. As a fellow MVP said: storage is cheap, downtime is expensive.

Prepare a high-speed storage repository for backup with Qnap

Most of my customers try to back up their infrastructure at night to avoid impacting user workloads. Sometimes there is so much data that one night is not enough to back up the whole infrastructure. Usually customers have a decent production infrastructure in terms of speed, and the bottleneck is the backup repository. For this kind of storage, we favor capacity, so usually only HDDs are installed. But the performance is poor. To significantly increase performance, SSD/NVMe drives can be added to the storage repository. Thanks to SSDs we can implement caching and tiering. By increasing the storage repository performance, the backup window is reduced. However, to take advantage of SSDs, a 10Gb/s network is required on the storage repository side.

In its product list, Qnap has a lot of enterprise-grade NAS units that offer redundant power supplies, 10Gb/s network adapters, tiering (Qtier), caching and two NVMe slots (M.2 2280). These NAS units can hold enough drives to fit your needs. To write this topic, I used a TS-873U. With the rails and an additional 4GB of memory, I paid 1650€ for this NAS. If you need more drives, you can choose the TS-1673U-RP with 64GB of memory for 3500€. For a company, I think they are cheap.

I built the following configuration (not fully optimized because I didn't want to buy new SSD / NVMe / HDD drives):

  • Cache acceleration (Read / Write): 2x Crucial MX500 500GB => RAID 1
  • Storage Pool (Qtier):
    • High Speed tier: 2x SSD Intel S3610 500GB => RAID 1
    • Capacity tier: 4x HDD Western Digital 2TB Red PRO => RAID 6

Let's see how to configure this Qnap step by step to get good performance.

NAS initialization

First of all, download QFinder Pro from this URL. For QFinder Pro to discover your NAS, make sure you are on the same network subnet. In my case, I connected my laptop directly to a 1Gb/s port. Then run QFinder Pro. The tool should discover your NAS; on the first window, just click on Next.

Then a web browser opens to start the configuration. Just click on Start Smart Installation Guide.

In the next window, provide a NAS name and the admin password.

Then specify the NTP server. At this point it won't work because the NAS is not yet connected to the production network.

Then fill in the network settings.

Next, choose which file transfer services you want to enable. I chose only the Windows services.

Because I need to configure Qtier, cache acceleration and so on, I chose Configure disks later.

I chose not to enable the multimedia functions.

Finally click on Apply to start the configuration of the NAS.

Update the firmware

When the NAS is ready, check the firmware version and update the NAS. Navigate to Control Panel | System | Firmware Update. Because the NAS currently has no Internet connection, I downloaded the firmware from the QNAP website and used it to update the NAS.

Click on OK to update the firmware.

Network configuration

N.B.: In this example, I didn't implement the most optimized network solution. Because I have 2x 10Gb/s ports, I should configure one IP per adapter and use iSCSI / MPIO. But I also use my NAS to store data such as movies, ISOs, VHDX and VMDK files, and fetch them over SMB. This is why I chose to implement NIC teaming. But if the NAS's only purpose is backup, I recommend not implementing NIC teaming.

Open the Network & Virtual Switch panel and navigate to Interfaces. Click on Port Trunking.

Then click on Add.

Select the 10Gb/s adapters. In this example they are called Adapter 5 and Adapter 6.

Next choose General Switch (most common).

Next choose Balance-alb. I selected this mode because according to the QNAP documentation, it provides the best performance.

Now your trunk is created, and you can click on Close.

In Network & Virtual Switch, click on Configure on the trunk.

Fill in the network settings and specify a jumbo frame of 9000. To leverage jumbo frames, this configuration must also be applied on the switches and on the backup server(s).
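
On the backup server side, the same jumbo frame setting can usually be applied with PowerShell. A sketch, assuming the 10Gb/s adapter is named "Backup10G" and that its driver exposes the standard *JumboPacket keyword (some drivers expect 9000 instead of 9014):

# Enable jumbo frames on the adapter used for backup traffic
Set-NetAdapterAdvancedProperty -Name "Backup10G" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Check the value really applied by the driver
Get-NetAdapterAdvancedProperty -Name "Backup10G" -RegistryKeyword "*JumboPacket"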

Then I set the VLAN number.

Now you can plug the NAS into your production network with the 10Gb/s network adapters.

Create the storage pool

Open Storage & Snapshots and create a Storage Pool. Click on Enable Qtier.

In the SSD tab, select the SSDs, and in the SATA tab choose the HDDs.

Then click on Create.

Once the storage pool is created, the wizard asks you if you want to create a new volume. If you chose to leverage iSCSI / MPIO, click on Close and create an iSCSI LUN in Storage & Snapshots instead. From my side, I clicked on New Volume.

Then select the storage pool and click on Next.

Specify the volume alias, the capacity and the bytes per inode. I chose 4K. Then you can create a shared folder and set an alert threshold.

Click on finish to create the volume.

Configure the cache acceleration

To configure cache acceleration, open Storage & Snapshots. Navigate to Cache Acceleration. Click on Create.

Choose the NVMe drives and Read-Write cache type. Click on Next.

I chose to accelerate sequential I/O because backups work with large files.

Then choose the volume that will be accelerated.

Once the cache acceleration is created, the hit rate should increase.

Veeam result

Then I configured Veeam to try the solution. Veeam backs up VMs located on my 2-node S2D cluster based on 200GB S3610 SSDs. Because these SSDs are only 200GB, the performance is poor. When I ran a benchmark, I was not able to exceed 450-500MB/s (130K IOPS, 4K, 70% read – 30% write). The following capture shows a processing rate of 373MB/s with the bottleneck located at the source. 373MB/s is fast compared to some customer production environments. But I'm sure I can go beyond this value with a faster S2D cluster.

Conclusion

Today, it makes no sense to implement an all-flash array as a backup repository because it is too expensive. What we need in a backup repository is capacity. But sometimes, because of the large amount of data to back up, we also need to add some high-speed drives such as SSDs to reduce the backup window. With a small number of SSDs and tiering/caching, we are able to significantly increase performance.

Use Honolulu to manage your Microsoft hyperconverged cluster

A few months ago, I wrote a topic about the next-gen Microsoft management tool called Project Honolulu. Honolulu provides management for standalone Windows Server, failover clustering and hyperconverged clusters. Currently, hyperconverged management works only on Windows Server Semi-Annual Channel (SAC) versions (I keep my fingers crossed for Honolulu support on Windows Server LTSC). I upgraded my lab to the latest technical preview of Windows Server SAC to show you how to use Honolulu to manage your Microsoft hyperconverged cluster.

As part of my job, I have deployed dozens of Microsoft hyperconverged clusters and, to be honest, the main disadvantage of this solution is the management. The Failover Clustering console is archaic and you have to use PowerShell to manage the infrastructure. Even if the Microsoft solution provides high-end performance and good reliability, the day-to-day management is tricky.

Thanks to Project Honolulu we now have a modern management tool which can compete with other solutions on the market. Currently Honolulu is still a preview version and some features are not yet available, but it is going in the right direction. Moreover, Project Honolulu is free and can be installed on your laptop or on a dedicated server, as you wish.

Honolulu dashboard for hyperconverged cluster

Once you have added the cluster connection to Honolulu, you get a new line with the type Hyper-Converged Cluster. By clicking on it, you can access a dashboard.

This dashboard provides a lot of useful information such as the latest alerts raised by the Health Service, the overall performance of the cluster, the resource usage, and information about servers, virtual machines, volumes and drives. You can see that currently the cluster performance charts indicate No data available. This is because the preview of Windows Server that I have installed doesn't provide this information yet.
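
The faults shown on this dashboard come from the Health Service and can also be queried with PowerShell from any cluster node. A minimal sketch, assuming an S2D cluster running Windows Server 2016 or later:

# List the current faults raised by the Health Service
Get-StorageSubSystem Cluster* | Debug-StorageSubSystem

# Overall capacity, IOPS and latency report for the cluster
Get-StorageSubSystem Cluster* | Get-StorageHealthReport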

From my point of view, this dashboard is pretty clear and provides global information about the cluster. At a glance, you get the overall health of the cluster.

N.B.: the memory usage indicates -35.6% because of a custom motherboard which does not report the memory installed in the node.

Manage Drives

By clicking on Drives, you get information about the raw storage of your cluster and your storage devices. You get the total number of drives (I know I don't follow the requirements because I have 5 drives on one node and 4 on another, but it is a lab). Honolulu also provides the drive health and the raw capacity of the cluster.

By clicking on Inventory, you get detailed information about your drives such as the model, the size, the type, the storage usage and so on. At a glance, you know if you have to run Optimize-StoragePool.
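
As a reminder, rebalancing the pool after adding or replacing drives is a one-liner. A sketch, assuming the default S2D pool name:

# Rebalance data across all the drives of the S2D pool (run from any cluster node)
Get-StoragePool S2D* | Optimize-StoragePool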

By clicking on a drive, you get further information about it. Moreover, you can act on it. For example, you can turn the light on, retire the disk or update the firmware. For each drive you get performance and capacity charts.

Manage volumes

By clicking on Volumes, you get information about your Cluster Shared Volumes. At a glance you get the health, the overall performance and the number of volumes.

In the inventory, you get further information about the volumes such as the status, the file system, the resiliency, the size and the storage usage. You can also create a volume.

By clicking on create a new volume, you get this:

By clicking on a volume, you get more information about it and you can take actions such as open, resize, offline and delete.
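
If you prefer PowerShell to the Honolulu wizard, the same kind of volume can be created from any cluster node. A sketch with a placeholder name and size (on an S2D pool, mirror resiliency is selected by default):

# Create a 1TB CSV formatted with ReFS on the S2D pool
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Volume01 -FileSystem CSVFS_ReFS -Size 1TB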

Manage virtual machines

From Honolulu, you can also manage virtual machines. When you click on Virtual Machines | Inventory, you get the following information. You can also manage the VMs (start, stop, turn off, create a new one, etc.). All chart values are in real time.

vSwitches management

From the Hyper-Converged Cluster pane, you have information about virtual switches. You can create a new one, or delete, rename and change the settings of an existing one.

Node management

Honolulu also provides information about your nodes in the Servers pane. At a glance you get the overall health of all your nodes and their resource usage.

In the inventory, you have further information about your nodes.

If you click on a node, you can pause it for updates or hardware maintenance. You also have detailed information such as performance charts, drives connected to the node and so on.

Conclusion

Project Honolulu is the future of Windows Server in terms of management. This product provides great information about Windows Server, Failover Clustering and hyperconverged clusters in a web-based form. From my point of view, Honolulu eases the management of the Microsoft hyperconverged solution and can help administrators. Some features are missing but Microsoft listens to the community. Honolulu is modular because it is based on extensions. Without a doubt, Microsoft will add features regularly. I just keep my fingers crossed for Honolulu support on Windows Server 2016, released in October 2016, but I am optimistic.

Deploy a Software-Defined Storage solution with StarWind Virtual SAN

StarWind Virtual SAN is a Software-Defined Storage solution which replicates data across several nodes to ensure availability. The data is mirrored between two or more nodes. The hypervisor can be installed on the StarWind Virtual SAN nodes (hyperconverged) or separated from them (converged). StarWind Virtual SAN is easy to use and provides high performance. Moreover, StarWind provides proactive support. In this topic I'll show you how to deploy a 3-node StarWind VSAN to use with Hyper-V or ESXi.

Lab overview

To write this topic, I deployed three VMware VMs running Windows Server 2016. Each VM has the following configuration:

  • 2 vCPU
  • 8GB of memory
  • 1x VMXNET3 NIC in management network (for Active Directory, RDP, VM management)
  • 1x VMXNET3 NIC in cluster network (synchronization and heartbeat)
  • 1x VMXNET3 NIC in Storage network (iSCSI with hypervisor)
  • 1x 100GB Data disk

If you plan to deploy StarWind VSAN in production, you need physical servers with enough storage and enough network adapters.

StarWind Virtual SAN installation

First, download StarWind VSAN from the StarWind website. Once you have downloaded the installer, execute it on each StarWind VSAN node. First, accept the license agreement.

In the next screen, click on Next.

Specify a folder location where StarWind Virtual SAN will be installed.

Select StarWind Virtual SAN Server in the drop down menu.

Specify the start menu folder and click on Next.

If you want a desktop icon, enable the checkbox.

If you already have a license key, select Thank you, I do have a key already and click on Next.

Specify the location of the license file and click on Next.

Review the license information and click on Next.

If the iSCSI service is stopped and disabled, you'll get this pop-up. Click on OK to enable and start the Microsoft iSCSI Initiator service.

Once you have installed StarWind Virtual SAN on each node, you can move on to the next step.

Create an iSCSI target and a storage device

Open StarWind Management Console and click on Add Server.

Then add each node and click on OK. In the screenshot below, I clicked on Scan StarWind Servers to discover the nodes automatically.

When you connect to each node, you get this warning. Choose the default location of the storage pool (storage devices).

Right click on the first node and select Add Target.

Specify a target alias and be sure to allow multiple concurrent iSCSI connections.

Once the target has been created, you get the following screen:

Now, right click on the target and select Add new Device to Target.

Select Hard Disk Device and click on Next.

Choose the option which applies to your configuration. In my case, I chose Virtual disk.

Specify a name and a size for the virtual disk.

Choose thick-provisioned or Log-Structured File System (LSFS). LSFS is designed for virtual machines because this file system eliminates the IO blender effect. With LSFS you can also enable deduplication. Also choose the right block cache size.

In the next screen, you can choose where the metadata is held and how many worker threads you want.

Choose the device RAM cache parameters.

You can also specify a flash cache capacity if you have installed SSDs in your nodes.

Then click on Create to create the storage device.

Once the storage device is created, you get the following screen:

At this point, you have a virtual disk on the first node. This virtual disk can store your data, but it has no resiliency. In the next steps, we will replicate this storage device to the two other nodes.

Replicate the storage device in other StarWind VSAN nodes

Right click on the storage device and select Replication Manager.

In the replication manager, select Add Replica.

Select Synchronous Two-Way Replication to replicate data across StarWind Virtual SAN nodes.

Specify the hostname and the port of the partner and click on Next.

Then select the failover strategy: Heartbeat or Node Majority. In my case I chose Node Majority. This mode requires that the majority of nodes be online. In a three-node configuration, you can tolerate the loss of only one node.

Then choose to create a new partner device.

Specify the target name and the location of the storage device on the partner node.

Select the network used for synchronization. In my case, I selected the cluster network.

Then choose to synchronize from the existing device.

To start the creation of the replication, click on Create Replica.

Repeat the same steps for the third node. At the end, the configuration should be similar to the following screenshot:

In the StarWind Management Console, if you click on a target, you can see the iSCSI sessions: each node has two iSCSI sessions because there are three nodes.

iSCSI connection

Now that StarWind Virtual SAN is ready, you can connect your favorite hypervisor by using iSCSI. Don't forget to configure MPIO to support multipathing. For ESXi you can read this topic.

Enable Direct Storage Access in Veeam Backup & Replication

In environments where the virtualization infrastructure is connected to a SAN, you may want to connect your Veeam proxies (usually physical servers) to the SAN to collect data. This configuration avoids using the production network to get data and reduces the backup window by increasing the speed of the backup process. It requires a direct connection between the Veeam proxies and the SAN, usually over iSCSI or FC. In this topic, we'll see how to configure the Veeam proxies to enable Direct Storage Access and back up VMware VMs.

Design overview

The Veeam server is a physical server (Dell R630) running Windows Server 2016 (July cumulative updates). The Veeam version is 9.5 Update 2. This server has two network adapters in a team for the production network and two network adapters for iSCSI. I'd like to collect VM data across the iSCSI network adapters. So Veeam collects VM information and processes the VM snapshot from the production network, and data is copied across the backup network. In this topic, I'll connect VMFS LUNs (VMware) to the Veeam server. In this configuration, the Veeam proxy is deployed on the Veeam server.

N.B.: If you plan to dedicate servers to Veeam proxies, you must connect each proxy to the production storage.

Configure MPIO

First of all, you need to install MPIO if you have several network links connected to the SAN. MPIO ensures that all paths to a LUN are managed to provide high availability and bandwidth aggregation (unless you have set MPIO to failover only). Install MPIO from the Windows features and run mpiocpl (it also works on Core edition) to configure MPIO:

After you have added the iSCSI and/or SAS support, a reboot is required.
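
The same configuration can be scripted. A sketch, assuming an iSCSI connection to the SAN:

# Install the MPIO feature
Install-WindowsFeature -Name Multipath-IO

# Claim iSCSI LUNs with the Microsoft DSM (equivalent to the checkbox in mpiocpl), then reboot
Enable-MSDSMAutomaticClaim -BusType iSCSI
Restart-Computer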

Disable automount

This option prevents Windows from automatically mounting any new basic or dynamic volumes that are added to the system. The volumes must not be mounted on the Veeam server; we just need access to the block storage. This is why this option is set.
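
Automount can be disabled from the command line as well. A sketch using the built-in tool:

# Disable automatic mounting of new volumes (same effect as DISKPART> automount disable)
mountvol /N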

iSCSI configuration

First, you need to start the iSCSI service. This service should also be set to start automatically:

Next, open iSCSIcpl (it also works on Server Core) and add the portal address. Once the targets are discovered, you can connect to them.
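
If you prefer PowerShell to iSCSIcpl, the portal and the connections can be configured as below. A sketch, the portal IP being an example value:

# Make sure the iSCSI initiator service is running and starts automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Declare the SAN portal, then connect every discovered target with MPIO and persistence
New-IscsiTargetPortal -TargetPortalAddress 10.10.10.10
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true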

Once you are connected, open Disk Management and check that the LUNs are correctly mounted in the system:

Veeam configuration

Now that you have connected your Veeam proxies to the production storage, you can change the transport mode. In the Veeam Backup & Replication console, navigate to Backup Infrastructure and edit the proxies. Change the transport mode and choose Direct Storage Access.

Now, when you back up a VM located on a SAN datastore, you should see something like this:

Conclusion

If you have an iSCSI or FC SAN and you want to dedicate these networks to backup, Direct Storage Access can be the way to go. The servers must be connected to the SAN and then Veeam is able to copy at block level from the mounted LUNs. If your production network is 1Gb/s and your storage network is 10Gb/s, you can also save a lot of time and reduce the backup window.

Storage Spaces Direct: plan the storage for hyperconverged

When a customer calls me to design or validate the hardware configuration for a hyperconverged infrastructure with Storage Spaces Direct, there is often a misunderstanding about the remaining usable capacity, the required cache capacity and ratio, and the different resilience modes. With this topic, I'll try to help you plan the storage for hyperconverged deployments and clarify some points.

Hardware consideration

Before sizing the storage devices, you should be aware of some limitations. First, you can't exceed 26 storage devices per node. Windows Server 2016 can't handle more than 26 storage devices, so if you deploy your operating system on two storage devices, 24 are available for Storage Spaces Direct. However, storage devices are getting bigger and bigger, so 24 storage devices per node is enough (I have never seen a deployment with more than 16 storage devices for Storage Spaces Direct).

Secondly, you have to pay attention to your HBA (Host Bus Adapter). With Storage Spaces Direct, it is the operating system which is in charge of handling resilience and cache. This is a software-defined solution after all. So, there is no reason for the HBA to manage RAID and cache. In the Storage Spaces Direct case, the HBA is mainly used to add more SAS ports. So, don't buy an HBA with RAID and cache because you will not use these features. Storage Spaces Direct storage devices will be configured in JBOD mode. If you choose to buy Lenovo servers, you can buy the N2215 HBA. If you choose Dell, you can select the HBA330. The HBA must provide the following features:

  • Simple pass-through SAS HBA for both SAS and SATA drives
  • SCSI Enclosure Services (SES) for SAS and SATA drives
  • Any direct-attached storage enclosures must present Unique ID
  • Not Supported: RAID HBA controllers or SAN (Fibre Channel, iSCSI, FCoE) devices

Thirdly, there are requirements regarding storage devices. Only NVMe, SAS and SATA devices are supported. If you have old SCSI storage devices, you can drop them :). These storage devices must be physically attached to only one server (locally-attached devices). If you choose to implement SSDs, these devices must be enterprise-grade with power-loss protection. So please, don't install a hyperconverged solution with Samsung 850 Pro drives. If you plan to install cache storage devices, these SSDs must have 3 DWPD. That means that the device can be entirely written at least three times per day.

Finally, you have to respect a minimum number of storage devices. You must implement at least 4 capacity storage devices per node. If you plan to install cache storage devices, you have to deploy at least two of them per node. Each node in the cluster must have the same kind of storage devices. If you choose to deploy NVMe in one server, all servers must have NVMe. As much as possible, keep the same configuration across all nodes. The table below provides the minimum number of storage devices per node depending on the configuration:

Drive types present       Minimum number required
All NVMe (same model)     4 NVMe
All SSD (same model)      4 SSD
NVMe + SSD                2 NVMe + 4 SSD
NVMe + HDD                2 NVMe + 4 HDD
SSD + HDD                 2 SSD + 4 HDD
NVMe + SSD + HDD          2 NVMe + 4 others

Cache ratio and capacity

The cache ratio and capacity are an important part of the design when you choose to deploy the cache mechanism. I have seen a lot of wrong designs because of the cache mechanism. The first thing to know is that the cache is not mandatory. As shown in the above table, you can implement an all-flash configuration without a cache mechanism. However, if you choose to deploy a solution based on HDDs, you must implement a cache mechanism. When the storage devices behind the cache are HDDs, the cache is set to read/write mode. Otherwise, it is set to write-only mode.

The cache capacity must be at least 10% of the raw capacity. If each node has 10TB of raw capacity, you need at least 1TB of cache. Moreover, if you deploy the cache mechanism, you need at least two cache storage devices. This ensures high availability of the cache. When Storage Spaces Direct is enabled, capacity devices are bound to cache devices in a round-robin manner. If a cache storage device fails, all its capacity devices are bound to another cache storage device.

Finally, you must respect a ratio between the number of cache devices and capacity devices. The number of capacity devices must be a multiple of the number of cache devices. This ensures that each cache device has the same number of capacity devices.

Reserved capacity

When you design the storage pool capacity and choose the number of storage devices, keep in mind that you need some unused capacity in the storage pool. This is the reserved capacity for the repair process. If a capacity device fails, the storage pool duplicates the blocks that were written to this device to restore the resilience mode. This process requires free space to duplicate blocks. Microsoft recommends leaving empty the capacity of one capacity device per node, up to four drives.

For example, if I have 6 nodes with 4x 4TB HDD per node, I leave 4x 4TB empty in the storage pool (one per node, up to four drives) as reserved capacity.

Example of storage design

You should know that in a hyperconverged infrastructure, the storage and the compute are related because these components reside in the same box. So before calculating the required raw capacity, you should have evaluated two things: the number of nodes you plan to deploy and the usable storage capacity required. For this example, let's say that we need four nodes and 20TB of usable capacity.

First, you have to choose a resilience mode. In hyperconverged deployments, usually 2-way mirroring or 3-way mirroring is implemented. If you choose 2-way mirroring (1 fault tolerated), you get 50% usable capacity. If you choose 3-way mirroring (recommended, 2 faults tolerated), you get only 33% usable capacity.

PS: At the time of writing this topic, Microsoft has announced deduplication for ReFS volumes in the next Windows Server release.

So, if you need 20TB of usable capacity and you choose 3-way mirroring, you need at least 60TB (20 x 3) of raw storage capacity. That means that each node (in a 4-node cluster) needs 15TB of raw capacity.

Now that you know you need 15TB of raw storage per node, you need to define the number of capacity storage devices. If you need maximum performance, you can choose only NVMe devices, but this solution will be very expensive. For this example, I choose SSDs for the cache and HDDs for the capacity.

Next, I need to define which kind of HDD I select. If I choose 4x 4TB HDD per node, I will have 16TB of raw capacity per node. I need to add an additional 4TB HDD for the reserved capacity. But this solution is not good regarding the cache ratio: no cache ratio can be respected with five capacity devices. In this case I need to add another 4TB HDD to get a total of 6x 4TB HDD per node (24TB raw capacity), and I can then respect a cache ratio of 1:2 or 1:3.

The other solution is to select 2TB HDDs. I need 8x 2TB HDD to get the required raw capacity. Then I add an additional 2TB HDD for the reserved capacity. I get 9x 2TB HDD and I can respect a 1:3 cache ratio. I prefer this solution because I'm closest to the specifications.

Now we need to design the cache devices. For our solution, we need 3 cache devices for a total capacity of at least 1.8TB (10% of the raw capacity per node). So I choose to buy 800GB SSDs (because my favorite cache SSD, the Intel S3710, exists in 400GB or 800GB :)). 800GB x 3 = 2.4TB of cache capacity per node.

So, each node will have 3x 800GB SSD and 9x 2TB HDD with a cache ratio of 1:3. The total raw capacity is 72TB and the reserved capacity is 8TB. The usable capacity will be about 21.1TB ((72 - 8) x 0.33).
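
The arithmetic of this example can be written as a small sketch, which is handy to re-run with other drive sizes:

$nodes      = 4
$hddPerNode = 9;  $hddSize = 2TB      # capacity devices
$ssdPerNode = 3;  $ssdSize = 800GB    # cache devices

$raw      = $nodes * $hddPerNode * $hddSize          # 72TB of raw capacity
$reserved = $nodes * $hddSize                        # 8TB reserved (one capacity drive per node, up to four)
$usable   = ($raw - $reserved) / 3                   # ~21TB usable with 3-way mirroring
$ratio    = "1:{0}" -f ($hddPerNode / $ssdPerNode)   # 1:3 cache ratio
$cachePct = ($ssdPerNode * $ssdSize) / ($hddPerNode * $hddSize) * 100   # ~13% of raw capacity per node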

About Fault Domain Awareness

I have made this demonstration with Fault Domain Awareness at the node level. If you choose to configure Fault Domain Awareness at the chassis or rack level, the calculation is different. For example, if you choose to configure Fault Domain Awareness at the rack level, you need to divide the total raw capacity across the number of racks. You also need the exact same number of nodes per rack. With this configuration and the above case, you need 15TB of raw capacity per rack.

Shared virtual hard disks in Hyper-V 2016

Microsoft brings a new feature to Hyper-V in Windows Server 2016 called VHD Set. This type of disk enables sharing virtual hard disks between several servers to implement a guest cluster. In this topic we will see why to use VHD Sets, and how to implement them.

Why use VHD Set instead of shared VHDX

Like VHD Set, shared VHDX enables sharing a virtual hard disk between multiple virtual machines. This feature is useful to implement a guest cluster where shared disks are required (such as SQL Server AlwaysOn FCI or file servers). Shared VHDX and VHD Set are great to avoid the use of a virtual HBA and a virtual SAN to present a LUN to the VMs. They are also necessary if you have implemented an SMB3-based storage solution. However, the shared VHDX feature has some limitations:

  • Resizing and migrating a shared VHDX is not supported
  • Backing up or replicating a shared VHDX is not supported

The VHD Set feature does not have these limitations. However, VHD Set is available only for the Windows Server 2016 guest operating system. When creating a VHD Set, two files are created:

  • An .avhdx file that contains the data. This file is fixed or dynamic;
  • A .vhds file that contains metadata to coordinate information between the guest cluster nodes. The size of this file is about 260KB.

Create a VHD Set

To create a VHD Set, you can use the graphical user interface (GUI) or PowerShell cmdlets. From the GUI, open Hyper-V Manager, select New and then Hard Disk. As in the screenshot below, select VHD Set.

Then select the type of disk (fixed or dynamic), the name, the location and the size. Using PowerShell, you can run a cmdlet like the one shown below.
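
A minimal sketch of that cmdlet (the path and names are examples; pointing New-VHD at a .vhds file creates a VHD Set on Windows Server 2016):

# Create a fixed 40GB VHD Set ("blue") and a dynamic one ("red")
New-VHD -Path C:\ClusterStorage\VMStorage01\SharedDisk\Blue.vhds -SizeBytes 40GB -Fixed
New-VHD -Path C:\ClusterStorage\VMStorage01\SharedDisk\Red.vhds -SizeBytes 40GB -Dynamic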

Below you can find the result of these last two actions:

As you can see, the "blue" VHD Set is fixed and its AVHDX file size is 40GB. The "red" one is dynamic, so its AVHDX file will expand dynamically.

Add VHD Set to virtual machines

To try VHD Set, I have created two virtual machines called VMFLS01 & VMFLS02. Each VM will be connected to two VHD Sets:

  • Quorum: for the cluster Witness disk
  • Shared: for the data

To mount the shared disk into the VM, edit the VM properties and navigate to a SCSI controller. Then select Shared drive.

Next, specify the location of the VHDS file.

You can also mount the VHDS in VM by using PowerShell:

Add-VMHardDiskDrive -VMName VMFLS01 -Path "C:\ClusterStorage\VMStorage01\SharedDisk\VMFLS_Quorum.vhds" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName VMFLS01 -Path "C:\ClusterStorage\VMStorage01\SharedDisk\VMFLS_Shared.vhds" -SupportPersistentReservations

I repeated the same steps for the second VM. Once both VMs are connected to the VHD Sets, you can start the VMs.

Add-VMHardDiskDrive -VMName VMFLS02 -Path "C:\ClusterStorage\VMStorage01\SharedDisk\VMFLS_Quorum.vhds" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName VMFLS02 -Path "C:\ClusterStorage\VMStorage01\SharedDisk\VMFLS_Shared.vhds" -SupportPersistentReservations

Create the guest cluster

Now that both VMs are connected to the shared disks, we can create the cluster. I run the following cmdlets to install the required features on each server:

# Install Failover Clustering feature and management tools
install-windowsfeature -Name Failover-Clustering -IncludeManagementTools -ComputerName VMFLS01
install-windowsfeature -Name Failover-Clustering -IncludeManagementTools -ComputerName VMFLS02

Then I execute the following commands to bring the disks online and initialize them:

Get-Disk | Where-Object OperationalStatus -Like "Offline" | Set-Disk -IsOffline $false
Get-Disk | Where-Object PartitionStyle -Eq "RAW" | Initialize-Disk

Now that the disks are initialized, I create a volume on each disk. In the example below, disk 1 is for the quorum and disk 2 is for data.

New-Volume -DiskNumber 1 -FileSystem NTFS -FriendlyName Quorum
New-Volume -DiskNumber 2 -FileSystem NTFS -FriendlyName Data

Next I run the following cmdlets to create the cluster:

# Test the nodes to check if they are compliant to be part of a cluster
Test-Cluster VMFLS01,VMFLS02
# Create the cluster
New-Cluster -Name Cluster-FS01 -Node VMFLS01,VMFLS02 -StaticAddress 10.10.0.199

# I rename Cluster Disk 1 to Quorum and Cluster Disk 2 to Data
(Get-ClusterResource |? Name -like "Cluster Disk 1").Name="Quorum"
(Get-ClusterResource |? Name -like "Cluster Disk 2").Name="Data"

# Set the Cluster Quorum to use disk Witness
Set-ClusterQuorum -DiskWitness Quorum

# Set the Data volume to Cluster Shared Volume
Get-ClusterResource -Name Data | Add-ClusterSharedVolume

Once you have finished, you should have something like this in the cluster:

Now I’m able to copy data in the volume:

I can also move the storage to another owner node:

Conclusion

Thanks to VHD Set in Windows Server 2016, I can easily create a guest cluster without using complex technologies such as NPIV, virtual HBAs and virtual SANs. Moreover, resizing, migrating and backing up are supported when implementing shared disks with VHD Set. It is a friendly feature, so why not use it?

Understand Microsoft Azure Storage for Virtual Machines

Microsoft Azure provides a storage solution that can be used for files, backups, virtual machines and so on. In this topic I'll talk about Blob storage, which stores the virtual disks of virtual machines.

Storage account

To use the Microsoft Azure storage solution, it's necessary to create a storage account. This storage account gives you a single namespace to which only you (by default) have access. Each storage account handles up to 20,000 IOPS and 500TB of data. If you use this storage account for Standard virtual machines, you can store up to 40 virtual disks (a disk from a Standard virtual machine provides 500 IOPS).

To authenticate against Microsoft Azure Storage, the storage account comes with two keys, called the primary key and the secondary key. Each key can be regenerated whenever you want. Two keys are provided to ease the key regeneration process. For example, if an application uses the primary key to access the storage, you can:

  1. Regenerate the secondary key ;
  2. Modify the application to use the secondary key;
  3. Regenerate the primary key.

Moreover, if you want to give temporary administrator rights to someone, you can give them the secondary key and regenerate it 24 hours later.

When you create a storage account, several REST endpoints are created to manage the contents of your storage:

  • Blob endpoint: https://<Storage Account Name>.blob.core.windows.net
  • Queue endpoint: https://<Storage Account Name>.queue.core.windows.net
  • Table endpoint: https://<Storage Account Name>.table.core.windows.net
  • File endpoint (preview): https://<Storage Account Name>.file.core.windows.net

Azure Blob (Binary Large OBject) storage enables you to store files such as docx, pdf, vhd and so on. There are two blob types, called page blobs and block blobs. I'll talk more about the differences between these two types later. Queue storage is useful for messaging and communication between cloud service components. Table storage is used for NoSQL structured datasets. Finally, File storage provides SMB 2.1 shares that can be managed from Windows Explorer, for example. SMB 2.1 has been chosen for compatibility reasons with Linux. This feature is still in preview.

In the next section, I'll talk only about Blob storage because virtual disks are stored in Blob storage :).

BLOB Storage

Entities and Hierarchy

First it’s important to understand the entities which play a role in blob storage:

  • Storage Account: this is the root of the hierarchy,
  • Container: you can compare container to a folder. You can manage access right from this entity,
  • Blob: this is the binary you want to store (docx, pdf, vhd and so on),
  • Metadata: you can associate your own metadata to a blob.

Block and Page blobs

Earlier I said there are two kinds of blobs: the block blob and the page blob. So it's time to explain them :):

  • The page blob is designed for IaaS usage such as virtual machine disks. The maximum size of a page blob is exactly 1023GB;
  • The block blob is mostly used to store data such as documents, photos, videos, backups and so on. The maximum size of a block blob is 200GB.

Access right management

By default, blobs in a container are not accessible anonymously. However, you can change this behavior by changing the access type. There are three access types:

  • Private (Off): No Anonymous Access;
  • Blob: Access blobs via anonymous requests;
  • Container: List and access blobs via anonymous requests.

When you want to give someone access to a container or a blob for a specific period of time and with specific permissions, you can use Shared Access Signatures.

Replication

Your data is replicated to avoid losing it. Currently there are four replication options:

  • Locally redundant storage (Standard_LRS): the data is replicated synchronously three times in a single datacenter;
  • Zone redundant storage (Standard_ZRS): this replication option is only available for block blobs. Three copies of the data are made across multiple datacenters;
  • Geographically redundant storage (Standard_GRS): the data is replicated synchronously three times in a single datacenter and three other copies are made asynchronously in a second datacenter;
  • Read-Access geographically redundant storage (Standard_RAGRS): same as Standard_GRS, plus you have read access to the data in the second datacenter.


Manage Blob Storage

Create a Storage Account

To create an Azure Storage Account, you can use the PowerShell cmdlet New-AzureStorageAccount:

New-AzureStorageAccount -StorageAccountName "techcoffee01" `
                        -label "techcoffee01" `
                        -description "Storage Account to store Virtual Machines" `
                        -Location "West Europe" `
                        -Type "Standard_LRS"

The StorageAccountName parameter enables you to give a name to your storage account. Next, provide the datacenter where you want to create this storage account by using the Location parameter. Finally, choose a replication option with the Type argument. Below is a screenshot of a successful Azure storage account creation.

Next, if I open the Azure portal, I can retrieve my new Azure storage account information such as the endpoints, the location or the replication option.

You can also retrieve this information by using the Get-AzureStorageAccount PowerShell cmdlet.

Create Azure Storage Context

Before being able to manage containers and blobs, you have to create an Azure storage context. First you have to get the primary or the secondary key of your storage account by using the Get-AzureStorageKey command.

You can see the primary and the secondary key in the above screenshot. Now we can use the New-AzureStorageContext cmdlet to create the context as below:

$Key = Get-AzureStorageKey -StorageAccountName techcoffee01
$ctx = New-AzureStorageContext -StorageAccountName techcoffee01 `
                               -StorageAccountKey $Key.Primary

In the next part I’ll use the $ctx variable when the context is required.

Manage containers

To create a container, you can use the New-AzureStorageContainer as below:
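
A sketch of the call, reusing the $ctx context created above (the container name is an example):

New-AzureStorageContainer -Name "vhds" -Permission Off -Context $ctx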

Be careful because the container name must be a valid DNS name as MSDN says:

  • Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.
  • Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in container names.
  • All letters in a container name must be lowercase.
  • Container names must be from 3 through 63 characters long.

You can list the containers from PowerShell by using the Get-AzureStorageContainer cmdlet:


Moreover you can manage your containers from the Azure Portal:

You can also modify the permissions associated with a container by using Set-AzureStorageContainerAcl. Below, I change the permission of the oldvhds container to Blob:
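
A sketch of such a command (container name taken from the text above):

Set-AzureStorageContainerAcl -Name "oldvhds" -Permission Blob -Context $ctx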

Finally, you can easily delete a container by using the Remove-AzureStorageContainer cmdlet:

Upload a VHD

If you want to upload a VHD to create your own image to deploy virtual machines, you can use the Add-AzureVhd cmdlet:
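
A sketch of the upload, with example paths (the destination must be a page blob URI in your storage account):

Add-AzureVhd -LocalFilePath "D:\Images\ws2012r2.vhd" -Destination "https://techcoffee01.blob.core.windows.net/vhds/ws2012r2.vhd"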

Delete Storage Account

To delete the storage account you can use the cmdlet Remove-AzureStorageAccount as below:

The result is the same from the Azure Portal:

Conclusion

Microsoft Azure Storage is a feature that enables you to store binaries, backups, shares and so on. To have your own storage namespace, you have to create a storage account. Each storage account handles up to 20,000 IOPS and 500TB of data. When virtual machines are created in Azure, the VHD files are stored as page blobs. You can manage your Blob storage easily by using PowerShell cmdlets. You can list the PowerShell cmdlets related to Azure Storage with this command: get-command *AzureStorage*.

Synology NAS for Hyper-V purpose

For two years I have had a Synology DS412+ which provides shares for personal use (such as holiday photos :p). This NAS is nice to share files across the network to computers, TVs, smartphones, etc.

As I have a lot of holiday photos, I had to replace all the disks to get more disk space. Below is the old configuration:

  • 4x Western Digital RE4 500GB
  • RAID 5
  • Total amount of storage: 1.3TB

When I created this volume, Synology disk groups did not exist, so my volume was not flexible. Moreover, I used my Synology NAS for Hyper-V needs. I had created some file-based LUNs, which provide flexibility but bring poor performance.

So recently I bought four new Western Digital RED 2TB drives. I deleted the old volume (after saving all my holiday photos :)) and created a new one. Below are my requirements:

  • The volume must be as fast as possible in read and ESPECIALLY in write => RAID 10
  • The disk group must accept block-level LUNs without taking all the volume space => Multiple LUNs on RAID
  • I need a WAF (Wife Acceptance Factor :p) volume to share my holiday photos on every multimedia device => Volume

Synology side configuration

So let's configure my Synology DS412+. The disk group configuration is in Storage Manager. I click on Create, select all the disks and choose RAID 10 redundancy:

Once the disk group is created, I make a volume for my holiday photo shares with 2.61TB of space. Just click on Create in the Volume tab, type your volume size and apply the configuration.

Now, for my virtualization needs, I go to the iSCSI LUN tab to create as many LUNs as I need.

Click on Create in the iSCSI LUN tab and select iSCSI LUN (block-level) – Multiple LUNs on RAID. Type a LUN name and configure the iSCSI target as needed. Define a storage size and click Create. Repeat the procedure for each LUN needed.

Hyper-V host side configuration

Once you have created your LUNs, go to your hypervisor server. For this example, I work on Windows Server 2012 R2 Datacenter where the Hyper-V role is installed.

So I open the iSCSI Initiator in Administrative Tools on the Hyper-V host and launch a Quick Connect to the Jupiter target (Jupiter is the name of my Synology). Click on Connect and select all the LUNs. Now all the LUNs appear in Disk Management on Windows.

Now you can use these LUNs to store VHDX files or as pass-through disks for your databases, for example.

Even if I only have four disks, the performance is not ridiculous. I reached 200MB/s in write, and the IO meter in my Synology's Resource Monitor indicates good performance for my lab needs. Sometimes the data transfer is faster than my network bandwidth; I think this is ODX at work. It's great because I use Virtual Machine Manager 2012 R2 and it is able to use ODX for VHDX transfers.
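
To confirm that ODX is enabled on the Windows side, you can check the FilterSupportedFeaturesMode registry value. A sketch (0 means offloaded data transfers are enabled):

Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "FilterSupportedFeaturesMode"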

Synology NAS for Hyper-V is a good option

To conclude, Synology provides some SAN features such as iSCSI, LUNs and ODX. For a lab, it's useful for shared disks and CSV (Cluster Shared Volume) disks for a failover cluster.

At the same time, the WAF volume is available to share holiday photos across the network to smartphones, TVs, etc.

Synology disk groups provide flexibility to manage volumes. I think it is a great feature for power users or small enterprises. So next time I'll braze a Fibre Channel adapter onto my DS412+… nooooo, it's a joke (but let me think about that… :p)
