Deploy a SMB storage solution for Hyper-V with StarWind VSAN free

StarWind VSAN free provides a free Software-Defined Storage (SDS) solution for two nodes. With this solution, you can deliver highly available storage based on Direct-Attached Storage devices. On top of StarWind VSAN free, you can deploy a Microsoft failover cluster with Scale-Out File Server (SOFS). So you can deploy a converged SDS solution with Windows Server 2016 Standard Edition and StarWind VSAN free. It is an affordable solution for your Hyper-V VM storage.

In this topic, we’ll see how to deploy StarWind VSAN free on two nodes running Windows Server 2016 Standard Core edition. Then we’ll deploy a failover cluster with SOFS to deliver storage to Hyper-V nodes.

Architecture overview

This solution should be deployed on physical servers with physical disks (NVMe, SSD, HDD, and so on). For the demonstration, I have used two virtual machines. Each virtual machine has:

  • 4 vCPU
  • 4GB of memory
  • 1x OS disk (60GB dynamic) – Windows Server 2016 Standard Core edition
  • 1x data disk (127GB dynamic)
  • 3x vNIC (1x Management / iSCSI, 1x Heartbeat, 1x Synchronization)

Both nodes are deployed and joined to the domain.

Node preparation

On both nodes, I run the following cmdlets to install the required features and prepare a volume for StarWind:

# Install FS-FileServer, Failover Clustering and MPIO
Install-WindowsFeature FS-FileServer, Failover-Clustering, MPIO -IncludeManagementTools -Restart

# Set the iSCSI service startup to automatic
Get-Service MSiSCSI | Set-Service -StartupType Automatic

# Start the iSCSI service
Start-Service MSiSCSI

# Create a volume on the data disk
New-Volume -DiskNumber 1 -FriendlyName Data -FileSystem NTFS -DriveLetter E

# Enable automatic claiming of iSCSI devices
Enable-MSDSMAutomaticClaim -BusType iSCSI

StarWind installation

Because I have installed the nodes in Core edition, I install and configure the components from PowerShell and the command line. You can download StarWind VSAN free from this link. To install StarWind from the command line, you can use the following parameters:

Starwind-v8.exe /SILENT /COMPONENTS="comma separated list of component names" /LICENSEKEY="path to license file"

Current list of components:

  • Service: StarWind iSCSI SAN server.
  • service\haprocdriver: HA Processor Driver; used to support devices created with older versions of the software.
  • service\starflb: Loopback Accelerator; used with Windows Server 2012 and later to accelerate iSCSI operation when the client resides on the same machine as the server.
  • service\starportdriver: StarPort driver, required for the operation of mirror devices.
  • Gui: Management Console.
  • StarWindXDll: StarWindX COM object.
  • StarWindXDll\powerShellEx: StarWindX PowerShell module.

To install StarWind, I have run the following command:

C:\temp\Starwind-v8.exe /SILENT /COMPONENTS="Service,service\starflb,service\starportdriver,StarWindXDll,StarWindXDll\powerShellEx" /LICENSEKEY="c:\temp\StarWind_Virtual_SAN_Free_License_Key.swk"

I run this command on both nodes. After this command is run, StarWind is installed and ready to be configured.

StarWind configuration

StarWind VSAN free provides a 30-day trial of the Management Console. After the 30 days, you have to manage the solution from PowerShell. So I decided to configure the solution from PowerShell:

Import-Module StarWindX

try
{
    $server = New-SWServer -host 10.10.0.54 -port 3261 -user root -password starwind

    $server.Connect()

    $firstNode = new-Object Node

    $firstNode.ImagePath = "My computer\E"
    $firstNode.ImageName = "VMSTO01"
    $firstNode.Size = 65535
    $firstNode.CreateImage = $true
    $firstNode.TargetAlias = "vmsan01"
    $firstNode.AutoSynch = $true
    $firstNode.SyncInterface = "#p2=10.10.100.55:3260"
    $firstNode.HBInterface = "#p2=10.10.100.55:3260"
    $firstNode.CacheSize = 64
    $firstNode.CacheMode = "wb"
    $firstNode.PoolName = "pool1"
    $firstNode.SyncSessionCount = 1
    $firstNode.ALUAOptimized = $true
    
    #
    # Device sector size. Possible values: 512 or 4096 bytes (4096 may be incompatible with some clients!).
    #
    $firstNode.SectorSize = 512

    #
    # 'SerialID' should be between 16 and 31 symbols. If it is not specified, the StarWind service will generate it.
    # Note: the second node always has the same serial ID. You do not need to specify it for the second node.
    #
    $firstNode.SerialID = "050176c0b535403ba3ce02102e33eab"
    
    $secondNode = new-Object Node

    $secondNode.HostName = "10.10.0.55"
    $secondNode.HostPort = "3261"
    $secondNode.Login = "root"
    $secondNode.Password = "starwind"
    $secondNode.ImagePath = "My computer\E"
    $secondNode.ImageName = "VMSTO01"
    $secondNode.Size = 65535
    $secondNode.CreateImage = $true
    $secondNode.TargetAlias = "vmsan02"
    $secondNode.AutoSynch = $true
    $secondNode.SyncInterface = "#p1=10.10.100.54:3260"
    $secondNode.HBInterface = "#p1=10.10.100.54:3260"
    $secondNode.ALUAOptimized = $true
        
    $device = Add-HADevice -server $server -firstNode $firstNode -secondNode $secondNode -initMethod "Clear"
    
    $syncState = $device.GetPropertyValue("ha_synch_status")

    while ($syncState -ne "1")
    {
        #
        # Refresh device info
        #
        $device.Refresh()

        $syncState = $device.GetPropertyValue("ha_synch_status")
        $syncPercent = $device.GetPropertyValue("ha_synch_percent")

        Start-Sleep -m 2000

        Write-Host "Synchronizing: $($syncPercent)%" -foreground yellow
    }
}
catch
{
    Write-Host "Exception $($_.Exception.Message)" -foreground red 
}

$server.Disconnect()

Once this script has run, two HA images are created and synchronized. Now we have to connect to this device through iSCSI.

iSCSI connection

To connect to the StarWind devices, I use iSCSI. I chose to configure iSCSI from PowerShell to automate the deployment. On the first node, I run the following cmdlets:

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.55 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.54
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $True

On the second node, I run the following cmdlets:

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.54 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.55
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $True

You can run the iscsicpl command on a Server Core installation to show the iSCSI GUI. You should have something like this:

PS: If you have a 1 Gb/s network, set the load balance policy to Fail Over Only and leave the 127.0.0.1 path active. If you have a 10 Gb/s network, choose the Round Robin policy.
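Since the nodes run Core edition, the default MPIO policy can also be set from PowerShell. A minimal sketch, assuming the MPIO feature installed earlier; it sets the global default applied to newly claimed iSCSI devices, while per-path Active/Standby tuning still happens in iscsicpl:

# FOO = Fail Over Only (1 Gb/s scenario)
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO

# For a 10 Gb/s network, Round Robin instead:
# Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR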

Configure Failover Clustering

Now that a shared volume is available on both nodes, you can validate the future cluster:

Test-Cluster -node VMSAN01, VMSAN02

Review the report, and if everything is OK, create the cluster:

New-Cluster -Node VMSAN01, VMSAN02 -Name Cluster-STO01 -StaticAddress 10.10.0.65 -NoStorage

Navigate to Active Directory (dsa.msc) and locate the OU where the Cluster Name Object is located. Edit the permissions on this OU to allow the Cluster Name Object to create computer objects:
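If you prefer to script this delegation, dsacls can grant the right from the command line. A sketch only: the domain and OU distinguished name are hypothetical, while Cluster-STO01$ is the computer account of the CNO created above:

# Allow the cluster CNO to create child computer objects in the OU
dsacls "OU=Clusters,DC=mydomain,DC=local" /G "MYDOMAIN\Cluster-STO01$:CC;computer"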

Now we can create the Scale-Out File Server role:

Add-ClusterScaleOutFileServerRole -Name VMStorage01

Next, we initialize the StarWind disk so it can be converted to a CSV, and we create an SMB share:

# Initialize the disk
Get-Disk | Where-Object OperationalStatus -like Offline | Initialize-Disk

# Create a CSVFS NTFS partition
New-Volume -DiskNumber 3 -FriendlyName VMSto01 -FileSystem CSVFS_NTFS

# Rename the link in C:\ClusterStorage
Rename-Item C:\ClusterStorage\Volume1 VMSTO01

# Create a folder
New-Item -Type Directory -Path C:\ClusterStorage\VMSto01 -Name VMs

# Create a share
New-SmbShare -Name 'VMs' -Path C:\ClusterStorage\VMSto01\VMs -FullAccess everyone
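Full access for Everyone is acceptable in a lab. In production, you would typically create the share restricted to the Hyper-V host computer accounts and administrators, then mirror the share ACL to NTFS. A hedged sketch with hypothetical names (MYDOMAIN, HV01, HV02):

# Restrict the share to the Hyper-V hosts and domain admins
New-SmbShare -Name 'VMs' -Path C:\ClusterStorage\VMSto01\VMs -FullAccess 'MYDOMAIN\HV01$', 'MYDOMAIN\HV02$', 'MYDOMAIN\Domain Admins'

# Copy the share permissions to the NTFS ACL of the folder
Set-SmbPathAcl -ShareName 'VMs'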

The cluster looks like this:

Now, from Hyper-V, I am able to store VMs on this cluster:
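From a Hyper-V host, creating a VM directly on the SOFS share could look like the following sketch (the VM name and sizes are illustrative):

# Create a generation 2 VM whose configuration and VHDX live on the SOFS share
New-VM -Name 'VM01' -MemoryStartupBytes 2GB -Generation 2 -Path '\\VMStorage01\VMs' -NewVHDPath '\\VMStorage01\VMs\VM01\VM01.vhdx' -NewVHDSizeBytes 60GB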

Conclusion

StarWind VSAN free and Windows Server 2016 Standard Edition provide an affordable SDS solution. Thanks to this solution, you can deploy a 2-node storage cluster that provides SMB 3.1.1 shares, which Hyper-V can use to host virtual machines.

Host Veeam backup on Storage Spaces Direct

Storage Spaces Direct (S2D) is well known for hosting virtual machines in the disaggregated or hyperconverged model. But S2D can also be used for backup purposes, as a backup repository. You can implement a S2D cluster in the disaggregated model to host virtual machines and also to store backups. Because Veeam Backup & Replication can leverage a repository based on SMB shares, Veeam backups can be hosted on a S2D cluster through Scale-Out File Server (SOFS).

Veeam Backup & Replication 9.5 provides advanced ReFS integration for faster synthetic full backup creation, reduced storage requirements, and improved reliability and backup/restore performance. With Storage Spaces Direct, Microsoft mainly recommends ReFS as the file system. This is why, if you have a S2D cluster (or plan to deploy one) and Veeam, it can be a great opportunity to host backups on the S2D cluster.

A S2D cluster provides three resiliency models: mirroring, parity and mixed resiliency. A mirrored volume is not a good option to store backups because too much capacity is consumed by resiliency (50% with 2-way mirroring, about 67% with 3-way mirroring). Mirroring is good for storing virtual machines. Parity is a good option for backups: the more storage you add, the more efficient the volume becomes. However, you need at least a 4-node S2D cluster. Mixed resiliency is also a good option because it mixes mirroring and parity, and so performance and efficiency, but it requires careful design (a sketch of such a volume follows).
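For illustration, a mixed (multi-resilient) volume is created by spanning the mirror and parity storage tiers. A sketch under the assumption that the default Performance (mirror) and Capacity (parity) tiers exist in the pool; the sizes are illustrative:

# Multi-resilient volume: a small mirror tier for write speed, a large parity tier for efficiency
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName MixedBck -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 100GB, 900GB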

In this topic, I’ll implement a S2D cluster with a dual parity volume to store Veeam backups.

4-node S2D cluster deployment

First of all, you have to deploy a S2D cluster; you can follow this topic to implement it. For this topic, I have deployed a 4-node cluster. After the operating system and drivers were installed, I ran the following PowerShell script. This script installs the required features on all nodes and enables RDMA on the network adapters whose name contains Cluster.

$Nodes = "VMSDS01","VMSDS02","VMSDS03","VMSDS04"

Foreach ($Node in $Nodes){
    Try {
        $Cim = New-CimSession -ComputerName $Node -ErrorAction Stop
        Install-WindowsFeature Failover-Clustering, FS-FileServer -IncludeManagementTools -Restart -ComputerName $Node -ErrorAction Stop | Out-Null

        Enable-NetAdapterRDMA -CimSession $Cim -Name Cluster* -ErrorAction Stop | Out-Null
    }
    Catch {
        Write-Host $($Error[0].Exception.Message) -ForegroundColor Red -BackgroundColor Green
        Exit
    }
}

Then, from a node of the cluster, I ran the following cmdlets:

$Nodes = "VMSDS01","VMSDS02","VMSDS03","VMSDS04"
$ClusIP = "10.10.0.44"
$ClusNm = "Cluster-BCK01"

Test-Cluster -Node $Nodes -Include "Storage Spaces Direct", Inventory,Network,"System Configuration"
New-Cluster -Node $Nodes -Name $ClusNm -StaticAddress $ClusIP -NoStorage
Enable-ClusterS2D

New-Volume -StoragePoolFriendlyName "*Cluster-BCK01" -FriendlyName BCK01 -FileSystem CSVFS_ReFS -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -Size 100GB

Rename-Item c:\ClusterStorage\Volume1 BCK01
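To confirm that the new volume really uses dual parity, you can query the virtual disk (a quick verification sketch):

# Parity with PhysicalDiskRedundancy 2 means dual parity
Get-VirtualDisk | Format-Table FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy, Size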

At this moment, the cluster is created and Storage Spaces Direct is enabled. The cluster is called Cluster-BCK01 (IP: 10.10.0.44) and a dual parity volume is created. Then open the permissions of the OU where the Cluster Name Object is located, and add a permission allowing the Cluster Name Object to create computer objects.

Open the Failover Cluster Manager and rename the networks to make management easier.

You can also check that you have all the enclosures and physical disks.

When S2D was enabled, a storage pool containing all physical disks was automatically created. I renamed it Backup Pool.
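The rename can also be scripted; for example, reusing the pool name pattern from the script above:

# Rename the automatically created S2D storage pool
Get-StoragePool -FriendlyName "*Cluster-BCK01" | Set-StoragePool -NewFriendlyName 'Backup Pool'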

You can also check that a Cluster Shared Volume has been created.

Next, run the following cmdlets to create the SOFS role, create a folder in the volume and share that folder.

Add-ClusterScaleOutFileServerRole -Name BCK-REPO
New-Item -Type Directory -Path '\\vmsds01\c$\ClusterStorage\BCK01' -Name VMBCK01
New-SmbShare -Name 'VMBCK01' -Path C:\ClusterStorage\BCK01\VMBCK01 -FullAccess everyone

If you go back to the cluster, you can see that a Scale-Out File Server role has been created as well as the share.

You can edit the permissions on the folder to grant specific rights to the account that will be used by Veeam Backup & Replication.
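For example, with a hypothetical service account svc-veeam, icacls grants modify rights on the folder and everything below it:

# (OI)(CI)M = modify, inherited by subfolders and files; account and domain names are hypothetical
icacls C:\ClusterStorage\BCK01\VMBCK01 /grant "MYDOMAIN\svc-veeam:(OI)(CI)M"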

Veeam Backup & Replication configuration

First of all, I create a new backup repository in Veeam Backup & Replication.

Then choose the shared folder backup repository.

Next, specify the shared folder where you want to store the backups. My SOFS role is called BCK-REPO and the share is called VMBCK01, so the path is \\BCK-REPO\VMBCK01. Also specify credentials that have permissions on the shared folder.

In the next window, you can specify advanced properties.

Then I chose not to enable the vPower NFS service because I back up only Hyper-V VMs.

To finish the backup repository creation, review the properties and click on Apply.

Run a backup on the S2D repository

To test the new backup repository, I choose to create a backup copy job.

Then choose the VM that will be in the backup copy job.

In the next screen, choose the S2D backup repository, the number of restore points and the archival settings.

Next, choose whether you want to use a WAN accelerator.

When the wizard is finished, the backup copy job starts processing. You can see data arriving in the shared folder.

When the backup is finished, you can see that the data processed and the size on disk are different. This is because Veeam leverages ReFS to reduce storage usage.

Conclusion

Microsoft Storage Spaces Direct can be used to store your virtual machines but also your backups. If you plan a S2D deployment in the disaggregated model, you can design the solution to store both VM data and backup jobs. The main disadvantage is that backups should be located on a parity volume (or mixed resiliency), which requires at least a 4-node S2D cluster.

RDS 2016 Farm: Configure File Servers for User Profile Disks

In the previous topics of this series, we deployed the RDS farm in Azure. Now we need a highly available file service to manage user profile disks (UPD). To support the high availability, I leverage Storage Spaces Direct (S2D) and Scale-Out File Server (SOFS). For more information about the deployment of S2D, you can read this topic (based on the hyperconverged model). For Remote Desktop usage, I’ll deploy the disaggregated model of S2D. In this topic, I’ll configure the file servers for User Profile Disks. This series consists of the following topics:

I’ll deploy this file service by using only PowerShell. Before following this topic, be sure that your Azure VMs have joined Active Directory and that they have two network adapters in two different subnets (one for the cluster and the other for management). I have also set static IP addresses from the Azure portal.

Deploy the cluster

First of all, I install these features on both file server nodes:

Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools

Then I install the Failover Clustering RSAT on the management VM.

Install-WindowsFeature RSAT-Clustering

Next, I test whether the cluster nodes can support Storage Spaces Direct:

Test-Cluster -Node "AZFLS0","AZFLS1" -Include "Storage Spaces Direct", Inventory,Network,"System Configuration"

If the test passes successfully, you can run the following cmdlet to deploy the cluster with the name UPD-Sto and the IP 10.11.0.29.

New-Cluster -Node "AZFLS0","AZFLS1" -Name UPD-Sto -StaticAddress 10.11.0.29 -NoStorage

Once the cluster is created, grant the Cluster Name Object (UPD-Sto) the right to create computer objects in the OU where it is located. This permission is required to create the CNO for SOFS.

Enable and configure S2D and SOFS

Now that the cluster is created, you can enable S2D (I run the following command on a file server node through PowerShell remoting).

Enable-ClusterS2D

Then I create a new volume formatted with ReFS and with a capacity of 100GB. This volume uses 2-way mirroring resiliency.

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName UPD01 -FileSystem CSVFS_REFS -Size 100GB

Now I rename the folder Volume1 in ClusterStorage to UPD-01:

rename-item C:\ClusterStorage\Volume1 UPD-01

Then I add the Scale-Out File Server role to the cluster and call it SOFS:

Add-ClusterScaleOutFileServerRole -Name SOFS

To finish, I create a folder called Profiles in the volume and share it with full access for everyone (not recommended in production). I call the share UPD$:

New-Item -Path C:\ClusterStorage\UPD-01\Profiles -ItemType Directory
New-SmbShare -Name 'UPD$' -Path C:\ClusterStorage\UPD-01\Profiles -FullAccess everyone

Now my storage is ready and I am able to reach \\SOFS.homecloud.net\UPD$.
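A quick check from the management VM that the share answers (simple verification sketch):

# Returns True when the SOFS share is reachable
Test-Path '\\SOFS.homecloud.net\UPD$'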

Next topic

In the next topic, I will deploy a session collection and configure it. Then I will add the certificate for each Remote Desktop component.

Manage Storage Space Direct from Virtual Machine Manager

In a previous topic, I showed how to implement Storage Spaces Direct on Windows Server 2016 TP2 (it is almost the same in Technical Preview 3). In that topic, I created a storage pool, a storage space and some shares from the Failover Clustering console. In this topic, I’ll show you how to do the same operations from Virtual Machine Manager Technical Preview 3.

Requirements

To follow this topic you need:

  • A Scale-Out File Server implementation. In this topic, I use Storage Spaces Direct;
  • A Virtual Machine Manager 2012 R2 Update Rollup 8 installation (on my side, I’m on Technical Preview 3).

Storage Space Direct implementation

To prepare this topic, I have deployed four virtual machines on Windows Server 2016 Technical Preview 3. These machines are in a cluster called HyperConverged.int.homecloud.net. I have installed the Hyper-V and File Server roles on these servers because it is a PoC for hyper-convergence (I’m waiting for nested Hyper-V on Windows Server 🙂). Each virtual machine is connected to 5 disks of 40GB.

Each server is connected to four networks.

  • Cluster: cluster communication;
  • Management: AD, RDP, MMC and so on;
  • Storage: dedicated network between Hyper-V and cluster for storage workloads;
  • Live-Migration: dedicated network to migrate VM from one host to another.

The Scale-Out File Server role is deployed in the cluster. I called it VMSto. VMSto is reachable from the storage network.

To finish, I have added a VMM Run As account to the Administrators group on each server.

Manage Storage Space Direct

Now I connect to Virtual Machine Manager, in the Fabric. I add a storage device (right click on Arrays, Add Storage Devices).

Next select the provider type. With Scale-Out File Server, select Windows-Based File Server.

Next, type the cluster name and select the Run As account related to the account that you added to the local Administrators group on each server.

Then the Scale-Out File Server should be discovered with 0GB capacity. This is because no storage pool has been created yet. Just click on Next.

Then select the Scale-Out File Server to place under management and click on next.

Once the storage device is added to VMM, you can navigate to File Servers and right click on the device. Select Manage Pools.

In the next window, there is the list of storage pools. Because no storage pool has been created yet, nothing appears. So click on New to create a storage pool.

Give a name to the storage pool and select a classification.

Then select disks that will be in this storage pool.

To finish you can specify the Interleave.

Once the storage pool is created, you should see it in Storage Pools window as below.

Next, run a rescan on the provider and navigate to Arrays. Now a pool is managed by VMM.

Moreover it has been added to the right classification.

Now I create a file share to store my VMs. I select Create File Share, give a name to the share and select a storage pool.

Then I specify a size for the volume, a file system, the resiliency and an allocation unit size. If you have SSDs and HDDs in the pool, VMM will ask you whether you want to enable storage tiers.

Once the share is created, a new LUN (in fact, a Cluster Shared Volume) is added under the storage pool.

In File Share view, you should have your new File Share.

Now you just have to add the share in the Hyper-V configuration as below.

Now you can deploy VMs in this share as you can see below.

Overview in Failover Clustering console

If we come back to the Failover Clustering console, you should find the storage pool, CSV and share that we created from VMM. First, if you navigate to Pools, you should have a new storage pool called Bronze-Tier01.

Then in Disks, you should have a new CSV belonging to your storage pool.

To finish, if you navigate to the Scale-Out File Server role and select the Shares tab, you should see the new file share.

Manage using PowerShell

Create the storage Pool

You can list the disks available in VMM to add them to a storage pool. For that, I used the Get-SCStoragePhysicalDisk cmdlet.
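For instance, the sketch below lists the disks with the IDs used in the script that follows; the Name property is an assumption here, pipe to Format-List * to inspect all properties:

# List the physical disks known to VMM with their IDs
Get-SCStoragePhysicalDisk | Select-Object Name, ID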

Then I use the script below to create a storage pool with the selected physical disks.

$storageArray = Get-SCStorageArray -Name "Clustered Windows Storage on HyperConverged"
$disksToAdd = @()
$disksToAdd += Get-SCStoragePhysicalDisk -ID "69d0702d-5de1-4ac4-82f2-224d1b47676c"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "a77c70bd-96df-482c-87e2-314f288e7142"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "cb94acd1-4269-4db5-bab9-42aeea1897dd"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "97dd5243-7502-48cc-9302-433288a487f3"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "e44d24ab-9e47-44bd-94ea-5d57f25d8d66"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "c77e6d97-e7c7-4d88-abd8-72ffe468418d"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "90d7408f-d7be-4aaf-b88c-7cb3c0860c2e"
$disksToAdd += Get-SCStoragePhysicalDisk -ID "5b6217ce-5eff-489c-b074-c97c64c9d1c6"
$classification = Get-SCStorageClassification -Name "Bronze"
$pool_0 = New-SCStoragePool -Name "Bronze-Tier01" -StoragePhysicalDisk $disksToAdd -StorageArray $storageArray -StorageClassification $classification

Create the file share

To create the file share in a storage pool, I use the New-SCStorageFileShare cmdlet as below.

$storageFileServer = Get-SCStorageFileServer -Name VMSto.int.HomeCloud.net
$storagePool = Get-SCStoragePool -name "Bronze-Tier01"
$storageClassification = Get-SCStorageClassification -Name "Bronze"
$storageFileShare = New-SCStorageFileShare -StorageFileServer $storageFileServer -StoragePool $storagePool -Name "Bronze01" -Description "" -SizeMB 102400 -RunAsynchronously -FileSystem "CSVFS_ReFS" -ResiliencySettingName "Mirror" -PhysicalDiskRedundancy "2" -AllocationUnitSizeKB "64" -StorageClassification $storageClassification
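To also attach the new share to a Hyper-V host from PowerShell (the equivalent of the Storage tab step described in the console walkthrough above), the Register-SCStorageFileShare cmdlet can be used; the host name is hypothetical:

# Register the share on a Hyper-V host managed by VMM
$vmHost = Get-SCVMHost -ComputerName "HV01"
Register-SCStorageFileShare -StorageFileShare $storageFileShare -VMHost $vmHost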

Whitepaper: Implement a highly available private cloud to host virtual machines

For some time, I have been writing a whitepaper about how to implement a highly available private cloud to host virtual machines. On this Christmas day, I have finished and published it. You can download it from this link.

This whitepaper explains how to implement a private cloud with Windows Azure Pack in high availability from scratch. It covers Scale-Out File Servers, SQL AlwaysOn, Virtual Machine Manager, Service Provider Foundation, NVGRE Gateway, RD Gateway and Windows Azure Pack.

The implementation starts just after the deployment of Active Directory and a PKI, so almost from scratch. I hope this document will help you to implement your own private cloud.

Merry Christmas everyone 🙂


Use Scale-Out File Server with Virtual Machine Manager

Scale-Out File Server (SOFS) is a feature introduced in Windows Server 2012. It provides a file service for applications (such as Hyper-V) in an active/active cluster thanks to SMB 3.0. Because this feature relies on Failover Clustering, disks must be connected to each node so they can be converted to Cluster Shared Volumes (CSV). These disks can be connected by iSCSI, Fibre Channel, or SAS-attached shared JBOD using Storage Spaces. With Scale-Out File Server, the more nodes you add, the better the performance; however, you cannot exceed 8 nodes. For further information, you can view this link.

Because Hyper-V is compatible with SMB 3.0, it can use Scale-Out File Server to store virtual machines. In Virtual Machine Manager 2012 R2, it is possible to manage the shares of a Scale-Out File Server cluster and use them to store virtual machines. We will see in this topic how to create the cluster and manage the shares in VMM.

Configure Scale-Out File Server

Roles installation

On each node of the cluster, the File Server role and the Failover Clustering feature have to be installed. To launch the installation, open Server Manager and click on Add Roles and Features.

If you have centralized the server management as below, select the server where you want to install roles or features.

On the roles screen, select the file server as below.

On the features screen, select Failover Clustering.

Click Next until the roles and features are installed.

Failover cluster configuration

Once you have installed the File Server role and the Failover Clustering feature, open the Failover Cluster Manager. Click on Validate Configuration as below.

Select the servers that are part of the SOFS cluster. For now I add two nodes, because I want to show you how to add a node after the cluster is created.

When creating a cluster, it is recommended to run all tests. These tests verify network, storage, and so on.

Before launching the tests, verify your network configuration and your storage. For example, my storage is as below on each node: I have three LUNs connected by iSCSI (thanks to a Synology NAS :p).

Once everything is OK, launch the tests. Anyway, if something goes wrong, the test report will tell you.

Below is the failover cluster validation report. All tests passed successfully; that is because I am on a mock-up environment 🙂. In real life, I have never seen a report like that. Anyway, when this report is OK, be sure that the Create the cluster now using the validated nodes checkbox is checked.

On the first Create Cluster Wizard screen, click Next.

Type a cluster name and enter a valid IP address.

I don’t check the Add all eligible storage to the cluster checkbox because I want to add the storage to the cluster myself.

Once you click Next, the cluster is created.

The warning below is normal: it appears because I have not configured a quorum witness. I don’t want to configure a disk witness because I use the new Windows Server 2012 feature called dynamic quorum.

Now that the cluster is created, right click on Disks and select Add Disks to a Cluster.

Select the disks that are part of the cluster.

Now that the disks are available in the cluster, it is time to convert them to Cluster Shared Volumes (CSV). Once converted to CSV, they can be used by SOFS. The same operations can be scripted, as shown below.
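A sketch with the FailoverClusters module; it promotes every available disk:

# Add all available disks to the cluster, then convert them to CSV
Get-ClusterAvailableDisk | Add-ClusterDisk
Get-ClusterResource | Where-Object ResourceType -eq 'Physical Disk' | Add-ClusterSharedVolume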

To finish the cluster configuration, open the Networks section. Rename your networks according to their usage and select the right cluster use setting. Usually, one network has to serve clients.
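Network roles can also be set from PowerShell; role 3 allows cluster and client traffic while 1 is cluster-only (the network names below are examples):

# Management serves clients; the other network carries only cluster traffic
(Get-ClusterNetwork 'Management').Role = 3
(Get-ClusterNetwork 'Cluster').Role = 1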

Add Scale-Out File Server role

Now that the cluster is set, right click on Roles and select Add Role.

Select the File Server role.

On File Server Type screen, select Scale-Out File Server for application data.

Choose a name for your client access point. This name will be added in DNS later to connect to the share.

Note: As you can see in the above screenshot, a Cluster Name Object (CNO) will be created beside the CNO of the cluster. For example, if I create a cluster called Fabrikam-SOFS, a computer object named Fabrikam-SOFS is created in Active Directory. When you then create a SOFS role called SOFS01 in this cluster, a CNO is created beside the Fabrikam-SOFS object. This is very important because the Fabrikam-SOFS object needs a delegation on its OU to create child items: in this example, it is Fabrikam-SOFS that creates the computer object SOFS01 in Active Directory.

Now that the SOFS role is added to the cluster, its status should report Running. To finish the role configuration, an IP address has to be added to the resources of SOFS01.

Add an IP address belonging to a network that allows client cluster use.
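If you prefer PowerShell, the role creation itself comes down to one cmdlet (using the name from this walkthrough):

# Create the Scale-Out File Server role with its client access point
Add-ClusterScaleOutFileServerRole -Name SOFS01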

Add a node to the cluster if needed

Before configuring VMM with the previously created Scale-Out File Server, I will show you how to add a node to a running cluster. First, right click on Nodes in Failover Cluster Manager and select Add Node.

On the Select Servers screen, add the server that you want to join to the cluster.

In a production environment, I recommend running all tests before adding a node to a running cluster.

My failover cluster validation report informs me that there is no disk witness. As above, I use a dynamic quorum 🙂.

Once you click on finish, you can add the node.

And again, the wizard reports that I have no disk witness. It is because I use dynamic quorum …
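The add-node wizard also has a scripted equivalent (a sketch; node names follow the pattern used later in this topic, and validation runs first as recommended):

# Validate with the new node included, then join it to the cluster
Test-Cluster -Node VMSOFS01, VMSOFS02, VMSOFS03
Add-ClusterNode -Name VMSOFS03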

Configure VMM to store virtual machines on SOFS

Now that the cluster is ready to use, it is time to store some virtual machines on my Scale-Out File Server. Open the Fabric tab and right click on Providers to Add Storage Devices.

Select Windows-Based File Server as the storage provider type.

Provide the name of the Scale-Out File Server role in the cluster. A Run As account with appropriate rights is needed.

Select Storage Device and click next.

Select the storage devices to place under management. Note that you can create a classification for this storage.

To add the storage devices to VMM, click on Finish.

Click on Arrays in the Fabric, select your storage device and click on Create File Share.

Select the File Server, type a name and select the volume. You can create a classification for this file share. Repeat this operation for each volume that you have.

Now the storage volumes are available in VMM.

If you open the Failover Cluster Manager, you can view the previously created file share as below.

Now right click on your host, select Properties and open the Storage tab. Click on Add and select the file shares that you have created.

Now you can create a Virtual Machine on these shares as below.

As you can see below, the VMTEST01 VM is stored on my Scale-Out File Server.

Test high availability of SOFS

Before the test, FSLUN01 is owned by VMSOFS03, and my VMTEST01 VM resides on FSLUN01. I will shut down the VMSOFS03 machine to see how VMTEST01 behaves.

I have launched the task manager to see whether there is any interruption.

Now that VMSOFS03 is shut down, the new FSLUN01 owner is VMSOFS02.

It is impressive: even though the owner of the Cluster Shared Volume (CSV) changes abruptly, VMTEST01 stays online. It seems like magic!

Conclusion

Scale-Out File Server is a feature introduced in Windows Server 2012. It makes it possible to create active/active clustered file servers for application usage such as Hyper-V. SOFS uses SMB 3.0 technology, which enables Hyper-V to store virtual machines on SOFS shares. With SOFS, the more nodes you add, the more performance you gain; however, you cannot exceed 8 nodes. As you have seen in this topic, even if a node is shut down, virtual machines keep running. The main advantage of SOFS is that it is less expensive than a traditional SAN and storage fabric solution. The best scenario for SOFS is a SAS-attached shared JBOD with Storage Spaces: a good and cheap storage solution.