Deploy Veeam Cloud Connect for large environments in Microsoft Azure

Veeam Cloud Connect is a solution to store backups and archives in a second datacenter such as Microsoft Azure. Thanks to this technology, we can easily follow the 3-2-1 backup rule (3 backups; 2 different media; 1 offsite). Last time I talked about Veeam Cloud Connect, I deployed all Veeam roles within a single VM. This time I'm going to deploy Veeam Cloud Connect in Microsoft Azure with the roles spread across different Azure VMs. Moreover, some roles, such as the Veeam Cloud Gateway, will be deployed in a high availability setup.

Before I begin, I'd like to thank Pierre-Francois Guglielmi – Veeam Alliances System Engineer (@pfguglielmi) for his time. Thank you for your review, your English corrections and your help.

What is Veeam Cloud Connect

Veeam Cloud Connect provides an easy way to copy your backups to an offsite location based on a public cloud (such as Microsoft Azure), whether for backup copies or for archival purposes. Instead of investing money in another datacenter to store backup copies, you can leverage Veeam Cloud Connect (VCC) to send these backup copies to Microsoft Azure. VCC exists in the form of two templates that you can find in the Microsoft Azure Marketplace:

  • Veeam Cloud Connect for Service Providers
  • Veeam Cloud Connect for the Enterprise

The first one is for service providers with several customers who want to deliver Backup-as-a-Service offerings using the Veeam Cloud Connect technology. The provider deploys the solution in a public cloud and delivers the service to clients. The second is dedicated to companies willing to build similar Backup-as-a-Service offerings internally, leveraging the public cloud to send backup copies offsite. For this topic, I'll work with Veeam Cloud Connect for the Enterprise, but the technology is the same.

Veeam Cloud Connect is a Veeam Backup & Replication server with Cloud Connect features unlocked by a specific license file. When deploying this kind of solution, you have the following roles:

  • Microsoft Active Directory Domain Controller (optional)
  • Veeam Cloud Connect server
  • Veeam Cloud Gateway
  • Veeam backup repositories
  • Veeam WAN Accelerator (optional)

Microsoft Active Directory Domain Controller

A domain controller is not a mandatory role for the Veeam Cloud Connect infrastructure, but it can make server and credential management easier. If you plan to establish a site-to-site VPN from your on-premises environment to Microsoft Azure, you can deploy domain controllers within Azure, in the same forest as the existing domain controllers, and add all Azure VMs to a domain. This way, you can use your existing credentials to manage servers, apply existing GPOs and create specific service accounts for Veeam managed by Active Directory. It is up to you: if you don't deploy a domain controller within Azure, you can still deploy the VCC infrastructure, but then you'll have to manage servers one by one.

Veeam Cloud Connect server

Veeam Cloud Connect server is a Veeam Backup & Replication server with Cloud Connect features. This is the central point to manage and deploy Veeam Cloud Connect infrastructure components. From this component, you can deploy Veeam Cloud Gateway, WAN accelerator, backup repositories and manage backup copies.

Veeam Cloud Gateway

The Veeam Cloud Gateway component is the entry point of your Veeam Cloud Connect infrastructure. When you choose to send a backup copy to this infrastructure, you specify the public IP or DNS name of the Veeam Cloud Gateway server(s). This service runs on Azure VM(s) running Windows Server with a public IP address to allow secure inbound and outbound connections to the on-premises environment. If you choose to deploy several Veeam Cloud Gateway servers for high availability, you have two ways to provide a single entry point:

  • A round-robin record at your public DNS registrar: one DNS name for all A records bound to the Veeam Cloud Gateways' public IP addresses.
  • A Traffic Manager profile in front of all Veeam Cloud Gateway servers

Because Veeam Cloud Gateway has its own load balancing mechanism, you can't deploy an Azure Load Balancer, an F5 appliance or any other kind of load balancer in front of the Veeam Cloud Gateways.

Veeam Backup repositories

This is the storage system that stores the backups. It can be a single Windows Server with a single disk or a storage space. Don't forget that in Azure, the maximum size of a single data disk is 4TB (as of June 2017). You can also leverage the Scale-Out Backup Repository functionality, where several backup repositories are managed by Veeam as a single logical repository. Finally, and this is the scenario I'm going to present later in this topic, you can store backups on a Scale-Out File Server based on a Storage Spaces Direct cluster. This solution provides SMB 3.1.1 access to the storage.

Veeam WAN Accelerator

The Veeam WAN Accelerator is the same component already available in Veeam Backup & Replication. This service optimizes the traffic between source and destination by sending only new unique blocks not already known at the destination. To leverage this feature, you need a pair of WAN Accelerator servers. The source WAN Accelerator creates digests for data blocks and the target synchronizes these digests and populates a global cache. During the next transfer, the source WAN Accelerator compares the digests of the blocks in the new incremental backup file with the already known digests. If nothing has changed, the block is not copied over the network and the data is taken from the global cache on the target, or from the target backup repositories, which in that case act as an infinite cache.

Architecture Overview

For this topic, I decided to separate roles on different Azure VMs. I’ll have 5 kinds of Azure VMs:

  • Domain Controllers
  • Veeam Cloud Gateways
  • Veeam Cloud Connect
  • Veeam WAN Accelerator
  • File Servers (Storage Spaces Direct)

First, I deploy two Domain Controllers to ease management. This is completely optional. All domain controllers are members of an Azure Availability Set.

The Veeam Cloud Gateway servers are located behind a Traffic Manager profile. Each Veeam Cloud Gateway has its own public IP address. The Traffic Manager profile distributes the traffic across the public IP addresses of the Veeam Cloud Gateway servers. The JSON template provided below allows you to deploy from 1 to 9 Cloud Gateway servers depending on your needs. All Veeam Cloud Gateways are added to an Availability Set to support a 99.95% SLA.

Then I deploy two Veeam Cloud Connect VMs: one active and one passive. I add both Azure VMs to an Availability Set. If the first VM crashes, the backup configuration is restored to the second VM.

The WAN Accelerator is not in an Availability Set because you can add only one WAN Accelerator per tenant. You can deploy as many WAN accelerators as required.

Finally, the backup repository is based on Storage Spaces Direct. I deploy 4 Azure VMs to leverage parity. I choose parity because my S2D managed disks are based on SSD (premium disks). If you want more performance or if you choose standard disks, I recommend mirroring instead of parity. You could use a single VM to store backups to save money, but for this demonstration I wanted to use Storage Spaces Direct just to show that it is possible. However, there is one limitation with S2D in Azure: for better performance, managed disks are recommended, and an Availability Set with Azure VMs using managed disks supports only three fault domains. That means that in a four-node S2D cluster, two nodes will be in the same fault domain, so there is a chance that two nodes fail simultaneously. But dual parity (or 3-way mirroring) supports two fault domain failures.

Azure resources: Github

I have published a JSON template in my GitHub repository to deploy the infrastructure described above. You can use this template to deploy the infrastructure for your lab or production environment. In this example, I won't explain how to deploy the Azure resources because this template does it automatically.
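
If you prefer to start the template deployment from PowerShell rather than the portal, a minimal sketch with the AzureRM module could look like the following. The resource group name, location and file paths are placeholders of mine, not values from the template:

# Log in to Azure and create a resource group for the deployment
Login-AzureRmAccount
New-AzureRmResourceGroup -Name 'VeeamCloudConnect' -Location 'West Europe'

# Deploy the JSON template downloaded from the GitHub repository
New-AzureRmResourceGroupDeployment -ResourceGroupName 'VeeamCloudConnect' `
                                   -TemplateFile 'C:\Temp\VCC-Template.json' `
                                   -TemplateParameterFile 'C:\Temp\VCC-Parameters.json'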

Active Directory

Active Directory is not mandatory for this kind of solution. I have deployed domain controllers to make management of servers and credentials easier. To configure domain controllers, I started the Azure VMs where domain controller roles will be deployed. In the first VM, I run the following PowerShell cmdlets to deploy the forest:

# Initialize the Data disk
Initialize-Disk -Number 2

#Create a volume on disk
New-Volume -DiskNumber 2 -FriendlyName Data -FileSystem NTFS -DriveLetter E

#Install DNS and ADDS features
Install-WindowsFeature -Name AD-Domain-Services, DNS -IncludeManagementTools

# Forest deployment
Import-Module ADDSDeployment
Install-ADDSForest -CreateDnsDelegation:$false `
                   -DatabasePath "E:\NTDS" `
                   -DomainMode "WinThreshold" `
                   -DomainName "VeeamCloudConnect.net" `
                   -DomainNetbiosName "HOMECLOUD" `
                   -ForestMode "WinThreshold" `
                   -InstallDns:$true `
                   -LogPath "E:\NTDS" `
                   -NoRebootOnCompletion:$false `
                   -SysvolPath "E:\SYSVOL" `
                   -Force:$true

Then I run these cmdlets for additional domain controllers:

# Initialize data disk
Initialize-Disk -Number 2

# Create a volume on disk
New-Volume -DiskNumber 2 -FriendlyName Data -FileSystem NTFS -DriveLetter E

# Install DNS and ADDS features
Install-WindowsFeature -Name AD-Domain-Services, DNS -IncludeManagementTools

# Add domain controller to forest
Import-Module ADDSDeployment
Install-ADDSDomainController -NoGlobalCatalog:$false `
                             -CreateDnsDelegation:$false `
                             -Credential (Get-Credential) `
                             -CriticalReplicationOnly:$false `
                             -DatabasePath "E:\NTDS" `
                             -DomainName "VeeamCloudConnect.net" `
                             -InstallDns:$true `
                             -LogPath "E:\NTDS" `
                             -NoRebootOnCompletion:$false `
                             -SiteName "Default-First-Site-Name" `
                             -SysvolPath "E:\SYSVOL" `
                             -Force:$true

Once the Active Directory is ready, I add each Azure VM to the domain by using the following cmdlet:

Add-Computer -Credential homecloud\administrator -DomainName VeeamCloudConnect.net -Restart

Configure Storage Spaces Direct

I have written several topics on Tech-Coffee about Storage Spaces Direct; you can find for example this topic or this one. They go into more detail about Storage Spaces Direct if you need more information.

To configure Storage Spaces Direct in Azure, I started all the file server VMs. Then in each VM I ran the following cmdlets:

# Rename the vNIC connected to the internal subnet to Management
Rename-NetAdapter -Name "Ethernet 3" -NewName Management

# Rename the vNIC connected to the cluster subnet to Cluster
Rename-NetAdapter -Name "Ethernet 2" -NewName Cluster

# Disable DNS registration for the cluster vNIC
Set-DnsClient -InterfaceAlias *Cluster* -RegisterThisConnectionsAddress $False

# Install required features
Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools -Restart

Once you have run these commands on each server, you can deploy the cluster:

# Validate cluster prerequisites
Test-Cluster -Node AZFLS00, AZFLS01, AZFLS02, AZFLS03 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"

#Create the cluster
New-Cluster -Node AZFLS00, AZFLS01, AZFLS02, AZFLS03 -Name Cluster-BCK01 -StaticAddress 10.11.0.160

# Set the cluster quorum to Cloud Witness (choose another Azure location)
Set-ClusterQuorum -CloudWitness -AccountName StorageAccount -AccessKey "AccessKey"

# Change the CSV cache to 1024MB per CSV
(Get-Cluster).BlockCacheSize=1024

# Rename network in the cluster
(Get-ClusterNetwork "Cluster Network 1").Name="Management"
(Get-ClusterNetwork "Cluster Network 2").Name="Cluster"

# Enable Storage Spaces Direct
Enable-ClusterS2D -Confirm:$False

# Create a volume and rename the folder Volume1 to Backup
New-Volume -StoragePoolFriendlyName "*Cluster-BCK01*" -FriendlyName Backup -FileSystem CSVFS_ReFS -ResiliencySettingName parity -PhysicalDiskRedundancy 2 -Size 100GB
Rename-Item C:\ClusterStorage\Volume1 Backup
new-item -type directory C:\ClusterStorage\Backup\HomeCloud

Then open the Active Directory Users and Computers console (dsa.msc) and edit the permissions of the OU where the Cluster Name Object is located. Grant the permission to create computer objects to the CNO (Cluster-BCK01 in this example) on the OU.
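
If you prefer the command line over the GUI for this permission, a hedged example with dsacls is shown below. The OU distinguished name is a placeholder; the domain names come from the forest deployed earlier:

# Allow the CNO (Cluster-BCK01$) to create computer objects in the OU hosting the cluster object
# Replace the OU distinguished name with your own
dsacls "OU=Servers,DC=VeeamCloudConnect,DC=net" /G "HOMECLOUD\Cluster-BCK01$:CC;computer"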

Next, run the following cmdlets to complete the file server’s configuration:

# Add Scale-Out File Server to cluster
Add-ClusterScaleOutFileServerRole -Name BackupEndpoint

# Create a share
New-SmbShare -Name 'HomeCloud' -Path C:\ClusterStorage\Backup\HomeCloud -FullAccess everyone

First start of the Veeam Cloud Connect VM

The first time you connect to the Veeam Cloud Connect VM, you should see the following screen. Just specify the license file for Veeam Cloud Connect and click Next. The next screen shows the requirements to run a Veeam Cloud Connect infrastructure.

Deploy Veeam Cloud Gateway

The first component I deploy is the Veeam Cloud Gateway. In the Veeam Backup & Replication console (in the Veeam Cloud Connect VM), navigate to Cloud Connect, then select Add Gateway.

In the first screen, just click on Add New…

Then specify the name of the first gateway and provide a description.

In the next screen, enter credentials that have administrative permissions in the Veeam Cloud Gateway VM. For that, I created an account in Active Directory and I added it to local administrators of the VM.

Then Veeam tells you that it has to deploy a component on the target host. Just click Apply.

The following screen shows a successful deployment:

Next you have a summary of the operations applied to the target server and what has been installed.

Now you are back to the first screen. This time select the host you just added. You can change the external port. For this test I kept the default value.

Then choose “This server is located behind NAT” and specify the public IP address of the machine. You can find this information in the Azure Portal on the Azure VM blade. Here again I left the default internal port.

This time, Veeam tells you that it has to install Cloud Gateway components.

The following screenshot shows a successful deployment:

Repeat these steps for each Cloud Gateway. In this example, I have two Cloud Gateways:

To complete the Cloud Gateway configuration, open the Azure Portal and edit the Traffic Manager profile. Add an endpoint for each Cloud Gateway you deployed and select the right public IP address. (Sorry, I didn't find how to loop the creation of endpoints in the JSON template.)

Because I have two Cloud Gateways, I end up with two Traffic Manager endpoints with the same weight.
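
If you would rather script the endpoint creation than click through the portal, a sketch with the AzureRM PowerShell module could look like this. The resource group, profile name and public IP resource names are assumptions on my side, and the Cloud Gateway public IPs are assumed to have a DNS label (required for Azure endpoints):

# Add one Traffic Manager endpoint per Cloud Gateway public IP (names are placeholders)
$ResourceGroup = 'VeeamCloudConnect'
$ProfileName   = 'VCC-TrafficManager'
$GatewayPips   = 'AZVCG0-pip', 'AZVCG1-pip'

foreach ($PipName in $GatewayPips) {
    $Pip = Get-AzureRmPublicIpAddress -ResourceGroupName $ResourceGroup -Name $PipName
    New-AzureRmTrafficManagerEndpoint -Name $PipName `
                                      -ProfileName $ProfileName `
                                      -ResourceGroupName $ResourceGroup `
                                      -Type AzureEndpoints `
                                      -TargetResourceId $Pip.Id `
                                      -EndpointStatus Enabled
}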

Add the backup repository

In this step, we add the backup repository. Open the Veeam Backup & Replication console (in Veeam Cloud Connect VM) and navigate to Backup Infrastructure. Then select Add Repository.

Enter a name and a description for your backup repository.

Next select Shared folder because Storage Spaces Direct with SOFS is based on … shared folder.

Then specify the UNC path to the share that you have previously created (Storage Spaces Direct section) and provide credentials with privileges.

In the next screen you can limit the maximum number of concurrent tasks, the data rates and set some advanced parameters.

Then I choose not to enable vPower NFS because it's only used in VMware vSphere environments.

The following steps are not mandatory. I just clean up the default configuration. First I remove the default tenant.

Then I change the Configuration Backup task’s repository to the one created previously. For that I navigate to Configuration Backup:

Then I specify that I want to store the configuration backups on my S2D cluster. It is highly recommended to encrypt the configuration backup so that credentials are saved in it.

Finally, I remove the default backup repository.

Deploy Veeam WAN Accelerator (Optional)

To add a Veeam WAN Accelerator, navigate to Backup Infrastructure and select Add WAN Accelerator.

In the next screen, click Add New…

Specify the FQDN of the target host and type in a description.

Then select credentials with administrative permissions on the target host.

In the next screen, Veeam tells you that a component has to be installed.

This screen shows a successful deployment.

Next you have a summary screen which provides a summary of the configuration of the target host.

Now you are back to the first screen. Just select the server that you just added and provide a description. I choose to leave the default traffic port and the number of streams.

Select a cache device with enough capacity for your needs.

Finally you can review your settings. If all is ok, just click Apply.

You can add as many WAN Accelerators as needed. One WAN Accelerator can be used by several tenants, but only one WAN Accelerator can be bound to each tenant.

Prepare the tenant

Now you can add a tenant. Navigate to Cloud Connect tab and select Add tenant.

Provide a user name, a password and a description to your tenant. Then choose Backup storage (cloud backup repository).

In the next screen you can define the maximum number of concurrent tasks and a bandwidth limit.

Then click Add to bind a backup repository to the tenant.

Specify the cloud repository name, the backup repository, the capacity of the cloud repository and the WAN Accelerator.

Once the cloud repository is configured, you can review the settings in the last screen.

Now the Veeam Cloud Connect infrastructure is ready. The enterprise can now connect to Veeam Cloud Connect in Azure.

Connect On-Premises to Veeam Cloud Connect

To connect to the Veeam Cloud Connect infrastructure from On-Premises, open your Veeam Backup & Replication console. Then in Backup infrastructure, navigate to Service Providers. Click Add Service Provider.

Type in the FQDN of your Traffic Manager profile and provide a description. Select the external port you chose during the Veeam Cloud Gateway configuration (I left mine at the default 6180).

In the next screen, enter the credentials to connect to your tenant.

If the credentials are correct, you should see the available cloud repositories.

Now you can create a backup copy job to Microsoft Azure.

Enter a job name and description and configure the copy interval.

Add virtual machine backups to copy to Microsoft Azure and click Next.

In the next screen you can set archival settings and how many restore points you want to keep. You can also configure some advanced settings.

If you have a WAN Accelerator on-premises, you can select the source WAN Accelerator.

Then you can configure scheduling options for the backup copy job.

When the backup copy job configuration is complete, the job starts and you should see backup copies being created in the Veeam Cloud Connect infrastructure.

Conclusion

This topic introduced a “large” Veeam Cloud Connect infrastructure within Azure. All components can be deployed in a single VM (or two) for small environments, or spread across VMs as described in this post for large infrastructures. If you have several branch offices and want to send backup data to an offsite location, it can be the right solution instead of a tape library.

Deploy a SMB storage solution for Hyper-V with StarWind VSAN free

StarWind VSAN free provides a free Software-Defined Storage (SDS) solution for two nodes. With this solution, you are able to deliver highly available storage based on Direct-Attached Storage devices. On top of StarWind VSAN free, you can deploy a Microsoft Failover Cluster with Scale-Out File Server (SOFS). So you can deploy a converged SDS solution with Windows Server 2016 Standard Edition and StarWind VSAN free. It is an affordable solution for your Hyper-V VM storage.

In this topic, we’ll see how to deploy StarWind VSAN free on two nodes based on Windows Server 2016 Standard Core edition. Then we’ll deploy a Failover Clustering with SOFS to deliver storage to Hyper-V nodes.

Architecture overview

This solution should be deployed on physical servers with physical disks (NVMe, SSD or HDD etc.). For the demonstration, I have used two virtual machines. Each virtual machine has:

  • 4 vCPU
  • 4GB of memory
  • 1x OS disk (60GB dynamic) – Windows Server 2016 Standard Core edition
  • 1x Data disk (127GB dynamic)
  • 3x vNIC (1x Management / iSCSI, 1x Heartbeat, 1x Synchronization)

Both nodes are deployed and joined to the domain.

Node preparation

On both nodes, I run the following cmdlets to install the features and prepare a volume for StarWind:

# Install FS-FileServer, Failover Clustering and MPIO
Install-WindowsFeature FS-FileServer, Failover-Clustering, MPIO -IncludeManagementTools -Restart

# Set the iSCSI service startup to automatic
Get-Service MSiSCSI | Set-Service -StartupType Automatic

# Start the iSCSI service
Start-Service MSiSCSI

# Create a volume with disk
New-Volume -DiskNumber 1 -FriendlyName Data -FileSystem NTFS -DriveLetter E

# Enable automatic claiming of iSCSI devices
Enable-MSDSMAutomaticClaim -BusType iSCSI

StarWind installation

Because I have installed nodes in Core edition, I install and configure components from PowerShell and command line. You can download StarWind VSAN free from this link. To install StarWind from command line, you can use the following parameters:

Starwind-v8.exe /SILENT /COMPONENTS="comma separated list of component names" /LICENSEKEY="path to license file"

Current list of components:

  • Service: StarWind iSCSI SAN server.
  • service\haprocdriver: HA Processor Driver, used to support devices created with older versions of the software.
  • service\starflb: Loopback Accelerator, used with Windows Server 2012 and later to accelerate iSCSI operations when the client resides on the same machine as the server.
  • service\starportdriver: StarPort driver, required for the operation of Mirror devices.
  • Gui: Management Console.
  • StarWindXDll: StarWindX COM object.
  • StarWindXDll\powerShellEx: StarWindX PowerShell module.

To install StarWind, I have run the following command:

C:\temp\Starwind-v8.exe /SILENT /COMPONENTS="Service,service\starflb,service\starportdriver,StarWindXDll,StarWindXDll\powerShellEx" /LICENSEKEY="c:\temp\StarWind_Virtual_SAN_Free_License_Key.swk"

I run this command on both nodes. After this command is run, StarWind is installed and ready to be configured.

StarWind configuration

StarWind VSAN free provides a 30-day trial of the management console. After the 30 days, you have to manage the solution from PowerShell, so I decided to configure the solution from PowerShell:

Import-Module StarWindX

try
{
    $server = New-SWServer -host 10.10.0.54 -port 3261 -user root -password starwind

    $server.Connect()

    $firstNode = new-Object Node

    $firstNode.ImagePath = "My computer\E"
    $firstNode.ImageName = "VMSTO01"
    $firstNode.Size = 65535
    $firstNode.CreateImage = $true
    $firstNode.TargetAlias = "vmsan01"
    $firstNode.AutoSynch = $true
    $firstNode.SyncInterface = "#p2=10.10.100.55:3260"
    $firstNode.HBInterface = "#p2=10.10.100.55:3260"
    $firstNode.CacheSize = 64
    $firstNode.CacheMode = "wb"
    $firstNode.PoolName = "pool1"
    $firstNode.SyncSessionCount = 1
    $firstNode.ALUAOptimized = $true
    
    #
    # device sector size. Possible values: 512 or 4096(May be incompatible with some clients!) bytes. 
    #
    $firstNode.SectorSize = 512
	
	#
	# 'SerialID' should be between 16 and 31 symbols. If it not specified StarWind Service will generate it. 
	# Note: Second node always has the same serial ID. You do not need to specify it for second node
	#
	$firstNode.SerialID = "050176c0b535403ba3ce02102e33eab" 
    
    $secondNode = new-Object Node

    $secondNode.HostName = "10.10.0.55"
    $secondNode.HostPort = "3261"
    $secondNode.Login = "root"
    $secondNode.Password = "starwind"
    $secondNode.ImagePath = "My computer\E"
    $secondNode.ImageName = "VMSTO01"
    $secondNode.Size = 65535
    $secondNode.CreateImage = $true
    $secondNode.TargetAlias = "vmsan02"
    $secondNode.AutoSynch = $true
    $secondNode.SyncInterface = "#p1=10.10.100.54:3260"
    $secondNode.HBInterface = "#p1=10.10.100.54:3260"
    $secondNode.ALUAOptimized = $true
        
    $device = Add-HADevice -server $server -firstNode $firstNode -secondNode $secondNode -initMethod "Clear"
    
    $syncState = $device.GetPropertyValue("ha_synch_status")

    while ($syncState -ne "1")
    {
        #
        # Refresh device info
        #
        $device.Refresh()

        $syncState = $device.GetPropertyValue("ha_synch_status")
        $syncPercent = $device.GetPropertyValue("ha_synch_percent")

        Start-Sleep -m 2000

        Write-Host "Synchronizing: $($syncPercent)%" -foreground yellow
    }
}
catch
{
    Write-Host "Exception $($_.Exception.Message)" -foreground red 
}

$server.Disconnect()

Once this script is run, two HA images are created and they are synchronized. Now we have to connect to this device through iSCSI.

iSCSI connection

To connect to the StarWind devices, I use iSCSI. I choose to set iSCSI from PowerShell to automate the deployment. In the first node, I run the following cmdlets:

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.55 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.54
Get-IscsiTarget | Connect-IscsiTarget -isMultipathEnabled $True

In the second node, I run the following cmdlets:

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.54 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.55
Get-IscsiTarget | Connect-IscsiTarget -isMultiPathEnabled $True

You can run the command iscsicpl on a Server Core installation to show the iSCSI GUI. You should have something like this:

PS: If you have a 1Gbps network, set the load balance policy to Failover and leave the 127.0.0.1 path active. If you have a 10Gbps network, choose the Round Robin policy.
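
On a Core installation you can also set this default policy from PowerShell with the MPIO module; a minimal sketch:

# Default MPIO load balance policy: FOO (Fail Over Only) for 1Gbps networks
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO

# For 10Gbps networks, use Round Robin instead
# Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR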

Configure Failover Clustering

Now that a shared volume is available to both nodes, you can create the cluster:

Test-Cluster -node VMSAN01, VMSAN02

Review the report and if all is ok, you can create the cluster:

New-Cluster -Node VMSAN01, VMSAN02 -Name Cluster-STO01 -StaticAddress 10.10.0.65 -NoStorage

Navigate to Active Directory (dsa.msc) and locate the OU where the Cluster Name Object is located. Edit the permissions on this OU to allow the Cluster Name Object to create computer objects:

Now we can create the Scale-Out File Server role:

Add-ClusterScaleOutFileServerRole -Name VMStorage01

Then we can initialize the StarWind disk and convert it into a CSV. After that, we can create a SMB share:

# Initialize the disk
get-disk |? OperationalStatus -like Offline | Initialize-Disk

# Create a CSVFS NTFS partition
New-Volume -DiskNumber 3 -FriendlyName VMSto01 -FileSystem CSVFS_NTFS

# Rename the link in C:\ClusterStorage
Rename-Item C:\ClusterStorage\Volume1 VMSTO01

# Create a folder
new-item -Type Directory -Path C:\ClusterStorage\VMSto01 -Name VMs

# Create a share
New-SmbShare -Name 'VMs' -Path C:\ClusterStorage\VMSto01\VMs -FullAccess everyone

The cluster looks like that:

Now from Hyper-V, I am able to store VMs in this cluster, like this:
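
As an illustration, creating a VM directly on the SOFS share from a Hyper-V host could be sketched as follows; the VM name and sizes are placeholders:

# Create a VM whose configuration files and virtual disk live on the SOFS share
New-VM -Name 'TestVM01' `
       -Generation 2 `
       -MemoryStartupBytes 2GB `
       -Path '\\VMStorage01\VMs' `
       -NewVHDPath '\\VMStorage01\VMs\TestVM01\TestVM01.vhdx' `
       -NewVHDSizeBytes 60GB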

Conclusion

StarWind VSAN free and Windows Server 2016 Standard Edition provide an affordable SDS solution. Thanks to this solution, you can deploy a 2-node storage cluster that provides SMB 3.1.1 shares, so Hyper-V can use these shares to host virtual machines.

Host Veeam backup on Storage Spaces Direct

Storage Spaces Direct (S2D) is well known for hosting virtual machines in the disaggregated or hyperconverged model. But S2D can also be used for backup purposes as a backup repository. You can plan to implement a S2D cluster in the disaggregated model to host virtual machines but also to store backups. Because Veeam Backup & Replication can leverage a repository based on SMB shares, Veeam backups can be hosted on a S2D cluster through Scale-Out File Server (SOFS).

Veeam Backup & Replication 9.5 provides advanced ReFS integration to deliver faster synthetic full backup creation, reduce storage requirements and improve reliability as well as backup and restore performance. With Storage Spaces Direct, Microsoft mainly recommends ReFS as the file system. This is why, if you have a S2D cluster (or plan to deploy one) and Veeam, it can be a great opportunity to host backups on the S2D cluster.

A S2D cluster provides three resiliency models: mirroring, parity and mixed resiliency. A mirrored volume is not a good option to store backups because too much space is used for resiliency (50% in 2-way mirroring, even more in 3-way mirroring). Mirroring is good for storing virtual machines. Parity is a good option to store backups: the more storage you add, the more efficient it becomes. However, you need at least a 4-node S2D cluster. Mixed resiliency is also a good option because you mix mirroring and parity, and so performance and efficiency, but it requires careful design.

In this topic, I’ll implement a S2D cluster with a dual parity volume to store Veeam backups.

4-node S2D cluster deployment

First of all, you have to deploy a S2D cluster. You can follow this topic to implement the cluster. For this post, I have deployed a 4-node cluster. After the operating system and drivers are installed, I run the following PowerShell script. This script installs the required features on all nodes and enables RDMA on the network adapters whose name contains Cluster.

$Nodes = "VMSDS01","VMSDS02","VMSDS03","VMSDS04"

Foreach ($Node in $Nodes){
    Try {
        $Cim = New-CimSession -ComputerName $Node -ErrorAction Stop
        Install-WindowsFeature Failover-Clustering, FS-FileServer -IncludeManagementTools -Restart -ComputerName $Node -ErrorAction Stop | Out-Null

        Enable-NetAdapterRDMA -CimSession $Cim -Name Cluster* -ErrorAction Stop | Out-Null
    }
    Catch {
        Write-Host $($Error[0].Exception.Message) -ForegroundColor Red -BackgroundColor Green
        Exit
    }
}

Then from a node of the cluster I have run the following cmdlets:

$Nodes = "VMSDS01","VMSDS02","VMSDS03","VMSDS04"
$ClusIP = "10.10.0.44"
$ClusNm = "Cluster-BCK01"

Test-Cluster -Node $Nodes -Include "Storage Spaces Direct", Inventory,Network,"System Configuration"
New-Cluster -Node $Nodes -Name $ClusNm -StaticAddress $ClusIP -NoStorage
Enable-ClusterS2D

New-Volume -StoragePoolFriendlyName "*Cluster-BCK01*" -FriendlyName BCK01 -FileSystem CSVFS_ReFS -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -Size 100GB

Rename-Item c:\ClusterStorage\Volume1 BCK01

At this point, the cluster is created and Storage Spaces Direct is enabled. The cluster is called Cluster-BCK01 (IP: 10.10.0.44) and a dual parity volume is created. Then open the permissions of the OU where the Cluster Name Object of the cluster is located and add a permission allowing the cluster name object to create computer objects.

Open the Failover Cluster Manager and rename the networks to ease management.

You can also check that you can see all the enclosures and physical disks.

When S2D has been enabled, a storage pool with all physical disks has been automatically created. I have renamed it to Backup Pool.
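
If you want to rename the pool from PowerShell instead of the Failover Cluster Manager, a sketch is shown below; it assumes the default pool name generated by Enable-ClusterS2D:

# Rename the storage pool created by Enable-ClusterS2D (default name assumed)
Get-StoragePool -FriendlyName "S2D on Cluster-BCK01" | Set-StoragePool -NewFriendlyName "Backup Pool"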

You can also check that the Cluster Shared Volume has been created correctly.

Next, run the following cmdlets to create the SOFS role, create a folder in the volume and create a share on this folder.

Add-ClusterScaleOutFileServerRole -Name BCK-REPO
new-item -Type Directory -Path '\\vmsds01\c$\ClusterStorage\BCK01' -Name VMBCK01
New-SmbShare -Name 'VMBCK01' -Path C:\ClusterStorage\BCK01\VMBCK01 -FullAccess everyone

If you go back to the cluster, you can see that a Scale-Out File Server role has been created as well as the share.

You can edit the permissions of the folder to give specific permissions to the account that will be used in Veeam Backup & Replication.
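
As an example, granting modify rights on the share folder from the command line could look like the following; the service account name is hypothetical:

# Grant the Veeam service account modify rights on the backup folder (account name is a placeholder)
icacls "C:\ClusterStorage\BCK01\VMBCK01" /grant "HOMECLOUD\svc-veeam:(OI)(CI)M"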

Veeam Backup & Replication configuration

First of all, I create a new backup repository in Veeam Backup & Replication.

Then choose the shared folder backup repository.

Next specify the shared folder where you want to store the backups. My SOFS role is called BCK-REPO and the share is called VMBCK01, so the path is \\BCK-REPO\VMBCK01. Also specify credentials that have permissions on the shared folder.

In the next window, you can specify advanced properties.

Then I choose not to enable the vPower NFS service because I back up only Hyper-V VMs.

To finish the backup repository creation, review the properties and click on Apply.

Run a backup on the S2D repository

To test the new backup repository, I choose to create a backup copy job.

Then choose the VM that will be in the backup copy job.

In the next screen, choose the S2D backup repository, the number of restore points and the archival settings.

Next, choose whether you want to use a WAN Accelerator or not.

When the wizard is finished, the backup copy job starts processing. You can see data arriving in the shared folder.

When the backup is finished, you can see that the data processed and the size on disk are different. This is because Veeam leverages ReFS to reduce storage usage.

Conclusion

Microsoft Storage Spaces Direct can be used to store not only your virtual machines but also your backups. If you plan a S2D deployment in the disaggregated model, you can design the solution to store both VM data and backup jobs. The main disadvantage is that backups should be located on a parity volume (or a mixed resiliency volume), which requires at least a 4-node S2D cluster.

RDS 2016 Farm: Configure File Servers for User Profile Disks

In the previous topics of this series, we have deployed the RDS Farm in Azure. Now we need a file service in high availability to manage user profile disks (UPD). To support the high availability, I leverage Storage Spaces Direct (S2D) and Scale-Out File Server (SOFS). For more information about the deployment of S2D, you can read this topic (based on hyperconverged model). For Remote Desktop usage, I’ll deploy a disaggregated model of S2D. In this topic, I’ll configure file servers for User Profile Disks. This series consists of the following topics:

I'll deploy this file service by using only PowerShell. Before following this topic, be sure that your Azure VMs have joined Active Directory and that they have two network adapters in two different subnets (one for the cluster and the other for management). I have also set static IP addresses from the Azure portal.

Deploy the cluster

First of all, I install these features on both file server nodes:

Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools

Then I install the Failover Clustering RSAT tools on the management VM.

Install-WindowsFeature RSAT-Clustering

Next, I test whether the cluster nodes can support Storage Spaces Direct:

Test-Cluster -Node "AZFLS0","AZFLS1" -Include "Storage Spaces Direct", Inventory,Network,"System Configuration"

If the test passes successfully, you can run the following cmdlet to deploy the cluster with the name UPD-Sto and the IP 10.11.0.29.

New-Cluster -Node "AZFLS0","AZFLS1" -Name UPD-Sto -StaticAddress 10.11.0.29 -NoStorage

Once the cluster is created, grant the Cluster Name Object (UPD-Sto) the right to create computer objects in the OU where it is located. This permission is required to create the computer object for the SOFS role.

Enable and configure S2D and SOFS

Now that the cluster is created, you can enable S2D (I run the following cmdlet on a file server node by using PowerShell remoting).

Enable-ClusterS2D

Then I create a new volume formatted with ReFS and with a capacity of 100GB. This volume uses 2-way mirroring resiliency.

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName UPD01 -FileSystem CSVFS_REFS -Size 100GB

Now I rename the folder Volume1 in ClusterStorage to UPD-01:

rename-item C:\ClusterStorage\Volume1 UPD-01

Then I add the Scale-Out File Server role to the cluster and call it SOFS.

Add-ClusterScaleOutFileServerRole -Name SOFS

To finish, I create a folder called Profiles in the volume and share it with Full Access for Everyone (not recommended in production). I call the share UPD$.

New-Item -Path C:\ClusterStorage\UPD-01\Profiles -ItemType Directory
New-SmbShare -Name 'UPD$' -Path C:\ClusterStorage\UPD-01\Profiles -FullAccess everyone

Now my storage is ready and I am able to reach \\SOFS.homecloud.net\UPD$
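
A quick sanity check from the management VM to make sure the share answers before using it for User Profile Disks:

# Verify that the SOFS share is reachable
Test-Path '\\SOFS.homecloud.net\UPD$'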

Next topic

In the next topic, I will deploy a session collection and configure it. Then I will add the certificate for each Remote Desktop component.

Upgrade your SOFS to Windows vNext with Rolling Cluster Upgrade

The next release of Windows Server will provide a new feature called Rolling Cluster Upgrade. In other words, it will be possible to have nodes running Windows Server 2012 R2 and Windows Server vNext in the same cluster. This eases the migration of all nodes in a cluster to the next Windows version, and so the upgrade of the cluster itself.

Rolling Cluster Upgrade

The schema below explains the steps to perform to upgrade the cluster:

N.B: Once your cluster is in Mixed-OS mode, you have to use the Failover Clustering console from a Windows Server vNext node.

In this topic, I will upgrade my Scale-Out File Server cluster to Windows Server vNext. This is a three-node cluster connected to some LUNs through iSCSI. These three nodes are running Windows Server 2012 R2 with the latest updates.

Upgrade nodes to Windows Server vNext

First of all, you have to evict the node which will be upgraded:
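
The screenshot shows the eviction from the Failover Clustering console; a PowerShell equivalent could be the following, with a placeholder node name:

# Evict the node that will be upgraded
Remove-ClusterNode -Name SOFS01 -Force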

Now you can mount the Windows Server Technical Preview ISO (you can download it from here) and click on Install now.

Next I choose to install Windows Server Technical Preview with the graphical interface.

Then to upgrade the system choose the Upgrade option.

Once the setup has verified the compatibility, you are warned by the below message. Just click on Next.

Next, the system is upgrading. You have to wait for the installation to complete.

Once the server is upgraded to Windows Server vNext, open the Failover Clustering console from this node and connect to your cluster. Next right click on Nodes and choose Add Node.

Specify the server name and click on Add.
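
If you prefer PowerShell over the console, adding the upgraded node back can be sketched like this (same placeholder node name):

# Add the upgraded node back into the cluster
Add-ClusterNode -Name SOFS01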

And tada, the node is back in the cluster.

Now you have to repeat this for each node in the cluster. When you add the first Windows Server vNext node to the cluster, the cluster enters Mixed-OS mode.

Upgrade the cluster functional level

Now that each node is running Windows Server vNext, we can upgrade the cluster functional level. So connect to a cluster node, open a PowerShell command line and run the below command:
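
The screenshot with the exact command is not reproduced here; the cmdlet that performs this operation is Update-ClusterFunctionalLevel:

# Upgrade the cluster functional level (this operation is irreversible)
Update-ClusterFunctionalLevel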

Once this command is run, it is irreversible and the cluster can contain only Windows Server vNext nodes. If I run the below command, we can see that the cluster functional level is now 9.
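
The check shown in the screenshot is most likely a query of the ClusterFunctionalLevel property, for example:

# Display the cluster functional level (9 for Windows Server vNext in this preview)
(Get-Cluster).ClusterFunctionalLevel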

Issues found

The first issue that I found is related to DNS. I don't know why, but the cluster account no longer had the authorization to change the DNS entry of the cluster role:

So I navigated to the DNS console and added the permission for the cluster account on the DNS entry:

After this, the above error did not occur anymore.

The last but not least issue is that I can no longer add my SOFS cluster to VMM. It is not a surprise since it is not supported by VMM yet.
