Failover Cluster – Tech-Coffee

The cluster resource could not be deleted since it is a core resource

Last month, I wanted to change the witness of a cluster from a Cloud Witness to a File Share Witness. The cluster is a 2-node S2D cluster and, as discussed with Microsoft, a Cloud Witness should not be used with a 2-node cluster. If the Cloud Witness fails (for example when the subscription has expired), the cluster can crash. I’ve experienced this in production. So, for all my customers, I decided to change the Cloud Witness to a File Share Witness.

Naively, I tried to change the Cloud Witness to a File Share Witness. But it doesn’t work 🙂 You’ll get this message: The cluster resource could not be deleted since it is a core resource. In this topic I’ll show you the issue and the resolution.

Issue

As you can see below, my cluster is using a Cloud Witness to add an additional vote.

So, I decided to replace the Cloud Witness with a File Share Witness. I right-click the cluster | More Actions | Configure Cluster Quorum Settings.

Then I choose Select the quorum witness.

I select Configure a file share witness.

I specify the file share path as usual.

To finish, I click Next, and it should replace the Cloud Witness with the File Share Witness.

Actually, no: you get the following error message.

If you check the witness state, it is offline.

This issue occurs because cluster core resources can’t be removed. So how do I remove the Cloud Witness?

Resolution

To remove the Cloud Witness, choose again to configure the cluster quorum and this time select Advanced Quorum Configuration.

Then select Do not configure a quorum witness.

Voilà, the Cloud Witness is gone. Now you can add the File Share Witness.

In the below screenshot you can see the configuration to add a file share witness to the cluster.

Now the file share witness is added to the cluster.

Conclusion

If you want to remove a Cloud Witness or a File Share Witness, you first have to configure the quorum without a witness. Then you can add the witness type you want.
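
For reference, the same two-step change can be scripted with the Set-ClusterQuorum cmdlet. This is a minimal sketch of the approach described above; the file share path is a placeholder, adapt it to your environment.

# Step 1: remove the current witness (the quorum falls back to node votes only)
Set-ClusterQuorum -NoWitness

# Step 2: add the new witness, here a File Share Witness (placeholder path)
Set-ClusterQuorum -FileShareWitness "\\FileServer\ClusterWitness$"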

Cluster Health Service in Windows Server 2016

Before Windows Server 2016, the alerting and the monitoring of the cluster were handled by monitoring tools such as System Center Operations Manager (SCOM). The monitoring tool used WMI, PowerShell scripts, performance counters or whatever else to get the health of the cluster. In Windows Server 2016, Microsoft has added the Health Service to the cluster, which provides metrics and fault information. Currently, the Health Service is enabled only when using Storage Spaces Direct and in no other scenario. When you enable Storage Spaces Direct in the cluster, the Health Service is enabled automatically as well.

The Health Service aggregates monitoring information (faults and metrics) from all nodes in the cluster. This information is available from a single point and can be consumed through PowerShell or an API. The Health Service can raise alerts in real time regarding events in the cluster. These alerts contain the severity, the description, the recommended action and the location information related to the fault domain. The Health Service raises alerts for several faults, as you can see below:

The rollup monitors can help to find the root cause of a fault. For example, for the server monitor to be healthy, all underlying monitors must also be healthy. If an underlying monitor is not healthy, the parent monitor shows an alert.

In the above example, a drive is down in a node. So, the Health Service raises an alert for the drive and the parent node monitor is in an error state.

In the next version, the Health Service will be smarter. The cluster monitor will be “only” in a warning state, because the cluster still has enough nodes to run the service and, after all, a single drive down is not a severe issue for the service. This feature should be called severity masking.

The Health Service is also able to gather metrics about the cluster such as IOPS, capacity, CPU usage and so on.

Use Cluster Health Service

Show Metrics

To show the metrics gathered by the Health Service, run the cmdlet Get-StorageHealthReport as below:

Get-StorageSubSystem *Cluster* | Get-StorageHealthReport

As you can see, you have consolidated information such as the available memory, the IOPS, the capacity, the average CPU usage and so on. We can imagine a tool that gathers this information from the API several times per minute to show charts or pies.
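
As a rough illustration of that idea, here is a naive polling loop; the interval and the output formatting are arbitrary choices, not part of any product.

# Print the consolidated health report at a fixed interval (illustrative sketch)
while ($true) {
    Get-StorageSubSystem *Cluster* | Get-StorageHealthReport
    Start-Sleep -Seconds 15   # arbitrary polling interval
}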

Show Alerts

To get current alerts in the cluster, run the following cmdlet:

Get-StorageSubSystem *Cluster* | Debug-StorageSubSystem

To show you screenshots, I run the cmdlet against my lab Storage Spaces Direct cluster, which does not follow best practices. The following alert is raised because I don’t have enough reserve capacity:

Then I stop a node in my cluster:

I have several issues in my cluster! The Health Service has detected that the node is down and that some cables are disconnected. This is because my Mellanox adapters are direct-attached to the other node.

SCOM Dashboard

This dashboard is not yet available at the time of writing, but in the future Microsoft should release the below SCOM dashboard, which leverages the Cluster Health Service.

Another example: DataOn Must

DataOn is a company that provides hardware compliant with Storage Spaces (Direct). DataOn has also released dashboards called DataOn Must, which are based on the Health Service. DataOn Must is currently only available when you buy DataOn hardware. Thanks to the Health Service API, we can have fancy and readable charts and pies about the health of the Storage Spaces Direct cluster.

I would like to thank Cosmos Darwin for reviewing this topic and for giving me the opportunity to talk about severity masking.

Fault Domain Awareness with Storage Spaces Direct

Fault Domain Awareness is a new feature of Failover Clustering since Windows Server 2016. Fault Domain Awareness brings a new approach to high availability which is more flexible and cloud oriented. In previous editions, high availability was based only on the node: if a node failed, the resources were moved to another node. With Fault Domain Awareness, the point of failure can be a node (as previously), a chassis, a rack or a site. This enables greater flexibility and a more modern approach to high availability. Cloud-oriented datacenters require this kind of flexibility to move the point of failure of the cluster from a single node to an entire rack containing several nodes.

In Microsoft’s definition, a fault domain is a set of hardware that shares the same point of failure. The default fault domain in a cluster is the node. You can also create fault domains based on chassis, racks and sites. Moreover, a fault domain can belong to another fault domain. For example, you can create rack fault domains and configure them to specify that their parent is a site.

Storage Spaces Direct (S2D) can leverage Fault Domain Awareness to spread block replicas across fault domains (unfortunately, it is not yet possible to spread block replicas across sites, because Storage Spaces Direct doesn’t support stretched clusters with Storage Replica). Let’s think about a three-way mirroring implementation of S2D: this means that we have the data three times (the original and two replicas). S2D is able, for example, to place the original data on a first rack while each replica is copied to another rack. In this way, even if you lose a rack, the storage keeps working.

In the S2D documentation, Microsoft no longer states the number of nodes required for each resiliency type, but the number of fault domains:

  • 2-Way Mirroring: two fault domains
  • 3-way Mirroring: three fault domains
  • Erasure Coding: at least four fault domains

These statements are really important for design considerations. If you plan to use Fault Domain Awareness with racks and you plan to use erasure coding, you also need at least four racks. Each rack must have the same number of nodes. So, in the case where there are four racks, the number of nodes in the cluster can be 4, 8, 12 or 16. By using Fault Domain Awareness, you lose some deployment flexibility, but you increase the availability capabilities.

Configure Fault Domain Awareness

This section introduces how to configure fault domains in the cluster. It is strongly recommended to apply this configuration before you enable Storage Spaces Direct!

By using PowerShell

In this example, I show you how to configure fault domains in a two-node cluster. Creating fault domains is not really useful for a two-node cluster, but I just want to show you how to create them in the cluster configuration.

Before running the below cmdlets, I initialized the $CIM variable by using the following command (Cluster-Hyv01 is the name of my cluster):

$CIM = New-CimSession -ComputerName Cluster-Hyv01

Then I gather fault domain information by using the Get-ClusterFaultDomain cmdlet:
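
This is a sketch of the call, reusing the $CIM session created above; by default the output lists one fault domain of type Node per cluster member.

# List the existing fault domains of the remote cluster
Get-ClusterFaultDomain -CimSession $CIM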

As you can see above, a fault domain is automatically created for each node. To create an additional fault domain, you can use the New-ClusterFaultDomain cmdlet as below.
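
For example, to create the site, rack and chassis fault domains used in the rest of this topic (a sketch; I assume the -FaultDomainType parameter name, so check the syntax against your module version):

# Create a site, a rack and a chassis fault domain on the remote cluster
New-ClusterFaultDomain -Name Lyon -FaultDomainType Site -CimSession $CIM
New-ClusterFaultDomain -Name Rack-22U -FaultDomainType Rack -CimSession $CIM
New-ClusterFaultDomain -Name Chassis-Fabric -FaultDomainType Chassis -CimSession $CIM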

If I run the Get-ClusterFaultDomain cmdlet again, you can see each fault domain.

Then I run the following cmdlets to set the fault domain parents:

Set-ClusterFaultDomain -Name Rack-22U -Parent Lyon
Set-ClusterFaultDomain -Name Chassis-Fabric -Parent Rack-22U
Set-ClusterFaultDomain -Name pyhyv01 -Parent Chassis-Fabric
Set-ClusterFaultDomain -Name pyhyv02 -Parent Chassis-Fabric

In the Failover Cluster Manager, you can see the result by opening the Nodes tab. As you can see below, each node belongs to Rack-22U and to the site Lyon.

By using XML

You can also declare your physical infrastructure by using an XML file, as below:

<Topology>
    <Site Name="Lyon" Location="Lyon 8e">
        <Rack Name="Rack-22U" Location="Restroom">
            <Node Name="pyhyv01" Location="Rack 6U" />
            <Node Name="pyhyv02" Location="Rack 12U" />
        </Rack>
    </Site>
</Topology>

Once your topology is written, you can configure your cluster with the XML File:

$xml = Get-Content <XML File> | Out-String
Set-ClusterFaultDomainXML -XML $xml

Conclusion

Fault Domain Awareness is a great feature to improve the availability of your infrastructure, especially with Storage Spaces Direct. Fault domains can be based on racks instead of nodes. This means that you can lose a higher number of nodes and keep the service running. On the other hand, you have to be careful during the design phase because an equivalent number of nodes must be installed in each rack. If you need erasure coding, you require at least four racks.

Understand Failover Cluster Quorum

This topic aims to explain the quorum configuration in a Failover Cluster. As part of my job, I work with Hyper-V clusters where the quorum is not well configured, and so my customers don’t get the expected behavior when an outage occurs. I work especially on Hyper-V clusters, but the following topic applies to most Failover Cluster configurations.

What’s a Failover Cluster Quorum

A Failover Cluster Quorum configuration specifies the number of failures that a cluster can support in order to keep working. Once the threshold limit is reached, the cluster stops working. The most common failures in a cluster are nodes that stop working or nodes that can’t communicate anymore.

Imagine that the quorum doesn’t exist and you have a two-node cluster. Now there is a network problem and the two nodes can’t communicate. If there is no quorum, what prevents both nodes from operating independently and taking ownership of the disks on each side? This situation is called split-brain. The quorum exists to avoid split-brain and prevent corruption on disks.

The quorum is based on a voting algorithm. Each node in the cluster has a vote. The cluster keeps working while more than half of the voters are online. This is the quorum (or the majority of votes). When there are too many failures and not enough online voters to constitute a quorum, the cluster stops working.
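
To make the arithmetic explicit, here is a tiny helper; it is not a cluster cmdlet, just an illustration of the majority rule applied to the examples that follow.

# Majority needed for a given number of votes: floor(votes / 2) + 1
function Get-QuorumMajority { param([int]$Votes) [math]::Floor($Votes / 2) + 1 }

Get-QuorumMajority -Votes 2   # 2 -> losing one node loses the quorum
Get-QuorumMajority -Votes 3   # 2 -> one node can fail
Get-QuorumMajority -Votes 4   # 3 -> still only one node can fail (without dynamic quorum)
Get-QuorumMajority -Votes 5   # 3 -> two nodes can fail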

Below is a two-node cluster configuration:

The majority of votes is 2. So a two-node cluster like the one above is not really resilient, because if you lose a node, the cluster is down.

Below is a three-node cluster configuration:

Now you add a node to your cluster, so you have a three-node cluster. The majority of votes is still 2. But because there are three nodes, you can lose a node and the cluster keeps working.

Below is a four-node cluster configuration:

Despite its four nodes, this cluster can support only one node failure before losing the quorum: the majority of votes is 3, so you can lose only one node.

On a five-node cluster, the majority of votes is still 3, so you can lose two nodes before the cluster stops working, and so on. As you can see, the majority of votes must remain online for the cluster to keep working, and this is why it is recommended to have an odd number of votes. But sometimes we want only a two-node cluster for applications that don’t require more nodes (such as Virtual Machine Manager, SQL AlwaysOn and so on). In this case we add a disk witness, a file share witness or, in Windows Server 2016, a Cloud Witness.

Failover Cluster Quorum Witness

As said before, it is recommended to have an odd number of votes. But sometimes we don’t want an odd number of nodes. In this case, a disk witness, a file share witness or a cloud witness can be added to the cluster. This witness also has a vote. So when there is an even number of nodes, the witness gives an odd total number of votes. Below are the requirements and recommendations for each witness type (except the Cloud Witness):

Disk witness

Description:
  • Dedicated LUN that stores a copy of the cluster database
  • Most useful for clusters with shared (not replicated) storage

Requirements and recommendations:
  • Size of LUN must be at least 512 MB
  • Must be dedicated to cluster use and not assigned to a clustered role
  • Must be included in clustered storage and pass storage validation tests
  • Cannot be a disk that is a Cluster Shared Volume (CSV)
  • Basic disk with a single volume
  • Does not need to have a drive letter
  • Can be formatted with NTFS or ReFS
  • Can be optionally configured with hardware RAID for fault tolerance
  • Should be excluded from backups and antivirus scanning

File share witness

Description:
  • SMB file share that is configured on a file server running Windows Server
  • Does not store a copy of the cluster database
  • Maintains cluster information only in a witness.log file
  • Most useful for multisite clusters with replicated storage

Requirements and recommendations:
  • Must have a minimum of 5 MB of free space
  • Must be dedicated to the single cluster and not used to store user or application data
  • Must have write permissions enabled for the computer object for the cluster name

The following are additional considerations for a file server that hosts the file share witness:

  • A single file server can be configured with file share witnesses for multiple clusters.
  • The file server must be on a site that is separate from the cluster workload. This allows equal opportunity for any cluster site to survive if site-to-site network communication is lost. If the file server is on the same site, that site becomes the primary site, and it is the only site that can reach the file share.
  • The file server can run on a virtual machine if the virtual machine is not hosted on the same cluster that uses the file share witness.
  • For high availability, the file server can be configured on a separate failover cluster.

So below you can find again a two-node cluster, this time with a witness:


Now that there is a witness, you can lose a node and keep the quorum: even if a node is down, the cluster keeps working. So when you have an even number of nodes, a quorum witness is required. But to keep an odd number of votes, you should not implement a quorum witness when you have an odd number of nodes.

Quorum configuration

Below you can find the four possible quorum configurations (taken from TechNet):

  • Node Majority (recommended for clusters with an odd number of nodes)
    • Can sustain failures of half the nodes (rounding up) minus one. For example, a seven node cluster can sustain three node failures.
  • Node and Disk Majority (recommended for clusters with an even number of nodes).
    • Can sustain failures of half the nodes (rounding up) if the disk witness remains online. For example, a six node cluster in which the disk witness is online could sustain three node failures.
    • Can sustain failures of half the nodes (rounding up) minus one if the disk witness goes offline or fails. For example, a six node cluster with a failed disk witness could sustain two (3-1=2) node failures.
  • Node and File Share Majority (for clusters with special configurations)
    • Works in a similar way to Node and Disk Majority, but instead of a disk witness, this cluster uses a file share witness.
    • Note that if you use Node and File Share Majority, at least one of the available cluster nodes must contain a current copy of the cluster configuration before you can start the cluster. Otherwise, you must force the starting of the cluster through a particular node. For more information, see “Additional considerations” in Start or Stop the Cluster Service on a Cluster Node.
  • No Majority: Disk Only (not recommended)
    • Can sustain failures of all nodes except one (if the disk is online). However, this configuration is not recommended because the disk might be a single point of failure.

Stretched Cluster Scenario

Unfortunately (I don’t like stretched clusters in Hyper-V scenarios), some customers have a stretched cluster between two datacenters. And the most common mistake I see, made to save money, is the below scenario:

So the customer tells me: OK, I’ve followed the recommendation, because I have four nodes in my cluster but I have added a witness to obtain an odd number of votes. So let’s start the production. The cluster runs for a while, and then one day Room 1 is under water. So you lose Room 1:

In this scenario you should also have stretched storage, so if you have implemented a disk witness it should move to Room 2. But in the above case you have lost the majority of votes, and so the cluster stops working (sometimes, with some luck, the cluster keeps working because the disk witness has time to fail over, but that is luck). So when you implement a stretched cluster, I recommend the below scenario:

In this scenario, even if you lose a room, the cluster keeps working. Yes, I know, three rooms are expensive, but I did not recommend that you build a stretched cluster 🙂 (in the Hyper-V case). Fortunately, in Windows Server 2016, the quorum witness can be hosted in Microsoft Azure (Cloud Witness).

Dynamic Quorum (Windows Server 2012 feature)

Dynamic Quorum assigns votes to nodes dynamically to avoid losing the majority of votes, so the cluster can keep running even with a single node (known as last-man standing). Let’s take the above example of a four-node cluster without a quorum witness. I said that the quorum is 3 votes, so without Dynamic Quorum, if you lose two nodes, the cluster is down.

Now I enable the Dynamic Quorum. The majority of votes is recomputed automatically based on the running nodes. Let’s take the four-node example again:

So, why implement a witness, especially for a stretched cluster? Because Dynamic Quorum works great when the failures are sequential, not simultaneous. In the stretched cluster scenario, if you lose a room, the failure is simultaneous and the Dynamic Quorum doesn’t have time to recalculate the majority of votes. Moreover, I have seen strange behavior with Dynamic Quorum, especially with two-node clusters. This is why, in Windows Server 2012, I always disabled the Dynamic Quorum when I didn’t use a quorum witness.

The Dynamic Quorum has been enhanced in Windows Server 2012 R2, which introduced the Dynamic Witness. This feature calculates whether the quorum witness has a vote. There are two cases:

  • If there is an even number of nodes in the cluster with Dynamic Quorum enabled, the Dynamic Witness gives the quorum witness a vote.
  • If there is an odd number of nodes in the cluster with Dynamic Quorum enabled, the Dynamic Witness removes the vote from the quorum witness.

So, since Windows Server 2012 R2, Microsoft recommends always implementing a witness in a cluster and letting the Dynamic Quorum decide for you.

The Dynamic Quorum is enabled by default since Windows Server 2012. In the below example, there is a four-node cluster on Windows Server 2016, but the behavior is the same.

I verify that the Dynamic Quorum is enabled, as well as the Dynamic Witness:
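
A sketch of these checks: DynamicQuorum and WitnessDynamicWeight are properties of the cluster object, and DynamicWeight shows the vote currently in use on each node.

# 1 means enabled / has a vote, 0 means disabled / no vote
(Get-Cluster).DynamicQuorum
(Get-Cluster).WitnessDynamicWeight

# Per-node view: NodeWeight is the assigned vote, DynamicWeight the vote currently in use
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight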

The Dynamic Quorum and the Dynamic Witness are indeed enabled. Because I have four nodes, the witness has a vote, and this is why the Dynamic Witness is enabled. If you want to disable the Dynamic Quorum, you can run this command:

(Get-Cluster).DynamicQuorum = 0

To finish, Microsoft has enhanced the Dynamic Quorum by adjusting the votes of the online nodes to keep an odd number of votes. First, the cluster plays with the Dynamic Witness to keep an odd majority of votes. Then, if it can’t adjust the number of votes with the Dynamic Witness, it removes a vote from a running node.

For example, you have a four-node stretched cluster and you have lost your quorum witness. Now you have two nodes in Room 1 and two nodes in Room 2. The cluster will remove a vote from one node to keep a majority in one room. In this way, even if you lose a node, the cluster keeps working.

Cloud Quorum Witness (Windows Server 2016 feature)

By implementing a Cloud Quorum Witness, you avoid spending money on a third room in the stretched cluster case. Below is the scenario:

The Cloud Witness, hosted in Microsoft Azure, also has one vote. In this way, you again have an odd number of votes. For that, you need an existing storage account in Microsoft Azure, along with an access key.

Now you just have to configure the quorum as you would for a standard witness. Select Configure a Cloud Witness when asked.

Then specify the Azure Storage Account and a storage key.
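
For reference, the same configuration can be scripted; this is a minimal sketch where the storage account name and the access key are placeholders.

# Configure a Cloud Witness with an Azure storage account (placeholder values)
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<primary access key>"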

At the end of the configuration, the Cloud Witness should be online.

Conclusion

In conclusion, I recommend the following when you configure the quorum in a failover cluster:

  • Prior to Windows Server 2012 R2, always keep an odd number of votes
    • In case of an even number of nodes, implement a witness
    • In case of an odd number of nodes, do not implement a witness
  • Since Windows Server 2012 R2, always implement a quorum witness
    • Dynamic Quorum manages the votes assigned to the nodes
    • Dynamic Witness manages the vote assigned to the quorum witness
  • In case of a stretched cluster, implement the witness in a third room or use Microsoft Azure.

Upgrade your SOFS to Windows vNext with Rolling Cluster Upgrade

The next release of Windows Server will provide a new feature called Rolling Cluster Upgrade. In other words, it will be possible to have nodes running Windows Server 2012 R2 and Windows Server vNext in the same cluster. This will ease the migration of all nodes in a cluster to the next Windows version, and so the upgrade of the cluster itself.

Rolling Cluster Upgrade

The below schema explains which steps to perform to upgrade the cluster:

N.B: Once your cluster is in Mixed-OS mode, you have to use the Failover Clustering console from a Windows Server vNext node.

In this topic, I will upgrade my Scale-Out File Server cluster to Windows Server vNext. This is a three-node cluster connected to some LUNs over iSCSI. These three nodes are running Windows Server 2012 R2 with the latest updates.

Upgrade nodes to Windows Server vNext

First of all, you have to evict the node which will be upgraded:
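
If you prefer PowerShell over the console, this is a sketch of the eviction; the node name is a placeholder.

# Evict the node that will be upgraded (placeholder node name)
Remove-ClusterNode -Name "SOFS-Node1"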

Now you can mount the Windows Server Technical Preview ISO (you can download it from here) and click on Install now.

Next I choose to install Windows Server Technical Preview with the graphical interface.

Then to upgrade the system choose the Upgrade option.

Once the setup has verified the compatibility, you are warned by the below message. Just click on Next.

Next, the system upgrades. You have to wait for the installation to complete 🙂.

Once the server is upgraded to Windows Server vNext, open the Failover Clustering console from this node and connect to your cluster. Next, right-click on Nodes and choose Add Node.

Specify the server name and click on Add.

And tada, the node is back in the cluster.
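
For reference, the PowerShell equivalent of the Add Node wizard would be something like this; the node name is again a placeholder.

# Add the freshly upgraded node back into the cluster (placeholder node name)
Add-ClusterNode -Name "SOFS-Node1"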

Now you have to repeat this for each node in the cluster. As soon as you add the first Windows Server vNext node, the cluster is in Mixed-OS mode.

Upgrade the cluster functional level

Now that each node is running Windows Server vNext, we can upgrade the cluster functional level. So connect to a cluster node, open a PowerShell command line and run the below command:
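
The cmdlet in question is Update-ClusterFunctionalLevel; a minimal call looks like this.

# Upgrade the cluster functional level; this is irreversible once completed
Update-ClusterFunctionalLevel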

Once this command has run, it is irreversible and the cluster can contain only Windows Server vNext nodes. If I run the below command, we can see that the cluster functional level is now 9.
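
A possible way to check it:

# Should return 9 once all nodes run Windows Server vNext and the level has been updated
Get-Cluster | Select-Object Name, ClusterFunctionalLevel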

Issues found

The first issue that I found is related to DNS. I don’t know why, but the cluster account no longer had the authorization to change the DNS entry of the cluster role:

So I navigated to the DNS console and added the permission for the cluster account on the DNS entry:

After this, the above error did not occur anymore.

The last but not least issue is that I can no longer add my SOFS cluster to VMM. It is not a surprise since it is not supported by VMM 🙂.
