Understand Failover Cluster Quorum

This topic explains the quorum configuration in a failover cluster. As part of my job, I work with Hyper-V clusters where the quorum is not well configured, so my customers do not get the expected behavior when an outage occurs. I work mainly on Hyper-V clusters, but the following applies to most failover cluster configurations.

What’s a Failover Cluster Quorum?

A failover cluster quorum configuration specifies the number of failures that a cluster can sustain while continuing to work. Once this threshold is exceeded, the cluster stops working. The most common failures in a cluster are nodes that stop working or nodes that can no longer communicate.

Imagine that the quorum doesn’t exist and you have a two-node cluster. Now a network problem occurs and the two nodes can no longer communicate. If there is no quorum, what prevents both nodes from operating independently and taking ownership of the disks on each side? This situation is called split-brain. The quorum exists to avoid split-brain and so prevent disk corruption.

The quorum is based on a voting algorithm. Each node in the cluster has a vote. The cluster keeps working as long as more than half of the voters are online. This is the quorum (the majority of votes). When there are too many failures and not enough online voters to constitute a quorum, the cluster stops working.

Below is a two-node cluster configuration:

The majority of votes is 2. So a two-node cluster like the one above is not really resilient: if you lose a node, the cluster is down.

Below is a three-node cluster configuration:

Now you add a node to your cluster, so you have a three-node cluster. The majority of votes is still 2. But because there are three nodes, you can lose one node and the cluster keeps working.

Below is a four-node cluster configuration:

Despite its four nodes, this cluster can sustain only one node failure before losing the quorum: the majority of votes is 3, so you can lose only one node.

On a five-node cluster, the majority of votes is still 3, so you can lose two nodes before the cluster stops working, and so on. As you can see, the majority of votes must remain online for the cluster to keep working, and this is why it is recommended to have an odd number of votes. But sometimes we want only a two-node cluster for applications that don’t require more nodes (such as Virtual Machine Manager, SQL Server AlwaysOn, and so on). In this case we add a disk witness, a file share witness or, in Windows Server 2016, a cloud witness.
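
To make the arithmetic concrete, here is a minimal PowerShell sketch of the voting math. The function name Get-QuorumMath is mine, for illustration only; it is not a built-in cmdlet:

# Majority = more than half of the votes
function Get-QuorumMath {
    param([int]$Votes)
    $Majority = [math]::Floor($Votes / 2) + 1
    [pscustomobject]@{
        Votes             = $Votes
        Majority          = $Majority
        FailuresTolerated = $Votes - $Majority
    }
}

# Clusters with two to five voters (nodes, or nodes plus a witness)
2..5 | ForEach-Object { Get-QuorumMath -Votes $_ }

Running this reproduces the examples above: 2 voters tolerate 0 failures, 3 voters tolerate 1, 4 voters tolerate 1, and 5 voters tolerate 2.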

Failover Cluster Quorum Witness

As said before, it is recommended to have an odd number of votes. But sometimes we don’t want an odd number of nodes. In this case, a disk witness, a file share witness or a cloud witness can be added to the cluster. This witness also has a vote, so when there is an even number of nodes, the witness provides an odd number of votes. Below are the requirements and recommendations for each witness type (except the cloud witness):

Disk witness

  Description:
  • Dedicated LUN that stores a copy of the cluster database
  • Most useful for clusters with shared (not replicated) storage

  Requirements and recommendations:
  • Size of LUN must be at least 512 MB
  • Must be dedicated to cluster use and not assigned to a clustered role
  • Must be included in clustered storage and pass storage validation tests
  • Cannot be a disk that is a Cluster Shared Volume (CSV)
  • Basic disk with a single volume
  • Does not need to have a drive letter
  • Can be formatted with NTFS or ReFS
  • Can be optionally configured with hardware RAID for fault tolerance
  • Should be excluded from backups and antivirus scanning

File share witness

  Description:
  • SMB file share that is configured on a file server running Windows Server
  • Does not store a copy of the cluster database
  • Maintains cluster information only in a witness.log file
  • Most useful for multisite clusters with replicated storage

  Requirements and recommendations:
  • Must have a minimum of 5 MB of free space
  • Must be dedicated to the single cluster and not used to store user or application data
  • Must have write permissions enabled for the computer object for the cluster name (a sketch follows the considerations list below)

The following are additional considerations for a file server that hosts the file share witness:

  • A single file server can be configured with file share witnesses for multiple clusters.
  • The file server must be on a site that is separate from the cluster workload. This allows equal opportunity for any cluster site to survive if site-to-site network communication is lost. If the file server is on the same site, that site becomes the primary site, and it is the only site that can reach the file share.
  • The file server can run on a virtual machine if the virtual machine is not hosted on the same cluster that uses the file share witness.
  • For high availability, the file server can be configured on a separate failover cluster.
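
To illustrate the write-permission requirement mentioned above, here is a minimal sketch of creating such a share on the file server. The share path and the cluster name object CONTOSO\HVCLUSTER$ are hypothetical; adapt them to your environment:

# Create the folder that will hold the witness.log
New-Item -Path 'C:\Shares\HVClusterWitness' -ItemType Directory

# Share it and grant the cluster name computer object (note the trailing $) full access
New-SmbShare -Name 'HVClusterWitness' -Path 'C:\Shares\HVClusterWitness' -FullAccess 'CONTOSO\HVCLUSTER$'

# Remember that the NTFS permissions on the folder must also allow this account to write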

So below is again a two-node cluster, this time with a witness:


Now that there is a witness, you can lose a node and keep the quorum: even with a node down, the cluster keeps working. So when you have an even number of nodes, a quorum witness is required. But to keep an odd number of votes, when you have an odd number of nodes, you should not implement a quorum witness (this rule changes with Dynamic Witness in Windows Server 2012 R2, as described below).

Quorum configuration

Below are the four possible quorum configurations (taken from TechNet); PowerShell examples follow the list:

  • Node Majority (recommended for clusters with an odd number of nodes)
    • Can sustain failures of half the nodes (rounding up) minus one. For example, a seven-node cluster can sustain three node failures.
  • Node and Disk Majority (recommended for clusters with an even number of nodes)
    • Can sustain failures of half the nodes (rounding up) if the disk witness remains online. For example, a six-node cluster in which the disk witness is online could sustain three node failures.
    • Can sustain failures of half the nodes (rounding up) minus one if the disk witness goes offline or fails. For example, a six-node cluster with a failed disk witness could sustain two (3-1=2) node failures.
  • Node and File Share Majority (for clusters with special configurations)
    • Works in a similar way to Node and Disk Majority, but instead of a disk witness, this cluster uses a file share witness.
    • Note that if you use Node and File Share Majority, at least one of the available cluster nodes must contain a current copy of the cluster configuration before you can start the cluster. Otherwise, you must force the starting of the cluster through a particular node. For more information, see “Additional considerations” in Start or Stop the Cluster Service on a Cluster Node.
  • No Majority: Disk Only (not recommended)
    • Can sustain failures of all nodes except one (if the disk is online). However, this configuration is not recommended because the disk might be a single point of failure.

Stretched Cluster Scenario

Unfortunately (I don’t like stretched clusters in Hyper-V scenarios), some customers have a stretched cluster between two datacenters. And the most common mistake I see, made to save money, is the scenario below:

So the customer tells me: OK, I’ve followed the recommendation; I have four nodes in my cluster, and I have added a witness to obtain an odd number of votes. So let’s start production. The cluster runs for a while, and then one day Room 1 is underwater. So you lose Room 1:

In this scenario you should also have stretched storage, so if you have implemented a disk witness it should move to Room 2. But in the above case you have lost the majority of votes, so the cluster stops working (sometimes, with some luck, the cluster keeps working because the disk witness has time to fail over, but that is luck). So when you implement a stretched cluster, I recommend the scenario below:

In this scenario, even if you lose a room, the cluster keeps working. Yes, I know, three rooms are expensive, but I never recommended that you build a stretched cluster in the first place (in the Hyper-V case). Fortunately, in Windows Server 2016, the quorum witness can be hosted in Microsoft Azure (Cloud Witness).

Dynamic Quorum (Windows Server 2012 feature)

Dynamic Quorum adjusts node votes dynamically to avoid losing the majority of votes, so the cluster can keep running even with a single remaining node (known as last-man standing). Let’s take the above example of a four-node cluster without a quorum witness. I said that the quorum is 3 votes, so without Dynamic Quorum, if you lose two nodes, the cluster is down.

Now I enable Dynamic Quorum. The majority of votes is computed automatically from the running nodes. Let’s take the four-node example again:

So why implement a witness, especially for a stretched cluster? Because Dynamic Quorum works well when failures are sequential, not simultaneous. In the stretched cluster scenario, if you lose a room, the failures are simultaneous and Dynamic Quorum does not have time to recalculate the majority of votes. Moreover, I have seen strange behavior with Dynamic Quorum, especially with two-node clusters. This is why, in Windows Server 2012, I always disabled Dynamic Quorum when I didn’t use a quorum witness.

Dynamic Quorum has been enhanced in Windows Server 2012 R2 with a new feature called Dynamic Witness, which decides whether the quorum witness has a vote. There are two cases:

  • If there is an even number of voting nodes in the cluster with Dynamic Quorum enabled, the witness has a vote.
  • If there is an odd number of voting nodes in the cluster with Dynamic Quorum enabled, the witness does not have a vote.

So since Windows Server 2012 R2, Microsoft recommends always implementing a witness in the cluster and letting Dynamic Quorum decide for you.

Dynamic Quorum has been enabled by default since Windows Server 2012. In the example below there is a four-node cluster on Windows Server 2016, but the behavior is the same.

I verify that Dynamic Quorum is enabled, as well as Dynamic Witness:
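
Both settings are exposed as cluster properties and can be checked with PowerShell (1 means enabled):

# 1 = enabled, 0 = disabled
(Get-Cluster).DynamicQuorum
(Get-Cluster).WitnessDynamicWeight

# NodeWeight is the configured vote; DynamicWeight is the vote
# currently assigned by Dynamic Quorum
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight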

Dynamic Quorum and Dynamic Witness are indeed enabled. Because I have four nodes, the witness has a vote, which is why Dynamic Witness shows as enabled. If you want to disable Dynamic Quorum, you can run this command:

(Get-Cluster).DynamicQuorum = 0

Finally, Microsoft has enhanced Dynamic Quorum to adjust the online nodes’ votes in order to keep an odd number of votes. First the cluster plays with the dynamic witness vote to keep an odd number of votes. Then, if it can’t adjust the vote count with the dynamic witness, it removes a vote from a running node.

For example, you have a four-node stretched cluster and you have lost your quorum witness. Now you have two nodes in Room 1 and two nodes in Room 2. The cluster will remove a vote from one node to keep a majority in one room. In this way, even if you then lose a node, the cluster keeps working.
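
In Windows Server 2012 R2, you can influence which node loses its vote in such a 50/50 split through the LowerQuorumPriorityNodeID cluster property (this property was removed in Windows Server 2016, where the cluster decides by itself). A sketch, with a hypothetical node name:

# Prefer the other room in a tie: the node set here loses its vote first
(Get-Cluster).LowerQuorumPriorityNodeID = (Get-ClusterNode -Name 'Node4').Id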

Cloud Quorum Witness (Windows Server 2016 feature)

By implementing a cloud quorum witness, you avoid spending money on a third room for a stretched cluster. Below is the scenario:

The Cloud Witness, hosted in Microsoft Azure, also has one vote, so you again have an odd number of votes. For this you need an existing storage account in Microsoft Azure, as well as one of its access keys.

Now you just have to configure the quorum as you would for a standard witness. Select Configure a Cloud Witness when asked.

Then specify the Azure Storage Account and a storage key.
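
If you prefer PowerShell over the wizard, the same can be done in Windows Server 2016 with Set-ClusterQuorum; the account name and key below are placeholders:

# Configure the cloud witness against an existing Azure storage account
Set-ClusterQuorum -CloudWitness -AccountName 'mystorageaccount' -AccessKey '<primary access key>'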

At the end of the configuration, the Cloud Witness should be online.

Conclusion

In conclusion, here is what I recommend when you configure the quorum in a failover cluster:

  • Prior to Windows Server 2012 R2, always keep an odd number of votes:
    • With an even number of nodes, implement a witness
    • With an odd number of nodes, do not implement a witness
  • Since Windows Server 2012 R2, always implement a quorum witness:
    • Dynamic Quorum manages the votes assigned to the nodes
    • Dynamic Witness manages the vote assigned to the quorum witness
  • For a stretched cluster, place the witness in a third room or use Microsoft Azure (Cloud Witness).

About Romain Serre

Romain Serre works in Lyon as a Senior Consultant. He focuses on Microsoft technology, especially Hyper-V, System Center, storage, networking, and Cloud OS technologies such as Microsoft Azure and Azure Stack. He is an MVP and is certified MCSE (Server Infrastructure & Private Cloud), as well as on Hyper-V and Microsoft Azure (Implementing a Microsoft Azure Solution).

28 comments

  1. Great article. Thank you.

  2. I also say thank you! Very well explained!

  3. Thanks, great explained!

  4. Arnaud from Veeam Software

    Very nice article, Romain. Bravo. I finally understand all the subtleties of the witness across the different versions of Windows. Thank you.

  5. “Moreover I have seen strange behavior with dynamic quorum especially with two-node cluster”

    What is the exact strange behavior in this case? I need to explain this to my customer.
    And what would happen if i enable Dynamic Quorum plus configure File Share Witness at the same time?

  6. Where to see if Quorum Witness has been configured for two node RHEL cluster?

  7. Will the witness.log file be modified only in case of failover, or does it keep being modified?

  8. Hi Romain,

    I am currently in a situation that i don’t know i can achieve without any down time. We are currently having our hardware refresh, therefore we create new hyper-V cluster for our new hardware, we have move our VMs to the new hardware and the newly created hyper-V cluster. Now, that i need to move my SQL which they are cluster with a shared CSV as their quorum drive. Is there anyway i can move these VMs without shutting them down?

  9. Excellent article, Thanks
    It will be great if you let us know
    Why witness disk should be excluded from backup and antivirus scan?

  10. Very nice and detailed article about cluster quorum settings and option. Thanks for sharing the information.

  11. Thanks so much guys.
    Very good detailed.

  12. Thank you, very helpful.

  13. When we use cloud witness whats the content being saved in the blob ?

  14. Bonjour Romain,

    Excellent article, well written and understandable. As the La Fontaine saying goes; “Ce qui se conçoit bien s’énonce clairement” (what is well understood/mastered can be clearly explained)!

    However, I am still curious to know if there is a best practice to define the size of a witness disk and/or quorum disk. I have found diverse answers on the net and it seems that the minimum size should be 512MB but many go for 1GB without any explanation. Can the size of a witness disk be always the same regardless of the storage capacity of the cluster nodes or is there a relationship between the storage capacity of the node and the size of the witness disk?

    Would you know what is the best practice and why?

    Thanks in advance and Kudos for a great article!!

  15. Super article! thanks!

  16. Very descriptive. One of the best explanations on Quorum i came across.

  17. thank you very much awesome explain

  18. Thank you. clear explain.

  19. What would be the results if Room1 and Room2 lost connection with each other, but both rooms could still communicate with the cloud witness. Would that produce a split brain senario?

  20. Searching answer for same question which Janne asked and found this page…What happens there, how Cloud Witness helps not to get DBs into split brain scenario?
