Choose the right CPU for Storage Spaces Direct

When building a hyperconverged solution, the CPU is the most important component. It determines the number of vCPUs your infrastructure can support and, therefore, the number of servers to deploy. Today we can install hundreds of GB of RAM and dozens of TB of storage per server. Compared to these components, the core count of a CPU is small. So a bigger CPU (in terms of number of cores) almost always reduces the number of servers to buy. On the other hand, with the new Windows Server 2016 licensing model, a bigger CPU means a higher license cost.

This topic introduces a way to choose the right CPU by balancing infrastructure cost (hardware and software) against performance. It is based on feedback from a real case study that I carried out.

Theoretical case study values

For this topic, I’ll use theoretical values to show you the cost difference between several CPUs. The study is based on the following requirements:

  • 500 vCPUs
  • 3TB of RAM
  • 20TB of Storage

A reminder about the Windows Server 2016 license

For this topic, I’ll focus on the Windows Server 2016 Datacenter license because Storage Spaces Direct requires this edition. Windows Server 2016 is licensed on a per-core basis. The base Windows Server 2016 license covers up to two sockets of 8 physical cores each (16 cores per server). Beyond these 16 cores, you have to acquire additional license packs. Each license pack covers two physical cores.

For example, if you have two CPUs of 14 cores each, the server has 28 cores. You buy the Windows Server 2016 Datacenter license, which covers up to 16 cores. For the 12 remaining cores, you have to buy 6 additional license packs.

The public cost of Windows Server 2016 Datacenter is:

  • 6155$ for Windows Server 2016 Datacenter
  • 760$ for a license pack
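The licensing rule above is easy to script. Below is a minimal sketch using the public prices quoted in this article; the function name is mine:

```python
# Sketch of the Windows Server 2016 Datacenter licensing rule described above.
# Prices are the public list prices quoted in the article.
import math

BASE_LICENSE_COST = 6155   # covers up to 16 cores per server
PACK_COST = 760            # each extra pack covers 2 cores
BASE_CORES = 16

def ws2016_dc_cost_per_server(total_cores: int) -> int:
    """Return the Windows Server 2016 Datacenter license cost for one server."""
    extra_cores = max(0, total_cores - BASE_CORES)
    extra_packs = math.ceil(extra_cores / 2)
    return BASE_LICENSE_COST + extra_packs * PACK_COST

# Example from above: two 14-core CPUs = 28 cores -> 6 extra packs.
print(ws2016_dc_cost_per_server(28))  # 6155 + 6 * 760 = 10715
```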

Selected CPU

For this topic, I have selected two CPUs:

  • Intel Xeon E5-2630v4 (667$): 10 cores
    • 25MB cache, 2.20 GHz
  • Intel Xeon E5-2683v4 (1846$): 16 cores
    • 40MB cache, 2.10 GHz

The Intel Xeon E5-2683v4 has more cache but runs 100 MHz slower than the Intel Xeon E5-2630v4. However, its higher core count enables it to support more vCPUs.

Calculate the required hardware

For this infrastructure, 500 vCPUs are required. Because we will host server workloads, we can set the consolidation ratio to 4 vCPUs per physical core:

Number of cores = 500 vCPUs / 4 (consolidation ratio) = 125 cores

So we need 125 cores. For this calculation, I never take Hyper-Threading into account. Below you can find the number of servers required depending on the CPU. The #Nodes column is the raw node count rounded, with an additional node for N+1 resiliency. To find the raw number of nodes, I divided the required cores by the number of cores per node. As you can see, with the E5-2630v4 you need two additional nodes.

| Required cores | Processor | #Cores / node | #Nodes (raw) | #Nodes |
|----------------|-----------|---------------|--------------|--------|
| 125            | E5-2630v4 | 20            | 6.25         | 7      |
| 125            | E5-2683v4 | 32            | 3.91         | 5      |
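The node counts above can be sketched in a few lines. The table appears to round the raw node count to the nearest integer and then add one node for N+1; the sketch below reproduces its figures under that assumption (variable names are mine):

```python
# Sketch of the node-count calculation above (Hyper-Threading ignored).
REQUIRED_VCPUS = 500
CONSOLIDATION_RATIO = 4                 # 4 vCPUs per physical core
required_cores = REQUIRED_VCPUS / CONSOLIDATION_RATIO   # 125 cores

nodes_needed = {}
for cpu, cores_per_node in [("E5-2630v4", 20), ("E5-2683v4", 32)]:
    raw = required_cores / cores_per_node
    # Round the raw count and add one node for N+1 resiliency.
    nodes_needed[cpu] = round(raw) + 1
    print(f"{cpu}: raw = {raw:.2f} -> {nodes_needed[cpu]} nodes")
```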

Now that we have the number of nodes, we can calculate the required memory. To calculate the required memory per node, I divide the total by the number of nodes minus 1, to support a node failure. With the E5-2630v4, you need 500GB per node and you’ll have 3.5TB of RAM for the cluster. With the E5-2683v4, you need 750GB of RAM per node and you’ll have 3.75TB of RAM for the cluster. The E5-2683v4 cluster needs more total RAM because it has fewer nodes.

| Required RAM (GB) | CPU       | #Nodes | #Nodes – 1 | RAM per node (GB) | Total RAM (GB) |
|-------------------|-----------|--------|------------|-------------------|----------------|
| 3000              | E5-2630v4 | 7      | 6          | 500               | 3500           |
| 3000              | E5-2683v4 | 5      | 4          | 750               | 3750           |
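The RAM sizing above follows directly from the node counts. A minimal sketch (the function name is mine):

```python
# Sketch of the RAM sizing above: spread the required memory across
# (nodes - 1) servers so the workload still fits after one node failure.
import math

REQUIRED_RAM_GB = 3000

def ram_per_node(nodes: int) -> int:
    return math.ceil(REQUIRED_RAM_GB / (nodes - 1))

for nodes in (7, 5):
    per_node = ram_per_node(nodes)
    print(f"{nodes} nodes: {per_node} GB per node, {per_node * nodes} GB total")
```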

Finally, for the storage, we need 20TB. Because we use Storage Spaces Direct to host virtual machines, we need 3-way mirroring. I have selected the following configurations:

  • Intel Xeon E5-2630v4 (7 nodes) – total 22TB with 4TB of reservation
    • 4x SSD 400GB (cache)
    • 6x HDD 2TB (capacity)
  • Intel Xeon E5-2683v4 (5 nodes) – total 24TB with 8TB of reservation
    • 4x SSD 400GB (cache)
    • 4x HDD 4TB (capacity)

To compare prices for the selected hardware, I have used Dell hardware (Dell R730xd):

| Node E5-2630v4                   | Node E5-2683v4                   |
|----------------------------------|----------------------------------|
| 2x E5-2630v4                     | 2x E5-2683v4                     |
| 512GB of RAM                     | 750GB of RAM                     |
| 2x SSD 120GB (OS)                | 2x SSD 120GB (OS)                |
| 4x SSD 400GB (Cache)             | 4x SSD 400GB (Cache)             |
| 6x HDD 2TB (Capacity)            | 4x HDD 4TB (Capacity)            |
| 1x Mellanox 25GB dual controller | 1x Mellanox 25GB dual controller |
| 27K$ per node                    | 35K$ per node                    |
| 189K$ for 7 nodes                | 175K$ for 5 nodes                |

The total hardware cost of the E5-2683v4 solution is 14K$ less than that of the E5-2630v4 solution. Now, let’s look at the license cost.

Windows Server 2016 Datacenter licenses

With the E5-2630v4 solution, we have 20 cores per node, and with the E5-2683v4 solution we have 32 cores per node.

| CPU       | #Cores / node | #Extra license packs | Total cost / node | #Nodes | Total cost |
|-----------|---------------|----------------------|-------------------|--------|------------|
| E5-2630v4 | 20            | 2                    | $7,675.00         | 7      | $53,725.00 |
| E5-2683v4 | 32            | 8                    | $12,235.00        | 5      | $61,175.00 |

As you can see, you have to spend more money on Windows Server 2016 licenses for the E5-2683v4 solution than for the E5-2630v4 solution.

Total solution cost

Below you can find the total cost for both solutions:

|          | E5-2630v4   | E5-2683v4   |
|----------|-------------|-------------|
| Hardware | $189,000.00 | $175,000.00 |
| Software | $53,725.00  | $61,175.00  |
| Total    | $242,725.00 | $236,175.00 |
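The whole comparison can be assembled in one short sketch from the figures used throughout this topic (per-node hardware prices and public license prices as quoted above; names are mine):

```python
# End-to-end cost comparison assembled from the figures above
# (hardware prices and license prices as quoted in this article).
import math

BASE_LICENSE = 6155    # covers up to 16 cores per server
PACK = 760             # each extra pack covers 2 cores

def license_per_node(cores: int) -> int:
    return BASE_LICENSE + math.ceil(max(0, cores - 16) / 2) * PACK

solutions = {
    "E5-2630v4": {"nodes": 7, "cores_per_node": 20, "hw_per_node": 27_000},
    "E5-2683v4": {"nodes": 5, "cores_per_node": 32, "hw_per_node": 35_000},
}

for name, s in solutions.items():
    hardware = s["nodes"] * s["hw_per_node"]
    software = s["nodes"] * license_per_node(s["cores_per_node"])
    print(f"{name}: hardware = ${hardware:,}, software = ${software:,}, "
          f"total = ${hardware + software:,}")
```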

So the total cost of the E5-2630v4 solution is almost 243K$, versus 236K$ for the E5-2683v4 solution. As you can see, even though the bigger CPU is more expensive and requires more Windows Server 2016 licenses, the E5-2683v4 solution is less expensive overall. Moreover, the E5-2683v4 solution offers better consolidation because each node can host more VMs. So this solution is more scalable.

Conclusion

I have written this topic to show you that a bigger CPU doesn’t necessarily mean a more expensive solution. So for your hyperconverged solution with Storage Spaces Direct, you should evaluate several types of CPUs. By taking time during the planning phase, you can save money and implement a more scalable solution.

About Romain Serre

Romain Serre works in Lyon as a Senior Consultant. He is focused on Microsoft technologies, especially Hyper-V, System Center, storage, networking, and Cloud OS technologies such as Microsoft Azure or Azure Stack. He is an MVP and is a Microsoft Certified Solution Expert (MCSE Server Infrastructure & Private Cloud), certified on Hyper-V and on Microsoft Azure (Implementing a Microsoft Azure Solution).
