Cores (Actual Cores, Adjusted Weight, Memory Adjustments like Unbalanced DIMMs)

In Sizing Details you may see an odd number like 40.27 cores for RAW cores as shown below

Actual Core Capacity

This is the total number of cores in the recommendation.

Clicking the tooltip next to the node shows the details.

In this recommendation we have 3 nodes, each with 2 CPUs, and each CPU has 8 cores.  So the Actual Core Capacity is 3 nodes * 2 CPUs/node * 8 cores/CPU = 48 cores

Applied Weight

 

Intel designs a wide range of CPUs to meet different market needs.  Core count certainly varies, but the speed of a core is not the same across all CPUs.

We need a benchmark to adjust for these core speed differences, and we use SPECint 2006.  It is an industry standard: vendors who publish numbers must follow a standard testing process and publish the results publicly, and for a given CPU the results are consistent across vendors.  That makes it a good benchmark for adjusting core counts.

Applied Weight is the adjustment of the cores relative to the baseline processor, which runs at 42.31 SPECint per core.

Review the Processor Table page for each processor's core count, SPECint rating, and adjusted cores.

Using this example, we have a recommendation of 3 nodes, each with two 2620 v4 processors.  The table (the calculation is shown on that page too) shows the 2620 v4 adjusted core count is 14.91 cores for a node with 2 CPUs.

Thus in this recommendation the total effective cores are 14.91 cores/node * 3 nodes = 44.73 cores, and we take an Applied Weight adjustment of -3.26 cores.  A minimal sketch of this math is shown below.
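Here is a minimal sketch of that math in Python.  The function names and structure are illustrative, not Sizer's actual code; the 42.31 SPECint-per-core baseline and the 2620 v4 figures are taken from the text above.

```python
# Illustrative sketch only, not Sizer's actual code.
BASELINE_SPECINT_PER_CORE = 42.31  # baseline processor, per the text

def actual_core_capacity(nodes, cpus_per_node, cores_per_cpu):
    # Actual Core Capacity = nodes * CPUs/node * cores/CPU
    return nodes * cpus_per_node * cores_per_cpu

def applied_weight(nodes, cpus_per_node, cores_per_cpu, specint_per_core):
    # Effective cores scale by the CPU's SPECint relative to the baseline;
    # the applied weight is the difference from the raw core count.
    actual = actual_core_capacity(nodes, cpus_per_node, cores_per_cpu)
    effective = actual * specint_per_core / BASELINE_SPECINT_PER_CORE
    return effective - actual  # negative when the CPU is slower than baseline

# 3 nodes, each with 2 x E5-2620 v4 (8 cores/CPU, 39.44 SPECint/core)
print(actual_core_capacity(3, 2, 8))             # 48
print(round(applied_weight(3, 2, 8, 39.44), 2))  # -3.26
```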

 

Memory Adjustments

Broadwell Processors

With Broadwell processors, an “unbalanced DIMM” configuration depends on how the DIMMs are laid out on the motherboard.  When it occurs there is a 10% increase in access latency.

To determine whether Sizer takes a discount, it takes the total count of DIMMs in a node and divides by 4.  If the result is odd, the configuration is Unbalanced and Sizer applies the discount.
If even, then no reduction is needed.

Example

12 x 32 GB in a node: 12 DIMMs / 4 = 3, so unbalanced
8 x 32 GB in a node: 8 DIMMs / 4 = 2, so balanced

If unbalanced, core capacity is reduced:

– Actual Core Capacity = Cores/Node * Node count
– Applied Weight = extra or fewer cores vs the baseline
– Adjustment due to Memory Issues = -10% * (RAW Cores + Applied Weight)

It should be noted that for a single-processor system NO adjustment is needed.  A hedged sketch of this check is shown below.
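This sketch assumes integer division of the DIMM count by 4 (the rule above is only stated for counts that divide evenly); the names are illustrative, not Sizer's code.

```python
# Illustrative sketch of the Broadwell unbalanced-DIMM check and adjustment.
def broadwell_memory_adjustment(dimms_per_node, sockets_per_node,
                                raw_cores, applied_weight):
    if sockets_per_node == 1:
        return 0.0                         # single-processor system: no adjustment
    groups_of_four = dimms_per_node // 4   # assumption: integer division
    if groups_of_four % 2 == 0:
        return 0.0                         # even -> balanced, no reduction
    return -0.10 * (raw_cores + applied_weight)  # odd -> unbalanced, take -10%

# 12 x 32 GB -> 12/4 = 3 (odd) -> unbalanced; 8 x 32 GB -> 8/4 = 2 (even) -> balanced
print(round(broadwell_memory_adjustment(12, 2, 48, -3.26), 2))  # about -4.47
print(broadwell_memory_adjustment(8, 2, 48, -3.26))             # 0.0
```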

Skylake Processors

Skylake is more complex than Broadwell in terms of determining whether a system has unbalanced DIMMs.

We now test for the following

  • CPU – Skylake
  • Model – Balanced_Motherboard = true  (described below)
  • Memory bandwidth – go with the slower figure of the memory or the CPU.  If 2133 MHz, take a -10% memory adjustment.  If 2400 MHz or 2666 MHz (most common with Skylake models), we take a 0% adjustment

As before, we find the DIMM count per socket.  There are typically 2 sockets (CPUs), but there can be 1, and 4-socket models are starting to be introduced.

Using the quantity of DIMMs per socket, we apply the following rules.

If the CPU is Skylake:

  • If the DIMM count per socket is 5, 7, 9, 10, or 11, the model is considered unbalanced and we take a -50% memory adjustment
  • If the DIMM count per socket is 2, 3, 4, or 12, it is balanced and the memory adjustment = 0%
  • If the model is balanced and the DIMM count per socket is 6 or 8, it is balanced and the memory adjustment = 0%
  • If the model is unbalanced and the DIMM count per socket is 6 or 8, it is unbalanced and the memory adjustment = -50%

After determining the adjustment percent, we make the adjustment as we do currently (a sketch follows this list):

  • Actual core capacity = total cores in the cluster
  • Applied weight = adjustment vs the baseline SPECint
  • Adjustment = Adjustment Percent * (Actual core capacity – Applied weight)
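A hedged sketch of the Skylake rules follows.  How the DIMM-count rules combine with the memory-speed check is an assumption here (the speed adjustment is applied only when the DIMM count itself is balanced); the names are illustrative, not Sizer's code.

```python
# Illustrative sketch of the Skylake memory-adjustment rules.
def skylake_memory_adjustment_pct(dimms_per_socket, motherboard_balanced,
                                  memory_speed_mhz):
    if dimms_per_socket in (5, 7, 9, 10, 11):
        return -0.50                                   # always unbalanced
    if dimms_per_socket in (6, 8):
        return 0.0 if motherboard_balanced else -0.50  # depends on the model layout
    if dimms_per_socket in (2, 3, 4, 12):
        # balanced count; assumption: only the memory-speed check applies here
        return -0.10 if memory_speed_mhz == 2133 else 0.0
    return 0.0

def core_adjustment(actual_core_capacity, applied_weight, adjustment_pct):
    # Formula as written in the text for Skylake
    return adjustment_pct * (actual_core_capacity - applied_weight)

pct = skylake_memory_adjustment_pct(10, True, 2666)    # 10 DIMMs/socket -> -50%
print(pct, round(core_adjustment(48, -3.26, pct), 2))  # -0.5 -25.63
print(skylake_memory_adjustment_pct(12, True, 2133))   # 2133 MHz -> -0.1
```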
With Skylake, it can also matter how the DIMMs are arranged on the motherboard.  We have PM review that, and so far all models are laid out in a balanced fashion.

Here is a doc that shows the options

 

 

Processor Table

Here is the table of processors

The first 5 columns are from Spec.org

https://www.spec.org/cgi-bin/osgresults

SPECint Adjusted Cores is simply the core count adjusted against a baseline of 42.31 SPECint per core.

Note that in the SPECint tests, typically 2-CPU systems are tested, so the core counts shown are for 2 CPUs (the table also lists cores per CPU).

For example, the 2620 v4 has 16 cores (across 2 CPUs) but only 39.44 SPECint per core

  • SPECint adjusted cores = 16 * SPECint per core / Baseline = 16 * 39.44 / 42.31 = 14.91
  • Basically, this is saying the 2620 v4 has 16 cores but is equivalent to 14.91 baseline cores in a 2-CPU node
  • For a single CPU, it would be just 14.91 / 2 = 7.455

Looking at a high-speed CPU, the 6128 has just 12 cores but screams along at 68.07 SPECint per core

  • SPECint adjusted cores = 12 * SPECint per core / Baseline = 12 * 68.07 / 42.31 = 19.31
  • Basically, this is saying the 6128 has 12 cores but is equivalent to 19.31 baseline cores (a short sketch follows)
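Here is a short sketch of the adjusted-core calculation.  The baseline and the two example CPUs are from the text; nothing else is implied about the table.

```python
# SPECint Adjusted Cores = total cores * (SPECint per core / baseline)
BASELINE = 42.31

def specint_adjusted_cores(total_cores, specint_per_core):
    return total_cores * specint_per_core / BASELINE

print(round(specint_adjusted_cores(16, 39.44), 2))  # E5-2620 v4 (2 CPUs): ~14.91
print(round(specint_adjusted_cores(12, 68.07), 2))  # Gold 6128 (2 CPUs): ~19.31
```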
System (Emerald Rapids) # Cores (across chips) # Chips CINT2006/core
Intel Xeon Silver 4514Y 16C 150W 2.0GHz Processor 32 2 79.14
Intel Xeon Silver 4509Y 8C 125W 2.6GHz Processor 16 2 101.75
Intel Xeon Silver 4510 12C 150W 2.4GHz Processor 24 2 95.20
Intel Xeon Gold 6548N 32C 250W 2.8GHz Processor 64 2 94.16
Intel Xeon Gold 5512U 28C 185W 2.1GHz Processor 28 1 87.38
Intel Xeon Gold 5515+ 8C 165W 3.2GHz Processor 16 2 105.91
Intel Xeon Gold 6526Y 16C 195W 2.8GHz Processor 32 2 100.26
Intel Xeon Gold 6542Y 24C 250W 2.9GHz Processor 48 2 101.15
Intel Xeon Gold 6548Y+ 32C 250W 2.5GHz Processor 64 2 94.46
Intel Xeon Gold 6534 8C 195W 3.9GHz Processor 16 2 116.62
Intel Xeon Gold 6544Y 16C 270W 3.6GHz Processor 32 2 114.54
Intel Xeon Gold 5520+ 28C 205W 2.2GHz Processor 56 2 86.87
Intel Xeon Gold 6538Y+ 32C 225W 2.2GHz Processor 64 2 93.12
Intel Xeon Platinum 8592V 64C 330W 2.0GHz Processor 128 2 72.00
Intel Xeon Platinum 8581V 60C 270W 2.0GHz Processor 60 1 74.89
Intel Xeon Platinum 8571N 52C 300W 2.4GHz Processor 52 1 86.60
Intel Xeon Platinum 8558U 48C 300W 2.0GHz Processor 48 1 81.32
Intel Xeon Platinum 8568Y+ 48C 350W 2.3GHz Processor 96 2 89.45
Intel Xeon Platinum 8580 60C 350W 2.0GHz Processor 120 2 80.13
Intel Xeon Platinum 8592+ 64C 350W 1.9GHz Processor 128 2 75.86
Intel Xeon Platinum 8562Y+ 32C 300W 2.8GHz Processor 64 2 100.41
Intel Xeon Platinum 8558 48C 330W 2.1GHz Processor 48 1 81.32
System (Sapphire Rapids) # Cores (across chips) # Chips CINT2006/core
Intel Gold 6414U 32C 2.0GHz 250W 32 1 76.16
Intel Silver 4410Y 12C 2.0GHz 135W-145W 24 2 83.30
Intel Silver 4416+ 20C 2.1GHz 165W 40 2 83.06
Intel Silver 4410T 10C 2.7GHz 150W 20 2 97.10
Intel Gold 5415+ 8C 2.9GHz 150W 16 2 102.34
Intel Gold 5418Y 24C 2.1GHz 185W 48 2 81.32
Intel Gold 5420+ 28C 1.9-2.0GHz 205W 56 2 78.20
Intel Gold 6426Y 16C 2.6GHz 185W 32 2 94.90
Intel Gold 6430 32C 1.9GHz 270W 64 2 75.57
Intel Gold 6434 8C 3.9GHz 205W 16 2 113.05
Intel Gold 6438Y+ 32C 1.9-2.0GHz 205W 64 2 82.26
Intel Gold 6442Y 24C 2.6GHz 225W 48 2 94.21
Intel Gold 6444Y 16C 3.5GHz 270W 32 2 110.67
Intel Gold 6448Y 32C 2.2GHz 225W 64 2 83.60
Intel Gold 6438M 32C 2.2GHz 205W 64 2 82.41
Intel Gold 5418N 24C 1.8GHz 165W 48 2 75.96
Intel Gold 6428N 32C 1.8GHz 185W 64 2 72.59
Intel Gold 6438N 32C 2.0GHz 205W 64 2 80.47
Intel Gold 5416S 16C 2.0GHz 150W 32 2 82.11
Intel Gold 6454S 32C 2.2GHz 270W 64 2 79.58
Intel Platinum 8462Y+ 32C 2.8GHz 300W 64 2 94.46
Intel Platinum 8452Y 36C 2.0GHz 300W 72 2 78.80
Intel Platinum 8460Y+ 40C 2.0GHz 300W 80 2 78.18
Intel Platinum 8468 48C 2.1GHz 350W 96 2 80.23
Intel Platinum 8470 52C 2.0GHz 350W 104 2 78.81
Intel Platinum 8480+ 56C 2.0GHz 350W 112 2 73.61
Intel Platinum 8490H 60C 1.9GHz 350W 120 2 71.56
Intel Platinum 8470N 52C 1.7GHz 300W 104 2 69.57
Intel Platinum 8468V 48C 2.4GHz 330W 96 2 76.46
Intel Platinum 8458P 44C 2.7GHz 350W 88 2 82.76
Intel Xeon Platinum 8468H 48C 330W 2.1GHz Processor 96 2 76.66
Intel Xeon Platinum 8454H 32C 270W 2.1GHz Processor 64 2 72.74
Intel Xeon Platinum 8450H 28C 250W 2.0GHz Processor 56 2 79.90
Intel Xeon Platinum 8444H 16C 270W 2.9GHz Processor 32 2 96.69
Intel Xeon Platinum 8460H 40C 330W 2.2GHz Processor 80 2 83.66
Intel Xeon Gold 6448H 32C 250W 2.4GHz Processor 64 2 89.85
Intel Xeon Gold 6418H 24C 185W 2.1GHz Processor 48 2 79.53
Intel Xeon Gold 6416H 18C 165W 2.2GHz Processor 36 2 85.42
Intel Xeon Gold 6434H 8C 195W 3.7GHz Processor 16 2 119.00
Intel Xeon Platinum 8470Q 52C 2.10 GHz Processor 104 2 79.36
Intel Xeon Gold 6458Q 32C 3.10 GHz Processor 64 2 101.45
Intel Xeon-B 3408U 8C 8 1 50.69
Intel Xeon-G 5412U 24C 24 1 85.28
Intel Xeon-G 5411N 24C 165W 1.9GHz Processor 24 1 82.51
Intel Xeon-G 6421N 32C 185W 1.8GHz Processor 32 1 78.54
Intel Xeon Platinum 8461V 48C 300W 2.2GHz Processor 48 1 75.37
Intel Xeon Platinum 8471N 52C 300W 1.8GHz Processor 52 1 75.43
System (Ice Lake) # Cores (across chips) # Chips CINT2006/core
Intel® Xeon® Platinum 8368Q Processor (57M Cache, 2.60 GHz) 76 2 64.51
Intel® Xeon® Platinum 8360Y Processor (54M Cache, 2.40 GHz) 52 2 94.83
Intel® Xeon® Platinum 8358P Processor (48M Cache, 2.60 GHz) 64 2 70.81
Intel® Xeon® Platinum 8352Y Processor (48M Cache, 2.20 GHz) 64 2 65.30
Intel® Xeon® Platinum 8352V Processor (54M Cache, 2.10 GHz) 72 2 56.72
Intel® Xeon® Platinum 8352S Processor (48M Cache, 2.20 GHz) 64 2 65.30
Intel® Xeon® Platinum 8351N Processor (54M Cache, 2.40 GHz) 36 1 67.43
Intel® Xeon® Gold 6338N Processor (48M Cache, 2.20 GHz) 64 2 63.37
Intel® Xeon® Gold 6336Y Processor 48 2 71.00
Intel® Xeon® Gold 6330N Processor (42M Cache, 2.20 GHz) 56 2 61.20
Intel® Xeon® Gold 5318Y Processor 48 2 63.07
Intel® Xeon® Gold 5315Y Processor 16 2 82.11
Intel® Xeon® Silver 4309Y Processor 16 2 79.73
Intel® Xeon® Platinum 8380 Processor (60M Cache, 2.30 GHz) 80 2 66.28
Intel® Xeon® Platinum 8368 Processor (57M Cache, 2.40 GHz) 76 2 68.02
Intel® Xeon® Platinum 8358 Processor (48M Cache, 2.60 GHz) 64 2 73.48
Intel® Xeon® Gold 6354 Processor (39M Cache, 3.00 GHz) 36 2 81.45
Intel® Xeon® Gold 6348 Processor (42M Cache, 2.60 GHz) 56 2 74.63
Intel® Xeon® Gold 6346 Processor (36M Cache, 3.10 GHz) 32 2 83.60
Intel® Xeon® Gold 6342 Processor 48 2 76.16
Intel® Xeon® Gold 6338 Processor (48M Cache, 2.00 GHz) 64 2 62.03
Intel® Xeon® Gold 6334 Processor 16 2 86.87
Intel® Xeon® Gold 6330 Processor (42M Cache, 2.00 GHz) 56 2 62.22
Intel® Xeon® Gold 6326 Processor 32 2 78.24
Intel® Xeon® Gold 5320 Processor 52 2 66.09
Intel® Xeon® Gold 5317 Processor 24 2 80.13
Intel® Xeon® Silver 4316 Processor 40 2 67.12
Intel® Xeon® Silver 4314 Processor 32 2 69.62
Intel® Xeon® Silver 4310 Processor 24 2 66.64
Intel Xeon Gold 6338T processor (2.1 GHz/ 24-core/ 165W) 48 2 63.27
Intel Xeon Gold 5320T processor (2.3 GHz/ 20-core/ 150W) 40 2 66.40
Intel Xeon Silver 4310T processor (2.3 GHz/ 10-core/ 105W) 20 2 70.45
Intel Xeon Gold 6314U processor (2.30 GHz/32-core/205W) 32 1 67.24
Intel Xeon Gold 6312U processor (2.40 GHz/24-core/185W) 24 1 73.38
System (Cascade Lake) # Cores (across chips) # Chips CINT2006/core
CPU (2.10 GHz, Intel Xeon Gold 6230) 40 2 52.65
CPU (2.30 GHz, Intel Xeon Gold 5218) 32 2 54.28
CPU (2.30 GHz, Intel Xeon Gold 5218B) 32 2 54.28
CPU (2.60 GHz, Intel Xeon Gold 6240) 36 2 60.12
CPU (2.10 GHz, Intel Xeon Gold 6252) 48 2 51.68
CPU (2.30 GHz, Intel Xeon Gold 6252N) 48 2 50.71
CPU (2.20 GHz, Intel Xeon Platinum 8276) 56 2 52.71
CPU (2.20 GHz, Intel Xeon Silver 4210) 20 2 52.14
CPU (2.20 GHz, Intel Xeon Silver 4214) 24 2 53.49
CPU (2.20 GHz, Intel Xeon Silver 4214Y) 24 2 53.49
CPU (2.10 GHz, Intel Xeon Silver 4216) 32 2 52.82
CPU (2.50 GHz, Intel Xeon Gold 5215) 20 2 57.48
CPU (2.50 GHz, Intel Xeon Gold 6248) 40 2 55.16
CPU (2.50 GHz, Intel Xeon Silver 4215) 16 2 58.07
CPU (2.60 GHz, Intel Xeon Gold 6240Y) 36 2 59.08
CPU (2.70 GHz, Intel Xeon Platinum 8270) 52 2 59.19
CPU (2.70 GHz, Intel Xeon Platinum 8280) 56 2 58.91
CPU (2.70 GHz, Intel Xeon Platinum 8280M) 56 2 57.32
CPU (2.90 GHz, Intel Xeon Platinum 8268) 48 2 62.15
CPU (3.00 GHz, Intel Xeon Gold 5217) 16 2 64.33
CPU (3.80 GHz, Intel Xeon Gold 5222) 8 2 77.44
CPU (2.10 GHz, Intel Xeon Silver 4208) 16 2 49.73
CPU (2.70 GHz, Intel Xeon 6226) 24 2 64.75
CPU (3.3 GHz, Intel Xeon Gold 6234) 16 2 75.44
CPU (2.8 GHz, Intel Xeon Gold 6242) 32 2 64.54
CPU (2.2 GHz, Intel Xeon Silver 5220) 36 2 52.86
CPU (2.1 GHz, Intel Xeon Gold 6238) 44 2 51.78
CPU (3.6 GHz, Intel Xeon Gold 6244) 16 2 80.92
CPU (3.3 GHz, Intel Xeon Gold 6246) 24 2 70.97
CPU (2.5 GHz, Intel Xeon Gold 6248) 40 2 55.16
CPU (3.1 GHz, Intel Xeon Gold 6254) 36 2 69.1
CPU (1.8 GHz, Intel Xeon Gold 6222V) 40 2 47.6
CPU (1.9 GHz, Intel Xeon Gold 6262V) 48 2 48
CPU (2.5 GHz, Intel Xeon Gold 5215M) 20 2 56.53
CPU (2.1 GHz, Intel Xeon Gold 6238M) 44 2 51.57
CPU (2.6 GHz, Intel Xeon Gold 6240M) 36 2 57.78
CPU (2.5 GHz, Intel Xeon Gold 5215L) 20 2 57.48
CPU (2.1 GHz, Intel Xeon Gold 6238L) 44 2 52
CPU (2.4 GHz, Intel Xeon Gold 8260) 48 2 57.42
CPU (2.4 GHz, Intel Xeon Gold 8260L) 48 2 57.22
CPU (2.4 GHz, Intel Xeon Gold 8260M) 48 2 55.63
CPU (2.9 GHz, Intel Xeon Gold 8268) 48 2 62.15
CPU (2.7 GHz, Intel Xeon Gold 8270) 52 2 59.19
CPU (2.2 GHz, Intel Xeon Gold 8276) 56 2 52.71
CPU (2.2 GHz, Intel Xeon Gold 8280) 56 2 58.91
CPU (2.2 GHz, Intel Xeon Gold 8280M) 56 2 57.32
CPU (2.2 GHz, Intel Xeon Gold 8276M) 56 2 49.69
CPU (2.2 GHz, Intel Xeon Gold 8276L) 56 2 50.02
CPU (2.2 GHz, Intel Xeon Gold 8280L) 56 2 58.36
CPU (2.4 GHz, Intel Xeon Gold 8260Y) 48 2 55.83
CPU (2.5 GHz, Intel Xeon Gold 6210U) 20 1 59.98
CPU (1.9 GHz, Intel Xeon Gold 3206R) 16 2 47.6
CPU (2.4 GHz, Intel Xeon Gold 4210R) 20 2 60.45
CPU (2.4 GHz, Intel Xeon Gold 4214R) 24 2 64.26
CPU (3.2 GHz, Intel Xeon Gold 4215R) 16 2 73.19
CPU (2.1 GHz, Intel Xeon Gold 5218R) 40 2 58.79
CPU (2.2 GHz, Intel Xeon Gold 5220R) 48 2 54.74
CPU (2.1 GHz, Intel Xeon Gold 6230R) 52 2 56.94
CPU (2.9 GHz, Intel Xeon Gold 6226R) 32 2 71.7
CPU (2.4 GHz, Intel Xeon Gold 6240R) 48 2 59.5
CPU (3.1 GHz, Intel Xeon Gold 6242R) 40 2 72.11
CPU (2.2 GHz, Intel Xeon Gold 6238R) 56 2 54.06
CPU (3.0 GHz, Intel Xeon Gold 6248R) 48 2 66.84
CPU (2.7 GHz, Intel Xeon Gold 6258R) 56 2 61.54
CPU (3.9 GHz, Intel Xeon Gold 6250) 16 2 81.49
CPU (3.6 GHz, Intel Xeon Gold 6256) 24 2 77.39
CPU (3.4 GHz, Intel Xeon Gold 6246R) 32 2 70.06
CPU Type CPU Family SPECint2006 Rate # of Cores SPECint2006 Rate per Core SPECint Adjusted Cores Cores per CPU
2699v3 Haswell 1389 36 38.58 32.8 18
2630v3 Haswell 688 16 43.00 16.3 8
2620v3 Haswell 529 12 44.08 12.5 6
2697v3 Haswell 1236 28 44.14 29.2 14
2680v3 Haswell 1063 24 44.31 25.1 12
2660v3 Haswell 900 20 45.00 21.3 10
2640v3 Haswell 725 16 45.31 17.1 8
2623v3 Haswell 424 8 53.00 10.0 4
2643v3 Haswell 690 12 57.50 16.3 6
2620v2 Ivy Bridge 429 12 35.75 10.1 6
2697v2 Ivy Bridge 962 24 40.08 22.7 12
2630v2 Ivy Bridge 505 12 42.08 11.9 6
2680v2 Ivy Bridge 846 20 42.31 20.0 10
2650v2 Ivy Bridge 681 16 42.55 16.1 8
2690v2 Ivy Bridge 888 20 44.40 21.0 10
2643v2 Ivy Bridge 634 12 52.83 15.0 6
2620v1 Sandy Bridge 390 12 32.50 9.2 6
2670v1 Sandy Bridge 640 16 40.00 15.1 8
2690v1 Sandy Bridge 685 16 42.81 16.2 8
2637v3 Haswell 472 8 59.00 11.2 4
2698v3 Haswell 1290 32 40.31 30.5 16
E5-2609v4 Broadwell 415 16 25.94 9.8 8
E5-2620v4 Broadwell 631 16 39.44 14.9 8
E5-2630v4 Broadwell 795 20 39.75 18.8 10
E5-2640v4 Broadwell 844 20 42.20 19.9 10
E5-2643v4 Broadwell 703 12 58.58 16.6 6
E5-2650v4 Broadwell 984 24 41.00 23.3 12
E5-2660v4 Broadwell 1090 28 38.93 25.8 14
E5-2680v4 Broadwell 1200 28 42.86 28.4 14
E5-2690v4 Broadwell 1300 28 46.43 30.7 14
E5-2695v4 Broadwell 1370 36 38.06 32.4 18
E5-2697v4 Broadwell 1460 36 40.56 34.5 18
E5-2698v4 Broadwell 1540 40 38.50 36.4 20
E5-2699v4 Broadwell 1690 44 38.41 39.9 22
3106 Skylake 431.4 16 26.9625 10.2 8
4108 Skylake 629.65 16 39.353125 14.9 8
4109T Skylake 667.92 16 41.745 15.8 8
4110 Skylake 693.24 16 43.3275 16.4 8
4112 Skylake 412.91 8 51.61375 9.8 4
4114 Skylake 890.6 20 44.53 21.0 10
4116 Skylake 1030.87 24 42.95291667 24.4 12
5115 Skylake 969.14 20 48.457 22.9 10
5118 Skylake 1133.2 24 47.21666667 26.8 12
5120 Skylake 1271.56 28 45.41285714 30.1 14
5122 Skylake 544.38 8 68.0475 12.9 4
6126 Skylake 1304.67 24 54.36125 30.8 12
6128 Skylake 816.91 12 68.07583333 19.3 6
6130 Skylake 1516.45 32 47.3890625 35.8 16
6132 Skylake 1524.55 28 54.44821429 36.0 14
6134 Skylake 1037.72 16 64.8575 24.5 8
6134M Skylake 1085 16 67.8125 25.6 8
6136 Skylake 1451 24 60.45833333 34.3 12
6138 Skylake 1748.89 40 43.72225 41.3 20
6140 Skylake 1752.86 36 48.69055556 41.4 18
6140M Skylake 1810 36 50.27777778 42.8 18
6142 Skylake 1688.5 32 52.765625 39.9 16
6142M Skylake 1785 32 55.78125 42.2 16
6143 Skylake 1950 32 60.9375 46.1 16
6144 Skylake 1113 16 69.5625 26.3 8
6146 Skylake 1534.44 24 63.935 36.3 12
6148 Skylake 1921.3 40 48.0325 45.4 20
6150 Skylake 1903.75 36 52.88194444 45.0 18
6152 Skylake 1951.18 44 44.345 46.1 22
6154 Skylake 2062 36 57.27777778 48.7 18
8153 Skylake 1326.88 32 41.465 31.4 16
8156 Skylake 550.81 8 68.85125 13.0 4
8158 Skylake 1464 24 61 34.6 12
8160 Skylake 2152.5 48 44.84375 50.9 24
8160M Skylake 2285 48 47.60416667 54.0 24
8164 Skylake 2204 52 42.38461538 52.1 26
8165 Skylake 2500 48 52.08333333 59.1 24
8168 Skylake 2454.12 48 51.1275 58.0 24
8170 Skylake 2282.86 52 43.90115385 54.0 26
8170M Skylake 2420 52 46.53846154 57.2 26
8176 Skylake 2386.87 56 42.62267857 56.4 28
8176M Skylake 2507 56 44.76785714 59.2 28
8180 Skylake 2722.38 56 48.61392857 64.3 28
8180M Skylake 2710 56 48.39285714 64.0 28
System (AMD Genoa) # of Cores across CPUs # of CPUs CINT2006/core
AMD EPYC 9274F 24C 320W 4.05GHz Processor 48 2 123.17
AMD EPYC 9354P 32C 280W 3.25GHz Processor 32 1 108.59
AMD EPYC 9224 24C 200W 2.5GHz Processor 48 2 99.37
AMD EPYC 9174F 16C 320W 4.1GHz Processor 32 2 127.33
AMD EPYC 9654P 96C 360W 2.4GHz Processor 96 1 81.32
AMD EPYC 9554P 64C 360W 3.1GHz Processor 64 1 95.80
AMD EPYC 9454P 48C 290W 2.75GHz Processor 48 1 101.15
AMD EPYC 9634 84C 290W 2.25GHz Processor 168 2 75.93
AMD EPYC 9354 32C 280W 3.25GHz Processor 64 2 108.74
AMD EPYC 9474F 48C 360W 3.6GHz Processor 96 2 107.10
AMD EPYC 9374F 32C 320W 3.85GHz Processor 64 2 119.89
AMD EPYC 9534 64C 280W 2.45GHz Processor 128 2 88.51
AMD EPYC 9454 48C 290W 2.75GHz Processor 96 2 101.15
AMD EPYC 9334 32C 210W 2.7GHz Processor 64 2 103.83
AMD EPYC 9254 24C 200W 2.9GHz Processor 48 2 108.69
AMD EPYC 9124 16C 200W 3.0GHz Processor 32 2 103.23
AMD EPYC 9554 64C 360W 3.1GHz Processor 128 2 95.94
AMD EPYC 9654 96C 360W 2.4GHz Processor 192 2 79.83
AMD EPYC 9734 2.2GHz 112-Core Processor 224 2 70.98
AMD EPYC 9754 2.25GHz 128-Core Processor 256 2 67.31
System (AMD Milan) # of Cores across CPUs # of CPUs CINT2006/core
AMD EPYC 7663P CPU 2.00 GHz 56 1 58.65
AMD EPYC 7643P CPU 2.30 GHz 48 1 64.46
AMD EPYC 7303P CPU 2.40 GHz 16 1 78.54
AMD EPYC 7203P CPU 2.80 GHz 8 1 84.25
AMD EPYC 7303 CPU 2.40 GHz 32 2 77.65
AMD EPYC 7203 CPU 2.80 GHz 16 2 83.90
AMD EPYC 7313P CPU 3.00 GHz 16 1 89.25
AMD EPYC 7443P CPU 2.85 GHz 24 1 84.49
AMD EPYC 7713P CPU 2.00 GHz 64 1 55.78
AMD EPYC 7543P CPU 2.80 GHz 32 1 80.62
AMD EPYC 7413 CPU 2.65 GHz 24 1 81.32
AMD EPYC 7763 CPU 2.45 GHz 64 1 61.43
AMD EPYC 7343 CPU 3.20 GHz 16 1 90.44
AMD EPYC 7453 CPU 2.75 GHz 28 1 74.80
AMD EPYC 75F3 CPU 2.95 GHz 32 1 83.90
AMD EPYC 7663 CPU 2.00 GHz 56 1 60.52
AMD EPYC 72F3 CPU 3.70 GHz 8 1 106.15
AMD EPYC 73F3 CPU 3.50 GHz 16 1 98.77
AMD EPYC 74F3 CPU 3.20 GHz 24 1 88.46
AMD EPYC 7643 CPU 2.30 GHz 48 1 65.65
AMD EPYC 7543 CPU 2.8 GHz 64 2 80.03
AMD EPYC 7713 CPU 2.0 GHz 128 2 55.04
AMD EPYC 7443 CPU 2.85 GHz 48 2 84.69
AMD EPYC 7313 CPU 3.0 GHz 32 2 90.74
AMD EPYC 7513 CPU 2.6 GHz 64 2 73.78
AMD EPYC 7373X FIO (16 cores, 768 M Cache, 3.8 GHz, DDR4 3200MHz) 32 2 97.58
AMD EPYC 7473X FIO (24 cores, 768 M Cache, 3.7 GHz, DDR4 3200MHz) 48 2 88.85
AMD EPYC 7573X FIO (32 cores, 768 M Cache, 3.6 GHz, DDR4 3200MHz) 64 2 84.94
AMD EPYC 7773X FIO (64 cores, 768 M Cache, 3.5 GHz, DDR4 3200MHz) 128 2 60.10
System (AMD Rome) # of Cores across CPUs # of CPUs CINT2006/core
AMD EPYC 7742 CPU 2.25GHz 128 2 49.31
AMD EPYC 7702 CPU 2.00GHz 128 2 46.11
AMD EPYC 7502 CPU 2.5GHz 64 2 62.77
AMD EPYC 7452 CPU 2.35GHz 64 2 59.35
AMD EPYC 7402 CPU 2.80GHz 48 2 68.23
AMD EPYC 7302 CPU 3.00GHz 32 2 68.72
AMD EPYC 7502P CPU 2.50GHz 32 1 63.37
AMD EPYC 7262 CPU 3.20GHz 16 2 74.38
AMD EPYC 7261 CPU 2.50GHz 16 2 55.93
AMD EPYC 7H12 CPU 2.60GHz 128 2 51.17
AMD EPYC 7662 CPU 2.00GHz 128 2 48.72
AMD EPYC 7642 CPU 2.30GHz 96 2 56.72
AMD EPYC 7552 CPU 2.20GHz 96 2 50.58
AMD EPYC 7532 CPU 2.40GHz 64 2 65.00
AMD EPYC 7272 CPU 2.90GHz 24 2 64.26
AMD EPYC 7352 CPU 2.30GHz 48 2 62.67
AMD EPYC 7302P CPU 3.0GHz 16 1 69.02
AMD EPYC 7402P CPU 2.8GHz 24 1 67.04
AMD EPYC 7702P CPU 2.0GHz 64 1 47.45
AMD EPYC 7232P CPU 3.1GHz 8 1 67.83
AMD EPYC 7282 CPU 2.8GHz 16 1 66.64
AMD EPYC 7542 CPU 2.9GHz 64 2 61.29
AMD EPYC 7F72 CPU 3.3GHz 48 2 72.99
AMD EPYC 7F52 CPU 3.5GHz 32 2 85.09
AMD EPYC 7252 CPU 3.1GHz 16 2 69.62
AMD EPYC 7F32 CPU 3.70GHz 16 2 88.06


CVM (Cores, Memory, HDD, SSD)

CVM Cores

Sizer will first compute the number of cores needed for the workloads.  The sum of all the workload cores is called TotalCores in this equation.

Each workload type  has its own number of cores

  • NumVDICores
  • NumDBCores (SQL Server)
  • NumServVirtCores
  • NumRawCores.  Note: coreCVMOverhead is a user input for the RAW workload that sets the CVM cores, with the default being 4 cores
  • NumServerComputingCores
  • NumSplunkCores
  • NumXenAppCores
  • NumFileServicesCores
  • Number of Oracle cores (its CVM weight is set to 6)

Sizer then applies a weighted average of CVM cores for these workloads, with a range of 4 to 6 cores per node depending on the workload mix:

CVM cores per node = (NumVDICores / TotalCores) * 4 + (NumDBCores / TotalCores) * 6 + (NumExchgCores / TotalCores) * 6 + (NumServVirtCores / TotalCores) * 4 + (NumRawCores / TotalCores) * coreCVMOverhead + (NumServerComputingCores / TotalCores) * 4 + (NumSplunkCores / TotalCores) * 6 + (NumXenAppCores / TotalCores) * 4 + (NumFileServicesCores / TotalCores) * 4 + (number of Oracle cores / TotalCores) * 6

For example, if only VDI is in a scenario, then the NumVDICores / TotalCores ratio is 1 and 4 cores are assigned to each node for the CVM.

The workload equations produce a CVM core count per node for the entire configuration that depends on the workload balance but is between 4 and 6 CVM cores per node.  A sketch of this calculation is shown below.
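The following is a hedged sketch of that weighted average, assuming a simple per-workload weight table.  The weights come from the equation above; the dictionary keys are illustrative names, not Sizer identifiers.

```python
# Illustrative sketch of the weighted-average CVM core calculation.
CVM_CORE_WEIGHT = {
    "VDI": 4, "SQLServer": 6, "Exchange": 6, "ServerVirt": 4,
    "ServerComputing": 4, "Splunk": 6, "XenApp": 4, "FileServices": 4, "Oracle": 6,
}

def cvm_cores_per_node(workload_cores, raw_cvm_overhead=4):
    """workload_cores: dict of workload type -> cores that workload requires."""
    total = sum(workload_cores.values())
    cvm = 0.0
    for workload, cores in workload_cores.items():
        weight = raw_cvm_overhead if workload == "RAW" else CVM_CORE_WEIGHT[workload]
        cvm += (cores / total) * weight
    return cvm

print(cvm_cores_per_node({"VDI": 120}))                   # VDI only -> 4.0
print(cvm_cores_per_node({"VDI": 60, "SQLServer": 60}))   # mixed -> 5.0
```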

 

CVM Memory

 

CVM memory will vary by Platform type

 

 

Platform Default Memory (GB)
VDI, server virtualization 20
Storage only 28
Light Compute 28
Large server, high-performance, all-flash 32

 

Feature Addon CVM Memory

 

Sizer adds on the following amount of memory for features as noted below

 

 

Features Memory (GB)
Capacity tier deduplication (includes performance tier deduplication) 16
Redundancy factor 3 8
Performance tier deduplication 8
Cold-tier nodes + capacity tier deduplication 4
Capacity tier deduplication + redundancy factor 3 16
Self-service portal (AHV only)

 

  • With Asterix.1, there is no need to add memory beyond what is allocated for Platform CVM memory
 

Sizer's approach to calculating CVM Memory

  • First determine the Platform type CVM Memory from the tables.  As we do a sizing for a given model, determine what type of model it is (it should be a table to allow updates) and assign the appropriate CVM memory per node (20, 28, or 32 GB per node).
  • Next we add memory for addons, which cannot take the total higher than 32 GB
    • Add CVM memory for extras.  Total CVM Memory = Min (Platform CVM Memory + Addon memory, 32), where addon memory =
    • If RF3 = 8 GB
    • Dedupe only = 16 GB
    • Both RF3 and Dedupe = 16 GB
    • No addons = 0 GB
    • Compression = 0 GB
  • If EPIC workload, take MAX(32 GB, result found in step 2).  The result should be at least 32 GB but may be more.  If not EPIC, go to step 4
  • Add memory for the hypervisor.  Looking at best practices for AHV, ESX and Hyper-V, we can assume 8 GB is needed for the hypervisor.  Though not a CVM memory requirement per se, it is a per-node requirement and so this is a good place to add it (versus a new line item in the Sizing details).
    • Total CVM Memory = Total CVM Memory + 8 GB (the full flow is sketched below)
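Here is a hedged sketch of these steps.  The platform values, the 32 GB cap, the EPIC floor, and the 8 GB hypervisor add are from the text; the function signature is illustrative.

```python
# Illustrative sketch of the CVM memory steps.
def cvm_memory_per_node(platform_gb, rf3=False, dedupe=False, epic=False):
    if dedupe:
        addon = 16                            # dedupe, alone or combined with RF3
    elif rf3:
        addon = 8
    else:
        addon = 0                             # no addons / compression only
    total = min(platform_gb + addon, 32)      # platform + addons capped at 32 GB
    if epic:
        total = max(32, total)                # EPIC workloads get at least 32 GB
    return total + 8                          # plus 8 GB for the hypervisor

print(cvm_memory_per_node(20, rf3=True))      # 3060-G5 example: shows 36 with hypervisor
print(cvm_memory_per_node(28, dedupe=True))   # 8035-G5 example: capped at 32, shows 40
```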

Examples.

  • Either manual or automatic sizing is sizing a 3060-G5.  RF3 is turned on for one workload.  The user wants SSP.  Not an EPIC workload

CVM memory per node

·        Platform CVM Memory for 3060-G5 = 20 GB

·        Add-on feature CVM requirement = 8 GB

·        Hypervisor = 8 GB

CVM Memory per node = 28 GB.  Will show 36 GB with the hypervisor

  • Either manual or automatic sizing is sizing a 1065-G5.  RF3 and Dedupe are OFF.  Not an EPIC workload

CVM memory per node

·        Platform CVM Memory for 1065-G5 = 20 GB

·        Add on feature  CVM requirement = 0 GB

·        Hypervisor = 8 GB

CVM Memory per node = 20GB.  Will show 28GB with hypervisor

  • Either manual or automatic sizing is sizing an 8035-G5.  Dedupe is turned on for one workload and the user wants SSP.  Not an EPIC workload

CVM memory per node

·        Platform CVM Memory for 8035-G5 = 28GB

·        Add on feature  CVM requirement = 16 GB

·        Hypervisor = 8 GB

CVM Memory per node = 32 GB.  Though the addon requires 16 GB, we reached the maximum of 32 GB for the platform and addons together.  Will show 40 GB with the hypervisor

CVM HDD

Below is how the CVM HDD overhead is calculated.

Ext4: 5% of all HDD in TiB
Genesis: 5% of all HDD in TiB after Ext4 is discounted
Curator: Max (2% * HDD in TiB, 60 GiB) for the 1st HDD + Max (2% * HDD in TiB, 20 GiB) for each remaining HDD
Let us take an example and see how this calculation works
HDD Capacity per node in TB: 32
Number of Nodes in Cluster: 3
Cluster total HDD Capacity in TB: 96
Cluster total HDD Capacity in TiB: 87.31

The example assumes each node has 4 x 8TB HDDs

Capacity of 1st HDD in TB: 8
Capacity of 1st HDD in TiB: 7.28
Capacity of all remaining HDDs in TB: 88
Capacity of all remaining HDDs in TiB: 80.04

Let us take the above numbers in the example and derive the HDD CVM overhead

Ext4: 5% of all HDD in TiB = 4.37
Genesis: 5% of all HDD in TiB after Ext4 is discounted = 4.15
Curator: Max (2% * HDD in TiB, 60 GiB) for the 1st HDD + Max (2% * HDD in TiB, 20 GiB) for each remaining HDD = 1.75
Total CVM Overhead = 10.26
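The following hedged sketch reproduces this example (3 nodes, 4 x 8 TB HDDs each).  Applying the per-drive Curator floors per node is an assumption; with 8 TB drives the 2% term dominates either way.

```python
# Illustrative sketch of the HDD CVM overhead calculation (all results in TiB).
TB_TO_TIB = 1000**4 / 1024**4                  # ~0.9095

def hdd_cvm_overhead_tib(hdd_tib_per_drive, drives_per_node, nodes):
    total = hdd_tib_per_drive * drives_per_node * nodes
    ext4 = 0.05 * total                        # 5% of all HDD
    genesis = 0.05 * (total - ext4)            # 5% after Ext4 is discounted
    curator = 0.0
    for _node in range(nodes):
        for drive in range(drives_per_node):
            floor_gib = 60 if drive == 0 else 20   # 1st HDD vs remaining HDDs
            curator += max(0.02 * hdd_tib_per_drive, floor_gib / 1024)
    return ext4 + genesis + curator

drive_tib = 8 * TB_TO_TIB                      # an 8 TB drive is ~7.28 TiB
print(round(hdd_cvm_overhead_tib(drive_tib, 4, 3), 2))   # ~10.26
```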

CVM SSD

CVM SSD per node:

Nutanix Home: 60 GiB for the first 2 SSDs.  This assumes all nodes have at least 2 SSDs.

 

  • If there is just one SSD, as on the 6035c, then just 60 GiB.
Ext4: 5% of each SSD (after downstroke, in GiB) after the Nutanix Home capacity is taken.
Genesis: 5% of each SSD in GiB after Ext4 is taken
Cassandra: For homogeneous clusters this applies to all nodes:

Max(30 GiB per node, 3% of HDD raw capacity + 3% of SSD raw capacity)

For heterogeneous clusters, find the largest node and then apply the above equation to all nodes.

Oplog: Oplog reservation per node = MIN(0.25 * (SSD space left after the Cassandra, cache, and Curator reservations), 400 GiB)
Content cache: 20 GB per node, converted to GiB
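Below is a hedged per-node sketch of these SSD components.  Exactly which reservations are subtracted before the Oplog term, and whether Nutanix Home is 60 GiB on each of the first two SSDs, are assumptions here; treat the numbers as illustrative rather than Sizer's exact arithmetic.

```python
# Illustrative per-node sketch of the SSD CVM components (all figures in GiB).
def ssd_cvm_per_node_gib(ssd_gib_each, ssd_count, hdd_raw_gib, ssd_raw_gib):
    nutanix_home = 60 * min(2, ssd_count)        # assumption: 60 GiB on each of first 2 SSDs
    ssd_total = ssd_gib_each * ssd_count
    ext4 = 0.05 * (ssd_total - nutanix_home)     # 5% after Nutanix Home
    genesis = 0.05 * (ssd_total - nutanix_home - ext4)   # 5% after Ext4
    cassandra = max(30, 0.03 * hdd_raw_gib + 0.03 * ssd_raw_gib)
    content_cache = 20 * 1000**3 / 1024**3       # 20 GB expressed in GiB (~18.6)
    reserved = nutanix_home + ext4 + genesis + cassandra + content_cache
    oplog = min(0.25 * (ssd_total - reserved), 400)  # Oplog capped at 400 GiB
    return reserved + oplog

# e.g. a node with 2 x 1.92 TB SSDs (~1788 GiB each) and 4 x 8 TB HDDs (~29802 GiB raw)
print(round(ssd_cvm_per_node_gib(1788, 2, 29802, 2 * 1788), 1))
```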

 

What are the details on CVM overheads

  • HDD numbers can be seen by clicking the “I” button
  • SSD numbers can be seen by clicking the “I” button
  • In the case of AF, all the CVM components are applied to the SSD CVM as shown below

 

Getting 1 node or 2 node clusters

New rules in Sizer for Regular Models and ROBO Models (October 2018)

Regular Models

Rules

  • All models included
  • All use cases are allowed – main cluster application, remote cluster application and remote snapshots
  • 3+ nodes is recommended

Summary

    • This is the default in Sizer and is used most of the time
    • Fits best practices for a data center to have 3 or more nodes
    • Huge benefit, as the Sizer user can stay in this mode to size 1175S or other vendors' small models if they want 3+ nodes anyhow.  No need to go to ROBO mode

 

  • Note: This removes a previous Sizer user headache, where they wanted to size these models for 3+ nodes and got confused about where to go

 

What changes

  • The smaller nodes such as the 1175S are included in the list for running main cluster applications vs just remote applications and remote snapshots

ROBO Models

Rules

    • All models are included, but only some can be sized for 1 or 2 nodes
    • All use cases – main cluster application, remote cluster application and remote snapshots
    • All models can be 3+ nodes depending on sizing requirements

 

  • ONLY certain models (aka ROBO models) can be 1 or 2 nodes

 

    • Note there is no CPU restriction.  Basically PM decides which models are ROBO, and they can be 1 or 2 CPUs

Summary

  • The user would ONLY need to go to ROBO if they feel the solution fits in 1 or 2 nodes
    • If the size of the workloads requires 3+ nodes, Sizer simply reports the required nodes, and the recommendation is no different than in Regular
    • They feel the 1 or 2 node restrictions are fine.
      • The list of ROBO models is fine for the customer
      • RF for 1 node is at the disk level, not the node level
      • Some workloads like AFS require 3 nodes and so are not available

What changes

  • All models can be used in ROBO where before it was just the ROBO models

No quoting in Sizer for Robo

Currently there is a minimum number of units or deal size when quoting ROBO.  Sizer will size the opportunity and tell you that you should quote X units.  Given that it takes 10 or more units, and you may want to club together multiple projects, we disabled quoting from Sizer when the scenario includes the 1175S

 

No Optimal Solution

At times no optimal solution can be found

Typical – No Optimal Solution Found Issues

When Sizer cannot find a solution given the various settings and constraints, it will simply say No Optimal Solution found.  For example, if you set the node count to 3 nodes and ask for extremely large workloads, it will say No Optimal Solution found, as there is no 3-node solution to cover that many users.

So here is the list of common things users set that may cause No Optimal Solution.

  • Node count set too low in the Auto Sizing panel
  • Budget set too low in Auto Sizing
  • Models restricted to, say, the 1065 while asking for enough demand to require more than 8 nodes
  • NearSync selected while using ROBO models like the 1175S

What to do

  • Get back to Automatic Sizing with
    • No Node Count filter
    • No Max Budget filter
    • Set model types to All Regular models

 

Compression Sizing

Compression Settings

  • In each workload, there are the following compression settings
    • Disable compression for pre-compressed data.
      • This turns off compression in Sizer.  It is a good idea if the customer has mostly pre-compressed data for that workload.  Though it may be tempting to turn off compression all the time to be conservative, it is hard to economically build large All Flash solutions without any compression, and it is also unrealistic that no data compression is possible.  Thus use this sparingly
    • Enable Compression
      • This is always ON for All Flash, because post-process compression is turned ON for AF as it comes from the factory.
      • By default it is ON for Hybrid, but the user can turn it OFF
    • Container Compression
      • There is a slider that can go from 1:1 (0% savings) to 2:1 (50% savings).
      • The range varies by workload.  We review Pulse data on various workloads; typically the savings are 30% to 50%.  For Splunk, the maximum is 15%, as the application does a fair amount of pre-compression before data is stored in Acropolis.

What Sizer will do if Compression is turned ON

  • Post-process compression is what Sizer sizes for.  The compression algorithm in Acropolis is LZ4, which runs about every 6 hours; occasionally LZ4-HC goes through cold-tier data that is over a day old and compresses it further.
  • First the workload HDD and SSD requirements are computed without compression.  This includes the workload and RF overhead
  • Compression is then applied.
  • Example.  The workload requires 4.39 TiB (be it SSD or HDD), RF3 is used for the Replication Factor, and Compression is set to 30%.  A sketch of this math follows the list.
    • Workload Total in Sizing Details = 4.39 TiB
    • RF Overhead in Sizing Details = 4.39 * 2 = 8.79 TiB  (with RF3 there are 2 extra copies, while with RF2 there is just one extra copy)
    • Compression Savings in Sizing Details = 30% * (Workload + RF Overhead) = 30% * (4.39 + 8.79) = 3.96 TiB
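Here is a short sketch of that math.  The 4.39 TiB workload, RF3, and 30% are from the example; the text's 8.79 and 3.96 figures reflect rounding of a more precise workload value.  The snapshot overheads described in the next sections are added inside the same bracket before the percentage is applied.

```python
# Illustrative sketch of the compression-savings calculation.
def rf_overhead_tib(workload_tib, rf):
    return workload_tib * (rf - 1)        # RF2 -> 1 extra copy, RF3 -> 2 extra copies

def compression_savings_tib(workload_tib, rf, compression_pct):
    return compression_pct * (workload_tib + rf_overhead_tib(workload_tib, rf))

print(round(rf_overhead_tib(4.39, 3), 2))               # ~8.78 (8.79 in the text)
print(round(compression_savings_tib(4.39, 3, 0.30), 2)) # ~3.95 (3.96 in the text)
```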

Deduplication

  • Deduplication does not affect the compression sizing

Local Snapshots

  • First the local snapshots are computed using what the user enters for the daily change rate and the number of snapshots retained (hourly, daily, weekly)
  • RF is applied to the local snapshots as extra copies need to be made.
  • Compression is applied
  • Example
    • The workload requires 4.39 TiB HDD, RF3 is used for the Replication Factor, and Compression is set to 30%
    • Daily change rate = 1% with 24 hourly snapshots, 7 daily snapshots, 4 weekly snapshots
    • Local Snapshot Overhead in Sizing Details = 1.76 TiB  (explained in a separate section)
    • Snapshots RF Overhead in Sizing Details = 2 * 1.76 TiB = 3.52 TiB (with RF3 there are 2 extra copies, while with RF2 there is just one extra copy)
    • Compression Savings in Sizing Details = 30% * (Workload + RF Overhead + Local Snapshot Overhead + Snapshots RF Overhead) = 30% * (4.39 + 8.79 + 1.76 + 3.52) = 30% * 18.46 = 5.54 TiB
      • Though there are a lot of numbers, this is saying compression is applied to all the cold user data (not CVM)

Remote Snapshots

  • Using the same example used in local snapshots, but adding remote snapshots put on a different cluster
  • Remote Snapshot Overhead in Sizing Details = 6.64 TiB  (note this is just for the remote cluster; also explained in a separate section)
  • Snapshots RF Overhead in Sizing Details = 13.28 TiB  (note this is just for the remote cluster, and remember it is RF3)
  • Compression Savings in Sizing Details = 30% * (6.64 + 13.28) = 5.98 TiB
    • Though there are a lot of numbers, this is saying compression is applied to all the cold user data (not CVM)

Misc

  • If compression is ON, then only the Pro or Ultimate license is used in the financial assumptions and in the financial analysis section of the BOM

Login Information and Vendor Support

This is a common concern among users, as they will see different login approaches and vendor support.

Login Approaches

My Nutanix Login –  This is for registered partners and for all Nutanix employees.  Most sizings will be done using this login approach.  You can do a complete sizing, including generating a BOM or budgetary quotes.  You cannot attach a BOM to an SFDC opportunity or generate a quote in SFDC.

Salesforce Login –  This is for Nutanix employees with an SFDC account and is used by the Nutanix field, who have access to SFDC.  You can do a complete sizing, including generating a BOM or budgetary quotes.  You can also attach a BOM to an SFDC opportunity or generate a quote in SFDC.

Vendor Support

When you create a scenario you select what vendor the scenario should use, meaning their models.  Nutanix employees have access to all current vendors.

Partners often have to be registered partners with a given vendor.  When a partner logs in via My Nutanix their roles are retrieved and only those vendors are allowed.

Partners that feel they should be registered for a given vendor can send an email request to:  partnerhelp@nutanix.com

Prospect Sizer

For customers we have Prospect Sizer.  It is the same Sizer, updated when we post a new sprint, but with limitations

  • Intended for a prospect to get an initial sizing for a Nutanix solution
    • Not intended to be the final configuration to get a quote
    • Not intended to provide full Sizer capability, where competitors could see what a Nutanix partner will most likely bid
  • What it can do
    • Get a sizing for VDI, RDSH/XenApp, Server Virtualization, RAW
    • Allow the prospect to do some sizings within a 3-day period
  • What it can not do
    • No financial analysis or financial assumptions.
    • No sizing details
    • Set to homogeneous sizing only (no mixed or manual)
    • Standard sizing only (not aggressive or conservative)
    • No BOM
    • Limited to 3 scenarios and 3 workloads per scenario maximum
    • List pricing used for determining recommendation (not margin)
    • No customization allowed
    • No Resiliency and Availability section

To access Prospect Sizer the customer should go here

https://productsizer.nutanix.com

If they have not registered or need to re-register they will be directed to the registration page

Workload Modules Overview

To add a workload simply click on the Add Workload link.    That will pop up  the Add Workload page.

As shown below there are several workload options

  • VDI  – This is a virtual desktop environment for different user profiles
  • XenApp –  This is a session-based desktop and application environment for different user profiles.
  • Server Virtualization –  This is for users wanting to deploy web applications
  • SQL Server –  This is for users wanting to deploy SQL Server
  • RAW Input –  This is to simulate other workloads
  • File Services (AFS) –  This is for customers who want to store files on our infrastructure

 

  • Once the workload type is selected you can make edits as necessary

One thing that is nice in 3.0 is that all the workload parameters are on one page.  Thus it is easier to make edits and see all the parameters at once.

How to change a profile?

 

Many of the workloads have profiles like small, medium, large VM or SQL server.  VDI has different user profiles which can be edited

How to define snapshots and Disaster Recovery?

If Data Protection is set to Yes, then the following options are available

  • Local snapshots – here snapshots are kept locally
  • Local and remote snapshots – Here snapshots are in both clusters
  • Disaster Recovery and Local and Remote snapshots –  Here in addition to snapshots we duplicate the resources required to run the workload on the remote cluster to support asynchronous disaster recovery.

 

If either Remote snapshots or Disaster Recovery is expected, then a Remote Cluster needs to be specified.  Sizer also needs to know the snapshot frequency in minutes, the amount of change expected within a snapshot period, and the number of snapshots retained.  The same policy is applied to local and remote snapshots.

Scenario Page Overview

Though it looks like a complicated page, it is organized neatly into different parts.  Looking from the upper right and going clockwise:

  • Sizing Options –  This shows the current specification for how you want Sizer to do a sizing, like Automatic with All Flash, Manual, etc.
  • Hardware summary.  Shows the model that was recommended.  Multiple rows cover mixed clusters with different types of nodes.
  • Sizing Summary.  This shows the current results for the recommendation.  The dials show the utilization of CPU, RAM, HDD, and SSD for all clusters combined.  Later Sizer will allow for a per-cluster view
  • Sizing Details.  Here all the workloads are summarized and the total required resources are summed for all the workloads.  In the larger table the recommendation’s sizing details are shown in terms of CPU, RAM, HDD, and SSD usage to cover the workloads, RF2, CVM, etc.
  • Workloads.  The left panel lists the workloads in the scenario
  • Actions button.  Here various actions can be performed on the scenario such as downloading a BOM.

Sizing Charts

The point of Sizing Charts is simply to present the numbers in Sizing Details as charts.  Any value in Sizing Charts is reflected in Sizing Details.  Sizing Details, being thorough, is frankly a table with a lot of numbers; Sizing Charts puts it in nice charts if the user wants to see it.  It is also good to capture in proposals.

Separate from this is the Storage Calculator, which allows you to enter your own set of nodes and see the extent store and a derived effective capacity.  That is NOT tied to the scenario in terms of workloads, recommendation, or sizing details.  More info is on a different page.

Here is the Sizing Details for a Scenario

This is a sample scenario that is used to describe the charts.

Here are the Sizing Charts for this scenario

There is an option to view all charts at once.  You can see there is a 1:1 correspondence between the Sizing Details and the charts for Cores, RAM, HDD, and SSD.  Also shown is the breakout for SSD CVM and HDD CVM.  A technical customer may want to see the details in graphical form, and this covers it.

Each of these can be looked at individually so you can just look at what interests you

Cores

Here you see the sizing elements for Cores.  The tooltip shows the applied weight and memory adjustments.  The donut shows the CVM, the workload requirement, and the usable remaining cores.

RAM

 

This is the RAM chart.  In this scenario there is 17 TiB of RAM available.  The CVM consumes 1 TiB, the workload consumes 14.45 TiB, and 2.04 TiB remain.

HDD

This shows HDD.  The total amount of HDD space is RAW plus storage efficiency, which in this case is just compression.  Dedupe and EC-X are two other technologies that save space.  Because of compression savings we can actually deal with a total of 418.49 TiB; that number is the size of the donut chart.  From there, the things that consume space are the workload, RF overhead, and CVM.  Usable remaining HDD is then 122.59 TiB.  In Sizing Details it is reported as Usable remaining capacity (assuming RF2) = 122.59/2 = 61.3, or Usable remaining capacity (assuming RF3) = 122.59/3 = 40.86

SSD

This shows SSD.  The total amount of SSD space is RAW plus storage efficiency, which in this case is just compression.  Dedupe and EC-X are two other technologies that save space.  For SSD (as explained in Sizing Details) we also add back the Oplog because, being a write journal, it is user space.  Because of compression savings and adding back the Oplog we can actually deal with a total of 302.06 TiB; that number is the size of the donut chart.  From there, the things that consume space are the workload, RF overhead, and CVM.  Usable remaining SSD is then 31.34 TiB.  In Sizing Details it is reported as Usable remaining capacity (assuming RF2) = 31.34/2 = 15.67, or Usable remaining capacity (assuming RF3) = 31.34/3 = 10.45

HDD CVM

There is a chart that shows the CVM components on HDD.  In the case of an All Flash solution, all CVM components are stored on SSD.

SSD CVM

There is a chart that shows the CVM components on SSD.  In the case of an All Flash solution, all CVM components are stored on SSD.

Questions

What is the purpose of Sizing Charts?  –  Simply to give a graphical picture of the numbers in Sizing Details.  Here you can see the Cores, RAM, HDD, and SSD that are consumed and what is available for RF2 or RF3.  It is tied to the scenario, so changing workloads or models changes the charts.

What is the purpose of the Storage Calculator?  –  It is separate from the scenario in terms of workloads and recommendations.  It is intended to let the user scope the amount of storage available for a given set of nodes.  It answers what the potential storage for those nodes is.

Do I need the Sizing Charts?  –  Since they are a 100% duplicate of Sizing Details, not necessarily.  They do give a graphical view though.