Cold tier data adjusted in Hot tier

Sizer always tries to propose the optimal solution in terms of resources and cost.

As part of this effort, in certain cases Sizer moves workload data that would normally sit on cold tier storage (HDDs) onto hot tier storage (SSDs).

This happens when there is surplus, unutilized SSD capacity in the BOM. The unutilized flash capacity is used for cold tier data if it helps reduce the overall number of nodes, or if it avoids adding extra disks to meet a large HDD requirement that the spare SSD capacity can satisfy. The defined threshold levels are maintained and do not change for this adjustment.

The adjustment appears as a separate row in the calculation table and in the Sizer BOM. Sample below:

 

Backup Sizing

All Nutanix workloads now support backups. This does the following:

  • For any workload you can define a backup policy in terms of the number of full and incremental backups.
  • When invoked, Sizer computes the backup storage that is needed and puts it in a standalone cluster. Only backup workloads can be included in the backup standalone cluster(s).
  • Sizer also allocates cores, RAM, and storage for third-party backup software in the backup cluster.
  • In the future, you will be able to specify the backup hardware to be used in the backup cluster(s).
  • Alternatively, we do offer Files Pro and Buckets Pro standalone clusters as targets.

The inputs are as follows (a sketch of the resulting storage math appears after the list):

  • Recovery Point Objective – the time since the last backup (incremental or full). This represents the point in time to which you can recover data.
    • For example, with a 24-hour RPO, the last backup will have occurred at most 24 hours before the point you want to recover.
  • Backup cycles – the number of cycles you want retained.
  • Full backups in a cycle – typically 1 but can be more. All the data in the workload is backed up.
  • Incremental backups in a cycle – typically several; the amount of data is the percent change * workload data.
  • Retention in Days – Backup cycles * (Full backups per cycle + Incremental backups per cycle)
  • Rate of change – percent change expected between incremental backups.
  • Backup Target – options for holding the data, such as Files Pro.
  • Standalone Cluster – name of the cluster that will hold the backups.
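
Putting the inputs together, here is a minimal sketch of how the retained backup capacity could be estimated. The function name, variable names, and the simple full-plus-incremental model are assumptions for illustration, not Sizer's exact internals (Sizer also adds resources for the third-party backup software).

```python
# Hypothetical sketch of the backup storage math implied by the inputs above.
# Variable names and the simple full-plus-incremental model are assumptions,
# not Sizer's exact internals.

def backup_capacity_tib(workload_data_tib, backup_cycles, fulls_per_cycle,
                        incrementals_per_cycle, rate_of_change):
    """Retained backup storage for one workload, in TiB.

    rate_of_change is the fraction of data changed between incremental
    backups (e.g. 0.05 for 5%).
    """
    full_tib = fulls_per_cycle * workload_data_tib
    incremental_tib = incrementals_per_cycle * rate_of_change * workload_data_tib
    return backup_cycles * (full_tib + incremental_tib)

# 10 TiB workload, 4 retained cycles of 1 full + 6 incrementals at 5% change
print(backup_capacity_tib(10, 4, 1, 6, 0.05))   # 4 * (10 + 3.0) = 52.0 TiB
```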

ROBO VM Solution Sizing

Robo VM Solution

The idea of the ROBO VM Solution is to combine the sizing of ROBO models with decoupled quoting (separate license and hardware).

So in the end you pay for Robo VM licenses and the ROBO hardware but NOT the AOS cores or SSD TiB capacity.

Here I defined a couple of workloads with a total VM count of 100. You can have as many as you want.

Then, in the sizing panel, I selected ROBO Models.

The resulting budgetary quote shows you pay for the Robo VM licenses and the decoupled hardware.

There are, however, some limits.

No user VM can exceed:

  • 32 GB RAM (** this will be enforced in AOS)
  • 2 TiB total HDD and SSD storage per VM
  • 50 TiB total HDD and SSD storage within each standalone cluster

You can have multiple workloads assigned to a cluster; the cluster limit, though, is 50 TiB.

You can have multiple standalone clusters, which can represent different sites. Two clusters of 40 TiB each, for example, is fine.

  • No limit on cores

If any user VM exceeds those constraints, Sizer presents the following error message (a sketch of the check appears below):

“This exceeds the Robo VM limits and so please select Data Center Models”
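
Restated as a simple check, the limits above might look like the following sketch. The constant and function names are hypothetical; only the thresholds and the error message come from the list above.

```python
# Hypothetical restatement of the ROBO VM limits listed above (not Sizer code).
ROBO_MAX_VM_RAM_GB = 32              # per user VM (to be enforced in AOS)
ROBO_MAX_VM_STORAGE_TIB = 2          # HDD + SSD per user VM
ROBO_MAX_CLUSTER_STORAGE_TIB = 50    # HDD + SSD per standalone cluster

def exceeds_robo_limits(vm_ram_gb, vm_storage_tib, cluster_storage_tib):
    """Return True if any ROBO VM limit is exceeded (there is no limit on cores)."""
    return (vm_ram_gb > ROBO_MAX_VM_RAM_GB
            or vm_storage_tib > ROBO_MAX_VM_STORAGE_TIB
            or cluster_storage_tib > ROBO_MAX_CLUSTER_STORAGE_TIB)

if exceeds_robo_limits(vm_ram_gb=48, vm_storage_tib=1.5, cluster_storage_tib=40):
    print("This exceeds the Robo VM limits and so please select Data Center Models")
```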

 

N+0, N+1, N+2 Failover Indicator

This is a BIG sizing improvement: Sizer will always tell you whether you are at the N+0, N+1, or N+2 failover level for each resource (CPU, RAM, HDD, SSD) in each cluster.

Now, as you make changes in automatic or manual sizing, you always know whether you have adequate failover. Best practice is N+1, so you can take down any one node (e.g., take one node offline for an upgrade) and customer workloads can still run.

This can be very hard to figure out on your own. ECX savings, for example, vary by node count. Heterogeneous clusters mean you have to find the largest node for each resource. Multiple clusters mean you have to look at each separately. Sizer does this for you!

Here is what you need to know. 

Let’s take a two-cluster scenario. One, called Cluster-1, is a Nutanix cluster running 900 VDI users and the Files deployment to support those users. The other is a standalone cluster for Files Pro with 100 TB of user data.

All clusters:

In a multi-cluster scenario, All Clusters just provides a summary. Here it shows the two clusters and their hardware. The N+1 indicator on the lower left shows the worst cluster: both are N+1, so you see N+1. Had any cluster been N+0, then N+0 would be shown. This is a great indicator that there is an issue with one of the clusters.

 

 

File cluster

This is the standalone cluster for Files. You see the hardware used in the cluster and the failover level for each resource (CPU, RAM, HDD, SSD). N+2 indicates you could possibly have less of that resource, though product options often force more anyhow. This is a cold-storage-intensive workload, so HDD is the worst case.

Cluster-1

This is the Nutanix cluster for the VDI users. You see the hardware used in the cluster and the failover level for each resource (CPU, RAM, HDD, SSD). This is a core-intensive workload, so CPU is the worst case.

 

Usable Capacity

Usable Remaining Capacity is the amount of storage available to the customer AFTER workloads, RF, and storage savings are applied. It represents what they should have remaining once deployed.

Sizer presents the values in both RF2 and RF3.

Usable Remaining Capacity (Assuming RF2)

  • HDD Usable Remaining Capacity = (Raw + Compression Savings + Dedupe Savings + ECX Savings – Workload – RF Overhead – CVM Overhead) / 2
  • SSD Usable Remaining Capacity = (Raw + Compression Savings + Dedupe Savings + ECX Savings – Workload – RF Overhead – CVM Overhead + Oplog) / 2
  • Notes:
    • Usable capacity is basically RAW plus the storage savings from data reduction techniques like compression, less the workload, RF overhead, and CVM overhead.
    • If All Flash, the Compression Savings, Dedupe Savings, ECX Savings, RF Overhead, and CVM overhead that would be attributed to HDDs are applied to SSDs.
    • For SSD capacity, Oplog is included as part of the CVM overhead for SSDs but is also added back, as it is a write log and so is available for user data. A sketch of this arithmetic is shown below.
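
Here is a minimal sketch of the RF2 arithmetic above. All inputs are per-tier totals in TiB; the function and parameter names are assumptions, not Sizer internals.

```python
# Sketch of the RF2 usable-remaining-capacity formulas above.
# Inputs are per-tier totals in TiB; names are assumptions, not Sizer internals.

def usable_remaining_rf2(raw, compression_savings, dedupe_savings, ecx_savings,
                         workload, rf_overhead, cvm_overhead, oplog=0.0):
    """Usable remaining capacity for one tier at RF2.

    For the SSD tier, pass the Oplog reservation so it is added back (it is a
    write log and remains available for user data); for HDD leave oplog at 0.
    """
    return (raw + compression_savings + dedupe_savings + ecx_savings
            - workload - rf_overhead - cvm_overhead + oplog) / 2

# HDD tier example: 100 TiB raw, 20 TiB of combined savings, 30 TiB workload,
# 30 TiB RF overhead, 10 TiB CVM overhead -> (100 + 20 - 30 - 30 - 10) / 2 = 25 TiB
print(usable_remaining_rf2(100, 10, 5, 5, 30, 30, 10))
```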

Extent Store and Effective Capacity

Extent Store

This is a concept used in the Nutanix Bible. It is the RAW capacity less the CVM overhead. It represents the capacity that is available to the customer.

 

Effective Capacity

Used in the Storage Calculator or DesignBrewz. This is the Extent Store * the Storage Efficiency setting in the Storage Calculator. So if the Extent Store is 10 TiB and the Storage Efficiency factor is set to 1.5:1, then the Effective Capacity is 15 TiB. The Storage Efficiency factor is the expected benefit of storage reduction approaches like compression, dedupe, and ECX. Effective Capacity, then, is what is hoped to be available with these reduction techniques.
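
A quick sketch of that arithmetic, using the 10 TiB / 1.5:1 figures from the paragraph above (the raw and CVM values are assumed for illustration):

```python
# Restating the definitions above: Extent Store = RAW less CVM overhead,
# Effective Capacity = Extent Store * Storage Efficiency factor.
# The raw and CVM figures are assumed for illustration.
raw_tib = 12.0                # raw capacity
cvm_overhead_tib = 2.0        # CVM overhead
storage_efficiency = 1.5      # expected benefit of compression/dedupe/ECX (1.5:1)

extent_store_tib = raw_tib - cvm_overhead_tib                     # 10 TiB
effective_capacity_tib = extent_store_tib * storage_efficiency    # 15 TiB
print(extent_store_tib, effective_capacity_tib)
```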

Cores (Actual Cores, Adjusted Weight, Memory Adjustments like Unbalanced DIMMs)

In Sizing Details you may see an odd number like 40.27 cores for RAW cores as shown below

Actual Core Capacity

This is the total number of cores in the recommendation.

By clicking on the tooltip by the node you get the information

So in this recommendation we have 3 nodes, where each has 2 CPUs and each CPU has 8 cores. The actual core capacity is 3 nodes * 2 CPUs/node * 8 cores/CPU = 48 cores.

Applied Weight

 

Intel designs a wide range of CPUs to meet different market needs. Core count certainly varies, but the speed of a core is not the same across all CPUs.

We need a benchmark to adjust for the core speed differences. We use SPECint 2006. It is the best benchmark in terms of being an industry standard: vendors who publish numbers have to use a standard testing process and publish the numbers publicly. We also see consistency for a given CPU across all the vendors. Thus this is a good benchmark for adjusting for speed differences.

So applied weight is where we have adjusted the cores to the baseline processor, which runs at 42.31 SPECint per core.

Review the Processor Table page for each processor’s core count, SPECint, and adjusted cores.

Using this example, we have a recommendation of 3 nodes, each with two 2620v4 processors. The table (the calculation is shown on that page too) gives the 2620v4 adjusted cores as 14.9 cores for nodes with 2 CPUs.

Thus in this recommendation the total effective cores are 14.91 cores/node * 3 nodes ≈ 44.74 cores, so we take an applied weight adjustment of -3.26.
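
As a sketch, the applied-weight arithmetic above can be written as follows, using the 42.31 SPECint-per-core baseline quoted in this example (the Processor Table section below uses a newer 52.86 baseline). Names are illustrative only.

```python
# Sketch of the applied-weight arithmetic above, using the 42.31 SPECint/core
# baseline quoted in this example.  Names are illustrative only.

BASELINE_SPECINT_PER_CORE = 42.31

def adjusted_cores_per_node(cores_per_cpu, cpus_per_node, specint_per_core):
    """Cores per node normalized to the baseline processor speed."""
    return cores_per_cpu * cpus_per_node * specint_per_core / BASELINE_SPECINT_PER_CORE

# 3 nodes, each with 2 x E5-2620v4 (8 cores per CPU at 39.44 SPECint per core)
nodes = 3
actual_cores = nodes * 2 * 8                                      # 48 actual cores
effective_cores = nodes * adjusted_cores_per_node(8, 2, 39.44)    # ~44.74
applied_weight = effective_cores - actual_cores                   # ~-3.26
print(round(effective_cores, 2), round(applied_weight, 2))
```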

 

Memory Adjustments

Broadwell Processors

With Broadwell processors, an “unbalanced DIMM” configuration depends on how the DIMMs are laid out on the motherboard. When it occurs, there is a 10% increase in access latency.

To determine whether to take a discount, Sizer takes the total count of DIMMs in a node and divides by 4. If the result is odd, the configuration is unbalanced and Sizer applies the discount. If even, no reduction is needed.

Example

12x32GB in a node: 12 DIMMs / 4 = 3 (odd), so unbalanced
8x32GB in a node: 8 DIMMs / 4 = 2 (even), so balanced

If unbalanced, core capacity is reduced:

– Actual Core Capacity = Cores/Node * Node count
– Applied Weight = extra or fewer cores vs the baseline
– Adjustment due to Memory Issues = -10% * (Raw Cores + Applied Weight)

Note that on a single-processor system NO adjustment is needed. A sketch of the rule follows.
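
Here is a minimal sketch of the Broadwell rule described above; the function name and the integer-division treatment of DIMM counts that are not multiples of 4 are assumptions.

```python
# Sketch of the Broadwell rule above: divide the node's DIMM count by 4; an odd
# result means "unbalanced" and a -10% core adjustment.  No adjustment applies
# to single-processor systems.  Handling of DIMM counts that are not multiples
# of 4 is an assumption.

def broadwell_memory_adjustment(dimms_per_node, raw_cores, applied_weight,
                                single_processor=False):
    """Core adjustment due to unbalanced DIMMs (0.0 if balanced)."""
    if single_processor:
        return 0.0
    unbalanced = (dimms_per_node // 4) % 2 == 1    # e.g. 12 / 4 = 3 -> odd -> unbalanced
    if not unbalanced:
        return 0.0
    return -0.10 * (raw_cores + applied_weight)

print(broadwell_memory_adjustment(12, 48, -3.26))  # 12 x 32GB node -> about -4.47
print(broadwell_memory_adjustment(8, 48, -3.26))   # 8 x 32GB node -> 0.0 (balanced)
```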

Skylake Processors

Skylake processors are more complex than Broadwell in terms of whether a system has unbalanced DIMMs.

We now test for the following

  • CPU – Skylake
  • Model – Balanced_Motherboard – true (described below)
  • Memory bandwidth – go with the slower figure of either the memory or the CPU. If 2133 MHz, a -10% memory adjustment is taken. If 2400 MHz or 2666 MHz (most common with Skylake models), we take a 0% adjustment.

Like before, we find the DIMM count per socket. There are typically 2 sockets (CPUs), but there can be 1, and 4-socket models are starting to be introduced.

Using the quantity of DIMMs per socket, we apply the following rules.

If the CPU is Skylake:

  • If the DIMM count per socket is 5, 7, 9, 10, or 11, then the model is considered unbalanced and we take a -50% memory adjustment
  • If the DIMM count per socket is 2, 3, 4, or 12, it is balanced and the memory adjustment = 0%
  • If the model is balanced and the DIMM count per socket is 6 or 8, then it is balanced and the memory adjustment = 0%
  • If the model is unbalanced and the DIMM count per socket is 6 or 8, then it is unbalanced and the memory adjustment = -50%

After determining the adjustment percent, we make the adjustment as we do currently:

  • Actual core capacity = Total cores in the cluster
  • Applied weight = adjustment vs baseline specint
  • Adjustment = Adjustment Percent * (Actual core capacity – Applied weight)
With Skylake, it can matter how the DIMMs are arranged on the motherboard. We have PM review that, and so far all models are laid out in a balanced fashion.

Here is a doc that shows the options. A sketch of the DIMM rules above follows.
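
A minimal sketch of the Skylake DIMM rules above; model_is_balanced stands in for the PM-reviewed Balanced_Motherboard attribute, and the memory-speed check is omitted.

```python
# Sketch of the Skylake DIMM rules above.  model_is_balanced stands in for the
# PM-reviewed Balanced_Motherboard attribute; the memory-speed check is omitted.

def skylake_memory_adjustment_pct(dimms_per_socket, model_is_balanced=True):
    """Memory adjustment fraction for a Skylake node (0.0 or -0.50)."""
    if dimms_per_socket in (5, 7, 9, 10, 11):
        return -0.50                       # always unbalanced
    if dimms_per_socket in (2, 3, 4, 12):
        return 0.0                         # always balanced
    if dimms_per_socket in (6, 8):
        return 0.0 if model_is_balanced else -0.50
    return 0.0                             # other counts: assumed balanced

print(skylake_memory_adjustment_pct(9))                            # -0.5
print(skylake_memory_adjustment_pct(6, model_is_balanced=False))   # -0.5
print(skylake_memory_adjustment_pct(12))                           # 0.0
```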

 

 

Processor Table

Here is the table of processors

The first 5 columns are from Spec.org

https://www.spec.org/cgi-bin/osgresults

SPECint Adjusted Cores simply adjusts the cores against a baseline of 52.86 SPECint per core.

Note that in the SPECint tests, typically 2-CPU systems are tested, so the core counts shown cover both CPUs.

For example, the 2620v4 has 16 cores but only at 39.44 SPECint per core

  • SPECint adjusted cores = 16 * SPECint per core / Baseline = 16 * 39.44/52.86 = 11.94
  • Basically, this is saying the 2620 v4 has 16 cores but it is equivalent to 11.94 baseline cores in 2 CPU nodes
  • For a single CPU, it would be just 11.94/2 = 5.97

Looking at a high-speed CPU, the 6128 has just 12 cores but screams along at 68.07 SPECint per core.

  • Specint Adjusted cores = 12 * specint per core/ baseline = 12 * 68.07/52.86 = 15.45
  • Basically, this is saying the 6128 has 12 cores but it is equivalent to 15.45 baseline cores
System (Emerald Rapids) # Cores(2 socket) # Chips CINT2006/core
Intel Xeon Silver 4514Y 16C 150W 2.0GHz Processor 32 2 79.14
Intel Xeon Silver 4509Y 8C 125W 2.6GHz Processor 16 2 101.75
Intel Xeon Silver 4510 12C 150W 2.4GHz Processor 24 2 95.20
Intel Xeon Gold 6548N 32C 250W 2.8GHz Processor 64 2 94.16
Intel Xeon Gold 5512U 28C 185W 2.1GHz Processor 28 1 87.38
Intel Xeon Gold 5515+ 8C 165W 3.2GHz Processor 16 2 105.91
Intel Xeon Gold 6526Y 16C 195W 2.8GHz Processor 32 2 100.26
Intel Xeon Gold 6542Y 24C 250W 2.9GHz Processor 48 2 101.15
Intel Xeon Gold 6548Y+ 32C 250W 2.5GHz Processor 64 2 94.46
Intel Xeon Gold 6534 8C 195W 3.9GHz Processor 16 2 116.62
Intel Xeon Gold 6544Y 16C 270W 3.6GHz Processor 32 2 114.54
Intel Xeon Gold 5520+ 28C 205W 2.2GHz Processor 56 2 86.87
Intel Xeon Gold 6538Y+ 32C 225W 2.2GHz Processor 64 2 93.12
Intel Xeon Platinum 8592V 64C 330W 2.0GHz Processor 128 2 72.00
Intel Xeon Platinum 8581V 60C 270W 2.0GHz Processor 60 1 74.89
Intel Xeon Platinum 8571N 52C 300W 2.4GHz Processor 52 1 86.60
Intel Xeon Platinum 8558U 48C 300W 2.0GHz Processor 48 1 81.32
Intel Xeon Platinum 8568Y+ 48C 350W 2.3GHz Processor 96 2 89.45
Intel Xeon Platinum 8580 60C 350W 2.0GHz Processor 120 2 80.13
Intel Xeon Platinum 8592+ 64C 350W 1.9GHz Processor 128 2 75.86
Intel Xeon Platinum 8562Y+ 32C 300W 2.8GHz Processor 64 2 100.41
Intel Xeon Platinum 8558 48C 330W 2.1GHz Processor 48 1 81.32
System (Sapphire Rapids) # Cores(2 socket) # Chips CINT2006/core
Intel Gold 6414U 32C 2.0GHz 250W 32 1 76.16
Intel Silver 4410Y 12C 2.0GHz 135W-145W 24 2 83.30
Intel Silver 4416+ 20C 2.1GHz 165W 40 2 83.06
Intel Silver 4410T 10C 2.7GHz 150W 20 2 97.10
Intel Gold 5415+ 8C 2.9GHz 150W 16 2 102.34
Intel Gold 5418Y 24C 2.1GHz 185W 48 2 81.32
Intel Gold 5420+ 28C 1.9-2.0GHz 205W 56 2 78.20
Intel Gold 6426Y 16C 2.6GHz 185W 32 2 94.90
Intel Gold 6430 32C 1.9GHz 270W 64 2 75.57
Intel Gold 6434 8C 3.9GHz 205W 16 2 113.05
Intel Gold 6438Y+ 32C 1.9-2.0GHz 205W 64 2 82.26
Intel Gold 6442Y 24C 2.6GHz 225W 48 2 94.21
Intel Gold 6444Y 16C 3.5GHz 270W 32 2 110.67
Intel Gold 6448Y 32C 2.2GHz 225W 64 2 83.60
Intel Gold 6438M 32C 2.2GHz 205W 64 2 82.41
Intel Gold 5418N 24C 1.8GHz 165W 48 2 75.96
Intel Gold 6428N 32C 1.8GHz 185W 64 2 72.59
Intel Gold 6438N 32C 2.0GHz 205W 64 2 80.47
Intel Gold 5416S 16C 2.0GHz 150W 32 2 82.11
Intel Gold 6454S 32C 2.2GHz 270W 64 2 79.58
Intel Platinum 8462Y+ 32C 2.8GHz 300W 64 2 94.46
Intel Platinum 8452Y 36C 2.0GHz 300W 72 2 78.80
Intel Platinum 8460Y+ 40C 2.0GHz 300W 80 2 78.18
Intel Platinum 8468 48C 2.1GHz 350W 96 2 80.23
Intel Platinum 8470 52C 2.0GHz 350W 104 2 78.81
Intel Platinum 8480+ 56C 2.0GHz 350W 112 2 73.61
Intel Platinum 8490H 60C 1.9GHz 350W 120 2 71.56
Intel Platinum 8470N 52C 1.7GHz 300W 104 2 69.57
Intel Platinum 8468V 48C 2.4GHz 330W 96 2 76.46
Intel Platinum 8458P 44C 2.7GHz 350W 88 2 82.76
Intel Xeon Platinum 8468H 48C 330W 2.1GHz Processor 96 2 76.66
Intel Xeon Platinum 8454H 32C 270W 2.1GHz Processor 64 2 72.74
Intel Xeon Platinum 8450H 28C 250W 2.0GHz Processor 56 2 79.90
Intel Xeon Platinum 8444H 16C 270W 2.9GHz Processor 32 2 96.69
Intel Xeon Platinum 8460H 40C 330W 2.2GHz Processor 80 2 83.66
Intel Xeon Gold 6448H 32C 250W 2.4GHz Processor 64 2 89.85
Intel Xeon Gold 6418H 24C 185W 2.1GHz Processor 48 2 79.53
Intel Xeon Gold 6416H 18C 165W 2.2GHz Processor 36 2 85.42
Intel Xeon Gold 6434H 8C 195W 3.7GHz Processor 16 2 119.00
Intel Xeon Platinum 8470Q 52C 2.10 GHz Processor 104 2 79.36
Intel Xeon Gold 6458Q 32C 3.10 GHz Processor 64 2 101.45
Intel Xeon-B 3408U 8C 8 1 50.69
Intel Xeon-G 5412U 24C 24 1 85.28
Intel Xeon-G 5411N 24C 165W 1.9GHz Processor 24 1 82.51
Intel Xeon-G 6421N 32C 185W 1.8GHz Processor 32 1 78.54
Intel Xeon Platinum 8461V 48C 300W 2.2GHz Processor 48 1 75.37
Intel Xeon Platinum 8471N 52C 300W 1.8GHz Processor 52 1 75.43
System (Ice Lake) # Cores(2 socket) # Chips CINT2006/core
Intel® Xeon® Platinum 8368Q Processor (57M Cache, 2.60 GHz) 76 2 64.51
Intel® Xeon® Platinum 8360Y Processor (54M Cache, 2.40 GHz) 52 2 94.83
Intel® Xeon® Platinum 8358P Processor (48M Cache, 2.60 GHz) 64 2 70.81
Intel® Xeon® Platinum 8352Y Processor (48M Cache, 2.20 GHz) 64 2 65.30
Intel® Xeon® Platinum 8352V Processor (54M Cache, 2.10 GHz) 72 2 56.72
Intel® Xeon® Platinum 8352S Processor (48M Cache, 2.20 GHz) 64 2 65.30
Intel® Xeon® Platinum 8351N Processor (54M Cache, 2.40 GHz) 36 1 67.43
Intel® Xeon® Gold 6338N Processor (48M Cache, 2.20 GHz) 64 2 63.37
Intel® Xeon® Gold 6336Y Processor 48 2 71.00
Intel® Xeon® Gold 6330N Processor (42M Cache, 2.20 GHz) 56 2 61.20
Intel® Xeon® Gold 5318Y Processor 48 2 63.07
Intel® Xeon® Gold 5315Y Processor 16 2 82.11
Intel® Xeon® Silver 4309Y Processor 16 2 79.73
Intel® Xeon® Platinum 8380 Processor (60M Cache, 2.30 GHz) 80 2 66.28
Intel® Xeon® Platinum 8368 Processor (57M Cache, 2.40 GHz) 76 2 68.02
Intel® Xeon® Platinum 8358 Processor (48M Cache, 2.60 GHz) 64 2 73.48
Intel® Xeon® Gold 6354 Processor (39M Cache, 3.00 GHz) 36 2 81.45
Intel® Xeon® Gold 6348 Processor (42M Cache, 2.60 GHz) 56 2 74.63
Intel® Xeon® Gold 6346 Processor (36M Cache, 3.10 GHz) 32 2 83.60
Intel® Xeon® Gold 6342 Processor 48 2 76.16
Intel® Xeon® Gold 6338 Processor (48M Cache, 2.00 GHz) 64 2 62.03
Intel® Xeon® Gold 6334 Processor 16 2 86.87
Intel® Xeon® Gold 6330 Processor (42M Cache, 2.00 GHz) 56 2 62.22
Intel® Xeon® Gold 6326 Processor 32 2 78.24
Intel® Xeon® Gold 5320 Processor 52 2 66.09
Intel® Xeon® Gold 5317 Processor 24 2 80.13
Intel® Xeon® Silver 4316 Processor 40 2 67.12
Intel® Xeon® Silver 4314 Processor 32 2 69.62
Intel® Xeon® Silver 4310 Processor 24 2 66.64
Intel Xeon Gold 6338T processor (2.1 GHz/ 24-core/ 165W) 48 2 63.27
Intel Xeon Gold 5320T processor (2.3 GHz/ 20-core/ 150W) 40 2 66.40
Intel Xeon Silver 4310T processor (2.3 GHz/ 10-core/ 105W) 20 2 70.45
Intel Xeon Gold 6314U processor (2.30 GHz/32-core/205W) 32 1 67.24
Intel Xeon Gold 6312U processor (2.40 GHz/24-core/185W) 24 1 73.38
System(Cascade Lake) # Cores(2 socket) # Chips CINT2006/core
CPU (2.10 GHz, Intel Xeon Gold 6230) 40 2 52.65
CPU (2.30 GHz, Intel Xeon Gold 5218) 32 2 54.28
CPU (2.30 GHz, Intel Xeon Gold 5218B) 32 2 54.28
CPU (2.60 GHz, Intel Xeon Gold 6240) 36 2 60.12
CPU (2.10 GHz, Intel Xeon Gold 6252) 48 2 51.68
CPU (2.30 GHz, Intel Xeon Gold 6252N) 48 2 50.71
CPU (2.20 GHz, Intel Xeon Platinum 8276) 56 2 52.71
CPU (2.20 GHz, Intel Xeon Silver 4210) 20 2 52.14
CPU (2.20 GHz, Intel Xeon Silver 4214) 24 2 53.49
CPU (2.20 GHz, Intel Xeon Silver 4214Y) 24 2 53.49
CPU (2.10 GHz, Intel Xeon Silver 4216) 32 2 52.82
CPU (2.50 GHz, Intel Xeon Gold 5215) 20 2 57.48
CPU (2.50 GHz, Intel Xeon Gold 6248) 40 2 55.16
CPU (2.50 GHz, Intel Xeon Silver 4215) 16 2 58.07
CPU (2.60 GHz, Intel Xeon Gold 6240Y) 36 2 59.08
CPU (2.70 GHz, Intel Xeon Platinum 8270) 52 2 59.19
CPU (2.70 GHz, Intel Xeon Platinum 8280) 56 2 58.91
CPU (2.70 GHz, Intel Xeon Platinum 8280M) 56 2 57.32
CPU (2.90 GHz, Intel Xeon Platinum 8268) 48 2 62.15
CPU (3.00 GHz, Intel Xeon Gold 5217) 16 2 64.33
CPU (3.80 GHz, Intel Xeon Gold 5222) 8 2 77.44
CPU (2.10 GHz, Intel Xeon Silver 4208) 16 2 49.73
CPU (2.70 GHz, Intel Xeon 6226) 24 2 64.75
CPU (3.3 GHz, Intel Xeon Gold 6234) 16 2 75.44
CPU (2.8 GHz, Intel Xeon Gold 6242) 32 2 64.54
CPU (2.2 GHz, Intel Xeon Gold 5220) 36 2 52.86
CPU (2.1 GHz, Intel Xeon Gold 6238) 44 2 51.78
CPU (3.6 GHz, Intel Xeon Gold 6244) 16 2 80.92
CPU (3.3 GHz, Intel Xeon Gold 6246) 24 2 70.97
CPU (2.5 GHz, Intel Xeon Gold 6248) 40 2 55.16
CPU (3.1 GHz, Intel Xeon Gold 6254) 36 2 69.1
CPU (1.8 GHz, Intel Xeon Gold 6222V) 40 2 47.6
CPU (1.9 GHz, Intel Xeon Gold 6262V) 48 2 48
CPU (2.5 GHz, Intel Xeon Gold 5215M) 20 2 56.53
CPU (2.1 GHz, Intel Xeon Gold 6238M) 44 2 51.57
CPU (2.6 GHz, Intel Xeon Gold 6240M) 36 2 57.78
CPU (2.5 GHz, Intel Xeon Gold 5215L) 20 2 57.48
CPU (2.1 GHz, Intel Xeon Gold 6238L) 44 2 52
CPU (2.4 GHz, Intel Xeon Platinum 8260) 48 2 57.42
CPU (2.4 GHz, Intel Xeon Platinum 8260L) 48 2 57.22
CPU (2.4 GHz, Intel Xeon Platinum 8260M) 48 2 55.63
CPU (2.9 GHz, Intel Xeon Platinum 8268) 48 2 62.15
CPU (2.7 GHz, Intel Xeon Platinum 8270) 52 2 59.19
CPU (2.2 GHz, Intel Xeon Platinum 8276) 56 2 52.71
CPU (2.2 GHz, Intel Xeon Platinum 8280) 56 2 58.91
CPU (2.2 GHz, Intel Xeon Platinum 8280M) 56 2 57.32
CPU (2.2 GHz, Intel Xeon Platinum 8276M) 56 2 49.69
CPU (2.2 GHz, Intel Xeon Platinum 8276L) 56 2 50.02
CPU (2.2 GHz, Intel Xeon Platinum 8280L) 56 2 58.36
CPU (2.4 GHz, Intel Xeon Platinum 8260Y) 48 2 55.83
CPU (2.5 GHz, Intel Xeon Gold 6210U) 20 1 59.98
CPU (1.9 GHz, Intel Xeon Gold 3206R) 16 2 47.6
CPU (2.4 GHz, Intel Xeon Gold 4210R) 20 2 60.45
CPU (2.4 GHz, Intel Xeon Gold 4214R) 24 2 64.26
CPU (3.2 GHz, Intel Xeon Gold 4215R) 16 2 73.19
CPU (2.1 GHz, Intel Xeon Gold 5218R) 40 2 58.79
CPU (2.2 GHz, Intel Xeon Gold 5220R) 48 2 54.74
CPU (2.1 GHz, Intel Xeon Gold 6230R) 52 2 56.94
CPU (2.9 GHz, Intel Xeon Gold 6226R) 32 2 71.7
CPU (2.4 GHz, Intel Xeon Gold 6240R) 48 2 59.5
CPU (3.1 GHz, Intel Xeon Gold 6242R) 40 2 72.11
CPU (2.2 GHz, Intel Xeon Gold 6238R) 56 2 54.06
CPU (3.0 GHz, Intel Xeon Gold 6248R) 48 2 66.84
CPU (2.7 GHz, Intel Xeon Gold 6258R) 56 2 61.54
CPU (3.9 GHz, Intel Xeon Gold 6250) 16 2 81.49
CPU (3.6 GHz, Intel Xeon Gold 6256) 24 2 77.39
CPU (3.4 GHz, Intel Xeon Gold 6246R) 32 2 70.06
CPU Type CPU Family SpecInt2006Rate # of Cores Specint2006Rate per CORE Specint Adjusted Cores Cores per CPU
2699v3 Haswell 1389 36 38.58 32.8 18
2630v3 Haswell 688 16 43.00 16.3 8
2620v3 Haswell 529 12 44.08 12.5 6
2697v3 Haswell 1236 28 44.14 29.2 14
2680v3 Haswell 1063 24 44.31 25.1 12
2660v3 Haswell 900 20 45.00 21.3 10
2640v3 Haswell 725 16 45.31 17.1 8
2623v3 Haswell 424 8 53.00 10.0 4
2643v3 Haswell 690 12 57.50 16.3 6
2620v2 Ivy Bridge 429 12 35.75 10.1 6
2697v2 Ivy Bridge 962 24 40.08 22.7 12
2630v2 Ivy Bridge 505 12 42.08 11.9 6
2680v2 Ivy Bridge 846 20 42.31 20.0 10
2650v2 Ivy Bridge 681 16 42.55 16.1 8
2690v2 Ivy Bridge 888 20 44.40 21.0 10
2643v2 Ivy Bridge 634 12 52.83 15.0 6
2620v1 Sandy Bridge 390 12 32.50 9.2 6
2670v1 Sandy Bridge 640 16 40.00 15.1 8
2690v1 Sandy Bridge 685 16 42.81 16.2 8
2637v3 Haswell 472 8 59.00 11.2 4
2698v3 Haswell 1290 32 40.31 30.5 16
E5-2609v4 Broadwell 415 16 25.94 9.8 8
E5-2620v4 Broadwell 631 16 39.44 14.9 8
E5-2630v4 Broadwell 795 20 39.75 18.8 10
E5-2640v4 Broadwell 844 20 42.20 19.9 10
E5-2643v4 Broadwell 703 12 58.58 16.6 6
E5-2650v4 Broadwell 984 24 41.00 23.3 12
E5-2660v4 Broadwell 1090 28 38.93 25.8 14
E5-2680v4 Broadwell 1200 28 42.86 28.4 14
E5-2690v4 Broadwell 1300 28 46.43 30.7 14
E5-2695v4 Broadwell 1370 36 38.06 32.4 18
E5-2697v4 Broadwell 1460 36 40.56 34.5 18
E5-2698v4 Broadwell 1540 40 38.50 36.4 20
E5-2699v4 Broadwell 1690 44 38.41 39.9 22
3106 Skylake 431.4 16 26.9625 10.2 8
4108 Skylake 629.65 16 39.353125 14.9 8
4109T Skylake 667.92 16 41.745 15.8 8
4110 Skylake 693.24 16 43.3275 16.4 8
4112 Skylake 412.91 8 51.61375 9.8 4
4114 Skylake 890.6 20 44.53 21.0 10
4116 Skylake 1030.87 24 42.95291667 24.4 12
5115 Skylake 969.14 20 48.457 22.9 10
5118 Skylake 1133.2 24 47.21666667 26.8 12
5120 Skylake 1271.56 28 45.41285714 30.1 14
5122 Skylake 544.38 8 68.0475 12.9 4
6126 Skylake 1304.67 24 54.36125 30.8 12
6128 Skylake 816.91 12 68.07583333 19.3 6
6130 Skylake 1516.45 32 47.3890625 35.8 16
6132 Skylake 1524.55 28 54.44821429 36.0 14
6134 Skylake 1037.72 16 64.8575 24.5 8
6134M Skylake 1085 16 67.8125 25.6 8
6136 Skylake 1451 24 60.45833333 34.3 12
6138 Skylake 1748.89 40 43.72225 41.3 20
6140 Skylake 1752.86 36 48.69055556 41.4 18
6140M Skylake 1810 36 50.27777778 42.8 18
6142 Skylake 1688.5 32 52.765625 39.9 16
6142M Skylake 1785 32 55.78125 42.2 16
6143 Skylake 1950 32 60.9375 46.1 16
6144 Skylake 1113 16 69.5625 26.3 8
6146 Skylake 1534.44 24 63.935 36.3 12
6148 Skylake 1921.3 40 48.0325 45.4 20
6150 Skylake 1903.75 36 52.88194444 45.0 18
6152 Skylake 1951.18 44 44.345 46.1 22
6154 Skylake 2062 36 57.27777778 48.7 18
8153 Skylake 1326.88 32 41.465 31.4 16
8156 Skylake 550.81 8 68.85125 13.0 4
8158 Skylake 1464 24 61 34.6 12
8160 Skylake 2152.5 48 44.84375 50.9 24
8160M Skylake 2285 48 47.60416667 54.0 24
8164 Skylake 2204 52 42.38461538 52.1 26
8165 Skylake 2500 48 52.08333333 59.1 24
8168 Skylake 2454.12 48 51.1275 58.0 24
8170 Skylake 2282.86 52 43.90115385 54.0 26
8170M Skylake 2420 52 46.53846154 57.2 26
8176 Skylake 2386.87 56 42.62267857 56.4 28
8176M Skylake 2507 56 44.76785714 59.2 28
8180 Skylake 2722.38 56 48.61392857 64.3 28
8180M Skylake 2710 56 48.39285714 64.0 28
System (AMD Genoa) # of Cores across CPUs # of CPUs CINT2006/core
AMD EPYC 9274F 24C 320W 4.05GHz Processor 48 2 123.17
AMD EPYC 9354P 32C 280W 3.25GHz Processor 32 1 108.59
AMD EPYC 9224 24C 200W 2.5GHz Processor 48 2 99.37
AMD EPYC 9174F 16C 320W 4.1GHz Processor 32 2 127.33
AMD EPYC 9654P 96C 360W 2.4GHz Processor 96 1 81.32
AMD EPYC 9554P 64C 360W 3.1GHz Processor 64 1 95.80
AMD EPYC 9454P 48C 290W 2.75GHz Processor 48 1 101.15
AMD EPYC 9634 84C 290W 2.25GHz Processor 168 2 75.93
AMD EPYC 9354 32C 280W 3.25GHz Processor 64 2 108.74
AMD EPYC 9474F 48C 360W 3.6GHz Processor 96 2 107.10
AMD EPYC 9374F 32C 320W 3.85GHz Processor 64 2 119.89
AMD EPYC 9534 64C 280W 2.45GHz Processor 128 2 88.51
AMD EPYC 9454 48C 290W 2.75GHz Processor 96 2 101.15
AMD EPYC 9334 32C 210W 2.7GHz Processor 64 2 103.83
AMD EPYC 9254 24C 200W 2.9GHz Processor 48 2 108.69
AMD EPYC 9124 16C 200W 3.0GHz Processor 32 2 103.23
AMD EPYC 9554 64C 360W 3.1GHz Processor 128 2 95.94
AMD EPYC 9654 96C 360W 2.4GHz Processor 192 2 79.83
AMD EPYC 9734 2.2GHz 112-Core Processor 224 2 70.98
AMD EPYC 9754 2.25GHz 128-Core Processor 256 2 67.31
System (AMD Milan) # of Cores across CPUs # of CPUs CINT2006/core
AMD EPYC 7663P CPU 2.00 GHz 56 1 58.65
AMD EPYC 7643P CPU 2.30 GHz 48 1 64.46
AMD EPYC 7303P CPU 2.40 GHz 16 1 78.54
AMD EPYC 7203P CPU 2.80 GHz 8 1 84.25
AMD EPYC 7303 CPU 2.40 GHz 32 2 77.65
AMD EPYC 7203 CPU 2.80 GHz 16 2 83.90
AMD EPYC 7313P CPU 3.00 GHz 16 1 89.25
AMD EPYC 7443P CPU 2.85 GHz 24 1 84.49
AMD EPYC 7713P CPU 2.00 GHz 64 1 55.78
AMD EPYC 7543P CPU 2.80 GHz 32 1 80.62
AMD EPYC 7413 CPU 2.65 GHz 24 1 81.32
AMD EPYC 7763 CPU 2.45 GHz 64 1 61.43
AMD EPYC 7343 CPU 3.20 GHz 16 1 90.44
AMD EPYC 7453 CPU 2.75 GHz 28 1 74.80
AMD EPYC 75F3 CPU 2.95 GHz 32 1 83.90
AMD EPYC 7663 CPU 2.00 GHz 56 1 60.52
AMD EPYC 72F3 CPU 3.70 GHz 8 1 106.15
AMD EPYC 73F3 CPU 3.50 GHz 16 1 98.77
AMD EPYC 74F3 CPU 3.20 GHz 24 1 88.46
AMD EPYC 7643 CPU 2.30 GHz 48 1 65.65
AMD EPYC 7543 CPU 2.8 GHz 64 2 80.03
AMD EPYC 7713 CPU 2.0 GHz 128 2 55.04
AMD EPYC 7443 CPU 2.85 GHz 48 2 84.69
AMD EPYC 7313 CPU 3.0 GHz 32 2 90.74
AMD EPYC 7513 CPU 2.6 GHz 64 2 73.78
AMD EPYC 7373X FIO (16 cores, 768 M Cache, 3.8 GHz, DDR4 3200MHz) 32 2 97.58
AMD EPYC 7473X FIO (24 cores, 768 M Cache, 3.7 GHz, DDR4 3200MHz) 48 2 88.85
AMD EPYC 7573X FIO (32 cores, 768 M Cache, 3.6 GHz, DDR4 3200MHz) 64 2 84.94
AMD EPYC 7773X FIO (64 cores, 768 M Cache, 3.5 GHz, DDR4 3200MHz) 128 2 60.10
System (AMD Rome) # of Cores across CPUs # of CPUs CINT2006/core
AMD EPYC 7742 CPU 2.25GHz 128 2 49.31
AMD EPYC 7702 CPU 2.00GHz 128 2 46.11
AMD EPYC 7502 CPU 2.5GHz 64 2 62.77
AMD EPYC 7452 CPU 2.35GHz 64 2 59.35
AMD EPYC 7402 CPU 2.80GHz 48 2 68.23
AMD EPYC 7302 CPU 3.00GHz 32 2 68.72
AMD EPYC 7502P CPU 2.50GHz 32 1 63.37
AMD EPYC 7262 CPU 3.20GHz 16 2 74.38
AMD EPYC 7261 CPU 2.50GHz 16 2 55.93
AMD EPYC 7H12 CPU 2.60GHz 128 2 51.17
AMD EPYC 7662 CPU 2.00GHz 128 2 48.72
AMD EPYC 7642 CPU 2.30GHz 96 2 56.72
AMD EPYC 7552 CPU 2.20GHz 96 2 50.58
AMD EPYC 7532 CPU 2.40GHz 64 2 65.00
AMD EPYC 7272 CPU 2.90GHz 24 2 64.26
AMD EPYC 7352 CPU 2.30GHz 48 2 62.67
AMD EPYC 7302P CPU 3.0GHz 16 1 69.02
AMD EPYC 7402P CPU 2.8GHz 24 1 67.04
AMD EPYC 7702P CPU 2.0GHz 64 1 47.45
AMD EPYC 7232P CPU 3.1GHz 8 1 67.83
AMD EPYC 7282 CPU 2.8GHz 16 1 66.64
AMD EPYC 7542 CPU 2.9GHz 64 2 61.29
AMD EPYC 7F72 CPU 3.3GHz 48 2 72.99
AMD EPYC 7F52 CPU 3.5GHz 32 2 85.09
AMD EPYC 7252 CPU 3.1GHz 16 2 69.62
AMD EPYC 7F32 CPU 3.70GHz 16 2 88.06

CVM (Cores, Memory, HDD, SSD)

CVM Cores

Sizer will first compute the number of cores needed for the workloads.  The sum of all the workload cores is called TotalCores in this equation.

Each workload type has its own number of cores:

  • NumVDICores
  • NumDBCores (SQL Server)
  • NumServVirtCores
  • NumRawCores. Note: coreCVMOverhead is a user input for the RAW workload to set CVM cores, with the default being 4 cores.
  • NumServerComputingCores
  • NumSplunkCores
  • NumXenAppCores
  • NumFileServicesCores
  • Number of Oracle cores (weighted at 6 in the equation below)

Sizer then applies a weighted average of CVM cores for these workloads, ranging from 4 to 6 cores per node depending on the workload mix:

CVM cores per node = (NumVDICores / TotalCores) * 4
+ (NumDBCores / TotalCores) * 6
+ (NumExchgCores / TotalCores) * 6
+ (NumServVirtCores / TotalCores) * 4
+ (NumRawCores / TotalCores) * coreCVMOverhead
+ (NumServerComputingCores / TotalCores) * 4
+ (NumSplunkCores / TotalCores) * 6
+ (NumXenAppCores / TotalCores) * 4
+ (NumFileServicesCores / TotalCores) * 4
+ (NumOracleCores / TotalCores) * 6

For example, if only VDI is in a scenario, then the NumVDICores / TotalCores ratio is 1 and 4 cores are assigned per node for the CVM.

The workload equations arrive at a CVM core count per node for the entire configuration that depends on the workload balance but is between 4 and 6 CVM cores per node. A sketch of this calculation is shown below.
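
Here is a minimal sketch of that weighted average. The per-workload weights restate the list above; the dictionary keys and function name are assumptions.

```python
# Sketch of the weighted-average CVM core calculation above.  The weights
# restate the list; the dictionary keys and function name are assumptions.

CVM_CORE_WEIGHT = {
    "VDI": 4, "SQLServer": 6, "Exchange": 6, "ServerVirt": 4,
    "ServerComputing": 4, "Splunk": 6, "XenApp": 4, "FileServices": 4, "Oracle": 6,
}

def cvm_cores_per_node(workload_cores, raw_cvm_overhead=4):
    """workload_cores: dict mapping workload type -> cores that workload needs."""
    total = sum(workload_cores.values())
    weighted = 0.0
    for workload, cores in workload_cores.items():
        weight = raw_cvm_overhead if workload == "RAW" else CVM_CORE_WEIGHT[workload]
        weighted += (cores / total) * weight
    return weighted

print(cvm_cores_per_node({"VDI": 100}))                   # 4.0  (VDI only)
print(cvm_cores_per_node({"VDI": 50, "SQLServer": 50}))   # 5.0  (even mix)
```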

 

CVM Memory

 

CVM memory will vary by Platform type

 

 

Platform Default Memory (GB)
VDI, server virtualization 20
Storage only 28
Light Compute 28
Large server, high-performance, all-flash 32

 

Feature Addon CVM Memory

 

Sizer adds on the following amount of memory for features as noted below

 

 

Features Memory (GB)
Capacity tier deduplication (includes performance tier deduplication) 16
Redundancy factor 3 8
Performance tier deduplication 8
Cold-tier nodes + capacity tier deduplication 4
Capacity tier deduplication + redundancy factor 3 16
Self-service portal (AHV only)

  • With Asterix.1 no need to add memory beyond what is allocated for Platform CVM memory

Sizer approach to calculate CVM Memory

  • First determine the platform-type CVM memory from the tables. As we do a sizing for a given model, determine what type of model it is (it should be a table to allow updates) and assign the appropriate CVM memory per node (20, 28, or 32 GB per node).
  • Next we add memory for add-ons; platform plus add-on memory cannot go higher than 32 GB.
    • Add CVM memory for extras. Total CVM Memory = Min(Platform CVM Memory + Addon memory, 32), where addon memory =
    • If RF3 = 8 GB
    • Dedupe only = 16 GB
    • Both RF3 and Dedupe = 16 GB
    • No addons = 0 GB
    • Compression = 0 GB
  • If it is an EPIC workload, take MAX(32 GB, result found in step 2). Here it should be at least 32 GB but may be more. If not EPIC, go to step 4.
  • Add memory for the hypervisor. Looking at best practices for AHV, ESXi, and Hyper-V, we can assume 8 GB is needed for the hypervisor. Though not a CVM memory requirement per se, it is a per-node requirement and so this is a good place to add it (versus a new line item in the Sizing details). A sketch of these steps follows the list.
    • Total CVM Memory = Total CVM Memory + 8 GB
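
Here is a minimal sketch of the steps above, assuming the platform lookup has already produced the per-node platform CVM memory (20, 28, or 32 GB). Names are illustrative only.

```python
# Sketch of the CVM memory steps above.  The platform table lookup is assumed
# to have already produced platform_cvm_gb (20, 28, or 32).

def cvm_memory_per_node_gb(platform_cvm_gb, rf3=False, dedupe=False,
                           epic=False, hypervisor_gb=8):
    # Step 2: add-on memory, capped so platform + add-ons never exceed 32 GB
    if dedupe:
        addon_gb = 16            # dedupe alone or combined with RF3
    elif rf3:
        addon_gb = 8
    else:
        addon_gb = 0             # compression or no add-ons
    total = min(platform_cvm_gb + addon_gb, 32)
    # Step 3: EPIC workloads get at least 32 GB
    if epic:
        total = max(32, total)
    # Step 4: add per-node hypervisor memory (shown alongside CVM memory in Sizer)
    return total, total + hypervisor_gb

print(cvm_memory_per_node_gb(20, rf3=True))       # (28, 36) -- first example below
print(cvm_memory_per_node_gb(20))                 # (20, 28) -- second example below
print(cvm_memory_per_node_gb(28, dedupe=True))    # (32, 40) -- third example below
```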

Examples.

  • Either manual or automatic sizing is sizing a 3060-G5. RF3 is turned on for one workload. The user wants SSP. Not an EPIC workload.

CVM memory per node

  • Platform CVM Memory for 3060-G5 = 20 GB
  • Add-on feature CVM requirement = 8 GB
  • Hypervisor = 8 GB

CVM Memory per node = 28 GB. Will show 36 GB with hypervisor.

  • Either manual or automatic sizing is sizing a 1065-G5. RF3 and Dedupe are OFF. Not an EPIC workload.

CVM memory per node

  • Platform CVM Memory for 1065-G5 = 20 GB
  • Add-on feature CVM requirement = 0 GB
  • Hypervisor = 8 GB

CVM Memory per node = 20 GB. Will show 28 GB with hypervisor.

  • Either manual or automatic sizing is sizing an 8035-G5. Dedupe is turned on for one workload and SSP is wanted. Not an EPIC workload.

CVM memory per node

  • Platform CVM Memory for 8035-G5 = 28 GB
  • Add-on feature CVM requirement = 16 GB
  • Hypervisor = 8 GB

CVM Memory per node = 32 GB. Though the add-on requires 16 GB, we reached the maximum of 32 GB for the platform and add-ons together. Will show 40 GB with hypervisor.

CVM HDD

Below is how the CVM HDD overhead is calculated.

  • Ext4: 5% of all HDD in TiB
  • Genesis: 5% of all HDD in TiB after Ext4 is discounted
  • Curator: Max(2% * HDD in TiB, 60 GiB) for the 1st HDD + Max(2% * HDD in TiB, 20 GiB) for each remaining HDD

Let us take an example and see how this calculation works:

  • HDD capacity per node: 32 TB
  • Number of nodes in the cluster: 3
  • Cluster total HDD capacity: 96 TB (87.31 TiB)

The example assumes each node has 4 x 8TB HDDs

  • Capacity of the 1st HDD: 8 TB (7.28 TiB)
  • Capacity of all remaining HDDs: 88 TB (80.04 TiB)

Let us take the above numbers in the example and derive the HDD CVM overhead

  • Ext4 (5% of all HDD in TiB): 4.37 TiB
  • Genesis (5% of all HDD in TiB after Ext4 is discounted): 4.15 TiB
  • Curator (Max(2% * HDD in TiB, 60 GiB) for the 1st HDD + Max(2% * HDD in TiB, 20 GiB) for each remaining HDD): 1.75 TiB
  • Total CVM overhead: 10.26 TiB (a sketch of this calculation follows)
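
Here is a minimal sketch that reproduces the worked example above (3 nodes with 4 x 8 TB HDDs each, following the example's treatment of one "1st HDD"). TB figures are decimal, TiB/GiB binary; names are illustrative.

```python
# Sketch of the CVM HDD overhead above, following the worked example literally
# (one "1st HDD", all remaining HDDs together).  TB is decimal, TiB/GiB binary.

TB_TO_TIB = 1000**4 / 1024**4        # ~0.9095 TiB per TB
GIB_IN_TIB = 1 / 1024                # 1 GiB expressed in TiB

def cvm_hdd_overhead_tib(hdd_tb_sizes):
    """hdd_tb_sizes: capacities in TB of every HDD in the cluster."""
    tib = [tb * TB_TO_TIB for tb in hdd_tb_sizes]
    total_tib = sum(tib)
    ext4 = 0.05 * total_tib                               # 5% of all HDD in TiB
    genesis = 0.05 * (total_tib - ext4)                   # 5% after Ext4 is discounted
    first, rest = tib[0], tib[1:]
    curator = max(0.02 * first, 60 * GIB_IN_TIB) \
        + sum(max(0.02 * d, 20 * GIB_IN_TIB) for d in rest)
    return ext4 + genesis + curator

disks = [8] * 12                                          # 3 nodes x 4 x 8 TB HDDs
print(round(cvm_hdd_overhead_tib(disks), 2))              # ~10.26 TiB
```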

CVM SSD

CVM SSD per node:

  • Nutanix Home: 60 GiB for the first 2 SSDs, assuming all nodes have at least 2 SSDs. If there is just one SSD, as on the 6035c, then just 60 GiB.
  • Ext4: 5% of each SSD (after downstroke), in GiB, after the Nutanix Home capacity is taken.
  • Genesis: 5% of each SSD, in GiB, after Ext4 is taken.
  • Cassandra: for homogeneous clusters, Max(30 GiB per node, 3% of HDD raw capacity + 3% of SSD raw capacity), applied to all nodes. For heterogeneous clusters, find the largest node and then apply the same equation to all nodes.
  • Oplog: Oplog reservation per node = MIN(0.25 * (SSD space left after the Cassandra, cache, and Curator reservations), 400 GiB)
  • Content cache: 20 GB per node, converted to GiB (a per-node sketch follows)
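
Here is a minimal sketch of the per-node SSD reservations above for a homogeneous hybrid node. The ordering of the reservations and the omission of the Curator SSD reservation mentioned in the Oplog line are simplifying assumptions.

```python
# Sketch of the per-node CVM SSD reservations above (homogeneous hybrid node).
# The Curator SSD reservation mentioned in the Oplog line is omitted and the
# order of the reservations is an assumption.

GIB_IN_TIB = 1 / 1024
GB_IN_TIB = 1000**3 / 1024**4          # 1 decimal GB expressed in TiB

def cvm_ssd_overhead_tib(ssd_tib_sizes, node_hdd_raw_tib):
    """ssd_tib_sizes: SSD capacities (TiB) in one node; node_hdd_raw_tib: node HDD raw TiB."""
    node_ssd_raw = sum(ssd_tib_sizes)
    nutanix_home = 60 * GIB_IN_TIB * min(2, len(ssd_tib_sizes))  # 60 GiB on first 2 SSDs
    ext4 = 0.05 * (node_ssd_raw - nutanix_home)                  # 5% after Nutanix Home
    genesis = 0.05 * (node_ssd_raw - nutanix_home - ext4)        # 5% after Ext4
    cassandra = max(30 * GIB_IN_TIB,
                    0.03 * node_hdd_raw_tib + 0.03 * node_ssd_raw)
    content_cache = 20 * GB_IN_TIB                               # 20 GB per node
    reserved = nutanix_home + ext4 + genesis + cassandra + content_cache
    oplog = min(0.25 * (node_ssd_raw - reserved), 400 * GIB_IN_TIB)
    return reserved + oplog

# Example: node with 2 x 1.75 TiB SSDs and 29 TiB of HDD raw capacity
print(round(cvm_ssd_overhead_tib([1.75, 1.75], 29.0), 2))        # ~1.83 TiB
```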

 

What are the details on the CVM overheads?

  • HDD numbers can be seen by clicking the “I” button
  • SSD numbers can be seen by clicking the “I” button
  • In the case of All Flash, all the CVM components are applied to the SSD CVM, as shown below

Getting 1 node or 2 node clusters

New rules in Sizer for Regular Models and ROBO Models (October 2018)

Regular Models

Rules

  • All models included
  • All use cases are allowed – main cluster application, remote cluster application and remote snapshots
  • 3+ nodes is recommended

Summary

    • This is the default in Sizer and is used most of the time
    • Fits best practices for a data center to have 3 or more nodes
    • Huge benefit, as the Sizer user can stay in this mode to size the 1175S or other vendors’ small models if they want 3+ nodes anyhow. No need to go to ROBO mode

 

  • Note: This removes a previous Sizer user headache, where users wanting to size these models for 3+ nodes got confused about where to go

 

What changes

  • The smaller nodes such as the 1175S are included in the list for running main cluster applications, versus just remote applications and remote snapshots

ROBO Models

Rules

    • All models are included, but only some can be sized for 1 or 2 nodes
    • All use cases – main cluster application, remote cluster application and remote snapshots
    • All models can size to 3+ nodes depending on sizing requirements

 

  • ONLY Certain Models (aka ROBO models) can be 1 or 2 node

 

    • Note there is no CPU restriction. Basically, PM decides which models are ROBO, and they can have 1 or 2 CPUs

Summary

  • The user would ONLY need to go to ROBO if they feel the solution fits in 1 or 2 nodes
    • If the size of the workloads requires 3+ nodes, Sizer simply reports the required nodes, and the recommendation would be no different than in regular mode
    • They feel the 1 or 2 node restrictions are fine:
      • The list of ROBO models is fine for the customer
      • RF for 1 node is at the disk level, not the node level
      • Some workloads like AFS require 3 nodes and so are not available

What changes

  • All models can be used in ROBO mode, where before it was just the ROBO models

No quoting in Sizer for ROBO

Currently there is a minimum number of units or deal size when quoting ROBO. Sizer will size the opportunity and tell you that you should quote X units. Given that it takes 10 or more units, and you may want to club together multiple projects, we disabled quoting from Sizer when the configuration includes the 1175S.