Storage Calculator

Storage Calculator is available both as a standalone tool and as a Sizer feature.  Either way, it is used to determine the Extent Store and Effective Capacity of a configuration the user defines.  It is NOT tied to the workloads or the recommendation in the sizing scenario.

Access as a Standalone Tool

This is available on the Internet without a login, the same as DesignBrewz.

https://services.nutanix.com/#/storage-capacity-calculator

Access as a Sizer Feature

This is accessed by clicking on Storage Calculator in the upper right corner of the Sizer user interface.

Storage Calculator

Here is Storage Calculator.

The purpose of Storage Calculator is to determine either the Extent Store or the Effective Capacity of a configuration.  As mentioned it is not tied to a sizing scenario.

  • Extent Store is the amount of storage remaining after discounting for the CVM.  This is the amount available for customer workloads.
  • Effective Capacity is Extent Store * Storage Efficiency factor + the Erasure Coding (ECX) savings you expect.  Storage Efficiency is either none, 1.5:1, or 2:1; examples of storage efficiency are compression and dedupe.  A minimal calculation sketch follows below.
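
Below is a minimal Python sketch of these two definitions (not the calculator's actual code); the function name and the ecx_savings_tib input are illustrative assumptions.

```python
# Minimal sketch of Extent Store and Effective Capacity as defined above.
def effective_capacity_tib(raw_tib, cvm_tib, efficiency_ratio, ecx_savings_tib=0.0):
    extent_store = raw_tib - cvm_tib                      # capacity left after CVM
    return extent_store * efficiency_ratio + ecx_savings_tib

# Example: 20 TiB raw, 2 TiB CVM overhead, 1.5:1 storage efficiency, no ECX savings
print(effective_capacity_tib(20.0, 2.0, 1.5))             # 27.0 TiB
```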

Defining the Configuration and Input Settings

Here are the inputs

  • SSD Size –  Pulldown with common SSDs currently available in various vendor models
  • SSD is downstroked –  If selected, each drive loses 80GB to downstroking.  Sizer does this in its sizing for regular SSDs but assumes no downstroking is needed for encrypted drives
  • SSD quantity –  The number of SSDs you expect in the model you are sizing.  The minimum is 1, as an SSD is always needed for parts of the CVM
  • HDD Size –  Pulldown with common HDDs currently available in various vendor models
  • HDD quantity –  The number of HDDs you expect in the model you are sizing.  The minimum is 0 for the All Flash case
  • Node Count –  Number of nodes you expect
  • Replication Factor –  Can be RF2 or RF3
  • ECX – If selected, see the % of Cold Data input
  • % of Cold Data – If ECX is selected, this input appears; it is the percentage of cold data you are expecting
  • Storage Efficiency –  The factor you expect for storage efficiency; can be none, 1.5:1, or 2:1
  • Calculate Button –  NOTE: you must click Calculate after making any changes above

Storage Calculator Charts

Total Usage

  • The left donut chart shows the Extent Store and the CVM.  Extent Store is adjusted for either RF2 or RF3 depending on the input selection.  So here the Extent Store is adjusted for RF2 and is 7.26 TiB; the total amount of Extent Store is 2x that amount, or 14.52 TiB.  The adjustment is made so the customer sees the amount of storage they have given the Replication Factor they prefer.
  • The right donut breaks out all the CVM pieces, whether stored on HDD or SSD
  • Effective Capacity is shown above the charts.  It is Extent Store * Storage Efficiency factor + ECX savings.  Again we adjust for RF level.  This capacity then represents the storage available to customers at their preferred RF level, including expected benefits from storage efficiency as well as ECX.

SSD Usage

This is a supplemental graph to Total Usage.  It breaks out just the SSD portion of the Total Usage.

  • Top graph shows SSD CVM and SSD Extent Store adjusted for either RF2 or RF3
  • Lower graph shows all the SSD CVM elements.

HDD Usage

This is a supplemental graph to Total Usage.  It breaks out just the HDD portion of the Total Usage.

  • Top graph shows HDD CVM and HDD Extent Store adjusted for either RF2 or RF3
  • Lower graph shows all the HDD CVM elements.

 

What do the letters in the SSD drive indicate?

The letters indicate different levels of endurance in terms of Drive Writes per Day (DWPD).  For example, 3 DWPD means you can rewrite all the data on the drive 3 times a day for the entire life the drive is warranted for.

VDI Sizing (Frame/Horizon View/Citrix Desktops)

VDI Profiles  used in Sizer

Sizer relies on Login VSI profiles and tests.  Here are descriptions of the profiles and the applications that are run.

Task Worker Workload

  • The Task Worker workload runs fewer applications than the other workloads (mainly Excel and Internet Explorer with some minimal Word activity, Outlook, Adobe, copy and zip actions) and starts/stops the applications less frequently. This results in lower CPU, memory and disk IO usage.

Below is the profile definition for a Task Worker:

Knowledge Worker Workload

  • The Knowledge Worker workload is designed for virtual machines with 2vCPUs. This workload contains the following applications and activities:
    •  Outlook, browse messages.
    •  Internet Explorer, browse different webpages and a YouTube style video (480p movie trailer) is opened three times in every loop.
    •  Word, one instance to measure response time, one instance to review and edit a document.
    •  Doro PDF Printer & Acrobat Reader, the Word document is printed and exported to PDF.
    •  Excel, a very large randomized sheet is opened.
    •  PowerPoint, a presentation is reviewed and edited.
    •  FreeMind, a Java based Mind Mapping application.
    •  Various copy and zip actions.

Below is the profile definition for a Knowledge Worker:

Power Worker Workload

  • The Power Worker workload is the most intensive of the standard workloads. The following activities are performed with this workload:
    •  Begins by opening four instances of Internet Explorer which remain open throughout the workload.
    •  Begins by opening two instances of Adobe Reader which remain open throughout the workload.
    •  There are more PDF printer actions in the workload as compared to the other workloads.
    •  Instead of 480p videos a 720p and a 1080p video are watched.
    •  The idle time is reduced to two minutes.
    •  Various copy and zip actions.

Below is the profile definition for a Power Worker:

Developer Worker Type

Sizer does offer a Developer profile, which assumes 1 core per user (2 vCPU, vCPU:pCore = 2).  Use that for super heavy user demands.

Below is the profile definition for a Developer:

What are the strengths and weaknesses of the profiles?

Strengths

  • Login VSI is the de facto industry-standard VDI performance testing suite.  That offers the ability to use common terms like “knowledge worker”.
  • The test suite was run on a Nutanix-based cluster and the number of users with reasonable performance was found.  From there we could build out the profile definitions in Sizer, so they are based on lab results.
  • Things were set up optimally.  Hyperthreading is turned on and the cluster is set up using best practices.
  • It does a good job of not only having a mix of applications but also varying workload activity as more users are added (for example, how frequently applications are opened), so it does simulate having multiple users in a real environment.
  • Essentially the “best game in town” for getting consistent sizing.

Weaknesses

  • In the end VDI is a shared environment and sizing will depend on the activities of the users.  So if three companies each have 1000 task workers, each company could have different sizing requirements, since what the users do, and when, will vary.

What other factors Sizer considers for VDI sizing:

Common VDI sizing parameters:  (Across all VDI Brokers)

Windows desktop OS and Office version:

Depending on the OS and Office version type, there are performance implications and cores are adjusted accordingly.

The below table has the adjustment factors for cores depending on the Windows OS:

Version Factor
No adjustment 1
Windows 11 – 22H2 1.3915
Windows 11 – 21H2 1.334
Windows 10 – 22H2 1.1845
Windows 10 – 21H2 1.219
Windows 10 – 20H2 1.219
Windows 10 – 2004 1.15
Windows 10 – 1903/1909 1.135
Windows 10 – 1803/1809 1.1
Windows 10 – 1709 1.05

The factors above include performance hits from Spectre and Meltdown updates.

Similarly, the below table has the adjustment factors for cores depending on the Windows Office version:

Version Factor
Office 2010 0.75
Office 2013 1
Office 2016/2019 1

Display Protocol:                   

Depending on the VDI broker, there are the following Display Protocols:

VMware Horizon View:

  • Blast (default)
  • PCoIP

Citrix Virtual Desktop:

  • ICA (default)

Frame:  

  • Frame Remote Protocol (FRP)

There are adjustments to cores depending on the selected protocol for the respective VDI brokers, as follows:

Protocol Factor
ICA 1
PCoIP 1.15
Blast 1.38
Frame 1.45

Sizing equations for Cores/RAM/Storage:

Cores: 

Cores = users * vCPUs per user * (1 / vCPU:pCore ratio) * 125% if V2V/P2V * 85% if 2400 MHz DIMMs

Note: if the provisioning type is V2V/P2V, cores need to be increased by 25%.  With this provisioning the default video protocol is Thinwire, which causes a 25% hit; if H264 is used there is no hit.  We assume the default of Thinwire is used, as the Sizer user probably does not know.
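
A minimal Python sketch of the equation above (not Sizer's actual code); the function and parameter names, and the example numbers, are illustrative.

```python
# Minimal sketch of the VDI cores equation above.
def vdi_cores(users, vcpus_per_user, vcpu_per_pcore, v2v_p2v=False, dimm_2400mhz=False):
    cores = users * vcpus_per_user * (1.0 / vcpu_per_pcore)
    if v2v_p2v:
        cores *= 1.25   # Thinwire default under V2V/P2V costs ~25% more CPU
    if dimm_2400mhz:
        cores *= 0.85   # 2400 MHz DIMM factor, as stated in the equation above
    return cores

# Example: 100 users at 2 vCPUs each, with a vCPU:pCore ratio of 4
print(vdi_cores(100, 2, 4))   # 50.0 cores
```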

RAM: 

RAM = (users * RAM in GiB per user * 1/1024 TiB/GiB) + (64 MB * users * conversion from MB to TiB)

  a. The first part finds the RAM for the user data
  b. The second part calculates the per-VM (per-user) requirement

Note: Hypervisor RAM will be added to CVM RAM, as there is one hypervisor per node.
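
A minimal Python sketch of the RAM equation above (not Sizer's actual code); names are illustrative.

```python
# Minimal sketch of the VDI RAM equation above.
MB_PER_TIB = 1024 * 1024

def vdi_ram_tib(users, ram_gib_per_user):
    user_ram_tib = users * ram_gib_per_user / 1024    # GiB -> TiB
    per_vm_tib = users * 64 / MB_PER_TIB               # 64 MB per VM -> TiB
    return user_ram_tib + per_vm_tib

print(round(vdi_ram_tib(100, 4), 3))                   # 100 users at 4 GiB each: ~0.397 TiB
```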

SSD:

For VDI workload, the rule to calculate SSD is as follows:

SSD = hotTierDataPerNode * estNodes + goldImageCapacity * estNodes + numUsers * requiredSSD

where:

  • hotTierDataPerNode = 0.3 GB converted to GiB
  • estNodes = max(1, cores/20), where cores is the calculated core count
  • goldImageCapacity is taken from the selected profile, and numUsers is received from the UI
  • requiredSSD = 2.5 GiB for Task worker, 5 GiB for Power user/Developer, 3.3 GiB for Knowledge worker/Epic Hyperspace/Hyperspace + Nuance Dragon

Expressed in TiB:

SSD in TiB = (0.3 GB * 0.931323 GiB/GB * estNodes + goldImageCapacity in GiB * estNodes + numUsers * requiredSSD in GiB) * 1/1024 TiB/GiB
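
A minimal Python sketch of the SSD rule above (not Sizer's actual code); the dictionary keys and function names are illustrative.

```python
# Minimal sketch of the VDI SSD rule above.
GIB_PER_GB = 0.931323
REQUIRED_SSD_GIB = {"task": 2.5, "knowledge": 3.3, "power": 5.0, "developer": 5.0}

def vdi_ssd_tib(num_users, calculated_cores, gold_image_gib, worker_type):
    est_nodes = max(1, calculated_cores / 20)
    hot_tier_gib_per_node = 0.3 * GIB_PER_GB           # 0.3 GB per node, in GiB
    total_gib = (hot_tier_gib_per_node * est_nodes
                 + gold_image_gib * est_nodes
                 + num_users * REQUIRED_SSD_GIB[worker_type])
    return total_gib / 1024                             # GiB -> TiB
```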

HDD:

For VDI workload, the rule to calculate HDD is as follows: 

if VDI > SSD then HDD = VDI – SSD, else HDD = 0

where:

  • VDI = numUsers * actPerUserCap, with numUsers received from the UI
  • if provisionType is V2V/P2V or Full Clone: actPerUserCap = goldImageCapacity + userDataCap, where goldImageCapacity and userDataCap are received from the UI
  • if provisionType is not V2V/P2V or Full Clone: actPerUserCap = userDataCap
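
A minimal Python sketch of the HDD rule above (not Sizer's actual code); names are illustrative.

```python
# Minimal sketch of the VDI HDD rule above.
def vdi_hdd_tib(num_users, user_data_cap_tib, gold_image_tib, ssd_tib, full_copy):
    # V2V/P2V and Full Clone provisioning carry the gold image per user.
    per_user_cap = user_data_cap_tib + (gold_image_tib if full_copy else 0.0)
    vdi_total = num_users * per_user_cap
    return vdi_total - ssd_tib if vdi_total > ssd_tib else 0.0
```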

VDI Sizing – July 2018 sprint

  • Dell completed extensive VDI testing using Login VSI profiles and the test suite on a Nutanix cluster using their Skylake models.  So we now have the most extensive lab testing results to update Sizer profiles, and given that we updated Sizer VDI workload sizing.  The key reasons:
    • This was run on Skylake models and so includes any enhancements in that architecture
    • The latest AOS version was used
    • Best practices were used in setting up the cluster by VDI experts.  For example, hyperthreading is turned ON
    • The latest Login VSI suite was used
  • Here is summary of the results:
    • The big change is Task workers.  In the old days of Windows 7 and Office 2010 we were seeing 10 task workers per core as a common ratio.  However, both Windows 10 and Office 2016 are very expensive resource-wise; in the lab tests we only get about 6 users per core.  We are seeing a big bump in core counts for task workers as a result, for example an 18% increase in cores for XenApp Task workers and 28% for Horizon task workers.  A customer’s actual usage will vary.
    • Windows 7 is estimated to need 60% of the cores vs Windows 10.
    • Office 2010 is estimated to need 75% of the cores vs Office 2016.
    • Knowledge workers for either View or Xen Desktop brokers did not change much
    • Power users on View did not change much
    • Power users for Xen Desktop did increase by 21% as the profile changed from 5 users per core to just 4 users per core.


Usable Capacity

Usable Remaining Capacity is the amount of storage that is available to the customer AFTER workloads, RF, and storage savings are applied.  It represents what they should have remaining once deployed.

Sizer presents the values in both RF2 and RF3.

Usable Remaining Capacity (Assuming RF2)

  • HDD Usable Remaining Capacity = (Raw + Compression Savings + Dedupe Savings + ECX Savings – Workload – RF Overhead – CVM Overhead) / 2
  • SSD Usable Remaining Capacity = (Raw + Compression Savings + Dedupe Savings + ECX Savings – Workload – RF Overhead – CVM Overhead + Oplog) / 2
  • Notes:
    • Usable capacity is basically RAW plus the storage savings from data reduction techniques like compression, less the workload, RF overhead and CVM overhead.
    • If All Flash, the Compression Savings, Dedupe Savings, ECX Savings, RF Overhead, and CVM overhead that would be attributed to HDDs are applied to SSDs.
    • For SSD capacity, Oplog is included as part of the CVM overhead for SSDs but is also added back, as it is a write log and so is available for user data.  (A sketch of these formulas follows below.)
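
A minimal Python sketch of the RF2 formulas above (not Sizer's actual code); names are illustrative and all inputs are in TiB.

```python
# Minimal sketch of the Usable Remaining Capacity formulas above (RF2 case).
def usable_remaining_rf2(raw, compression, dedupe, ecx, workload, rf_overhead,
                         cvm_overhead, oplog=0.0):
    # Pass the Oplog reservation only for the SSD calculation; leave 0 for HDD.
    return (raw + compression + dedupe + ecx
            - workload - rf_overhead - cvm_overhead + oplog) / 2
```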

Extent Store and Effective Capacity

Extent Store

This is a concept that is used in the Nutanix Bible.  It is RAW capacity less CVM overhead.  It represents the capacity that is available to a customer.

 

Effective Capacity

Used in Storage Calculator or DesignBrewz.  This is the Extent Store * the Storage Efficiency setting in Storage Calculator.  So if the Extent Store is 10 TiB and the Storage Efficiency factor is set to 1.5:1, then the Effective Capacity is 15 TiB.  The Storage Efficiency factor is the expected benefit of storage reduction approaches like compression, dedupe, and ECX.  Effective Capacity then is what is hoped to be available with these reduction techniques.

Cores (Actual Cores, Adjusted Weight, Memory Adjustments like Unbalanced DIMMs)

In Sizing Details you may see an odd number like 40.27 cores for RAW cores as shown below

Actual Core Capacity

This is the total number of cores in the recommendation.

By clicking on the tooltip by the node you get the information

So in this recommendation we have 3 nodes, where each has 2 CPUs and each CPU has 8 cores.  So the Actual core capacity is 3 nodes * 2 CPUs/node * 8 cores/CPU = 48 cores.

Applied Weight

 

Intel designs a wide range of CPUs to meet different market needs.  Core count certainly varies, but the speed of a core is not the same across all CPUs.

We need some benchmark to adjust for the core speed differences.  We use SPECint 2006.  It is the best benchmark in terms of being an industry standard where vendors who publish numbers have to use a standard testing process and publicly publish the numbers.  We see consistency as well for a given CPU across all the vendors.  Thus this is a good benchmark for us to use to adjust for different core speeds.

So applied weight is where we have adjusted the cores to the baseline processor, which runs at 42.31 SPECint per core.

Review the Processor Table page with core counts, SPECint values, and adjusted cores.

Using this example, we have a recommendation of 3 nodes with each node having quantity 2 2620v4 processors.  The table (the calculation is shown on that page too) shows the 2620v4 adjusted cores are 14.91 for nodes with 2 CPUs.

Thus in this recommendation the total effective cores are 14.91 cores/node * 3 nodes = 44.73 cores.  We take an applied weight adjustment of -3.26.

 

Memory Adjustments

Broadwell Processors

With Broadwell processors, an “unbalanced DIMM” configuration depends on how the DIMMs are laid out on the motherboard.  When it occurs there is a 10% increase in access latency.

To determine whether to take a discount, Sizer takes the total count of DIMMs in a node and divides by 4.  If the result is an odd number then the configuration is unbalanced and Sizer applies the discount.
If even, then no reduction is needed.

Example

12 x 32GB in a node: 12 DIMMs / 4 = 3, which is odd, so unbalanced
8 x 32GB in a node: 8 DIMMs / 4 = 2, which is even, so balanced

If unbalanced, core capacity is reduced:

– Actual Core Capacity = Cores/Node * Node count
– Applied Weight = extra or fewer cores vs the baseline
– Adjustment due to Memory Issues = -10% * (RAW Cores + Applied Weight)

It should be noted that if it is a single-processor system then NO adjustment is needed.
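
A minimal Python sketch of the Broadwell check above (not Sizer's actual code); names are illustrative and it assumes the DIMM count per node is a multiple of 4, as in the examples.

```python
# Minimal sketch of the Broadwell unbalanced-DIMM adjustment above.
def broadwell_memory_adjustment(dimms_per_node, raw_cores, applied_weight,
                                single_cpu=False):
    if single_cpu:
        return 0.0                                  # single-processor: no adjustment
    unbalanced = (dimms_per_node // 4) % 2 == 1     # DIMM count / 4 is odd -> unbalanced
    return -0.10 * (raw_cores + applied_weight) if unbalanced else 0.0

# Example: 12 DIMMs/node -> 12/4 = 3 (odd) -> unbalanced -> 10% discount
print(round(broadwell_memory_adjustment(12, 48, -3.26), 2))   # -4.47
```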

Skylake Processors

Skylake processors are more complex than Broadwell in terms of whether a system has unbalanced DIMMs.

We now test for the following

  • CPU – Skylake
  • Model – Balanced_Motherboard – true  (described below)
  • Memory bandwidth – go with the slower figure of either the memory or the CPU.  If 2133 MHz, then a -10% memory adjustment.  If 2400 MHz or 2666 MHz (most common with Skylake models) we take a 0% adjustment

Like before, we find the DIMM count per socket.  There are typically 2 sockets (CPUs), but there can be 1, and 4-socket models are starting to be introduced.

Using the quantity of DIMMs per socket we apply the following rules

If the CPU is Skylake

  • If the DIMM count per socket is 5, 7, 9, 10, or 11, then the model is considered unbalanced and we need to take a -50% memory adjustment
  • If the DIMM count per socket is 2, 3, 4, or 12, it is balanced and the memory adjustment = 0%
  • If the model is balanced and the DIMM count per socket is 6 or 8, then it is balanced and the memory adjustment = 0%
  • If the model is unbalanced and the DIMM count per socket is 6 or 8, then it is unbalanced and the memory adjustment = -50%

After determining the adjustment percent, we make the adjustment as we do currently

  • Actual core capacity = Total cores in the cluster
  • Applied weight = adjustment vs the baseline SPECint
  • Adjustment = Adjustment Percent * (Actual core capacity – Applied weight)

With Skylake, it can matter how the DIMMs are arranged on the motherboard.  We have PM review that, and so far all models are laid out in a balanced fashion.
Here is a doc that shows the options
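
A minimal Python sketch of the Skylake rules above (not Sizer's actual code).  Names are illustrative, and the memory-speed check is applied before the DIMM-count check as a simplification, since the rules above do not say how the two combine.

```python
# Minimal sketch of the Skylake memory-adjustment rules above.
def skylake_adjustment_pct(dimms_per_socket, board_balanced, memory_mhz):
    if memory_mhz <= 2133:
        return -0.10                                 # slow memory: -10%
    if dimms_per_socket in (5, 7, 9, 10, 11):
        return -0.50                                 # always unbalanced
    if dimms_per_socket in (2, 3, 4, 12):
        return 0.0                                   # always balanced
    if dimms_per_socket in (6, 8):
        return 0.0 if board_balanced else -0.50      # depends on motherboard layout
    return 0.0

def skylake_core_adjustment(actual_cores, applied_weight, dimms_per_socket,
                            board_balanced, memory_mhz=2666):
    pct = skylake_adjustment_pct(dimms_per_socket, board_balanced, memory_mhz)
    return pct * (actual_cores - applied_weight)     # as stated in the rules above
```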

 

 

Processor Table

Here is the table of processors

The first 5 columns are from Spec.org

https://www.spec.org/cgi-bin/osgresults

SPECint Adjusted Cores is simply the core count adjusted against a baseline of 42.31 SPECint per core

Note that in the SPECint tests, typically 2-CPU configurations are tested, so the core counts include both CPUs

For example, the 2620v4 has 16 cores but only at 39.44 SPECint per core

  • SPECint adjusted cores = 16 * SPECint per core / Baseline = 16 * 39.44/42.31 = 14.91
  • Basically, this is saying the 2620 v4 has 16 cores but it is equivalent to 14.91 baseline cores in 2 CPU nodes
  • For a single CPU, it would be just 14.91/2 = 7.455

Looking at a high-speed CPU, the 6128 has just 12 cores but screams at 68.07 SPECint

  • Specint Adjusted cores = 12 * specint per core/ baseline = 12 * 68.07/42.31 = 19.31
  • Basically, this is saying the 6128 has 12 cores but it is equivalent to 19.31 baseline cores
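
A minimal Python sketch of the adjustment above (not Sizer's actual code), reproducing the two worked examples.

```python
# Minimal sketch of the SPECint core adjustment above.
BASELINE_SPECINT_PER_CORE = 42.31

def specint_adjusted_cores(cores, specint_per_core):
    return cores * specint_per_core / BASELINE_SPECINT_PER_CORE

print(round(specint_adjusted_cores(16, 39.44), 2))   # 2620v4 (2 CPUs): 14.91
print(round(specint_adjusted_cores(12, 68.07), 2))   # 6128 (2 CPUs):   19.31
```
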
System (Emerald Rapids) # of Cores across CPUs # of CPUs CINT2006/core
Intel Xeon Silver 4514Y 16C 150W 2.0GHz Processor 32 2 79.14
Intel Xeon Silver 4509Y 8C 125W 2.6GHz Processor 16 2 101.75
Intel Xeon Silver 4510 12C 150W 2.4GHz Processor 24 2 95.20
Intel Xeon Gold 6548N 32C 250W 2.8GHz Processor 64 2 94.16
Intel Xeon Gold 5512U 28C 185W 2.1GHz Processor 28 1 87.38
Intel Xeon Gold 5515+ 8C 165W 3.2GHz Processor 16 2 105.91
Intel Xeon Gold 6526Y 16C 195W 2.8GHz Processor 32 2 100.26
Intel Xeon Gold 6542Y 24C 250W 2.9GHz Processor 48 2 101.15
Intel Xeon Gold 6548Y+ 32C 250W 2.5GHz Processor 64 2 94.46
Intel Xeon Gold 6534 8C 195W 3.9GHz Processor 16 2 116.62
Intel Xeon Gold 6544Y 16C 270W 3.6GHz Processor 32 2 114.54
Intel Xeon Gold 5520+ 28C 205W 2.2GHz Processor 56 2 86.87
Intel Xeon Gold 6538Y+ 32C 225W 2.2GHz Processor 64 2 93.12
Intel Xeon Platinum 8592V 64C 330W 2.0GHz Processor 128 2 72.00
Intel Xeon Platinum 8581V 60C 270W 2.0GHz Processor 60 1 74.89
Intel Xeon Platinum 8571N 52C 300W 2.4GHz Processor 52 1 86.60
Intel Xeon Platinum 8558U 48C 300W 2.0GHz Processor 48 1 81.32
Intel Xeon Platinum 8568Y+ 48C 350W 2.3GHz Processor 96 2 89.45
Intel Xeon Platinum 8580 60C 350W 2.0GHz Processor 120 2 80.13
Intel Xeon Platinum 8592+ 64C 350W 1.9GHz Processor 128 2 75.86
Intel Xeon Platinum 8562Y+ 32C 300W 2.8GHz Processor 64 2 100.41
Intel Xeon Platinum 8558 48C 330W 2.1GHz Processor 48 1 81.32
System (Sapphire Rapids) # of Cores across CPUs # of CPUs CINT2006/core
Intel Gold 6414U 32C 2.0GHz 250W 32 1 76.16
Intel Silver 4410Y 12C 2.0GHz 135W-145W 24 2 83.30
Intel Silver 4416+ 20C 2.1GHz 165W 40 2 83.06
Intel Silver 4410T 10C 2.7GHz 150W 20 2 97.10
Intel Gold 5415+ 8C 2.9GHz 150W 16 2 102.34
Intel Gold 5418Y 24C 2.1GHz 185W 48 2 81.32
Intel Gold 5420+ 28C 1.9-2.0GHz 205W 56 2 78.20
Intel Gold 6426Y 16C 2.6GHz 185W 32 2 94.90
Intel Gold 6430 32C 1.9GHz 270W 64 2 75.57
Intel Gold 6434 8C 3.9GHz 205W 16 2 113.05
Intel Gold 6438Y+ 32C 1.9-2.0GHz 205W 64 2 82.26
Intel Gold 6442Y 24C 2.6GHz 225W 48 2 94.21
Intel Gold 6444Y 16C 3.5GHz 270W 32 2 110.67
Intel Gold 6448Y 32C 2.2GHz 225W 64 2 83.60
Intel Gold 6438M 32C 2.2GHz 205W 64 2 82.41
Intel Gold 5418N 24C 1.8GHz 165W 48 2 75.96
Intel Gold 6428N 32C 1.8GHz 185W 64 2 72.59
Intel Gold 6438N 32C 2.0GHz 205W 64 2 80.47
Intel Gold 5416S 16C 2.0GHz 150W 32 2 82.11
Intel Gold 6454S 32C 2.2GHz 270W 64 2 79.58
Intel Platinum 8462Y+ 32C 2.8GHz 300W 64 2 94.46
Intel Platinum 8452Y 36C 2.0GHz 300W 72 2 78.80
Intel Platinum 8460Y+ 40C 2.0GHz 300W 80 2 78.18
Intel Platinum 8468 48C 2.1GHz 350W 96 2 80.23
Intel Platinum 8470 52C 2.0GHz 350W 104 2 78.81
Intel Platinum 8480+ 56C 2.0GHz 350W 112 2 73.61
Intel Platinum 8490H 60C 1.9GHz 350W 120 2 71.56
Intel Platinum 8470N 52C 1.7GHz 300W 104 2 69.57
Intel Platinum 8468V 48C 2.4GHz 330W 96 2 76.46
Intel Platinum 8458P 44C 2.7GHz 350W 88 2 82.76
Intel Xeon Platinum 8468H 48C 330W 2.1GHz Processor 96 2 76.66
Intel Xeon Platinum 8454H 32C 270W 2.1GHz Processor 64 2 72.74
Intel Xeon Platinum 8450H 28C 250W 2.0GHz Processor 56 2 79.90
Intel Xeon Platinum 8444H 16C 270W 2.9GHz Processor 32 2 96.69
Intel Xeon Platinum 8460H 40C 330W 2.2GHz Processor 80 2 83.66
Intel Xeon Gold 6448H 32C 250W 2.4GHz Processor 64 2 89.85
Intel Xeon Gold 6418H 24C 185W 2.1GHz Processor 48 2 79.53
Intel Xeon Gold 6416H 18C 165W 2.2GHz Processor 36 2 85.42
Intel Xeon Gold 6434H 8C 195W 3.7GHz Processor 16 2 119.00
Intel Xeon Platinum 8470Q 52C 2.10 GHz Processor 104 2 79.36
Intel Xeon Gold 6458Q 32C 3.10 GHz Processor 64 2 101.45
Intel Xeon-B 3408U 8C 8 1 50.69
Intel Xeon-G 5412U 24C 24 1 85.28
Intel Xeon-G 5411N 24C 165W 1.9GHz Processor 24 1 82.51
Intel Xeon-G 6421N 32C 185W 1.8GHz Processor 32 1 78.54
Intel Xeon Platinum 8461V 48C 300W 2.2GHz Processor 48 1 75.37
Intel Xeon Platinum 8471N 52C 300W 1.8GHz Processor 52 1 75.43
System (Ice Lake) # of Cores across CPUs # of CPUs CINT2006/core
Intel® Xeon® Platinum 8368Q Processor (57M Cache, 2.60 GHz) 76 2 64.51
Intel® Xeon® Platinum 8360Y Processor (54M Cache, 2.40 GHz) 52 2 94.83
Intel® Xeon® Platinum 8358P Processor (48M Cache, 2.60 GHz) 64 2 70.81
Intel® Xeon® Platinum 8352Y Processor (48M Cache, 2.20 GHz) 64 2 65.30
Intel® Xeon® Platinum 8352V Processor (54M Cache, 2.10 GHz) 72 2 56.72
Intel® Xeon® Platinum 8352S Processor (48M Cache, 2.20 GHz) 64 2 65.30
Intel® Xeon® Platinum 8351N Processor (54M Cache, 2.40 GHz) 36 1 67.43
Intel® Xeon® Gold 6338N Processor (48M Cache, 2.20 GHz) 64 2 63.37
Intel® Xeon® Gold 6336Y Processor 48 2 71.00
Intel® Xeon® Gold 6330N Processor (42M Cache, 2.20 GHz) 56 2 61.20
Intel® Xeon® Gold 5318Y Processor 48 2 63.07
Intel® Xeon® Gold 5315Y Processor 16 2 82.11
Intel® Xeon® Silver 4309Y Processor 16 2 79.73
Intel® Xeon® Platinum 8380 Processor (60M Cache, 2.30 GHz) 80 2 66.28
Intel® Xeon® Platinum 8368 Processor (57M Cache, 2.40 GHz) 76 2 68.02
Intel® Xeon® Platinum 8358 Processor (48M Cache, 2.60 GHz) 64 2 73.48
Intel® Xeon® Gold 6354 Processor (39M Cache, 3.00 GHz) 36 2 81.45
Intel® Xeon® Gold 6348 Processor (42M Cache, 2.60 GHz) 56 2 74.63
Intel® Xeon® Gold 6346 Processor (36M Cache, 3.10 GHz) 32 2 83.60
Intel® Xeon® Gold 6342 Processor 48 2 76.16
Intel® Xeon® Gold 6338 Processor (48M Cache, 2.00 GHz) 64 2 62.03
Intel® Xeon® Gold 6334 Processor 16 2 86.87
Intel® Xeon® Gold 6330 Processor (42M Cache, 2.00 GHz) 56 2 62.22
Intel® Xeon® Gold 6326 Processor 32 2 78.24
Intel® Xeon® Gold 5320 Processor 52 2 66.09
Intel® Xeon® Gold 5317 Processor 24 2 80.13
Intel® Xeon® Silver 4316 Processor 40 2 67.12
Intel® Xeon® Silver 4314 Processor 32 2 69.62
Intel® Xeon® Silver 4310 Processor 24 2 66.64
Intel Xeon Gold 6338T processor (2.1 GHz/ 24-core/ 165W) 48 2 63.27
Intel Xeon Gold 5320T processor (2.3 GHz/ 20-core/ 150W) 40 2 66.40
Intel Xeon Silver 4310T processor (2.3 GHz/ 10-core/ 105W) 20 2 70.45
Intel Xeon Gold 6314U processor (2.30 GHz/32-core/205W) 32 1 67.24
Intel Xeon Gold 6312U processor (2.40 GHz/24-core/185W) 24 1 73.38
System (Cascade Lake) # of Cores across CPUs # of CPUs CINT2006/core
CPU (2.10 GHz, Intel Xeon Gold 6230) 40 2 52.65
CPU (2.30 GHz, Intel Xeon Gold 5218) 32 2 54.28
CPU (2.30 GHz, Intel Xeon Gold 5218B) 32 2 54.28
CPU (2.60 GHz, Intel Xeon Gold 6240) 36 2 60.12
CPU (2.10 GHz, Intel Xeon Gold 6252) 48 2 51.68
CPU (2.30 GHz, Intel Xeon Gold 6252N) 48 2 50.71
CPU (2.20 GHz, Intel Xeon Platinum 8276) 56 2 52.71
CPU (2.20 GHz, Intel Xeon Silver 4210) 20 2 52.14
CPU (2.20 GHz, Intel Xeon Silver 4214) 24 2 53.49
CPU (2.20 GHz, Intel Xeon Silver 4214Y) 24 2 53.49
CPU (2.10 GHz, Intel Xeon Silver 4216) 32 2 52.82
CPU (2.50 GHz, Intel Xeon Gold 5215) 20 2 57.48
CPU (2.50 GHz, Intel Xeon Gold 6248) 40 2 55.16
CPU (2.50 GHz, Intel Xeon Silver 4215) 16 2 58.07
CPU (2.60 GHz, Intel Xeon Gold 6240Y) 36 2 59.08
CPU (2.70 GHz, Intel Xeon Platinum 8270) 52 2 59.19
CPU (2.70 GHz, Intel Xeon Platinum 8280) 56 2 58.91
CPU (2.70 GHz, Intel Xeon Platinum 8280M) 56 2 57.32
CPU (2.90 GHz, Intel Xeon Platinum 8268) 48 2 62.15
CPU (3.00 GHz, Intel Xeon Gold 5217) 16 2 64.33
CPU (3.80 GHz, Intel Xeon Gold 5222) 8 2 77.44
CPU (2.10 GHz, Intel Xeon Silver 4208) 16 2 49.73
CPU (2.70 GHz, Intel Xeon 6226) 24 2 64.75
CPU (3.3 GHz, Intel Xeon Gold 6234) 16 2 75.44
CPU (2.8 GHz, Intel Xeon Gold 6242) 32 2 64.54
CPU (2.2 GHz, Intel Xeon Silver 5220) 36 2 52.86
CPU (2.1 GHz, Intel Xeon Gold 6238) 44 2 51.78
CPU (3.6 GHz, Intel Xeon Gold 6244) 16 2 80.92
CPU (3.3 GHz, Intel Xeon Gold 6246) 24 2 70.97
CPU (2.5 GHz, Intel Xeon Gold 6248) 40 2 55.16
CPU (3.1 GHz, Intel Xeon Gold 6254) 36 2 69.1
CPU (1.8 GHz, Intel Xeon Gold 6222V) 40 2 47.6
CPU (1.9 GHz, Intel Xeon Gold 6262V) 48 2 48
CPU (2.5 GHz, Intel Xeon Gold 5215M) 20 2 56.53
CPU (2.1 GHz, Intel Xeon Gold 6238M) 44 2 51.57
CPU (2.6 GHz, Intel Xeon Gold 6240M) 36 2 57.78
CPU (2.5 GHz, Intel Xeon Gold 5215L) 20 2 57.48
CPU (2.1 GHz, Intel Xeon Gold 6238L) 44 2 52
CPU (2.4 GHz, Intel Xeon Gold 8260) 48 2 57.42
CPU (2.4 GHz, Intel Xeon Gold 8260L) 48 2 57.22
CPU (2.4 GHz, Intel Xeon Gold 8260M) 48 2 55.63
CPU (2.9 GHz, Intel Xeon Gold 8268) 48 2 62.15
CPU (2.7 GHz, Intel Xeon Gold 8270) 52 2 59.19
CPU (2.2 GHz, Intel Xeon Gold 8276) 56 2 52.71
CPU (2.2 GHz, Intel Xeon Gold 8280) 56 2 58.91
CPU (2.2 GHz, Intel Xeon Gold 8280M) 56 2 57.32
CPU (2.2 GHz, Intel Xeon Gold 8276M) 56 2 49.69
CPU (2.2 GHz, Intel Xeon Gold 8276L) 56 2 50.02
CPU (2.2 GHz, Intel Xeon Gold 8280L) 56 2 58.36
CPU (2.4 GHz, Intel Xeon Gold 8260Y) 48 2 55.83
CPU (2.5 GHz, Intel Xeon Gold 6210U) 20 1 59.98
CPU (1.9 GHz, Intel Xeon Gold 3206R) 16 2 47.6
CPU (2.4 GHz, Intel Xeon Gold 4210R) 20 2 60.45
CPU (2.4 GHz, Intel Xeon Gold 4214R) 24 2 64.26
CPU (3.2 GHz, Intel Xeon Gold 4215R) 16 2 73.19
CPU (2.1 GHz, Intel Xeon Gold 5218R) 40 2 58.79
CPU (2.2 GHz, Intel Xeon Gold 5220R) 48 2 54.74
CPU (2.1 GHz, Intel Xeon Gold 6230R) 52 2 56.94
CPU (2.9 GHz, Intel Xeon Gold 6226R) 32 2 71.7
CPU (2.4 GHz, Intel Xeon Gold 6240R) 48 2 59.5
CPU (3.1 GHz, Intel Xeon Gold 6242R) 40 2 72.11
CPU (2.2 GHz, Intel Xeon Gold 6238R) 56 2 54.06
CPU (3.0 GHz, Intel Xeon Gold 6248R) 48 2 66.84
CPU (2.7 GHz, Intel Xeon Gold 6258R) 56 2 61.54
CPU (3.9 GHz, Intel Xeon Gold 6250) 16 2 81.49
CPU (3.6 GHz, Intel Xeon Gold 6256) 24 2 77.39
CPU (3.4 GHz, Intel Xeon Gold 6246R) 32 2 70.06
CPU Type CPU Family SpecInt2006Rate # of Cores Specint2006Rate per CORE Specint Adjusted Cores Cores per CPU
2699v3 Haswell 1389 36 38.58 32.8 18
2630v3 Haswell 688 16 43.00 16.3 8
2620v3 Haswell 529 12 44.08 12.5 6
2697v3 Haswell 1236 28 44.14 29.2 14
2680v3 Haswell 1063 24 44.31 25.1 12
2660v3 Haswell 900 20 45.00 21.3 10
2640v3 Haswell 725 16 45.31 17.1 8
2623v3 Haswell 424 8 53.00 10.0 4
2643v3 Haswell 690 12 57.50 16.3 6
2620v2 Ivy Bridge 429 12 35.75 10.1 6
2697v2 Ivy Bridge 962 24 40.08 22.7 12
2630v2 Ivy Bridge 505 12 42.08 11.9 6
2680v2 Ivy Bridge 846 20 42.31 20.0 10
2650v2 Ivy Bridge 681 16 42.55 16.1 8
2690v2 Ivy Bridge 888 20 44.40 21.0 10
2643v2 Ivy Bridge 634 12 52.83 15.0 6
2620v1 Sandy Bridge 390 12 32.50 9.2 6
2670v1 Sandy Bridge 640 16 40.00 15.1 8
2690v1 Sandy Bridge 685 16 42.81 16.2 8
2637v3 Haswell 472 8 59.00 11.2 4
2698v3 Haswell 1290 32 40.31 30.5 16
E5-2609v4 Broadwell 415 16 25.94 9.8 8
E5-2620v4 Broadwell 631 16 39.44 14.9 8
E5-2630v4 Broadwell 795 20 39.75 18.8 10
E5-2640v4 Broadwell 844 20 42.20 19.9 10
E5-2643v4 Broadwell 703 12 58.58 16.6 6
E5-2650v4 Broadwell 984 24 41.00 23.3 12
E5-2660v4 Broadwell 1090 28 38.93 25.8 14
E5-2680v4 Broadwell 1200 28 42.86 28.4 14
E5-2690v4 Broadwell 1300 28 46.43 30.7 14
E5-2695v4 Broadwell 1370 36 38.06 32.4 18
E5-2697v4 Broadwell 1460 36 40.56 34.5 18
E5-2698v4 Broadwell 1540 40 38.50 36.4 20
E5-2699v4 Broadwell 1690 44 38.41 39.9 22
3106 Skylake 431.4 16 26.9625 10.2 8
4108 Skylake 629.65 16 39.353125 14.9 8
4109T Skylake 667.92 16 41.745 15.8 8
4110 Skylake 693.24 16 43.3275 16.4 8
4112 Skylake 412.91 8 51.61375 9.8 4
4114 Skylake 890.6 20 44.53 21.0 10
4116 Skylake 1030.87 24 42.95291667 24.4 12
5115 Skylake 969.14 20 48.457 22.9 10
5118 Skylake 1133.2 24 47.21666667 26.8 12
5120 Skylake 1271.56 28 45.41285714 30.1 14
5122 Skylake 544.38 8 68.0475 12.9 4
6126 Skylake 1304.67 24 54.36125 30.8 12
6128 Skylake 816.91 12 68.07583333 19.3 6
6130 Skylake 1516.45 32 47.3890625 35.8 16
6132 Skylake 1524.55 28 54.44821429 36.0 14
6134 Skylake 1037.72 16 64.8575 24.5 8
6134M Skylake 1085 16 67.8125 25.6 8
6136 Skylake 1451 24 60.45833333 34.3 12
6138 Skylake 1748.89 40 43.72225 41.3 20
6140 Skylake 1752.86 36 48.69055556 41.4 18
6140M Skylake 1810 36 50.27777778 42.8 18
6142 Skylake 1688.5 32 52.765625 39.9 16
6142M Skylake 1785 32 55.78125 42.2 16
6143 Skylake 1950 32 60.9375 46.1 16
6144 Skylake 1113 16 69.5625 26.3 8
6146 Skylake 1534.44 24 63.935 36.3 12
6148 Skylake 1921.3 40 48.0325 45.4 20
6150 Skylake 1903.75 36 52.88194444 45.0 18
6152 Skylake 1951.18 44 44.345 46.1 22
6154 Skylake 2062 36 57.27777778 48.7 18
8153 Skylake 1326.88 32 41.465 31.4 16
8156 Skylake 550.81 8 68.85125 13.0 4
8158 Skylake 1464 24 61 34.6 12
8160 Skylake 2152.5 48 44.84375 50.9 24
8160M Skylake 2285 48 47.60416667 54.0 24
8164 Skylake 2204 52 42.38461538 52.1 26
8165 Skylake 2500 48 52.08333333 59.1 24
8168 Skylake 2454.12 48 51.1275 58.0 24
8170 Skylake 2282.86 52 43.90115385 54.0 26
8170M Skylake 2420 52 46.53846154 57.2 26
8176 Skylake 2386.87 56 42.62267857 56.4 28
8176M Skylake 2507 56 44.76785714 59.2 28
8180 Skylake 2722.38 56 48.61392857 64.3 28
8180M Skylake 2710 56 48.39285714 64.0 28
System (AMD Genoa) # of Cores across CPUs # of CPUs CINT2006/core
AMD EPYC 9274F 24C 320W 4.05GHz Processor 48 2 123.17
AMD EPYC 9354P 32C 280W 3.25GHz Processor 32 1 108.59
AMD EPYC 9224 24C 200W 2.5GHz Processor 48 2 99.37
AMD EPYC 9174F 16C 320W 4.1GHz Processor 32 2 127.33
AMD EPYC 9654P 96C 360W 2.4GHz Processor 96 1 81.32
AMD EPYC 9554P 64C 360W 3.1GHz Processor 64 1 95.80
AMD EPYC 9454P 48C 290W 2.75GHz Processor 48 1 101.15
AMD EPYC 9634 84C 290W 2.25GHz Processor 168 2 75.93
AMD EPYC 9354 32C 280W 3.25GHz Processor 64 2 108.74
AMD EPYC 9474F 48C 360W 3.6GHz Processor 96 2 107.10
AMD EPYC 9374F 32C 320W 3.85GHz Processor 64 2 119.89
AMD EPYC 9534 64C 280W 2.45GHz Processor 128 2 88.51
AMD EPYC 9454 48C 290W 2.75GHz Processor 96 2 101.15
AMD EPYC 9334 32C 210W 2.7GHz Processor 64 2 103.83
AMD EPYC 9254 24C 200W 2.9GHz Processor 48 2 108.69
AMD EPYC 9124 16C 200W 3.0GHz Processor 32 2 103.23
AMD EPYC 9554 64C 360W 3.1GHz Processor 128 2 95.94
AMD EPYC 9654 96C 360W 2.4GHz Processor 192 2 79.83
AMD EPYC 9734 2.2GHz 112-Core Processor 224 2 70.98
AMD EPYC 9754 2.25GHz 128-Core Processor 256 2 67.31
System (AMD Milan) # of Cores across CPUs # of CPUs CINT2006/core
AMD EPYC 7663P CPU 2.00 GHz 56 1 58.65
AMD EPYC 7643P CPU 2.30 GHz 48 1 64.46
AMD EPYC 7303P CPU 2.40 GHz 16 1 78.54
AMD EPYC 7203P CPU 2.80 GHz 8 1 84.25
AMD EPYC 7303 CPU 2.40 GHz 32 2 77.65
AMD EPYC 7203 CPU 2.80 GHz 16 2 83.90
AMD EPYC 7313P CPU 3.00 GHz 16 1 89.25
AMD EPYC 7443P CPU 2.85 GHz 24 1 84.49
AMD EPYC 7713P CPU 2.00 GHz 64 1 55.78
AMD EPYC 7543P CPU 2.80 GHz 32 1 80.62
AMD EPYC 7413 CPU 2.65 GHz 24 1 81.32
AMD EPYC 7763 CPU 2.45 GHz 64 1 61.43
AMD EPYC 7343 CPU 3.20 GHz 16 1 90.44
AMD EPYC 7453 CPU 2.75 GHz 28 1 74.80
AMD EPYC 75F3 CPU 2.95 GHz 32 1 83.90
AMD EPYC 7663 CPU 2.00 GHz 56 1 60.52
AMD EPYC 72F3 CPU 3.70 GHz 8 1 106.15
AMD EPYC 73F3 CPU 3.50 GHz 16 1 98.77
AMD EPYC 74F3 CPU 3.20 GHz 24 1 88.46
AMD EPYC 7643 CPU 2.30 GHz 48 1 65.65
AMD EPYC 7543 CPU 2.8 GHz 64 2 80.03
AMD EPYC 7713 CPU 2.0 GHz 128 2 55.04
AMD EPYC 7443 CPU 2.85 GHz 48 2 84.69
AMD EPYC 7313 CPU 3.0 GHz 32 2 90.74
AMD EPYC 7513 CPU 2.6 GHz 64 2 73.78
AMD EPYC 7373X FIO (16 cores, 768 M Cache, 3.8 GHz, DDR4 3200MHz) 32 2 97.58
AMD EPYC 7473X FIO (24 cores, 768 M Cache, 3.7 GHz, DDR4 3200MHz) 48 2 88.85
AMD EPYC 7573X FIO (32 cores, 768 M Cache, 3.6 GHz, DDR4 3200MHz) 64 2 84.94
AMD EPYC 7773X FIO (64 cores, 768 M Cache, 3.5 GHz, DDR4 3200MHz) 128 2 60.10
System (AMD Rome) # of Cores across CPUs # of CPUs CINT2006/core
AMD EPYC 7742 CPU 2.25GHz 128 2 49.31
AMD EPYC 7702 CPU 2.00GHz 128 2 46.11
AMD EPYC 7502 CPU 2.5GHz 64 2 62.77
AMD EPYC 7452 CPU 2.35GHz 64 2 59.35
AMD EPYC 7402 CPU 2.80GHz 48 2 68.23
AMD EPYC 7302 CPU 3.00GHz 32 2 68.72
AMD EPYC 7502P CPU 2.50GHz 32 1 63.37
AMD EPYC 7262 CPU 3.20GHz 16 2 74.38
AMD EPYC 7261 CPU 2.50GHz 16 2 55.93
AMD EPYC 7H12 CPU 2.60GHz 128 2 51.17
AMD EPYC 7662 CPU 2.00GHz 128 2 48.72
AMD EPYC 7642 CPU 2.30GHz 96 2 56.72
AMD EPYC 7552 CPU 2.20GHz 96 2 50.58
AMD EPYC 7532 CPU 2.40GHz 64 2 65.00
AMD EPYC 7272 CPU 2.90GHz 24 2 64.26
AMD EPYC 7352 CPU 2.30GHz 48 2 62.67
AMD EPYC 7302P CPU 3.0GHz 16 1 69.02
AMD EPYC 7402P CPU 2.8GHz 24 1 67.04
AMD EPYC 7702P CPU 2.0GHz 64 1 47.45
AMD EPYC 7232P CPU 3.1GHz 8 1 67.83
AMD EPYC 7282 CPU 2.8GHz 16 1 66.64
AMD EPYC 7542 CPU 2.9GHz 64 2 61.29
AMD EPYC 7F72 CPU 3.3GHz 48 2 72.99
AMD EPYC 7F52 CPU 3.5GHz 32 2 85.09
AMD EPYC 7252 CPU 3.1GHz 16 2 69.62
AMD EPYC 7F32 CPU 3.70GHz 16 2 88.06


CVM (Cores, Memory, HDD, SSD)

CVM Cores

Sizer will first compute the number of cores needed for the workloads.  The sum of all the workload cores is called TotalCores in this equation.

Each workload type  has its own number of cores

  • NumVDICores
  • NumDBCores (SQL Server)
  • NumServVirtCores
  • NumRawCores.   Note: coreCVMOverhead is a user input for RAW workload to set CVM Cores with default being 4 cores
  • NumServerComputingCores
  • NumSplunkCores
  • NumXenAppCores
  • NumFileServicesCores
  • Oracle cores is set to 6

Sizer then applies a weighted average of CVM cores for these workloads, ranging from 4 to 6 cores per node depending on the workload mix

CVM cores per node = (NumVDICores / TotalCores) * 4 + (NumDBCores / TotalCores) * 6 + (NumExchgCores / TotalCores) * 6 + (NumServVirtCores / TotalCores) * 4 + (NumRawCores / TotalCores) * coreCVMOverhead + (NumServerComputingCores / TotalCores) * 4 + (NumSplunkCores / TotalCores) * 6 + (NumXenAppCores / TotalCores) * 4 + (NumFileServicesCores / TotalCores) * 4 + (number of Oracle cores / TotalCores) * 6

For example, if only VDI is in a scenario, then the NumVDICores / TotalCores ratio is 1 and 4 cores are assigned to each node for the CVM

The workload equations will arrive at a CVM core count per node for the entire configuration that depends on the workload balance but is between 4 and 6 cores for the CVM per node.
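
A minimal Python sketch of the weighted average above (not Sizer's actual code); the dictionary keys are illustrative and the RAW workload's user-set coreCVMOverhead is omitted for brevity.

```python
# Minimal sketch of the CVM-cores weighted average above.
CVM_CORE_WEIGHT = {"vdi": 4, "db": 6, "exchange": 6, "server_virt": 4,
                   "server_computing": 4, "splunk": 6, "xenapp": 4,
                   "file_services": 4, "oracle": 6}

def cvm_cores_per_node(workload_cores):
    """workload_cores maps workload type -> cores required for that workload."""
    total = sum(workload_cores.values())
    return sum(cores * CVM_CORE_WEIGHT[w] for w, cores in workload_cores.items()) / total

print(cvm_cores_per_node({"vdi": 120}))               # VDI only -> 4.0
print(cvm_cores_per_node({"vdi": 60, "db": 60}))      # even VDI/DB mix -> 5.0
```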

 

CVM Memory

 

CVM memory will vary by Platform type

 

 

Platform Default Memory (GB)
VDI, server virtualization 20
Storage only 28
Light Compute 28
Large server, high-performance, all-flash 32

 

Feature Addon CVM Memory

 

Sizer adds on the following amount of memory for features as noted below

 

 

Features Memory (GB)
Capacity tier deduplication (includes performance tier deduplication) 16
Redundancy factor 3 8
Performance tier deduplication 8
Cold-tier nodes + capacity tier deduplication 4
Capacity tier deduplication + redundancy factor 3 16
Self-service portal (AHV only): see note below

 

  • With Asterix.1 there is no need to add memory beyond what is allocated for Platform CVM memory
 

Sizer approach to calculate CVM Memory

  • First determine the Platform type CVM Memory from the tables.  As we do a sizing for a given model, determine what type of model it is (it should be a table to allow updates) and assign the appropriate CVM memory per node (20, 28, or 32 GB per node).
  •  Next we add memory for add-ons, which cannot take the total higher than 32 GB
    • Add CVM memory for extras.  Total CVM Memory = Min(Platform CVM Memory + Addon memory, 32), where addon memory =
    • If RF3 = 8 GB
    • Dedupe only = 16 GB
    • Both RF3 and Dedupe = 16 GB
    • No addons = 0 GB
    • Compression = 0 GB
  • If an EPIC workload, take MAX(32 GB, result found in step 2).  Here it should be at least 32 but may be more.  If not EPIC, go to step 4
  • Add memory for the Hypervisor.  Looking at best practices for AHV, ESX and Hyper-V, we can assume 8 GB is needed for the hypervisor.  Though not a CVM memory requirement per se, it is a per-node requirement and so a good place to add it (versus a new line item in the Sizing details).
    • Total CVM Memory = Total CVM Memory + 8 GB
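
A minimal Python sketch of the steps above (not Sizer's actual code); names are illustrative, and the two print lines reproduce the 3060-G5 and 8035-G5 examples that follow.

```python
# Minimal sketch of the CVM memory steps above.
def cvm_memory_per_node_gb(platform_gb, rf3=False, dedupe=False, epic=False):
    if dedupe:
        addon = 16                      # dedupe (with or without RF3)
    elif rf3:
        addon = 8
    else:
        addon = 0
    cvm = min(platform_gb + addon, 32)  # platform + add-ons capped at 32 GB
    if epic:
        cvm = max(32, cvm)
    return cvm + 8                      # plus 8 GB for the hypervisor, shown per node

print(cvm_memory_per_node_gb(20, rf3=True))      # 3060-G5 example below: 36
print(cvm_memory_per_node_gb(28, dedupe=True))   # 8035-G5 example below: 40
```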

Examples.

  • Either manual or automatic sizing is sizing a 3060-G5.  RF3 is turned on for one workload.  User wants SSP.  Not an EPIC workload.

CVM memory per node

  • Platform CVM Memory for 3060-G5 = 20 GB
  • Add-on feature CVM requirement = 8 GB
  • Hypervisor = 8 GB

CVM Memory per node = 28 GB.  Will show 36 GB with hypervisor.

  • Either manual or automatic sizing is sizing a 1065-G5.  RF3 and Dedupe are OFF.  Not an EPIC workload.

CVM memory per node

  • Platform CVM Memory for 1065-G5 = 20 GB
  • Add-on feature CVM requirement = 0 GB
  • Hypervisor = 8 GB

CVM Memory per node = 20 GB.  Will show 28 GB with hypervisor.

  • Either manual or automatic sizing is sizing an 8035-G5.  Dedupe is turned on for one workload and the user wants SSP.  Not an EPIC workload.

CVM memory per node

  • Platform CVM Memory for 8035-G5 = 28 GB
  • Add-on feature CVM requirement = 16 GB
  • Hypervisor = 8 GB

CVM Memory per node = 32 GB.  Though the add-on requires 16 GB, we reached the maximum of 32 GB for the platform and add-ons together.  Will show 40 GB with hypervisor.

CVM HDD

Below is how the CVM HDD overhead is calculated.

  • Ext4: 5% of all HDD in TiB
  • Genesis: 5% of all HDD in TiB after Ext4 is discounted
  • Curator: Max(2% * HDD in TiB, 60 GiB) for the 1st HDD + Max(2% * HDD in TiB, 20 GiB) for all remaining HDDs
Let us take an example and see how this calculation works

HDD Capacity per node in TB 32
Number of Nodes in Cluster 3
Cluster total HDD Capacity in TB 96
Cluster total HDD Capacity in TiB 87.31

The example assumes each node has 4 x 8TB HDDs

Capacity of 1st HDD in TB 8
Capacity of 1st HDD in TiB 7.28
Capacity of all remaining HDDs in TB 88
Capacity of all remaining HDDs in TiB 80.04

Let us take the above numbers in the example and derive the HDD CVM overhead

  • Ext4: 5% of all HDD in TiB = 4.37
  • Genesis: 5% of all HDD in TiB after Ext4 is discounted = 4.15
  • Curator: Max(2% * HDD in TiB, 60 GiB) for the 1st HDD + Max(2% * HDD in TiB, 20 GiB) for all remaining HDDs = 1.75
  • Total CVM Overhead = 10.26
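
A minimal Python sketch of the rules above (not Sizer's actual code); names are illustrative and the 2%/20 GiB rule is applied per drive, which reproduces the example total.

```python
# Minimal sketch of the CVM HDD overhead rules above.
def cvm_hdd_overhead_tib(hdd_sizes_tib):
    """hdd_sizes_tib: capacities of every HDD in the cluster, in TiB."""
    total = sum(hdd_sizes_tib)
    ext4 = 0.05 * total
    genesis = 0.05 * (total - ext4)
    curator = max(0.02 * hdd_sizes_tib[0], 60 / 1024)               # 1st HDD
    curator += sum(max(0.02 * d, 20 / 1024) for d in hdd_sizes_tib[1:])
    return ext4 + genesis + curator

# Example above: 3 nodes x 4 x 8 TB HDDs = 12 drives of ~7.28 TiB each
print(round(cvm_hdd_overhead_tib([7.28] * 12), 2))                   # ~10.26
```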

CVM SSD

CVM SSD per node:

  • Nutanix Home: 60 GiB for each of the first 2 SSDs, assuming all nodes have at least 2 SSDs.  If there is just one SSD, as on the 6035C, then just 60 GiB.
  • Ext4: 5% of each SSD (after downstroke) in GiB, after the Nutanix Home capacity is taken.
  • Genesis: 5% of each SSD in GiB, after Ext4 is taken.
  • Cassandra: for homogeneous clusters, Max(30 GiB per node, 3% of HDD raw capacity + 3% of SSD raw capacity), applied to all nodes.  For heterogeneous clusters, find the largest node and then apply the same equation to all nodes.
  • Oplog: Oplog reservation per node = MIN(0.25 * (SSD space left after the Cassandra, cache, and Curator reservations), 400 GiB)
  • Content cache: 20 GB per node, converted to GiB
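
Below is a rough per-node Python sketch of these reservations (not Sizer's actual code).  It assumes Nutanix Home takes 60 GiB from each of the first two SSDs, and it omits the downstroke and Curator details since they are not itemized above; names are illustrative.

```python
# Rough per-node sketch of the CVM SSD reservations above.  All sizes in GiB.
def cvm_ssd_overhead_gib(ssd_sizes_gib, hdd_raw_gib, ssd_raw_gib):
    nutanix_home = 60 * min(2, len(ssd_sizes_gib))
    remaining = sum(ssd_sizes_gib) - nutanix_home
    ext4 = 0.05 * remaining
    genesis = 0.05 * (remaining - ext4)
    cassandra = max(30, 0.03 * hdd_raw_gib + 0.03 * ssd_raw_gib)
    content_cache = 20 * 0.931323                    # 20 GB converted to GiB
    left = remaining - ext4 - genesis - cassandra - content_cache
    oplog = min(0.25 * left, 400)
    return nutanix_home + ext4 + genesis + cassandra + content_cache + oplog
```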

 

What are the details on CVM overheads

  • HDD numbers can be seen by clicking the “I” button
  • SSD numbers can be seen by clicking the “I” button
  • In the case of All Flash, all the CVM components are applied to the SSD CVM, as shown below

 

Getting 1 node or 2 node clusters

New rules in Sizer for Regular Models and ROBO Models (October 2018)

Regular Models

Rules

  • All models included
  • All use cases are allowed – main cluster application, remote cluster application and remote snapshots
  • 3+ nodes is recommended

Summary

    • This is the default in Sizer and is used most of the time
    • It fits the best practice for a data center of having 3 or more nodes
    • A huge benefit is that the Sizer user can stay in this mode to size 1175S or other vendors’ small models if they want 3+ nodes anyhow.  There is no need to go to ROBO mode

  • Note: This removes a previous Sizer user headache, where users wanting to size these models for 3+ nodes were confused about where to go

 

What changes

  • The smaller nodes such as 1175S are included in the list for running main cluster applications vs just remote applications and remote snapshots

ROBO Models

Rules

    • All models are included, but only some can be sized for 1 or 2 nodes
    • All use cases – main cluster application, remote cluster application and remote snapshots
    • All models can be 3+ nodes depending on sizing requirements

 

  • ONLY certain models (aka ROBO models) can be 1 or 2 nodes

 

    • Note there is no CPU restriction.  Basically PM decides what models are ROBO, and they can be 1 or 2 CPUs

Summary

  • The user would ONLY need to go to ROBO if they feel the solution fits in 1 or 2 nodes
    • If the size of the workloads requires 3+ nodes, Sizer would simply report the required nodes and the recommendation would be no different than in regular mode
    • They feel the 1 or 2 node restrictions are fine:
      • The list of ROBO models is fine for the customer
      • RF for 1 node is at the disk level, not the node level
      • Some workloads like AFS require 3 nodes and so are not available

What changes

  • All models can be used in ROBO where before it was just the ROBO models

No quoting in Sizer for Robo

Currently there is a minimum number of units or deal size when quoting ROBO.  Sizer will size the opportunity and tell you that you should quote X units.  Given that it takes 10 or more units, and you may possibly want to club together multiple projects, we disabled quoting from Sizer when the scenario includes the 1175S.

 

Compression Sizing

Compression Settings

  • In each workload, there are the following compression settings
    • Disable compression for pre-compressed data.
      • This turns off compression in Sizer.  It is a good idea if the customer has mostly pre-compressed data for that workload.  Though it may be tempting to turn off compression all the time to be conservative, it is hard to economically have large All Flash solutions without any compression.  It is also unrealistic that no data compression is possible.  Thus, use this sparingly
    • Enable Compression
      • This is always ON for All Flash.  The reason is that post-process compression is turned ON for All Flash as it comes out of the factory.
      • By default it is ON for Hybrid, but the user can turn it OFF
    • Container Compression
      • There is a slider that can go from 1:1 (0% savings) to 2:1 (50% savings).
      • The range will vary by workload.  We do review Pulse data on various workloads; typically it is 30% to 50%.  For Splunk, it is 15% maximum, as the application does a fair amount of pre-compression before data is stored in Acropolis.

What Sizer will do if Compression is turned ON

  • Post-process compression is what Sizer sizes for.  The compression algorithm in Acropolis is LZ4, which runs about every 6 hours, but occasionally LZ4-HC goes through cold-tier data that is over a day old and can compress it further.
  • First the workload HDD and SSD requirements are computed without compression.  This includes the workload and RF overhead.
  • Compression is then applied.
  • Example.  The workload requires 4.39 TiB (be it SSD or HDD), RF3 is used for the Replication Factor, and Compression is set to 30% (see the sketch below)
    • Workload Total in Sizing Details = 4.39 TiB
    • RF Overhead in Sizing Details = 4.39 * 2 = 8.79 TiB  (with RF3 there are 2 extra copies, while with RF2 there is just one extra copy)
    • Compression Savings in Sizing Details = 30% * (Workload + RF Overhead) = 30% * (4.39 + 8.79) = 3.96 TiB
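
A minimal Python sketch of this savings calculation (not Sizer's actual code); names are illustrative, and the second print line anticipates the local-snapshot example in the next section.

```python
# Minimal sketch of the compression-savings math above.
def compression_savings_tib(workload_tib, rf, compression_pct, snapshot_tib=0.0):
    rf_overhead = workload_tib * (rf - 1)            # extra copies beyond the original
    snap_rf_overhead = snapshot_tib * (rf - 1)
    cold_user_data = workload_tib + rf_overhead + snapshot_tib + snap_rf_overhead
    return compression_pct * cold_user_data

# Example above: 4.39 TiB workload, RF3, 30% compression (doc shows 3.96 after rounding)
print(round(compression_savings_tib(4.39, 3, 0.30), 2))          # 3.95
# Local-snapshot example below: add 1.76 TiB of snapshots
print(round(compression_savings_tib(4.39, 3, 0.30, 1.76), 2))    # 5.54
```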

Deduplication

  • Deduplication does not affect the compression sizing

Local Snapshots

  • First the local snapshots are computed using what the user enters for the daily change rate and the number of snapshots retained (hourly, daily, weekly)
  • RF is applied to the local snapshots, as extra copies need to be made.
  • Compression is applied
  • Example
    • The workload requires 4.39 TiB HDD, RF3 is used for the Replication Factor, and Compression is set to 30%
    • Daily change rate = 1% with 24 hourly snapshots, 7 daily snapshots, 4 weekly snapshots
    • Local Snapshot Overhead in Sizing Details = 1.76 TiB  (explained in a separate section)
    • Snapshots RF Overhead in Sizing Details = 2 * 1.76 TiB = 3.52 TiB (with RF3 there are 2 extra copies, while with RF2 there is just one extra copy)
    • Compression Savings in Sizing Details = 30% * (Workload + RF Overhead + Local Snapshot Overhead + Snapshots RF Overhead) = 30% * (4.39 + 8.79 + 1.76 + 3.52) = 30% * 18.46 = 5.54 TiB
      • Though there are a lot of numbers, this is saying compression is applied to all the cold user data (not the CVM)

Remote Snapshots

  • Using same example used in local snapshots but adding remote snapshots put on a different cluster
  • Remote Snapshot overhead in Sizing Details  = 6.64 TiB  (note this is just for the remote cluster, also explained in separate section)
  • Snapshots RF Overhead in Sizing Details = 13.28 TiB  (note this is just for the remote cluster and remember it is RF3)
  • Compression Savings in Sizing Details = 30% * ( 6.64 + 13.28) = 5.98 TiB
    • Though there are a lot of numbers, this is saying compression is applied to all the cold user data (not the CVM)

Misc

  • If compression is ON, then only the Pro or Ultimate license is used in the financial assumptions and in the financial analysis section of the BOM

Login Information and Vendor Support

This is a common concern with various users as they will see different login approaches and vendor support

Login Approaches

My Nutanix Login –  This is for registered partners and for all Nutanix employees.  Most sizings will be done using this login approach.  You can do a complete sizing, including generating a BOM or budgetary quotes.  You cannot attach a BOM to a SFDC opportunity or generate a quote in SFDC.

Salesforce Login –  This is for Nutanix employees with SFDC Account.  This is used by Nutanix field who has access to SFDC.  You can do a complete sizing including generating a BOM or budgetary quotes.    You also can attach a BOM to a SFDC opportunity or generate a quote in SFDC.

Vendor Support

When you create a scenario you select what vendor the scenario should use, meaning their models.  Nutanix employees have access to all current vendors.

Partners often have to be registered partners with a given vendor.  When a partner logs in via My Nutanix their roles are retrieved and only those vendors are allowed.

Partners that feel they should be registered for a given vendor can send email request to:  partnerhelp@nutanix.com

Prospect Sizer

For customers we do have Prospect Sizer.  It is the same Sizer, updated when we post a new sprint, but with limitations

  • Intended for a prospect to get an initial sizing for a Nutanix solution
    • Not intended to be the final configuration to get a quote
    • Not intended to provide full Sizer capability, where competitors could see what a Nutanix partner will most likely bid
  • What it can do
    • Get a sizing for VDI, RDSH/XenApp, Server Virtualization, RAW
    • Allow the prospect to do some sizings within a 3-day period
  • What it cannot do
    • No financial analysis or financial assumptions
    • No sizing details
    • Set to homogeneous sizing only (no mixed or manual)
    • Standard sizing only (not aggressive or conservative)
    • No BOM
    • Limited to 3 scenarios and 3 workloads per scenario maximum
    • List pricing used for determining the recommendation (not margin)
    • No customization allowed
    • No Resiliency and Availability section

To access Prospect Sizer the customer should go here

https://productsizer.nutanix.com

If they have not registered or need to re-register they will be directed to the registration page