CVM Cores
Sizer first computes the number of cores needed for the workloads. The sum of all workload cores is called TotalCores in the equation below.
Each workload type has its own core count:
- NumVDICores
- NumDBCores (SQL Server)
- NumExchgCores (Exchange)
- NumServVirtCores
- NumRawCores. Note: coreCVMOverhead is a user input for the RAW workload that sets its CVM core weight, with a default of 4 cores
- NumServerComputingCores
- NumSplunkCores
- NumXenAppCores
- NumFileServicesCores
- Number of Oracle cores (the CVM core weight for Oracle is set to 6)
Sizer then takes a weighted average of CVM cores across these workloads, yielding 4 to 6 CVM cores per node depending on the workload mix:
CVM cores per node =
  (NumVDICores / TotalCores) * 4
  + (NumDBCores / TotalCores) * 6
  + (NumExchgCores / TotalCores) * 6
  + (NumServVirtCores / TotalCores) * 4
  + (NumRawCores / TotalCores) * coreCVMOverhead
  + (NumServerComputingCores / TotalCores) * 4
  + (NumSplunkCores / TotalCores) * 6
  + (NumXenAppCores / TotalCores) * 4
  + (NumFileServicesCores / TotalCores) * 4
  + (number of Oracle cores / TotalCores) * 6
For example, if a scenario contains only VDI, the NumVDICores / TotalCores ratio is 1 and 4 CVM cores are assigned per node.
The result is a single CVM core count per node for the entire configuration; it depends on the workload balance but always falls between 4 and 6 cores per node.
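For illustration, the weighted average can be written as a few lines of Python. This is a sketch of the formula above, not Sizer's code; the dictionary keys and function names are assumptions.

```python
# Illustrative sketch of the weighted-average CVM core calculation above.
# The per-workload weights come from the formula; names are not Sizer's own.

CVM_CORE_WEIGHTS = {
    "VDI": 4,
    "DB": 6,               # SQL Server
    "Exchange": 6,
    "ServerVirt": 4,
    "ServerComputing": 4,
    "Splunk": 6,
    "XenApp": 4,
    "FileServices": 4,
    "Oracle": 6,
}

def cvm_cores_per_node(workload_cores, raw_cores=0, core_cvm_overhead=4):
    """Weighted average of CVM cores per node.

    workload_cores   : dict mapping workload name -> cores needed
    raw_cores        : cores for the RAW workload, if any
    core_cvm_overhead: user-set CVM core weight for RAW (default 4)
    """
    total_cores = sum(workload_cores.values()) + raw_cores
    weighted = sum(cores * CVM_CORE_WEIGHTS[name]
                   for name, cores in workload_cores.items())
    weighted += raw_cores * core_cvm_overhead
    return weighted / total_cores

print(cvm_cores_per_node({"VDI": 120}))           # 4.0 (VDI-only scenario)
print(cvm_cores_per_node({"VDI": 60, "DB": 60}))  # 5.0 (even VDI/SQL mix)
```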
CVM Memory
CVM memory varies by platform type:
| Platform | Default Memory (GB) |
| --- | --- |
| VDI, server virtualization | 20 |
| Storage only | 28 |
| Light compute | 28 |
| Large server, high performance, all-flash | 32 |
Feature Addon CVM Memory
Sizer adds the following amounts of memory for the features noted below:
| Feature | Memory (GB) |
| --- | --- |
| Capacity tier deduplication (includes performance tier deduplication) | 16 |
| Redundancy factor 3 | 8 |
| Performance tier deduplication | 8 |
| Cold-tier nodes + capacity tier deduplication | 4 |
| Capacity tier deduplication + redundancy factor 3 | 16 |
| Self-service portal (AHV only) | |
Sizer's Approach to Calculating CVM Memory
- First, determine the platform-type CVM memory from the table above. When sizing a given model, determine what type of model it is (this should be table-driven to allow updates) and assign the appropriate CVM memory per node (20, 28, or 32 GB).
- Next, add CVM memory for add-ons. The combined platform and add-on memory cannot exceed 32 GB: Total CVM Memory = Min(Platform CVM Memory + Addon Memory, 32), where Addon Memory is:
  - RF3 only = 8 GB
  - Dedupe only = 16 GB
  - Both RF3 and dedupe = 16 GB
  - No add-ons = 0 GB
  - Compression = 0 GB
- If the scenario includes an EPIC workload, take Max(32 GB, result from the previous step); the result should be at least 32 GB but may be more. If it is not an EPIC workload, go straight to the hypervisor step below.
- Add memory for the hypervisor. Based on best practices for AHV, ESXi, and Hyper-V, assume 8 GB is needed for the hypervisor. Though not a CVM memory requirement per se, it is a per-node requirement, so this is a convenient place to add it (versus a new line item in the sizing details).
- Total CVM Memory = Total CVM Memory + 8 GB
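For illustration, here is a minimal Python sketch of the memory steps above. The platform keys, function names, and the omission of SSP (whose add-on value is not given in the table) are assumptions, not Sizer's actual implementation.

```python
# Minimal sketch of the CVM memory steps; values come from the tables above,
# names are illustrative only.

PLATFORM_CVM_GB = {
    "vdi_server_virt": 20,        # e.g. VDI / server virtualization platforms
    "storage_only": 28,
    "light_compute": 28,
    "large_hiperf_all_flash": 32,
}

def addon_memory_gb(rf3=False, dedupe=False):
    """Add-on memory: dedupe (alone or with RF3) = 16 GB, RF3 alone = 8 GB."""
    if dedupe:
        return 16
    if rf3:
        return 8
    return 0                      # no add-ons, or compression only

def cvm_memory_per_node_gb(platform, rf3=False, dedupe=False, epic=False,
                           hypervisor_gb=8):
    # Platform memory plus add-ons, capped at 32 GB.
    total = min(PLATFORM_CVM_GB[platform] + addon_memory_gb(rf3, dedupe), 32)
    if epic:
        total = max(32, total)    # EPIC workloads need at least 32 GB
    return total + hypervisor_gb  # 8 GB per node for the hypervisor
```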
Examples.
- Either manual or automatic sizing of a 3060-G5. RF3 is turned on for one workload, the user wants SSP, and it is not an EPIC workload.
CVM memory per node:
· Platform CVM memory for 3060-G5 = 20 GB
· Add-on feature CVM requirement = 8 GB
· Hypervisor = 8 GB
CVM memory per node = 28 GB. Will show as 36 GB with the hypervisor.
- Either manual or automatic sizing of a 1065-G5. RF3 and dedupe are off, and it is not an EPIC workload.
CVM memory per node:
· Platform CVM memory for 1065-G5 = 20 GB
· Add-on feature CVM requirement = 0 GB
· Hypervisor = 8 GB
CVM memory per node = 20 GB. Will show as 28 GB with the hypervisor.
- Either manual or automatic sizing of an 8035-G5. Dedupe is turned on for one workload, the user wants SSP, and it is not an EPIC workload.
CVM memory per node:
· Platform CVM memory for 8035-G5 = 28 GB
· Add-on feature CVM requirement = 16 GB
· Hypervisor = 8 GB
CVM memory per node = 32 GB. Although the add-on alone requires 16 GB, the platform plus add-on total is capped at 32 GB. Will show as 40 GB with the hypervisor.
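Using the hypothetical cvm_memory_per_node_gb sketch above (and treating the 8035-G5 as a 28 GB platform, per its example), the three results can be reproduced:

```python
cvm_memory_per_node_gb("vdi_server_virt", rf3=True)   # 3060-G5: 36 (28 + 8 hypervisor)
cvm_memory_per_node_gb("vdi_server_virt")             # 1065-G5: 28 (20 + 8 hypervisor)
cvm_memory_per_node_gb("storage_only", dedupe=True)   # 8035-G5: 40 (32 + 8 hypervisor)
```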
CVM HDD
Below is how the CVM HDD overhead is calculated.
| Component | Overhead |
| --- | --- |
| Ext4 | 5% of all HDD capacity in TiB |
| Genesis | 5% of all HDD capacity in TiB after Ext4 is discounted |
| Curator | Max(2% * HDD in TiB, 60 GiB) for the 1st HDD + Max(2% * HDD in TiB, 20 GiB) for all remaining HDDs |
The example assumes each node has 4 x 8 TB HDDs.

| Item | Value |
| --- | --- |
| HDD capacity per node (TB) | 32 |
| Number of nodes in cluster | 3 |
| Cluster total HDD capacity (TB) | 96 |
| Cluster total HDD capacity (TiB) | 87.31 |
| Capacity of 1st HDD (TB) | 8 |
| Capacity of 1st HDD (TiB) | 7.28 |
| Capacity of all remaining HDDs (TB) | 88 |
| Capacity of all remaining HDDs (TiB) | 80.04 |
Taking the numbers from the example above, the HDD CVM overhead is derived as follows:
| Component | Calculation | Overhead (TiB) |
| --- | --- | --- |
| Ext4 | 5% of all HDD in TiB | 4.37 |
| Genesis | 5% of all HDD in TiB after Ext4 is discounted | 4.15 |
| Curator | Max(2% * HDD in TiB, 60 GiB) for the 1st HDD + Max(2% * HDD in TiB, 20 GiB) for all remaining HDDs | 1.75 |
| Total CVM overhead | | 10.26 |
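The worked example can be reproduced with a short Python sketch; the TB-to-TiB conversion and function name are illustrative assumptions, not Sizer's code.

```python
# Sketch of the CVM HDD overhead, reproducing the 3-node, 12 x 8 TB example.

TB_TO_TIB = 1000**4 / 1024**4     # ~0.9095 TiB per TB

def hdd_cvm_overhead_tib(hdd_sizes_tb):
    """hdd_sizes_tb: list of every HDD in the cluster, in TB."""
    total_tib = sum(hdd_sizes_tb) * TB_TO_TIB
    ext4 = 0.05 * total_tib                       # 5% of all HDD in TiB
    genesis = 0.05 * (total_tib - ext4)           # 5% of what remains after Ext4

    first_tib = hdd_sizes_tb[0] * TB_TO_TIB
    rest_tib = sum(hdd_sizes_tb[1:]) * TB_TO_TIB
    curator = max(0.02 * first_tib, 60 / 1024)    # 1st HDD: at least 60 GiB
    curator += max(0.02 * rest_tib,
                   20 / 1024 * len(hdd_sizes_tb[1:]))  # remaining HDDs: at least 20 GiB each
    return ext4 + genesis + curator

print(round(hdd_cvm_overhead_tib([8] * 12), 2))   # 10.26 TiB
```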
CVM SSD
CVM SSD per node:
| Component | Overhead |
| --- | --- |
| Nutanix Home | 60 GiB on each of the first 2 SSDs (assumes all nodes have at least 2 SSDs) |
| Ext4 | 5% of each SSD in GiB after the Nutanix Home capacity is taken |
| Genesis | 5% of each SSD in GiB after Ext4 is taken |
| Cassandra | Homogeneous clusters: for all nodes, Max(30 GiB per node, 3% of HDD raw capacity + 3% of SSD raw capacity). Heterogeneous clusters: find the largest node, then apply the same equation to all nodes |
| Oplog | Oplog reservation per node = Min(0.25 * (SSD space left after the Cassandra, cache, and Curator reservations), 400 GiB) |
| Content cache | 20 GB per node, converted to GiB |
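A rough per-node sketch of the SSD-side components is shown below. The order of deductions, the handling of Ext4/Genesis across SSDs, and the omission of an explicit SSD Curator reservation are simplifying assumptions; names are illustrative.

```python
# Rough per-node sketch of the SSD CVM components listed above.

TB_TO_GIB = 1000**4 / 1024**3     # ~931.3 GiB per TB
GB_TO_GIB = 1000**3 / 1024**3     # ~0.931 GiB per GB

def ssd_cvm_overhead_gib(ssd_sizes_tb, node_hdd_raw_tb):
    ssd_gib = [s * TB_TO_GIB for s in ssd_sizes_tb]

    # Nutanix Home: 60 GiB on each of the first two SSDs.
    home = 60 * min(2, len(ssd_gib))

    # Ext4, then Genesis: 5% of the SSD capacity left after the prior deduction.
    after_home = sum(ssd_gib) - home
    ext4 = 0.05 * after_home
    genesis = 0.05 * (after_home - ext4)

    # Cassandra: Max(30 GiB, 3% of HDD raw + 3% of SSD raw) per node.
    cassandra = max(30, 0.03 * node_hdd_raw_tb * TB_TO_GIB
                        + 0.03 * sum(ssd_sizes_tb) * TB_TO_GIB)

    # Content cache: 20 GB per node, converted to GiB.
    cache = 20 * GB_TO_GIB

    # Oplog: Min(0.25 * SSD space left after the reservations above, 400 GiB).
    remaining = sum(ssd_gib) - home - ext4 - genesis - cassandra - cache
    oplog = min(0.25 * remaining, 400)

    return home + ext4 + genesis + cassandra + cache + oplog

# Example: a node with 2 x 1.92 TB SSDs and 32 TB of raw HDD.
print(round(ssd_cvm_overhead_gib([1.92, 1.92], node_hdd_raw_tb=32), 1))
```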
What are the details on CVM overheads?
- HDD numbers can be seen by clicking the “I” button
- SSD numbers can be seen by clicking the “I” button
- In the case of all-flash (AF) configurations, all of the CVM components are applied to the SSD CVM overhead.