Sep 2024 Releases

Sizer Sprint Release Announcement (10 Sep 2024) 

Hello everyone,

We went live with our latest Sizer release today. Here are the highlights:

Key Features & Enhancements

1. Prism Central Updates

  • Over the last couple of months, we have continuously enhanced PC capabilities in Sizer.
  • PnP 1.0 (aka the legacy portfolio) is no longer supported, and PnP 2.0 requires Prism Central for licensing operations.
  • Starting today, Sizer automatically adds a Prism Central workload to every Sizer scenario containing PnP 2.0 products.
  • Users have the option to delete the automatically added PC.

2. NC2 Expert Libraries

  • NC2 Expert Libraries are introduced in Sizer to standardize NC2 sizings across Nutanix SEs and partners.
  • Expert templates are available for both NC2 on AWS and NC2 on Azure.
  • The templates should help the field produce competitive TCO numbers when competing against native cloud vendors.
  • NC2 Expert templates can be found under Templates > Cluster Templates > Expert Templates.


3. Expanded Upgrade Sales Scenario to Cisco OEM

  • The upgrade sales scenario is enabled for Cisco OEM.
  • Users can simulate existing Cisco OEM hardware for expansions and upgrades.
  • Supported vendors for upgrades are NX, HPE DX, Fujitsu, Dell XC, and Cisco OEM.

Bug Fixes and Improvements

  • Security add-on options are now available for Files Dedicated backup targets.
  • The FSVM limit for File Services within a cluster is increased to 32.
  • Sizer detects storage from Files & Objects Storage containers when importing Collector data and highlights it.
    • As of today, the storage associated with these storage containers must be sized manually.
    • In the future, Sizer will automatically account for Files & Objects Storage.
  • Updated the minimum number of HCI nodes needed in an HCI+CO cluster from 4 to 3.

Recent Hardware Updates

  • New Models
    • Lenovo HX360 V2 Edge
    • DX560-16-G11-AC-NVMe
    • DX560-16-G11-LC-NVMe
  • EMR CPUs are now available across Cisco & Fujitsu EMEA models

Feedback
As we continually work to refine Sizer, your input is invaluable. Please feel free to share your thoughts, questions, or concerns so we can continue to improve Sizer.

If you have any queries, feel free to reach out to Sizer Support via Sizer Help; alternatively, you can reach the Sizer team via the Nutanix Community Page – Sizer Configuration Estimator.

Nutanix Collector vs Dell RVTools: A Comparative Analysis

In today’s complex IT environments, tools that provide insights into infrastructure performance and utilization are crucial. Nutanix Collector and Dell RVTools are two solutions, each with its unique features and benefits. This blog post will compare these tools based on various factors to help you choose the right one for your needs.

Performance Data Collection

Nutanix Collector

Nutanix Collector offers performance data collection for up to one week. This capability allows administrators to monitor and analyze system performance over a meaningful period, helping to identify trends and potential issues before they become critical. This is also very useful when right-sizing your existing environment.

Dell RVTools

RVTools, on the other hand, does not support performance data collection. This limitation might be a drawback for those who require detailed performance metrics to manage and size their infrastructure effectively.

Multi-Hypervisor Support

Nutanix Collector

Nutanix Collector supports multiple hypervisors, including ESXi, AHV, and Hyper-V. This versatility makes it an excellent choice for heterogeneous environments with different hypervisors.

Dell RVTools

RVTools is limited to ESXi only. While this might be sufficient for VMware-centric environments, it lacks the flexibility needed for mixed-hypervisor setups.

Extended Storage Support for ESXi

Nutanix Collector

Nutanix Collector supports extended storage options for ESXi environments, including RDM (planned in v5.2), vVOL, and vSAN. This feature is essential for environments leveraging advanced storage technologies.

Dell RVTools

RVTools also supports these extended storage options, providing parity with Nutanix Collector in this aspect.

Support for Capacity Workloads

Nutanix Collector

Nutanix Collector stands out by supporting capacity workloads such as NetApp ONTAP and MSSQL. This feature is particularly beneficial for enterprises that rely on these technologies for their data storage and database needs.

Dell RVTools

RVTools does not offer support for these capacity workloads, which may limit its usefulness in environments where these technologies are prevalent.

Native Cloud Support

Nutanix Collector

One of Nutanix Collector’s significant advantages is its native cloud support, including AWS (180 days of performance data) and Azure (90 days of performance data). This capability is critical for businesses leveraging cloud services and looking to move towards hybrid multi-cloud strategies.

Dell RVTools

RVTools does not provide native cloud support, which could be a considerable drawback for organizations with hybrid or multi-cloud strategies.

Data Visualization

Nutanix Collector

Nutanix Collector includes a portal or cloud version for rich data visualization, offering a comprehensive view of the infrastructure’s health and performance. This feature is crucial for making data-driven decisions and identifying issues quickly.

Dell RVTools

RVTools lacks built-in data visualization capabilities in its free version. While a paid version may offer this feature, it represents an additional cost.

Utilization Metrics and VM Provisioning Status

Nutanix Collector

Nutanix Collector provides utilization metrics at the cluster, host, and VM levels, along with VM provisioning status. This granular level of detail helps administrators optimize resource allocation, manage workloads efficiently, and right-size the environment.

Dell RVTools

RVTools does not offer these detailed utilization metrics or VM provisioning status, potentially limiting its effectiveness in comprehensive resource management and right-sizing the environment.

Networking Topology Support

Nutanix Collector

Nutanix Collector supports standard virtual networking topology, which is essential for understanding and managing network configurations in virtualized environments. Support for distributed virtual networking topology is planned in an upcoming release, version 5.2.

Dell RVTools

RVTools supports both standard and distributed virtual networking topology, giving it an edge in environments where distributed networking is utilized.

Masking Sensitive Information

Nutanix Collector

Nutanix Collector includes features for masking sensitive information, enhancing security and compliance, particularly in environments dealing with sensitive or regulated data.

Dell RVTools

RVTools does not offer features for masking sensitive information, which could be a concern for organizations prioritizing data privacy.

Operating System Support

Nutanix Collector

Nutanix Collector supports multiple operating systems, including Windows, Linux, and Mac. This cross-platform compatibility makes it a versatile tool for diverse IT environments.

Dell RVTools

RVTools is limited to Windows only, restricting its use in environments with a mix of operating systems.

Conclusion

Both Nutanix Collector and Dell RVTools offer valuable features for managing and monitoring virtualized environments. However, Nutanix Collector provides a more comprehensive set of capabilities, including multi-hypervisor support, native cloud integration, advanced workload support, and cross-platform compatibility. While Dell RVTools has strengths in networking topology and is suitable for VMware-centric environments, its limitations in performance data collection, workload support, and data security features make Nutanix Collector a more robust and versatile choice for most enterprises.

Choosing the right tool ultimately depends on your specific needs and environment.

May 2024 Releases

Sizer Sprint Release Announcement (8 May 2024)

We’re excited to share with you the highlights of what’s included in this release.

Key Features & Enhancements

Updates to PC Sizing

  • PC workload is now enhanced to support all the PC add-on services
  • Rectified the oversizing issue with scale-out PC sizing
  • See attached image – PC Sizing Updates

Collector Azure Imports

  • With the launch of Collector 5.1, Sizer now supports importing Collector outputs from native Azure cloud environments.
  • Sizing options are similar to the ones associated with AWS. 

Updates to VDI & RDSH Workloads

  • Updated VDI workload with latest Office versions – Office 2021 & Office 365
  • Updated VDA/RDSH workload with Windows 2022 numbers

Upgrade sales enabled for Dell XC

  • Upgrade sales scenario is now enabled for Dell XC
  • Supported vendors for upgrades are – NX, HPE DX, Fujitsu & Dell XC

NCM Edge and NCM EUC Licenses

  • Sizer NCM licenses are expanded beyond core-based licensing
  • New options – NCM Edge (per VM) and NCM EUC (per User)
  • The new licensing options can be found under “Solution Options”
  • See attached image – NCM updates

Bug Fixes and Improvements

  • The below NX G9 models are now supported in 1-node & 2-node configurations
  • NX-3035, NX-3060, NX-3155, NX-8150, NX-8155, NX-8155A, NX-8170
  • With legacy portfolio sales approaching EOS, a warning message is displayed when a scenario contains legacy portfolio products.
  • Sizer BOM updated with the Nutanix software license.
  • Workloads are sized by configuration when performance data is missing; this is highlighted in the import summary output.

Sizer Sprint Release Announcement (21 May 2024) 

Key Features & Enhancements

Files Video Management System (VMS)

  • Introducing a new workload/use case for Files – Video Management System (VMS)
  • Designed based on the VMS solution from Milestone, a global leader in video solutions.
  • Supported across Mixed Mode and Dedicated Mode Files clusters.
  • The solution provisions for archival storage and Recording Server compute resource requirements.

Nutanix Launchpad Promo

  • Available under the ‘License, Support & Services’ section
  • Both NCI Promo & NCM Promo options with guardrails
  • Integration with Frontline to get the promo quotes. 

Option to turn off CPU Applied Weight Factor

  • Switching off the Applied Weight Factor removes the SPECint-based adjustment.
  • Effective cores will be the same as physical cores.
  • Useful when sizing to RFP core requirements.

Bug Fixes and Improvements

  • Sizer BOM report with new branding
  • Performance improvements reducing overall sizing time and better user experience. 

New Models launched in the last 2 weeks

Feedback

We’re continuously striving to improve Sizer and provide the best possible experience for our users. Your feedback is incredibly valuable to us as we continue to iterate and improve Sizer. If you have any suggestions or questions, or encounter any issues, please don’t hesitate to reach out to us.

Thank you for your continued support and for being a part of our journey!

Thanks,

Sizer Team

April 2024 Releases

Sizer Sprint Release Announcement (23 April 2024)

Hello everyone,

We’re thrilled to announce the successful completion of another Sizer sprint! We’re excited to share with you the highlights of what’s included in this release:

Key Features & Enhancements

1. NC2 on AWS Overall Solution Cost

  • The Sizer budgetary quote is updated to reflect the overall NC2 on AWS solution cost.
  • Starting today, the NC2 on AWS solution cost is no longer limited to the Nutanix software cost; it also shows the AWS hardware cost.
  • Sizer pulls live pricing info from AWS for bare metal and EBS volumes.
  • Prices are based on the opportunity theater; more details are in the disclaimer in the footnotes of the budgetary quote.

2. Online repository to capture your workload source details  

  • You can now attach your workload source files within the Sizer scenario.
  • This makes referencing and traceability easier if you need to revisit them in the future.
  • There are no restrictions on the file type – it can be a Word doc, PDF, Excel, etc.
  • You can also attach Collector or RVTools output if you use them but prefer not to import them.
  • You can find this feature in the Workload tab – “Workload Source & Import History”

3. New default CPU for all workloads

  • Starting today, the default CPU for all workloads is updated to Intel Gold 5220 (Cascade Lake)
  • The new default CPU is more aligned with the CPUs that are seen for brownfield sizings
  • We highly encourage you to specify the exact CPU while adding workloads instead of defaulting
  • You can always switch to the previous default CPU by using the “Sizer Baseline” option

Bug Fixes and Improvements

  • The first phase of performance improvements is rolled out – this should help in certain areas, with more to come in future sprint releases.
  • Guardrail to avoid mixing 3 tiers of storage in the same cluster; for more info refer to Product Mixing Restrictions
  • Enhanced warning messages during import – for utilization-based sizings and AWS imports
  • Enhanced guardrails to ensure the right networking options are enforced when configuring Cisco models and opting for Intersight Standalone Mode (ISM)

New Models launched in the last 2 weeks

  • Dell XC760xa-6N 
  • Dell XC660xs-4
  • Dell XC660xs-4s
  • HPE DX385-8-G11-GPU – GPU Dense platform

The following HPE DX models are now updated with AMD Bergamo

  • DX365 Gen11 10 NVMe
  • DX385 Gen11 12 LFF

Sizer Sprint Release Announcement (10 April 2024)

We’re thrilled to announce the successful completion of another Sizer sprint! We’re excited to share with you the highlights of what’s included in this release:

Services – IM:

  • Services-Infrastructure Modernization portfolio is available to add as part of the solution.
  • License & Support section is now License, Support & Services with a separate Services Tab
  • The PS SKUs are included in the BOM PDF and FL quote and, more importantly, in the budgetary estimate of the solution with Services.
  • Available for Nutanix SEs and Partners     

Collaboration/Partners:

  • Sharing with write access is now enabled for Partners.
  • Partner SEs can share their solution with users within their org (same domain) or with Nutanix SEs.
  • Facilitates multiple users jointly developing a solution on the same scenario.

Others/Licenses:

  • NCI Ult license enforcement for NC2 on AWS with EBS
  • Prism Central support for Dedicated clusters.
  • Upgrade sales enabled for Fujitsu

Feedback

We’re continuously striving to improve Sizer and provide the best possible experience for our users. Your feedback is incredibly valuable to us as we continue to iterate and improve Sizer. If you have any suggestions or questions, or encounter any issues, please don’t hesitate to reach out to us.

Thank you for your continued support and for being a part of our journey!

Thanks,

Sizer Team

Feb 2024 release

New workload : Prism Central

  • Prism Central is a new sizing option in the workload dropdown.
  • Size for single-VM or scale-out deployment options.
  • Considers resource requirements for add-ons: Self Service, Flow, and Prism Operations.
  • Please note that this is an added option where users can manually add PC to the cluster; the existing option to select Prism Central from Solution Options continues to exist.

Split large workloads into multiple clusters:

  • Earlier, extremely large workloads gave a “no optimal solution” message when they did not fit in a single cluster.
  • With split workload functionality, Sizer automatically creates multiple clusters and splits workloads across them optimally.
  • Currently supported for Cluster (raw) sizing only.
  • This should be especially helpful in two scenarios: large imports from Collector, and sizing NC2 on AWS/Azure with smaller maximum node limits.

Insights imports: Auto create existing cluster

  • When importing from Insights, there is an option to auto-recreate the existing cluster.
  • This creates the cluster with the existing HW configuration (today it defaults to the latest-gen models, and users must manually configure existing HW).
  • Only additional nodes need to be manually configured when considering an expansion.

Proposals:

  • Latest HW spec on NX G9 and HPE DX G11

Usability enhancements:

  • View workload:
    • Currently it captures only essential workload inputs.
    • Now there is a ‘View All details’ that opens the workload page in view mode to view all inputs.
    • No need to clone just to see workload inputs.

Platforms: Dell 16G (Intel SPR) platforms

A short demo on splitting workloads into multiple clusters:

January 2024

We went live with the current sprint and are excited to highlight a few major features, one of which has been frequently requested by the SE community.

Intel / AMD (View Other Alternatives):

  • You will notice ‘View other alternatives’ just below the default solution (pic below)
  • The solution includes the AMD model options for the same workload alongside the Intel (default) models, along with the cost delta.
  • As per the NX team, with the introduction of NX AMD models, there may be situations where the AMD option comes out optimal; presenting both gives a broader perspective and may help drive its adoption.
  • The alternative option is not restricted to NX but is applied across vendors that support AMD models.

Collaboration / Sharing with edit access:

  • One of the most frequently requested features. Earlier, sharing was read-only, and users had to clone (under their name) to make edits.
  • While sharing, you can now choose to share as ‘read only’ or ‘write’, the latter allowing the shared user to make edits to the same scenario.
  • Only one user can make edits at a time, and the scenario is locked by/for that user, who is shown as the ‘Active Editor’ at the top, next to the scenario owner (pic below).
  • Once the active editor finishes making changes and the scenario is idle, it is released for editing by others.
  • Alternatively, the active user can release it immediately by clicking the ‘Stop editing’ button.
  • As the first phase, this is enabled only for internal users via SFDC login; opening it to IDP/Partner login is planned.
  • There is a short demo on this for an end-to-end flow: https://nutanix.zoom.us/rec/share/rWpWSXcUyUF5fx3gc8d91nVJs5_3xgyXiN3lvDl2WMls0sTE_qk6J4rVKlrBDdoj.D1DqFh_nlbg7UgIc?startTime=1705414710000

Platforms :

  • Cisco Edge Offer SKUs available for sizing/BOM
  • Adding platform integration fee as a line item in the Budgetary quote (in sync with FL)

Survey: Conducted a survey on feature gaps/UX review.

https://forms.gle/CrqKz5JnEFziAPSw6

Collector 5.0

The latest release of Nutanix Collector 5.0 is now available for download.  

WHAT’S NEW

Nutanix Collector 5.0 adds support for gathering data from AWS environments using Windows command line interface (CLI) mode. You can gather configuration and performance data of AWS resources to accurately understand cloud infrastructure requirements, and export the data to Sizer to size the workloads either on NC2 on AWS or in an on-prem datacenter.

  • Gather details about Elastic Compute Cloud (EC2) instances, Elastic Block Storage (EBS) volumes, Snapshots, and Elastic File System (EFS)
  • Gather data across multiple Organization Units and/or Accounts in one go
  • Instantaneously gather utilization and performance patterns for up to 180 days
  • Gather utilization data of EC2 instances – Average, Peak & 95th Percentile CPU utilization
  • Gather performance data of EBS Volumes – IOPS, IO Size & Throughput
  • Option to export the data from the Collector Portal to Sizer

Demo videos

Nutanix Collector 5.0 – Prerequisites for AWS Data Collection
Nutanix Collector 5.0 – Support for AWS Data Collection

For detailed information, see the Collector User Guide.

UPDATED FEATURES

The following features are updated as part of this release

  • AHV Collections
    • Support for “Storage Containers” information
    • CVM resource utilization information – helps in simulating the existing customer environment for expansion use cases
  • Hyper-V Collections
    • Provisioned & Consumed Capacity reporting at the host level
  • Updates to the metadata sheet
    • Collection Date & Time
    • Performance Data Duration

RESOLVED ISSUES

The following issues are resolved in this release

  • Resolved an issue where Nutanix Collector failed to collect performance data for the Hyper-V host.
  • The tooltip information was showing an incorrect time of collection while gathering the performance data from Hyper-V clusters.

RESOURCES

CALL TO ACTION

Simplify your requirement gathering by downloading Collector

Public URL: Collector Download Link for Prospects (no login required)

MyNutanix URL: Collector Download Link for MyNutanix Users 

Report issues in Collector via this Google form

Thank you,
Team Nutanix Collector

Storage calculation for clusters with mixed capacity nodes

This article explains the logic behind storage calculations for clusters having nodes with different storage capacities.

What has changed?

Previously, capacity calculations were based on the aggregate capacity across nodes in the cluster. This total capacity was the basis for calculating the usable and effective capacity in the cluster.

For example: consider 3 nodes, N1 = 20TB, N2 = 20TB, and N3 = 10TB.

Based on the above, the total capacity available is 20 + 20 + 10 = 50TB. Assuming N+1, the available nodes are N2 + N3 = 30TB. Thus, 15TB can be used for data and 15TB for RF (assuming RF2).

With the new update: Sizer also ensures that the RF copy of the data and the data itself do not share the same node.

In the above example, after N+1, two nodes are available: N2 = 20TB and N3 = 10TB.

If we allowed writing 15TB of data (and 15TB for RF), part of the data and its RF copy would have to be on the same node, as N3 is only 10TB. So, to ensure the RF copy and the data are on separate nodes, the usable storage in this case is 20TB (10TB of data on N2 and its RF copy on N3, or vice versa).

Note: Although the same logic is used for both homogeneous and mixed capacity clusters, the difference is seen primarily for the mixed capacity clusters.

Here is a detailed write-up on how usable storage is calculated for clusters with mixed-capacity nodes across different scenarios for RF2 and RF3.

Algorithm for RF2

If we have only one node with non-zero capacity Cx, then in RF2 the replication is done between the different disks of the same node, and hence the extent store in this case will be Cx / 2 (RF); otherwise, one of the cases below applies. Let us say we have nodes with capacities C1, C2, C3, …, C[n], sorted by capacity. There are 2 cases to consider for RF2 when computing the effective raw storage capacity:

Case-1: C1 + C2 + C3 + …. + C[n-1] <= C[n] 
If this is the case, then the total amount of storage that can be replicated with a factor of 2 is  ∑(C1, C2, C3, …., C[n-1])

Case-2: C1 + C2 + C3 + …. + C[n-1] > C[n]
If this is the case, then the (total storage capacity) / 2 (RF) can be replicated among the available nodes. In other words, half the total capacity can be replicated.
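The two RF2 cases above can be sketched in a few lines of Python (a hypothetical helper for illustration, not Sizer's actual code):

```python
def rf2_usable_data(capacities):
    """Maximum data (before replication) that can be stored with RF2,
    keeping each RF copy on a different node than its data.
    `capacities` is a list of per-node raw capacities (same unit)."""
    caps = sorted(capacities)
    if len(caps) == 1:
        # Single node: replication happens between disks of the same node.
        return caps[0] / 2
    smaller, largest = sum(caps[:-1]), caps[-1]
    if smaller <= largest:
        # Case-1: everything on the smaller nodes can pair with the largest.
        return smaller
    # Case-2: half the total capacity can be replicated.
    return sum(caps) / 2
```

For the worked example above, `rf2_usable_data([20, 10])` returns 10, i.e., 20TB of usable storage (10TB of data plus its 10TB RF copy).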

Algorithm for RF3

Let us say, we have nodes with capacities C1, C2, C3, …., C[n] which are in sorted order according to their capacities. Algorithm for RF3 is slightly different from that of RF2 because we need to accommodate the replica of data on 2 nodes, as opposed to a single node on RF2.

  1. Since there are 3 replicas to place, we calculate the capacity difference between the 2nd largest (C[n-1]) and the 3rd largest (C[n-2]) entities as ‘diff’. This information is necessary so that given an optimal placement scenario where the first replica is placed on the entity with the smallest capacity, the second replica is placed on the entity with the largest capacity (C[n]) and the third replica is placed on the entity with the 2nd largest capacity (C[n-1]); the difference between the 2nd and the 3rd largest capacities ((C[n-1]) – (C[n-2])) will help us quickly deduce when the 2nd largest entity will become equal to the 3rd largest entity by virtue of space consumed on the former via replica placement.
  2. By deducting either the ‘diff’ calculated above or the capacity of the smallest entity, and simulating RF3 placement such that C[n-2] and C[n-1] become equal (note that the difference between C[n] and C[n-1] remains constant during this, since the same capacity is deducted from both), in O(N) we arrive at one of the following possibilities:
    • Case-1: Only 3 entities remain with non-zero capacities, in which case the amount of data that can be accommodated among these 3 nodes with an RF of 3 (one actual copy and 2 replicas) is the smallest remaining capacity, which is C[n-2].
    • Case-2: There is capacity left in C[n-3] (i.e., the 4th largest entity) and any number of nodes before it (i.e., C[n-4], C[n-5], … etc.), and C[n-2] == C[n-1] (i.e., the capacities remaining on the third and the second largest entities have become equal). This is because at this point, the capacity remaining on the smallest non-zero entity before C[n-2] is greater than C[n-1] – C[n-2], indicating that after placing the first replica on C[n] and the second replica on C[n-1], the capacity on C[n-1] has become equal to C[n-2]. At this point, for the next bytes of data, the second replica will go to C[n] while the third replica will be round-robined between at least 2 (or more) entities. In this scenario, 2 further cases can arise:
      • Case-2(a): (C1 + C2 + … + C[n-1]) / 2 <= C[n]
        If C[n]’s capacity is so high that for every 1st and 3rd replica placed on the lowest-capacity nodes up to C[n-1], the second replica always finds space on C[n], i.e., if (C1 + C2 + … + C[n-1]) / 2 <= C[n], then the amount of storage that can be accommodated on the available nodes with an RF of 3 is the lower of the two sides of the inequality, i.e., (C1 + C2 + … + C[n-1]) / 2, as we cannot consume the full space on C[n].
      • Case-2(b): (C1 + C2 + … + C[n-1]) / 2 > C[n]
        But if C[n]’s capacity is not as high as in case (a), i.e., (C1 + C2 + … + C[n-1]) / 2 > C[n], then one of the replicas will be placed on the largest entity C[n], while the other two replicas round-robin amongst the other largest-capacity entities (since the capacities remaining on at least 2 entities, C[n-2] and C[n-1], are already equal). This continues until C[n] becomes equal to C[n-1], which is guaranteed to happen eventually because the replicas consume space on C[n] at at least double the rate of C[n-1], C[n-2], …. From that point, both the second and the third replicas continue being round-robined across all the remaining entities, and thus all the remaining capacity can be fully consumed. Hence, in this case, the amount of storage that can be accommodated is the sum of all remaining (non-zero) entities divided by 3 (RF).
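The RF3 result can be checked numerically with a feasibility test: data of size D fits iff 3 × D <= sum of min(Ci, D) over all nodes, since no node may hold more than one replica of any datum. The sketch below (an equivalent formulation for verification, not the O(N) procedure described above) binary-searches for the largest feasible D:

```python
def rf_usable_data(capacities, rf=3):
    """Largest data size D placeable with `rf` replicas on distinct nodes.
    Feasibility: rf * D <= sum(min(Ci, D)), because each node can hold
    at most one replica of any datum, hence at most D in total.
    Illustrative numerical check only, not Sizer's implementation."""
    def feasible(d):
        return rf * d <= sum(min(c, d) for c in capacities)
    lo, hi = 0.0, sum(capacities) / rf
    for _ in range(80):  # bisect to high precision
        mid = (lo + hi) / 2
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

For example, `rf_usable_data([10, 20, 30])` approaches 10 (Case-1), `rf_usable_data([10, 10, 100])` approaches 10 (Case-2(a)), and `rf_usable_data([10, 10, 10, 10])` approaches 40/3 (Case-2(b)). With `rf=2` it reproduces the RF2 cases as well.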

Terminologies

Effective Usable Capacity = 95% of (Raw capacity – failover capacity based on RF)

95% because AOS stops writing to the disk when the cluster utilization reaches 95%.

Effective Capacity = Effective Usable Capacity – CVM

Extent Store Capacity = Effective Capacity / RF

Effective Capacity without Savings = Extent Store Capacity

Effective Capacity with Savings = Extent Store Capacity + Savings (Storage Efficiency & Erasure Coding)
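Chaining the definitions above (with illustrative input numbers, not from any Sizer scenario):

```python
def capacity_chain(raw, failover, cvm, rf, savings=0.0):
    """Apply the capacity definitions in order. All values in TB;
    failover, CVM, and savings figures are scenario-specific inputs."""
    effective_usable = 0.95 * (raw - failover)  # AOS stops writes at 95% utilization
    effective = effective_usable - cvm
    extent_store = effective / rf
    return {
        "Effective Usable Capacity": effective_usable,
        "Effective Capacity": effective,
        "Extent Store Capacity": extent_store,
        "Effective Capacity with Savings": extent_store + savings,
    }

# e.g. 100TB raw, 20TB failover reserve, 4TB CVM, RF2:
# Effective Usable = 76.0, Effective = 72.0, Extent Store = 36.0
```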

NVD for OpenShift

Nutanix Validated Design – AOS 6.5 with Red Hat OpenShift

 

Nutanix delivers the Validated Design – AOS 6.5 with Red Hat OpenShift as a bundled solution for running Red Hat OpenShift 4.12 that includes hardware, software, and services to accelerate and simplify the deployment and implementation process. This Validated Design features a full-stack solution for hybrid cloud deployments that integrates not only products like Nutanix NCI and NUS but also Red Hat OpenShift Platform Plus, including Red Hat Quay and Advanced Cluster Manager. The Validated Design – AOS 6.5 with Red Hat OpenShift sizer reference scenario has the following features:


A baseline configuration for each Availability Zone comprises three clusters:

Management – a 4-16 node cluster based on NX-3170-G8 for Prism Central, Active Directory, the OpenShift hub cluster for the Quay registry and Advanced Cluster Manager, and any other management services, including the OpenShift control plane when using the “large footprint” layout.

Workload – a 4-16 node cluster of NX-3170-G8 for Red Hat OpenShift workloads.

Storage – a 4-node cluster of NX-8155-G8 for Nutanix Files and Nutanix Objects.

 

 

A dedicated management cluster in each Availability Zone provides increased availability, security, operations, and performance. The Management Cluster starts with 4 nodes and can be scaled up to 16 nodes if multiple “Large Footprint” OpenShift clusters need to be deployed.

 

The workload cluster has a defined maximum size of 16 nodes, which provides a reasonable maintenance window for rolling firmware and software upgrades. Smaller workload clusters can be deployed, but the maximum size should not exceed 16 nodes.

 

The NVD defines two footprints for OpenShift cluster sizes:

  • Small Footprint: the OpenShift control plane shares the same workload cluster as the worker nodes; maximum of 25 worker nodes
  • Large Footprint: the OpenShift control plane runs in the Management Cluster; worker nodes run in the workload cluster

Table: OpenShift Cluster Sizing: OCP Cluster (Workload) Small Footprint

Component | Instances | Size
Control plane | 3 | 4 CPU cores, 16 GB
Infrastructure | 3 | 4 CPU cores, 16 GB
Worker | 2+ (max 25) | 4 CPU cores, 16 GB

Table: OpenShift Cluster Sizing: OCP Cluster (Workload) Large Footprint

Component | Instances | Size
Control plane | 3 | 16 CPU cores, 128 GB
Infrastructure | 3 | 16 CPU cores, 128 GB
Worker | 2+ (max 360) | Minimum 4 CPU cores, 16 GB
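As a quick arithmetic check on the tables above (a hypothetical helper for illustration, not part of the NVD tooling), the minimum resource footprint of an OCP cluster can be totaled from its tiers:

```python
def footprint_totals(tiers):
    """Sum CPU cores and memory across tiers; each tier is a tuple of
    (instances, cores_per_instance, memory_gb_per_instance)."""
    cores = sum(n * c for n, c, _ in tiers)
    mem_gb = sum(n * m for n, _, m in tiers)
    return cores, mem_gb

# Small footprint at its minimum of 2 workers:
# 3 control plane + 3 infrastructure + 2 workers, each 4 cores / 16 GB
small_min = footprint_totals([(3, 4, 16), (3, 4, 16), (2, 4, 16)])
# → (32, 128): 32 CPU cores and 128 GB of memory
```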

 

  • Services SKUs are not included in the Sizer reference scenarios. Review the BOM in the NVD appendix for the complete list of Services SKUs that should be included.

It is extremely important to review the entire Validated Design – AOS 6.5 with Red Hat OpenShift on the Solutions Support portal to understand the complete validated solution. 

 

For questions or additional information:

 

May 2023

 

With the current release, Sizer introduces two new capabilities: Compute-only/Storage-only nodes and linking Frontline quotes.

  • Compute-only/Storage-only nodes:
    • Cluster settings has a Node type option to select an HCI-only solution (or one with CO or SO options)
    • Supports the DBOptimized nodes (CO+SO) as part of AOS 6.6.2
    • In the UI, nodes are tagged as CO or SO to identify the node types
    • In manual mode, you can treat a node as CO or SO by tagging it
    • As you know, this should help in creating a solution, especially for databases, optimizing 3rd-party licenses

  • Linking quotes:
    • All Frontline quotes generated for a scenario can be referenced via the ‘Related quotes’ pop-up
    • Includes the quoteId link along with the status and date created/modified
    • This is helpful in tracking scenarios to quotes for past or future reference
    • We will soon introduce the concept of a primary sizing (locking it once a quote is created) and allow edits only on a clone. This would help avoid 1-to-many mappings and enable better tracking.

  • Other enhancements:
    • Discounts pre-filled in the budgetary quote are more aligned with good/target discounts from Frontline
    • Expert cluster templates – models changed to NG8
    • NC2 cluster max limits – 13 for NC2/Azure and 28 for NC2/AWS
    • vCPU:pcore for imported workloads – using the exact ratio (no rounding off)

Have a great week!