NVD for OpenShift

Nutanix Validated Design – AOS 6.5 with Red Hat OpenShift

 

Nutanix delivers the Validated Design – AOS 6.5 with Red Hat OpenShift as a bundled solution for running Red Hat OpenShift 4.12 that includes hardware, software, and services to accelerate and simplify deployment and implementation. This Validated Design features a full-stack solution for hybrid cloud deployments that integrates Nutanix products such as NCI and NUS with Red Hat OpenShift Platform Plus, including Red Hat Quay and Advanced Cluster Manager. The Validated Design – AOS 6.5 with Red Hat OpenShift Sizer reference scenario has the following features:

A baseline configuration for each Availability Zone is comprised of three clusters:

Management – 4- to 16-node cluster based on the NX-3170-G8 for Prism Central, Active Directory, the OpenShift hub cluster for the Quay registry and Advanced Cluster Manager, and any other management services, including the OpenShift control plane when using the large-footprint layout.

Workload – 4- to 16-node cluster of NX-3170-G8 nodes for Red Hat OpenShift workloads.

Storage – 4-node cluster of NX-8155-G8 nodes for Nutanix Files and Nutanix Objects.

A dedicated management cluster in each Availability Zone provides increased availability, security, operations, and performance. The management cluster starts with 4 nodes and can scale up to 16 nodes if multiple large-footprint OpenShift clusters are deployed.

 

The workload cluster has a defined maximum size of 16 nodes, which provides a reasonable maintenance window for rolling firmware and software upgrades. Smaller workload clusters can be deployed, but no workload cluster should exceed 16 nodes.

 

The NVD defines two footprints for OpenShift cluster sizing:

  • Small footprint: the OpenShift control plane shares the workload cluster with the worker nodes; maximum of 25 worker nodes.
  • Large footprint: the OpenShift control plane runs in the management cluster, while the worker nodes run in the workload cluster.

Table: OpenShift Cluster Sizing: OCP Cluster (Workload) Small Footprint

Component       Instances     Size
Control plane   3             4 CPU cores, 16 GB RAM
Infrastructure  3             4 CPU cores, 16 GB RAM
Worker          2+ (max 25)   4 CPU cores, 16 GB RAM

Table: OpenShift Cluster Sizing: OCP Cluster (Workload) Large Footprint

Component       Instances      Size
Control plane   3              16 CPU cores, 128 GB RAM
Infrastructure  3              16 CPU cores, 128 GB RAM
Worker          2+ (max 360)   Minimum 4 CPU cores, 16 GB RAM
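As an illustration of the sizing tables above, the following sketch totals the CPU cores and memory a cluster layout requires. The per-role figures mirror the two tables (large-footprint workers are assumed at their stated minimums); the helper name and data structure are hypothetical, not part of the NVD or Sizer.

```python
# Illustrative aggregation of the OpenShift cluster sizing tables above.
# Figures come from the small/large footprint tables; names are hypothetical.

FOOTPRINTS = {
    "small": {
        "control_plane": {"instances": 3, "cores": 4, "ram_gb": 16},
        "infrastructure": {"instances": 3, "cores": 4, "ram_gb": 16},
        # Worker count is variable: 2 to 25 in the small footprint.
        "worker": {"instances": None, "cores": 4, "ram_gb": 16, "max_workers": 25},
    },
    "large": {
        "control_plane": {"instances": 3, "cores": 16, "ram_gb": 128},
        "infrastructure": {"instances": 3, "cores": 16, "ram_gb": 128},
        # Worker count is variable: 2 to 360; sized here at the table minimums.
        "worker": {"instances": None, "cores": 4, "ram_gb": 16, "max_workers": 360},
    },
}

def cluster_totals(footprint: str, workers: int) -> dict:
    """Return total CPU cores and RAM (GB) for a footprint and worker count."""
    spec = FOOTPRINTS[footprint]
    if not 2 <= workers <= spec["worker"]["max_workers"]:
        raise ValueError("worker count outside the validated range")
    cores = ram = 0
    for role in spec.values():
        n = role["instances"] if role["instances"] is not None else workers
        cores += n * role["cores"]
        ram += n * role["ram_gb"]
    return {"cores": cores, "ram_gb": ram}
```

For example, a small-footprint cluster with 10 workers totals 64 CPU cores and 256 GB of RAM across the control plane, infrastructure, and worker roles.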

 

  • Services SKUs are not included in the Sizer reference scenarios. Review the BOM in the NVD appendix for the complete list of Services SKUs that should be included.

It is extremely important to review the entire Validated Design – AOS 6.5 with Red Hat OpenShift on the Solutions Support portal to understand the complete validated solution. 

 

For questions or additional information:

 

May 2023

 

With the current release, Sizer introduces two new capabilities: compute-only/storage-only nodes and Frontline quote linking.

  • Compute-only/storage-only nodes:
    Cluster settings now include a Node type option to select an HCI-only solution or one with CO or SO nodes.
    Supports DB-optimized nodes (CO+SO) as part of AOS 6.6.2.
    In the UI, nodes are tagged as CO or SO to identify the node type.
    In manual mode, you can designate a node as CO or SO by tagging it.
    This should help when building solutions, especially database solutions that optimize third-party licensing.

  • Linking quotes:
    All Frontline quotes generated for a scenario can be referenced via the 'Related quotes' pop-up.
    Includes the quote ID link along with the status and created/modified dates.
    This helps in tracking scenarios to quotes for past or future reference.
    We will soon introduce the concept of a primary sizing (locked once a quote is created), with edits allowed only on a clone. This avoids one-to-many mappings and enables better tracking.

Other enhancements:

  • Discounts pre-filled in budgetary quotes are now better aligned with good/target discounts from Frontline.
  • Expert cluster templates – models updated to NG8.
  • NC2 cluster maximum limits – 13 nodes for NC2 on Azure and 28 for NC2 on AWS.
  • vCPU:pCore ratio for imported workloads – now uses the exact ratio (no rounding).
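The exact-ratio change can be illustrated with simple arithmetic. The numbers and function below are hypothetical (not Sizer's internal code); the point is that rounding a derived vCPU:pCore ratio up can noticeably overstate CPU demand:

```python
# Illustrative only: deriving a vCPU:pCore ratio from an imported workload.
# Sizer now keeps the exact ratio rather than a rounded value.

def vcpu_pcore_ratio(total_vcpus: int, total_pcores: int) -> float:
    """Exact vCPU-to-physical-core ratio, without rounding."""
    return total_vcpus / total_pcores

exact = vcpu_pcore_ratio(350, 96)   # roughly 3.65
rounded = round(exact)              # 4 - overstates CPU demand by roughly 10%
```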

Have a great week!