October 2021 sprints

October 1st sprint: 

We went live with the current sprint, with some major (and eagerly awaited) enhancements around XenApp and Bulk Edit for Data Protection. :)
Virtual Apps and Desktops (earlier called XenApp in Sizer)
  • Added VMware Horizon Apps support
  • Updated numbers with latest OS support (Windows 2019)
  • Updated profile info (based on User Type by VDI broker)
Bulk Edit: Data Protection
  • Bulk Edit is now also supported for Data Protection
  • Both Local and Remote snapshots/DR allowed for bulk edit
  • All VMs being edited for DP need to be part of the same cluster
Other changes:
  • Rack Awareness – user-specified rack count
  • Helps spread nodes across multiple racks if there is a per-rack power limitation at the site
  • Lenovo HW (HX) quoting from Sizer/Frontline now available for all regions (worldwide)
  • HPE DX Ice Lake platforms: DX380 Gen10 Plus 8SFF & DX380 Gen10 Plus 24SFF (in manual mode for now)

October 2nd sprint:

We went live with a new sprint for Sizer today. The big item is that Sizer now supports RF1, since AOS 6.0 supports RF1. RF1 fits best in certain situations where the data resiliency of RF2 is not critical. We allow it as an option for certain workloads:
  • Server Virtualization
  • Cluster (Raw) sizing
  • SQL DB (non-business critical)
The following tooltip is worth noting, as RF1 should be used with caution: "Select RF1 only if the application is SAS Analytics or Hadoop. RF1 is used when the workload does not require data resiliency or when data resiliency is handled at the application level, and is currently supported for SAS Analytics or Hadoop."
Other enhancements:
File Services: There is an optimisation to Files sizing which leads to fewer cores being required. FSVM sizing now accounts for the resources across all FSVMs (minimum 3) to meet the cores requirement and chooses the FSVM profile accordingly. For smaller deployments this is more efficient, as the FSVM size is no longer based on a single VM but on the minimum of 3 VMs that together meet the resource requirement.
Platform Alignment
  • DX new GPU rules
  • Removed rule around 128GB DIMM & L CPU
  • HPE-DX Ice Lake platform DX360-Gen10Plus-10NVMe – Phase 2a
  • New platforms – NX-1175S-G8 & NX-8035-G8

September 2021 sprints

Sept 1st sprint: 

Hi everyone, a further update on NX-G8: we went live with two more NX platforms, NX-8150-G8 & NX-8170-G8, covering the larger All Flash systems. Both these platforms are available end-to-end for sizing and FL quoting.
NX-8150-G8
  • Up to 80 cores (dual socket), up from 56 cores in the G7
  • Up to 4TB of RAM, up from 3TB in the G7
  • Up to 184.32 TB of flash capacity
NX-8170-G8
  • Up to 64 cores (dual socket), up from 56 cores in the G7
  • Up to 2TB of RAM
  • Up to 10x 7.68 TB NVMe
More details are in the NX spec sheet, which should be updated with these platforms by today:
https://www.nutanix.com/products/hardware-platforms/specsheet

September 2nd sprint:

Very glad to let you know that we have integrated Lenovo DCSC with Sizer & Frontline
  • Quoting Lenovo HX Certified Nodes (CN) along with Nutanix Software should be a breeze, actually better than HPE
  • Size your solution in Sizer, generate FL quote with Nutanix Software + Lenovo HX CN using 1-click
  • This integration also means you can quote HX Ice Lake platforms via FL
  • Currently, the functionality is limited to the US Geo (via common disti – Ingram Micro US). Worldwide release – coming soon
Other product updates:
  • Support for A100 GPU is now available on HPE-DX platforms
  • Options used while importing Collector or RVTools data can be viewed in Sizer post import
  • Imported processor SpecInt is also visible in Sizer
Have a great day!

August 2021 sprints

August 2nd sprint:
Following the announcement around NX-G8, Sizer also went live with the NX-G8 (3rd Gen Intel Xeon processors / Ice Lake) with this sprint. Below are the highlights:
NX-G8
  • Available for end to end sizing and FL quoting
  • NX-1065-G8 on the lower end with more core/cpu options
  • NX-3060-G8 and NX-3070-G8 with All flash/NVMe builds
  • NX-8055-G8 on the higher end with both Hybrid and All Flash (including NVMe)
  • NX-3055G-G8 with the GPUs
  • The larger All Flash systems NX-8150-G8 and NX-8170-G8 to follow in a few weeks
  • Both G7 and G8 would be available for now (and either would come up in recommendations based on workload)
Others
  • CVM cores – accounting for physical cores going forward and translating into specInt adjusted cores for the sizing stats
  • Solution option – added Prism Ultimate option (both core and node based licensing)
More details on the launch announcements for NX-G8 and spec sheet links here: https://www.nutanix.com/blog/nutanix-launches-support-for-next-gen-of-platforms-for-hybrid-cloud-deployments

August 1st sprint:

We went live with the current sprint with a major change around thresholds. You would have already noticed the banner displayed in the UI today; more details below.
Thresholds in Sizer:
  • The default threshold in Sizer used to be 95%, which is also the maximum allowed
  • With this release, the defaults are being adjusted and are set at 85%, while the max continues to be 95%
  • While the default is changed to 85%, users can still go to settings and change it back to 95%
  • We feel it is more prudent to go with 85% as it leaves some room for estimation errors/spikes and for upgrades (a node is taken down, i.e. N+0), while still allowing 95% if needed, if the workload is not critical, or if upgrades are done off-peak
  • This is also consistent with our observation of utilization % for manual sizings (which had more buffer than the corresponding auto sizings – possibly for the same reasons)
  • This applies to new sizings done starting today. Existing sizings continue with their previous thresholds (clones are treated as new, so the new thresholds apply)
Other enhancements:
Usability
  • Core based licensing for Prism / Flow
  • AOS rule on minimum flash at 4% of total node capacity (also applicable for Files/Objects)
Platforms
  • NX memory replacement from 2933MHz to 3200MHz
  • NEC: New Models CL 4LFF and 12LFF Platforms
  • New Platform: Lenovo HX7820 2 socket variant
More explanation behind the threshold changes and rationale in Sizer wiki : https://sizer.nutanix.com/#/help/articles/1035

August 2021 release (CVM explanation)

CVM cores – What has changed and Why?

How does Sizer assign CVM cores up until today (Aug 2021)?

Sizer allocates resources to CVM as part of the sizing exercise.  Here, we will be looking at CVM cores specifically.

Sizer allocates CVM cores based on a combination of factors – like workload type (4 for Server Virt, 12 for Databases, etc.), node type (higher CVM cores for NVMe nodes, for example), or guidance on certain features (rules around Async/NearSync, etc.).

However, while attributing cores to the CVM, Sizer used 'Effective Cores', which means these were specInt-adjusted cores and not the actual physical cores available to the CVMs.

For example:

Let's say Sizer allocated 4 cores to the CVM. These are 'effective cores', which are specInt adjusted.

As seen in the table below, this many "effective cores" are attributed to the CVMs in the sizing stats table.

7 nodes and 4 cores for the CVM per node: 7 x 4 = 28 cores (effective cores)

Let's say that the recommended node had the Gold 5220 CPU.

So, translating the effective cores to physical cores:

28 'effective cores' is approximately 22.4 physical cores (adjusting for the specInt of the Gold 5220)

22.4 / 7 = 3.2 physical cores per CVM

So that is roughly 3 physical cores for the CVM, which is on the lower side and can cause performance issues.

As CPUs get better in performance (Ice Lake > Cascade Lake > Skylake), their specInt ratings are higher, and thus the same effective-core allocation translates to even fewer physical cores.

What has changed?

The CVM cores allocation is now based on the physical cores (and not effective cores). 

So, when Sizer assigns 4 cores, it is 4 physical cores (of the Gold 5220 in the above example) and not effective cores.

Following up from the previous example (refer to the image below):

The tooltip shows the total physical cores assigned to the CVM: 7 x 4 = 28 cores

For the rest of the calculations in the sizing stats, these are converted to effective cores, so 28 physical cores = 33.5 'effective cores'.

Note: Depending on the processor, the effective cores value (shown in red here as -33.15) can be as much as 50-60% higher than the physical cores (for example for high-end Cascade Lake Refresh or high-end Ice Lake processors), further validating the point that the CVM would otherwise be getting fewer underlying physical cores (and hence the change).
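
To make the arithmetic above easier to follow, here is a minimal sketch in Python (not Sizer's code; the specInt scaling factor of 1.25 is a rough value inferred from the 28 → 22.4 conversion above, whereas Sizer uses the processor's actual specInt data, hence the ~33.5 figure shown in the stats):

    # Hypothetical sketch of the CVM core accounting described above.
    # SPECINT_FACTOR is illustrative only; real values come from the
    # processor's specInt rating.
    NODES = 7
    CVM_CORES_PER_NODE = 4
    SPECINT_FACTOR = 1.25  # assumed per-core uplift of a Gold 5220 over the baseline

    # Old behaviour: the 4 cores were treated as 'effective' (specInt-adjusted) cores.
    effective_total = NODES * CVM_CORES_PER_NODE            # 28 effective cores
    physical_total_old = effective_total / SPECINT_FACTOR   # ~22.4 physical cores
    physical_per_cvm_old = physical_total_old / NODES       # ~3.2 physical cores per CVM

    # New behaviour: the 4 cores are physical cores; they are converted to
    # effective cores only for the sizing stats.
    physical_total_new = NODES * CVM_CORES_PER_NODE             # 28 physical cores
    effective_total_new = physical_total_new * SPECINT_FACTOR   # ~35 effective cores in the stats

    print(physical_per_cvm_old, effective_total_new)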

What is the impact on sizing as a result of this change?

The sizings are more robust: the CVM allocation now removes potential CVM performance bottlenecks while aligning with Foundation recommendations.

With more high-end processors having significantly higher specInts (Cascade Lake Refresh and now Ice Lake), the gap between effective cores and physical cores is getting wider. This change ensures that while UVMs take advantage of the better processor capabilities, the CVM gets the cores it requires for optimal performance and doesn't lead to latency issues.

Understandably, this change will increase the core requirement for the cluster (as against previous sizings). For an existing sizing this is most noticeable while cloning, which applies the new CVM minimums to the cloned scenario and leads to increased utilization on the CPU dials. This is the tradeoff: a higher CVM allocation in exchange for a more robust, performance-optimal sizing.

Collector 3.5.1

Nutanix Collector 3.5.1 goes beyond compute-focused workloads: Collector has now added support for the first of many upcoming capacity-focused workloads. Here is a short summary:

WHAT’S NEW

Collector now provides an option to collect data for NetApp Clustered Data ONTAP (8.3 or above) file shares (CIFS). Collector allows you to collect data directly from the native storage arrays, gathering a valuable set of information that can be extremely helpful in sizing your file share requirements accurately.

Cluster details – Capacity, Storage Efficiency, # of CIFS Shares & # of Volumes

Node details – Node Health, Model, Vendor, ONTAP version, CPU & Memory info

Share details – Share Path, Access Control Levels, and Max Connections

Volume details – Capacity, Storage Efficiency, QoS Level, Snapshot & Backup Details

QoS details – Policy Group, Max Throughput & Workload Count

Short Demo link: Nutanix Collector support for ONTAP

ENHANCEMENTS

Guest OS details are now available in the case of AHV if NGT is installed and enabled. The Guest OS details can be seen within the “VM Summary” page and the same can be viewed under the “vInfo” tab of the XLSX file. 

Collector Portal – Invite & Share option allows you to share your Collector projects with users who are not registered on Collector Portal. This allows you to share your project with your peers or subject matter experts for collaboration and discussion.

RESOLVED ISSUES

Resolved inconsistencies around CPU & Memory dials seen in AHV collections.

Resolved issues in vCenter collections – CPU & Memory charts displaying more than 100%

RESOURCES

Release Notes, User Guide & Security Guide

Collector Help Pages

Collector FAQs 

Product Videos

Public URL: Collector Download Link for Prospects

MyNutanix URL: Collector Download Link for MyNutanix Users 

Collector FAQs

This page aims to address most of your queries regarding Nutanix Collector. While we are happy to engage in insightful conversations over the Slack channel, we request that you please go through the FAQs on this page before reaching out to us via Slack or email.

How can my prospect or customer access Collector?

Prospects and Customers can access Collector in a few ways:

Download Collector via Collector Login Page as seen in the screenshot below:

Collector Public Download Link – no registration required

Collector Download Link for MyNutanix Users

The last approach via MyNutanix registration would also give our users access to the Collector Portal.

How do we report issues in Collector?

You can report issues in Collector by filling this Google form. The form can also be accessed by customers, prospects, and partners.

Why should my customer or prospect use Collector Portal?

Replicates the exact same view as seen by your prospect or customer

Share and Collaborate with peers, customers/prospects, capacity planning experts for improved sizing.

Collector Portal is enhanced on a regular basis to add more value – for example, VM provisioning status, VM list tab with the consolidated view, etc.

Data gathered by Collector across 200K+ VMs says 90+% of VMs are over-provisioned

How can I view the Collector output generated by my customer or prospect?

Collector contains data gathered from the customer or prospect data centers, and hence that data is not accessible by anyone unless it is explicitly shared.

How can my customer or prospect share the data gathered by Collector with me?

There are a few ways to request data from your customer or prospect:

  • Request the Collection file (zip file) and create a project in the Collector portal to replicate the exact same view as your customer or prospect. You can also generate an XLSX file once the project is created.
  • If the customer or prospect has already created a project in Collector Portal, you can request them to use the “Share” option to share the project with you. For more details on sharing projects, please refer to the User Guide.
  • Request the XLSX file generated by Collector; the XLSX file can be used to analyze the data and import it into Sizer, but it can't be used to replicate the visual views in Collector Portal.

How can I create a “Project” in the Collector Portal?

You can create a Project using the Collection zip file. For detailed steps, please refer to “Creating a Project” in the Collector Portal User Guide available here

How can I invite new users to Collector Portal?

We have simplified the process of inviting users to the Collector Portal. If you have an existing project which you want to share, just go ahead and share the project with them. If the user is not registered on Collector Portal, we will identify the same and then invite the user and share the project via 1-click.

If you want to invite users without sharing any projects, please use the "Invite" option present within the top right section under the "Summary" page of Collector. For more details, please refer to the Collector Portal User Guide available here

Where can I find the documents associated with Collector?

Please refer to this Portal link for User Guide, Release Notes, and Security Guide.

Is there any Collateral that I can share with my prospect or customer to brief them about Collector?

We have a flyer that is currently a work in progress – “Capacity Planning Data Collection in 30 seconds”. We will update this FAQ once the collateral is ready. Draft version available now – Collector in 30 seconds

Does the data gathered by Collector include CVM resources?

In the case of vCenter/ESXi or Hyper-V, the data includes CVM resources. Please ensure you turn off the CVMs before sizing so that the resources consumed by the CVMs are ignored.

In the case of Prism/AHV, the data does not include CVM resources as these are not needed for sizing. At the same time, we plan to enhance Prism/AHV collections to optionally show CVM resources as well.

Sizing FAQs

Can I trace a solution back to the requirements gathered by Collector?

Yes, if you happened to use Collector Portal and exported to Sizer. Recently, we have introduced the ability to view all the related sizings associated with the Collector Project. This mapping would be extremely beneficial in case of any customer satisfaction issues where the workload has changed pre and post-deployment.

Why are the Processor selection options disabled when I export the Collector output to Sizer?

Collector automatically considers the CPU based on the data gathered from the existing customer or prospect environment and hence the processor selection options are disabled. In the future, we plan to enhance UI to make this more intuitive.

How can I selectively turn off or on certain VMs before exporting the data to Sizer?

There are a couple of ways in which you can selectively turn off or on certain VMs before sizing via the exported XLSX file.

If you are using the XLSX export in Sizer and want to power OFF a few VMs, you need to make a couple of changes:

a) In the “vInfo” sheet, edit the ‘Power State’ column value from “poweredOn” to “poweredOff”.

b) In the “vCPU” sheet, clear the values of the mentioned columns for the VMs that you desire to turn off – ‘Peak %’, ‘Average %’, ‘Median %’, ‘Custom Percentile %’ and ’95th Percentile % (recommended)’

Save the sheet and import data in Sizer.

Note: If you miss (b) you may see the below error:

Data being imported contains one or more VMs that were powered ON during the collection period and reported CPU utilization. 
We recommend you to size both Powered ON & Powered OFF VMs.
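
If you prefer to script these edits (for example, when powering off many VMs at once), here is a minimal sketch using pandas. It assumes the sheet and column names mentioned above ("vInfo", "vCPU", 'Power State' and the utilization % columns) plus a hypothetical 'VM Name' column for matching VMs; verify the headers in your own export before using it.

    # Hypothetical sketch: mark selected VMs as powered off in a Collector XLSX
    # export before importing it into Sizer. Sheet/column names are taken from
    # the steps above, except 'VM Name', which is an assumption.
    import pandas as pd

    SRC = "collector_export.xlsx"
    DST = "collector_export_edited.xlsx"
    VMS_TO_POWER_OFF = {"vm-app-01", "vm-app-02"}   # hypothetical VM names
    UTIL_COLS = ["Peak %", "Average %", "Median %",
                 "Custom Percentile %", "95th Percentile % (recommended)"]

    sheets = pd.read_excel(SRC, sheet_name=None)    # read every sheet into a dict

    vinfo, vcpu = sheets["vInfo"], sheets["vCPU"]
    off = vinfo["VM Name"].isin(VMS_TO_POWER_OFF)
    vinfo.loc[off, "Power State"] = "poweredOff"    # step (a)

    off_cpu = vcpu["VM Name"].isin(VMS_TO_POWER_OFF)
    vcpu.loc[off_cpu, UTIL_COLS] = None             # step (b): clear utilization values

    with pd.ExcelWriter(DST) as writer:             # write all sheets back
        for name, df in sheets.items():
            df.to_excel(writer, sheet_name=name, index=False)

Note that rewriting the workbook this way drops cell formatting, and the same pattern can be reused for the 'Target Cluster' edits described in the next question.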

How can I selectively decide to place VMs in different clusters when sizing?

Similar to turning off/on the VMs, you can selectively specify the target cluster before sizing.

If you are using Collector Portal, you can go to the “VM List” tab and edit the ‘Target Cluster’ field before exporting the data to Sizer.

If you are using the XLSX export, you can edit the 'Target Cluster' column under the "vInfo" sheet and save it before importing data in Sizer.

You can also edit multiple VMs in one go via the "Bulk Change" option.

Can we export more than one Collector Project to the same Sizer scenario?

Unfortunately, this is not possible via Collector Portal but you can do it directly in Sizer using the Import Workload functionality.

If you have the Collector XLSX files, please import the individual project XLSX files into Sizer one by one within the same Sizer scenario.

You can get the XLSX from the Collector portal using the “Export to XLSX” functionality.

Is there any document explaining various options while exporting data to Sizer?

Yes, there is a detailed help page documenting the various options available when exporting to Sizer, along with guidance and caveats around the same. Please find it here – Exporting Collector Data to Sizer

How can I view the Sizings created via Collector Portal in Sizer using Salesforce Login?

We plan to merge Sizer & Collector Portal in the future. But for now, there are a couple of workarounds to access the Sizings created via Collector Portal in Sizer when using Salesforce login:

1) If you know the scenario number, just open any scenario and replace the scenario number in the URL with the one you want to view.

2) If you don't remember the scenario number, use the search functionality in Sizer. Search functionality is located to the left of your username (which can be seen at the top left corner). Navigate to the "Advanced" tab in the search dialog and use "Scenario Owner" with the value 'Created by me', and you should be able to view the scenario created via Collector Portal.

CPU FAQs

Are the existing environment CPUs considered during sizing when we export Collector output to Sizer?

Yes, when you either export Collector data to Sizer or import Collector data from Sizer, the input CPUs are considered.

Storage FAQs

Does Collector report raw storage or usable storage?

Collector reports both raw storage and usable storage. At the cluster level, the storage represents raw cluster storage. Usable storage can be calculated indirectly using the vDisk tab of the XLSX export. There are a couple of caveats around the storage metrics; please refer to the other questions in this section.

Does the Collector capture snapshot information?

As of today, Collector gathers storage consumed by snapshots when we take the vCenter/ESXi route. But this data is missing from data gathered via Prism/AHV or Hyper-V. Very soon, we plan to enhance Prism/AHV to gather storage metrics around snapshots.

Does Collector report Datastores, RDM, or iSCSI disks?

Unfortunately, as of today, Collector does not report RDM or iSCSI disks. But we do have this in our roadmap.

Security FAQs

How can I convince my customer or prospect that Collector is safe to use?

Nutanix does not compromise on the security of our customers, partners, and prospects. We have comprehensive security measures in place and the same is made available in the security guide available here. Additionally, the security guide is bundled along with Nutanix Collector bits.

Is there an option to mask sensitive information gathered by the Collector?

Yes, with the launch of Collector 4.0, users do have an option to mask information that might be considered sensitive. This option is available while exporting the data to .XLSX format. We will soon be enabling masking of data even in the Collection file (.zip file).

Hyper-V FAQs

How can we initiate performance data collection in the case of a Hyper-V cluster?

There is a detailed help page on Collector support for Hyper-V; please refer to the Collector 3.3 page

Can we gather data from more than one cluster at a time?

At this moment, in the case of Hyper-V, Collector can gather data from only one cluster at a time.

Does Collector support a standalone Windows server?

Not yet, it is on the roadmap.

AHV FAQs

Why can’t I see the Guest OS details in case of data gathered via Prism/AHV?

For Collector to gather Guest OS details, NGT needs to be installed and enabled on the guest OS. Collector 3.5.1 is enhanced to pick up guest OS details if the pre-requisites are met.

vCenter/ESX FAQs

Can Collector gather data from standalone ESXi hosts that are not managed by vCenter?

As of today, Collector gathers data from ESXi hosts via vCenter APIs, and hence we can't pull data from a standalone ESXi host.

ONTAP FAQs

How can I export Collector output from the ONTAP system to Sizer?

We are currently working on supporting the export of ONTAP output to Sizer. For now, you will have to manually analyze the Collector output and feed it to Sizer. ETA: mid-Jan 2022

Misc FAQs

How can I identify the Hypervisor in use at the end-user site using Collector output?

The “vDatacenter” sheet within the XLSX can be used to identify the Hypervisor in use. The ‘MOID’ column within the “vDatacenter” maps to the Hypervisor as shown below:

  • Prism Element → AHV
  • Prism Central → AHV
  • Hyper-V → Hyper-V
  • Anything starting with "Datacenter" → ESXi

In the future, we do plan to add an additional column to call out the hypervisor in use, which would make this information self-explanatory, and also display the same information in the UI.
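
As an illustration only (not a Collector utility), the mapping above can be applied to 'MOID' values from the "vDatacenter" sheet with a small helper like this:

    # Hypothetical helper: infer the hypervisor from a "vDatacenter" MOID value,
    # following the mapping listed above.
    def hypervisor_from_moid(moid: str) -> str:
        if moid in ("Prism Element", "Prism Central"):
            return "AHV"
        if moid == "Hyper-V":
            return "Hyper-V"
        if moid.startswith("Datacenter"):
            return "ESXi"
        return "Unknown"

    print(hypervisor_from_moid("Prism Central"))   # AHV
    print(hypervisor_from_moid("Datacenter-21"))   # ESXi (hypothetical MOID value)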

What is the frequency at which performance data is gathered by Collector?

The granularity of performance data collection differs based on the hypervisor in use.

Performance data on both ESXi & AHV is gathered every 30 minutes. Most metrics are gathered over a period of 7 days.

In the case of Hyper-V, the frequency of performance data collection depends on the duration of data collection initiated. The table below shows the granularity of the data collected:

Duration of performance collection    Frequency of data collection    Data points over the duration
1 day                                 5 minutes                       288
3 days                                10 minutes                      432
5 days                                20 minutes                      360
7 days                                30 minutes                      336
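
The data-point counts in the table follow directly from the collection duration divided by the sampling frequency; a quick arithmetic check (plain Python, not Collector code):

    # Data points = collection duration / sampling frequency (both in minutes).
    for days, freq_min in [(1, 5), (3, 10), (5, 20), (7, 30)]:
        points = days * 24 * 60 // freq_min
        print(f"{days} day(s) at {freq_min}-minute intervals -> {points} data points")
    # Prints 288, 432, 360 and 336, matching the table above.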

Does Collector gather IOPS & throughput-related information?

Yes, Collector gathers IOPS at the cluster level, and throughput (disk usage and network usage) at both the cluster level and the host level. The same can also be viewed in the XLSX export of Collector data. In the case of Collector Portal, the VM List tab in the performance view displays the same as well.

How is the VM Provisioning Status calculated?

The VM Provisioning Status is calculated based on the 95th percentile CPU utilization values. The categorization details can be seen in the tooltip and the same is mentioned below:

Normal: CPU utilization value is between 60% and 80%
Under-provisioned: CPU utilization > 80%
Over-provisioned: CPU utilization < 60%
Unknown: Utilization value is missing
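
For reference, the default categorization can be expressed as a small function (a sketch of the rules above, not Collector's implementation; the behaviour at exactly 60% or 80% is an assumption):

    # Sketch of the default VM Provisioning Status rules, based on the
    # 95th percentile CPU utilization. Boundary handling is assumed.
    def provisioning_status(p95_cpu_util):
        if p95_cpu_util is None:
            return "Unknown"
        if p95_cpu_util > 80:
            return "Under-provisioned"
        if p95_cpu_util < 60:
            return "Over-provisioned"
        return "Normal"

    print(provisioning_status(72))    # Normal
    print(provisioning_status(None))  # Unknown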

Can I customize the VM Provisioning Status?

Yes, you can customize the VM Provisioning Status criteria using the “Manage” option next to VM Provisioning Status tooltip.

Can I edit the project name in Collector?

Yes, you can edit the project name. Please refer to the “My Projects” section within the Collector Portal User Guide available here

Is there a way to bulk edit the records under the VM List tab?

Collector Portal now allows you to edit multiple VMs in one go via the “Bulk Change” button. Use cases where bulk edits would be of great value:

Sizing a set of VMs in one cluster and the rest in another cluster. For example, all DB VMs' "Target Cluster" values can be changed to a new cluster, say DB Cluster, instead of the default Cluster 1. The rest of the VMs can be left at the default, Cluster 1, or further targeted towards specific clusters before exporting to Sizer.

Edit the resources allocated to a set of VMs. For example, if all knowledge-worker VMs are not allocated enough vCPUs, increase all of them at once via bulk edit.

Is there a way to export the graphs from the Collector portal?

Unfortunately not yet, but we do plan to support export to PDF or PPT in the future.

I am having issues with uploading the collection file to Collector Portal, what could be wrong?

If you happen to see the below error message:

Invalid input file. Collection Zip file is expected. 

One of the possible reasons could be that the original zip file was extracted and zipped once again. Using the original zip file should resolve the issue. If the issue still persists, please report it.

Where can I find the Collector support matrix?

Please refer to the “Nutanix Collector Compatibility Matrix” section under Collector User Guide available here

Who should I contact in case of any other queries?

Please reach out to us via collector@nutanix.com or via the Slack channel #collector

Alternatively, our partners, customers, and prospects can also reach out to us via Nutanix Community Page – Sizer Configuration Estimator

Report issues in Nutanix Collector – here

And you can always reach the Product Management @ arun.vijapur@nutanix.com

You have reached the end 🙂 Do let us know if you found this page useful or how we can make this better. Please feel free to share this page with your peers, partners, customers, and even prospects.

Thank you,

Team Nutanix Collector

July 2021 sprints (Thresholds)

Sizer Thresholds – What has changed and Why?

What are thresholds in Sizer? 

Sizer has a feature called thresholds. These are defined individually for each of the sizing resources – cores, memory, SSDs, HDDs & GPUs (wherever applicable). These thresholds ensure that the total available resources in the nodes (cluster) are sufficient to meet the workload requirements, while also accounting for some buffer for unforeseen surges in the workload applications.

What has changed in thresholds?

Up until July 2021, the threshold defaults across cores/memory/SSD/HDD used to be 95%, as can be seen (and modified) on the policy screen shown below.

Note that the default was set to 95%, which is also the maximum allowed. Users can go for a lower threshold (a more conservative sizing with more buffer for future spikes). However, under no circumstances did Sizer allow going higher than the default (greater than 95%), in order to preserve a 5% margin for accommodating sizing errors/estimates and workload usage uncertainties.

Starting August 2021, Sizer changes the defaults for these thresholds to 85% across ALL resources (cores/memory/SSDs/HDDs), as shown below.

Note that the defaults have moved left to 85%; however, the maximum allowable utilization of the cluster resources still remains at 95%.
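
To see what the change means in practice, here is a minimal arithmetic sketch (hypothetical cluster numbers, not a Sizer formula): lowering the threshold from 95% to 85% simply reduces the share of each resource that Sizer treats as available to workloads.

    # Hypothetical example of how the threshold limits usable resources.
    total_cores = 224                    # e.g. 7 nodes x 32 cores, illustrative only

    usable_at_95 = total_cores * 0.95    # 212.8 cores considered available
    usable_at_85 = total_cores * 0.85    # 190.4 cores considered available

    extra_buffer = usable_at_95 - usable_at_85   # 22.4 cores held back as additional buffer
    print(usable_at_95, usable_at_85, extra_buffer)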

Why the change?

Having the maximum allowable and the default both at 95% at times did not provide enough margin for sizing estimate errors or unforeseen workload usage or spikes, as only 5% was left. Given that making accurate estimates is hard, we felt it was prudent to provide more slack with an 85% threshold.

To be clear though, many sizings have been done successfully at the old 95% level. This move was also supported by Sizer users doing manual sizings, who often opted for more slack. The change was made to be more prudent, rather than in response to any sizing issue.

When is it best to leave it at the 85% threshold?

We feel that for most sizings this is the more prudent level. It allows more room for estimate errors and, for that matter, customer growth.

When might it be fine to go to the 95% threshold?

Certainly, numerous sizings have been done with the 95% threshold and customers were happy. We still allow 95% as the threshold. These are N+0 thresholds, so at N+1 there is a lot more slack. The 95% level is hit when one node is taken offline, for example for upgrades. If the customer does upgrades during off-hours, their core and RAM requirements are a lot lower than normal and do not hit the higher threshold anyway. Again, we feel it is more prudent to leave it at 85%; going higher just means you need to be comfortable with your sizing estimates, especially when the cluster is at N+0 (during an upgrade).

What are the implications for existing sizings?

First, the new sizings:

All new sizings (effective 9th August 2021) will have default thresholds at 85%. And since it is a significant change which impacts ALL new sizings and ALL users (internal/partners/customers), there will be a BANNER displayed prominently for two weeks for general awareness.

Implications for existing sizings:

There will be NO impact on sizings created before 9th August 2021. Existing sizings will continue with the default threshold of 95% and will calculate the utilisation percentages, N+0/N+1 etc. based on the previous default threshold of 95%. Thus, there won't be any resizing or a new recommendation for existing sizings; those sizings and their recommendations hold good for their scenarios.

Cloning an existing scenario: 

Cloning an existing sizing will be treated as a new sizing created after 9th August 2021, and thus the new sizing rules and default thresholds will apply.

One implication is that there will be an increase in utilisation percentages across the cluster resources. This is because only 85% of the resources are now considered available for running the workload, as against 95% earlier. This unavailability (in other words, reservation) of an additional 10% of resources may drive a higher node count (or turn an existing N+1 solution into N+0) in some edge cases.

The user can choose to resize for the new defaults, which may lead to a higher node or core count; that is for the better, as explained above, since it provides margin for estimates and spikes. Alternatively, since it is a clone of an existing sizing which may already have been sold to the customer, the user can go to the threshold setting and move it back to 95%, which would then give back the same recommendation as the original sizing.