February 2022 release

 

Sizer went live with the current sprint with the major updates below:

 

  • Proposals for Portfolio 2.0:

    • Updated SW license section with Portfolio 2.0 structure
    • A slide on HW support and Cloud services

 

  • Cluster rules update

    • Restrictions around mixing different node types
    • For example: Hybrid + NVMe nodes are not supported
    • Handling unbalanced memory quantities for IceLake
    • Allowing single socket for storage-heavy nodes

 

  • Reference scenarios: reworked on NX-G8 (previously on G7)

  • Support for Frame licenses with Portfolio 2.0 flow

  • Platforms

    • GPU related rules for Fujitsu
    • New vendor: OVH Cloud

 

 

December 2021 sprints

Sizer went live with the sprint and some important features to start the year:

Upgrades quoting:

  • Quoting for upgrade scenarios is now possible, and the quote button in the upgrade flow is enabled.
  • Nodes are marked as existing or new by Sizer (this can also be done manually), and FL quotes can be generated for the newly added nodes.
  • More on upgrades is coming soon with Insights integration!

Performance:

  • This is a big one: everyone should see a significant improvement in performance and sizing speed when coming up with the node recommendation.
  • Depending on the complexity and size of the workload, an average improvement of over 70% has been observed compared to earlier releases.
  • A change in the technical implementation of the sizing mechanism drives the bulk of the improvement.

Reference scenarios:

  • Sizer has introduced the concept of Reference Scenarios (a tab called ‘Reference Scenarios’ on the dashboard screen).
  • These are scenarios pre-sized by experts across workloads (for example: Files with 50 TB/250 users or 250 TB/750 users, VDI with 300/500 mixed user types, etc.).
  • These reference scenarios serve as a guide/reference for similar workload requirements only (cloning them as-is is not encouraged).
  • The feature has been widely successful with one of our partners (CDW) and is now opening up to everyone.

Others:

  • Proposals have been updated with the latest Q1 financials/Gartner slides and the newly added G8 models
  • Default input processor set to Gold 5220 for VDI/VDA workloads to match real-world (lab) performance numbers
  • Platforms that went live:
    • NX-1175S-G8
    • Dell XC ICX (15G) Phase 2 platforms
    • A30/A100/A16 GPU cards and related vGPU profiles for VDI/VDA

Collector 4.0

Nutanix Collector 4.0 fully supports NetApp Clustered Data ONTAP CIFS Shares and brings a number of other benefits. Here is a short summary:

WHAT’S NEW

Support for NetApp Clustered Data ONTAP CIFS Shares

Support for NetApp Clustered Data ONTAP is no longer beta; it is now GA, with full support for gathering CIFS Share details from Clustered Data ONTAP version 8.3 or above. You can gather a valuable set of information directly from the storage systems, which can be extremely helpful in accurately sizing your File Share requirements.

Cluster details – Capacity, Storage Efficiency, # of CIFS Shares & # of Volumes

Node details – Node Health, Model, Vendor, ONTAP version, CPU & Memory info

Share details – Type of Share, Share Path, Access Control Levels, and Max Connections

Volume details – Capacity, Storage Efficiency, QoS Level, Snapshot & Backup Details

QoS details – Policy Group, Max Throughput & Workload Count

Short Demo link: Nutanix Collector support for ONTAP

Masking Sensitive Information

Collector 4.0 also provides the option to mask sensitive information when exporting the output to Excel. This is beneficial for Government and Financial accounts, or for RFPs that demand it. The following fields are masked across the various sheets: Cluster Name, Host IP, Host Name, VM Name, Disk Name, Snapshot Name, Snapshot File, Switch Name, Port Group, vSwitch, Network, Adapter, Disk, License Name & License Key.
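The masking behavior can be pictured with a small sketch. This is an author's illustration only (not Collector's actual implementation), using a subset of the field names listed above; the placeholder string is an assumption.

```python
# Illustrative sketch only -- not Collector's actual implementation.
# Masks a subset of the sensitive fields listed above before export.
SENSITIVE_FIELDS = {"Cluster Name", "Host IP", "Host Name", "VM Name", "Disk Name"}

def mask_row(row: dict) -> dict:
    """Replace values of sensitive columns with a fixed placeholder."""
    return {k: ("***MASKED***" if k in SENSITIVE_FIELDS else v) for k, v in row.items()}

row = {"VM Name": "vm-01", "Host IP": "10.0.0.5", "vCPUs": 4}
print(mask_row(row))  # non-sensitive columns such as vCPUs pass through unchanged
```

The key point the sketch captures is that masking is per-column: identifying fields are blanked while sizing-relevant numbers (vCPUs, capacity, etc.) survive the export.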

Mapping of Collector Project & Sizer Scenarios

The Collector Portal now allows you to view not only Collector Projects but also the associated Sizer Scenarios. To make it even better, we have ensured that you can also see the associated Sizer Scenarios for all your existing Collector Projects. This is extremely beneficial when investigating customer satisfaction issues where the workload changed between pre- and post-deployment.

Bulk edits on VM List Page

This has been a long-pending request: the Collector Portal now allows you to edit multiple VMs in one go via “Bulk Change”. Use cases where bulk edits are of great value:

Sizing a set of VMs in one cluster and the rest in another cluster. For example, the “Target Cluster” value of all DB VMs can be changed to a new cluster, say “DB Cluster”, instead of the default “Cluster 1”. The rest of the VMs can be left at the default, Cluster 1, or further targeted toward specific clusters before exporting to Sizer.

Editing the resources allocated to a set of VMs. For example, if the knowledge-worker VMs are not allocated enough vCPUs, increase all of them at once via bulk edit.

Additional Checks for Potential Undersizing when exporting to Sizer

In virtual desktop environments, not all VMs are utilized at the same time, especially when users work across shifts. A VM might actively use resources during its shift yet be switched off when Collector is run to gather the data. Treating such a VM as OFF might result in a potentially undersized solution. We have added warnings to avoid these issues, both when exporting Collector data to Sizer and when importing Collector data into Sizer.

ENHANCEMENTS

Identification of Home Directory & Application Shares

Collector 4.0 can identify whether the shares configured on ONTAP CIFS are Home Directory shares or Application shares. Sizer supports exporting home directory shares only, or all discovered shares.

Ease of Download

No need to remember any URLs to download the Collector Desktop Application. Just ask the partner, prospect, or customer to visit https://collector.nutanix.com/ or Google “Nutanix Collector”. The “Download Collector Desktop Application” link is right on the login page.

Retrieving the Collection File (.zip)

We have also made minor adjustments to the placement of buttons to steer users toward sharing the Collection (.zip) file rather than the .xlsx file. The updated button also indicates that the Collection file is a .zip file, not an .xlsx file. Using the Collection file helps extract all the value added in the Collector portal, and now you can also map Collector data to Sizer Scenarios.

RESOLVED ISSUES

Resolved Out of Memory issues when trying to gather data from large Prism environments.

In the case of Hyper-V, Collector can now generate output in Excel (.xlsx) format even when a few VM attributes, such as Cluster Name and Disk Name, are missing.

RESOURCES

Release Notes, User Guide & Security Guide

Collector Help Pages

Collector FAQs 

Product Videos

Report issues in Collector via this Google form

Download Collector via Collector Login Page

November 2021 sprints

In November, we went live with the current sprint. Below are the major highlights.
Cluster settings:
  • Storage filter: added separate filters for All NVMe and NVMe+SSD storage options
  • Sizer now considers onboard (LOM) NICs along with external ones
Workloads:
  • SQL:
    • Profile info: Max RAM increased from 1024 GiB to 2048 GiB
  • Files:
    • Optimized the FSVM resource requirements
RVTools:
  • Support for the latest version, 4.1.4
Proposals:
  • Updated with latest HW specs and images for G8 models
Usability:
  • Changes to the dials and the legends (red/green dots at the bottom)
Platforms:
  • AMD Milan: HPE DX Gen10 Plus v2 models [DX325 & DX385] (manual mode)
  • Lenovo: IceLake models HXxx31, AMD Milan model HX3376 10SFF
  • HPE DX: new GPU rules
  • NX: new platform NX-8035-G8

October 2021 sprints

October 1st sprint: 

We went live with the current sprint with some major enhancements around XenApp and Bulk Edit for Data Protection, both eagerly awaited. :)

Virtual Apps and Desktops (formerly called XenApp in Sizer)
  • Added VMware Horizon Apps support
  • Updated numbers with latest OS support (Windows 2019)
  • Updated profile info (based on User Type by VDI broker)
Bulk Edit : Data Protection
  • Now Bulk edit also supported for Data Protection
  • Both Local and Remote snapshots/DR allowed for bulk edit
  • All VMs being edited for DP need to be part of the same cluster
Other changes:
  • Rack Awareness: user-specified rack count
  • Helps spread the nodes across multiple racks when there is a power limitation per rack at the site
  • Lenovo HW(HX) quoting from Sizer/FrontLine now available for all regions(worldwide)
  • HPE DX Ice Lake platforms: DX380 Gen10 Plus 8SFF & DX380 Gen10 Plus 24SFF (in manual mode for now )
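The rack-spreading idea in the Rack Awareness bullet above can be sketched as a simple round-robin placement. This is the author's illustration under that assumption, not Sizer's actual placement logic.

```python
# Hypothetical sketch of spreading nodes across a user-specified rack count
# (illustration only, not Sizer's actual placement algorithm).
def spread_nodes(node_count: int, rack_count: int) -> list:
    """Round-robin nodes across racks so per-rack counts differ by at most one."""
    base, extra = divmod(node_count, rack_count)
    return [base + (1 if i < extra else 0) for i in range(rack_count)]

print(spread_nodes(7, 3))  # [3, 2, 2]: no rack exceeds its share of the power budget
```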

October 2nd sprint:

We went live with a new sprint for Sizer today. The big news is that Sizer now supports RF1, since AOS 6.0 supports RF1. RF1 fits best in situations where the data resiliency of RF2 is not critical. We allow it as an option for certain workloads:
  • Server Virtualization
  • Cluster(Raw) sizing
  • SQL DB (non-business-critical)
The following tooltip is worth noting, as RF1 should be used with caution: “Select RF1 only if the application is SAS Analytics or Hadoop. RF1 is used when the workload does not require data resiliency or the data resiliency is handled at the application level, and is currently supported for SAS Analytics or Hadoop.”
Other enhancements:
File Services: There is an optimization to Files sizing which leads to fewer cores being required. The FSVM sizing now accounts for the resources across all FSVMs (minimum 3) to meet the core requirement and chooses the FSVM profile accordingly. For smaller deployments this is more efficient, as the FSVM size depends not on 1 VM but on a minimum of 3 VMs meeting the resource consumption.
Platform Alignment:
  • DX: new GPU rules
  • Removed the rule around 128GB DIMMs & L CPUs
  • HPE-DX IceLake platform DX360-Gen10Plus-10NVMe (Phase 2a)
  • New platforms: NX-1175S-G8 & NX-8035-G8

September 2021 sprints

Sept 1st sprint: 

Hi everyone, a further update on NX-G8: we went live with two more NX platforms, NX-8150-G8 & NX-8170-G8, covering the larger All Flash systems. Both platforms are available end-to-end for sizing and FL quoting.
NX-8150-G8
  • Up to 80 cores (dual socket), up from 56 cores in the G7
  • Up to 4TB of RAM, up from 3TB in the G7
  • Up to 184.32 TB of flash capacity
 NX-8170-G8
  • Up to 64 cores (dual socket), up from 56 cores in the G7
  • Up to 2TB of RAM
  • Up to 10x 7.68 TB NVMe
More details are in the NX spec sheet, which should be updated with these platforms by today:
https://www.nutanix.com/products/hardware-platforms/specsheet
September 2nd sprint:
Very glad to let you know that we have integrated Lenovo DCSC with Sizer & Frontline:
  • Quoting Lenovo HX Certified Nodes (CN) along with Nutanix Software should be a breeze, actually smoother than with HPE
  • Size your solution in Sizer and generate an FL quote with Nutanix Software + Lenovo HX CN in 1 click
  • This integration also means you can quote HX IceLake platforms via FL
  • Currently, the functionality is limited to the US geo (via the common disti, Ingram Micro US). Worldwide release coming soon
Other product updates:
  • Support for A100 GPU is now available on HPE-DX platforms
  • Options used while importing Collector or RVTools data can be viewed in Sizer after import
  • Imported processor SpecInt is also visible in Sizer
Have a great day!

August 2021 sprints

August 2nd sprint:
Following the announcement around NX-G8, Sizer also went live with the NX-G8 (3rd Gen Intel Xeon processors / Ice Lake) with this sprint. Below are the highlights:

NX-G8
  • Available for end to end sizing and FL quoting
  • NX-1065-G8 on the lower end with more core/cpu options
  • NX-3060-G8 and NX-3070-G8 with All flash/NVMe builds
  • NX-8055-G8 on the higher end with both Hybrid and All Flash (including NVMe) builds
  • NX-3055G-G8 with the GPUs
  • The larger All Flash systems NX-8150-G8 and NX-8170-G8 will follow in a few weeks
  • Both G7 and G8 will be available for now (either may come up in recommendations based on the workload)
Others
  • CVM cores: now accounting for physical cores going forward, and translating them into SpecInt-adjusted cores for the sizing stats
  • Solution option: added a Prism Ultimate option (both core- and node-based licensing)
More details on launch announcements for NX-G8 and spec sheet links here : https://www.nutanix.com/blog/nutanix-launches-support-for-next-gen-of-platforms-for-hybrid-cloud-deployments
August 1st sprint:
We went live with the current sprint with a major change around thresholds; you may have already noticed the banner displayed in the UI today. More details below.

Thresholds in Sizer:
  • The default threshold in Sizer used to be 95%, which is also the maximum allowed
  • With this release, the default is adjusted to 85%, while the max continues to be 95%
  • While the default changes to 85%, users can still go to Settings and change it back to 95%
  • We feel it is more prudent to go with 85%, as it leaves some room for estimation errors/spikes and for upgrades (a node is taken down, N+0), while still allowing 95% where needed, not critical, or for off-peak upgrades
  • This is also consistent with our observation of utilization % for manual sizings (which had more buffer than the corresponding auto sizings, possibly for the same reasons)
  • This applies to new sizings starting today. Existing sizings keep their previous thresholds (clones are treated as new, so the new thresholds apply)
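The N+0 upgrade rationale behind the 85% default can be checked with back-of-the-envelope arithmetic. This is the author's illustration, not Sizer's algorithm: it assumes identical nodes and expresses sized demand in node-equivalents.

```python
# Hedged sketch (author's illustration, not Sizer's actual logic) of why an
# 85% default leaves headroom for an N+0 upgrade: with one node taken down,
# the workload sized up to the threshold must still fit on the remaining nodes.
def fits_with_node_down(threshold: float, nodes: int) -> bool:
    demand = threshold * nodes  # sized demand, in node-equivalents of capacity
    remaining = nodes - 1       # one node is down for the upgrade
    return demand <= remaining

print(fits_with_node_down(0.85, 7))  # True: 5.95 node-equivalents fit on 6 nodes
print(fits_with_node_down(0.95, 7))  # False: 6.65 exceeds the 6 remaining nodes
```

In other words, on a hypothetical 7-node cluster the 10-point buffer is roughly what it takes to absorb one node being offline.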
Other enhancements:
Usability
  • Core-based licensing for Prism/Flow
  • AOS rule on minimum flash at 4% of total node capacity (also applicable to Files/Objects)
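The minimum-flash rule above amounts to a simple capacity check. A minimal sketch follows (illustration only, not Sizer's code); the example capacities are made up.

```python
# Hedged sketch of the minimum-flash rule described above (illustration only,
# not Sizer's implementation): a node's flash capacity must be at least 4%
# of its total capacity.
MIN_FLASH_FRACTION = 0.04

def meets_min_flash(flash_tb: float, total_tb: float) -> bool:
    return flash_tb >= MIN_FLASH_FRACTION * total_tb

print(meets_min_flash(1.92, 40.0))  # True: 1.92 TB flash is 4.8% of 40 TB
print(meets_min_flash(0.96, 40.0))  # False: 0.96 TB is only 2.4% of 40 TB
```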
Platforms
  • NX memory replacement from 2933MHz to 3200MHz
  • NEC: new models, CL 4LFF and 12LFF platforms
  • New platform: Lenovo HX7820 2-socket variant
More explanation behind the threshold changes and rationale in Sizer wiki : https://sizer.nutanix.com/#/help/articles/1035

August 2021 release (CVM explanation)

CVM cores – What has changed and Why?

How did Sizer assign CVM cores up until today (Aug 2021)?

Sizer allocates resources to CVM as part of the sizing exercise.  Here, we will be looking at CVM cores specifically.

Sizer allocates CVM cores based on a combination of factors, such as workload type (4 for Server Virt, 12 for Databases, etc.), node type (e.g., higher CVM cores for NVMe nodes), or guidance on certain features (rules around Async/NearSync, etc.).

However, while attributing cores to the CVM, Sizer used ‘effective cores’, meaning these were SpecInt-adjusted cores and not the actual physical cores available to the CVMs.

For example:

Let’s say Sizer allocated 4 cores to the CVM. These are ‘effective cores’, which are SpecInt-adjusted.

As seen in the table below, this many ‘effective cores’ are attributed to the CVMs in the sizing stats table.

7 nodes and 4 cores for the CVM per node: 7 x 4 = 28 cores (effective cores)

Let’s say the recommended node had the Gold 5220 CPU.

So, translating the effective cores to physical cores:

28 ‘effective cores’ is approximately 22.4 physical cores (adjusting for the SpecInt of the Gold 5220)

22.4 / 7 = 3.2 physical cores per CVM

So, roughly 3 physical cores per CVM, which is on the lower side and can cause performance issues.
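The arithmetic above can be reproduced in a few lines. Note the 1.25 SpecInt scaling factor for the Gold 5220 is an assumption chosen only to match the numbers in this example (28 effective cores to 22.4 physical cores); it is not an official figure.

```python
# Sketch of the pre-change conversion from 'effective' (SpecInt-adjusted)
# CVM cores back to physical cores, reproducing the example above.
SPECINT_FACTOR = 1.25  # hypothetical effective-per-physical core ratio for Gold 5220

def effective_to_physical(effective_cores: float) -> float:
    """A faster CPU delivers the same effective cores with fewer physical cores."""
    return effective_cores / SPECINT_FACTOR

nodes, cvm_cores_per_node = 7, 4
total_effective = nodes * cvm_cores_per_node             # 28 effective cores
total_physical = effective_to_physical(total_effective)  # 22.4 physical cores
print(total_physical / nodes)                            # 3.2 physical cores per CVM
```

The same sketch makes the wider point clear: the higher the CPU's SpecInt factor, the fewer physical cores the CVM actually receives for the same effective-core allocation.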

As CPUs get better in performance (Ice Lake > Cascade Lake > Sky Lake), their SpecInt numbers are higher, and thus the same effective cores translate to even fewer physical cores.

What has changed?

The CVM core allocation is now based on physical cores (not effective cores).

So, when Sizer assigns 4 cores, these are 4 physical cores (of the Gold 5220 in the above example), not effective cores.

Following up from the previous example (refer to the image below):

The tooltip shows the total physical cores assigned to the CVMs: 7 x 4 = 28 cores

For the rest of the calculation in the sizing stats, these are converted to effective cores; so, 28 physical cores = 33.5 ‘effective cores’

Note: Depending on the processor, the effective-cores value (shown in red here as 33.15) can be as much as 50-60% higher than the physical cores (for example, for high-end Cascade Lake Refresh or high-end Ice Lake processors), further validating the point that the CVM would otherwise get fewer underlying physical cores (and hence the change).

What is the impact on sizing as a result of this change?

The sizings are more robust: the CVM allocation now removes potential CVM performance bottlenecks while aligning with Foundation recommendations.

With more high-end processors having significantly higher SpecInt numbers (Cascade Lake Refresh and now Ice Lake), the gap between effective cores and physical cores is getting wider. This change ensures that while UVMs take advantage of the better processor capabilities, the CVM gets the cores it requires for optimal performance and does not run into latency issues.

Understandably and expectedly, this change,however, would increase the core requirement for the cluster (as against previous sizings). For an existing sizing, this can be observed, more predominantly while cloning, which would apply the new CVM minimums to the cloned scenario leading to increased utilization of the CPU dials.  This , as a tradeoff, for higher CVM allocation for a better, more robust, performance optimal sizing scenario.