June 2019 Sprints

The second sprint in June has the following key items:

  • HPE DX. We went live with this last week, so this is a reminder that Sizer now has the HPE DX models.
  • SSD N+0 threshold changed to 95%. In both automatic and manual sizing we need to know the acceptable N+0 level for a cluster with respect to CPU, RAM, HDD, and SSD utilization, so we have thresholds that define what is still acceptable at N+0. In this sprint we tightened the SSD threshold from 98% to 95% at N+0. CPU, RAM, and SSD are now all 95%, and HDD is 90% (see the sketch at the end of this list).

  • Warning/suggestion to enable Erasure Coding for Files workloads. There is some concern in the field about the safety of using EC-X on workloads. However, File Storage workloads are often quite large (200 TiB+) with very few writes, and writes are where the concern about EC-X lies. In this case the savings with EC-X can be very significant at low risk, so we now warn you that you are opting out of a substantial cost saving. You can still keep EC-X ON or OFF.

  • VDI profile type enhancement for Collector-imported workloads. In VDI we have always had the notion of user profiles such as task workers or power users. However, when pulling actual customer workload data in from Collector, we do not need a profile type to come up with workload requirements; we know them from the actual data. Hence, when sizing VDI through Collector, the profile type attribute is skipped.

  • Complete Sizer proposal (one presentation file). There is now a single slide deck to use with your customer.

  • Detailed errors for manual sizing validations (invalid node). We enhanced the error messages shown during Manual sizing.
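Returning to the N+0 threshold item above: here is a minimal illustrative sketch of an N+0 utilization check using the new values. The names, structure, and sample numbers are assumptions for illustration only, not Sizer's actual implementation.

```python
# Illustrative sketch only -- not Sizer's actual code.
# N+0 means the cluster must still fit the workload with all nodes up,
# staying within the acceptable utilization thresholds below.

N_PLUS_0_THRESHOLDS = {
    "cpu": 0.95,
    "ram": 0.95,
    "ssd": 0.95,   # tightened from 0.98 to 0.95 in this sprint
    "hdd": 0.90,
}

def n_plus_0_ok(required, capacity):
    """Return True if every resource stays within its N+0 threshold.

    required and capacity are dicts keyed by 'cpu', 'ram', 'ssd', 'hdd'.
    """
    for resource, threshold in N_PLUS_0_THRESHOLDS.items():
        if required[resource] > capacity[resource] * threshold:
            return False
    return True

# Example with made-up numbers: SSD demand at 96% of capacity now fails N+0.
print(n_plus_0_ok(
    {"cpu": 70, "ram": 800, "ssd": 96, "hdd": 50},
    {"cpu": 100, "ram": 1024, "ssd": 100, "hdd": 100},
))  # False -- SSD exceeds the new 95% threshold
```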

 

In the first sprint in June we had the following key enhancements:

File Services – major update to File Services.

First, Home Directories and Department Shares have been combined into File Storage. These are often large workloads with few writes. Most were home directories, so it made sense to combine them given the similarities. The one big change is that we now derive the working set.

Application Storage went through a major update. We now model random I/O as well as the sequential I/O we had previously. We ask for throughput and the time window for keeping data in hot storage, and then derive the working set; a hedged sketch of that derivation is shown below. This update reflects all the latest lab testing.
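The exact formula is not spelled out here, so the sketch below is only one plausible reading of the derivation: treat the working set as the data written at the stated throughput over the hot-storage window. The function name, parameters, and units are assumptions, not Sizer's actual calculation.

```python
# Hedged sketch: one plausible way to derive a working set from the
# inputs Sizer asks for (throughput and hot-tier retention window).

def working_set_tib(throughput_mbps: float, hot_window_hours: float) -> float:
    """Approximate the hot-tier working set as data written during the window."""
    bytes_written = throughput_mbps * 1e6 * hot_window_hours * 3600
    return bytes_written / 2**40  # convert bytes to TiB

# Example with made-up numbers: 200 MB/s sustained over a 24-hour hot
# window gives roughly a 15.7 TiB working set.
print(round(working_set_tib(200, 24), 1))
```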

More info is available here:

https://sizer.nutanix.com/#/help/topics/17

Oracle Sizing Improvements – We now go with the fastest CPU in a node. CPUs still have to be >= 60 SPECint, but among those that qualify we now pick the fastest (see the sketch below).
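As an illustration of that selection rule, the sketch below filters to CPUs with SPECint >= 60 and then picks the fastest. The Cpu class, model names, and SPECint values are hypothetical, not taken from Sizer.

```python
# Hypothetical sketch of the rule above -- not Sizer's actual code.
from dataclasses import dataclass

@dataclass
class Cpu:
    model: str
    specint: float  # SPECint rating used by the sizing rule

def pick_oracle_cpu(cpus):
    """Among CPUs meeting the >= 60 SPECint floor, return the fastest."""
    qualifying = [c for c in cpus if c.specint >= 60]
    if not qualifying:
        raise ValueError("No CPU meets the >= 60 SPECint requirement")
    return max(qualifying, key=lambda c: c.specint)

# Example with made-up values: CPU C qualifies and is the fastest.
cpus = [Cpu("CPU A", 72.0), Cpu("CPU B", 55.0), Cpu("CPU C", 88.0)]
print(pick_oracle_cpu(cpus).model)  # CPU C
```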

Cold tier data adjusted into the hot tier

Sizer always looks for ways to propose the optimal solution in terms of resources and cost.

As part of this effort, in certain cases Sizer moves workload data that would otherwise sit on cold tier storage (HDDs) onto hot tier storage (SSDs).

This happens when the BOM contains surplus SSD capacity that would otherwise go unutilized. The unused flash capacity is used for cold tier data if it reduces the overall number of nodes, or if it avoids adding extra disks to meet a large HDD requirement that the spare SSD capacity can satisfy. The defined threshold levels are maintained; they do not change for this particular adjustment. A hedged sketch of the adjustment is shown below.
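The sketch below illustrates the adjustment under the assumptions just described: spare flash left after the hot tier requirement (and within the SSD threshold) absorbs cold tier data, shrinking the HDD requirement. All names and numbers are illustrative, not Sizer's code.

```python
# Hedged sketch only -- not Sizer's actual implementation.

def adjust_cold_into_hot(hot_req_tib, cold_req_tib, ssd_cap_tib,
                         ssd_threshold=0.95):
    """Return (cold data moved onto SSD, remaining HDD requirement), in TiB."""
    usable_ssd = ssd_cap_tib * ssd_threshold        # threshold still applies
    spare_ssd = max(0.0, usable_ssd - hot_req_tib)  # flash left after hot tier
    moved = min(spare_ssd, cold_req_tib)            # cold data absorbed by flash
    return moved, cold_req_tib - moved

# Example with made-up numbers: 10 TiB of spare flash absorbs 10 TiB of
# cold data, shrinking the HDD requirement from 40 TiB to 30 TiB.
print(adjust_cold_into_hot(hot_req_tib=28, cold_req_tib=40, ssd_cap_tib=40))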

The adjustment appears as a separate row in the calculation table and in the Sizer BOM. Sample below: