May 2019 Sprints

We launched our second sprint for May, and it is a BIG one.

Key Enhancements

Super Auto – Automatic sizing goes through all the options and finds the optimal sizing.  Often SEs will say, "Auto is a good start, but I want to play with it": increase or decrease nodes and see the impact on the dials; increase or decrease cores, RAM, etc., and again see that impact.

Well, now we have Super Auto: right where you see the recommendation and the dials, you can make those changes and the dials update.  Better yet, you see the % change in cost vs. optimal.  Manual is still there, but now you can do a lot more in Automatic sizing.

Here is more info, with snapshots:

https://sizer.nutanix.com/#/help/articles/592

 

Proposals – We have had the Sizer screenshots for a few months now, but we worked with the Field Enablement team and Product Marketing to deliver the latest corporate overview and product/solution benefits.  You don't have to hunt around for the latest: do your work in Sizer, and the latest PowerPoint is available under Download Proposals.  We see this evolving, but you can be assured you have the latest.

Oracle updates – Oracle is often used in larger enterprise applications, and Oracle charges for all cores in a node running Oracle.  Given that, we now require high-speed processors (greater than or equal to 60 SPECints, which is about 3 GHz) but do allow a VM to cross sockets in a node.  This way you can have a large Oracle VM and know it will be fulfilled by a high-speed CPU with fewer cores, giving you a higher-end system and lower Oracle license costs.
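
The arithmetic behind this can be sketched as follows.  This is an illustration only, not Sizer's actual logic: the function, node configurations, and prices are hypothetical, and the standard x86 core factor of 0.5 from Oracle's core factor table is assumed.

```python
# Illustrative sketch (assumed, not Sizer's implementation): why fewer,
# faster cores lower Oracle licensing cost. Oracle licenses every core
# in a node running Oracle; x86 cores typically carry a 0.5 core factor.

ORACLE_X86_CORE_FACTOR = 0.5  # assumed standard x86 factor

def oracle_licenses(cores_per_node: int, nodes: int) -> float:
    """Processor licenses needed for a cluster running Oracle."""
    return cores_per_node * nodes * ORACLE_X86_CORE_FACTOR

# Two hypothetical 3-node clusters with comparable total compute:
# 16 fast cores (~3 GHz, >= 60 SPECint) vs. 24 slower cores per node.
fast_few = oracle_licenses(cores_per_node=16, nodes=3)
slow_many = oracle_licenses(cores_per_node=24, nodes=3)
print(fast_few, slow_many)  # 24.0 36.0 -> fewer licenses with fast cores
```

The point is simply that license count scales with total cores, so meeting the compute requirement with fewer, faster cores directly reduces the Oracle bill.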

Miscellaneous

  • Heterogeneous cluster support for N+1 in the Storage Calculator.  We had been taking one node off the first model defined; now we take off the largest node in the cluster.
  • Appliances can have add-ons like Prism Pro, Flow, and Calm.  In the past this was only for decoupled nodes.

First sprint for May

Key Enhancements

Backup Sizing.  All workloads in Nutanix sizings can now have backups in a separate backup cluster.  You can define the backup policy and target either Files Pro or Buckets Pro.  The intent is that the backups reside in a backup cluster managed by third-party backup software.  Sizer sizes for the backups, includes either Files Pro or Buckets Pro, and allocates space for the backup software.  In the near future there will be dedicated backup hardware that can be used in the backup cluster instead of Files Pro or Buckets Pro.  Here are the details:

https://sizer.nutanix.com/#/help/articles/585

Miscellaneous

  • Compression is now allowed for Starter license
  • Buckets can have RF3.
  • ROBO VMs can now have more than 2 TiB of storage

Collector and Tools

  • VDI workloads created by Collector are now better fine-tuned to meet different usage levels

Backup Sizing

All Nutanix workloads now support backups.  This does the following:

  • For any workload you can define a backup policy in terms of the number of full and incremental backups.
  • When invoked, Sizer computes the backup storage that is needed and puts it in a standalone cluster.  Only backup workloads can be included in the backup standalone cluster(s).
  • Sizer also allocates cores, RAM, and storage for the third-party backup software in the backup cluster.
  • In the future, you will be able to specify the backup hardware to be used in the backup cluster(s).
  • Alternatively, we offer Files Pro and Buckets Pro standalone clusters as targets.

The inputs are as follows:

  • Recovery Point Objective – the time between backups (be it an incremental or a full backup).  This represents what point in time you can recover data to.
    • For example, say you want to recover some information.  The last backup will have occurred at most 24 hours ago.
  • Backup cycles – the number of cycles you want retained.
  • Full backups in a cycle – typically 1, but can be more.  Here all the data in the workload is backed up.
  • Incremental backups in a cycle – typically several; the amount of data is the rate of change * workload data.
  • Retention in days – backup cycles * (full backups per cycle + incremental backups per cycle).
  • Rate of change – the percent change expected between incremental backups.
  • Backup target – options for holding the data, such as Files Pro.
  • Standalone cluster – the name of the cluster that will hold the backups.
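
The capacity math these inputs imply can be sketched roughly as below.  This is an illustration under stated assumptions, not Sizer's exact algorithm (which also allocates cores, RAM, and space for the third-party backup software); the function name and example figures are hypothetical.

```python
# Rough sketch (assumed) of backup capacity from the policy inputs:
# each cycle keeps its full backups at full workload size, plus
# incrementals at (rate of change * workload size).

def backup_storage_tib(workload_tib: float, cycles: int,
                       fulls_per_cycle: int, incrementals_per_cycle: int,
                       rate_of_change: float) -> float:
    """Raw capacity to retain all backups across the retention window."""
    per_cycle = (fulls_per_cycle * workload_tib
                 + incrementals_per_cycle * rate_of_change * workload_tib)
    return cycles * per_cycle

# 10 TiB workload, 4 retained cycles of 1 full + 6 incrementals,
# 5% rate of change between incrementals:
need = backup_storage_tib(10.0, cycles=4, fulls_per_cycle=1,
                          incrementals_per_cycle=6, rate_of_change=0.05)
print(need)  # 52.0 TiB of raw backup data
```

Note this is raw retained data before any compression or erasure-coding savings the target cluster might apply.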

April 2019 sprints

We now have two sprints a month

Second sprint released April 29

Key enhancements

  • Allow disabling workloads in a scenario.

Bet you would like to see the effect an extra workload has on your sizing?  Betting you do that multiple times when you work with Sizer?  Well, now you can disable a workload, and it is as if it has been deleted.  Flip the switch and voilà, it is back.

The uses for this feature are tremendous.  Certainly you can add a workload and see its impact.  Coupled with the ability to clone and edit a workload, you could have a couple of variants (say small and large), then toggle each one to be disabled and see the impact the difference in workloads makes.

We took care in how we handle disabled workloads, as follows:

o    You get a warning at the top that one or more workloads are disabled, so you don't forget.

o    Since the sizing is based on just the enabled workloads, the BOM, budgetary quotes, and quotes are based on what is enabled.

o    You can clone the scenario, and the current state of enabled/disabled workloads is preserved in the new scenario.  So you can have multiple scenarios from there, with some workloads enabled and others disabled.

  • Capacity growth for Files – this is important, as Files capacity is always growing, and now you can size for up to the next 5 years.

Collector and Tools

  • Warn you if too many VMs can't be sized (e.g., many are powered OFF).  This is to inform you that the sizing could be undersized given the data.
  • SpecInt core adjustment for Collector import
  • Default selections for VDI workloads by Collector Import (Also enable Files by default)
  • RVTools 3.11 Import support

Miscellaneous

  • Updated Calm, as it no longer offers a free 25 VM pack
  • Product updates for NX, SW only vendors, and Dell XC
  • Validator Product Updates
  • Failover Capacity Indicator improvements for ECX and Block Awareness enabled scenarios
  • Oracle: Node allocation to DB VMs
  • Automatic Sizing with CBL improvements for Standalone cluster sizing

First Sprint released April 16

Key Enhancements

  • Auto Sizing with CBL – We now take the CBL license cost into account in Auto Sizing, which is key as most of the value is now in the licenses.  We also moved to list-price sizing for NX hardware instead of COGS.
  • Manual Sizing warnings based on failover indicators.  We leverage the new N+0 warnings in our UI and also in the BOM, budgetary quote, and quote.
  • 120 TB cold storage support for 5.11.  Models that support this are coming, but we are ready.
  • Two-node ROBO sizing for Files with N+1 failover.  This is great for the lower-end file server market.
  • Add Files/Buckets SKUs to quotes for non-decoupled accounts – so now you can have a Files or Buckets license with an appliance sale.
  • ROBO VM limit changes – PM updated the limits, so now: no limits on cores, 32 GB RAM per VM, 2 TiB of combined HDD/SSD storage per VM, and 50 TiB total HDD/SSD storage per cluster.

Miscellaneous

  • Default NIC selection for ALB/non-ALB countries (Auto Sizing).  We take care of this nuisance of needing the right NIC SKU for Arab League and non-Arab League countries.  We look at the country of the account you are sizing for.
  • Oracle workloads only in dedicated clusters.  This is best practice, given Oracle charges for all cores on nodes running Oracle.
  • Require an external NIC card for Buckets workloads
  • New UX implementation for allowing decoupled quotes for non-decoupled accounts – we want to make it easier to sell a CBL deal to non-decoupled accounts

March 2019 sprints

Second Sprint

  • Optimal VDI Sizing with Collector – This is a huge innovation.  We started Collector to capture the best customer requirements, and now that is a reality for our top workload – VDI.

Here you run Collector at the customer's site, and it collects 7 days of performance data from vCenter.  The information is already in vCenter, so it does not take long to grab (about 5 minutes).  In Collector you can specify whether you want the median value, average, peak, or some percentile like the 80th for each VM.  For example, the 80th percentile means you are getting the core and RAM utilization level that covers 80% of all the data points from the last 7 days.  There you can be assured it is well sized, as not all VMs will run that hot all the time.  That is what I would advise.
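
To make the percentile idea concrete, here is a minimal sketch of the math, assuming a simple nearest-rank percentile; the function name and sample values are hypothetical, and Collector's actual statistical method may differ.

```python
# Sketch (assumed) of picking a sizing value at the 80th percentile of a
# VM's utilization samples: the smallest value covering 80% of data points.

def utilization_at_percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of the observed utilization samples."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))  # how many points to cover
    return ordered[rank - 1]

# Hypothetical CPU utilization (%) samples for one VM over 7 days:
cpu = [12.0, 35.0, 18.0, 90.0, 40.0, 22.0, 55.0, 30.0, 25.0, 48.0]
print(utilization_at_percentile(cpu, 80))  # 48.0 -> covers 8 of 10 samples
```

Sizing to 48% rather than the 90% peak avoids provisioning for a spike that occurred in only a fraction of the samples, which is the "right-sizing" trade-off described above.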

With that data going to Sizer, we can then "right-size" the VM so you don't undersize or oversize it.  All the details are here:

https://services.nutanix.com/#/help/articles/532

This is our first workload.  I don't mean this as hype, but Collector will radically change sizing and in the process allow Nutanix to be more competitive.  Why?  Because we will have precise customer data to size with.

For compute-oriented workloads we will do similar processing to what we did here for VDI.  For capacity workloads like Buckets or Files, we want to analyze data to get the best compression values.  So this is just the start.

  • Hot storage allocation with FATVM – Here, if you have a large working set for Files or Buckets, we allow the SSD (hot storage) to be dispersed over the cluster.  Usually we want it on the local node for best performance (e.g., for a heavy compute-intensive VM), but here that is not a concern.  The net result is you can have a lot of large hybrid models.
  • Allow decoupled quotes on non-decoupled accounts for partners* – This is allowed now, and we brought this functionality to partners.
  • Various product updates* across HP, Lenovo, and Nutanix

First Sprint

Add N+0, N+1, N+2 indicator

o    This is a BIG sizing improvement: Sizer will always tell you whether you are at N+0, N+1, or N+2 for all resources (CPU, RAM, HDD, SSD) for each cluster.

o    Now, as you make changes in Manual, you always know whether you have adequate failover.  Best practice is N+1, so you can take down any one node and customer workloads can still run.

o    This can be very hard to figure out on your own.  ECX savings, for example, vary by node count.  Heterogeneous clusters mean you have to find the largest node for each resource.  Multiple clusters mean you have to look at each one separately.  Sizer does this for you!

o    Here is all the info on how this works:  https://services.nutanix.com/#/help/articles/512
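
The heterogeneous-cluster check can be sketched as follows.  This is an illustration only, assuming a simplified model: for each resource, remove the node(s) largest in that resource and see whether demand still fits.  The real indicator also accounts for ECX savings, replication overhead, and so on; the function and figures are hypothetical.

```python
# Sketch (assumed, simplified) of an N+k failover check for one resource
# in a heterogeneous cluster: does demand still fit after losing the k
# nodes that are largest in that resource?

def failover_ok(node_capacities: list[float], demand: float,
                nodes_lost: int = 1) -> bool:
    """True if demand fits after removing the largest `nodes_lost` nodes."""
    remaining = sorted(node_capacities)[:len(node_capacities) - nodes_lost]
    return sum(remaining) >= demand

# Hypothetical RAM (GiB) per node in a 4-node heterogeneous cluster:
ram_gib = [256, 256, 512, 512]
print(failover_ok(ram_gib, demand=900, nodes_lost=1))  # True  -> at least N+1
print(failover_ok(ram_gib, demand=900, nodes_lost=2))  # False -> not N+2
```

Repeating this per resource (CPU, RAM, HDD, SSD) and per cluster is exactly the bookkeeping that is tedious by hand, which is why having Sizer surface the indicator matters.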

  • Allow decoupled quotes on non-decoupled accounts (SFDC users) – At first, the rule was that an account had to be decoupled to get decoupled quotes or quotes with Files Pro or Buckets Pro.  Now that rule has been lifted, and we support it in Sizer quoting.  We just ask you whether you want decoupled quotes or not.
  • ROBO VM sizing (quoting, BOM with ROBO SKUs) – Here you can have the new ROBO VMs, where you sell VMs as a license and separately have the hardware.
  • 3-node sizing for N+1 for Files – This allows a 3-node cluster to be used for Files.  Previously, 4 nodes was the minimum to be N+1.
  • Add missing terms for appliance-only license and support
  • Multiple cluster support for Collector import – Now you can put different VMs in different clusters in Sizer
  • Default NIC Selection for ALB/Non-ALB Countries (Manual Sizing)
  • High/Critical Security fixes
  • Various product updates for Sizer and Validator

 

 

Super Auto Sizing

We have greatly enhanced our Automatic Sizing.  Auto goes through all the possible options (an option is a model with a specific component combination) and finds the optimal solution (lowest net cost).

That is certainly good to know, and it often is the best solution.  But we find users want to "play" with the recommendation, for example:

  • Increasing and decreasing nodes to see the impact on the utilization dials.  Auto provides N+1 or N+2, but it is good to see the impact of losing a node, for example.
  • Would a faster processor or more RAM help a compute-intensive workload mix?
  • What about the impact of changing storage options?

In the old days you had to go to Manual, make a change, hit Apply, and then see the impact.  If you want to play with, say, 10 different changes, that is a LOT of effort.

Now we have Super Auto !!!

Here is an Automatic Sizing, and you see there are 6 nodes.  The one thing that is new is the Customize link.  Click on it and you will enter the Super Auto zone.

Here is Super Auto

Key things:

  • You now have + and – buttons to increment and decrement the nodes and the resources.  Regarding the resources:
    • Cores – You increment or decrement to more or less overall SPECint.  We go through the product structure for that model.
    • RAM, HDD, SSD – You increment or decrement to more or less capacity.  We go through the product structure for that model.
  • Whenever you click, a new sizing is done with that change in either the model or the node count, and the dials are updated.
  • You see a cost delta vs. optimal, so you don't have to go to the budgetary quote to realize the relative cost change.  Here we added a node, and it increased the expected net price by 15%.  It is approximate; use the budgetary quote to get a better number.
  • At the top, Sizer tells you the sizing is now customized.  That will be recorded in the BOM too.
  • Restore to Auto – Here you can just have Sizer go back to the optimal sizing.
  • Done – Have fun clicking things, and in the end you can go with it.
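
The "% change vs. optimal" readout is simple arithmetic, sketched below.  This is an assumed illustration (Sizer's displayed figure is itself approximate, refined by the budgetary quote); the function name and prices are hypothetical.

```python
# Sketch (assumed) of the cost delta Super Auto shows: percent change of
# the customized sizing's net cost vs. the optimal sizing's net cost.

def cost_delta_pct(customized_cost: float, optimal_cost: float) -> float:
    """Percent change of a customized sizing vs. the optimal one."""
    return (customized_cost - optimal_cost) / optimal_cost * 100

# e.g. adding a 7th node to a hypothetical $600k optimal 6-node sizing:
print(round(cost_delta_pct(customized_cost=690_000,
                           optimal_cost=600_000)))  # 15 (i.e., +15%)
```

Showing the delta inline is what saves the round trip to the budgetary quote for every experiment.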

What about the product details when I make changes?

Product component descriptions and quantities are too long to put in the UI next to the buttons.  So we have you increment or decrement, and you see the overall capacity (e.g., increase or decrease HDD).  However, at any time you can look at the model description (the i next to the model) and get all that info.