January 2020 Sprints

January 21

Collector

  • Collector or RVTools imports can now size for just powered-on VMs or for both powered-on and powered-off VMs. Previously only powered-on VMs were sized.
  • A Collector or RVTools profile can now go up to 64GB RAM (was 32GB)

Usability

  • Rich text formatting is now available for Scenario Objectives, so you can bold or underline words in the Objectives to make a key point. The formatting also carries into the BOM, making for a nicer document.

January 31

Products

  • Complete HPE DX BOM. Sizer will now provide a complete BOM including chassis, power supplies, etc. This makes quoting easier and reduces potential errors when entering HPE DX models in HPE’s configurator.
  • Update to 8155-G7 product rules

Workloads

  • Allow up to 240TB per node for Objects Dedicated workloads. Since these are Objects Dedicated workloads, other applications are not allowed.

 

December 2019 Sprints

Dec 20

Hello everyone. We went live with our latest sprint today, with the following key items:

  • Add some workload VMs to Objects Dedicated clusters for free (1 VM per node)
  • Storage Capacity Calculator – Added support for NVMe only models and SSD/NVMe models
  • New Intel models – S2600WF0-2U1N-12 and S2600BPS-2U4N-12

Dec 9

Hi everyone.  We launched our latest sprint.

  • Glad to offer Mine as a backup target now
  • Sizing improvements – processor as a workload input
    • Added Cascade Lake processors to the pull-down list when you start a new workload

Usability

  • Allow editing of a workload’s cluster – Now you can move workloads to different clusters without cloning a new workload and deleting the old one.

General

  • Add N+0/N+1 pull-down to Extent Store charts – UI. Now the Extent Store charts and the Extent Store values in the Sizing details are in sync.
  • Add scenario delete operation on the dashboard (list and tile view) – You can get rid of old scenarios more easily than opening each one and deleting it.

Product Alignment

  • XC740xd-24 update / 1.6TB SSD
  • Add 8260M and 8280M processors for Nutanix
  • DX is available to Nutanix partners
  • Added more profiles for the V100 GPU with 32GB RAM

Workloads

  • Mine is now available. In any workload you could already direct backup to a different cluster, and now Mine can be the target. Sizer will do the sizing and include all the SKUs, including the Nutanix SKUs for Veeam or HYCU software. Unfortunately, you have to create a new opportunity in Salesforce and quote the Mine products separately from the main cluster with CBL.

November 2019 Sprints

Nov 25

Latest sprint went live last night, with the following changes:

  • Retain fields in manual mode: In manual mode, the quantity is now retained when changing the model; previously it used to get reset. The same applies to the other fields.
  • CPU/Memory/SSD/HDD/NIC/GPU selections are retained upon changing the model type, or even the model if it has the same components.
  • Validator updated with the latest product changes/rules for 3rd-party platforms.
  • Platforms: Dell XC XR2 (XC Core-only platform) and HPE DX 560 Gen10 24SFF are available for sizing.
  • NX-8170-G7 is ready on the Sizer end.

Nov 11

Latest sprint went live last night and includes:

  • Deep nodes for NearSync – We had already extended support for hourly snapshots (1-hour RPO) to nodes with up to 64TB HDD (hybrid nodes) and up to 80TB SSD (all-flash nodes), but only for async. We have now done the same for NearSync.
  • Correct RVTools capacity – Collector 2.1 stores VM capacity data in GiB/MiB (not GB/MB) for both VMware and Nutanix clusters, since that is how it is stored in Prism and vCenter. RVTools unfortunately labels the capacity as GB/MB even though the data from vCenter is in GiB/MiB. We now treat these capacities as GiB/MiB, which is correct.

Working on SAP HANA, Mine, 8170-G7 … all coming soon.
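
To illustrate the unit issue behind the RVTools correction above, here is a minimal Python sketch with hypothetical numbers (not Sizer or Collector code) showing how much capacity goes missing if a value that is really GiB is interpreted as decimal GB:

```python
GIB = 1024 ** 3  # bytes per gibibyte (how vCenter/Prism actually store capacity)
GB  = 1000 ** 3  # bytes per gigabyte (how RVTools labels the same number)

reported = 500   # RVTools shows "500 GB" for a VM disk

wrong   = reported * GB / GIB  # treating the figure as decimal GB
correct = reported             # the figure is already GiB

print(f"Read as GB : {wrong:.1f} GiB of real capacity")    # ~465.7 GiB (undersized)
print(f"Read as GiB: {correct:.1f} GiB of real capacity")  # 500.0 GiB
```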

 

 

 

Era Support in Sizer

 

What is Era support in Sizer

Sizer covers both the sizing and the licensing aspects of using Era to manage the databases defined in Sizer. For a long time you have been able to size either Oracle or SQL databases that a customer may want to run on a Nutanix cluster. With Era you can manage those databases, but also set up a data protection policy and manage clones. Sizer does the following when Era is turned on for either Oracle or SQL workloads:

  • Determine the licensing required for the Oracle or SQL VMs defined in Sizer. Era licensing is vCPU based, so it is the number of vCPUs under management
  • Determine all the sizing requirements for the data protection policy defined in the workload, including Time Machine requirements
  • Determine the cloning requirements (if enabled), for either database-only clones (just storage) or database-plus-VM clones (entire database VM clone)
  • Determine the sizing requirements for the Era VM itself

Era License/Sizing

  • Let’s say you just want to buy Era for the Oracle workloads, without snapshots or clones. The next sections deal with the data protection policy and cloning; here we just add the Era licenses.
  • Here is the setting in the Oracle workload. We are saying we want Era for all 10 Oracle VMs, and each VM has 8 vCPUs. Coincidentally the vCPU:pCore ratio is 1:1, so that is also 8 cores, but Era licensing is based on vCPUs.

  • Here is the budgetary quote, and indeed it shows that 80 vCPUs must be licensed.

  • Here is the Era sizing. We do add the VM that runs Era, which is lightweight.
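
As a quick illustration of the vCPU-based licensing math above (a sketch only; the function name is made up and this is not Sizer’s actual code):

```python
def era_vcpus_to_license(num_db_vms: int, vcpus_per_vm: int) -> int:
    """Era is licensed per vCPU under management, independent of the
    vCPU:pCore ratio used for sizing."""
    return num_db_vms * vcpus_per_vm

# The Oracle example above: 10 VMs with 8 vCPUs each.
print(era_vcpus_to_license(10, 8))  # -> 80 vCPUs to license
```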

 

Era Data Protection including Time Machine

  • To invoke data protection, Era must be enabled, and the licensing is scoped as described above.
  • Sizer now lets you define the data protection policy you would define in Era and figures out the sizing requirements.
    • Daily database change rate can be either a % or GiB, and is the amount of change per day for the databases defined in the workload (the database VMs defined in the workload)
    • Daily log size is either a % or GiB. This is used by Time Machine to allow continuous recovery over the time frame specified. All transactions are logged, and Time Machine can roll back to a given point in time
    • Continuous Snapshots is specified in days
    • Daily, Weekly, Monthly, and Quarterly are the number of snapshots kept for snapshots taken in those time frames

  • Here are the sizing results.
    • Era VM – the logs are kept by the Era VM on SSD. This is what Time Machine uses for continuous recovery
    • The other snapshots are put in cold storage and, like anything stored in a cluster, carry RF overhead (here it is set to RF2)
    • Note that the quarterly snapshots add a lot of storage
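
To see why the quarterly snapshots dominate, here is a deliberately simplified sketch of how retained snapshots and RF overhead could add up. The linear change-rate model and the variable names are assumptions for illustration, not Sizer’s actual algorithm:

```python
def snapshot_capacity_gib(daily_change_gib: float, retention: dict, rf: int = 2) -> float:
    """Rough estimate: each retained snapshot is assumed to hold the data that
    changed over its interval, and everything carries RF overhead.
    retention maps interval length in days -> number of snapshots kept."""
    raw = sum(interval_days * count * daily_change_gib
              for interval_days, count in retention.items())
    return raw * rf

# Hypothetical policy: 7 daily, 4 weekly, 12 monthly, 4 quarterly snapshots.
policy = {1: 7, 7: 4, 30: 12, 90: 4}
print(snapshot_capacity_gib(daily_change_gib=50, retention=policy, rf=2))
# The 4 quarterly snapshots alone account for 90 * 4 = 360 days of change.
```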

Era Database Only Clones

  • You can define a cloning policy in Era, and thus in Sizer, so it can calculate the sizing requirements
    • Define the number of clones for all the database VMs in the workload. Here we had 10 VMs and chose 2 clones per VM
    • Clone daily change rate – the % or GiB of change each day, typically by the developers using those clones
    • Refresh rate – at some point (specified in days) organizations typically refresh the clones with new data, so this represents the maximum time the clones are kept

  • Here is the sizing. Note that Era DB Only Clone is added to the workload summary and only capacity is added. The calculations from the Era data protection policy are not affected.
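
A minimal sketch of the extra capacity a database-only clone policy could add, assuming thin clones that only accumulate their own changes until refresh (an illustration, not Sizer’s exact formula):

```python
def db_only_clone_capacity_gib(num_db_vms: int, clones_per_vm: int,
                               clone_daily_change_gib: float,
                               refresh_days: int, rf: int = 2) -> float:
    """Storage-only clones: each clone grows by its daily change rate until
    it is refreshed, and the cluster applies RF to that data."""
    num_clones = num_db_vms * clones_per_vm
    return num_clones * clone_daily_change_gib * refresh_days * rf

# Example from above: 10 database VMs, 2 clones each; change/refresh values are hypothetical.
print(db_only_clone_capacity_gib(10, 2, clone_daily_change_gib=5, refresh_days=14))
```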

 

Era DB plus VM clones

  • Here we add clones of the VMs, so both the storage and the VMs themselves
    • Define the number of clones for all the database VMs in the workload. Here we had 10 VMs and chose 2 clones per VM
    • Clone daily change rate – the % or GiB of change each day, typically by the developers using those clones
    • Refresh rate – at some point (specified in days) organizations typically refresh the clones with new data, so this represents the maximum time the clones are kept
    • vCPUs per VM – in the workload we defined a database VM as needing 8 vCPUs; if the clone is for test/dev it could be less
    • vCPU:pCore ratio – in the workload it is 1:1, but for test/dev 2:1 is more common
    • RAM per VM is also needed

  • Here is the sizing. Note that Era DB Plus VM Clone is added to the workload summary. Where the Era DB Only Clone just added capacity, the Era DB Plus VM Clone adds VMs.
    • 20 VMs were added, as we have 10 VMs in the workload and asked for 2 clones per source database
    • 80 cores are needed: those 20 VMs each need 8 vCPUs, but we specified a 2:1 vCPU:pCore ratio. Thus 160 more vCPUs but just 80 cores. Note that those vCPUs are added to the Era licensing, since Era is managing them
    • We need 2.5 TiB of RAM, as we have 20 VMs and each needs 128 GiB
    • Capacity is the same as for the DB Only Clone, since the settings are the same
    • The calculations from the Era data protection policy are not affected
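
The compute and memory arithmetic above can be reproduced directly; here is the math as a small sketch (the numbers come from the example, the script itself is illustrative):

```python
num_clones      = 10 * 2    # 10 source database VMs, 2 clones each -> 20 VMs
vcpus_per_clone = 8
vcpu_pcore      = 2         # 2:1 vCPU:pCore ratio for test/dev
ram_per_clone   = 128       # GiB

extra_vcpus = num_clones * vcpus_per_clone       # 160 vCPUs (also Era-licensed)
extra_cores = extra_vcpus // vcpu_pcore          # 80 physical cores
extra_ram   = num_clones * ram_per_clone / 1024  # 2.5 TiB of RAM

print(extra_vcpus, extra_cores, extra_ram)       # 160 80 2.5
```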

 

 

Sizer 4.0 Introduction

Welcome to Sizer 4.0

 

What is Sizer 4.0

It is based on Sizer 3.0, which has been around for a year and has handled over 120,000 sizings. It is a shift in how to think about defining the best solution for the customer.

Workloads first -> Solution second

Here you spend the bulk of your time on the customer requirements; the solution with hardware comes second.

Sizer 4.0 Demo

Why Sizer 4.0

  • Nutanix is now a software company. Customers buy our clusters to run their key applications, be it VDI, Oracle, Server Virtualization, etc. Hardware is then a secondary concern, and many vendors are supported. Given that shift it makes sense to:
    • First focus on the workloads. Work in Sizer interactively with the customer to define their requirements with precision
    • Then focus on the solution, which here means pulling in the hardware. With Sizer 4.0, you can stay in the same scenario and switch from vendor to vendor
  • Advent of Collector-driven sizing. Collector is a bit over a year old and, with 2.x, is a formidable collection tool for customers with either VMware or Nutanix clusters. There are many advantages to Collector-driven sizing
    • Ultimate precision. Though our workload profiles have their place, it is far better, for example, to gather 7 days of performance data for 1000 VDI users than to assume they fit one of a few profiles
    • Today we group VMs into 25 different buckets, but in the near future we will allow each VM to be its own workload, so a scenario could have hundreds of workloads

How it works

Create the scenario

Just like before, you enter the name of the scenario, the customer, the opportunity, and the objectives. Don’t forget the objectives appear in the BOM for customer review.

Create the Workloads

 

The first page is the Workloads tab.  You start with an empty canvas and can add workloads

  • Add will allow you to get to the workload modules.
  • Import will allow you to import from tools like Collector

All the same workloads are there for you

Alternatively, you can add them via Collector.

 

So here we added a couple of workloads manually.

 

Worth noting: on the right are the 3 dots that expose actions like edit workload or clone workload. On the left is the slider to disable the workload. Gone are the little tiles; they simply won’t do if you have hundreds of workloads. We had them in Sizer 2.0, when a typical scenario had 1 to 3 workloads. The tiles were kind of cute then, but are unusable for where we see things going.

So here I made it more interesting by using Collector to size a lot of VMs. I want precision, so I selected performance data and the 95th percentile. This means that over a 7-day period we size to fit at least 95% of all the workload demands. Remember, it takes Collector only about 10 minutes to run at a customer site, and it gets all its data from either vCenter or Prism.
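
For reference, sizing to the 95th percentile just means picking the value that at least 95% of the collected samples fall under, rather than the absolute peak. A small sketch with made-up samples (Collector’s real data format is not shown here):

```python
import numpy as np

# Hypothetical per-cluster CPU demand samples over 7 days (one every 5 minutes).
samples = np.random.gamma(shape=2.0, scale=300.0, size=7 * 24 * 12)

p95  = np.percentile(samples, 95)  # covers 95% of observed demand
peak = samples.max()               # sizing to peak would be noticeably larger
print(f"95th percentile: {p95:.0f} MHz, peak: {peak:.0f} MHz")
```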

So now we have sized hundreds of VMs and put them into groups. You can see that it is a lot easier to review as a table rather than a bunch of tiles.

Finalize the solution

Go to the Solution tab on the left and you see the solution. Here it is for Nutanix. You see all the familiar things like the dials, the sizing summary, and all the options to edit the hardware.

Want to go to a new vendor? It is easy. Use the pull-down on the upper right to get to HPE DX, for example. Here is the same sizing but for DX.

 

October 2019 Sprints

Sizer 4.0

Here is the info –

Sizer 4.0 Introduction

 

Sizing improvements

  • Show Extent Store for N+1 in Sizing Details. We show the Extent Store and here discount the largest node to accommodate N+1
  • Minimum of 8 cores per socket when NVMe is included in a model. This is a new request from Engineering, so we moved fast. For NX this affects the NX-3170-G6 (NVMe+SSD), NX-8035-G6 (NVMe+SSD), and NX-8155-G6 (NVMe+SSD), where the C-CPU-6128 is no longer valid

Use Cases

  • Server Virtualization enhancement. We will be enhancing our workloads with the rich data we get from Collector. Until now, Server Virtualization defaulted to a vCPU:pCore ratio of 6:1. At times that is too much core overcommit and at times too little. Now, when we gather data on the customer’s source clusters, we determine the ratio that actually exists and use it to create the workloads. We also know the processor used in those clusters, so precise requirements come into Sizer from the combination of the known processor and the current overcommit ratio. You can of course tweak that.
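
Conceptually, the derived overcommit ratio is just the provisioned vCPUs on the source cluster divided by its physical cores. A hedged sketch (the function and inputs are illustrative; Sizer’s actual derivation also accounts for the source processor):

```python
def derived_overcommit(vcpus_per_vm: list[int], physical_cores: int) -> float:
    """Ratio of provisioned vCPUs to physical cores on the source cluster."""
    return sum(vcpus_per_vm) / physical_cores

# e.g. 120 VMs with 4 vCPUs each on hosts totalling 160 cores -> 3.0 (3:1)
print(derived_overcommit([4] * 120, 160))
```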

Business functionality

  • Discounting for SW licenses in budgetary quotes

Product Alignment

  • 3155G-G7 model and parts addition
  • XC740xd-24 update
  • HPE DX 380 G10 8SFF (GPU model)
  • Moved 5055-G6 to manual
  • CPU 6226 update for Lenovo

September 2019 Sprints

Sizing improvements

  • Updated the snapshot frequency for dense nodes. With 5.11.1, the limits for hourly snapshots have moved from 32TB to 64TB for hybrid and from 48TB to 80TB for all flash. Below these capacities we support hourly snapshots; above them we support snapshots every 6 hours (see the sketch after this list). This is an update to what we did in July.
  • The SSD N+0 threshold for hybrid is now 90% (except for Files Dedicated or Objects Dedicated). Given the drop in SSD prices it is best to be more conservative (was 95%), and this matches the HDD threshold.
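
Here is the sketch referenced in the first bullet: the dense-node rule expressed as a tiny function (capacities in TB; the function is illustrative, not Sizer’s implementation):

```python
def min_snapshot_interval_hours(node_capacity_tb: float, all_flash: bool) -> int:
    """5.11.1 rule of thumb: hourly snapshots up to 64 TB (hybrid) or 80 TB
    (all flash); denser nodes drop to one snapshot every 6 hours."""
    limit = 80 if all_flash else 64
    return 1 if node_capacity_tb <= limit else 6

print(min_snapshot_interval_hours(60, all_flash=False))  # 1 -> hourly is fine
print(min_snapshot_interval_hours(72, all_flash=False))  # 6 -> every 6 hours
print(min_snapshot_interval_hours(72, all_flash=True))   # 1 -> hourly is fine
```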

Use Cases

  • Era support in Sizer – This was a heavy lift developed over a couple of months and should be big for SEs given Era’s momentum. Era is now supported in Sizer for both Oracle and SQL workloads. Era snapshots are available: both the continuous snapshots for the Era Time Machine and the traditional daily, weekly, monthly, and quarterly snapshots. Era cloning is also supported, both for clones of the database only (storage) and for setting up new database VMs (CPU, RAM, and SSD). To complete the picture, the licenses are added to the BOM, the budgetary quote, and the actual quote to SFDC
  • Several Files 3.6 updates. The user connection assumptions in Files Storage have improved. In Files Storage we now assume 100% concurrent users (was 50%) and a vCPU:pCore ratio of 1:1 (was 2:1). Given the typically low core requirements for Files Storage, this should not have a big impact

 

Product Alignment

  • Multi-component support. This is the ability to have more than one component of the same component type, for example two different SSDs. Traditionally Sizer assumed one component SKU per component type. Our first step is support for multiple NIC cards in HPE DX.
  • Added the 7.68TB SSD to the Storage Capacity Calculator
  • Several product updates

August 2019 Sprints

Sizing improvements

  • Processor pull-down as a workload input. This is cool. For all of Sizer’s history, when you entered cores or MHz as workload inputs we assumed a baseline processor. Now you can tell Sizer to assume a specific processor, with Ivy Bridge, Haswell, Broadwell, Skylake, and Cascade Lake processors to choose from. The benefit is more precision when you point Sizer at the processor the customer is actually using
  • Cascade Lake updates (SPECint values) for all vendors. We had updated Nutanix in a prior sprint and have now covered the rest of the vendors
  • Compression changes. From extensive analysis of the compression ratios found in the installed base, we have gone a bit more conservative depending on the workload. Also, hover over the slider and you see both the percent savings and the ratio, e.g. 50% and 2:1
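
The percent savings and the ratio on the slider are two views of the same number; 50% savings is exactly a 2:1 ratio. A minimal conversion sketch:

```python
def savings_to_ratio(savings_pct: float) -> float:
    """50% savings -> 2.0 (i.e. 2:1); 33.3% savings -> roughly 1.5:1."""
    return 1 / (1 - savings_pct / 100)

def ratio_to_savings(ratio: float) -> float:
    """2:1 -> 50% savings."""
    return (1 - 1 / ratio) * 100

print(savings_to_ratio(50))   # 2.0
print(ratio_to_savings(2.0))  # 50.0
```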

Use Cases

  • Objects updates. We have had Objects (formerly Buckets) for 6 months now and got the latest updates from Engineering
  • The Files and Buckets SKUs are updated with the new SKUs. For example, Files Pro is now Files Dedicated. Also updated the Files license names in the Files workload
  • File Analytics. You can now include it in the Files workload sizing. No specific licenses are required, and it is available for both Files and Files Dedicated (Pro)
  • Changed name of Buckets to Objects

Business functionality

  • Very cool: we have a new BOM. It had never been updated in a big way, and we went all out. Even the dials from the UI are in the BOM. I believe it is a much more professional document to share with your customer
  • New SFDC rules on what you can combine in a quote. SFDC has stricter rules now, and we make sure Sizer follows them

Product Alignment

  • Various product updates: NX-8155-G7, the XC740xd-12 minimum drive count is now 4, and the DX default NIC
  • NX-8155-G7 is now live in Sizer
  • HPE DX ROBO model – The 1-socket DX360 4LFF is considered a ROBO model and so can be 1 or 2 nodes

July 2019 Sprints

Sizing improvements

  • Update Rules as per compatibility matrix (G6 Models)
  • GPU sizing parity between auto and manual
  • Async snapshots for large nodes – Large nodes, currently defined as hybrid nodes with 40TB or more of storage (HDD and SSD combined) or all-flash nodes with 48TB or more of storage, have limits on hourly snapshots. Sizer enforces the following for auto or manual sizing: either the customer is willing to have 4 or fewer hourly snapshots per day, OR the cluster needs to be resized with smaller nodes to stay under 40TB of storage
  • Auto/Manual parity – minimum nodes for workloads. There are a few gaps where Auto has more rules than Manual, and we want to close those. Some workloads have requirements on node counts. For example, Buckets can have at most 32 nodes and can require a certain number of nodes to process requests, and Files typically requires a minimum of 3 nodes (yes, 2 nodes are possible with the 1175S). Now we test for these types of conditions in Manual too

Use Cases

  • Calm – we now have its bundles
  • Updated the Oracle SPECint minimum down to 55 (was 60)
  • Updates to Buckets sizing (removing the versioning and clients fields) – We have had Buckets out for about 6 months, got feedback, and found ways to streamline the sizing approach

UX

  • A long-term request is the ability to hide scenarios on your Dashboard. We want SEs to work interactively with their customers in Sizer to build out the requirements, and it is awkward if the customer sees other customers’ scenarios. Now you can easily show just what you want. By default all scenarios are displayed, but there is now a Custom View button; click it and you can quickly hide all the scenarios and select only what you want to show.

Product Alignment

  • Fujitsu – XF core
  • Inspur NF5280M5 12LFF
  • New OEM- inMerge1000 – Inspur
  • Lenovo – NVMe drives: HX7820 & HX7520; GPUs: HX3520-G
  • Improved HCL integration – Downstroking for SSDs (80GB per drive set aside). Also, more importantly, we test whether a component is End of Sale. The HCL has a long list of components that are supported, but not all of them are currently for sale, and we exclude those in Sizer

June 2019 Sprints

The second sprint in June has the following key items:

  • HPE DX. We went live with this last week, so a reminder that Sizer has the DX models.
  • SSD N+0 threshold changed to 95%. In automatic or manual sizing we need to know the acceptable N+0 level for a cluster in regards to CPU, RAM, HDD, and SSD utilization, so we have thresholds defining what is still acceptable at N+0. Here we simply tightened the SSD threshold from 98% to 95% at N+0. CPU, RAM, and SSD are now all 95% and HDD is 90%.
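
A minimal sketch of what an N+0 utilization check could look like with the thresholds listed above (illustrative only, not Sizer’s actual validation code):

```python
# Acceptable utilization at N+0 (all nodes available), per resource.
N0_THRESHOLDS = {"cpu": 0.95, "ram": 0.95, "ssd": 0.95, "hdd": 0.90}

def n0_violations(utilization: dict) -> list:
    """Return the resources whose projected utilization exceeds the threshold."""
    return [res for res, used in utilization.items()
            if used > N0_THRESHOLDS.get(res, 1.0)]

print(n0_violations({"cpu": 0.91, "ram": 0.88, "ssd": 0.97, "hdd": 0.85}))
# -> ['ssd']
```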

 

  • Warning/suggestion to enable Erasure Coding for Files workloads. There is concern in the field about the safety of using ECX in workloads. However, Files Storage is often quite large (200TiB+) with very few writes (which is where the concern with ECX lies). In this case the savings with ECX can be very significant yet low risk, so we give you a warning that you are opting out of a tremendous cost savings. You can keep ECX ON or OFF.

 

  • VDI profile type enhancement for Collector-imported workloads. In VDI we have always had the notion of user profiles like task workers or power users. However, when pulling in actual customer workload data from Collector, we do not need a profile type to come up with the workload requirements; we know that from the actual data. Hence, when sizing VDI through Collector, we skip the profile type attribute.

 

  • Complete Sizer proposal (one presentation file). Now there is a single slide set to use with your customer.

 

  • Detailed errors for manual sizing validations (invalid node). We enhanced our error messages in Manual mode.

 

In the first sprint in June we had the following key enhancements:

 

File Services – Major update to File Services.

 

First, Home Directories and Department Shares have been combined into File Storage. They are often large workloads with few writes. Most were home directories, so it made sense to combine them given the similarities. The one big change is that we now derive the working set.

 

Application Storage went through a major update. We now have random I/O as well as the sequential I/O we had previously. We ask for throughput and the time window for keeping data in hot storage, and then derive the working set. This update reflects all the latest lab testing.

Here is the info:

https://sizer.nutanix.com/#/help/topics/17

Oracle sizing improvements – We now go with the fastest CPU in a node. CPUs have to be >= 60 SPECint anyhow, but now we go with the fastest.