Era Support in Sizer

 

What is Era support in Sizer

Sizer addresses both the sizing and the licensing aspects of using Era to manage the databases defined in Sizer.  For a long time you could size either Oracle or SQL databases a customer may want to run on a Nutanix cluster.  With Era you can manage those databases, but also set up data protection policies and manage clones.  When Era is turned on for either Oracle or SQL workloads, Sizer does the following:

  • Determines the licensing required for the Oracle or SQL VMs defined in Sizer. Era licensing is VCPU based, so this is the number of VCPUs under management
  • Determines all the sizing requirements for the data protection policy defined in the workload, including Time Machine requirements
  • Determines the cloning requirements (if enabled) for either database-only clones (just storage) or database plus VM clones (entire database VM clone)
  • Determines the sizing requirements for the Era VM itself
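Since Era licensing is counted per VCPU under management, the license count is a simple multiplication. A minimal sketch (the function name is illustrative, not Sizer's actual API):

```python
def era_licensed_vcpus(num_vms: int, vcpus_per_vm: int) -> int:
    """Era licenses are counted per vCPU under Era management."""
    return num_vms * vcpus_per_vm

# The example used in the next section: 10 Oracle VMs x 8 VCPUs each.
print(era_licensed_vcpus(10, 8))  # -> 80
```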

Era License/Sizing

  • Let’s say you just want to buy Era for the Oracle workloads but not snapshots or clones. (The next sections cover the data protection policy and cloning.)  So here we just want to add the Era licenses
  • Here is the setting in the Oracle workload. We are saying we want Era for all 10 Oracle VMs, and each VM has 8 VCPUs.  Coincidentally the VCPU:pCore ratio is 1:1 and so 8 cores per VM, but Era licensing is by VCPUs

  • Here is the budgetary quote, and indeed it shows 80 VCPUs must be licensed.

  • Here is the Era sizing. We do add the VM that runs Era itself, which is lightweight

 

Era Data Protection including Time Machine

  • To invoke data protection Era must be enabled and the licensing is scoped as described above.
  • Sizer will now let you define the data protection policy you would define in Era and figure out the sizing requirements.
    • Daily database change rate can be either a % or GiB; it is the amount of change per day for the databases defined in the workload (the database VMs defined in the workload)
    • Daily log size is either % or GiB. This is used by Time Machine to allow continuous recovery for the time frame specified.  All transactions are logged, and Time Machine can roll back to a given point in time
    • Continuous Snapshots is in days
    • Daily, Weekly, Monthly, and Quarterly are the number of snapshots kept at those frequencies
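The retention settings above drive the snapshot capacity math. A rough, illustrative sketch of how retained snapshots and RF overhead could add up (the incremental-change assumption and RF handling here are simplifications, not Sizer's actual model):

```python
def snapshot_raw_gib(daily_change_gib: float, daily: int, weekly: int,
                     monthly: int, quarterly: int, rf: int = 2) -> float:
    """Very rough snapshot capacity estimate: assume each retained snapshot
    holds the data changed over its interval (1, 7, 30, or 90 days), then
    apply the cluster replication factor. Illustrative only."""
    logical = daily_change_gib * (daily * 1 + weekly * 7
                                  + monthly * 30 + quarterly * 90)
    return logical * rf

# 10 GiB/day change, keeping 7 daily / 4 weekly / 12 monthly / 4 quarterly
# snapshots at RF2.
print(snapshot_raw_gib(10, 7, 4, 12, 4))  # -> 15100.0
```

Note how the quarterly term (4 × 90 days) dominates, which matches the observation below that quarterly snapshots add a lot of storage.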

  • Here are the sizing results.
    • Era VM – the logs are kept by the Era VM in SSD. This is what Time Machine uses for continuous recovery
    • The other snapshots are put in cold storage and, like anything stored in a cluster, carry RF overhead (here it is set to RF2)
    • Note that the quarterly snapshots add a lot of storage

Era Database Only Clones

  • You can define a cloning policy in Era, and thus in Sizer so it can calculate the sizing requirements
    • Define the number of clones for all the database VMs in the workload. Here we had 10 VMs and 2 clones per VM
    • Clone daily change rate – the % or GiB changed each day, typically by the developers using those clones
    • Refresh rate – at some point (in days) organizations typically refresh the clones with new data, so this represents the maximum time the clones are kept
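Under a simple model, the storage these database-only clones consume grows with the daily change rate until the refresh resets it. An illustrative sketch (not Sizer's actual formula):

```python
def db_clone_storage_gib(num_vms: int, clones_per_vm: int,
                         daily_change_gib: float, refresh_days: int) -> float:
    """Worst-case clone storage just before refresh: each clone has
    accumulated refresh_days worth of daily changes (simplified model)."""
    return num_vms * clones_per_vm * daily_change_gib * refresh_days

# 10 database VMs, 2 clones each, 5 GiB/day change, refreshed every 30 days.
print(db_clone_storage_gib(10, 2, 5, 30))  # -> 3000
```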

  • Here is the sizing. Note that Era DB Only Clone is added in the workload summary, and just capacity is added.  The calculations from the Era data protection policy are not impacted

 

Era DB plus VM clones

  • Here we add clones of the VMs themselves, and so the storage plus the VMs
    • Define the number of clones for all the database VMs in the workload. Here we had 10 VMs and 2 clones per VM
    • Clone daily change rate – the % or GiB changed each day, typically by the developers using those clones
    • Refresh rate – at some point (in days) organizations typically refresh the clones with new data, so this represents the maximum time the clones are kept
    • VCPUs per VM. In the workload we defined a database VM needing 8 VCPUs; if this clone is for test/dev it could be less
    • VCPU:pCore ratio. In the workload it is 1:1, but for test/dev 2:1 is more common
    • RAM per VM is also needed

  • Here is the sizing. Note that Era DB Plus VM Clone is added in the workload summary.  Where the Era DB Only Clone just added capacity, the Era DB Plus VM Clone adds VMs.
    • 20 VMs were added, as we have 10 VMs in the workload and asked for 2 clones per source database
    • 80 cores are needed, as those 20 VMs need 8 VCPUs each but we specified a 2:1 VCPU:pCore ratio. Thus 160 more VCPUs but just 80 cores.  Note those VCPUs are added into the Era licensing, as Era is managing them.
    • We need 2.5 TiB of RAM, as we have 20 VMs and each needs 128 GiB
    • Capacity is the same as for the DB-only clone, since the settings are the same
    • The calculations from the Era data protection policy are not impacted
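The arithmetic behind these results is straightforward; a quick check of the numbers above:

```python
# Inputs from the example: 10 source VMs, 2 clones each, 8 VCPUs per clone,
# 2:1 VCPU:pCore ratio, 128 GiB RAM per clone.
num_vms, clones_per_vm = 10, 2
vcpus_per_clone, vcpu_pcore_ratio = 8, 2
ram_gib_per_clone = 128

clone_vms = num_vms * clones_per_vm              # 20 VMs added
clone_vcpus = clone_vms * vcpus_per_clone        # 160 VCPUs (also added to Era licensing)
cores = clone_vcpus // vcpu_pcore_ratio          # 80 physical cores
ram_tib = clone_vms * ram_gib_per_clone / 1024   # 2.5 TiB of RAM

print(clone_vms, clone_vcpus, cores, ram_tib)  # -> 20 160 80 2.5
```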

 

 

Sizer 4.0 Introduction

Welcome to Sizer 4.0

 

What is Sizer 4.0

It is based on Sizer 3.0, which has been around for a year and has had over 120,000 sizings.  It is a shift in how to think about defining the best solution for the customer.

Workloads first -> Solution second

Here you spend the bulk of your time on the customer requirements, and the solution with hardware is secondary.

Sizer 4.0 Demo

 


Why Sizer 4.0

  • Nutanix is now a software company. Customers buy our clusters to run their key applications, be it VDI, Oracle, Server Virtualization, etc.  Hardware is then a secondary concern, and many vendors are supported.  Given that shift it makes sense to:
    • First focus on the workloads. Work in Sizer interactively with the customer to define their requirements with precision
    • Then focus on the solution, which here means pulling in the hardware. With Sizer 4.0, you can stay in the same scenario and switch from vendor to vendor
  • Advent of Collector-driven sizing. Collector is a bit over a year old, and with 2.x it is a formidable collection tool for customers with either VMware clusters or Nutanix clusters.  There are many advantages to Collector-driven sizing
    • Ultimate precision. Though our workload profiles have their place, it is far better, for example, to gather 7 days of performance data for 1000 VDI users than to assume they fit in one of a few profiles.
    • Today we group VMs into 25 different buckets, but in the near future we will allow each VM to be its own workload, and then you could have 100s of workloads

How it works

Create the scenario

Just like before, you enter the name of the scenario, customer, opportunity, and the objectives.  Don’t forget that the objectives are in the BOM for customer review.

Create the Workloads

 

The first page is the Workloads tab.  You start with an empty canvas and can add workloads

  • Add will allow you to get to the workload modules.
  • Import will allow you to import from tools like Collector

All the same workloads are there for you.

Alternatively, you can just add them via Collector.

 

So here a couple of workloads were added manually.

 

Worth noting: on the right are the 3 dots indicating actions like edit workload or clone workload.  On the left is the slider to disable the workload.  Gone are the little tiles.  Tiles simply won’t do if you have 100s of workloads.  We had them in Sizer 2.0, when a typical scenario had 1 to 3 workloads.  The tiles were kind of cute then, but they are unusable for where we see things going.

So here I made it more interesting by using Collector to size a lot of VMs.  I want precision, so I selected performance data and the 95th percentile.  This means that over a 7-day period we size to fit at least 95% of all the workload demands.  Remember, it takes Collector only about 10 minutes to run at a customer site, and it gets all its data from either vCenter or Prism.

So now we have sized 100s of VMs and put them into groups.  You can see that it is a lot easier to review as a table than as a bunch of tiles.
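Percentile-based sizing like this can be sketched with a simple nearest-rank computation (Collector's exact percentile method isn't specified here, so this is illustrative):

```python
import math

def percentile(samples: list, pct: float) -> float:
    """Nearest-rank percentile: the smallest sample such that at least
    pct% of all samples are at or below it."""
    s = sorted(samples)
    k = max(0, math.ceil(pct / 100 * len(s)) - 1)
    return s[k]

# Sizing 100 demand samples to the 95th percentile ignores the top 5%
# of peaks instead of sizing for the absolute maximum.
demand = list(range(1, 101))
print(percentile(demand, 95))  # -> 95
```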

Finalize the solution

Go to the Solution tab on the left and you see the solution.  Here it is for Nutanix.  You see all the familiar things like the dials, the sizing summary, and all the options to edit the hardware.

Want to go to a new vendor?  It is easy!  Pull down on the upper right and get to HPE DX, for example.  Here is the same sizing but for DX.

 

October 2019 Sprints

Sizer 4.0

Here is the info –

Sizer 4.0 Introduction

 

Sizing improvements

  • Show Extent Store for N+1 in Sizing Details. We show the Extent Store and here discount the largest node to accommodate N+1
  • Minimum 8 cores per socket when NVMe is included in a model. This is a new request from Engineering, so we moved fast.  For NX this affects the NX-3170-G6 (NVMe+SSD), NX-8035-G6 (NVMe+SSD), and NX-8155-G6 (NVMe+SSD), where C-CPU-6128 is no longer valid

Use Cases

  • Server Virtualization enhancement. We will be enhancing our workloads with the rich data we get from Collector.  Right now with Server Virtualization we default to a VCPU:pCore ratio of 6:1.  At times that is too much core overcommit and at times too little.  Now, when we gather data on the customer’s source clusters, we will determine the ratio that actually exists and use it to create the workloads.  We also know the processor used in those clusters, so precise requirements come into Sizer from the combination of the known processor and the current overcommit ratio.  You can of course tweak that.

Business functionality

  • Discounting for SW licenses in budgetary quotes

Product Alignment

  • 3155G-G7 model and parts addition
  • XC740-xd 24 update
  • HPE DX 380 G10 8SFF (GPU model)
  • Moved 5055-G6 to manual
  • CPU 6226 update for Lenovo

September 2019 Sprints

Sizing improvements-

  • Updated the snapshot frequency for dense nodes. With 5.11.1, the limits for hourly snapshots have moved from 32TB to 64TB for Hybrid and from 48TB to 80TB for All Flash. Below these capacities we support hourly snapshots; above them we support snapshots every 6 hours.  This updates what we did in July.
  • SSD N+0 threshold for Hybrid is now 90% (except for Files Dedicated or Objects Dedicated). Given the drop in SSD prices it is best to be more conservative (was 95%) and this matches the HDD threshold.
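The dense-node snapshot rule in the first bullet above can be sketched as a simple lookup (whether the limit itself is inclusive is an assumption here):

```python
def snapshot_interval_hours(capacity_tb: float, all_flash: bool) -> int:
    """Minimum supported snapshot interval per the 5.11.1 limits described
    above: hourly up to 64TB (Hybrid) or 80TB (All Flash), else every 6 hours.
    Boundary handling (<=) is an assumption, not a documented rule."""
    limit_tb = 80 if all_flash else 64
    return 1 if capacity_tb <= limit_tb else 6

print(snapshot_interval_hours(60, all_flash=False))  # -> 1
print(snapshot_interval_hours(70, all_flash=False))  # -> 6
```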

Use Cases-

  • Era Support in Sizer – This was a heavy lift developed over a couple of months.  It should be big for SEs given Era’s momentum.  Era is now supported in Sizer for both Oracle and SQL workloads.  Era snapshots are available: both the continuous snapshots for the Era Time Machine and the traditional daily, weekly, monthly, and quarterly snapshots. Era cloning is also supported, both clones of the database only (storage) and new database VMs (CPU, RAM, and SSD).  To complete the picture, the licenses are added to the BOM, the budgetary quote, and the actual quote to SFDC
  • Several Files 3.6 updates. The user connection assumptions in Files Storage have improved.  In Files Storage we now assume 100% concurrent users (was 50%) and a VCPU:pCore ratio of 1:1 (was 2:1).  Given the typically low core requirements for Files Storage, this should not make a big impact

 

Product Alignment

  • Multicomponent support. This is the ability to have more than one component of the same component type, for example two different SSDs.  Traditionally Sizer assumes one component SKU per component type.  Our first step is support for multiple NIC cards in HPE DX.
  • Added the 7.68TB SSD in Storage calculator
  • Several product updates

August 2019 Sprints

Sizing improvements

  • Processor pull-down as workload input. This is cool. For all of Sizer's history, when you enter cores or so much MHz for workload inputs, we assume a baseline processor. Now you can tell Sizer to assume a specific processor, with Ivy Bridge, Haswell, Broadwell, Skylake, and Cascade Lake processors to choose from.  The benefit is more precision if you tell Sizer to refer to the processor the customer is actually using
  • Cascade Lake updates (specInt values) for all vendors. We had updated Nutanix in a prior sprint and have now covered the rest of the vendors.
  • Compression changes. From extensive analysis of the compression ratios found in the installed base, we have gone a bit more conservative depending on workload.  Also, hover over the slider and you see both the percent savings and the ratio, like 50% and 2:1
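The percent savings and the ratio shown on the slider are two views of the same number; the conversion is simple arithmetic:

```python
def savings_to_ratio(savings_pct: float) -> float:
    """50% savings == 2:1 compression ratio."""
    return 1 / (1 - savings_pct / 100)

def ratio_to_savings(ratio: float) -> float:
    """2:1 compression ratio == 50% savings."""
    return (1 - 1 / ratio) * 100

print(savings_to_ratio(50))   # -> 2.0
print(ratio_to_savings(4.0))  # -> 75.0
```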

Use Cases

  • Objects updates. We have had Objects (was Buckets) for 6 months now and got the latest updates from Engineering
  • The Files and Buckets SKUs are updated with the new SKUs. For example, Files Pro is now Files Dedicated.  Also updated the Files license names in the Files workload.
  • File Analytics. You can include that now in the Files workload sizing.  No specific licenses required and available to both Files and Files Dedicated (Pro).
  • Changed name of Buckets to Objects

Business functionality

  • Very cool: we have a new BOM. It had never been updated in a big way, and we went all out. Even the dials from the UI are in the BOM.  I believe it is a much more professional document to share with your customer
  • New SFDC rules on what you can combine in a quote. SFDC has stricter rules now, and we made Sizer follow them.

Product Alignment

  • Various product updates for NX-8155-G7, XC740xd-12 minimum drive count is now 4, and DX default NIC.
  • NX-8155-G7 is now live in Sizer
  • HPE-DX Robo model – The 1-socket DX360 4LFF is considered a Robo model and so can be 1 or 2 nodes

July 2019 Sprints

Sizing improvements

  • Update Rules as per compatibility matrix (G6 Models)
  • GPU sizing parity between auto and manual
  • Async snapshots for large nodes – Large nodes, currently defined as hybrid nodes with 40TB or more of storage (HDD and SSD combined) or all-flash nodes with 48TB or more, have limits on hourly snapshots.  Sizer enforces the following for auto or manual sizing: either the customer is willing to have 4 or fewer hourly snapshots per day, or the cluster needs to be resized with smaller nodes to keep under 40TB of storage
  • Auto/Manual parity – minimum nodes for workloads. There are a few gaps where Auto has more rules than Manual, and we want to close those.  Some workloads have requirements on node counts.  For example, Buckets can have 32 nodes max and can require a certain number of nodes to process requests.  Files typically requires a minimum of 3 (yes, it can do 2 nodes with 1175S).  Now we test for these types of conditions in Manual too

Use Cases

  • Calm – we now have their bundles
  • Updated the Oracle specInt minimum down to 55 (was 60)
  • Updates to Buckets sizing (removing the versioning and clients fields) – We have had Buckets out for about 6 months, got feedback, and came up with ideas to streamline the sizing approach

UX

  • A long-term request is the ability to hide scenarios in your Dashboard. We want SEs to work interactively with their customers using Sizer to build out the requirements.  It is awkward if the customer sees other customers' scenarios.  Now you can easily show just what you want. By default, all scenarios are displayed, but there is now a Custom View button.  Click on that and you can quickly hide all the scenarios and select just what you want to show.

Product Alignment

  • Fujitsu – XF core
  • Inspur NF5280M5 12LFF
  • New OEM- inMerge1000 – Inspur
  • Lenovo – NVMe drives: HX7820 & HX7520; GPUs: HX3520-G
  • Improved HCL integration – Downstroking for SSDs (80GB per drive set aside). More important, we now test whether a component is End of Sale.  The HCL has a long list of components that are supported, but not all of them are currently for sale, and we exclude those in Sizer

Processor input for workload(s)

What is this feature all about? 

Now Sizer provides an option to select the type of processor the workload (existing or proposed) is running on. This gets factored in while sizing for the workload adding precision to the sizing and overall recommendation.

To give an example of how it helps: an existing workload (say Server Virtualization, 100 VMs) running on a weak processor (say a Haswell E5-2699 v3, specInt 38.58) would require fewer cores in sizing than the same 100 VMs running on a high-performing CPU (like a Skylake 8156, specInt 68.85).

Previously, the processor for the existing workload was not taken into account; Sizer always used a baseline processor [E5-2680 v2].  So whether the current workload was running on the slowest processor or the fastest one, the sizings remained the same.

With this new addition, there is a lot more precision added to sizing as we account for the incremental changes due to different type of processors.

 

How do we handle the processor input during sizing? 

Here is an example: input processor Broadwell E5-2690 v4 [46.43 specInt]

  • Let’s say sizing comes to 32 cores
  • This sizing is at the baseline [E5-2680 v2, 42.31 specInt] – the Sizer default used until now
  • This has to be adjusted against the input processor, the E5-2690 v4
  • 32 × [46.43/42.31] = 35.11
  • The way to read this:
    • If your existing processor were the E5-2680 v2 (42.31), the workload would require 32 cores
    • Since your existing processor (E5-2690 v4) is stronger than the baseline (specInt-wise), you need proportionally more cores
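The adjustment above is a straight specInt scaling. A sketch (the constant and function names are illustrative, not Sizer internals):

```python
BASELINE_SPECINT = 42.31  # E5-2680 v2, the Sizer baseline described above

def adjust_cores(baseline_cores: float, input_specint: float) -> float:
    """Scale a baseline-processor core count by the input processor's
    specInt relative to the baseline."""
    return baseline_cores * input_specint / BASELINE_SPECINT

# The worked example: 32 baseline cores on an E5-2690 v4 (46.43 specInt).
print(adjust_cores(32, 46.43))  # ~35.1, the 35.11 figure above
```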

 

Where do we select the processor input for the workload? 

In the page where we give the workload name and select the type of workload, there is a dropdown to select the processor the workload is running on.

Currently we support only one processor type per workload; however, sometimes a workload can be running on mixed CPUs. In that case, it is advisable to go with the better-performing of the two processors.

Please note: this feature only deals with sizing based on the selected processor. It has no influence on the type of processor chosen for the recommended hardware. The HW recommendation continues to be driven by the optimal HW solution for the resources required (cores/flash/capacity).

June 2019 Sprints

The second sprint in June has  following key items

  • HPE DX. We went live with this last week, so a reminder that Sizer has the DX models.
  • SSD N+0 threshold changed to 95%. In automatic or manual sizing we need to know the acceptable N+0 level for a cluster in regards to CPU, RAM, HDD, and SSD utilization.  Thus we have thresholds to define what is still acceptable at N+0. Here we simply tightened the SSD threshold from 98% to 95% at N+0.  CPU, RAM, and SSD are now all 95%, and HDD is 90%
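The thresholds listed here can be expressed as a simple per-resource check (an illustrative sketch, not Sizer's actual implementation):

```python
# N+0 utilization thresholds as of this sprint: CPU/RAM/SSD at 95%, HDD at 90%.
N0_THRESHOLDS = {"cpu": 0.95, "ram": 0.95, "ssd": 0.95, "hdd": 0.90}

def n0_acceptable(utilization: dict) -> bool:
    """True if every resource is within its acceptable N+0 utilization."""
    return all(utilization[res] <= limit
               for res, limit in N0_THRESHOLDS.items())

print(n0_acceptable({"cpu": 0.90, "ram": 0.80, "ssd": 0.94, "hdd": 0.89}))  # -> True
print(n0_acceptable({"cpu": 0.90, "ram": 0.80, "ssd": 0.94, "hdd": 0.92}))  # -> False
```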

 

  • Warning/suggestion to enable Erasure Coding for the Files workload. There is concern in the field about the safety of using ECX in workloads.  However, Files Storage is often quite large (200TiB+) with very few writes (which is where the concern for ECX lies).  In this case the savings with ECX can be very significant yet low risk.  Thus we give you a warning that you are opting out of a tremendous cost savings.  You can keep ECX ON or OFF

 

  • VDI profile type enhancement for Collector-imported workloads. In VDI we have always had the notion of user profiles, like task workers or power users.  However, when pulling in actual customer workload data from Collector, we do not need a profile type to come up with workload requirements; we know that through actual data. Hence, while sizing VDI through Collector, we skip the profile type attribute.

 

  • Complete Sizer Proposal (one presentation file). Now there is one slide set to use with your customer

 

  • Detailed errors for manual sizing validations (invalid node). We enhanced our error messages in Manual

 

In the first sprint in June we had following key enhancements

 

File Services – Major update to File Services.

 

First, Home Directory and Department Shares have been combined into Files Storage.  They are often large workloads with few writes.  Most were home directories, so it made sense to combine them given the similarities.  The one big change is that we now derive the working set.

 

Application Storage went through a major update.  We now have random I/O as well as the sequential I/O we had previously.  We ask for throughput and the time window for keeping data in hot storage, and then derive the working set.  This update reflects all the latest lab testing.

Here is info

https://sizer.nutanix.com/#/help/topics/17

Oracle Sizing Improvements – We now go with the fastest CPU in a node.  CPUs have to be >= 60 specInt anyhow, but now we go with the fastest.

Cold tier data adjusted in Hot tier

Sizer always finds ways to propose the most optimal solution in terms of resources and cost.

As part of this effort, in certain cases Sizer adjusts workload data supposed to sit on the cold tier storage (HDDs) onto the hot tier storage (SSDs).

This happens when there is surplus SSD capacity in the BOM that is unutilized. The unutilized flash capacity is used for cold tier data if it helps reduce the overall number of nodes, or if it helps avoid adding additional disks to meet a large HDD requirement when the extra SSD capacity can meet the same need. The defined threshold levels are maintained and do not change for this adjustment.
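The decision described above can be sketched as a simple capacity check (names and logic are illustrative, not Sizer's actual algorithm):

```python
def cold_overflow_to_flash(cold_need_tib: float, hdd_capacity_tib: float,
                           ssd_surplus_tib: float) -> float:
    """Return how much cold-tier data (TiB) to place on surplus flash:
    the HDD overflow, but only if the surplus SSD capacity can absorb it
    entirely (otherwise more HDDs would be needed anyway)."""
    overflow = max(0.0, cold_need_tib - hdd_capacity_tib)
    return overflow if overflow <= ssd_surplus_tib else 0.0

# 50 TiB of cold data, 40 TiB of HDD, 15 TiB of unused SSD:
# the 10 TiB overflow fits on flash, avoiding extra disks or nodes.
print(cold_overflow_to_flash(50, 40, 15))  # -> 10.0
```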

The same can be seen in a separate row in the calculation table and the Sizer BOM.  Sample below:

 

May 2019 Sprints

We launched our second sprint for May and it is a BIG one

Key Enhancements

Super Auto – Automatic goes through all the options and finds the optimal sizing.  Often SEs will say Auto is a good start but they want to play with it: increase or decrease nodes and see the impact on the dials; increase or decrease cores, RAM, etc. and again see that impact.

Well, now we have Super Auto: right where you see the recommendation and the dials, you can make those changes and get the updated dials.  Better yet, you see the % change in cost vs. optimal.  Manual is still there, but now you can do a lot more in Automatic sizing.

Here is info and snapshots

https://sizer.nutanix.com/#/help/articles/592

 

Proposals – We have had the Sizer screen shots for a few months now, but we worked with the Field Enablement team and Product Marketing to deliver the latest corporate overview and product/solution benefits.  You don’t have to hunt around for the latest.  Do your work in Sizer and the latest PowerPoint is available under Download Proposals.  We see this evolving, but you can be assured you have the latest.

Oracle updates – Oracle is often used in larger enterprise applications, and Oracle charges for all cores in a node running Oracle.  Given that, we now require high-speed processors (greater than or equal to 60 specInt, which is about 3GHz) but do allow a VM to cross sockets in a node.  This way you can have a large Oracle VM and know it will be fulfilled with a high-speed CPU with fewer cores, giving you a higher-end system and lower Oracle license costs.

Miscellaneous

  • Heterogeneous cluster support for N+1 in the Storage Calculator.  We had been taking one node off the first model defined; now we take it off the largest node in the cluster.
  • Appliances can have add-ons like Prism Pro, Flow, and Calm.  In the past this was only for decoupled nodes

First sprint for May

Key Enhancements

Backup Sizing.  Now all workloads in Nutanix sizings can have backups in a separate backup cluster.  You can define the backup policy and target either Files Pro or Buckets Pro. The intent is that the backups live in the backup cluster, managed by 3rd-party backup software.  Sizer sizes for the backups, includes either Files Pro or Buckets Pro, and allocates space for the backup software.  In the near future, there will be dedicated backup hardware that can be used in the backup cluster instead of Files Pro or Buckets Pro.  Here are the details

https://sizer.nutanix.com/#/help/articles/585

Miscellaneous

  • Compression is now allowed for Starter license
  • Buckets can have RF3.
  • Robo VM can now have more than 2 TiB of storage

Collector and Tools

  • VDI workloads created by Collector are now better fine-tuned to meet different usage levels