August 2021 release (CVM explanation)

CVM cores – What has changed and Why?

How does Sizer assign CVM cores up until today (Aug 2021)?

Sizer allocates resources to the CVM as part of the sizing exercise. Here, we will look at CVM cores specifically.

Sizer allocates CVM cores based on a combination of factors – workload type (4 for Server Virt, 12 for Databases, etc.), node type (higher CVM cores for NVMe nodes, for example), or guidance for certain features (rules around Async/NearSync, etc.).

However, while attributing cores to the CVM, Sizer used 'effective cores', meaning SPECint-adjusted cores rather than the actual physical cores available to the CVMs.

For example:

Let's say Sizer allocated 4 cores to the CVM. These are 'effective cores', which are SPECint adjusted.

As seen in the table below, this many 'effective cores' are attributed to the CVMs in the sizing stats table.

7 nodes and 4 CVM cores per node: 7 x 4 = 28 cores (effective cores)

Let's say the recommended node has the Gold 5220 CPU.

So, translating the effective cores to physical cores:

28 'effective cores' is approximately 22.4 physical cores (adjusting for the SPECint rating of the Gold 5220)

22.4 / 7 = 3.2 physical cores per CVM

So the CVM gets roughly 3 physical cores, which is on the lower side and can cause performance issues.
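To make the arithmetic concrete, here is a minimal sketch of the old behaviour. It is an illustration only: the 1.25 effective-per-physical scaling factor is an assumed value chosen so the numbers reproduce the example above, not Sizer's internal SPECint data.

```python
# Illustrative only: pre-Aug-2021, Sizer allocated CVM cores as "effective"
# (SPECint-adjusted) cores, so the physical cores actually left for the CVM were fewer.
NODES = 7
CVM_EFFECTIVE_CORES_PER_NODE = 4       # what Sizer allocated (effective cores)
SPECINT_SCALING_GOLD_5220 = 1.25       # assumed effective-per-physical ratio (illustrative)

total_effective = NODES * CVM_EFFECTIVE_CORES_PER_NODE        # 28 effective cores
total_physical = total_effective / SPECINT_SCALING_GOLD_5220  # ~22.4 physical cores
physical_per_cvm = total_physical / NODES                     # ~3.2 physical cores per CVM

print(f"{total_effective} effective -> {total_physical:.1f} physical "
      f"-> {physical_per_cvm:.1f} physical cores per CVM")
```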

As CPUs improve in performance (Ice Lake > Cascade Lake > Skylake), their SPECint ratings increase, which translates to even fewer physical cores for the CVM.

What has changed?

CVM core allocation is now based on physical cores (not effective cores).

So, when Sizer assigns 4 cores, they are 4 physical cores (of the Gold 5220 in the example above), not effective cores.

Following up from the previous example (refer to the image below):

The tooltip shows the total physical cores assigned to the CVM: 7 x 4 = 28 cores

For the rest of the calculations in the sizing stats, these are converted to effective cores, so 28 physical cores = 33.5 'effective cores'.

Note: Depending on the processor, the effective cores value (shown in red here as -33.15) can be as much as 50-60% higher than the physical core count (for example, for high-end Cascade Lake refresh or high-end Ice Lake processors), further validating the point that the CVM would otherwise get fewer underlying physical cores (and hence the change).
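A matching sketch of the new behaviour, again with an assumed, illustrative scaling factor (which is why the result lands near, rather than exactly on, the 33.5/33.15 figures quoted above):

```python
# Illustrative only: post-Aug-2021, the CVM gets a fixed number of physical cores;
# only the sizing-stats view converts them back to effective cores.
NODES = 7
CVM_PHYSICAL_CORES_PER_NODE = 4
SPECINT_SCALING = 1.2                  # assumed effective-per-physical ratio (illustrative)

total_physical = NODES * CVM_PHYSICAL_CORES_PER_NODE     # 28 physical cores
effective_for_stats = total_physical * SPECINT_SCALING   # ~33.6 effective cores in the stats table

print(f"{total_physical} physical cores shown as {effective_for_stats:.1f} effective cores")
```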

What is the impact on sizing as a result of this change?

Sizings are more robust: the CVM allocation helps remove CVM performance bottlenecks while aligning with Foundation recommendations.

With more high-end processors having significantly higher SPECint ratings (Cascade Lake refresh and now Ice Lake), the gap between effective cores and physical cores is widening. This change ensures that while UVMs take advantage of the better processor capabilities, the CVM gets the cores it requires for optimal performance and does not introduce latency issues.

Understandably and expectedly, this change will increase the core requirement for the cluster (compared with previous sizings). For an existing sizing this is most visible when cloning, since the new CVM minimums are applied to the cloned scenario, leading to higher utilization on the CPU dials. This is the tradeoff for a higher CVM allocation and a more robust, performance-optimal sizing scenario.

July 2021 sprints (Thresholds)

Sizer Thresholds – What has changed and Why?

 

What are thresholds in Sizer? 

Sizer has a feature called thresholds. These are defined individually for each sizing resource: cores, memory, SSDs, HDDs and GPUs (wherever applicable). Thresholds ensure that the total resources available in the nodes (cluster) are sufficient to meet the workload requirements while also accounting for some buffer for unforeseen surges in workload applications.

What has changed in thresholds?

Up until July 2021, the threshold defaults across cores/memory/SSD/HDD were 95%, as can be seen (and modified) under the policy screen shown below.

Note that the default was set to 95%, which is also the maximum allowed. Users can choose a lower threshold (a more conservative sizing with more buffer for future spikes). However, under no circumstances did Sizer allow going higher than the default (greater than 95%), so as to provide a 5% margin for accommodating sizing errors/estimates and workload usage uncertainties.

Starting August 2021, Sizer changes the defaults for these thresholds to 85% across ALL resources (cores/memory/SSDs/HDDs), as shown below.

Note that the defaults have moved left to 85%; however, the maximum allowable utilization of the cluster resources still remains at 95%.

Why the change?

Having the maximum allowable and default both at 95% at times did not provide enough margin for sizing estimate errors or unforeseen workload usage or spikes, as only 5% was left. Given that making accurate estimates is hard, we felt it was prudent to provide more slack with an 85% threshold.

To be clear, though, many sizings have been done successfully at the old 95% level. This move was also supported by Sizer users doing manual sizings, who often opted for more slack. The change was made out of prudence rather than in response to any sizing issue.

When is it best to leave it at the 85% threshold?

We feel this is the more prudent level for most sizings. It allows more room for estimate errors and, for that matter, customer growth.

When might it be fine to go to the 95% threshold?

Certainly, numerous sizings have been done with the 95% threshold and customers were happy, and we still allow 95% as the threshold. These are N+0 thresholds, so at N+1 there is a lot more slack. The 95% level only comes into play when one node is taken offline, for example during upgrades. If the customer does upgrades during off-hours, their core and RAM requirements are far lower than normal and do not hit the higher threshold anyway. Again, we feel it is more prudent to leave it at 85%; going higher just means you need to be comfortable with your sizing estimates, especially when the cluster is at N+0 (during an upgrade).

What are the implications for existing sizings?

First, the new sizings:

All new sizings (effective 9th August 2021) will have default thresholds at 85%. Since this is a significant change that impacts ALL new sizings and ALL users (internal/partners/customers), a BANNER will be displayed prominently for two weeks for general awareness.

 

Implications for existing sizings:

There will be NO impact or implication for sizings created before 9th August 2021. Existing sizings continue with the default threshold of 95% and calculate the utilization percentages, N+0/N+1, etc. based on the previous default of 95%. Thus, there won't be any resizing or new recommendation for existing sizings; those sizings and their recommendations hold good for their scenarios.

Cloning an existing scenario: 

Cloning an existing sizing will be treated as a new sizing created after 9th August 2021, and thus the new sizing rules and default thresholds will apply.

One implication is that utilization percentages across the cluster resources will increase. This is because only 85% of the resources are now considered available for running the workload, as against 95% earlier. This unavailability, or in other words reservation, of an additional 10% of resources may drive a higher node count (or turn an existing N+1 solution into N+0) in some edge cases.
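As a rough illustration of that effect, the sketch below compares how much of a hypothetical cluster a workload can consume at the old 95% default versus the new 85% default; every number is made up purely for the arithmetic.

```python
# Illustrative only: how the threshold changes the resources "visible" to workloads.
total_cluster_cores = 400      # hypothetical N+0 cluster capacity
workload_cores_needed = 350    # hypothetical workload requirement

for threshold in (0.95, 0.85):
    usable = total_cluster_cores * threshold
    utilization = workload_cores_needed / usable * 100
    fits = workload_cores_needed <= usable
    print(f"threshold {threshold:.0%}: usable = {usable:.0f} cores, "
          f"utilization = {utilization:.0f}%, fits = {fits}")

# At 95% the workload fits (350 <= 380 cores); at 85% it no longer does (350 > 340),
# the kind of edge case that can push a cloned sizing to a higher node count
# or from N+1 to N+0.
```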

The user can choose to resize for the new defaults, which may lead to a higher node or core count. That is for the better, as explained above, providing margin for estimate errors and spikes. Alternatively, since it is a clone of an existing sizing that may already have been sold to the customer, the user can go to the threshold setting and move it back to the right, to 95%, which gives back the same recommendation as the original sizing.

Sizer 5.0

Sizer 5.0 is the latest version of Sizer, going live on 24 Feb 2021.

What's New?

Three major features in Sizer 5.0 :

1. Multi recommendation

Sizer now has an option to recommend more than one solution for a given workload, depending on the price range.

2. Sizer Policy

These are the recommended cluster settings based on the deployment environment for the cluster being sized. Sizer strongly recommends going with the default settings for the chosen environment; however, it allows you to make modifications to adjust to a given requirement.

3. Advanced Cluster settings

These are advanced filters to narrow down sizings to a more specific solution, providing greater flexibility and the ability to accommodate specific customer requests.

 

Sizer journey to 5.0:

From single workload to multi workload to multi cluster to finally multi recommendation with Sizer 5.0

 

Multi-era  for Sizer: 

 

  • Multiple Workloads
      • Bulk Edits – Ability to update, delete, disable, enable many workloads at once
      • Enable our next move towards Collector-driven sizing where Collector feeds Sizer with 100s of workloads to create the  most precise sizing

 

  • Multiple Clusters
      • Cluster Settings – Ability for each cluster to have its own settings for common characteristics
        • CPU speed, NIC, Max nodes, thresholds, etc
        • This allows each cluster to be optimized for specific workloads
      • Sizer Policy – Apply best practices defined by experts for different environments
        • Settings for Test/Dev, Production or Mission Critical Environments
  • Multiple Recommendations
      • Cluster Settings gives the user control, so Automatic Sizing gives the desired recommendation
      • Multiple recommendations then allow the user to play with the results
        • Cost Optimized, Compute Optimized and Storage Optimized solutions are provided
        • Each can be further tweaked by the user

 

Sizer 5.0 – Multiple Recommendations

 

Toggle between multiple recommendations that fit your cost tolerance

  • Cost Optimized – lowest cost (default option)
  • Compute Optimized – most cores within cost tolerance
  • Storage Optimized – most HDD/SSD capacity within cost tolerance

  • Cost Tolerance
  • This is an advanced setting in the cluster settings
  • Allows selecting a price delta (from the cheapest solution)
  • Triggers multiple recommendations within that price range (a rough sketch of this selection logic follows after this list)
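Purely to illustrate the selection idea, here is a small sketch; the solution list, its fields, and the percentage-based delta are assumptions for the example, not Sizer's actual data model.

```python
# Hypothetical candidate solutions: (name, price, total_cores, capacity_tb)
solutions = [
    ("A", 100_000, 192, 80),
    ("B", 112_000, 256, 90),
    ("C", 118_000, 224, 140),
    ("D", 150_000, 320, 200),
]

cost_tolerance = 0.20  # assumed: allow solutions up to 20% above the cheapest
cheapest_price = min(price for _, price, _, _ in solutions)
in_range = [s for s in solutions if s[1] <= cheapest_price * (1 + cost_tolerance)]

cost_optimized = min(in_range, key=lambda s: s[1])      # lowest cost (the default)
compute_optimized = max(in_range, key=lambda s: s[2])   # most cores within the price range
storage_optimized = max(in_range, key=lambda s: s[3])   # most capacity within the price range

print(cost_optimized, compute_optimized, storage_optimized)   # A, B and C here
```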

 

Sizer 5.0 – Policy

Why..

Different deployment environments might have different needs in terms of availability/resiliency and performance

What..

These are the recommended cluster settings based on the deployment environment. This brings consistency to sizings for a given environment.

How.. 

Each cluster has one of the policies below:

  • Test/Dev, Production, Mission Critical
  • The policy settings can be edited

Apply your own Sizer Policy for cluster characteristics

  • Maintenance Window requirements
  • Network speed
  • Minimum Compression
  • Other settings

Each cluster can then follow an operational policy

  • Test/Dev, Production, Mission Critical
  • You can edit the policy to better meet customer needs

Sizer 5.0 – “Customize” the auto recommendation

  • Tweak the auto recommendation through “Customize” option 
  • Allows incremental increase or decrease of the selected resource
  • Checks for valid/qualified combinations when tweaking (for example, if a tweak violates the SSD/HDD rule or a balanced memory config, it won't be allowed)
  • Shows the cost delta for the customized solution
  • Stay on the Solutions page while playing with options.

Sizer 5.0 – Advanced Cluster settings

Minimum CPU frequency:

  • This ensures the sizing recommends only processors above the quoted frequency
  • Helpful if the customer is keen on a certain range of processors for performance reasons

CVM/node

  • The values here will override the default CVM overhead applied by Sizer
  • Allows the customer to provision more cores/RAM to the CVM for performance-sensitive workloads

Cost Tolerance

  • Allows for a price delta (from the lowest-cost solution)
  • Recommends more than one (default) solution
  • Cost Optimized: default, lowest-cost solution
  • Compute Optimized: the most core-heavy solution in the price range
  • Storage Optimized: the most capacity-heavy solution in the price range

 

Short demos on Sizer 5.0 features:

Sizer 5.0/5.1 overview:

Multi Recommendation:

Maintenance Window:

Sizer Policy:

November 2020 Sprints

November 24  – Collector 3.2

Hi everyone, we just went live with Collector 3.2. The major highlight is the ability to run the tool in both local and remote mode for Hyper-V environments.

Hyper-V local support:
  • Collector now supports running the tool against a Hyper-V cluster directly from the Hyper-V hosts locally. The UI has an option to choose between Hyper-V (local) and Hyper-V (remote).
  • Collection can be done by downloading the tool onto any of the hosts that are part of the cluster we wish to collect data from and choosing Hyper-V (local) in the drop-down menu.
  • With both remote and now local collection options, this provides greater flexibility in switching modes in case of connectivity/access issues with the remote setup (particularly for Hyper-V, as it connects directly to cluster hosts and not to management APIs, unlike vCenter).
  • This version supports Hyper-V clusters in both local and remote mode. Support for standalone Hyper-V hosts (not part of a cluster) is planned.
Precheck:
  • A precheck script is bundled with the tool; it runs a few checks to see if the expected services are available and other prerequisites are satisfied.
  • Upon hitting the error screen, the tool will point to the script location; the script can be run on the host to get the relevant data.
Usability:
  • The login page now has a drop-down to choose the flow, that is, vCenter, Prism, Hyper-V (remote) or Hyper-V (local), and the default ports are populated upon selection.
  • VM Summary table – shows both consumed and provisioned storage across all the cluster VMs.
  • The tool now accepts hostnames (in addition to host IPs) for connecting to the Hyper-V host instance. The previous limitations have been removed.
  • Improved error messages/log enhancements
We now have a dedicated Collector page with the latest 3.2 bits and documents – User Guides, Release Notes – here:
https://portal.nutanix.com/page/downloads?product=collector
We went live with the latest sprint; below are the major highlights.

Proposals:
  • Updated slides on quarterly financials w/Q4
  • Now includes the Backup cluster/DR cluster details along with the primary workload cluster, including the config details and utilization dials
  • HW spec slide added for NX-Mine specific appliance : a subset of the standard NX HW spec
Sizing enhancement:
  • SQL workload supported on Nutanix clusters on AWS
Usability:
  • Bulk Edit: I/O input field options added for bulk edits for Server Virtualization and Cluster sizing (Raw)
  • Storage calculator updates including new drive options – 16TB HDDs [support for 320TB nodes]
  • Validator support for new NEC and KTNF platforms
  • Changes to Solutions summary UI – Cluster in a separate row/ consistent with workload summary UI
  • New partner roles added for partner specific HW vendor visibility
Product updates:
  • HPE DX: New AMD platform support – DX325 Gen10 8SFF
  • mCPU/lCPU-DIMM rule update across vendors
  • Dell XC: GPU-with-NVMe restrictions removed; now both can be in the same config

November 3

Hi everyone, we went live yesterday with the current sprint; below are the major highlights.

GPU Dials:
  • You will see a 5th set of dials – for the GPU – for nodes/workloads requiring GPUs, of course.
  • The dials show the utilization percentage and cluster failover considerations, just like for cores, RAM, etc.
  • The additional dial will feature in the BOM as well for GPU workloads
320 TB node support
  • For Objects and Files Dedicated workloads, the node limits now go up to 320TB (HDD) per node
  • The total capacity (including the SSDs) can go up to 350TB [16TB x 20 + 7.68TB x 4]
  • HPE DX4200 currently supports this configuration and it is supported in Sizer
Collector/RVTool import filter
  • During import, Sizer will filter out the CVM VMs when Collector or RVTools is run against a vCenter-managed Nutanix cluster.
  • CVM resources are added by Sizer anyway, so this helps avoid double accounting.
  • For Prism managed Nutanix clusters, the CVMs are filtered out by Collector itself.
Platform updates:
  • Two new NEC platforms : NEC Express R120h-1M & R120h -2M
  • A new vendor was added this release – KTNF, with their server model KR580S1-308N
  • New server platform for Inspur  : InMerge1000M5S
  • Updates to Fujitsu, Dell XC and Lenovo platforms

October 2020 Sprints

Oct 19

Hi everyone,
We went live last night with the latest sprint. Both big and small changes.

On small but good things:
  • Updated Oracle sizing to match the recent changes in SQL Server production cluster sizing. We already had a dedicated cluster as a requirement for Oracle, but now it is a 1:1 CVM with a total of 12 physical cores (yep, we want a lot of I/O capability) and a minimum of 14 cores and 52 SPECints.
  • Aligned the vCPU:pCore ratio when doing either configuration or performance sizing with Collector
  • Bulk edit can now be done for XenApp or SQL Server workloads
  • Robo model addition: NX3060-G7
  •  DX Mine appliance: 1.92TB SSDs – RI
On BIG things:

  • Sizer FINALLY has I/O!! Well, technically we had it for Files application sizing but not general-purpose use. We now have I/O performance sizing for both Cluster sizing (Raw) and Server Virtualization workloads. Where historically we would size for capacity, we can now size for I/O and capacity (whichever is the greater requirement).
Want to thank both  and  for all their hard work in getting the I/O effort going. There was a lot of testing and analysis to get this scoped. They both worked very hard and it is excellent work. This is what I love to see in Sizer, making it better for you all.

Here is the I/O panel in the workloads

Oct 6

Hi everyone, we went live with the current sprint; below are the major highlights.

SQL sizing enhancements – major changes to this one:

  • Changes to the Business Critical ON/OFF options and default settings. Default SQL sizing is now Business Critical.
  • Sizer allocates additional CVM cores (1:1 vCPU to pCore ratio) to aid performance for the Business Critical option.
  • A Business Critical SQL workload must be in a dedicated cluster, with only other SQL workloads. VDI, Server Virt, etc. are not allowed in the SQL dedicated cluster.
  • All-Flash or NVMe models only, with high-frequency processors for higher performance.

Budgetary quote: HPE DX

  • You can now generate a budgetary quote for sizings on DX. Earlier, the budgetary quote would show only the SW/CBL quote, but now HW BOM price estimates are also included.
  • The HW BOM quote covers the complete BOM, including PSUs, transceivers, chassis, etc., and includes HW support prices.

Files changes:

  • New Files SKUs with tiered pricing are supported now, including generating a Frontline quote through Sizer. Sizer's budgetary quote for Files is also updated with the newer SKUs and pricing approach.
  • Application storage – updated with the latest performance numbers across hybrid and AF nodes.
  • With increased throughput/IO per node, fewer nodes are needed than before for the same workload.
  • Defaults to 1x25GbE NIC for smaller nodes, 2x25GbE for larger nodes.

Collector/RVTools

  • You can now choose to size storage based on the VMs' consumed or provisioned capacity during import.

Usability

  • Era quoting in Frontline supported through Sizer
  • Bulk edit – now also supported for Oracle, Backup workloads
  • HPE-DX default NIC recommendation -FLOM/OCP in both auto and manual
  • Updates to XC models, updated list of CPUs and SSDs across models

Thanks, Ratan. In regards to SQL, it has grown up in Sizer, so to speak. If you are looking to add a small SQL database to a cluster of, say, Server Virt, then go with Business Critical off. The SQL workload can then be in a cluster with other workload types; we take a 2:1 CVM ratio and there is no minimum CPU in terms of cores and SPECints. Go to Business Critical and it becomes a dedicated cluster, with a 1:1 CVM ratio, a total of 12 physical cores (yep, we want a lot of I/O capability), and a minimum of 14 cores and 52 SPECints. The config is also AF or NVMe. We will be making the same changes for Oracle in the current sprint.

September 2020 Sprints

Sept 21

We just went live with the current sprint, and there are some cool features in this release.

Compare with one less node

  • A second set of dials with utilization % appears on selecting the 'compare' checkbox (screenshot below)
  • Helps compare and analyze the scenario and its state (N+1/N+0) with one less node than the optimal recommendation
  • Avoids the extra steps of going to manual sizing to replicate the above

Bulk edits – Workload section

  • Make bulk edits/changes to workload attributes like the vCPU:pCore ratio or user type (in VDI), etc.; particularly helpful for imported workloads
  • A couple of weeks ago we went live with bulk edit for the common section (RF, compression, ECX, snapshots, etc.)
  • With this, all inputs to a workload can be edited in bulk
  • Currently most major workloads are supported for bulk edit: Server Virtualization, VDI, Files and Cluster/Raw

Encryption changes

  • Overall enhancement to encryption support in Sizer with the latest encryption licenses and add-ons
  • Option to choose SW or HW encryption; Sizer adds the appropriate encryption license
  • Add-on encryption license support for non-AOS (Files/Objects/VDI Core/ROBO)

Other enhancements / Platforms

  • Sizing stats : Usable remaining capacity adjusted for RF and N+1
  • HPE DX: Power calculation and checks for x170r/x190r nodes on DX2200/2600

hi everyone

I'm super excited about the second set of dials. Very often SEs go to the pain of changing the node count to see what it looks like with one less node (N+1 vs N+0). Then you want to change something in the original sizing and have to go back and see the impact on N+0. We make it easy. By the way, that is an official sizing at N+0: we don't just take a percentage difference, we do a real sizing and apply all the rules, just with one less node.

 

August 2020 Sprints

Aug 26

We just went live with the current sprint, and we are excited to share that we went live with Sizer Basic!

A quick introduction to Sizer Basic:

This flavor of Sizer is aimed at a slightly different set of users/personas, for example sales reps, AMs, and customers. It is designed to reduce the friction and time spent between gathering workload requirements, building a solution, and producing a quote; the idea is to drive volume sales. It asks a few workload-related questions and fills in defaults for the cluster properties/settings, avoiding the complexity for the user, and comes up with a solution rather quickly. Currently, Basic includes all major workloads with the highest sizing momentum and covers all platforms. One major highlight of Basic is the built-in self-help capability – illustrations, a guided tour, a context-based help panel, and triggers – all of which educate first-time or repeat users about the tool, its workflow, and the specifics of sizing. Also note that Sizer Basic uses role-based access, so it is only for users who are assigned the Basic role. Existing users continue on the current Sizer (and will see Advanced/Basic tags).

Here’s a detailed 45 minute demo video on Sizer Basic:
https://nutanix.zoom.us/rec/share/4s5kDZPexkRLb4HGyGaYU79mAaK_eaa81yUY8_YJxBtyFKu2rdfJ38WGdBrB8ePt

Other changes in this sprint include:

  • Support for sync rep (for metro availability)

Now you can choose synchronous replication in Sizer and it comes out with a primary and a secondary cluster. Somewhat similar to the DR cluster but not including the additional snapshots

  • Async/Near sync enhancements

Changes related to Async/NearSync for 5.17, such as moving the limit for hourly snapshots from 80TB to 92TB for all-flash, and the related config rules.

  • Sizing stats table

A usable remaining capacity row was added to the sizing stats; it gives details on the resources available in the cluster after the workload requirements are met. The numbers are adjusted for RF.

  • Platforms – AMD /  HPE DX

HPE DX came out with support for the AMD platform – HPE DX385. The platform is listed and can be selected for sizing by choosing AMD under auto settings. Sizer’s default is Intel processor based models.

Hi everyone

Wanted to provide some color on why Sizer Basic and what we will do in the future for the full Sizer.

 

Sizer Basic – As a company we are covering a very wide range of use cases and scale. However, about 45% of all scenarios were in the top workloads like VDI, Files, Server Virtualization, etc., and stayed with the defaults. That gives us an opportunity to have Basic with these defaults and let many more people do either the initial sizing or just go with the defaults. Two benefits. First, enter collaborative sales. We have about 700 customer users of the full Sizer now and it has been quite successful. Often they do sizings and then share those with their SE. Basic will allow us to get Sizer out to many more customers and continue this trend. The second benefit is that SEs can focus more on the complex sizings. I would envision the initial sizing often being done in Basic and then shared with you for enhancements. So with Basic you have more opportunity to collaborate.

 

Sizer Advanced – With the introduction of Basic, we are working on Advanced. Here we know it is an SE or advanced user, and we plan to add a lot more dials and options to allow you to create awesome, complex multi-cluster solutions. We will still offer this to customers as we offer Sizer today, but it does require SE support. Stay tuned.

 

Aug 11

What the heck, a double-header day!! Well, this is big, as it will be a game changer in how you sell. An SE team was created to work with me to finally get a GREAT proposal out of Sizer. So now you can do all your edits and get to the final sizing, and Sizer will automatically create a super presentation complete with the sizing dials in PPT, pictures of all the hardware, a corporate overview, and slides for any product you selected. A real proposal created by real SEs. Easy to do:

  1. Go Create Proposals
  2. Any product in the sizing is automatically added, but you get a nice selection panel for any products you want to include. You might notice this looks like Frontline (we are all the same team and like this UI).
  3. It takes some time, but you get a zip; open it up and you get slides CUSTOMIZED for your presentation.

Why is this important? Well, for enterprise SEs you often create lots of sizings, so each needs a presentation. For commercial SEs, your time may be so limited that now you can still present a good presentation to the customer. You are also assured this is all current. What we found is that SEs wasted a lot of time creating PPTs and everyone had their own version.

 

Let me give a sample of what this does for you. First, every cluster has its own slide with the configuration summary and the dials. Working with the SEs, they often want to show the N+1 and N+0 levels (what the customer should expect in an upgrade, for example). Affectionately this was called the Justin slide, as this is what Justin Bell presents and everyone said YES. We also show all the hardware pics too.

 

 

 

hi everyone

Welcome to the new fiscal year, and I hope everyone had a good break. The Sizer team is coming out with a big bang with the new sprint launched today.

Sizing

  • HPE DX Mine support and we have HP DX Mine appliance in Sizer
  • Improvements in Splunk Smartstore

Usability

  • Bulk edits. So you have a bunch of workloads and you say, darn, I need to change the compression, RF level, ECX, etc. In the old days you had to go in one by one and change the workloads; now you can make bulk edits!! You can still go in one by one if that is part of your Zen practice.
  • Extent Store chart. There has been a lot of confusion with all our charts on the storage that is available. Heck, I get confused. We did some cleanup in the Sizing details already, and now you see a nice interactive panel below those details to get to the extent store (raw less CVM) and effective capacity (extent store with storage efficiencies). On the left you can play with RF, compression, N+1 and ECX, and in real time get the update on the right. Don't like that complex TiB stuff? There's a switch for you to go to TB. A rough sketch of that arithmetic follows below.
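Here is a back-of-the-envelope sketch of those relationships; every capacity number and efficiency factor in it is an assumption for illustration, and the real panel exposes more inputs.

```python
# Illustrative only: extent store = raw capacity less CVM overhead;
# effective capacity = extent store after RF and assumed storage-efficiency gains.
raw_tib = 100.0             # hypothetical total raw cluster capacity
cvm_overhead_tib = 6.0      # hypothetical capacity set aside for CVMs
rf = 2                      # replication factor
compression_ratio = 1.5     # assumed compression gain
ecx_ratio = 1.2             # assumed erasure-coding (EC-X) gain

extent_store = raw_tib - cvm_overhead_tib                             # 94.0 TiB
usable_after_rf = extent_store / rf                                   # 47.0 TiB
effective_capacity = usable_after_rf * compression_ratio * ecx_ratio  # ~84.6 TiB

print(f"extent store: {extent_store:.1f} TiB, "
      f"effective capacity: {effective_capacity:.1f} TiB")
```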

July 2020 Sprints

July 27

Hope all is going well. We went live with a sprint last night. Some cool things:

  • HPE DX BOM – support SKUs and structure
  • XenApp / RDSH profile edit
  • Era VCPU configurable license sku

Some very cool things

  • Customize the thresholds used in manual and auto sizing
  • A much better summary of the sizing details. Now we show the capacity, any savings, and then, in red, the consumption items.

July 13

Hi everyone, we went live with the current sprint, below are the major highlights

Sizing Improvements:

  • Splunk SmartStore: The new Splunk SmartStore, which decouples compute and storage, is now supported in Sizer. Sizer recommends a compute cluster and a storage (Objects) cluster, respectively, for the indexer and cold/frozen data.
  • RVTools host info in sizing: Bringing parity with Collector, Sizer reads the RVTools host information for the VMs and factors it into sizing. The existing server's CPU is normalized against the baseline.
  • Era platforms – this went live mid-sprint; you can now select Era Platform licensing, and Sizer generates Era platform licenses for the total cores in the database workload cluster, including the child/accounting SKUs.

Usability:

  • HPE DX BOM, transceivers: As a continuation of the exercise to provide a complete BOM for HPE DX, Sizer now also recommends the appropriate type and quantity of transceivers to go with the selected NIC, depending on the NIC type and number of ports. Sizer already recommends the required PSUs, GPU cables, chassis, etc. as part of the complete BOM initiative.
  • Storage efficiency slider: Similar to the workloads section, the storage efficiency in the storage capacity calculator and Extent Store charts now has a slider to choose from a range of values.
  • HPE Arrow models (BTO configs): This went out mid-sprint, enabling HPE Arrow models for SFDC/internal users.

Platforms:

  • Dell XC: Dell XC is the second vendor to go out with AMD models. The XC6515 AMD (XC Core only) is now live in Sizer (under the AMD option in settings).
  • Regular platform updates across NX and OEMs, keeping up to date with the config changes in product meta.

July 6

hi everyone

We delivered a few things in mid-sprint last night:

  • Era platform licensing is now in Sizer for Oracle or SQL. You can specify Era Platform licensing, and the cluster the database workload is in then uses that licensing, which covers the total cores in the cluster.
  • All Nutanix users have access to the Arrow models for HPE DX scenarios.
  • RVTools support – Sizer will pick up the host info and SPECint numbers for each host from the RVTools spreadsheet. This is already supported in Collector and can help get more sizing precision.

June 2020 Sprints

June 30

We went out with the release for the current sprint. Below are the highlights:

  • AWS EC2 sizing
    • Sizer can map AWS EC2 instances to equivalent Nutanix nodes. This is helpful when sizing for migrating workloads from AWS to Nutanix. Currently compute-optimized and storage-optimized EC2 instances are supported. This is currently in beta.
  • Change in the N+0 thresholds
    • The N+0 defaults remain 95% for compute and memory. The SSD and HDD thresholds moved from 90% to 95% for better utilization. The N+1 yellow indicator within 5% of the N+0 threshold makes a good case for making the shift.
  • RVTools enhancements
    • Sizer now applies the derived vCPU:pCore ratio based on the spreadsheet instead of using the Sizer default. Additionally, the host processor for the workload is factored in while sizing the imported VMs. These are already supported for Collector-imported workloads. Also supported with this release is the latest version of RVTools, 4.0.4.
  • Scenario number as permalink
    • To help identify and share a scenario more easily, as a usability enhancement, the scenario URL now has a number, for example S-123456.
  • HPE DX enhancements: rules around certain processor/memory combinations for some scenarios involving Cascade Lake.
  • Recurring platform updates across NX/OEM/SWO vendors.

Thanks, Ratan. Want to bring out a key innovation with AWS EC2 sizing. The Sizer Council suggested it as a “hot” opportunity in the current environment, as they have customers anxious to pull at least some of their AWS deployments onto Nutanix given AWS costs.

Here you just specify the number of each instance type they want to move and get a precise recommendation. This is a case of excellent collaboration with the Sizer Council: it came up just about 8 weeks ago and is now live. I do want to thank Ratan, who got all the detailed requirements defined.

 

June 15

Hi Everyone
We went live with the current sprint; below are the highlights:

Workload updates:

Files: 240TB node support. Sizer can now recommend denser nodes with up to a 240TB capacity tier. This is supported for Files Dedicated, with a few prerequisites such as minimum cores/RAM/flash for the dense nodes. Files is the second workload after Objects to support higher-capacity nodes.

Files licenses for VDI Core: Selecting VDI Core (dedicated VDI cluster) and opting for Files for storing user data generates a Files (for AOS) license for the required capacity.

VDI/Frame licenses in quotes: If Frame is chosen in VDI, the Sizer budgetary/SFDC quote will now include the required Frame subscription licenses along with the regular license for the cluster.

Usability:

NX Mine appliance: NX Mine XSmall – a new extra-small form factor for Mine on the NX platform is now supported, with the required licenses and quote.

Mine enhancement: Non-decoupled; disabling Mine for appliance/non-decoupled scenarios.

ECX update in the storage calculator and Extent Store chart: We revisited the approach to the storage calculator ECX calculations and made some updates around effective capacity. ECX is now considered/applied on the usable remaining as well; earlier, the usable remaining only considered RF.

UI changes for the Workload tab: A lot of new capabilities are coming to Sizer – bulk edit/delete, importing each VM as a workload, moving workloads between clusters, etc. – and there are UI changes for these. We had filters in the Workload tab and are now rearranging a few columns. Cluster is now a separate row followed by all the workloads in that cluster underneath. This gives a lot of space for the workload name, so we have room for Basic/Advanced tags and a few checkboxes for bulk edits.

Platform updates:

  • HPE DX: New platform: DX8000 DX910 – a new HPE DX NVMe platform
  • Inspur SWO/InMerge (OEM): GPU made non-mandatory. The GPU models can now be selected without a GPU as well.
  • Dell XC: 640-4 and 4i processor update – a new, revised list of supported processors for these XC models.

June 1

Hi everyone.

Some big things came out today

Frontline Quoting – Frontline is our new quote tool that will replace the existing Steelbrick quoting tool, with a much nicer UX. It allows for tighter integration with Sizer in our goal to offer an excellent E2E presales experience: Collector/Collector Portal for gathering customer requirements, Sizer to design the right solution to meet customer needs, and finally Frontline to create the quote.

So now you have the option to quote in Frontline if you are a Frontline user. In Quote Options we still have the options to create an SFDC quote and a budgetary quote; this is a third option. At this time about 1200 users in the company are set up for Frontline – most of the Americas, some in EMEA, and some in APAC. Don't fret though; we envision getting everyone on it in a couple of months.

Dashboard Filters – Ever get frustrated that you can't find or filter out different sizings? We had ways to hide things with Customize View. Now we have Dashboard Filters and you can get just what you want with a couple of filters. Attached is the pulldown. You can have multiple filters as an AND condition; so, for example, two filters allow you to select a certain customer and certain workloads. This is great for those who are getting into hundreds of scenarios.

We also made various product updates, including:

  • GPU None option for XF8055, XF8050
  • DX: New platform: DX360-10-G10-NVMe
  • Dell XC: LCPUs
  • Lenovo: HX7820-24

 

May 2020 Sprints

May 19

 

Hi everyone. We went live with our latest sprint last night.

Sizing Improvements:

Arrow DX models – With our new focus to adjust for the virus economy, we added pre-built DX models from Arrow for USA Commercial reps, SEs and managers. Today there are supply chain challenges that cause delays when customers try to get HPE DX models. These are pre-built and available at Arrow…TODAY. So in either manual or auto sizing you can select Arrow models and size and quote them. At this point you do have to be in the US Commercial group; we hope to expand it in the future.

Usability:

  • Frontline integration – Frontline is our new, cool quoting system and we want to get Sizer tied to it. We are working hard on it; it is coming soon.
  • Streamlined the input processor options on workloads to make them more intuitive. Typical power value added to the BOM and UI for Nutanix.

Product Alignment

  • Dell XC product updates
  • Dell XC: New processor: Xeon Gold 6246 / XC740xd-12