September 2020 Sprints

Sept 21

We just went live with the current sprint, and there are some cool features in this release.

Compare with one less node

  • A second set of dials showing utilization % appears when you select the ‘compare’ checkbox (screenshot below)
  • Helps compare and analyze the scenario and its state (N+1/N+0) with one less node than the optimal recommendation
  • Avoids the extra steps of going to manual sizing to replicate the above

Bulk edits – Workload section

  • Make bulk edits/changes to workload attributes like vCPU:pCore ratio or User type (in VDI), particularly helpful for imported workloads
  • A couple of weeks ago, we went live with bulk edit for the common section [RF, Compression, ECX, snapshot, etc.]
  • With this, all inputs to a workload can be edited in bulk
  • Currently most major workloads are supported for bulk edit: Server Virtualization, VDI, Files, and Cluster/Raw

Encryption changes

  • Overall enhancement to encryption support in Sizer with the latest encryption licenses and add-ons
  • Option to choose SW or HW encryption; Sizer adds the appropriate encryption license
  • Add-on encryption license support for non-AOS products [Files/Objects/VDI Core/ROBO]

Other enhancements / Platforms

  • Sizing stats: Usable remaining capacity adjusted for RF and N+1
  • HPE DX: Power calculation and checks for x170r/x190r nodes on DX2200/2600

hi everyone

I’m super excited about the second set of dials. Very often SEs go to the pain of changing the node count to see what the sizing looks like with one less node (N+1 vs N+0). Then you want to change something in the original sizing and have to go back and check the impact on N+0 all over again. We make it easy. By the way, that is an official sizing at N+0: we don’t just take a percentage difference but do a real sizing and apply all the rules, just with one less node.

 

August 2020 Sprints

Aug 26

We just went live with the current sprint, and we're excited to share that we went live with Sizer Basic!!

A quick introduction to Sizer Basic:

This flavor of Sizer is aimed at a slightly different set of users/personas, for example sales reps, AMs, and customers. It is designed to reduce the friction and time spent between gathering workload requirements, building a solution, and producing a quote; the idea is to drive volume sales. Basic asks a few workload-related questions and fills in defaults for cluster properties/settings, hiding that complexity from the user, so it arrives at a solution quickly. Currently, Basic includes all major workloads with the highest sizing momentum and covers all platforms.

One major highlight of Basic is the built-in self-help capability: illustrations, a guided tour, a context-based help panel, and triggers, all of which educate first-time or repeat users about the tool, its workflow, and the specifics of sizing. Also note that Sizer Basic uses role-based access, so it is only for users who are assigned the Basic role. Existing users continue on current Sizer (and will see Advanced/Basic tags).

Here’s a detailed 45-minute demo video on Sizer Basic:
https://nutanix.zoom.us/rec/share/4s5kDZPexkRLb4HGyGaYU79mAaK_eaa81yUY8_YJxBtyFKu2rdfJ38WGdBrB8ePt

Other changes as part of the sprint include:

  • Support for sync rep (for metro availability)

Now you can choose synchronous replication in Sizer and it comes out with a primary and a secondary cluster. It is somewhat similar to the DR cluster but without the additional snapshots.

  • Async/Near sync enhancements

Changes related to async/near-sync for 5.17, such as moving the limit for hourly snapshots from 80TB to 92TB for all flash, and the related config rules.

  • Sizing stats table

A usable remaining capacity row was added to the sizing stats; it gives details on the resources available in the cluster after workload requirements are met. The numbers are adjusted for RF.

  • Platforms – AMD /  HPE DX

HPE DX came out with support for the AMD platform – the HPE DX385. The platform is listed and can be selected for sizing by choosing AMD under auto settings. Sizer’s default is Intel processor-based models.

Hi everyone

Wanted to provide some color on why Sizer Basic and what we will do in future for full Sizer

 

Sizer Basic – As a company we are covering a very wide range of use cases and scale. However, about 45% of all scenarios were in the top workloads like VDI, Files, Server Virtualization, etc., and stayed with the defaults. That gives us an opportunity to have Basic with these defaults and let many more people do either the initial sizing or just go with the defaults. Two benefits. First, enter collaborative sales. We have about 700 customer users of full Sizer now and it has been quite successful. Often they do sizings and then share those with their SE. Basic will allow us to get Sizer out to many more customers and continue this trend. Second, SEs can focus more on the complex sizings. I would envision that often the initial sizing is done in Basic and then the customer shares the sizing with you for enhancements. So with Basic you have more opportunity to collaborate.

 

Sizer Advanced – With the introduction of Basic we are working on Advanced. Here we know it is an SE or advanced user, and we plan to add a lot more dials and options to allow you to create awesome complex multi-cluster solutions. We will still offer this to customers as we offer Sizer today, but it does require SE support. Stay tuned.

 

Aug 11

What the heck, a double-header day!! Well, this is big, as it will be a game changer in how you sell. An SE team was created to work with me to finally get a GREAT proposal out of Sizer. So now you can do all your edits, get to the final sizing, and Sizer will automatically create a super presentation complete with the sizing dials in PPT, pictures of all the hardware, a corporate overview, and slides for any product you selected. A real proposal created by real SEs. Easy to do:

  1. Go to Create Proposals.
  2. Any product in the sizing is automatically added, but you get a nice selection panel for any products you want to include. You might notice this looks like Frontline (we are all the same team and like this UI).
  3. It takes some time, but you get a zip; open it up and you get slides CUSTOMIZED for your presentation.

Why is this important? Well, enterprise SEs often create lots of sizings, so each needs a presentation. Commercial SEs may find their time is so limited that now they can still present a good presentation to the customer. You are also assured this is all current. What we found is that SEs wasted a lot of time creating PPTs and everyone had their own version.

 

Let me give a sample of what this does for you. First, every cluster has its own slide with the configuration summary and the dials. Working with the SEs, we found they often want to show the N+1 and N+0 levels (what the customer should expect during an upgrade, for example). Affectionately this was called the Justin slide, as this is what Justin Bell presents, and everyone said YES. We also show all the hardware pics.

 

 

 

hi everyone

Welcome to the new Fiscal Year; hope everyone had a good break. The Sizer team is coming out with a big bang with the new sprint launched today.

Sizing

  • HPE DX Mine support and we have HP DX Mine appliance in Sizer
  • Improvements in Splunk Smartstore

Usability

  • Bulk edits. So you’ve got a bunch of workloads and you say, darn, I need to change the compression, or RF level, or ECX, etc. In the old days you had to go in one by one and change the workloads; now you can make bulk edits!! You can still go in one by one if that is part of your Zen practice.
  • Extent Store chart. There has been a lot of confusion with all our charts on the storage that is available. Heck, I get confused. We did some cleanup in the Sizing details already, and now you see a nice interactive panel below those details to get to extent store (raw less CVM) and effective capacity (extent store with storage efficiencies). On the left you can play with RF, compression, N+1, and ECX, and get a real-time update on the right. Don’t like that complex TiB stuff? There’s a switch for you to go to TB.
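To make the panel's terms concrete, here is a minimal sketch of the arithmetic it describes: extent store as raw less CVM overhead, effective capacity after RF and storage efficiencies, and the TiB/TB switch. The functions and ratios are illustrative assumptions, not Sizer's exact rules.

```python
# Illustrative sketch (not Sizer's exact model) of the extent store /
# effective capacity math. All overheads and ratios are assumptions.

def extent_store_tib(raw_tib, cvm_overhead_tib):
    """Extent store = raw capacity less the CVM (controller VM) overhead."""
    return raw_tib - cvm_overhead_tib

def effective_capacity_tib(extent_tib, rf=2, compression_ratio=1.0, ecx_factor=1.0):
    """Divide by RF, then apply storage efficiencies (compression, EC-X)."""
    return extent_tib / rf * compression_ratio * ecx_factor

def tib_to_tb(tib):
    """The panel's TiB/TB switch: 1 TiB = 2**40 bytes, 1 TB = 10**12 bytes."""
    return tib * 2**40 / 10**12
```

For example, 100 TiB raw with 10 TiB of CVM overhead leaves a 90 TiB extent store; at RF2 with 1.5:1 compression that sketch yields 67.5 TiB effective.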

July 2020 Sprints

July 27

Hope all is going well.  We did go live with a sprint last night. Some cool things

  • HPE DX BOM – support SKUs and structure
  • XenApp / RDSH profile edit
  • Era vCPU configurable license SKU

Some very cool things

  • Customize the thresholds used in manual and auto sizing
  • A much better summary of the sizing details. Now we show the capacity, any savings, and then, in red, the consumption items

July 13

Hi everyone, we went live with the current sprint, below are the major highlights

Sizing Improvements:

  • Splunk SmartStore: The new Splunk SmartStore, which decouples compute and storage, is now supported in Sizer. Sizer recommends a compute cluster and a storage (object) cluster, respectively, for the indexers and cold/frozen data.
  • RVTools host info in sizing: Bringing parity with Collector, Sizer reads RVTools host information for the VMs and factors it into sizing. The existing server’s CPU is normalized against the baseline.
  • Era platforms: This went live mid-sprint; now you can select Era Platform licensing, and Sizer generates Era platform licenses for the total cores in the database workload cluster, including the child/accounting SKUs.
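As a rough illustration of the CPU normalization idea mentioned above: an existing host's cores are scaled by the ratio of its per-core benchmark score to a baseline processor's score. The numbers below are made-up placeholders, not real SPECint results or Sizer's actual baseline.

```python
# Hedged sketch of normalizing a source host's CPU against a sizing
# baseline, as done with RVTools/Collector host info. The per-core
# scores here are hypothetical, not real benchmark data.

BASELINE_PER_CORE = 50.0  # hypothetical baseline processor score per core

def baseline_equivalent_cores(host_cores, host_per_core_score,
                              baseline=BASELINE_PER_CORE):
    """Convert source-host cores into baseline-equivalent cores."""
    return host_cores * host_per_core_score / baseline
```

So a 20-core host whose per-core score is 80% of the baseline would count as 16 baseline-equivalent cores in this sketch.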

Usability:

  • HPE DX BOM, transceivers: As a continuation of the exercise to provide a complete BOM for HPE DX, Sizer will now also recommend the appropriate type and quantity of transceivers to go with the selected NIC, depending on NIC type and number of ports. Sizer already recommends the required PSUs, GPU cables, chassis, etc. as part of the complete BOM initiative.
  • Storage efficiency slider: Similar to the workloads section, the storage efficiency in the Storage capacity calculator and Extent Store charts now has a slider to choose from a range of values.
  • HPE Arrow models [BTO configs]: This went out mid-sprint, enabling HPE Arrow models for SFDC/internal users.

Platforms:

  • Dell XC: Dell XC is the second vendor to come out with AMD models. The XC6515 AMD (XC Core only) is now live in Sizer (under the AMD option in settings).
  • Regular platform updates across NX and OEMs keeping up to date with the config changes in product meta.

July 6

hi everyone

We delivered a few things mid-sprint last night:

  • Era platform licensing is now in Sizer for Oracle or SQL. You can specify Era Platform licensing, and the cluster the database workload is in uses that licensing, which covers the total cores in the cluster.
  • All Nutanix users have access to the Arrow models for HPE DX scenarios.
  • RVTools support – Sizer will pick up the host info and SPECint numbers for each host from the RVTools spreadsheet. This is already supported in Collector and can help get more sizing precision.

June 2020 Sprints

June 30

we went out with the release for the current sprint. Below are the highlights :

  • AWS EC2 sizing –
    •  Sizer can map the AWS EC2 instances to equivalent Nutanix nodes. Helpful in sizing for migrating workload from AWS to Nutanix. Currently compute optimized and storage optimized EC2 instances are supported.  In Beta currently.
  • Change in the N+0 thresholds
    •  The N+0 defaults remain 95% for compute and memory. The SSD and HDD thresholds moved from 90% to 95% for better utilization. The N+1 yellow indicator within a 5% range of the N+0 threshold makes a good case for the shift.
  • RVTools enhancements
    •  Sizer now applies the derived vCPU:pCore ratio based on the spreadsheet instead of using the Sizer default. Additionally, the host processor for the workload is factored in while sizing the imported VMs. These are already supported for Collector-imported workloads. Also supported with this release is the latest version of RVTools, 4.0.4.
  • Scenario number as permalink
    •  To help identify and share scenarios more easily, as a usability enhancement, the scenario URL now contains a number, for example: S-123456
  • HPE DX enhancements: Rules around certain processor/memory combinations for some scenarios involving Cascade Lake.
  • Recurring platform updates across NX/OEM/SWO vendors.

Thanks Ratan.  Want to bring out a key innovation with AWS EC2 Sizing.  The Sizer Council suggested it as a “hot” opportunity in current environment as they have customers anxious to pull out at least some of their AWS deployments onto Nutanix given AWS costs.

Here you just specify the number of each instance type they want to move and get a precise recommendation. This is a case of excellent collaboration with the Sizer Council: this came up just about 8 weeks ago and is now live. I do want to thank Ratan, who got all the detailed requirements defined.

 

June 15

Hi Everyone
We went live with current sprint.. below are the highlights:

Workload updates:

  • Files: 240TB node support. Sizer can now recommend denser nodes with up to a 240TB capacity tier. This is supported for Files dedicated, with a few prerequisites such as minimum cores/RAM/flash for the dense nodes. Files is the second workload after Objects to support higher-capacity nodes.
  • Files licenses for VDI Core: Selecting VDI Core (dedicated VDI cluster) and opting for Files for storing user data generates a Files (for AOS) license for the required capacity.
  • VDI/Frame licenses in quotes: If Frame is chosen in VDI, the Sizer budgetary/SFDC quote will now include the required Frame subscription licenses along with the regular license for the cluster.

Usability:

  • NX Mine appliance: NX Mine XSmall – a new extra-small form factor for Mine on the NX platform is supported now, with the required licenses and quote.
  • Mine enhancement: Disabling Mine for appliance/non-decoupled scenarios.
  • ECX update in the storage calculator and extent store chart: We revisited the approach to the storage calculator's ECX calculations and made some updates around effective capacity. ECX is now considered/applied on usable remaining as well; earlier, usable remaining only considered RF.
  • UI changes for the Workload tab: A lot of new capabilities are coming to Sizer – bulk edit/delete, import each VM as a workload, move workloads between clusters, etc. – and there are UI changes for these. We had filters in the Workload tab; now we are rearranging a few columns. A cluster is now a separate row, followed by all the workloads in that cluster underneath. This gives a lot of space for the workload name, so we have room for Basic/Advanced tags and a few checkboxes for bulk edits.

Platform updates:

  • HPE DX: New platform: DX8000 DX910 – a new HPE DX NVMe platform
  • Inspur SWO/InMerge (OEM): GPU made non-mandatory. The GPU models can now be selected without a GPU as well.
  • Dell XC: 640-4 and 4i processor update – a new revised list of supported processors for these XC models.

June 1

Hi everyone.

Some big things came out today

Frontline Quoting – Frontline is our new quote tool that will replace the existing Steelbrick quoting tool. Much nicer UX. It allows tighter integration with Sizer in our goal to offer an excellent E2E presales experience: Collector/Collector Portal for gathering customer requirements, Sizer to design the right solution to meet customer needs, and finally Frontline to create the quote.

So now you have the option to quote in Frontline if you are a Frontline user. In Quote options we still have the options to create a SFDC quote and a budgetary quote; this is a third option. At this time about 1200 users in the company are set up for Frontline: most of Americas, some in EMEA, and some in APAC. Don’t fret though; we envision getting everyone on it in a couple of months.

Dashboard Filters – Ever get frustrated that you can’t find or filter out different sizings? We had ways to hide things with Customize View. Now we have Dashboard Filters and you can get just what you want in a couple of filters. Attached is the pulldown. You can have multiple filters as an AND condition, so for example two filters allow you to select a certain customer and certain workloads. This is great for those that are getting into 100s of scenarios.

We also made various product updates, including:

  • GPU None option for XF8055, XF8050
  • HPE DX: New platform: DX360-10-G10-NVMe
  • Dell XC: LCPUs
  • Lenovo: HX7820-24

 

May 2020 Sprints

May 19

 

Hi everyone. We went live with our latest sprint last night.

Sizing Improvements:

Arrow DX models – With our new focus on adjusting to the virus economy, we added pre-built DX models from Arrow for USA Commercial reps, SEs, and managers. Today there are supply chain challenges causing delays when customers try to get HPE DX models. These are pre-built and available at Arrow…TODAY. So for either manual or auto sizing you can select Arrow models and size and quote them. At this point you do have to be in the US Commercial group. We hope to expand it in the future.

Usability:

  • Frontline integration – Frontline is our new cool quoting system and we want to get Sizer tied to it. We are working hard on it and it is coming soon.
  • Streamlined the input processor options on workloads to make them more intuitive. Typical power value added to the BOM and UI for Nutanix.

Product Alignment

  • Dell XC product updates
  • Dell XC: New processor: Xeon Gold 6246 / XC740xd-12

 

 

Nutanix Unified Storage – Files Discovery Guidance (Revised 4/24/25)

Introduction – Please Read First

These questions are here to help ensure that you’re gathering the necessary information from a customer/prospect in order to put together an appropriate solution to meet their requirements, in addition to capturing specific metrics from tools like Collector or RVTools.

This list is not exhaustive, but it should be used as a guide to make sure you’ve done proper and thorough discovery.  Also, it is imperative that you don’t just ask a question without understanding the reason why it is being asked.  We’ve structured these questions with not only the question that should be asked, but also why we are asking the customer to provide an answer and why it matters for providing an optimal solution.

Questions marked with an asterisk (*) will likely require reaching out to a specialist/Solution Architect resource at Nutanix to go deeper with the customer on that topic/question.  Make sure you use the answers to these questions in the Scenario Objectives in Sizer when you create a new Scenario.  These questions should help guide you as to what the customer requirements, constraints, assumptions, and risks are for your opportunity.

This is a live document, and questions will be expanded and updated over time.

REVISION HISTORY
4/21/25 – 1st Revision – Mike McGhee
1/5/21 – 1st Publish – Matt Bator


Files

1.  Is this replacing a current solution, or is this a net new project?
     a.  What’s the current solution?

Why ask? This question helps us understand the use case, any current expectations and what the competitive landscape may look like as well as an initial idea of the size / scale of the current solution.

2.  Is there a requirement to use an existing Nutanix cluster (with existing workload) or net new Nutanix cluster?

Why ask? If we’re sizing into an existing cluster we need to understand current hardware and current workload.  For licensing purposes adding Files to an existing cluster means the Unified Storage Pro license. A common scenario has been to add storage only nodes to an existing cluster to support the new Files capacity.  If sizing into a new cluster we can potentially dedicate this cluster to Files and Unified Storage. 

3.  Is this for NFS, SMB or both? Which protocol versions (SMB 3.0, NFSv4, etc)?

Why ask?  We need to understand protocol to first validate they are using supported clients.  Supported clients are documented in the release notes of each version of Files.  Concurrent SMB connections also impact sizing with respect to the compute resources we need for the FSVMs to handle those clients.  Max concurrent connections are also documented in the release notes of each version. 

It also helps us validate supported authentication methods.  For SMB, we require Active Directory where we support 2008 domain functional level or higher.  There is limited local user support for Files but the file server must still be registered with a domain.  For NFS v4 we support AD with Kerberos, LDAP and Unmanaged (no auth) shares.  For NFS v3 we support LDAP and Unmanaged. 

4.  Is there any explicit performance requirement for the customer? Do they require specific IOPS or performance targets? 

Why ask?  Every FSVM has an expected performance envelope.  There is a sizing guide and performance tech note on the Nutanix Portal which give a relative expectation on the max read and write throughput per FSVM and max read or write IOPs per FSVM. 

Throughput based on reads and writes is integrated into Nutanix Sizer and will impact the recommended number of FSVMs.  This may also impact the hardware configuration, including the choice of NICs, leveraging RDMA between the CVMs, or iSER (supported since the Files 5.0 release via a performance profile), as well as the choice of all flash vs. hybrid.

5.  Do they have any current performance collection from their existing environment?
      a.  Windows File Server = Perfmon
      b.  Netapp = perfstat
      c.  Dell DPACK, Live Optics

Why ask?  Seeing data from an existing solution can help validate the performance numbers so that we size accurately for performance. 

6.  What are the specific applications using the shares?
       a.  VDI (Home Shares)
       b.  PACS (Imaging)
       c.  Video (Streaming)
       d.  Backup (Streaming)

Why ask?  When sizing for storage space utilization the application performing the writes could impact storage efficiency.  Backup, Video and Image data are most commonly compressed by the application.  For those applications we should not include compression savings when sizing, only Erasure Coding.  For general purpose shares with various document types assume some level of compression savings.  

7.  Are they happy with performance or looking to improve performance?

Why ask?  If the customer has existing performance data, it’s good to understand if they are expecting equivalent or better performance from Files.  This could impact sizing, including going from a hybrid to an all flash cluster. 

8.  How many concurrent user connections are expected?

Why ask? Concurrent SMB connections are a required sizing parameter.  Each FSVM needs enough memory assigned to support a given number of users.  A standard share is owned by one FSVM.  A distributed share is owned by all FSVMs and is load balanced based on top level directories.  We need to ensure any one FSVM can support all concurrent clients to the standard share or top level directory with the highest expected connections.  We should also ensure that the sizing for concurrent connections takes into account N-1 redundancy for node maintenance/failure/etc.

 9.  What is your current share configuration including number of shares?

Why ask?  Files has a soft (recommended) limit of 100 shares per FSVM.  We can also leverage nested shares to match an existing environment if more shares are needed.  Since the 4.4 release, Files supports 5,000 nested shares.

10.  Does their directory structure have a large number of folders in share root?

Why ask?  This indicates a large number of top level directories making a distributed share a good choice for load balancing and data distribution.

11. Are there files in the share root?

Why ask?  Distributed shares cannot store files in the share root.  If an application must store files in the root then you should plan for sizing using standard shares.  Alternatively, a nested share can be used. 

 12. What is the largest number of files/folders in a single folder?

Why ask?  Nutanix Files is designed to store millions of files within a single share and billions of files across a multi-node cluster with multiple shares.  To achieve speedy response time for high file and directory count environments it’s necessary to give some thought to directory design. Placing millions of files or directories into a single directory is going to be very slow in file enumeration that must occur before file access.  The optimal approach is to branch out from the root share with leaf directories up to a width (directory or file count in a single directory) no greater than 100,000.  Subdirectories should have similar directory width.  If file or directory counts get very wide within a single directory, this can cause slow data response time to client and application.  Increasing FSVM memory up to 96 GB to cache metadata can help improve performance for these environments especially if designs for directory and files listed above are followed.
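The directory-width guidance above can be checked ahead of a migration. The helper below is an illustrative sketch, not a Nutanix tool: it walks an existing share and flags any directory whose entry count exceeds the recommended width.

```python
import os

# Illustrative helper (not a Nutanix utility): walk a directory tree and
# flag any directory whose entry count (files + subdirectories) exceeds
# the ~100,000-width guidance for Nutanix Files directory design.
def wide_directories(root, limit=100_000):
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        width = len(dirnames) + len(filenames)
        if width > limit:
            hits.append((dirpath, width))
    return hits
```

Running this against the share root before migration highlights the folders that should be branched into leaf directories.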

13. What is the total size of the largest single directories?

Why ask?  Nutanix supports standard shares up to 1PiB starting with the Files 5.0 release (prior to compression).  The same applies to top level directories in a distributed share, up to 1PiB.  These limits are based on the volume group supporting the standard share or top level directory.  We need to ensure no single folder or share (if using a standard share) surpasses 1PiB.

14.  What are the total storage and compute requirements, including future growth?

Why ask?  Core sizing question to ensure adequate storage space is available with the initial purchase and over the expected timeframe. 

15.  What percent of data is considered to be active/hot?

 Why ask?  Understanding the expected active dataset can help with sizing the SSD tier for a hybrid solution.  Performance and statistical collection from an existing environment may help with this determination.

16.  What is your storage change rate?

Why ask?  Change rate influences snapshot overheads based on retention schedules.  Nutanix Sizer will ask what the change rate is for the dataset to help with determining the storage space impact of snapshot retention.
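As a back-of-envelope illustration of why change rate matters, the sketch below estimates snapshot space overhead from change rate and retention. This is an assumption-laden simplification, not Sizer's actual model: it treats each retained daily snapshot as pinning roughly one day's worth of changed data.

```python
# Simplified, assumption-laden estimate (not Sizer's actual model) of
# snapshot space overhead: each retained daily snapshot is assumed to
# pin roughly one day's worth of changed data.
def snapshot_overhead_tib(dataset_tib, daily_change_rate, retained_dailies):
    """dataset_tib: dataset size; daily_change_rate: fraction changed per
    day (e.g. 0.02 for 2%); retained_dailies: daily snapshots retained."""
    return dataset_tib * daily_change_rate * retained_dailies
```

For example, a 100 TiB dataset with a 2% daily change rate and 7 retained daily snapshots would carry roughly 14 TiB of snapshot overhead under this sketch.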

17.  Do you have any storage efficiency details from the current environment (dedup, compression, etc.)?

Why ask?  Helps to determine if data reduction techniques like dedup and compression are effective against the customer's data.  Files does not support the use of deduplication today, so any dedup savings should not be taken into account when sizing for Files.  If the data is compressible in the existing environment it should also be compressible with Nutanix compression.

18.  What is the block size of the current solution (if known)?

Why ask?  Block size can impact storage efficiency.  A solution which has many small files with a fixed block size may show different space consumption when migrated to Files, which uses variable block lengths based on file size.  For files over 64KB in size, Files uses a 64KB block size.  In some cases a large number of large files have been slightly less efficient when moved to Nutanix Files.  Understanding this up front can help explain differences following migrations.

19.  Is there a requirement for Self Service Restore (SSR)?

Why ask?  Nutanix Files uses two levels of snapshots, SSR snapshots occur at the file share level via ZFS.  These snapshots have their own schedule and Sizer asks for their frequency and change rate under “Nutanix Files Snapshots.”  The schedule associated with SSR and retention periods will impact overall storage consumption. Nutanix Files Snapshots increase both the amount of licensing required and total storage required, so it’s important to get it right during the sizing process.

20.  What are the customer’s Data Protection/Disaster Recovery requirements, and what is their expected snapshot frequency and retention schedule (hourly, daily, weekly, etc.)?

Why ask? Data Protection snapshots occur at the AOS (protection domain) level via the NDSF.  The schedule and retention policy are managed against the protection domain for the file server instance and will impact overall storage consumption.  Sizer asks for the local and remote snapshot retention under “Data Protection.”
Files supports 1hr RPO today and will support near-sync in the AOS 5.11.1 release in conjunction with Files 3.6.  Keep in mind node density (raw storage) when determining RPO.  Both 1hr and near-sync RPO require hybrid nodes with 40TB or less raw or all flash nodes with 48TB or less raw.  Denser configurations can only support 6hr RPO.  These requirements will likely change so double check the latest guidance when sizing dense storage nodes. Confirm that underlying nodes and configs support NearSync per latest AOS requirements if NearSync will be used.

21.  Does the customer have an Active/Active requirement?

Why ask?  If the customer needs active/active file shares in different sites which represent the same data, we need to position a third party called Peer Software.  Peer performs near real time replication of data between heterogeneous file servers.  Peer utilizes Windows VMs which consume some CPU and memory that you may want to size into the Nutanix clusters intended for Files.

Files 5.0 introduced an active/active solution called VDI sync, specific for user profile data.  The solution supports activity against user specific profile data within one site at a time.  If the user moves to another site, the VDI session can follow and localize access for that user.

22.  Is there an auditing requirement? If so, which vendor or vendors?

Why ask?  Nutanix is working to integrate with three main third-party auditing vendors today: Netwrix (supported and integrated with Files), Varonis (integration in progress), and Stealthbits (not yet integrated).  Nutanix Files also has a native auditing solution in File Analytics.

Along with ensuring audit vendor support, a given solution may require a certain amount of CPU, memory, and storage (to hold auditing events).  Be sure to include any vendor-specific sizing in the configuration.  File Analytics, for example, could require 8 vCPU, 48GB of memory, and 3TB of storage.

Data Lens is a SaaS offering in the public cloud, so you will need to ensure the customer is comfortable with a cloud solution. 

23.  Is there an Antivirus requirement? If so, which vendors?

Why ask? Files supports specific Antivirus vendors today with respect to ICAP integration.  For a list of supported vendors see the software compatibility matrix on the Nutanix Portal and sort by Nutanix Files:

https://portal.nutanix.com/page/documents/compatibility-interoperability-matrix/software

If centralized virus scan servers are to be used you will want to include their compute requirements into sizing the overall solution.

24.  Is there a backup requirement? If so, which vendor or vendors?

Why ask?  Files has full changed file tracking (CFT) support with HYCU, Commvault, Veeam, Veritas and Storware.  There are also vendors like Rubrik who are validated but do not use CFT.  If including a backup vendor on the same platform, you may need to size for any virtual appliance which may also run on Nutanix.

25.  Is the customer using DFS (Distributed File System) Namespace (DFS-N)?

Why ask?  This is less about sizing and more about implementation.  Prior to Files 3.5.1, Files could only support distributed shares as DFS-N folder targets.  Starting with 3.5.1, both distributed and standard shares are fully supported as folder targets with DFS-N.

Files 5.1 introduced a native unified namespace to combine different file servers into a common namespace.  

25.  Does the customer have tiering requirements?

Why ask?  Files supports tiering, which means automatically moving data off Nutanix Files to an S3-compliant object service, either on-premises or in the cloud.  In scoping future requirements, customers may size for a given amount of on-premises storage and a larger amount of tiered storage for longer-term retention.

Server Virtualization Discovery Guidance (Revised 4/15/2025)

Introduction – Please Read First

The questions here are to assist with ensuring that you’re gathering the necessary information from a customer/prospect to provide an appropriate solution to meet their requirements.  This is in addition to capturing specific metrics from tools such as Nutanix Collector or RVTools.   

The list is not exhaustive and will need to be adapted to the appropriate audience.  It should be used as a guide to make sure you’ve conducted a thorough discovery.  It is important that you don’t just ask a question without understanding the reason behind it and why it matters – that understanding leads to providing an optimal solution. 

Always ask open questions (“Tell me more…”) and, where possible, avoid talking about Nutanix products and capabilities so as not to derail the information gathering.  If asked, suggest this will form part of a follow-up workshop. 

Questions marked with an asterisk (*) may require the assistance of a Portfolio Specialist or Solution Architect to go deeper with the customer on that topic/question.  Make sure you use the answers to these questions in the Scenario Objectives in Sizer when you create a new scenario.  The questions will help guide you to completing the customer’s requirements, constraints, assumptions, and risks for your opportunity.   

This is a live document, and questions will be expanded and updated periodically.
 

Revision History 

Revision 2025.1 Darren Woollard March 19, 2025

Initial Publication Lane Leverett November 2020
 


Server Virtualization

Generic & State of The Union Questions 

1. What does your server virtualization environment look like today? 

       a. Hardware – single or multiple vendors?  By choice, risk mitigation or from mergers? 

       b. Software – the hypervisor and the eco-system that surrounds it (not just the hypervisor components, think of self-service, build automation, micro-segmentation, ticketing workflow, backup, etc…) 

2. What do you find most challenging about your current virtualization environment (examples below)? 

      a. Management? 

      b. Business/people process is manual (causing slow turnaround, bad perception of I.T.) 

       c.  Upgrading/patching across multiple sites 

       d.  Needing 3rd-party software to complete certain tasks 

3. In your virtualized environment, what keeps you up at night? 

4. What is working well in your current virtualization environment that you want to ensure is continued? 

5. What does your cloud strategy currently look like?
      a. Private Cloud, Hybrid Cloud, Public Cloud, Multi-Cloud, “We’re Cloud First”
            i.   Why has this strategy been chosen?
            ii.  Who is directing/championing this strategy? 

6. What is the desired position when it comes to utilizing Public Cloud provider services in the next, say, 1-3 years?
      a. Is a distributed multi-cloud operating model perceived to be the best way to deliver the services of the business?        

       b. Is there a preference of a specific Public Cloud provider? 

      c. Will some services remain on-premises and some within the Public Cloud? 

       d. Will some services transform to SaaS offerings during a digital transformation project removing the need for the on-prem application? 

       e. What are the top 3 concerns about operating in a distributed ‘Cloud’ model? 

 Architecture/Solution Specific Questions

1. Do you have a preferred x86 server vendor standard?
     a. Are you happy with this vendor?
     b.  If so, what do you enjoy/appreciate the most?
     c. If not, what do you find the most challenging? 

2. What is your preferred storage vendor for virtualization?
     a. Are you leveraging RDM (Raw Device Mappings) for your workloads?
     b. If so, can you please provide some workload examples?
           i.  Oracle, MS CSV’s, SCSI-3 Shared Devices 

3. What is your preferred storage vendor for physical workloads? 

       a. How do the physical servers connect to the storage presentation? 

       b.  What is the storage presentation protocol to these devices (iSCSI, NFS, etc…)? 

4. How do you currently connect your storage to your x86 servers?
     a.  NFS, FC, FCOE, iSCSI 

 5.  What SAN/Storage hardware is in place today? 

       a. HDD/Hybrid/All Flash/etc…? 

       b. How many spindles of each? 

       c. How many Controllers/Storage Processors? 

6. What does the logical disk layout look like? 

       a. RAID Level? 

       b. Number of disks per RAID Group? 

7. Who is your preferred hypervisor vendor and what version(s) are deployed? 

       a. If multiple vendors are used, is this due to architectural reasons? 

8. How open would you be to considering other hypervisors? 

9. Who is your preferred networking vendor? 

10. Are you using traditional 3-Tier networking or Leaf-Spine networking? 

11. What does your networking architecture/rack design look like? 

12. Are you integrating hypervisor networking and what are your current networking standards?
        a.  Cisco ACI / VMware NSX / Arista / Cumulus 

13. Collecting data from your current environment is preferred so that a point-in-time capture can be reviewed.  Can we use Nutanix Collector, RVTools (VMware estates), Dell LiveOptics, Microsoft MAP, Oracle AWR, or any other inventory collection you may have used to gather, at a minimum, the following information?
      a.  # of Virtual Machines
      b.  # of vCPUs
      c.  Current vCPU to Physical Core oversubscription
      d.  Current Physical CPU Model In hosts (for SpecInt Sizing/Comparison- http://ewams.net/?view=How_to_Size_the_CPUs_of_New_Systems_Using_my_Specint_Rated_Tool )
      e.  Allocated memory
      f.  Provisioned storage
      g.  Consumed storage
      h.  Largest vCPU allocation (for NUMA design)
      i.  Largest Memory allocation (for NUMA design)
      j.  Working set size (what will sit in SSD for Hybrid); can be determined from daily incremental backups (https://www.joshodgers.com/2014/09/25/rule-of-thumb-sizing-for-storage-performance-in-the-new-world/ ) 
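The metrics above feed directly into a sizing model.  A minimal sketch of two derived values follows; the field names, the example numbers, and the use of daily change rate as a working-set proxy (per the linked rule-of-thumb article) are assumptions to validate per environment.

```python
# Hedged sketch: derive sizing inputs from a hypothetical inventory summary,
# e.g. one exported from Collector or RVTools. Numbers are illustrative.
inventory = {
    "vcpus": 800,                 # item b: total vCPUs
    "physical_cores": 200,        # from item d: cores across current hosts
    "consumed_storage_tb": 120.0, # item g
    "daily_change_rate": 0.05,    # from daily incremental backups (assumption)
}

# Item c: current vCPU:pCore oversubscription ratio.
oversub = inventory["vcpus"] / inventory["physical_cores"]

# Item j: working-set estimate for hybrid SSD sizing, using the daily change
# rate as a proxy; validate against real data before relying on it.
working_set_tb = inventory["consumed_storage_tb"] * inventory["daily_change_rate"]

print(f"oversubscription {oversub:.1f}:1, working set ≈ {working_set_tb:.1f} TB")
# → oversubscription 4.0:1, working set ≈ 6.0 TB
```

These derived figures are exactly what Sizer asks for when you import or enter a server virtualization workload.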

Backup & Data Protection Questions

See BCDR Discovery Guidance Doc

Automation & Self Service Discovery Guidance (Revised 4/28/25)

Introduction – Please Read First

These questions are here to assist with ensuring that you’re gathering necessary information from a customer/prospect in order to put together an appropriate solution to meet their requirements in addition to capturing specific metrics from tools like Collector or RVTools. 

This list is not exhaustive, but should be used as a guide to make sure you’ve done proper and thorough discovery.  Also, it is imperative that you don’t just ask a question without understanding the reason why it is being asked.  We’ve structured these questions with not only the question that should be asked, but why we are asking the customer to provide an answer to that question and why it matters to provide an optimal solution. 

Questions marked with an asterisk (*) will likely require reaching out to a specialist/Solution Architect resource at Nutanix to go deeper with the customer on that topic/question.  Make sure you use the answers to these questions in the Scenario Objectives in Sizer when you create a new Scenario.  These questions should help guide you as to what the customer requirements, constraints, assumptions, and risks are for your opportunity. 

This is a live document, and questions will be expanded and updated over time.

Revision History 

April 2025 – First life cycle – Thomas Brown, Shane Lyndsey, John Hanna
November 2020 – First Publication – Lane Leverett 

 


Automation & Self Service

Discovery Questions

1.  How are you currently automating IT Service Delivery today, do you have any:
     a.  IAAS – Infrastructure as a service
     b.  PaaS – Platform as a service
     c.  SaaS – Software as a service

Why ask?  This question helps us understand the customer’s maturity level when it comes to application deployment and could uncover some of the competitive infrastructure.  It can also uncover some of the products that we may need to integrate with.   

2.  Are standardization and compliance important to you in your IT automation delivery strategy? If so, do you currently use a business intake or self-service request process via a solution such as ServiceNow, Cherwell, Remedy, etc. to automate IT service delivery?

Why ask? This question gives you the opportunity to discuss our ServiceNow plugin.  It also helps you understand which front end they will use for the Self Service implementation.  If they do not use ServiceNow, be aware that it is possible to integrate with other ITSM solutions using the Self Service API.

3. Do you have any contracts with the cloud providers (AWS, Azure or GCP)?
a.  What are the specific use cases or workload profiles consumed from the cloud providers?

Why ask? This question helps you understand which cloud providers they use that Self Service may need to consume.  It also helps us understand which services are still on-prem and available as a target for NCI, may help position Cost Governance, and tells us whether they have a Microsoft EA, which may force the spend to go to Azure.

4.  Can you describe the process, and do you have any documentation, for VM, OS, or database deployment and management?

Why ask? This question helps to uncover their current pain points and possibly the competitive landscape.  This would typically be asked when talking to the infrastructure team.  If the process is already well documented/defined, the hardest part of the implementation is already done.

5.  What tools do you leverage to automate your Windows or Linux Server builds beyond the imaging / template / cloning process?
      a.  vRA
      b.  Terraform
      c.  Puppet
      d.  Chef
      e.  Ansible
      f.  Salt
      g.  SCCM

Why ask? This question helps you to understand the competitive landscape as well as integration points that will need to be solved. 

6.  How many VMs are under management today?

Why ask? This question will help you to estimate the size of the deal for licensing.

7.  What does your infrastructure footprint for managing/running containers look like?
      a.  What tools are you using?
      b.  How many containers?
      c.  If Kubernetes, how many pods and containers?
      d.  Which distribution of Kubernetes? (NKP/ AKS/EKS/Anthos/Openshift/Tanzu/etc)

Why ask? This question will help you to understand their current location on the journey to cloud-native apps.  If they are still investigating, we have an option to position NKP.  If they are using another product already, we may be able to provide the infrastructure for that environment. 

8. In your application development organization, is Continuous Integration/Continuous Delivery (CI/CD) an operating principle?
      a.  What tools do you leverage in your current/targeted pipeline?
            i.    Jenkins
            ii.   Atlassian Bamboo
            iii.  CircleCI
            iv.  GitLab CI/CD
            v.   Azure DevOps

Why ask? This question will help you to understand the integrations needed for a successful implementation.

Resources:

Glossary of Terms: https://github.com/nutanixworkshops/Self Servicebootcamp/blob/master/appendix/glossary.rst  

LinkedIn Learning – DevOps Foundations Learning Plan: https://www.nutanixuniversity.com//lms/index.php?r=coursepath/deeplink&id_path=79&hash=2ce3cb1f946cc3770bd466853e68ee36ddbcf5e1&generated_by=19794 

Udacity+Nutanix: Hybrid Cloud Engineer Nanodegree 

Self Service IaaS Hands On Lab (Partners – Reach out to your channel SE to participate in a hands on lab) 

Test Drive – Build a Self Service Platform  

Test Drive – Fast Track Your Cloud Native Journey 

End User Computing Discovery Guidance (Revised 4/15/25)

Introduction – Please Read First

These questions are here to assist with ensuring that you’re gathering necessary information from a customer/prospect in order to put together an appropriate solution to meet their requirements in addition to capturing specific metrics from tools like Collector or RVTools.  

This list is not exhaustive but should be used as a guide to make sure you’ve done a proper and thorough discovery.  Also, it is imperative that you don’t just ask a question without understanding the reason why it is being asked.  We’ve structured these questions with not only the question that should be asked, but why we are asking the customer to provide an answer to that question and why it matters to provide an optimal solution.  

Questions marked with an asterisk (*) will likely require reaching out to a specialist/Solution Architect resource at Nutanix to go deeper with the customer on that topic/question.  Make sure you use the answers to these questions in the Scenario Objectives in Sizer when you create a new Scenario.  These questions should help guide you as to what the customer requirements, constraints, assumptions, and risks are for your opportunity.  

This is a live document, and the questions will be expanded and updated over time.

Revision History
2025.1  1st Revision – Kees Baggerman, Thomas Brown – 4-15-25
Nov 2020 Initial Publication – Lane Leverett


Basic discovery

1. What is the expected type of EUC workload?
 

Why ask? Are we talking about VDI (full desktop), RDSH (shared desktop), or application virtualization (like MSIX, App-V, or Horizon App Volumes)?  Please keep in mind that you need to ask this question for every different workload the customer needs.  In most EUC projects there is not just one type of user requirement; you will find a lot of mixed workloads, like persistent and non-persistent desktops as well as application virtualization.
NCI-VDI licensing can help if the customer wants to run resource-intensive VDI workloads, like developers or VDI with vGPUs and DR scenarios.  

For more information about NCI-VDI: https://www.nutanix.com/library/datasheets/nci-vdi  

 

 2. Which vendor is used to manage and broker Desktops and Apps?
 

Why ask? The main vendors are Citrix and Omnissa.  Every vendor has its own display protocol, which makes a difference in CPU usage: Citrix uses HDX; Omnissa (formerly VMware) uses Blast Extreme.  

3. What is the expected Operating System Version? 

Why ask? Every new version of Windows has higher CPU and Memory requirements. Comparing an older Windows version to the latest version can make a big difference.  

Windows 10 Performance Impact Analysis: https://portal.nutanix.com/page/documents/solutions/details?targetId=TN-2113-Windows-10-Performance-Impact:TN-2113-Windows-10-Performance-Impact 

Overall Performance Impact Analysis: 

End-User Computing Performance Impact Analysis 


4.  What Office version is used? 

Why ask? For Microsoft Office, it is the same issue as Windows versions. Newer versions need more resources.  Office 2019 Performance Impact Results from LoginVSI: https://www.loginvsi.com/login-vsi-blog/98-login-vsi/907-office-2019-performance-impact 

Overall Performance Impact Analysis: 

End-User Computing Performance Impact Analysis 


5. What other applications will be used? 

Why ask? The applications used have a strong impact on the CPU.  There might be single-threaded applications that need high clock speed, or multi-threaded applications that need a higher core count.  Think Microsoft Teams and Zoom, but also specific LOB applications like the Epic client, CAD/CAM apps, or Bloomberg Terminal.  

6. What is the expected type of user? 

Why ask? In Sizer, we ask for user types: task, knowledge, power user, or developer.  Every user type comes with a specific workload profile: memory, # vCPUs, vCPU:pCPU ratio, and disk size.  Sometimes the customer can give you details on the VM sizing, but not on the expected vCPU:pCPU ratio.  Depending on the workload expected, you can set the ratio.  

For more information read here:  

 

7. How many concurrent users will you have on that workload? 

Why ask? Concurrent users defines the number of active users: how many VMs need to run at the same time?  Our VDI licensing is based on active VMs.  You can have more users in an environment, but they can share resources if they don’t work on the platform at the same time.  This can impact how you size compute and memory, but remember that storage may be needed for all possible users.  
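To make the compute-versus-storage distinction concrete, here is an illustrative calculation.  All numbers are made up; per-user figures come from whichever Sizer user profile applies to the workload.

```python
# Hedged sketch: concurrent users drive compute and memory sizing, while
# total provisioned users can drive storage. All figures are illustrative.
total_users = 1000       # everyone with a desktop assigned
concurrent_users = 600   # active at the same time (the licensing basis)

per_user = {"vcpus": 3, "memory_gb": 8, "disk_gb": 60}  # assumed profile

compute_vcpus = concurrent_users * per_user["vcpus"]
memory_gb = concurrent_users * per_user["memory_gb"]
storage_gb = total_users * per_user["disk_gb"]  # disks may exist for ALL users

print(f"{compute_vcpus} vCPU, {memory_gb} GB RAM, {storage_gb} GB disk")
# → 1800 vCPU, 4800 GB RAM, 60000 GB disk
```

Note how storage is sized on the full user count while compute is sized on concurrency; for non-persistent pools the storage side shrinks further (see the provisioning-method question below).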

8. What provisioning method is used? 

Why ask? Depending on the workload, the VMs can be persistent or non-persistent.  Persistent desktops will be treated like normal VMs.  Non-persistent VMs will have a different storage footprint since they share a single boot disk and have additional write-cache disks, which are deleted after a VM reboots.  Citrix uses MCS (Machine Creation Services) or PVS (Provisioning Services).  Omnissa uses Instant Clones. 

9. Where are the user profiles stored? 

Why ask? Using our own Files solution, we can provide storage for user profiles. Today, you will mostly encounter FSLogix profile containers or Citrix Profile Management, which still need an SMB share to be stored on and loaded during user logon. 

For more information: 

 

 

10. Do you need additional GPU support?  * (This may warrant engaging with a Solutions Architect or EUC Specialist for proper sizing and configurations) 

Why ask? To accommodate applications like CAD or requirements in number of monitors and high resolution you need to add NVIDIA GPUs. An overview of vGPU Profiles can be found here: 

11. Are there any other special requirements? 

Why ask? Does the customer need RF3, Block Awareness, Rack Awareness, storage encryption, or replication? 

VM Details

1. How many vCPUs? 

Why ask? The number of vCPUs impacts the performance of the VM and the density of the host.  Solution Engineering found the sweet spot to be 3 vCPUs for a Windows 10 desktop, 4 vCPUs for Windows 11, or 8 vCPUs for Windows Server-based desktops. 

2. What is the ratio between vCPU to pCPU? 

Why ask? See question 6 in the Basic discovery section above. 
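As a back-of-the-envelope illustration of how the ratio drives host density, the sketch below divides effective vCPU capacity by vCPUs per desktop.  The node specs and ratio are assumptions; Sizer applies the full set of rules (CVM overhead, N+1, NUMA) that this deliberately ignores.

```python
import math

# Hedged sketch: rough desktop density from the vCPU:pCPU ratio.
# Node and workload figures are assumptions; replace with real values.
cores_per_node = 64      # physical cores per host (assumption)
vcpu_pcpu_ratio = 4      # e.g. 4:1 for knowledge workers (assumption)
vcpus_per_desktop = 3    # Windows 10 sweet spot mentioned above
desktops = 1000

effective_vcpus_per_node = cores_per_node * vcpu_pcpu_ratio
desktops_per_node = effective_vcpus_per_node // vcpus_per_desktop
nodes_needed = math.ceil(desktops / desktops_per_node)

print(f"{desktops_per_node} desktops/node, {nodes_needed} nodes (before N+1)")
# → 85 desktops/node, 12 nodes (before N+1)
```

Halving the ratio roughly halves density, which is why capturing the customer's real oversubscription tolerance matters so much.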

3. What is the requested CPU size per User? 

Why ask? How many MHz does each user need to run the workload?  It is very rare that a customer can answer this.  It is more common in application virtualization environments, where a number of users share the same VM and its resources. 

4. What CPU is currently in use? 

Why ask? If the CPU currently used can handle it, you could choose the same clock speed. But keep in mind that in a virtualized environment, resources are shared, and you might have additional tasks running, like Files, which can impact the CPU. 

5.  How much Memory is required per VM? 

6. What is the disk size? 

Why ask? Depending on the provisioning method used, you need the size of the master image (also called the parent disk or sandbox) and the write cache per VM.  The write cache stores the temporary files written while the VM is active.  For persistent VMs, you need the disk size of the master image, which is then cloned into separate VMs. 
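A hedged sketch of the storage math this implies: non-persistent pools share one master image and add a write-cache disk per VM, while persistent pools carry a full clone per VM.  All sizes below are illustrative placeholders, and data reduction (compression, dedupe, shadow clones) is ignored.

```python
# Hedged sketch: provisioned storage for non-persistent vs persistent pools.
# Sizes are illustrative; Sizer applies RF and data-reduction on top of this.
master_image_gb = 60   # shared boot disk / parent disk (assumption)
write_cache_gb = 15    # temporary writes per VM, wiped on reboot (assumption)
vms = 500

non_persistent_gb = master_image_gb + vms * write_cache_gb
persistent_gb = vms * master_image_gb  # each VM is a full clone of the master

print(f"non-persistent: {non_persistent_gb} GB, persistent: {persistent_gb} GB")
# → non-persistent: 7560 GB, persistent: 30000 GB
```

The gap between the two models is why the provisioning-method question comes before any storage sizing.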

7. Are you planning to use microsegmentation to secure your VMs? If yes, what solution will be used?  * (This may warrant engaging with a Flow or Networking specialist or Solutions Architect) 

Why ask? Position Flow on AHV, or remember to size for an NSX appliance on every host. 

For more information: 

 

 

8. Are you planning on using an App layering solution? 

Why ask? This saves the customer the need to manage many different master images. With App layering, you have one master image, and when a user logs in, the system will automatically attach additional disks containing the required applications. We can use shadow cloning to make those disks available locally. 

Read: 

 

 

General supporting Infrastructure

1. What Hypervisor are you planning to use?

Why ask? Different Hypervisors have different needs. If the customer chooses VMware, we may need to accommodate vCenter. 

2. Where will generic required services run? 

Why ask? By generic services, we mean AD, DHCP, DNS, printing, licensing, or application backend services.  Some of them might be running in the cloud or on existing infrastructure.  If they run on the Nutanix cluster, take note of their size and see the Server Virtualization questions. 

 3. Where will user profiles, home shares or App disks (if used) be stored? 

Why ask? This is an opportunity to position Files. Today, customers usually use a profile container to store user profiles. FSLogix is the most common solution used by customers since it is included in their licensing. Please be aware that Files Services running on the same cluster do have a performance penalty during login times. 

4. What is your DR strategy? 

Why ask? Every customer needs a DR strategy for their EUC environment.  This is a great question to position NC2, replication, and our unique VDI licensing approach.  You also need to calculate additional resources in your sizing, depending on the customer’s strategy. 

More information: 

 

Citrix Infrastructure

1. Where do you plan on running your Citrix services? 

Why ask? Customers can choose to run all Citrix-related services (like Studio, databases, StoreFront) as a service in the cloud, managed by Citrix, or on-premises.  If the customer chooses to run the services on premises, they can still run them on a different infrastructure.  If they choose to run them on the same cluster, please size additional server virtualization VMs.  Guidelines on VM requirements can be found here: https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/system-requirements.html
If the customer chooses the Citrix Virtual Apps and Desktops (CVAD) service, you still need an additional Windows server as a Cloud Connector.

A typical on-premises implementation would need the following servers:
SQL deployment. What type of HA? (Always On, SQL Clustering with WSFC)
StoreFront Servers (HA, N+1)
Citrix Desktop Studio and Director (N+1)
Optional Provisioning Server (PVS) (N+1)
Network Load Balancer
Global Server Load Balancing
Profile Management Infrastructure (File services)
AppLayering Infrastructure
Any Endpoint Management technologies?
Failure domain sizes (Prism Central sizing)
Dedicated Infra Management Cluster or part of the Citrix cluster 

VMware Infrastructure

1. Where do you plan on running your Omnissa Horizon Services? 

Why ask? Customers can choose to run all Horizon-related services as a service in the cloud, managed by Omnissa, or on-premises.  If the customer chooses to run the services on premises, they can still run them on a different infrastructure.  If they choose to run them on the same cluster, please size additional server virtualization VMs.  

Guidelines on VM requirements can be found here: https://docs.vmware.com/en/VMware-Horizon-7/7.12/horizon-installation/GUID-858D1E0E-C566-4813-9D53-975AF4432195.html 

A typical on-premises implementation would need the following servers:
SQL deployment. What type of HA? (Always On, SQL Clustering with WSFC)
Unified Access Gateway Appliances (N+1)
vCenter (N+1)
Horizon Connection Server (N+1)
Optional View Composer (N+1)
Profile Management Infrastructure (File services)
AppLayering Infrastructure
Any Endpoint Management technologies?
Failure domain sizes (Prism Central sizing) 

Advanced Discovery

1. How do you optimize your image? 

Why ask? Image optimization is crucial in all EUC environments.  Optimizing the VM using tools provided by Citrix, Omnissa, or third-party vendors increases host density and improves user experience.
Citrix: https://support.citrix.com/article/CTX224676
VMware: https://flings.vmware.com/vmware-os-optimization-tool 

2. What is your Antivirus Strategy? 

Why ask? The right AV solution can have a massive impact on user experience and host density.  If not done correctly, all file operations lead to file scans, which increase CPU and IO on the host.