Introduction – Please Read First
These questions are here to help ensure you gather the necessary information from a customer/prospect, in addition to the specific metrics captured by tools like Collector or RVTools, so you can put together a solution that meets their requirements.
This list is not exhaustive, but it should be used as a guide to confirm you’ve done proper and thorough discovery. It is also imperative that you understand the reason each question is being asked, not just ask it. Each entry therefore includes both the question that should be asked and why the answer matters to designing an optimal solution.
Questions marked with an asterisk (*) will likely require reaching out to a specialist/Solution Architect resource at Nutanix to go deeper with the customer on that topic/question. Make sure you use the answers to these questions in the Scenario Objectives in Sizer when you create a new Scenario. These questions should help you identify the customer’s requirements, constraints, assumptions, and risks for your opportunity.
This is a live document; questions will be expanded and updated over time.
REVISION HISTORY
4/21/25 – 1st Revision – Mike McGhee
1/5/21 – 1st Publish – Matt Bator
Files
1. Is this replacing a current solution, or is this a net new project?
a. What’s the current solution?
Why ask? This question helps us understand the use case, current expectations, and the competitive landscape, as well as giving an initial idea of the size and scale of the current solution.
2. Is there a requirement to use an existing Nutanix cluster (with existing workload) or net new Nutanix cluster?
Why ask? If we’re sizing into an existing cluster, we need to understand the current hardware and workload. For licensing purposes, adding Files to an existing cluster requires the Unified Storage Pro license. A common scenario is adding storage-only nodes to an existing cluster to support the new Files capacity. If sizing a new cluster, we can potentially dedicate it to Files and Unified Storage.
3. Is this for NFS, SMB, or both? Which protocol versions (SMB 3.0, NFSv4, etc.)?
Why ask? We need to understand protocol to first validate they are using supported clients. Supported clients are documented in the release notes of each version of Files. Concurrent SMB connections also impact sizing with respect to the compute resources we need for the FSVMs to handle those clients. Max concurrent connections are also documented in the release notes of each version.
It also helps us validate supported authentication methods. For SMB, we require Active Directory with a 2008 domain functional level or higher. There is limited local user support in Files, but the file server must still be registered with a domain. For NFSv4 we support AD with Kerberos, LDAP, and unmanaged (no auth) shares. For NFSv3 we support LDAP and unmanaged.
4. Is there any explicit performance requirement for the customer? Do they require specific IOPS or performance targets?
Why ask? Every FSVM has an expected performance envelope. The sizing guide and performance tech note on the Nutanix Portal give a relative expectation of the maximum read and write throughput and the maximum read and write IOPS per FSVM.
Read and write throughput requirements are integrated into Nutanix Sizer and will impact the recommended number of FSVMs. They may also impact the hardware configuration, including the choice of NICs, leveraging RDMA between the CVMs, or iSER (supported since the Files 5.0 release via a performance profile), as well as the choice of all-flash vs. hybrid.
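For a rough illustration of how these targets translate into FSVM count, the sketch below (Python) compares a throughput/IOPS requirement against assumed per-FSVM maximums. The per-FSVM numbers are placeholders, not published figures; take the real envelope from the Files sizing guide for the FSVM size you plan to use, or simply let Sizer do this calculation for you.

    import math

    # Placeholder per-FSVM performance envelope. Substitute the values from the
    # current Files sizing guide / performance tech note (these are assumptions).
    MAX_READ_MBPS_PER_FSVM = 1000
    MAX_WRITE_MBPS_PER_FSVM = 500
    MAX_IOPS_PER_FSVM = 20000

    def fsvms_for_performance(read_mbps, write_mbps, iops):
        """Return the FSVM count implied by the worst-case performance dimension."""
        by_read = math.ceil(read_mbps / MAX_READ_MBPS_PER_FSVM)
        by_write = math.ceil(write_mbps / MAX_WRITE_MBPS_PER_FSVM)
        by_iops = math.ceil(iops / MAX_IOPS_PER_FSVM)
        return max(3, by_read, by_write, by_iops)  # a file server always has at least 3 FSVMs

    print(fsvms_for_performance(read_mbps=2500, write_mbps=800, iops=45000))  # -> 3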
5. Do they have any current performance collection from their existing environment?
a. Windows File Server = Perfmon
b. NetApp = Perfstat
c. Dell = DPACK / Live Optics
Why ask? Seeing data from an existing solution can help validate the performance numbers so that we size accurately for performance.
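If the customer can only hand over a raw Perfmon export (converted to CSV, e.g. with relog), a minimal sketch like the following can summarize it into average, 95th-percentile, and peak figures for sizing. The CSV path and counter column name are hypothetical; match them to whatever counters were actually collected on the existing file server.

    import csv
    import statistics

    # Hypothetical counter column from a Perfmon CSV export.
    COUNTER = r"\\FILESERVER01\LogicalDisk(_Total)\Disk Bytes/sec"

    def summarize(csv_path, column):
        values = []
        with open(csv_path, newline="", encoding="utf-8-sig") as f:
            for row in csv.DictReader(f):
                try:
                    values.append(float(row.get(column, "")))
                except ValueError:
                    continue  # skip blank or malformed samples
        values.sort()
        return {
            "samples": len(values),
            "avg": statistics.mean(values),
            "p95": values[int(len(values) * 0.95) - 1],
            "peak": values[-1],
        }

    print(summarize("perfmon_export.csv", COUNTER))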
6. What are the specific applications using the shares?
a. VDI (Home Shares)
b. PACS (Imaging)
c. Video (Streaming)
d. Backup (Streaming)
Why ask? When sizing for storage space utilization, the application performing the writes can impact storage efficiency. Backup, video, and imaging data are most commonly compressed by the application; for those applications we should not include compression savings when sizing, only Erasure Coding. For general-purpose shares with various document types, assume some level of compression savings.
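As a back-of-envelope illustration of why this matters, the sketch below applies assumed savings ratios per workload type. The ratios are illustrative assumptions only; Sizer applies its own defaults based on the workload profile you select.

    # Illustrative savings assumptions per workload type -- not published figures.
    ASSUMED_SAVINGS = {
        "general_purpose": {"compression": 1.3, "erasure_coding": 1.4},  # mixed documents
        "pacs_imaging":    {"compression": 1.0, "erasure_coding": 1.4},  # already compressed
        "backup_or_video": {"compression": 1.0, "erasure_coding": 1.4},  # already compressed
    }

    def effective_capacity_tib(usable_tib, workload):
        s = ASSUMED_SAVINGS[workload]
        return usable_tib * s["compression"] * s["erasure_coding"]

    for wl in ASSUMED_SAVINGS:
        print(wl, round(effective_capacity_tib(100, wl), 1), "TiB effective from 100 TiB usable")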
7. Are they happy with performance or looking to improve performance?
Why ask? If the customer has existing performance data, it’s good to understand if they are expecting equivalent or better performance from Files. This could impact sizing, including going from a hybrid to an all flash cluster.
8. How many concurrent user connections are expected?
Why ask? Concurrent SMB connections are a required sizing parameter. Each FSVM needs enough memory assigned to support a given number of users. A standard share is owned by one FSVM; a distributed share is owned by all FSVMs and is load balanced based on top-level directories. We need to ensure any one FSVM can support all concurrent clients to the standard share or top-level directory with the highest expected connections. We should also ensure that sizing for concurrent connections accounts for N-1 redundancy to cover node maintenance or failure.
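A minimal sketch of the cluster-wide math, assuming a placeholder per-FSVM connection limit; the real limit depends on FSVM memory and is documented in the Files release notes. Remember that for a standard share a single FSVM must still be able to carry the busiest share on its own.

    import math

    # Placeholder: concurrent SMB connections one FSVM can absorb at its assigned
    # memory size -- take the real figure from the Files release notes / sizing guide.
    ASSUMED_CONNECTIONS_PER_FSVM = 1000

    def fsvms_for_connections(concurrent_users, per_fsvm=ASSUMED_CONNECTIONS_PER_FSVM):
        """FSVM count that still carries every connection with one FSVM offline (N-1)."""
        needed = math.ceil(concurrent_users / per_fsvm)  # FSVMs needed at full strength
        return max(3, needed + 1)                        # +1 keeps headroom during maintenance/failure

    print(fsvms_for_connections(3500))  # -> 5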
9. What is your current share configuration, including the number of shares?
Why ask? Files has a soft (recommended) limit of 100 shares per FSVM. We can also leverage nested shares to match an existing environment if more shares are needed; Files supports 5,000 nested shares as of the 4.4 release.
10. Does their directory structure have a large number of folders in the share root?
Why ask? This indicates a large number of top-level directories, making a distributed share a good choice for load balancing and data distribution.
11. Are there files in the share root?
Why ask? Distributed shares cannot store files in the share root. If an application must store files in the root then you should plan for sizing using standard shares. Alternatively, a nested share can be used.
12. What is the largest number of files/folders in a single folder?
Why ask? Nutanix Files is designed to store millions of files within a single share and billions of files across a multi-node cluster with multiple shares. To achieve fast response times in environments with high file and directory counts, some thought must be given to directory design. Placing millions of files or directories into a single directory makes the file enumeration that must occur before file access very slow. The optimal approach is to branch out from the share root with leaf directories up to a width (directory or file count in a single directory) no greater than 100,000; subdirectories should have a similar width. If file or directory counts get very wide within a single directory, clients and applications will see slow response times. Increasing FSVM memory (up to 96 GB) to cache metadata, and increasing the number of vCPUs, can help improve performance for these environments, especially if the directory design guidance above is followed.
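If the customer can run a quick audit against the existing file server, a sketch like this (run from a client with the share mounted, path passed as an argument) flags directories that exceed the recommended width:

    import os
    import sys

    WIDTH_LIMIT = 100_000  # recommended maximum files + subdirectories per directory

    def wide_directories(root, limit=WIDTH_LIMIT):
        """Yield (path, width) for every directory wider than the recommended limit."""
        for dirpath, dirnames, filenames in os.walk(root):
            width = len(dirnames) + len(filenames)
            if width > limit:
                yield dirpath, width

    if __name__ == "__main__":
        for path, width in wide_directories(sys.argv[1]):
            print(f"{width:>10,}  {path}")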
13. What is the total size of the largest single directory or share?
Why ask? Starting with the Files 5.0 release, Nutanix supports standard shares up to 1 PiB (before compression), and top-level directories in a distributed share up to 1 PiB. These limits are based on the volume group backing the standard share or top-level directory. We need to ensure no single top-level directory or standard share surpasses 1 PiB.
14. What are the total storage and compute requirements, including future growth?
Why ask? Core sizing question to ensure adequate storage space is available with the initial purchase and over the expected timeframe.
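A simple compounding projection is usually enough to sanity-check the growth figure the customer gives you; the numbers below are placeholders.

    def projected_capacity_tib(current_tib, annual_growth_pct, years):
        """Compound the current dataset by the expected annual growth rate."""
        return current_tib * (1 + annual_growth_pct / 100) ** years

    # Example: 200 TiB today, 20% annual growth, 3-year purchase horizon.
    print(round(projected_capacity_tib(200, 20, 3), 1))  # ~345.6 TiB, before snapshots and overheads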
15. What percent of data is considered to be active/hot?
Why ask? Understanding the expected active dataset can help with sizing the SSD tier for a hybrid solution. Performance and statistical collection from an existing environment may help with this determination.
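As a rough illustration of how the answer feeds SSD tier sizing, assuming a 25% headroom factor (an assumption) and ignoring RF/replication overhead:

    def ssd_tier_tib(dataset_tib, hot_pct, headroom=1.25):
        """Hot working set plus headroom; the 25% headroom factor is an assumption."""
        return dataset_tib * (hot_pct / 100) * headroom

    print(round(ssd_tier_tib(200, 10), 1))  # 10% hot of 200 TiB -> ~25 TiB of SSD, before RF overhead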
16. What is your storage change rate?
Why ask? Change rate influences snapshot overheads based on retention schedules. Nutanix Sizer will ask what the change rate is for the dataset to help with determining the storage space impact of snapshot retention.
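A first-order approximation of that impact, ignoring overlap between snapshots (Sizer's model is more complete, so treat this only as a sanity check):

    def snapshot_overhead_tib(dataset_tib, daily_change_pct, retained_daily, retained_weekly=0):
        """Each retained snapshot is assumed to hold roughly one interval's worth of
        changed data; overlap between snapshots is ignored, so this overestimates."""
        daily_change_tib = dataset_tib * daily_change_pct / 100
        return daily_change_tib * retained_daily + daily_change_tib * 7 * retained_weekly

    # Example: 200 TiB dataset, 2% daily change, keep 7 dailies and 4 weeklies.
    print(round(snapshot_overhead_tib(200, 2, 7, 4), 1))  # -> 140.0 TiB of snapshot overhead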
17. Do you have any storage efficiency details from the current environment (dedup, compression, etc.)?
Why ask? This helps determine whether data reduction techniques like dedup and compression are effective against the customer’s data. Files does not support deduplication today, so any dedup savings should not be taken into account when sizing for Files. If the data is compressible in the existing environment, it should also be compressible with Nutanix compression.
18. What is the block size of the current solution (if known)?
Why ask? Block size can impact storage efficiency. A solution that stores many small files with a fixed block size may show different space consumption when migrated to Files, which uses variable block lengths based on file size; for files over 64 KB, Files uses a 64 KB block size. In some cases, datasets with a large number of large files have been slightly less space efficient when moved to Nutanix Files. Understanding this up front can help explain differences following migrations.
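The illustrative sketch below shows how rounding files up to whole blocks changes allocated space for a hypothetical file-size mix. It does not model Files' actual sub-64 KB behavior, so treat it as a way to explain the effect to a customer rather than predict exact consumption.

    import math

    def allocated_bytes(file_sizes, block_size):
        """Space consumed when every file is rounded up to whole blocks."""
        return sum(math.ceil(size / block_size) * block_size for size in file_sizes)

    # Hypothetical mix: 100,000 small 4 KB files plus 50 large 10 GiB files.
    sizes = [4 * 1024] * 100_000 + [10 * 1024**3] * 50
    for bs_kb in (4, 32, 64):
        alloc = allocated_bytes(sizes, bs_kb * 1024)
        print(f"{bs_kb:>2} KB blocks: {alloc / 1024**4:.3f} TiB allocated")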
19. Is there a requirement for Self-Service Restore (SSR)?
Why ask? Nutanix Files uses two levels of snapshots. SSR snapshots occur at the file share level via ZFS; these snapshots have their own schedule, and Sizer asks for their frequency and change rate under “Nutanix Files Snapshots.” The SSR schedule and retention periods will impact overall storage consumption. Nutanix Files snapshots increase both the amount of licensing required and the total storage required, so it’s important to get this right during the sizing process.
20. What are the customer’s Data Protection/Disaster Recovery requirements and what is their expected snapshot frequency and retention schedule (hourly, daily, weekly, etc.)?
Why ask? Data Protection snapshots occur at the AOS (protection domain) level via the NDSF. The schedule and retention policy are managed against the protection domain for the file server instance and will impact overall storage consumption. Sizer asks for the local and remote snapshot retention under “Data Protection.”
Files supports a 1-hour RPO today and will support NearSync starting with the AOS 5.11.1 release in conjunction with Files 3.6. Keep node density (raw storage) in mind when determining RPO: both 1-hour and NearSync RPOs require hybrid nodes with 40 TB or less raw storage, or all-flash nodes with 48 TB or less raw storage. Denser configurations can only support a 6-hour RPO. These requirements will likely change, so double-check the latest guidance when sizing dense storage nodes, and confirm that the underlying nodes and configurations support NearSync per the latest AOS requirements if NearSync will be used.
21. Does the customer have an Active/Active requirement?
Why ask? If the customer needs active/active file shares representing the same data in different sites, we need to position a third party called Peer Software. Peer performs near-real-time replication of data between heterogeneous file servers. Peer utilizes Windows VMs that consume CPU and memory, which you may want to size into the Nutanix clusters intended for Files.
Files 5.0 introduced an active/active solution called VDI sync, specifically for user profile data. The solution supports activity against user-specific profile data within one site at a time; if the user moves to another site, the VDI session can follow and localize access for that user.
22. Is there an auditing requirement? If so, which vendor or vendors?
Why ask? Nutanix is working to integrate with three main third-party auditing vendors today: Netwrix (supported and integrated with Files), Varonis (integration in progress), and Stealthbits (not yet integrated). Nutanix Files also has a native auditing solution in File Analytics.
Along with ensuring audit vendor support, a given solution may require a certain amount of CPU, memory, and storage (to hold auditing events). Be sure to include any vendor-specific sizing in the configuration. File Analytics, for example, can require 8 vCPUs, 48 GB of memory, and 3 TB of storage.
Data Lens is a SaaS offering in the public cloud, so you will need to ensure the customer is comfortable with a cloud solution.
23. Is there an Antivirus requirement? If so, which vendors?
Why ask? Files supports specific Antivirus vendors today with respect to ICAP integration. For a list of supported vendors see the software compatibility matrix on the Nutanix Portal and sort by Nutanix Files:
https://portal.nutanix.com/page/documents/compatibility-interoperability-matrix/software
If centralized virus scan servers are to be used, include their compute requirements when sizing the overall solution.
24. Is there a backup requirement? If so, which vendor or vendors?
Why ask? Files has full Changed File Tracking (CFT) support with HYCU, Commvault, Veeam, Veritas, and Storware. There are also vendors like Rubrik that are validated but do not use CFT. If including a backup vendor on the same platform, you may need to size for any virtual appliances that will also run on Nutanix.
25. Is the customer using DFS (Distributed File System) Namespaces (DFS-N)?
Why ask? This is less about sizing and more about implementation. Prior to Files 3.5.1, only distributed shares were supported with DFS-N. Starting with 3.5.1, both distributed and standard shares are fully supported as folder targets with DFS-N.
Files 5.1 introduced a native unified namespace to combine different file servers into a common namespace.
26. Does the customer have tiering requirements?
Why ask? Files supports tiering, which automatically moves data off Nutanix Files to an S3-compatible object service, either on-premises or in the cloud. When scoping future requirements, customers may size for a given amount of on-premises storage and a larger amount of tiered storage for longer-term retention.
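A trivial split like the one below can frame the conversation about how much capacity stays on the Files cluster versus the object tier; the tiered percentage is whatever the customer's retention policy implies, and the numbers here are placeholders.

    def capacity_split_tib(total_tib, pct_tiered):
        """Split total capacity between the Files cluster and the S3 tiering target."""
        tiered = total_tib * pct_tiered / 100
        return {"on_prem_tib": total_tib - tiered, "tiered_tib": tiered}

    # Example: 500 TiB total, with 60% expected to age out to the object tier.
    print(capacity_split_tib(500, 60))  # {'on_prem_tib': 200.0, 'tiered_tib': 300.0}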