Nutanix Unified Storage – Files Discovery Guidance (Revised 4/24/25)

Introduction – Please Read First

These questions are here to help ensure that you’re gathering the necessary information from a customer/prospect to put together an appropriate solution that meets their requirements, in addition to capturing specific metrics from tools like Collector or RVTools.

This list is not exhaustive, but should be used as a guide to make sure you’ve done proper and thorough discovery.  Also, it is imperative that you don’t just ask a question without understanding the reason why it is being asked.  We’ve structured these questions with not only the question that should be asked, but why we are asking the customer to provide an answer to that question and why it matters to provide an optimal solution.

Questions marked with an asterisk (*) will likely require reaching out to a specialist/Solution Architect resource at Nutanix to go deeper with the customer on that topic/question.  Make sure you use the answers to these questions in the Scenario Objectives in Sizer when you create a new Scenario.  These questions should help guide you as to what the customer requirements, constraints, assumptions, and risks are for your opportunity.

This is a living document, and questions will be expanded and updated over time.

REVISION HISTORY
4/21/25 – 1st Revision – Mike McGhee
1/5/21 – 1st Publish – Matt Bator


Files

1.  Is this replacing a current solution, or is this a net new project?
     a.  What’s the current solution?

Why ask? This question helps us understand the use case, current expectations, and what the competitive landscape may look like, as well as giving an initial idea of the size/scale of the current solution.

2.  Is there a requirement to use an existing Nutanix cluster (with existing workload) or net new Nutanix cluster?

Why ask? If we’re sizing into an existing cluster we need to understand current hardware and current workload.  For licensing purposes adding Files to an existing cluster means the Unified Storage Pro license. A common scenario has been to add storage only nodes to an existing cluster to support the new Files capacity.  If sizing into a new cluster we can potentially dedicate this cluster to Files and Unified Storage. 

3.  Is this for NFS, SMB or both? Which protocol versions (SMB 3.0, NFSv4, etc.)?

Why ask?  We need to understand the protocol to first validate that they are using supported clients.  Supported clients are documented in the release notes of each version of Files.  Concurrent SMB connections also impact sizing with respect to the compute resources the FSVMs need to handle those clients.  Max concurrent connections are also documented in the release notes of each version.

It also helps us validate supported authentication methods.  For SMB, we require Active Directory where we support 2008 domain functional level or higher.  There is limited local user support for Files but the file server must still be registered with a domain.  For NFS v4 we support AD with Kerberos, LDAP and Unmanaged (no auth) shares.  For NFS v3 we support LDAP and Unmanaged. 

4.  Is there any explicit performance requirement for the customer? Do they require specific IOPS or performance targets? 

Why ask?  Every FSVM has an expected performance envelope.  There is a sizing guide and performance tech note on the Nutanix Portal which give a relative expectation for the max read and write throughput per FSVM and the max read or write IOPS per FSVM.

Throughput based on reads and writes is integrated into Nutanix Sizer and will impact the recommended number of FSVMs.  It may also impact the hardware configuration, including the choice of NICs, leveraging RDMA between the CVMs, or iSER (supported since the Files 5.0 release via a performance profile), as well as the choice of all-flash vs. hybrid.

5.  Do they have any current performance collection from their existing environment?
      a.  Windows File Server = Perfmon
      b.  NetApp = perfstat
      c.  Dell DPACK, Live Optics

Why ask?  Seeing data from an existing solution can help validate the performance numbers so that we size accurately for performance. 

6.  What are the specific applications using the shares?
       a.  VDI (Home Shares)
       b.  PACS (Imaging)
       c.  Video (Streaming)
       d.  Backup (Streaming)

Why ask?  When sizing for storage space utilization, the application performing the writes can impact storage efficiency.  Backup, video, and image data are most commonly compressed by the application.  For those applications we should not include compression savings when sizing, only Erasure Coding.  For general-purpose shares with various document types, assume some level of compression savings.
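That sizing rule can be sketched as follows.  The 30% savings figure for general-purpose shares is an assumed placeholder for illustration; the real ratio should come from the customer’s data or Sizer defaults.

```python
PRECOMPRESSED = {"backup", "video", "imaging"}  # apps that compress their own data

def sized_capacity_tib(logical_tib, workload, general_compression=0.30):
    """Capacity after compression savings; skip compression for workloads
    whose data arrives already compressed (count only Erasure Coding there)."""
    if workload.lower() in PRECOMPRESSED:
        return logical_tib
    return logical_tib * (1 - general_compression)

print(sized_capacity_tib(100, "backup"))   # no compression savings assumed
print(sized_capacity_tib(100, "general"))  # assumed 30% savings applied
```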

7.  Are they happy with performance or looking to improve performance?

Why ask?  If the customer has existing performance data, it’s good to understand if they are expecting equivalent or better performance from Files.  This could impact sizing, including going from a hybrid to an all flash cluster. 

 8.  How many expected concurrent user connections?

Why ask? Concurrent SMB connections are a required sizing parameter.  Each FSVM needs enough memory assigned to support a given number of users.  A standard share is owned by one FSVM.  A distributed share is owned by all FSVMs and is load balanced based on top-level directories.  We need to ensure any one FSVM can support all concurrent clients to the standard share or top-level directory with the highest expected connections.  We should also ensure that sizing for concurrent connections takes N-1 redundancy into account for node maintenance, failures, etc.
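The N-1 logic above can be expressed as a quick back-of-the-envelope check.  The per-FSVM connection limit used here is an assumed placeholder; take the real figure from the release notes of the Files version being sized.

```python
import math

def fsvms_required(peak_connections, conns_per_fsvm):
    """Minimum FSVM count such that peak concurrent connections still fit
    after one FSVM is taken out for maintenance or lost to failure (N-1)."""
    n = math.ceil(peak_connections / conns_per_fsvm)
    return max(n + 1, 3)  # a Files deployment starts with a minimum of 3 FSVMs

# Example: 4,000 peak connections against an assumed 1,000-connection FSVM limit
print(fsvms_required(4000, 1000))  # -> 5 (4 to carry the load, plus 1 for N-1)
```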

 9.  What is your current share configuration including number of shares?

Why ask?  Files has a soft (recommended) limit of 100 shares per FSVM.  If more shares are needed, we can leverage nested shares to match an existing environment; since the 4.4 release, Files supports up to 5,000 nested shares.

10.  Does their directory structure have a large number of folders in the share root?

Why ask?  This indicates a large number of top level directories making a distributed share a good choice for load balancing and data distribution.

11. Are there files in the share root?

Why ask?  Distributed shares cannot store files in the share root.  If an application must store files in the root then you should plan for sizing using standard shares.  Alternatively, a nested share can be used. 

 12. What is the largest number of files/folders in a single folder?

Why ask?  Nutanix Files is designed to store millions of files within a single share and billions of files across a multi-node cluster with multiple shares.  To achieve fast response times in environments with high file and directory counts, it’s necessary to give some thought to directory design.  Placing millions of files or directories into a single directory makes the file enumeration that must occur before file access very slow.  The optimal approach is to branch out from the share root with leaf directories up to a width (directory or file count in a single directory) no greater than 100,000.  Subdirectories should have similar directory width.  If file or directory counts get very wide within a single directory, this can cause slow data response times for clients and applications.  Increasing FSVM memory up to 96 GB to cache metadata can help improve performance for these environments, especially if the directory and file design guidance above is followed.

13. What is the total size of the largest single directories?

Why ask?  Nutanix supports standard shares up to 1 PiB starting with the Files 5.0 release (before compression), and top-level directories in a distributed share up to 1 PiB.  These limits are based on the volume group supporting the standard share or top-level directory.  We need to ensure no single top-level directory or standard share surpasses 1 PiB.

14.  What are the total storage and compute requirements, including future growth?

Why ask?  Core sizing question to ensure adequate storage space is available with the initial purchase and over the expected timeframe. 

15.  What percent of data is considered to be active/hot?

 Why ask?  Understanding the expected active dataset can help with sizing the SSD tier for a hybrid solution.  Performance and statistical collection from an existing environment may help with this determination.

16.  What is your storage change rate?

Why ask?  Change rate influences snapshot overheads based on retention schedules.  Nutanix Sizer will ask what the change rate is for the dataset to help with determining the storage space impact of snapshot retention.
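The interaction of change rate and retention can be sketched with a deliberately simplified model (Sizer’s actual math may differ): assume each retained daily snapshot preserves one day’s worth of changed data.

```python
def snapshot_overhead_tib(dataset_tib, daily_change_rate, retained_snapshots):
    """Rough upper bound on snapshot space: each retained daily snapshot
    is assumed to preserve one day's worth of changed data."""
    return dataset_tib * daily_change_rate * retained_snapshots

# 100 TiB dataset, 2% daily change rate, 30 daily snapshots retained:
print(snapshot_overhead_tib(100, 0.02, 30))  # -> 60.0 (TiB of snapshot deltas)
```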

17.  Do you have any storage efficiency details from the current environment (dedup, compression, etc.)?

Why ask?  Helps to determine if data reduction techniques like dedup and compression are effective against the customer’s data.  Files does not support the use of deduplication today, so any dedup savings should not be taken into account when sizing for Files.  If the data is compressible in the existing environment, it should also be compressible with Nutanix compression.

18.  What is the block size of the current solution (if known)?

Why ask?  Block size can impact storage efficiency.  A solution which has many small files with a fixed block size may show different space consumption when migrated to Files, which uses variable block lengths based on file size.  For files over 64KB in size, Files uses a 64KB block size.  In some cases a large number of large files have been slightly less efficient when moved to Nutanix Files.  Understanding this up front can help explain differences following migrations.
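The effect can be illustrated by rounding each file up to whole blocks.  The 64 KiB figure matches the Files behavior described above; the 4 KiB comparison size is an arbitrary assumption standing in for a legacy array’s fixed block size.

```python
import math

def allocated_bytes(file_sizes, block):
    """Total bytes consumed when every file is rounded up to whole blocks."""
    return sum(math.ceil(size / block) * block for size in file_sizes)

sizes = [70_000, 1_000_000, 130_000]            # logical file sizes in bytes
print(allocated_bytes(sizes, block=64 * 1024))  # Files-style 64 KiB blocks
print(allocated_bytes(sizes, block=4 * 1024))   # hypothetical 4 KiB blocks
```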

19.  Is there a requirement for Self Service Restore (SSR)?

Why ask?  Nutanix Files uses two levels of snapshots; SSR snapshots occur at the file share level via ZFS.  These snapshots have their own schedule, and Sizer asks for their frequency and change rate under “Nutanix Files Snapshots.”  The schedule and retention periods associated with SSR will impact overall storage consumption.  Nutanix Files Snapshots increase both the amount of licensing required and the total storage required, so it’s important to get this right during the sizing process.

20.  What are the customer’s Data Protection/Disaster Recovery requirements and what is their expected snapshot frequency and retention schedule (hourly, daily, weekly, etc.)?

Why ask? Data Protection snapshots occur at the AOS (protection domain) level via the NDSF.  The schedule and retention policy are managed against the protection domain for the file server instance and will impact overall storage consumption.  Sizer asks for the local and remote snapshot retention under “Data Protection.”
Files supports a 1-hour RPO today and will support NearSync in the AOS 5.11.1 release in conjunction with Files 3.6.  Keep node density (raw storage) in mind when determining RPO.  Both 1-hour and NearSync RPO require hybrid nodes with 40 TB or less raw storage, or all-flash nodes with 48 TB or less raw storage.  Denser configurations can only support a 6-hour RPO.  These requirements will likely change, so double-check the latest guidance when sizing dense storage nodes, and confirm that the underlying nodes and configurations support NearSync per the latest AOS requirements if NearSync will be used.
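The density rules above can be captured in a simple check.  These thresholds are release-dependent and must be verified against current documentation before being relied on.

```python
def max_supported_rpo_hours(raw_tb_per_node, all_flash):
    """Return 1 (hour) if node density permits 1hr/NearSync RPO per the
    guidance above, else 6 (hours).  Thresholds change between releases."""
    limit_tb = 48 if all_flash else 40
    return 1 if raw_tb_per_node <= limit_tb else 6

print(max_supported_rpo_hours(32, all_flash=False))  # -> 1
print(max_supported_rpo_hours(60, all_flash=True))   # -> 6
```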

21. Does the customer have an Active/Active requirement?

Why ask?  If the customer needs active/active file shares in different sites which represent the same data, we need to position a third-party solution from Peer Software.  Peer performs near-real-time replication of data between heterogeneous file servers.  Peer utilizes Windows VMs which consume some CPU and memory that you may want to size into the Nutanix clusters intended for Files.

Files 5.0 introduced an active/active solution called VDI sync, specifically for user profile data.  The solution supports activity against user-specific profile data within one site at a time.  If the user moves to another site, the VDI session can follow and localize access for that user.

22. Is there an auditing requirement?  If so, which vendor or vendors?

Why ask?  Nutanix is working to integrate with three main third-party auditing vendors today: Netwrix (supported and integrated with Files), Varonis (working on integration), and Stealthbits (not yet integrated).  Nutanix Files also has a native auditing solution in File Analytics.

Along with confirming audit vendor support, a given solution may require a certain amount of CPU, memory, and storage (to hold auditing events).  Be sure to include any vendor-specific sizing in the configuration.  File Analytics, for example, could require 8 vCPUs, 48 GB of memory, and 3 TB of storage.

Data Lens is a SaaS offering in the public cloud, so you will need to ensure the customer is comfortable with a cloud solution. 

23. Is there an Antivirus requirement? If so, which vendors?

Why ask? Files supports specific Antivirus vendors today with respect to ICAP integration.  For a list of supported vendors see the software compatibility matrix on the Nutanix Portal and sort by Nutanix Files:

https://portal.nutanix.com/page/documents/compatibility-interoperability-matrix/software

If centralized virus scan servers are to be used you will want to include their compute requirements into sizing the overall solution.

24. Is there a backup requirement? If so, which vendor or vendors?

Why ask?  Files has full changed file tracking (CFT) support with HYCU, Commvault, Veeam, Veritas, and Storware.  There are also vendors like Rubrik who are validated but do not use CFT.  If including a backup vendor on the same platform, you may need to size for any virtual appliances which may also run on Nutanix.

25. Is the customer using DFS (Distributed File System) Namespaces (DFS-N)?

Why ask?  This is less about sizing and more about implementation.  Prior to Files 3.5.1, only distributed shares were supported with DFS-N.  Starting with 3.5.1, both distributed and standard shares are fully supported as folder targets with DFS-N.

Files 5.1 introduced a native unified namespace to combine different file servers into a common namespace.  

26.  Does the customer have tiering requirements?

Why ask?  Files supports tiering, which automatically moves data off Nutanix Files to an S3-compliant object service either on-premises or in the cloud.  In scoping future requirements, customers may size for a given amount of on-premises storage and a larger amount of tiered storage for longer-term retention.

Automation

Introduction – Please Read First

These questions are here to help ensure that you’re gathering the necessary information from a customer/prospect to put together an appropriate solution that meets their requirements, in addition to capturing specific metrics from tools like Collector or RVTools.

This list is not exhaustive, but should be used as a guide to make sure you’ve done proper and thorough discovery.  Also, it is imperative that you don’t just ask a question without understanding the reason why it is being asked.  We’ve structured these questions with not only the question that should be asked, but why we are asking the customer to provide an answer to that question and why it matters to provide an optimal solution. 

Questions marked with an asterisk (*) will likely require reaching out to a specialist/Solution Architect resource at Nutanix to go deeper with the customer on that topic/question.  Make sure you use the answers to these questions in the Scenario Objectives in Sizer when you create a new Scenario.  These questions should help guide you as to what the customer requirements, constraints, assumptions, and risks are for your opportunity. 

This is a living document, and questions will be expanded and updated over time.


Calm

Discovery Questions

1.  How are you currently automating IT service delivery today?  Do you have any of the following:
     a.  IAAS – Infrastructure as a service
     b.  PaaS – Platform as a service
     c.  SaaS – Software as a service

Why ask?  It helps us understand the customer’s maturity level when it comes to application deployment and could uncover some of the competitive infrastructure.  It also surfaces competitive products, or other products we may be able to work with or integrate with.

2.  Are standardization and compliance important to you in your IT automation delivery strategy?
a.  Do you currently use a business intake or self-service request process via a solution such as ServiceNow, Cherwell, Remedy, etc. to automate IT service delivery?

Why ask? Gives us the opportunity to discuss our SNOW plugin. Also helps understand which front end they will use for the Calm implementation.

3. Do you have any contracts with the cloud providers (AWS, Azure or GCP)?
a.  What are the specific use cases or workload profiles consumed from the cloud providers?

Why ask? Helps us understand which providers they may consume with Calm. Helps us understand which services are still on-prem and available as a target for AOS. May help position Beam. Also helps us understand if they have a Microsoft EA which may force their spend to go to Azure.

4.  Can you describe the process, and do you have any documentation, for VM, OS, or database deployment and management?

Why ask? It helps uncover their current pain points and possibly competitive landscape.  (This would typically be asked when talking to the Infrastructure Team) If the process is already well documented/defined, the hardest part of the implementation is already done.

5.  What tools do you leverage to automate your Windows or Linux Server builds beyond the imaging / template / cloning process?
      a.  vRA
      b.  Terraform
      c.  Puppet
      d.  Chef
      e.  Ansible
      f.  Salt
      g.  SCCM

Why ask? Helps understand the competitive landscape as well as integration points that will need to be solved.

6.  How many VMs are under management today?

Why ask? It helps us estimate the size of the deal for licensing.

7.  What does your infrastructure footprint for managing/running containers look like?
      a.  What tools are you using?
      b.  How many containers?
      c.  If Kubernetes, how many pods and containers?
      d.  Which version of Kubernetes? (AKS/EKS/Anthos/Openshift/Tanzu/etc)

Why ask? Helps understand their current place on the journey to cloud native apps.  If they are still investigating, we have an option to position Karbon. If they are using another product already, we may be able to provide the infrastructure for that environment. 

8.  In your application development organization, is Continuous Integration/Continuous Delivery (CI/CD) an operating principle?
      a.  What tools do you leverage in your current/targeted pipeline?
            i.    Jenkins
            ii.   Atlassian Bamboo
            iii.  CircleCI
            iv.  GitLab CI/CD
            v.   Azure DevOps

Why ask? Helps us understand the integrations needed for a successful implementation.

Resources:

Glossary of Terms: https://github.com/nutanixworkshops/calmbootcamp/blob/master/appendix/glossary.rst 

xPert Automation team page: http://ntnx.tips/xPertAutomation (Internal Only)

LinkedIn Learning – DevOps Foundations Learning Plan: https://www.nutanixuniversity.com//lms/index.php?r=coursepath/deeplink&id_path=79&hash=2ce3cb1f946cc3770bd466853e68ee36ddbcf5e1&generated_by=19794

Udacity+Nutanix: Hybrid Cloud Engineer Nanodegree

Calls to action/next steps:

1.  Create a SFDC opportunity, quote a Calm+Services bundle, add a DevOps resource request
2.  Test Drive: Automation
3.  Calm bootcamps (+Karbon, +CI/CD, etc.) (Internal Only)

Databases

Introduction – Please Read First

These questions are here to help ensure that you’re gathering the necessary information from a customer/prospect to put together an appropriate solution that meets their requirements, in addition to capturing specific metrics from tools like Collector or RVTools.

This list is not exhaustive, but should be used as a guide to make sure you’ve done proper and thorough discovery.  Also, it is imperative that you don’t just ask a question without understanding the reason why it is being asked.  We’ve structured these questions with not only the question that should be asked, but why we are asking the customer to provide an answer to that question and why it matters to provide an optimal solution. 

Questions marked with an asterisk (*) will likely require reaching out to a specialist/Solution Architect resource at Nutanix to go deeper with the customer on that topic/question.  Make sure you use the answers to these questions in the Scenario Objectives in Sizer when you create a new Scenario.  These questions should help guide you as to what the customer requirements, constraints, assumptions, and risks are for your opportunity. 

This is a living document, and questions will be expanded and updated over time.


Databases

Generic

 1.  Is this replacing a current solution, or is this a net new project?  What’s the current solution?

Why ask? This question helps us understand the use case, any current expectations, and what the competitive landscape may look like.

2.  Is the current environment coming to the end of a contract and due for contract renewal/hardware refresh?  How soon?

Why ask?  It helps us understand how serious the customer is about migrating and what the drivers are (usually cost), and it helps create a pipeline.

3.  Is the current infrastructure solution bare-metal/3-tier, virtualized, or an engineered appliance?  Provide details.
      a.  e.g. AIX, Solaris SPARC, VMware virtualized, OVM/KVM, Exadata/ODA (Oracle)
      b.  FC SAN/speed/10GbE Ethernet/iSCSI; storage array: vendor, all-flash/hybrid
      c.  If possible use automated means to capture configuration and performance information to help with capturing as much information as possible (RVTools, Collector, AWR, LiveOptics, MAP, etc.)

Why ask?  To determine which environment is easier to go after as a starting point.

4.  What are the 3 major pain points in current environment(s)  ( other than end of life/contract).  Examples:
      a.  License Consolidation
      b.  Managing multiple GUIs ( need single pane of Glass)
      c.  Life Cycle Management/Patching
      d.  Performance
      e.  Storage Sprawl due to multiple copies
      f.  Provisioning

Why ask?   Helps us articulate Nutanix Value for Relational Database Workloads.

5.  How many sites/environments (PROD/DR/QA/DEV/Test)?

Why ask?  Helps us articulate a disaster recovery/backup strategy.

6.  How are backups done today: native or third-party tools, leveraging snaps/clones?

Why ask?  To learn whether they are using third-party DR tools (Zerto/Actifio/SRM) or native database replication, and whether they are using third-party backup (Commvault/Veeam/Veritas) or native tools.

7.  Workload types?  OLTP (Online Transactional Processing) / OLAP (Online Analytical Processing) / DWH (Data Warehouse)

Why ask?  Helps us identify transactional (OLTP) vs. analytical (OLAP/DWH) workloads and their latency sensitivity.

8.  Largest Database size?

Why ask?  Beyond 30 TB, virtualizing on hyperconverged infrastructure may not be beneficial; we need to understand the use case.

9.  Performance characteristics desired: bandwidth/IOPS/latency.  These can be given directly from the customer if known, or gathered using local operating system metrics (perfmon/top), via a discovery tool or script like AWR for Oracle, or with a tool like LiveOptics, SolarWinds, etc.

Why ask?  Accurate sizing

10.  Type of database clustering used, if any?

Why ask?  Determine if there are potentially any mission-critical workloads.

MSSQL

SQL Server Inventory Questions:

1.  Number of SQL Server Instances in the environment?

Why ask?  Inventory purposes and Era only supports a single SQL Server instance on the same host.

2.  Number of SQL Server databases in the environment?

Why ask?  Inventory purposes; this also helps identify which databases are considered critical for Always On Availability Groups (AGs), etc.  Databases reside in an instance.

3.  Total size of SQL Server databases in the environment?

Why ask?  Inventory sizing purposes.

4.  SQL Server versions used in the environment?

Why ask?  Different SQL Server versions have different features, limitations, and cumulative update (CU) levels.  Microsoft stopped issuing service packs with SQL Server 2016; all updates now ship as CUs.  See the external SQL Server edition and version comparison.

5.  Windows versions used in the environment?

Why ask?  Different Windows versions have different features, limitations and update levels that may affect SQL Server, also driver versions etc.

6.  SQL Server licensing model used in the environment: Core or Server/CAL?

Why ask?  This can help identify which licensing model the customer is using and why.

7.  SQL Server High Availability and Disaster Recovery being used in the environment?   *(Depending on the complexity for HA or DR, this would warrant further discussion with a Database Specialist/Solutions Architect)

Why ask?  This can help determine if shared storage is used, such as with a SQL Server Failover Cluster Instance (FCI), versus a SQL Server Always On Availability Group (AG), which does not require shared storage.  Also, is any multi-site replication being used, either at the physical storage layer or the logical SQL Server layer?

8.  CPU model, type, speed allocated for current/existing SQL Server hosts?

Why ask?  Inventory sizing purposes for baseline.

9.  Number of CPU/Cores allocated for SQL Server hosts?

Why ask?  Inventory sizing purposes for baseline.

10.  Amount of Memory allocated for SQL Server hosts?

Why ask?  Inventory sizing purposes for baseline.

11.  Amount of storage allocated for SQL Server hosts?

Why ask?  Inventory sizing purposes for baseline.

12.  Storage type used for SQL Server hosts: flash, HDD, DAS, SAN, etc.?

Why ask?  Inventory sizing purposes for baseline; helpful in determining expectations with regard to latency.

13.  Network allocation (speed, number of nics) for SQL Server hosts?

Why ask?  Inventory sizing purposes for baseline.

SQL Server Performance Questions:

 1.  What is the total max IOPS required for all SQL Server Instances?

Why ask?  The number of I/O service requests to use as a baseline for their current workload.

2.  What is the latency requirement for SQL Server?

Why ask?  The response time requirement to use as a baseline for their current workload.

3.  What is the bandwidth requirement for SQL Server both read/write?

Why ask?  The throughput requirement to use as a baseline for their current workload.

4.  What is the current SQL Server workload profile read/write ratio?

Why ask?  This helps determine what their workload profile is like and how it will affect our platform (reads are local, writes incur node replication cost) as a baseline.

5.  What is the SQL Server average IO size?

Why ask?  This helps determine their workload profile’s I/O size as it relates to bandwidth.

6.  Top current SQL Server wait statistics during peak workload?

Why ask?  This helps determine what SQL Server is waiting on to process transactions, where there may be a bottleneck.

7.  Current customer Microsoft SQL Server pain points?

Why ask?  This helps narrow the focus and develop a relationship with the customer.  It also assists in focusing on how Nutanix can help alleviate those specific pain points and gives information about how the solution can be shown to resolve those particular pain points.

Oracle

1.  License entitlement (Cores/NUPs/ELA/ULA/bundled licensing)?

Why ask?  Oracle licensing is expensive, and customers want to make the best use of their entitlement when replatforming rather than spending more on new licensing for a new solution.  Customers are also looking to reduce their Oracle license overhead.

2.  Type of licensing used?  Standard or Enterprise, plus other options (RAC, Partitioning, etc.).  Each is a paid item.

Why ask?  There may be possibilities to eliminate some options by using Nutanix features such as compression, encryption, and replication.

3.  Is the customer ready to run a “SQL script” or provide details of the environment using RVTools/Collector?

Why ask? When inventorying an Oracle DB environment, you can use the Automatic Workload Repository (AWR) report to gather detailed inventory and performance statistics for an Oracle Database.  Nutanix has an AWR script that can be run to capture the necessary information and is able to be downloaded from within the Sizer Tool.  When adding a Workload select Import, then click the AWR tab and you will see the AWR SQL Script download link.  Once run, you can then upload the output using the Upload File option.

4.  What are the main pain points in the current environment?

5.  When moving to Nutanix would you consider AHV as a hypervisor?

6.  Have you been introduced to Era?

Era

1.  How do you do DB provisioning today and how long does it take to provision a multi-node database cluster?

Why ask?  To understand the customer’s operational efficiency for provisioning.  Era can help reduce this from weeks to hours.

2.  How many dev/test copies of databases do you have for your PROD instance(s)?

Why ask?  Customers make multiple “full copies” of PROD for dev/test and use up to 5-10 times the space they need.  Era helps create space-optimized clones of databases rapidly.

3.  What is your typical clone refresh interval and time it takes to refresh a DB clone?

Why ask?  For customers using traditional techniques, refreshing a copy of a database from an RMAN backup takes multiple hours and is usually done once a month.  With Era, they can clone every day, or multiple times a day, in minutes.

4.  How do you do your Database Patching (Oracle)?

Why ask?  Oracle patching is a huge pain point in large Oracle environments.  Era provides a unique way to do “fleet patching,” which can save hundreds of man-hours spent on traditional patching.

5.  How do you migrate databases when required (Oracle)?

Why ask?  Migration is an involved process, and a lot of planning and time is required.  Era provides an easy method to “replicate & migrate” databases (same version) between same-endian formats (Linux->Linux or Windows->Linux).

6.  What is your choice of database replication (infrastructure/database/hypervisor based)?  Please elaborate.  * (Depending on the complexity of the environment, this would warrant further discussion with a Database Specialist/Solutions Architect)

Why ask?  Customers are looking to reduce their software licensing cost for database replication and will look for opportunities to replicate using infrastructure (Nutanix replication).  Era enables cross-cluster replication, including replicating to a Nutanix cluster in the AWS cloud, in the upcoming 2.0 release.

7.  What are the database engines they currently use?