|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
- + Extensive platform support
- + Extensive data protection capabilities
- + Flexible deployment options
|
- + Extensive platform support
- + Native file and object services
- + Manageability
|
- + Built for simplicity
- + Policy-based management
- + Cost-effectiveness
|
|
Cons
|
- - No native data integrity verification
- - Dedup/compr not performance optimized
- - Disk/node failure protection not capacity optimized
|
- - Complex solution design
- - No QoS
- - Complex dedup/compr architecture
|
- - Single hypervisor support
- - No stretched clustering
- - No native file services
|
|
|
|
Content |
|
|
|
WhatMatrix
|
WhatMatrix
|
WhatMatrix
|
|
|
|
Assessment |
|
|
|
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
NEW
DataCore was founded in 1998 and began to ship its first software-defined storage (SDS) platform, SANsymphony (SSY), in 1999. DataCore launched a separate entry-level storage virtualization solution, SANmelody (v1.4), in 2004. This platform was also the foundation for DataCore's HCI solution. In 2014 DataCore formally announced Hyperconverged Virtual SAN as a separate product. In May 2018 changes to the software licensing model enabled consolidation of the products; because the core software is the same, the combined offering has since been called DataCore SANsymphony.
One year later, in 2019, DataCore expanded its software-defined storage portfolio with a solution dedicated to file virtualization. The additional SDS offering is called DataCore vFilO and operates as a scale-out global file system across distributed sites, spanning on-premises and cloud-based NFS and SMB shares.
At the beginning of 2021 DataCore acquired Caringo and integrated its know-how and software-defined object storage offerings into the DataCore portfolio. The newest member of the DataCore SDS portfolio is called DataCore Swarm; together with its complementary offerings SwarmFS and DataCore FileFly, it enables customers to build on-premises object storage solutions that radically simplify the ability to manage, store, and protect data while allowing multi-protocol (S3/HTTP, API, NFS/SMB) access to any application, device, or end-user.
DataCore Software specializes in software solutions for block, file, and object storage. DataCore has by far the longest track record in software-defined storage of all SDS/HCI vendors on the WhatMatrix.
In April 2021 the company had an install base of more than 10,000 customers worldwide and there were about 250 employees working for DataCore.
|
Name: Enterprise Cloud Platform (ECP)
Type: Hardware+Software (HCI)
Development Start: 2009
First Product Release: 2011
NEW
Nutanix was founded in early 2009 and began to ship its first Hyper Converged Infrastructure (HCI) solution, Virtual Computing Platform (VCP), in 2011. The core of the Nutanix solution is the Nutanix Operating System (NOS). In 2015 Nutanix rebranded its solution to Xtreme Computing Platform (XCP), mainly because Nutanix developed its own hypervisor, Acropolis Hypervisor (AHV), which is based on KVM. In 2015 Nutanix also rebranded its operating system to Acropolis (AOS). In 2016 Nutanix rebranded its solution to Enterprise Cloud Platform (ECP). In September 2018 Nutanix rebranded Acropolis File Services (AFS) to Nutanix Files, Acropolis Block Services (ABS) to Nutanix Volumes, Object Storage Services to Nutanix Buckets and Xi Cloud DR Services to Nutanix Leap.
At the end of October 2020 the company had an install base of approximately 18,000 customers worldwide and there were more than 6,100 employees working for Nutanix.
|
Name: Hyperconvergence (HC3)
Type: Hardware+Software (HCI)
Development Start: 2011
First Product Release: 2012
Scale Computing was founded in 2007 and began to ship its first SAN/NAS scale-out storage product in 2009. In mid-2011 development started on the Hyperconvergence (HC3) platform, which combines the three foundation layers (compute, storage and virtualization) into a single hardware appliance. HC3 was built for ultra-simple ease of use and was initially targeted at the SMB market. The first HC3 models were released in August 2012.
In January 2019 the company had an install base of more than 3,500 customers worldwide and 130+ employees.
|
|
|
GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
.
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
.
SSY 3.0: 1999
NEW
10th Generation software. DataCore currently has the most experience in SDS/HCI technology when comparing SANsymphony to other SDS/HCI platforms.
SANsymphony (SSY) version 3 was the first public release that hit the market back in 1999. The product has evolved ever since and the current major release is version 10. The list includes only the milestone releases.
PSP = Product Support Package
U = Update
|
GA Release Dates:
AOS 5.19: dec 2020
AOS 5.18: aug 2020
AOS 5.17: may 2020
AOS 5.16: jan 2020
AOS 5.11: aug 2019
AOS 5.10: nov 2018
AOS 5.9: oct 2018
AOS 5.8: jul 2018
AOS 5.6.1: jun 2018
AOS 5.6: apr 2018
AOS 5.5: dec 2017
AOS 5.1.2 / 5.2*: sep 2017
AOS 5.1.1.1: jul 2017
AOS 5.1: may 2017
AOS 5.0: dec 2016
AOS 4.7: jun 2016
AOS 4.6: feb 2016
AOS 4.5: oct 2015
NOS 4.1: jan 2015
NOS 4.0: apr 2014
NOS 3.5: aug 2013
NOS 3.0: dec 2012
NEW
5th Generation software. Nutanix currently offers the most all-round package of advanced functionality when comparing ECP to other SDS/HCI platforms.
Nutanix has adopted an LTS/STS life cycle approach towards AOS releases:
5.5, 5.10, 5.15 (LTS)
5.6, 5.8, 5.9, 5.11, 5.16, 5.17, 5.18 (STS)
LTS versions are released annually, maintained for 18 months and supported for 24 months.
STS versions are released quarterly, maintained for 3 months and supported for 6 months.
*5.2 is an IBM Power Systems-only release; for all other platforms version 5.6 applies.
Release dates of AOS Add-on components:
Nutanix Files 3.7.1: sep 2020
Nutanix Files 3.7: jul 2020
Nutanix Files 3.6.1: dec 2019
Nutanix Files 3.6: oct 2019
Nutanix Files 3.5.2: aug 2019
Nutanix Files 3.5.1: jul 2019
Nutanix Files 3.5: mar 2019
AFS 3.0.1: jun 2018
AFS 2.2: aug 2017
AFS 2.1.1: jun 2017
AFS 2.1: apr 2017
AFS 2.0: jan 2017
Nutanix File Analytics 2.0: aug 2019
Nutanix Objects 3.1: nov 2020
Nutanix Objects 3.0: oct 2020
Nutanix Objects 2.2: jun 2020
Nutanix Objects 2.1: may 2020
Nutanix Objects 1.1: nov 2019
Nutanix Objects 1.0.1: oct 2019
Nutanix Objects 1.0: aug 2019
Nutanix Karbon 2.1: jul 2020
Nutanix Karbon 2.0: feb 2020
Nutanix Karbon 1.0.4: dec 2019
Nutanix Karbon 1.0.3: oct 2019
Nutanix Karbon 1.0.2: oct 2019
Nutanix Karbon 1.0.1: may 2019
Nutanix Karbon 1.0: mar 2019
Nutanix Calm 3.1: oct 2020
Nutanix Calm 3.0: jun 2020
Nutanix Calm 2.9.7: jan 2020
Nutanix Calm 2.9.0: nov 2019
Nutanix Calm 2.7.0: aug 2019
Nutanix Calm 2.6.0: feb 2019
Nutanix Calm 2.4.0: nov 2018
Nutanix Calm 5.9: oct 2018
Nutanix Calm 5.8: jul 2018
Nutanix Calm 5.7: dec 2017
Nutanix Era 2.0: sep 2020
Nutanix Era 1.3: jun 2020
Nutanix Era 1.2: jan 2020
Nutanix Era 1.1: jul 2019
Nutanix Era 1.0: dec 2018
NOS = Nutanix Operating System
AOS = Acropolis Operating System
LTS = Long Term Support
STS = Short Term Support
AFS = Acropolis File Services
|
GA Release Dates:
HCOS 8.6.5: mar 2020
HCOS 8.5.3: oct 2019
HCOS 8.3.3: jul 2019
HCOS 8.1.3: mar 2019
HCOS 7.4.22: may 2018
HCOS 7.2.24: sep 2017
HCOS 7.1.11: dec 2016
HCOS 6.4.2: apr 2016
HCOS 6.0: feb 2015
HCOS 5.0: oct 2014
ICOS 4.0: aug 2012
ICOS 3.0: may 2012
ICOS 2.0: feb 2010
ICOS 1.0: feb 2009
NEW
8th Generation Scale Computing software on proven Lenovo and SuperMicro server hardware.
Scale Computing HC3's maturity has been steadily increasing since the first iteration, expanding its feature set with both foundational and advanced capabilities. Due to its primary focus on small and midsized organizations, the feature set does not (yet) incorporate some of the larger enterprise capabilities.
HCOS = HyperCore Operating System
ICOS = Intelligent Clustered Operating System
|
|
|
|
Pricing |
|
|
Hardware Pricing Model
Details
|
N/A
SANsymphony is sold by DataCore as a software-only solution. Server hardware must be acquired separately.
The entry point for all hardware and software compatibility statements is: https://www.datacore.com/products/sansymphony/tech/compatibility/
On this page links can be found to: Storage Devices, Servers, SANs, Operating Systems (Hosts), Networks, Hypervisors, Desktops.
Minimum server hardware requirements can be found at: https://www.datacore.com/products/sansymphony/tech/prerequisites/
|
Per Node
Depends on specific model/subtype/resource configuration
|
Per Node
Each Scale Computing HC3 appliance purchased consists of hardware (server+storage), software (all-inclusive) and 1 year of premium support. Optionally, end-users can also request TOR switches as part of the solution and deployment.
In June 2018 Scale Computing introduced a Managed Service Provider (MSP) Program that offers these organizations a per-node, per-month OpEx subscription license.
TOR = Top-of-Rack
|
|
Software Pricing Model
Details
|
Capacity based (per TB)
NEW
DataCore SANsymphony is licensed in three different editions: Enterprise, Standard, and Business.
All editions are licensed by capacity (in 1 TB steps). The more capacity an end-user licenses within an edition, the lower the price per TB; the exception is the Business edition, which has a fixed price per TB.
Each edition includes a defined feature set.
Enterprise (EN) includes all available features plus expanded Parallel I/O.
Standard (ST) includes all Enterprise (EN) features except FC connections, Encryption, Inline Deduplication & Compression and Shared Multi-Port Array (SMPA) support, and comes with regular Parallel I/O.
Business (BZ), the entry-level offering, includes all essential Enterprise (EN) features except Asynchronous Replication & Site Recovery, Encryption, Deduplication & Compression, Random Write Accelerator (RWA) and Continuous Data Protection (CDP), and comes with limited Parallel I/O.
Customers can choose between a perpetual licensing model or a term-based licensing model. Any initial license purchase for perpetual licensing includes Premier Support for either 1, 3 or 5 years. Alternatively, term-based licensing is available for either 1, 3 or 5 years, always including Premier Support as well, plus enhanced DataCore Insight Services (predictive analytics with actionable insights). In most regions, BZ is available as term license only.
Capacity can be expanded in 1 TB steps. There exists a 10 TB minimum per installation for Business (BZ). Moreover, BZ is limited to 2 instances and a total capacity of 38 TB per installation, but one customer can have multiple BZ installations.
Cost neutral upgrades are available when upgrading from Business/Standard (BZ/ST) to Enterprise (EN).
|
Per Core + Flash TiB (AOS)
Per Node (AOS, Prism Pro/Ultimate)
Per Concurrent User (VDI)
Per VM (ROBO)
Per TB (Files)
Per VM (Calm)
Per VM (Xi Leap)
In September 2018 Nutanix introduced a 'per core + per Flash TB' (capacity) licensing model alongside the existing 'per node' (appliance) licensing model for AOS. This means software license cost is tightly coupled to the number of physical cores (compute) as well as the amount of flash TB (storage) in the Nutanix nodes acquired.
In May 2019 Nutanix introduced 'per concurrent user' licensing for VDI use cases and 'per VM' licensing for ROBO use cases. Both VDI and ROBO bundle AOS, AHV and Prism. Both must run on dedicated clusters. ROBO is designed for sites running typically up to 10 VMs.
Capacity-based and VDI-based software licensing are sold in 1 to 7 year terms.
ROBO-based software licensing is sold in 1 to 5 year terms.
Appliance-based software licensing is sold for the lifetime of the hardware and is non-transferable.
AOS Editions: Starter, Pro, Ultimate
Prism Central Editions: Starter, Pro
AOS Editions:
Starter limits functionality, for example: No IBM Power, Cluster size restricted to 12, Replication Factor restricted to 2; lacks post-process deduplication, post-process compression, Availability Domains, Self Service Restore, Cloud Connect, VSS Integration, Intelligent VM Placement, Virtual Network Configuration, Host Profiles.
Ultimate exclusively offers VM Flash Mode, Multiple Site DR (1-to many, many-to 1), Metro Availability, Disaster Recovery with NearSync or Sync Replication, On-premises Leap, Data-at-Rest Encryption and Native KMS. All except VM Flash Mode can be purchased as an add-on license for Pro edition.
Prism Central Editions:
Prism Starter is included with every edition of Acropolis for single and multiple site management. It enables registration and management of multiple Prism Element clusters, 1-click upgrades of Prism Central through Life Cycle Manager (LCM), and monitoring and troubleshooting of managed clusters. Prism Pro is available as an add-on subscription. Pro adds customizable dashboards, capacity planning and analysis tools, advanced search capabilities, low-code/no-code automation, and reporting. Prism Ultimate adds application discovery and monitoring, budgeting/chargeback and cost metering for resources, and a SQL Server monitoring content pack. Every Prism Central deployment includes a 90-day trial version of this license tier.
AOS Add-ons require separate licensing:
Nutanix Files:
Nutanix Files is licensed separately and is sold under two different capacity licenses. Nutanix Files Add-on License for HCI is for Nutanix Files running on mixed mode clusters. Nutanix Files Dedicated License is for Nutanix Files running on dedicated clusters.
Nutanix Calm:
Nutanix Calm is licensed separately and is sold as an annual subscription on a per virtual-machine (VM) basis. Calm licenses are required only for VMs managed by Calm, running in either the Nutanix Enterprise cloud or public clouds. Nutanix Calm is sold in 25 VM subscription license packs. Both Prism Starter and Prism Pro include perpetual entitlement for the first 25 VMs managed by Calm.
Nutanix Microsegmentation:
Nutanix Microsegmentation is licensed separately and is sold as an annual subscription on a per-node basis. Licenses are needed for all nodes in a cluster where microsegmentation functionality will be used. This option requires a Nutanix cluster managed by Prism Central and using the AHV virtualization solution. Licenses are sold in 1 to 5 year subscription terms. Prism Central with Starter license is required to manage microsegmentation policies.
Xi Services:
Nutanix provides several subscription Plans: Pay-As-You-Go, 1 year and 3 year.
Xi Leap:
Xi Leap is licensed separately and is sold as an annual subscription on a per virtual-machine (VM) basis. Each VM protected by Xi Leap falls into one of three pricing levels: Basic Protection (RPO 24+ hours; 2 snapshots; 1 TB/VM), Advanced Protection (RPO 4+ hours; 1 week of snapshots; 2 TB/VM), Premium Protection (RPO 1+ hours; 1 month of snapshots; 5 TB/VM). Allowed capacity includes space allocated by snapshots.
Xi Frame:
Xi Frame is licensed separately and is sold as an annual subscription on a named-user or concurrent-user basis.
|
Per Node (all-inclusive)
There is no separate software licensing. Each node comes equipped with an all-inclusive feature set. This means that without exception all Scale Computing HC3 software capabilities are available for use.
In June 2018 Scale Computing introduced a Managed Service Provider (MSP) Program that offers these organizations a per-node, per-month OpEx subscription license.
HC3 Cloud Unity DRaaS requires a monthly subscription that is in part based on Google Cloud Platform (GCP) resource usage (compute, storage, network). The HC3 Cloud Unity DRaaS subscription includes:
- 6 days of Active Mode testing
- Runbook outlining DR procedures
- 1 Runbook failover test and 1 separate Declaration
- Network egress equal to 12.5% of Storage
- ScaleCare Support
In addition end-users and first-time service providers can purchase a DR Planning Service (one-time fee) for onboarding.
|
|
Support Pricing Model
Details
|
Capacity based (per TB)
Support is always provided on a premium (24x7) basis, including free updates.
More information about DataCore's support policy can be found here:
http://datacore.custhelp.com/app/answers/detail/a_id/1270/~/what-is-datacores-support-policy-for-its-products
|
Per Node
Subscriptions: Basic, Production, Mission Critical
Most notable differences:
- Basic offers 8x5 support, Production and Mission Critical offer 24x7 support.
- Basic and Production target 2/4/8-hour response times depending on severity level, whereas Mission Critical targets 1/2/4-hour response times.
- Basic and Production provide Next Business Day hardware replacement, whereas Mission Critical provides 4 Hour hardware replacement.
- Mission Critical exclusively offers direct routing to senior level engineers.
|
Per Node
Each appliance comes with 1 year ScaleCare Premium Support that consists of:
- 24x7x365 by telephone (US and Europe)
- 2 hour response time for critical issues
- Live chat, email support, and general phone support Mon-Fri 8AM-8PM (Eastern time).
- Next Business Day (NBD) delivery of hardware replacement parts
ScaleCare Premium Support also provides remote installation services on the initial deployment of Scale Computing HC3 clusters.
In June 2018 Scale Computing introduced a Managed Service Provider (MSP) Program that offers these organizations a per-node, per-month OpEx subscription license.
|
|
|
Design & Deploy
|
|
|
|
|
|
|
Design |
|
|
Consolidation Scope
Details
|
Storage
Data Protection
Management
Automation&Orchestration
DataCore is storage-oriented.
SANsymphony software-defined storage services focus on flexible deployment models, ranging from classical storage virtualization through converged and hybrid-converged to hyperconverged, including seamless migration between them.
DataCore aims to provide all key components within a storage ecosystem including enhanced data protection and automation & orchestration.
|
Hypervisor
Compute
Storage
Data Protection (limited)
Management
Automation&Orchestration
Nutanix is stack-oriented.
With the ECP platform Nutanix aims to provide all functionality required in a Private Cloud ecosystem through a single platform.
|
Hypervisor
Compute
Storage
Networking (optional)
Data Protection
Management
Automation&Orchestration
Scale Computing is stack-oriented.
With the HC3 platform Scale Computing aims to provide all functionality required in a Private Cloud ecosystem.
|
|
|
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
The bandwidth required depends entirely on the specific workload needs.
SANsymphony 10 PSP11 introduced support for Emulex Gen 7 64 Gbps Fibre Channel HBAs.
SANsymphony 10 PSP8 introduced support for Gen6 16/32 Gbps ATTO Fibre Channel HBAs.
|
1, 10, 25, 40 GbE
Nutanix hardware models include redundant ethernet connectivity using SFP+ or Base-T. Nutanix recommends 10GbE or higher to avoid the network becoming a performance bottleneck.
|
1, 10 GbE
Scale Computing hardware models include redundant ethernet connectivity in an active/passive setup.
|
|
Overall Design Complexity
Details
|
Medium
DataCore SANsymphony is able to meet many different use-cases because of its flexible technical architecture; however, this also means there are many design choices that need to be made. DataCore SANsymphony seeks to provide important capabilities either natively or tightly integrated, and this keeps the design process relatively simple. However, because many features in SANsymphony are optional and can be turned on/off, each one needs to be taken into consideration when preparing a detailed design.
|
High
Nutanix ECP is able to meet many different use-cases, but each requires specific design choices in order to reach an optimal end-state. This is not limited to choosing the right building blocks and the right software edition, but extends to the use (or non-use) of some of its core data protection and data efficiency mechanisms. In addition, the end-user's hypervisor choice can prohibit the use of some advanced functionality, as certain features are only available on Nutanix's own hypervisor, AHV. As Nutanix continues to add functionality to its already impressive array of capabilities, designing the solution could grow even more complex over time.
|
Low
Scale Computing HC3 was developed with simplicity in mind, both from a design and a deployment perspective. The HC3 platform architecture is meant to be applicable to general virtual server infrastructure (VSI) use-cases and seeks to provide important capabilities natively. There are only a few storage building blocks to choose from, and many advanced capabilities like deduplication are always turned on. This minimizes the amount of design choices as well as the number of deployment steps.
|
|
External Performance Validation
Details
|
SPC (Jun 2016)
ESG Lab (Jan 2016)
SPC (Jun 2016)
Title: 'Dual Node, Fibre Channel SAN'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-connected, SSY 10.0, 4x All-Flash Dell MD1220 SAS Storage Arrays
SPC (Jun 2016)
Title: 'Dual Node, High Availability, Hyper-converged'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-interconnect, SSY 10.0
ESG Lab (Jan 2016)
Title: 'DataCore Application-adaptive Data Infrastructure Software'
Workloads: OLTP
Benchmark Tools: IOmeter
Hardware: Hybrid (Tiered) Dell PowerEdge R720, 2-node cluster, SSY 10.0
|
Login VSI (may 2017)
ESG Lab (feb 2017; sep 2020)
SAP (nov 2016)
ESG Lab (Sep 2020)
Title: Nutanix Architecture and Performance Optimization'
Workloads: Synthetic, High-perf database, OLTP, Postgres Analytics
Benchmark Tools: Nutanix X-Ray (FIO), Silly Little Oracle Benchmark (SLOB), Pgbench
Hardware: Nutanix All-NVMe NX-8170-G7, 4-node cluster, AOS 5.18
Login VSI (May 2017)
Title: 'Citrix XenDesktop on AHV'
Workloads: Citrix XenDesktop VDI
Benchmark Tools: Login VSI (VDI)
Hardware: Nutanix All-flash NX-3460-G5, 6-node cluster, AOS 5.0.2
ESG Lab (Feb 2017)
Title: 'Performance Analysis: Nutanix'
Workloads: MSSQL OLTP, Oracle OLTP, MS Exchange, Citrix XenDesktop VDI
Benchmark Tools: Benchmark Factory (MSSQL), Silly Little Oracle Benchmark (Oracle), Jetstress (MS Exchange), Login VSI (Citrix XenDesktop)
Hardware: Nutanix All-flash NX-3460-G5, 4 node-cluster, AOS 5.0
SAP (Nov 2016)
Title: 'SAP Sales and Distribution (SD) Standard Application Benchmark'.
Workloads: SAP ERP
Benchmark Tools: SAPSD
Hardware: All-Flash Nutanix NX8150 G5, single-node, AOS 4.7
|
N/A
No Scale Computing HC3 validated test reports have been published in 2016/2017/2018/2019.
|
|
Evaluation Methods
Details
|
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
SANsymphony is freely downloadable after registering online and offers full platform support (complete Enterprise feature set), but is restricted in scale (4 nodes), capacity (16 TB) and time (30 days), all of which can be expanded upon request. The free trial version of SANsymphony can be installed on any commodity hardware platform that meets the hardware requirements.
For more information please go here: https://www.datacore.com/try-it-now/
|
Community Edition (forever)
Hyperconverged Test Drive in GCP
Proof-of-Concept (POC)
Partner Driven Demo Environment
Xi Services Free Trial (60-days)
AOS Community Edition (CE) is freely downloadable after registering online and offers limited platform support (Acropolis Hypervisor = AHV) and scalability (4 nodes). AOS CE can be installed on all commodity hardware platforms that meet the hardware requirements. AOS CE use is not time-restricted. AOS CE is not for production environments.
A small running setup of AOS Community Edition (CE) can also be accessed instantly in Google Cloud Platform (GCP) by registering at nutanix.com/test-drive-hyperconverged-infrastructure. The Test Drive is limited to 2 hours.
Ravello's Smart Labs on AWS/GCE provides nested virtualization and offers a blueprint to run AOS CE in the public cloud. Using Ravello Smart Labs requires a subscription.
Nutanix also offers instant online access to live demo environments for partners to educate/show their customers.
Nutanix Xi Services include a Free Plan that provides a free trial for 60 days. At the end of the 60-day trial, end-user organisations can choose to switch to a Paid Plan, or decide later. A Free Plan includes either DR for 100 VMs (100 GB each) or running VMs (2 vCPU, 4 GB RAM each), and includes professional support.
AWS = Amazon Web Services
GCE = Google Compute Engine
|
Public Facing Clusters
Proof-of-Concept (PoC)
|
|
|
|
Deploy |
|
|
Deployment Architecture
Details
|
Single-Layer
Dual-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
Single-Layer:
- SANsymphony is implemented as a virtual machine (VM) or, in the case of Hyper-V, as a service layer on the Hyper-V parent OS, managing internal and/or external storage devices and providing virtual disks back to the hypervisor cluster it is implemented in. DataCore calls this a hyper-converged deployment.
Dual-Layer:
- SANsymphony is implemented as bare metal nodes, managing external storage (SAN/NAS approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a traditional deployment.
- SANsymphony is implemented as bare metal nodes, managing internal storage devices (server-SAN approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a converged deployment.
Mixed:
- SANsymphony is implemented in any combination of the above 3 deployments within a single management entity (Server Group) acting as a unified storage grid. DataCore calls this a hybrid-converged deployment.
|
Single-Layer (primary)
Dual-Layer (secondary)
Single-Layer: Nutanix ECP is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
Nutanix ECP can also serve in a dual-layer model by providing storage to non-Nutanix hypervisor hosts, bare metal hosts and Windows clients (Please view the compute-only scale-out option for more information).
|
Single-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
|
|
Deployment Method
Details
|
BYOS (some automation)
BYOS = Bring-Your-Own-Server-Hardware
Deployment of DataCore SANsymphony is made easy by a very straightforward implementation approach.
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by Nutanix, customer deployments can be executed in hours instead of days.
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by Scale Computing, customer deployments can be executed in hours instead of days.
|
|
|
Workload Support
|
|
|
|
|
|
|
Virtualization |
|
|
Hypervisor Deployment
Details
|
Virtual Storage Controller
Kernel (Optional for Hyper-V)
The SANsymphony Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as part of the SANsymphony storage solution and commits its internal storage and/or externally connected storage to the shared resource pool. The Virtual Storage Controller (VSC) can be configured with direct access to the physical disks, so the hypervisor does not impede the I/O flow.
In Microsoft Hyper-V environments the SANsymphony software can also be installed in the Windows Server Root Partition. DataCore does not recommend installing SANsymphony in a Hyper-V guest VM as it introduces virtualization layer overhead and obstructs DataCore Software from directly accessing CPU, RAM and storage. This means that installing SANsymphony in the Windows Server Root Partition is the preferred deployment option. More information about the Windows Server Root Partition can be found here: https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/hyper-v-server/architecture
The DataCore software can be installed on Microsoft Windows Server 2019 or lower (all versions down to Microsoft Windows Server 2012/R2).
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Virtual Storage Controller
The Nutanix Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the Nutanix storage solution and commits its internal storage to the shared resource pool. The Virtual Storage Controller (VSC) has direct access to the physical disks, so the hypervisor is not impeding the I/O flow. AOS 5.5 Controller VMs were running CentOS-7.3 with Python 2.7.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
KVM User Space
SCRIBE runs in KVM user space. Scale Computing made a conscious decision not to make SCRIBE kernel-integrated in order to avoid the risk that storage problems would cause a system panic, which could bring down an entire node.
|
|
Hypervisor Compatibility
Details
|
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
'Not qualified' means there is no generic support qualification due to limited market footprint of the product. However, a customer can always individually qualify the system with a specific SANsymphony version and will get full support after passing the self-qualification process.
Only products explicitly labeled 'Not Supported' have failed qualification or have shown incompatibility.
|
VMware vSphere ESXi 6.0U1A-7.0U1
Microsoft Hyper-V 2012R2/2016/2019*
Microsoft CPS Standard
Nutanix Acropolis Hypervisor (AHV)
Citrix XenServer 7.0.0-7.1.0CU2**
NEW
Nutanix currently supports 4 major hypervisor platforms, whereas many others support only 1 or 2. Nutanix offers its own hypervisor called Acropolis Hypervisor (AHV), which is based on Linux KVM. Using different hypervisors and/or hypervisor clusters within the same Nutanix cluster is supported.
Nutanix has official support for Microsoft Cloud Platform System (CPS), which bundles Windows Server 2012 R2, System Center 2012 R2 and Windows Azure Pack for easier hybrid cloud configuration. Nutanix offers the CPS Standard version pre-installed on nodes.
*Hyper-V 2016 is not supported on NX platform nodes with IvyBridge or SandyBridge processors (motherboard designator starts with X9). Hyper-V 2016 requires G5 hardware and is not supported on G3 and G4 hardware.
**Citrix Hypervisor 8.x is not supported on Nutanix server nodes by Citrix.
AOS 5.1 introduced general availability (GA) support for the Citrix XenServer hypervisor for use with XenApp and XenDesktop in Nutanix clusters.
AOS 5.2 exclusively introduced general availability (GA) support for the AHV hypervisor for use on IBM Power Systems. Only Linux is currently supported as Guest OS.
AOS 5.5 introduced support for Microsoft Hyper-V 2016 and provides 1-click non-disruptive upgrades from Hyper-V 2012 R2. AOS 5.5 also introduces support for virtual hard disks (.vhdx) that are 2 TB or greater in size in Hyper-V clusters.
AOS 5.17 introduces support for Microsoft Hyper-V 2019.
|
Linux KVM-based
NEW
Scale Computing HC3 uses its own proprietary HyperCore operating system and KVM-based hypervisor.
SCRIBE is an integral part of the Linux KVM platform, enabling Scale Computing to own the full software stack. As VMware and Microsoft don't allow such tight integration, SCRIBE cannot be used with any other hypervisor platform.
Scale Computing HC3 supports a single hypervisor in contrast to other SDS/HCI products that support multiple hypervisors.
The Scale Computing HC3 hypervisor fully supports the following Guest operating systems:
Windows Server 2019
Windows Server 2016
Windows Server 2012 R2
Windows 10
Windows 8.1
CentOS Enterprise Linux
RHEL Enterprise Linux
Ubuntu Server
FreeBSD
SUSE Linux Enterprise
Fedora
Supported versions are those currently supported by the operating system manufacturer.
SCRIBE = Scale Computing Reliable Independent Block Engine
|
|
Hypervisor Interconnect
Details
|
iSCSI
FC
The SANsymphony software-only solution supports both iSCSI and FC protocols to present storage to hypervisor environments.
DataCore SANsymphony supports:
- iSCSI (Switched and point-to-point)
- Fibre Channel (Switched and point-to-point)
- Fibre Channel over Ethernet (FCoE)
- Switched, where host uses Converged Network Adapter (CNA), and switch outputs Fibre Channel
|
NFS
SMB3
iSCSI
In virtualized environments, in-guest iSCSI support is still a hard requirement if one of the following scenarios is pursued:
- Microsoft Failover Clustering (MSFC) in a VMware vSphere environment
- A supported MS Exchange 2013 Environment in a VMware vSphere environment
Microsoft explicitly does not support NFS in either scenario.
|
Libscribe
In order to read/write from/to Scale Computing HC3 block devices (aka Virtual SCRIBE Devices or VSD for short) the Libscribe component needs to be installed in KVM on each physical host. Libscribe is part of the QEMU process and presents virtual block devices to the VM. Because Libscribe is a QEMU block driver, SCRIBE is a supported device type and qemu-img commands work by default.
Although a virtio driver does not strictly need to be installed in each VM, it is highly recommended as I/O performance benefits greatly from it. I/O submission takes place via the Linux native asynchronous I/O (AIO) that is present in KVM.
Shared storage devices in virtual Windows Clusters are supported.
QEMU = Quick Emulator
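Because Libscribe registers SCRIBE-backed virtual disks as regular QEMU block driver targets, standard QEMU tooling can inspect them, as the text above notes. Below is a minimal, hypothetical sketch of doing so from a script; the device reference is purely illustrative, since actual VSD naming and access are managed by HyperCore/SCRIBE.
```python
import json
import subprocess

# Purely illustrative device reference; real VSD paths/identifiers are managed by HyperCore.
VSD = "/path/to/example-vsd"

# 'qemu-img info' works against any target QEMU has a block driver for,
# which per the text above includes SCRIBE-backed virtual disks.
result = subprocess.run(
    ["qemu-img", "info", "--output=json", VSD],
    capture_output=True, text=True, check=True,
)
info = json.loads(result.stdout)
print(info["virtual-size"], info["format"])
```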
|
|
|
|
Bare Metal |
|
|
Bare Metal Compatibility
Details
|
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Any operating system currently not qualified for support can always be individually qualified with a specific SANsymphony version and will get full support after passing the self-qualification process.
SANsymphony provides virtual disks (block storage LUNs) to all of the popular host operating systems that use standard disk drives with 512 byte or 4K byte sectors. These hosts can access the SANsymphony virtual disks via SAN protocols including iSCSI, Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE).
Mainframe operating systems such as IBM z/OS, z/TPF, z/VSE or z/VM are not supported.
SANsymphony itself runs on Microsoft Windows Server 2012/R2 or higher.
|
Microsoft Windows Server 2008R2/2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.7/6.8/7.2
SLES 11/12
Oracle Linux 6.7/7.2
AIX 7.1/7.2 on POWER
Oracle Solaris 11.3 on SPARC
ESXi 5.5/6 with VMFS (very specific use-cases)
Nutanix Volumes, previously Acropolis Block Services (ABS), provides highly available block storage as iSCSI LUNs to clients. Clients can be non-Nutanix servers external to the cluster or guest VMs internal or external to the cluster, with the cluster block storage configured as one or more volume groups. This block storage acts as the iSCSI target for client Windows or Linux operating systems running on a bare metal server or as guest VMs using iSCSI initiators from within the client operating systems.
AOS 5.5 introduced support for Windows Server 2016.
IBM AIX: PowerHA cluster configurations are not supported as AIX requires a shared drive for cluster configuration information and that drive cannot be connected over iSCSI.
Nutanix Volumes supports exposing LUNs to ESXi clusters in very specific use cases.
|
N/A
Scale Computing HC3 does not support any non-hypervisor platforms.
|
|
Bare Metal Interconnect
Details
|
iSCSI
FC
FCoE
|
iSCSI
Block storage acts as one or more targets for client Windows or Linux operating systems running on a bare metal server or as guest VMs using iSCSI initiators from within the client operating systems.
ABS does not require multipath I/O (MPIO) configuration on the client but it is compatible with clients that are currently using or configured with MPIO.
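As a rough illustration of the client-side workflow described above, the sketch below uses standard Linux open-iscsi tooling to discover and log in to targets exposed by Nutanix Volumes. The portal address is an assumption and must be replaced with the cluster's own data services IP.
```python
import subprocess

# Assumed Nutanix data-services portal; replace with your own environment's IP address.
TARGET_PORTAL = "192.0.2.10:3260"

# Discover the iSCSI targets (volume groups) exposed by Nutanix Volumes ...
subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", TARGET_PORTAL],
    check=True,
)

# ... then log in from the client-side initiator; the LUNs then appear as local block devices.
subprocess.run(["iscsiadm", "-m", "node", "--login"], check=True)
```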
|
N/A
Scale Computing HC3 does not support any non-hypervisor platforms.
|
|
|
|
Containers |
|
|
Container Integration Type
Details
|
Built-in (native)
DataCore provides its own Volume Plugin for natively providing Docker container support, available on Docker Hub.
DataCore also has a native CSI integration with Kubernetes, available on Github.
|
Built-in (native)
Nutanix provides its own software plugins for container support (both Docker and Kubernetes).
Nutanix also developed its own container platform software called 'Karbon'. Karbon provides on-premises Kubernetes-as-a-Service (KaaS) in order to enable end-users to quickly adopt container services.
Nutanix Karbon is not a hard requirement for running Docker containers and Kubernetes on top of ECP, however it does make it easier to use and consume.
Nutanix Karbon can only be used in combination with Nutanix native hypervisor AHV. VMware vSphere and Microsoft Hyper-V are not supported at this time. As Nutanix Karbon leverages Nutanix Volumes, it is not available for the Starter edition.
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Platform Compatibility
Details
|
Docker CE/EE 18.03+
Docker EE = Docker Enterprise Edition
|
Docker EE 1.13+
Node OS CentOS 7.5
Kubernetes 1.11-1.14
Nutanix software plugins support both Docker and Kubernetes.
Nutanix Docker Volume plugin (DVP) supports Docker 1.13 and higher.
Nutanix Karbon 1.0.3 supported the following OS images:
- Node OS CentOS 7.5.1804-ntxnx-0.0, CentOS 7.5.1804-ntxnx-0.1
- Kubernetes v1.13.10, v1.14.6, v1.15.3
The current version of Nutanix Karbon is 2.1
Docker EE = Docker Enterprise Edition
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Platform Interconnect
Details
|
Docker Volume plugin (certified)
The DataCore SDS Docker Volume plugin (DVP) enables Docker Containers to use storage persistently, in other words enables SANsymphony data volumes to persist beyond the lifetime of both a container or a container host. DataCore leverages SANsymphony iSCSI and FC to provide storage to containers. This effectively means that the hypervisor layer is bypassed.
The DataCore SDS Docker Volume plugin (DVP) is officially 'Docker Certified' and can be downloaded from the Docker Hub. The plugin is installed inside the Docker host, which can be either a VM or a Bare Metal host connected to a SANsymphony storage cluster.
For more information please go to: https://hub.docker.com/plugins/datacore-sds-volume-plugin
The Kubernetes CSI plugin can be downloaded from GitHub. The plugin is automatically deployed as several pods within the Kubernetes system.
For more information please go to: https://github.com/DataCoreSoftware/csi-plugin
Both plugins are supported with SANsymphony 10 PSP7 U2 and later.
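To make the persistence behavior concrete, here is a minimal sketch using the Docker SDK for Python. The plugin alias and volume name are illustrative assumptions; the exact driver name should be taken from the plugin's Docker Hub page referenced above.
```python
import docker

client = docker.from_env()

# Illustrative plugin alias; use the exact name published on Docker Hub.
DRIVER = "datacore-sds-volume-plugin"

# Create a volume whose data lives on a SANsymphony virtual disk rather than on the container host.
vol = client.volumes.create(name="app-data", driver=DRIVER)

# Use the volume from a container; the data outlives both the container and the host.
client.containers.run(
    "alpine",
    "sh -c 'echo hello > /data/hello.txt'",
    volumes={vol.name: {"bind": "/data", "mode": "rw"}},
    remove=True,
)
```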
|
Docker Volume plugin (certified)
The Nutanix Docker Volume plugin (DVP) enables Docker Containers to use storage persistently, in other words enables Nutanix data volumes to persist beyond the lifetime of both a container or a container host. Nutanix leverages Acropolis Block Services (ABS) to provide storage to containers through in-guest iSCSI connections. This effectively means that the hypervisor layer is bypassed.
The Nutanix Docker Volume plugin (DVP) is officially 'Docker Certified' and can be downloaded from the online Docker Store. The plug-in is installed inside the Docker host, which can be either a VM or a Bare Metal host connected to a Nutanix storage cluster.
The Nutanix Docker Volume plugin (DVP) is supported with AOS 4.7 and later.
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Host Compatibility
Details
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The DataCore native plug-ins are container-host centric and as such can be used across all SANsymphony-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, KVM, XenServer, Oracle VM Server) as well as on bare metal platforms.
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The Nutanix native plug-ins are container-host centric and as such can be used across all Nutanix-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, Nutanix AHV) as well as on bare metal platforms.
Nutanix Karbon can only be used in combination with Nutanix native hypervisor AHV. VMware vSphere and Microsoft Hyper-V are not supported at this time.
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Host OS Compatibility
Details
|
Linux
All Linux versions supported by Docker CE/EE 18.03 or higher can be used.
|
CentOS 7
Red Hat Enterprise Linux (RHEL) 7.3
Ubuntu Linux 16.04.2
The Nutanix Docker Volume Plugin (DVP) has been qualified for the mentioned Linux OS versions. However, the plug-in may also work with older OS versions.
Container hosts running the Windows OS are not (yet) supported.
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Orch. Compatibility
Details
|
Kubernetes 1.13+
|
Kubernetes
Nutanix Karbon 2.1 supports Kubernetes.
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
Container Orch. Interconnect
Details
|
Kubernetes CSI plugin
The DataCore Kubernetes CSI plugin integrates SANsymphony storage into Kubernetes for containers to consume.
DataCore SANsymphony provides native industry standard block protocol storage presented over either iSCSI or Fibre Channel. YAML files can be used to configure Kubernetes for use with DataCore SANsymphony.
|
Kubernetes Volume plugin
The Nutanix Container Storage Interface (CSI) Volume Driver for Kubernetes uses Nutanix Volumes and Nutanix Files to provide scalable, persistent storage for stateful applications.
Kubernetes contains an in-tree CSI Volume Plug-In that allows the out-of-tree Nutanix CSI Volume Driver to gain access to containers and provide persistent-volume storage. The plugin runs in a pod and dynamically provisions requested PersistentVolumes (PVs) using Nutanix Files and Nutanix Volumes storage.
When Nutanix Files is used for persistent storage, applications on multiple pods can access the same storage and also have the benefit of multi-pod read-and-write access.
The Nutanix CSI Volume Driver requires Kubernetes v1.13 or later and AOS 5.6.2 or later. When using Nutanix Volumes, Kubernetes worker nodes must have the iSCSI package installed.
Nutanix also developed its own container platform software called 'Karbon'. Karbon provides on-premises Kubernetes-as-a-Service (KaaS) in order to enable end-users to quickly adopt container services.
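As an illustration of the dynamic provisioning flow described above, the sketch below uses the official Kubernetes Python client to request a PersistentVolumeClaim. The StorageClass name is an assumption and must match a StorageClass the administrator has defined for the Nutanix CSI provisioner.
```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside a pod
core = client.CoreV1Api()

# Assumed StorageClass name backed by the Nutanix CSI driver (Volumes or Files).
STORAGE_CLASS = "nutanix-volume"

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],   # ReadWriteMany is possible when backed by Nutanix Files
        storage_class_name=STORAGE_CLASS,
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)

# The CSI driver dynamically provisions a PersistentVolume to satisfy this claim.
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```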
|
N/A
Scale Computing HC3 does not officially support any container platforms.
|
|
|
|
VDI |
|
|
VDI Compatibility
Details
|
VMware Horizon
Citrix XenDesktop
There is no validation check being performed by SANsymphony for VMware Horizon or Citrix XenDesktop VDI platforms. This means that all versions supported by these vendors are supported by DataCore.
|
VMware Horizon
Citrix XenDesktop (certified)
Citrix Cloud (certified)
Parallels RAS
Xi Frame
Nutanix has published Reference Architecture whitepapers for VMware Horizon and Citrix XenDesktop platforms.
Nutanix Acropolis Hypervisor (AHV) is qualified as Citrix Ready. The Citrix Ready Program showcases verified products that are trusted to enhance Citrix solutions for mobility, virtualization, networking and cloud platforms. The Citrix Ready designation is awarded to third-party partners that have successfully met test criteria set by Citrix, and gives customers added confidence in the compatibility of the joint solution offering.
Nutanix qualifies as a Citrix Workspace Appliance: Nutanix and Citrix have worked side-by-side to automate the provisioning and integration of the entire Citrix stack from cloud service to on-premises applications and desktops. Nutanix Prism automatically provisions Citrix Cloud Connectors VMs for instantly connecting the on-premises Nutanix cluster to the XenApp and XenDesktop Service, and registers all nodes in the Nutanix cluster as a XenApp and XenDesktop Service Resource Location.
Parallels Remote Application Server (RAS) is a Nutanix-ready desktop virtualization solution. A Parallels RAS and Nutanix on VMware reference architecture white paper was published in July 2016.
In May 2019 Nutanix introduced Xi Frame as a new option to use Frame Desktop-as-a-Service (DaaS) with apps, desktops, and user data hosted on Nutanix on-premises infrastructure. Xi Frame only supports the Nutanix AHV hypervisor with AOS 5.10 or later. At this time Windows 10, Windows Server 2016 and Windows Server 2019 guest OS are supported. Linux support (Ubuntu, CentOS) will be added in the second half of 2019.
|
Citrix XenDesktop
Parallels RAS
Leostream
Scale Computing HC3 HyperCore is a Citrix Ready platform. XenDesktop 7.6 LTSR, 7.8 and 7.9 are officially supported.
Scale Computing HC3 also actively supports the following desktop virtualization software:
- Parallels Remote Application Server (RAS);
- Leostream (=connection management).
A joint Reference Configuration white paper for Parallels RAS on Scale Computing HC3 was published in June 2019.
A joint Quick Start with Scale Computing HC3 and Leostream white paper was released in March 2019.
Since Scale Computing HC3 does not support the VMware vSphere hypervisor, VMware Horizon is not an option.
|
|
|
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
DataCore has not published any recent VDI reference architecture whitepapers. The only VDI-related paper that includes a Login VSI benchmark dates back to December 2010, in which a 2-node SANsymphony cluster was able to sustain a load of 220 VMs based on the Login VSI 2.0.1 benchmark.
|
VMware: up to 170 virtual desktops/node
Citrix: up to 175 virtual desktops/node
VMware Horizon 7.2: Load bearing is based on Login VSI tests performed on hybrid Lenovo ThinkAgile HX3320 Series using 2vCPU Windows 10 desktops and the Knowledge Worker profile.
Citrix XenDesktop 7.15: Load bearing is based on Login VSI tests performed on hybrid Lenovo ThinkAgile HX3320 Series using 2vCPU Windows 10 desktops and the Knowledge Worker profile.
For detailed information please view the corresponding whitepaper, dated July 2019.
|
Workspot: 40 virtual desktops/node
Workspot VDI 2.0: Load bearing number is based on Login VSI tests performed on hybrid HC2150 appliances using 2vCPU Windows 7 desktops and the Knowledge Worker profile.
For detailed information please view the corresponding whitepaper. Please note that this technical whitepaper is dated August 2016 and that Workspot VDI 2.0 no longer exists. Workspot's current portfolio only includes cloud solutions that run in Microsoft Azure.
Scale Computing has not published any Reference Architecture whitepapers for the Citrix XenDesktop platform.
|
|
|
Server Support
|
|
|
|
|
|
|
Server/Node |
|
|
Hardware Vendor Choice
Details
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
Super Micro (Nutanix branded)
Super Micro (source your own)
Dell EMC (OEM)
Lenovo (OEM)
Fujitsu (OEM)
HPE (OEM)
IBM (OEM)
Inspur (OEM)
Cisco UCS (Select)
Crystal (Rugged)
Klas Telecom (Rugged)
Many (CE only)
When end-user organizations order a Nutanix solution from Nutanix channel partners they get the Nutanix software on Nutanix branded hardware. Nutanix is sourcing this hardware from SuperMicro. End-user organizations also have the option to source their own SuperMicro hardware and buy licensing and support from Nutanix.
Dell and Nutanix reached an OEM agreement in 2015. Lenovo and Nutanix reached an OEM agreement in 2016. IBM and Nutanix reached an OEM agreement in 2017. Customers and prospects should note that these hardware platforms should not be mixed in one cluster (technically possible but not supported).
In July 2016 Nutanix and Crystal Group partnered to provide Enterprise Cloud solutions for extreme environments in energy, mining, hospitality, military, government and more. The solution combines the highly reliable, ruggedized Crystal RS2616PS18 2U server platform with the Nutanix Enterprise Cloud Platform for use in tactical environments.
As of August 2016 Nutanix completed independent validation of running Nutanix software on Cisco Unified Computing System (UCS) C-Series servers. Nutanix for Cisco UCS rack-mount servers is available through select Cisco and Nutanix partners worldwide.
In February 2017 Nutanix and Klas forged a partnership to transform tactical data center solutions for government and military operations. The Klas Telecom Voyager Tactical Data Center system running the Nutanix Enterprise Cloud Platform allows data center operations to be carried out in the field via a single, airline carry-on-sized case.
As of July 2017 Nutanix completed independent validation of running Nutanix software on Cisco Unified Computing System (UCS) B-Series servers. Nutanix for Cisco UCS blade servers is available through select Cisco and Nutanix partners worldwide.
As of July 2017 Nutanix completed independent validation of running Nutanix software on HPE Proliant servers. Nutanix for HPE Proliant rack-mount servers is available through select HPE and Nutanix partners worldwide.
In September 2017 Nutanix released a version specifically for the IBM Power platform.
In May 2019 Fujitsu announced the availability of Fujitsu XF-series, combining Nutanix software with Fujitsu PRIMERGY servers.
In October 2019 HPE announced the availability of HPE ProLiant DX solution and HPE GreenLake for Nutanix.
The Nutanix Community Edition (CE) can be run on almost any x86 hardware but has community support only. Nutanix CE can also be run on AWS and Google using Ravello Systems for testing and training purposes.
A Nutanix node can run from AWS, backed with S3 storage as a backup target.
|
Lenovo (native and OEM)
SuperMicro (native)
Scale Computing leverages both Lenovo and SuperMicro server hardware as building blocks for its native HC3 appliances:
HC1200 is Supermicro server hardware
HC1250 is Supermicro server hardware
HC1250D is Lenovo server hardware
HC1250DF is Lenovo server hardware
HC5250D is Lenovo server hardware
Scale Computing has maintained a partnership with MBX Systems since 2012. MBX Systems is a hardware integrator based in the US, with headquarters both in Chicago and San Jose, that is tasked with assembling the native HC3 appliances.
In May 2018 Scale Computing and Lenovo entered into an OEM partnership to provide Scale Computing HC3 software on Lenovo ThinkSystem tower (ST250) or rack servers (SR630, SR650, SR250) with a wide variety of hardware choices (e.g. CPU and RAM).
|
|
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
5 Native Models (SX-1000, NX-1000, NX-3000, NX-5000, NX-8000)
15 Native Model sub-types
Different models are available for different workloads.
Nutanix Native G6 and G7 Models (Super Micro):
SX-1000 (SMB, max 4 nodes per block and max 2 blocks)
NX-1000 (ROBO)
NX-3000 (Mainstream; VDI GPU)
NX-5000 (Storage Heavy; Storage Only)
NX-8000 (High Performance)
Dell EMC XC Core OEM Models (hardware support provided through Dell EMC; software support provided through Nutanix):
XC640-4 (ROBO 3-node cluster)
XC640-4i (ROBO 1-node cluster)
XC640-10 (Compute Heavy, VDI)
XC740xd-12 (Storage Heavy)
XC740xd-12C (Storage Heavy - up to 80TB of cold tier expansion)
XC740xd-12R (Storage Heavy - up to 80TB single-node replication target)
XC740xd-24 (Performance Intensive)
XC740xd2 (Storage Heavy - up to 240TB for file and object workloads)
XC940-24 (Memory and Performance Intensive)
XC6420 (High-Density)
XCXR2 (Remote, harsh environments)
Lenovo ThinkAgile HX OEM Models (VMware ESXi, Hyper-V and Nutanix AHV supported):
HX 1000 Series (ROBO)
HX 2000 Series (SMB)
HX 3000 Series (Compute Heavy, VDI)
HX 5000 Series (Storage Heavy)
HX 7000 Series (Performance Intensive)
Fujitsu XF OEM Models (VMware ESXi and Nutanix AHV supported):
XF1070 4LFF (ROBO)
XF3070 10SFF (General Virtualized Workloads)
XF8050 24SFF (Compute Heavy Workloads)
XF8055 12LFF (Storage Heavy Workloads)
HPE ProLiant DX OEM Models:
DX360-4-G10
DX380-8-G10
DX380-12-G10
DX380-24-G10
DX560-24-G10
DX2200-DX170R-G10-12LFF
DX2200-DX190R-G10-12LFF
DX2600-DX170R-G10-24SFF
DX4200-G10-24LFF
IBM POWER OEM Models (AHV-only):
IBM CS821 (Middleware, DevOps, Web Services)
IBM CS822 (Storage Heavy Analytics, Open Source Databases)
Cisco UCS C-Series Models:
C220-M5SX (VDI, Middleware, Web Services)
C240-M5L (Storage Heavy, Server Virtualization)
C240-M5SX (Exchange, SQL, Large Databases)
C220-M4S (VDI, Middleware, Web Services)
C240-M4L (Storage Heavy, Server Virtualization)
C240-M4SX (Exchange, SQL, Large Databases)
Cisco UCS B-Series Models:
B200-M4 in 5108 Blade Chassis within 6248UP/6296UP/6332 Fabrics
HPE Proliant Models (ESXi, Hyper-V and Nutanix AHV supported):
DL360 Gen10 8SFF (VDI, Middleware, Web Services)
DL380 Gen10 2SFF + 12LFF (Storage Heavy, Server Virtualization)
DL380 Gen10 26SFF (High Performance, Exchange, SQL, Large Databases)
HPE Apollo Models (ESXi, Hyper-V and Nutanix AHV supported):
XL170r Gen 10 6SFF (VDI, Middleware, Web Services, Storage Heavy, Server Virtualization, High Performance, Exchange, SQL, Large Databases)
All models support hybrid and all-flash disk configurations.
All models can be deployed as storage-only nodes. These nodes run AHV and thus do not require additional hypervisor licenses.
Citrix XenServer is not supported on the NX-6035C-G5 (storage-only or light-compute) model.
LFF = Large Form Factor (3.5')
SFF = Small Form Factor (2.5')
|
4 Native Models
NEW
There are 4 native model series to choose from:
HE100 Edge Computing/Remote offices, stores, warehouses, labs, classrooms, ships
HE500 Edge Computing/Small remote sites/DR
HC1200 SMB/Midmarket
HC5000 Enterprise/Distributed Enterprise
There are 4 Lenovo model series to choose from:
ST250 Edge, Backup
SR250 Edge
SR630 Mid-market
SR650 Mid-market, High Capacity
|
|
|
1, 2 or 4 nodes per chassis
Note: Because SANsymphony is mostly hardware agnostic, customers can opt for multiple server densities.
Note: In most cases 1U or 2U building blocks are used.
Super Micro also offers 2U chassis that can house 4 compute nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
Native:
1 (NX1000, NX3000, NX6000, NX8000)
2 (NX6000, NX8000)
4 (NX1000, NX3000)
3 or 4 (SX1000)
nodes per chassis
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
1 node per chassis
NEW
Scale Computing HE100 appliances are Intel NUCs.
Scale Computing HE500 appliances are either 1U building blocks or Towers.
Scale Computing HC1200 appliances are 1U building blocks.
Scale Computing HC5000 appliances are 2U building blocks.
Lenovo HC3 Edge ST250 appliances are Towers.
Lenovo HC3 Edge SR250 appliances are 1U building blocks.
Lenovo HC3 Edge SR630 appliances are 1U building blocks.
Lenovo HC3 Edge SR650 appliances are 2U building blocks.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
NUC = Next Unit of Computing
|
|
|
Yes
DataCore does not explicitly recommend using different hardware platforms, but as long as the hardware specifications are reasonably comparable, there is no reason to insist on one hardware vendor over another. This is proven in practice: some customers run their production DataCore environment on comparable servers from different vendors.
|
Yes
Nutanix now allows mixing of all-flash (SSD-only) and hybrid (SSD+HDD) nodes in the same cluster. A minimum of 2 all-flash (SSD-only) nodes are required in mixed all-flash/hybrid clusters.
|
Yes
Scale Computing allows for mixing different server hardware in a single HC3 cluster, including nodes from different generations.
|
|
|
|
Components |
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible: up to 10 options
Depending on the hardware model, Nutanix offers one or more CPU choices per node: 8, 12, 16, 20, 24, 28, 32, 36, 44 cores. None of the hardware platforms offers all of the above processor configurations. Nutanix's current hardware platforms provide up to 5 different CPU configurations.
The NX-1000 series (NX-1065-G7 and NX-1175-G7 submodels), the NX-3000 series (NX-3060-G7 submodel) and the NX-8000 series (NX-8035-G7 and NX-8170-G7 submodels) currently support 2nd generation Intel Xeon Scalable processors (Cascade Lake). The NX-5000 series still lacks a submodel that supports 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
Nutanix still provides multiple native models and submodels that offer a choice of previous generation Intel Xeon Scalable processors (Skylake).
Lenovo ThinkAgile HX (Nutanix OEM) models were the first to ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors. In July 2019 Nutanix native models (G7) and Dell EMC XC (Nutanix OEM) models also started to ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
AOS 5.17 introduces support for AMD processors across hypervisors (ESXi/Hyper-V/AHV). The first hardware partner to release an AMD platform for AOS is HPE with more to follow.
|
Flexible: up to 3 options (native); extensive (Lenovo OEM)
Scale Computing HE100-series CPU options:
HE150: 1x Intel i3-10110U (2 cores); 1x Intel i5-10210U (4 cores); 1x i7-10710U (6 cores)
Scale Computing HE500-series CPU options:
HE500: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
HE550: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
HE550F: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
HE500T: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
HE550TF: 1x Intel Xeon E-2124 (4 cores); 1x Intel Xeon E-2134 (4 cores); 1x Intel Xeon E-2136 (6 cores)
Scale Computing HC1200-series CPU options:
HC1200: 1x Intel Xeon Bronze 3204 (6 cores); 1x Intel Xeon Silver 4208 (8 cores)
HC1250: 1x Intel Xeon Silver 4208 (8 cores); 2x Intel Xeon Silver 4210 (10 cores); 2x Intel Xeon Gold 6242 (16 cores)
HC1250D: 2x Intel Xeon Silver 4208 (8 cores); 2x Intel Xeon Silver 4210 (10 cores); 2x Intel Xeon Gold 6230 (20 cores); 2x Intel Xeon Gold 6242 (16 cores); 2x Intel Xeon Gold 6244 (8 cores)
HC1250DF: 2x Intel Xeon Silver 4208 (8 cores); 2x Intel Xeon Silver 4210 (10 cores); 2x Intel Xeon Gold 6230 (20 cores); 2x Intel Xeon Gold 6242 (16 cores); 2x Intel Xeon Gold 6244 (8 cores)
Scale Computing HC5000-series CPU options:
HC5200: 1x Intel Xeon Silver 4208 (8 cores); 1x Intel Xeon Silver 4210 (10 cores); 1x Intel Xeon Gold 6230 (20 cores)
HC5250D: 2x Intel Xeon Silver 4208 (8 cores); 2x Intel Xeon Silver 4210 (10 cores); 2x Intel Xeon Gold 6230 (20 cores); 2x Intel Xeon Gold 6242 (16 cores)
Scale Computing HC1200 and HC5000 series nodes ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
Lenovo HC3 Edge CPU options:
ST250: 1x Intel Xeon E-2100
SR250: 1x Intel Xeon E-2100
SR630: 2x 1st or 2nd generation Intel Xeon Scalable (Skylake or Cascade Lake)
SR650: 2x 1st or 2nd generation Intel Xeon Scalable (Skylake or Cascade Lake)
|
|
|
Flexible
|
Flexible: up to 10 options
Depending on the hardware model, Nutanix offers one or more memory choices per node: 64GB, 96GB, 128GB, 192GB, 256GB, 384GB, 512GB, 640GB, 768GB, 1TB, 1.5TB or 3.0TB. None of the model sub-types offers all of the above memory configurations. Nutanix's current hardware platforms provide up to 10 different memory configurations.
|
Flexible: up to 8 options
Scale Computing HE100-series memory options:
HE150: 8GB, 16GB, 32GB, 64GB
Scale Computing HE500-series memory options:
HE500: 16GB, 32GB, 64GB
HE550: 16GB, 32GB, 64GB
HE550F: 16GB, 32GB, 64GB
HE500T: 16GB, 32GB, 64GB
HE500TF: 16GB, 32GB, 64GB
Scale Computing HC1200-series memory options:
HC1200: 64GB, 96GB, 128GB, 192GB, 256GB, 384GB
HC1250: 64GB, 96GB, 128GB, 192GB, 256GB, 384GB
HC1250D: 128GB, 192GB, 256GB, 384GB, 512GB, 768GB
HC1250DF: 128GB, 192GB, 256GB, 384GB, 512GB, 768GB
Scale Computing HC5000-series memory options:
HC5200: 64GB, 128GB, 192GB, 256GB, 384GB, 512GB, 768GB
HC5250D: 128GB, 192GB, 256GB, 384GB, 512GB, 768GB, 1TB, 1.5TB
Lenovo HC3 Edge series memory options:
ST250: 16GB - 64GB
SR250: 16GB - 64GB
SR630: 64GB - 768GB
SR650: 64GB - 1.5TB
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible: capacity (up to 7 options per disk type); number of disks (Dell, Cisco)
Fixed: Number of disks (hybrid, most all-flash)
Some All-Flash models offer 2 or 3 options for the number of SSDs to be installed in each appliance.
AOS supports the replacement of existing drives by larger drives when they become available. Nutanix facilitates these upgrades.
|
Capacity: up to 5 options (HDD, SSD)
Fixed: Number of disks
Scale Computing HE100-series storage options:
HE150: 1x 250GB/500GB/1TB/2TB M.2 NVMe
Scale Computing HE500-series storage options:
HE500: 4x 1/2/4/8TB NL-SAS [magnetic-only]
HE550: 1x 480GB/960GB SSD + 3x 1/2/4TB NL-SAS [hybrid]
HE550F: 4x 240GB/480GB/960GB SSD [all-flash]
HE500T: 4x 1/2/4/8TB NL-SAS + 8x 4/8TB NL-SAS [magnetic-only]
HE550TF: 4x 240GB/480GB/960GB SSD [all-flash]
Scale Computing HC1200-series storage options:
HC1200: 4x 1/2/4/8/12TB NL-SAS [magnetic-only]
HC1250: 1x 480GB/960GB/1.92TB/3.84TB/7.68TB SSD + 3x 1/2/4/8/12TB NL-SAS [hybrid]
HC1250D: 1x 960GB/1.92TB/3.84TB/7.68TB SSD + 3x 1/2/4/8TB NL-SAS [hybrid]
HC1250DF: 4x 960GB/1.92TB/3.84TB/7.68TB SSD [all-flash]
Scale Computing HC5000-series storage options:
HC5200: 12x 8/12TB NL-SAS [magnetic-only]
HC5250D: 3x 960GB/1.92TB/3.84TB/7.68TB SSD + 9x 4/8TB NL-SAS [hybrid]
Lenovo HC3 Edge series storage options:
ST250: 8x 1/2/4/8TB NL-SAS [magnetic only]
ST250: 4x 960GB/1.92TB/3.84TB SSD [all-flash]
SR250: 4x 1/2/4/8TB NL-SAS [magnetic only]
SR250: 1x 960GB/1.92TB/3.84TB SSD + 3x 1/2/4/8TB NL-SAS [hybrid]
SR250: 4x 960GB/1.92TB/3.84TB SSD [all-flash]
SR630: 4x 1/2/4/8TB NL-SAS [magnetic only]
SR630: 1x 480GB/960GB/1.92TB/3.84TB/7.68TB SSD + 3x 1/2/4/8TB NL-SAS [hybrid]
SR630: 4x 1.92TB/3.84TB/7.68TB SSD [all-flash]
SR650: 3x 480GB/960GB/1.92TB/3.84TB/7.68TB SSD + 9x 1/2/4/8TB NL-SAS [hybrid]
The SSDs in all mentioned nodes are regular SSDs (non-NVMe).
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible: up to 4 add-on options
Depending on the hardware model Nutanix by default includes 1GbE or 10GbE network adapters and one or more of the following network add-ons: dual-port 1 GbE, dual-port 10 GbE, dual-port 10 GBase-T, quad-port 10 GbE, dual-port 25 GbE, dual-port 40GbE.
AOS 5.9 introduces support for Remote Direct Memory Access (RDMA). RDMA provides a node with direct access to the memory subsystems of other nodes in the cluster, without needing the CPU-bound network stack of the operating system. RDMA allows low-latency data transfer between memory subsystems, and so helps to improve network latency and lower CPU use.
With two RDMA-enabled NICs per node installed, the following platforms support RDMA:
- NX-3060-G6
- NX-3155G-G6
- NX-3170-G6
- NX-8035-G6
- NX-8155-G6
RDMA is not supported for the NX-1065-G6 platform as this platform includes only one NIC per node.
|
Fixed: HC1200/5000: 10GbE; HE150/500T: 1GbE
Flexible: HE500: 1/10GbE
Scale Computing HE100-series networking options:
HE150: 1x 1GbE
Scale Computing HE500-series networking options:
HE500: 4x 1GbE or 4x 10GbE SFP+
HE550: 4x 1GbE or 4x 10GbE SFP+
HE550F: 4x 1GbE or 4x 10GbE SFP+
HE500T: 2x 1GbE
HE550TF: 2x 1GbE
Scale Computing HC1200-series networking options:
HC1200: 4x 10GbE Base-T/SFP+ bonded active/passive
HC1250: 4x 10GbE Base-T/SFP+ bonded active/passive
HC1250D: 4x 10GbE Base-T/SFP+ bonded active/passive
HC1250DF: 4x 10GbE Base-T/SFP+ bonded active/passive
Scale Computing HC5000-series networking options:
HC5200: 4x 10GbE Base-T/SFP+ bonded active/passive
HC5250D: 4x 10GbE Base-T/SFP+ bonded active/passive
Lenovo HC3 Edge series networking options:
ST250: 2x 1GbE
SR250: 4x 1GbE or 4x 10GbE SFP+
SR630: 4x 10GbE BaseT or 4x 10GbE SFP+
SR650: 4x 10GbE BaseT or 4x 10GbE SFP+
|
|
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
DataCore SANsymphony supports the hardware that is on the hypervisor HCL.
VMware vSphere 6.5U1 officially supports several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
Windows 2016 supports two graphics virtualization technologies available with Hyper-V to leverage GPU hardware:
- Discrete Device Assignment
- RemoteFX vGPU
More information is provided here: https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-graphics-virtualization
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
NVIDIA Tesla (specific appliance models only)
Currently Nutanix and OEM partners support the following GPUs in a single server:
NVIDIA Tesla M60 (2x in NX-3155G-G5; 1x in NX-3175-G5; 2x in HX3510-G; 3x in XC740xd-24)
NVIDIA Tesla M10 (1x in NX-3170-G6; 2x in NX-3155G-G5; 1x in NX-3175-G5; 2x in XC740xd-24)
NVIDIA Tesla P40 (1x in NX-3170-G6; 2x in NX-3155G-G5; 1x in NX-3175-G5; 3x in XC740xd-24)
NVIDIA Tesla V100 (AHV clusters running AHV-20170830.171 and later only)
Nutanix and OEM partners do not support Intel and/or AMD GPUs at this time.
AOS 5.5 introduced vGPU support for guest VMs in an AHV cluster.
|
N/A
Scale Computing HC3 currently does not provide any GPU options.
|
|
|
|
Scaling |
|
|
|
CPU
Memory
Storage
GPU
The SANsymphony platform allows for expanding of all server hardware resources.
|
Memory
Storage
GPU
AOS supports the replacement of existing drives by larger drives when they become available. Nutanix facilitates these upgrades.
Empty drive slots in Cisco and Dell hardware models can be filled with additional physical disks when needed.
AOS 5.1 introduced general availability (GA) support for hot plugging virtual memory and vCPU on VMs that run on top of the AHV hypervisor from within the Self-Service Portal only. This means that the memory allocation and the number of CPUs on VMs can be increased while the VMs are powered on. However, the number of cores per CPU socket cannot be increased while the VMs are powered on.
AOS 5.11 with Foundation 4.4 and later introduces support for up to 120 Tebibytes (TiB) of storage per node. For these larger capacity nodes, each Controller VM requires a minimum of 36GB of host memory.
|
CPU
Memory
The Scale Computing HC3 platform allows for expanding CPU and Memory hardware resources. Storage resources (the number of disks within a single node) are usually not expanded.
|
|
|
Storage+Compute
Compute-only
Storage-only
Storage+Compute: In a single-layer deployment existing SANsymphony clusters can be expanded by adding additional nodes running SANsymphony, which adds additional compute and storage resources to the shared pool. In a dual-layer deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded simultaneously.
Compute-only: Because SANsymphony leverages virtual block volumes (LUNs), storage can be presented to hypervisor hosts not participating in the SANsymphony cluster. This is also beneficial to migrations, since it allows for online storage vMotions between SANsymphony and non-SANsymphony storage platforms.
Storage-only: In a dual-layer or mixed deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded independent from each other.
|
Compute+storage
Compute-only (NFS; SMB3)
Storage-only
Storage+Compute: Existing Nutanix clusters can be expanded by adding additional Nutanix nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: Because Nutanix leverages file-level protocols (NFS/SMB), storage can be presented to hypervisor hosts not participating in the Nutanix cluster. This is also beneficial to migrations, since it allows for online storage vMotions between Nutanix and non-Nutanix storage platforms.
Storage-only: Any Nutanix appliance model can be designated as a storage-only node. The storage-only node is part of the Nutanix storage cluster but does not actively participate in the hypervisor cluster. The storage-only node is always installed with AHV, which economizes on vSphere and/or Hyper-V licenses.
AOS 5.10 introduces the ability to add a never-schedulable node if you want to add a node to increase data storage on a Nutanix AHV cluster, but do not want any VMs to run on that node. In this way, compliance and licensing requirements of virtual applications that have a physical CPU socket-based licensing model can be met when scaling out a Nutanix AHV cluster.
|
Storage+Compute
Storage-only
Storage+Compute: Existing Scale Computing HC3 clusters can be expanded by adding additional nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: N/A; A Scale Computing HC3 node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
Storage-only: A Scale Computing HC3 node can be configured as a storage-only node by setting a flag; this has to be performed by Scale Computing engineering (end-user organizations cannot set the flag themselves).
|
|
|
1-64 nodes in 1-node increments
There is a maximum of 64 nodes within a single cluster. Multiple clusters can be managed through a single SANsymphony management instance.
|
3-Unlimited nodes in 1-node increments
The hypervisor cluster scale-out limits still apply, e.g. 64 hosts for VMware vSphere and Microsoft Hyper-V in a single cluster. Nutanix AHV clusters have no scaling limit.
SX1000 clusters scale to a maximum of 4 nodes per block and a maximum of 2 blocks is allowed, making for a total of 8 nodes.
|
3-8 nodes in 1-node increments
There is a maximum of 8 nodes within a single cluster. Larger clusters do exist, but must be requested and are evaluated on a per use-case basis.
|
|
Small-scale (ROBO)
Details
|
2 Node minimum
DataCore prevents split-brain scenarios by always having an active-active configuration of SANsymphony with a primary and an alternate path.
In case the SANsymphony servers are fully operational but cannot see each other, the application host will still be able to read and write data via the primary path (no switch to secondary). The mirroring is interrupted because of the lost connection and the administrator is informed accordingly. All writes are stored on the locally available storage (primary path) and all changes are tracked. As soon as the connection between the SANsymphony servers is restored, the mirror recovers automatically based on these tracked changes.
Dual updates due to misconfiguration are detected automatically and data corruption is prevented by freezing the vDisk and waiting for user input to solve the conflict. Conflict resolutions could be to declare one side of the mirror the new active data set and discard all tracked changes on the other side, or to split the mirror and merge the two data sets into a third vDisk manually.
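The change-tracking behavior described above can be illustrated with a minimal sketch (hypothetical structures, not DataCore internals): writes made while the mirror link is down are recorded in a dirty-block set, and only those blocks are copied once the connection returns.

```python
# Minimal sketch of log-based mirror recovery using a dirty-block set.
# All names are illustrative assumptions, not DataCore APIs.

class MirroredVDisk:
    def __init__(self):
        self.local = {}        # block -> data on the primary path
        self.remote = {}       # block -> data on the mirror copy
        self.link_up = True
        self.dirty = set()     # blocks changed while the mirror link was down

    def write(self, block, data):
        self.local[block] = data            # writes always land on the primary path
        if self.link_up:
            self.remote[block] = data       # synchronous mirror update
        else:
            self.dirty.add(block)           # track the change for later resync

    def link_restored(self):
        self.link_up = True
        for block in self.dirty:            # copy only the tracked changes,
            self.remote[block] = self.local[block]   # not the whole virtual disk
        self.dirty.clear()
```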
|
3 Node minimum (data center)
1 or 2 Node minimum (ROBO)
1 Node minimum (backup)
Nutanix's smallest deployment for Remote Office and Branch Office (ROBO) environments is a 1-node cluster based on the NX-1175S-G5 hardware model. The Starter Edition license is often best suited for this type of deployment.
Nutanix also provides a single-node configuration for off-cluster on-premises backup purposes by leveraging native snapshots. The single node supports compression and deduplication, but cannot be used as a generic backup target. It uses Replication Factor 2 (RF2) to protect data against magnetic disk failures.
AOS 5.6 introduces support for 2-node ROBO environments. These deployments require a Witness VM in a 3rd site as tie-breaker. This Witness is available for VMware vSphere and AHV. The same Witness VM is also used in Stretched Cluster scenarios based on VMware vSphere.
Nutanix's smallest deployment for data center environments contains 3 nodes in a Nutanix Xpress 2U SX-1000 configuration that is specifically designed for small-scale environments. The Starter Edition license is often best suited for this type of deployment.
|
1 Node minimum
|
|
|
Storage Support
|
|
|
|
|
|
|
General |
|
|
|
Block Storage Pool
SANsymphony only serves block devices to the supported OS platforms.
|
Distributed File System (ADSF)
Nutanix is powered by the Acropolis Distributed Storage Fabric (ADSF), previously known as Nutanix Distributed File System (NDFS).
|
Block Storage Pool
Scale Computing HC3 only serves block devices to the supported OS guest platforms. VMs running on HC3 have direct, block-level access to virtual SCRIBE devices (VSDs, aka virtual disks) in the clustered storage pool without the complexity or performance overhead introduced by using remote storage protocols.
A critical software component of HyperCore is the Scale Computing Reliable Independent Block Engine, known as SCRIBE. SCRIBE is an enterprise-class, clustered block storage layer, purpose-built to be consumed by the HC3 embedded KVM-based hypervisor directly.
SCRIBE discovers and aggregates all block storage devices across all nodes of the system into a single managed pool of storage. All data written to this pool is immediately available for read and write by any and every node in the storage system, allowing for sophisticated data redundancy, data deduplication, and load balancing schemes to be used by higher layers of the stack, such as the HyperCore compute layer.
SCRIBE is a wide-striped storage architecture that combines all disks in the cluster into a single storage pool that is tiered between flash SSD and spinning HDD storage.
|
|
|
Partial
DataCore's core approach is to provide storage resources to the applications without having to worry about data locality. But if data locality is explicitly requested, the solution can partially be designed that way by configuring the first instance of all data to be stored on locally available storage (primary path) and the mirrored instance to be stored on the alternate path (secondary path). Furthermore, every hypervisor host can have a local preferred path, indicated by the ALUA path preference.
By default data does not automatically follow the VM when the VM is moved to another node. However, virtual disks can be relocated on the fly to another DataCore node without losing I/O access, although this relocation takes some time due to the data copy operations required. This kind of relocation is usually done manually, but DataCore allows automation of such tasks and can integrate with VM orchestration using PowerShell, for example.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It is true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
Full
Data follows the VM: if a VM is moved to another server/node, data is copied to the node where the VM resides as it is read. New/changed data is always written locally and to one or more remote locations for protection.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It is true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
None
Scale Computing HC3 is based on a shared-nothing storage architecture. Scale Computing HC3 enables every drive in every node throughout the cluster to contribute to the storage performance and capacity of every virtual SCRIBE device (VSD) presented by the SCRIBE storage layer. When a VM is moved to another node, data remains in place and does not follow the VM because data is stored and available across all nodes residing in the cluster.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It is true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
|
|
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
VoV = Volume-on-Volume; The Virtual Storage Controller uses virtual disks provided by the hypervisor platform.
|
Direct-attached (Raw)
The software takes ownership of the unformatted physical disks available in the host.
|
Direct-Attached (Raw)
Direct-attached: The software takes ownership of the unformatted physical disks available in each host.
|
|
|
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
NEW
|
Hybrid (Flash+Magnetic)
All-Flash
|
Magnetic-only
Hybrid (Flash+Magnetic)
All-Flash
Scale Computing HC3 appliance models storage composition:
HC1200: Magnetic-only
HC1250: Hybrid
HC1250D: Hybrid
HC1250DF: All-flash
HC5250D: Hybrid
A Magnetic-only node is called a Non-tiered node and contains 100% HDD drives and no SSD drives.
A Hybrid node is called a Tiered node and contains 25% SSD drives and 75% HDD drives.
An All-Flash node contains 100% SSD drives and no HDD drives.
|
|
Hypervisor OS Layer
Details
|
SD, USB, DOM, SSD/HDD
|
SuperMicro (G3,G4,G5): DOM
SuperMicro (G6): M.2 SSD
Dell: SD or SSD
Lenovo: DOM, SD or SSD
Cisco: SD or SSD
|
HDD or SSD (partition)
By default, for each 1TB of data, 8MB is allocated for metadata. The data and metadata are stored on the physical storage devices (RSDs) and both are protected using mirroring (2N). Because metadata is so lightweight, all of the metadata of all of the online VSDs is cached in DRAM.
VSD = Virtual SCRIBE Device
RSD = Real SCRIBE Device
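As a back-of-the-envelope illustration of the 8MB-per-1TB ratio quoted above (a sketch, not a sizing tool):

```python
# Rough metadata sizing per the stated ratio: 8 MB of metadata per 1 TB of data.
def metadata_mb(data_tb):
    return data_tb * 8

# Example: a cluster holding 48 TB of data would keep roughly 384 MB of metadata,
# which is small enough to cache entirely in DRAM.
print(metadata_mb(48))   # -> 384
```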
|
|
|
|
Memory |
|
|
|
DRAM
|
DRAM
|
DRAM
|
|
|
Read/Write Cache
DataCore SANsymphony accelerates reads and writes by leveraging the powerful processors and large DRAM memory inside current generation x86-64bit servers on which it runs. Up to 8 Terabytes of cache memory may be configured on each DataCore node, enabling it to perform at solid state disk speeds without the expense. SANsymphony uses a common cache pool to store reads and writes in.
SANsymphony read caching essentially recognizes I/O patterns to anticipate which blocks to read next into RAM from the physical back-end disks. That way the next request can be served from memory.
When hosts write to a virtual disk, the data first goes into DRAM memory and is later destaged to disk, often grouped with other writes to minimize delays when storing the data to the persistent disk layer. Written data stays in cache for re-reads.
The cache is cleaned on a first-in-first-out (FIFO) basis. Segment overwrites are performed on the oldest data first for both read and write cache segment requests.
SANsymphony prevents the write cache data from flooding the entire cache. If the amount of write data rises above a certain percentage watermark of the entire cache, the write cache is temporarily switched to write-through mode in order to regain balance. This is performed fully automatically and is self-adjusting, per virtual disk as well as on a global level.
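The watermark behavior described above can be sketched as follows (a conceptual illustration with assumed names and thresholds, not DataCore's actual implementation):

```python
# Conceptual sketch of a shared cache that falls back to write-through
# when dirty (not yet destaged) write data exceeds a watermark.
class CachePool:
    def __init__(self, capacity_bytes, write_watermark=0.5):
        self.capacity = capacity_bytes
        self.watermark = write_watermark   # fraction of cache allowed to hold dirty data
        self.dirty_bytes = 0

    def write(self, data, disk_layer):
        if self.dirty_bytes + len(data) > self.capacity * self.watermark:
            disk_layer.append(data)        # write-through: persist immediately to regain balance
        else:
            self.dirty_bytes += len(data)  # write-back: acknowledge from DRAM, destage later

    def destaged(self, nbytes):
        self.dirty_bytes = max(0, self.dirty_bytes - nbytes)

# Example: with a 1024-byte cache and a 50% watermark, the second write is
# forced into write-through mode because it would push dirty data past 512 bytes.
pool, disk = CachePool(1024), []
pool.write(b"x" * 400, disk)   # cached (write-back)
pool.write(b"y" * 400, disk)   # exceeds watermark: written through to disk
```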
|
Read Cache
|
Read Cache
Metadata structures
By default, for each 1TB of data, 8MB is allocated for metadata. The data and metadata are stored on the physical storage devices (RSDs) and both are protected using mirroring (2N). Because metadata is so lightweight, all of the metadata of all of the online VSDs is cached in DRAM.
VSD = Virtual SCRIBE Device
RSD = Real SCRIBE Device
|
|
|
Up to 8 TB
The actual size that can be configured depends on the server hardware that is used.
|
Configurable
The memory of the Virtual Storage Controllers (VSCs) can be tuned to allow for a larger read cache by allocating more memory to these VMs. The amount of read cache is assigned dynamically; everything that is not used by the OS is used for read caching. Depending on other active internal processes, the amount of read cache can be larger or smaller.
|
4GB+
4GB of RAM is reserved per node for the entire HC3 system to function. No specific RAM is reserved for caching but the system will use any available memory as needed for caching purposes.
|
|
|
|
Flash |
|
|
|
SSD, PCIe, UltraDIMM, NVMe
|
SSD, NVMe
Nutanix supports NVMe drives in: Nutanix Community Edition (NCE), NX-3060-G6, NX-3170-G6, NX-8035-G6 and NX-8155-G6, as well as respective OEM models.
AOS 5.17 introduced All-NVMe drive support.
AOS 5.18 introduces Nutanix Blockstore that streamlines the I/O stack, taking the file system out of the equation altogether. With Blockstore Nutanix ECP effectively bypasses the OS (=Linux) kernel context switches and thus manages to accelerate AOS storage processing. Leveraging Nutanix Blockstore together with NVMe/Optane storage media delivers significant IOPS and latency improvements with no changes to the applications.
|
SSD, NVMe
HyperCore-Direct for NVMe can be requested and is evaluated by Scale Computing on a per-customer scenario basis.
|
|
|
Persistent Storage
SANsymphony supports new TRIM / UNMAP capabilities for solid-state drives (SSD) in order to reduce wear on those devices and optimize performance.
|
Read/Write Cache
Storage Tier
|
Persistent Storage
|
|
|
No limit, up to 1 PB per device
The definition of a device here is a raw flash device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
Hybrid: 1-4 SSDs per node
All-Flash: 3-24 SSDs per node
NVMe-Hybrid: 2-4 NVMe + 4-8 SSDs per node
Hybrid SSDs: 480GB, 960GB, 1.92TB, 3.84TB, 7.68TB
All-Flash SSDs: 480GB, 960GB, 1.92TB, 3.84TB, 7.68TB
NVMe: 1.6TB, 2TB, 4TB
|
Hybrid: 1-3 SSDs per node
All-Flash: 4 SSDs per node
Flash devices are not mandatory in a Scale Computing HC3 solution.
Each HC1250 hybrid node has 1 SSD drive attached.
Each HC1250DF all-flash node has 4 SSD drives attached.
Each HC5250D node has 3 SSD drives attached.
An HC1250DF all-flash node can have a maximum of 15.36TB of raw SSD storage attached.
|
|
|
|
Magnetic |
|
|
|
SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
In this case SATA = NL-SAS = MDL SAS
|
Hybrid: SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
|
Hybrid: SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
|
|
|
Persistent Storage
|
Persistent Storage
|
Persistent Storage
|
|
Magnetic Capacity
Details
|
No limit, up to 1 PB (per device)
The definition of a device here is a raw magnetic device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
2-20 SATA HDDs per node
Hybrid HDDs: 1TB, 2TB, 4TB, 6TB, 8TB, 12TB
|
Magnetic-only: 4 HDDs per node
Hybrid: 3 or 9 HDDs per node
Magnetic devices are not mandatory in a Scale Computing HC3 solution.
Each HC1200 magnetic-only node has 4 HDD drives attached.
Each HC1250 hybrid node has 3 HDD drives attached.
Each HC5250 hybrid node has 9 HDD drives attached.
An HC1200 magnetic-only node can have a maximum of 32TB of raw HDD storage attached.
An HC1250 hybrid node can have a maximum of 24TB of raw HDD storage attached.
An HC5250 hybrid node can have a maximum of 72TB of raw HDD storage attached.
|
|
|
Data Availability
|
|
|
|
|
|
|
Reads/Writes |
|
|
Persistent Write Buffer
Details
|
DRAM (mirrored)
If caching is turned on (default=on), any write is only acknowledged back to the host after it has been successfully stored in the DRAM memory of two separate physical SANsymphony nodes. Based on de-staging algorithms each of the nodes eventually copies the written data that is kept in DRAM to the persistent disk layer. Because DRAM outperforms both flash and spinning disks, applications experience much faster write behavior.
By default, the limit of dirty write data allowed per Virtual Disk is 128MB. This limit can be adjusted, but there has never been a reason to do so in the real world. Individual Virtual Disks can be configured to act in write-through mode, which means that the dirty-write-data limit is set to 0MB so effectively the data is written directly to the persistent disk layer.
DataCore recommends that all servers running SANsymphony software are UPS protected to avoid data loss through unplanned power outages. Whenever a power loss is detected, the UPS automatically signals this to the SANsymphony node and write behavior is switched from write-back to write-through mode for all Virtual Disks. As soon as the UPS signals that power has been restored, the write behavior is switched to write-back again.
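A minimal sketch of the acknowledgment rule described above (illustrative only; the names are assumptions, not DataCore APIs): a write is acknowledged only after it resides in the DRAM of two nodes, and a UPS-on-battery signal switches the virtual disk to write-through.

```python
# Illustrative sketch: acknowledge a write only once it sits in the DRAM of two
# separate nodes; switch to write-through when the UPS reports a power loss.
def handle_write(data, local_dram, partner_dram, disk_layer, on_battery=False):
    if on_battery:
        disk_layer.append(data)       # write-through: persist before acknowledging
        return "ack"
    local_dram.append(data)           # copy 1 in local DRAM
    partner_dram.append(data)         # copy 2 in partner DRAM (synchronous mirror)
    return "ack"                      # destaging to disk happens asynchronously later

# Example with lists standing in for DRAM buffers and the persistent disk layer.
a, b, disk = [], [], []
handle_write(b"block-42", a, b, disk)                    # acknowledged from mirrored DRAM
handle_write(b"block-43", a, b, disk, on_battery=True)   # UPS on battery: straight to disk
```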
|
Flash Layer (SSD; NVMe)
Nutanix supports NVMe drives in: Nutanix Community Edition (NCE), Native G6 and G7 hardware models, as well as respective OEM models.
All mentioned Nutanix models are available in Hybrid (SSD+Magnetic), All-Flash (SSD-only) and All-Flash with NVMe (NVMe+SSD) configurations.
|
Flash/HDD
The persistent write buffer depends on the type of the block storage pool (Flash or HDD).
|
|
Disk Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
1-2 Replicas (2N-3N) (primary)
Erasure Coding (N+1/N+2) (secondary)
Nutanix's implementation of replicas is called Replication Factor, or RF in short (RF2 = 2N; RF3 = 3N). Maintaining replicas is the default method for protecting data that is written to the Nutanix cluster. An implementation of erasure coding, called EC-X by Nutanix, can be optionally enabled for protecting data once it is write-cold (not overwritten for a certain amount of time). RF and EC-X are enabled on a per-container basis, but are applied at the individual VM/file level.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on a different node in the cluster. The latter happens in a fully distributed manner, in other words, there is no dedicated partner node. When a disk fails, it is marked offline and data is read from another instance instead. At the same time data re-replication of the associated replicas is initiated in order to restore the desired replication factor.
Erasure Coding: Nutanix EC-X was introduced in AOS 5.0 and is used for protecting write-cold data only. Nutanix EC-X provides more efficient protection than Nutanix RF, as it does not use full copies of data extents. Instead, EC-X uses parity for protecting data extents and distributes both data and parity across all nodes in the cluster. The number of nodes within a Nutanix cluster determines the EC-X stripe size. When a disk fails, it is marked offline and data needs to be rebuilt in-flight using the parity as data is being read, incurring a performance penalty. At the same time, data re-replication is initiated in order to restore the desired EC-X protection. Nutanix EC-X requires at least a 4-node setup. When Nutanix EC-X is enabled for the write-cold data, it automatically uses the same resiliency level as is already in use for the write-hot data, so 1 parity block per stripe for data with RF2 protection enabled and 2 parity blocks per stripe for data with RF3 protection enabled. Nutanix EC-X is enabled on the Storage Container (=vSphere Datastore) level.
AOS 5.18 introduces EC-X support for Object Storage containers.
|
2-way Mirroring (Network RAID-10)
Within a Scale Computing HC3 cluster all data is written twice to the block storage pool for redundancy (2N). It is equivalent to Network RAID-10, as the two data chunks are placed on separate physical disks of separate physical nodes within the cluster. This protects against 1 disk failure and 1 node failure at the same time, and aggregates the I/O and throughput capabilities of all the individual disks in the cluster (= wide striping).
Once an RSD fails, the system re-mirrors the data using the free space in the HC3 cluster as a hot spare. Because all physical disks contain data, rebuilds are very fast. Scale Computing HC3 is often able to detect the deteriorating state of a physical storage device in advance and proactively copy data to other devices ahead of an actual failure.
Currently only 1 Replica (2N) can be maintained, as the setting is not configurable for end-users.
|
|
Node Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1)
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
1-2 Replicas (2N-3N) (primary)
Erasure Coding (N+1/N+2) (secondary)
Nutanix's implementation of replicas is called Replication Factor, or RF in short (RF2 = 2N; RF3 = 3N). Maintaining replicas is the default method for protecting data that is written to the Nutanix cluster. An implementation of erasure coding, called EC-X by Nutanix, can be optionally enabled for protecting data once it is write-cold (not overwritten for a certain amount of time). RF and EC-X are enabled on a per-container basis, but are applied at the individual VM/file level.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on a different node in the cluster. The latter happens in a fully distributed manner, in other words, there is no dedicated partner node. When a disk fails, it is marked offline and data is read from another instance instead. At the same time data re-replication of the associated replicas is initiated in order to restore the desired replication factor.
Erasure Coding: Nutanix EC-X was introduced in AOS 5.0 and is used for protecting write-cold data only. Nutanix EC-X provides more efficient protection than Nutanix RF, as it does not use full copies of data extents. Instead, EC-X uses parity for protecting data extents and distributes both data and parity across all nodes in the cluster. The number of nodes within a Nutanix cluster determines the EC-X stripe size. When a disk fails, it is marked offline and data needs to be rebuilt in-flight using the parity as data is being read, incurring a performance penalty. At the same time, data re-replication is initiated in order to restore the desired EC-X protection. Nutanix EC-X requires at least a 4-node setup. When Nutanix EC-X is enabled for the write-cold data, it automatically uses the same resiliency level as is already in use for the write-hot data, so 1 parity block per stripe for data with RF2 protection enabled and 2 parity blocks per stripe for data with RF3 protection enabled. Nutanix EC-X is enabled on the Storage Container (=vSphere Datastore) level.
AOS 5.18 introduces EC-X support for Object Storage containers.
|
2-way Mirroring (Network RAID-10)
Within a Scale Computing HC3 cluster all data is written twice to the block storage pool for redundancy (2N). It is equivalent to Network RAID-10, as the two data chunks are placed on separate physical disks of separate physical nodes within the cluster. This protects against 1 disk failure and 1 node failure at the same time, and aggregates the I/O and throughput capabilities of all the individual disks in the cluster (= wide striping).
Once an RSD fails, the system re-mirrors the data using the free space in the HC3 cluster as a hot spare. Because all physical disks contain data, rebuilds are very fast. Scale Computing HC3 is often able to detect the deteriorating state of a physical storage device in advance and proactively copy data to other devices ahead of an actual failure.
Currently only 1 Replica (2N) can be maintained, as the setting is not configurable for end-users.
|
|
Block Failure Protection
Details
|
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk. However, block failure protection is in most cases irrelevant as 1-node appliances are used as building blocks.
SANsymphony works on an N+1 redundancy design, allowing any node to acquire any other node as a redundancy peer per virtual device. Peers are replaceable/interchangeable on a per Virtual Disk level.
|
Block Awareness (integrated)
Nutanix's intelligent software features include Block Awareness. A 'block' represents a physical appliance. Nutanix refers to a 'block' as the chassis which contains either one, two, or four 'nodes'. The reason for distributing roles and data across blocks is to ensure that if a block fails or needs maintenance the system can continue to run without interruption.
Nutanix Block Awareness can be broken into a few key focus areas:
- Data (The VM data)
- Metadata (Cassandra)
- Configuration Data (Zookeeper)
Block Awareness is automatic (always-on) and requires a minimum of 3 blocks to be activated; otherwise the system defaults to node awareness. The 3-block requirement is there to ensure quorum.
AOS 5.8 introduced support for erasure coding in a cluster where block awareness is enabled. In previous versions of AOS, block awareness was lost when implementing erasure coding. Minimums for erasure coding support are 4 blocks in an RF2 cluster and 6 blocks in an RF3 cluster. Clusters with erasure coding that have fewer blocks than specified will not regain block awareness after upgrading to AOS 5.8.
|
Not relevant (1U/2U appliances)
|
|
Rack Failure Protection
Details
|
Manual configuration
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk.
|
Rack Fault Tolerance
Rack Fault Tolerance is the ability to provide a rack-level availability domain. With rack fault tolerance, redundant copies of data are made and placed on nodes that are not in the same rack. When rack fault tolerance is enabled, the cluster has rack awareness and the guest VMs can continue to run after the failure of one rack (RF2) or two racks (RF3). The redundant copies of guest VM data and metadata exist on other racks when one rack fails.
AOS 5.17 introduces Rack Fault Tolerance support for Microsoft Hyper-V. Now all three hypervisors (ESXi, Hyper-V and AHV) have node, block, and rack level failure domain protection.
|
N/A
|
|
Protection Capacity Overhead
Details
|
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
|
RF2 (2N) (primary): 100%
RF3 (3N) (primary): 200%
EC-X (N+1) (secondary): 20-50%
EC-X (N+2) (secondary): 50%
EC-X (N+1): The optimal and recommended stripe size is 4+1 (20% capacity overhead for data protection). The minimum 4-node cluster configuration has a stripe size of 2+1 (50% capacity overhead for data protection).
EC-X (N+2): The optimal and recommended stripe size is 4+2 (50% capacity overhead for data protection). The minimum 6-node cluster configuration also has a stripe size of 4+2.
Because Nutanix Erasure Coding (EC-X) is a secondary feature that is only used for write cold data, with regard to an EC-X enabled Storage Container (=vSphere Datastore) the overall capacity overhead for data protection is always a combination of RF2 (2N) + EC-X (N+1) or RF3 (3N) + EC-X (N+2).
From AOS 5.18 onwards Nutanix Erasure Coding X (EC-X) is enabled for Object Storage containers.
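The figures above follow from simple ratios; the short sketch below shows the arithmetic, including the two common ways of expressing erasure-coding overhead (relative to usable data capacity or relative to the full stripe).

```python
# Worked capacity-overhead arithmetic for the protection schemes listed above.
def mirror_overhead(copies):
    return (copies - 1) * 100                      # RF2/2N -> 100%, RF3/3N -> 200%

def ec_overhead(data_blocks, parity_blocks):
    vs_data = parity_blocks / data_blocks * 100                      # relative to usable data
    vs_stripe = parity_blocks / (data_blocks + parity_blocks) * 100  # relative to the full stripe
    return vs_data, vs_stripe

print(mirror_overhead(2), mirror_overhead(3))   # 100 200
print(ec_overhead(4, 1))                        # 4+1 stripe -> (25.0, 20.0)
print(ec_overhead(2, 1))                        # 2+1 stripe -> (50.0, 33.3...)
print(ec_overhead(4, 2))                        # 4+2 stripe -> (50.0, 33.3...)
```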
|
Mirroring (2N) (primary): 100%
|
|
Data Corruption Detection
Details
|
N/A (hardware dependent)
SANsymphony fully relies on the hardware layer to protect data integrity. This means that the SANsymphony software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
Read integrity checks
Disk scrubbing (software)
While writing data, checksums are created and stored. When the data is read again, a new checksum is created and compared to the initial checksum. If incorrect, a checksum is created from another copy of the data. After successful comparison this data is used to repair the corrupted copy in order to stay compliant with the configured protection level.
Disk Scrubbing is a background process that is used to perform checksum comparisons of all data stored within the solution. This way stale data is also verified for corruption.
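A minimal sketch of the checksum-on-write / verify-on-read pattern described above (illustrative only, not Nutanix code):

```python
import zlib

# Illustrative checksum-on-write / verify-on-read with repair from a replica (RF2).
def write_block(primary, replica, key, data):
    crc = zlib.crc32(data)
    primary[key] = (data, crc)        # primary copy plus stored checksum
    replica[key] = (data, crc)        # second copy on another node/disk

def read_block(primary, replica, key):
    data, crc = primary[key]
    if zlib.crc32(data) != crc:       # corruption detected on read
        data, crc = replica[key]      # fall back to the healthy copy
        primary[key] = (data, crc)    # repair the corrupted copy
    return data

# Background scrubbing walks all blocks and performs the same comparison,
# so data that is rarely read still gets verified periodically.
def scrub(primary, replica):
    for key in list(primary):
        read_block(primary, replica, key)
```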
|
Read integrity checks (software)
Disk scrubbing (software)
The HC3 system performs continuous read integrity checks on data blocks to detect corruption errors. As blocks are written to disk, replica blocks are written to other disks within the storage pool for redundancy. Disks are continuously scrubbed in the background for errors and any corruption found is repaired from the replica data blocks.
|
|
|
|
Points-in-Time |
|
|
|
Built-in (native)
|
Built-in (native)
Nutanix calls its native snapshot/backup feature Time Stream.
|
Built-in (native)
HyperCore snapshots use a space-efficient allocate-on-write methodology where no additional storage is used at the time the snapshot is taken; as blocks are changed, the original content blocks are preserved and new content is written to freshly allocated space on the cluster.
|
|
|
Local + Remote
SANsymphony snapshots are always created on one side only. However, SANsymphony allows you to create a snapshot for the data on each side by configuring two snapshot schedules, one for the local volume and one for the remote volume. Both snapshot entities are independent and can be deleted independently allowing different retention times if needed.
The snapshot feature can also be paired with asynchronous replication, which provides a long-distance remote copy at a third site with its own retention time.
|
Local + Remote
|
Local (+ Remote)
Manual snapshots are always created on the source HC3 cluster only and are never deleted by the system.
Without remote replication active on a VM, snapshots created using snapshot schedules are also created on the source HC3 cluster only.
With remote replication active, a snapshot schedule repeatedly creates a VM snapshot on the source cluster and then copies that snapshot to the target cluster, where it is retained for a specified number of minutes/hours/days/weeks/months. The default remote replication frequency of 5 minutes, combined with the default retention of 25 minutes, means that by default 5 snapshots are maintained on the target HC3 cluster at any given time.
A VM can only have one snapshot schedule assigned at a time. However, a schedule can contain multiple recurrence rules. Each recurrence rule consists of a replication snapshot frequency (x minutes/hours/days/weeks/months), an execution time (e.g. 12:00AM), and a retention (y minutes/hours/days/weeks).
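The default numbers quoted above work out as follows (a small illustrative calculation, assuming evenly spaced replication snapshots):

```python
# Snapshots retained on the target cluster with evenly spaced replication.
def snapshots_retained(frequency_min, retention_min):
    return retention_min // frequency_min

# Defaults quoted above: replicate every 5 minutes, retain for 25 minutes.
print(snapshots_retained(5, 25))   # -> 5 snapshots on the target HC3 cluster
```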
|
|
Snapshot Frequency
Details
|
1 Minute
The snapshot lifecycle can be automatically configured using the integrated Automation Scheduler.
|
GUI: 1-15 minutes (nearsync replication); 1 hour (async replication)
NearSync and Continuous (Metro Availability) remote replication are only available in the Ultimate edition.
Nutanix Files 3.6 introduces support for NearSync. NearSync can be configured to take snapshots of a file server in 1-minute intervals.
AOS 5.9 introduced one-time snapshots of protection domains that have a NearSync schedule.
|
5 minutes
A snapshot schedule allows a minimum frequency of 5 minutes. However, ScaleCare Support recommends no less than every 15 minutes as a general best practice.
|
|
Snapshot Granularity
Details
|
Per VM (Vvols) or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is certified for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
Per VM
|
Per VM
|
|
|
Built-in (native)
DataCore SANsymphony incorporates Continuous Data Protection (CDP) and leverages this as an advanced backup mechanism. As the term implies, CDP continuously logs and timestamps I/Os to designated virtual disks, allowing end-users to restore the environment to an arbitrary point-in-time within that log.
Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage. Several of these markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent or crash-consistent restore point from just before the incident occurred.
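The rolling-log mechanism can be illustrated with a minimal sketch (hypothetical structures, not DataCore's implementation): every write is timestamped, rollback markers tag known-good points, and a restore replays the log up to the chosen point in time.

```python
import time

# Minimal sketch of a continuous data protection (CDP) journal: timestamped
# writes, optional rollback markers, restore to an arbitrary point in time.
journal = []                                   # entries: (timestamp, block, data, marker)

def log_write(block, data, marker=None):
    journal.append((time.time(), block, data, marker))

def restore(point_in_time):
    image = {}
    for ts, block, data, _ in journal:
        if ts > point_in_time:
            break                              # stop replaying at the chosen point
        image[block] = data                    # later writes overwrite earlier ones
    return image
```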
|
Built-in (native)
Nutanix calls its native snapshot feature Time Stream.
Nutanix calls its cloud backup feature Cloud-Connect.
By combining Nutanix native snapshot feature with its native remote replication mechanism, backup copies can be created on remote Nutanix clusters or within the public cloud (AWS/Azure).
A snapshot is not a backup:
1. For a data copy to be considered a backup, it must at the very least reside on a different physical platform (=controller+disks) to avoid dependencies. If the source fails or gets corrupted, a backup copy should still be accessible for recovery purposes.
2. To avoid further dependencies, a backup copy should reside in a different physical datacenter - away from the source. If the primary datacenter becomes unavailable for whatever reason, a backup copy should still be accessible for recovery purposes.
When considering the above prerequisites, a backup copy can be created by combining snapshot functionality with remote replication functionality to create independent point-in-time data copies on other SDS/HCI clusters or within the public cloud. In ideal situations, the retention policies can be set independently for local and remote point-in-time data copies, so an organization can differentiate between how long the separate backup copies need to be retained.
Apart from the native features, Nutanix ECP can be used in conjunction with external data protection solutions like VMware's free-of-charge vSphere Data Protection (VDP) backup software, as well as any hypervisor-compatible 3rd party backup application. VDP is part of the vSphere license and requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between Nutanix ECP and VMware VDP.
|
Built-in (native)
By combining Scale Computing HC3's native snapshot feature with its native remote replication mechanism, backup copies can be created on remote HC3 clusters.
A snapshot is not a backup:
1. For a data copy to be considered a backup, it must at the very least reside on a different physical platform (=controller+disks) to avoid dependencies. If the source fails or gets corrupted, a backup copy should still be accessible for recovery purposes.
2. To avoid further dependencies, a backup copy should reside in a different physical datacenter - away from the source. If the primary datacenter becomes unavailable for whatever reason, a backup copy should still be accessible for recovery purposes.
When considering the above prerequisites, a backup copy can be created by combining snapshot functionality with remote replication functionality to create independent point-in-time data copies on other SDS/HCI clusters or within the public cloud. In ideal situations, the retention policies can be set independently for local and remote point-in-time data copies, so an organization can differentiate between how long the separate backup copies need to be retained.
Apart from the native features, Scale Computing HC3 supports any in-guest 3rd party backup agents that are designed to run on Intel-based virtual machines on our supported OS platforms.
|
|
|
Local or Remote
All available storage within the SANsymphony group can be configured as targets for back-up jobs.
|
To local single-node
To local and remote clusters
To remote cloud object stores (Amazon S3, Microsoft Azure)
Nutanix calls its native snapshot feature Time Stream.
Nutanix calls its cloud backup feature Cloud-Connect.
Nutanix provides a single-node cluster configuration for on-premises off-production-cluster backup purposes by leveraging native snapshots. The single-node supports compression and deduplication, but cannot be used as a generic backup target. It uses Replication Factor 2 (RF2) to protect data against magnetic disk failures.
Nutanix also provides a single-node cluster configuration in the AWS and Azure public clouds for off-premises backup purposes. Data is moved to the cloud in an already deduplicated format. Optionally compression can be enabled on the single-node in the cloud.
|
Locally
To remote sites
|
|
|
Continuously
As Continuous Data Protection (CDP) is being leveraged, I/Os are logged and timestamped in a continuous fashion, so end-users can restore to virtually any point in time.
|
NearSync to remote clusters: 1-15 minutes*
Async to remote clusters: 1 hour
AWS/Azure Cloud: 1 hour
Nutanix calls its cloud backup feature Cloud-Connect.
*The retention of NearSync LightWeight snapshots (LWS) is relatively low (15 minutes), so these snapshots are of limited use in a backup scenario where retentions are usually a lot higher (days/weeks/months).
|
5 minutes (Asynchronous)
VM snapshots are created automatically by the replication process as quickly as every 5 minutes (as long as the previous snapshot’s change blocks have been fully replicated to the target HC3 cluster). The remote replication default schedule will take a snapshot every 5 minutes and keep snapshots for 25 minutes.
|
|
Backup Consistency
Details
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
By default CDP creates crash consistent restore points. Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage.
Several CDP Rollback Markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent, filesystem-consistent or crash-consistent restore point from (just) before the incident occurred.
In a VMware vSphere environment, the DataCore VMware vCenter plug-in can be used to create snapshot schedules for datastores and select the VMs that you want to enable VSS filesystem/application consistency for.
|
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
Nutanix provides the option to enable Microsoft VSS integration when configuring a backup policy. This ensures application-consistent backups are created for MS Exchange and MS SQL database environments.
AOS 5.9 introduced Application-Consistent Snapshot Support for NearSync DR.
AOS 5.11 introduced support for Nutanix Guest Tools (NGT) in VMware ESXi environments next to native AHV environments. This allows for the use of the native Nutanix Volume Shadow Copy Service (VSS) instead of Microsoft's VSS inside Windows Guest VMs. The Nutanix VSS hardware provider enables integration with native Nutanix data protection. The provider allows for application-consistent, VSS-based snapshots when using Nutanix protection domains. The Nutanix Guest Tools CLI (ngtcli) can also be used to execute the Nutanix Self-Service Restore CLI.
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
For Windows VMs that require it, VSS snapshot integration is provided in the VIRTIO driver package.
|
|
Restore Granularity
Details
|
Entire VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
When configuring the virtual environment as described above, effectively VM-restores are possible.
For file-level restores a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many simultaneous rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
Entire VM or Single File (local snapshots)
Self-service single-file restore is available for VMware ESXi and AHV (Acropolis Hypervisor based on KVM) environments. It is limited to NTFS basic disks in Windows VMs and requires the installation of the Nutanix Guest Tools (NGT) inside the protected VMs. The Self-Service Restore functionality has been enhanced with an in-guest user interface that allows application administrators to manage their own snapshots and perform single file restores. Prism can be used to manage and configure NGT.
|
Entire VM
Although Scale Computing HC3 uses block-storage, the platform is capable of attaining per VM-granularity.
|
|
Restore Ease-of-use
Details
|
Entire VM or Volume: GUI
Single File: Multi-step
Restoring VMs or single files from volume-based storage snapshots requires a multi-step approach.
For file-level restores a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many simultaneous rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
Entire VM: GUI
Single File: GUI, nCLI
The VM Administrator can initiate a restore from within the Guest VM using the Self-Service Restore (SSR) GUI or through nCLI commands. At this time only Async-DR workflow is supported for the self-service restore feature.
AOS 5.18 introduces self-service restore (=file-level restore) capabilities to Nutanix Leap.
|
Entire VM: Multi-step
Single File: Multi-step
Restoring VMs or single files from HC3 storage snapshots requires a multi-step approach.
For file-level restores a VM snapshot needs to be cloned and mounted so the file can be read from the mount. Cloning and mounting does not alter the original VM snapshot.
|
|
|
|
Disaster Recovery |
|
|
Remote Replication Type
Details
|
Built-in (native)
DataCore SANsymphonys remote replication function, Asynchronous Replication, is called upon when secondary copies will be housed beyond the reach of Synchronous Mirroring, as in distant Disaster Recovery (DR) sites. It relies on a basic IP connection between locations and works in both directions. That is, each site can act as the disaster recovery facility for the other. The software operates near-synchronously, meaning that it does not hold up the application waiting on confirmation from the remote end that the update has been stored remotely.
|
Built-in (native)
Nutanix provides native DR and replication capabilities.
|
Built-in (native)
All HC3 source and target clusters that will be participating in remote replication must run the same HCOS version. It is possible to replicate between a tiered and non-tiered HC3 cluster.
HC3 remote replication uses network compression by default.
|
|
Remote Replication Scope
Details
|
To remote sites
To MS Azure Cloud
On-premises deployments of DataCore SANsymphony can use Microsoft Azure cloud as an added replication location to safeguard highly available systems. For example, on-premises stretched clusters can replicate a third copy of the data to MS Azure to protect against data loss in the event of a major regional disaster. Critical data is continuously replicated asynchronously within the hybrid cloud configuration.
To allow quick and easy deployment a ready-to-go DataCore Cloud Replication instance can be acquired through the Azure Marketplace.
MS Azure can serve only as a data repository. This means that VMs cannot be restored and run in an Azure environment in case of a disaster recovery scenario.
|
To remote sites
To AWS and MS Azure Cloud
To Xi Cloud (US and UK only)
AWS and MS Azure can only serve as data repositories. This means that VMs cannot be restored and run in an AWS/Azure environment in case of a disaster recovery scenario.
Xi Cloud: The Nutanix DRaaS offering is called 'Xi Leap'. All protected Nutanix VMs can be failed over and run in Xi Cloud and then failed back once the on-prem resources are restored. Xi Leap is currently available in three regions worldwide: US-EAST (Northern Virginia), US-WEST (San Francisco Bay Area) and UK (London). Each region comprises multiple fault-tolerant zones known as Availability Zones.
Both Nutanix AHV and VMware ESXi are supported for Xi Leap.
|
To remote sites
To Google Cloud Platform (GCP)
Network latency between the source and HC3 target clusters should be below 2,000ms (2 seconds).
Scale Computing HC3 Cloud Unity DRaaS: This disaster recovery as a service offering provides an HC3 DR target running securely in Google Cloud Platform (GCP). Workloads can be replicated to the Google cloud for failover or recovery on a per VM basis. HC3 Cloud Unity DRaaS uses L2 networking to provide seamless connectivity between on-prem and remote hosted VMs in the event of failover. HC3 Cloud Unity DRaaS includes ScaleCare support at every stage to assist in setup, testing, failover, and recovery. The service also comes with a runbook to assist with both planning and execution. When needed, all protected VMs can be failed over and running in the cloud and then failed back once the on-prem resources are restored. The Recovery Point Objective (RPO) is 4 hours for recovery of the first VM on GCP.
HC3 Cloud Unity DRaaS requires a monthly subscription that is in part based on GCP resource usage (compute, storage, network).
|
|
Remote Replication Cloud Function
Details
|
Data repository
All public clouds can only serve as data repository when hosting a DataCore instance. This means that VMs cannot be restored and run in the public cloud environment in case of a disaster recovery scenario.
In the Microsoft Azure Marketplace there is a pre-installed DataCore instance (BYOL) available named DataCore Cloud Replication.
BYOL = Bring Your Own License
|
Data repository (AWS/Azure)
DR-site (Xi Cloud)
AWS and MS Azure can only serve as data repositories. This means that VMs cannot be restored and run in an AWS/Azure environment in case of a disaster recovery scenario.
|
DR-site (GCP)
All protected VMs can be failed over and running in the cloud and then failed back once the on-prem resources are restored.
Scale Computing HC3 Cloud Unity DRaaS leverages Google Cloud Platform (GCP) as a DR-site. All traffic between the on-premises HC3 environment and GCP utilizes an encrypted connection, authenticated via pre-shared key. Only changed blocks are transmitted. Replicated data remains solely in the zone chosen to run the HC3 Cloud instance in.
Current HC3 Cloud Unity/Google Datacenter Locations:
- United States: Iowa, South Carolina, Oregon
- Canada: Montreal
- Europe: Belgium, London, Frankfurt
|
|
Remote Replication Topologies
Details
|
Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
|
Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
Multiple Site DR (1-to-many, many-to-1) is only available in the Ultimate edition.
AOS 5.17 introduces multi-site disaster recovery. Multi-Site DR combines Nutanix Metro Availability, NearSync and Asynchronous replication with a DR orchestration framework. This enables recovery from the simultaneous failure of two or more datacenters.
|
Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
Scale Computing HC3 supports 1-to-1 replication as well as many-to-1 replication. 1-to-1 replication includes support for cross-replication between two systems, meaning source-to-target and target-to-source. 1-to-many replication means that different VMs from one system can be replicated to different remote systems; with HC3 the same VM cannot be replicated to different remote systems. Many-to-1 means that multiple source systems can replicate VMs to the same target system. A maximum of 25 HC3 source systems can replicate to a single HC3 target system.
|
|
Remote Replication Frequency
Details
|
Continuous (near-synchronous)
SANsymphony Asynchronous Replication is not checkpoint-based but instead replicates continuously. This way data loss is kept to a minimum (seconds to minutes). End-users can inject custom consistency checkpoints based on CDP technology which has no minimum time slot/frequency.
|
Synchronous to remote cluster: continuous
NearSync to remote clusters: 20 seconds*
Async to remote clusters: 1 hour
AWS/Azure Cloud: 1 hour
Xi Cloud: 1 hour
AOS 5.5 introduced NearSync replication that leverages light-weight snapshots (LWS). The NearSync feature provides the ability to protect data with up to 1 minute RPO, minimizing data loss in case of a disaster. NearSync allows for a short time to recover, thus providing a low RTO. NearSync places no restrictions on latency or distance and works with all supported hypervisors. NearSync is a best-effort mechanism: whenever for example the network can’t sustain the low RPO, replication will temporarily transition out of near-sync. NearSync requires all changes to be handled in SSD, so a percentage of SSD space is reserved to be used by NearSync when it’s enabled.
In AOS 5.9+ Asynchronous and NearSync schedules can coexist in a protection domain.
AOS 5.17 enhanced NearSync RPO from 1 minute to 20 seconds. AOS 5.17 also introduced support for NearSync Replication (1-15 minutes RPO) with Xi Leap. Protection Policies can now be configured to use a NearSync replication schedule between two on-prem AHV/ESXi clusters or an on-prem AHV cluster and Xi Cloud Services.
AOS 5.17 introduced Synchronous replication (0 RPO) between two on-prem AHV clusters when using Xi Leap, Nutanix own built-in disaster recovery service offering. This includes replication of VM data, metadata, and associated policies. This means all VM attributes and associated security and orchestration policies are preserved in case of a failover. At this time only an unplanned failover can be performed when leveraging Synchronous replication between two on-prem AHV clusters when using Xi Leap. Before AOS 5.17 AHV did not provide any support for Synchronous replication.
NearSync and Continuous (Metro Availability) remote replication are only available in the Ultimate edition.
Nutanix Files supports asynchronous replication with a minimum interval of 1 hour. Nutanix Files 3.6 introduced support for NearSync.
Nutanix Files 3.6.1 introduces Async disaster recovery (DR) with a 1-hour RPO on physical nodes up to 80TB.
|
5 minutes
VM snapshots are created automatically by the replication process as quickly as every 5 minutes (as long as the previous snapshot’s change blocks have been fully replicated to the target HC3 cluster). The remote replication default schedule will take a snapshot every 5 minutes and keep snapshots for 25 minutes.
ScaleCare Support recommends a snapshot frequency of 15 minutes as a general best practice.
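A minimal sketch of the scheduling behavior described above, assuming hypothetical snapshot/replication primitives (this is not the HC3 API): a new snapshot is only taken once the previous snapshot's changed blocks have been fully replicated, and snapshots older than the retention window are discarded.

import time
from collections import deque

SNAPSHOT_INTERVAL = 5 * 60      # default: a snapshot every 5 minutes
RETENTION_WINDOW = 25 * 60      # default: keep snapshots for 25 minutes

def replication_loop(take_snapshot, replicate_changed_blocks, delete_snapshot):
    # Illustrative scheduler only; the real HC3 replication engine is internal.
    snapshots = deque()         # (timestamp, snapshot_id), oldest first
    while True:
        started = time.time()
        snap_id = take_snapshot()
        snapshots.append((started, snap_id))
        replicate_changed_blocks(snap_id)   # blocks until fully replicated to the target
        # prune snapshots that have aged out of the retention window
        while snapshots and time.time() - snapshots[0][0] > RETENTION_WINDOW:
            _, old_id = snapshots.popleft()
            delete_snapshot(old_id)
        # if replication took longer than the interval, the next snapshot is simply late
        time.sleep(max(0.0, SNAPSHOT_INTERVAL - (time.time() - started)))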
|
|
Remote Replication Granularity
Details
|
VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
VM
iSCSI LUN
VMs and Volume Groups can be replicated by placing them in a Protection Domain.
iSCSI LUNs can be part of a Volume Group.
VMs can be placed in a Consistency Group.
|
VM
Excluding individual virtual disks from VM replication is a feature that is still in testing; currently this must be arranged by engaging Support to discuss the options and considerations involved.
|
|
Consistency Groups
Details
|
Yes
SANsymphony provides the option to use Virtual Disk Grouping to enable end-users to restore multiple Virtual Disks to the exact same point-in-time.
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
|
Yes
VMs can be configured as part of a Consistency Group in order to guarantee that all VMs in the group reflect exactly the same point-in-time, thus providing crash-consistency across VMs.
Application consistency can only be achieved for a single VM, because only 1 VM can be part of a consistency group when selecting application consistent snapshots.
Nutanix supports a maximum of 20 entities (VMs, volume groups) in a single consistency group.
|
No
|
|
|
VMware SRM (certified)
DataCore provides a certified Storage Replication Adapter (SRA) for VMware Site Recovery Manager (SRM). DataCore SRA 2.0 (SANsymphony 10.0 FC/iSCSI) shows official support for SRM 6.5 only. It does not support SRM 8.2 or 8.1.
There is no integration with Microsoft Azure Site Recovery (ASR). However, SANsymphony can be used with the control and automation options provided by Microsoft System Center (e.g. Operations Manager combined with Virtual Machine Manager and Orchestrator) to build a DR orchestration solution.
|
VMware SRM (certified)
Xi Leap (native; ESXi/AHV; US/UK/EU/JP)
NEW
VMware SRM: Nutanix provides a certified Storage Replication Adapter (SRA) for VMware Site Recovery Manager (SRM). Nutanix SRA 2.5 (AOS 5.10 and 5.11) shows official support for SRM versions 8.2, 8.1 and 6.5.
Nutanix AOS 5.11 introduced support for NearSync replication. A NearSync schedule can now be created on a protection domain for SRA replication. The NearSync schedule can coexist with an asynchronous schedule.
Xi Leap: In November 2018 Nutanix launched Xi Leap, its own built-in disaster recovery service (DRaaS) offering. Xi Leap allows end-user organisations to protect applications and data running in Nutanix on-premises environments by setting up protection policies that replicate them to Xi Cloud. From the Prism management console, admins select virtual machines for protection and drag and drop them into the cloud environment. Disaster Recovery orchestration is provided by recovery plans, where boot sequencing of virtual machines can be controlled, re-IP can be configured and custom scripts can be included. The goal is to provide one-click failover and failback functionality. Xi Leap also allows end-user organisations to verify and prove their DR readiness by offering non-disruptive testing and cleanup.
Network connectivity and common management between on-premises and cloud environments are preserved with Xi Leap, allowing end-user organisations to manage the source and target sites as a single environment.
Xi Leap supports Single Sign On (SSO) by integrating with Active Directory Federation Services (ADFS), as well as data at-rest and data in-flight encryption (AES-256).
Xi Leap is currently bound to the following maximums:
- 200 VMs per recovery plan
- 20 categories per recovery plan
- 20 stages in a recovery plan
- 15 categories per stage
- 5 recovery plans executed in parallel
Xi Leap is currently available in 6 regions worldwide: US-EAST (Northern Virginia), US-WEST (San Francisco Bay Area), UK (London), EU-Germany, EU-Italy and AP-Japan. Each region comprises multiple fault-tolerant zones known as Availability Zones.
Xi Leap is supported for AHV and ESXi on-premises hypervisors (AOS 5.11 required).
Xi Leap subscription plans need to be paid for separately.
AOS 5.17 introduced more flexibility and granularity in Nutanix Recovery Plans, including in-guest custom script execution and IP address configuration during the recovery process. The latter enables automated disaster recovery without the need for stretched networks.
AOS 5.18 introduced self-service restore (=file-level restore) capabilities for Xi Leap. AOS 5.18 also introduced Cross-Hypervisor Disaster Recovery (CHDR) Support for NearSync Replication in Leap. This means that CHDR enables recovering VMs from AHV clusters to ESXi-based Nutanix clusters or VMs from ESXi-based Nutanix clusters to AHV clusters.
AOS 5.19 introduces multi-site replication capabilities for Xi Leap. Protection policies can now be created to replicate the recovery points to one or more recovery availability zones. Recovery points can be replicated to at most 2 different AHV or ESXi clusters at the same or different availability zones. To maintain the efficiency of replication, only 1 recovery availability zone can be configured for NearSync replication schedules (1–15 minutes RPO) and Synchronous (0 RPO) replication schedules.
AOS 5.19 also introduces cross-cluster live migration capabilities. This allows a live migration of guest VMs protected with a Synchronous replication schedule. Live migration offers zero downtime of protected guest VMs during a planned failover event to the recovery availability zone.
|
HC3 Cloud Unity (native)
HC3 Cloud Unity DRaaS is a complete cloud service that provides a Disaster Recovery (DR) runbook outlining DR procedures.
DR testing involves cloning a replicated VM snapshot on a remote cluster and booting the clone.
Cloud Unity DRaaS is available in the following Google Regions:
United States: Iowa, South Carolina, Oregon
Canada: Montreal
Europe: Brussels, London, Frankfurt
HyperCore 8.5.1 introduced a bulk action allowing the cloning of Replication Target VMs.
|
|
Stretched Cluster (SC)
Details
|
VMware vSphere: Yes (certified)
DataCore SANsymphony is certified by VMware as a VMware Metro Storage Cluster (vMSC) solution. For more information, please view https://kb.vmware.com/kb/2149740.
|
vSphere: Yes
Hyper-V: Yes
AHV: No
Nutanix calls this feature Metro Availability. The solution is bi-directionally active/passive in nature. What this means is, that active containers can exist on both sites, each with a passive mirror in the other site.
Metro Availability and synchronous replication are supported across different hardware vendors (NX, Dell, or Lenovo) from AOS 5.1 onwards. Note that mixing nodes from different vendors in the same cluster is not supported (for example, not supported: NX and non-NX, Dell and non-Dell, Lenovo and non-Lenovo, Cisco UCS and non-Cisco UCS, and so on). Asynchronous replication across different hardware vendors continues to be supported.
Data is compressed during data transfers.
Although Nutanix has been claiming VMware vSphere Metro Storage Cluster (vMSC) support since NOS 4.1, it is not officially listed as such in the online VMware Storage Compatibility Guide. According to Nutanix no certification program is open for vMSC, so no new vendors are allowed to join.
Stretched Cluster is only available in the Ultimate edition and can be purchased as an add-on license for the Pro edition.
|
N/A
At this time Scale Computing does not support HC3 clusters that are stretched across data centers.
|
|
|
2+sites = two or more active sites, 0/1 or more tie-breakers
Theoretically up to 64 sites are supported.
SANsymphony does not require a quorum or tie-breaker in stretched cluster configurations, but one can be used as an optional component. The Virtual Disk Witness can provide a tie-breaker role if, for instance, redundant inter-site paths are not implemented. The tie-breaker node (server or device) must be other than the two nodes presenting a virtual disk. Access to the Virtual Disk Witness determines storage node behavior.
There are 3 ways to configure the stretched cluster without any tie-breakers (sketched below):
1. Default: in a split-brain scenario both sides stay active, allowing upper infrastructure layers (OS/database/application) to make a decision (eg. clustering principles). In any case SANsymphony prevents a merge when there is a risk to data integrity, and the end-user has to choose how to proceed next (i.e. which side holds the valid data)
2. Select one side to go inaccessible
3. Select both sides to go inaccessible.
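As an illustration of the three policies listed above (not DataCore code), a split-brain handler without a tie-breaker might be modeled as follows; which side is "preferred" and how the merge is blocked are assumptions made for the sketch.

from enum import Enum

class SplitBrainPolicy(Enum):
    BOTH_ACTIVE = 1    # default: both sides keep serving I/O, automatic merge is blocked
    ONE_SIDE_DOWN = 2  # one pre-selected side goes inaccessible
    BOTH_DOWN = 3      # both sides go inaccessible

def on_mirror_link_lost(policy: SplitBrainPolicy, local_is_preferred: bool) -> bool:
    """Return True if this node should keep presenting the virtual disk."""
    if policy is SplitBrainPolicy.BOTH_ACTIVE:
        # Upper layers (OS/database/application clustering) decide; the platform
        # only prevents an automatic merge until the end-user picks the valid side.
        return True
    if policy is SplitBrainPolicy.ONE_SIDE_DOWN:
        return local_is_preferred
    return False  # BOTH_DOWN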
|
vSphere: 3-sites = two active sites + tie-breaker in 3rd site
Hyper-V: 3-sites = two active sites + tie-breaker in 3rd site
The use of the Metro Availability Witness automates failover decisions in order to avoid split-brain scenarios like network partitions and remote site failures. The witness is deployed as a VM on a third-party server or a Nutanix cluster within a separate failure domain.
The Metro Availability Witness is available for VMware vSphere environments and Microsoft Hyper-V 2016 environments (AOS 5.9 and later).
|
N/A
At this time Scale Computing does not support HC3 clusters that are stretched across data centers.
|
|
|
<=5ms RTT (targeted, not required)
RTT = Round Trip Time
In practice, the application with the lowest write-latency tolerance defines the acceptable RTT (and thus the distance).
|
<=5ms RTT / <400 KMs
|
N/A
At this time Scale Computing does not support HC3 clusters that are stretched across data centers.
|
|
|
<=32 hosts at each active site (per cluster)
The maximum is per cluster. The SANsymphony solution can consist of multiple stretched clusters with a maximum of 64 nodes each.
|
No set max. # Nodes; Mixing hardware models allowed
Nutanix has not set a hard node limit for stretched cluster configurations. This means Stretched Clustering can be applied regardless of the number of nodes. It is currently unknown what the largest running implementation of Nutanix Stretched Clustering is in a live customer production environment.
|
N/A
At this time Scale Computing does not support HC3 clusters that are stretched across data centers.
|
|
SC Data Redundancy
Details
|
Replicas: 1N-2N at each active site
DataCore SANsymphony provides enhanced stretched cluster availability by offering local fault protection with In Pool Mirroring. With In Pool Mirroring you can choose to mirror the data inside the local Disk Pool as well as mirror the data across sites to a remote Disk Pool. In the remote Disk Pool data is then also mirrored. All mirroring happens synchronously.
1N-2N: With SANsymphony Stretched Clustering, there can be either 1 instance of the data at each site (no In Pool Mirroring) or 2 instances of the data at each site (In Pool RAID-1 Mirroring).
|
Replicas: 1N at each active site
Erasure Coding (optional): Nutanix EC-X at each active site
Nutanix Stretched Clustering works with Replication Factor 2 (RF2), meaning that there is only one instance of the data (1N) available at each of the active sites.
When using Nutanix EC-X, data is protected across cluster nodes within each active site.
|
N/A
At this time Scale Computing does not support HC3 clusters that are stretched across data centers.
|
|
|
Data Services
|
|
|
|
|
|
|
Efficiency |
|
|
Dedup/Compr. Engine
Details
|
Software (integration)
NEW
SANsymphony provides integrated and individually selectable inline deduplication and compression. In addition, SANsymphony is able to leverage post-processing deduplication and compression options available in Windows 2016/2019 as an alternative approach.
|
Software
|
Software
Scale Computing HC3 is able to leverage native background data deduplication to reduce the physical space occupied by virtual disks.
The storage details available in the HC3 Web interface provide information on efficiency gains resulting from deduplication.
|
|
Dedup/Compr. Function
Details
|
Efficiency (space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focussed on efficiency.
|
Efficiency (full) and Performance (limited)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most storage solutions place emphasis on efficiency.
Nutanix ECP provides two deduplication methods:
1. Inline Performance Deduplication: performs near-line deduplication in order to provide for space savings on the performance tier (RAM/SSD), thus allowing more hot data to be stored here.
2. MapReduce Deduplication: performs post-process deduplication in order to provide for space savings on the capacity tier (HDD), thus allowing for more cold data to be stored here.
Nutanix ECP provides two compression methods:
1. Inline Compression: performs immediate compression on random writes in order for space savings on the performance tier (SSD), thus allowing for more hot data to be stored here.
2. MapReduce Compression: performs post-process deep compression in order to provide for additional space savings on the capacity tier (SSD/HDD), thus allowing for more cold data to be stored here.
|
Efficiency (space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focussed on efficiency.
|
|
Dedup/Compr. Process
Details
|
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
Deduplication/Compression: Post-Processing (post process)
NEW
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
DataCore SANSymphony 10 PSP12 and above leverage both inline deduplication and compression, as well as post process deduplication and compression techniques.
With inline deduplication incoming writes first hit the memory cache of the primary host and are replicated to the cache of a secondary host in an un-deduplicated state. After the blocks have been written to both memory caches, the primary host acknowledges the writes back to the originator. Each host then destages the written blocks to the persistent storage layer. During destaging, written blocks are deduplicated and/or compressed.
Windows Server 2019 deduplication is performed outside of IO path (post-processing) and is multi-threaded to speed up processing and keep performance impact minimal.
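To make the inline (post-ack) path described above more concrete, here is a toy model, not SANsymphony internals: writes are cached and mirrored first, acknowledged immediately, and only deduplicated/compressed when the buffer is destaged. All names, the hash choice (SHA-256) and the compression call (zlib) are assumptions made for the sketch.

import hashlib
import zlib

class InlinePostAckNode:
    """Toy model of an inline (post-ack) dedup/compression write path."""
    def __init__(self, peer=None):
        self.cache = []   # un-deduplicated write buffer (mirrored to the peer)
        self.store = {}   # hash -> compressed segment (persistent layer)
        self.refs = {}    # hash -> reference count
        self.peer = peer

    def write(self, segment: bytes) -> str:
        self.cache.append(segment)                 # primary memory cache
        if self.peer is not None:
            self.peer.cache.append(segment)        # mirrored, still un-deduplicated
        return "ACK"                               # acknowledged before any dedup happens

    def destage(self):
        """Flush the buffer; dedup/compress only now (hence 'post-ack')."""
        while self.cache:
            segment = self.cache.pop(0)
            compressed = zlib.compress(segment)
            digest = hashlib.sha256(compressed).hexdigest()
            if digest in self.store:
                self.refs[digest] += 1             # duplicate: only add a reference
            else:
                self.store[digest] = compressed    # unique: store the compressed segment
                self.refs[digest] = 1

# Example: two identical 128 KB segments end up as one stored copy with refcount 2.
node = InlinePostAckNode()
node.write(b"A" * 131072)
node.write(b"A" * 131072)
node.destage()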
|
Perf. Tier: Inline (dedup post-ack / compr pre-ack)
Cap. Tier: Post-process
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
From AOS 5.0 and onwards compression is performed inline (pre-ack) for random writes, so before they are written to SSD. From AOS 5.1 onwards post-process compression is enabled by default. From AOS 5.18 onwards inline compression is enabled by default for new storage containers.
|
Post-Processing
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
The Scale Computing HC3 data deduplication feature is considered a post-process implementation that works with existing background processes to identify duplicate 1 MiB blocks of data on a given physical disk. The process leverages the SCRIBE metadata reference count mechanism by finding independently written blocks that are the same. This duplicate review is for each physical disk on a given node to ensure as little a footprint as possible while providing all of the benefits of full deduplication.
The deduplication process is broken into two steps. The first step reviews VM data blocks by creating a hash index of each block and storing the hash in the nodes RAM. The hashing algorithm will be able to scan the system data for deduplication candidates at roughly 1 MiB/s of data on HDDs and 4 MiB/s of data on SSDs, both of these estimates per node. The second process occurs during low system utilization. The system will work through the queue of hashed blocks in RAM. It will search for matching hashes until the background disk scan regenerates them. When the process finds two blocks with a matching hash it will verify the underlying blocks are in fact duplicates before incrementing the reference count in metadata on the block it is planning to free. Updating the metadata count for the block essentially releases the space of the duplicate block. The block then returns to the system’s free storage pool. This secondary process can progress much faster than 1 MiB/s; the speed is dependent on the current system load.
The SCRIBE metadata reference count mechanism is the same architecture utilized by snapshots and clones in SCRIBE to allow quick, efficient, low-impact thin-provisioning on the HC3 system. Shared blocks are referenced and a count to the block stored in the metadata.
SCRIBE = Scale Computing Reliable Independent Block Engine
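A rough sketch of the two-step post-process approach described above, assuming a simple dict-based model (this is illustrative only, not SCRIBE code; the hash function is an assumption): step one builds a hash index of 1 MiB blocks, step two verifies candidates byte-for-byte and bumps the survivor's reference count so the duplicate block can be released to the free pool.

import hashlib

BLOCK_SIZE = 1 * 1024 * 1024   # 1 MiB blocks, per the HC3 description

def hash_blocks(disk_blocks):
    """Step 1: build a hash index of candidate blocks (kept in node RAM)."""
    index = {}
    for block_id, data in disk_blocks.items():
        index.setdefault(hashlib.sha256(data).hexdigest(), []).append(block_id)
    return index

def deduplicate(disk_blocks, refcounts, index):
    """Step 2 (run at low system utilization): verify that matching blocks are
    truly identical, increment the survivor's metadata reference count, and
    release each duplicate block back to the free storage pool."""
    freed = []
    for block_ids in index.values():
        survivor = block_ids[0]
        for candidate in block_ids[1:]:
            if disk_blocks[candidate] == disk_blocks[survivor]:   # verify, don't trust the hash alone
                refcounts[survivor] = refcounts.get(survivor, 1) + 1
                freed.append(candidate)
                del disk_blocks[candidate]
    return freed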
|
|
Dedup/Compr. Type
Details
|
Optional
NEW
By default, deduplication and compression are turned off. For both inline and post-process, deduplication and compression can be enabled.
For inline deduplication and compression the feature can be turned on per node. The entire node represents a global deduplication domain. Deduplication and compression work across pools and across vDisks. Individual pools can be selected to participate in capacity optimization. Either deduplication or compression or both can be selected per individual vDisk. Pools can host both capacity optimized and non-capacity optimized vDisks at the same time. The optional capacity optimization settings can be added/changed/removed during operation for each vDisk.
For post-processing the feature can be enabled per pool. All vDisks in that pool would be deduplicated and compressed. Each pool is an independent deduplication domain. This means only data in the pool is capacity optimized, but not across pools. Additionally, for post-processing capacity optimization can be scheduled so admins can decide when deduplication should run.
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
|
Dedup Inline: Optional
Dedup Post-Process: Optional
Compr. Inline: Optional
Compr. Post-Process: Optional
Inline compression is turned on by default, post-process compression is turned on by default, and deduplication is turned off by default. Deduplication and compression can be enabled/disabled separately or together. This provides maximum flexibility.
|
Always-on
By default Scale Computing HC3 data deduplication is turned on. The platform has been designed to prioritize running workloads over the deduplication tasks to prevent any negative performance impact. As such, the process piggybacks on pre-existing background structures - such as the background disk scrubber - for the hashing process.
|
|
Dedup/Compr. Scope
Details
|
Persistent data layer
|
Dedup Inline: memory and flash layers
Dedup Post-process: persistent data layer (adaptive)
Compr. Inline: flash and persistent data layers
Compr. Post-process: persistent data layer (adaptive)
Nutanix ADSF allows data reduction technologies across all tiers of storage including memory, performance and capacity tiers as well as All-Flash.
Adaptive: With post-process deduplication, data is intelligently qualified first. Data candidates with low or no matches are not deduplicated. This avoids metadata bloat due to non-dedupable candidates and unnecessary use of CPU and memory resources.
|
Persistent data layer
|
|
Dedup/Compr. Radius
Details
|
Pool (post-processing deduplication domain)
Node (inline deduplication domain)
NEW
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
For inline deduplication and compression raw physical disks are added to a capacity optimization pool. The entire node represents a global deduplication domain. Deduplication and compression work across pools and across vDisks. Individual pools can be selected to participate in capacity optimization.
The post-processing capability provided through Windows Server 2016/2019 is highly scalable and can be used with volumes up to 64 TB and files up to 1 TB in size. Data deduplication identifies repeated patterns across files on that volume.
|
Storage Container
Both Nutanix reduction techniques (deduplication and compression) can be applied on individual storage containers.
A storage container is a logical part of the Storage Pool and contains a group of VM or files (vDisks). Storage containers typically have a 1-to-1 mapping with an NFS datastore (vSphere) or SMB share (Hyper-V).
|
Per Node
Scale Computing HC3 data deduplication works on a per-node basis. All blocks that are directly written or replicated from another node are deduplicated by the individual node independently from other nodes within the same cluster. This way data integrity is ensured for every single node.
|
|
Dedup/Compr. Granularity
Details
|
4-128 KB variable block size (inline)
32-128 KB variable block size (post-processing)
NEW
With inline deduplication and compression, the data is organized in 128 KB segments. Depending on the optimization setting, a write into such a segment first gets compressed (when compression is selected) and then a hash is generated. If the hash is unique, the 128 KB segment is written back and the hash is added to the deduplication hash-table. If the hash is not unique, the segment is referenced in the deduplication hash table and discarded. The smallest chunk in the segment can be 4 KB.
For post-processing the system leverages deduplication in Windows Server 2016/2019: files within a deduplication-enabled volume are segmented into small variable-sized chunks (32–128 KB), duplicate chunks are identified, and only a single copy of each chunk is physically stored.
|
16 KB fixed block size
Nutanix ECP Elastic Dedupe Engine fingerprints data during ingest at a 16K-block granularity using a SHA-1 hash. Intel acceleration is leveraged for the SHA-1 computation, which accounts for very minimal CPU overhead. Fingerprinting is only performed on data ingest and is then stored persistently as part of the written block's metadata. Fingerprinting during data ingest is performed on data with an I/O size of 64K or greater. In cases where fingerprinting is not done during ingest (e.g. smaller I/O sizes), fingerprinting can be done as a background process.
NOTE: Initially a 4K granularity was used for fingerprinting, however Nutanix internal testing revealed that 16K granularity offers the best blend of deduplication with reduced metadata overhead. Deduplicated data is pulled into the unified cache at a 4K granularity.
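Illustratively (not Nutanix code), the ingest-time decision might look like the following: writes of 64 KB or more are fingerprinted inline in 16 KB chunks with SHA-1, smaller writes are queued for background fingerprinting. Function and field names are invented for the sketch.

import hashlib

FINGERPRINT_GRANULARITY = 16 * 1024   # 16 KB chunks
INLINE_THRESHOLD = 64 * 1024          # only fingerprint I/O of 64 KB or larger at ingest

def fingerprint(data: bytes):
    """Return a SHA-1 fingerprint for each 16 KB chunk of the write."""
    return [hashlib.sha1(data[i:i + FINGERPRINT_GRANULARITY]).hexdigest()
            for i in range(0, len(data), FINGERPRINT_GRANULARITY)]

def on_write(data: bytes, metadata: dict, background_queue: list):
    if len(data) >= INLINE_THRESHOLD:
        metadata["fingerprints"] = fingerprint(data)   # stored with the block's metadata
    else:
        background_queue.append(data)                  # fingerprinted later, post-process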
|
1 MiB
Scale Computing HC3 post-process data deduplication uses 1 MiB fixed block segments.
|
|
Dedup/Compr. Guarantee
Details
|
N/A
Microsoft provides the Deduplication Evaluation Tool (DDPEVAL) to assess the data in a particular volume and predict the dedup ratio.
|
N/A
|
N/A
|
|
|
Full (optional)
Data rebalancing needs to be initiated manually by the end-user. It depends on the specific use case and end-user environment if this makes sense. When end-users want to isolate new workloads and corresponding data on new nodes, data rebalancing is not used.
|
Full
Data is automatically redistributed evenly across all nodes in the cluster when a node is either added or removed.
|
Full (optional)
Data rebalancing needs to be initiated manually by the end-user. It depends on the specific use case and end-user environment if this makes sense.
|
|
|
Yes
DataCore SANsymphonys Auto-Tiering is a real-time intelligent mechanism that continuously positions data on the appropriate class of storage based on how frequently the data is accessed. Auto-Tiering leverages any combination of Flash and traditional disk technologies, whether it is internal or array based, with up to 15 different storage tiers that can be defined.
As more advanced storage technologies become available, existing tiers can be modified as necessary and additional tiers can be added to further diversify the tiering architecture.
|
N/A
The Nutanix storage architecture does not include multiple persistent storage layers, but rather consists of a caching layer (fastest storage devices) and a persistent layer (slower/most cost-efficient storage devices).
|
Yes
Scale Computing HC3s HyperCore Enhanced Automated Tiering (HEAT) is an extension of the SCRIBE storage layer that is available to HC3 hybrid clusters with 3 nodes or more.
HEAT allows virtual disk level, priority data placement for allocated data (actual consumed capacity on a VM virtual disk). This is accomplished through a real-time heat map of virtual disk I/O in order to “tier” the data. Data blocks that are “hot” (=accessed regularly by the virtual disk) are stored at the SSD level while “colder” data blocks are stored at the HDD level.
There are 4 basic HEAT principles:
1. All VMs have access to SSDs, no matter what node the VM may actually be running on.
2. SSDs are additional capacity for VM disks (subvirtual tiering), not a cache for system data.
3. Administrators have granular control of SSD access at the VM virtual disk level.
4. Administrators are able to mix and match Tiered HC3 nodes with standard HC3 nodes and Storage Only nodes without any extra work or requirements.
The HC3 web interface provides an easy-to-use slide bar on the property page of an individual virtual disk in order to set the flash priority level of a VM’s virtual disk data:
0 Off
1 Minimum
2 Very Low
3 Low
4 Normal (default)
5 High
6 Very High
7 Extreme
8 Absurd
9 Hyperspeed
10 Ludicrous Speed
11 These go to 11
When the flash priority level is set to 0, no data on the virtual disk ever gets promoted to the SSD layer. When the flash priority level is set to 11, all data on the virtual disk is promoted to the SSD layer.
Altering HEAT priority will affect all VM virtual disks within the HC3 cluster. Each increase in flash priority access will dedicate roughly twice as much flash capacity for the VM virtual disk, and consequently reduce the flash capacity available for other VM virtual disks on the system.
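To illustrate the doubling relationship mentioned above: if each priority step roughly doubles a virtual disk's share of flash, relative flash weights can be modeled as 2^priority (with priority 0 excluded entirely). The weighting formula is an assumption derived from the description, not Scale Computing's published algorithm.

def flash_shares(priorities):
    """Approximate each virtual disk's fraction of SSD capacity from its HEAT
    flash priority (0-11). Priority 0 never gets flash; every step up roughly
    doubles the weight relative to the other disks."""
    weights = {disk: (2 ** p if p > 0 else 0) for disk, p in priorities.items()}
    total = sum(weights.values()) or 1
    return {disk: w / total for disk, w in weights.items()}

# Example: a priority-5 disk gets roughly twice the flash share of a priority-4 disk.
print(flash_shares({"vm1-disk0": 4, "vm2-disk0": 5, "vm3-disk0": 0}))
# -> {'vm1-disk0': 0.333..., 'vm2-disk0': 0.666..., 'vm3-disk0': 0.0}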
|
|
|
|
Performance |
|
|
|
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; Space Reclamation (T10 SCSI UNMAP)
DataCore SANsymphony iSCSI and FC are fully qualified for all VMware vSphere VAAI-Block capabilities that include: Thin Provisioning, HW Assisted Locking, Full Copy, Block Zero
Note: DataCore SANsymphony does not support Thick LUNs.
DataCore SANsymphony is also fully qualified for Microsoft Hyper-V 2012 R2 and 2016/2019 ODX and UNMAP/TRIM.
Note: ODX is not used for files smaller than 256KB.
VAAI = VMware vSphere APIs for Array Integration
ODX = Offloaded Data Transfers
UNMAP/TRIM support allows the Windows operating system to communicate the inactive block IDs to the storage system. The storage system can wipe these unused blocks internally.
|
vSphere: VMware VAAI-NAS (full), RDMA
Hyper-V: SMB3 ODX; UNMAP/TRIM
AHV: Integrated
Nutanix is fully qualified for all VMware vSphere VAAI-NAS capabilities, which include: Native Snapshot Support for Linked Clones, Space Reservation, File Cloning and Extended Statistics.
Nutanix is also fully qualified for MS Hyper-V 2012R2 / 2016 / 2019 ODX and TRIM.
UNMAP/TRIM support allows the Windows operating system to communicate the inactive block IDs to the storage system. The storage system can wipe these unused blocks internally.
RDMA is a network protocol that enables offloading storage processes from the server CPU. It enables reading the host's memory directly, bypassing the OS. The result is a reduction in CPU usage, a decrease in network latency and an increase in throughput.
VAAI = VMware vSphere APIs for Array Integration
ODX = Offloaded Data Transfers
RDMA = Remote Direct Memory Access
|
KVM: IOVirt
|
|
|
IOPs and/or MBps Limits
QoS is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical clients/hosts.
2. Ability to set guarantees to ensure service levels for mission-critical clients/hosts.
SANsymphony currently supports only the first method. Although SANsymphony does not provide support for the second method, the platform does offer some options for optimizing performance for selected workloads.
For streaming applications which burst data, it’s best to regulate the data transfer rate (MBps) to minimize their impact. For transaction-oriented applications (OLTP), limiting the IOPs makes most sense. Both parameters may be used simultaneously.
DataCore SANsymphony ensures that high-priority workloads competing for access to storage can meet their service level agreements (SLAs) with predictable I/O performance. QoS Controls regulate the resources consumed by workloads of lower priority. Without QoS Controls, I/O traffic generated by less important applications could monopolize I/O ports and bandwidth, adversely affecting the response and throughput experienced by more critical applications. To minimize contention in multi-tenant environments, the data transfer rate (MBps) and IOPs for less important applications are capped to limits set by the system administrator. QoS Controls enable IT organizations to efficiently manage their shared storage infrastructure using a private cloud model.
More information can be found here: https://docs.datacore.com/SSV-WebHelp/quality_of_service.htm
In order to achieve consistent performance for a workload, a separate Pool can be created where selected vDisks are placed. Alternatively 'Performance Classes' can be assigned to differentiate between data placement of multiple workloads.
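As a generic illustration of how an IOPS cap of this kind throttles a noisy host (this is not DataCore's implementation; the token-bucket approach and all names are assumptions), a simple rate limiter could look like this:

import time

class IopsLimiter:
    """Simple token-bucket limiter: a host may issue at most max_iops I/O
    requests per second; excess requests are delayed."""
    def __init__(self, max_iops: int):
        self.max_iops = max_iops
        self.tokens = float(max_iops)
        self.last = time.monotonic()

    def acquire(self):
        now = time.monotonic()
        # refill tokens in proportion to elapsed time, capped at max_iops
        self.tokens = min(self.max_iops, self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens < 1:
            time.sleep((1 - self.tokens) / self.max_iops)   # throttle the I/O (approximate)
            self.tokens = 1
        self.tokens -= 1

# Usage: limiter = IopsLimiter(max_iops=500); call limiter.acquire() before each I/O.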
|
IOPs Limits (maximums)
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. In general there are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
Nutanix ECP Storage QoS provides administrators granular control to manage the performance of VMs and ensure that the system delivers consistent performance for all workloads. Administrators can limit the IOPS for individual VMs. IOPS is the number of requests the storage layer can serve in a second. Throttling IOPS on VMs is used to prevent noisy VMs from over-utilizing the storage system resources. Nutanix ECP Storage QoS is available in the Ultimate and Pro editions.
Nutanix AOS 5.0 enhanced performance reliability by introducing separate internal Read and Write I/O Queues so write-intensive workloads (or write bursts) will not starve out read operations, and vice-versa. This behavior is non-tunable.
|
N/A
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
Scale Computing HC3 currently does not offer any QoS mechanisms.
|
|
|
Virtual Disk Groups and/or Host Groups
SANsymphony QoS parameters can be set for individual hosts or groups of hosts as well as for groups of Virtual Disks for fine grained control.
In a VMware VVols (=Virtual Volumes) environment a vDisk corresponds 1-to-1 to a virtual disk (.vmdk). Thus virtual disks can be placed in a Disk Group and a QoS Limit can then be assigned it. DataCore SANsymphony Provider v2.01 has VVols certification for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and QoS Limits can be applied on the virtual disk level. The 1-to-1 alignment is realized by installing the DataCore Storage Management Provider in SCVMM.
|
Per VM
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. In general there are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
Nutanix ECP Storage QoS provides administrators granular control to manage the performance of VMs and ensure that the system delivers consistent performance for all workloads. Administrators can limit the IOPS for individual VMs. IOPS is the number of requests the storage layer can serve in a second. Throttling IOPS on VMs is used to prevent noisy VMs from over-utilizing the storage system resources. Nutanix ECP Storage QoS is available in the Ultimate and Pro editions.
Nutanix AOS 5.0 enhanced performance reliability by introducing separate internal Read and Write I/O Queues so write-intensive workloads (or write bursts) will not starve out read operations, and vice-versa. This behavior is non-tunable.
|
N/A
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
Scale Computing HC3 currently does not offer any QoS mechanisms.
|
|
|
Per VM/Virtual Disk/Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
In SANsymphony 'Flash Pinning' can be achieved using one of the following methods:
Method #1: Create a flash-only pool and migrate the individual vDisks that require flash pinning to the flash-only pool. When using a VVOL configuration in a VMware environment, each vDisk represents a virtual disk (.vmdk). This method guarantees all application data will be stored in flash.
Method #2: Create auto-tiering pools with at least 1 flash tier. Assign the Performance Class “Critical” to the vDisks that require flash pinning and place them in the auto-tiering pool. This will effectively and intelligently put as much of the data that resides in the vDisk in the flash tier, as long as the flash tier has enough space available. Therefore this method is on a best-effort basis and dependent on correct sizing of the flash tier(s).
Methods #1 and #2 can be used side-by-side in the same DataCore environment.
|
VM Flash Mode: Per VM/Virtual Disk/iSCSI LUN
Nutanix 'VM Flash Mode' can be used to assign specific data in hybrid configurations to exist solely on flash storage and as such never have that data destaged to the magnetic layer. By pinning data to the flash storage layer low latencies can be better guaranteed regardless of what part of the data is being read (hot or cold) and at what time the read takes place.
Flash Mode can be enabled for individual VMs as well as baremetal iSCSI LUNs.
Nutanix 'VM Flash Mode' is only available in the Ultimate edition.
|
Per virtual disk
Scale Computing HC3s native HEAT feature allows for data of an individual virtual disk to reside completely in flash storage. This can be administered on-the-fly by setting the Flash priority in the virtual disks properties to 11. The new HEAT priority setting will be immediately activated on the VMs virtual disk.
For more information on HEAT please view the information provided with the 'Data Tiering' capability.
HEAT = HyperCore Enhanced Automated Tiering
|
|
|
|
Security |
|
|
Data Encryption Type
Details
|
Built-in (native)
SANsymphony 10.0 PSP9 introduced native encryption when running on Windows Server 2016/2019.
|
Built-in (native)
|
N/A
Although the design of the SCRIBE storage management layer provides some general protection for data stored on a single hard drive, it is not the same as data encryption. If data encryption is required, it is recommended to use in-guest encryption tools to ensure data protection.
|
|
Data Encryption Options
Details
|
Hardware: Self-encrypting drives (SEDs)
Software: SANsymphony Encryption
Hardware: In SANsymphony deployments the encryption data service capabilities can be offloaded to hardware-based SED offerings available in server- and storage solutions.
Software: SANsymphony provides software-based data-at-rest encryption that is XTS-AES 256bit compliant.
|
Hardware: Self-encrypting drives (SEDs)
Software: AOS encryption; Vormetric VTE (validated), Gemalto (verified)
Hardware:
Data-at-rest encryption is compatible with all hypervisor platforms: VMware vSphere, Microsoft Hyper-V, AHV. Hardware encryption is only available with the Ultimate edition.
AOS 5.8 introduced a Dual Encryption mechanism that protects the data on the clusters using both SEDs and AOS Software based encryption. Dual Encryption configuration requires an external key manager to store the keys.
Software:
AOS 5.5 introduced built-in software-based data-at-rest AES-256 encryption. Because it is 100% software, the built-in encryption works with standard drives, so does not require SED hardware. AOS 5.8 introduces the Cluster Native Key Management Server (KMS) which can manage the encryption keys on the cluster locally, without the need of an external KMS. Nutanix Acropolis Data Encryption (ADE) supports VMware vSphere, Microsoft Hyper-V and AHV hypervisors. Nutanix Acropolis Data Encryption (ADE) is only available in the Ultimate edition.
AOS 5.8 introduced a Dual Encryption mechanism that protects the data on the clusters using both SEDs and AOS Software based encryption. Dual Encryption configuration requires an external key manager to store the keys.
AOS 5.9 introduced Background Encryption. Software encryption can be enabled on clusters or containers having existing data. Switching between native key management server (KMS) and external KMS is supported.
Vormetric Transparent Encryption (VTE) and Vormetric Key Management (VKM) have been validated as Nutanix Ready for Networking and Security. Nutanix has also been verified for use with Gemalto SafeNet KeySecure. SafeNet KeySecure manages the encryption keys to Nutanix SEDs.
|
Hardware: N/A
Software: HyTrust KeyControl + Client (validated); WinMagic SecureDoc CloudVM (validated)
Hardware: N/A
Software: Scale Computing partners with HyTrust to encrypt the drives of Windows and Linux VMs running on an HC3 system. The HyTrust client software, which is installed on all VMs that require encryption, can encrypt both boot drives and data drives. Scale Computing has also validated the interoperability of WinMagic SecureDoc CloudVM for encrypting the drives of Windows VMs on its HC3 platform.
|
|
Data Encryption Scope
Details
|
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: SEDs provide encryption for data-at-rest; SEDs do not provide encryption for data-in-transit.
Software: SANsymphony provides encryption for data-at-rest; it does not provide encryption for data-in-transit. Encryption can be enabled per individual virtual disk.
|
Hardware: Data-at-rest
Software AOS: Data-at-rest
Software VTE/Gemalto: Data-at-rest + Data-in-transit
Hardware: Nutanix ECP Self-encrypting drives (SEDs) provide encryption for data-at-rest.
Software: Nutanix AOS encryption provides enhanced security for data on a drive, but it does not secure data in transit. In contrast, the Vormetric VTE and Gemalto encryption solutions provide both encryption for data-at-rest and encryption for data-in-transit.
|
Hardware: N/A
Software: Data-at-rest
|
|
Data Encryption Compliance
Details
|
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (SANsymphony)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (AOS, VTE)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
Hardware: N/A
Software: FIPS 140-2 Level 1 (HyTrust; WinMagic)
|
|
Data Encryption Efficiency Impact
Details
|
Hardware: No
Software: No
Hardware: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
|
Hardware: No
Software AOS: No
Software VTE/Gemalto: Yes
Hardware: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software AOS: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software Vormetric/Gemalto: Because Vormetric and Gemalto are end-to-end solutions, encryption is performed at the start of the write path and some efficiency mechanisms (eg. deduplication and compression) are effectively negated.
|
Hardware: N/A
Software: Yes (HyTrust; WinMagic)
|
|
|
|
Test/Dev |
|
|
|
Yes
Support for fast VM cloning via VMware VAAI and Microsoft ODX.
|
Yes
|
Yes
Scale Computing HC3 leverages block reference counting to avoid having to copy blocks of data when creating a clone of a virtual machine. Because block reference counting is integrated in both the storage protocol and the RSDs, it is very fast and eliminates a round-trip when performing copy-on-write actions.
The clone feature on a HC3 cluster will create an identical VM to the parent, but with its own unique name and description. The clone VM will be completely independent from the parent VM once it is created.
|
|
|
|
Portability |
|
|
Hypervisor Migration
Details
|
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
VMware Converter 6.2 supports the following Guest Operating Systems for VM conversion from Hyper-V to vSphere:
- Windows 7, 8, 8.1, 10
- Windows 2008/R2, 2012/R2 and 2016
- RHEL 4.x, 5.x, 6.x, 7.x
- SUSE 10.x, 11.x
- Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS
- CentOS 6.x, 7.0
The VMs have to be in a powered-off state in order to be migrated across hypervisor platforms.
Microsoft Virtual Machine Converter (MVMC) supports conversion of VMware VMs and vdisks to Hyper-V VMs and vdisks. It is also possible to convert physical machines and disks to Hyper-V VMs and vdisks.
MVMC has been officially retired and can only be used for converting VMs up to version 6.0.
Microsoft System Center Virtual Machine Manager (SCVMM) 2016 also supports conversion of VMs up to version 6.0 only.
|
ESXi to AHV (integrated)
AHV to ESXi (integrated)
Hyper-V to AHV (external)
Nutanix supports in-place hypervisor conversion through the Prism web console that allows converting a cluster from using ESXi hosts to using AHV hosts. Guest VMs are converted to the hypervisor target format, and cluster network configurations are stored and then restored as part of the conversion process.
Nutanix supports offline conversion of Hyper-V VHDX disk files to KVM RAW disk files using qemu.
AOS for AHV on IBM Power Systems does not yet offer hypervisor migration support.
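As an illustration of the offline Hyper-V-to-AHV disk conversion mentioned above, the sketch below wraps the standard qemu-img utility in Python; the file names are placeholders, and the exact workflow Nutanix documents (for example staging the converted image into the AHV image service) may differ.

```python
# Hypothetical helper around qemu-img for converting a Hyper-V VHDX disk
# to a RAW disk image usable by KVM/AHV. File paths are placeholders.
import subprocess

def convert_vhdx_to_raw(src_vhdx: str, dst_raw: str) -> None:
    # qemu-img ships with the QEMU package on most Linux distributions;
    # -p prints progress, -f/-O set the source and target formats
    subprocess.run(
        ["qemu-img", "convert", "-p", "-f", "vhdx", "-O", "raw", src_vhdx, dst_raw],
        check=True,
    )

if __name__ == "__main__":
    convert_vhdx_to_raw("guest-disk0.vhdx", "guest-disk0.raw")
```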
|
HC3 Move
HC3 Move is powered by Carbonite (formerly Double-Take) and allows the migration of physical or virtual Windows and Linux-based server workloads with real-time replication and zero downtime.
HC3 Move requires the purchase of a one-time-use license per server that needs to be migrated to the Scale Computing HC3 platform.
HC3 Move does not support desktop operating systems.
|
|
|
|
File Services |
|
|
|
Built-in (native)
SANsymphony delivers out-of-box (OOB) file services by leveraging Windows native SMB/NFS and Scale-out File Services capabilities. SANsymphony is capable of simultaneously handling highly-available block and file level services.
Raw storage is provisioned from within the SANsymphony GUI to the Microsoft file services layer, similar to provisioning Storage Spaces Volumes to the file services layer. This means any file services configuration is performed from within the respective Windows service consoles e.g. quotas.
More information can be found under: https://www.datacore.com/products/features/high-availability-nas-cluster-file-sharing.aspx
|
Built-in (native)
NEW
Nutanix's native file-serving feature is called Nutanix Files (previously Acropolis File Services or AFS). Today Nutanix Files is a 3rd generation solution. The current release is 3.7.1.
With Nutanix Files, Nutanix offers a software-defined scale-out file-serving solution for Windows environments with a single namespace. The solution is highly available, scalable, and supports Active Directory and LDAP integration, Windows previous versions, user and share quotas, as well as Access Based Enumeration (ABE). Nutanix Files can be used to host home directories, department shares and user profiles.
Nutanix Files 3.7.1 introduces limited Local User Support for SMB shares. Local users can be added to a file server using the command line or the Microsoft Management Console (MMC) Local Users and Groups snap-in. Nutanix Files 3.7.1 only supports native-SMB limited local users on SMB shares. Local groups are not supported.
Nutanix Files 3.7.1 introduces Continuous Availability of SMB shares, preventing client disconnection from a current session during a loss of service. Nutanix Files uses persistent file handles to facilitate continuously available (CA) shares. Persistent file handles improve the SMB caching mechanism of multi-user writes to facilitate continuous availability.
Nutanix Files 3.7 supports Nutanix nodes with up to 240TB of HDD storage. The volume groups that underpin the standard and distributed shares of Nutanix Files can now scale up to double the size, 280TB.
Nutanix Files 3.7 introduced greater flexibility by allowing creation of a customized namespace. Namespace customization comes from the new ability to mount shares as subdirectories within existing shares.
Nutanix Files 3.7 introduced the ability to change the block size based on either a more random or more sequential I/O profile. For random workloads a maximum block size of 16KB can be specified, and for sequential workloads a block size up to 1MB can be specified. Nutanix internal testing has shown sequential workload performance improvements up to 25 percent, and random workload benefits up to 45 percent.
Nutanix Files 3.7 introduced non-GA support for SMB 3.0 transparent failover, also called continuous availability (CA) shares. CA shares allow for durable and persistent file handles to minimize the impact of any storage disruption during failure or upgrade events. CA shares are intended for applications like Citrix App Layering and FSLogix that demand non-disruptive operations. SMB 3.0 transparent failover is a Technical Preview feature with the Nutanix Files 3.7 release.
Nutanix Files 3.6 introduced Windows Server 2019 support, File Blocking, SMB Message encryption, Durable SMB File Handles, Multi-byte Support for Share Root (NFS), AOS NearSync Disaster Recovery, Scale-up recommendation (vCPU and RAM).
Nutanix Files 3.6 supports a maximum size of 140TB for a general file share and a maximum size of 200TB per FSVM for a home share. Nutanix Files supports a maximum of 4,000 connections per FSVM (12 vCPU; 64-90GB RAM).
Nutanix Files 3.6 supports SMB 2.0/2.1/3.0 as well as NFS v3/v4 protocols to provide file shares/exports for Windows, Apple Mac, Linux and UNIX clients. Nutanix Files can be leveraged in VMware vSphere and AHV environments only, so there is no support for Hyper-V. AOS for AHV on IBM Power Systems also does not offer Nutanix Files support.
Nutanix Files 3.6 provides SMB 3.0 basic protocol support, so without specific SMB 3.0 features. Nutanix Files supports both NFSv3 and NFS v4, however Nutanix Files does not support the UDP protocol or Kerberos for NFS v3.
NFS v4 exports can be either distributed or non-distributed. A distributed export ('sharded') means the data is spread across all Nutanix File Services VMs (FSVMs) to help improve performance and resiliency. A non-distributed export ('non-sharded') means all data is contained in a single FSVM.
Nutanix Files 3.6 supports the following AOS capabilities: Erasure Coding and Software Encryption. Nutanix Files also supports self-encrypting drives (SEDs). Deduplication is not recommended.
AFS 3.0 introduced an API that allows third-party developers to implement backup server change file tracking. This means that the API allows the backup application to record and collect information about any changes to the files in each snapshot sent to the backup server, thus providing a log of all file changes across snapshots.
AFS 3.0 introduced an API that allows third-party developers to implement file activity monitoring in their applications. This means that the API allows an application to collect information about every action on each file in a file server and thus supports audit logging (e.g. to a syslog server) and Global Name Space.
Nutanix Files is an add-on and thus requires a separate capacity-based license for all editions.
|
N/A
Scale Computing HC3 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
|
Fileserver Compatibility
Details
|
Windows clients
Linux clients
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, most Windows and Linux clients are able to connect.
|
Windows clients
Apple Mac clients
Linux clients
NEW
Nutanix Files 3.7.1 supports the following client platforms and versions for SMB:
- Windows 7/8/8.1/10
- Windows Server 2008/2008R2/2012/2012R2/2016/2019
- Apple MacOS 10.12/10.13/10.14/10.15
Nutanix Files 3.7.1 supports the following client platforms and versions for NFSv3:
- Linux CentOS/RHEL 6.x/7.x/8.x
- Linux Ubuntu 16.04/18.04.3/19.10
- Apple MacOS 10.12.6 (may result in I/O disruption)
- Windows 10
- Windows Server 2008/2008R2/2012/2012R2/2016
Nutanix Files 3.7.1 supports the following client platforms and versions for NFSv4.1:
- Linux CentOS/RHEL 6.5 and later/7.x/8.x
- Linux Ubuntu 16.04/18.04.3/19.10
|
N/A
Scale Computing HC3 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
|
Fileserver Interconnect
Details
|
SMB
NFS
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, Windows Server platform compatibility applies:
SMB versions 1, 2 and 3 are supported, as are NFS versions 2, 3 and 4.1.
|
SMB
NFS
NEW
Nutanix Files 3.7.1 supports SMB v2.0, SMB v2.1 and SMB v3.0 (basic protocol support, without specific SMB 3.0 features). Nutanix Files 3.7.1 does not support mounting SMB shares on Linux clients; use multi-protocol shares instead.
Nutanix Files 3.7.1 supports NFSv3 and NFSv4.1. Nutanix Files 3.7.1 does not support the UDP protocol or Kerberos for NFSv3.
|
N/A
Scale Computing HC3 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
|
Fileserver Quotas
Details
|
Share Quotas, User Quotas
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, all Quota features available in Windows Server can be used.
|
Share Quotas, User Quotas
Nutanix Files, previously Acropolis File Services (AFS), offers support for share and user quotas.
Nutanix Files can be used in ESXi and AHV environments only (so no support for Hyper-V).
Nutanix Files requires a separate license for all editions.
|
N/A
Scale Computing HC3 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
|
Fileserver Analytics
Details
|
Partial
Because SANsymphony leverages Windows Server native CIFS/NFS, Windows Server built-in auditing capabilities can be used.
|
Yes
The GA release of Nutanix File Analytics (2.0) was introduced in August 2019 and is supported in conjunction with Nutanix Files 3.5.2 and above.
Nutanix File Analytics 2.0 includes the following capabilities:
- Data on Data: capacity trends, data age, file distribution and activity data; the time frame can be changed when viewing the data in the dashboard.
- Audit Trails: search option for auditing a specific user, file, or directory; activity data can be compiled into dynamic graphs or can be exported; data can also be filtered by operation type and date range.
- Anomaly Detection: creation of custom anomaly policies and anomaly alerts, monitoring anomaly trends for top users/folders and operation type.
|
N/A
Scale Computing HC3 does not provide any file serving capabilities of its own.
Inside a Guest VM all native file service features of the Microsoft Windows and/or Linux operating system can be leveraged to host network shares.
Linux requires Samba Server components to provide SMB file shares.
Depending on the OS of the Guest VM providing file services, quotas can be set on the share or the filesystem level.
|
|
|
|
Object Services |
|
|
Object Storage Type
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
S3-compatible
NEW
Nutanix Objects was introduced with the release of AOS 5.11. Nutanix Objects is compatible with Amazon’s Simple Storage Service API (S3 API) to simplify integration with applications. Nutanix Buckets presents a single namespace in the object storage instance and supports the ability to create different object policies as required for different application scenarios. Any component can be scaled out independently to match the workload demands. The architecture is designed with scalability and ease of upgrade in mind. Today Nutanix Objects is a 3rd generation solution. The current release is 3.1.
Nutanix Objects can be deployed on an existing cluster alongside VMs, files, and blocks, or standalone. Objects natively inherits Nutanix DSF (Distributed Storage Fabric) capabilities like erasure coding, compression and deduplication.
Nutanix Objects can be leveraged for the following use cases:
- Long Term Retention & Backup (simple, scalable and cost-effective active archive solution).
- Data Preservation & Compliancy (data in a non-rewritable and non-erasable format per SEC Rule 17a-4 in scalable compliant archive).
Nutanix Objects 3.0 introduces ESXi support, meaning that Object Stores can be deployed on ESXi clusters. It also introduces Object Replication, meaning that objects in a source bucket can be replicated to a destination bucket in a different Object Store instance. Furthermore, Objects 3.0 adds support for Object Lock API operations, meaning that object lock policies can be applied to individual objects within a bucket using the S3-supported APIs.
Nutanix Objects 2.2 introduced support for the PKCS#8 standard as well as a notification feature for Object Store. It also enhanced the quota policy for users.
Nutanix Objects 2.1 introduced support for scaling out, quota policies for users, enhanced API access key management, listing of shared Buckets. It also added Static Website policy and CORS policy for Buckets.
Nutanix Objects 1.1 introduced support for Object Store deployments in sites without internet access.
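Because Nutanix Objects exposes an S3-compatible API, a generic S3 client can be pointed at the object store endpoint. The sketch below uses the Python boto3 library; the endpoint URL, bucket name and access keys are placeholders, and whether a given S3 call is supported depends on the Objects release in use.

```python
# Hypothetical example: accessing an S3-compatible Nutanix Objects endpoint
# with boto3. Endpoint, credentials and bucket name are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.local",   # object store endpoint (assumption)
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    verify=False,                                    # lab setup with a self-signed cert
)

# Upload an object and read it back
s3.put_object(Bucket="backups", Key="archive/db-dump.gz", Body=b"...")
obj = s3.get_object(Bucket="backups", Key="archive/db-dump.gz")
print(obj["ContentLength"])
```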
|
N/A
Scale Computing HC3 does not provide any object storage serving capabilities of its own.
|
|
Object Storage Protection
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
Versioning
Nutanix Objects provides object versioning, creating copies of objects so data is automatically protected from accidental overwriting or deleting.
|
N/A
Scale Computing HC3 does not provide any object storage serving capabilities of its own.
|
|
Object Storage LT Retention
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
WORM
Nutanix Objects provides WORM (Write Once Read Many) in order to meet technical regulatory requirements and to enhance security postures with software or hardware data at rest encryption to a level of FIPS 140-2 compliance. WORM policies can be enabled on the bucket level.
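Since WORM policies are enabled at the bucket level and Objects exposes Object Lock operations through the S3 API, a default retention rule could in principle be applied with standard S3 calls such as the boto3 sketch below; the bucket name, retention period and endpoint are placeholders, and the exact calls supported should be verified against the Nutanix Objects documentation.

```python
# Hypothetical sketch: enabling versioning and a default WORM-style retention
# rule on an S3-compatible bucket. Names and values are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objects.example.local",
                  aws_access_key_id="ACCESS_KEY", aws_secret_access_key="SECRET_KEY")

# Object Lock generally requires versioning on the bucket
s3.put_bucket_versioning(Bucket="compliance-archive",
                         VersioningConfiguration={"Status": "Enabled"})

# Default retention: objects cannot be overwritten or deleted for 365 days
s3.put_object_lock_configuration(
    Bucket="compliance-archive",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)
```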
|
N/A
Scale Computing HC3 does not provide any object storage serving capabilities of its own.
|
|
|
Management
|
|
|
|
|
|
|
Interfaces |
|
|
GUI Functionality
Details
|
Centralized
SANsymphony's graphical user interface (GUI) is highly configurable to accommodate individual preferences and includes guided wizards and workflows to simplify administration. All actions available from the GUI can also be scripted with PowerShell cmdlets to orchestrate workflows with other tools and applications.
|
Centralized
Nutanix integrates all features and functions available in the ECP platform into a single management framework (Prism). Because of this, the management experience is identical across all hypervisor platforms.
|
Centralized
Scale Computing HC3 management, capacity monitoring, performance monitoring and efficiency reporting is performed through the HC3 HTML5 web-based user interface.
|
|
|
Single-site and Multi-site
|
Prism: Single-site
Prism Central: Multi-site
Single cluster management is performed through the Nutanix Prism interface. Centralized management of multi-cluster environments is performed through Nutanix Prism Central (up to 12,500 VMs across all clusters). In addition, Prism Pro features are accessed through the Prism Central interface.
AOS 5.8 introduces support for SAML user authentication for Prism Central. The Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between parties, in particular between an identity provider such as ADFS or OKTA and a service provider, which in this case is Prism Central. Limitations in this version:
- Only one identity provider can be configured.
- The role mapping is restricted to individual users; groups are not supported.
- Session timeouts are based on Prism Central only; the identity provider is not queried for session expiry.
|
Single-site and Multi-site
Up to 25 clusters can be managed centrally using the Scale Computing HC3 web-based user interface.
|
|
GUI Perf. Monitoring
Details
|
Advanced
SANsymphony has visibility into the performance of all connected devices including front-end channels, back-end channels, cache, physical disks, and virtual disks. Metrics include read/write IOPS, read/write MBps and read/write latency at all levels. These metrics can be exported to the Windows Performance Monitoring (Perfmon) utility where other server parameters are being tracked.
The frequency at which performance metrics can be captured and reported on is configurable: real-time down to 1-second intervals, and long-term recording at 2-minute granularity.
When a trend analysis is required, an end-user can simply enable a recording session to capture metrics over a longer period of time.
|
Advanced
Nutanix provides extended performance metrics on multiple levels of the infrastructure, ranging from cluster-level to VM-level.
For each individual VM, Nutanix has added an additional tab in Prism where Average IO Latency, Block Size Distribution and Random vs Sequential for Reads and Writes, as well as the Read Source (DRAM/SSD/HDD) for Reads, are displayed graphically.
Nutanix has also added end-to-end network performance visualization, but currently this is only available when using AHV (so no vSphere or Hyper-V support). Network Visualization is added as a separate Network page in Prism Element and Prism Central. Network Visualization extends from the VM to the virtual NICs, to physical NICs to physical switch ports.
AOS 5.5 provides enhancements to Prism Central analytics. Behavioral analytics functionality has been added to the predictive analytics functionality to provide anomaly-based alerts and alarms. This is different from the existing mechanism where thresholds are used. Thresholds are typically user defined and based on tacit knowledge. Anomaly detection is valuable as it indicates when KPIs (e.g. CPU utilization) deviate significantly from the norm.
Nutanix's behavioral analytics functionality requires the Prism Pro edition, which is a separate add-on subscription.
|
Basic
|
|
|
VMware vSphere Web Client (plugin)
VMware vCenter plug-in for SANsymphony
SCVMM DataCore Storage Management Provider
Microsoft System Center Monitoring Pack
DataCore offers deep integration with VMware vSphere and Microsoft Hyper-V, as well as their respective systems management tools, vCenter and System Center.
SCVMM = Microsoft System Center Virtual Machine Manager
|
VMware: Prism (subset)
Microsoft: SCCM (SCOM and SCVMM)
AHV: Prism
Nutanix Files: Prism
Xi Frame: Prism Central
Prism is the name of Nutanix's central management platform, used for all hypervisors as well as file services. Prism is an integral part of the AOS platform and as such resilient to failures.
Prism offers a subset of the most frequently used VMware management operations (VM Create, VM Update, VM Delete, VM Power On/Off Operations, Launch Console, Clone). Prism is not meant as a replacement for the VMware vCenter console, as it covers only a subset.
Nutanix Acropolis Hypervisor (AHV) is based on KVM and offers a full-service managing environment on top of this.
Prism Central is a hard technical requirement for Xi Frame.
|
Not relevant (Unified interface)
Because Scale Computing HC3 controls the entire Hyperconvergence stack (hypervisor, compute, storage), the HC3 web-based user interface provides all the required management functionality.
|
|
|
|
Programmability |
|
|
|
Full
Using DataCore's native management console, Virtual Disk Templates can be leveraged to populate storage policies. Available configuration items: Storage profile, Virtual disk size, Sector size, Reserved space, Write-through enabled/disabled, Storage sources, Preferred snapshot pool, Accelerator enabled/disabled, CDP enabled/disabled.
Virtual Disk Templates integrate with System Center Virtual Machine Manager (SCVMM), VMware Virtual Volumes (VVol) and OpenStack. Virtual Disk Templates are also fully supported by the REST-API allowing any third-party integration.
Using Virtual Volumes (VVols) defined through DataCore’s VASA provider, VMware administrators can self-provision datastores for virtual machines (VMs) directly from their familiar hypervisor interface. This is possible even for devices in the DataCore pool that don’t natively support VVols and never will, as SANsymphony can be used as a storage-virtualization layer for these devices/solutions. DataCore SANsymphony Provider v2.01 has VVols certification for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
Using Classifications and StoragePools defined through DataCore’s Storage Management Provider, Hyper-V administrators can self-provision virtual disks and pass-through LUNS for virtual machines (VMs) directly from their familiar SCVMM interface.
|
Partial (Protection)
Multiple VMs can be grouped together in a Nutanix protection domain enabling them to be operated upon as a single entity with the same RPO. This is useful when trying to protect complex applications such as Microsoft SQL Server-based applications or Microsoft Exchange. The main advantage of using a protection domain approach of grouping VMs versus the traditional SAN approach of consolidating different VMs on to a single LUN is VM portability.
|
Full
Scale Computing HC3 leverages Storage Policy-Based Management (SPBM) that allows administrators to build a profile for each VM with regard to protection and for each virtual disk with regard to data tiering.
|
|
|
REST-APIs
PowerShell
The SANsymphony REST-APIs library includes more than 200 new representational state transfer (REST) operations, so automation can be leveraged more extensively. RESTful interfaces are used by products such as Lenovo XClarity, Cisco Embedded Resource Manager and Dell OpenManage to manage infrastructure in the enterprise.
SANsymphony provides its own Powershell cmdlets.
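As an indication of how the REST interface might be consumed, the Python sketch below issues a simple authenticated GET; the host name, URI path, resource fields and credentials are hypothetical placeholders rather than DataCore's documented API, which should be taken from the SANsymphony REST API reference.

```python
# Hypothetical sketch of calling a SANsymphony REST endpoint with Python.
# Host, path, fields and authentication details are placeholders.
import requests

BASE_URL = "https://datacore-server/RestService/rest.svc/1.0"   # placeholder address

def list_virtual_disks():
    resp = requests.get(
        f"{BASE_URL}/virtualdisks",          # hypothetical resource path
        auth=("administrator", "password"),  # placeholder credentials
        verify=False,                        # lab setup with a self-signed cert
    )
    resp.raise_for_status()
    return resp.json()

for vdisk in list_virtual_disks():
    print(vdisk.get("Caption"), vdisk.get("Size"))
```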
|
REST-APIs
PowerShell
nCLI
REST-APIs, PowerShell CMDlets and Nutanix CLI (nCLI) commands can be used for automating storage related operational tasks.
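For example, a VM inventory could be pulled from Prism Central with the v3 REST API using Python; the address, credentials and payload below are placeholders, and the available endpoints depend on the AOS/Prism version in use.

```python
# Hypothetical sketch: listing VMs via the Prism Central v3 REST API.
# Address, credentials and payload values are placeholders.
import requests

PRISM = "https://prism-central.example.local:9440"   # placeholder address

resp = requests.post(
    f"{PRISM}/api/nutanix/v3/vms/list",
    json={"kind": "vm", "length": 50},               # simple paging request
    auth=("admin", "password"),                      # placeholder credentials
    verify=False,                                    # lab setup with a self-signed cert
)
resp.raise_for_status()
for vm in resp.json().get("entities", []):
    print(vm["spec"]["name"])
```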
|
REST-APIs
Apache Thrift
Python executables
Both Scale Computing end-users and ecosystem partners can programmatically manage the HC3 platform by using REST-APIs, Apache Thrift, and/or Python executables.
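As a simple illustration, the Python sketch below queries an HC3 REST endpoint for the VM inventory; the host name, resource path and credentials are assumptions for demonstration and should be checked against Scale Computing's API documentation.

```python
# Hypothetical sketch of querying the Scale Computing HC3 REST API with Python.
# Host, resource path and credentials are placeholders.
import requests

HC3 = "https://hc3-cluster.example.local"   # placeholder address

session = requests.Session()
session.auth = ("admin", "password")        # placeholder credentials
session.verify = False                      # lab setup with a self-signed cert

# 'VirDomain' is assumed here as the VM resource name; verify against the API docs
resp = session.get(f"{HC3}/rest/v1/VirDomain")
resp.raise_for_status()
for vm in resp.json():
    print(vm.get("name"), vm.get("state"))
```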
|
|
|
OpenStack
OpenStack: The SANsymphony storage solution includes a Cinder driver, which interfaces between SANsymphony and OpenStack, and presents volumes to OpenStack as block devices which are available for block storage.
DataCore SANsymphony programmability in VMware vRealize Automation and Microsoft System Center can be achieved by leveraging PowerShell and the SANsymphony-specific cmdlets.
|
OpenStack
VMware vRealize Automation (vRA)
Nutanix Calm
Nutanix offers third-party drivers for OpenStack and include Acropolis compute, image, volume and network.
In June 2017 Nutanix Calm v1.0 was released as a follow-up to the acquisition of Calm.io. Nutanix Calm adds native application orchestration and lifecycle management to Nutanix ECP. Calm decouples application management from the underlying infrastructure, thus enabling applications to be easily deployed into private or public cloud environments. The addition of advanced application management to the Nutanix platform turns common tasks into repeatable automations accessible to all IT teams, without giving up control across the infrastructure stack.
Nutanix Calm blueprints can be published directly for end user consumption through the Nutanix Marketplace, giving application owners and developers the ability to request IT services that can then be instantly provisioned.
Nutanix Calm provides role-based governance that limits user operations based on permissions. All activities and changes are centrally logged for end-to-end traceability and debugging, aiding security teams with key compliance initiatives.
Nutanix Calm's latest version, 2.7.0, was released in August 2019 and is compatible with AOS 5.10/5.11 and Prism Central 5.10.6. Nutanix Calm 2.7.0 supports the Nutanix AHV and VMware vSphere hypervisors. Nutanix Calm does not support Hyper-V.
Nutanix Calm requires a separate license.
|
N/A
Scale Computing HC3 does not provide tight integration with either OpenStack or any automation/orchestration platforms.
|
|
|
Full
The DataCore SANsymphony GUI offers delegated administration to secondary users through fine-grained Role-based Access Control (RBAC). The administrator is able to define Virtual Disk ownership as well as privileges associated with that particular ownership. Owners must have Virtual Disk privileges in an assigned role in order to perform operations on the virtual disk. Access can be very refined. For example, one owner may have the privilege to create a snapshot of a virtual disk, but not have the ability to serve or unserve the same virtual disk. Privilege sets define the operations that can be performed. For instance, in order for an owner to perform snapshot, rollback, or replication operations, they would require those privilege sets in an assigned role.
|
AHV only
The Prism Self-Service Portal (SSP) is an AHV-only feature (so not supported for vSphere and Hyper-V). SSP is integrated into Prism Central and enables end-users to access a portal where they can provision and manage VMs from templates, eliminating administrator requests or activity. SSP offers hands-off administration with fine-grained permissions and resource quotas through role-based access controls (RBAC). The full range of self-service portal features is managed from the Prism Central web console user interface. Prism SSP can connect to Active Directory in order to enable Single Sign-On (SSO) as well as to provide users and groups when configuring RBAC.
AOS 5.1 first introduced general availability (GA) support for hot plugging virtual memory and vCPU on VMs that run on top of the AHV hypervisor from within the Self-Service Portal. This means that the memory allocation and the number of CPUs on VMs can be increased while the VMs remain powered on. However, the number of cores per CPU socket cannot be increased while the VMs are powered on.
AOS 5.5 and above allow the full range of self-service portal features to be managed from the Prism Central web console user interface.
Self-service portal features include:
- VM Management: creating/updating*/deleting a VM, performing VM operations.
- Task Management: viewing the status of a task, restarting a failed task.
* Updating a VM: the VM name, number of assigned vCPUs, or memory size of the VM cannot be changed; however, disks and NICs can be added or deleted.
VM operations: launch console, power on/off, manage categories, (un)quarantine VM, add to catalog.
Prism Self-Service Portal (SSP) is included in both the Prism Starter and Prism Pro edition. Using Prism Central is a hard requirement. Prism Element is no longer supported as of AOS 5.5.
|
Partial
The Scale Computing HC3 GUI offers delegated administration to secondary users through Role-based Access Control (RBAC). The user access level can be changed to “Admin” with full administrator access or customized with a variety of role options that fall in between Read-only and Admin.
The following optional functional roles, representing groupings of functional tasks, can be assigned:
- Backup - Clone, Export, Import, Add/Pause Replication to a VM, Create/Delete snapshots, and Create/Delete/Modify snapshot schedules.
- Cluster Settings - Create/Modify all settings within Control Center, except for User Management and Control (system/cluster shutdown).
- Cluster Shutdown - Shutdown the system/cluster and any running VMs.
- VM Create/Edit - Import VMs and Create, Modify, Clone, and Add/Modify VM block (virtual disk) and network devices.
- VM Delete - Delete VMs and their associated snapshots and devices.
- VM Power Controls - Start, Shutdown/Power Off, and Live Migrate VMs.
A user can be assigned any number of these optional roles.
|
|
|
|
Maintenance |
|
|
|
Unified
All storage related features and functionality are built into the DataCore SANsymphony platform. The consolidation means that only one product needs to be installed and upgraded and minimal dependencies exist with other software.
Integrations with 3rd party systems (e.g. OpenStack, vSphere, System Center) are delivered separately but are free-of-charge.
|
Unified
A few minor components aside (e.g. SRA), all storage related features and functionality are built into the Nutanix ECP platform. This type of consolidation means that only one product needs to be installed/upgraded and minimal dependencies exist with other software.
|
Unified
All storage related features and functionality are built into the Scale Computing HC3 platform. The consolidation means that only one product needs to be installed and upgraded and minimal dependencies exist with other software.
|
|
SW Upgrade Execution
Details
|
Rolling Upgrade (1-by-1)
Each SANsymphony update is packaged in an installation wizard which contains a fully guided upgrade process. The upgrade process checks all system requirements and performs a system health check before starting the upgrade and before moving from one node to the next.
The user can also decide to upgrade a SANsymphony cluster manually and follow all steps that are outlined in the Release Notes.
|
Rolling Upgrade (1-by-1)
The upgrade of AOS in a Nutanix cluster can be performed via Prism using a 1-Click approach. Upgrades are non-disruptive in the sense that when a VSC is taken offline to be upgraded, availability is still provided by the other VSCs. Nutanix provides a step-by-step upgrade guide that includes a number of pre- and post checks.
AOS 5.17 introduces support for upgrading AHV hosts through the Nutanix Life Cycle Manager (LCM).
|
Rolling Upgrade (1-by-1)
Scale Computing provides one-click software and firmware upgrades of HC3 nodes that typically take minutes to complete, while all VMs remain online during the entire upgrade procedure.
|
|
FW Upgrade Execution
Details
|
Hardware dependent
Some server hardware vendors offer rolling upgrade options with their base software or with a premium software suite. With some other server vendors, BIOS and Baseboard Management Controller (BMC) updates have to be performed manually and 1-by-1.
DataCore provides integrated firmware-control for FC-cards. This means the driver automatically loads the required firmware on demand.
|
1-Click
The 1-Click upgrade for BIOS and Baseboard Management Controller (BMC) firmware feature is available for AHV and ESXi hypervisors. Hardware must run on at least NX G4 (Haswell) platforms.
|
1-Click
Scale Computing provides one-click software and firmware upgrades of HC3 nodes that typically take minutes to complete, while all VMs remain online during the entire upgrade procedure.
|
|
|
|
Support |
|
|
Single HW/SW Support
Details
|
No
With regard to DataCore SANsymphony as a software-only offering (SDS), DataCore does not offer unified support for the entire solution. This means storage software support (SANsymphony) and server hardware support are separate.
|
Yes (Nutanix; Dell; Lenovo; IBM)
Nutanix provides unified support for the entire native solution. This means Nutanix is the single point-of-contact for any storage software (Nutanix AOS) and server hardware (Super Micro) related issues.
When buying Nutanix on Dell server hardware, joint support is offered through Dell ProSupport with software assist from Nutanix.
When buying Nutanix on Lenovo server hardware (Converged HX Series), technical support for the entire solution is delivered by the Lenovo services organization.
When buying Nutanix on IBM Power hardware, technical support for the entire solution is delivered by the IBM services organization.
|
Yes
|
|
Call-Home Function
Details
|
Partial (HW dependent)
With regard to DataCore SANsymphony as a software-only offering (SDS), DataCore does not offer call-home for the entire solution. This means storage software support (SANsymphony) and server hardware support are separate.
|
Full
Nutanix call-home function is called 'Pulse' and is fully integrated into the platform. Enabling the feature is simple and straightforward and requires very few clicks.
Pulse provides diagnostic system data to Nutanix support teams in order to deliver proactive, context-aware support for Nutanix solutions. The information is collected unobtrusively and automatically from a Nutanix cluster, with no impact to system performance. Nutanix support teams get the data required for quick and timely resolution of issues - often before administrators even know there is a problem. Examples include failed disks, faulty network interface cards (NICs) and unusually high utilization of cluster resources that could lead to potential problems.
Nutanix AOS 5.6.1 introduces support for Prism Central 5.6.1 to be used as a proxy for routing support information to Nutanix support.
Pulse is a basic support service and is included with every software edition and support contract.
|
Full
When the Scale Computing HC3 state machines detect failure modes or significant issues, they notify the Scale Computing support group (by default settings) via SNMP. Also, the state machines themselves automatically remediate the issue if possible.
|
|
Predictive Analytics
Details
|
Partial
Capacity Management: DataCore SANsymphony Analysis and Reporting supports depletion monitoring of the capacity and complements pool space threshold warnings by regularly evaluating the rate of capacity consumption and estimating when space will be depleted. The regularly updated projections provide the opportunity to add more storage to the pool before it runs out, and support better capacity planning with fewer surprises. To help allocate costs, especially in private cloud and hosted cloud services, SANsymphony generates reports quantifying the storage resources consumed by specific hosts or groups of hosts. The reports tally several parameters.
Health Monitoring: A combination of system health checks and access to device S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) alerts help to isolate performance and disk problems before they become serious.
DataCore Insight Services (DIS) offers additional capabilities including log-analytics for predictive failure analysis and actionable insights - including hardware.
DIS also provides predictive capacity trend analysis in order to pro-actively warn about licensing limitations being reached within x days and/or disk pools running out of capacity.
|
Full
Nutanix Prism Central includes machine-learning capabilities that analyze resource usage over time and provide tools to monitor resource consumption, identify abnormal behavior, and guide resource planning. Predictive analysis of capacity usage and trends based on workload behavior enables pay-as-you-grow scaling. The integrated advisor functionality offers infrastructure optimization recommendations to improve efficiency and performance.
Nutanix's predictive analytics functionality requires the Prism Pro edition.
|
N/A
Scale Computing HC3 does not natively have predictive analytics capabilities.
|
|