|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
- + Extensive platform support
- + Extensive data protection capabilities
- + Flexible deployment options
|
- + Extensive platform support
- + Native file and object services
- + Manageability
|
- + Extensive data protection capabilities
- + Policy-based management
- + Fast streamlined deployment
|
|
Cons
|
- - No native data integrity verification
- - Dedup/compr not performance optimized
- - Disk/node failure protection not capacity optimized
|
- - Complex solution design
- - No QoS
- - Complex dedup/compr architecture
|
- - Single hypervisor and server hardware
- - No bare-metal support
- - No hybrid configurations
|
|
|
|
Content |
|
|
|
WhatMatrix
|
WhatMatrix
|
WhatMatrix
|
|
|
|
Assessment |
|
|
|
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
NEW
DataCore was founded in 1998 and began to ship its first software-defined storage (SDS) platform, SANsymphony (SSY), in 1999. DataCore launched a separate entry-level storage virtualization solution, SANmelody (v1.4), in 2004. This platform was also the foundation for DataCore's HCI solution. In 2014 DataCore formally announced Hyperconverged Virtual SAN as a separate product. In May 2018 changes to the software licensing model enabled consolidation of these products; because the core software is the same, the combined offering has since been called DataCore SANsymphony.
One year later, in 2019, DataCore expanded its software-defined storage portfolio with a solution aimed specifically at file virtualization. This additional SDS offering is called DataCore vFilO and operates as a scale-out global file system across distributed sites, spanning on-premises and cloud-based NFS and SMB shares.
Recently, at the beginning of 2021, DataCore acquired Caringo and integrated its know-how and software-defined object storage offerings into the DataCore portfolio. The newest member of the DataCore SDS portfolio is called DataCore Swarm, and together with its complementary offerings SwarmFS and DataCore FileFly it enables customers to build on-premises object storage solutions that radically simplify managing, storing, and protecting data while allowing multi-protocol (S3/HTTP, API, NFS/SMB) access from any application, device, or end-user.
DataCore Software specializes in software solutions for block, file, and object storage. DataCore has by far the longest track record in software-defined storage of the SDS/HCI vendors compared on WhatMatrix.
In April 2021 the company had an install base of more than 10,000 customers worldwide and about 250 employees.
|
Name: Enterprise Cloud Platform (ECP)
Type: Hardware+Software (HCI)
Development Start: 2009
First Product Release: 2011
NEW
Nutanix was founded in early 2009 and began to ship its first Hyper Converged Infrastructure (HCI) solution, Virtual Computing Platform (VCP), in 2011. The core of the Nutanix solution is the Nutanix Operating System (NOS). In 2015 Nutanix rebranded its solution to Xtreme Computing Platform (XCP), mainly because Nutanix developed its own hypervisor, Acropolis Hypervisor (AHV), which is based on KVM. In 2015 Nutanix also rebranded its operating system to Acropolis (AOS). In 2016 Nutanix rebranded its solution to Enterprise Cloud Platform (ECP). In September 2018 Nutanix rebranded Acropolis File Services (AFS) to Nutanix Files, Acropolis Block Services (ABS) to Nutanix Volumes, Object Storage Services to Nutanix Buckets and Xi Cloud DR Services to Nutanix Leap.
At the end of October 2020 the company had an install base of approximately 18,000 customers worldwide and more than 6,100 employees.
|
Name: HPE SimpliVity 2600
Type: Hardware+Software (HCI)
Development Start: 2009
First Product Release: 2018
SimpliVity was founded in late 2009 and began to ship its first Hyper Converged Infrastructure (HCI) solution, OmniStack, in 2013. The core of the SimpliVity solution is the OmniStack OS combined with the OmniStack Accelerator PCIe card. In January 2017 SimpliVity was acquired by HPE. In the second quarter of 2017 HPE introduced SimpliVity on HPE ProLiant server hardware and the platform was rebranded to HPE SimpliVity 380. In July 2018 HPE extended the HPE SimpliVity product family with HPE SimpliVity 2600, featuring software-only deduplication and compression.
In January 2018 HPE SimpliVity had a customer install base of approximately 2,000 customers worldwide. The number of employees working in the HPE SimpliVity division is unknown at this time.
|
|
|
GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
.
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
.
SSY 3.0: 1999
NEW
10th Generation software. DataCore currently has the most experience with SDS/HCI technology of the SDS/HCI platforms compared here.
SANsymphony (SSY) version 3 was the first public release that hit the market back in 1999. The product has evolved ever since and the current major release is version 10. The list includes only the milestone releases.
PSP = Product Support Package
U = Update
|
GA Release Dates:
AOS 5.19: dec 2020
AOS 5.18: aug 2020
AOS 5.17: may 2020
AOS 5.16: jan 2020
AOS 5.11: aug 2019
AOS 5.10: nov 2018
AOS 5.9: oct 2018
AOS 5.8: jul 2018
AOS 5.6.1: jun 2018
AOS 5.6: apr 2018
AOS 5.5: dec 2017
AOS 5.1.2 / 5.2*: sep 2017
AOS 5.1.1.1: jul 2017
AOS 5.1: may 2017
AOS 5.0: dec 2016
AOS 4.7: jun 2016
AOS 4.6: feb 2016
AOS 4.5: oct 2015
NOS 4.1: jan 2015
NOS 4.0: apr 2014
NOS 3.5: aug 2013
NOS 3.0: dec 2012
NEW
5th Generation software. Nutanix currently offers the most all-round package in terms of advanced functionality of the SDS/HCI platforms compared here.
Nutanix has adopted an LTS/STS life cycle approach towards AOS releases:
5.5, 5.10, 5.15 (LTS)
5.6, 5.8, 5.9, 5.11, 5.16, 5.17, 5.18 (STS)
LTS are released annually, maintained for 18 months and supported for 24 months.
STS are released quarterly, maintained for 3 months and supported for 6 months.
*5.2 is an IBM Power Systems-only release; for all other platforms version 5.6 applies.
Release dates of AOS Add-on components:
Nutanix Files 3.7.1: sep 2020
Nutanix Files 3.7: jul 2020
Nutanix Files 3.6.1: dec 2019
Nutanix Files 3.6: oct 2019
Nutanix Files 3.5.2: aug 2019
Nutanix Files 3.5.1: jul 2019
Nutanix Files 3.5: mar 2019
AFS 3.0.1: jun 2018
AFS 2.2: aug 2017
AFS 2.1.1: jun 2017
AFS 2.1: apr 2017
AFS 2.0: jan 2017
Nutanix File Analytics 2.0: aug 2019
Nutanix Objects 3.1: nov 2020
Nutanix Objects 3.0: oct 2020
Nutanix Objects 2.2: jun 2020
Nutanix Objects 2.1: may 2020
Nutanix Objects 1.1: nov 2019
Nutanix Objects 1.0.1: oct 2019
Nutanix Objects 1.0: aug 2019
Nutanix Karbon 2.1: jul 2020
Nutanix Karbon 2.0: feb 2020
Nutanix Karbon 1.0.4: dec 2019
Nutanix Karbon 1.0.3: oct 2019
Nutanix Karbon 1.0.2: oct 2019
Nutanix Karbon 1.0.1: may 2019
Nutanix Karbon 1.0: mar 2019
Nutanix Calm 3.1: oct 2020
Nutanix Calm 3.0: jun 2020
Nutanix Calm 2.9.7: jan 2020
Nutanix Calm 2.9.0: nov 2019
Nutanix Calm 2.7.0: aug 2019
Nutanix Calm 2.6.0: feb 2019
Nutanix Calm 2.4.0: nov 2018
Nutanix Calm 5.9: oct 2018
Nutanix Calm 5.8: jul 2018
Nutanix Calm 5.7: dec 2017
Nutanix Era 2.0: sep 2020
Nutanix Era 1.3: jun 2020
Nutanix Era 1.2: jan 2020
Nutanix Era 1.1: jul 2019
Nutanix Era 1.0: dec 2018
NOS = Nutanix Operating System
AOS = Acropolis Operating System
LTS = Long Term Support
STS = Short Term Support
AFS = Acropolis File Services
|
GA Release Dates:
OmniStack 4.0.1 U1: dec 2020
OmniStack 4.0.1: apr 2020
OmniStack 4.0.0: jan 2020
OmniStack 3.7.10: sep 2019
OmniStack 3.7.9: jun 2019
OmniStack 3.7.8: mar 2019
OmniStack 3.7.7: dec 2018
OmniStack 3.7.6 U1: oct 2018
OmniStack 3.7.6: sep 2018
OmniStack 3.7.5: jul 2018
OmniStack 3.7.4: may 2018
OmniStack 3.7.3: mar 2018
OmniStack 3.7.2: dec 2017
OmniStack 3.7.1: oct 2017
OmniStack 3.7.0: jun 2017
OmniStack 3.6.2: mar 2017
OmniStack 3.6.1: jan 2017
OmniStack 3.5.3: nov 2016
OmniStack 3.5.2: jul 2016
OmniStack 3.5.1: may 2016
OmniStack 3.0.7: aug 2015
OmniStack 2.1.0: jan 2014
OmniStack 1.1.0: aug 2013
NEW
4th Generation software on 9th and 10th Generation HPE server hardware. The HPE SimpliVity 380 platform has shaped up to be a full-featured platform in virtualized datacenter environments.
RapidDR 3.5.1: dec 2020
RapidDR 3.5.0: oct 2020
RapidDR 3.1.1: feb 2020
RapidDR 3.1: jan 2020
RapidDR 3.0.1: sep 2019
RapidDR 3.0: jun 2019
RapidDR 2.5.1: dec 2018
RapidDR 2.5: oct 2018
RapidDR 2.1.1: jun 2018
RapidDR 2.1: mar 2018
RapidDR 2.0: oct 2017
RapidDR 1.5: feb 2017
RapidDR 1.2: oct 2016
|
|
|
|
Pricing |
|
|
Hardware Pricing Model
Details
|
N/A
SANsymphony is sold by DataCore as a software-only solution. Server hardware must be acquired separately.
The entry point for all hardware and software compatibility statements is: https://www.datacore.com/products/sansymphony/tech/compatibility/
On this page links can be found to: Storage Devices, Servers, SANs, Operating Systems (Hosts), Networks, Hypervisors, Desktops.
Minimum server hardware requirements can be found at: https://www.datacore.com/products/sansymphony/tech/prerequisites/
|
Per Node
Depends on specific model/subtype/resource configuration
|
Per Node
|
|
Software Pricing Model
Details
|
Capacity based (per TB)
NEW
DataCore SANsymphony is licensed in three different editions: Enterprise, Standard, and Business.
All editions are licensed by capacity (in 1 TB steps). Except for the Business edition, which has a fixed price per TB, the price per TB decreases as the capacity used by an end-user in each edition grows.
Each edition includes a defined feature set.
Enterprise (EN) includes all available features plus expanded Parallel I/O.
Standard (ST) includes all Enterprise (EN) features except FC connections, Encryption, Inline Deduplication & Compression and Shared Multi-Port Array (SMPA) support, and comes with regular Parallel I/O.
Business (BZ), the entry-level offering, includes all essential Enterprise (EN) features except Asynchronous Replication & Site Recovery, Encryption, Deduplication & Compression, Random Write Accelerator (RWA) and Continuous Data Protection (CDP), and comes with limited Parallel I/O.
Customers can choose between a perpetual licensing model or a term-based licensing model. Any initial license purchase for perpetual licensing includes Premier Support for either 1, 3 or 5 years. Alternatively, term-based licensing is available for either 1, 3 or 5 years, always including Premier Support as well, plus enhanced DataCore Insight Services (predictive analytics with actionable insights). In most regions, BZ is available as term license only.
Capacity can be expanded in 1 TB steps. There exists a 10 TB minimum per installation for Business (BZ). Moreover, BZ is limited to 2 instances and a total capacity of 38 TB per installation, but one customer can have multiple BZ installations.
Cost neutral upgrades are available when upgrading from Business/Standard (BZ/ST) to Enterprise (EN).
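To make the Business (BZ) capacity rules described above concrete, the sketch below encodes them as a small Python check (1 TB steps, 10 TB minimum, 38 TB and 2 instances maximum per installation). The function name and structure are illustrative only, not a DataCore tool.

```python
# Illustrative only: encodes the SANsymphony Business (BZ) edition capacity
# rules described above. Not an official DataCore licensing calculator.

BZ_MIN_TB = 10        # minimum licensed capacity per installation
BZ_MAX_TB = 38        # maximum total capacity per installation
BZ_MAX_INSTANCES = 2  # maximum number of instances per installation


def validate_bz_installation(capacity_tb: float, instances: int) -> list[str]:
    """Return a list of rule violations for a proposed BZ installation."""
    issues = []
    if capacity_tb != int(capacity_tb):
        issues.append("Capacity must be licensed in whole 1 TB steps.")
    if capacity_tb < BZ_MIN_TB:
        issues.append(f"BZ requires at least {BZ_MIN_TB} TB per installation.")
    if capacity_tb > BZ_MAX_TB:
        issues.append(f"BZ is capped at {BZ_MAX_TB} TB per installation.")
    if instances > BZ_MAX_INSTANCES:
        issues.append(f"BZ allows at most {BZ_MAX_INSTANCES} instances per installation.")
    return issues


if __name__ == "__main__":
    # Example: 40 TB across 3 instances violates both the capacity and instance limits.
    for problem in validate_bz_installation(40, 3):
        print(problem)
```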
|
Per Core + Flash TiB (AOS)
Per Node (AOS, Prism Pro/Ultimate)
Per Concurrent User (VDI)
Per VM (ROBO)
Per TB (Files)
Per VM (Calm)
Per VM (Xi Leap)
In September 2018 Nutanix introduced a 'per core + per Flash TB' (capacity) licensing model alongside the existing 'per node' (appliance) licensing model for AOS. This means software license cost is tightly coupled with the number of physical cores (compute) as well as the amount of flash TBs (storage) in the Nutanix nodes acquired.
In May 2019 Nutanix introduced 'per concurrent user' licensing for VDI use cases and 'per VM' licensing for ROBO use cases. Both VDI and ROBO bundle AOS, AHV and Prism. Both must run on dedicated clusters. ROBO is designed for sites running typically up to 10 VMs.
Capacity-based and VDI-based software licensing are sold in 1 to 7 year terms.
ROBO-based software licensing is sold in 1 to 5 year terms.
Appliance-based software licensing is sold for the lifetime of the hardware and is non-transferable.
AOS Editions: Starter, Pro, Ultimate
Prism Central Editions: Starter, Pro
AOS Editions:
Starter limits functionality, for example: No IBM Power, Cluster size restricted to 12, Replication Factor restricted to 2; lacks post-process deduplication, post-process compression, Availability Domains, Self Service Restore, Cloud Connect, VSS Integration, Intelligent VM Placement, Virtual Network Configuration, Host Profiles.
Ultimate exclusively offers VM Flash Mode, Multiple Site DR (1-to many, many-to 1), Metro Availability, Disaster Recovery with NearSync or Sync Replication, On-premises Leap, Data-at-Rest Encryption and Native KMS. All except VM Flash Mode can be purchased as an add-on license for Pro edition.
Prism Central Editions:
Prism Starter is included with every edition of Acropolis for single and multiple site management. It enables registration and management of multiple Prism Element clusters, 1-click upgrades of Prism Central through Life Cycle Manager (LCM), and monitoring and troubleshooting of managed clusters. Prism Pro is available as an add-on subscription. Pro adds customizable dashboards, capacity planning and analysis tools, advanced search capabilities, low-code/no-code automation, and reporting. Prism Ultimate adds application discovery and monitoring, budgeting/chargeback and cost metering for resources, and a SQL Server monitoring content pack. Every Prism Central deployment includes a 90-day trial version of this license tier.
AOS Add-ons require separate licensing:
Nutanix Files:
Nutanix Files is licensed separately and is sold under two different capacity licenses. Nutanix Files Add-on License for HCI is for Nutanix Files running on mixed mode clusters. Nutanix Files Dedicated License is for Nutanix Files running on dedicated clusters.
Nutanix Calm:
Nutanix Calm is licensed separately and is sold as an annual subscription on a per virtual-machine (VM) basis. Calm licenses are required only for VMs managed by Calm, running in either the Nutanix Enterprise cloud or public clouds. Nutanix Calm is sold in 25 VM subscription license packs. Both Prism Starter and Prism Pro include perpetual entitlement for the first 25 VMs managed by Calm.
Nutanix Microsegmentation:
Nutanix Microsegmentation is licensed separately and is sold as an annual subscription on a per node basis. Licenses are needed for all nodes in a cluster where microsegmentation functionality will be used. This option requires a Nutanix cluster managed by Prism Central and using the AHV virtualization solution. Licenses are sold in 1 to 5 year subscription terms. Prism Central with a Starter license is required to manage microsegmentation policies.
Xi Services:
Nutanix provides several subscription Plans: Pay-As-You-Go, 1 year and 3 year.
Xi Leap:
Xi Leap is licensed separately and is sold as an annual subscription on a per virtual-machine (VM) basis. Each VM protected by Xi Leap falls in one of three pricing levels: Basic Protection (RPO = 24+ hours; 2 snapshots; 1 TB/VM), Advanced Protection (RPO = 4+ hours; 1 week of snapshots; 2 TB/VM), Premium Protection (RPO = 1+ hours; 1 month of snapshots; 5 TB/VM). Allowed capacity includes space allocated by snapshots.
Xi Frame:
Xi Frame is licensed separately and is sold as an annual subscription on a named-user or concurrent-user basis.
|
Per Node (all-inclusive)
Add-ons: RapidDR (per VM)
There is no separate software licensing for most platform integrated features. By default all software functionality is available regardless of the hardware model purchased.
The HPE SimpliVity 2600 per node software license is tied to the selected CPU and storage configuration.
License Add-ons:
- RapidDR feature
RapidDR uses a 'per VM' licensing model and is available in 25 VM and 100 VM license packs.
|
|
Support Pricing Model
Details
|
Capacity based (per TB)
Support is always provided on a premium (24x7) basis, including free updates.
More information about DataCore's support policy can be found here:
http://datacore.custhelp.com/app/answers/detail/a_id/1270/~/what-is-datacores-support-policy-for-its-products
|
Per Node
Subscriptions: Basic, Production, Mission Critical
Most notable differences:
- Basic offers 8x5 support, Production and Mission Critical offer 24x7 support.
- Basic and Production target 2/4/8 hours response times depending on severity level, whereas Mission Critical targets 1/2/4 hours response times.
- Basic and Production provide Next Business Day hardware replacement, whereas Mission Critical provides 4 Hour hardware replacement.
- Mission Critical exclusively offers direct routing to senior level engineers.
|
Per Node
3-year HPE SimpliVity 2600 solution support (24x7x365) is mandatory.
|
|
|
Design & Deploy
|
|
|
|
|
|
|
Design |
|
|
Consolidation Scope
Details
|
Storage
Data Protection
Management
Automation&Orchestration
DataCore is storage-oriented.
SANsymphony Software-Defined Storage Services are focused on variable deployment models. The range spans classical storage virtualization, converged and hybrid-converged deployments, and hyperconverged deployments, with seamless migration between them.
DataCore aims to provide all key components within a storage ecosystem including enhanced data protection and automation & orchestration.
|
Hypervisor
Compute
Storage
Data Protection (limited)
Management
Automation&Orchestration
Nutanix is stack-oriented.
With the ECP platform Nutanix aims to provide all functionality required in a Private Cloud ecosystem through a single platform.
|
Compute
Storage
Data Protection (full)
Management
Automation&Orchestration (DR)
HPE is stack-oriented, whereas the SimpliVity 2600 platform itself is heavily storage- and protection-focused.
HPE SimpliVity 2600 aims to provide key components within a Private Cloud ecosystem as well as integration with existing hypervisors and applications.
|
|
|
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
The bandwidth required depends entirely on the specific workload needs.
SANsymphony 10 PSP11 introduced support for Emulex Gen 7 64 Gbps Fibre Channel HBAs.
SANsymphony 10 PSP8 introduced support for Gen6 16/32 Gbps ATTO Fibre Channel HBAs.
|
1, 10, 25, 40 GbE
Nutanix hardware models include redundant ethernet connectivity using SFP+ or Base-T. Nutanix recommends 10GbE or higher to avoid the network becoming a performance bottleneck.
|
1, 10 GbE
HPE SimpliVity 2600 hardware models include ethernet connectivity using SFP+. HPE SimpliVity 2600 recommends 10GbE to avoid the network becoming a performance bottleneck.
|
|
Overall Design Complexity
Details
|
Medium
DataCore SANsymphony is able to meet many different use-cases because of its flexible technical architecture, however this also means there are a lot of design choices that need to be made. DataCore SANsymphony seeks to provide important capabilities either natively or tightly integrated, and this keeps the design process relatively simple. However, because many features in SANsymphony are optional and thus can be turned on/off, in effect each one needs to be taken into consideration when preparing a detailed design.
|
High
Nutanix ECP is able to meet many different use-cases, but each requires specific design choices in order to reach an optimal end-state. This is not limited to choosing the right building blocks and the right software edition, but extends to the use (or non-use) of some of its core data protection and data efficiency mechanisms. In addition, the end-user's hypervisor choice can prohibit the use of some advanced functionality, as certain features are only available on Nutanix's own hypervisor, AHV. As Nutanix continues to add functionality to its already impressive array of capabilities, designing the solution could grow even more complex over time.
|
Low
HPE SimpliVity was developed with simplicity in mind, both from a design and a deployment perspective. HPE SimpliVity's uniform platform architecture is meant to be applicable to all virtualization use-cases and seeks to provide important capabilities natively and on a per-VM basis. There are only a handful of storage building blocks to choose from, and many advanced capabilities like deduplication and compression are always turned on. This minimizes the number of design choices as well as the number of deployment steps.
|
|
External Performance Validation
Details
|
SPC (Jun 2016)
ESG Lab (Jan 2016)
SPC (Jun 2016)
Title: 'Dual Node, Fibre Channel SAN'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-connected, SSY 10.0, 4x All-Flash Dell MD1220 SAS Storage Arrays
SPC (Jun 2016)
Title: 'Dual Node, High Availability, Hyper-converged'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-interconnect, SSY 10.0
ESG Lab (Jan 2016)
Title: 'DataCore Application-adaptive Data Infrastructure Software'
Workloads: OLTP
Benchmark Tools: IOmeter
Hardware: Hybrid (Tiered) Dell PowerEdge R720, 2-node cluster, SSY 10.0
|
Login VSI (May 2017)
ESG Lab (Feb 2017; Sep 2020)
SAP (Nov 2016)
ESG Lab (Sep 2020)
Title: 'Nutanix Architecture and Performance Optimization'
Workloads: Synthetic, High-perf database, OLTP, Postgres Analytics
Benchmark Tools: Nutanix X-Ray (FIO), Silly Little Oracle Benchmark (SLOB), Pgbench
Hardware: Nutanix All-NVMe NX-8170-G7, 4-node cluster, AOS 5.18
Login VSI (May 2017)
Title: 'Citrix XenDesktop on AHV'
Workloads: Citrix XenDesktop VDI
Benchmark Tools: Login VSI (VDI)
Hardware: Nutanix All-flash NX-3460-G5, 6-node cluster, AOS 5.0.2
ESG Lab (Feb 2017)
Title: 'Performance Analysis: Nutanix'
Workloads: MSSQL OLTP, Oracle OLTP, MS Exchange, Citrix XenDesktop VDI
Benchmark Tools: Benchmark Factory (MSSQL), Silly Little Oracle Benchmark (Oracle), Jetstress (MS Exchange), Login VSI (Citrix XenDesktop)
Hardware: Nutanix All-flash NX-3460-G5, 4 node-cluster, AOS 5.0
SAP (Nov 2016)
Title: 'SAP Sales and Distribution (SD) Standard Application Benchmark'.
Workloads: SAP ERP
Benchmark Tools: SAPSD
Hardware: All-Flash Nutanix NX8150 G5, single-node, AOS 4.7
|
Login VSI (Jun 2018)
Login VSI (Jun 2018)
Title: 'VMware Horizon 7.4 on HPE SimpliVity 2600'
Workloads: VMware Horizon VDI
Benchmark Tools: Login VSI (VDI)
Hardware: All-Flash HPE SimpliVity 170 Gen10, 4-node cluster / 6-node cluster, OmniStack 3.7.5
|
|
Evaluation Methods
Details
|
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
SANsymphony is freely downloadable after registering online and offers full platform support (complete Enterprise feature set) but is restricted in scale (4 nodes), capacity (16 TB) and time (30 days), all of which can be expanded upon request. The free trial version of SANsymphony can be installed on all commodity hardware platforms that meet the hardware requirements.
For more information please go here: https://www.datacore.com/try-it-now/
|
Community Edition (forever)
Hyperconverged Test Drive in GCP
Proof-of-Concept (POC)
Partner Driven Demo Environment
Xi Services Free Trial (60-days)
AOS Community Edition (CE) is freely downloadable after registering online and offers limited platform support (Acropolis Hypervisor = AHV) and scalability (4 nodes). AOS CE can be installed on all commodity hardware platforms that meet the hardware requirements. AOS CE use is not time-restricted. AOS CE is not for production environments.
A small running setup of AOS Community Edition (CE) can also be accessed instantly in Google Cloud Platform (GCP) by registering at nutanix.com/test-drive-hyperconverged-infrastructure. The Test Drive is limited to 2 hours.
Ravello's Smart Labs on AWS/GCE provides nested virtualization and offers a blueprint to run AOS CE in the public cloud. Using Ravello Smart Labs requires a subscription.
Nutanix also offers instant online access to live demo environments for partners to educate/show their customers.
Nutanix Xi Services include a Free Plan that provides a free trial for 60 days. At the end of the 60-day trial, end-user organizations can choose to switch to a Paid Plan, or decide later. A Free Plan includes either DR for 100 VMs (100 GB each) or running VMs (2 vCPU, 4 GB RAM each), and includes professional support.
AWS = Amazon Web Services
GCE = Google Compute Engine
|
Cloud Technology Showcase (CTS)
Proof-of-Concept (POC)
HPE offers no Community Edition or Free Trial edition of their hyperconverged software.
However, HPE maintains a cloud-based evaluation environment in which demos can be conducted and where potential customers can load up their own workloads to execute Proof-of-Concepts. This is called the Cloud Technology Showcase (CTS).
|
|
|
|
Deploy |
|
|
Deployment Architecture
Details
|
Single-Layer
Dual-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
Single-Layer:
- SANsymphony is implemented as a virtual machine (VM) or, in the case of Hyper-V, as a service layer on the Hyper-V parent OS, managing internal and/or external storage devices and providing virtual disks back to the hypervisor cluster it is implemented in. DataCore calls this a hyper-converged deployment.
Dual-Layer:
- SANsymphony is implemented as bare metal nodes, managing external storage (SAN/NAS approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a traditional deployment.
- SANsymphony is implemented as bare metal nodes, managing internal storage devices (server-SAN approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a converged deployment.
Mixed:
- SANsymphony is implemented in any combination of the above 3 deployments within a single management entity (Server Group) acting as a unified storage grid. DataCore calls this a hybrid-converged deployment.
|
Single-Layer (primary)
Dual-Layer (secondary)
Single-Layer: Nutanix ECP is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
Nutanix ECP can also serve in a dual-layer model by providing storage to non-Nutanix hypervisor hosts, bare metal hosts and Windows clients (Please view the compute-only scale-out option for more information).
|
Single-Layer
Single-Layer: HPE SimpliVity 2600 is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
Although HPE SimpliVity 2600 can serve in a dual-layer model by providing storage to non-HPE SimpliVity 2600 hypervisor hosts, this would negate many of the platform's benefits as well as the financial business case. (Please view the compute-only scale-out option for more information.)
|
|
Deployment Method
Details
|
BYOS (some automation)
BYOS = Bring-Your-Own-Server-Hardware
Deploying DataCore SANsymphony is made easy by a very straightforward implementation approach.
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by Nutanix, customer deployments can be executed in hours instead of days.
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by HPE SimpliVity 2600, customer deployments can be executed in hours instead of days.
In HPE SimpliVity 3.7.10 the Deployment Manager allows configuring NIC teaming during the deployment process. With this feature, one or more NICs can be assigned to the Management network, and one or more NICs to the Storage and Federation networks (shared between the two).
|
|
|
Workload Support
|
|
|
|
|
|
|
Virtualization |
|
|
Hypervisor Deployment
Details
|
Virtual Storage Controller
Kernel (Optional for Hyper-V)
The SANsymphony Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the SANsymphony storage solution and commits its internal storage and/or externally connected storage to the shared resource pool. The Virtual Storage Controller (VSC) can be configured with direct access to the physical disks, so the hypervisor is not impeding the I/O flow.
In Microsoft Hyper-V environments the SANsymphony software can also be installed in the Windows Server Root Partition. DataCore does not recommend installing SANsymphony in a Hyper-V guest VM as it introduces virtualization layer overhead and obstructs DataCore Software from directly accessing CPU, RAM and storage. This means that installing SANsymphony in the Windows Server Root Partition is the preferred deployment option. More information about the Windows Server Root Partition can be found here: https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/hyper-v-server/architecture
The DataCore software can be installed on Microsoft Windows Server 2019 or lower (all versions down to Microsoft Windows Server 2012/R2).
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (eg. most VSCs do not like snapshots). On the other hand Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform. VIBs have the middle-ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Virtual Storage Controller
The Nutanix Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the Nutanix storage solution and commits its internal storage to the shared resource pool. The Virtual Storage Controller (VSC) has direct access to the physical disks, so the hypervisor is not impeding the I/O flow. AOS 5.5 Controller VMs were running CentOS-7.3 with Python 2.7.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (eg. most VSCs do not like snapshots). On the other hand Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform. VIBs have the middle-ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Virtual Storage Controller
The HPE SimpliVity 2600 OmniStack Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the HPE SimpliVity 2600 storage solution and commits its internal storage to the shared resource pool. The Virtual Storage Controller (VSC) has direct access to the physical disks, so the hypervisor is not impeding the I/O flow.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (eg. most VSCs do not like snapshots). On the other hand Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform. VIBs have the middle-ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
|
Hypervisor Compatibility
Details
|
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
'Not qualified' means there is no generic support qualification due to limited market footprint of the product. However, a customer can always individually qualify the system with a specific SANsymphony version and will get full support after passing the self-qualification process.
Only products explicitly labeled 'Not Supported' have failed qualification or have shown incompatibility.
|
VMware vSphere ESXi 6.0U1A-7.0U1
Microsoft Hyper-V 2012R2/2016/2019*
Microsoft CPS Standard
Nutanix Acropolis Hypervisor (AHV)
Citrix XenServer 7.0.0-7.1.0CU2**
NEW
Nutanix currently supports 4 major hypervisor platforms where many others support only 1 or 2. Nutanix offers its own hypervisor called Acropolis Hypervisor (AHV), which is based on Linux KVM. Using different hypervisors and/or hypervisor clusters within the same Nutanix cluster is supported.
Nutanix has official support for Microsoft Cloud Platform System (CPS), which bundles Windows Server 2012 R2, System Center 2012 R2 and Windows Azure Pack for easier hybrid cloud configuration. Nutanix offers the CPS Standard version pre-installed on nodes.
*Hyper-V 2016 is not supported on NX platform nodes with IvyBridge or SandyBridge processors (motherboard designator starts with X9). Hyper-V 2016 requires G5 hardware and is not supported on G3 and G4 hardware.
**Citrix Hypervisor 8.x is not supported on Nutanix server nodes by Citrix.
AOS 5.1 introduced general availability (GA) support for the Citrix XenServer hypervisor for use with XenApp and XenDesktop in Nutanix clusters.
AOS 5.2 exclusively introduced general availability (GA) support for the AHV hypervisor for use on IBM Power Systems. Only Linux is currently supported as Guest OS.
AOS 5.5 introduced support for Microsoft Hyper-V 2016 and provides 1-click non-disruptive upgrades from Hyper-V 2012 R2. AOS 5.5 also introduces support for virtual hard disks (.vhdx) that are 2 TB or greater in size in Hyper-V clusters.
AOS 5.17 introduces support for Microsoft Hyper-V 2019.
|
VMware vSphere ESXi 6.5U2-6.7U3
HPE SimpliVity OmniStack 4.0.0 introduced support for VMware vSphere 6.7 Update 3.
HPE SimpliVity 2600 currently does not support Microsoft Hyper-V, whereas HPE SimpliVity 380 does.
|
|
Hypervisor Interconnect
Details
|
iSCSI
FC
The SANsymphony software-only solution supports both iSCSI and FC protocols to present storage to hypervisor environments.
DataCore SANsymphony supports:
- iSCSI (Switched and point-to-point)
- Fibre Channel (Switched and point-to-point)
- Fibre Channel over Ethernet (FCoE)
- Switched, where host uses Converged Network Adapter (CNA), and switch outputs Fibre Channel
|
NFS
SMB3
iSCSI
In virtualized environments, In-Guest iSCSI support is still a hard requirement if one of the following scenarios is pursued:
- Microsoft Failover Clustering (MSFC) in a VMware vSphere environment
- A supported MS Exchange 2013 environment in a VMware vSphere environment
Microsoft explicitly does not support NFS in either scenario.
|
NFS
NFS is used as the storage protocol in vSphere environments.
In virtualized environments, In-Guest iSCSI support is still a hard requirement if one of the following scenarios is pursued:
- Microsoft Failover Clustering (MSFC) in a VMware vSphere environment
- A supported MS Exchange 2013 environment in a VMware vSphere environment
Microsoft explicitly does not support NFS in either scenario.
|
|
|
|
Bare Metal |
|
|
Bare Metal Compatibility
Details
|
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Any operating system currently not qualified for support can always be individually qualified with a specific SANsymphony version and will get full support after passing the self-qualification process.
SANsymphony provides virtual disks (block storage LUNs) to all of the popular host operating systems that use standard disk drives with 512 byte or 4K byte sectors. These hosts can access the SANsymphony virtual disks via SAN protocols including iSCSI, Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE).
Mainframe operating systems such as IBM z/OS, z/TPF, z/VSE or z/VM are not supported.
SANsymphony itself runs on Microsoft Windows Server 2012/R2 or higher.
|
Microsoft Windows Server 2008R2/2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.7/6.8/7.2
SLES 11/12
Oracle Linux 6.7/7.2
AIX 7.1/7.2 on POWER
Oracle Solaris 11.3 on SPARC
ESXi 5.5/6 with VMFS (very specific use-cases)
Nutanix Volumes, previously Acropolis Block Services (ABS), provides highly available block storage as iSCSI LUNs to clients. Clients can be non-Nutanix servers external to the cluster or guest VMs internal or external to the cluster, with the cluster block storage configured as one or more volume groups. This block storage acts as the iSCSI target for client Windows or Linux operating systems running on a bare metal server or as guest VMs using iSCSI initiators from within the client operating systems.
AOS 5.5 introduced support for Windows Server 2016.
IBM AIX: PowerHA cluster configurations are not supported as AIX requires a shared drive for cluster configuration information and that drive cannot be connected over iSCSI.
Nutanix Volumes supports exposing LUNs to ESXi clusters in very specific use cases.
|
N/A
HPE SimpliVity 2600 does not support any non-hypervisor platforms.
|
|
Bare Metal Interconnect
Details
|
iSCSI
FC
FCoE
|
iSCSI
Block storage acts as one or more targets for client Windows or Linux operating systems running on a bare metal server or as guest VMs using iSCSI initiators from within the client operating systems.
ABS does not require multipath I/O (MPIO) configuration on the client but it is compatible with clients that are currently using or configured with MPIO.
|
N/A
HPE SimpliVity 2600 does not support any non-hypervisor platforms.
|
|
|
|
Containers |
|
|
Container Integration Type
Details
|
Built-in (native)
DataCore provides its own Volume Plugin for natively providing Docker container support, available on Docker Hub.
DataCore also has a native CSI integration with Kubernetes, available on Github.
|
Built-in (native)
Nutanix provides its own software plugins for container support (both Docker and Kubernetes).
Nutanix also developed its own container platform software called 'Karbon'. Karbon provides on-premises Kubernetes-as-a-Service (KaaS) in order to enable end-users to quickly adopt container services.
Nutanix Karbon is not a hard requirement for running Docker containers and Kubernetes on top of ECP, however it does make it easier to use and consume.
Nutanix Karbon can only be used in combination with Nutanix native hypervisor AHV. VMware vSphere and Microsoft Hyper-V are not supported at this time. As Nutanix Karbon leverages Nutanix Volumes, it is not available for the Starter edition.
|
N/A
HPE SimpliVity 2600 relies on the container support delivered by the hypervisor platform.
|
|
Container Platform Compatibility
Details
|
Docker CE/EE 18.03+
Docker EE = Docker Enterprise Edition
|
Docker EE 1.13+
Node OS CentOS 7.5
Kubernetes 1.11-1.14
Nutanix software plugins support both Docker and Kubernetes.
Nutanix Docker Volume plugin (DVP) supports Docker 1.13 and higher.
Nutanix Karbon 1.0.3 supported the following OS images:
- Node OS CentOS 7.5.1804-ntxnx-0.0, CentOS 7.5.1804-ntxnx-0.1
- Kubernetes v1.13.10, v1.14.6, v1.15.3
The current version of Nutanix Karbon is 2.1
Docker EE = Docker Enterprise Edition
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
|
Container Platform Interconnect
Details
|
Docker Volume plugin (certified)
The DataCore SDS Docker Volume plugin (DVP) enables Docker Containers to use storage persistently, in other words enables SANsymphony data volumes to persist beyond the lifetime of both a container or a container host. DataCore leverages SANsymphony iSCSI and FC to provide storage to containers. This effectively means that the hypervisor layer is bypassed.
The DataCore SDS Volume plugin (DVP) is officially 'Docker Certified' and can be downloaded from the Docker Hub. The plugin is installed inside the Docker host, which can be either a VM or a bare metal host connected to a SANsymphony storage cluster.
For more information please go to: https://hub.docker.com/plugins/datacore-sds-volume-plugin
The Kubernetes CSI plugin can be downloaded from GitHub. The plugin is automatically deployed as several pods within the Kubernetes system.
For more information please go to: https://github.com/DataCoreSoftware/csi-plugin
Both plugins are supported with SANsymphony 10 PSP7 U2 and later.
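As an illustration of how a Docker host typically consumes such a volume plugin, the sketch below uses the Docker SDK for Python to create a persistent volume through a named volume driver and mount it into a container. The driver name and driver options are placeholders, not the documented DataCore plugin parameters.

```python
# Illustrative sketch of consuming a Docker volume plugin via the Docker SDK
# for Python (pip install docker). Driver name and driver_opts are placeholders;
# consult the DataCore plugin documentation for the actual values.
import docker

client = docker.from_env()

# Create a persistent volume backed by the (hypothetical) SDS volume driver.
volume = client.volumes.create(
    name="app-data",
    driver="datacore-sds-volume-plugin",   # placeholder plugin name
    driver_opts={"size": "20GB"},          # placeholder option
)

# Run a container that mounts the volume; the data persists beyond the container.
client.containers.run(
    "alpine:3.18",
    command=["sh", "-c", "echo hello > /data/hello.txt"],
    volumes={volume.name: {"bind": "/data", "mode": "rw"}},
    remove=True,
)
```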
|
Docker Volume plugin (certified)
The Nutanix Docker Volume plugin (DVP) enables Docker Containers to use storage persistently, in other words enables Nutanix data volumes to persist beyond the lifetime of both a container or a container host. Nutanix leverages Acropolis Block Services (ABS) to provide storage to containers through in-guest iSCSI connections. This effectively means that the hypervisor layer is bypassed.
The Nutanix Docker Volume plugin (DVP) is officially 'Docker Certified' and can be downloaded from the online Docker Store. The plug-in is installed inside the Docker host, which can be either a VM or a bare metal host connected to a Nutanix storage cluster.
The Nutanix Docker Volume plugin (DVP) is supported with AOS 4.7 and later.
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
|
|
Container Host Compatibility
Details
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The DataCore native plug-ins are container-host centric and as such can be used across all SANsymphony-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, KVM, XenServer, Oracle VM Server) as well as on bare metal platforms.
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The Nutanix native plug-ins are container-host centric and as such can be used across all Nutanix-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, Nutanix AHV) as well as on bare metal platforms.
Nutanix Karbon can only be used in combination with Nutanix native hypervisor AHV. VMware vSphere and Microsoft Hyper-V are not supported at this time.
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
|
Container Host OS Compatibility
Details
|
Linux
All Linux versions supported by Docker CE/EE 18.03+ or higher can be used.
|
CentOS 7
Red Hat Enterprise Linux (RHEL) 7.3
Ubuntu Linux 16.04.2
The Nutanix Docker Volume Plugin (DVP) has been qualified for the mentioned Linux OS versions. However, the plug-in may also work with older OS versions.
Container hosts running the Windows OS are not (yet) supported.
|
Linux
Windows 10 or 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
|
Container Orch. Compatibility
Details
|
Kubernetes 1.13+
|
Kubernetes
Nutanix Karbon 2.1 supports Kubernetes.
|
Kubernetes 1.6.5+ on ESXi 6.0+
|
|
Container Orch. Interconnect
Details
|
Kubernetes CSI plugin
The Kubernetes CSI plugin provides several components for integrating storage into Kubernetes for containers to consume.
DataCore SANsymphony provides native industry standard block protocol storage presented over either iSCSI or Fibre Channel. YAML files can be used to configure Kubernetes for use with DataCore SANsymphony.
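As a hedged example of the kind of YAML-driven configuration mentioned above, the sketch below uses the official Kubernetes Python client to register a StorageClass that points at a CSI provisioner. The provisioner string and parameters are placeholders rather than DataCore's documented values.

```python
# Illustrative sketch: registering a StorageClass for a CSI driver using the
# official Kubernetes Python client (pip install kubernetes). The provisioner
# name and parameters are placeholders, not DataCore's documented values.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

storage_class = client.V1StorageClass(
    api_version="storage.k8s.io/v1",
    kind="StorageClass",
    metadata=client.V1ObjectMeta(name="sansymphony-block"),
    provisioner="csi.example.datacore.com",   # placeholder CSI driver name
    parameters={"fsType": "ext4"},            # placeholder parameters
    reclaim_policy="Delete",
    volume_binding_mode="Immediate",
)

client.StorageV1Api().create_storage_class(body=storage_class)
```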
|
Kubernetes Volume plugin
The Nutanix Container Storage Interface (CSI) Volume Driver for Kubernetes uses Nutanix Volumes and Nutanix Files to provide scalable, persistent storage for stateful applications.
Kubernetes contains an in-tree CSI Volume Plug-In that allows the out-of-tree Nutanix CSI Volume Driver to gain access to containers and provide persistent-volume storage. The plugin runs in a pod and dynamically provisions requested PersistentVolumes (PVs) using Nutanix Files and Nutanix Volumes storage.
When Nutanix Files is used for persistent storage, applications on multiple pods can access the same storage and also have the benefit of multi-pod read-and-write access.
The Nutanix CSI Volume Driver requires Kubernetes v1.13 or later and AOS 5.6.2 or later. When using Nutanix Volumes, Kubernetes worker nodes must have the iSCSI package installed.
Nutanix also developed its own container platform software called 'Karbon'. Karbon provides on-premises Kubernetes-as-a-Service (KaaS) in order to enable end-users to quickly adopt container services.
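To show how a stateful application would request storage from such a CSI driver, here is a minimal PersistentVolumeClaim sketch using the Kubernetes Python client. The storage class name is a placeholder, and ReadWriteMany access is only meaningful when the class is backed by a file service such as Nutanix Files.

```python
# Illustrative sketch: requesting a PersistentVolume from a CSI-backed
# StorageClass with the Kubernetes Python client. The storage class name is a
# placeholder; ReadWriteMany access assumes a file-service-backed class.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="shared-content"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],              # multi-pod read/write
        storage_class_name="nutanix-files-example",  # placeholder class name
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```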
|
Kubernetes Volume Plugin
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, VVol, VMFS or NFS datastores.
|
|
|
|
VDI |
|
|
VDI Compatibility
Details
|
VMware Horizon
Citrix XenDesktop
There is no validation check being performed by SANsymphony for VMware Horizon or Citrix XenDesktop VDI platforms. This means that all versions supported by these vendors are supported by DataCore.
|
VMware Horizon
Citrix XenDesktop (certified)
Citrix Cloud (certified)
Parallels RAS
Xi Frame
Nutanix has published Reference Architecture whitepapers for VMware Horizon and Citrix XenDesktop platforms.
Nutanix Acropolis Hypervisor (AHV) is qualified as Citrix Ready. The Citrix Ready Program showcases verified products that are trusted to enhance Citrix solutions for mobility, virtualization, networking and cloud platforms. The Citrix Ready designation is awarded to third-party partners that have successfully met test criteria set by Citrix, and gives customers added confidence in the compatibility of the joint solution offering.
Nutanix qualifies as a Citrix Workspace Appliance: Nutanix and Citrix have worked side-by-side to automate the provisioning and integration of the entire Citrix stack from cloud service to on-premises applications and desktops. Nutanix Prism automatically provisions Citrix Cloud Connectors VMs for instantly connecting the on-premises Nutanix cluster to the XenApp and XenDesktop Service, and registers all nodes in the Nutanix cluster as a XenApp and XenDesktop Service Resource Location.
Parallels Remote Application Server (RAS) is a Nutanix-ready desktop virtualization solution. A Parallels RAS and Nutanix on VMware reference architecture white paper was published in July 2016.
In May 2019 Nutanix introduced Xi Frame as a new option to use Frame Desktop-as-a-Service (DaaS) with apps, desktops, and user data hosted on Nutanix on-premises infrastructure. Xi Frame only supports the Nutanix AHV hypervisor with AOS 5.10 or later. At this time Windows 10, Windows Server 2016 and Windows Server 2019 guest OS are supported. Linux support (Ubuntu, CentOS) will be added in the second half of 2019.
|
VMware Horizon
Citrix XenDesktop
HPE SimpliVity OmniStack 3.7.8 introduces support for VMware Horizon Instant Clone provisioning technology for vSphere 6.7.
HPE SimpliVity 2600 has published a Reference Architecture whitepaper for VMware Horizon 7.4. HPE SimpliVity 2600 has been validated by LoginVSI.
|
|
|
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
DataCore has not published any recent VDI reference architecture whitepapers. The only VDI-related paper that includes a Login VSI benchmark dates back to December 2010. In that paper a 2-node SANsymphony cluster was able to sustain a load of 220 VMs based on the Login VSI 2.0.1 benchmark.
|
VMware: up to 170 virtual desktops/node
Citrix: up to 175 virtual desktops/node
VMware Horizon 7.2: Load bearing is based on Login VSI tests performed on hybrid Lenovo ThinkAgile HX3320 Series using 2vCPU Windows 10 desktops and the Knowledge Worker profile.
Citrix XenDesktop 7.15: Load bearing is based on Login VSI tests performed on hybrid Lenovo ThinkAgile HX3320 Series using 2vCPU Windows 10 desktops and the Knowledge Worker profile.
For detailed information please view the corresponding whitepaper, dated July 2019.
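The published per-node densities lend themselves to a simple sizing calculation. The sketch below estimates the node count needed for a target desktop population with one spare node for failover; it is purely an illustration of the arithmetic, not vendor sizing guidance.

```python
# Illustrative sizing arithmetic only, not vendor guidance: estimate how many
# nodes a VDI cluster needs for a target desktop count, based on a published
# desktops-per-node figure plus one spare node for failover (N+1).
import math


def nodes_required(total_desktops: int, desktops_per_node: int, spare_nodes: int = 1) -> int:
    """Return the node count needed to host the desktops plus spare capacity."""
    active_nodes = math.ceil(total_desktops / desktops_per_node)
    return active_nodes + spare_nodes


if __name__ == "__main__":
    # Example using the ~170 desktops/node figure cited above for VMware Horizon.
    print(nodes_required(1000, 170))  # -> 6 active nodes + 1 spare = 7
```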
|
VMware: up to 175 virtual desktops/node
Citrix: unknown
VMware Horizon 7.4: Load bearing number is based on Login VSI tests performed on HPE SimpliVity 170 Gen10 all-flash model using 2vCPU Windows 10 desktops and the Knowledge Worker profile.
|
|
|
Server Support
|
|
|
|
|
|
|
Server/Node |
|
|
Hardware Vendor Choice
Details
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
Super Micro (Nutanix branded)
Super Micro (source your own)
Dell EMC (OEM)
Lenovo (OEM)
Fujitsu (OEM)
HPE (OEM)
IBM (OEM)
Inspur (OEM)
Cisco UCS (Select)
Crystal (Rugged)
Klas Telecom (Rugged)
Many (CE only)
When end-user organizations order a Nutanix solution from Nutanix channel partners they get the Nutanix software on Nutanix branded hardware. Nutanix is sourcing this hardware from SuperMicro. End-user organizations also have the option to source their own SuperMicro hardware and buy licensing and support from Nutanix.
Dell and Nutanix reached an OEM agreement in 2015. Lenovo and Nutanix reached an OEM agreement in 2016. IBM and Nutanix reached an OEM agreement in 2017. Customers and prospects should note that these hardware platforms should not be mixed in one cluster (technically possible but not supported).
In July 2016 Nutanix and Crystal Group partnered to provide Enterprise Cloud solutions for extreme environments in energy, mining, hospitality, military, government and more, combining the highly reliable, ruggedized Crystal RS2616PS18 Rugged 2U server platform with the award-winning Nutanix Enterprise Cloud Platform for use in tactical environments.
As of August 2016 Nutanix completed independent validation of running Nutanix software on Cisco Unified Computing System (UCS) C-Series servers. Nutanix for Cisco UCS rack-mount servers is available through select Cisco and Nutanix partners worldwide.
In February 2017 Nutanix and Klas forged a partnership to transform tactical data center solutions for government and military operations. The Klas Telecom Voyager Tactical Data Center system running the Nutanix Enterprise Cloud Platform allows data center operations to be carried out in the field via a single, airline carry-on-sized case.
As of July 2017 Nutanix completed independent validation of running Nutanix software on Cisco Unified Computing System (UCS) B-Series servers. Nutanix for Cisco UCS blade servers is available through select Cisco and Nutanix partners worldwide.
As of July 2017 Nutanix completed independent validation of running Nutanix software on HPE Proliant servers. Nutanix for HPE Proliant rack-mount servers is available through select HPE and Nutanix partners worldwide.
In September 2017 Nutanix released a version specifically for the IBM Power platform.
In May 2019 Fujitsu announced the availability of Fujitsu XF-series, combining Nutanix software with Fujitsu PRIMERGY servers.
In October 2019 HPE announced the availability of HPE ProLiant DX solution and HPE GreenLake for Nutanix.
The Nutanix Community Edition (CE) can be run on almost any x86 hardware but has community support only. Nutanix CE can also be run on AWS and Google using Ravello Systems for testing and training purposes.
A Nutanix node can run from AWS, backed with S3 storage as a backup target.
|
HPE
HPE SimpliVity 2600 deployments are solely based on HPE Apollo 2600 Gen10 server hardware.
|
|
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
5 Native Models (SX-1000, NX-1000, NX-3000, NX-5000, NX-8000)
15 Native Model sub-types
Different models are available for different workloads.
Nutanix Native G6 and G7 Models (Super Micro):
SX-1000 (SMB, max 4 nodes per block and max 2 blocks)
NX-1000 (ROBO)
NX-3000 (Mainstream; VDI GPU)
NX-5000 (Storage Heavy; Storage Only)
NX-8000 (High Performance)
Dell EMC XC Core OEM Models (hardware support provided through Dell EMC; software support provided through Nutanix):
XC640-4 (ROBO 3-node cluster)
XC640-4i (ROBO 1-node cluster)
XC640-10 (Compute Heavy, VDI)
XC740xd-12 (Storage Heavy)
XC740xd-12C (Storage Heavy - up to 80TB of cold tier expansion)
XC740xd-12R (Storage Heavy - up to 80TB single-node replication target)
XC740xd-24 (Performance Intensive)
XC740xd2 (Storage Heavy - up to 240TB for file and object workloads)
XC940-24 (Memory and Performance Intensive)
XC6420 (High-Density)
XCXR2 (Remote, harsh environments)
Lenovo ThinkAgile HX OEM Models (VMware ESXi, Hyper-V and Nutanix AHV supported):
HX 1000 Series (ROBO)
HX 2000 Series (SMB)
HX 3000 Series (Compute Heavy, VDI)
HX 5000 Series (Storage Heavy)
HX 7000 Series (Performance Intensive)
Fujitsu XF OEM Models (VMware ESXi and Nutanix AHV supported):
XF1070 4LFF (ROBO)
XF3070 10SFF (General Virtualized Workloads)
XF8050 24SFF (Compute Heavy Workloads)
XF8055 12LFF (Storage Heavy Workloads)
HPE ProLiant DX OEM Models:
DX360-4-G10
DX380-8-G10
DX380-12-G10
DX380-24-G10
DX560-24-G10
DX2200-DX170R-G10-12LFF
DX2200-DX190R-G10-12LFF
DX2600-DX170R-G10-24SFF
DX4200-G10-24LFF
IBM POWER OEM Models (AHV-only):
IBM CS821 (Middleware, DevOps, Web Services)
IBM CS822 (Storage Heavy Analytics, Open Source Databases)
Cisco UCS C-Series Models:
C220-M5SX (VDI, Middleware, Web Services)
C240-M5L (Storage Heavy, Server Virtualization)
C240-M5SX (Exchange, SQL, Large Databases)
C220-M4S (VDI, Middleware, Web Services)
C240-M4L (Storage Heavy, Server Virtualization)
C240-M4SX (Exchange, SQL, Large Databases)
Cisco UCS B-Series Models:
B200-M4 in 5108 Blade Chassis within 6248UP/6296UP/6332 Fabrics
HPE Proliant Models (ESXi, Hyper-V and Nutanix AHV supported):
DL360 Gen10 8SFF (VDI, Middleware, Web Services)
DL380 Gen10 2SFF + 12LFF (Storage Heavy, Server Virtualization)
DL380 Gen10 26SFF (High Performance, Exchange, SQL, Large Databases)
HPE Apollo Models (ESXi, Hyper-V and Nutanix AHV supported):
XL170r Gen 10 6SFF (VDI, Middleware, Web Services, Storage Heavy, Server Virtualization, High Performance, Exchange, SQL, Large Databases)
All models support hybrid and all-flash disk configurations.
All models can be deployed as storage-only nodes. These nodes run AHV and thus do not require additional hypervisor licenses.
Citrix XenServer is not supported on the NX-6035C-G5 (storage-only or light-compute) model.
LFF = Large Form Factor (3.5')
SFF = Small Form Factor (2.5')
|
2 models
HPE SimpliVity 2600 is available in 2 series:
170 Gen10-series
190 Gen10-series
HPE positions HPE SimpliVity 2600 as a VDI Optimized Hyperconverged Solution. The platform is also beneficial for compute-intensive workload use cases in high density environments.
The Gen10 6000-series is best for high performance, IO intensive mixed workloads, whereas the gen10 4000-series is best for typical workloads (heavy reads/lower ratio of writes) at lower cost than the 6000-series. The difference between 4000/6000 is the SSD-type that is inserted in the server hardware.
There are no HPE SimpliVity 2600 Hybrid (SSD+HDD) models to choose from.
|
|
|
1, 2 or 4 nodes per chassis
Note: Because SANsymphony is mostly hardware agnostic, customers can opt for multiple server densities.
Note: In most cases 1U or 2U building blocks are used.
Super Micro also offers 2U chassis that can house 4 compute nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
Native:
1 (NX1000, NX3000, NX6000, NX8000)
2 (NX6000, NX8000)
4 (NX1000, NX3000)
3 or 4 (SX1000)
nodes per chassis
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
170-series: 3-4 nodes per chassis
190-series: 2 nodes per chassis
The HPE Apollo 2600 server is a 2U/4-node building block. The nodes in each system have an identical hardware configuration.
Up to 4 slots in each chassis may be used for placement of HPE SimpliVity 2600 170-series nodes.
Up to 2 slots in each chassis may be used for placement of HPE SimpliVity 2600 190-series nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
|
|
Yes
DataCore does not explicitly recommend using different hardware platforms, but as long as the hardware specifications are broadly comparable, there is no reason to insist on one hardware vendor or the other. This is proven in practice: some customers run their production DataCore environments on comparable servers from different vendors.
|
Yes
Nutanix now allows mixing of all-flash (SSD-only) and hybrid (SSD+HDD) nodes in the same cluster. A minimum of 2 all-flash (SSD-only) nodes is required in mixed all-flash/hybrid clusters.
|
Partial
For mixing HPE SimpliVity 2600 nodes in a cluster, HPE recommends following these general guidelines:
- Only models of equal socket count are supported.
- All hosts should contain equal amounts of CPU & Memory.
- As a best practice, it’s recommended to use the same CPU model within a single cluster.
Heterogenous Federation Support: Although HPE SimpliVity 380 nodes cannot be mixed with HPE SimpliVity 2600 nodes or legacy SimpliVity nodes within the same cluster, they can coexist with such clusters within the same Federation.
HPE OmniStack 3.7.9 introduces support for using different versions of the OmniStack software within a federation. Some clusters can have hosts using HPE OmniStack 3.7.9 and other clusters can have hosts all using HPE OmniStack 3.7.8 and above. The hosts in each datacenter and cluster must use the same version of the software.
|
|
|
|
Components |
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible: up to 10 options
Depending on the hardware model Nutanix offers one or multiple choices on CPU per Node: 8, 12, 16, 20, 24, 28, 32, 36, 44 cores. None of the hardware platforms offer all of the above processor configurations. Nutanix's current hardware platforms provide up to 5 different CPU configurations.
The NX-1000 series (NX-1065-G7 and NX-1175-G7 submodels), the NX-3000 series (NX-3060-G7 submodel) and the NX-8000 series (NX-8035-G7 and NX-8170-G7 submodels) currently support 2nd generation Intel Xeon Scalable processors (Cascade Lake). The NX-5000 series still lacks a submodel that supports 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
Nutanix still provides multiple native models and submodels that offer a choice of previous generation Intel Xeon Scalable processors (Skylake).
Lenovo ThinkAgile HX (Nutanix OEM) models were the first to ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors. In July 2019 Nutanix native models (G7) and Dell EMC XC (Nutanix OEM) models also started to ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
AOS 5.17 introduces support for AMD processors across hypervisors (ESXi/Hyper-V/AHV). The first hardware partner to release an AMD platform for AOS is HPE with more to follow.
|
Flexible
HPE SimpliVity 2600 on Apollo Gen10 server hardware:
- Choice of Intel Xeon Scalable (Skylake) Silver and Gold processors (2x per node), 12 to 22 cores selectable.
- Single socket option or dual socket option with fewer cores (8 or 10) for ROBO deployments only.
Although HPE does support 2nd generation Intel Xeon Scalable (Cascade Lake) processors in its ProLiant server line-up as of April 2019, HPE SimpliVity nodes do not yet ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
|
|
|
Flexible
|
Flexible: up to 10 options
Depending on the hardware model Nutanix offers one or multiple choices on memory per Node: 64GB, 96GB, 128GB, 192GB, 256GB, 384GB, 512GB, 640GB, 768GB, 1TB, 1.5TB or 3.0TB. None of the model sub-types offers all of the above memory configurations. Nutanix's current hardware platforms provide up to 10 different memory configurations.
|
Flexible
HPE SimpliVity 2600 on Apollo Gen10 server hardware:
- 384GB to 768GB per node selectable.
- 128GB per node selectable for ROBO deployments only.
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible: capacity (up to 7 options per disk type); number of disks (Dell, Cisco)
Fixed: Number of disks (hybrid, most all-flash)
Some All-Flash models offer 2 or 3 options for the number of SSDs to be installed in each appliance.
AOS supports the replacement of existing drives by larger drives when they become available. Nutanix facilitates these upgrades.
|
Fixed: number of disks + capacity
For HPE SimpliVity 2600 on Apollo Gen10 server hardware, the following kits are selectable per node:
XS (6x 1.92TB SSD in RAID1+0)
The always-on inline deduplication and compression by default allows HPE SimpliVity 2600 to have much higher amounts of effective storage capacity on a single node than the raw disk capacity would indicate.
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible: up to 4 add-on options
Depending on the hardware model Nutanix by default includes 1GbE or 10GbE network adapters and one or more of the following network add-ons: dual-port 1 GbE, dual-port 10 GbE, dual-port 10 GBase-T, quad-port 10 GbE, dual-port 25 GbE, dual-port 40GbE.
AOS 5.9 introduces support for Remote Direct Memory Access (RDMA). RDMA provides a node with direct access to the memory subsystems of other nodes in the cluster, without needing the CPU-bound network stack of the operating system. RDMA allows low-latency data transfer between memory subsystems, and so helps to improve network latency and lower CPU use.
With two RDMA-enabled NICs per node installed, the following platforms support RDMA:
- NX-3060-G6
- NX-3155G-G6
- NX-3170-G6
- NX-8035-G6
- NX-8155-G6
RDMA is not supported for the NX-1065-G6 platform as this platform includes only one NIC per node.
|
Flexible: additional 10Gbps (190-series only)
HPE SimpliVity 2600 190 Gen10-series:
One optional 10 Gbps or 10/25 Gbps PCI adapter can be added to a node.
In HPE SimpliVity 3.7.10 the Deployment Manager allows configuring NIC teaming during the deployment process. With this feature, one or more NICs can be assigned to the Management network, and one or more NICs to the Storage and Federation networks (shared between the two).
|
|
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
DataCore SANsymphony supports the hardware that is on the hypervisor HCL.
VMware vSphere 6.5U1 officially supports several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
Windows 2016 supports two graphics virtualization technologies available with Hyper-V to leverage GPU hardware:
- Discrete Device Assignment
- RemoteFX vGPU
More information is provided here: https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-graphics-virtualization
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
NVIDIA Tesla (specific appliance models only)
Currently Nutanix and OEM partners support the following GPUs in a single server:
NVIDIA Tesla M60 (2x in NX-3155G-G5; 1x in NX-3175-G5; 2x in HX3510-G; 3x in XC740xd-24)
NVIDIA Tesla M10 (1x in NX-3170-G6; 2x in NX-3155G-G5; 1x in NX-3175-G5; 2x in XC740xd-24)
NVIDIA Tesla P40 (1x in NX-3170-G6; 2x in NX-3155G-G5; 1x in NX-3175-G5;3x in XC740xd-24)
NVIDIA Tesla V100 (AHV clusters running AHV-20170830.171 and later only)
Nutanix and OEM partners do not support Intel and/or AMD GPUs at this time.
AOS 5.5 introduced vGPU support for guest VMs in an AHV cluster.
|
NVIDIA Tesla (190-series only)
HPE SimpliVity 2600 offers a GPU option in the HPE SimpliVity 190 Gen10-series for leveraging vGPU in virtual desktop/application environments.
Currently HPE SimpliVity 2600 190 Gen10-series supports the following GPUs in a single server:
2x NVIDIA Tesla M10
|
|
|
|
Scaling |
|
|
|
CPU
Memory
Storage
GPU
The SANsymphony platform allows for expanding of all server hardware resources.
|
Memory
Storage
GPU
AOS supports the replacement of existing drives by larger drives when they become available. Nutanix facilitates these upgrades.
Empty drive slots in Cisco and Dell hardware models can be filled with additional physical disks when needed.
AOS 5.1 introduced general availability (GA) support for hot plugging virtual memory and vCPU on VMs that run on top of the AHV hypervisor from within the Self-Service Portal only. This means that the memory allocation and the number of CPUs on VMs can be increased while the VMs are powered on. However, the number of cores per CPU socket cannot be increased while the VMs are powered on.
AOS 5.11 with Foundation 4.4 and later introduces support for up to 120 Tebibytes (TiB) of storage per node. For these larger capacity nodes, each Controller VM requires a minimum of 36GB of host memory.
|
CPU
Memory
GPU
|
|
|
Storage+Compute
Compute-only
Storage-only
Storage+Compute: In a single-layer deployment existing SANsymphony clusters can be expanded by adding additional nodes running SANsymphony, which adds additional compute and storage resources to the shared pool. In a dual-layer deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded simultaneously.
Compute-only: Because SANsymphony leverages virtual block volumes (LUNs), storage can be presented to hypervisor hosts not participating in the SANsymphony cluster. This is also beneficial to migrations, since it allows for online storage vMotions between SANsymphony and non-SANsymphony storage platforms.
Storage-only: In a dual-layer or mixed deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded independent from each other.
|
Compute+storage
Compute-only (NFS; SMB3)
Storage-only
Storage+Compute: Existing Nutanix clusters can be expanded by adding additional Nutanix nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: Because Nutanix leverages file-level protocols (NFS/SMB), storage can be presented to hypervisor hosts not participating in the Nutanix cluster. This is also beneficial to migrations, since it allows for online storage vMotions between Nutanix and non-Nutanix storage platforms.
Storage-only: Any Nutanix appliance model can be designated as a storage-only node. The storage-only node is part of the Nutanix storage cluster but does not actively participate in the hypervisor cluster. The storage-only node is always installed with AHV, which economizes on vSphere and/or Hyper-V licenses.
AOS 5.10 introduces the ability to add a never-schedulable node when you want to increase data storage on a Nutanix AHV cluster but do not want any VMs to run on that node. In this way, compliance and licensing requirements of virtual applications that have a physical CPU socket-based licensing model can be met when scaling out a Nutanix AHV cluster.
|
Compute+storage
Compute-only
Storage+Compute: Existing HPE SimpliVity 2600 federations can be expanded by adding additional HPE SimpliVity 2600 nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: Because HPE SimpliVity 2600 leverages a file-level protocol (NFS), storage can be presented to hypervisor hosts not participating in the HPE SimpliVity 2600 cluster. This is also beneficial to migrations, since it allows for online storage vMotions between HPE SimpliVity 2600 and non-HPE SimpliVity 2600 storage platforms.
Storage-only: N/A; A HPE SimpliVity 2600 node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
|
|
|
1-64 nodes in 1-node increments
There is a maximum of 64 nodes within a single cluster. Multiple clusters can be managed through a single SANsymphony management instance.
|
3-Unlimited nodes in 1-node increments
The hypervisor cluster scale-out limits still apply, e.g. 64 hosts for VMware vSphere and Microsoft Hyper-V in a single cluster. Nutanix AHV clusters have no scaling limit.
SX1000 clusters scale to a maximum of 4 nodes per block and a maximum of 2 blocks is allowed, making for a total of 8 nodes.
|
vSphere: 2-16 storage nodes (cluster); 2-8 storage nodes (stretched cluster); 2-32+ storage nodes (Federation) in 1-node increments
HPE SimpliVity 2600 currently offers support for up to 16 storage nodes and 720 VMs within a single VSI cluster, and up to 8 storage nodes within a single VDI cluster. Up to 32 storage nodes are supported within a single Federation. Multiple Federations can be used in either single-site or multi-site deployments, allowing for a scalable as well as a flexible solution. Data protection can be configured to run in between federations.
Cluster scale enhancements (16 nodes instead of 8 nodes within a single cluster) apply to new as well as existing SimpliVity clusters that run OmniStack 3.7.7.
For specific use-cases a Request for Product Qualification (RPQ) process can be initiated to authorize more than 32 storage nodes within a single Federation.
HPE SimpliVity 2600 also supports adding compute nodes to a storage node cluster in vSphere environments.
Stretched Clusters with Availability Zones remain supported for up to 8 HPE OmniStack hosts (4 per Availability Zone).
OmniStack 3.7.7 introduces support for multi-host deployment at one time to a cluster.
HPE OmniStack 3.7.8 introduces support for 48 clusters per Federation (previously 16 clusters) when using multiple vCenter Servers in Enhanced Linked Mode.
VDI = Virtual Desktop Infrastructure
VSI = Virtual Server Infrastructure
|
|
Small-scale (ROBO)
Details
|
2 Node minimum
DataCore prevents split-brain scenarios by always having an active-active configuration of SANsymphony with a primary and an alternate path.
In case the SANsymphony servers are fully operational but cannot see each other, the application host will still be able to read and write data via the primary path (no switch to secondary). Mirroring is interrupted because of the lost connection and the administrator is informed accordingly. All writes are stored on the locally available storage (primary path) and all changes are tracked. As soon as the connection between the SANsymphony servers is restored, the mirror recovers automatically based on these tracked changes.
Dual updates due to misconfiguration are detected automatically, and data corruption is prevented by freezing the vDisk and waiting for user input to resolve the conflict. Possible resolutions are to declare one side of the mirror the new active data set and discard all tracked changes on the other side, or to split the mirror and manually merge the two data sets into a third vDisk.
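To make the change-tracking and resynchronization behavior described above more concrete, the following minimal Python sketch (an illustration under simplifying assumptions, not DataCore code; all class and method names are hypothetical) shows writes being tracked while the mirror link is down and only the tracked blocks being copied once the link is restored.
```python
# Minimal sketch of mirror change tracking and incremental resync.
# Hypothetical names; real SANsymphony behavior is far more involved.
class MirroredVirtualDisk:
    def __init__(self):
        self.primary = {}           # block address -> data (local storage)
        self.partner = {}           # block address -> data (mirror copy)
        self.link_up = True
        self.dirty_blocks = set()   # blocks changed while the mirror link was down

    def write(self, address, data):
        self.primary[address] = data
        if self.link_up:
            self.partner[address] = data        # synchronous mirror update
        else:
            self.dirty_blocks.add(address)      # track the change for later resync

    def on_link_restored(self):
        for address in sorted(self.dirty_blocks):
            self.partner[address] = self.primary[address]   # copy only what changed
        self.dirty_blocks.clear()
        self.link_up = True
```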
|
3 Node minimum (data center)
1 or 2 Node minimum (ROBO)
1 Node minimum (backup)
Nutanix's smallest deployment for Remote Office and Branch Office (ROBO) environments is a 1-node cluster based on the NX-1175S-G5 hardware model. The Starter Edition license is often best suited for this type of deployment.
Nutanix also provides a single-node configuration for off-cluster on-premises backup purposes by leveraging native snapshots. The single node supports compression and deduplication, but cannot be used as a generic backup target. It uses Replication Factor 2 (RF2) to protect data against magnetic disk failures.
AOS 5.6 introduces support for 2-node ROBO environments. These deployments require a Witness VM in a 3rd site as tie-breaker. This Witness is available for VMware vSphere and AHV. The same Witness VM is also used in Stretched Cluster scenarios based on VMware vSphere.
Nutanix's smallest deployment for data center environments contains 3 nodes in a Nutanix Xpress 2U SX-1000 configuration that is specifically designed for small-scale environments. The Starter Edition license is often best suited for this type of deployment.
|
2 Node minimum
HPE SimpliVity 2600 supports 2-node configurations without sacrificing any of the data reduction and data protection capabilities. The HPE SimpliVity 2600 is ideal for ROBO deployments.
All the remote sites can be centrally managed from a single dashboard at the central site.
|
|
|
Storage Support
|
|
|
|
|
|
|
General |
|
|
|
Block Storage Pool
SANsymphony only serves block devices to the supported OS platforms.
|
Distributed File System (ADSF)
Nutanix is powered by the Acropolis Distributed Storage Fabric (ADSF), previously known as Nutanix Distributed File System (NDFS).
|
Parallel File System
on top of Object Store
Both the File System and the Object Store have been developed in-house by HPE SimpliVity.
|
|
|
Partial
DataCore's core approach is to provide storage resources to the applications without having to worry about data locality. But if data locality is explicitly requested, the solution can partially be designed that way by configuring the first instance of all data to be stored on locally available storage (primary path) and the mirrored instance to be stored on the alternate path (secondary path). Furthermore, every hypervisor host can have a local preferred path, indicated by the ALUA path preference.
By default data does not automatically follow the VM when the VM is moved to another node. However, virtual disks can be relocated on the fly to another DataCore node without losing I/O access, although this relocation takes some time due to the required data copy operations. This kind of relocation is usually done manually, but such tasks can be automated and integrated with VM orchestration using PowerShell, for example.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located on the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors that choose not to use data locality advocate that the additional network latency is negligible.
|
Full
Data follows the VM, so if a VM is moved to another server/node, when data is read that data is copied to the node where the VM resides. New/changed data is always written locally and to one or more remote locations for protection.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located on the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors that choose not to use data locality advocate that the additional network latency is negligible.
|
Full
When a VM is created, it is optimally placed on the best 2 nodes available. When data is written, it is deduplicated and compressed on arrival, and then stored on the local node as well as a dedicated partner node. To keep performance optimal throughout the VM's lifecycle, OmniStack automatically creates VMware vSphere DRS affinity rules and policies. This is called Intelligent Workload Optimization. VMware DRS is made aware of where the data of an individual VM is. In effect, the VM follows the data rather than having the data follow the VM, as this prevents heavy moves of data to the VM. When an HPE SimpliVity 2600 node is added to the federation, the DRS rules related to OmniStack are automatically re-evaluated.
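As an illustration of the deduplication and compression on arrival mentioned above, here is a generic Python sketch (not HPE OmniStack code; the class layout and fingerprinting choices are assumptions): each incoming block is fingerprinted, only previously unseen content is compressed and stored, and logical addresses merely reference the fingerprint.
```python
import hashlib
import zlib

# Generic inline deduplication + compression sketch; illustrative only.
class DedupStore:
    def __init__(self):
        self.blocks = {}     # fingerprint -> compressed payload (stored once)
        self.index = {}      # logical address -> fingerprint

    def write(self, address, data):
        fingerprint = hashlib.sha256(data).hexdigest()
        if fingerprint not in self.blocks:                 # new unique content
            self.blocks[fingerprint] = zlib.compress(data)
        self.index[address] = fingerprint                  # duplicate: reference only

    def read(self, address):
        return zlib.decompress(self.blocks[self.index[address]])

store = DedupStore()
store.write(0, b"A" * 4096)
store.write(1, b"A" * 4096)        # deduplicated: stored once, referenced twice
print(len(store.blocks), store.read(1) == b"A" * 4096)    # -> 1 True
```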
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located on the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors that choose not to use data locality advocate that the additional network latency is negligible.
|
|
|
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
VoV = Volume-on-Volume; The Virtual Storage Controller uses virtual disks provided by the hypervisor platform.
|
Direct-attached (Raw)
The software takes ownership of the unformatted physical disks available in the host.
|
Direct-attached (RAID)
The software takes ownership of the RAID groups provisioned by the servers hardware RAID controller.
|
|
|
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
NEW
|
Hybrid (Flash+Magnetic)
All-Flash
|
All-Flash (SSD-only)
HPE has exclusively released all-flash models for the HPE SimpliVity 2600 platform. These models facilitate a variety of workloads, including those that demand ultra-high performance and low latency.
|
|
Hypervisor OS Layer
Details
|
SD, USB, DOM, SSD/HDD
|
SuperMicro (G3,G4,G5): DOM
SuperMicro (G6): M2 SSD
Dell: SD or SSD
Lenovo: DOM, SD or SSD
Cisco: SD or SSD
|
SSD
1x 480GB M.2 SSD is used for system boot.
|
|
|
|
Memory |
|
|
|
DRAM
|
DRAM
|
DRAM (VSC)
|
|
|
Read/Write Cache
DataCore SANsymphony accelerates reads and writes by leveraging the powerful processors and large DRAM memory inside current generation x86-64bit servers on which it runs. Up to 8 Terabytes of cache memory may be configured on each DataCore node, enabling it to perform at solid state disk speeds without the expense. SANsymphony uses a common cache pool to store reads and writes in.
SANsymphony read caching essentially recognizes I/O patterns to anticipate which blocks to read next into RAM from the physical back-end disks. That way the next request can be served from memory.
When hosts write to a virtual disk, the data first goes into DRAM memory and is later destaged to disk, often grouped with other writes to minimize delays when storing the data to the persistent disk layer. Written data stays in cache for re-reads.
The cache is cleaned on a first-in-first-out (FIFO) basis. Segment overwrites are performed on the oldest data first for both read and write cache segment requests.
SANsymphony prevents the write cache data from flooding the entire cache. In case the write data amount runs above a certain percentage watermark of the entire cache amount, then the write cache will temporarily be switched to write-through mode in order to regain balance. This is performed fully automatically and is self-adjusting, per virtual disk as well as on a global level.
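The watermark mechanism described above can be illustrated with a small Python sketch (a simplification under assumed names, not DataCore code): dirty write data is held in cache and destaged later, but once the dirty amount crosses a configurable fraction of the cache, new writes temporarily go straight to the persistent layer.
```python
# Simplified write cache that falls back from write-back to write-through
# when dirty data exceeds a watermark. Hypothetical names, illustrative only.
class WriteCache:
    def __init__(self, capacity_bytes, watermark=0.5):
        self.capacity = capacity_bytes
        self.watermark = watermark
        self.dirty = {}             # address -> data awaiting destage (insertion order = age)
        self.backend = {}           # persistent disk layer

    def write(self, address, data):
        dirty_bytes = sum(len(d) for d in self.dirty.values())
        if dirty_bytes + len(data) > self.capacity * self.watermark:
            self.backend[address] = data    # write-through until balance is regained
        else:
            self.dirty[address] = data      # write-back: acknowledge now, destage later

    def destage_oldest(self):
        if self.dirty:
            address = next(iter(self.dirty))            # FIFO: oldest entry first
            self.backend[address] = self.dirty.pop(address)
```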
|
Read Cache
|
DRAM (VSC): Read Cache
|
|
|
Up to 8 TB
The actual size that can be configured depends on the server hardware that is used.
|
Configurable
The memory of the Virtual Storage Controllers (VSCs) can be tuned to allow for a larger read cache by allocating more memory to these VMs. The amount of read cache is assigned dynamically; everything that is not used by the OS is used for read caching. Depending on other active internal processes, the amount of read cache can be larger or smaller.
|
DRAM (VSC): 16-48GB for Read Cache
Each Virtual Storage Controller (VSC) is equipped with 48-100GB total memory capacity, of which 16-48GB is used as read cache. The amount of memory allocated is fixed (non-configurable) and model dependent.
|
|
|
|
Flash |
|
|
|
SSD, PCIe, UltraDIMM, NVMe
|
SSD, NVMe
Nutanix supports NVMe drives in: Nutanix Community Edition (NCE), NX-3060-G6, NX-3170-G6, NX-8035-G6 and NX-8155-G6, as well as respective OEM models.
AOS 5.17 introduced All-NVMe drive support.
AOS 5.18 introduces Nutanix Blockstore that streamlines the I/O stack, taking the file system out of the equation altogether. With Blockstore Nutanix ECP effectively bypasses the OS (=Linux) kernel context switches and thus manages to accelerate AOS storage processing. Leveraging Nutanix Blockstore together with NVMe/Optane storage media delivers significant IOPS and latency improvements with no changes to the applications.
|
SSD
|
|
|
Persistent Storage
SANsymphony supports new TRIM / UNMAP capabilities for solid-state drives (SSD) in order to reduce wear on those devices and optimize performance.
|
Read/Write Cache
Storage Tier
|
All-Flash: Metadata + Write Buffer + Persistent Storage Tier
Read cache is not necessary in All-flash configurations.
|
|
|
No limit, up to 1 PB per device
The definition of a device here is a raw flash device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
Hybrid: 1-4 SSDs per node
All-Flash: 3-24 SSDs per node
NVMe-Hybrid: 2-4 NVMe + 4-8 SSDs per node
Hybrid SSDs: 480GB, 960GB, 1.92TB, 3.84TB, 7.68TB
All-Flash SSDs: 480GB, 960GB, 1.92TB, 3.84TB, 7.68TB
NVMe: 1.6TB, 2TB, 4TB
|
All-Flash: 6 SSDs per node
All-Flash SSD configurations:
170-series: 6x 1.92TB
190-series: 6x 1.92TB
|
|
|
|
Magnetic |
|
|
|
SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
In this case SATA = NL-SAS = MDL SAS
|
Hybrid: SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
|
N/A
|
|
|
Persistent Storage
|
Persistent Storage
|
N/A
|
|
Magnetic Capacity
Details
|
No limit, up to 1 PB (per device)
The definition of a device here is a raw storage device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
2-20 SATA HDDs per node
Hybrid HDDs: 1TB, 2TB, 4TB, 6TB, 8TB, 12TB
|
N/A
|
|
|
Data Availability
|
|
|
|
|
|
|
Reads/Writes |
|
|
Persistent Write Buffer
Details
|
DRAM (mirrored)
If caching is turned on (default=on), any write will only be acknowledged back to the host after it has been successfully stored in DRAM memory of two separate physical SANsymphony nodes. Based on de-staging algorithms each of the nodes eventually copies the written data that is kept in DRAM to the persistent disk layer. Because DRAM outperforms both flash and spinning disks, the applications experience much faster write behavior.
Per default, the limit of dirty-write-data allowed per Virtual Disk is 128MB. This limit could be adjusted, but there has never been a reason to do so in the real world. Individual Virtual Disks can be configured to act in write-through mode, which means that the dirty-write-data limit is set to 0MB so effectively the data is directly written to the persistent disk layer.
DataCore recommends that all servers running SANsymphony software are UPS protected to avoid data loss through unplanned power outages. Whenever a power loss is detected, the UPS automatically signals this to the SANsymphony node and write behavior is switched from write-back to write-through mode for all Virtual Disks. As soon as the UPS signals that power has been restored, the write behavior is switched to write-back again.
|
Flash Layer (SSD; NVMe)
Nutanix supports NVMe drives in: Nutanix Community Edition (NCE), Native G6 and G7 hardware models, as well as respective OEM models.
All mentioned Nutanix models are available in Hybrid (SSD+Magnetic), All-Flash (SSD-only) and All-Flash with NVMe (NVMe+SSD) configurations.
|
Flash Layer (SSD)
The HPE SimpliVity 2600 does not contain a proprietary PCIe-based HPE OmniStack Accelerator Card.
|
|
Disk Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds a seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). The seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
1-2 Replicas (2N-3N) (primary)
Erasure Coding (N+1/N+2) (secondary)
Nutanix's implementation of replicas is called Replication Factor, or RF for short (RF2 = 2N; RF3 = 3N). Maintaining replicas is the default method for protecting data that is written to the Nutanix cluster. An implementation of erasure coding, called EC-X by Nutanix, can be optionally enabled for protecting data once it is write-cold (not overwritten for a certain amount of time). RF and EC-X are enabled on a per-container basis, but are applied at the individual VM/file level.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on a different node in the cluster. The latter happens in a fully distributed manner, in other words, there is no dedicated partner node. When a disk fails, it is marked offline and data is read from another instance instead. At the same time data re-replication of the associated replicas is initiated in order to restore the desired replication factor.
Erasure Coding: Nutanix EC-X was introduced in AOS 5.0 and is used for protecting write-cold data only. Nutanix EC-X provides more efficient protection than Nutanix RF, as it does not use full copies of data extents. Instead, EC-X uses parity for protecting data extents and distributes both data and parity across all nodes in the cluster. The number of nodes within a Nutanix cluster determines the EC-X stripe size. When a disk fails, it is marked offline and data needs to be rebuilt in-flight using the parity as data is being read, incurring a performance penalty. At the same time, data re-replication is initiated in order to restore the desired EC-X protection. Nutanix EC-X requires at least a 4-node setup. When Nutanix EC-X is enabled for the write-cold data, it automatically uses the same resiliency level as is already in use for the write-hot data, so 1 parity block per stripe for data with RF2 protection enabled and 2 parity blocks per stripe for data with RF3 protection enabled. Nutanix EC-X is enabled on the Storage Container (=vSphere Datastore) level.
AOS 5.18 introduces EC-X support for Object Storage containers.
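The replica write path described above can be summarized in a short Python sketch (conceptual only, not Nutanix code; node and function names are assumptions): the write is acknowledged only after both the local node and one other, freely chosen cluster node hold the data.
```python
import random

# Conceptual sketch of a replication-factor-2 (RF2) write path.
class Node:
    def __init__(self, name):
        self.name = name
        self.extents = {}

    def store(self, block, data):
        self.extents[block] = data

def rf2_write(block, data, local, cluster):
    local.store(block, data)
    # Fully distributed placement: any other node may hold the second copy.
    remote = random.choice([n for n in cluster if n is not local])
    remote.store(block, data)
    return remote           # acknowledge to the guest only after both writes

cluster = [Node("A"), Node("B"), Node("C")]
replica_node = rf2_write(block=7, data=b"payload", local=cluster[0], cluster=cluster)
print(f"second copy placed on node {replica_node.name}")
```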
|
1 Replica (2N)
+ Hardware RAID (5 or 6)
HPE SimpliVity 2600 uses replicas to protect data within the cluster. In addition, hardware RAID is implemented to enhance the robustness of individual nodes.
Replicas+Hardware RAID: Before any write is acknowledged to the host, it is synchronously replicated on a designated partner node. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. When a physical disk fails, hardware RAID maintains data availability.
Only when more than 2 disks fail within the same node, data has to be read from the partner node instead. Given the high level of redundancy within and across nodes, the desire to reduce unnecessary I/O, and the fact that most node outages are easily recoverable, node level redundancy is re-established based on a user initiated action.
|
|
Node Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1)
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds a seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). The seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
1-2 Replicas (2N-3N) (primary)
Erasure Coding (N+1/N+2) (secondary)
Nutanix's implementation of replicas is called Replication Factor, or RF for short (RF2 = 2N; RF3 = 3N). Maintaining replicas is the default method for protecting data that is written to the Nutanix cluster. An implementation of erasure coding, called EC-X by Nutanix, can be optionally enabled for protecting data once it is write-cold (not overwritten for a certain amount of time). RF and EC-X are enabled on a per-container basis, but are applied at the individual VM/file level.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on a different node in the cluster. The latter happens in a fully distributed manner, in other words, there is no dedicated partner node. When a disk fails, it is marked offline and data is read from another instance instead. At the same time data re-replication of the associated replicas is initiated in order to restore the desired replication factor.
Erasure Coding: Nutanix EC-X was introduced in AOS 5.0 and is used for protecting write-cold data only. Nutanix EC-X provides more efficient protection than Nutanix RF, as it does not use full copies of data extents. Instead, EC-X uses parity for protecting data extents and distributes both data and parity across all nodes in the cluster. The number of nodes within a Nutanix cluster determines the EC-X stripe size. When a disk fails, it is marked offline and data needs to be rebuilt in-flight using the parity as data is being read, incurring a performance penalty. At the same time, data re-replication is initiated in order to restore the desired EC-X protection. Nutanix EC-X requires at least a 4-node setup. When Nutanix EC-X is enabled for the write-cold data, it automatically uses the same resiliency level as is already in use for the write-hot data, so 1 parity block per stripe for data with RF2 protection enabled and 2 parity blocks per stripe for data with RF3 protection enabled. Nutanix EC-X is enabled on the Storage Container (=vSphere Datastore) level.
AOS 5.18 introduces EC-X support for Object Storage containers.
|
1 Replica (2N)
+ Hardware RAID (5 or 6)
HPE SimpliVity 2600 uses replicas to protect data within the cluster. In addition, hardware RAID is implemented to enhance the robustness of individual nodes.
Replicas+Hardware RAID: Before any write is acknowledged to the host, it is synchronously replicated on a designated partner node. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. When a physical node fails, VMs need to be restarted and data is read from the partner node instead. Given the high level of redundancy within and across nodes, the desire to reduce unnecessary I/O, and the fact that most node outages are easily recoverable, node level redundancy is re-established based on a user initiated action.
|
|
Block Failure Protection
Details
|
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk. However, block failure protection is in most cases irrelevant as 1-node appliances are used as building blocks.
SANsymphony works on an N+1 redundancy design allowing any node to acquire any other node as a redundancy peer per virtual device. Peers are replaceable/interchangeable on a per Virtual Disk level.
|
Block Awareness (integrated)
Nutanix intelligent software features include Block Awareness. A 'block' represents a physical appliance. Nutanix refers to a “block” as the chassis which contains either one, two, or four 'nodes'. The reason for distributing roles and data across blocks is to ensure if a block fails or needs maintenance the system can continue to run without interruption.
Nutanix Block Awareness can be broken into a few key focus areas:
- Data (The VM data)
- Metadata (Cassandra)
- Configuration Data (Zookeeper)
Block Awareness is automatic (always-on) and requires a minimum of 3 blocks to be activated, otherwise node awareness will be defaulted to. The 3-block requirement is there to ensure quorum.
AOS 5.8 introduced support for erasure coding in a cluster where block awareness is enabled. In previous versions of AOS block awareness was lost when implementing erasure coding. Minimums for erasure coding support are 4 blocks in an RF2 cluster and 6 blocks in an RF3 cluster. Clusters with erasure coding that have less blocks than specified will not regain block awareness after upgrading to AOS 5.8.
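A minimal Python sketch of block-aware replica placement (an illustration under assumed data structures, not Nutanix code): with at least three blocks present, the second copy is placed on a node in a different block, and with fewer blocks the logic falls back to plain node awareness.
```python
# Illustrative block-aware placement: never put both copies in the same block
# (chassis) so a whole-block failure cannot take out both copies.
def pick_replica_node(source, nodes):
    blocks = {n["block"] for n in nodes}
    if len(blocks) >= 3:                                   # block awareness active
        candidates = [n for n in nodes if n["block"] != source["block"]]
    else:                                                  # fall back to node awareness
        candidates = [n for n in nodes if n is not source]
    return min(candidates, key=lambda n: n["used_capacity"])   # simple balancing

nodes = [
    {"name": "A1", "block": "blk-A", "used_capacity": 10},
    {"name": "A2", "block": "blk-A", "used_capacity": 12},
    {"name": "B1", "block": "blk-B", "used_capacity": 11},
    {"name": "C1", "block": "blk-C", "used_capacity": 9},
]
print(pick_replica_node(nodes[0], nodes)["name"])   # -> C1 (different block, least used)
```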
|
Not relevant (1-node chassis only)
HPE SimpliVity 2600 building blocks are based on 1-node chassis only. Therefore multi-node block (appliance) level protection is not relevant for this solution as Node Failure Protection applies.
|
|
Rack Failure Protection
Details
|
Manual configuration
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk.
|
Rack Fault Tolerance
Rack Fault Tolerance is the ability to provide rack level availability domain. With rack fault tolerance, redundant copies of data are made and placed on the nodes that are not in the same rack. When rack fault tolerance is enabled, the cluster has rack awareness and the guest VMs can continue to run with failure of one rack (RF2) or two racks (RF3). The redundant copies of guest VM data and metadata exist on other racks when one rack fails.
AOS 5.17 introduces Rack Fault Tolerance support for Microsoft Hyper-V. Now all three hypervisors (ESXi, Hyper-V and AHV) have node, block, and rack level failure domain protection.
|
Group Placement
HPE SimpliVity 2600 intelligent software features include Rack failure protection. Both rack level and site level protection within a cluster is administratively determined by placing hosts into groups. Data is balanced appropriately to ensure that each VM is redundantly stored across two separate groups.
|
|
Protection Capacity Overhead
Details
|
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
|
RF2 (2N) (primary): 100%
RF3 (3N) (primary): 200%
EC-X (N+1) (secondary): 20-50%
EC-X (N+2) (secondary): 50%
EC-X (N+1): The optimal and recommended stripe size is 4+1 (20% capacity overhead for data protection). The minimum 4-node cluster configuration has a stripe size of 2+1 (50% capacity overhead for data protection).
EC-X (N+2): The optimal and recommended stripe size is 4+2 (50% capacity overhead for data protection). The minimum 6-node cluster configuration also has a stripe size of 4+2.
Because Nutanix Erasure Coding (EC-X) is a secondary feature that is only used for write cold data, with regard to an EC-X enabled Storage Container (=vSphere Datastore) the overall capacity overhead for data protection is always a combination of RF2 (2N) + EC-X (N+1) or RF3 (3N) + EC-X (N+2).
From AOS 5.18 onwards Nutanix Erasure Coding X (EC-X) is enabled for Object Storage containers.
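The overhead figures above follow from simple ratios; the sketch below (a back-of-the-envelope helper, not Nutanix tooling) computes them and also shows why a 4+1 stripe can be quoted as roughly 25% (parity relative to usable data) or 20% (parity relative to raw capacity), depending on convention.
```python
# Protection capacity overhead helpers for replicas and erasure-coded stripes.
def replica_overhead(copies):
    """RF2 (2 copies) -> 100%, RF3 (3 copies) -> 200%."""
    return (copies - 1) * 100.0

def ecx_overhead(data_blocks, parity_blocks, relative_to="data"):
    if relative_to == "data":
        return parity_blocks / data_blocks * 100.0                   # 2+1 -> 50%, 4+2 -> 50%
    return parity_blocks / (data_blocks + parity_blocks) * 100.0     # 4+1 -> 20%

print(replica_overhead(2), ecx_overhead(4, 1), ecx_overhead(4, 1, "raw"))
# -> 100.0 25.0 20.0
```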
|
Replica (2N) + RAID5: 125-133%
Replica (2N) + RAID6: 120-133%
Replica (2N) + RAID60: 125-140%
The hardware RAID level that is applied depends on drive count in an individual node:
2 drives = RAID1
4-5 drives = RAID5
8-12 drives = RAID6
14-20 drives = RAID60 (2 per RAID6 set)
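The combined percentages above can be reproduced with a small helper (an assumption-labeled sketch, not HPE tooling): the 2N replica contributes a flat 100%, and the node-local RAID set adds parity drives divided by data drives on top.
```python
# Combined 2N replica + hardware RAID overhead, per the drive-count table above.
def raid_layout(drive_count):
    """Return (raid_level, parity_drives) for a given node drive count."""
    if drive_count == 2:
        return "RAID1", 1
    if 4 <= drive_count <= 5:
        return "RAID5", 1
    if 8 <= drive_count <= 12:
        return "RAID6", 2
    if 14 <= drive_count <= 20:
        return "RAID60", 4          # two RAID6 sets, 2 parity drives each
    raise ValueError("drive count not in the supported configurations")

def total_overhead(drive_count):
    level, parity = raid_layout(drive_count)
    raid_overhead = parity / (drive_count - parity) * 100.0
    return 100.0 + raid_overhead    # 2N replica + local RAID parity

print(total_overhead(4), total_overhead(12), total_overhead(20))
# -> 133.3... 120.0 125.0
```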
|
|
Data Corruption Detection
Details
|
N/A (hardware dependent)
SANsymphony fully relies on the hardware layer to protect data integrity. This means that the SANsymphony software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
Read integrity checks
Disk scrubbing (software)
While writing data, checksums are created and stored. When the data is read again, a new checksum is created and compared to the initial checksum. If incorrect, a checksum is created from another copy of the data. After successful comparison this data is used to repair the corrupted copy in order to stay compliant with the configured protection level.
Disk Scrubbing is a background process that is used to perform checksum comparisons of all data stored within the solution. This way stale data is also verified for corruption.
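The read-integrity-check flow described above boils down to a compare-and-repair loop; the generic Python sketch below (not Nutanix code; structure and names are assumptions) stores a checksum at write time, verifies it on read, and repairs a corrupted copy from a healthy replica.
```python
import hashlib

# Generic read integrity check: verify a stored checksum on read and repair
# a corrupted copy from a healthy replica.
class ChecksummedStore:
    def __init__(self):
        self.replicas = [{}, {}]        # two copies of every block
        self.checksums = {}             # block id -> checksum recorded at write time

    def write(self, block, data):
        self.checksums[block] = hashlib.sha256(data).hexdigest()
        for replica in self.replicas:
            replica[block] = data

    def read(self, block):
        for i, replica in enumerate(self.replicas):
            data = replica[block]
            if hashlib.sha256(data).hexdigest() == self.checksums[block]:
                if i > 0:                               # an earlier copy failed the check
                    self.replicas[0][block] = data      # repair the corrupted copy
                return data
        raise IOError("all copies failed checksum verification")

store = ChecksummedStore()
store.write("b1", b"hello")
store.replicas[0]["b1"] = b"hellX"      # simulate silent corruption of one copy
print(store.read("b1"))                 # -> b'hello', and replica 0 is repaired
```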
|
Read integrity checks (CLI)
Disk scrubbing (software)
While writing data, checksums are created and stored as part of the inline deduplication process. When one of the underlying layers detects data corruption, a checksum comparison is performed and when required, another copy of the data is used to repair the corrupted copy in order to stay compliant with the configured protection level.
Read integrity checks can be enabled through the CLI.
Disk Scrubbing, termed 'RAID Patrol' by HPE SimpliVity 2600, is a background process that is used to perform checksum comparisons of all data stored within the solution. This way stale data is also verified for corruption.
|
|
|
|
Points-in-Time |
|
|
|
Built-in (native)
|
Built-in (native)
Nutanix calls its native snapshot/backup feature Time Stream.
|
Built-in (native)
HPE SimpliVity 2600 data protection capabilities are entirely integrated in its approach to backup/restore, so there is no need for additional Point-in-Time (PiT) capabilities.
Traditional snapshots can still be created using the features natively available in the hypervisor platform (eg. VMware Snapshots).
|
|
|
Local + Remote
SANsymphony snapshots are always created on one side only. However, SANsymphony allows you to create a snapshot for the data on each side by configuring two snapshot schedules, one for the local volume and one for the remote volume. Both snapshot entities are independent and can be deleted independently allowing different retention times if needed.
There is also the capability to pair the snapshot feature along with asynchronous replication which provides you with the ability to have a third site long distance remote copy in place with its own retention time.
|
Local + Remote
|
Local + Remote
HPE SimpliVity 2600 data protection capabilities are entirely integrated in its approach to backup/restore, so there is no need for additional Point-in-Time (PiT) capabilities.
Traditional snapshots can still be created using the features natively available in the hypervisor platform (eg. VMware Snapshots).
|
|
Snapshot Frequency
Details
|
1 Minute
The snapshot lifecycle can be automatically configured using the integrated Automation Scheduler.
|
GUI: 1-15 minutes (nearsync replication); 1 hour (async replication)
NearSync and Continuous (Metro Availability) remote replication are only available in the Ultimate edition.
Nutanix Files 3.6 introduces support for NearSync. NearSync can be configured to take snapshots of a file server in 1-minute intervals.
AOS 5.9 introduced one-time snapshots of protection domains that have a NearSync schedule.
|
GUI: 10 minutes (Policy-based)
CLI: 1 minute
Backups can be scheduled.
Although setting the backup frequency below 10 minutes is possible through the Command Line Interface (CLI), it should not be used for a large number of protected VMs as this could severely impact performance.
|
|
Snapshot Granularity
Details
|
Per VM (Vvols) or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is certified for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
Per VM
|
Per VM
|
|
|
Built-in (native)
DataCore SANsymphony incorporates Continuous Data Protection (CDP) and leverages this as an advanced backup mechanism. As the term implies, CDP continuously logs and timestamps I/Os to designated virtual disks, allowing end-users to restore the environment to an arbitrary point-in-time within that log.
Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage. Several of these markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent or crash-consistent restore point from just before the incident occurred.
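The CDP mechanism described above, continuous timestamped logging with named rollback markers, can be sketched in a few lines of Python (a conceptual illustration, not DataCore code; names are assumptions): restoring means replaying the log up to the chosen marker or timestamp.
```python
import time

# Conceptual continuous-data-protection log: every write is timestamped and
# appended, rollback markers tag consistent moments, and a restore replays
# the log up to the chosen point in time.
class CdpLog:
    def __init__(self):
        self.entries = []               # (timestamp, block address, data), time-ordered
        self.markers = {}               # marker name -> timestamp of a consistent point

    def record_write(self, address, data):
        self.entries.append((time.time(), address, data))

    def add_marker(self, name):
        self.markers[name] = time.time()    # e.g. after quiescing an application

    def restore_image(self, point_in_time):
        image = {}
        for timestamp, address, data in self.entries:
            if timestamp > point_in_time:
                break                       # ignore writes after the restore point
            image[address] = data
        return image                        # virtual disk contents as of that moment
```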
|
Built-in (native)
Nutanix calls its native snapshot feature Time Stream.
Nutanix calls its cloud backup feature Cloud-Connect.
By combining Nutanix native snapshot feature with its native remote replication mechanism, backup copies can be created on remote Nutanix clusters or within the public cloud (AWS/Azure).
A snapshot is not a backup:
1. For a data copy to be considered a backup, it must at the very least reside on a different physical platform (=controller+disks) to avoid dependencies. If the source fails or gets corrupted, a backup copy should still be accessible for recovery purposes.
2. To avoid further dependencies, a backup copy should reside in a different physical datacenter - away from the source. If the primary datacenter becomes unavailable for whatever reason, a backup copy should still be accessible for recovery purposes.
When considering the above prerequisites, a backup copy can be created by combining snapshot functionality with remote replication functionality to create independent point-in-time data copies on other SDS/HCI clusters or within the public cloud. In ideal situations, the retention policies can be set independently for local and remote point-in-time data copies, so an organization can differentiate between how long the separate backup copies need to be retained.
Apart from the native features, Nutanix ECP can be used in conjunction with external data protection solutions like VMware's free-of-charge vSphere Data Protection (VDP) backup software, as well as any hypervisor-compatible 3rd-party backup application. VDP is part of the vSphere license and requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between Nutanix ECP and VMware VDP.
|
Built-in (native)
HPE SimpliVity 2600 provides native backup capabilities. Its backup feature supports remote-replication, is deduplication aware and data is compressed over the wire.
A snapshot is not a backup:
1. For a data copy to be considered a backup, it must at the very least reside on a different physical platform (=controller+disks) to avoid dependencies. If the source fails or gets corrupted, a backup copy should still be accessible for recovery purposes.
2. To avoid further dependencies, a backup copy should reside in a different physical datacenter - away from the source. If the primary datacenter becomes unavailable for whatever reason, a backup copy should still be accessible for recovery purposes.
When considering the above prerequisites, a backup copy can be created by combining snapshot functionality with remote replication functionality to create independent point-in-time data copies on other SDS/HCI clusters or within the public cloud. In ideal situations, the retention policies can be set independently for local and remote point-in-time data copies, so an organization can differentiate between how long the separate backup copies need to be retained.
|
|
|
Local or Remote
All available storage within the SANsymphony group can be configured as targets for back-up jobs.
|
To local single-node
To local and remote clusters
To remote cloud object stores (Amazon S3, Microsoft Azure)
Nutanix calls its native snapshot feature Time Stream.
Nutanix calls its cloud backup feature Cloud-Connect.
Nutanix provides a single-node cluster configuration for on-premises off-production-cluster backup purposes by leveraging native snapshots. The single node supports compression and deduplication, but cannot be used as a generic backup target. It uses Replication Factor 2 (RF2) to protect data against magnetic disk failures.
Nutanix also provides a single-node cluster configuration in the AWS and Azure public clouds for off-premises backup purposes. Data is moved to the cloud in an already deduplicated format. Optionally compression can be enabled on the single-node in the cloud.
|
Locally
To other SimpliVity sites
To Service Providers
Backup remote-replication is deduplication aware + data is compressed over the wire.
|
|
|
Continuously
As Continuous Data Protection (CDP) is being leveraged, I/Os are logged and timestamped in a continuous fashion, so end-users can restore to virtually any point in time.
|
NearSync to remote clusters: 1-15 minutes*
Async to remote clusters: 1 hour
AWS/Azure Cloud: 1 hour
Nutanix calls its cloud backup feature Cloud-Connect.
*The retention of NearSync LightWeight snapshots (LWS) is relatively low (15 minutes), so these snapshots are of limited use in a backup scenario where retentions are usually a lot higher (days/weeks/months).
|
GUI: 10 minutes (Policy-based)
CLI: 1 minute
Although setting the backup frequency below 10 minutes is possible through the Command Line Interface (CLI), it should not be used for a large number of protected VMs as this could severely impact performance.
|
|
Backup Consistency
Details
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
By default CDP creates crash consistent restore points. Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage.
Several CDP Rollback Markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent, filesystem-consistent or crash-consistent restore point from (just) before the incident occurred.
In a VMware vSphere environment, the DataCore VMware vCenter plug-in can be used to create snapshot schedules for datastores and select the VMs that you want to enable VSS filesystem/application consistency for.
|
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
Nutanix provides the option to enable Microsoft VSS integration when configuring a backup policy. This ensures application-consistent backups are created for MS Exchange and MS SQL database environments.
AOS 5.9 introduced Application-Consistent Snapshot Support for NearSync DR.
AOS 5.11 introduces support for Nutanix Guest Tools (NGT) in VMware ESXi environments next to native AHV environments. This allows for the use of the native Nutanix Volume Shadow Copy Service (VSS) instead of Microsoft's VSS inside Windows guest VMs. The Nutanix VSS hardware provider enables integration with native Nutanix data protection. The provider allows for application-consistent, VSS-based snapshots when using Nutanix protection domains. The Nutanix Guest Tools CLI (ngtcli) can also be used to execute the Nutanix Self-Service Restore CLI.
|
vSphere: File System Consistent (Windows), Application Consistent (MS Apps on Windows)
Hyper-V: File System Consistent (Windows)
| |