|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
- + Extensive platform support
- + Extensive data protection capabilities
- + Flexible deployment options
|
- + Strong Cisco integration
- + Fast streamlined deployment
- + Strong container support
|
- + Flexible architecture
- + Broad range of hardware support
- + Built for performance
|
|
Cons
|
- - No native data integrity verification
- - Dedup/compression not performance optimized
- - Disk/node failure protection not capacity optimized
|
- - Single server hardware support
- - No bare-metal support
- - Limited native data protection capabilities
|
- - Limited data protection capabilities
- - No stretched clustering
- - No dedup capabilities
|
|
|
|
Content |
|
|
|
|
WhatMatrix
|
WhatMatrix
|
WhatMatrix
|
|
|
|
Assessment |
|
|
|
|
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
NEW
DataCore was founded in 1998 and began to ship its first software-defined storage (SDS) platform, SANsymphony (SSY), in 1999. DataCore launched a separate entry-level storage virtualization solution, SANmelody (v1.4), in 2004. This platform also served as the foundation for DataCore's HCI solution. In 2014 DataCore formally announced Hyperconverged Virtual SAN as a separate product. In May 2018, changes to the software licensing model enabled consolidation of the products; because the core software is the same, the portfolio has since been marketed collectively as DataCore SANsymphony.
One year later, in 2019, DataCore expanded its software-defined storage portfolio with a solution aimed specifically at file virtualization. This additional SDS offering is called DataCore vFilO and operates as a scale-out global file system across distributed sites, spanning on-premises and cloud-based NFS and SMB shares.
At the beginning of 2021, DataCore acquired Caringo and integrated its know-how and software-defined object storage offerings into the DataCore portfolio. The newest member of the DataCore SDS portfolio is called DataCore Swarm; together with its complementary offerings SwarmFS and DataCore FileFly, it enables customers to build on-premises object storage solutions that radically simplify the ability to manage, store, and protect data, while allowing multi-protocol (S3/HTTP, API, NFS/SMB) access from any application, device, or end-user.
DataCore Software specializes in software solutions for block, file, and object storage. Compared to the other SDS/HCI vendors on WhatMatrix, DataCore has by far the longest track record in software-defined storage.
In April 2021 the company had an install base of more than 10,000 customers worldwide and about 250 employees.
|
Name: HyperFlex (HX)
Type: Hardware+Software (HCI)
Development Start: 2015
First Product Release: apr 2016
NEW
Springpath Inc., founded in 2012, released its first Software-Defined Storage (SDS) solution, the Springpath Data Platform (SDP), in February 2015. In early 2016 Springpath partnered exclusively with Cisco to re-launch its SDS platform as part of a hyper-converged (HCI) offering, Cisco HyperFlex (HX), which surfaced in April 2016. In September 2017 Cisco officially completed the acquisition of Springpath, solidifying the core of its HCI technology.
In October 2019 Cisco HyperFlex (HX) had a customer install base of more than 4,000 customers worldwide. The number of employees working in the HyperFlex division is unknown at this time.
|
Name: PowerFlex
Type: Software-only (SDS)
Development Start: Early 2011
First Product Release: dec 2012
NEW
ScaleIO was founded in early 2011 and began to ship its first Software-Defined Storage (SDS) solution, Elastic Converged Storage (ECS), in late 2012. In June 2013 ScaleIO was acquired by EMC. In early 2016 EMC ScaleIO introduced its first hyper-converged infrastructure (HCI) solution, ScaleIO Node, with the ECS platform at its core. In September 2016 ScaleIO Node was re-branded to ScaleIO Ready Node when the server hardware changed from Quantum to Dell. In March 2018 the ScaleIO product family was re-branded to VxFlex OS and VxFlex Ready Node respectively. In June 2020 the VxFlex OS product family was re-branded to PowerFlex.
Customer install base and number of employees are unknown at this time.
|
|
|
|
GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
.
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
.
SSY 3.0: 1999
NEW
10th generation software. DataCore currently has the most experience with SDS/HCI technology when comparing SANsymphony to other SDS/HCI platforms.
SANsymphony (SSY) version 3 was the first public release that hit the market back in 1999. The product has evolved ever since and the current major release is version 10. The list above includes only milestone releases.
PSP = Product Support Package
U = Update
|
GA Release Dates:
HX 4.0: apr 2019
HX 3.5.2a: jan 2019
HX 3.5.1a: nov 2018
HX 3.5: oct 2018
HX 3.0: apr 2018
HX 2.6.1b: dec 2017
HX 2.6.1a: oct 2017
HX 2.5: jul 2017
HX 2.1: may 2017
HX 2.0: mar 2017
HX 1.8: sep 2016
HX 1.7.3: aug 2016
1.7.1-14835: jun 2016
HX 1.7.1: apr 2016
NEW
4th Generation software on 4th and 5th generation Cisco UCS server hardware.
Cisco HyperFlex is fueled by Springpath software, which is co-developed with Cisco and has been renamed to HX Data Platform. HyperFlex has gradually matured since the first iteration by expanding its range of foundational and advanced functionality.
|
GA Release Dates:
PowerFlex 3.5: jun 2020
VxFlex OS 3.0.1.1: jan 2020
VxFlex OS 3.0.1: sep 2019
VxFlex OS 3.0: mar 2019
VxFlex OS 2.6.1.1: jan 2019
VxFlex OS 2.6.1: oct 2018
VxFlex OS 2.6: may 2018
VxFlex OS 2.5: feb 2018
VxFlex OS 2.0.1.4: oct 2017
VxFlex OS 2.0.1.3: mar 2017
VxFlex OS 2.0: mar 2016
VxFlex OS 1.32: may 2015
VxFlex OS 1.31: dec 2014
VxFlex OS 1.30: sep 2014
VxFlex OS 1.20: oct 2013
VxFlex OS 1.10: dec 2012
NEW
6th generation software. PowerFlex's feature set is getting richer, but the platform still lacks some advanced functionality that competing SDS/HCI offerings provide.
|
|
|
|
Pricing |
|
|
|
Hardware Pricing Model
Details
|
N/A
SANsymphony is sold by DataCore as a software-only solution. Server hardware must be acquired separately.
The entry point for all hardware and software compatibility statements is: https://www.datacore.com/products/sansymphony/tech/compatibility/
On this page links can be found to: Storage Devices, Servers, SANs, Operating Systems (Hosts), Networks, Hypervisors, Desktops.
Minimum server hardware requirements can be found at: https://www.datacore.com/products/sansymphony/tech/prerequisites/
|
Per Node
Bundle (ROBO)
In addition to acquiring individual nodes, Cisco also offers a bundle aimed at small ROBO deployments: HyperFlex Edge.
Cisco HyperFlex Edge consists of 3 HX220x Edge M5 hybrid nodes with 1GbE connectivity. The Edge configuration cannot be expanded.
|
N/A
For the PowerFlex software-only solution, server hardware must be acquired separately.
Dell EMC does not maintain a Hardware Compatibility List (HCL) with supported hardware for PowerFlex implementations.
For guidance on proper hardware configurations, Dell EMC provides hardware requirements and reference architectures (SPEX).
|
|
|
Software Pricing Model
Details
|
Capacity based (per TB)
NEW
DataCore SANsymphony is licensed in three different editions: Enterprise, Standard, and Business.
All editions are licensed by capacity (in 1 TB steps). Except for the Business edition, which has a fixed price per TB, the price per TB decreases as the end-user's licensed capacity in each edition grows (illustrated in the sketch below).
Each edition includes a defined feature set.
Enterprise (EN) includes all available features plus expanded Parallel I/O.
Standard (ST) includes all Enterprise (EN) features except FC connections, Encryption, Inline Deduplication & Compression, and Shared Multi-Port Array (SMPA) support, and comes with regular (non-expanded) Parallel I/O.
Business (BZ), as the entry offering, includes all essential Enterprise (EN) features except Asynchronous Replication & Site Recovery, Encryption, Deduplication & Compression, Random Write Accelerator (RWA) and Continuous Data Protection (CDP), and comes with limited Parallel I/O.
Customers can choose between a perpetual licensing model or a term-based licensing model. Any initial license purchase for perpetual licensing includes Premier Support for either 1, 3 or 5 years. Alternatively, term-based licensing is available for either 1, 3 or 5 years, always including Premier Support as well, plus enhanced DataCore Insight Services (predictive analytics with actionable insights). In most regions, BZ is available as term license only.
Capacity can be expanded in 1 TB steps. Business (BZ) has a 10 TB minimum per installation and is limited to 2 instances and a total capacity of 38 TB per installation; a customer can, however, have multiple BZ installations.
Cost neutral upgrades are available when upgrading from Business/Standard (BZ/ST) to Enterprise (EN).
|
Per Node
Cisco HyperFlex HX Data Platform (HXDP) Software is offered as an annual software subscription (1 year or 3 years).
There are 3 software editions to choose from: Edge, Standard and Enterprise.
HXDP Edge is the most limited edition and does not include the following software capabilities: Microsoft Hyper-V support, Kubernetes Container Persistent Storage, CCP, maximum cluster scale, NVMe Flash caching, Logical Availability Zones, Stretched Clustering and Synchronous Replication, SEDs, Client Authentication and Cluster Lockdown.
HXDP Enterprise has the following advanced capabilities not available in HXDP Standard: Stretched Clustering, Synchronous replication and support for HX Hardware Acceleration Engine (PCIe).
Compute-only nodes do not require a subscription fee (free license).
|
Per TB
Editions:
Basic
Enterprise (add-on)
The PowerFlex Basic license includes: High Availability, Self-Healing, Scale-out architecture, Support for 1,000s of nodes, Elasticity, Hyper Convergence, Automated performance tuning, Any Drive support (SSD/PCIe/HDD), Automatic Cluster Growth, Asymmetric nodes support – heterogeneous support of different server brands / OSes. Advanced Management – includes vSphere plugin, REST API, OpenStack Support.
The PowerFlex Enterprise license includes: QoS, Data Obfuscation, Snapshots, Auto tiering (Flash caching) – use XtremCache with PowerFlex, RAM caching, Fault Sets, Thin provisioning, Fine Granularity (FG) storage pools.
Dell EMC uses a tiered pricing structure based on physical device capacity (per TB) for both the Basic and Enterprise licenses. The more capacity that is purchased, the lower the price per TB.
|
|
|
Support Pricing Model
Details
|
Capacity based (per TB)
Support is always provided on a premium (24x7) basis, including free updates.
More information about DataCore's support policy can be found here:
http://datacore.custhelp.com/app/answers/detail/a_id/1270/~/what-is-datacores-support-policy-for-its-products
|
Per Node
Cisco provides a variety of support service offerings, including:
- Unified Computing Warranty, No Contract (non-production environments)
- Smart Net Total Care for UCS (8x5 or 24x7; with or without Onsite)
|
Per TB
Subscriptions: Basic, Enhanced, Premium
Basic: response based on severity level during business hours (9-5), plus 24x7 remote support.
Enhanced: Basic support plus next business day onsite.
Premium: Options include four-hour onsite responses as well as post-warranty maintenance.
|
|
|
Design & Deploy
|
|
|
|
|
|
|
Design |
|
|
|
Consolidation Scope
Details
|
Storage
Data Protection
Management
Automation & Orchestration
DataCore is storage-oriented.
SANsymphony's software-defined storage services are focused on flexible deployment models. The range spans classical storage virtualization, converged and hybrid-converged setups, and hyperconverged deployments, with seamless migration between them.
DataCore aims to provide all key components within a storage ecosystem including enhanced data protection and automation & orchestration.
|
Compute
Storage
Network
Management
Automation & Orchestration
Both Cisco and the HyperFlex platform itself are stack-oriented.
With the HyperFlex platform Cisco aims to provide all key functionality required in a Private Cloud ecosystem as well as integrate with existing hypervisors and applications.
|
Compute
Storage
Dell EMC is stack-oriented, whereas the PowerFlex platform itself is heavily storage-focused.
With the PowerFlex platform Dell EMC aims to provide key generic components within a Private Cloud ecosystem.
|
|
|
|
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
The bandwidth required depends entirely on the specific workload needs.
SANsymphony 10 PSP11 introduced support for Emulex Gen 7 64 Gbps Fibre Channel HBAs.
SANsymphony 10 PSP8 introduced support for Gen6 16/32 Gbps ATTO Fibre Channel HBAs.
|
1, 10, 40 GbE
Cisco HyperFlex hardware models include redundant ethernet connectivity using SFP+. Cisco recommends at least 10GbE to avoid the network becoming a performance bottleneck.
Cisco also supports 40GbE Fabrics as of HX 2.0.
Cisco HyperFlex M4 models have 10GbE onboard; Cisco HyperFlex M5 models have 40GbE onboard.
As of HX 3.5 Cisco HyperFlex Edge bundle supports both 1GbE and 10GbE.
|
1, 10, 25, 40, 100 GbE
PowerFlex supports ethernet connectivity using SFP+ or Base-T. Dell EMC recommends at least 10GbE to avoid the network becoming a performance bottleneck.
|
|
|
Overall Design Complexity
Details
|
Medium
DataCore SANsymphony is able to meet many different use-cases because of its flexible technical architecture, however this also means there are a lot of design choices that need to be made. DataCore SANsymphony seeks to provide important capabilities either natively or tightly integrated, and this keeps the design process relatively simple. However, because many features in SANsymphony are optional and thus can be turned on/off, in effect each one needs to be taken into consideration when preparing a detailed design.
|
Low
Cisco HyperFlex was developed with simplicity in mind, both from a design and a deployment perspective. Cisco HyperFlex's uniform platform architecture is meant to be applicable to all virtualized enterprise application use-cases. With the exception of backup/restore, most capabilities are provided natively and on a per-VM basis, keeping the design relatively clean and simple. Advanced features like deduplication and compression are always turned on. This minimizes the amount of design choices as well as the number of deployment steps.
|
High
PowerFlex is able to meet many different use-cases because of its flexible technical architecture, however this also means there are a lot of design choices that need to be made. In addition PowerFlex does not encompass many native data protection capabilities and data services. A complete solution design therefore requires the presence of multiple technology platforms.
|
|
|
External Performance Validation
Details
|
SPC (Jun 2016)
ESG Lab (Jan 2016)
SPC (Jun 2016)
Title: 'Dual Node, Fibre Channel SAN'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-connected, SSY 10.0, 4x All-Flash Dell MD1220 SAS Storage Arrays
SPC (Jun 2016)
Title: 'Dual Node, High Availability, Hyper-converged'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-interconnect, SSY 10.0
ESG Lab (Jan 2016)
Title: 'DataCore Application-adaptive Data Infrastructure Software'
Workloads: OLTP
Benchmark Tools: IOmeter
Hardware: Hybrid (Tiered) Dell PowerEdge R720, 2-node cluster, SSY 10.0
|
ESG Lab (Jul 2018)
SAP (Dec 2017)
ESG Lab (Mar 2017)
ESG Lab (Jul 2018)
Title: 'Mission-critical Workload Performance Testing of Different Hyperconverged Approaches on the Cisco Unified Computing System Platform (UCS)'
Workloads: MSSQL OLTP, Oracle OLTP, Virtual Servers (VSI), Virtual desktops (VDI)
Benchmark Tools: Vdbench (MSSQL, Oracle)
Hardware: All-flash HyperFlex HX220c M4, 4-node cluster, HX 2.6
Remark: The performance impact of deduplication and compression was also measured and compared against two SDS platforms.
SAP (Dec 2017)
Title: 'SAP Sales and Distribution (SD) Standard Application Benchmark'.
Workloads: SAP ERP
Benchmark Tools: SAPSD
Hardware: All-Flash HyperFlex HX240c M4, single node, HX 2.6
ESG Lab (Mar 2017)
Title: 'Hyperconverged Infrastructure with Consistent High Performance for Virtual Machines'.
Workloads: MSSQL OLTP
Benchmark Tools: Vdbench (MSSQL)
Hardware: Hybrid+All-Flash HyperFlex HX220c M4, 4-node cluster, HX 2.0
|
ESG Lab (Oct 2016)
ESG Lab (Oct 2016)
Title: 'Optimize Hyper-converged Infrastructure with Dell EMC ScaleIO Software-defined Storage'
Workloads: Generic, Oracle OLTP
Benchmark Tools: FIO, SLOB
Hardware: Hybrid Intel servers, 4-8 node cluster, ScaleIO 2.0.0
|
|
|
Evaluation Methods
Details
|
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
SANsymphony is freely downloadable after registering online and offers full platform support (complete Enterprise feature set), but is restricted in scale (4 nodes), capacity (16 TB) and time (30 days); all of these restrictions can be extended upon request. The free trial version of SANsymphony can be installed on all commodity hardware platforms that meet the hardware requirements.
For more information please go here: https://www.datacore.com/try-it-now/
|
Online Labs
Proof-of-Concept (PoC)
Cisco has a few online HyperFlex simulators within its Demo Cloud (dcloud) environment.
|
Free Download (forever)
Proof-of-Concept (PoC)
PowerFlex block storage software is available for free and for an unlimited time, without any capacity restrictions. The free version of PowerFlex is restricted to non-production usage and as such does not contain any maintenance/support.
|
|
|
|
Deploy |
|
|
|
Deployment Architecture
Details
|
Single-Layer
Dual-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
Single-Layer:
- SANsymphony is implemented as a virtual machine (VM) or, in the case of Hyper-V, as a service layer on the Hyper-V parent OS, managing internal and/or external storage devices and providing virtual disks back to the hypervisor cluster it is implemented in. DataCore calls this a hyper-converged deployment.
Dual-Layer:
- SANsymphony is implemented as bare metal nodes, managing external storage (SAN/NAS approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a traditional deployment.
- SANsymphony is implemented as bare metal nodes, managing internal storage devices (server-SAN approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a converged deployment.
Mixed:
- SANsymphony is implemented in any combination of the above 3 deployments within a single management entity (Server Group) acting as a unified storage grid. DataCore calls this a hybrid-converged deployment.
|
Single-Layer (primary)
Dual-Layer (secondary)
Single-Layer: Cisco HyperFlex is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
Cisco HyperFlex can also serve in a dual-layer model by providing storage to non-HyperFlex hypervisor hosts (Please view the compute-only scale-out option for more information).
|
Single-Layer
Dual-Layer
There are two main PowerFlex components to consider:
- Storage Data Server (SDS) that is used to present storage volumes
- Storage Data Client (SDC) that is used to access storage volumes that are presented
Therefore PowerFlex can be setup in two ways:
1. By installing both the SDS and the SDC component on the same server, creating a hyper-converged single-layer configuration.
2. By installing the SDS and SDC component on separate servers, creating a traditional dual-layer configuration.
The flexibility of the PowerFlex platform allows the two configurations to be mixed within the same environment.
|
|
|
Deployment Method
Details
|
BYOS (some automation)
BYOS = Bring-Your-Own-Server-Hardware
Deployment of DataCore SANsymphony is made easy by a very straightforward implementation approach.
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by Cisco, customer deployments can be executed in hours instead of days.
For initial deployment, Cisco expanded end-to-end automation across network, compute, hypervisor and storage in HX 1.8 and refined this in HX 2.0.
HX 3.0 introduced the ability for centralized global deployment from the cloud, delivered through Cisco Intersight.
|
BYOS (some automation)
BYOS = Bring-Your-Own-Server-Hardware
|
|
|
Workload Support
|
|
|
|
|
|
|
Virtualization |
|
|
|
Hypervisor Deployment
Details
|
Virtual Storage Controller
Kernel (Optional for Hyper-V)
The SANsymphony controller is deployed as a pre-configured virtual machine on top of each server that acts as part of the SANsymphony storage solution and commits its internal and/or externally connected storage to the shared resource pool. The Virtual Storage Controller (VSC) can be configured with direct access to the physical disks, so the hypervisor does not impede the I/O flow.
In Microsoft Hyper-V environments the SANsymphony software can also be installed in the Windows Server Root Partition. DataCore does not recommend installing SANsymphony in a Hyper-V guest VM as it introduces virtualization layer overhead and obstructs DataCore Software from directly accessing CPU, RAM and storage. This means that installing SANsymphony in the Windows Server Root Partition is the preferred deployment option. More information about the Windows Server Root Partition can be found here: https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/hyper-v-server/architecture
The DataCore software can be installed on Microsoft Windows Server 2019 or lower (all versions down to Microsoft Windows Server 2012/R2).
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host; these components work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires an upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Virtual Storage Controller
Cisco HyperFlex uses Virtual Storage Controller (VSC) VMs on the VMware vSphere and Microsoft Hyper-V hypervisor platforms.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host; these components work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires an upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
vSphere: Virtual Storage Controller + kernel module
Hyper-V/KVM: OS Drivers and packages
VMware vSphere: The PowerFlex virtual machine (SVM) is deployed as a pre-configured virtual machine on top of each server that acts as part of the PowerFlex storage solution and commits its internal storage to the shared resource pool. The Virtual Storage Controller (VSC) has direct access to the physical disks, so the hypervisor does not impede the I/O flow.
Hyper-V/KVM: The PowerFlex components can be installed and configured on multiple nodes from one central server via a web client by using PowerFlex Installation Manager (IM). IM has a REST API that enables install, extend, and uninstall functionalities.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host; these components work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires an upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
|
|
Hypervisor Compatibility
Details
|
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
'Not qualified' means there is no generic support qualification due to limited market footprint of the product. However, a customer can always individually qualify the system with a specific SANsymphony version and will get full support after passing the self-qualification process.
Only products explicitly labeled 'Not Supported' have failed qualification or have shown incompatibility.
|
VMware vSphere ESXi 6.0U3/6.5U2/6.7U2
Microsoft Hyper-V 2016/2019
NEW
Cisco HyperFlex 3.0 introduced support for Microsoft Hyper-V.
Cisco HyperFlex 3.01b added support for VMware vSphere 6.5U2.
Cisco HyperFlex 3.5.2 added support for VMware vSphere 6.7U1.
Cisco HyperFlex 4.0 introduces support for VMware vSphere 6.7U2 and Microsoft Hyper-V 2019.
|
VMware vSphere ESXi 6.5-6.7U3*
Microsoft Hyper-V 2012R2/2016/2019**
Citrix Hypervisor 7.1.2/7.6/8.0./8.1 (XenServer)
NEW
PowerFlex supports all major hypervisor platforms.
*At this time PowerFlex 3.5 only supports VMware vSphere 7 in dual-layer configurations (SDC core component). Support for VMware vSphere 7 (SDS core component) is planned in a 2H 2020 release of PowerFlex Manager.
**PowerFlex only supports Microsoft Windows Server + Hyper-V dual-layer configurations (SDC core component). Single-layer deployments (SDS core component) are not supported.
SDC = Storage Data Client
SDS = Storage Data Server
|
|
|
Hypervisor Interconnect
Details
|
iSCSI
FC
The SANsymphony software-only solution supports both iSCSI and FC protocols to present storage to hypervisor environments.
DataCore SANsymphony supports:
- iSCSI (Switched and point-to-point)
- Fibre Channel (Switched and point-to-point)
- Fibre Channel over Ethernet (FCoE)
- Switched, where host uses Converged Network Adapter (CNA), and switch outputs Fibre Channel
|
NFS
SMB
In virtualized environments, in-guest iSCSI support is still a hard requirement if one of the following scenarios is pursued:
- Microsoft Failover Clustering (MSFC) in a VMware vSphere environment
- A supported MS Exchange 2013 Environment in a VMware vSphere environment
Microsoft explicitly does not support NFS in either scenario.
|
PowerFlex
PowerFlex uses a proprietary block storage and metadata protocol over TCP/IP. It is NOT iSCSI due to PowerFlex’s distributed nature.
|
|
|
|
Bare Metal |
|
|
|
Bare Metal Compatibility
Details
|
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Any operating system currently not qualified for support can always be individually qualified with a specific SANsymphony version and will get full support after passing the self-qualification process.
SANsymphony provides virtual disks (block storage LUNs) to all of the popular host operating systems that use standard disk drives with 512 byte or 4K byte sectors. These hosts can access the SANsymphony virtual disks via SAN protocols including iSCSI, Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE).
Mainframe operating systems such as IBM z/OS, z/TPF, z/VSE or z/VM are not supported.
SANsymphony itself runs on Microsoft Windows Server 2012/R2 or higher.
|
N/A
Cisco HyperFlex does not support any non-hypervisor platforms.
|
Microsoft Windows Server
Linux Distributions
IBM AIX
NEW
Supported Windows versions (SDC core component only):
Windows Server 2012R2/2016/2019
Supported Linux versions:
RHEL 6.9/6.10/7.5/7.6/7.7/7.8/8.0/8.1/8.2
CentOS 6.9/6.10/7.5/7.6/7.7/7.8/8.0/8.1/8.2
Oracle Linux 6.9/6.10/7.5/7.6/7.7
SLES 11SP4/12SP4/12SP5/15/15SP1
Ubuntu 16.04.6/18.04.2/18.04.3
Supported AIX versions (SDC core component only):
AIX 7.2 TL3
SDC = Storage Data Client
|
|
|
Bare Metal Interconnect
Details
|
iSCSI
FC
FCoE
|
N/A
Cisco HyperFlex does not support any non-hypervisor platforms.
|
Block Device Driver
The PowerFlex Data Client (SDC) component is installed on application servers that are going to consume storage.
The PowerFlex Data Server (SDS) component is installed on storage servers that are used to contribute local storage to the shared resource pool.
|
|
|
|
Containers |
|
|
|
Container Integration Type
Details
|
Built-in (native)
DataCore provides its own volume plugin for native Docker container support, available on Docker Hub.
DataCore also has a native CSI integration with Kubernetes, available on Github.
|
Built-in (native)
Cisco developed its own container platform software called 'Cisco Container Platform' (CCP). CCP provides on-premises Kubernetes-as-a-Service (KaaS) in order to enable end-users to quickly adopt container services.
Cisco Container Platform (CCP) is not a hard requirement for running Docker containers and Kubernetes on top of HX, however it does make it easier to use and consume.
|
Built-in (native)
Dell EMC provides its own software plugins for container support (both Docker and Kubernetes).
|
|
|
Container Platform Compatibility
Details
|
Docker CE/EE 18.03+
Docker EE = Docker Enterprise Edition
|
Docker EE 1.13+
Cisco Container Platform (CCP) supports deployment of Kubernetes clusters on HyperFlex IaaS (VMware). The Kubernetes pods leverage the Docker container platform as the runtime environment.
Cisco Container Platform (CCP) is not a hard requirement for running Docker containers and Kubernetes on top of HX, however it does make it easier to use and consume.
Docker EE = Docker Enterprise Edition
|
Docker EE 1.12+
Mesos
Docker EE = Docker Enterprise Edition
|
|
|
Container Platform Interconnect
Details
|
Docker Volume plugin (certified)
The DataCore SDS Docker Volume plugin (DVP) enables Docker containers to use storage persistently; in other words, it enables SANsymphony data volumes to persist beyond the lifetime of a container or a container host. DataCore leverages SANsymphony iSCSI and FC to provide storage to containers, which effectively means that the hypervisor layer is bypassed (a usage sketch follows below).
The DataCore SDS Docker Volume plugin (DVP) is officially 'Docker Certified' and can be downloaded from the Docker Hub. The plugin is installed inside the Docker host, which can be either a VM or a bare metal host connected to a SANsymphony storage cluster.
For more information please go to: https://hub.docker.com/plugins/datacore-sds-volume-plugin
The Kubernetes CSI plugin can be downloaded from GitHub. The plugin is automatically deployed as several pods within the Kubernetes system.
For more information please go to: https://github.com/DataCoreSoftware/csi-plugin
Both plugins are supported with SANsymphony 10 PSP7 U2 and later.
|
HX FlexVolume Driver
The Cisco HX FlexVolume Driver provides persistent storage for containers running in a Cisco Container Platform (CCP) environment. The driver communicates with an API of the HX Virtual Storage Controller and provides storage request details through use of a YAML file. Storage is presented to containers by HyperFlex through in-guest iSCSI connections. This effectively means that the hypervisor layer is bypassed.
Cisco Container Platform (CCP) is not a hard requirement for running Docker containers and Kubernetes on top of HX, however it does make it easier to use and consume.
The Cisco HX FlexVolume Driver is supported with HX 3.0 and later.
|
Docker Volume Plugin (certified)
The 'REX-Ray for PowerFlex' plugin is a block volume plugin that connects containers to persistent storage served by PowerFlex.
The 'REX-Ray for PowerFlex' plugin is officially 'Docker Certified' and can be downloaded from the online Docker Store.
|
|
|
Container Host Compatibility
Details
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The DataCore native plug-ins are container-host centric and as such can be used across all SANsymphony-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, KVM, XenServer, Oracle VM Server) as well as on bare metal platforms.
|
Virtualized container hosts on VMware vSphere hypervisor
Because Cisco HyperFlex currently does not offer bare-metal support, Cisco Container Platform (CCP) on HyperFlex cannot be used for bare metal hosts running containers.
Cisco Container Platform (CCP) on HyperFlex only supports the VMware vSphere hypervisor at this time.
Cisco Container Platform (CCP) is not a hard requirement for running Docker containers and Kubernetes on top of HX, however it does make it easier to use and consume.
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The PowerFlex native plugins are container-host centric and as such can be used across all PowerFlex-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, Linux KVM and Citrix XenServer) as well as on bare metal platforms.
|
|
|
Container Host OS Compatibility
Details
|
Linux
All Linux versions supported by Docker CE/EE 18.03+ or higher can be used.
|
Ubuntu Linux 16.04.3 LTS
A Kubernetes tenant cluster consists of 1 master and 2 worker nodes at minimum in Cisco HyperFlex environments. The nodes run Ubuntu Linux 16.04.3 LTS as the operating system.
|
CentOS
CoreOS
Debian
Red Hat
Ubuntu
'REX-Ray for PowerFlex' has been qualified for the mentioned Linux operating systems.
Container hosts running the Windows OS are not (yet) supported.
Supported CoreOS versions (SDC core component only):
CoreOS 2191.5.0
|
|
|
Container Orch. Compatibility
Details
|
Kubernetes 1.13+
|
Kubernetes
Cisco Container Platform (CCP) configuration consists of 1 master and 3 worker nodes for the CCP control plane (one VM for each HyperFlex cluster node). The CCP nodes are deployed from a VMware OVF template.
From the CCP control plane Kubernetes 1.9.2+ tenant clusters can be deployed. A Kubernetes tenant cluster consists of 1 master and X worker nodes.
|
Kubernetes 1.6+
|
|
|
Container Orch. Interconnect
Details
|
Kubernetes CSI plugin
The DataCore Kubernetes CSI plugin integrates SANsymphony storage into Kubernetes for containers to consume.
DataCore SANsymphony provides native, industry-standard block protocol storage presented over either iSCSI or Fibre Channel. YAML files can be used to configure Kubernetes for use with DataCore SANsymphony.
|
HX-CSI Plugin
NEW
Cisco HyperFlex 4.0 introduces support for the HyperFlex CSI plugin based on the Kubernetes Container Storage Interface (CSI) specification. The HX-CSI plugin is leveraged to provision and manage persistent volumes in Kubernetes v1.13 and later. The Cisco HyperFlex CSI plugin driver is deployed as containers.
Before CSI, volume plugins were 'in-tree', meaning their code was part of the core Kubernetes code and shipped with the core Kubernetes binaries. Storage vendors wanting to add support for their storage system to Kubernetes (or even fix a bug in an existing volume plugin) were forced to align with the Kubernetes release process. In addition, third-party storage code caused reliability and security issues in the core Kubernetes binaries, and the code was often difficult (and in some cases impossible) for Kubernetes maintainers to test and maintain. CSI is 'out-of-tree', meaning that third-party storage providers can write and deploy plugins exposing new storage systems in Kubernetes without ever having to touch the core Kubernetes code. This gives Kubernetes users more options for storage and makes the system more secure and reliable.
The HX FlexVolume Driver, supported with HX 3.0 and HX 3.5, is hereby deprecated. The HX FlexVolume Driver was an external volume driver for Kubernetes. It ran in a K8S Node VM and provisioned a requested persistent volume that was compatible with the Kubernetes iSCSI volume.
|
PowerFlex-CSI Plugin
VxFlex OS 3.0 introduced support for the PowerFlex CSI plugin based on the Kubernetes Container Storage Interface (CSI) specification. The PowerFlex-CSI plugin is leveraged to provision and manage persistent volumes in Kubernetes v1.13 and later.
Before CSI, volume plugins were 'in-tree', meaning their code was part of the core Kubernetes code and shipped with the core Kubernetes binaries. Storage vendors wanting to add support for their storage system to Kubernetes (or even fix a bug in an existing volume plugin) were forced to align with the Kubernetes release process. In addition, third-party storage code caused reliability and security issues in the core Kubernetes binaries, and the code was often difficult (and in some cases impossible) for Kubernetes maintainers to test and maintain. CSI is 'out-of-tree', meaning that third-party storage providers can write and deploy plugins exposing new storage systems in Kubernetes without ever having to touch the core Kubernetes code. This gives Kubernetes users more options for storage and makes the system more secure and reliable.
|
|
|
|
VDI |
|
|
|
VDI Compatibility
Details
|
VMware Horizon
Citrix XenDesktop
There is no validation check being performed by SANsymphony for VMware Horizon or Citrix XenDesktop VDI platforms. This means that all versions supported by these vendors are supported by DataCore.
|
VMware Horizon
Citrix XenDesktop
Cisco has published Reference Architecture whitepapers for both VMware Horizon and Citrix XenDesktop platforms.
|
VMware Horizon
Citrix XenDesktop
Dell EMC has not published any recent PowerFlex Reference Architecture whitepapers for either the VMware Horizon or Citrix XenDesktop platform.
|
|
|
|
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
DataCore has not published any recent VDI reference architecture whitepapers. The only VDI-related paper that includes a Login VSI benchmark dates back to December 2010, in which a 2-node SANsymphony cluster was able to sustain a load of 220 VMs based on the Login VSI 2.0.1 benchmark.
|
VMware: up to 137 virtual desktops/node
Citrix: up to 125 virtual desktops/node
VMware Horizon 7.6: Load bearing number is based on Login VSI tests performed on all-flash HX220c M5 appliances using 2vCPU Windows 10 desktops and the Knowledge Worker profile.
Citrix XenDesktop 7.16: Load bearing number is based on Login VSI tests performed on all-flash HX220c M5 appliances using 2vCPU Windows 10 desktops and the Knowledge Worker profile.
For detailed information please view the corresponding whitepapers.
|
VMware: unknown
Citrix: unknown
Dell EMC has not published any recent PowerFlex Reference Architecture whitepapers for either the VMware Horizon or Citrix XenDesktop platform.
|
|
|
Server Support
|
|
|
|
|
|
|
Server/Node |
|
|
|
Hardware Vendor Choice
Details
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
Cisco
NEW
Cisco HyperFlex (HX) compute+storage nodes are based on Cisco UCS C220 M5 and Cisco UCS C240 M5 rack server hardware. M4 server hardware reached End-of-Life (EOL) status on February 14th 2019. This means that end users cannot acquire M4 hardware any longer.
Cisco HyperFlex (HX) compute-only nodes are based on Cisco UCS B200 M4/M5, B260 M4/M5, B420 M4/M5 and B460 M4/M5 blade server hardware. The Cisco C220 M4/M5, C240 M4/M5 and C460 M4/M5 rack servers can optionally be used as compute-only nodes.
Cisco HyperFlex 4.0 introduces support for C480 ML compute-only nodes that serve in Deep Learning / Machine Learning environments.
|
Many
PowerFlex does not have an HCL and instead provides minimum requirements for hardware resources.
|
|
|
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
9 storage models:
HX220x Edge M5, HX220c M4/M5, HX240c M4/M5, HXAF220c M4/M5, HXAF240c M4/M5
8 compute-only models: B2x0 M4/M5, B4x0 M4/M5, C2x0 M4/M5, C4x0 M4/M5, C480 ML
NEW
HX220x Edge M5 are 1U building blocks.
HX220c M5 and HXAF220c M5 are 1U building blocks.
HX240c M5 and HXAF240c M5 are 2U building blocks.
A maximum of eight B200 M4/M5 blade servers fit in a Cisco UCS 5108 6U Blade Chassis.
|
Many
PowerFlex does not have an HCL and instead provides minimum requirements for hardware resources.
|
|
|
|
1, 2 or 4 nodes per chassis
Note: Because SANsymphony is mostly hardware agnostic, customers can opt for multiple server densities.
Note: In most cases 1U or 2U building blocks are used.
Super Micro, for example, offers a 2U chassis that can house 4 compute nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power, heat and cooling is not necessarily reduced in the same way, and that the concentration of nodes can potentially pose other challenges.
|
HX2x0/HXAF2x0/HXAN2x0: 1 node per chassis
B200: up to 8 nodes per chassis
C2x0: 1 node per chassis
HX220x Edge M5 are 1U building blocks.
HX220c M5, HXAF220c M5 and HXAN220c M5 are 1U building blocks.
HX240c M5 and HXAF240c M5 are 2U building blocks.
A maximum of eight B200 M4/M5 blade servers fit in a Cisco UCS 5108 6U Blade Chassis.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power, heat and cooling is not necessarily reduced in the same way, and that the concentration of nodes can potentially pose other challenges.
|
1, 2 or 4 nodes per chassis
Because PowerFlex is mostly hardware agnostic, customers can opt for multiple server densities.
Note: In most cases 1U or 2U building blocks are used.
Super Micro, for example, offers a 2U chassis that can house 4 compute nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power, heat and cooling is not necessarily reduced in the same way, and that the concentration of nodes can potentially pose other challenges.
|
|
|
|
Yes
DataCore does not explicitly recommend using different hardware platforms, but as long as the hardware specs are comparable, there is no reason to insist on a single hardware vendor. This is proven in practice: some customers run their production DataCore environment on comparable servers from different vendors.
|
Partial
Cisco supports mixing nodes with Intel v3 and Intel v4 processors within the same storage cluster. Also M4 and M5 nodes can be mixed within the same cluster. HyperFlex Edge does not support mixed M4/M5 clusters.
Mixing of HX220c and HX240c models is not allowed inside a single storage cluster (homogeneous setup).
Mixing of HX2x0c, HXAF2x0c and HXAN2x0c models is not allowed inside a single storage cluster (homogeneous setup).
Multiple homogenous HyperFlex storage clusters can be used in a single vCenter environment. The current maximum is 100.
Cisco HyperFlex supports up to 8 clusters on a single HX FI Domain.
|
Yes
PowerFlex allows for mixing different server hardware in a single solution; also PowerFlex allows different volume types (HDD-only, Hybrid, All-Flash) to exist within a single solution.
|
|
|
|
Components |
|
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible
NEW
M5: Choice of 1st generation Intel Xeon Scalable (Skylake) processors (1x or 2x per node).
Although Cisco does support 2nd generation Intel Xeon Scalable (Cascade Lake) processors in its UCS server line-up as of April 2019, Cisco HyperFlex nodes do not yet ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
|
Flexible
Dell EMC provides minimum hardware requirements in its PowerFlex documentation.
|
|
|
|
Flexible
|
Flexible
NEW
HX220x M5 Edge: 192GB - 3.0TB per node.
HX220c M5: 192GB - 3.0TB per node.
HX240c M5: 192GB - 3.0TB per node.
HXAF220x M5 Edge: 192GB - 3.0TB per node.
HXAF220c M5: 192GB - 3.0TB per node.
HXAF240c M5: 192GB - 3.0TB per node.
|
Flexible
Dell EMC provides minimum hardware requirements in its PowerFlex documentation.
|
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
HX220c/HXAF220c/HXAN220c: Fixed number of disks
HX240c/HXAF240c: Flexible (number of disks)
NEW
HX220c M5 and HXAF220c M5 1U appliances have 8 SFF disk slots.
HX220x M5 Edge (hybrid/all-flash) are the only systems that support fewer than 6 drives (3-6).
HX 3.5 adds support for Intel Optane NVMe DC SSDs and Cisco HyperFlex All-NVMe appliances. All-NVMe appliances leverage the ultra-fast Intel Optane NVMe drives for caching and Intel 3D NAND NVMe drives for capacity storage.
HX220c M5 (hybrid):
1 x 240GB SATA M.2 SSD for boot
1 x 240GB SATA SSD for system
1 x 480GB SATA SSD, 800GB SAS SSD or 800GB SAS SED SSD for caching
6-8 x 1.2TB/1.8TB/2.4TB SAS 10K HDD or 1.2TB SAS 10k SED HDD for data.
HXAF220c M5 (all-flash):
1 x 240GB SATA M.2 SSD for boot
1 x 240GB SATA SSD for system/log
1 x 375GB Optane/400GB/1.6TB SAS SSD, 1.6TB NVMe SSD or 800GB SAS SED SSD for caching
6-8 x 960GB/3.8TB SATA SSD or 800GB SAS/960GB SATA/3.8TB SATA SED SSD for data
HXAN220c M5 (all-NVMe):
1 x 240GB SATA M.2 SSD for boot
1 x 375GB NVMe for system/log
1 x 1.6TB NVMe SSD for caching
6-8 x 1.0TB/4.0TB NVMe SSD for data
HX240c M5 and HXAF240c M5 2U appliances have 24 front-mounted SFF disk slots and 1 internal SFF disk slot. The storage configuration is flexible.
HX240c M5 SFF (hybrid):
1 x 240GB SATA M.2 SSD for boot
1 x 240GB SATA SSD for system
1 x 1.6TB SAS/SATA SSD or 1.6TB SAS SED SSD for caching
6-23 x 1.2TB/1.8TB/2.4TB SAS 10K HDD or 1.2TB SAS 10k SED HDD for data
HX240c M5 LFF (hybrid):
1 x 240GB SATA M.2 SSD for boot
1 x 240GB SATA SSD for system
1 x 3.2TB SATA SSD for caching
6-12 x 6.0TB/8.0TB/12TB SATA 7.2K HDD
HXAF240c M5 (all-flash):
1 x 240GB SATA M.2 SSD for boot
1 x 240GB SATA SSD for system/log
1 x 375GB Optane/400GB/1.6TB SAS SSD, 1.6TB NVMe SSD or 800GB SAS SED SSD for caching
6-23 x 960GB/3.8TB SATA SSD or 800GB SAS/960GB SATA/3.8TB SATA SED SSD for data
AF = All-Flash
AN = All-NVMe
SED = Self-Encrypting Drive
SFF = Small Form Factor
|
Flexible
PowerFlex supports magnetic (HDD) and solid-state disks (SSD), as well as flash PCI Express (PCIe) cards.
|
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible: M5:10/40GbE; M5 Edge:1/10GbE; FC (optional)
Cisco HyperFlex: Both HX220c and HX240c models are equipped with a dual-port SFP+ adapter for handling storage cluster data traffic. M4 Models come with a dual-port 10Gbps adapter, whereas M5 models sport 40Gbps adapters that can be converted to 10Gbps by use of Cisco QSFP to SFP or SFP+ Adapters (QSAs).
Cisco HyperFlex Edge: The HX220c models are equipped with both a dual-port 10Gbps SFP+ adapter and a dual-port 1GbE adapter. Either can be connected and actively used.
HX 3.0 added support for a second NIC in HX nodes on a RPQ basis. HX 3.5 supports this unconditionally and the second NIC is now a part of the HX installer and deployment is automated.
Initial Cisco HX configurations are always packaged and sold with Cisco UCS Fabric Interconnect network switches (6200/6300 series). The HX servers can therefore be managed centrally (=Cisco UCS-managed).
Cisco HyperFlex supports FC connections from external SANs.
|
Flexible
PowerFlex supports:
• 1, 10, 25, 40, 100 gigabit networks;
• IP-over-InfiniBand networks.
The use of dual-port network interface cards is recommended.
Management and data storage traffic can be performed across the same IP network or across separate IP networks.
|
|
|
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
DataCore SANsymphony supports the hardware that is on the hypervisor HCL.
VMware vSphere 6.5U1 officially supports several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
Windows 2016 supports two graphics virtualization technologies available with Hyper-V to leverage GPU hardware:
- Discrete Device Assignment
- RemoteFX vGPU
More information is provided here: https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-graphics-virtualization
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
NVIDIA Tesla (HX240c only)
AMD FirePro (HX240c only)
NEW
The following NVIDIA GPU cards can be ordered along with the Cisco HX240c M4 / HXAF240c M4 models (maximum is 2 per node):
- NVIDIA Tesla M10
- NVIDIA Tesla M60
The following GPU cards can be ordered along with the Cisco HX240c M5 / HXAF240c M5 models (maximum is 2 per node):
- NVIDIA Tesla M10
- NVIDIA Tesla P40
- NVIDIA Tesla P100
- NVIDIA Tesla V100
- AMD FirePro S7150X2
NVIDIA Tesla P100 GPU is optimal for HPC workloads.
NVIDIA Tesla V100 GPU is optimal for AI/ML workloads.
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
PowerFlex supports the hardware that is on the hypervisor HCL.
VMware vSphere 6.5U1 officially supports several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
Windows 2016 supports two graphics virtualization technologies available with Hyper-V to leverage GPU hardware:
- Discrete Device Assignment
- RemoteFX vGPU
More information is provided here: https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-graphics-virtualization
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
|
|
|
Scaling |
|
|
|
|
CPU
Memory
Storage
GPU
The SANsymphony platform allows for expanding of all server hardware resources.
|
HX220c/HXAF220c/HXAN220c: CPU, Memory, Network
HX240c/HXAF240c: CPU, Memory, Storage, Network, GPU
A HX220c node has 8 front-mounted SFF disk slots; In the M4 series 2 disk slots are reserved for SSDs. This effectively means that each node can have up to 6 HDDs installed; M5 series can have up to 8 HDDs installed. Initial configurations have 6 to 8 HDDs installed; exception is the Edge bundle where 3 to 6 HDDs can be installed.
A HX240c SFF node has 24 front-mounted SFF disk slots; 1 disk slot is reserved for SSD. This effectively means that each node can have up to 23 HDDs installed. Initial bundle configurations have either 11 or 15 HDDs installed. Custom configurations have 6 to 23 HDDs installed. In addition a HX240c M5 SFF node has 2 rear SFF disk slots.
A HX240c LFF node has 12 front-mounted LFF disk slots. This effectively means that each node can have up to 12 high-capacity HDDs installed. Initial bundle configurations have either 6 or 12 HDDs installed. Custom configurations have 6 to 12 HDDs installed. In addition a HX240c M5 LFF node has 2 rear SFF disk slots. 1 rear SFF disk slot is reserved for SSD.
A HXAF220c/HXAN220c node has 8 front-mounted SFF disk slots; In the M4 series 2 disk slots are reserved for non-data SSDs. This effectively means that each node can have up to 6 data SSDs installed; M5 series can have up to 8 data SSDs installed. Initial configurations have 6 to 8 SSDs installed.
A HXAF240c node has 24 front-mounted SFF disk slots; 1 is reserved for a non-data SSD. In addition a HXAF240c M5 node has 2 rear SFF disk slots for non-data SSDs. This effectively means that each node could have up to 23 data SSDs installed. However, in M4 systems only up to 10 3.8TB data SSDs can be configured.
HX 3.0 added support for a second NIC in HX nodes on a RPQ basis. HX 3.5 supports this unconditionally and the second NIC is now a part of the HX installer and deployment is automated.
LFF = Large Form Factor
SFF = Small Form Factor
|
CPU
Memory
Storage
GPU
|
|
|
|
Storage+Compute
Compute-only
Storage-only
Storage+Compute: In a single-layer deployment existing SANsymphony clusters can be expanded by adding additional nodes running SANsymphony, which adds additional compute and storage resources to the shared pool. In a dual-layer deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded simultaneously.
Compute-only: Because SANsymphony leverages virtual block volumes (LUNs), storage can be presented to hypervisor hosts not participating in the SANsymphony cluster. This is also beneficial to migrations, since it allows for online storage vMotions between SANsymphony and non-SANsymphony storage platforms.
Storage-only: In a dual-layer or mixed deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded independent from each other.
|
Compute+storage
Compute-only (IO Visor)
Storage+Compute: Existing Cisco HyperFlex clusters can be expanded by adding additional HX nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: The IO Visor module is a vSphere Installation Bundle (VIB) that provides a network file system (NFS) mount point so that the ESXi hypervisor can access the virtual disk drives that are attached to individual virtual machines. From the hypervisor’s perspective, it is simply attached to a network file system.
The IO Visor module is installed on each storage node as well as each compute-only node in order to allow fast access to the HX distributed file system (LogFS). Up to 8 hybrid or 16 all-flash Cisco UCS B2x0/B4x0/C2x0/C4x0 nodes can accommodate a compute-only role within a single storage cluster.
Storage-only: N/A; A Cisco HyperFlex node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
Cisco HyperFlex Edge: The initial configuration cannot be expanded beyond the default configuration, which consists of 3 HX220x Edge M5 rack-servers.
|
Compute+storage
Compute-only (SDC)
Storage-only
PowerFlex supports both single-layer (hyper-converged) and dual-layer architectures, as well as a mix of both. This flexibility allows the platform's Storage Data Servers (SDSs) to present storage volumes to servers that have the Storage Data Client (SDC) component installed.
Storage+Compute: Existing PowerFlex clusters can be expanded by adding additional PowerFlex nodes that have both SDS and SDC components installed, which adds additional compute and storage resources to the shared pool.
Compute-only: Existing PowerFlex clusters can be expanded by adding additional PowerFlex nodes that only have the SDC component installed, which adds additional compute resources to the shared pool.
Storage-only: Existing PowerFlex clusters can be expanded by adding additional PowerFlex nodes that only have the SDS component installed, which adds additional storage resources to the shared pool.
|
|
|
|
1-64 nodes in 1-node increments
There is a maximum of 64 nodes within a single cluster. Multiple clusters can be managed through a single SANsymphony management instance.
|
vSphere: 2-32 storage nodes in 1-node increments + 0-32 compute-only nodes in 1-node increments
Hyper-V: 2-16 storage nodes in 1-node increments + 0-16 compute-only nodes in 1-node increments
NEW
Supported Cluster Minimums and Maximums:
vSphere non-stretched: 3-32 storage nodes + 0-32 compute-only nodes
vSphere stretched: 2-8 storage nodes + 0-8 compute-only nodes
Hyper-V non-stretched: 3-16 storage nodes + 0-16 compute-only nodes
At maximum a single storage cluster consists of 32x HX220c, 32x HX240c, 32x HXAF220c or 32x HXAF240c nodes.
Cisco HX Data Platform supports up to 8 hybrid storage clusters on one vCenter, which equates to 256 hybrid storage nodes.
A hybrid/all-flash storage node cluster can be extended with up to 8/16 Cisco B200 M4/M5, C220 M4/M5 or C240 M4/M5 compute-only nodes. These nodes require the 'IO Visor' software installed in order to access the HX Data Platform.
IO Visor: This vSphere Installation Bundle (VIB) provides a network file system (NFS) mount point so that the ESXi hypervisor can access the virtual disk drives that are attached to individual virtual machines. From the hypervisor’s perspective, it is simply attached to a network file system.
Cisco HyperFlex Edge: The storage node cluster configuration consists of 2, 3 or 4 HX220x Edge M5 rack-servers.
|
3-512 storage nodes in 1-node increments
PowerFlex can be used in two ways:
1. By installing the SDS and SDC components on the same server, creating a hyper-converged single-layer configuration.
2. By installing the SDS and SDC components on separate servers, creating a traditional 2-layer configuration.
These configurations can be mixed within the same environment.
The maximum number of SDSs per system is 512 at this time.
|
|
|
Small-scale (ROBO)
Details
|
2 Node minimum
DataCore prevents split-brain scenarios by always having an active-active configuration of SANsymphony with a primary and an alternate path.
In case the SANsymphony servers are fully operational but cannot see each other, the application host will still be able to read and write data via the primary path (no switch to secondary). The mirroring is interrupted because of the lost connection and the administrator is informed accordingly. All writes are stored on the locally available storage (primary path) and all changes are tracked. As soon as the connection between the SANsymphony servers is restored, the mirror will recover automatically based on these tracked changes.
Dual updates due to misconfiguration are detected automatically and data corruption is prevented by freezing the vDisk and waiting for user input to resolve the conflict. The conflict can be resolved by declaring one side of the mirror to be the new active data set and discarding all tracked changes on the other side, or by splitting the mirror and manually merging the two data sets into a third vDisk.
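To make the tracked-changes recovery flow above more concrete, the following Python sketch models it in simplified form. It is not DataCore code; the class and method names (MirroredVDisk, write, reconnect) are hypothetical and only illustrate how writes could be tracked while the mirror link is down, replayed on reconnect, and frozen when dual updates are detected.

```python
# Simplified model of the mirror link-loss behaviour described above.
# Not DataCore code: all names are hypothetical.

class MirroredVDisk:
    def __init__(self):
        self.primary = {}             # block -> data on the primary path
        self.secondary = {}           # block -> data on the alternate path
        self.link_up = True
        self.tracked_changes = set()  # (side, block) changed while the link was down
        self.frozen = False           # set when conflicting dual updates are detected

    def write(self, block, data, side="primary"):
        if self.frozen:
            raise RuntimeError("vDisk frozen: resolve the mirror conflict first")
        local = self.primary if side == "primary" else self.secondary
        local[block] = data
        if self.link_up:
            # Normal operation: synchronous mirror to the other side.
            other = self.secondary if side == "primary" else self.primary
            other[block] = data
        else:
            # Link down: keep serving I/O locally and track the change.
            self.tracked_changes.add((side, block))

    def reconnect(self):
        """Replay tracked changes once the mirror link is restored."""
        self.link_up = True
        if len({side for side, _ in self.tracked_changes}) > 1:
            self.frozen = True        # dual update detected: wait for user input
            return
        for side, block in list(self.tracked_changes):
            src = self.primary if side == "primary" else self.secondary
            dst = self.secondary if side == "primary" else self.primary
            dst[block] = src[block]
        self.tracked_changes.clear()
```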
|
2 Node minimum
NEW
Next to acquiring individual nodes Cisco also offers a bundle that is aimed at small ROBO deployments, HyperFlex Edge.
Previously, Cisco HyperFlex Edge clusters consisted of 3 HX220x Edge M5 hybrid nodes with 1GbE or 10GbE connectivity. This configuration could not be expanded.
HX 4.0 introduces Cisco HyperFlex Edge clusters consisting of 2, 3 or 4 HX220x Edge M5 all-flash nodes with 1GbE or 10GbE connectivity. 2-node clusters are monitored by the Cisco Intersight Invisible Cloud Witness, eliminating the need for witness VMs as well as the infrastructure and life cycle management for those VMs.
|
3 Node minimum
PowerFlex's smallest deployment consists of 3 nodes.
|
|
|
Storage Support
|
|
|
|
|
|
|
General |
|
|
|
|
Block Storage Pool
SANsymphony only serves block devices to the supported OS platforms.
|
Distributed File System (DFS)
The Cisco HX platform uses a Distributed Log-structured File System called StorFS.
|
Block Pool
PowerFlex only serves block devices as storage volumes to the supported OS platforms.
The internal PowerFlex Metadata Manager (MDM) holds cluster-wide component mapping.
|
|
|
|
Partial
DataCore's core approach is to provide storage resources to the applications without having to worry about data locality. But if data locality is explicitly requested, the solution can partially be designed that way by configuring the first instance of all data to be stored on locally available storage (primary path) and the mirrored instance to be stored on the alternate path (secondary path). Furthermore, every hypervisor host can have a local preferred path, indicated by the ALUA path preference.
By default data does not automatically follow the VM when the VM is moved to another node. However, virtual disks can be relocated on the fly to another DataCore node without losing I/O access, although this relocation takes some time due to the data copy operations required. This kind of relocation is usually done manually, but such tasks can be automated and integrated with VM orchestration, for example using PowerShell.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
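As an illustration of the ALUA preferred-path behaviour mentioned above, the sketch below picks the active/optimized (locally preferred) path and only falls back to the alternate path when no optimized path is available. The path names and state strings are hypothetical; in practice this decision is made by the hypervisor's multipathing stack, not by user code.

```python
# Hypothetical ALUA-style path selection: prefer the local (optimized) path,
# fall back to the alternate (non-optimized) path when it is unavailable.

def select_path(paths):
    for state in ("active_optimized", "active_non_optimized"):
        for path in paths:
            if path["state"] == state:
                return path["name"]
    raise RuntimeError("no usable path to the virtual disk")

paths = [
    {"name": "local_node_A",  "state": "active_optimized"},      # primary, local data
    {"name": "remote_node_B", "state": "active_non_optimized"},  # mirrored copy
]
print(select_path(paths))  # -> local_node_A; node B is only used on failover
```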
|
None
The Cisco HX platform uses full dynamic data distribution. This means that data is evenly striped across all nodes within the storage cluster, so data is at most one hop away from the VM. Nodes are connected to each other through the low-latency Cisco Fabric Interconnect (FI) network.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
None
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
|
|
|
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
VoV = Volume-on-Volume; The Virtual Storage Controller uses virtual disks provided by the hypervisor platform.
|
Direct-attached (Raw)
SAN or NAS
Direct-attached: The software takes ownership of the unformatted physical disks available in each HX node.
External SAN/NAS Storage: Cisco HyperFlex supports the connection to external Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), iSCSI and NFS storage through the Fabric Interconnect (FI) switches. Direct connect configurations are not supported. NFS servers have to be listed on the VMware HCL.
|
Direct-attached (Raw, RAID, File)
|
|
|
|
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D Xpoint and/or Flash and/or Magnetic)
NEW
|
Hybrid (Flash+Magnetic)
All-Flash
Hybrid hosts cannot be mixed with All-Flash hosts in the same HyperFlex cluster.
|
Magnetic-Only
Hybrid
All-Flash
|
|
|
Hypervisor OS Layer
Details
|
SD, USB, DOM, SSD/HDD
|
Dual SD cards
SSD (optional for HX240c and HXAF240c systems)
Each HX node comes with two internal 64 GB Cisco Flexible Flash drives (SD cards). These SD cards are mirrored to each other and can be used for booting.
The HX240c and HXAF240c models also offer the choice to boot from a local 240GB M.2 SSD drive that is connected to the motherboard.
|
SD, USB or DOM
VxFlex 3.0 and up cannot be deployed on the 32 GB SATADOM boot device that was sold in the first generation of Dell EMC ScaleIO/VxRack Nodes and VxRack Flex (a hardware solution based on Quanta servers).
DOM = Disk On Module
|
|
|
|
Memory |
|
|
|
|
DRAM
|
DRAM
|
DRAM
|
|
|
|
Read/Write Cache
DataCore SANsymphony accelerates reads and writes by leveraging the powerful processors and large DRAM memory inside current generation x86-64bit servers on which it runs. Up to 8 Terabytes of cache memory may be configured on each DataCore node, enabling it to perform at solid state disk speeds without the expense. SANsymphony uses a common cache pool to store reads and writes in.
SANsymphony read caching essentially recognizes I/O patterns to anticipate which blocks to read next into RAM from the physical back-end disks. That way the next request can be served from memory.
When hosts write to a virtual disk, the data first goes into DRAM memory and is later destaged to disk, often grouped with other writes to minimize delays when storing the data to the persistent disk layer. Written data stays in cache for re-reads.
The cache is cleaned on a first-in-first-out (FIFO) basis. Segment overwrites are performed on the oldest data first for both read- and write-cache segment requests.
SANsymphony prevents the write cache data from flooding the entire cache. In case the write data amount runs above a certain percentage watermark of the entire cache amount, then the write cache will temporarily be switched to write-through mode in order to regain balance. This is performed fully automatically and is self-adjusting, per virtual disk as well as on a global level.
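The FIFO cleanup and write-watermark behaviour described above can be sketched as follows. This is a simplified model, not SANsymphony code: the capacity, the watermark value and the 'backend' dictionary standing in for the persistent disk layer are all illustrative assumptions.

```python
# Simplified write-back cache with FIFO eviction and a dirty-data watermark
# that temporarily switches the cache to write-through mode.
from collections import OrderedDict

class WriteCache:
    def __init__(self, capacity_blocks=100, write_watermark=0.5):
        self.cache = OrderedDict()   # insertion order = FIFO eviction order
        self.capacity = capacity_blocks
        self.watermark = write_watermark
        self.backend = {}            # stands in for the persistent disk layer
        self.dirty = set()           # blocks written but not yet destaged

    def write(self, block, data):
        if len(self.dirty) / self.capacity >= self.watermark:
            self.backend[block] = data   # too much dirty data: write-through
        else:
            self.dirty.add(block)        # write-back: acknowledge from cache
        self._insert(block, data)

    def destage(self):
        """Copy dirty blocks to the persistent layer (grouped in practice)."""
        for block in list(self.dirty):
            self.backend[block] = self.cache[block]
            self.dirty.discard(block)

    def _insert(self, block, data):
        self.cache[block] = data         # existing keys keep their older position
        while len(self.cache) > self.capacity:
            old_block, old_data = self.cache.popitem(last=False)  # oldest first
            if old_block in self.dirty:
                self.backend[old_block] = old_data   # destage before eviction
                self.dirty.discard(old_block)
```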
|
Read Cache
|
Read Cache
PowerFlex calls the use of DRAM as caching layer Read RAM Cache (rmcache). This is a tunable feature: rmcache can be enabled or disabled per volume.
Writes are only buffered in host memory for read-after-write caching. One way to achieve write buffering is to use RAID controllers (e.g. LSI, PMC) that have battery backup for write buffering.
|
|
|
|
Up to 8 TB
The actual size that can be configured depends on the server hardware that is used.
|
Non-configurable
|
Configurable
The read cache size can be set anywhere between 128MB and 300GB per SDS server.
|
|
|
|
Flash |
|
|
|
|
SSD, PCIe, UltraDIMM, NVMe
|
SSD, NVMe
Cisco HyperFlex supports the use of NVMe SSDs for caching in All-Flash systems and for caching as well as persistent storage in All-NVMe systems.
|
SSD, PCIe, UltraDIMM, NVMe
|
|
|
|
Persistent Storage
SANsymphony supports new TRIM / UNMAP capabilities for solid-state drives (SSD) in order to reduce wear on those devices and optimize performance.
|
Hybrid: Log + Read/Write Cache
All-Flash/All-NVMe: Log + Write Cache + Storage Tier
In all Cisco HX hybrid configurations 1 separate SSD per node is used for housekeeping purposes (SDS logs).
In all Cisco HX hybrid and all-flash configurations 1 separate SSD per node is used for caching purposes. The other disks (SSD or HDD) in the node are used for persistent storage of data.
In a hybrid scenario, the caching SSD is primarily used for both read and write caching. However, data written to SSD is only destaged when needed. This means that current data stays available on the SSD layer as long as possible so that reads and writes are fast.
Distributed Read Cache: All SSD caching drives within the HyperFlex storage cluster form one big caching resource pool. This means that all storage nodes can access the entire distributed caching layer to read data.
In an all-flash scenario, the caching SSD (SAS) is primarily used for write caching. Reads are always accessed directly from the capacity SSD (SATA) layer, so a read cache is not required.
|
Read Cache
Write Buffer (Flash Storage Pools)
Storage Tier
PowerFlex calls the use of Flash as caching layer Read Flash Cache (rfcache). This is a tunable feature: rfcache can be enabled or disabled. Rfcache is used to increase read performance and buffers writes to increase the performance of Read-after-Write I/Os.
|
|
|
|
No limit, up to 1 PB per device
The definition of a device here is a raw flash device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
Hybrid: 2 Flash devices per node (1x Cache; 1x Housekeeping)
All-Flash: 9-26 Flash devices per node (1x Cache; 1x System, 1x Boot; 6-23x Data)
All-NVMe: 8-11 NVMe devices per node (1x Cache, 1x System, 6-8 Data)
NEW
In Cisco HyperFlex hybrid configurations each storage node has 2 or 3 SSDs.
HX220c / HX220x Edge:
1 x 240GB SSD for boot
1 x 240GB SSD for system
1 x 480GB/800GB SSD for caching
HX240c:
1 x 240GB SSD for boot
1 x 240GB SSD for system
1 x 1.6TB SSD for caching
Fully Distributed Read Cache: All SSD caching drives within the HyperFlex storage cluster form one big caching resource pool. This means that all storage nodes can access the entire distributed caching layer to read data.
In Cisco HyperFlex all-flash configurations each storage node has 8-26 SSDs.
HXAF220x Edge:
1 x 240GB SSD for boot
1 x 240GB SSD for system/log
1 x 400GB/1.6TB SSD for caching
6-8 x 960GB/3.8TB SSD for data
HXAF220c:
1 x 240GB SSD for boot
1 x 240GB SSD for system/log
1 x 375GB Optane/400GB/800GB/1.6TB SSD for caching
6-8 x 800GB/960GB/3.8TB SSD for data
HXAF240c:
1x 240GB SSD for boot
1x 240GB for system/log
1x 375GB Optane/400GB/800GB/1.6TB SSD for caching
6-23 x 800GB/960GB/3.8TB SSD for data
|
0 - 64 devices per node
Flash devices are not mandatory in a PowerFlex solution.
PowerFlex supports a maximum of 64 devices (disks) per SDS server. This includes both HDD and Flash devices. In VMware vSphere environments the maximum is 59 devices (disks) per SDS server.
|
|
|
|
Magnetic |
|
|
|
|
SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
In this case SATA = NL-SAS = MDL SAS
|
Hybrid: SAS or SATA
NEW
Magnetic disks are used for storing persistent data in a deduplicated and compressed format.
HX220c M5 / HX220x M5 Edge:
- 6-8 x 1.2TB/1.8TB/2.4TB SAS 10K SFF HDD for data
HX240c M5:
- 6-23 x 1.2TB/1.8TB/2.4TB SAS 10K SFF HDD for data
- 6-12 x 6TB/8TB/12TB SATA 7.2K LFF HDD for data
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
LFF = Large Form Factor
SFF = Small Form Factor
|
SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
|
|
|
|
Persistent Storage
|
Persistent Storage
HDD is primarily meant as a high-capacity storage tier.
|
Write Buffer (HDD Storage Pools)
Storage Tier
|
|
|
Magnetic Capacity
Details
|
No limit, up to 1 PB (per device)
The definition of a device here is a raw magnetic disk that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
HX220x Edge M5: 3-6 capacity devices per node
HX220c: 6-8 capacity devices per node
HX240c: 6-23 capacity devices per node
Option for 3-6 HDDs in HX220x M5 Edge hybrid nodes.
Option for 6 HDDs in HX220c M4 nodes.
Option for 6-8 HDDs in HX220c M5 nodes.
Option for 6-23 SFF HDDs or 6-12 LFF HDDs in HX240c M4/M5 nodes.
SFF = Small Form Factor
LFF = Large Form Factor
|
0 - 64 devices per node
PowerFlex supports a maximum of 64 devices (disks) per SDS server. This includes both HDD and Flash devices. In VMware vSphere environments the maximum is 59 devices (disks) per SDS server.
PowerFlex 2.6 supports devices with a capacity up to 8TB.
|
|
|
Data Availability
|
|
|
|
|
|
|
Reads/Writes |
|
|
|
Persistent Write Buffer
Details
|
DRAM (mirrored)
If caching is turned on (default=on), any write will only be acknowledged back to the host after it has been successfully stored in DRAM memory of two separate physical SANsymphony nodes. Based on de-staging algorithms each of the nodes eventually copies the written data that is kept in DRAM to the persistent disk layer. Because DRAM outperforms both flash and spinning disks, the applications experience much faster write behavior.
Per default, the limit of dirty-write-data allowed per Virtual Disk is 128MB. This limit could be adjusted, but there has never been a reason to do so in the real world. Individual Virtual Disks can be configured to act in write-through mode, which means that the dirty-write-data limit is set to 0MB so effectively the data is directly written to the persistent disk layer.
DataCore recommends that all servers running SANsymphony software are UPS protected to avoid data loss through unplanned power outages. Whenever a power loss is detected, the UPS automatically signals this to the SANsymphony node and write behavior is switched from write-back to write-through mode for all Virtual Disks. As soon as the UPS signals that power has been restored, the write behavior is switched to write-back again.
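The mirrored write acknowledgement and the UPS-triggered switch to write-through mode described above can be illustrated with the following sketch. The Node and MirroredWriteBuffer classes and the on_power_event hook are hypothetical; the only point shown is when a write would be acknowledged to the host.

```python
# Hypothetical model: a write is acknowledged only after it is stored in the
# DRAM of two separate nodes; on UPS power loss the buffer switches to
# write-through mode, on power restore back to write-back.

class Node:
    def __init__(self, name):
        self.name = name
        self.dram = {}   # write buffer
        self.disk = {}   # persistent layer

class MirroredWriteBuffer:
    def __init__(self, node_a, node_b):
        self.nodes = (node_a, node_b)
        self.write_through = False   # set while running on UPS battery power

    def write(self, block, data):
        for node in self.nodes:
            node.dram[block] = data          # stored in DRAM on *both* nodes
            if self.write_through:
                node.disk[block] = data      # persist immediately on battery
        return "ACK"                         # acknowledged after both copies exist

    def on_power_event(self, on_battery):
        self.write_through = on_battery      # UPS signal toggles the write mode
```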
|
Flash Layer (SSD, NVMe)
The caching SSDs contain two write logs with a size of 12GB each. At all times 1 write log is active and 1 write log is passive. Writes are always performed to the active write log at the SSD cache layer and when full it gets de-staged to the HDD/SSD capacity layer.
During destaging the data is optimized by deduplication and compression before writing it to the persistent HDD/SSD layer.
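The active/passive write-log mechanism described above can be modelled with a short sketch. This is not Cisco HX code: the log size is reduced to a handful of entries, and the destage step only marks where deduplication and compression would take place.

```python
# Hypothetical model of a caching SSD with two write logs: writes go to the
# active log; when it fills up, the logs swap roles and the full log is
# destaged to the capacity tier.

class CachingSSD:
    def __init__(self, log_capacity=4):
        self.logs = ([], [])      # two write logs on the caching SSD
        self.active = 0           # index of the currently active log
        self.capacity = log_capacity
        self.capacity_tier = {}   # stands in for the persistent HDD/SSD layer

    def write(self, block, data):
        log = self.logs[self.active]
        log.append((block, data))
        if len(log) >= self.capacity:
            self._swap_and_destage()

    def _swap_and_destage(self):
        full = self.active
        self.active = 1 - self.active          # the passive log becomes active
        for block, data in self.logs[full]:
            # Dedup/compression would be applied here before the data lands
            # on the capacity tier; this sketch just stores the raw value.
            self.capacity_tier[block] = data
        self.logs[full].clear()                # drained log becomes the passive one
```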
|
Flash/HDD
The persistent write buffer depends on the type of the storage pool (Flash or HDD).
|
|
|
Disk Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds a seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
1-2 Replicas (2N-3N)
HyperFlex's implementation of replicas is called Replication Factor or RF in short (RF2 = 2N; RF3 = 3N). Maintaining 2 replicas (RF3) is the default method for protecting data that is written to the HyperFlex cluster. It applies to both disk and node failures. This means the HX storage platform can withstand a failure of any two disks or any two nodes within the storage cluster.
An Access Policy can be set to determine how the storage cluster should behave when a second failure occurs and effectively a single point of failure (SPoF) situation is reached:
- The storage cluster goes offline to protect the data.
- The storage cluster goes into read-only mode to facilitate data access.
- The storage cluster stays in read/write mode to facilitate data access as well as data mutations.
The self-healing process after a disk failure kicks in after 1 minute.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated to the active Log on another node. All nodes in the cluster participate in replication. This means that with 3N one instance of data that is written is stored on one node and other instances of that data are stored on two different nodes in the cluster. For all instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When a disk fails, it is marked offline and data is read from another instance instead. At the same time data re-replication of the associated replicas is initiated in order to restore the desired Replication Factor.
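As a rough illustration of Replication Factor placement as described above, the sketch below spreads the instances of a block over distinct nodes with no dedicated partner node. Node names and the rotation-based placement are illustrative assumptions; the platform's actual distribution scheme is internal to HX.

```python
# Hypothetical RF2/RF3 placement: each block gets rf instances on rf distinct
# nodes, distributed over the whole cluster.
import itertools

def place_replicas(block_id, nodes, rf=3):
    start = hash(block_id) % len(nodes)          # any spreading function will do
    ring = itertools.cycle(nodes)
    return list(itertools.islice(ring, start, start + rf))

nodes = ["hx-node-1", "hx-node-2", "hx-node-3", "hx-node-4"]
print(place_replicas("block-42", nodes, rf=3))   # three distinct nodes
```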
|
1 Replica (2N)
+ opt. Hardware RAID
PowerFlex uses replicas to protect data within the cluster. In addition, hardware RAID can be implemented to enhance the robustness of individual nodes.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of the data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. This happens in a fully distributed manner, in other words, there is no dedicated partner node. When a disk fails, it is marked offline and data is read from another instance instead. At the same time data rebuilds of the associated replicas is initiated in order to restore the desired protection level (2N). All nodes participate in the rebuild task. When the failed disk is replaced/revived, the disk is repurposed to resume its original role.
|
|
|
Node Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1)
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds a seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
Logical Availability Zone
HyperFlex 3.0 introduced the concept of Logical Availability Zones (LAZs). It is an optional feature and is turned off by default. LAZ is not user-configurable at this time; the system intelligently assigns nodes to a specific LAZ (4 nodes per LAZ).
Logical Availability Zones (LAZs): When using LAZs, one instance of the data is kept within the local LAZ and another instance of the data is kept within another LAZ. Because of this, the cluster can sustain a greater number of node failures until the cluster shuts down to avoid data loss.
|
1 Replica (2N)
+ opt. Hardware RAID
PowerFlex uses replicas to protect data within the cluster. In addition, hardware RAID can be implemented to enhance the robustness of individual nodes.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of the data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. This happens in a fully distributed manner, in other words, there is no dedicated partner node. When a disk fails, it is marked offline and data is read from another instance instead. At the same time data rebuilds of the associated replicas is initiated in order to restore the desired protection level (2N). All nodes participate in the rebuild task. When the failed disk is replaced/revived, the disk is repurposed to resume its original role.
|
|
|
Block Failure Protection
Details
|
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk. However, block failure protection is in most cases irrelevant as 1-node appliances are used as building blocks.
SANsymphony works on an N+1 redundancy design allowing any node to acquire any other node as a redundancy peer per virtual device. Peers are replaceable/interchangeable on a per-Virtual Disk level.
|
Not relevant (1-node chassis only)
Cisco HyperFlex (HX) compute+storage building blocks are based on 1-node chassis only. Therefore multi-node block (appliance) level protection is not relevant for this solution as Node Failure Protection applies.
|
Fault Sets
Block failure protection can be achieved by assigning nodes in the same appliance to different Fault Sets.
Fault Sets: When using Fault Sets, one instance of the data is kept within the local Fault Set and another instance of the data is kept within another Fault Set. By applying Fault Sets, rack failure protection can be achieved as well.
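The Fault Set placement rule described above (no two instances of the same data within one Fault Set) can be sketched as follows. Node and Fault Set names are hypothetical and the selection logic is deliberately simplistic.

```python
# Hypothetical Fault Set aware placement: pick one node per Fault Set so that
# the copies of a block never share a fault domain (e.g. an appliance or rack).

def place_with_fault_sets(nodes, copies=2):
    placement, used_sets = [], set()
    for node in nodes:
        if node["fault_set"] not in used_sets:
            placement.append(node["name"])
            used_sets.add(node["fault_set"])
        if len(placement) == copies:
            return placement
    raise RuntimeError("not enough distinct Fault Sets for the requested copies")

nodes = [
    {"name": "sds-1", "fault_set": "rack-A"},
    {"name": "sds-2", "fault_set": "rack-A"},
    {"name": "sds-3", "fault_set": "rack-B"},
]
print(place_with_fault_sets(nodes))   # -> ['sds-1', 'sds-3']
```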
|
|
|
Rack Failure Protection
Details
|
Manual configuration
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk.
|
N/A
HyperFlex 3.0 introduced the concept of Logical Availability Zones (LAZs). It is an optional feature and is turned off by default. LAZ is not user-configurable at this time and therefore cannot be used to align each rack to a different LAZ.
Logical Availability Zones (LAZs): When using LAZs, one instance of the data is kept within the local LAZ and another instance of the data is kept within another LAZ. Because of this, the cluster can sustain a greater number of node failures until the cluster shuts down to avoid data loss.
|
Fault Sets
Block failure protection can be achieved by assigning nodes in the same appliance to different Fault Sets.
Fault Sets: When using Fault Sets, one instance of the data is kept within the local Fault Set and another instance of the data is kept within another Fault Set. By applying Fault Sets, rack failure protection can be achieved as well.
|
|
|
Protection Capacity Overhead
Details
|
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
|
Replicas (2N): 100%
Replicas (3N): 200%
|
Replica (2N): 100% + Hardware RAID overhead (optional)
|
|
|
Data Corruption Detection
Details
|
N/A (hardware dependent)
SANsymphony fully relies on the hardware layer to protect data integrity. This means that the SANsymphony software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
Read integrity checks
While writing data, checksums are created and stored. When the data is read again, a new checksum is created and compared to the initial checksum. If incorrect, a checksum is created from another copy of the data. After successful comparison this data is used to repair the corrupted copy in order to stay compliant with the configured protection level.
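The read-integrity flow described above can be sketched in a few lines. The CRC32 checksum and the two-copy layout are illustrative assumptions; the platform's actual checksum algorithm and repair path are not documented here.

```python
# Hypothetical read-integrity check: verify the checksum on read, and use a
# healthy copy to repair any replica whose checksum no longer matches.
import zlib

def read_with_integrity_check(copies):
    """copies: list of (data, stored_checksum) tuples, one per replica."""
    for data, stored in copies:
        if zlib.crc32(data) == stored:
            for j, (other, other_sum) in enumerate(copies):
                if zlib.crc32(other) != other_sum:
                    copies[j] = (data, stored)   # rewrite the corrupted copy
            return data
    raise IOError("all copies failed their checksum comparison")

good = b"payload"
copies = [(b"payl0ad", zlib.crc32(good)),        # corrupted replica
          (good, zlib.crc32(good))]              # intact replica
assert read_with_integrity_check(copies) == good
```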
|
Disk scrubbing (software)
In-flight integrity checks
Persistent integrity checks
NEW
The Background Device Scanner constantly searches for, and fixes, device errors before they can affect the system, thus providing additional data reliability. The scanner runs in the background, not interrupting other Storage Pool activities (such as adding and removing volumes). When scanning is enabled for a Storage Pool, the scanner seeks out corrupted sectors in the devices in that pool. The scanner also provides SNMP reporting about errors found.
In-flight checksum protection is provided for data reads and writes. This feature addresses errors that change the payload during the transit through the PowerFlex system. PowerFlex protects data in-flight by calculating and validating the checksum value for the payload at both ends. The checksum protection mode can be applied per Storage Pool.
Persistent checksum protection is provided for all data that is stored in Fine Granularity (FG) storage pools by default. This cannot be changed. Fine Granularity (FG) layout saves checksum data before and after processing to guarantee data integrity (compressed or not). There are also system checksums for metadata.
PowerFlex 3.5 adds enhancements to:
- Fine Granularity (FG) data layout: metadata cache for higher FG performance,
- Medium Granularity (MG) data layout: checksum,
- Sub-device error handling for improved resiliency.
|
|
|
|
Points-in-Time |
|
|
|
|
Built-in (native)
|
Built-in (native)
HyperFlex's native snapshot mechanism is metadata-based, space-efficient (zero-copy) and VMware VAAI / Microsoft Checkpoint-integrated.
|
Built-in (native)
NEW
PowerFlex has native snapshot capabilities. These snapshot capabilities include support for Consistency Groups, which ensures that snapshots of different volumes within the same group are taken at exactly the same time.
VxFlex OS 3.0 introduced Fine Granularity (FG) storage pools. Fine Granularity (FG) snapshots are Redirect on Write (RoW) instead of the Copy on Write (CoW) used in Medium Granularity (MG) storage pools.
PowerFlex 3.5 introduces Secure Snapshots. These storage snapshots cannot be deleted, enabling compliance with regulations in the financial and healthcare industries.
|
|
|
|
Local + Remote
SANsymphony snapshots are always created on one side only. However, SANsymphony allows you to create a snapshot for the data on each side by configuring two snapshot schedules, one for the local volume and one for the remote volume. Both snapshot entities are independent and can be deleted independently allowing different retention times if needed.
The snapshot feature can also be paired with asynchronous replication, which provides a long-distance remote copy at a third site with its own retention time.
|
Local
|
Local
|
|
|
Snapshot Frequency
Details
|
1 Minute
The snapshot lifecycle can be automatically configured using the integrated Automation Scheduler.
|
GUI: 1 hour (Policy-based)
Timing options of the HX native snapshot capability include:
- Hourly
- Daily
- Weekly
and work in 15-minute increments.
|
1 minute
VxFlex OS 3.0 introduced snapshot policy management.
Snapshots can be created every x minutes/hours/days.
Snapshot retention is based on the number of existing snapshots (retain last x snapshots).
The PowerFlex multi-level snapshot structure can consist of up to six retention levels, with the first level having the most frequent snapshots.
Every Volume Tree (V-Tree) can have up to 127 snapshots next to the root volume. Policy-managed snapshots can create up to 60 snapshots per root volume.
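A policy with multiple retention levels, as described above, can be sketched as follows. The interval and retention values are examples only and do not reflect PowerFlex defaults; the first level simply keeps the most frequent snapshots and each level retains only its last N entries.

```python
# Hypothetical multi-level snapshot schedule: level 1 snapshots most often,
# higher levels keep sparser history; retention is by snapshot count.
from datetime import datetime, timedelta

policy = [
    {"interval": timedelta(minutes=15), "retain": 8},   # level 1
    {"interval": timedelta(hours=4),    "retain": 6},   # level 2
    {"interval": timedelta(days=1),     "retain": 7},   # level 3
]

def apply_policy(now, snapshots, policy):
    """snapshots: dict of level -> list of snapshot timestamps."""
    for level, rule in enumerate(policy):
        taken = snapshots.setdefault(level, [])
        if not taken or now - taken[-1] >= rule["interval"]:
            taken.append(now)            # take a snapshot at this level
        while len(taken) > rule["retain"]:
            taken.pop(0)                 # retain only the last N snapshots
    return snapshots

snaps = apply_policy(datetime.now(), {}, policy)
print({level: len(timestamps) for level, timestamps in snaps.items()})
```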
|
|
|
Snapshot Granularity
Details
|
Per VM (Vvols) or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is certified for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
Per VM or VM-folder
The Cisco HX Data Platform uses metadata-based, zero-copy snapshots of files. In VMware vSphere these files map to individual drives in a virtual machine.
|
Per VM (Vvols) or Volume
NEW
Although Dell EMC PowerFlex uses block-storage, the platform is capable of attaining per VM-granularity by leveraging VMware Virtual Volumes (Vvols).
VxFlex OS 3.0.1 introduced VVols certification for VMware ESXi 6.0-7.0U1 through the Dell Storage VASA 2.0 Provider.
|
|
|
|
Built-in (native)
DataCore SANsymphony incorporates Continuous Data Protection (CDP) and leverages this as an advanced backup mechanism. As the term implies, CDP continuously logs and timestamps I/Os to designated virtual disks, allowing end-users to restore the environment to an arbitrary point-in-time within that log.
Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage. Several of these markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent or crash-consistent restore point from just before the incident occurred.
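The CDP mechanism described above (a rolling, timestamped I/O log with rollback markers) can be modelled with a short sketch. All names are hypothetical; DataCore exposes this functionality through its own management tooling and PowerShell cmdlets, not through code like this.

```python
# Hypothetical CDP model: writes are logged with a timestamp, rollback markers
# mark application-consistent points, and a restore replays the log up to a
# chosen point in time.
import time

class CdpLog:
    def __init__(self):
        self.entries = []                # (timestamp, kind, payload)

    def log_write(self, block, data):
        self.entries.append((time.time(), "write", (block, data)))

    def add_rollback_marker(self, label):
        # Typically invoked after the application has been quiesced and
        # caches have been flushed to storage.
        self.entries.append((time.time(), "marker", label))

    def restore(self, point_in_time):
        """Rebuild the virtual disk image as it looked at the chosen time."""
        image = {}
        for ts, kind, payload in self.entries:
            if ts > point_in_time:
                break
            if kind == "write":
                block, data = payload
                image[block] = data
        return image
```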
|
External
Cisco HyperFlex does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
Veeam is a strategic partner of Cisco.
|
External
PowerFlex does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
|
Local or Remote
All available storage within the SANsymphony group can be configured as targets for back-up jobs.
|
N/A
Cisco HyperFlex does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
Veeam is a strategic partner of Cisco.
|
N/A
PowerFlex does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
|
Continuously
As Continuous Data Protection (CDP) is being leveraged, I/Os are logged and timestamped in a continuous fashion, so end-users can restore to virtually any point in time.
|
N/A
Cisco HyperFlex does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
Veeam is a strategic partner of Cisco.
|
N/A
PowerFlex does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
Backup Consistency
Details
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
By default CDP creates crash consistent restore points. Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage.
Several CDP Rollback Markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent, filesystem-consistent or crash-consistent restore point from (just) before the incident occurred.
In a VMware vSphere environment, the DataCore VMware vCenter plug-in can be used to create snapshot schedules for datastores and select the VMs that you want to enable VSS filesystem/application consistency for.
|
N/A
Cisco HyperFlex does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
Veeam is a strategic partner of Cisco.
|
N/A
PowerFlex does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
Restore Granularity
Details
|
Entire VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
When configuring the virtual environment as described above, effectively VM-restores are possible.
For file-level restores a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
N/A
Cisco HyperFlex does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
Veeam is a strategic partner of Cisco.
|
N/A
PowerFlex does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
Restore Ease-of-use
Details
|
| |