SDS and HCI comparison & reviews

Summary
Rank
SANsymphony: 2nd · StorPool: 6th · HyperFlex: 8th
Analysis by Herman Rutten
General
  • Fully Supported
  • Limitation
  • Not Supported
  • Information Only
Pros
SANsymphony:
  • + Extensive platform support
  • + Extensive data protection capabilities
  • + Flexible deployment options
StorPool:
  • + Built for performance and robustness
  • + Broad range of hardware support
  • + Well suited for open private cloud platforms
HyperFlex:
  • + Strong Cisco integration
  • + Fast, streamlined deployment
  • + Strong container support
Cons
SANsymphony:
  • - No native data integrity verification
  • - Deduplication/compression not performance-optimized
  • - Disk/node failure protection not capacity-optimized
StorPool:
  • - Only basic support for VMware and Hyper-V
  • - No native deduplication capabilities
  • - No native encryption capabilities
HyperFlex:
  • - Single-vendor (Cisco) server hardware support
  • - No bare-metal support
  • - Limited native data protection capabilities
  Assessment  
  •  
Overview
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
Name: StorPool Distributed Storage (StorPool Storage)
Type: Software-only (SDS)
Development Start: 2011
First Product Release: Nov 2012
Name: HyperFlex (HX)
Type: Hardware+Software (HCI)
Development Start: 2015
First Product Release: Apr 2016
  •  
Maturity
GA Release Dates:
SSY 10.0 PSP12: Jan 2021
SSY 10.0 PSP11: Aug 2020
SSY 10.0 PSP10: Dec 2019
SSY 10.0 PSP9: Jul 2019
SSY 10.0 PSP8: Sep 2018
SSY 10.0 PSP7: Dec 2017
SSY 10.0 PSP6 U5: Aug 2017
.
SSY 10.0: Jun 2014
SSY 9.0: Jul 2012
SSY 8.1: Aug 2011
SSY 8.0: Dec 2010
SSY 7.0: Apr 2009
.
SSY 3.0: 1999
Release Dates:
SP 19.01: Jun 2019
SP 18.02: Jan 2018
SP 18.01: Dec 2017
SP 16.03: Dec 2016
SP 16.02: Aug 2016
SP 16.01: Mar 2016
SP 15.03: Nov 2015
SP 15.02: Mar 2015
SP 15.01: Jan 2015
SP 14.12: Dec 2014
SP 14.10: Oct 2014
SP 14.08: Aug 2014
SP 14.04: Apr 2014
SP 14.02: Feb 2014
SP 13.10: Oct 2013 (GA)
SP 20121217: Dec 2012 (Early access)
SP 20121119: Nov 2012 (Early access)
GA Release Dates:
HX 4.0: Apr 2019
HX 3.5.2a: Jan 2019
HX 3.5.1a: Nov 2018
HX 3.5: Oct 2018
HX 3.0: Apr 2018
HX 2.6.1b: Dec 2017
HX 2.6.1a: Oct 2017
HX 2.5: Jul 2017
HX 2.1: May 2017
HX 2.0: Mar 2017
HX 1.8: Sep 2016
HX 1.7.3: Aug 2016
1.7.1-14835: Jun 2016
HX 1.7.1: Apr 2016
  Pricing  
  •  
Hardware Pricing Model
N/A
N/A
Per Node
Bundle (ROBO)
  •  
Software Pricing Model
Capacity based (per TB)
Capacity based (per TB)
Per Node
  •  
Support Pricing Model
Capacity based (per TB)
Capacity based (per TB)
Per Node
Design & Deploy
  Design  
  •  
Consolidation Scope
Storage
Data Protection
Management
Automation&Orchestration
Compute
Storage
Data Protection (limited)
Automation&Orchestration (limited)
Compute
Storage
Network
Management
Automation&Orchestration
  •  
Network Topology
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
Standard Ethernet 10/25/40/50/100 GbE
InfiniBand (EoL)
1, 10, 40 GbE
  •  
Overall Design Complexity
Medium
Medium
Low
  •  
External Performance Validation
SPC (Jun 2016)
ESG Lab (Jan 2016)
N/A (internal validation only)
ESG Lab (Jul 2018)
SAP (Dec 2017)
ESG Lab (Mar 2017)
  •  
Evaluation Methods
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
Evaluation license (30-days)
Online Labs
Proof-of-Concept (PoC)
  Deploy  
  •  
Deployment Architecture
Single-Layer
Dual-Layer
Single-Layer
Dual-Layer
Single-Layer (primary)
Dual-Layer (secondary)
  •  
Deployment Method
BYOS (some automation)
Turnkey (remote install)
Turnkey (very fast; highly automated)
Workload Support
  Virtualization  
  •  
Hypervisor Deployment
Virtual Storage Controller
Kernel (Optional for Hyper-V)
Next to Hypervisor (KVM)
None (ESXi, Hyper-V, XenServer, OracleVM)
Virtual Storage Controller
  •  
Hypervisor Compatibility
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
KVM
VMware ESXi
Hyper-V
XenServer
OracleVM
VMware vSphere ESXi 6.0U3/6.5U2/6.7U2
Microsoft Hyper-V 2016/2019
  •  
Hypervisor Interconnect
iSCSI
FC
Block device driver (KVM)
iSCSI (ESXi, Hyper-V, XenServer, OracleVM)
NFS
SMB
  Bare Metal  
  •  
Bare Metal Compatibility
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Microsoft Windows Server 2012R2-2019
CentOS 7, 8
Debian Linux 9
Ubuntu Linux 16.04 LTS, 18.04 LTS
Other Linux distributions (e.g. RHEL, OEL, SUSE)
N/A
  •  
Bare Metal Interconnect
Block device driver (Linux)
iSCSI (Windows Server)
N/A
  Containers  
  •  
Container Integration Type
Built-in (native)
Hypervisor: None
Bare metal (Linux): Block device driver
Built-in (native)
  •  
Container Platform Compatibility
Docker CE/EE 18.03+
Most container platforms
Docker EE 1.13+
  •  
Container Platform Interconnect
Docker Volume plugin (certified)
Standard block devices
HX FlexVolume Driver
  •  
Container Host Compatibility
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
Bare-metal container hosts
Virtualized container hosts on VMware vSphere hypervisor
  •  
Container Host OS Compatibility
Linux
Linux
Ubuntu Linux 16.04.3 LTS
  •  
Container Orch. Compatibility
Kubernetes v1.13+
Kubernetes
  •  
Container Orch. Interconnect
Kubernetes CSI plugin
Kubernetes CSI plugin
HX-CSI Plugin
  VDI  
  •  
VDI Compatibility
VMware Horizon
Citrix XenDesktop
VMware Horizon
Citrix XenDesktop
VMware Horizon
Citrix XenDesktop
  •  
VDI Load Bearing
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
N/A
VMware: up to 137 virtual desktops/node
Citrix: up to 125 virtual desktops/node
Server Support
  Server/Node  
  •  
Hardware Vendor Choice
Many
Many
Cisco
  •  
Models
Many
Many
9 storage models:
HX220x Edge M5, HX220c M4/M5, HX240c M4/M5, HXAF220c M4/M5, HXAF240c M4/M5
8 compute-only models: B2x0 M4/M5, B4x0 M4/M5, C2x0 M4/M5, C4x0 M4/M5, C480 ML
  •  
Density
1, 2 or 4 nodes per chassis
1, 2 or 4 nodes per chassis
HX2x0/HXAF2x0/HXAN2x0: 1 node per chassis
B200: up to 8 nodes per chassis
C2x0: 1 node per chassis
  •  
Mixing Allowed
Yes
Yes
Partial
  Components  
  •  
CPU Config
Flexible
Flexible
Flexible
  •  
Memory Config
Flexible
Flexible
  •  
Storage Config
Flexible
Flexible
HX220c/HXAF220c/HXAN220c: Fixed number of disks
HX240c/HXAF240c: Flexible (number of disks)
  •  
Network Config
Flexible
Flexible
Flexible: M5: 10/40 GbE; M5 Edge: 1/10 GbE; FC (optional)
  •  
GPU Config
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
NVIDIA Tesla (HX240c only)
AMD FirePro (HX240c only)
  Scaling  
  •  
Scale-up
CPU
Memory
Storage
GPU
CPU
Memory
Storage
GPU
HX220c/HXAF220c/HXAN220c: CPU, Memory, Network
HX240c/HXAF240c: CPU, Memory, Storage, Network, GPU
  •  
Scale-out
Storage+Compute
Compute-only
Storage-only
Storage+Compute
Compute-only
Storage-only
Compute+storage
Compute-only (IO Visor)
  •  
Scalability
1-64 nodes in 1-node increments
3-63 nodes in 1-node increments
vSphere: 2-32 storage nodes in 1-node increments + 0-32 compute-only nodes in 1-node increments
Hyper-V: 2-16 storage nodes in 1-node increments + 0-16 compute-only nodes in 1-node increments
  •  
Small-scale (ROBO)
2 Node minimum
3 Node minimum
2 Node minimum
Storage Support
  General  
  •  
Layout
Block Storage Pool
Block Storage Pool
Distributed File System (DFS)
  •  
Data Locality
Partial
None
None
  •  
Storage Type(s)
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
Direct-attached (Raw)
Direct-attached (Raw)
SAN or NAS
  •  
Composition
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
Magnetic-Only
Hybrid
All-Flash
Hybrid (Flash+Magnetic)
All-Flash
  •  
Hypervisor OS Layer
SD, USB, DOM, SSD/HDD
SD, USB, DOM, SSD/HDD
Dual SD cards
SSD (optional for HX240c and HXAF240c systems)
  Memory  
  •  
Memory Layer
DRAM
DRAM
  •  
Memory Purpose
Read/Write Cache
Metadata
Read Cache
Write-back Cache (optional)
  •  
Memory Capacity
Up to 8 TB
Configurable
  Flash  
  •  
Flash Layer
SSD, PCIe, UltraDIMM, NVMe
SSD, PCIe, NVMe
SSD, NVMe
  •  
Flash Purpose
Persistent Storage
Persistent Storage
Write-back Cache
Hybrid: Log + Read/Write Cache
All-Flash/All-NVMe: Log + Write Cache + Storage Tier
  •  
Flash Capacity
No limit, up to 1 PB per device
No limit
Hybrid: 2 Flash devices per node (1x Cache; 1x Housekeeping)
All-Flash: 9-26 Flash devices per node (1x Cache; 1x System, 1x Boot; 6-23x Data)
All-NVMe: 8-11 NVMe devices per node (1x Cache, 1x System, 6-8 Data)
  Magnetic  
  •  
Magnetic Layer
SAS or SATA
SAS or SATA
Hybrid: SAS or SATA
  •  
Magnetic Purpose
Persistent Storage
Persistent Storage
  •  
Magnetic Capacity
No limit, up to 1 PB (per device)
No limit
HX220x Edge M5: 3-6 capacity devices per node
HX220c: 6-8 capacity devices per node
HX240c: 6-23 capacity devices per node
Data Availability
  Reads/Writes  
  •  
Persistent Write Buffer
DRAM (mirrored)
Hybrid configurations (optional): Intel Optane NVMe, "Pool" NVMe drive, Broadcom/LSI CacheVault or BBU
Flash Layer (SSD, NVMe)
  •  
Disk Failure Protection
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
0-2 Replicas (1N-3N)
1-2 Replicas (2N-3N)
  •  
Node Failure Protection
2-way and 3-way Mirroring (RAID-1)
0-2 Replicas (1N-3N)
Logical Availability Zone
  •  
Block Failure Protection
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Fault Sets
Not relevant (1-node chassis only)
  •  
Rack Failure Protection
Manual configuration
Fault Sets
N/A
  •  
Protection Capacity Overhead
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
Replicas (2N): 100%
Replicas (3N): 200%
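The overhead figures above follow directly from the copy count: keeping N full copies of the data costs (N - 1) × 100% extra raw capacity relative to usable capacity. A minimal illustrative calculation (function names are my own, not from any vendor's tooling):

```python
def protection_overhead_pct(copies: int) -> int:
    """Raw-capacity overhead of N-way mirroring/replication.

    Storing `copies` full copies of the data consumes
    (copies - 1) * 100% extra raw capacity on top of usable capacity.
    """
    if copies < 1:
        raise ValueError("at least one copy is required")
    return (copies - 1) * 100


def usable_capacity_tb(raw_tb: float, copies: int) -> float:
    """Usable capacity left after N-way replication of a raw pool."""
    return raw_tb / copies


# 2-way mirroring (2N): 100% overhead; 3-way mirroring (3N): 200% overhead
print(protection_overhead_pct(2))   # 100
print(protection_overhead_pct(3))   # 200
# A 100 TB raw pool protected with 2N replicas yields 50 TB usable
print(usable_capacity_tb(100, 2))   # 50.0
```

Hardware RAID5/6 layered underneath (as SANsymphony optionally allows) adds its own parity overhead on top of these figures.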
  •  
Data Corruption Detection
N/A (hardware dependent)
Read integrity checks (end-to-end checksums)
Disk scrubbing
Read integrity checks
  Points-in-Time  
  •  
Snapshot Type
Built-in (native)
Built-in (native)
  •  
Snapshot Scope
Local + Remote
Local + Remote
  •  
Snapshot Frequency
1 Minute
Seconds (workload dependent)
GUI: 1 hour (Policy-based)
  •  
Snapshot Granularity
Per VM (Vvols) or Volume
Per Volume (LUN)
Per VM/container (e.g. OpenStack, Kubernetes)
Per VM or VM-folder
  •  
Backup Type
Built-in (native)
Built-in (native)
External
  •  
Backup Scope
Local or Remote
Local or Remote
N/A
  •  
Backup Frequency
Continuously
Seconds (workload dependent)
N/A
  •  
Backup Consistency
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
Crash Consistent (also Group Consistency)
N/A
  •  
Restore Granularity
Entire VM or Volume
Entire Volume
N/A
  •  
Restore Ease-of-use
Entire VM or Volume: GUI
Single File: Multi-step
Entire Volume: API or CLI
Single File: Multi-step
N/A
  Disaster Recovery  
  •  
Remote Replication Type
Built-in (native)
Built-in (native)
Built-in (native)
  •  
Remote Replication Scope
To remote sites
To MS Azure Cloud
To remote sites
To remote sites
  •  
Remote Replication Cloud Function
Data repository
DR-site (several cloud providers)
N/A
  •  
Remote Replication Topologies
Single-site and multi-site
Single-site and multi-site
Single-site
  •  
Remote Replication Frequency
Continuous (near-synchronous)
Seconds (workload dependent)
5 minutes (Asynchronous)
  •  
Remote Replication Granularity
VM or Volume
Per Volume (LUN)
VM
  •  
Consistency Groups
Yes
Yes
No
  •  
DR Orchestration
VMware SRM (certified)
N/A
HX Connect (native)
VMware SRM (certified)
  •  
Stretched Cluster (SC)
VMware vSphere: Yes (certified)
Linux KVM: Yes
vSphere: Yes
Hyper-V: No
  •  
SC Configuration
2+sites = two or more active sites, 0/1 or more tie-breakers
2+sites = two or more active sites
vSphere: 3-sites = two active sites + tie-breaker in 3rd site
  •  
SC Distance
<=5ms RTT (targeted, not required)
<=1ms RTT (targeted, not required)
<=5ms RTT / 10Gbps
  •  
SC Scaling
<=32 hosts at each active site (per cluster)
<=32 nodes in each site part of a synchronous stretched cluster configuration
2-16 converged hosts + 0-16 compute hosts at each active site
  •  
SC Data Redundancy
Replicas: 1N-2N at each active site
Replicas: 1N-2N at each active site
Replicas: 2N at each active site
Data Services
  Efficiency  
  •  
Dedup/Compr. Engine
Software (integration)
N/A
Software
  •  
Dedup/Compr. Function
Efficiency (space savings)
N/A
Efficiency and Performance
  •  
Dedup/Compr. Process
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
Deduplication/Compression: Post-Processing (post process)
N/A
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
  •  
Dedup/Compr. Type
Optional
N/A
Always-on
  •  
Dedup/Compr. Scope
Persistent data layer
N/A
Read and Write caches + Persistent data layers
  •  
Dedup/Compr. Radius
Pool (post-processing deduplication domain)
Node (inline deduplication domain)
N/A
Storage Cluster
  •  
Dedup/Compr. Granularity
4-128 KB variable block size (inline)
32-128 KB variable block size (post-processing)
N/A
4-64 KB fixed block size
  •  
Dedup/Compr. Guarantee
N/A
N/A
N/A
  •  
Data Rebalancing
Full (optional)
Full
Full
  •  
Data Tiering
Yes
Yes
N/A
  Performance  
  •  
Task Offloading
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; Space Reclamation (T10 SCSI UNMAP)
OpenNebula, OnApp, OpenStack, CloudStack
vSphere: VMware VAAI-NAS (full)
Hyper-V: SMB3 ODX; UNMAP/TRIM
  •  
QoS Type
IOPs and/or MBps Limits
IOPs and/or MBps Limits
Fair-sharing (built-in)
N/A
  •  
QoS Granularity
Virtual Disk Groups and/or Host Groups
Per volume
Per vdisk (CMPs)
N/A
  •  
Flash Pinning
Per VM/Virtual Disk/Volume
Yes
Not relevant (global cache architecture)
  Security  
  •  
Data Encryption Type
Built-in (native)
N/A
  •  
Data Encryption Options
Hardware: Self-encrypting drives (SEDs)
Software: SANsymphony Encryption
Hardware: Self-encrypting drives (SEDs)
Software: N/A
Hardware: Self-encrypting drives (SEDs)
Software: N/A
  •  
Data Encryption Scope
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: Data-at-rest
Software: N/A
Hardware: Data-at-rest
Software: N/A
  •  
Data Encryption Compliance
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (SANsymphony)
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: N/A
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: N/A
  •  
Data Encryption Efficiency Impact
Hardware: No
Software: No
Hardware: No
Software: N/A
Hardware: No
Software: N/A
  Test/Dev  
  •  
Fast VM Cloning
Yes
Yes
Yes
  Portability  
  •  
Hypervisor Migration
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
Hyper-V/ESXi/XenServer to KVM (external)
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
  File Services  
  •  
Fileserver Type
Built-in (native)
N/A
N/A
  •  
Fileserver Compatibility
Windows clients
Linux clients
N/A
N/A
  •  
Fileserver Interconnect
SMB
NFS
N/A
N/A
  •  
Fileserver Quotas
Share Quotas, User Quotas
N/A
N/A
  •  
Fileserver Analytics
Partial
N/A
N/A
  Object Services  
  •  
Object Storage Type
N/A
N/A
N/A
  •  
Object Storage Protection
N/A
N/A
N/A
  •  
Object Storage LT Retention
N/A
N/A
N/A
Management
  Interfaces  
  •  
GUI Functionality
Centralized
Centralized
Centralized
  •  
GUI Scope
Single-site and Multi-site
Single-site and Multi-site (read-only)
Single-site and Multi-site
  •  
GUI Perf. Monitoring
Advanced
Advanced
Basic
  •  
GUI Integration
VMware vSphere Web Client (plugin)
VMware vCenter plug-in for SANsymphony
SCVMM DataCore Storage Management Provider
Microsoft System Center Monitoring Pack
OpenStack
CloudStack
OnApp
OpenNebula
Kubernetes
VMware vSphere Web Client (plugin)
  Programmability  
  •  
Policies
Full
Partial
Partial (Protection)
  •  
API/Scripting
REST-APIs
PowerShell
REST-API
CLI
REST-APIs
CLI
  •  
Integration
OpenStack
OpenStack
CloudStack
OnApp
OpenNebula
Kubernetes
Cisco UCS Director
  •  
Self Service
Full
N/A
N/A (not part of HX license)
  Maintenance  
  •  
SW Composition
Unified
Unified
Partially Distributed
  •  
SW Upgrade Execution
Rolling Upgrade (1-by-1)
Rolling Upgrade (1-by-1)
Rolling Upgrade (1-by-1)
  •  
FW Upgrade Execution
Hardware dependent
Hardware dependent
1-Click
  Support  
  •  
Single HW/SW Support
No
Yes (limited)
Yes
  •  
Call-Home Function
Partial (HW dependent)
Yes
Full
  •  
Predictive Analytics
Partial
Partial
Partial

Matrix Score
  • DataCore: 2nd
  • StorPool: 6th
  • Cisco: 8th