SDS and HCI comparison & reviews

Summary
Rank: DataCore SANsymphony 2nd, NetApp HCI 5th, StorPool Distributed Storage 6th
User Reviews: not enabled for any of the three products
Analysis by Herman Rutten
Assessment legend:
  • Fully Supported
  • Limitation
  • Not Supported
  • Information Only
Pros
DataCore SANsymphony:
  • + Extensive platform support
  • + Extensive data protection capabilities
  • + Flexible deployment options
NetApp HCI:
  • + Extensive QoS capabilities
  • + Extensive data protection capabilities
  • + Small form factor
StorPool:
  • + Built for performance and robustness
  • + Broad range of hardware support
  • + Well suited for open private cloud platforms
Cons
DataCore SANsymphony:
  • - No native data integrity verification
  • - Dedup/compr not performance optimized
  • - Disk/node failure protection not capacity optimized
NetApp HCI:
  • - No hybrid configurations
  • - No stretched clustering
  • - Single hypervisor support
StorPool:
  • - Only basic support for VMware and Hyper-V
  • - No native dedup capabilities
  • - No native encryption capabilities
General
  Content  
  • Content Creator: Herman Rutten (all three products)
  Assessment  
  •  
Overview
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
Name: NetApp HCI
Type: Hardware+Software (HCI)
Development Start: 2016
First Product Release: 2017
Name: StorPool Distributed Storage (StorPool Storage)
Type: Software-only (SDS)
Development Start: 2011
First Product Release: Nov 2012
  •  
Maturity
GA Release Dates:
SSY 10.0 PSP12: Jan 2021
SSY 10.0 PSP11: Aug 2020
SSY 10.0 PSP10: Dec 2019
SSY 10.0 PSP9: Jul 2019
SSY 10.0 PSP8: Sep 2018
SSY 10.0 PSP7: Dec 2017
SSY 10.0 PSP6 U5: Aug 2017
…
SSY 10.0: Jun 2014
SSY 9.0: Jul 2012
SSY 8.1: Aug 2011
SSY 8.0: Dec 2010
SSY 7.0: Apr 2009
…
SSY 3.0: 1999
GA Release Dates:
NetApp HCI 1.8P1: Oct 2020
NetApp HCI 1.8: May 2020
NetApp HCI 1.7: Sep 2019
NetApp HCI 1.6: Jul 2019
NetApp HCI 1.4: Nov 2018
NetApp HCI 1.3: Jun 2018
NetApp HCI 1.2: Mar 2018
NetApp HCI 1.1: Dec 2017
NetApp HCI 1.0: Oct 2017
Release Dates:
SP 19.01: Jun 2019
SP 18.02: Jan 2018
SP 18.01: Dec 2017
SP 16.03: Dec 2016
SP 16.02: Aug 2016
SP 16.01: Mar 2016
SP 15.03: Nov 2015
SP 15.02: Mar 2015
SP 15.01: Jan 2015
SP 14.12: Dec 2014
SP 14.10: Oct 2014
SP 14.08: Aug 2014
SP 14.04: Apr 2014
SP 14.02: Feb 2014
SP 13.10: Oct 2013 (GA)
SP 20121217: Dec 2012 (Early access)
SP 20121119: Nov 2012 (Early access)
  Pricing  
  •  
Hardware Pricing Model
N/A
N/A
  •  
Software Pricing Model
Capacity based (per TB)
Per Node
Capacity based (per TB)
Capacity based (per TB)
  •  
Support Pricing Model
Capacity based (per TB)
Per Node
Capacity based (per TB)
Design & Deploy
  Design  
  •  
Consolidation Scope
Storage
Data Protection
Management
Automation&Orchestration
Compute
Storage
Data Protection
Management
Automation&Orchestration
Compute
Storage
Data Protection (limited)
Automation&Orchestration (limited)
  •  
Network Topology
SANsymphony: 1, 10, 25, 40, 100 GbE (iSCSI); 8, 16, 32, 64 Gbps (FC)
NetApp HCI: 10/25 GbE (iSCSI)
StorPool: Standard Ethernet 10/25/40/50/100 GbE; InfiniBand (EoL)
  •  
Overall Design Complexity
Medium
Low
Medium
  •  
External Performance Validation
SANsymphony: SPC (Jun 2016); ESG Lab (Jan 2016)
NetApp HCI: Evaluator Group (Aug 2019)
StorPool: N/A (only internally validated)
  •  
Evaluation Methods
SANsymphony: Free Trial (30 days); Proof-of-Concept (PoC; up to 12 months)
NetApp HCI: Proof-of-Concept (PoC)
StorPool: Evaluation license (30 days)
  Deploy  
  •  
Deployment Architecture
SANsymphony: Single-Layer; Dual-Layer
NetApp HCI: Dual-Layer
StorPool: Single-Layer; Dual-Layer
  •  
Deployment Method
BYOS (some automation)
Turnkey (very fast; highly automated)
Turnkey (remote install)
Workload Support
  Virtualization  
  •  
Hypervisor Deployment
SANsymphony: Virtual Storage Controller; Kernel (optional for Hyper-V)
NetApp HCI: None
StorPool: Next to Hypervisor (KVM); None (ESXi, Hyper-V, XenServer, OracleVM)
  •  
Hypervisor Compatibility
SANsymphony: VMware vSphere ESXi 5.5-7.0U1; Microsoft Hyper-V 2012R2/2016/2019; Linux KVM; Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
NetApp HCI: VMware vSphere ESXi 6.5U2-7.0; (Microsoft Hyper-V); (Red Hat KVM)
StorPool: KVM; VMware ESXi; Hyper-V; XenServer; OracleVM
  •  
Hypervisor Interconnect
SANsymphony: iSCSI; FC
NetApp HCI: iSCSI
StorPool: Block device driver (KVM); iSCSI (ESXi, Hyper-V, XenServer, OracleVM)
  Bare Metal  
  •  
Bare Metal Compatibility
SANsymphony: Microsoft Windows Server 2012R2/2016/2019; Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3; SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1; Ubuntu Linux 16.04 LTS; CentOS 6.5/6.6/7.3; Oracle Solaris 10.0/11.1/11.2/11.3
NetApp HCI: RHEL 7.4-8.1 (KVM); Windows Server 2016-2019 (Hyper-V); VMware vSphere 6.5-7.0; Citrix XenServer 7.4-7.6; Citrix Hypervisor 8.0-8.1
StorPool: Microsoft Windows Server 2012R2-2019; CentOS 7, 8; Debian Linux 9; Ubuntu Linux 16.04 LTS, 18.04 LTS; other Linux distributions (e.g. RHEL, OEL, SUSE)
  •  
Bare Metal Interconnect
Block device driver (Linux)
iSCSI (Windows Server)
  Containers  
  •  
Container Integration Type
Built-in (native)
Built-in (native)
Hypervisor: None
Bare metal (Linux): Block device driver
  •  
Container Platform Compatibility
Docker CE/EE 18.03+
Docker EE 17.06+
Most container platforms
  •  
Container Platform Interconnect
Docker Volume plugin (certified)
Docker Volume Plugin (certified)
Standard block devices
  •  
Container Host Compatibility
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
Bare-metal container hosts
  •  
Container Host OS Compatibility
Linux
RHEL
CentOS
Ubuntu
Debian
Linux
  •  
Container Orch. Compatibility
Kubernetes 1.9+
Kubernetes v1.13+
  •  
Container Orch. Interconnect
Kubernetes CSI plugin
Kubernetes Volume Plugin
Kubernetes CSI plugin
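All three platforms plug into Kubernetes through the volume/CSI plugins listed above. As a rough sketch of what the consuming side looks like, the Python below emits a StorageClass and a PersistentVolumeClaim as JSON that kubectl can apply; the provisioner name, the replicas parameter and the sizes are hypothetical placeholders, not values documented by any of the three vendors.

# Sketch: consuming one of these platforms from Kubernetes via its CSI driver.
# Provisioner name and parameters are placeholders -- use the vendor-documented values.
import json

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "sds-block"},
    "provisioner": "csi.example-sds.vendor.com",   # hypothetical CSI driver name
    "parameters": {"replicas": "2"},               # hypothetical driver parameter
    "reclaimPolicy": "Delete",
    "allowVolumeExpansion": True,
}

claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "demo-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "sds-block",
        "resources": {"requests": {"storage": "20Gi"}},
    },
}

# kubectl accepts JSON as well as YAML:
#   python make_manifests.py > objects.json && kubectl apply -f objects.json
print(json.dumps({"apiVersion": "v1", "kind": "List", "items": [storage_class, claim]}, indent=2))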
  VDI  
  •  
VDI Compatibility
VMware Horizon
Citrix XenDesktop
VMware Horizon
Citrix XenDesktop
VMware Horizon
Citrix XenDesktop
  •  
VDI Load Bearing
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
VMware: up to 139 virtual desktops/node
Citrix: unknown
N/A
Server Support
  Server/Node  
  •  
Hardware Vendor Choice
Many
Super Micro
Quanta Cloud Technology
Many
  •  
Models
Many
6 models (3x compute + 3x storage) + several sub-models
Many
  •  
Density
1, 2 or 4 nodes per chassis
1 node per chassis
4 nodes per chassis
1, 2 or 4 nodes per chassis
  •  
Mixing Allowed
Yes
Yes
Yes
  Components  
  •  
CPU Config
Flexible
Flexible (up to 5 options)
Flexible
  •  
Memory Config
Fixed: H610C
Flexible: H410C/H615C
Flexible
  •  
Storage Config
Flexible
Flexible (3 options)
Flexible
  •  
Network Config
Flexible
Fixed (10 or 25Gbps)
Flexible
  •  
GPU Config
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
H610C/H615C: NVIDIA Tesla
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
  Scaling  
  •  
Scale-up
CPU
Memory
Storage
GPU
N/A
CPU
Memory
Storage
GPU
  •  
Scale-out
Storage+Compute
Compute-only
Storage-only
Storage+Compute
Compute-only
Storage-only
Storage+Compute
Compute-only
Storage-only
  •  
Scalability
1-64 nodes in 1-node increments
2-64 compute nodes and 2-40 storage nodes in 1-node increments
3-63 nodes in 1-node increments
  •  
Small-scale (ROBO)
2 Node minimum
4 Node minimum (2 storage + 2 compute)
3 Node minimum
Storage Support
  General  
  •  
Layout
Block Storage Pool
Block Pool
Block Storage Pool
  •  
Data Locality
Partial
None
None
  •  
Storage Type(s)
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
Direct-Attached (Raw)
Direct-attached (Raw)
  •  
Composition
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
All-Flash (SSD-only)
Magnetic-Only
Hybrid
All-Flash
  •  
Hypervisor OS Layer
SD, USB, DOM, SSD/HDD
SSD
SD, USB, DOM, SSD/HDD
  Memory  
  •  
Memory Layer
DRAM
NVRAM (PCIe card) or NVDIMM
DRAM
DRAM
  •  
Memory Purpose
Read/Write Cache
NVRAM/NVDIMM: Write Buffer
DRAM: Metadata
Metadata
Read Cache
Write-back Cache (optional)
  •  
Memory Capacity
Up to 8 TB
NVRAM/NVDIMM: Non-configurable
DRAM: Unknown
Configurable
  Flash  
  •  
Flash Layer
SSD, PCIe, UltraDIMM, NVMe
SSD, NVMe
SSD, PCIe, NVMe
  •  
Flash Purpose
Persistent Storage
All-Flash: Metadata + Persistent Storage Tier
Persistent Storage
Write-back Cache
  •  
Flash Capacity
No limit, up to 1 PB per device
All-Flash: 6 SSDs/12 NVMe per storage node
No limit
  Magnetic  
  •  
Magnetic Layer
SAS or SATA
SAS or SATA
  •  
Magnetic Purpose
Persistent Storage
  •  
Magnetic Capacity
No limit, up to 1 PB (per device)
No limit
Data Availability
  Reads/Writes  
  •  
Persistent Write Buffer
DRAM (mirrored)
NVRAM (mirrored)
Hybrid configurations (optional): Intel Optane NVMe, "Pool" NVMe drive, Broadcom/LSI CacheVault or BBU
  •  
Disk Failure Protection
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
1 Replica (2N)
0-2 Replicas (1N-3N)
  •  
Node Failure Protection
2-way and 3-way Mirroring (RAID-1)
1 Replica (2N)
0-2 Replicas (1N-3N)
  •  
Block Failure Protection
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Protection Domains
Fault Sets
  •  
Rack Failure Protection
Manual configuration
Protection Domains
Fault Sets
  •  
Protection Capacity Overhead
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
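The overhead figures above follow directly from the replica count: with N full copies of the data, usable capacity is raw capacity divided by N, so overhead relative to usable capacity is (N-1) x 100%. A minimal sketch reproducing the listed numbers (it ignores the optional hardware RAID5/6 overhead noted for SANsymphony):

# Minimal sketch: protection capacity overhead of N-way mirroring/replication.

def mirroring_overhead_pct(copies: int) -> int:
    """Overhead relative to usable capacity when 'copies' full data copies are kept."""
    if copies < 1:
        raise ValueError("at least one data copy is required")
    return (copies - 1) * 100

for copies in (2, 3):
    print(f"{copies}N mirroring: {mirroring_overhead_pct(copies)}% capacity overhead")
# 2N mirroring: 100% capacity overhead
# 3N mirroring: 200% capacity overhead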
  •  
Data Corruption Detection
N/A (hardware dependent)
Read integrity checks
Read integrity checks (end-to-end checksums)
Disk scrubbing
  Points-in-Time  
  •  
Snapshot Type
Built-in (native)
  •  
Snapshot Scope
Local + Remote
Local + Remote
Local + Remote
  •  
Snapshot Frequency
1 Minute
5 minutes
Seconds (workload dependent)
  •  
Snapshot Granularity
Per VM (Vvols) or Volume
Per VM (Vvols) or Volume
Per Volume (LUN)
Per VM/container (e.g. OpenStack, Kubernetes)
  •  
Backup Type
Built-in (native)
Built-in (native)
Built-in (native)
  •  
Backup Scope
Local or Remote
Locally
To remote sites
To remote cloud object stores (Amazon S3, OpenStack Swift)
Local or Remote
  •  
Backup Frequency
Continuously
5 minutes
Seconds (workload dependent)
  •  
Backup Consistency
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
Crash Consistent (also Group Consistency)
  •  
Restore Granularity
Entire VM or Volume
Entire Volume
Entire Volume
  •  
Restore Ease-of-use
Entire VM or Volume: GUI
Single File: Multi-step
Entire VM: Multi-step
Single File: Multi-step
Entire Volume: API or CLI
Single File: Multi-step
  Disaster Recovery  
  •  
Remote Replication Type
Built-in (native)
Built-in (native)
Built-in (native)
  •  
Remote Replication Scope
To remote sites
To MS Azure Cloud
To remote sites
To remote sites
  •  
Remote Replication Cloud Function
Data repository
N/A
DR-site (several cloud providers)
  •  
Remote Replication Topologies
Single-site and multi-site
Single-site and multi-site (limited)
Single-site and multi-site
  •  
Remote Replication Frequency
Continuous (near-synchronous)
Continuous (Synchronous, Asynchronous)
Seconds (workload dependent)
  •  
Remote Replication Granularity
VM or Volume
VM (Vvols) or Volume; Snapshots-only
Per Volume (LUN)
  •  
Consistency Groups
Yes
Yes
  •  
DR Orchestration
VMware SRM (certified)
VMware SRM (certified)
N/A
  •  
Stretched Cluster (SC)
VMware vSphere: Yes (certified)
N/A
Linux KVM: Yes
  •  
SC Configuration
2+sites = two or more active sites, 0/1 or more tie-breakers
N/A
2+sites = two or more active sites
  •  
SC Distance
<=5ms RTT (targeted, not required)
N/A
<=1ms RTT (targeted, not required)
  •  
SC Scaling
<=32 hosts at each active site (per cluster)
N/A
<=32 nodes in each site part of a synchronous stretched cluster configuration
  •  
SC Data Redundancy
Replicas: 1N-2N at each active site
N/A
Replicas: 1N-2N at each active site
Data Services
  Efficiency  
  •  
Dedup/Compr. Engine
Software (integration)
Software
N/A
  •  
Dedup/Compr. Function
Efficiency (space savings)
Efficiency and Performance
N/A
  •  
Dedup/Compr. Process
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
Deduplication/Compression: Post-Processing (post process)
Deduplication: inline (pre-ack)
Compression: inline (pre-ack) + post process
N/A
  •  
Dedup/Compr. Type
Optional
Always-on
N/A
  •  
Dedup/Compr. Scope
Persistent data layer
Read and Write caches + Persistent data layers
N/A
  •  
Dedup/Compr. Radius
Pool (post-processing deduplication domain)
Node (inline deduplication domain)
Storage Cluster
N/A
  •  
Dedup/Compr. Granularity
4-128 KB variable block size (inline)
32-128 KB variable block size (post-processing)
4 KB fixed block size
N/A
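Granularity defines the unit over which duplicates are detected: SANsymphony deduplicates variable-size blocks, NetApp HCI a 4 KB fixed block size, and StorPool has no native deduplication. The sketch below illustrates the general fixed-block approach only (fingerprint each fixed-size block, store a block only when its fingerprint is new); it is not vendor code.

# Illustrative sketch of fixed-block deduplication (not vendor code).
import hashlib

BLOCK_SIZE = 4 * 1024  # 4 KB fixed block size

def dedupe_write(data: bytes, store: dict) -> int:
    """Add 'data' to 'store' (fingerprint -> block); return bytes actually stored."""
    written = 0
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint not in store:
            store[fingerprint] = block
            written += len(block)
    return written

store = {}
payload = b"A" * BLOCK_SIZE * 8 + b"B" * BLOCK_SIZE * 2   # 10 blocks, only 2 unique
print(dedupe_write(payload, store))   # 8192 bytes stored instead of 40960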
  •  
Dedup/Compr. Guarantee
N/A
Workload/Data Type dependent
N/A
  •  
Data Rebalancing
Full (optional)
Full
Full
  •  
Data Tiering
Yes
N/A
Yes
  Performance  
  •  
Task Offloading
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; Space Reclamation (T10 SCSI UNMAP)
vSphere: VMware VAAI-Block (full)
OpenNebula, OnApp, OpenStack, CloudStack
  •  
QoS Type
IOPs and/or MBps Limits
IOPs Limits (maximums)
IOPs Guarantees (minimums)
IOPs and/or MBps Limits
Fair-sharing (built-in)
  •  
QoS Granularity
Virtual Disk Groups and/or Host Groups
Per volume
Per vdisk (Vvols)
Per volume
Per vdisk (CMPs)
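NetApp HCI inherits the per-volume QoS model of Element OS, with minimum, maximum and burst IOPS settable per volume. The sketch below shows what adjusting those values over the Element JSON-RPC API could look like; the API version in the URL, the method name and the field names are assumptions based on the Element API and should be checked against NetApp's documentation, and the address, credentials and volume ID are placeholders.

# Hedged sketch: per-volume QoS (min/max/burst IOPS) on NetApp HCI via the Element
# JSON-RPC API. Endpoint version, method and field names are assumptions -- verify
# against NetApp's API reference before use.
import requests

MVIP = "https://cluster-mvip"        # placeholder management virtual IP
AUTH = ("admin", "password")         # placeholder credentials

payload = {
    "method": "ModifyVolume",
    "params": {
        "volumeID": 42,              # placeholder volume ID
        "qos": {"minIOPS": 1000, "maxIOPS": 5000, "burstIOPS": 8000},
    },
    "id": 1,
}

resp = requests.post(f"{MVIP}/json-rpc/10.0", json=payload, auth=AUTH, verify=False)  # lab use; verify certificates in production
resp.raise_for_status()
print(resp.json())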
  •  
Flash Pinning
Per VM/Virtual Disk/Volume
Not relevant (All-Flash only)
Yes
  Security  
  •  
Data Encryption Type
Built-in (native)
N/A
  •  
Data Encryption Options
Hardware: Self-encrypting drives (SEDs)
Software: SANsymphony Encryption
Hardware: Self-encrypting drives (SEDs)
Software: Element OS encryption
Hardware: Self-encrypting drives (SEDs)
Software: N/A
  •  
Data Encryption Scope
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: Data-at-rest
Software: N/A
  •  
Data Encryption Compliance
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (SANsymphony)
Hardware: FIPS 140-2 Level 1 (SEDs)
Software: N/A
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: N/A
  •  
Data Encryption Efficiency Impact
Hardware: No
Software: No
Hardware: No
Software: Yes (very limited)
Hardware: No
Software: N/A
  Test/Dev  
  •  
Fast VM Cloning
Yes
Yes
Yes
  Portability  
  •  
Hypervisor Migration
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
Hyper-V to ESXi (external)
Hyper-V/ESXi/XenServer to KVM (external)
  File Services  
  •  
Fileserver Type
Built-in (native)
Built-in (native)
N/A
  •  
Fileserver Compatibility
Windows clients
Linux clients
Windows clients
Linux clients
N/A
  •  
Fileserver Interconnect
SMB
NFS
SMB
NFS
N/A
  •  
Fileserver Quotas
Share Quotas, User Quotas
User Quotas
N/A
  •  
Fileserver Analytics
Partial
N/A
N/A
  Object Services  
  •  
Object Storage Type
N/A
N/A
N/A
  •  
Object Storage Protection
N/A
N/A
N/A
  •  
Object Storage LT Retention
N/A
N/A
N/A
Management
  Interfaces  
  •  
GUI Functionality
Centralized
Centralized
Centralized
  •  
GUI Scope
Single-site and Multi-site
Single-site and Multi-site
Single-site and Multi-site (read-only)
  •  
GUI Perf. Monitoring
Advanced
Advanced
Advanced
  •  
GUI Integration
VMware vSphere Web Client (plugin)
VMware vCenter plug-in for SANsymphony
SCVMM DataCore Storage Management Provider
Microsoft System Center Monitoring Pack
VMware vSphere Web Client (plugin)
OpenStack
CloudStack
OnApp
OpenNebula
Kubernetes
  Programmability  
  •  
Policies
Full
Partial
  •  
API/Scripting
REST-APIs
PowerShell
REST-APIs
CLI
REST-API
CLI
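All three platforms can be scripted against a REST API (SANsymphony additionally via PowerShell, the other two via a CLI). The sketch below is a generic REST call pattern only; the base URL, resource path, authentication header and response shape are placeholders, not documented endpoints of any of these products.

# Generic sketch of driving a storage platform over REST. Paths and auth are
# placeholders -- each product's API reference defines the real endpoints.
import requests

BASE_URL = "https://storage-mgmt.example.com/api/v1"   # placeholder endpoint
session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"    # auth scheme varies per product

def list_volumes():
    resp = session.get(f"{BASE_URL}/volumes", timeout=30)
    resp.raise_for_status()
    return resp.json()   # assumed to return a JSON list of volume objects

for vol in list_volumes():
    print(vol.get("name"), vol.get("sizeBytes"))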
  •  
Integration
OpenStack
OpenStack
Orchestration (vRO plug-in)
Ansible playbooks
OpenStack
CloudStack
OnApp
OpenNebula
Kubernetes
  •  
Self Service
Full
N/A
N/A
  Maintenance  
  •  
SW Composition
Unified
Unified
Unified
  •  
SW Upgrade Execution
Rolling Upgrade (1-by-1)
Rolling Upgrade (1-by-1)
Rolling Upgrade (1-by-1)
  •  
FW Upgrade Execution
Hardware dependent
Manual (written procedure)
Hardware dependent
  Support  
  •  
Single HW/SW Support
No
Yes
Yes (limited)
  •  
Call-Home Function
Partial (HW dependent)
Full
Yes
  •  
Predictive Analytics
Partial
Partial (Capacity)
Partial

Matrix Score
  • DataCore: 2nd
  • NetApp: 5th
  • StorPool: 6th