SDS and HCI comparison & reviews

Summary
Products compared (values appear in this column order throughout): DataCore SANsymphony, StorPool Distributed Storage, Microsoft Storage Spaces Direct (S2D)
Rank: 5th | 6th | 7th
User Reviews: Not Enabled (all three)
Analysis: by Herman Rutten (all three)
General
Legend:
  • Fully Supported
  • Limitation
  • Not Supported
  • Information Only
Pros
SANsymphony:
  • + Extensive platform support
  • + Extensive data protection capabilities
  • + Flexible deployment options
StorPool:
  • + Built for performance and robustness
  • + Broad range of hardware support
  • + Well suited for open private cloud platforms
Storage Spaces Direct (S2D):
  • + Broad range of hardware support
  • + Strong Microsoft integration
  • + Great simplicity to deploy
Cons
SANsymphony:
  • - No native data integrity verification
  • - Dedup/compr not performance optimized
  • - Disk/node failure protection not capacity optimized
StorPool:
  • - Only basic support for VMware and Hyper-V
  • - No native dedup capabilities
  • - No native encryption capabilities
Storage Spaces Direct (S2D):
  • - Single hypervisor support
  • - Limited native data protection
  • - Dedup/compr not performance optimized
  Content  
  •  
Content Creator
Herman Rutten (all three)
  Assessment  
  •  
Overview
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
Name: StorPool Distributed Storage (StorPool Storage)
Type: Software-only (SDS)
Development Start: 2011
First Product Release: Nov 2012
Name: Storage Spaces Direct (S2D)
Type: Software-only (SDS)
Development Start: 2015
First Product Release: Oct 2016
  •  
Maturity
GA Release Dates:
SSY 10.0 PSP12: Jan 2021
SSY 10.0 PSP11: Aug 2020
SSY 10.0 PSP10: Dec 2019
SSY 10.0 PSP9: Jul 2019
SSY 10.0 PSP8: Sep 2018
SSY 10.0 PSP7: Dec 2017
SSY 10.0 PSP6 U5: Aug 2017
...
SSY 10.0: Jun 2014
SSY 9.0: Jul 2012
SSY 8.1: Aug 2011
SSY 8.0: Dec 2010
SSY 7.0: Apr 2009
...
SSY 3.0: 1999
Release Dates:
SP 19.01: Jun 2019
SP 18.02: Jan 2018
SP 18.01: Dec 2017
SP 16.03: Dec 2016
SP 16.02: Aug 2016
SP 16.01: Mar 2016
SP 15.03: Nov 2015
SP 15.02: Mar 2015
SP 15.01: Jan 2015
SP 14.12: Dec 2014
SP 14.10: Oct 2014
SP 14.08: Aug 2014
SP 14.04: Apr 2014
SP 14.02: Feb 2014
SP 13.10: Oct 2013 (GA)
SP 20121217: Dec 2012 (Early access)
SP 20121119: Nov 2012 (Early access)
GA Release Dates:
S2D 2019: Oct 2018
S2D 2016: Oct 2016
  Pricing  
  •  
Hardware Pricing Model
N/A
N/A
Per Node
  •  
Software Pricing Model
Capacity based (per TB)
Capacity based (per TB)
Per Node
  •  
Support Pricing Model
Capacity based (per TB)
Capacity based (per TB)
Per Node
Design & Deploy
  Design  
  •  
Consolidation Scope
Storage
Data Protection
Management
Automation & Orchestration
Compute
Storage
Data Protection (limited)
Automation & Orchestration (limited)
Hypervisor
Compute
Storage
Data Protection (limited)
Management
Automation & Orchestration
  •  
Network Topology
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
Standard Ethernet 10/25/40/50/100 GbE
Infiniband (EoL)
10, 25, 40, 100 GbE
  •  
Overall Design Complexity
Medium
Medium
Medium
  •  
External Performance Validation
SPC (Jun 2016)
ESG Lab (Jan 2016)
N/A (one internally-validated)
ESG Lab (Mar 2017)
  •  
Evaluation Methods
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
Evaluation license (30-days)
Free Trial (180-days)
Proof-of-Concept (PoC)
  Deploy  
  •  
Deployment Architecture
Single-Layer
Dual-Layer
Single-Layer
Dual-Layer
Single-Layer
Dual-Layer
  •  
Deployment Method
BYOS (some automation)
Turnkey (remote install)
BYOS (some automation)
Workload Support
  Virtualization  
  •  
Hypervisor Deployment
Virtual Storage Controller
Kernel (Optional for Hyper-V)
Next to Hypervisor (KVM)
None (ESXi, Hyper-V, XenServer, OracleVM)
Kernel Integrated
  •  
Hypervisor Compatibility
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
KVM
VMware ESXi
Hyper-V
XenServer
OracleVM
Microsoft Hyper-V 2012R2 (Dual Layer)
Microsoft Hyper-V 2016/2019
  •  
Hypervisor Interconnect
iSCSI
FC
Block device driver (KVM)
iSCSI (ESXi, Hyper-V, XenServer, OracleVM)
SMB3
  Bare Metal  
  •  
Bare Metal Compatibility
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Microsoft Windows Server 2012R2-2019
CentOS 7, 8
Debian Linux 9
Ubuntu Linux 16.04 LTS, 18.04 LTS
Other Linux distributions (e.g. RHEL, OEL, SuSE)
Microsoft Windows Server (Limited)
  •  
Bare Metal Interconnect
Block device driver (Linux)
iSCSI (Windows Server)
  Containers  
  •  
Container Integration Type
Built-in (native)
Hypervisor: None
Bare metal (Linux): Block device driver
Built-in (native)
  •  
Container Platform Compatibility
Docker CE/EE 18.03+
Most container platforms
Windows (Native)
Linux (Docker in Linux VM or Windows Server 2016 VM)
  •  
Container Platform Interconnect
Docker Volume plugin (certified)
Standard block devices
OS-integrated software + CSV/SMB (Native)
VHDX (Docker in Linux VM or Windows Server 2016 VM)
  •  
Container Host Compatibility
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
Bare-metal container hosts
Virtualized container hosts on Microsoft Hyper-V hypervisor (Docker in Linux VM, Native in Windows VM)
Bare Metal container hosts (Native)
  •  
Container Host OS Compatibility
Linux
Linux
Windows Server 2019 (Native)
Windows Server 2016 (Docker)
Linux (Docker)
  •  
Container Orch. Compatibility
Kubernetes v1.13+
Kubernetes v1.14+
  •  
Container Orch. Interconnect
Kubernetes CSI plugin
Kubernetes CSI plugin
Kubernetes FlexVolume Plugin
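The Kubernetes CSI plugins listed above are consumed from the orchestrator side through ordinary StorageClass/PersistentVolumeClaim objects. A minimal sketch using the official Kubernetes Python client follows; the StorageClass name "block-csi" is a hypothetical placeholder, not a vendor-documented value.

```python
# Sketch only: request a volume from a CSI-backed StorageClass.
# "block-csi" is a placeholder StorageClass name, not a vendor-documented one.
from kubernetes import client, config

config.load_kube_config()   # use the local kubeconfig
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-pvc"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="block-csi",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```

The CSI driver named by the StorageClass then provisions and attaches the backing block volume; with a FlexVolume plugin the claim looks the same, only the provisioning mechanism behind it differs.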
  VDI  
  •  
VDI Compatibility
VMware Horizon
Citrix XenDesktop
VMware Horizon
Citrix XenDesktop
Microsoft RDS on Hyper-V
Citrix Virtual Apps and Desktops 7 1808
Workspot VDI on Hyper-V
  •  
VDI Load Bearing
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
N/A
Server Support
  Server/Node  
  •  
Hardware Vendor Choice
Many
Many
Many
  •  
Models
Many
Many
Many
  •  
Density
1, 2 or 4 nodes per chassis
1, 2 or 4 nodes per chassis
1, 2 or 4 nodes per chassis
  •  
Mixing Allowed
Yes
Yes
Yes
  Components  
  •  
CPU Config
Flexible
Flexible
Flexible
  •  
Memory Config
Flexible
Flexible
  •  
Storage Config
Flexible
Flexible
Flexible
  •  
Network Config
Flexible
Flexible
Flexible
  •  
GPU Config
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
  Scaling  
  •  
Scale-up
CPU
Memory
Storage
GPU
CPU
Memory
Storage
GPU
CPU
Memory
Storage
GPU
  •  
Scale-out
Storage+Compute
Compute-only
Storage-only
Storage+Compute
Compute-only
Storage-only
Compute+storage
Compute-only/Storage-only
  •  
Scalability
1-64 nodes in 1-node increments
3-63 nodes in 1-node increments
2-16 nodes in 1-node increments; >1,000 nodes in a federation (cluster set)
  •  
Small-scale (ROBO)
2 Node minimum
3 Node minimum
2 Node minimum
Storage Support
  General  
  •  
Layout
Block Storage Pool
Block Storage Pool
SSB (Software Storage Bus) Block Pool
  •  
Data Locality
Partial
None
None
  •  
Storage Type(s)
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
Direct-attached (Raw)
Direct-Attached (Raw)
  •  
Composition
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
Magnetic-only
Hybrid
All-Flash
Hybrid (Flash+Magnetic)
All-Flash
(Persistent Memory)
  •  
Hypervisor OS Layer
SD, USB, DOM, SSD/HDD
SD, USB, DOM, SSD/HDD
SSD/HDD
Persistent Memory
  Memory  
  •  
Memory Layer
DRAM
DRAM
  •  
Memory Purpose
Read/Write Cache
Metadata
Read Cache
Write-back Cache (optional)
Read Cache
Metadata structures
  •  
Memory Capacity
Up to 8 TB
Configurable
S2D Cache (Hybrid): non-configurable
CSV Cache: configurable
  Flash  
  •  
Flash Layer
SSD, PCIe, UltraDIMM, NVMe
SSD, PCIe, NVMe
SSD, NVMe, (Persistent memory)
  •  
Flash Purpose
Persistent Storage
Persistent Storage
Write-back Cache
Read/Write Cache (hybrid)
Write Cache (all-flash)
Persistent storage (all-flash)
  •  
Flash Capacity
No limit, up to 1 PB per device
No limit
Up to 4 PB per cluster
  Magnetic  
  •  
Magnetic Layer
SAS or SATA
SAS or SATA
Hybrid: SAS or SATA
  •  
Magnetic Purpose
Persistent Storage
Persistent Storage
  •  
Magnetic Capacity
No limit, up to 1 PB (per device)
No limit
Up to 4 PB per cluster
Data Availability
  Reads/Writes  
  •  
Persistent Write Buffer
DRAM (mirrored)
Hybrid configurations (optional): Intel Optane NVMe, "Pool" NVMe drive, Broadcom/LSI CacheVault or BBU
Flash Layer (PMEM/NVMe/SSD)
  •  
Disk Failure Protection
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
0-2 Replicas (1N-3N)
2-way and 3-way Mirroring (RAID-1) (primary)
Erasure Coding (N+1/N+2) (secondary)
Nested Resiliency (4N; RAID 5+1) (primary; 2-node only)
  •  
Node Failure Protection
2-way and 3-way Mirroring (RAID-1)
0-2 Replicas (1N-3N)
1-2 Replicas (2N-3N)
Erasure Coding
Nested Resiliency (4N; RAID5+1) (primary; 2-node only)
  •  
Block Failure Protection
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Fault Sets
Fault Domain Awareness
  •  
Rack Failure Protection
Manual configuration
Fault Sets
Fault Domain Awareness
  •  
Protection Capacity Overhead
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
EC (N+2) (secondary): 50%-80%
Nested Resiliency (4N) (primary): 300%
Nested Resiliency (RAID5+1) (primary): 150%
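To make the protection overhead percentages above concrete: overhead is simply the raw capacity consumed per unit of usable capacity, minus one. A small illustrative calculation (not vendor code):

```python
# Illustrative arithmetic for the overhead percentages listed above.
def overhead_pct(raw_per_usable: float) -> float:
    """Protection overhead in percent, given the raw-to-usable capacity ratio."""
    return (raw_per_usable - 1.0) * 100.0

print(overhead_pct(2.0))        # 2-way mirroring: 2 copies of every block -> 100%
print(overhead_pct(3.0))        # 3-way mirroring: 3 copies -> 200%
print(overhead_pct(6.0 / 4.0))  # erasure coding, 4 data + 2 parity fragments -> 50%
```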
  •  
Data Corruption Detection
N/A (hardware dependent)
Read integrity checks (end-to-end checksums)
Disk scrubbing
Read integrity checks
Proactive file integrity scrubber (requires ReFS integrity streams; optional)
Automatic in-line corruption correction (requires ReFS integrity streams; optional)
  Points-in-Time  
  •  
Snapshot Type
Built-in (native)
N/A
  •  
Snapshot Scope
Local + Remote
Local + Remote
N/A
  •  
Snapshot Frequency
1 Minute
Seconds (workload dependent)
N/A
  •  
Snapshot Granularity
Per VM (Vvols) or Volume
Per Volume (LUN)
Per VM/container (e.g. OpenStack, Kubernetes)
N/A
  •  
Backup Type
Built-in (native)
Built-in (native)
External
  •  
Backup Scope
Local or Remote
Local or Remote
WSB: Locally and to remote sites
  •  
Backup Frequency
Continuously
Seconds (workload dependent)
WSB GUI: 30 minutes
Task Scheduler: 1 minute
  •  
Backup Consistency
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
Crash Consistent (also Group Consistency)
WSB: File System Consistent (Windows)
WSB: Application Consistent (MS Apps on Windows)
  •  
Restore Granularity
Entire VM or Volume
Entire Volume
WSB: Entire VM
  •  
Restore Ease-of-use
Entire VM or Volume: GUI
Single File: Multi-step
Entire Volume: API or CLI
Single File: Multi-step
WSB: Entire VM (GUI)
  Disaster Recovery  
  •  
Remote Replication Type
Built-in (native)
Built-in (native)
Built-in (native)
  •  
Remote Replication Scope
To remote sites
To MS Azure Cloud
To remote sites
SR: To remote sites, to public clouds
HR: To remote sites, to Microsoft Azure (not part of Windows Server 2019)
  •  
Remote Replication Cloud Function
Data repository
DR-site (several cloud providers)
SR: Data repository (Azure)
HR: DR-site (Azure)
  •  
Remote Replication Topologies
Single-site and multi-site
Single-site and multi-site
SR: Single site
HR: Single-site and chained
  •  
Remote Replication Frequency
Continuous (near-synchronous)
Seconds (workload dependent)
SR: seconds (Near-sync), continuous (Synchronous)
HR: 30 seconds (Asynchronous)
  •  
Remote Replication Granularity
VM or Volume
Per Volume (LUN)
SR: Volume
HR: VM
  •  
Consistency Groups
Yes
Yes
  •  
DR Orchestration
VMware SRM (certified)
N/A
Azure Site Recovery
  •  
Stretched Cluster (SC)
VMware vSphere: Yes (certified)
Linux KVM: Yes
N/A
  •  
SC Configuration
2+ sites = two or more active sites, with zero, one or more tie-breakers
2+ sites = two or more active sites
N/A
  •  
SC Distance
<=5ms RTT (targeted, not required)
<=1ms RTT (targeted, not required)
N/A
  •  
SC Scaling
<=32 hosts at each active site (per cluster)
<=32 nodes in each site part of a synchronous stretched cluster configuration
N/A
  •  
SC Data Redundancy
Replicas: 1N-2N at each active site
Replicas: 1N-2N at each active site
N/A
Data Services
  Efficiency  
  •  
Dedup/Compr. Engine
Software (integration)
N/A
Software
  •  
Dedup/Compr. Function
Efficiency (space savings)
N/A
Efficiency (space savings)
  •  
Dedup/Compr. Process
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
Deduplication/Compression: Post-Processing (post process)
NEW
N/A
Post-Process
NEW
  •  
Dedup/Compr. Type
Optional
N/A
Optional
  •  
Dedup/Compr. Scope
Persistent data layer
N/A
Persistent data layer
  •  
Dedup/Compr. Radius
Pool (post-processing deduplication domain)
Node (inline deduplication domain)
N/A
Volume
  •  
Dedup/Compr. Granularity
4-128 KB variable block size (inline)
32-128 KB variable block size (post-processing)
N/A
32-128 KB variable block size
  •  
Dedup/Compr. Guarantee
N/A
N/A
  •  
Data Rebalancing
Full (optional)
Full
Full
  •  
Data Tiering
Yes
Yes
Yes
  Performance  
  •  
Task Offloading
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; Space Reclamation (T10 SCSI UNMAP)
OpenNebula, OnApp, OpenStack, CloudStack
RDMA
ReFSv2
  •  
QoS Type
IOPs and/or MBps Limits
IOPs and/or MBps Limits
Fair-sharing (built-in)
IOPs/MBps Limits (maximums)
IOPs Guarantees (minimums)
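IOPS/MBps limits of the kind listed above are commonly enforced with token-bucket throttling at the virtual disk or volume object. A minimal sketch of that mechanism (illustrative only, not vendor code):

```python
# Minimal token-bucket sketch of an IOPS limit (illustrative, not vendor code).
import time

class IopsLimiter:
    def __init__(self, max_iops: float):
        self.rate = max_iops            # tokens added per second
        self.tokens = max_iops          # current budget (bursts up to ~1 second)
        self.last = time.monotonic()

    def admit(self) -> bool:
        """Refill the bucket, then consume one token if the I/O may proceed now."""
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                    # caller should queue or delay the I/O

limiter = IopsLimiter(max_iops=500)     # e.g. cap one virtual disk at 500 IOPS
```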
  •  
QoS Granularity
Virtual Disk Groups and/or Host Groups
Per volume
Per vdisk (CMPs)
Per Virtual Disk
  •  
Flash Pinning
Per VM/Virtual Disk/Volume
Yes
Not relevant (Cache architecture)
  Security  
  •  
Data Encryption Type
Built-in (native)
N/A
  •  
Data Encryption Options
Hardware: Self-encrypting drives (SEDs)
Software: SANsymphony Encryption
Hardware: Self-encrypting drives (SEDs)
Software: N/A
Hardware: N/A
Software: Microsoft BitLocker Drive Encryption; SMB encryption
  •  
Data Encryption Scope
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: Data-at-rest
Software: N/A
Hardware: N/A
Software: Data-at-rest (BitLocker); Data-in-transit (SMB Encryption)
  •  
Data Encryption Compliance
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (SANsymphony)
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: N/A
Hardware: N/A
Software: FIPS 140-2 Level 1 (BitLocker)
  •  
Data Encryption Efficiency Impact
Hardware: No
Software: No
Hardware: No
Software: N/A
Hardware: N/A
Software: No
  Test/Dev  
  •  
Fast VM Cloning
Yes
Yes
  Portability  
  •  
Hypervisor Migration
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
Hyper-V/ESXi/XenServer to KVM (external)
ESXi to Hyper-V/Azure (external)
  File Services  
  •  
Fileserver Type
Built-in (native)
N/A
Built-in (native; limited)
  •  
Fileserver Compatibility
Windows clients
Linux clients
N/A
Windows clients
  •  
Fileserver Interconnect
SMB
NFS
N/A
SMB
  •  
Fileserver Quotas
Share Quotas, User Quotas
N/A
N/A
  •  
Fileserver Analytics
Partial
N/A
N/A
  Object Services  
  •  
Object Storage Type
N/A
N/A
N/A
  •  
Object Storage Protection
N/A
N/A
N/A
  •  
Object Storage LT Retention
N/A
N/A
N/A
Management
  Interfaces  
  •  
GUI Functionality
Centralized
Centralized
Centralized
  •  
GUI Scope
Single-site and Multi-site
Single-site and Multi-site (read-only)
Single-site and Multi-site
  •  
GUI Perf. Monitoring
Advanced
Advanced
Advanced
  •  
GUI Integration
VMware vSphere Web Client (plugin)
VMware vCenter plug-in for SANsymphony
SCVMM DataCore Storage Management Provider
Microsoft System Center Monitoring Pack
OpenStack
CloudStack
OnApp
OpenNebula
Kubernetes
SCVMM 2016
Windows Admin Center (HCI only)
  Programmability  
  •  
Policies
Full
Partial
Partial (Storage QoS)
  •  
API/Scripting
REST-APIs
PowerShell
REST-API
CLI
PowerShell
WMI
Public SDK (WAC)
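The REST APIs listed above are typically scripted with plain HTTP calls. A generic sketch follows; the base URL, endpoint path and token are placeholders, not documented DataCore, StorPool or Microsoft endpoints.

```python
# Generic REST scripting sketch; URL, path and token are placeholders only.
import requests

BASE_URL = "https://mgmt.example.local/api"    # placeholder management endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credentials

# List volumes and print a name and size for each returned object.
resp = requests.get(f"{BASE_URL}/volumes", headers=HEADERS, timeout=30)
resp.raise_for_status()
for vol in resp.json():
    print(vol.get("name"), vol.get("sizeBytes"))
```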
  •  
Integration
OpenStack
OpenStack
CloudStack
OnApp
OpenNebula
Kubernetes
Azure Automation
System Center Orchestrator
  •  
Self Service
Full
N/A
N/A (not part of S2D license)
  Maintenance  
  •  
SW Composition
Unified
Unified
Partially Distributed
  •  
SW Upgrade Execution
Rolling Upgrade (1-by-1)
Rolling Upgrade (1-by-1)
Rolling Upgrade (1-by-1)
  •  
FW Upgrade Execution
Hardware dependent
Hardware dependent
Hardware dependent
  Support  
  •  
Single HW/SW Support
No
Yes (limited)
No (Yes for some Tier-1 server hardware vendors)
  •  
Call-Home Function
Partial (HW dependent)
Yes
Partial (HW dependent)
  •  
Predictive Analytics
Partial
Partial
Full