SDS and HCI comparison & reviews

Summary

Rank: DataCore SANsymphony 2nd, NetApp HCI 5th, HPE SimpliVity 2600 5th
Analysis by Herman Rutten
General
Pros
  • DataCore: + Extensive platform support; + Extensive data protection capabilities; + Flexible deployment options
  • NetApp: + Extensive QoS capabilities; + Extensive data protection capabilities; + Small form factor
  • HPE: + Extensive data protection capabilities; + Policy-based management; + Fast streamlined deployment
Cons
  • DataCore: - No native data integrity verification; - Dedup/compr not performance optimized; - Disk/node failure protection not capacity optimized
  • NetApp: - No hybrid configurations; - No stretched clustering; - Single hypervisor support
  • HPE: - Single hypervisor and server hardware; - No bare-metal support; - No hybrid configurations
Overview
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
Name: NetApp HCI
Type: Hardware+Software (HCI)
Development Start: 2016
First Product Release: 2017
Name: HPE SimpliVity 2600
Type: Hardware+Software (HCI)
Development Start: 2009
First Product Release: 2018
Maturity
GA Release Dates (DataCore SANsymphony):
SSY 10.0 PSP12: Jan 2021
SSY 10.0 PSP11: Aug 2020
SSY 10.0 PSP10: Dec 2019
SSY 10.0 PSP9: Jul 2019
SSY 10.0 PSP8: Sep 2018
SSY 10.0 PSP7: Dec 2017
SSY 10.0 PSP6 U5: Aug 2017
…
SSY 10.0: Jun 2014
SSY 9.0: Jul 2012
SSY 8.1: Aug 2011
SSY 8.0: Dec 2010
SSY 7.0: Apr 2009
…
SSY 3.0: 1999
GA Release Dates (NetApp HCI):
NetApp HCI 1.8P1: Oct 2020
NetApp HCI 1.8: May 2020
NetApp HCI 1.7: Sep 2019
NetApp HCI 1.6: Jul 2019
NetApp HCI 1.4: Nov 2018
NetApp HCI 1.3: Jun 2018
NetApp HCI 1.2: Mar 2018
NetApp HCI 1.1: Dec 2017
NetApp HCI 1.0: Oct 2017
GA Release Dates (HPE SimpliVity OmniStack):
OmniStack 4.0.1 U1: Dec 2020
OmniStack 4.0.1: Apr 2020
OmniStack 4.0.0: Jan 2020
OmniStack 3.7.10: Sep 2019
OmniStack 3.7.9: Jun 2019
OmniStack 3.7.8: Mar 2019
OmniStack 3.7.7: Dec 2018
OmniStack 3.7.6 U1: Oct 2018
OmniStack 3.7.6: Sep 2018
OmniStack 3.7.5: Jul 2018
OmniStack 3.7.4: May 2018
OmniStack 3.7.3: Mar 2018
OmniStack 3.7.2: Dec 2017
OmniStack 3.7.1: Oct 2017
OmniStack 3.7.0: Jun 2017
OmniStack 3.6.2: Mar 2017
OmniStack 3.6.1: Jan 2017
OmniStack 3.5.3: Nov 2016
OmniStack 3.5.2: Jul 2016
OmniStack 3.5.1: May 2016
OmniStack 3.0.7: Aug 2015
OmniStack 2.1.0: Jan 2014
OmniStack 1.1.0: Aug 2013
  Pricing
Hardware Pricing Model
  • DataCore: N/A (software-only)
Software Pricing Model
  • DataCore: Capacity based (per TB)
  • NetApp: Per Node; Capacity based (per TB)
  • HPE: Per Node (all-inclusive); Add-ons: RapidDR (per VM)
Support Pricing Model
  • DataCore: Capacity based (per TB)
  • NetApp: Per Node
  • HPE: Per Node
Design & Deploy
  Design
Consolidation Scope
  • DataCore: Storage; Data Protection; Management; Automation&Orchestration
  • NetApp: Compute; Storage; Data Protection; Management; Automation&Orchestration
  • HPE: Compute; Storage; Data Protection (full); Management; Automation&Orchestration (DR)
Network Topology
  • DataCore: 1, 10, 25, 40, 100 GbE (iSCSI); 8, 16, 32, 64 Gbps (FC)
  • NetApp: 10/25 GbE (iSCSI)
  • HPE: 1, 10 GbE
Overall Design Complexity
  • DataCore: Medium
  • NetApp: Low
  • HPE: Low
External Performance Validation
  • DataCore: SPC (Jun 2016); ESG Lab (Jan 2016)
  • NetApp: Evaluator Group (Aug 2019)
  • HPE: Login VSI (Jun 2018)
Evaluation Methods
  • DataCore: Free Trial (30 days); Proof-of-Concept (PoC; up to 12 months)
  • NetApp: Proof-of-Concept (PoC); Cloud Technology Showcase (CTS)
  • HPE: Proof-of-Concept (PoC)
  Deploy
Deployment Architecture
  • DataCore: Single-Layer or Dual-Layer
  • NetApp: Dual-Layer
  • HPE: Single-Layer
Deployment Method
  • DataCore: BYOS (some automation)
  • NetApp: Turnkey (very fast; highly automated)
  • HPE: Turnkey (very fast; highly automated)
Workload Support
  Virtualization
Hypervisor Deployment
  • DataCore: Virtual Storage Controller; Kernel (optional for Hyper-V)
  • NetApp: None
  • HPE: Virtual Storage Controller
Hypervisor Compatibility
  • DataCore: VMware vSphere ESXi 5.5-7.0U1; Microsoft Hyper-V 2012R2/2016/2019; Linux KVM; Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
  • NetApp: VMware vSphere ESXi 6.5U2-7.0; (Microsoft Hyper-V); (Red Hat KVM)
  • HPE: VMware vSphere ESXi 6.5U2-6.7U3
Hypervisor Interconnect
  • DataCore: iSCSI; FC
  • NetApp: iSCSI
  • HPE: NFS
  Bare Metal
Bare Metal Compatibility
  • DataCore: Microsoft Windows Server 2012R2/2016/2019; Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3; SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1; Ubuntu Linux 16.04 LTS; CentOS 6.5/6.6/7.3; Oracle Solaris 10.0/11.1/11.2/11.3
  • NetApp: RHEL 7.4-8.1 (KVM); Windows Server 2016-2019 (Hyper-V); VMware vSphere 6.5-7.0; Citrix XenServer 7.4-7.6; Citrix Hypervisor 8.0-8.1
  • HPE: N/A
Bare Metal Interconnect
N/A
  Containers
Container Integration Type
  • DataCore: Built-in (native)
  • NetApp: Built-in (native)
  • HPE: N/A
Container Platform Compatibility
  • DataCore: Docker CE/EE 18.03+
  • NetApp: Docker EE 17.06+
  • HPE: Docker CE 17.06.1+ for Linux on ESXi 6.0+; Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Container Platform Interconnect
  • DataCore: Docker Volume plugin (certified)
  • NetApp: Docker Volume Plugin (certified)
  • HPE: Docker Volume Plugin (certified) + VMware VIB
Container Host Compatibility
  • DataCore: Virtualized container hosts on all supported hypervisors; Bare Metal container hosts
  • NetApp: Virtualized container hosts on all supported hypervisors; Bare Metal container hosts
  • HPE: Virtualized container hosts on VMware vSphere hypervisor
Container Host OS Compatibility
  • DataCore: Linux
  • NetApp: RHEL; CentOS; Ubuntu; Debian
  • HPE: Linux; Windows 10 or 2016
Container Orch. Compatibility
Kubernetes 1.9+
Kubernetes 1.6.5+ on ESXi 6.0+
Container Orch. Interconnect
  • DataCore: Kubernetes CSI plugin
  • NetApp: Kubernetes Volume Plugin
  • HPE: Kubernetes Volume Plugin
  VDI
VDI Compatibility
  • DataCore: VMware Horizon; Citrix XenDesktop
  • NetApp: VMware Horizon; Citrix XenDesktop
  • HPE: VMware Horizon; Citrix XenDesktop
VDI Load Bearing
  • DataCore: VMware: 110 virtual desktops/node; Citrix: 110 virtual desktops/node
  • NetApp: VMware: up to 139 virtual desktops/node; Citrix: unknown
  • HPE: VMware: up to 175 virtual desktops/node; Citrix: unknown
Server Support
  Server/Node
Hardware Vendor Choice
  • DataCore: Many
  • NetApp: Super Micro; Quanta Cloud Technology
  • HPE: HPE
Models
  • DataCore: Many
  • NetApp: 6 models (3x compute + 3x storage) + several sub-models
  • HPE: 2 models
Density
  • DataCore: 1, 2 or 4 nodes per chassis
  • NetApp: 1 node per chassis; 4 nodes per chassis
  • HPE: 170-series: 3-4 nodes per chassis; 190-series: 2 nodes per chassis
Mixing Allowed
  • DataCore: Yes
  • NetApp: Yes
  • HPE: Partial
  Components
CPU Config
  • DataCore: Flexible
  • NetApp: Flexible (up to 5 options)
  • HPE: Flexible
Memory Config
  • NetApp: Fixed (H610C); Flexible (H410C/H615C)
  • HPE: Flexible
Storage Config
  • DataCore: Flexible
  • NetApp: Flexible (3 options)
  • HPE: Fixed (number of disks + capacity)
Network Config
  • DataCore: Flexible
  • NetApp: Fixed (10 or 25 Gbps)
  • HPE: Flexible (additional 10 Gbps, 190-series only)
GPU Config
  • DataCore: NVIDIA Tesla; AMD FirePro; Intel Iris Pro
  • NetApp: NVIDIA Tesla (H610C/H615C)
  • HPE: NVIDIA Tesla (190-series only)
  Scaling
Scale-up
  • DataCore: CPU; Memory; Storage; GPU
N/A
Scale-out
  • DataCore: Storage+Compute; Compute-only; Storage-only
  • NetApp: Storage+Compute; Compute-only; Storage-only
  • HPE: Storage+Compute; Compute-only
Scalability
  • DataCore: 1-64 nodes in 1-node increments
  • NetApp: 2-64 compute nodes and 2-40 storage nodes in 1-node increments
  • HPE: vSphere: 2-16 storage nodes (cluster); 2-8 storage nodes (stretched cluster); 2-32+ storage nodes (Federation) in 1-node increments
Small-scale (ROBO)
  • DataCore: 2 Node minimum
  • NetApp: 4 Node minimum (2 storage + 2 compute)
  • HPE: 2 Node minimum
Storage Support
  General
Layout
  • DataCore: Block Storage Pool
  • NetApp: Block Pool
  • HPE: Parallel File System on top of Object Store
Data Locality
  • DataCore: Partial
  • NetApp: None
  • HPE: Full
Storage Type(s)
  • DataCore: Direct-attached (Raw); Direct-attached (VoV); SAN or NAS
  • NetApp: Direct-attached (Raw)
  • HPE: Direct-attached (RAID)
Composition
  • DataCore: Magnetic-only; All-Flash; 3D XPoint; Hybrid (3D XPoint and/or Flash and/or Magnetic)
  • NetApp: All-Flash (SSD-only)
  • HPE: All-Flash (SSD-only)
Hypervisor OS Layer
  • DataCore: SD, USB, DOM, SSD/HDD
  • NetApp: SSD
  • HPE: SSD
  Memory
Memory Layer
  • DataCore: DRAM
  • NetApp: NVRAM (PCIe card) or NVDIMM
  • HPE: DRAM
Memory Purpose
  • DataCore: Read/Write Cache
  • NetApp: NVRAM/NVDIMM: Write Buffer; DRAM: Metadata
  • HPE: DRAM (VSC): Read Cache
Memory Capacity
  • DataCore: Up to 8 TB
  • NetApp: NVRAM/NVDIMM: Non-configurable; DRAM: Unknown
  • HPE: DRAM (VSC): 16-48 GB for Read Cache
  Flash
Flash Layer
  • DataCore: SSD, PCIe, UltraDIMM, NVMe
  • NetApp: SSD, NVMe
Flash Purpose
  • DataCore: Persistent Storage
  • NetApp: All-Flash: Metadata + Persistent Storage Tier
  • HPE: All-Flash: Metadata + Write Buffer + Persistent Storage Tier
Flash Capacity
  • DataCore: No limit, up to 1 PB per device
  • NetApp: All-Flash: 6 SSDs / 12 NVMe devices per storage node
  • HPE: All-Flash: 6 SSDs per node
  Magnetic
Magnetic Layer
  • DataCore: SAS or SATA
  • NetApp: Not relevant (All-Flash only)
  • HPE: Not relevant (All-Flash only)
Magnetic Purpose
Magnetic Capacity
  • DataCore: No limit, up to 1 PB (per device)
Data Availability
  Reads/Writes
Persistent Write Buffer
  • DataCore: DRAM (mirrored)
  • NetApp: NVRAM (mirrored)
  • HPE: Flash Layer (SSD)
Disk Failure Protection
  • DataCore: 2-way and 3-way Mirroring (RAID-1) + optional Hardware RAID
  • NetApp: 1 Replica (2N)
  • HPE: 1 Replica (2N) + Hardware RAID (5 or 6)
Node Failure Protection
  • DataCore: 2-way and 3-way Mirroring (RAID-1)
  • NetApp: 1 Replica (2N)
  • HPE: 1 Replica (2N) + Hardware RAID (5 or 6)
Block Failure Protection
  • DataCore: Not relevant (usually 1-node appliances); Manual configuration (optional)
  • NetApp: Protection Domains
  • HPE: Not relevant (1-node chassis only)
Rack Failure Protection
  • DataCore: Manual configuration
  • NetApp: Protection Domains
  • HPE: Group Placement
Protection Capacity Overhead
  • DataCore: Mirroring (2N, primary): 100%; Mirroring (3N, primary): 200%; + Hardware RAID5/6 overhead (optional)
  • HPE: Replica (2N) + RAID5: 125-133%; Replica (2N) + RAID6: 120-133%; Replica (2N) + RAID60: 125-140%
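The overhead figures above follow from multiplying the replica count by the RAID raw-to-usable ratio. A minimal sketch of that arithmetic (the 8+1 and 6+1 RAID5 group sizes are illustrative assumptions, not published stripe widths):

```python
# Capacity overhead of replication layered on hardware RAID:
# storing 1 unit of usable data as R replicas, each on a RAID group
# with d data disks and p parity disks, consumes R * (d + p) / d raw units.

def protection_overhead(replicas: int, data_disks: int, parity_disks: int) -> float:
    """Fractional capacity overhead beyond the usable data itself."""
    return replicas * (data_disks + parity_disks) / data_disks - 1

# Plain 2-way mirroring (2N): 100% overhead, as listed for Mirroring (2N).
print(f"{protection_overhead(2, 1, 0):.0%}")  # 100%

# Replica (2N) + RAID5: assumed 8+1 and 6+1 groups bracket the
# 125-133% range quoted above.
print(f"{protection_overhead(2, 8, 1):.0%}")  # 125%
print(f"{protection_overhead(2, 6, 1):.0%}")  # 133%
```

The same formula with p = 2 reproduces the RAID6 ranges for wider data stripes.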
Data Corruption Detection
  • DataCore: N/A (hardware dependent)
  • NetApp: Read integrity checks
  • HPE: Read integrity checks (CLI); Disk scrubbing (software)
  Points-in-Time
Snapshot Type
Built-in (native)
Snapshot Scope
  • DataCore: Local + Remote
  • NetApp: Local + Remote
  • HPE: Local + Remote
Snapshot Frequency
  • DataCore: 1 minute
  • NetApp: 5 minutes
  • HPE: GUI: 10 minutes (policy-based); CLI: 1 minute
Snapshot Granularity
Per VM (vVols) or Volume
Per VM (vVols) or Volume
Backup Type
  • DataCore: Built-in (native)
  • NetApp: Built-in (native)
  • HPE: Built-in (native)
Backup Scope
  • DataCore: Local or Remote
  • NetApp: Locally; To remote sites; To remote cloud object stores (Amazon S3, OpenStack Swift)
  • HPE: Locally; To other SimpliVity sites; To Service Providers
Backup Frequency
  • DataCore: Continuously
  • NetApp: 5 minutes
  • HPE: GUI: 10 minutes (policy-based); CLI: 1 minute
Backup Consistency
  • DataCore: Crash Consistent; File System Consistent (Windows); Application Consistent (MS Apps on Windows)
  • NetApp: Crash Consistent; File System Consistent (Windows); Application Consistent (MS Apps on Windows)
  • HPE: vSphere: File System Consistent (Windows), Application Consistent (MS Apps on Windows); Hyper-V: File System Consistent (Windows)
Restore Granularity
  • DataCore: Entire VM or Volume
  • NetApp: Entire Volume
  • HPE: vSphere: Entire VM or Single File; Hyper-V: Entire VM
Restore Ease-of-use
  • DataCore: Entire VM or Volume: GUI; Single File: Multi-step
  • NetApp: Entire VM: Multi-step; Single File: Multi-step
  • HPE: Entire VM: GUI, CLI and API; Single File: GUI
  Disaster Recovery
Remote Replication Type
  • DataCore: Built-in (native)
  • NetApp: Built-in (native)
  • HPE: Built-in (native)
Remote Replication Scope
  • DataCore: To remote sites; To MS Azure Cloud
  • NetApp: To remote sites
Remote Replication Cloud Function
  • DataCore: Data repository
  • NetApp: N/A
  • HPE: N/A
Remote Replication Topologies
  • DataCore: Single-site and multi-site
  • NetApp: Single-site and multi-site (limited)
  • HPE: Single-site and multi-site
Remote Replication Frequency
  • DataCore: Continuous (near-synchronous)
  • NetApp: Continuous (Synchronous, Asynchronous)
  • HPE: GUI: 10 minutes (Asynchronous); CLI: 1 minute (Asynchronous); Continuous (Stretched Cluster)
Remote Replication Granularity
VM or Volume
VM (vVols) or Volume; Snapshots-only
Consistency Groups
Yes
No
DR Orchestration
  • DataCore: VMware SRM (certified)
  • NetApp: VMware SRM (certified)
  • HPE: RapidDR (native; VMware only)
Stretched Cluster (SC)
  • DataCore: VMware vSphere: Yes (certified)
  • NetApp: N/A
  • HPE: VMware vSphere: Yes (certified)
SC Configuration
  • DataCore: 2+ sites: two or more active sites, 0/1 or more tie-breakers
  • NetApp: N/A
  • HPE: 3 sites: two active sites + tie-breaker in 3rd site (optional)
SC Distance
  • DataCore: <=5ms RTT (targeted, not required)
  • NetApp: N/A
  • HPE: <=5ms RTT
SC Scaling
  • DataCore: <=32 hosts at each active site (per cluster)
  • NetApp: N/A
  • HPE: <=8 hosts at each active site
SC Data Redundancy
  • DataCore: Replicas: 1N-2N at each active site
  • NetApp: N/A
  • HPE: Replicas: 1N at each active site + Hardware RAID (5, 6 or 60)
Data Services
  Efficiency
Dedup/Compr. Engine
  • DataCore: Software (integration)
  • NetApp: Software
  • HPE: Software
Dedup/Compr. Function
  • DataCore: Efficiency (space savings)
  • NetApp: Efficiency and Performance
  • HPE: Efficiency and Performance
Dedup/Compr. Process
  • DataCore: Deduplication: Inline (post-ack); Compression: Inline (post-ack); Deduplication/Compression: Post-processing
  • NetApp: Deduplication: Inline (pre-ack); Compression: Inline (pre-ack) + post-process
  • HPE: Deduplication: Inline (on-ack); Compression: Inline (on-ack)
Dedup/Compr. Type
  • DataCore: Optional
  • NetApp: Always-on
  • HPE: Always-on
Dedup/Compr. Scope
  • DataCore: Persistent data layer
  • NetApp: Read and Write caches + Persistent data layers
  • HPE: All data (memory, flash and persistent data layers)
Dedup/Compr. Radius
  • DataCore: Pool (post-processing deduplication domain); Node (inline deduplication domain)
  • NetApp: Storage Cluster
  • HPE: Federation
Dedup/Compr. Granularity
  • DataCore: 4-128 KB variable block size (inline); 32-128 KB variable block size (post-processing)
  • NetApp: 4 KB fixed block size
  • HPE: 4-8 KB variable block size
Dedup/Compr. Guarantee
  • DataCore: N/A
  • NetApp: Workload/Data Type dependent
  • HPE: 90% (10:1) capacity savings across storage and backup combined
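A reduction ratio and a savings percentage are two views of the same quantity (savings = 1 − 1/ratio), which is why 10:1 and 90% appear together in the guarantee above. A small converter for sanity-checking such claims:

```python
def savings_from_ratio(ratio: float) -> float:
    """Capacity savings fraction implied by a reduction ratio (10 -> 0.9 for 10:1)."""
    if ratio <= 0:
        raise ValueError("ratio must be positive")
    return 1.0 - 1.0 / ratio

def ratio_from_savings(savings: float) -> float:
    """Reduction ratio implied by a savings fraction (0.9 -> 10.0)."""
    if not 0.0 <= savings < 1.0:
        raise ValueError("savings must be in [0, 1)")
    return 1.0 / (1.0 - savings)

print(savings_from_ratio(10.0))  # 0.9 -> the 90% (10:1) guarantee
print(ratio_from_savings(0.5))   # 2.0 -> 50% savings is only 2:1
```

Note the non-linearity: going from 50% to 90% savings means the reduction ratio grows from 2:1 to 10:1, so headline percentages near 100% imply very large ratios.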
Data Rebalancing
  • DataCore: Full (optional)
  • NetApp: Full
  • HPE: Partial
Data Tiering
  • DataCore: Yes
  • NetApp: N/A
  • HPE: N/A
  Performance
Task Offloading
  • DataCore: vSphere: VMware VAAI-Block (full); Hyper-V: Microsoft ODX, Space Reclamation (T10 SCSI UNMAP)
  • NetApp: vSphere: VMware VAAI-Block (full)
  • HPE: vSphere: VMware VAAI-NAS (limited); GUI-integrated tasks/commands
QoS Type
  • DataCore: IOPs and/or MBps Limits
  • NetApp: IOPs Limits (maximums); IOPs Guarantees (minimums)
  • HPE: N/A
QoS Granularity
  • DataCore: Virtual Disk Groups and/or Host Groups
  • NetApp: Per volume; Per vdisk (vVols)
  • HPE: N/A
Flash Pinning
  • DataCore: Per VM/Virtual Disk/Volume
  • NetApp: Not relevant (All-Flash only)
  • HPE: Not relevant (All-Flash only)
  Security
Data Encryption Type
Built-in (native)
N/A
Data Encryption Options
  • DataCore: Hardware: Self-encrypting drives (SEDs); Software: SANsymphony Encryption
  • NetApp: Hardware: Self-encrypting drives (SEDs); Software: Element OS encryption
  • HPE: Hardware: N/A; Software: HyTrust DataControl (validated), Vormetric VTE (validated)
Data Encryption Scope
  • DataCore: Hardware: Data-at-rest; Software: Data-at-rest
  • NetApp: Hardware: Data-at-rest; Software: Data-at-rest
  • HPE: Hardware: N/A; Software: Data-at-rest + Data-in-transit
Data Encryption Compliance
  • DataCore: Hardware: FIPS 140-2 Level 2 (SEDs); Software: FIPS 140-2 Level 1 (SANsymphony)
  • NetApp: Hardware: FIPS 140-2 Level 1 (SEDs); Software: N/A
  • HPE: Hardware: N/A; Software: FIPS 140-2 Level 1 (HyTrust, VTE)
Data Encryption Efficiency Impact
  • DataCore: Hardware: No; Software: No
  • NetApp: Hardware: No; Software: Yes (very limited)
  • HPE: Hardware: N/A; Software: Yes
  Test/Dev
Fast VM Cloning
  • DataCore: Yes
  • NetApp: Yes
  • HPE: Yes
  Portability
Hypervisor Migration
  • DataCore: Hyper-V to ESXi (external); ESXi to Hyper-V (external)
  • NetApp: Hyper-V to ESXi (external)
  • HPE: Hyper-V to ESXi (external); ESXi to Hyper-V (external)
  File Services
Fileserver Type
  • DataCore: Built-in (native)
  • NetApp: Built-in (native)
  • HPE: N/A
Fileserver Compatibility
  • DataCore: Windows clients; Linux clients
  • NetApp: Windows clients; Linux clients
  • HPE: N/A
Fileserver Interconnect
  • DataCore: SMB; NFS
  • NetApp: SMB; NFS
  • HPE: N/A
Fileserver Quotas
  • DataCore: Share Quotas, User Quotas
  • NetApp: User Quotas
  • HPE: N/A
Fileserver Analytics
  • DataCore: Partial
  • NetApp: N/A
  • HPE: N/A
  Object Services
Object Storage Type
  • DataCore: N/A
  • NetApp: N/A
  • HPE: N/A
Object Storage Protection
  • DataCore: N/A
  • NetApp: N/A
  • HPE: N/A
Object Storage LT Retention
  • DataCore: N/A
  • NetApp: N/A
  • HPE: N/A
Management
  Interfaces
GUI Functionality
  • DataCore: Centralized
  • NetApp: Centralized
  • HPE: Centralized
GUI Scope
  • DataCore: Single-site and Multi-site
  • NetApp: Single-site and Multi-site
  • HPE: Single-site and Multi-site
GUI Perf. Monitoring
  • DataCore: Advanced
  • NetApp: Advanced
  • HPE: Advanced
GUI Integration
  • DataCore: VMware vSphere Web Client (plugin); VMware vCenter plug-in for SANsymphony; SCVMM DataCore Storage Management Provider; Microsoft System Center Monitoring Pack
  • NetApp: VMware vSphere Web Client (plugin)
  • HPE: VMware vSphere HTML5 Client (plugin)
  Programmability
Policies
  • DataCore: Full
  • HPE: Partial (Protection)
API/Scripting
  • DataCore: REST-APIs; PowerShell
  • NetApp: REST-APIs; CLI
  • HPE: REST-APIs; XML-APIs; PowerShell (community supported); CLI
Integration
  • DataCore: OpenStack
  • NetApp: OpenStack; Orchestration (vRO plug-in); Ansible playbooks
  • HPE: VMware vRealize Automation (vRA); Cisco UCS Director
Self Service
  • DataCore: Full
  • NetApp: N/A
  • HPE: N/A
  Maintenance
SW Composition
  • DataCore: Unified
  • NetApp: Unified
  • HPE: Unified
SW Upgrade Execution
  • DataCore: Rolling Upgrade (1-by-1)
  • NetApp: Rolling Upgrade (1-by-1)
  • HPE: Rolling Upgrade (1-by-1)
FW Upgrade Execution
  • DataCore: Hardware dependent
  • NetApp: Manual (written procedure)
  • HPE: Rolling Upgrade (1-by-1)
  Support
Single HW/SW Support
  • DataCore: No
  • NetApp: Yes
  • HPE: Yes
Call-Home Function
  • DataCore: Partial (HW dependent)
  • NetApp: Full
  • HPE: Full
Predictive Analytics
  • DataCore: Partial
  • NetApp: Partial (Capacity)
  • HPE: Full
