SDS and HCI comparison & reviews

Summary
Rank
SANsymphony: 2nd | HyperFlex: 8th | SimpliVity 2600: 5th
Analysis by Herman Rutten (all three products)
General
Pros
SANsymphony:
  • + Extensive platform support
  • + Extensive data protection capabilities
  • + Flexible deployment options
HyperFlex:
  • + Strong Cisco integration
  • + Fast streamlined deployment
  • + Strong container support
SimpliVity 2600:
  • + Extensive data protection capabilities
  • + Policy-based management
  • + Fast streamlined deployment
Cons
SANsymphony:
  • - No native data integrity verification
  • - Dedup/compression not performance optimized
  • - Disk/node failure protection not capacity optimized
HyperFlex:
  • - Single server hardware support
  • - No bare-metal support
  • - Limited native data protection capabilities
SimpliVity 2600:
  • - Single hypervisor and server hardware
  • - No bare-metal support
  • - No hybrid configurations
Overview
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
Name: HyperFlex (HX)
Type: Hardware+Software (HCI)
Development Start: 2015
First Product Release: apr 2016
Name: HPE SimpliVity 2600
Type: Hardware+Software (HCI)
Development Start: 2009
First Product Release: 2018
Maturity
GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
…
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
…
SSY 3.0: 1999
GA Release Dates:
HX 4.0: apr 2019
HX 3.5.2a: jan 2019
HX 3.5.1a: nov 2018
HX 3.5: oct 2018
HX 3.0: apr 2018
HX 2.6.1b: dec 2017
HX 2.6.1a: oct 2017
HX 2.5: jul 2017
HX 2.1: may 2017
HX 2.0: mar 2017
HX 1.8: sep 2016
HX 1.7.3: aug 2016
HX 1.7.1-14835: jun 2016
HX 1.7.1: apr 2016
GA Release Dates:
OmniStack 4.0.1 U1: dec 2020
OmniStack 4.0.1: apr 2020
OmniStack 4.0.0: jan 2020
OmniStack 3.7.10: sep 2019
OmniStack 3.7.9: jun 2019
OmniStack 3.7.8: mar 2019
OmniStack 3.7.7: dec 2018
OmniStack 3.7.6 U1: oct 2018
OmniStack 3.7.6: sep 2018
OmniStack 3.7.5: jul 2018
OmniStack 3.7.4: may 2018
OmniStack 3.7.3: mar 2018
OmniStack 3.7.2: dec 2017
OmniStack 3.7.1: oct 2017
OmniStack 3.7.0: jun 2017
OmniStack 3.6.2: mar 2017
OmniStack 3.6.1: jan 2017
OmniStack 3.5.3: nov 2016
OmniStack 3.5.2: jul 2016
OmniStack 3.5.1: may 2016
OmniStack 3.0.7: aug 2015
OmniStack 2.1.0: jan 2014
OmniStack 1.1.0: aug 2013
  Pricing  
  •  
Hardware Pricing Model
SANsymphony: N/A
HyperFlex: Per Node
SimpliVity: Bundle (ROBO)
  •  
Software Pricing Model
SANsymphony: Capacity based (per TB)
HyperFlex: Per Node
SimpliVity: Per Node (all-inclusive); add-ons: RapidDR (per VM)
  •  
Support Pricing Model
SANsymphony: Capacity based (per TB)
HyperFlex: Per Node
SimpliVity: Per Node
Design & Deploy
  Design  
  •  
Consolidation Scope
SANsymphony: Storage, Data Protection, Management, Automation & Orchestration
HyperFlex: Compute, Storage, Network, Management, Automation & Orchestration
SimpliVity: Compute, Storage, Data Protection (full), Management, Automation & Orchestration (DR)
  •  
Network Topology
SANsymphony: 1/10/25/40/100 GbE (iSCSI); 8/16/32/64 Gbps (FC)
HyperFlex: 1/10/40 GbE
SimpliVity: 1/10 GbE
  •  
Overall Design Complexity
SANsymphony: Medium
HyperFlex: Low
SimpliVity: Low
  •  
External Performance Validation
SANsymphony: SPC (jun 2016), ESG Lab (jan 2016)
HyperFlex: ESG Lab (jul 2018), SAP (dec 2017)
SimpliVity: ESG Lab (mar 2017), Login VSI (jun 2018)
  •  
Evaluation Methods
SANsymphony: Free Trial (30 days), Proof-of-Concept (PoC; up to 12 months)
HyperFlex: Online Labs, Proof-of-Concept (PoC)
SimpliVity: Cloud Technology Showcase (CTS), Proof-of-Concept (PoC)
  Deploy  
  •  
Deployment Architecture
SANsymphony: Single-Layer or Dual-Layer
HyperFlex: Single-Layer (primary), Dual-Layer (secondary)
SimpliVity: Single-Layer
  •  
Deployment Method
SANsymphony: BYOS (some automation)
HyperFlex: Turnkey (very fast; highly automated)
SimpliVity: Turnkey (very fast; highly automated)
Workload Support
  Virtualization  
  •  
Hypervisor Deployment
SANsymphony: Virtual Storage Controller; Kernel (optional for Hyper-V)
HyperFlex: Virtual Storage Controller
SimpliVity: Virtual Storage Controller
  •  
Hypervisor Compatibility
SANsymphony:
- VMware vSphere ESXi 5.5-7.0U1
- Microsoft Hyper-V 2012R2/2016/2019
- Linux KVM
- Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
HyperFlex:
- VMware vSphere ESXi 6.0U3/6.5U2/6.7U2
- Microsoft Hyper-V 2016/2019
SimpliVity:
- VMware vSphere ESXi 6.5U2-6.7U3
  •  
Hypervisor Interconnect
SANsymphony: iSCSI, FC
HyperFlex: NFS, SMB
SimpliVity: NFS
  Bare Metal  
  •  
Bare Metal Compatibility
SANsymphony:
- Microsoft Windows Server 2012R2/2016/2019
- Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
- SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
- Ubuntu Linux 16.04 LTS
- CentOS 6.5/6.6/7.3
- Oracle Solaris 10.0/11.1/11.2/11.3
HyperFlex: N/A
SimpliVity: N/A
  •  
Bare Metal Interconnect
N/A
N/A
  Containers  
  •  
Container Integration Type
SANsymphony: Built-in (native)
HyperFlex: Built-in (native)
SimpliVity: N/A
  •  
Container Platform Compatibility
SANsymphony: Docker CE/EE 18.03+
HyperFlex: Docker EE 1.13+
SimpliVity: Docker CE 17.06.1+ for Linux on ESXi 6.0+; Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
  •  
Container Platform Interconnect
SANsymphony: Docker Volume plugin (certified)
HyperFlex: HX FlexVolume Driver
SimpliVity: Docker Volume Plugin (certified) + VMware VIB
  •  
Container Host Compatibility
SANsymphony: Virtualized container hosts on all supported hypervisors; bare-metal container hosts
HyperFlex: Virtualized container hosts on VMware vSphere hypervisor
SimpliVity: Virtualized container hosts on VMware vSphere hypervisor
  •  
Container Host OS Compatibility
SANsymphony: Linux
HyperFlex: Ubuntu Linux 16.04.3 LTS
SimpliVity: Linux; Windows 10 or 2016
  •  
Container Orch. Compatibility
Kubernetes
Kubernetes 1.6.5+ on ESXi 6.0+
  •  
Container Orch. Interconnect
SANsymphony: Kubernetes CSI plugin
HyperFlex: HX-CSI Plugin
SimpliVity: Kubernetes Volume Plugin
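The row above only names the plugin type; in a cluster, a CSI driver is consumed through a Kubernetes StorageClass that references the driver's registered provisioner name. A minimal sketch of such a manifest, rendered in Python for clarity — the provisioner string and parameters below are illustrative placeholders, not the actual HX-CSI, SANsymphony, or SimpliVity values:

```python
import json

# Illustrative StorageClass for a CSI-backed storage platform.
# "csi.example-vendor.com" is a placeholder driver name, NOT a real
# HX-CSI / SANsymphony / SimpliVity provisioner string.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "hci-block"},
    "provisioner": "csi.example-vendor.com",
    "parameters": {"fsType": "ext4"},   # filesystem created on each block volume
    "reclaimPolicy": "Delete",          # delete the backing volume with the PVC
    "allowVolumeExpansion": True,
}
print(json.dumps(storage_class, indent=2))
```

PersistentVolumeClaims that name this class are then provisioned dynamically by the vendor's CSI driver.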
  VDI  
  •  
VDI Compatibility
SANsymphony: VMware Horizon, Citrix XenDesktop
HyperFlex: VMware Horizon, Citrix XenDesktop
SimpliVity: VMware Horizon, Citrix XenDesktop
  •  
VDI Load Bearing
SANsymphony: VMware: 110 virtual desktops/node; Citrix: 110 virtual desktops/node
HyperFlex: VMware: up to 137 virtual desktops/node; Citrix: up to 125 virtual desktops/node
SimpliVity: VMware: up to 175 virtual desktops/node; Citrix: unknown
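Per-node figures like these are normally combined with a failover reserve when sizing a cluster: capacity is computed on the surviving nodes, not the full node count. A small sizing helper as a sketch — the N+1 reserve policy and example node counts are assumptions for illustration, not vendor guidance:

```python
def vdi_capacity(nodes: int, desktops_per_node: int, reserve_nodes: int = 1) -> int:
    """Usable desktop count when `reserve_nodes` are held back for failover."""
    return max(nodes - reserve_nodes, 0) * desktops_per_node

# e.g. an 8-node cluster sized at 110 desktops/node with one node in reserve
print(vdi_capacity(8, 110))   # 770
print(vdi_capacity(8, 175))   # 1225
```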
Server Support
  Server/Node  
  •  
Hardware Vendor Choice
SANsymphony: Many
HyperFlex: Cisco
SimpliVity: HPE
  •  
Models
SANsymphony: Many
HyperFlex: 9 storage models (HX220x Edge M5, HX220c M4/M5, HX240c M4/M5, HXAF220c M4/M5, HXAF240c M4/M5); 8 compute-only models (B2x0 M4/M5, B4x0 M4/M5, C2x0 M4/M5, C4x0 M4/M5, C480 ML)
SimpliVity: 2 models
  •  
Density
SANsymphony: 1, 2 or 4 nodes per chassis
HyperFlex: HX2x0/HXAF2x0/HXAN2x0: 1 node per chassis; B200: up to 8 nodes per chassis; C2x0: 1 node per chassis
SimpliVity: 170-series: 3-4 nodes per chassis; 190-series: 2 nodes per chassis
  •  
Mixing Allowed
SANsymphony: Yes
HyperFlex: Partial
SimpliVity: Partial
  Components  
  •  
CPU Config
SANsymphony: Flexible
HyperFlex: Flexible
SimpliVity: Flexible
  •  
Memory Config
Flexible
Flexible
  •  
Storage Config
SANsymphony: Flexible
HyperFlex: HX220c/HXAF220c/HXAN220c: fixed number of disks; HX240c/HXAF240c: flexible number of disks
SimpliVity: Fixed (number of disks + capacity)
  •  
Network Config
SANsymphony: Flexible
HyperFlex: Flexible (M5: 10/40GbE; M5 Edge: 1/10GbE; FC optional)
SimpliVity: Flexible (additional 10Gbps, 190-series only)
  •  
GPU Config
SANsymphony: NVIDIA Tesla, AMD FirePro, Intel Iris Pro
HyperFlex: NVIDIA Tesla (HX240c only), AMD FirePro (HX240c only)
SimpliVity: NVIDIA Tesla (190-series only)
  Scaling  
  •  
Scale-up
SANsymphony: CPU, Memory, Storage, GPU
HyperFlex: HX220c/HXAF220c/HXAN220c: CPU, Memory, Network; HX240c/HXAF240c: CPU, Memory, Storage, Network, GPU
  •  
Scale-out
SANsymphony: Storage+Compute, Compute-only, Storage-only
HyperFlex: Compute+Storage, Compute-only (IO Visor)
SimpliVity: Compute+Storage, Compute-only
  •  
Scalability
SANsymphony: 1-64 nodes in 1-node increments
HyperFlex: vSphere: 2-32 storage nodes + 0-32 compute-only nodes, in 1-node increments; Hyper-V: 2-16 storage nodes + 0-16 compute-only nodes, in 1-node increments
SimpliVity: vSphere: 2-16 storage nodes (cluster); 2-8 storage nodes (stretched cluster); 2-32+ storage nodes (Federation), in 1-node increments
  •  
Small-scale (ROBO)
SANsymphony: 2-node minimum
HyperFlex: 2-node minimum
SimpliVity: 2-node minimum
Storage Support
  General  
  •  
Layout
SANsymphony: Block Storage Pool
HyperFlex: Distributed File System (DFS)
SimpliVity: Parallel File System on top of Object Store
  •  
Data Locality
SANsymphony: Partial
HyperFlex: None
SimpliVity: Full
  •  
Storage Type(s)
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
Direct-attached (Raw)
SAN or NAS
Direct-attached (RAID)
  •  
Composition
SANsymphony: Magnetic-only, All-Flash, 3D XPoint, Hybrid (3D XPoint and/or Flash and/or Magnetic)
HyperFlex: Hybrid (Flash+Magnetic), All-Flash
SimpliVity: All-Flash (SSD-only)
  •  
Hypervisor OS Layer
SANsymphony: SD, USB, DOM, SSD/HDD
HyperFlex: Dual SD cards; SSD (optional for HX240c and HXAF240c systems)
SimpliVity: SSD
  Memory  
  •  
Memory Layer
DRAM
  •  
Memory Purpose
Read/Write Cache
DRAM (VSC): Read Cache
  •  
Memory Capacity
Up to 8 TB
DRAM (VSC): 16-48GB for Read Cache
  Flash  
  •  
Flash Layer
SSD, PCIe, UltraDIMM, NVMe
SSD, NVMe
  •  
Flash Purpose
SANsymphony: Persistent Storage
HyperFlex: Hybrid: Log + Read/Write Cache; All-Flash/All-NVMe: Log + Write Cache + Storage Tier
SimpliVity: All-Flash: Metadata + Write Buffer + Persistent Storage Tier
  •  
Flash Capacity
SANsymphony: No limit, up to 1 PB per device
HyperFlex: Hybrid: 2 flash devices per node (1x Cache, 1x Housekeeping); All-Flash: 9-26 flash devices per node (1x Cache, 1x System, 1x Boot, 6-23x Data); All-NVMe: 8-11 NVMe devices per node (1x Cache, 1x System, 6-8x Data)
SimpliVity: All-Flash: 6 SSDs per node
  Magnetic  
  •  
Magnetic Layer
SANsymphony: SAS or SATA
HyperFlex: Hybrid: SAS or SATA
SimpliVity: N/A (all-flash only)
  •  
Magnetic Purpose
Persistent Storage
  •  
Magnetic Capacity
SANsymphony: No limit, up to 1 PB (per device)
HyperFlex: HX220x Edge M5: 3-6 capacity devices per node; HX220c: 6-8 capacity devices per node; HX240c: 6-23 capacity devices per node
Data Availability
  Reads/Writes  
  •  
Persistent Write Buffer
SANsymphony: DRAM (mirrored)
HyperFlex: Flash Layer (SSD, NVMe)
SimpliVity: Flash Layer (SSD)
  •  
Disk Failure Protection
SANsymphony: 2-way and 3-way Mirroring (RAID-1) + optional Hardware RAID
HyperFlex: 1-2 Replicas (2N-3N)
SimpliVity: 1 Replica (2N) + Hardware RAID (5 or 6)
  •  
Node Failure Protection
SANsymphony: 2-way and 3-way Mirroring (RAID-1)
HyperFlex: Logical Availability Zone
SimpliVity: 1 Replica (2N) + Hardware RAID (5 or 6)
  •  
Block Failure Protection
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Not relevant (1-node chassis only)
Not relevant (1-node chassis only)
  •  
Rack Failure Protection
SANsymphony: Manual configuration
HyperFlex: N/A
SimpliVity: Group Placement
  •  
Protection Capacity Overhead
SANsymphony: Mirroring (2N, primary): 100%; Mirroring (3N, primary): 200%; + Hardware RAID5/6 overhead (optional)
HyperFlex: Replicas (2N): 100%; Replicas (3N): 200%
SimpliVity: Replica (2N) + RAID5: 125-133%; Replica (2N) + RAID6: 120-133%; Replica (2N) + RAID60: 125-140%
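These percentages follow from multiplying the replica factor by the RAID group's raw-to-usable ratio, then subtracting the usable baseline. A quick check in Python — the RAID group width chosen below is an illustrative assumption, since the table does not publish the vendors' group sizes:

```python
def protection_overhead(replicas: int, raid_width: int = 0, raid_parity: int = 0) -> float:
    """Extra raw capacity consumed per unit of usable capacity, in percent.

    replicas    -- total data copies kept by the software layer (2 for "2N")
    raid_width  -- disks per hardware RAID group under each copy (0 = no RAID)
    raid_parity -- parity disks per RAID group (1 for RAID5, 2 for RAID6)
    """
    raw_per_usable = float(replicas)
    if raid_width:
        raw_per_usable *= raid_width / (raid_width - raid_parity)
    return (raw_per_usable - 1) * 100

print(protection_overhead(2))        # 100.0 -> Mirroring/Replicas (2N)
print(protection_overhead(3))        # 200.0 -> Mirroring/Replicas (3N)
print(protection_overhead(2, 9, 1))  # 125.0 -> 2N on top of an 8+1 RAID5 set
```

Narrower RAID5/6 groups lose a larger fraction to parity, which is why the replica+RAID rows show ranges rather than single values.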
  •  
Data Corruption Detection
SANsymphony: N/A (hardware dependent)
HyperFlex: Read integrity checks
SimpliVity: Read integrity checks (CLI); Disk scrubbing (software)
  Points-in-Time  
  •  
Snapshot Type
Built-in (native)
Built-in (native)
  •  
Snapshot Scope
Local + Remote
Local + Remote
  •  
Snapshot Frequency
SANsymphony: 1 minute
HyperFlex: GUI: 1 hour (policy-based)
SimpliVity: GUI: 10 minutes (policy-based); CLI: 1 minute
  •  
Snapshot Granularity
Per VM (Vvols) or Volume
Per VM or VM-folder
  •  
Backup Type
SANsymphony: Built-in (native)
HyperFlex: External
SimpliVity: Built-in (native)
  •  
Backup Scope
SANsymphony: Local or Remote
HyperFlex: N/A
SimpliVity: Local; to other SimpliVity sites; to Service Providers
  •  
Backup Frequency
SANsymphony: Continuous
HyperFlex: N/A
SimpliVity: GUI: 10 minutes (policy-based); CLI: 1 minute
  •  
Backup Consistency
SANsymphony: Crash Consistent; File System Consistent (Windows); Application Consistent (MS Apps on Windows)
HyperFlex: N/A
SimpliVity: vSphere: File System Consistent (Windows), Application Consistent (MS Apps on Windows); Hyper-V: File System Consistent (Windows)
  •  
Restore Granularity
SANsymphony: Entire VM or Volume
HyperFlex: N/A
SimpliVity: vSphere: Entire VM or Single File; Hyper-V: Entire VM
  •  
Restore Ease-of-use
Entire VM or Volume: GUI
Single File: Multi-step