SDS and HCI comparison & reviews
Analysis by Herman Rutten

Summary
Rank: 2nd / 8th / 9th

General
Pros
  • + Extensive platform support
  • + Extensive data protection capabilities
  • + Flexible deployment options
  • + Flexible architecture
  • + Satisfactory platform support
  • + Several Microsoft integration points
  • + Built for simplicity
  • + Policy-based management
  • + Cost-effectiveness
Cons
  • - No native data integrity verification
  • - Dedup/compr not performance optimized
  • - Disk/node failure protection not capacity optimized
  • - Minimal data protection capabilities
  • - No native dedup capabilities
  • - No native encryption capabilities
  • - Single hypervisor support
  • - No stretched clustering
  • - No native file services
Overview
  SANsymphony
    Type: Software-only (SDS)
    Development Start: 1998
    First Product Release: 1999
  HyperConverged Appliance (HCA)
    Type: Hardware+Software (HCI)
    Development Start: 2014
    First Product Release: 2016
  Hyperconvergence (HC3)
    Type: Hardware+Software (HCI)
    Development Start: 2011
    First Product Release: 2012
Maturity
  SANsymphony GA Release Dates:
    SSY 10.0 PSP12: Jan 2021
    SSY 10.0 PSP11: Aug 2020
    SSY 10.0 PSP10: Dec 2019
    SSY 10.0 PSP9: Jul 2019
    SSY 10.0 PSP8: Sep 2018
    SSY 10.0 PSP7: Dec 2017
    SSY 10.0 PSP6 U5: Aug 2017
    ...
    SSY 10.0: Jun 2014
    SSY 9.0: Jul 2012
    SSY 8.1: Aug 2011
    SSY 8.0: Dec 2010
    SSY 7.0: Apr 2009
    ...
    SSY 3.0: 1999
  HCA Release Dates:
    HCA build 13279: Oct 2019
    HCA build 13182: Aug 2019
    HCA build 12767: Feb 2019
    HCA build 12658: Nov 2018
    HCA build 12393: Aug 2018
    HCA build 12166: May 2018
    HCA build 12146: Apr 2018
    HCA build 11818: Dec 2017
    HCA build 11456: Aug 2017
    HCA build 11404: Jul 2017
    HCA build 11156: May 2017
    HCA build 10833: Apr 2017
    HCA build 10799: Mar 2017
    HCA build 10695: Feb 2017
    HCA build 10547: Jan 2017
    HCA build 9996: Aug 2016
    HCA build 9781: Jun 2016
    HCA build 9052: May 2016
    HCA build 8730: Nov 2015
  HC3 GA Release Dates:
    HCOS 8.6.5: Mar 2020
    HCOS 8.5.3: Oct 2019
    HCOS 8.3.3: Jul 2019
    HCOS 8.1.3: Mar 2019
    HCOS 7.4.22: May 2018
    HCOS 7.2.24: Sep 2017
    HCOS 7.1.11: Dec 2016
    HCOS 6.4.2: Apr 2016
    HCOS 6.0: Feb 2015
    HCOS 5.0: Oct 2014
    ICOS 4.0: Aug 2012
    ICOS 3.0: May 2012
    ICOS 2.0: Feb 2010
    ICOS 1.0: Feb 2009
Pricing
  • Hardware Pricing Model
    N/A
    Per Node
  • Software Pricing Model
    Capacity based (per TB)
    Per Node (all-inclusive)
    Per Node (all-inclusive)
  • Support Pricing Model
    Capacity based (per TB)
    Per Node (included)
    Per Node
Design & Deploy

  Design
  • Consolidation Scope
    Storage
    Data Protection
    Management
    Automation & Orchestration
    Hypervisor
    Compute
    Storage
    Networking (optional)
    Data Protection
    Management
    Automation & Orchestration
  • Network Topology
    1, 10, 25, 40, 100 GbE (iSCSI)
    8, 16, 32, 64 Gbps (FC)
    1, 10, 25, 40, 100 GbE
    1, 10 GbE
  • Overall Design Complexity
    Medium
    Medium
    Low
  • External Performance Validation
    SPC (Jun 2016)
    ESG Lab (Jan 2016)
    StorageReview (Oct 2019)
    N/A
  • Evaluation Methods
    Free Trial (30-days)
    Proof-of-Concept (PoC; up to 12 months)
    Community Edition (forever)
    Trial (up to 30 days)
    Proof-of-Concept
    Public Facing Clusters
    Proof-of-Concept (PoC)

  Deploy
  • Deployment Architecture
    Single-Layer
    Dual-Layer
    Single-Layer
    Dual-Layer (secondary)
    Single-Layer
  • Deployment Method
    BYOS (some automation)
    Turnkey (very fast; highly automated)
    Turnkey (very fast; highly automated)
Workload Support

  Virtualization
  • Hypervisor Deployment
    Virtual Storage Controller
    Kernel (Optional for Hyper-V)
    Virtual Storage Controller (vSphere)
    User-Space (Hyper-V)
    KVM User Space
  • Hypervisor Compatibility
    VMware vSphere ESXi 5.5-7.0U1
    Microsoft Hyper-V 2012R2/2016/2019
    Linux KVM
    Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
    VMware vSphere ESXi 5.5U3-6.7
    Microsoft Hyper-V 2012-2019
    Linux KVM-based
  • Hypervisor Interconnect
    iSCSI
    FC
    iSCSI
    NFS
    SMB3
    Libscribe

  Bare Metal
  • Bare Metal Compatibility
    Microsoft Windows Server 2012R2/2016/2019
    Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
    SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
    Ubuntu Linux 16.04 LTS
    CentOS 6.5/6.6/7.3
    Oracle Solaris 10.0/11.1/11.2/11.3
    Microsoft Windows Server 2012/2012R2/2016/2019
    N/A
  • Bare Metal Interconnect
    N/A

  Containers
  • Container Integration Type
    Built-in (native)
    N/A
    N/A
  • Container Platform Compatibility
    Docker CE/EE 18.03+
    Docker CE 17.06.1+ for Linux on ESXi 6.0+
    Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
    N/A
  • Container Platform Interconnect
    Docker Volume plugin (certified)
    Docker Volume Plugin (certified) + VMware VIB
    N/A
  • Container Host Compatibility
    Virtualized container hosts on all supported hypervisors
    Bare Metal container hosts
    Virtualized container hosts on VMware vSphere hypervisor
    N/A
  • Container Host OS Compatibility
    Linux
    Linux
    Windows 10 or 2016
    N/A
  • Container Orch. Compatibility
    Kubernetes 1.6.5+ on ESXi 6.0+
    N/A
  • Container Orch. Interconnect
    Kubernetes CSI plugin
    Kubernetes Volume Plugin
    N/A

  VDI
  • VDI Compatibility
    VMware Horizon
    Citrix XenDesktop
    VMware Horizon
    Citrix XenDesktop
    Citrix XenDesktop
    Parallels RAS
    Leostream
  • VDI Load Bearing
    VMware: 110 virtual desktops/node
    Citrix: 110 virtual desktops/node
    VMware: up to 260 virtual desktops/node
    Citrix: up to 220 virtual desktops/node
    Workspot: 40 virtual desktops/node
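The per-node desktop densities listed above translate directly into cluster sizing. A minimal sketch of that arithmetic (the N+1 spare-node policy and the example workload of 500 desktops are illustrative assumptions, not vendor guidance):

```python
import math

def nodes_required(total_desktops: int, desktops_per_node: int,
                   spare_nodes: int = 1) -> int:
    """Nodes needed to host a VDI estate, plus spare capacity for failover.

    spare_nodes=1 models a common N+1 high-availability policy (assumption).
    """
    base = math.ceil(total_desktops / desktops_per_node)
    return base + spare_nodes

# Example: 500 Citrix desktops at the 110 desktops/node figure cited above.
print(nodes_required(500, 110))  # 5 data-bearing nodes + 1 spare = 6
```

In practice the vendor densities are "up to" figures measured for a specific desktop profile, so real sizing should derate them for the actual workload.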
Server Support

  Server/Node
  • Hardware Vendor Choice
    Many
    Super Micro (StarWind branded)
    Dell (StarWind branded)
    Dell (OEM)
    HPE (Select)
    Lenovo (native and OEM)
    SuperMicro (native)
  • Models
    Many
    6 Native Models (L-AF, XL-AF, L-H, XL-H, L, XL)
    20 Native Submodels
    4 Native Models
  • Density
    1, 2 or 4 nodes per chassis
    1 node per chassis
    1 node per chassis
  • Mixing Allowed
    Yes
    No
    Yes

  Components
  • CPU Config
    Flexible
    Flexible
    Flexible: up to 3 options (native); extensive (Lenovo OEM)
  • Memory Config
    Flexible
    Flexible: up to 8 options
  • Storage Config
    Flexible
    Flexible: number of storage devices
    Capacity: up to 5 options (HDD, SSD)
    Fixed: number of disks
  • Network Config
    Flexible
    Fixed (Private network)
    Fixed: HC1200/5000: 10GbE; HE150/500T: 1GbE
    Flexible: HE500: 1/10GbE
  • GPU Config
    NVIDIA Tesla
    AMD FirePro
    Intel Iris Pro
    NVIDIA Tesla/Quadro
    N/A

  Scaling
  • Scale-up
    CPU
    Memory
    Storage
    GPU
    CPU
    Memory
    Storage
    GPU
    CPU
    Memory
  • Scale-out
    Storage+Compute
    Compute-only
    Storage-only
    Compute+Storage
    Compute-only
    Storage-only
    Storage+Compute
    Storage-only
  • Scalability
    1-64 nodes in 1-node increments
    2-64 nodes in 1-node increments
    3-8 nodes in 1-node increments
  • Small-scale (ROBO)
    2 Node minimum
    2 Node minimum
Storage Support

  General
  • Layout
    Block Storage Pool
    Block Storage Pool
    Block Storage Pool
  • Data Locality
    Partial
    Partial
    None
  • Storage Type(s)
    Direct-attached (Raw)
    Direct-attached (VoV)
    SAN or NAS
    Direct-attached (Raw)
    SAN or NAS
    Direct-attached (Raw)
  • Composition
    Magnetic-only
    All-Flash
    3D XPoint
    Hybrid (3D XPoint and/or Flash and/or Magnetic)
    Magnetic-only
    Hybrid (Flash+Magnetic)
    All-Flash
    Magnetic-only
    Hybrid (Flash+Magnetic)
    All-Flash
  • Hypervisor OS Layer
    SD, USB, DOM, SSD/HDD
    SD, USB, DOM, SSD/HDD
    HDD or SSD (partition)

  Memory
  • Memory Layer
    DRAM
  • Memory Purpose
    Read/Write Cache
    Read/Write Cache
    Read Cache
    Metadata structures
  • Memory Capacity
    Up to 8 TB
    Configurable
    4GB+

  Flash
  • Flash Layer
    SSD, PCIe, UltraDIMM, NVMe
    SSD, NVMe
  • Flash Purpose
    Persistent Storage
    Read/Write Cache (hybrid)
    Persistent storage (all-flash)
  • Flash Capacity
    No limit, up to 1 PB per device
    All-Flash: 4-20+ devices per node
    Hybrid: 2 devices per node
    Hybrid: 1-3 SSDs per node
    All-Flash: 4 SSDs per node

  Magnetic
  • Magnetic Layer
    SAS or SATA
    Hybrid: SATA
  • Magnetic Purpose
  • Magnetic Capacity
    No limit, up to 1 PB (per device)
    Hybrid: 4+ devices per node
    Magnetic-only: 4-12+ devices per node
    Magnetic-only: 4 HDDs per node
    Hybrid: 3 or 9 HDDs per node
Data Availability

  Reads/Writes
  • Persistent Write Buffer
    DRAM (mirrored)
    DRAM (mirrored)
    Flash Layer (SSD, NVMe)
    Flash/HDD
  • Disk Failure Protection
    2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
    1-2 Replicas (2N-3N) + Hardware RAID (1, 5, 10)
    2-way Mirroring (Network RAID-10)
  • Node Failure Protection
    2-way and 3-way Mirroring (RAID-1)
    1-2 Replicas (2N-3N)
    2-way Mirroring (Network RAID-10)
  • Block Failure Protection
    Not relevant (usually 1-node appliances)
    Manual configuration (optional)
    Not relevant (1-node chassis only)
    Not relevant (1U/2U appliances)
  • Rack Failure Protection
    Manual configuration
  • Protection Capacity Overhead
    Mirroring (2N) (primary): 100%
    Mirroring (3N) (primary): 200%
    + Hardware RAID5/6 overhead (optional)
    Replica (2N) + RAID1/10: 200%
    Replica (2N) + RAID5: 125-133%
    Replica (3N) + RAID1/10: 300%
    Replica (3N) + RAID5: 225-233%
    Mirroring (2N) (primary): 100%
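The overhead percentages above follow from copy count times local RAID raw-per-usable ratio, minus the one usable copy. A minimal sketch of that arithmetic (the 7- to 9-disk RAID5 group sizes used to reproduce the quoted 125-133% range are an assumption, not a documented configuration):

```python
def protection_overhead(copies, raid_group=None):
    """Capacity overhead as a fraction of usable capacity.

    copies: number of full data copies (2N mirroring -> 2, 3N -> 3).
    raid_group: disks per local RAID5 group, if RAID5 sits under the
    replicas; a group of n disks stores n/(n-1) raw bytes per usable byte.
    """
    raid_factor = 1.0 if raid_group is None else raid_group / (raid_group - 1)
    return copies * raid_factor - 1.0

print(f"{protection_overhead(2):.0%}")                # 100%: 2N mirroring
print(f"{protection_overhead(3):.0%}")                # 200%: 3N mirroring
print(f"{protection_overhead(2, raid_group=9):.0%}")  # 125%: 2N + 9-disk RAID5
print(f"{protection_overhead(2, raid_group=7):.0%}")  # 133%: 2N + 7-disk RAID5
```

The same formula reproduces the 3N + RAID5 range (225-233%) with the same assumed group sizes.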
  • Data Corruption Detection
    N/A (hardware dependent)
    N/A (hardware dependent)
    Read integrity checks (software)
    Disk scrubbing (software)

  Points-in-Time
  • Snapshot Type
    N/A
    Built-in (native)
  • Snapshot Scope
    Local + Remote
    N/A
    Local (+ Remote)
  • Snapshot Frequency
    1 Minute
    N/A
    5 minutes
  • Snapshot Granularity
    Per VM (vVols) or Volume
    N/A
  • Backup Type
    Built-in (native)
    External
    Built-in (native)
  • Backup Scope
    Local or Remote
    N/A
    Locally
    To remote sites
  • Backup Frequency
    Continuously
    N/A
    5 minutes (Asynchronous)
  • Backup Consistency
    Crash Consistent
    File System Consistent (Windows)
    Application Consistent (MS Apps on Windows)
    N/A
    Crash Consistent
    File System Consistent (Windows)
    Application Consistent (MS Apps on Windows)
  • Restore Granularity
    Entire VM or Volume
    N/A
    Entire VM
  • Restore Ease-of-use
    Entire VM or Volume: GUI
    Single File: Multi-step
    N/A