SDS and HCI comparison & reviews

Summary
Rank (column order): 6th / 3rd / 8th
Analysis by Herman Rutten
Values in each row below are listed in column order: SANsymphony, vSAN, HyperFlex.

General
Legend:
  • Fully Supported
  • Limitation
  • Not Supported
  • Information Only
Pros
SANsymphony:
  • + Extensive platform support
  • + Extensive data protection capabilities
  • + Flexible deployment options
vSAN:
  • + Broad range of hardware support
  • + Strong VMware integration
  • + Policy-based management
HyperFlex:
  • + Strong Cisco integration
  • + Fast streamlined deployment
  • + Strong container support
Cons
SANsymphony:
  • - No native data integrity verification
  • - Dedup/compr not performance optimized
  • - Disk/node failure protection not capacity optimized
vSAN:
  • - Single hypervisor support
  • - Very limited native data protection capabilities
  • - Dedup/compr not performance optimized
HyperFlex:
  • - Single server hardware support
  • - No bare-metal support
  • - Limited native data protection capabilities
  Assessment  
  •  
Overview
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
Name: vSAN
Type: Software-only (SDS)
Development Start: Unknown
First Product Release: 2014
Name: HyperFlex (HX)
Type: Hardware+Software (HCI)
Development Start: 2015
First Product Release: apr 2016
  •  
Maturity
GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
…
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
…
SSY 3.0: 1999
GA Release Dates:
vSAN 7.0 U1: oct 2020
vSAN 7.0: apr 2020
vSAN 6.7 U3: apr 2019
vSAN 6.7 U1: oct 2018
vSAN 6.7: may 2018
vSAN 6.6.1: jul 2017
vSAN 6.6: apr 2017
vSAN 6.5: nov 2016
vSAN 6.2: mar 2016
vSAN 6.1: aug 2015
vSAN 6.0: mar 2015
vSAN 5.5: mar 2014
GA Release Dates:
HX 4.0: apr 2019
HX 3.5.2a: jan 2019
HX 3.5.1a: nov 2018
HX 3.5: oct 2018
HX 3.0: apr 2018
HX 2.6.1b: dec 2017
HX 2.6.1a: oct 2017
HX 2.5: jul 2017
HX 2.1: may 2017
HX 2.0: mar 2017
HX 1.8: sep 2016
HX 1.7.3: aug 2016
1.7.1-14835: jun 2016
HX 1.7.1: apr 2016
  Pricing  
  •  
Hardware Pricing Model
N/A
N/A
Per Node
Bundle (ROBO)
  •  
Software Pricing Model
Capacity based (per TB)
Per CPU Socket
Per Desktop (VDI use cases only)
Per Used GB (VCPP only)
Per Node
  •  
Support Pricing Model
Capacity based (per TB)
Per CPU Socket
Per Desktop (VDI use cases only)
Per Node
Design & Deploy
  Design  
  •  
Consolidation Scope
SANsymphony: Storage, Data Protection, Management, Automation & Orchestration
vSAN: Hypervisor, Compute, Storage, Data Protection (limited), Management, Automation & Orchestration
HyperFlex: Compute, Storage, Network, Management, Automation & Orchestration
  •  
Network Topology
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
1, 10, 40 GbE
1, 10, 40 GbE
  •  
Overall Design Complexity
Medium
Medium
Low
  •  
External Performance Validation
SPC (jun 2016)
ESG Lab (jan 2016)
StorageReview (aug 2018, aug 2016)
ESG Lab (aug 2018, apr 2016)
Evaluator Group (oct 2018, jul 2017, aug 2016)
ESG Lab (jul 2018)
SAP (dec 2017)
ESG Lab (mar 2017)
  •  
Evaluation Methods
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
Free Trial (60-days)
Online Lab
Proof-of-Concept (PoC)
Online Labs
Proof-of-Concept (PoC)
  Deploy  
  •  
Deployment Architecture
Single-Layer
Dual-Layer
Single-Layer (primary)
Dual-Layer (secondary)
Single-Layer (primary)
Dual-Layer (secondary)
  •  
Deployment Method
BYOS (some automation)
BYOS (fast, some automation)
Pre-installed (very fast, turnkey approach)
Turnkey (very fast; highly automated)
Workload Support
  Virtualization  
  •  
Hypervisor Deployment
Virtual Storage Controller
Kernel (Optional for Hyper-V)
Kernel Integrated
Virtual Storage Controller
  •  
Hypervisor Compatibility
SANsymphony:
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
vSAN:
VMware vSphere ESXi 7.0 U1
HyperFlex:
VMware vSphere ESXi 6.0U3/6.5U2/6.7U2
Microsoft Hyper-V 2016/2019
  •  
Hypervisor Interconnect
SANsymphony: iSCSI, FC
vSAN: vSAN (incl. WSFC)
HyperFlex: NFS, SMB
  Bare Metal  
  •  
Bare Metal Compatibility
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Many
N/A
  •  
Bare Metal Interconnect
iSCSI
N/A
  Containers  
  •  
Container Integration Type
Built-in (native)
Built-in (Hypervisor-based, vSAN supported)
Built-in (native)
  •  
Container Platform Compatibility
Docker CE/EE 18.03+
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker EE 1.13+
  •  
Container Platform Interconnect
Docker Volume plugin (certified)
Docker Volume Plugin (certified) + VMware VIB
HX FlexVolume Driver
  •  
Container Host Compatibility
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
Virtualized container hosts on VMware vSphere hypervisor
Virtualized container hosts on VMware vSphere hypervisor
  •  
Container Host OS Compatibility
Linux
Linux
Windows 10 or Windows Server 2016
Ubuntu Linux 16.04.3 LTS
  •  
Container Orch. Compatibility
VCP: Kubernetes 1.6.5+ on ESXi 6.0+
CNS: Kubernetes 1.14+
Kubernetes
  •  
Container Orch. Interconnect
Kubernetes CSI plugin
Kubernetes Volume Plugin
HX-CSI Plugin
  VDI  
  •  
VDI Compatibility
VMware Horizon
Citrix XenDesktop
VMware Horizon
Citrix XenDesktop
VMware Horizon
Citrix XenDesktop
  •  
VDI Load Bearing
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
VMware: up to 200 virtual desktops/node
Citrix: up to 90 virtual desktops/node
VMware: up to 137 virtual desktops/node
Citrix: up to 125 virtual desktops/node
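The per-node figures above translate into cluster capacity only after reserving failover headroom. A hypothetical N+1 sizing sketch (the helper function and the N+1 policy are illustrative assumptions, not taken from the matrix or any vendor sizing tool):

```python
# Hypothetical N+1 sizing: the cluster must still carry all desktops
# with one node down, so one node's worth of capacity is held back.

def cluster_desktop_capacity(desktops_per_node: int, nodes: int) -> int:
    """Usable desktop count when one node is reserved as failover headroom."""
    if nodes < 2:
        raise ValueError("N+1 sizing needs at least two nodes")
    return desktops_per_node * (nodes - 1)

# e.g. the vSAN figure of up to 200 Horizon desktops/node on an 8-node cluster:
print(cluster_desktop_capacity(200, 8))  # 1400
```

Actual load-bearing depends on desktop profile and resource contention; treat vendor per-node numbers as upper bounds under the tested workload.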
Server Support
  Server/Node  
  •  
Hardware Vendor Choice
Many
Many
Cisco
  •  
Models
Many
Many
9 storage models:
HX220x Edge M5, HX220c M4/M5, HX240c M4/M5, HXAF220c M4/M5, HXAF240c M4/M5
8 compute-only models: B2x0 M4/M5, B4x0 M4/M5, C2x0 M4/M5, C4x0 M4/M5, C480 ML
  •  
Density
1, 2 or 4 nodes per chassis
1, 2 or 4 nodes per chassis
HX2x0/HXAF2x0/HXAN2x0: 1 node per chassis
B200: up to 8 nodes per chassis
C2x0: 1 node per chassis
  •  
Mixing Allowed
Yes
Yes
Partial
  Components  
  •  
CPU Config
Flexible
Flexible
Flexible
  •  
Memory Config
Flexible
Flexible
  •  
Storage Config
Flexible
Flexible: number of disks + capacity
HX220c/HXAF220c/HXAN220c: Fixed number of disks
HX240c/HXAF240c: Flexible (number of disks)
  •  
Network Config
Flexible
Flexible
Flexible: M5:10/40GbE; M5 Edge:1/10GbE; FC (optional)
  •  
GPU Config
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
NVIDIA Tesla (HX240c only)
AMD FirePro (HX240c only)
  Scaling  
  •  
Scale-up
CPU
Memory
Storage
GPU
CPU
Memory
Storage
GPU
HX220c/HXAF220c/HXAN220c: CPU, Memory, Network
HX240c/HXAF240c: CPU, Memory, Storage, Network, GPU
  •  
Scale-out
Storage+Compute
Compute-only
Storage-only
Compute+storage
Compute-only (vSAN VMKernel)
Compute+storage
Compute-only (IO Visor)
  •  
Scalability
1-64 nodes in 1-node increments
2-64 nodes in 1-node increments
vSphere: 2-32 storage nodes in 1-node increments + 0-32 compute-only nodes in 1-node increments
Hyper-V: 2-16 storage nodes in 1-node increments + 0-16 compute-only nodes in 1-node increments
  •  
Small-scale (ROBO)
2 Node minimum
2 Node minimum
2 Node minimum
Storage Support
  General  
  •  
Layout
Block Storage Pool
Object Storage File System (OSFS)
Distributed File System (DFS)
  •  
Data Locality
Partial
Partial
None
  •  
Storage Type(s)
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
Direct-attached (Raw)
Remote vSAN datastores (HCI Mesh)
Direct-attached (Raw)
SAN or NAS
  •  
Composition
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
Hybrid (Flash+Magnetic)
All-Flash
Hybrid (Flash+Magnetic)
All-Flash
  •  
Hypervisor OS Layer
SD, USB, DOM, SSD/HDD
SD, USB, DOM, HDD or SSD
Dual SD cards
SSD (optional for HX240c and HXAF240c systems)
  Memory  
  •  
Memory Layer
DRAM
DRAM
  •  
Memory Purpose
Read/Write Cache
  •  
Memory Capacity
Up to 8 TB
Non-configurable
  Flash  
  •  
Flash Layer
SSD, PCIe, UltraDIMM, NVMe
SSD, PCIe, UltraDIMM, NVMe
SSD, NVMe
  •  
Flash Purpose
Persistent Storage
Hybrid: Read/Write Cache
All-Flash: Write Cache + Storage Tier
Hybrid: Log + Read/Write Cache
All-Flash/All-NVMe: Log + Write Cache + Storage Tier
  •  
Flash Capacity
No limit, up to 1 PB per device
Hybrid: 1-5 Flash devices per node (1 per disk group)
All-Flash: 40 Flash devices per node (8 per disk group, 1 for cache and 7 for capacity)
Hybrid: 2 Flash devices per node (1x Cache; 1x Housekeeping)
All-Flash: 9-26 Flash devices per node (1x Cache; 1x System, 1x Boot; 6-23x Data)
All-NVMe: 8-11 NVMe devices per node (1x Cache, 1x System, 6-8 Data)
  Magnetic  
  •  
Magnetic Layer
SAS or SATA
Hybrid: SAS or SATA
Hybrid: SAS or SATA
  •  
Magnetic Purpose
Persistent Storage
  •  
Magnetic Capacity
No limit, up to 1 PB (per device)
1-35 SAS/SATA HDDs per host/node
HX220x Edge M5: 3-6 capacity devices per node
HX220c: 6-8 capacity devices per node
HX240c: 6-23 capacity devices per node
Data Availability
  Reads/Writes  
  •  
Persistent Write Buffer
DRAM (mirrored)
Flash Layer (SSD, PCIe, NVMe)
Flash Layer (SSD, NVMe)
  •  
Disk Failure Protection
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N), Host Pinning (1N)
All-Flash: Erasure Coding (RAID5-6)
1-2 Replicas (2N-3N)
  •  
Node Failure Protection
2-way and 3-way Mirroring (RAID-1)
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N), Host Pinning (1N)
All-Flash: Erasure Coding (RAID5-6)
Logical Availability Zone
  •  
Block Failure Protection
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Failure Domains
Not relevant (1-node chassis only)
  •  
Rack Failure Protection
Manual configuration
Failure Domains
N/A
  •  
Protection Capacity Overhead
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
Host Pinning (1N): Dependent on # of VMs
Replicas (2N): 100%
Replicas (3N): 200%
Erasure Coding (RAID5): 33%
Erasure Coding (RAID6): 50%
Replicas (2N): 100%
Replicas (3N): 200%
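The overhead percentages above follow directly from the protection arithmetic: mirroring stores extra full copies, while erasure coding stores parity fragments alongside data fragments. A minimal sketch (illustrative helper names, not vendor tooling; the 3+1 and 4+2 layouts are the common RAID5/RAID6 configurations that yield the quoted figures):

```python
# Illustrative arithmetic behind the overhead percentages above.
# Overhead = extra raw capacity consumed, relative to usable capacity.

def replica_overhead(copies: int) -> float:
    """N-way mirroring keeps `copies` full copies of the data."""
    return (copies - 1) * 100.0

def erasure_overhead(data_fragments: int, parity_fragments: int) -> float:
    """Erasure coding adds parity fragments alongside data fragments."""
    return parity_fragments / data_fragments * 100.0

print(replica_overhead(2))            # 100.0 -> Replicas (2N): 100%
print(replica_overhead(3))            # 200.0 -> Replicas (3N): 200%
print(round(erasure_overhead(3, 1)))  # 33    -> Erasure Coding (RAID5, 3+1): 33%
print(erasure_overhead(4, 2))         # 50.0  -> Erasure Coding (RAID6, 4+2): 50%
```

This is why erasure coding is capacity-optimized relative to mirroring at the same failure tolerance, at the cost of extra read/modify/write work on the data path.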
  •  
Data Corruption Detection
N/A (hardware dependent)
Read integrity checks
Disk scrubbing (software)
Read integrity checks
  Points-in-Time  
  •  
Snapshot Type
Built-in (native)
Built-in (native)
  •  
Snapshot Scope
Local + Remote
  •  
Snapshot Frequency
1 Minute
GUI: 1 hour
GUI: 1 hour (Policy-based)
  •  
Snapshot Granularity
Per VM (Vvols) or Volume
Per VM or VM-folder
  •  
Backup Type
Built-in (native)
External (vSAN Certified)
External
  •  
Backup Scope
Local or Remote
N/A
N/A
  •  
Backup Frequency
Continuously
N/A
N/A
  •  
Backup Consistency
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
N/A
N/A
  •  
Restore Granularity
Entire VM or Volume
N/A
N/A
  •  
Restore Ease-of-use
Entire VM or Volume: GUI
Single File: Multi-step
N/A
N/A
  Disaster Recovery  
  •  
Remote Replication Type
Built-in (native)
Built-in (Stretched Clusters only)
External
Built-in (native)
  •  
Remote Replication Scope
To remote sites
To MS Azure Cloud
VR: To remote sites, To VMware clouds
To remote sites
  •  
Remote Replication Cloud Function
Data repository
VR: DR-site (VMware Clouds)
N/A
  •  
Remote Replication Topologies
Single-site and multi-site
VR: Single-site and multi-site
Single-site
  •  
Remote Replication Frequency
Continuous (near-synchronous)
VR: 5 minutes (Asynchronous)
vSAN: Continuous (Stretched Cluster)
5 minutes (Asynchronous)
  •  
Remote Replication Granularity
VM or Volume
VM
  •  
Consistency Groups
Yes
VR: No
No
  •  
DR Orchestration
VMware SRM (certified)
VMware SRM (certified)
HX Connect (native)
VMware SRM (certified)
  •  
Stretched Cluster (SC)
VMware vSphere: Yes (certified)
VMware vSphere: Yes (certified)
vSphere: Yes
Hyper-V: No
  •  
SC Configuration
2+sites = two or more active sites, 0/1 or more tie-breakers
3-sites: two active sites + tie-breaker in 3rd site
vSphere: 3-sites = two active sites + tie-breaker in 3rd site
  •  
SC Distance
<=5ms RTT (targeted, not required)
<=5ms RTT
<=5ms RTT / 10Gbps
  •  
SC Scaling
<=32 hosts at each active site (per cluster)
<=15 hosts at each active site
2-16 converged hosts + 0-16 compute hosts at each active site
  •  
SC Data Redundancy
Replicas: 1N-2N at each active site
Replicas: 0-3 Replicas (1N-4N) at each active site
Erasure Coding: RAID5-6 at each active site
Replicas: 2N at each active site
Data Services
  Efficiency  
  •  
Dedup/Compr. Engine
Software (integration)
All-Flash: Software
Hybrid: N/A
Software
  •  
Dedup/Compr. Function
Efficiency (space savings)
Efficiency (Space savings)
Efficiency and Performance
  •  
Dedup/Compr. Process
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
Deduplication/Compression: Post-Processing (post process)
All-Flash: Inline (post-ack)
Hybrid: N/A
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
  •  
Dedup/Compr. Type
Optional
All-Flash: Optional
Hybrid: N/A
Always-on
  •  
Dedup/Compr. Scope
Persistent data layer
Persistent data layer
Read and Write caches + Persistent data layers
  •  
Dedup/Compr. Radius
Pool (post-processing deduplication domain)
Node (inline deduplication domain)
Disk Group
Storage Cluster
  •  
Dedup/Compr. Granularity
4-128 KB variable block size (inline)
32-128 KB variable block size (post-processing)
4 KB fixed block size
4-64 KB fixed block size
  •  
Dedup/Compr. Guarantee
N/A
N/A
N/A
  •  
Data Rebalancing
Full (optional)
Full
Full
  •  
Data Tiering
Yes
N/A
N/A
  Performance  
  •  
Task Offloading
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; Space Reclamation (T10 SCSI UNMAP)
vSphere: Integrated
vSphere: VMware VAAI-NAS (full)
Hyper-V: SMB3 ODX; UNMAP/TRIM
  •  
QoS Type
IOPs and/or MBps Limits
IOPs Limits (maximums)
N/A
  •  
QoS Granularity
Virtual Disk Groups and/or Host Groups
Per VM/Virtual Disk
N/A
  •  
Flash Pinning
Per VM/Virtual Disk/Volume
Cache Read Reservation: Per VM/Virtual Disk
Not relevant (global cache architecture)
  Security  
  •  
Data Encryption Type
Built-in (native)
  •  
Data Encryption Options
Hardware: Self-encrypting drives (SEDs)
Software: SANsymphony Encryption
Hardware: N/A
Software: vSAN data encryption; HyTrust DataControl (validated)
Hardware: Self-encrypting drives (SEDs)
Software: N/A
  •  
Data Encryption Scope
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: N/A
Software (vSAN): Data-at-rest + Data-in-transit
Software (HyTrust): Data-at-rest + Data-in-transit
Hardware: Data-at-rest
Software: N/A
  •  
Data Encryption Compliance
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (SANsymphony)
Hardware: N/A
Software: FIPS 140-2 Level 1 (vSAN); FIPS 140-2 Level 1 (HyTrust)
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: N/A
  •  
Data Encryption Efficiency Impact
Hardware: No
Software: No
Hardware: N/A
Software: No (vSAN); Yes (HyTrust)
Hardware: No
Software: N/A
  Test/Dev  
  •  
Fast VM Cloning
Yes
No
Yes
  Portability  
  •  
Hypervisor Migration
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
Hyper-V to ESXi (external)
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
  File Services  
  •  
Fileserver Type
Built-in (native)
Built-in (native)
External (vSAN Certified)
N/A
  •  
Fileserver Compatibility
Windows clients
Linux clients
Windows clients
Linux clients
N/A
  •  
Fileserver Interconnect
SMB
NFS
SMB
NFS
N/A
  •  
Fileserver Quotas
Share Quotas, User Quotas
Share Quotas
N/A
  •  
Fileserver Analytics
Partial
Partial
N/A
  Object Services  
  •  
Object Storage Type
N/A
N/A
N/A
  •  
Object Storage Protection
N/A
N/A
N/A
  •  
Object Storage LT Retention
N/A
N/A
N/A
Management
  Interfaces  
  •  
GUI Functionality
Centralized
Centralized
Centralized
  •  
GUI Scope
Single-site and Multi-site
Single-site and Multi-site
Single-site and Multi-site
  •  
GUI Perf. Monitoring
Advanced
Advanced
Basic
  •  
GUI Integration
SANsymphony:
VMware vSphere Web Client (plugin)
VMware vCenter plug-in for SANsymphony
SCVMM DataCore Storage Management Provider
Microsoft System Center Monitoring Pack
vSAN:
VMware HTML5 vSphere Client (integrated)
VMware vSphere Web Client (integrated)
HyperFlex:
VMware vSphere Web Client (plugin)
  Programmability  
  •  
Policies
Full
Full
Partial (Protection)
  •  
API/Scripting
REST-APIs
PowerShell
REST-APIs
Ruby vSphere Console (RVC)
PowerCLI
REST-APIs
CLI
  •  
Integration
OpenStack
OpenStack
VMware vRealize Automation (vRA)
Cisco UCS Director
  •  
Self Service
Full
N/A (not part of vSAN license)
N/A (not part of HX license)
  Maintenance  
  •  
SW Composition
Unified
Partially Distributed
Partially Distributed
  •  
SW Upgrade Execution
Rolling Upgrade (1-by-1)
Rolling Upgrade (1-by-1)
Rolling Upgrade (1-by-1)
  •  
FW Upgrade Execution
Hardware dependent
Rolling Upgrade (1-by-1)
1-Click
  Support  
  •  
Single HW/SW Support
No
Yes (most OEM vendors)
Yes
  •  
Call-Home Function
Partial (HW dependent)
Partial (vSAN Support Insight)
Full
  •  
Predictive Analytics
Partial
Full (not part of vSAN license)
Partial
