SDS and HCI comparison & reviews

Summary
Rank: 2nd (SANsymphony), 8th (StarWind Virtual SAN), 3rd (vSAN)
User Reviews: Not Enabled (all three products)
Analysis: by Herman Rutten

General
Legend:
  • Fully Supported
  • Limitation
  • Not Supported
  • Information Only
Pros
SANsymphony:
  • + Extensive platform support
  • + Extensive data protection capabilities
  • + Flexible deployment options
StarWind Virtual SAN:
  • + Flexible architecture
  • + Extensive platform support
  • + Several Microsoft integration points
vSAN:
  • + Broad range of hardware support
  • + Strong VMware integration
  • + Policy-based management
Cons
SANsymphony:
  • - No native data integrity verification
  • - Deduplication/compression not performance optimized
  • - Disk/node failure protection not capacity optimized
StarWind Virtual SAN:
  • - Minimal data protection capabilities
  • - No Quality-of-Service mechanisms
  • - No native encryption capabilities
vSAN:
  • - Single hypervisor support
  • - Very limited native data protection capabilities
  • - Deduplication/compression not performance optimized
  Assessment  
• Overview
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
Name: StarWind Virtual SAN
Type: SDS
Development Start: 2003
First Product Release: 2011
Name: vSAN
Type: Software-only (SDS)
Development Start: Unknown
First Product Release: 2014
• Maturity
GA Release Dates:
SSY 10.0 PSP12: Jan 2021
SSY 10.0 PSP11: Aug 2020
SSY 10.0 PSP10: Dec 2019
SSY 10.0 PSP9: Jul 2019
SSY 10.0 PSP8: Sep 2018
SSY 10.0 PSP7: Dec 2017
SSY 10.0 PSP6 U5: Aug 2017
…
SSY 10.0: Jun 2014
SSY 9.0: Jul 2012
SSY 8.1: Aug 2011
SSY 8.0: Dec 2010
SSY 7.0: Apr 2009
…
SSY 3.0: 1999
StarWind VSAN for vSphere Release Dates:
VSAN build 13170: Oct 2019
VSAN build 12859: Feb 2019
VSAN build 12658: Dec 2018
VSAN build 12533: Sep 2018

StarWind VSAN for Hyper-V Release Dates:
VSAN build 13279: Oct 2019
VSAN build 13182: Aug 2019
VSAN build 12767: Feb 2019
VSAN build 12658: Nov 2018
VSAN build 12585: Oct 2018
VSAN build 12393: Aug 2018
VSAN build 12166: May 2018
VSAN build 12146: Apr 2018
VSAN build 11818: Dec 2017
VSAN build 11456: Aug 2017
VSAN build 11404: Jul 2017
VSAN build 11156: May 2017
VSAN build 11071: May 2017
VSAN build 10927: Apr 2017
VSAN build 10914: Apr 2017
VSAN build 10833: Apr 2017
VSAN build 10811: Mar 2017
VSAN build 10799: Mar 2017
VSAN build 10695: Feb 2017
VSAN build 10547: Jan 2017
VSAN build 9996: Aug 2016
VSAN build 9980: Aug 2016
VSAN build 9781: Jun 2016
VSAN build 9611: Jun 2016
VSAN build 9052: May 2016
VSAN build 8730: Nov 2015
VSAN build 8716: Nov 2015
VSAN build 8198: Jun 2015
VSAN build 7929: Apr 2015
VSAN build 7774: Feb 2015
VSAN build 7509: Dec 2014
VSAN build 7471: Dec 2014
VSAN build 7354: Nov 2014
VSAN build 7145
VSAN build 6884
GA Release Dates:
vSAN 7.0 U1: Oct 2020
vSAN 7.0: Apr 2020
vSAN 6.7 U3: Apr 2019
vSAN 6.7 U1: Oct 2018
vSAN 6.7: May 2018
vSAN 6.6.1: Jul 2017
vSAN 6.6: Apr 2017
vSAN 6.5: Nov 2016
vSAN 6.2: Mar 2016
vSAN 6.1: Aug 2015
vSAN 6.0: Mar 2015
vSAN 5.5: Mar 2014
  Pricing  
• Hardware Pricing Model
N/A
N/A
N/A
• Software Pricing Model
Capacity based (per TB)
Hyper-V: Per Node + Per TB
vSphere: Per Node (storage capacity included)
Per CPU Socket
Per Desktop (VDI use cases only)
Per Used GB (VCPP only)
• Support Pricing Model
Capacity based (per TB)
Per Node
Per CPU Socket
Per Desktop (VDI use cases only)
Design & Deploy
  Design  
• Consolidation Scope
SANsymphony: Storage; Data Protection; Management; Automation & Orchestration
StarWind Virtual SAN: Storage; Management
vSAN: Hypervisor; Compute; Storage; Data Protection (limited); Management; Automation & Orchestration
• Network Topology
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
1, 10, 25, 40, 100 GbE
1, 10, 40 GbE
• Overall Design Complexity
Medium
Medium
Medium
• External Performance Validation
SPC (Jun 2016)
ESG Lab (Jan 2016)
N/A
StorageReview (Aug 2018, Aug 2016)
ESG Lab (Aug 2018, Apr 2016)
Evaluator Group (Oct 2018, Jul 2017, Aug 2016)
• Evaluation Methods
Free Trial (30 days)
Proof-of-Concept (PoC; up to 12 months)
Community Edition (forever)
Trial (up to 30 days)
Proof-of-Concept
Free Trial (60 days)
Online Lab
Proof-of-Concept (PoC)
  Deploy  
• Deployment Architecture
Single-Layer
Dual-Layer
Single-Layer
Dual-Layer (secondary)
Single-Layer (primary)
Dual-Layer (secondary)
• Deployment Method
BYOS (some automation)
BYOS (some automation)
BYOS (fast, some automation)
Pre-installed (very fast, turnkey approach)
Workload Support
  Virtualization  
• Hypervisor Deployment
Virtual Storage Controller
Kernel (Optional for Hyper-V)
Virtual Storage Controller (vSphere)
User-Space (Hyper-V)
Kernel Integrated
• Hypervisor Compatibility
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
VMware vSphere ESXi 6.0-6.7
Microsoft Hyper-V 2012-2019
VMware vSphere ESXi 7.0 U1
• Hypervisor Interconnect
iSCSI
FC
iSCSI
NFS
SMB3
vSAN (incl. WSFC)
  Bare Metal  
• Bare Metal Compatibility
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Microsoft Windows Server 2012/2012R2/2016/2019
Many
• Bare Metal Interconnect
iSCSI
  Containers  
• Container Integration Type
Built-in (native)
N/A
Built-in (Hypervisor-based, vSAN supported)
• Container Platform Compatibility
Docker CE/EE 18.03+
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
• Container Platform Interconnect
Docker Volume plugin (certified)
Docker Volume Plugin (certified) + VMware VIB
Docker Volume Plugin (certified) + VMware VIB
• Container Host Compatibility
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
Virtualized container hosts on VMware vSphere hypervisor
Virtualized container hosts on VMware vSphere hypervisor
• Container Host OS Compatibility
Linux
Linux
Windows 10 or Windows Server 2016
Linux
Windows 10 or Windows Server 2016
• Container Orch. Compatibility
Kubernetes 1.6.5+ on ESXi 6.0+
VCP: Kubernetes 1.6.5+ on ESXi 6.0+
CNS: Kubernetes 1.14+
• Container Orch. Interconnect
Kubernetes CSI plugin
Kubernetes Volume Plugin
Kubernetes Volume Plugin
  VDI  
• VDI Compatibility
VMware Horizon
Citrix XenDesktop
VMware Horizon
Citrix XenDesktop
VMware Horizon
Citrix XenDesktop
• VDI Load Bearing
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
VMware: up to 260 virtual desktops/node
Citrix: up to 220 virtual desktops/node
VMware: up to 200 virtual desktops/node
Citrix: up to 90 virtual desktops/node
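The per-node densities above can be turned into a rough cluster-level estimate. A hypothetical sizing helper (the N+1 failover headroom rule and the 8-node example are illustrative assumptions, not figures from the comparison):

```python
def cluster_desktops(nodes: int, desktops_per_node: int, spare_nodes: int = 1) -> int:
    """Rough VDI capacity estimate: reserve spare_nodes of headroom for
    host failures (a common N+1 sizing assumption) and let the remaining
    nodes carry desktops at the vendor-stated per-node density."""
    usable_nodes = max(nodes - spare_nodes, 0)
    return usable_nodes * desktops_per_node

# Example: 8-node all-flash cluster at 200 desktops/node (the VMware Horizon figure above)
print(cluster_desktops(8, 200))  # 1400 desktops with one node of failover headroom
```

Real sizing also depends on CPU, memory, and IOPS per desktop; vendor per-node numbers are best-case figures from reference architectures.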
Server Support
  Server/Node  
• Hardware Vendor Choice
Many
Many
Many
• Models
Many
Many
Many
• Density
1, 2 or 4 nodes per chassis
1, 2 or 4 nodes per chassis
1, 2 or 4 nodes per chassis
• Mixing Allowed
Yes
Yes
Yes
  Components  
• CPU Config
Flexible
Flexible
Flexible
• Memory Config
Flexible
Flexible
• Storage Config
Flexible
Flexible
Flexible: number of disks + capacity
• Network Config
Flexible
Flexible
Flexible
• GPU Config
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
  Scaling  
• Scale-up
CPU
Memory
Storage
GPU
CPU
Memory
Storage
GPU
CPU
Memory
Storage
GPU
• Scale-out
Storage+Compute
Compute-only
Storage-only
Storage+Compute
Compute-only
Storage-only
Compute+Storage
Compute-only (vSAN VMkernel)
• Scalability
1-64 nodes in 1-node increments
2-64 nodes in 1-node increments
2-64 nodes in 1-node increments
• Small-scale (ROBO)
2 Node minimum
1 Node minimum
2 Node minimum
Storage Support
  General  
• Layout
Block Storage Pool
Block Storage Pool
Object Storage File System (OSFS)
• Data Locality
Partial
Partial
Partial
• Storage Type(s)
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
Direct-attached (Raw)
SAN or NAS
Direct-attached (Raw)
Remote vSAN datastores (HCI Mesh)
• Composition
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
Magnetic-only
Hybrid (Flash+Magnetic)
All-Flash
Hybrid (Flash+Magnetic)
All-Flash
• Hypervisor OS Layer
SD, USB, DOM, SSD/HDD
SD, USB, DOM, SSD/HDD
SD, USB, DOM, HDD or SSD
  Memory  
• Memory Layer
DRAM
DRAM
DRAM
• Memory Purpose
Read/Write Cache
Read/Write Cache
• Memory Capacity
Up to 8 TB
Configurable
Non-configurable
  Flash  
• Flash Layer
SSD, PCIe, UltraDIMM, NVMe
SSD, PCIe, UltraDIMM, NVMe
• Flash Purpose
Persistent Storage
Read/Write Cache (hybrid)
Persistent storage (all-flash)
Hybrid: Read/Write Cache
All-Flash: Write Cache + Storage Tier
• Flash Capacity
No limit, up to 1 PB per device
No limitations
Hybrid: 1-5 Flash devices per node (1 per disk group)
All-Flash: 40 Flash devices per node (8 per disk group, 1 for cache and 7 for capacity)
  Magnetic  
• Magnetic Layer
SAS or SATA
Hybrid: SAS or SATA
• Magnetic Purpose
• Magnetic Capacity
No limit, up to 1 PB (per device)
1-35 SAS/SATA HDDs per host/node
Data Availability
  Reads/Writes  
• Persistent Write Buffer
DRAM (mirrored)
DRAM (mirrored)
Flash Layer (SSD, NVMe)
Flash Layer (SSD, PCIe, NVMe)
• Disk Failure Protection
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
1-2 Replicas (2N-3N)
+ Hardware RAID (1, 5, 10)
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N), Host Pinning (1N)
All-Flash: Erasure Coding (RAID5-6)
• Node Failure Protection
2-way and 3-way Mirroring (RAID-1)
1-2 Replicas (2N-3N)
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N), Host Pinning (1N)
All-Flash: Erasure Coding (RAID5-6)
• Block Failure Protection
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Not relevant (usually 1-node appliances)
Failure Domains
• Rack Failure Protection
Manual configuration
Failure Domains
• Protection Capacity Overhead
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
Replica (2N) + RAID1/10: 200%
Replica (2N) + RAID5: 125-133%
Replica (3N) + RAID1/10: 300%
Replica (3N) + RAID5: 225-233%
Host Pinning (1N): Dependent on # of VMs
Replicas (2N): 100%
Replicas (3N): 200%
Erasure Coding (RAID5): 33%
Erasure Coding (RAID6): 50%
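The percentages above follow directly from replica counts and erasure-coding stripe widths. A minimal sketch (assuming RAID5 is laid out as 3 data + 1 parity and RAID6 as 4 data + 2 parity, which reproduces the 33% and 50% figures listed):

```python
def mirror_overhead(copies: int) -> float:
    """Capacity overhead of N-way replication: extra copies relative to
    usable data. 2N (two copies) -> 100%, 3N (three copies) -> 200%."""
    return float(copies - 1)

def erasure_overhead(data_blocks: int, parity_blocks: int) -> float:
    """Capacity overhead of a data+parity stripe: parity relative to data.
    RAID5 as 3+1 -> ~33%, RAID6 as 4+2 -> 50%."""
    return parity_blocks / data_blocks

print(f"Replicas (2N): {mirror_overhead(2):.0%}")      # 100%
print(f"Replicas (3N): {mirror_overhead(3):.0%}")      # 200%
print(f"RAID5 (3+1):   {erasure_overhead(3, 1):.0%}")  # 33%
print(f"RAID6 (4+2):   {erasure_overhead(4, 2):.0%}")  # 50%
```

This also shows why erasure coding is the capacity-efficient option: protecting against one failure costs 33% extra instead of the 100% extra a second full copy requires, at the price of parity computation on writes.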
• Data Corruption Detection
N/A (hardware dependent)
N/A (hardware dependent)
Read integrity checks
Disk scrubbing (software)
  Points-in-Time  
• Snapshot Type
N/A
Built-in (native)
• Snapshot Scope
Local + Remote
N/A
• Snapshot Frequency
1 Minute
N/A
GUI: 1 hour
• Snapshot Granularity
Per VM (vVols) or Volume
N/A
• Backup Type
Built-in (native)
External
External (vSAN Certified)
• Backup Scope
Local or Remote
N/A
N/A
• Backup Frequency
Continuously
N/A
N/A
• Backup Consistency
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
N/A
N/A
• Restore Granularity
Entire VM or Volume
N/A
N/A
• Restore Ease-of-use