SDS and HCI comparison & reviews

Summary
Rank: SANsymphony 8th | HPE SimpliVity 380 5th | StarWind Virtual SAN 8th
Analysis by Herman Rutten
General
Legend:
  • Fully Supported
  • Limitation
  • Not Supported
  • Information Only
Pros
SANsymphony:
  • + Extensive platform support
  • + Extensive data protection capabilities
  • + Flexible deployment options
HPE SimpliVity 380:
  • + Extensive data protection integration
  • + Policy-based management
  • + Fast streamlined deployment
StarWind Virtual SAN:
  • + Flexible architecture
  • + Extensive platform support
  • + Several Microsoft integration points
Cons
SANsymphony:
  • - No native data integrity verification
  • - Dedup/compr not performance optimized
  • - Disk/node failure protection not capacity optimized
HPE SimpliVity 380:
  • - Single server hardware support
  • - No bare-metal support
  • - No native file services
StarWind Virtual SAN:
  • - Minimal data protection capabilities
  • - No Quality-of-Service mechanisms
  • - No native encryption capabilities
  Content  
  •  
Content Creator: Herman Rutten
  Assessment  
  •  
Overview
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
Name: HPE SimpliVity 380
Type: Hardware+Software (HCI)
Development Start: 2009
First Product Release: 2013
Name: StarWind Virtual SAN
Type: SDS
Development Start: 2003
First Product Release: 2011
  •  
Maturity
GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
.
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
.
SSY 3.0: 1999
GA Release Dates:
OmniStack 4.0.1 U1: dec 2020
OmniStack 4.0.1: apr 2020
OmniStack 4.0.0: jan 2020
OmniStack 3.7.10: sep 2019
OmniStack 3.7.9: jun 2019
OmniStack 3.7.8: mar 2019
OmniStack 3.7.7: dec 2018
OmniStack 3.7.6 U1: oct 2018
OmniStack 3.7.6: sep 2018
OmniStack 3.7.5: jul 2018
OmniStack 3.7.4: may 2018
OmniStack 3.7.3: mar 2018
OmniStack 3.7.2: dec 2017
OmniStack 3.7.1: oct 2017
OmniStack 3.7.0: jun 2017
OmniStack 3.6.2: mar 2017
OmniStack 3.6.1: jan 2017
OmniStack 3.5.3: nov 2016
OmniStack 3.5.2: jul 2016
OmniStack 3.5.1: may 2016
OmniStack 3.0.7: aug 2015
OmniStack 2.1.0: jan 2014
OmniStack 1.1.0: aug 2013
StarWind VSAN for vSphere Release Dates:
VSAN build 13170: oct 2019
VSAN build 12859: feb 2019
VSAN build 12658: dec 2018
VSAN build 12533: sep 2018

StarWind VSAN for Hyper-V Release Dates:
VSAN build 13279: oct 2019
VSAN build 13182: aug 2019
VSAN build 12767: feb 2019
VSAN build 12658: nov 2018
VSAN build 12585: oct 2018
VSAN build 12393: aug 2018
VSAN build 12166: may 2018
VSAN build 12146: apr 2018
VSAN build 11818: dec 2017
VSAN build 11456: aug 2017
VSAN build 11404: jul 2017
VSAN build 11156: may 2017
VSAN build 11071: may 2017
VSAN build 10927: apr 2017
VSAN build 10914: apr 2017
VSAN build 10833: apr 2017
VSAN build 10811: mar 2017
VSAN build 10799: mar 2017
VSAN build 10695: feb 2017
VSAN build 10547: jan 2017
VSAN build 9996: aug 2016
VSAN build 9980: aug 2016
VSAN build 9781: jun 2016
VSAN build 9611: jun 2016
VSAN build 9052: may 2016
VSAN build 8730: nov 2015
VSAN build 8716: nov 2015
VSAN build 8198: jun 2015
VSAN build 7929: apr 2015
VSAN build 7774: feb 2015
VSAN build 7509: dec 2014
VSAN build 7471: dec 2014
VSAN build 7354: nov 2014
VSAN build 7145
VSAN build 6884
  Pricing  
  •  
Hardware Pricing Model
N/A
N/A
  •  
Software Pricing Model
Capacity based (per TB)
Per Node (all-inclusive)
Add-ons: RapidDR (per VM)
Hyper-V: Per Node + Per TB
vSphere: Per Node (storage capacity included)
  •  
Support Pricing Model
Capacity based (per TB)
Per Node
Per Node
Design & Deploy
  Design  
  •  
Consolidation Scope
Storage
Data Protection
Management
Automation&Orchestration
Compute
Storage
Data Protection (full)
Management
Automation&Orchestration (DR)
Storage
Management
  •  
Network Topology
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
1, 10, 25 GbE
1, 10, 25, 40, 100 GbE
  •  
Overall Design Complexity
Medium
Low
Medium
  •  
External Performance Validation
SPC (jun 2016)
ESG Lab (jan 2016)
ESG Lab (jul 2017)
Login VSI (dec 2018; jun 2017)
N/A
  •  
Evaluation Methods
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
Cloud Technology Showcase (CTS)
Proof-of-Concept (PoC)
Community Edition (forever)
Trial (up to 30 days)
Proof-of-Concept
  Deploy  
  •  
Deployment Architecture
Single-Layer
Dual-Layer
Single-Layer
Single-Layer
Dual-Layer (secondary)
  •  
Deployment Method
BYOS (some automation)
Turnkey (very fast; highly automated)
BYOS (some automation)
Workload Support
  Virtualization  
  •  
Hypervisor Deployment
Virtual Storage Controller
Kernel (Optional for Hyper-V)
Virtual Storage Controller
Virtual Storage Controller (vSphere)
User-Space (Hyper-V)
  •  
Hypervisor Compatibility
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
VMware vSphere ESXi 6.5U2-6.7U3
Microsoft Hyper-V 2016 UR6-UR8
VMware vSphere ESXi 6.0-6.7
Microsoft Hyper-V 2012-2019
  •  
Hypervisor Interconnect
iSCSI
FC
NFS
SMB
iSCSI
NFS
SMB3
  Bare Metal  
  •  
Bare Metal Compatibility
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
N/A
Microsoft Windows Server 2012/2012R2/2016/2019
  •  
Bare Metal Interconnect
N/A
  Containers  
  •  
Container Integration Type
Built-in (native)
N/A
N/A
  •  
Container Platform Compatibility
Docker CE/EE 18.03+
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
  •  
Container Platform Interconnect
Docker Volume plugin (certified)
Docker Volume Plugin (certified) + VMware VIB
Docker Volume Plugin (certified) + VMware VIB
  •  
Container Host Compatibility
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
Virtualized container hosts on VMware vSphere hypervisor
Virtualized container hosts on VMware vSphere hypervisor
  •  
Container Host OS Compatibility
Linux
Linux
Windows 10 or 2016
Linux
Windows 10 or 2016
  •  
Container Orch. Compatibility
Kubernetes 1.6.5+ on ESXi 6.0+
Kubernetes 1.6.5+ on ESXi 6.0+
  •  
Container Orch. Interconnect
Kubernetes CSI plugin
Kubernetes Volume Plugin
Kubernetes Volume Plugin
  VDI  
  •  
VDI Compatibility
VMware Horizon
Citrix XenDesktop
VMware Horizon
Citrix XenDesktop (certified)
Workspot VDI
VMware Horizon
Citrix XenDesktop
  •  
VDI Load Bearing
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
VMware: up to 190 virtual desktops/node
Citrix: up to 190 virtual desktops/node
Workspot: up to 150 virtual desktops/node
VMware: up to 260 virtual desktops/node
Citrix: up to 220 virtual desktops/node
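The desktops-per-node figures above are vendor-quoted maxima, so any sizing derived from them is only a rough starting point. As an illustration of the arithmetic (the 1000-desktop target and the N+1 spare node are assumptions for the example, not vendor guidance; the 260 desktops/node value is the VMware Horizon figure quoted above):

```python
import math

def nodes_needed(desktops, desktops_per_node, ha_spares=1):
    """Whole nodes required for a desktop count, plus spare node(s)
    so the cluster can absorb a node failure at full load (N+1)."""
    return math.ceil(desktops / desktops_per_node) + ha_spares

# 1000 Horizon desktops at the quoted 260 desktops/node maximum:
print(nodes_needed(1000, 260))  # 5 (4 nodes for the load + 1 HA spare)
```

In practice the per-node density also depends on desktop profile (vCPU, RAM, IOPS), so real designs validate with a load test (e.g. Login VSI, as referenced earlier) rather than the quoted maximum.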
Server Support
  Server/Node  
  •  
Hardware Vendor Choice
Many
HPE
Many
  •  
Models
Many
7 models
Many
  •  
Density
1, 2 or 4 nodes per chassis
1 node per chassis
1, 2 or 4 nodes per chassis
  •  
Mixing Allowed
Yes
Partial
Yes
  Components  
  •  
CPU Config
Flexible
Flexible
Flexible
  •  
Memory Config
Flexible
Flexible
  •  
Storage Config
Flexible
Fixed: number of disks + capacity
Flexible
  •  
Network Config
Flexible
Flexible: additional 1, 10, 25 Gbps
Flexible
  •  
GPU Config
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
NVIDIA Tesla (vSphere only)
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
  Scaling  
  •  
Scale-up
CPU
Memory
Storage
GPU
CPU
Memory
Storage
Network
GPU
CPU
Memory
Storage
GPU
  •  
Scale-out
Storage+Compute
Compute-only
Storage-only
Compute+Storage
Compute-only (NFS; vSphere)
Storage+Compute
Compute-only
Storage-only
  •  
Scalability
1-64 nodes in 1-node increments
vSphere: 1-16 storage nodes (cluster); 2-8 storage nodes (stretched cluster); 1-96 storage nodes (Federation) in 1-node increments
Hyper-V: 4 storage nodes (cluster); 2-16 storage nodes (Federation)
2-64 nodes in 1-node increments
  •  
Small-scale (ROBO)
2 Node minimum
1 Node minimum (2 Nodes for local HA)
1 Node minimum
Storage Support
  General  
  •  
Layout
Block Storage Pool
Parallel File System
on top of Object Store
Block Storage Pool
  •  
Data Locality
Partial
Full
Partial
  •  
Storage Type(s)
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
Direct-attached (RAID)
Direct-attached (Raw)
SAN or NAS
  •  
Composition
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
Hybrid (Flash+Magnetic)
All-Flash (SSD-only)
Magnetic-only
Hybrid (Flash+Magnetic)
All-Flash
  •  
Hypervisor OS Layer
SD, USB, DOM, SSD/HDD
SSD
SD, USB, DOM, SSD/HDD
  Memory  
  •  
Memory Layer
DRAM
NVRAM (PCIe card)
DRAM (VSC)
DRAM
  •  
Memory Purpose
Read/Write Cache
NVRAM (PCIe): Write Buffer
DRAM (VSC): Read Cache
Read/Write Cache
  •  
Memory Capacity
Up to 8 TB
NVRAM (PCIe): Unknown
DRAM (VSC): 16-48GB for Read Cache
Configurable
  Flash  
  •  
Flash Layer
SSD, PCIe, UltraDIMM, NVMe
  •  
Flash Purpose
Persistent Storage
All-Flash: Metadata + Persistent Storage Tier
Read/Write Cache (hybrid)
Persistent storage (all-flash)
  •  
Flash Capacity
No limit, up to 1 PB per device
All-Flash: 5-12 SSDs per node
No limitations
  Magnetic  
  •  
Magnetic Layer
SAS or SATA
Hybrid: SATA
  •  
Magnetic Purpose
Persistent Storage
  •  
Magnetic Capacity
No limit, up to 1 PB (per device)
8 SATA HDDs per host/node
Data Availability
  Reads/Writes  
  •  
Persistent Write Buffer
DRAM (mirrored)
NVRAM (PCIe card)
DRAM (mirrored)
Flash Layer (SSD, NVMe)
  •  
Disk Failure Protection
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
1 Replica (2N)
+ Hardware RAID (5, 6 or 60)
1-2 Replicas (2N-3N)
+ Hardware RAID (1, 5, 10)
  •  
Node Failure Protection
2-way and 3-way Mirroring (RAID-1)
1 Replica (2N)
+ Hardware RAID (5, 6 or 60)
1-2 Replicas (2N-3N)
  •  
Block Failure Protection
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Not relevant (1-node chassis only)
Not relevant (usually 1-node appliances)
  •  
Rack Failure Protection
Manual configuration
Group Placement
  •  
Protection Capacity Overhead
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
Replica (2N) + RAID5: 125-133%
Replica (2N) + RAID6: 120-133%
Replica (2N) + RAID60: 125-140%
Replica (2N) + RAID1/10: 200%
Replica (2N) + RAID5: 125-133%
Replica (3N) + RAID1/10: 300%
Replica (3N) + RAID5: 225-233%
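The overhead percentages above follow directly from the replica count and the RAID geometry underneath it. A minimal sketch of that arithmetic (the 7- and 9-disk RAID-5 group sizes are assumptions chosen to reproduce the quoted 125-133% range, not documented vendor configurations):

```python
from fractions import Fraction

def capacity_overhead(replicas, raid5_group=None):
    """Protection overhead in percent: raw capacity consumed per unit
    of usable capacity, minus the usable unit itself."""
    raw_per_usable = Fraction(replicas)
    if raid5_group is not None:
        # RAID-5 dedicates one disk of every group to parity.
        raw_per_usable *= Fraction(raid5_group, raid5_group - 1)
    return float((raw_per_usable - 1) * 100)

print(capacity_overhead(2))            # 100.0 -> Mirroring (2N): 100%
print(capacity_overhead(3))            # 200.0 -> Mirroring (3N): 200%
print(round(capacity_overhead(2, 9)))  # 125   -> Replica (2N) + RAID5, 9-disk groups
print(round(capacity_overhead(2, 7)))  # 133   -> Replica (2N) + RAID5, 7-disk groups
```

The same pattern extends to the other rows: a 3N replica multiplies the RAID overhead by three copies instead of two, which is why the 3N + RAID5 row lands at 225-233%.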
  •  
Data Corruption Detection
N/A (hardware dependent)
Read integrity checks (CLI)
Disk scrubbing (software)
N/A (hardware dependent)
  Points-in-Time  
  •  
Snapshot Type
Built-in (native)
N/A
  •  
Snapshot Scope
Local + Remote
Local + Remote
N/A
  •  
Snapshot Frequency
1 Minute
GUI: 10 minutes (Policy-based)
CLI: 1 minute
N/A
  •  
Snapshot Granularity
Per VM (Vvols) or Volume
N/A
  •  
Backup Type
Built-in (native)
Built-in (native)
External
  •  
Backup Scope
Local or Remote
Locally
To other SimpliVity sites
To Service Providers
N/A
  •  
Backup Frequency
Continuously
GUI: 10 minutes (Policy-based)
CLI: 1 minute
N/A
  •  
Backup Consistency
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
vSphere: File System Consistent (Windows), Application Consistent (MS Apps on Windows)
Hyper-V: File System Consistent (Windows)
N/A
  •  
Restore Granularity
Entire VM or Volume
vSphere: Entire VM or Single File
Hyper-V: Entire VM
N/A
  •  
Restore Ease-of-use
Entire VM or Volume: GUI
Single File: Multi-step
Entire VM: GUI, CLI and API
Single File: GUI
N/A
  Disaster Recovery  
  •  
Remote Replication Type
Built-in (native)
Built-in (native)
Built-in (native; stretched clusters only)
External
  •  
Remote Replication Scope
To remote sites
To MS Azure Cloud
VR: To remote sites, To VMware clouds
HR: To remote sites, to Microsoft Azure (not part of Windows Server 2019)
  •  
Remote Replication Cloud Function
Data repository
N/A
VR: DR-site (VMware Clouds)
HR: DR-site (Azure)
  •  
Remote Replication Topologies
Single-site and multi-site
Single-site and multi-site
VR: Single-site and multi-site
HR: Single-site and chained
  •  
Remote Replication Frequency
Continuous (near-synchronous)
GUI: 10 minutes (Asynchronous)
CLI: 1 minute (Asynchronous)
Continuous (Stretched Cluster)
VR: 5 minutes (Asynchronous)
HR: 30 seconds (Asynchronous)
SW: Continuous (Stretched Cluster)
  •  
Remote Replication Granularity
VM or Volume
VR: VM
HR: VM
  •  
Consistency Groups
Yes
No
  •  
DR Orchestration
VMware SRM (certified)
RapidDR (native; VMware only)
N/A
  •  
Stretched Cluster (SC)
VMware vSphere: Yes (certified)
VMware vSphere: Yes (certified)
Microsoft Hyper-V: No
vSphere: Yes
Hyper-V: Yes
  •  
SC Configuration
2+ sites = two or more active sites, 0, 1 or more tie-breakers
3-sites: two active sites + tie-breaker in 3rd site (optional)
vSphere: 2+ sites = minimum of two active sites + optional tie-breaker in 3rd site
Hyper-V: 2+ sites = minimum of two active sites + optional tie-breaker in 3rd site
  •  
SC Distance
<=5ms RTT (targeted, not required)
<=5ms RTT
  •  
SC Scaling
<=32 hosts at each active site (per cluster)
<=8 hosts at each active site
No set maximum number of nodes
  •  
SC Data Redundancy
Replicas: 1N-2N at each active site
Replicas: 1N at each active site
+ Hardware RAID (5, 6 or 60)
Replicas: 0-3 Replicas at each active site
Data Services
  Efficiency  
  •  
Dedup/Compr. Engine
Software (integration)
Hardware (PCIe)
vSphere: Software (native)
Hyper-V: Software (integration)
  •  
Dedup/Compr. Function
Efficiency (space savings)
Efficiency and Performance
Efficiency (space savings)
  •  
Dedup/Compr. Process
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
Deduplication/Compression: Post-Processing (post process)
Deduplication: Inline (on-ack)
Compression: Inline (on-ack)
vSphere: Inline
Hyper-V: Post-Processing
  •  
Dedup/Compr. Type
Optional
Always-on
Optional
  •  
Dedup/Compr. Scope
Persistent data layer
All data (memory-, flash- and persistent data layers)
Persistent data layer
  •  
Dedup/Compr. Radius
Pool (post-processing deduplication domain)
Node (inline deduplication domain)
Federation
Volume
  •  
Dedup/Compr. Granularity
4-128 KB variable block size (inline)
32-128 KB variable block size (post-processing)
4-8 KB variable block size
vSphere: 4KB fixed block size
Hyper-V: 32-128 KB variable block size
  •  
Dedup/Compr. Guarantee
N/A
90% (10:1) capacity savings across storage and backup combined
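Note how the two forms of the guarantee relate: an R:1 reduction ratio means stored data shrinks to 1/R of its logical size, i.e. savings of 1 − 1/R. A quick check of the quoted 10:1 figure (the 2:1 line is just an extra illustrative data point):

```python
def savings_pct(reduction_ratio):
    # Space savings implied by an R:1 data reduction ratio.
    return (1 - 1 / reduction_ratio) * 100

print(savings_pct(10))  # 90.0 -> matches the "90% (10:1)" guarantee wording
print(savings_pct(2))   # 50.0 -> a 2:1 ratio halves the footprint
```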
  •  
Data Rebalancing
Full (optional)
Partial
Full (optional)
  •  
Data Tiering
Yes
N/A
Partial (integration; optional)
  Performance  
  •  
Task Offloading
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; Space Reclamation (T10 SCSI UNMAP)
vSphere: VMware VAAI-NAS (Limited)
GUI integrated tasks/commands
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; UNMAP/TRIM
  •  
QoS Type
IOPs and/or MBps Limits
N/A
N/A
  •  
QoS Granularity
Virtual Disk Groups and/or Host Groups
N/A
N/A
  •  
Flash Pinning
Per VM/Virtual Disk/Volume
Not relevant (All-Flash only)
  Security  
  •  
Data Encryption Type
Built-in (native)
N/A
  •  
Data Encryption Options
Hardware: Self-encrypting drives (SEDs)
Software: SANsymphony Encryption
Hardware: Smart array controller
Software: HyTrust DataControl (validated); Vormetric VTE (validated)
Hardware: Self-encrypting drives (SEDs)
Software: Microsoft BitLocker Drive Encryption; VMware Virtual machine Encryption
  •  
Data Encryption Scope
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: Data-at-rest
Software: Data-at-rest + Data-in-transit
Hardware: Data-at-rest
Software: Data-at-rest
  •  
Data Encryption Compliance
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (SANsymphony)
Hardware: FIPS 140-2 Level 1 (Smart array controller)
Software: FIPS 140-2 Level 1 (HyTrust, VTE)
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (BitLocker; VMware Virtual Machine Encryption)
  •  
Data Encryption Efficiency Impact
Hardware: No
Software: No
Hardware: No
Software: Yes
Hardware: No
Software: No (BitLocker); No (VMware Virtual Machine Encryption)
  Test/Dev  
  •  
Fast VM Cloning
Yes
Yes
No
  Portability  
  •  
Hypervisor Migration
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
  File Services  
  •  
Fileserver Type
Built-in (native)
N/A
vSphere: N/A
Hyper-V: Built-in (native)
  •  
Fileserver Compatibility
Windows clients
Linux clients
N/A
Windows clients
Linux clients
  •  
Fileserver Interconnect
SMB
NFS
N/A
SMB
NFS
  •  
Fileserver Quotas
Share Quotas, User Quotas
N/A
Share Quotas, User Quotas
  •  
Fileserver Analytics
Partial
N/A
Partial
  Object Services  
  •  
Object Storage Type
N/A
N/A
N/A
  •  
Object Storage Protection
N/A
N/A
N/A
  •  
Object Storage LT Retention
N/A
N/A
N/A
Management
  Interfaces  
  •  
GUI Functionality
Centralized
Centralized
Centralized
  •  
GUI Scope
Single-site and Multi-site
Single-site and Multi-site
Single-site and Multi-site
  •  
GUI Perf. Monitoring
Advanced
Advanced
Basic
  •  
GUI Integration
VMware vSphere Web Client (plugin)
VMware vCenter plug-in for SANsymphony
SCVMM DataCore Storage Management Provider
Microsoft System Center Monitoring Pack
VMware vSphere HTML5 Client (plugin)
SCVMM 2016/2019 (add-in)
vSphere: Web Client (plugin)
Hyper-V: SCVMM 2016 (add-in)
  Programmability  
  •  
Policies
Full
Partial (Protection)
Partial (Protection)
  •  
API/Scripting
REST-APIs
PowerShell
REST-APIs
XML-APIs
PowerShell (community supported)
CLI
REST-APIs (through Swordfish)
PowerShell
  •  
Integration
OpenStack
VMware vRealize Automation (vRA)
Cisco UCS Director
  •  
Self Service
Full
N/A
  Maintenance  
  •  
SW Composition
Unified
Unified
Partially Distributed
  •  
SW Upgrade Execution
Rolling Upgrade (1-by-1)
Rolling Upgrade (1-by-1)
Manual Upgrade (1-by-1)
  •  
FW Upgrade Execution
Hardware dependent
Rolling Upgrade (1-by-1)
Manual Upgrade (1-by-1)
  Support  
  •  
Single HW/SW Support
No
Yes
Yes (optional)
  •  
Call-Home Function
Partial (HW dependent)
Full
Yes
  •  
Predictive Analytics
Partial
Full
Partial
