SDS and HCI comparison & reviews

Summary
Products compared: DataCore SANsymphony (SDS), Nutanix Enterprise Cloud Platform "ECP" (HCI) and Pivot3 Acuity "AC" (HCI).
Overall ranking in this comparison: ECP 1st, SANsymphony 2nd, Acuity 4th.
Analysis by Herman Rutten.

General
Pros
SANsymphony:
  • + Extensive platform support
  • + Extensive data protection capabilities
  • + Flexible deployment options
ECP:
  • + Extensive platform support
  • + Native file and object services
  • + Manageability
Acuity:
  • + Extensive QoS capabilities
  • + Considerable data protection integration
  • + Built for performance
Cons
SANsymphony:
  • - No native data integrity verification
  • - Dedup/compression not performance optimized
  • - Disk/node failure protection not capacity optimized
ECP:
  • - Complex solution design
  • - No QoS
  • - Complex dedup/compression architecture
Acuity:
  • - Single hypervisor support
  • - No stretched clustering
  • - No native file services
  Assessment  
  •  
Overview
Name: SANsymphony
Vendor: DataCore
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999

Name: Enterprise Cloud Platform (ECP)
Vendor: Nutanix
Type: Hardware+Software (HCI)
Development Start: 2009
First Product Release: 2011

Name: Acuity (AC)
Vendor: Pivot3
Type: Hardware+Software (HCI)
Development Start: 2016
First Product Release: 2017
  •  
Maturity
SANsymphony GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
.
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
.
SSY 3.0: 1999
ECP GA Release Dates:
AOS 5.19: dec 2020
AOS 5.18: aug 2020
AOS 5.17: may 2020
AOS 5.16: jan 2020
AOS 5.11: aug 2019
AOS 5.10: nov 2018
AOS 5.9: oct 2018
AOS 5.8: jul 2018
AOS 5.6.1: jun 2018
AOS 5.6: apr 2018
AOS 5.5: dec 2017
AOS 5.1.2 / 5.2*: sep 2017
AOS 5.1.1.1: jul 2017
AOS 5.1: may 2017
AOS 5.0: dec 2016
AOS 4.7: jun 2016
AOS 4.6: feb 2016
AOS 4.5: oct 2015
NOS 4.1: jan 2015
NOS 4.0: apr 2014
NOS 3.5: aug 2013
NOS 3.0: dec 2012
Acuity GA Release Dates:
AC 10.6.1: feb 2019
AC 10.6: dec 2018
AC 10.5.1: oct 2018
AC 10.4: jun 2018
AC 2.3.3: mar 2018
AC 2.3.2: jan 2018
AC 2.2: oct 2017
AC 2.1.1: aug 2017
AC 2.1: apr 2017
  Pricing  
  •  
Hardware Pricing Model
SANsymphony: N/A
ECP: Per Node
Acuity: Per Node
  •  
Software Pricing Model
SANsymphony: Capacity based (per TB)
ECP: Per Core + Flash TiB (AOS); Per Node (AOS, Prism Pro/Ultimate); Per Concurrent User (VDI); Per VM (ROBO); Per TB (Files); Per VM (Calm); Per VM (Xi Leap)
Acuity: Per Node (all-inclusive)
  •  
Support Pricing Model
SANsymphony: Capacity based (per TB)
ECP: Per Node
Acuity: Per Node
Design & Deploy
  Design  
  •  
Consolidation Scope
SANsymphony: Storage; Data Protection; Management; Automation & Orchestration
ECP: Hypervisor; Compute; Storage; Data Protection (limited); Management; Automation & Orchestration
Acuity: Compute; Storage; Data Protection; Management; Automation & Orchestration
  •  
Network Topology
SANsymphony: 1, 10, 25, 40, 100 GbE (iSCSI); 8, 16, 32, 64 Gbps (FC)
ECP: 1, 10, 25, 40 GbE
Acuity: 10 GbE (or 1 GbE)
  •  
Overall Design Complexity
SANsymphony: Medium
ECP: High
Acuity: Low
  •  
External Performance Validation
SANsymphony: SPC (Jun 2016); ESG Lab (Jan 2016)
ECP: Login VSI (May 2017); ESG Lab (Feb 2017; Sep 2020); SAP (Nov 2016)
Acuity: ESG Lab (Dec 2017)
  •  
Evaluation Methods
SANsymphony: Free Trial (30 days); Proof-of-Concept (PoC; up to 12 months)
ECP: Community Edition (forever); Hyperconverged Test Drive in GCP; Proof-of-Concept (PoC); Partner Driven Demo Environment; Xi Services Free Trial (60 days); Cloud Edition (forever)
Acuity: Online Labs; Proof-of-Concept (PoC); Partner Driven Demo Environment
  Deploy  
  •  
Deployment Architecture
SANsymphony: Single-Layer; Dual-Layer
ECP: Single-Layer (primary); Dual-Layer (secondary)
Acuity: Single-Layer (primary); Dual-Layer (secondary)
  •  
Deployment Method
SANsymphony: BYOS (some automation)
ECP: Turnkey (very fast; highly automated)
Acuity: Turnkey (very fast; highly automated)
Workload Support
  Virtualization  
  •  
Hypervisor Deployment
SANsymphony: Virtual Storage Controller; Kernel (optional for Hyper-V)
ECP: Virtual Storage Controller
Acuity: Virtual Storage Controller
  •  
Hypervisor Compatibility
SANsymphony: VMware vSphere ESXi 5.5-7.0U1; Microsoft Hyper-V 2012R2/2016/2019; Linux KVM; Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
ECP: VMware vSphere ESXi 6.0U1A-7.0U1; Microsoft Hyper-V 2012R2/2016/2019*; Microsoft CPS Standard; Nutanix Acropolis Hypervisor (AHV); Citrix XenServer 7.0.0-7.1.0CU2**
Acuity: VMware vSphere ESXi 6.5-6.7
  •  
Hypervisor Interconnect
SANsymphony: iSCSI; FC
ECP: NFS; SMB3; iSCSI
Acuity: iSCSI
  Bare Metal  
  •  
Bare Metal Compatibility
SANsymphony: Microsoft Windows Server 2012R2/2016/2019; Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3; SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1; Ubuntu Linux 16.04 LTS; CentOS 6.5/6.6/7.3; Oracle Solaris 10.0/11.1/11.2/11.3
ECP: Microsoft Windows Server 2008R2/2012R2/2016/2019; Red Hat Enterprise Linux (RHEL) 6.7/6.8/7.2; SLES 11/12; Oracle Linux 6.7/7.2; AIX 7.1/7.2 on POWER; Oracle Solaris 11.3 on SPARC; ESXi 5.5/6 with VMFS (very specific use cases)
Acuity: Many
  •  
Bare Metal Interconnect
SANsymphony: iSCSI
ECP: iSCSI
  Containers  
  •  
Container Integration Type
SANsymphony: Built-in (native)
ECP: Built-in (native)
Acuity: N/A
  •  
Container Platform Compatibility
SANsymphony: Docker CE/EE 18.03+
ECP: Docker EE 1.13+; Node OS CentOS 7.5; Kubernetes 1.11-1.14
Acuity: Docker CE 17.06.1+ for Linux on ESXi 6.0+; Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
  •  
Container Platform Interconnect
SANsymphony: Docker Volume plugin (certified)
ECP: Docker Volume plugin (certified)
Acuity: Docker Volume plugin (certified) + VMware VIB
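All three take the same approach: the platform registers a named volume driver with the Docker engine, and container volumes are then provisioned against that driver. A minimal sketch of the flow using the Docker SDK for Python; the driver name "vendor/sds-driver" and the "size" option are placeholders, not any of these vendors' actual plugin identifiers:

```python
# Hypothetical illustration of provisioning through a Docker volume plugin.
# "vendor/sds-driver" and the "size" option are placeholders; the real
# driver name comes from the vendor's certified plugin.
import docker

client = docker.from_env()

volume = client.volumes.create(
    name="app-data",
    driver="vendor/sds-driver",     # placeholder plugin name
    driver_opts={"size": "10GiB"},  # plugin-specific option (assumed)
)
print(volume.name, volume.attrs["Driver"])
```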
  •  
Container Host Compatibility
SANsymphony: Virtualized container hosts on all supported hypervisors; Bare Metal container hosts
ECP: Virtualized container hosts on all supported hypervisors; Bare Metal container hosts
Acuity: Virtualized container hosts on VMware vSphere hypervisor
  •  
Container Host OS Compatibility
SANsymphony: Linux
ECP: CentOS 7; Red Hat Enterprise Linux (RHEL) 7.3; Ubuntu Linux 16.04.2
Acuity: Linux; Windows 10 or 2016
  •  
Container Orch. Compatibility
SANsymphony: Kubernetes
Acuity: Kubernetes 1.6.5+ on ESXi 6.0+
  •  
Container Orch. Interconnect
SANsymphony: Kubernetes CSI plugin
ECP: Kubernetes Volume plugin
Acuity: Kubernetes Volume plugin
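In Kubernetes the interconnect is consumed the same way regardless of plugin type: the vendor driver is surfaced as a StorageClass, and workloads claim capacity through a PersistentVolumeClaim. A minimal sketch with the official Kubernetes Python client; the StorageClass name "sds-block" is a placeholder installed by whichever vendor driver is in use:

```python
# Hypothetical illustration: claiming storage from a vendor CSI/volume
# plugin via a PersistentVolumeClaim. "sds-block" is a placeholder
# StorageClass name created by the vendor's driver installation.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="sds-block",  # placeholder
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)
```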
  VDI  
  •  
VDI Compatibility
SANsymphony: VMware Horizon; Citrix XenDesktop
ECP: VMware Horizon; Citrix XenDesktop (certified); Citrix Cloud (certified); Parallels RAS; Xi Frame
Acuity: VMware Horizon; Citrix XenDesktop
  •  
VDI Load Bearing
SANsymphony: VMware: 110 virtual desktops/node; Citrix: 110 virtual desktops/node
ECP: VMware: up to 170 virtual desktops/node; Citrix: up to 175 virtual desktops/node
Acuity: VMware: 272-317 virtual desktops/node; Citrix: 167 virtual desktops/node
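These per-node densities only translate into cluster capacity after reserving failover headroom. A rough N+1 calculation, using the table's figures; the 8-node cluster size is an assumed example, and real sizing depends on the actual desktop workload:

```python
# Rough VDI capacity estimate from the per-node figures above, holding
# one node in reserve (N+1) so a node failure does not displace desktops.
# The 8-node cluster size is an assumed example.
def usable_desktops(nodes: int, desktops_per_node: int, spares: int = 1) -> int:
    return (nodes - spares) * desktops_per_node

print(usable_desktops(8, 170))  # ECP at 170/node (VMware)    -> 1190
print(usable_desktops(8, 272))  # Acuity low end (VMware)     -> 1904
print(usable_desktops(8, 110))  # SANsymphony at 110/node     -> 770
```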
Server Support
  Server/Node  
  •  
Hardware Vendor Choice
SANsymphony: Many
ECP: Super Micro (Nutanix branded); Super Micro (source your own); Dell EMC (OEM); Lenovo (OEM); Fujitsu (OEM); HPE (OEM); IBM (OEM); Inspur (OEM); Cisco UCS (Select); Crystal (Rugged); Klas Telecom (Rugged); Many (CE only)
Acuity: Dell; Lenovo
  •  
Models
SANsymphony: Many
ECP: 5 Native Models (SX-1000, NX-1000, NX-3000, NX-5000, NX-8000); 15 Native Model sub-types
Acuity: 8 compute+storage models (X5-2000, X5-2500, X5-6000, X5-6500, X3-2000, X3-2500, X3-6000, X3-6500); 4 compute-only models (X3-2000, X3-2500, X3-6000, X3-6500); 4 storage-only models (X3-2000s, X3-6000s, X5-2000s, X5-6000s)
  •  
Density
SANsymphony: 1, 2 or 4 nodes per chassis
ECP: 1 (NX1000, NX3000, NX6000, NX8000), 2 (NX6000, NX8000), 4 (NX1000, NX3000), or 3-4 (SX1000) nodes per chassis
Acuity: 1 node per chassis
  •  
Mixing Allowed
SANsymphony: Yes
ECP: Yes
Acuity: Partial
  Components  
  •  
CPU Config
SANsymphony: Flexible
ECP: Flexible: up to 10 options
Acuity: Flexible: up to 7 options
  •  
Memory Config
ECP: Flexible: up to 10 options
Acuity: Flexible: up to 4 options
  •  
Storage Config
SANsymphony: Flexible
ECP: Flexible: capacity (up to 7 options per disk type); number of disks (Dell, Cisco)
Acuity: Fixed: number of disks (hybrid, most all-flash); Flexible: disk capacity
  •  
Network Config
SANsymphony: Flexible
ECP: Flexible: up to 4 add-on options
Acuity: Flexible: 2 or 3 options
  •  
GPU Config
SANsymphony: NVIDIA Tesla; AMD FirePro; Intel Iris Pro
ECP: NVIDIA Tesla (specific appliance models only)
Acuity: X5: NVIDIA Tesla; X3: N/A
  Scaling  
  •  
Scale-up
SANsymphony: CPU; Memory; Storage; GPU
ECP: Memory; Storage; GPU
Acuity: Memory; Network
  •  
Scale-out
SANsymphony: Compute+storage; Compute-only; Storage-only
ECP: Compute+storage; Compute-only (NFS; SMB3); Storage-only
Acuity: Compute+storage; Compute-only (iSCSI); Storage-only
  •  
Scalability
SANsymphony: 1-64 nodes in 1-node increments
ECP: 3-unlimited nodes in 1-node increments
Acuity: X5 Hybrid: 3-12 nodes; X5 All-Flash: 3-16 nodes; X3 Hybrid: 3-8 nodes; X3 All-Flash: 3-8 nodes (all in 1-node increments)
  •  
Small-scale (ROBO)
SANsymphony: 2 Node minimum
ECP: 3 Node minimum (data center); 1 or 2 Node minimum (ROBO); 1 Node minimum (backup)
Acuity: 3 Node minimum (data center); 1 Node minimum (ROBO)
Storage Support
  General  
  •  
Layout
SANsymphony: Block Storage Pool
ECP: Distributed File System (ADSF)
Acuity: Block Pool
  •  
Data Locality
SANsymphony: Partial
ECP: Full
Acuity: None
  •  
Storage Type(s)
SANsymphony: Direct-attached (Raw); Direct-attached (VoV); SAN or NAS
ECP: Direct-attached (Raw)
Acuity: Direct-attached (Raw)
  •  
Composition
SANsymphony: Magnetic-only; All-Flash; 3D XPoint; Hybrid (3D XPoint and/or Flash and/or Magnetic)
ECP: Hybrid (Flash+Magnetic); All-Flash
Acuity: Hybrid (Flash+Magnetic); All-Flash
  •  
Hypervisor OS Layer
SANsymphony: SD, USB, DOM, SSD/HDD
ECP: SuperMicro (G3, G4, G5): DOM; SuperMicro (G6): M.2 SSD; Dell: SD or SSD; Lenovo: DOM, SD or SSD; Cisco: SD or SSD
Acuity: USB, SSD, SD
  Memory  
  •  
Memory Layer
DRAM
  •  
Memory Purpose
Read/Write Cache
Read Cache and Write Buffer
  •  
Memory Capacity
SANsymphony: Up to 8 TB
ECP: Configurable
Acuity: Configurable
  Flash  
  •  
Flash Layer
SANsymphony: SSD, PCIe, UltraDIMM, NVMe
ECP: SSD, NVMe
  •  
Flash Purpose
SANsymphony: Persistent Storage
ECP: Read/Write Cache; Storage Tier
Acuity: NVMe PCIe: Read/Write Cache; SSD: Persistent Storage
  •  
Flash Capacity
SANsymphony: No limit, up to 1 PB per device
ECP: Hybrid: 1-4 SSDs per node; All-Flash: 3-24 SSDs per node; NVMe-Hybrid: 2-4 NVMe + 4-8 SSDs per node
Acuity All-Flash: X5-6500: 1x NVMe AIC/PCIe + 16x SSD; X3-6500: 1x NVMe U.2 + 8x SSD; X5-6000: 16x SSD; X3-6000: 8x SSD
Acuity Hybrid: X5-2500: 1x NVMe AIC/PCIe + 2x SSD; X3-2500: 1x NVMe U.2/PCIe + 1x SSD; X5-2000: 2x SSD; X3-2000: 1x SSD
  Magnetic  
  •  
Magnetic Layer
SANsymphony: SAS or SATA
ECP: Hybrid: SATA
Acuity: Hybrid: SATA
  •  
Magnetic Purpose
SANsymphony: Persistent Storage
ECP: Persistent Storage
Acuity: Persistent Storage
  •  
Magnetic Capacity
SANsymphony: No limit, up to 1 PB per device
ECP: 2-20 SATA HDDs per node
Acuity: X3 Hybrid: 12 SATA HDDs per host/node; X5 Hybrid: 8 SATA HDDs per host/node
Data Availability
  Reads/Writes  
  •  
Persistent Write Buffer
SANsymphony: DRAM (mirrored)
ECP: Flash Layer (SSD; NVMe)
Acuity: NVMe PCIe (mirrored)
  •  
Disk Failure Protection
SANsymphony: 2-way and 3-way Mirroring (RAID-1) + optional Hardware RAID
ECP: 1-2 Replicas (2N-3N) (primary); Erasure Coding (N+1/N+2) (secondary)
Acuity: Erasure Coding (EC1/EC3/EC5); Global Virtual Sparing
  •  
Node Failure Protection
SANsymphony: 2-way and 3-way Mirroring (RAID-1)
ECP: 1-2 Replicas (2N-3N) (primary); Erasure Coding (N+1/N+2) (secondary)
Acuity: Erasure Coding (N+1/N+3/N+5)
  •  
Block Failure Protection
SANsymphony: Not relevant (usually 1-node appliances); Manual configuration (optional)
ECP: Block Awareness (integrated)
Acuity: Not relevant (1-node chassis only)
  •  
Rack Failure Protection
SANsymphony: Manual configuration
ECP: Rack Fault Tolerance
  •  
Protection Capacity Overhead
SANsymphony: Mirroring (2N) (primary): 100%; Mirroring (3N) (primary): 200%; + Hardware RAID5/6 overhead (optional)
ECP: RF2 (2N) (primary): 100%; RF3 (3N) (primary): 200%; EC-X (N+1) (secondary): 20-50%; EC-X (N+2) (secondary): 50%
Acuity: EC1: 9%-51%; EC3: 25%-72%; EC5: 36%-92%
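The percentages above follow directly from the data layouts: N-way mirroring/replication stores N full copies, so overhead is (N-1) x 100%, while erasure coding with d data and p parity fragments adds p/d extra capacity. A small sketch of that arithmetic; the stripe widths are illustrative assumptions, since each vendor varies them with cluster size and policy:

```python
# Where the overhead figures above come from. Stripe widths below are
# illustrative; the vendors vary them with cluster size and policy.
def mirror_overhead(copies: int) -> float:
    """N full copies -> (N-1)*100% extra raw capacity."""
    return (copies - 1) * 100.0

def ec_overhead(data: int, parity: int) -> float:
    """d data + p parity fragments -> p/d extra raw capacity."""
    return parity / data * 100.0

print(mirror_overhead(2))  # 100.0 -> 2N mirroring / RF2
print(mirror_overhead(3))  # 200.0 -> 3N mirroring / RF3
print(ec_overhead(4, 1))   # 25.0  -> a 4+1 stripe (within EC-X N+1's 20-50%)
print(ec_overhead(4, 2))   # 50.0  -> a 4+2 stripe (EC-X N+2)
```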
  •  
Data Corruption Detection
SANsymphony: N/A (hardware dependent)
ECP: Read integrity checks; Disk scrubbing (software); Metadata verification (software)
Acuity: Predictive drive analysis (software); Disk scrubbing (software)
  Points-in-Time  
  •  
Snapshot Type
Built-in (native)
Built-in (native)
  •  
Snapshot Scope
Local + Remote
Local + Remote
  •  
Snapshot Frequency
SANsymphony: 1 Minute
ECP: GUI: 1-15 minutes (NearSync replication); 1 hour (async replication)
Acuity: GUI: 15 minutes (policy-based)
  •  
Snapshot Granularity
SANsymphony: Per VM (VVols) or Volume
Acuity: Per Volume
  •  
Backup Type
SANsymphony: Built-in (native)
ECP: Built-in (native)
Acuity: Built-in (native)
  •  
Backup Scope
SANsymphony: Local or Remote
ECP: To local single-node; To local and remote clusters; To remote cloud object stores (Amazon S3, Microsoft Azure)
Acuity: To local and remote clusters; To remote cloud object stores (Amazon S3)
  •  
Backup Frequency
SANsymphony: Continuously
ECP: NearSync to remote clusters: 1-15 minutes*; Async to remote clusters: 1 hour; AWS/Azure Cloud: 1 hour