SDS and HCI comparison & reviews

Summary
Rank: 8th / 8th / 1st
User Reviews: Not Enabled
Analysis: by Herman Rutten
General
  • Fully Supported
  • Limitation
  • Not Supported
  • Information Only
Pros
  • + Extensive platform support
  • + Extensive data protection capabilities
  • + Flexible deployment options
  • + Strong Cisco integration
  • + Fast streamlined deployment
  • + Strong container support
  • + Extensive platform support
  • + Native file and object services
  • + Manageability
Cons
  • - No native data integrity verification
  • - Dedup/compr not performance optimized
  • - Disk/node failure protection not capacity optimized
  • - Single server hardware support
  • - No bare-metal support
  • - Limited native data protection capabilities
  • - Complex solution design
  • - No QoS
  • - Complex dedup/compr architecture
  Content  
Content Creator
  Assessment  
Overview
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
NEW
Name: HyperFlex (HX)
Type: Hardware+Software (HCI)
Development Start: 2015
First Product Release: apr 2016
NEW
Name: Enterprise Cloud Platform (ECP)
Type: Hardware+Software (HCI)
Development Start: 2009
First Product Release: 2011
NEW
  •  
Maturity
GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
.
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
.
SSY 3.0: 1999
NEW
GA Release Dates:
HX 4.0: apr 2019
HX 3.5.2a: jan 2019
HX 3.5.1a: nov 2018
HX 3.5: oct 2018
HX 3.0: apr 2018
HX 2.6.1b: dec 2017
HX 2.6.1a: oct 2017
HX 2.5: jul 2017
HX 2.1: may 2017
HX 2.0: mar 2017
HX 1.8: sep 2016
HX 1.7.3: aug 2016
HX 1.7.1-14835: jun 2016
HX 1.7.1: apr 2016
NEW
GA Release Dates:
AOS 5.19: dec 2020
AOS 5.18: aug 2020
AOS 5.17: may 2020
AOS 5.16: jan 2020
AOS 5.11: aug 2019
AOS 5.10: nov 2018
AOS 5.9: oct 2018
AOS 5.8: jul 2018
AOS 5.6.1: jun 2018
AOS 5.6: apr 2018
AOS 5.5: dec 2017
AOS 5.1.2 / 5.2*: sep 2017
AOS 5.1.1.1: jul 2017
AOS 5.1: may 2017
AOS 5.0: dec 2016
AOS 4.7: jun 2016
AOS 4.6: feb 2016
AOS 4.5: oct 2015
NOS 4.1: jan 2015
NOS 4.0: apr 2014
NOS 3.5: aug 2013
NOS 3.0: dec 2012
NEW
  Pricing  
  •  
Hardware Pricing Model
N/A
Per Node
Bundle (ROBO)
Per Node
  •  
Software Pricing Model
Capacity based (per TB)
NEW
Per Node
Per Core + Flash TiB (AOS)
Per Node (AOS, Prism Pro/Ultimate)
Per Concurrent User (VDI)
Per VM (ROBO)
Per TB (Files)
Per VM (Calm)
Per VM (Xi Leap)
  •  
Support Pricing Model
Capacity based (per TB)
Per Node
Per Node
Design & Deploy
  Design  
  •  
Consolidation Scope
Storage
Data Protection
Management
Automation & Orchestration
Compute
Storage
Network
Management
Automation & Orchestration
Hypervisor
Compute
Storage
Data Protection (limited)
Management
Automation & Orchestration
  •  
Network Topology
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
1, 10, 40 GbE
1, 10, 25, 40 GbE
  •  
Overall Design Complexity
Medium
Low
High
  •  
External Performance Validation
SPC (Jun 2016)
ESG Lab (Jan 2016)
ESG Lab (jul 2018)
SAP (dec 2017)
ESG Lab (mar 2017)
Login VSI (may 2017)
ESG Lab (feb 2017; sep 2020)
SAP (nov 2016)
  •  
Evaluation Methods
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
Online Labs
Proof-of-Concept (PoC)
Community Edition (forever)
Hyperconverged Test Drive in GCP
Proof-of-Concept (PoC)
Partner Driven Demo Environment
Xi Services Free Trial (60-days)
  Deploy  
  •  
Deployment Architecture
Single-Layer
Dual-Layer
Single-Layer (primary)
Dual-Layer (secondary)
Single-Layer (primary)
Dual-Layer (secondary)
  •  
Deployment Method
BYOS (some automation)
Turnkey (very fast; highly automated)
Turnkey (very fast; highly automated)
Workload Support
  Virtualization  
  •  
Hypervisor Deployment
Virtual Storage Controller
Kernel (Optional for Hyper-V)
Virtual Storage Controller
Virtual Storage Controller
  •  
Hypervisor Compatibility
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
VMware vSphere ESXi 6.0U3/6.5U2/6.7U2
Microsoft Hyper-V 2016/2019
NEW
VMware vSphere ESXi 6.0U1A-7.0U1
Microsoft Hyper-V 2012R2/2016/2019*
Microsoft CPS Standard
Nutanix Acropolis Hypervisor (AHV)
Citrix XenServer 7.0.0-7.1.0CU2**
NEW
  •  
Hypervisor Interconnect
iSCSI
FC
NFS
SMB
NFS
SMB3
iSCSI
  Bare Metal  
  •  
Bare Metal Compatibility
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
N/A
Microsoft Windows Server 2008R2/2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.7/6.8/7.2
SLES 11/12
Oracle Linux 6.7/7.2
AIX 7.1/7.2 on POWER
Oracle Solaris 11.3 on SPARC
ESXi 5.5/6 with VMFS (very specific use-cases)
  •  
Bare Metal Interconnect
N/A
iSCSI
  Containers  
  •  
Container Integration Type
Built-in (native)
Built-in (native)
Built-in (native)
  •  
Container Platform Compatibility
Docker CE/EE 18.03+
Docker EE 1.13+
Docker EE 1.13+
Node OS CentOS 7.5
Kubernetes 1.11-1.14
  •  
Container Platform Interconnect
Docker Volume plugin (certified)
HX FlexVolume Driver
Docker Volume plugin (certified)
  •  
Container Host Compatibility
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
Virtualized container hosts on VMware vSphere hypervisor
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
  •  
Container Host OS Compatibility
Linux
Ubuntu Linux 16.04.3 LTS
CentOS 7
Red Hat Enterprise Linux (RHEL) 7.3
Ubuntu Linux 16.04.2
  •  
Container Orch. Compatibility
Kubernetes
Kubernetes
  •  
Container Orch. Interconnect
Kubernetes CSI plugin
HX-CSI Plugin
NEW
Kubernetes Volume plugin
  VDI  
  •  
VDI Compatibility
VMware Horizon
Citrix XenDesktop
VMware Horizon
Citrix XenDesktop
VMware Horizon
Citrix XenDesktop (certified)
Citrix Cloud (certified)
Parallels RAS
Xi Frame
  •  
VDI Load Bearing
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
VMware: up to 137 virtual desktops/node
Citrix: up to 125 virtual desktops/node
VMware: up to 170 virtual desktops/node
Citrix: up to 175 virtual desktops/node
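The per-node densities quoted above translate directly into cluster sizing. A minimal sketch of the arithmetic (Python; the densities are the VMware Horizon figures from this row, while the 1,000-desktop workload and the N+1 spare-node policy are our own illustrative assumptions):

```python
import math

def nodes_needed(total_desktops: int, desktops_per_node: int, spare_nodes: int = 1) -> int:
    """Nodes required for a desktop count, plus spare capacity
    so the cluster survives a node failure (N+1 by default)."""
    return math.ceil(total_desktops / desktops_per_node) + spare_nodes

# Example: 1,000 VMware Horizon desktops at the quoted per-node densities
for platform, density in [("SANsymphony", 110), ("HyperFlex", 137), ("Nutanix", 170)]:
    print(f"{platform}: {nodes_needed(1000, density)} nodes")
```

Note that vendor density figures are benchmark-derived maxima; real deployments are usually sized below them.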
Server Support
  Server/Node  
  •  
Hardware Vendor Choice
Many
Cisco
NEW
Super Micro (Nutanix branded)
Super Micro (source your own)
Dell EMC (OEM)
Lenovo (OEM)
Fujitsu (OEM)
HPE (OEM)
IBM (OEM)
Inspur (OEM)
Cisco UCS (Select)
Crystal (Rugged)
Klas Telecom (Rugged)
Many (CE only)
  •  
Models
Many
9 storage models:
HX220x Edge M5, HX220c M4/M5, HX240c M4/M5, HXAF220c M4/M5, HXAF240c M4/M5
8 compute-only models: B2x0 M4/M5, B4x0 M4/M5, C2x0 M4/M5, C4x0 M4/M5, C480 ML
NEW
5 Native Models (SX-1000, NX-1000, NX-3000, NX-5000, NX-8000)
15 Native Model sub-types
  •  
Density
1, 2 or 4 nodes per chassis
HX2x0/HXAF2x0/HXAN2x0: 1 node per chassis
B200: up to 8 nodes per chassis
C2x0: 1 node per chassis
Native: 1 node per chassis (NX1000, NX3000, NX6000, NX8000); 2 (NX6000, NX8000); 4 (NX1000, NX3000); 3 or 4 (SX1000)
  •  
Mixing Allowed
Yes
Partial
Yes
  Components  
  •  
CPU Config
Flexible
Flexible
NEW
Flexible: up to 10 options
  •  
Memory Config
Flexible
NEW
Flexible: up to 10 options
  •  
Storage Config
Flexible
HX220c/HXAF220c/HXAN220c: Fixed number of disks
HX240c/HXAF240c: Flexible (number of disks)
NEW
Flexible: capacity (up to 7 options per disk type); number of disks (Dell, Cisco)
Fixed: Number of disks (hybrid, most all-flash)
  •  
Network Config
Flexible
Flexible: M5:10/40GbE; M5 Edge:1/10GbE; FC (optional)
Flexible: up to 4 add-on options
  •  
GPU Config
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
NVIDIA Tesla (HX240c only)
AMD FirePro (HX240c only)
NEW
NVIDIA Tesla (specific appliance models only)
  Scaling  
  •  
Scale-up
CPU
Memory
Storage
GPU
HX220c/HXAF220c/HXAN220c: CPU, Memory, Network
HX240c/HXAF240c: CPU, Memory, Storage, Network, GPU
Memory
Storage
GPU
  •  
Scale-out
Storage+Compute
Compute-only
Storage-only
Compute+storage
Compute-only (IO Visor)
Compute+storage
Compute-only (NFS; SMB3)
Storage-only
  •  
Scalability
1-64 nodes in 1-node increments
vSphere: 2-32 storage nodes in 1-node increments + 0-32 compute-only nodes in 1-node increments
Hyper-V: 2-16 storage nodes in 1-node increments + 0-16 compute-only nodes in 1-node increments
NEW
3-Unlimited nodes in 1-node increments
  •  
Small-scale (ROBO)
2 Node minimum
2 Node minimum
NEW
3 Node minimum (data center)
1 or 2 Node minimum (ROBO)
1 Node minimum (backup)
Storage Support
  General  
  •  
Layout
Block Storage Pool
Distributed File System (DFS)
Distributed File System (ADSF)
  •  
Data Locality
Partial
None
Full
  •  
Storage Type(s)
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
Direct-attached (Raw)
SAN or NAS
Direct-attached (Raw)
  •  
Composition
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
NEW
Hybrid (Flash+Magnetic)
All-Flash
Hybrid (Flash+Magnetic)
All-Flash
  •  
Hypervisor OS Layer
SD, USB, DOM, SSD/HDD
Dual SD cards
SSD (optional for HX240c and HXAF240c systems)
SuperMicro (G3,G4,G5): DOM
SuperMicro (G6): M2 SSD
Dell: SD or SSD
Lenovo: DOM, SD or SSD
Cisco: SD or SSD
  Memory  
  •  
Memory Layer
DRAM
  •  
Memory Purpose
Read/Write Cache
  •  
Memory Capacity
Up to 8 TB
Configurable
  Flash  
  •  
Flash Layer
SSD, PCIe, UltraDIMM, NVMe
SSD, NVMe
SSD, NVMe
  •  
Flash Purpose
Persistent Storage
Hybrid: Log + Read/Write Cache
All-Flash/All-NVMe: Log + Write Cache + Storage Tier
Read/Write Cache
Storage Tier
  •  
Flash Capacity
No limit, up to 1 PB per device
Hybrid: 2 Flash devices per node (1x Cache; 1x Housekeeping)
All-Flash: 9-26 Flash devices per node (1x Cache; 1x System, 1x Boot; 6-23x Data)
All-NVMe: 8-11 NVMe devices per node (1x Cache, 1x System, 6-8 Data)
NEW
Hybrid: 1-4 SSDs per node
All-Flash: 3-24 SSDs per node
NVMe-Hybrid: 2-4 NVMe + 4-8 SSDs per node
  Magnetic  
  •  
Magnetic Layer
SAS or SATA
Hybrid: SAS or SATA
NEW
Hybrid: SATA
  •  
Magnetic Purpose
Persistent Storage
  •  
Magnetic Capacity
No limit, up to 1 PB (per device)
HX220x Edge M5: 3-6 capacity devices per node
HX220c: 6-8 capacity devices per node
HX240c: 6-23 capacity devices per node
2-20 SATA HDDs per node
Data Availability
  Reads/Writes  
  •  
Persistent Write Buffer
DRAM (mirrored)
Flash Layer (SSD, NVMe)
Flash Layer (SSD; NVMe)
  •  
Disk Failure Protection
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
1-2 Replicas (2N-3N)
1-2 Replicas (2N-3N) (primary)
Erasure Coding (N+1/N+2) (secondary)
  •  
Node Failure Protection
2-way and 3-way Mirroring (RAID-1)
Logical Availability Zone
1-2 Replicas (2N-3N) (primary)
Erasure Coding (N+1/N+2) (secondary)
  •  
Block Failure Protection
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Not relevant (1-node chassis only)
Block Awareness (integrated)
  •  
Rack Failure Protection
Manual configuration
N/A
Rack Fault Tolerance
  •  
Protection Capacity Overhead
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
Replicas (2N): 100%
Replicas (3N): 200%
RF2 (2N) (primary): 100%
RF3 (3N) (primary): 200%
EC-X (N+1) (secondary): 20-50%
EC-X (N+2) (secondary): 50%
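The overhead percentages above follow directly from the protection scheme: N-way mirroring stores N full copies of the data, and N+P erasure coding stores P parity strips per N data strips. A minimal sketch of the arithmetic (Python; helper names are our own):

```python
def mirror_overhead_pct(copies: int) -> float:
    """Capacity overhead of N-way mirroring: N copies -> (N-1) x 100% extra raw capacity."""
    return (copies - 1) * 100.0

def ec_overhead_pct(data_strips: int, parity_strips: int) -> float:
    """Capacity overhead of N+P erasure coding: P parity strips per N data strips."""
    return parity_strips / data_strips * 100.0

print(mirror_overhead_pct(2))  # 2N mirroring / RF2 -> 100.0
print(mirror_overhead_pct(3))  # 3N mirroring / RF3 -> 200.0
print(ec_overhead_pct(4, 1))   # 4+1 EC -> 25.0 (inside the 20-50% band quoted for EC-X N+1)
print(ec_overhead_pct(4, 2))   # 4+2 EC -> 50.0 (matching the EC-X N+2 figure)
```

The quoted 20-50% range for N+1 erasure coding reflects that the stripe width N varies with cluster size: wider stripes amortize the parity strip over more data strips.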
  •  
Data Corruption Detection
N/A (hardware dependent)
Read integrity checks
Read integrity checks
Disk scrubbing (software)
  Points-in-Time  
  •  
Snapshot Type
Built-in (native)
Built-in (native)
  •  
Snapshot Scope
Local + Remote
  •  
Snapshot Frequency
1 Minute
GUI: 1 hour (Policy-based)
GUI: 1-15 minutes (nearsync replication); 1 hour (async replication)
  •  
Snapshot Granularity
Per VM (Vvols) or Volume
Per VM or VM-folder
  •  
Backup Type
Built-in (native)
External
Built-in (native)
  •  
Backup Scope
Local or Remote
N/A
To local single-node
To local and remote clusters
To remote cloud object stores (Amazon S3, Microsoft Azure)
  •  
Backup Frequency
Continuously
N/A
NearSync to remote clusters: 1-15 minutes*
Async to remote clusters: 1 hour
AWS/Azure Cloud: 1 hour
  •  
Backup Consistency
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
N/A
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
  •  
Restore Granularity
Entire VM or Volume
N/A
Entire VM or Single File (local snapshots)
  •  
Restore Ease-of-use
Entire VM or Volume: GUI
Single File: Multi-step
N/A
Entire VM: GUI
Single File: GUI, nCLI
  Disaster Recovery  
  •  
Remote Replication Type
Built-in (native)
Built-in (native)
Built-in (native)
  •  
Remote Replication Scope
To remote sites
To MS Azure Cloud
To remote sites
To remote sites
To AWS and MS Azure Cloud
To Xi Cloud (US and UK only)
  •  
Remote Replication Cloud Function
Data repository
N/A
Data repository (AWS/Azure)
DR-site (Xi Cloud)
  •  
Remote Replication Topologies
Single-site and multi-site
Single-site
Single-site and multi-site
  •  
Remote Replication Frequency
Continuous (near-synchronous)
5 minutes (Asynchronous)
Synchronous to remote cluster: continuous
NearSync to remote clusters: 20 seconds*
Async to remote clusters: 1 hour
AWS/Azure Cloud: 1 hour
Xi Cloud: 1 hour
  •  
Remote Replication Granularity
VM or Volume
VM
VM
iSCSI LUN
  •  
Consistency Groups
Yes
No
Yes
  •  
DR Orchestration
VMware SRM (certified)
HX Connect (native)
VMware SRM (certified)
NEW
VMware SRM (certified)
Xi Leap (native; ESXi/AHV; US/UK/EU/JP)
NEW
  •  
Stretched Cluster (SC)
VMware vSphere: Yes (certified)
vSphere: Yes
Hyper-V: No
vSphere: Yes
Hyper-V: Yes
AHV: No
  •  
SC Configuration
2+ sites: two or more active sites, with zero, one or more tie-breakers
vSphere: 3-sites = two active sites + tie-breaker in 3rd site
vSphere: 3-sites = two active sites + tie-breaker in 3rd site
Hyper-V: 3-sites = two active sites + tie-breaker in 3rd site
  •  
SC Distance
<=5ms RTT (targeted, not required)
<=5ms RTT / 10Gbps
<=5ms RTT / <400 KMs
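The pairing of a <=5 ms RTT budget with a <400 km distance cap is roughly what light-in-fiber propagation dictates: signal speed in optical fiber is about 200,000 km/s (roughly 2/3 of c). A back-of-the-envelope check (Python; propagation delay only, so switch and router latency would only tighten the real budget):

```python
FIBER_KM_PER_S = 200_000  # approximate speed of light in optical fiber (~2/3 c)

def propagation_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

print(propagation_rtt_ms(400))  # 4.0 ms -> close to the 5 ms budget before equipment latency
print(propagation_rtt_ms(100))  # 1.0 ms
```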
  •  
SC Scaling
<=32 hosts at each active site (per cluster)
2-16 converged hosts + 0-16 compute hosts at each active site
No set maximum number of nodes; mixing hardware models allowed
  •  
SC Data Redundancy
Replicas: 1N-2N at each active site
Replicas: 2N at each active site
Replicas: 1N at each active site
Erasure Coding (optional): Nutanix EC-X at each active site
Data Services
  Efficiency  
  •  
Dedup/Compr. Engine
Software (integration)
NEW
Software
  •  
Dedup/Compr. Function
Efficiency (space savings)
Efficiency and Performance
Efficiency (full) and Performance (limited)
  •  
Dedup/Compr. Process
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
Deduplication/Compression: Post-Processing (post process)
NEW
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
Perf. Tier: Inline (dedup post-ack / compr pre-ack)
Cap. Tier: Post-process
  •  
Dedup/Compr. Type
Optional
NEW
Always-on
Dedup Inline: Optional
Dedup Post-Process: Optional
Compr. Inline: Optional
Compr. Post-Process: Optional
  •  
Dedup/Compr. Scope
Persistent data layer
Read and Write caches + Persistent data layers
Dedup Inline: memory and flash layers
Dedup Post-process: persistent data layer (adaptive)
Compr. Inline: flash and persistent data layers
Compr. Post-process: persistent data layer (adaptive)
  •  
Dedup/Compr. Radius
Pool (post-processing deduplication domain)
Node (inline deduplication domain)
NEW
Storage Cluster
Storage Container
  •  
Dedup/Compr. Granularity
4-128 KB variable block size (inline)
32-128 KB variable block size (post-processing)
NEW
4-64 KB fixed block size
16 KB fixed block size
  •  
Dedup/Compr. Guarantee
N/A
N/A
  •  
Data Rebalancing
Full (optional)
Full
Full
  •  
Data Tiering
Yes
N/A
N/A
  Performance  
  •  
Task Offloading
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; Space Reclamation (T10 SCSI UNMAP)
vSphere: VMware VAAI-NAS (full)
Hyper-V: SMB3 ODX; UNMAP/TRIM
vSphere: VMware VAAI-NAS (full), RDMA
Hyper-V: SMB3 ODX; UNMAP/TRIM
AHV: Integrated
  •  
QoS Type
IOPs and/or MBps Limits
N/A
IOPs Limits (maximums)
  •  
QoS Granularity
Virtual Disk Groups and/or Host Groups
N/A
Per VM
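An IOPS maximum of the kind listed above is typically enforced with a token-bucket style rate limiter. The matrix does not describe either vendor's internal mechanism, so the sketch below is a generic illustration only (Python; class and limit values are our own):

```python
import time

class IopsLimiter:
    """Generic token-bucket limiter: allows at most `max_iops` I/O operations per second."""

    def __init__(self, max_iops: float):
        self.max_iops = max_iops
        self.tokens = max_iops           # bucket starts full (one second's worth)
        self.last = time.monotonic()

    def try_io(self) -> bool:
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at the bucket size
        self.tokens = min(self.max_iops, self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller must queue/retry: the VM is at its IOPS cap

limiter = IopsLimiter(max_iops=500)
allowed = sum(limiter.try_io() for _ in range(1000))  # burst of 1,000 requests at once
print(allowed)  # roughly 500: the burst drains the bucket, the rest are throttled
```

Per-VM granularity (the Nutanix entry above) means one such limiter per VM; group granularity (the SANsymphony entry) means one limiter shared by a set of virtual disks or hosts.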
  •  
Flash Pinning
Per VM/Virtual Disk/Volume
Not relevant (global cache architecture)
VM Flash Mode: Per VM/Virtual Disk/iSCSI LUN
  Security  
  •  
Data Encryption Type
Built-in (native)
  •  
Data Encryption Options
Hardware: Self-encrypting drives (SEDs)
Software: SANsymphony Encryption
Hardware: Self-encrypting drives (SEDs)
Software: N/A
Hardware: Self-encrypting drives (SEDs)
Software: AOS encryption; Vormetric VTE (validated), Gemalto (verified)
  •  
Data Encryption Scope
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: Data-at-rest
Software: N/A
Hardware: Data-at-rest
Software AOS: Data-at-rest
Software VTE/Gemalto: Data-at-rest + Data-in-transit
  •  
Data Encryption Compliance
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (SANsymphony)
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: N/A
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (AOS, VTE)
  •  
Data Encryption Efficiency Impact
Hardware: No
Software: No
Hardware: No
Software: N/A
Hardware: No
Software AOS: No
Software VTE/Gemalto: Yes
  Test/Dev  
  •  
Fast VM Cloning
Yes
Yes
  Portability  
  •  
Hypervisor Migration
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
ESXi to AHV (integrated)
AHV to ESXi (integrated)
Hyper-V to AHV (external)
  File Services  
  •  
Fileserver Type
Built-in (native)
N/A
Built-in (native)
NEW
  •  
Fileserver Compatibility
Windows clients
Linux clients
N/A
Windows clients
Apple Mac clients
Linux clients
NEW
  •  
Fileserver Interconnect
SMB
NFS
N/A
SMB
NFS
NEW
  •  
Fileserver Quotas
Share Quotas, User Quotas
N/A
Share Quotas, User Quotas
  •  
Fileserver Analytics
Partial
N/A
Yes
  Object Services  
  •  
Object Storage Type
N/A
N/A
S3-compatible
NEW
  •  
Object Storage Protection
N/A
N/A
Versioning
  •  
Object Storage LT Retention
N/A
N/A
WORM
Management
  Interfaces  
  •  
GUI Functionality
Centralized
Centralized
Centralized
  •  
GUI Scope
Single-site and Multi-site
Single-site and Multi-site
Prism: Single-site
Prism Central: Multi-site
  •  
GUI Perf. Monitoring
Advanced
Basic
Advanced
  •  
GUI Integration
VMware vSphere Web Client (plugin)
VMware vCenter plug-in for SANsymphony
SCVMM DataCore Storage Management Provider
Microsoft System Center Monitoring Pack
VMware vSphere Web Client (plugin)
VMware: Prism (subset)
Microsoft: SCCM (SCOM and SCVMM)
AHV: Prism
Nutanix Files: Prism
Xi Frame: Prism Central
  Programmability  
  •  
Policies
Full
Partial (Protection)
Partial (Protection)
  •  
API/Scripting
REST-APIs
PowerShell
REST-APIs
CLI
REST-APIs
PowerShell
nCLI
  •  
Integration
OpenStack
Cisco UCS Director
OpenStack
VMware vRealize Automation (vRA)
Nutanix Calm
  •  
Self Service
Full
N/A (not part of HX license)
AHV only
  Maintenance  
  •  
SW Composition
Unified
Partially Distributed
Unified
  •  
SW Upgrade Execution
Rolling Upgrade (1-by-1)
Rolling Upgrade (1-by-1)
Rolling Upgrade (1-by-1)
  •  
FW Upgrade Execution
Hardware dependent
1-Click
1-Click
  Support  
  •  
Single HW/SW Support
No
Yes
Yes (Nutanix; Dell; Lenovo; IBM)
  •  
Call-Home Function
Partial (HW dependent)
Full
Full
  •  
Predictive Analytics
Partial
Partial
Full

Matrix Score
  • StarWind: 8th
  • Cisco: 8th
  • Nutanix: 1st