SDS and HCI comparison & reviews

Summary
Products compared (column order throughout): Nutanix Enterprise Cloud Platform (ECP), DataCore SANsymphony, VMware vSAN.
Analysis by Herman Rutten.

Legend:
  • Fully Supported
  • Limitation
  • Not Supported
  • Information Only
Pros
Nutanix:
  • + Extensive platform support
  • + Native file and object services
  • + Manageability
DataCore:
  • + Extensive platform support
  • + Extensive data protection capabilities
  • + Flexible deployment options
VMware:
  • + Broad range of hardware support
  • + Strong VMware integration
  • + Policy-based management
Cons
Nutanix:
  • - Complex solution design
  • - No QoS
  • - Complex dedup/compr architecture
DataCore:
  • - No native data integrity verification
  • - Dedup/compr not performance optimized
  • - Disk/node failure protection not capacity optimized
VMware:
  • - Single hypervisor support
  • - Very limited native data protection capabilities
  • - Dedup/compr not performance optimized
Overview
Nutanix:
  Name: Enterprise Cloud Platform (ECP)
  Type: Hardware+Software (HCI)
  Development Start: 2009
  First Product Release: 2011
DataCore:
  Name: SANsymphony
  Type: Software-only (SDS)
  Development Start: 1998
  First Product Release: 1999
VMware:
  Name: vSAN
  Type: Software-only (SDS)
  Development Start: Unknown
  First Product Release: 2014
Maturity
GA Release Dates:
Nutanix:
  AOS 5.19: Dec 2020
  AOS 5.18: Aug 2020
  AOS 5.17: May 2020
  AOS 5.16: Jan 2020
  AOS 5.11: Aug 2019
  AOS 5.10: Nov 2018
  AOS 5.9: Oct 2018
  AOS 5.8: Jul 2018
  AOS 5.6.1: Jun 2018
  AOS 5.6: Apr 2018
  AOS 5.5: Dec 2017
  AOS 5.1.2 / 5.2*: Sep 2017
  AOS 5.1.1.1: Jul 2017
  AOS 5.1: May 2017
  AOS 5.0: Dec 2016
  AOS 4.7: Jun 2016
  AOS 4.6: Feb 2016
  AOS 4.5: Oct 2015
  NOS 4.1: Jan 2015
  NOS 4.0: Apr 2014
  NOS 3.5: Aug 2013
  NOS 3.0: Dec 2012
DataCore:
  SSY 10.0 PSP12: Jan 2021
  SSY 10.0 PSP11: Aug 2020
  SSY 10.0 PSP10: Dec 2019
  SSY 10.0 PSP9: Jul 2019
  SSY 10.0 PSP8: Sep 2018
  SSY 10.0 PSP7: Dec 2017
  SSY 10.0 PSP6 U5: Aug 2017
  ...
  SSY 10.0: Jun 2014
  SSY 9.0: Jul 2012
  SSY 8.1: Aug 2011
  SSY 8.0: Dec 2010
  SSY 7.0: Apr 2009
  ...
  SSY 3.0: 1999
VMware:
  vSAN 7.0 U1: Oct 2020
  vSAN 7.0: Apr 2020
  vSAN 6.7 U3: Apr 2019
  vSAN 6.7 U1: Oct 2018
  vSAN 6.7: May 2018
  vSAN 6.6.1: Jul 2017
  vSAN 6.6: Apr 2017
  vSAN 6.5: Nov 2016
  vSAN 6.2: Mar 2016
  vSAN 6.1: Aug 2015
  vSAN 6.0: Mar 2015
  vSAN 5.5: Mar 2014
Pricing
Hardware Pricing Model
  Nutanix: Per Node
  DataCore: N/A
  VMware: N/A
Software Pricing Model
  Nutanix:
    Per Core + Flash TiB (AOS)
    Per Node (AOS, Prism Pro/Ultimate)
    Per Concurrent User (VDI)
    Per VM (ROBO)
    Per TB (Files)
    Per VM (Calm)
    Per VM (Xi Leap)
  DataCore: Capacity based (per TB)
  VMware:
    Per CPU Socket
    Per Desktop (VDI use cases only)
    Per Used GB (VCPP only)
Support Pricing Model
  Nutanix: Per Node
  DataCore: Capacity based (per TB)
  VMware: Per CPU Socket; Per Desktop (VDI use cases only)
Design & Deploy
Design
Consolidation Scope
  Nutanix: Hypervisor; Compute; Storage; Data Protection (limited); Management; Automation & Orchestration
  DataCore: Storage; Data Protection; Management; Automation & Orchestration
  VMware: Hypervisor; Compute; Storage; Data Protection (limited); Management; Automation & Orchestration
Network Topology
  Nutanix: 1, 10, 25, 40 GbE
  DataCore: 1, 10, 25, 40, 100 GbE (iSCSI); 8, 16, 32, 64 Gbps (FC)
  VMware: 1, 10, 40 GbE
Overall Design Complexity
  Nutanix: High
  DataCore: Medium
  VMware: Medium
External Performance Validation
  Nutanix: Login VSI (May 2017); ESG Lab (Feb 2017, Sep 2020); SAP (Nov 2016)
  DataCore: SPC (Jun 2016); ESG Lab (Jan 2016)
  VMware: StorageReview (Aug 2018, Aug 2016); ESG Lab (Aug 2018, Apr 2016); Evaluator Group (Oct 2018, Jul 2017, Aug 2016)
Evaluation Methods
  Nutanix: Community Edition (forever); Hyperconverged Test Drive in GCP; Proof-of-Concept (PoC); Partner-Driven Demo Environment; Xi Services Free Trial (60 days)
  DataCore: Free Trial (30 days); Proof-of-Concept (PoC; up to 12 months)
  VMware: Free Trial (60 days); Online Lab; Proof-of-Concept (PoC)
Deploy
Deployment Architecture
  Nutanix: Single-Layer (primary); Dual-Layer (secondary)
  DataCore: Single-Layer; Dual-Layer
  VMware: Single-Layer (primary); Dual-Layer (secondary)
Deployment Method
  Nutanix: Turnkey (very fast; highly automated)
  DataCore: BYOS (some automation)
  VMware: BYOS (fast, some automation); Pre-installed (very fast, turnkey approach)
Workload Support
Virtualization
Hypervisor Deployment
  Nutanix: Virtual Storage Controller
  DataCore: Virtual Storage Controller; Kernel (optional for Hyper-V)
  VMware: Kernel Integrated
Hypervisor Compatibility
  Nutanix: VMware vSphere ESXi 6.0U1A-7.0U1; Microsoft Hyper-V 2012R2/2016/2019*; Microsoft CPS Standard; Nutanix Acropolis Hypervisor (AHV); Citrix XenServer 7.0.0-7.1.0CU2**
  DataCore: VMware vSphere ESXi 5.5-7.0U1; Microsoft Hyper-V 2012R2/2016/2019; Linux KVM; Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
  VMware: VMware vSphere ESXi 7.0 U1
Hypervisor Interconnect
  Nutanix: NFS; SMB3; iSCSI
  DataCore: iSCSI; FC
  VMware: vSAN (incl. WSFC)
Bare Metal
Bare Metal Compatibility
  Nutanix: Microsoft Windows Server 2008R2/2012R2/2016/2019; Red Hat Enterprise Linux (RHEL) 6.7/6.8/7.2; SLES 11/12; Oracle Linux 6.7/7.2; AIX 7.1/7.2 on POWER; Oracle Solaris 11.3 on SPARC; ESXi 5.5/6 with VMFS (very specific use cases)
  DataCore: Microsoft Windows Server 2012R2/2016/2019; Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3; SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1; Ubuntu Linux 16.04 LTS; CentOS 6.5/6.6/7.3; Oracle Solaris 10.0/11.1/11.2/11.3
  VMware: Many
Bare Metal Interconnect
  Nutanix: iSCSI
  DataCore: iSCSI
Containers
Container Integration Type
  Nutanix: Built-in (native)
  DataCore: Built-in (native)
  VMware: Built-in (hypervisor-based, vSAN supported)
Container Platform Compatibility
  Nutanix: Docker EE 1.13+; Node OS CentOS 7.5; Kubernetes 1.11-1.14
  DataCore: Docker CE/EE 18.03+
  VMware: Docker CE 17.06.1+ for Linux on ESXi 6.0+; Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Container Platform Interconnect
  Nutanix: Docker Volume plugin (certified)
  DataCore: Docker Volume plugin (certified)
  VMware: Docker Volume plugin (certified) + VMware VIB
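All three expose storage to Docker through a certified Docker Volume plugin, so provisioning looks the same regardless of backend; only the driver name and its options differ. A minimal sketch with the Docker SDK for Python (the driver name "example/driver" and the size option are hypothetical placeholders, not any vendor's actual identifiers):

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Create a named volume through a vendor's Docker Volume plugin.
# "example/driver" and the "size" option are placeholders; the real
# driver name and options come from the vendor's plugin documentation.
vol = client.volumes.create(
    name="app-data",
    driver="example/driver",
    driver_opts={"size": "20GiB"},
)

# Containers then mount the volume by name as usual.
output = client.containers.run(
    "alpine:3.18",
    command="sh -c 'echo hello > /data/hello.txt && cat /data/hello.txt'",
    volumes={vol.name: {"bind": "/data", "mode": "rw"}},
    remove=True,
)
print(output.decode())
```

The plugin carves the volume out of the backing storage pool; from the container's point of view it is an ordinary named volume.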
Container Host Compatibility
  Nutanix: Virtualized container hosts on all supported hypervisors; Bare Metal container hosts
  DataCore: Virtualized container hosts on all supported hypervisors; Bare Metal container hosts
  VMware: Virtualized container hosts on VMware vSphere hypervisor
Container Host OS Compatibility
  Nutanix: CentOS 7; Red Hat Enterprise Linux (RHEL) 7.3; Ubuntu Linux 16.04.2
  DataCore: Linux
  VMware: Linux; Windows 10 or Windows Server 2016
Container Orch. Compatibility
  Nutanix: Kubernetes
  VMware: VCP: Kubernetes 1.6.5+ on ESXi 6.0+; CNS: Kubernetes 1.14+
Container Orch. Interconnect
  Nutanix: Kubernetes Volume plugin
  DataCore: Kubernetes CSI plugin
  VMware: Kubernetes Volume plugin
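On the Kubernetes side, the volume or CSI plugin is consumed indirectly: a StorageClass names the provisioner, and workloads request storage through a PersistentVolumeClaim. A sketch using the official Kubernetes Python client (the class name "example-sc" is a placeholder for whatever StorageClass the installed vendor plugin provides):

```python
from kubernetes import client, config

config.load_kube_config()  # uses the local kubeconfig credentials

# A PVC that requests storage from a vendor-backed StorageClass.
# "example-sc" is a placeholder; the real class references the vendor's
# volume plugin or CSI provisioner installed in the cluster.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="data-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="example-sc",
        resources=client.V1ResourceRequirements(
            requests={"storage": "20Gi"}
        ),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

Pods then reference "data-claim" in a volume definition; the plugin binds it to a volume carved from the underlying storage pool.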
VDI
VDI Compatibility
  Nutanix: VMware Horizon; Citrix XenDesktop (certified); Citrix Cloud (certified); Parallels RAS; Xi Frame
  DataCore: VMware Horizon; Citrix XenDesktop
  VMware: VMware Horizon; Citrix XenDesktop
VDI Load Bearing
  Nutanix: VMware: up to 170 virtual desktops/node; Citrix: up to 175 virtual desktops/node
  DataCore: VMware: 110 virtual desktops/node; Citrix: 110 virtual desktops/node
  VMware: VMware: up to 200 virtual desktops/node; Citrix: up to 90 virtual desktops/node
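The load-bearing figures are vendor-published benchmark densities, so treat them as upper bounds. Sizing a cluster from them is simple arithmetic: round the desktop count up to whole nodes, then keep at least one spare node for failover. A quick sketch using the Horizon figures from the row above:

```python
import math

# Published desktops-per-node figures from the table above (upper bounds).
DENSITY = {
    "Nutanix (VMware Horizon)": 170,
    "DataCore (VMware Horizon)": 110,
    "VMware (VMware Horizon)": 200,
}

def nodes_required(desktops: int, per_node: int, ha_spare: int = 1) -> int:
    """Round up to whole nodes and keep one spare node for N+1 failover."""
    return math.ceil(desktops / per_node) + ha_spare

for platform, density in DENSITY.items():
    print(f"{platform}: {nodes_required(1000, density)} nodes for 1000 desktops")
```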
Server Support
Server/Node
Hardware Vendor Choice
  Nutanix: Super Micro (Nutanix branded); Super Micro (source your own); Dell EMC (OEM); Lenovo (OEM); Fujitsu (OEM); HPE (OEM); IBM (OEM); Inspur (OEM); Cisco UCS (Select); Crystal (Rugged); Klas Telecom (Rugged); Many (CE only)
  DataCore: Many
  VMware: Many
Models
  Nutanix: 5 native models (SX-1000, NX-1000, NX-3000, NX-5000, NX-8000); 15 native model sub-types
  DataCore: Many
  VMware: Many
Density
  Nutanix (native): 1 (NX1000, NX3000, NX6000, NX8000), 2 (NX6000, NX8000), 4 (NX1000, NX3000), or 3-4 (SX1000) nodes per chassis
  DataCore: 1, 2 or 4 nodes per chassis
  VMware: 1, 2 or 4 nodes per chassis
Mixing Allowed
  Nutanix: Yes
  DataCore: Yes
  VMware: Yes
Components
CPU Config
  Nutanix: Flexible (up to 10 options)
  DataCore: Flexible
  VMware: Flexible
Memory Config
  Nutanix: Flexible (up to 10 options)
  DataCore: Flexible
  VMware: Flexible
Storage Config
  Nutanix: Flexible: capacity (up to 7 options per disk type), number of disks (Dell, Cisco); Fixed: number of disks (hybrid, most all-flash)
  DataCore: Flexible
  VMware: Flexible: number of disks + capacity
Network Config
  Nutanix: Flexible (up to 4 add-on options)
  DataCore: Flexible
  VMware: Flexible
GPU Config
  Nutanix: NVIDIA Tesla (specific appliance models only)
  DataCore: NVIDIA Tesla; AMD FirePro; Intel Iris Pro
  VMware: NVIDIA Tesla; AMD FirePro; Intel Iris Pro
Scaling
Scale-up
  Nutanix: Memory; Storage; GPU
  DataCore: CPU; Memory; Storage; GPU
  VMware: CPU; Memory; Storage; GPU
Scale-out
  Nutanix: Compute+storage; Compute-only (NFS; SMB3); Storage-only
  DataCore: Storage+Compute; Compute-only; Storage-only
  VMware: Compute+storage; Compute-only (vSAN VMkernel)
Scalability
  Nutanix: 3-unlimited nodes in 1-node increments
  DataCore: 1-64 nodes in 1-node increments
  VMware: 2-64 nodes in 1-node increments
Small-scale (ROBO)
  Nutanix: 3-node minimum (data center); 1- or 2-node minimum (ROBO); 1-node minimum (backup)
  DataCore: 2-node minimum
  VMware: 2-node minimum
Storage Support
General
Layout
  Nutanix: Distributed File System (ADSF)
  DataCore: Block Storage Pool
  VMware: Object Storage File System (OSFS)
Data Locality
  Nutanix: Full
  DataCore: Partial
  VMware: Partial
Storage Type(s)
  Nutanix: Direct-attached (Raw)
  DataCore: Direct-attached (Raw); Direct-attached (VoV); SAN or NAS
  VMware: Direct-attached (Raw); Remote vSAN datastores (HCI Mesh)
Composition
  Nutanix: Hybrid (Flash+Magnetic); All-Flash
  DataCore: Magnetic-only; All-Flash; 3D XPoint; Hybrid (3D XPoint and/or Flash and/or Magnetic)
  VMware: Hybrid (Flash+Magnetic); All-Flash
Hypervisor OS Layer
  Nutanix: SuperMicro (G3, G4, G5): DOM; SuperMicro (G6): M.2 SSD; Dell: SD or SSD; Lenovo: DOM, SD or SSD; Cisco: SD or SSD
  DataCore: SD, USB, DOM, SSD/HDD
  VMware: SD, USB, DOM, HDD or SSD
Memory
Memory Layer
  Nutanix: DRAM
  DataCore: DRAM
Memory Purpose
  Read/Write Cache
Memory Capacity
  Nutanix: Configurable
  DataCore: Up to 8 TB
  VMware: Non-configurable
Flash
Flash Layer
  Nutanix: SSD, NVMe
  DataCore: SSD, PCIe, UltraDIMM, NVMe
  VMware: SSD, PCIe, UltraDIMM, NVMe
Flash Purpose
  Nutanix: Read/Write Cache; Storage Tier
  DataCore: Persistent Storage
  VMware: Hybrid: Read/Write Cache; All-Flash: Write Cache + Storage Tier
Flash Capacity
  Nutanix: Hybrid: 1-4 SSDs per node; All-Flash: 3-24 SSDs per node; NVMe-Hybrid: 2-4 NVMe + 4-8 SSDs per node
  DataCore: No limit, up to 1 PB per device
  VMware: Hybrid: 1-5 flash devices per node (1 per disk group); All-Flash: up to 40 flash devices per node (8 per disk group: 1 for cache, 7 for capacity)
Magnetic
Magnetic Layer
  Nutanix: Hybrid: SATA
  DataCore: SAS or SATA
  VMware: Hybrid: SAS or SATA
Magnetic Purpose
Magnetic Capacity
  Nutanix: 2-20 SATA HDDs per node
  DataCore: No limit, up to 1 PB (per device)
  VMware: 1-35 SAS/SATA HDDs per host/node
Data Availability
Reads/Writes
Persistent Write Buffer
  Nutanix: Flash Layer (SSD; NVMe)
  DataCore: DRAM (mirrored)
  VMware: Flash Layer (SSD; PCIe; NVMe)
Disk Failure Protection
  Nutanix: 1-2 Replicas (2N-3N) (primary); Erasure Coding (N+1/N+2) (secondary)
  DataCore: 2-way and 3-way Mirroring (RAID-1) + optional Hardware RAID
  VMware: Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N), Host Pinning (1N); All-Flash: Erasure Coding (RAID5-6)
Node Failure Protection
  Nutanix: 1-2 Replicas (2N-3N) (primary); Erasure Coding (N+1/N+2) (secondary)
  DataCore: 2-way and 3-way Mirroring (RAID-1)
  VMware: Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N), Host Pinning (1N); All-Flash: Erasure Coding (RAID5-6)
Block Failure Protection
  Nutanix: Block Awareness (integrated)
  DataCore: Not relevant (usually 1-node appliances)
  VMware: Manual configuration (optional); Failure Domains
Rack Failure Protection
  Nutanix: Rack Fault Tolerance
  DataCore: Manual configuration
  VMware: Failure Domains
Protection Capacity Overhead
  Nutanix: RF2 (2N) (primary): 100%; RF3 (3N) (primary): 200%; EC-X (N+1) (secondary): 20-50%; EC-X (N+2) (secondary): 50%
  DataCore: Mirroring (2N) (primary): 100%; Mirroring (3N) (primary): 200%; + Hardware RAID5/6 overhead (optional)
  VMware: Host Pinning (1N): dependent on # of VMs; Replicas (2N): 100%; Replicas (3N): 200%; Erasure Coding (RAID5): 33%; Erasure Coding (RAID6): 50%
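These overhead percentages follow directly from the data layout: a scheme that keeps N full copies adds (N-1) x 100% overhead, while erasure coding adds only parity relative to data. A quick check of the table's figures (assuming the common 3+1 and 4+2 erasure-coded layouts; wider stripes explain the lower end of the EC-X range):

```python
# Capacity overhead of replication: N copies require (N-1) extra copies.
def replica_overhead(copies: int) -> float:
    """Overhead as a fraction of usable data, e.g. 2 copies (2N/RF2) -> 1.0 (100%)."""
    return copies - 1

# Capacity overhead of erasure coding: parity fragments relative to data fragments.
def ec_overhead(data: int, parity: int) -> float:
    """e.g. RAID5 as 3+1 -> 1/3 (33%), RAID6 as 4+2 -> 2/4 (50%)."""
    return parity / data

print(f"RF2 (2N):    {replica_overhead(2):.0%}")  # 100%
print(f"RF3 (3N):    {replica_overhead(3):.0%}")  # 200%
print(f"RAID5 (3+1): {ec_overhead(3, 1):.0%}")    # 33%
print(f"RAID6 (4+2): {ec_overhead(4, 2):.0%}")    # 50%
```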
Data Corruption Detection
  Nutanix: Read integrity checks; Disk scrubbing (software)
  DataCore: N/A (hardware dependent)
  VMware: Read integrity checks; Disk scrubbing (software)
Points-in-Time
Snapshot Type
  Nutanix: Built-in (native)
  DataCore: Built-in (native)
Snapshot Scope
  Local + Remote
Snapshot Frequency
  Nutanix: GUI: 1-15 minutes (NearSync replication); 1 hour (async replication)
  DataCore: 1 minute
  VMware: GUI: 1 hour
Snapshot Granularity
  Per VM (vVols) or Volume
Backup Type
  Nutanix: Built-in (native)
  DataCore: Built-in (native)
  VMware: External (vSAN Certified)
Backup Scope
  Nutanix: To local single-node; To local and remote clusters; To remote cloud object stores (Amazon S3, Microsoft Azure)
  DataCore: Local or Remote
  VMware: N/A
Backup Frequency
  Nutanix: NearSync to remote clusters: 1-15 minutes*; Async to remote clusters: 1 hour; AWS/Azure Cloud: 1 hour
  DataCore: Continuously
  VMware: N/A
Backup Consistency
  Nutanix: File System Consistent (Windows); Application Consistent (MS Apps on Windows)
  DataCore: Crash Consistent; File System Consistent (Windows); Application Consistent (MS Apps on Windows)
  VMware: N/A
Restore Granularity
  Nutanix: Entire VM or Single File (local snapshots)
  DataCore: Entire VM or Volume
  VMware: N/A
Restore Ease-of-use
  Nutanix: Entire VM: GUI; Single File: GUI, nCLI
  DataCore: Entire VM or Volume: GUI; Single File: multi-step
  VMware: N/A
Disaster Recovery
Remote Replication Type
  Nutanix: Built-in (native)
  DataCore: Built-in (native)
  VMware: Built-in (Stretched Clusters only); External
Remote Replication Scope
  Nutanix: To remote sites; To AWS and MS Azure Cloud; To Xi Cloud (US and UK only)
  DataCore: To remote sites; To MS Azure Cloud
  VMware: VR: To remote sites, To VMware clouds
Remote Replication Cloud Function
  Nutanix: Data repository (AWS/Azure); DR-site (Xi Cloud)
  DataCore: Data repository
  VMware: VR: DR-site (VMware Clouds)
Remote Replication Topologies
  Nutanix: Single-site and multi-site
  DataCore: Single-site and multi-site
  VMware: VR: Single-site and multi-site
Remote Replication Frequency
  Nutanix: Synchronous to remote cluster: continuous; NearSync to remote clusters: 20 seconds*; Async to remote clusters: 1 hour; AWS/Azure Cloud: 1 hour; Xi Cloud: 1 hour
  DataCore: Continuous (near-synchronous)
  VMware: VR: 5 minutes (asynchronous); vSAN: Continuous (Stretched Cluster)
Remote Replication Granularity
  Nutanix: VM
  DataCore: iSCSI LUN
  VMware: VM or Volume
Consistency Groups
  Nutanix: Yes
  DataCore: Yes
  VMware: VR: No
DR Orchestration
  Nutanix: VMware SRM (certified); Xi Leap (native; ESXi/AHV; US/UK/EU/JP)
  DataCore: VMware SRM (certified)
  VMware: VMware SRM (certified)
Stretched Cluster (SC)
  Nutanix: vSphere: Yes; Hyper-V: Yes; AHV: No
  DataCore: VMware vSphere: Yes (certified)
  VMware: VMware vSphere: Yes (certified)
SC Configuration
  Nutanix: vSphere: 3 sites = two active sites + tie-breaker in 3rd site; Hyper-V: 3 sites = two active sites + tie-breaker in 3rd site
  DataCore: 2+ sites = two or more active sites, 0/1 or more tie-breakers
  VMware: 3 sites = two active sites + tie-breaker in 3rd site
SC Distance
  Nutanix: <=5 ms RTT / <400 km
  DataCore: <=5 ms RTT (targeted, not required)
  VMware: <=5 ms RTT
SC Scaling
  Nutanix: No set maximum # of nodes; mixing hardware models allowed
  DataCore: <=32 hosts at each active site (per cluster)
  VMware: <=15 hosts at each active site
SC Data Redundancy
  Nutanix: Replicas: 1N at each active site; Erasure Coding (optional): Nutanix EC-X at each active site
  DataCore: Replicas: 1N-2N at each active site
  VMware: Replicas: 0-3 Replicas (1N-4N) at each active site; Erasure Coding: RAID5-6 at each active site
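The distance and RTT caps are two views of one constraint: light in fiber covers roughly 200 km per millisecond one way, so 400 km between sites already costs about 4 ms of round-trip time before any switch or storage-stack latency, which is why the 5 ms limits and the <400 km guideline coincide. A quick check:

```python
# Light in fiber travels at ~2/3 of c, i.e. roughly 200 km per millisecond.
FIBER_KM_PER_MS = 200.0  # one-way distance covered per millisecond

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, ignoring equipment latency."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(f"400 km: >= {min_rtt_ms(400):.1f} ms RTT")  # ~4 ms, close to the 5 ms cap
```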
Data Services
Efficiency
Dedup/Compr. Engine
  Software (integration)
  VMware: All-Flash: Software; Hybrid: N/A
Dedup/Compr. Function
  Nutanix: Efficiency (full) and Performance (limited)
  DataCore: Efficiency (space savings)
  VMware: Efficiency (space savings)
Dedup/Compr. Process
  Nutanix: Performance Tier: Inline (dedup post-ack / compr pre-ack); Capacity Tier: Post-process
  DataCore: Deduplication: Inline (post-ack); Compression: Inline (post-ack); Deduplication/Compression: Post-process
  VMware: All-Flash: Inline (post-ack); Hybrid: N/A
Dedup/Compr. Type
  Nutanix: Dedup Inline: Optional; Dedup Post-Process: Optional; Compr. Inline: Optional; Compr. Post-Process: Optional
  DataCore: Optional
  VMware: All-Flash: Optional; Hybrid: N/A
Dedup/Compr. Scope
  Nutanix: Dedup Inline: memory and flash layers; Dedup Post-process: persistent data layer (adaptive); Compr. Inline: flash and persistent data layers; Compr. Post-process: persistent data layer (adaptive)
  DataCore: Persistent data layer
  VMware: Persistent data layer
Dedup/Compr. Radius
  Nutanix: Storage Container
  DataCore: Pool (post-processing deduplication domain); Node (inline deduplication domain)
  VMware: Disk Group
Dedup/Compr. Granularity
  Nutanix: 16 KB fixed block size
  DataCore: 4-128 KB variable block size (inline); 32-128 KB variable block size (post-processing)
  VMware: 4 KB fixed block size
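Granularity matters because deduplication fingerprints blocks and stores identical fingerprints only once: a small fixed block (vSAN's 4 KB) catches more duplicates than a large one (Nutanix's 16 KB), while variable-size chunking (DataCore) can re-align boundaries around shifted data. A minimal fixed-block sketch (illustrative only; production engines use persistent fingerprint stores and verify candidate matches):

```python
import hashlib
import os
import random

def dedup_ratio(data: bytes, block_size: int) -> float:
    """Fraction of blocks eliminated by fixed-block deduplication."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    # Fingerprint each block; blocks with identical fingerprints are stored once.
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return 1 - len(unique) / len(blocks)

# Sample workload: two distinct 4 KB pages repeated in shuffled order, so
# duplicates align on 4 KB boundaries but far less often on 16 KB boundaries.
pages = [os.urandom(4096), os.urandom(4096)] * 64
random.shuffle(pages)
payload = b"".join(pages)

print(f"4 KB blocks:  {dedup_ratio(payload, 4 * 1024):.0%} eliminated")
print(f"16 KB blocks: {dedup_ratio(payload, 16 * 1024):.0%} eliminated")
```

With this payload the 4 KB pass eliminates nearly everything, while the 16 KB pass finds far fewer duplicates; the trade-off is that smaller blocks mean more fingerprints to compute and track.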
Dedup/Compr. Guarantee
  Nutanix: N/A
  DataCore: N/A
Data Rebalancing
  Nutanix: Full
  DataCore: Full (optional)
  VMware: Full
Data Tiering
  Nutanix: N/A
  DataCore: Yes
  VMware: N/A
Performance
Task Offloading
  Nutanix: vSphere: VMware VAAI-NAS (full), RDMA; Hyper-V: SMB3 ODX, UNMAP/TRIM; AHV: Integrated
  DataCore: vSphere: VMware VAAI-Block (full); Hyper-V: Microsoft ODX, Space Reclamation (T10 SCSI UNMAP)
  VMware: vSphere: Integrated
QoS Type
  Nutanix: IOPS Limits (maximums)
  DataCore: IOPS and/or MBps Limits
  VMware: IOPS Limits (maximums)
QoS Granularity
  Nutanix: Per VM
  DataCore: Virtual Disk Groups and/or Host Groups
  VMware: Per VM/Virtual Disk
Flash Pinning
  Nutanix: VM Flash Mode: per VM/Virtual Disk/iSCSI LUN
  DataCore: Per VM/Virtual Disk/Volume
  VMware: Cache Read Reservation: per VM/Virtual Disk
Security
Data Encryption Type
  Built-in (native)
Data Encryption Options
  Nutanix: Hardware: Self-encrypting drives (SEDs); Software: AOS encryption, Vormetric VTE (validated), Gemalto (verified)
  DataCore: Hardware: Self-encrypting drives (SEDs); Software: SANsymphony Encryption
  VMware: Hardware: N/A; Software: vSAN data encryption, HyTrust DataControl (validated)
Data Encryption Scope
  Nutanix: Hardware: Data-at-rest; Software (AOS): Data-at-rest; Software (VTE/Gemalto): Data-at-rest + Data-in-transit
  DataCore: Hardware: Data-at-rest; Software: Data-at-rest
  VMware: Hardware: N/A; Software (vSAN): Data-at-rest + Data-in-transit; Software (HyTrust): Data-at-rest + Data-in-transit
Data Encryption Compliance
  Nutanix: Hardware: FIPS 140-2 Level 2 (SEDs); Software: FIPS 140-2 Level 1 (AOS, VTE)
  DataCore: Hardware: FIPS 140-2 Level 2 (SEDs); Software: FIPS 140-2 Level 1 (SANsymphony)
  VMware: Hardware: N/A; Software: FIPS 140-2 Level 1 (vSAN), FIPS 140-2 Level 1 (HyTrust)
Data Encryption Efficiency Impact
  Nutanix: Hardware: No; Software (AOS): No; Software (VTE/Gemalto): Yes
  DataCore: Hardware: No; Software: No
  VMware: Hardware: N/A; Software: No (vSAN), Yes (HyTrust)
Test/Dev
Fast VM Cloning
  Yes
  No
Portability
Hypervisor Migration
  Nutanix: ESXi to AHV (integrated); AHV to ESXi (integrated); Hyper-V to AHV (external); Hyper-V to ESXi (external)
  DataCore: ESXi to Hyper-V (external); Hyper-V to ESXi (external)
File Services
Fileserver Type
  Nutanix: Built-in (native)
  DataCore: Built-in (native)
  VMware: Built-in (native); External (vSAN Certified)
Fileserver Compatibility
  Nutanix: Windows clients; Apple Mac clients; Linux clients
  DataCore: Windows clients; Linux clients
  VMware: Windows clients; Linux clients
Fileserver Interconnect
  Nutanix: SMB; NFS
  DataCore: SMB; NFS
  VMware: SMB; NFS
Fileserver Quotas
  Nutanix: Share Quotas, User Quotas
  DataCore: Share Quotas, User Quotas
  VMware: Share Quotas
Fileserver Analytics
  Nutanix: Yes
  DataCore: Partial
  VMware: Partial
Object Services
Object Storage Type
  Nutanix: S3-compatible
  DataCore: N/A
  VMware: N/A
Object Storage Protection
  Nutanix: Versioning
  DataCore: N/A
  VMware: N/A
Object Storage LT Retention
  Nutanix: WORM
  DataCore: N/A
  VMware: N/A
Management
Interfaces
GUI Functionality
  Nutanix: Centralized
  DataCore: Centralized
  VMware: Centralized
GUI Scope
  Nutanix: Prism: Single-site; Prism Central: Multi-site
  DataCore: Single-site and Multi-site
  VMware: Single-site and Multi-site
GUI Perf. Monitoring
  Nutanix: Advanced
  DataCore: Advanced
  VMware: Advanced
GUI Integration
  Nutanix: VMware: Prism (subset); Microsoft: SCCM (SCOM and SCVMM); AHV: Prism; Nutanix Files: Prism; Xi Frame: Prism Central
  DataCore: VMware vSphere Web Client (plugin); VMware vCenter plug-in for SANsymphony; SCVMM DataCore Storage Management Provider; Microsoft System Center Monitoring Pack
  VMware: VMware HTML5 vSphere Client (integrated); VMware vSphere Web Client (integrated)
Programmability
Policies
  Nutanix: Partial (Protection)
  DataCore: Full
  VMware: Full
API/Scripting
  Nutanix: REST APIs; PowerShell; nCLI
  DataCore: REST APIs; PowerShell
  VMware: REST APIs; Ruby vSphere Console (RVC); PowerCLI
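Because all three products expose REST APIs, routine reporting and provisioning can be scripted uniformly. A hedged sketch using Python's requests library against the Nutanix Prism v2 API (host and credentials are placeholders, and the exact path and response shape should be checked against the vendor's API reference for your AOS version; DataCore and vSAN offer equivalent REST endpoints):

```python
import requests
from requests.auth import HTTPBasicAuth

# Example: list clusters via the Nutanix Prism v2 REST API.
# Host, credentials, and (depending on AOS version) the exact path
# should be taken from your environment and the vendor API reference.
PRISM = "https://prism.example.local:9440"

resp = requests.get(
    f"{PRISM}/PrismGateway/services/rest/v2.0/clusters",
    auth=HTTPBasicAuth("admin", "secret"),
    verify=False,  # lab only: skip TLS verification for self-signed certs
    timeout=30,
)
resp.raise_for_status()

for cluster in resp.json().get("entities", []):
    print(cluster.get("name"), cluster.get("version"))
```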
Integration
  Nutanix: OpenStack; VMware vRealize Automation (vRA); Nutanix Calm
  DataCore: OpenStack
  VMware: OpenStack; VMware vRealize Automation (vRA)
Self Service
  Nutanix: AHV only
  DataCore: Full
  VMware: N/A (not part of vSAN license)
Maintenance
SW Composition
  Nutanix: Unified
  DataCore: Unified
  VMware: Partially Distributed
SW Upgrade Execution
  Nutanix: Rolling Upgrade (1-by-1)
  DataCore: Rolling Upgrade (1-by-1)
  VMware: Rolling Upgrade (1-by-1)
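In all three cases a rolling upgrade follows the same control loop: take one node out of service, update it, let it rejoin and resynchronize, verify health, and only then move on, so at most one node's worth of capacity is ever offline. A generic sketch of that loop (illustrative Python; the four callables stand in for the platform's own maintenance, update, and health operations, not any vendor API):

```python
import time

def rolling_upgrade(nodes, enter_maintenance, upgrade, exit_maintenance, is_healthy):
    """Upgrade one node at a time, waiting for recovery before the next."""
    for node in nodes:
        enter_maintenance(node)      # evacuate VMs/data from the node
        upgrade(node)                # apply the new software or firmware
        exit_maintenance(node)       # rejoin the node to the cluster
        while not is_healthy(node):  # wait for resync before moving on
            time.sleep(30)
```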
FW Upgrade Execution
  Nutanix: 1-Click
  DataCore: Hardware dependent
  VMware: Rolling Upgrade (1-by-1)
Support
Single HW/SW Support
  Nutanix: Yes (Nutanix; Dell; Lenovo; IBM)
  DataCore: No
  VMware: Yes (most OEM vendors)
Call-Home Function
  Nutanix: Full
  DataCore: Partial (HW dependent)
  VMware: Partial (vSAN Support Insight)
Predictive Analytics
  Nutanix: Full
  DataCore: Partial
  VMware: Full (not part of vSAN license)
