|
General
|
Score:N/A - Features:6
- Green(Full Support):0
- Amber(Partial):0
- Red(Not support):0
- Gray(FYI only):6
|
Score:N/A - Features:6
- Green(Full Support):0
- Amber(Partial):0
- Red(Not support):0
- Gray(FYI only):6
|
Score:N/A - Features:6
- Green(Full Support):0
- Amber(Partial):0
- Red(Not support):0
- Gray(FYI only):6
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
- + Broad range of hardware support
- + Strong VMware integration
- + Policy-based management
|
- + Flexible architecture
- + Satisfactory platform support
- + Several Microsoft integration points
|
- + Fast streamlined deployment
- + Strong VMware integration
- + Policy-based management
|
|
Cons
|
- - Single hypervisor support
- - Very limited native data protection capabilities
- - Dedup/compr not performance optimized
|
- - Minimal data protection capabilities
- - No native dedup capabilities
- - No native encryption capabilities
|
- - Single hypervisor and server hardware
- - No bare-metal support
- - Very limited native data protection capabilities
|
|
|
|
Content |
|
|
|
WhatMatrix
|
WhatMatrix
|
WhatMatrix
|
|
|
|
Assessment |
|
|
|
Name: vSAN
Type: Software-only (SDS)
Development Start: Unknown
First Product Release: 2014
VMware was founded in 1998 and began to ship its first Software Defined Storage solution, Virtual SAN, in 2014. Virtual SAN was later rebranded to vSAN. The vSAN solution is fully integrated into the vSphere Hypervisor platform. In 2015 VMware released major updates in the second iteration of the product. Close to the end of 2016 the third iteration was released.
At the end of May 2019 the company had a customer install base of more than 20,000 vSAN customers worldwide. This covers both vSAN and VxRail customers. At the end of May 2019 there were over 30,000 employees working for VMware worldwide.
|
Name: HyperConverged Appliance (HCA)
Type: Hardware+Software (HCI)
Development Start: 2014
First Product Release: 2016
StarWind Software is a privately held company which started in 2008 as a spin-off from Rocket Division Software, Ltd. (founded in 2003). It initially provided free Software Defined Storage (SDS) offerings to early adopters in 2009. Sometime in 2011 the company released its product Native SAN (later rebranded to Virtual SAN). In 2015 StarWind executed a successful 'pivot shift' from a software-only company to a hardware vendor and brought Hyper-Convergence from the Enterprise level to SMB and ROBO. The new HyperConverged Appliance (HCA) allowed the company to tap into the 'long tail' of the hyper-convergence market, thanks to its simplicity and cost-efficiency. In 2016 StarWind released two more hardware products: Backup Appliance and Storage Appliance.
In June 2019 the company had a StarWind HCA install base of approximately 1,250 customers worldwide. In June 2019 there were more than 250 employees working for StarWind.
|
Name: VxRail
Type: Hardware+Software (HCI)
Development Start: 2015
First Product Release: feb 2016
VCE was founded in late 2009 by VMware, Cisco and EMC. The company is best known for its converged solutions VBlock and VxBlock. VCE started to ship its first hyper-converged solution, VxRack, in late 2015, as part of the VMware EVO:RAIL program. In February 2016 VCE launched VxRail on Quanta server hardware. After completion of the Dell/EMC merger, however, VxRail became part of the Dell EMC portfolio and the company switched to Dell server hardware.
VMware was founded in 1998 and began to ship its first Software Defined Storage solution, Virtual SAN (vSAN), in 2014. The vSAN solution is fully integrated into the vSphere Hypervisor platform. In 2015 VMware released major updates in the second iteration of the product and has continued to improve the software ever since.
In August 2017 VxRail had an install base of approximately 3,000 customers worldwide.
At the end of May 2019 the company had a customer install base of more than 20,000 vSAN customers worldwide. This covers both vSAN and VxRail customers. At the end of May 2019 there were over 30,000 employees working for VMware worldwide.
|
|
|
GA Release Dates:
vSAN 7.0 U1: oct 2020
vSAN 7.0: apr 2020
vSAN 6.7 U3: apr 2019
vSAN 6.7 U1: oct 2018
vSAN 6.7: may 2018
vSAN 6.6.1: jul 2017
vSAN 6.6: apr 2017
vSAN 6.5: nov 2016
vSAN 6.2: mar 2016
vSAN 6.1: aug 2015
vSAN 6.0: mar 2015
vSAN 5.5: mar 2014
NEW
7th Generation software. vSAN's maturity has again increased by expanding its range of features with a set of advanced functionality.
vSAN is also a key element in both the VCE VxRail/VxRack and VMware EVO:SDDC propositions.
|
Release Dates:
HCA build 13279: oct 2019
HCA build 13182: aug 2019
HCA build 12767: feb 2019
HCA build 12658: nov 2018
HCA build 12393: aug 2018
HCA build 12166: may 2018
HCA build 12146: apr 2018
HCA build 11818: dec 2017
HCA build 11456: aug 2017
HCA build 11404: jul 2017
HCA build 11156: may 2017
HCA build 10833: apr 2017
HCA build 10799: mar 2017
HCA build 10695: feb 2017
HCA build 10547: jan 2017
HCA build 9996: aug 2016
HCA build 9781: jun 2016
HCA build 9052: may 2016
HCA build 8730: nov 2015
Version 8 Release 10 StarWind software on proven Super Micro and Dell server hardware.
|
GA Release Dates:
VxRail 7.0.100 (vSAN 7.0 U1): nov 2020
VxRail 7.0 (vSAN 7.0): apr 2020
VxRail 4.7.410 (vSAN 6.7.3): dec 2019
VxRail 4.7.300 (vSAN 6.7.3): sep 2019
VxRail 4.7.212 (vSAN 6.7.2): jul 2019
VxRail 4.7.200 (vSAN 6.7.2): may 2019
VxRail 4.7.100 (vSAN 6.7.1): mar 2019
VxRail 4.7.001 (vSAN 6.7.1): dec 2018
VxRail 4.7.000 (vSAN 6.7.1): nov 2018
VxRail 4.5.225 (vSAN 6.6.1): oct 2018
VxRail 4.5.218 (vSAN 6.6.1): aug 2018
VxRail 4.5.210 (vSAN 6.6.1): may 2018
VxRail 4.5 (vSAN 6.6): sep 2017
VxRail 4.0 (vSAN 6.2): dec 2016
VxRail 3.5 (vSAN 6.2): jun 2016
VxRail 3.0 (vSAN 6.1): feb 2016
NEW
7th Generation VMware software on 14th Generation Dell server hardware.
VxRail is fueled by vSAN software. vSAN's maturity has been increasing ever since the first iteration by expanding its range of features with a set of advanced functionality.
|
|
|
|
Pricing |
|
|
Hardware Pricing Model
Details
|
N/A
vSAN is sold by VMware as a software-only solution. Server hardware must be acquired separately.
VMware maintains an extensive Hardware Compatibility List (HCL) with supported hardware for VMware vSAN implementations.
For guidance on proper hardware configurations, VMware provides a vSAN Hardware Quick Reference Guide.
You can view both the HCL and the Quick Reference Guide by using the following link: http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vSAN
To help customers further, VMware also partners with multiple server hardware manufacturers to provide reference configurations in the VMware vSAN Ready Nodes document.
|
Per Node
|
Per Node
|
|
Software Pricing Model
Details
|
Per CPU Socket
Per Desktop (VDI use cases only)
Per Used GB (VCPP only)
Editions:
vSAN Enterprise
vSAN Advanced
vSAN Standard
vSAN for ROBO Enterprise
vSAN for ROBO Advanced
vSAN for ROBO Standard
vSAN (for ROBO) Standard editions offer All-Flash Hardware, iSCSI Target Service, Storage Policy Based Management, Virtual Distributed Switch, Rack Awareness, Software Checksum and QoS - IOPS Limit.
vSAN (for ROBO) Enterprise and vSAN (for ROBO) Advanced editions exclusively offer All-Flash related features RAID-5/6 Erasure Coding and Deduplication+Compression, as well as vRealize Operations within vCenter over vSAN (for ROBO) Standard editions.
vSAN (for ROBO) Enterprise editions exclusively offer Stretched Cluster and Data-at-rest Encryption features over both vSAN (for ROBO) Standard and Advanced editions.
VMware vSAN is priced per CPU socket for VSI workloads, but can also be acquired per desktop for VDI use cases; A 25VM Pack is exclusively available for ROBO use cases.
vSAN for Desktop is priced per named user or per concurrent user (CCU) in a virtual desktop environment and sold in packs of 10 and 100 licenses.
VMware vSAN is priced per Used GB when subscribing to vSAN from a VMware Cloud Provider (VCPP = VMware Cloud Partner Program).
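To make the three pricing models above concrete, the minimal Python sketch below derives the billable unit count for each model. The cluster size, desktop count and used capacity are hypothetical example inputs, not vendor figures.

```python
# Illustrative only: rough license-count arithmetic for the three vSAN pricing
# models described above. All inputs are hypothetical examples.
nodes = 4
sockets_per_node = 2
desktops = 250                 # VDI use case
used_gb = 20_000               # VCPP subscription use case

per_socket_licenses = nodes * sockets_per_node   # VSI: priced per CPU socket
per_desktop_licenses = desktops                  # VDI: priced per desktop (packs of 10/100)
vcpp_billable_gb = used_gb                       # VCPP: billed per used GB

print(f"Per-socket licenses needed : {per_socket_licenses}")
print(f"Per-desktop licenses needed: {per_desktop_licenses}")
print(f"VCPP billable capacity (GB): {vcpp_billable_gb}")
```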
|
Per Node (all-inclusive)
Normal StarWind Virtual SAN licensing does not apply. Each StarWind HCA node comes equipped with an all-inclusive feature set. This means that a StarWind HCA solution provides unlimited capacity and scalability.
|
Per Node
NEW
Every VxRail 7.0.100 node comes bundled with:
- Dell EMC VxRail Manager 7.0.100
- VxRail Manager Plugin for VMware vCenter
- VMware vCenter Server Virtual Appliance (vCSA) 7.0 U1
- VMware vSphere 7.0 U1
- VMware vSAN 7.0 U1
- VMware vRealize Log Insight 8.1.1.0
- ESRS 3.46
As of VxRail 3.5 VMware vSphere licenses have to be purchased separately. VxRail nodes come pre-installed with VMware vSphere 6.7 U3 Patch01 and require a valid vSphere license key to be entered. VMware vSphere Data Protection (VDP) 6.1 is included as part of the vSphere license and is downloadable through the VxRail Manager.
VMware vSAN licenses have to be purchased separately as well. As of VxRail 4.7 there is a choice of either vSAN 6.7 U3 Standard, Advanced or Enterprise licenses.
Dell EMC VxRail 7.0.100 supports VMware Cloud Foundation (VCF) 4.1. VMware Cloud Foundation (VCF) is a unified SDDC platform that brings together VMware ESXi, VMware vSAN, VMware NSX, and optionally, vRealize Suite components, VMware NSX-T, VMware Enterprise PKS, and VMware Horizon 7 into one integrated stack.
Dell EMC VxRail 7.0 does not support VMware vLCM; vLCM is disabled in vCenter.
Dell EMC VxRail 7.0 does not support appliances based on the Quanta hardware platform.
Dell EMC VxRail 7.0 does not support RecoverPoint for Virtual Machines (RP4VM).
|
|
Support Pricing Model
Details
|
Per CPU Socket
Per Desktop (VDI use cases only)
Subscriptions: Basic, Production, Business Critical and Mission Critical
For details on the different support subscriptions, please use the following link: https://www.vmware.com/support/services/compare.html
|
Per Node (included)
StarWind ProActive Support is included with every HCA node that is acquired.
StarWind ProActive Support monitors the cluster health 24x7x365.
With the initial purchase, StarWind HCA comes with ProActive Support for a period of 3 years. There are options to extend support to 5 or 7 years. In terms of part replacement within the support contract, both Next Business Day (NBD) and 4 Hour options are available.
|
Per Node
Dell EMC offers two types of VxRail Appliance Support:
- Enhanced provides 24x7 support for production environments, including around-the-clock technical support, next business day onsite response, proactive remote monitoring and resolution, and installation of non-customer-replaceable units.
- Premium provides mission critical support for fastest resolution, including 24x7 technical support and monitoring, priority onsite response for critical issues, installation of operating environment updates, and installation of all replacement parts.
|
|
|
Design & Deploy
|
Score:83.3% - Features:7
- Green(Full Support):4
- Amber(Partial):2
- Red(Not support):0
- Gray(FYI only):1
|
Score:83.3% - Features:7
- Green(Full Support):4
- Amber(Partial):2
- Red(Not support):0
- Gray(FYI only):1
|
Score:66.7% - Features:7
- Green(Full Support):2
- Amber(Partial):4
- Red(Not support):0
- Gray(FYI only):1
|
|
|
|
Design |
|
|
Consolidation Scope
Details
|
Hypervisor
Compute
Storage
Data Protection (limited)
Management
Automation&Orchestration
VMware is stack-oriented, whereas the vSAN platform itself is heavily storage-focused.
With the vSAN/VxRail platforms VMware aims to provide all functionality required in a Private Cloud ecosystem.
|
Storage
Management
|
Hypervisor
Compute
Storage
Network (limited)
Data Protection (limited)
Management
Automation&Orchestration
VMware is stack-oriented, whereas the VxRail platform itself is heavily storage-focused.
With the vSAN/VxRail platforms VMware aims to provide all functionality required in a Private Cloud ecosystem.
|
|
|
1, 10, 40 GbE
vSAN supports ethernet connectivity using SFP+ or Base-T. VMware recommends 10GbE or higher to avoid the network becoming a performance bottleneck.
|
1, 10, 25, 40, 100 GbE
StarWind hardware appliances include redundant ethernet connectivity using SFP+, SFP28, QSFP+, QSFP14, QSFP28, Base-T. StarWind recommends 10GbE or higher to avoid the network becoming a performance bottleneck.
|
1, 10, 25 GbE
VxRail hardware models include redundant ethernet connectivity using SFP+ or Base-T. Dell EMC recommends at least 10GbE to avoid the network becoming a performance bottleneck.
VxRail 4.7 added automatic network configuration support for select Dell top-of-rack (TOR) switches.
VxRail 4.7.211 added support for Qlogic and Mellanox NICs, as well as SmartFabric support for Dell EMC S5200 25Gb TOR switches.
|
|
Overall Design Complexity
Details
|
Medium
VMware vSAN was developed with simplicity in mind, both from a design and a deployment perspective. VMware vSAN's uniform platform architecture is meant to be applicable to all virtualization use cases and seeks to provide important capabilities either natively or by leveraging features already present in the VMware hypervisor, vSphere, on a per-VM basis. However, there are still some key areas where vSAN relies heavily on external products and where there is no tight integration involved (e.g. backup/restore). In these cases choices need to be made whether to incorporate 1st party or 3rd party solutions into the overall technical design.
|
Medium
StarWind HCA in itself has a straightforward technical architecture. However, StarWind HCA does not encompass many native data protection capabilities and data services. A complete solution design therefore requires the presence of multiple technology platforms.
|
Medium
Dell EMC VxRail was developed with simplicity in mind, both from a design and a deployment perspective. VMware vSAN's uniform platform architecture, running at the core of VxRail, is meant to be applicable to all virtualization use cases and seeks to provide important capabilities either natively or by leveraging features already present in the VMware hypervisor, vSphere, on a per-VM basis. As there is no tight integration involved, especially with regard to data protection, choices need to be made whether to incorporate 1st party or 3rd party solutions into the overall technical design.
|
|
External Performance Validation
Details
|
StorageReview (aug 2018, aug 2016)
ESG Lab (aug 2018, apr 2016)
Evaluator Group (oct 2018, jul 2017, aug 2016)
StorageReview (Aug 2018)
Title: 'VMware vSAN with Intel Optane Review'
Workloads: MySQL OLTP, MSSQL OLTP, Generic profiles
Benchmark Tools: Sysbench (MySQL), TPC-C (MSSQL), Vdbench (generic)
Hardware: All-Flash Supermicro’s 2029U-TN24R4T+, 4-node cluster, vSAN 6.7
ESG Lab (Aug 2018)
Title: 'Optimize VMware vSAN with Western Digital NVMe SSDs and Supermicro Servers'.
Workloads: MSSQL OLTP
Benchmark Tools: HammerDB (MSSQL)
Hardware: All-Flash SuperMicro BigTwin, 4-node cluster, vSAN 6.6.1; All-Flash Lenovo X3650M5, 4-node cluster, vSAN 6.1
Evaluator Group (Jul 2017/Oct 2018)
Title: 'IOmark-VM-HC Test Report'
Workloads: Mix (MS Exchange, Olio, Web, Database)
Benchmark Tools: IOmark-VM (all)
Hardware: All-Flash Intel R2208WF, 4-node cluster, vSAN 6.6/6.7
Storage Review (Aug 2016)
Title: 'VMware VSAN 6.2 All-Flash Review'
Workloads: MySQL OLTP, MSSQL OLTP
Benchmark Tools: Sysbench (MySQL), TPC-C (MSSQL)
Hardware: All-Flash Dell PowerEdge R730xd, 4-node cluster, vSAN 6.2
Evaluator Group (Aug 2016)
Title: 'IOmark-VM-HC Test Report'
Workloads: Mix (MS Exchange, Olio, Web, Database)
Benchmark Tools: IOmark-VM (all)
Hardware: All-Flash Intel S2600WT, 4-node and 6-node cluster, vSAN 6.2
ESG Lab (Apr 2016)
Title: 'Optimize VMware Virtual SAN 6 with SanDisk SSDs'.
Workloads: MSSQL OLTP
Benchmark Tools: HammerDB (MSSQL)
Hardware: Hybrid/All-Flash Lenovo X3650M5, 4-node cluster, vSAN 6.2
|
StorageReview (oct 2019)
StorageReview (Oct 2019)
Title: 'StarWind HyperConverged Appliance Review'
Workloads: Generic profiles, SQL profiles
Benchmark Tools: VDbench
Hardware: HCA L-AF 3.2, 2-node cluster, HCA 13279
|
StorageReview (dec 2018)
Principled Technologies (jul 2017, jun 2017)
StorageReview (Dec 2018)
Title: 'Dell EMC VxRail P570F Review'
Workloads: MySQL OLTP, MSSQL OLTP, Generic profiles
Benchmark Tools: Sysbench (MySQL), TPC-C (MSSQL), Vdbench (generic)
Hardware: All-Flash Dell EMC VxRail P570F, vSAN 6.7
Principled Technologies (Jul 2017)
Title: 'Handle more orders with faster response times, today and tomorrow'
Workloads: MSSQL OLTP
Benchmark Tools: DS2 (MSSQL)
Hardware: All-Flash Dell EMC VxRail P470F, 4-node cluster, VxRail 4.0 (vSAN 6.2)
Principled Technologies (Jun 2017)
Title: 'Empower your databases with strong, efficient, scalable performance'
Workloads: MSSQL OLTP
Benchmark Tools: DS2 (MSSQL)
Hardware: All-Flash Dell EMC VxRail P470F, 4-node cluster, VxRail 4.0 (vSAN 6.2)
|
|
Evaluation Methods
Details
|
Free Trial (60-days)
Online Lab
Proof-of-Concept (POC)
vSAN Evaluation is freely downloadable after registering online. Because it is embedded in the hypervisor, the free trial includes vSphere and vCenter Server. vSAN Evaluation can be installed on all commodity hardware platforms that meet the hardware requirements. vSAN Evaluation use is time-restricted (60 days). vSAN Evaluation is not for production environments.
VMware also offers a vSAN hosted hands-on lab that lets you deploy, configure and manage vSAN in a contained environment, after registering online.
|
Community Edition (forever)
Trial (up to 30 days)
Proof-of-Concept
There are 3 ways for end-user organizations to evaluate StarWind:
1. StarWind Free. A free version of the software-only Virtual SAN product can be downloaded from the StarWind website. The free version has full functionality, but the StarWind Management Console works only in monitoring mode, without the ability to create or manage StarWind devices. All management is performed via PowerShell and a set of script templates.
The free version is intended to be self-supported or community-supported on public discussion forums.
2. StarWind Trial. The trial version has full functionality and all the management capabilities of the StarWind Management Console. It is limited to 30 days but can be prolonged if required.
3. Proof-of-Concept. This option can be pursued in two ways. First, StarWind HCAs can be delivered to a potential customer location for the POC. Second, remote access to StarWind HCA appliances can be provided.
In both cases the availability of HCAs is checked according to the schedule.
|
Proof-of-Concept (POC)
vSAN: Free Trial (60-days)
vSAN: Online Lab
VxRail 7.0 runs VMware vSAN 7.0 software at its core. vSAN Evaluation is freely downloadable after registering online. Because it is embedded in the hypervisor, the free trial includes vSphere and vCenter Server. vSAN Evaluation can be installed on all commodity hardware platforms that meet the hardware requirements. vSAN Evaluation use is time-restricted (60 days). vSAN Evaluation is not for production environments.
VMware also offers a vSAN hosted hands-on lab that lets you deploy, configure and manage vSAN in a contained environment, after registering online.
|
|
|
|
Deploy |
|
|
Deployment Architecture
Details
|
Single-Layer (primary)
Dual-Layer (secondary)
Single-Layer: VMware vSAN is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
VMware vSAN can partially serve in a dual-layer model by providing storage also to other vSphere hosts within the same cluster that do not contribute storage to vSAN themselves or to bare metal hosts. However, this is not a primary use case and also requires the other vSphere hosts to have vSAN enabled (Please view the compute-only scale-out option for more information).
|
Single-Layer
Dual-Layer (secondary)
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
|
Single-Layer
Single-Layer: VMware vSAN is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
VMware vSAN can partially serve in a dual-layer model by providing storage also to other vSphere hosts within the same cluster that do not contribute storage to vSAN themselves or to bare metal hosts. However, this is not a primary use case and also requires the other vSphere hosts to have vSAN enabled (Please view the compute-only scale-out option for more information).
|
|
Deployment Method
Details
|
BYOS (fast, some automation)
Pre-installed (very fast, turnkey approach)
There are four methods to deploy vSAN:
1. Build-Your-Own.
2. vSAN Ready Nodes (Pre-installed).
3. vSAN Ready Nodes (non pre-installed).
4. Hardware Appliances aka HCI (Dell VxRail / Lenovo VX Series).
Each method has a different level of effort involved; the method that suits the end-user organization best depends on the technical level of the admin team, the IT architecture and its state, as well as the desired level of customization. While the 'Hardware Appliances' allow for the least amount of customization and the 'Build-Your-Own' approach for the most, in reality the majority of end-users choose either 'Pre-Installed vSAN Ready Nodes' or 'Hardware Appliances'.
Because of the tight integration with the VMware vSphere platform, vSAN itself is very easy to install and configure (just a few clicks in the vSphere GUI); a scripted equivalent is sketched below.
vSAN 6.7 U1 introduced a Quickstart wizard in the vSphere Client. The Quickstart workflow guides the end user through the deployment process for vSAN and non-vSAN clusters, and covers every aspect of the initial configuration, such as host, network, and vSphere settings. Quickstart also plays a part in the ongoing expansion of a vSAN cluster by allowing an end user to add additional hosts to the cluster.
BYOS = Bring-Your-Own-Server-Hardware
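As a complement to the GUI/Quickstart workflow described above, the sketch below shows how enabling vSAN on an existing cluster might be scripted with pyVmomi. This is an illustrative sketch only, not VMware's documented procedure; the vCenter address, credentials and cluster name are assumptions, and disk claiming and storage policies are omitted.

```python
# Hedged sketch: enabling vSAN on an existing vSphere cluster with pyVmomi.
# Hostname, credentials and cluster name are hypothetical; no error handling.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only; skips cert checks
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)

# Locate the cluster object by walking the vCenter inventory.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "HCI-Cluster")

# Turn on vSAN for the cluster; disk claiming and policy setup would follow.
spec = vim.cluster.ConfigSpecEx(
    vsanConfig=vim.vsan.cluster.ConfigInfo(enabled=True))
task = cluster.ReconfigureComputeResource_Task(spec, True)  # modify=True

Disconnect(si)
```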
|
Turnkey (very fast; highly automated)
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by Dell EMC, customer deployments can be executed in hours instead of days.
|
|
|
Workload Support
|
Score:92.3% - Features:14
- Green(Full Support):11
- Amber(Partial):2
- Red(Not support):0
- Gray(FYI only):1
|
Score:84.6% - Features:14
- Green(Full Support):10
- Amber(Partial):2
- Red(Not support):1
- Gray(FYI only):1
|
Score:76.9% - Features:14
- Green(Full Support):9
- Amber(Partial):2
- Red(Not support):2
- Gray(FYI only):1
|
|
|
|
Virtualization |
|
|
Hypervisor Deployment
Details
|
Kernel Integrated
vSAN is embedded into the VMware hypervisor. This means it does not require any Controller VMs to be deployed on top of the hypervisor platform.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than Kernel Integrated solutions and remain relatively shielded from the user level.
|
Virtual Storage Controller (vSphere)
User-Space (Hyper-V)
VMware vSphere: StarWind is installed inside a virtual machine (VM) on every ESXi host. The StarWind VM runs the Windows Server operating system.
Microsoft Hyper-V: By default, StarWind is installed in 'C:\Program Files\StarWind Software' on the Windows Server operating system.
|
Kernel Integrated
Virtual SAN is embedded into the VMware hypervisor. This means it does not require any Controller VMs to be deployed on top of the hypervisor platform.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than Kernel Integrated solutions and remain relatively shielded from the user level.
|
|
Hypervisor Compatibility
Details
|
VMware vSphere ESXi 7.0 U1
NEW
VMware vSAN is an integral part of the VMware vSphere platform; As such it cannot be used with any other hypervisor platform.
vSAN supports a single hypervisor in contrast to other SDS/HCI products that support multiple hypervisors.
|
VMware vSphere ESXi 5.5U3-6.7
Microsoft Hyper-V 2012-2019
StarWind is actively working on supporting KVM.
|
VMware vSphere ESXi 7.0 U1
NEW
VMware Virtual SAN is an integral part of the VMware vSphere platform; As such it cannot be used with any other hypervisor platform.
Dell EMC VxRail and vSAN support a single hypervisor in contrast to other SDS/HCI products that support multiple hypervisors.
|
|
Hypervisor Interconnect
Details
|
vSAN (incl. WSFC)
VMware uses a proprietary protocol for vSAN.
vSAN 6.1 and upwards support the use of Microsoft Failover Clustering (MSFC). This includes MS Exchange DAG and SQL Always-On clusters when a file share witness quorum is used. The use of a failover clustering instance (FCI) is not supported.
vSAN 6.7 and upwards support Windows Server Failover Clustering (WSFC) by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs.
vSAN 6.7 U3 introduced native support for SCSI-3 Persistent Reservations (PR), which enables Windows Server Failover Clusters (WSFC) to be directly deployed on native vSAN VMDKs. This capability enables migrations from legacy deployments on physical RDMs or external storage protocols to VMware vSAN.
|
iSCSI
NFS
SMB3
iSCSI is the native StarWind HCA storage protocol, as StarWind HCA provides block-based storage.
NFS can be used as the storage protocol in VMware vSphere environments by leveraging the File Server role that the Windows OS provides.
SMB3 can be used as the storage protocol in Microsoft Hyper-V environments by leveraging the File Server role that the Windows OS provides.
In both VMware vSphere and Microsoft Hyper-V environments, iSCSI is used as a protocol to provide block-level storage access. It allows consolidating storage from multiple servers and providing it as highly available storage to target servers. In the case of vSphere, the VMware iSCSI Initiator allows connecting StarWind iSCSI devices to ESXi hosts and further creating datastores on them. In the case of Hyper-V, the Microsoft iSCSI Initiator is utilized to connect StarWind iSCSI devices to the servers and further provide HA storage to the cluster (i.e. CSV).
In virtualized environments, In-Guest iSCSI support is still a hard requirement if one of the following scenarios is pursued:
- Microsoft Failover Clustering (MSFC) in a VMware vSphere environment
- A supported MS Exchange 2013 Environment in a VMware vSphere environment
Microsoft explicitly does not support NFS in both scenarios.
|
vSAN (incl. WSFC)
VMware uses a proprietary protocol for vSAN.
vSAN 6.1 and upwards support the use of Microsoft Failover Clustering (MSFC). This includes MS Exchange DAG and SQL Always-On clusters when a file share witness quorum is used. The use of a failover clustering instance (FCI) is not supported.
vSAN 6.7 and upwards support Windows Server Failover Clustering (WSFC) by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs.
vSAN 6.7 U3 introduced native support for SCSI-3 Persistent Reservations (PR), which enables Windows Server Failover Clusters (WSFC) to be directly deployed on native vSAN VMDKs. This capability enables migrations from legacy deployments on physical RDMs or external storage protocols to VMware vSAN.
|
|
|
|
Bare Metal |
|
|
Bare Metal Compatibility
Details
|
Many
vSAN iSCSI Service enables hosts and physical workloads that reside outside the vSAN cluster to access the vSAN datastore by providing highly available block storage as iSCSI LUNs. The physical workloads can be stand-alone servers, Windows Failover Clusters (including MSSQL) or Oracle RAC.
vSAN iSCSI Service does not support other vSphere or ESXi clients or initiators, third-party hypervisors, or migrations using raw device mapping (RDMs).
vSAN 6.7 and upwards support Windows Server Failover Clustering (WSFC) by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs.
|
Microsoft Windows Server 2012/2012R2/2016/2019
StarWind HCA provides highly available storage over iSCSI between the StarWind nodes and can additionally share storage over iSCSI to any OS that supports the iSCSI protocol.
|
N/A
Dell EMC VxRail does not support any non-hypervisor platforms.
|
|
Bare Metal Interconnect
Details
|
iSCSI
vSAN iSCSI block storage acts as one or more targets for Windows or Linux operating systems running on a bare metal (physical) server.
vSAN iSCSI Service does not support other vSphere or ESXi clients or initiators, third-party hypervisors, or migrations using raw device mapping (RDMs).
vSAN 6.7 and upwards support Windows Server Failover Clustering (WSFC) by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs.
vSAN 6.7 U3 enhances the vSAN iSCSI service by allowing dynamic resizing of iSCSI LUNs without disruption.
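As an illustration of how a bare-metal Linux host could consume a vSAN iSCSI LUN, the sketch below drives the standard open-iscsi tools from Python. The portal address is an assumption, and the target IQN is taken naively from the discovery output; this is not a VMware-supplied procedure.

```python
# Hedged example: connecting a bare-metal Linux host to a vSAN iSCSI target
# using the standard open-iscsi tools via subprocess. The portal address is
# hypothetical; run with root privileges on a host with open-iscsi installed.
import subprocess

PORTAL = "192.168.10.50:3260"   # assumed IP of a vSAN iSCSI target I/O owner host

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Discover targets exposed by the vSAN iSCSI target service.
targets = run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
print(targets)

# 2. Log in to a discovered target (IQN parsed naively from the discovery output).
iqn = targets.split()[-1]       # fine for a single-target lab setup only
run(["iscsiadm", "-m", "node", "-T", iqn, "-p", PORTAL, "--login"])
```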
|
iSCSI
|
N/A
Dell EMC VxRail does not support any non-hypervisor platforms.
|
|
|
|
Containers |
|
|
Container Integration Type
Details
|
Built-in (Hypervisor-based, vSAN supported)
VMware vSphere Docker Volume Service (vDVS) technology enables running stateful containers backed by storage technology of choice in a vSphere environment.
vDVS comprises a Docker plugin and a vSphere Installation Bundle (VIB), which together bridge the Docker and vSphere ecosystems.
vDVS abstracts underlying enterprise class storage and makes it available as docker volumes to a cluster of hosts running in a vSphere environment. vDVS can be used with enterprise class storage technologies such as vSAN, VMFS, NFS and VVol.
vSAN 6.7 U3 introduced support for VMware Cloud Native Storage (CNS). When Cloud Native Storage is used, persistent storage for containerized stateful applications can be created that are capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives.
|
N/A
StarWind HCA relies on the container support delivered by the hypervisor platform.
|
Built-in (Hypervisor-based, vSAN supported)
VMware vSphere Docker Volume Service (vDVS) technology enables running stateful containers backed by storage technology of choice in a vSphere environment.
vDVS comprises a Docker plugin and a vSphere Installation Bundle (VIB), which together bridge the Docker and vSphere ecosystems.
vDVS abstracts underlying enterprise class storage and makes it available as docker volumes to a cluster of hosts running in a vSphere environment. vDVS can be used with enterprise class storage technologies such as vSAN, VMFS, NFS and VVol.
vSAN 6.7 U3 introduces support for VMware Cloud Native Storage (CNS). When Cloud Native Storage is used, persistent storage for containerized stateful applications can be created that are capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives.
|
|
Container Platform Compatibility
Details
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
|
Container Platform Interconnect
Details
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
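A minimal sketch of what consuming vDVS from the Docker SDK for Python might look like, assuming the vDVS plugin is already installed on the container host; the volume name and the 'size' driver option are illustrative assumptions.

```python
# Hedged sketch: creating and using a vDVS-backed volume from the Docker SDK
# for Python. Assumes the vSphere Docker Volume Service plugin is installed on
# the container host VM; option names are illustrative.
import docker

client = docker.from_env()

# Create a volume through the vDVS driver; it is provisioned as a VMDK on the
# datastore (e.g. a vSAN datastore) backing the container host VM.
vol = client.volumes.create(
    name="db-data",
    driver="vsphere",               # driver name used by the vDVS plugin
    driver_opts={"size": "10gb"},   # assumed option; storage policies can also be set
)

# Attach the volume to a container as a persistent data path.
client.containers.run(
    "alpine", "sh -c 'echo hello > /data/hello.txt'",
    volumes={vol.name: {"bind": "/data", "mode": "rw"}},
    remove=True,
)
```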
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
The StarWind HA VMFS datastore can be used for deploying containers just like a common VMFS datastore.
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
|
|
Container Host Compatibility
Details
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
|
Container Host OS Compatibility
Details
|
Linux
Windows 10 or Windows Server 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
Linux
Windows 10 or Windows Server 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
Linux
Windows 10 or Windows Server 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
|
Container Orch. Compatibility
Details
|
VCP: Kubernetes 1.6.5+ on ESXi 6.0+
CNS: Kubernetes 1.14+
vSAN 6.7 U3 introduced support for VMware Cloud Native Storage (CNS).
When Cloud Native Storage (CNS) is used, persistent storage for containerized stateful applications can be created that are capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS, vVols) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives.
VCP = vSphere Cloud Provider
CSI = Container Storage Interface
|
Kubernetes 1.6.5+ on ESXi 6.0+
|
VCP: Kubernetes 1.6.5+ on ESXi 6.0+
CNS: Kubernetes 1.14+
vSAN 6.7 U3 introduced support for VMware Cloud Native Storage (CNS).
When Cloud Native Storage (CNS) is used, persistent storage for containerized stateful applications can be created that are capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS, vVols) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives.
VCP = vSphere Cloud Provider
CSI = Container Storage Interface
|
|
Container Orch. Interconnect
Details
|
Kubernetes Volume Plugin
The VMware vSphere Container Storage Interface (CSI) Volume Driver for Kubernetes leverages vSAN block storage and vSAN file shares to provide scalable, persistent storage for stateful applications.
Kubernetes contains an in-tree CSI Volume Plug-In that allows the out-of-tree VMware vSphere CSI Volume Driver to gain access to containers and provide persistent-volume storage. The plugin runs in a pod and dynamically provisions requested PersistentVolumes (PVs) using vSAN block storage and vSAN native file shares dynamically provisioned by VMware vSAN File Services.
The VMware vSphere CSI Volume Driver requires Kubernetes v1.14 or later and VMware vSAN 6.7 U3 or later. vSAN File Services requires VMware vSAN/vSphere 7.0.
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, vVol, VMFS or NFS datastores.
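For illustration, the sketch below registers a StorageClass for the vSphere CSI driver with the Kubernetes Python client so that PVCs can be dynamically provisioned on vSAN-backed storage. The storage policy name and the parameter key are assumptions and would need to match the actual environment.

```python
# Hedged sketch: registering a StorageClass for the vSphere CSI driver with the
# Kubernetes Python client. Policy name and parameter key are assumptions.
from kubernetes import client, config

config.load_kube_config()

storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="vsan-default"),
    provisioner="csi.vsphere.vmware.com",            # vSphere CSI driver
    parameters={"storagepolicyname": "vSAN Default Storage Policy"},
)

client.StorageV1Api().create_storage_class(storage_class)

# A PVC referencing storageClassName "vsan-default" would then be satisfied by
# a dynamically provisioned, policy-compliant vSAN-backed PersistentVolume.
```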
|
Kubernetes Volume Plugin
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, VVol, VMFS or NFS datastores.
|
Kubernetes Volume Plugin
The VMware vSphere Container Storage Interface (CSI) Volume Driver for Kubernetes leverages vSAN block storage and vSAN file shares to provide scalable, persistent storage for stateful applications.
Kubernetes contains an in-tree CSI Volume Plug-In that allows the out-of-tree VMware vSphere CSI Volume Driver to gain access to containers and provide persistent-volume storage. The plugin runs in a pod and dynamically provisions requested PersistentVolumes (PVs) using vSAN block storage and vSAN native file shares dynamically provisioned by VMware vSAN File Services.
The VMware vSphere CSI Volume Driver requires Kubernetes v1.14 or later and VMware vSAN 6.7 U3 or later. vSAN File Services requires VMware vSAN/vSphere 7.0.
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, vVol, VMFS or NFS datastores.
|
|
|
|
VDI |
|
|
VDI Compatibility
Details
|
VMware Horizon
Citrix XenDesktop
VMware has vSAN-related Reference Architecture whitepapers available for both the VMware Horizon and Citrix VDI platforms.
|
VMware Horizon
Citrix XenDesktop
Although StarWind supports both VMware and Citrix VDI deployments on top of StarWind HCA, there is currently no specific documentation available.
|
VMware Horizon
Citrix XenDesktop
Dell EMC has published Reference Architecture whitepapers for both VMware Horizon and Citrix XenDesktop platforms.
Dell EMC VxRail 4.7.211 supports VMware Horizon 7.7
|
|
|
VMware: up to 200 virtual desktops/node
Citrix: up to 90 virtual desktops/node
VMware Horizon 7.0: Load bearing number is based on Login VSI tests performed on all-flash rack servers using 2vCPU Windows 7 desktops and the Knowledge Worker profile.
Citrix XenDesktop 7.9: Load bearing number is based on Login VSI tests performed on hybrid VxRail 160 appliances using 2vCPU Windows 7 desktops and the Knowledge Worker profile.
For detailed information please read the corresponding whitepapers.
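As a worked example of how the per-node densities quoted above translate into cluster sizing, the short Python sketch below computes a node count with one spare node (N+1). The desktop count and density are hypothetical inputs, not Login VSI results.

```python
# Illustrative sizing arithmetic only (not vendor guidance): turning a per-node
# desktop density into a node count, keeping one spare node for failover (N+1).
import math

required_desktops = 1000
desktops_per_node = 200          # e.g. the VMware Horizon figure quoted above

base_nodes = math.ceil(required_desktops / desktops_per_node)
total_nodes = base_nodes + 1     # N+1 to tolerate node failure/maintenance

print(f"Base nodes: {base_nodes}")
print(f"With N+1  : {total_nodes}")
```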
|
VMware: up to 260 virtual desktops/node
Citrix: up to 220 virtual desktops/node
The load bearing numbers are based on approximate calculations of the VDI infrastructure that StarWind HCA can support.
There are no Login VSI benchmark numbers on record for StarWind HCA as of yet.
|
VMware: up to 160 virtual desktops/node
Citrix: up to 140 virtual desktops/node
VMware Horizon 7.7: Load bearing number is based on Login VSI tests performed on hybrid VxRail V570F appliances using 2 vCPU Windows 10 desktops and the Knowledge Worker profile.
Citrix XenDesktop 7.15: Load bearing number is based on Login VSI tests performed on hybrid VxRail V570F-B appliances using 2 vCPU Windows 10 desktops and the Knowledge Worker profile.
For detailed information please view the corresponding reference architecture whitepapers.
|
|
|
Server Support
|
Score:96.2% - Features:13
- Green(Full Support):12
- Amber(Partial):1
- Red(Not support):0
- Gray(FYI only):0
|
Score:73.1% - Features:13
- Green(Full Support):7
- Amber(Partial):5
- Red(Not support):1
- Gray(FYI only):0
|
Score:76.9% - Features:13
- Green(Full Support):8
- Amber(Partial):4
- Red(Not support):1
- Gray(FYI only):0
|
|
|
|
Server/Node |
|
|
Hardware Vendor Choice
Details
|
Many
The following server hardware vendors provide vSAN Ready Nodes: Cisco, DELL, Fujitsu, Hitachi, HPE, Huawei, Lenovo, Supermicro.
|
Super Micro (StarWind branded)
Dell (StarWind branded)
Dell (OEM)
HPE (Select)
All StarWind HCA models can be delivered as either Dell or Supermicro server hardware. Dell and Supermicro rackmount chassis hardware models (1U-2U) are further selected as to best fit the requirements of the specific end-user organization.
|
Dell
Dell EMC uses a single brand of server hardware for its VxRail solution. Since completion of the Dell/EMC merger, Dell EMC has shifted from Quanta to Dell PowerEdge server hardware. This coincides with the VxRail 4.0 release (dec 2016).
In November 2017 Dell refreshed its VxRail hardware base with 14th Generation Dell PowerEdge server hardware.
|
|
|
Many
The following server hardware vendors provide vSAN Ready Nodes: Cisco, DELL, Fujitsu, Hitachi, HPE, Huawei, Lenovo, Supermicro.
|
6 Native Models (L-AF, XL-AF, L-H, XL-H, L, XL)
20 Native Submodels
All StarWind HCA models are designed for small and mid-sized business (SMB) virtualized environments.
StarWind Native Models (Super Micro / Dell):
L-AF (All-Flash; Performance)
XL-AF (All-Flash; Performance @ scale)
L-H (Hybrid; Cost-efficient performance)
XL-H (Hybrid; Cost-efficient performance @ scale)
L (Magnetic-only; Capacity)
XL (Magnetic-only; Capacity @ scale)
|
5 Dell (native) Models (E-, G-, P-, S- and V-Series)
Different models are available for different workloads and use cases:
E Series (1U-1Node) - Entry Level
G Series (2U-4Node) - High Density
V Series (2U-1Node) - VDI Optimized
P Series (2U-1Node) - Performance Optimized
S Series (2U-1Node) - Storage Dense
E-, G-, V- and P-Series can be acquired as Hybrid or All-Flash appliance. S-Series can only be acquired as Hybrid appliance.
|
|
|
1, 2 or 4 nodes per chassis
Because vSAN has an extensive HCL, customers can opt for multiple server densities.
Note: vSAN Ready Nodes are mostly based on standard 2U rack server configurations.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power, heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
1 node per chassis
The StarWind HCA architecture uses 1U-1Node and 2U-1Node building blocks.
|
1 or 4 nodes per chassis
The VxRail Appliance architecture uses a combination of 1U-1Node, 2U-1Node and 2U-4Node building blocks.
|
|
|
Yes
Mixing is allowed, although this is not advised within a single vSAN cluster for a consistent performance experience.
|
No
StarWind HCAs cannot be mixed and should be of the same configuration when used in a cluster as they come in a High-Availability (HA) pair as a single solution.
|
Yes
Dell EMC allows mixing of different VxRail Appliance models within a cluster, except where doing so would create highly unbalanced performance. The first 4 cluster nodes do not have to be identical anymore (previous requirement). All G Series nodes within the same chassis must be identical. No mixing is allowed between hybrid and all-flash nodes within the same storage cluster. All nodes within the same cluster must run the same version of VxRail software.
|
|
|
|
Components |
|
|
|
Flexible
For each hardware component the vSAN Hardware Quick Reference Guide and the VMware vSAN Ready Nodes document provide sizing guidelines.
VMware vSphere supports 2nd generation Intel Xeon Scalable (Cascade Lake) processors in version 6.7U1 and upwards.
|
Flexible
StarWind HCAs have flexible CPU configurations. StarWind can equip any Intel CPU as per the customer's specific requirements. Only the default CPU configurations are shown below.
StarWind HCA model L-AF (All-Flash) CPU specs:
L-AF 2.8: 2x Intel Xeon Silver 4208 (8 cores)
L-AF 4.8: 2x Intel Xeon Silver 4208 (8 cores)
L-AF 7.6: 2x Intel Xeon Silver 4208 (8 cores)
L-AF 9.6: 2x Intel Xeon Silver 4208 (8 cores)
StarWind HCA model XL-AF (All-Flash) CPU specs:
XL-AF 13.4: 2x Intel Xeon Gold 5218 (16 cores)
XL-AF 15.3: 2x Intel Xeon Gold 5218 (16 cores)
XL-AF 19.2: 2x Intel Xeon Gold 5218 (16 cores)
XL-AF 23.0: 2x Intel Xeon Gold 5218 (16 cores)
StarWind HCA model L-H (Hybrid) CPU specs:
L-H 5.9: 2x Intel Xeon Silver 4208 (8 cores)
StarWind HCA model XL-H (Hybrid) CPU specs:
XL-H 8.8: 2x Intel Xeon Silver 4208 (8 cores)
XL-H 11.8: 2x Intel Xeon Silver 4208 (8 cores)
XL-H 16.8: 2x Intel Xeon Gold 5218 (16 cores)
XL-H 21.7: 2x Intel Xeon Gold 5218 (16 cores)
XL-H 37.6: 2x Intel Xeon Gold 5218 (16 cores)
StarWind HCA model L (Magnetic-only) CPU specs:
L-4D: 2x Intel Xeon Bronze 3204 (6 cores)
StarWind HCA model XL (Magnetic-only) CPU specs:
XL-8D: 2x Intel Xeon Silver 4208 (8 cores)
XL-16D: 2x Intel Xeon Silver 4208 (8 cores)
XL-24D: 2x Intel Xeon Silver 4208 (8 cores)
XL-32D: 2x Intel Xeon Silver 4208 (8 cores)
XL-48D: 2x Intel Xeon Silver 4208 (8 cores)
StarWind HCA nodes are equipped with 2nd generation Intel Xeon Scalable (Cascade Lake) processors by default.
|
Flexible
VxRail offers multiple CPU options in each hardware model.
E-Series can have single or dual socket, and up to 28 cores/CPU.
G-Series can have single or dual socket, and up to 28 cores/CPU.
P-Series can have dual or quad socket, and up to 28 cores/CPU.
S-Series can have single or dual socket, and up to 28 cores/CPU.
V-Series can have dual socket, and up to 28 cores/CPU.
VxRail appliances on Dell PowerEdge 14G servers are equipped with Intel Xeon Scalable processors (Skylake and Cascade Lake).
Dell EMC VxRail 4.7.211 introduced official support for the 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
|
|
|
Flexible
For each hardware component the vSAN Hardware Quick Reference Guide and the VMware vSAN Ready Nodes document provide sizing guidelines.
|
Flexible
StarWind HCA model L-AF (All-Flash) Memory specs:
L-AF 2.8: 64GB
L-AF 4.8: 128GB
L-AF 7.6: 128GB
L-AF 9.6: 128GB
StarWind HCA model XL-AF (All-Flash) Memory specs:
XL-AF 13.4: 192GB
XL-AF 15.3: 256GB
XL-AF 19.2: 320GB
XL-AF 23.0: 512GB
StarWind HCA model L-H (Hybrid) Memory specs:
L-H 5.9: 96GB
StarWind HCA model XL-H (Hybrid) Memory specs:
XL-H 8.8: 128GB
XL-H 11.8: 128GB
XL-H 16.8: 256GB
XL-H 21.7: 256GB
XL-H 37.6: 512GB
StarWind HCA model L (Magnetic-only) Memory specs:
L-4D: 32GB
StarWind HCA model XL (Magnetic-only) Memory specs:
XL-8D: 64GB
XL-16D: 96GB
XL-24D: 128GB
XL-32D: 128GB
XL-48D: 192GB
Memory can be expanded on all StarWind HCA models and sub-models.
|
Flexible
The amount of memory is configurable for all hardware models. Depending on the hardware model Dell EMC offers multiple choices on memory per Node, maxing at 3TB for E-, P-, S-, V-series and 2TB for G-Series. VxRail Appliances use 16GB RDIMMS, 32GB RDIMMS, 64GB LRDIMMS or 128GB LRDIMMS.
|
|
|
Flexible: number of disks + capacity
For each hardware component the vSAN Hardware Quick Reference Guide and the VMware vSAN Ready Nodes document provide sizing guidelines.
vSAN 7.0 provides support for 32TB physical capacity drives. This extends the logical capacity up to 1PB when deduplication and compression is enabled.
|
Flexible: number of storage devices
StarWind HCA model L-AF (All-Flash) Storage specs:
L-AF 2.8: 4x 960GB SATA Mix Use SSD (RAID5)
L-AF 4.8: 6x 960GB SATA Mix Use SSD (RAID5)
L-AF 7.6: 5x 1.92TB SATA Mix Use SSD (RAID5)
L-AF 9.6: 6x 1.92TB SATA Mix Use SSD (RAID5)
StarWind HCA model XL-AF (All-Flash) Storage specs:
XL-AF 13.4: 8x 1.92TB SATA Mix Use SSD (RAID5)
XL-AF 15.3: 9x 1.92TB SATA Mix Use SSD (RAID5)
XL-AF 19.2: 11x 1.92TB SATA Mix Use SSD (RAID5)
XL-AF 23.0: 13x 1.92TB SATA Mix Use SSD (RAID5)
StarWind HCA model L-H (Hybrid) Storage specs:
L-H 5.9: 2x 1.92TB SATA Mix Use SSD (RAID1) + 2x 4TB 7.2K NL-SAS (RAID1)
StarWind HCA model XL-H (Hybrid) Storage specs:
XL-H 8.8: 4x 960GB SATA Mix Use SSD (RAID5) + 6x 2TB 7.2K NL-SAS (RAID10)
XL-H 11.8: 5x 960GB SATA Mix Use SSD (RAID5) + 4x 4TB 7.2K NL-SAS (RAID10)
XL-H 16.8: 6x 960GB SATA Mix Use SSD (RAID5) + 6x 4TB 7.2K NL-SAS (RAID10)
XL-H 21.7: 4x 1.92TB SATA Mix Use SSD (RAID5) + 8x 4TB 7.2K NL-SAS (RAID10)
XL-H 37.6: 6x 1.92TB SATA Mix Use SSD (RAID5) + 14x 4TB 7.2K NL-SAS (RAID10)
StarWind HCA model L (Magnetic-only) Storage specs:
L-4D: 4x 2TB 7.2K NL-SAS (RAID10)
StarWind HCA model XL (Magnetic-only) Storage specs:
XL-8D: 8x 2TB 7.2K NL-SAS (RAID10)
XL-16D: 8x 4TB 7.2K NL-SAS (RAID10)
XL-24D: 12x 4TB 7.2K NL-SAS (RAID10)
XL-32D: 8x 8TB 7.2K NL-SAS (RAID10)
XL-48D: 12x 8TB 7.2K NL-SAS (RAID10)
Storage can be expanded on the following StarWind HCA models:
- All L-AF and XL-AF submodels
- XL-H 8.8, XL-H 11.8, XL-H 16.8, XL-H 21.7, XL-H 37.6
- XL-8D, XL-16D and XL-32D
|
Flexible: number of disks (limited) + capacity
A 14th generation VxRail appliance has 24 disk slots.
E-Series supports 10x 2.5' SAS drives per node (up to 2 disk groups: 1x flash + 4x capacity drives each).
G-Series supports 6x 2.5' SAS drives per node (up to 1 disk group: 1x flash + 5x capacity drives each).
P-Series supports 24x 2.5' SAS drives per node (up to 4 disk groups: 1x flash + 5x capacity drives each).
V-Series supports 24x 2.5' SAS drives per node (up to 4 disk groups: 1x flash + 5x capacity drives each).
S-Series supports 12x 2.5' + 2x 3.5' SAS drives per node (up to 2 disk groups: 1x flash + 6x capacity drives each).
|
|
|
Flexible
For each hardware component the vSAN Hardware Quick Reference Guide and the VMware vSAN Ready Nodes document provide sizing guidelines.
|
Fixed (Private network)
StarWind HCA model L-AF (All-Flash) Networking specs:
L-AF 2.8: 4x 1GbE and 2x 10GbE (Private, RDMA-enabled)
L-AF 4.8: 4x 1GbE and 2x 10GbE (Private, RDMA-enabled)
L-AF 7.6: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
L-AF 9.6: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
StarWind HCA model XL-AF (All-Flash) Networking specs:
XL-AF 13.4: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-AF 15.3: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-AF 19.2: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-AF 23.0: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
StarWind HCA model L-H (Hybrid) Networking specs:
L-H 5.9: 2x 1GbE + 2x 10GbE NDC
StarWind HCA model XL-H (Hybrid) Networking specs:
XL-H 8.8: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-H 11.8: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-H 16.8: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-H 21.7: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
XL-H 37.6: 2x 1GbE + 2x 10GbE NDC and 2x 25GbE (Private, RDMA-enabled)
StarWind HCA model L (Magnetic-only) Networking specs:
L-4D: 2x 1GbE (LOM) and 4x 1GbE
StarWind HCA model XL (Magnetic-only) Networking specs:
XL-8D: 4x 1GbE and 2x 10GbE (Private)
XL-16D: 4x 1GbE and 2x 10GbE (Private)
XL-24D: 4x 1GbE and 2x 10GbE (Private)
XL-32D: 4x 1GbE and 2x 10GbE (Private)
XL-48D: 4x 1GbE and 2x 10GbE (Private)
Private networks that are used for StarWind remain fixed. Other NICs can be replaced as per customer-specific requirements.
|
Flexible (1, 10 or 25 Gbps)
E-Series supports 2x 25 GbE (SFP28), 4x 10GbE (RJ45/SFP+) or 4x 1GbE (RJ45) per node.
G-Series supports 2x 25 GbE (SFP28) or 2x 10GbE (RJ45/SFP+) per node.
P-Series supports 2x 25 GbE (SFP28), 4x 10GbE (RJ45/SFP+) or 4x 1GbE (RJ45) per node.
S-Series supports 2x 25 GbE (SFP28), 4x 10GbE (RJ45/SFP+) or 4x 1GbE (RJ45) per node.
V-Series supports 2x 25 GbE (SFP28), 4x 10GbE (RJ45/SFP+) per node.
1GbE configurations are only supported with 1-CPU configurations.
Dell EMC VxRail 4.7.300 provides more network design flexibility in creating VxRail clusters across multiple racks. It also provides the ability to expand a cluster beyond one rack using L3 networks and L2 networks.
|
|
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
VMware vSphere 6.5U1 and up officially support several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
NVIDIA Tesla/Quadro
GPUs can be added to custom StarWind HCA models depending on the requirements.
The following NVIDIA GPU card configurations can be ordered along with custom StarWind HCA models:
NVIDIA Tesla P4
NVIDIA Quadro P4000
|
NVIDIA Tesla
Dell EMC offers multiple GPU options in the VxRail V-series Appliances.
Currently the following GPUs are provided as add-on in PowerEdge Gen14 server hardware (V-model only):
NVIDIA Tesla M10 (up to 2x in each node)
NVIDIA Tesla M60 (up to 3x in each node)
NVIDIA Tesla P40 (up to 3x in each node)
|
|
|
|
Scaling |
|
|
|
CPU
Memory
Storage
GPU
VMware allows for the on-the-fly (non-disruptive) adding/removal of individual disks in existing disk groups.
There is a maximum of 5 disk groups (flash cache device + capacity devices) on an individual ESXi host participating in a vSAN cluster. In a hybrid configuration each disk group consists of 1 flash device + a maximum of 7 capacity devices. This totals 40 devices per ESXi host, although an average rack server has room for only up to 24 devices.
|
CPU
Memory
Storage
GPU
Storage: All available disk slots in a StarWind HCA node can be utilized. Additionally, the StarWind HCA technical architecture allows scaling using external disk shelves (JBODs).
|
Memory
Storage
Network
GPU
At the moment CPUs are not upgradeable in VxRail.
|
|
|
Compute+storage
Compute-only (vSAN VMKernel)
Storage+Compute: Existing vSAN clusters can be expanded by adding additional vSAN nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: When the vSAN VMkernel is installed and enabled on a host that is not contributing storage but resides in the same cluster as contributing hosts, vSAN datastores can be presented to these hypervisor hosts as well. This is also beneficial to storage migrations as it allows for online storage vMotions between vSAN storage and non-vSAN storage platforms. The use of the vSAN VMkernel requires a vSAN license for this host.
Storage-only: N/A; A vSAN node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
|
Compute+storage
Compute-only
Storage-only
In the case of Storage-only scale-out, StarWind HCA Storage Nodes are based on bare-metal Windows Server with no hypervisor software (e.g. VMware ESXi) installed. The StarWind software runs as a Windows-native application and provides storage to the hypervisor hosts.
|
Compute+storage
Storage+Compute: Existing VxRail clusters can be expanded by adding additional VxRail nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: VMware does not allow non-VxRail hosts to become part of a VxRail cluster. This means that installing and enabling the vSAN VMkernel on hosts that do not contribute storage, so that vSAN datastores can be presented to those hypervisor hosts as well, is not an option at this time.
Storage-only: N/A; A VxRail node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
|
|
|
2-64 nodes in 1-node increments
The hypervisor cluster scale-out limits still apply: 64 hosts for VMware vSphere.
|
2-64 nodes in 1-node increments
The 64-node limit applies to both VMware vSphere and Microsoft Hyper-V environments.
|
3-64 storage nodes in 1-node increments
At minimum a VxRail deployment consists of 3 nodes. From there the solution can be scaled one or more nodes at a time. The first 4 cluster nodes must be identical.
Scaling beyond 32 nodes no longer requires a Request for Product Qualification (RPQ). However, an RPQ is required for Stretched Cluster implementations.
If using 1GbE only, a storage cluster cannot expand beyond 8 nodes.
For the maximum node configuration hypervisor cluster scale-out limits still apply: 64 hosts for VMware vSphere.
VxRail 4.7 introduces support for adding multiple nodes in parallel, speeding up cluster expansions.
|
|
Small-scale (ROBO)
Details
|
2 Node minimum
The use of the witness virtual appliance eliminates the requirement of a third physical node in vSAN ROBO deployments. vSAN for ROBO edition licensing is best suited for this type of deployment.
|
2 Node minimum
StarWind HCAs or Storage-only HCAs come as a 2-node minimum. They can further scale up by adding more disks or JBODs or scale-out by adding new HCAs or Storage-only nodes.
|
2 Node minimum
VxRail 4.7.100 introduced support for 2-node clusters:
- The deployment is limited to VxRail E-Series nodes.
- Only 1GbE and 10GbE are supported. Inter-cluster VxRail traffic utilizes a pair of network cables linked between the physical nodes.
- A customer-supplied external vCenter is required that does not reside on the 2-node cluster.
- A Witness VM that monitors the health of the 2-node cluster is required and does not reside on the 2-node cluster.
2-node clusters are not supported when using the VxRail G410 appliance.
|
|
|
Storage Support
|
Score:91.7% - Features:14
- Green(Full Support):10
- Amber(Partial):2
- Red(Not support):0
- Gray(FYI only):2
|
Score:100.0% - Features:14
- Green(Full Support):12
- Amber(Partial):0
- Red(Not support):0
- Gray(FYI only):2
|
Score:87.5% - Features:14
- Green(Full Support):9
- Amber(Partial):3
- Red(Not support):0
- Gray(FYI only):2
|
|
|
|
General |
|
|
|
Object Storage File System (OSFS)
|
Block Storage Pool
StarWind HCA only serves block devices as storage volumes to the supported OS platforms.
The underlying storage is first aggregated with hardware RAID inside StarWind HCA. Then, the storage is replicated by StarWind at the block level across 2 or 3 nodes and further provided as a single pool (single StarWind virtual device) or as multiple pools (multiple StarWind devices).
|
Object Storage File System (OSFS)
|
|
|
Partial
Each host within a vSAN cluster has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example. Apart from the read cache, vSAN only uses data locality in stretched clusters to avoid high latency.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It is true that data locality can prevent a lot of network traffic between nodes, because the data is physically located on the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
Partial
StarWind's core approach is to keep data close (local) to the VM in order to avoid slow data transfers through the network and achieve the highest performance the setup can provide. The solution is designed to store the first instance of all data on locally available storage (primary path) and the mirrored instance on the alternate path (secondary path). Furthermore, every hypervisor host can have a local preferred path, indicated by the ALUA path preference. Data does not automatically follow the VM when the VM is moved to another node.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It is true that data locality can prevent a lot of network traffic between nodes, because the data is physically located on the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
Partial
Each node within a VxRail appliance has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example. Apart from the read cache, VxRail only uses data locality in stretched clusters to avoid high latency.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It is true that data locality can prevent a lot of network traffic between nodes, because the data is physically located on the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
|
|
Direct-attached (Raw)
Remote vSAN datastores (HCI Mesh)
NEW
The software takes ownership of the unformatted physical disks available inside the host.
VMware vSAN 7.0 U1 introduces the HCI Mesh concept. With VMware HCI Mesh a vSAN cluster can leverage the storage of remote vSAN clusters for hosting VMs without sacrificing important features such as HA and DR. Up to 5 remote vSAN datastores can be mounted by a single vSAN cluster. HCI Mesh works by using the existing vSAN VMkernel ports and transport protocols. It is fully software-based and does not require any specialized hardware.
|
Direct-attached (Raw)
SAN or NAS
Direct-attached: StarWind can take control of formatted disks (NTFS). The StarWind software can also present raw unformatted disks over its SCSI pass-through interface, which enables remote initiator clients to use any type of hard drive (PATA/SATA/RAID).
External SAN/NAS Storage: SAN/NAS systems can be connected over Ethernet and used by StarWind HCA as long as they are provided as block storage (iSCSI). If data needs to be replicated between NAS systems, there should be 2 NAS systems connected to both StarWind HCA nodes.
|
Direct-attached (Raw)
Remote vSAN datastores (HCI Mesh)
NEW
The software takes ownership of the unformatted physical disks available inside the host.
VMware vSAN 7.0 U1 introduces the HCI Mesh concept. With VMware HCI Mesh a vSAN cluster can leverage the storage of remote vSAN clusters for hosting VMs without sacrificing important features such as HA and DR. Up to 5 remote vSAN datastores can be mounted by a single vSAN cluster. HCI Mesh works by using the existing vSAN VMkernel ports and transport protocols. It is fully software-based and does not require any specialized hardware.
|
|
|
Hybrid (Flash+Magnetic)
All-Flash
vSAN Enterprise edition offers the All-Flash related features Erasure Coding and Data Reduction (Deduplication+Compression).
Hybrid hosts cannot be mixed with All-Flash hosts in the same vSAN cluster.
vSAN 7.0 provides support for 32TB physical capacity drives. This extends the logical capacity up to 1PB when deduplication and compression are enabled.
|
Magnetic-only
Hybrid (Flash+Magnetic)
All-Flash
|
Hybrid (Flash+Magnetic)
All-Flash
Hybrid hosts cannot be mixed with All-Flash hosts in the same VxRail cluster.
|
|
Hypervisor OS Layer
Details
|
SD, USB, DOM, HDD or SSD
|
SD, USB, DOM, SSD/HDD
By default, all StarWind HCA nodes come with redundant SSDs or M.2 sticks for OS installation.
|
SSD
Each VxRail Gen14 node contains 2x 240GB SATA M.2 SSDs with RAID1 protection to host the VMware vSphere hypervisor software.
|
|
|
|
Memory |
|
|
|
DRAM
Each host within a vSAN cluster has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example.
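As a worked example of this rule, a small helper (hypothetical, not a VMware utility) that computes the read cache size for a given host memory configuration:

```python
def vsan_memory_read_cache_gb(host_memory_gb):
    """vSAN reserves 0.4% of host RAM as a local read cache, capped at 1 GB."""
    return min(host_memory_gb * 0.004, 1.0)

print(vsan_memory_read_cache_gb(128))  # 0.512 -> ~0.5 GB read cache
print(vsan_memory_read_cache_gb(512))  # 1.0   -> capped at 1 GB
```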
|
DRAM
|
DRAM
Each node within a VxRail appliance has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example.
|
|
|
Read Cache
|
Read/Write Cache
StarWind HCA accelerates reads and writes by leveraging conventional RAM.
The memory cache is filled up with data mainly during write operations. During read operations, data enters the cache only if the latter contains either empty memory blocks or the lines that were allocated for these entries earlier and have not been fully exhausted yet.
StarWind HCA supports two Memory (L1 Cache) Policies:
1. Write-Back, caches writes in DRAM only and acknowledges back to the originator when complete in DRAM.
2. Write-Through, caches writes in both DRAM and underlying storage, and acknowledges back to the originator when complete in the underlying storage.
This means that caching writes exclusively in conventional memory is optional. When the Write-Through policy is used, DRAM is used primarily for caching reads.
To change the cache size, the StarWind service is first stopped on one HCA, the cache size is changed, and the service is started again; the same process is then repeated on the partner HCA. This keeps VMs up and running during cache changes.
In the majority of use cases, there is no need to assign L1 cache for all-flash storage arrays.
Note: In case of using the Write-Back policy for DRAM, UPS units have to be installed to ensure the correct shutdown of StarWind Virtual SAN nodes. If a power outage occurs, this will prevent the loss of cached data. The UPS capacity must cover the time required for flushing the cached data to the underlying storage.
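The practical difference between the two policies is where the write acknowledgement is generated. A simplified Python model of that behaviour (conceptual only; the class and method names are invented and do not represent StarWind code):

```python
class L1Cache:
    """Toy model of the two DRAM (L1) cache policies described above."""

    def __init__(self, policy):
        assert policy in ("write-back", "write-through")
        self.policy = policy
        self.dram = {}           # cached blocks in conventional memory
        self.backing_store = {}  # underlying (RAID-protected) storage

    def write(self, lba, data):
        self.dram[lba] = data                # both policies place the write in DRAM
        if self.policy == "write-back":
            return "ACK"                     # acknowledged as soon as the block is in DRAM;
                                             # hence the UPS requirement noted above
        self.backing_store[lba] = data       # write-through: persist synchronously...
        return "ACK"                         # ...and only then acknowledge

    def flush(self):
        """With write-back, dirty blocks must be destaged before shutdown."""
        self.backing_store.update(self.dram)
```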
|
Read Cache
|
|
|
Non-configurable
Each host within a vSAN cluster has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example.
|
Configurable
The size of L1 cache should be equal to the amount of the average working data set.
There are no default or maximum values for RAM cache as such. The maximum size that can be assigned to the StarWind RAM cache is limited by the RAM available to the system; you also need to make sure that other running applications have enough RAM for their operations.
Additionally, the total amount of L1 cache assigned influences the time required for system shutdown, so overprovisioning the L1 cache can cause StarWind service interruption and the loss of cached data. The minimum size assigned for RAM cache in either write-back or write-through mode is 1MB. However, StarWind recommends assigning a RAM cache size that matches the size of the working data set.
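A minimal sizing sketch based on these guidelines (the 8 GB headroom for other workloads is an assumption for the example, not a StarWind figure):

```python
def recommend_l1_cache_gb(working_set_gb, free_ram_gb, headroom_gb=8.0):
    """Match the L1 (RAM) cache to the working set, but never exceed the RAM
    left after reserving headroom for other applications; enforce the 1 MB minimum."""
    upper_bound = max(free_ram_gb - headroom_gb, 0.0)
    return max(min(working_set_gb, upper_bound), 0.001)

print(recommend_l1_cache_gb(working_set_gb=40, free_ram_gb=256))  # 40.0 GB
```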
|
Non-configurable
Each node within a VxRail appliance has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example.
|
|
|
|
Flash |
|
|
|
SSD, PCIe, UltraDIMM, NVMe
VMware vSAN 6.6 offers support for Intel Optane (3D XPoint technology) NVMe SSDs.
|
SSD, NVMe
|
SSD, NVMe
VxRail Appliances support a variety of SSDs.
Cache SSDs (SAS): 400GB, 800GB, 1.6TB
Cache SSDs (NVMe): 800GB, 1.6TB
Capacity SSDs (SAS/SATA): 1.92TB, 3.84TB, 7.68TB
Capacity SSDs (NVMe): 960GB, 1TB, 3.84TB, 4TB
VxRail does not support mixing SAS/SATA SSDs in the same disk group.
|
|
|
Hybrid: Read/Write Cache
All-Flash: Write Cache + Storage Tier
In all vSAN configurations 1 separate SSD per disk group is used for caching purposes. The other disks in the disk group are used for persistent storage of data.
For all-flash configurations, the flash device(s) in the cache tier are used for write caching only (no read cache) as read performance from the capacity flash devices is more than sufficient.
Two different grades of flash devices are commonly used in an all-flash vSAN configuration: Lower capacity, higher endurance devices for the cache layer and more cost effective, higher capacity, lower endurance devices for the capacity layer. Writes are performed at the cache layer and then de-staged to the capacity layer, only as needed.
|
Read/Write Cache (hybrid)
Persistent storage (all-flash)
StarWind HCA supports a single Flash (L2 Cache) Policy:
1. Write-Through, caches writes in both Flash (SSD/NVMe) and the underlying storage, and acknowledges back to the originator when complete in the underlying storage.
With the write-through policy, new blocks are written both to cache layer and the underlying storage synchronously. However, in this mode, only the blocks that are read most frequently are kept in the cache layer, accelerating read operations. This is the only mode available for StarWind L2 cache.
In the case of Write-Through cached data does not need to be offloaded to the backing store when a device is removed or the service is stopped.
In the majority of use cases, if L1 cache is already assigned, there is no need to configure L2 cache.
A StarWind HCA solution with L2 cache configured should have more RAM available, since L2 cache needs 6.5 GB of RAM per 1 TB of L2 cache to store metadata.
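That metadata ratio translates into a simple rule of thumb, sketched below (illustrative helper only):

```python
def l2_cache_metadata_ram_gb(l2_cache_tb):
    """RAM needed for L2 cache metadata at the stated ratio of 6.5 GB per 1 TB of L2 cache."""
    return 6.5 * l2_cache_tb

print(l2_cache_metadata_ram_gb(2))  # 13.0 -> a 2 TB L2 cache needs ~13 GB of RAM for metadata
```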
|
Hybrid: Read/Write Cache
All-Flash: Write Cache + Storage Tier
In all VxRail configurations 1 separate SSD per disk group is used for caching purposes. The other disks in the disk group are used for persistent storage of data.
For All-flash configurations, the flash device(s) in the cache tier are used for write caching only (no read cache) as read performance from the capacity flash devices is more than sufficient.
Two different grades of flash devices are used in an All-flash VxRail configuration: Lower capacity, higher endurance devices for the cache layer and more cost effective, higher capacity, lower endurance devices for the capacity layer. Writes are performed at the cache layer and then de-staged to the capacity layer, only as needed.
|
|
|
Hybrid: 1-5 Flash devices per node (1 per disk group)
All-Flash: 40 Flash devices per node (8 per disk group: 1 for cache and 7 for capacity)
VMware vSAN 7.0 provides support for large flash devices (up to 32TB).
|
All-Flash: 4-20+ devices per node
Hybrid: 2 devices per node
All available disk slots in a StarWind HCA node can be utilized. Additionally, the StarWind HCA technical architecture allows scaling using external disk shelves (JBODs).
|
Hybrid: 4 Flash devices per node
All-Flash: 2-24 Flash devices per node
Each VxRail node always requires 1 high-performance SSD for write caching.
In Hybrid VxRail configurations the high-performance SSD in a disk group is also used for read caching. Per disk group 3-5 HDDs can be used as persistent storage (capacity drives).
In All-Flash VxRail configurations the high-performance SSD in a disk group is only used for write caching. Per disk group 1-5 SSDs can be used as persistent storage (capacity drives).
|
|
|
|
Magnetic |
|
|
|
Hybrid: SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
VMware vSAN supports the use of 512e drives. 512e magnetic hard disk drives (HDDs) use a physical sector size of 4096 bytes, but the logical sector size emulates a sector size of 512 bytes. Larger sectors enable the integration of stronger error correction algorithms to maintain data integrity at higher storage densities.
VMware vSAN 6.7 introduced support for 4K native (4Kn) mode.
VMware vSAN 7.0 provides support for 32TB physical capacity drives.
|
Hybrid: SAS or SATA
|
Hybrid: SAS or SATA
VxRail Appliances support a variety of HDDs.
SAS 10K: 1.2TB, 1.8TB, 2.4TB
SATA 7.2K: 2.0TB, 4.0TB
VMware vSAN supports the use of 512e drives. 512e magnetic hard disk drives (HDDs) use a physical sector size of 4096 bytes, but the logical sector size emulates a sector size of 512 bytes. Larger sectors enable the integration of stronger error correction algorithms to maintain data integrity at higher storage densities.
VMware vSAN 6.7 introduces support for 4K native (4Kn) mode.
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
|
|
|
Persistent Storage
|
Persistent Storage
|
Persistent Storage
|
|
Magnetic Capacity
Details
|
1-35 SAS/SATA HDDs per host/node
|
Hybrid: 4+ devices per node
Magnetic-only: 4-12+ devices per node
All available disk slots in a StarWind HCA node can be utilized. Additionally, the StarWind HCA technical architecture allows scaling using external disk shelves (JBODs).
|
3-5 SAS/SATA HDDs per disk group
In the current configurations there is a choice between 1.2/1.8/2.4TB 10k SAS drives and 2.0/4.0TB 7.2k NL-SAS drives.
The current configuration maximum for a single host/node is 4 disk groups consisting of 1 NVMe drive + 5 HDDs for hybrid configurations or 1 NVMe drive + 5 capacity SSDs for all-flash configurations = total of 24 drives per host/node.
Since a single VxRail G-Series chassis can contain up to 4 nodes, there is room for only 6 drives per node.
|
|
|
Data Availability
|
Score:70.0% - Features:30
- Green(Full Support):18
- Amber(Partial):6
- Red(Not support):6
- Gray(FYI only):0
|
Score:50.0% - Features:30
- Green(Full Support):14
- Amber(Partial):2
- Red(Not support):14
- Gray(FYI only):0
|
Score:70.0% - Features:30
- Green(Full Support):18
- Amber(Partial):6
- Red(Not support):6
- Gray(FYI only):0
|
|
|
|
Reads/Writes |
|
|
Persistent Write Buffer
Details
|
Flash Layer (SSD;PCIe;NVMe)
|
DRAM (mirrored)
Flash Layer (SSD, NVMe)
|
Flash Layer (SSD, NVMe)
|
|
Disk Failure Protection
Details
|
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N), Host Pinning (1N)
All-Flash: Erasure Coding (RAID5-6)
VMware's implementation of Erasure Coding only applies to All-Flash configurations and is similar to RAID-5 and RAID-6 protection. RAID-5 requires a minimum of 4 nodes (3+1) and protects against a single node failure; RAID-6 requires a minimum of 6 nodes and protects against two node failures. Erasure Coding is only available in vSAN Enterprise and Advanced editions, and is only configurable for All-flash configurations.
VMware's implementation of replicas is called NumberOfFailuresToTolerate (0, 1, 2 or 3). It applies to both disk and node failures. Optionally, nodes can be assigned to a logical grouping called Failure Domains. The use of 0 Replicas within a single site is only available when using Stretched Clustering, which is only available in the Enterprise editions.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. For both instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When an entire node fails, VMs need to be restarted and data is read from the surviving instances on other nodes within the vSAN cluster instead. At the same time data re-replication of the associated replicas needs to occur in order to restore the desired NumberOfFailuresToTolerate.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configuration.
vSAN provides increased support for locator LEDs on vSAN disks. Gen-9 HPE controllers in pass-through mode support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives.
vSAN 6.7 introduced the Host Pinning storage policy that can be used for next-generation, shared-nothing applications. When using Host Pinning, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment.
|
1-2 Replicas (2N-3N)
+ Hardware RAID (1, 5, 10)
StarWind HCA uses replicas to protect data within the cluster. In addition, hardware RAID is implemented to enhance the robustness of individual nodes.
Replicas+Hardware RAID: Before any write is acknowledged to the host, it is synchronously replicated to one or two designated partner nodes. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. With 3N one instance of data that is written is stored on the local node, and two other instances of that data are stored on designated partner nodes in the cluster. When a physical disk fails, hardware RAID maintains data availability.
The hardware RAID level that is applied depends on the media type and the number of devices used:
Flash drives = RAID5
2 Flash drives in Hybrid config = RAID1
Magnetic drives = RAID10
2 Magnetic drives in Hybrid config = RAID1
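These selection rules can be summarized in a few lines of Python (an illustrative helper, not StarWind tooling):

```python
def starwind_hw_raid_level(media, drive_count, hybrid):
    """Return the hardware RAID level per the rules listed above."""
    if media == "flash":
        return "RAID1" if (hybrid and drive_count == 2) else "RAID5"
    if media == "magnetic":
        return "RAID1" if (hybrid and drive_count == 2) else "RAID10"
    raise ValueError("media must be 'flash' or 'magnetic'")

print(starwind_hw_raid_level("flash", 8, hybrid=False))    # RAID5
print(starwind_hw_raid_level("magnetic", 2, hybrid=True))  # RAID1
```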
|
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N)
All-Flash: Erasure Coding (RAID5-6)
VMware's implementation of Erasure Coding only applies to All-Flash configurations and is similar to RAID-5 and RAID-6 protection. RAID-5 requires a minimum of 4 nodes (3+1) and protects against a single node failure; RAID-6 requires a minimum of 6 nodes and protects against two node failures. Erasure Coding is only available in vSAN Enterprise and Advanced editions, and is only configurable for All-flash configurations.
VMware's implementation of replicas is called NumberOfFailuresToTolerate (0, 1, 2 or 3). It applies to both disk and node failures. Optionally, nodes can be assigned to a logical grouping called Failure Domains. The use of 0 Replicas within a single site is only available when using Stretched Clustering, which is only available in the Enterprise editions.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. For both instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When an entire node fails, VMs need to be restarted and data is read from the surviving instances on other nodes within the vSAN cluster instead. At the same time data re-replication of the associated replicas needs to occur in order to restore the desired NumberOfFailuresToTolerate.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configuration.
vSAN provides increased support for locator LEDs on vSAN disks. Gen-9 HPE controllers in pass-through mode support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives.
vSAN 6.7 introduces the Host Pinning storage policy that can be used for next-generation, shared-nothing applications. When using Host Pinning, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment.
|
|
Node Failure Protection
Details
|
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N), Host Pinning (1N)
All-Flash: Erasure Coding (RAID5-6)
VMware's implementation of Erasure Coding only applies to All-Flash configurations and is similar to RAID-5 and RAID-6 protection. RAID-5 requires a minimum of 4 nodes (3+1) and protects against a single node failure; RAID-6 requires a minimum of 6 nodes and protects against two node failures. Erasure Coding is only available in vSAN Enterprise and Advanced editions, and is only configurable for All-flash configurations.
VMware's implementation of replicas is called NumberOfFailuresToTolerate (0, 1, 2 or 3). It applies to both disk and node failures. Optionally, nodes can be assigned to a logical grouping called Failure Domains. The use of 0 Replicas within a single site is only available when using Stretched Clustering, which is only available in the Enterprise editions.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. For both instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When an entire node fails, VMs need to be restarted and data is read from the surviving instances on other nodes within the vSAN cluster instead. At the same time data re-replication of the associated replicas needs to occur in order to restore the desired NumberOfFailuresToTolerate.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configuration.
vSAN provides increased support for locator LEDs on vSAN disks. Gen-9 HPE controllers in pass-through mode support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives.
vSAN 6.7 introduced the Host Pinning storage policy that can be used for next-generation, shared-nothing applications. When using Host Pinning, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment.
|
1-2 Replicas (2N-3N)
Replicas: Before any write is acknowledged to the host, it is synchronously replicated to one or two designated partner nodes. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. With 3N one instance of data that is written is stored on the local node, and two other instances of that data are stored on designated partner nodes in the cluster.
|
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N)
All-Flash: Erasure Coding (RAID5-6)
VMware's implementation of Erasure Coding only applies to All-Flash configurations and is similar to RAID-5 and RAID-6 protection. RAID-5 requires a minimum of 4 nodes (3+1) and protects against a single node failure; RAID-6 requires a minimum of 6 nodes and protects against two node failures. Erasure Coding is only available in vSAN Enterprise and Advanced editions, and is only configurable for All-flash configurations.
VMware's implementation of replicas is called NumberOfFailuresToTolerate (0, 1, 2 or 3). It applies to both disk and node failures. Optionally, nodes can be assigned to a logical grouping called Failure Domains. The use of 0 Replicas within a single site is only available when using Stretched Clustering, which is only available in the Enterprise editions.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. For both instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When an entire node fails, VMs need to be restarted and data is read from the surviving instances on other nodes within the vSAN cluster instead. At the same time data re-replication of the associated replicas needs to occur in order to restore the desired NumberOfFailuresToTolerate.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configuration.
vSAN provides increased support for locator LEDs on vSAN disks. Gen-9 HPE controllers in pass-through mode support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives.
vSAN 6.7 introduces the Host Pinning storage policy that can be used for next-generation, shared-nothing applications. When using Host Pinning, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment.
|
|
Block Failure Protection
Details
|
Failure Domains
Block failure protection can be achieved by placing the nodes of each appliance in their own Failure Domain, so that data copies are placed in different appliances.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configurations.
|
Not relevant (1-node chassis only)
|
Failure Domains
Block failure protection can be achieved by placing the nodes of each appliance in their own Failure Domain, so that data copies are placed in different appliances.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configurations.
|
|
Rack Failure Protection
Details
|
Failure Domains
Rack failure protection can be achieved by placing the nodes within each rack in their own Failure Domain, so that data copies are placed in different racks.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configurations.
|
N/A
|
Failure Domains
Rack failure protection can be achieved by placing the nodes within each rack in their own Failure Domain, so that data copies are placed in different racks.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configurations.
|
|
Protection Capacity Overhead
Details
|
Host Pinning (1N): Dependent on # of VMs
Replicas (2N): 100%
Replicas (3N): 200%
Erasure Coding (RAID5): 33%
Erasure Coding (RAID6): 50%
RAID5: The stripe size used by vSAN for RAID5 is 3+1 (33% capacity overhead for data protection) and is independent of the cluster size. The minimum cluster size for RAID5 is 4 nodes.
RAID6: The stripe size used by vSAN for RAID6 is 4+2 (50% capacity overhead for data protection) and is independent of the cluster size. The minimum cluster size for RAID6 is 6 nodes.
RAID5/6 can only be leveraged in vSAN All-flash configurations because of I/O amplification.
|
Replica (2N) + RAID1/10: 200%
Replica (2N) + RAID5: 125-133%
Replica (3N) + RAID1/10: 300%
Replica (3N) + RAID5: 225-233%
|
Replicas (2N): 100%
Replicas (3N): 200%
Erasure Coding (RAID5): 33%
Erasure Coding (RAID6): 50%
RAID5: The stripe size used by vSAN for RAID5 is 3+1 (33% capacity overhead for data protection) and is independent of the cluster size. The minimum cluster size for RAID5 is 4 nodes.
RAID6: The stripe size used by vSAN for RAID6 is 4+2 (50% capacity overhead for data protection) and is independent of the cluster size. The minimum cluster size for RAID6 is 6 nodes.
RAID5/6 can only be leveraged in vSAN All-flash configurations because of I/O amplification.
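The replica and erasure-coding overheads listed for vSAN/VxRail follow directly from the copy count and stripe geometry; a minimal Python sketch of the arithmetic (illustrative only, not a VMware or Dell EMC sizing tool):

```python
def protection_overhead(scheme):
    """Capacity overhead (extra capacity / usable capacity) per protection scheme."""
    return {
        "replica-2n": 1.0,  # RAID1, FTT=1: 2 copies -> 100% overhead
        "replica-3n": 2.0,  # RAID1, FTT=2: 3 copies -> 200% overhead
        "raid5": 1 / 3,     # 3+1 erasure coding     -> 33% overhead
        "raid6": 2 / 4,     # 4+2 erasure coding     -> 50% overhead
    }[scheme]

# Example: 10 TB of VM data protected with RAID5 needs ~13.3 TB of raw capacity.
print(10 * (1 + protection_overhead("raid5")))
```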
|
|
Data Corruption Detection
Details
|
Read integrity checks
Disk scrubbing (software)
End-to-end checksum provides automatic detection and resolution of silent disk errors. Creation of checksums is enabled by default, but can be disabled through policy on a per VM (or virtual disk) basis if desired. In case of checksum verification failures data is fetched from another copy.
The disk scrubbing process runs in the background.
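Conceptually, the read-path verification described above works as sketched below. This is a simplified illustration only; vSAN's actual checksum algorithm and on-disk layout are internal to the product, and zlib's CRC-32 is used here purely for demonstration:

```python
import zlib

def write_block(store, lba, data):
    store[lba] = (data, zlib.crc32(data))     # persist the block together with its checksum

def read_block(store, mirror, lba):
    data, checksum = store[lba]
    if zlib.crc32(data) != checksum:          # silent corruption detected on read
        data, checksum = mirror[lba]          # fetch the block from another copy...
        store[lba] = (data, checksum)         # ...and repair the corrupted copy
    return data
```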
|
N/A (hardware dependent)
StarWind HCA fully relies on the hardware layer to protect data integrity. This means that the StarWind software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
Read integrity checks
Disk scrubbing (software)
End-to-end checksum provides automatic detection and resolution of silent disk errors. Creation of checksums is enabled by default, but can be disabled through policy on a per VM (or virtual disk) basis if desired. In case of checksum verification failures data is fetched from another copy.
The disk scrubbing process runs in the background.
|
|
|
|
Points-in-Time |
|
|
|
Built-in (native)
VMware vSAN uses the 'vSANSparse' snapshot format that leverages VirstoFS technology as well as in-memory metadata cache for lookups. vSANSparse offers greatly improved performance when compared to previous virtual machine snapshot implementations.
|
N/A
StarWind HCA does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
Built-in (native)
VMware vSAN uses the 'vSANSparse' snapshot format that leverages VirstoFS technology as well as in-memory metadata cache for lookups. vSANSparse offers greatly improved performance when compared to previous virtual machine snapshot implementations.
|
|
|
Local
|
N/A
StarWind HCA does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
Local
|
|
Snapshot Frequency
Details
|
GUI: 1 hour
vSAN snapshots are invoked using the existing snapshot options in the VMware vSphere GUI.
To create a snapshot schedule using the vCenter (Web) Client: Click on a VM, then inside the Monitoring tab select Tasks & Events, Scheduled Tasks, 'Take Snapshots…'.
A single snapshot schedule allows a minimum frequency of 1 hour. Manual snapshots can be taken at any time.
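Because vSAN snapshots are invoked through the standard vSphere snapshot mechanisms, a manual snapshot can also be taken programmatically. A minimal pyVmomi sketch (the vCenter hostname, credentials and VM name are placeholders; SSL handling and waiting for task completion are omitted for brevity):

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-vm-01")

# Same API call as for any vSphere datastore; on vSAN the snapshot uses the vSANSparse format.
task = vm.CreateSnapshot_Task(name="pre-change", description="manual snapshot",
                              memory=False, quiesce=True)
Disconnect(si)
```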
|
N/A
StarWind HCA does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
GUI: 1 hour
vSAN snapshots are invoked using the existing snapshot options in the VMware vSphere GUI.
To create a snapshot schedule using the vCenter (Web) Client: Click on a VM, then inside the Monitoring tab select Tasks & Events, Scheduled Tasks, 'Take Snapshots…'.
A single snapshot schedule allows a minimum frequency of 1 hour. Manual snapshots can be taken at any time.
|
|
Snapshot Granularity
Details
|
Per VM
|
N/A
StarWind HCA does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
Per VM
|
|
|
External (vSAN Certified)
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
External
StarWind HCA does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
StarWind Virtual Tape Library Appliance (VTLA), another product in StarWind's portfolio that is available as a pre-configured appliance or as a software version, can be leveraged as a backup storage target by a large number of well-known backup applications. VTLA is also able to offload backups to a secondary backup repository by replicating them to public cloud targets. VTLA supports Amazon S3, Amazon Glacier, Azure Blob (premium, hot, cool, archive), Backblaze B2, Wasabi, and Iron Mountain IronCloud.
|
External (vSAN Certified)
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and was expected to go live in the first half of 2019. vSAN 7.0 also did not introduce native data protection.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
N/A
StarWind HCA does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
StarWind Virtual Tape Library Appliance (VTLA), another product in StarWind's portfolio that is available as a pre-configured appliance or as a software version, can be leveraged as a backup storage target by a large number of well-known backup applications. VTLA is also able to offload backups to a secondary backup repository by replicating them to public cloud targets. VTLA supports Amazon S3, Amazon Glacier, Azure Blob (premium, hot, cool, archive), Backblaze B2, Wasabi, and Iron Mountain IronCloud.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
N/A
StarWind HCA does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
StarWind Virtual Tape Library Appliance (VTLA), another product in StarWind's portfolio that is available as a pre-configured appliance or as a software version, can be leveraged as a backup storage target by a large number of well-known backup applications. VTLA is also able to offload backups to a secondary backup repository by replicating them to public cloud targets. VTLA supports Amazon S3, Amazon Glacier, Azure Blob (premium, hot, cool, archive), Backblaze B2, Wasabi, and Iron Mountain IronCloud.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
Backup Consistency
Details
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
N/A
StarWind HCA does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
StarWind Virtual Tape Library Appliance (VTLA), another product in StarWind's portfolio that is available as a pre-configured appliance or as a software version, can be leveraged as a backup storage target by a large number of well-known backup applications. VTLA is also able to offload backups to a secondary backup repository by replicating them to public cloud targets. VTLA supports Amazon S3, Amazon Glacier, Azure Blob (premium, hot, cool, archive), Backblaze B2, Wasabi, and Iron Mountain IronCloud.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
Restore Granularity
Details
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
N/A
StarWind HCA does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
StarWind Virtual Tape Library Appliance (VTLA), another product in StarWind's portfolio that is available as a pre-configured appliance or as a software version, can be leveraged as a backup storage target by a large number of well-known backup applications. VTLA is also able to offload backups to a secondary backup repository by replicating them to public cloud targets. VTLA supports Amazon S3, Amazon Glacier, Azure Blob (premium, hot, cool, archive), Backblaze B2, Wasabi, and Iron Mountain IronCloud.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
Restore Ease-of-use
Details
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
N/A
StarWind HCA does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
StarWind Virtual Tape Library Appliance (VTLA), another product in StarWind's portfolio that is available as a pre-configured appliance or as a software version, can be leveraged as a backup storage target by a large number of well-known backup applications. VTLA is also able to offload backups to a secondary backup repository by replicating them to public cloud targets. VTLA supports Amazon S3, Amazon Glacier, Azure Blob (premium, hot, cool, archive), Backblaze B2, Wasabi, and Iron Mountain IronCloud.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is currently still in beta and expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup&Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
|
|
Disaster Recovery |
|
|
Remote Replication Type
Details
|
Built-in (Stretched Clusters only)
External
VMware vSAN does not have any remote replication capabilities of its own. Stretched Clustering with synchronous replication is the exception.
Therefore in non-stretched setups it relies on external remote replication mechanisms like VMware's free-of-charge vSphere Replication (VR) or any vSphere-compatible 3rd party remote replication application (eg. Zerto VR).
vSphere Replication requires the deployment of virtual appliances. No specific integration exists between VMware vSAN and VMware vSphere VR.
As of vSAN 7.0 vSphere Replication objects are visible in the vSAN capacity view. Objects are recognized as vSphere replica type, and space usage is accounted for under the Replication category.
|
Built-in (native; stretched clusters only)
External
StarWind HCA does not have any remote replication capabilities of its own. Stretched Clustering with synchronous replication is the exception.
Therefore in non-stretched setups it relies on external remote replication mechanisms like the ones natively available in the hypervisor platform (VMware vSphere Replication, Microsoft Hyper-V Replica) or any 3rd party remote replication application (eg. Zerto VR, Veeam VM replica, Azure Site Recovery).
vSphere Replication requires the deployment of virtual appliances. No specific integration exists between StarWind HCA and VMware vSphere VR.
|
Built-in (Stretched Clusters only)
External
VMware vSAN does not have any remote replication capabilities of its own. Stretched Clustering with synchronous replication is the exception.
Therefore in non-stretched setups it relies on external remote replication mechanisms like VMware's free-of-charge vSphere Replication (VR) or any vSphere-compatible 3rd party remote replication application (eg. Zerto VR).
vSphere Replication requires the deployment of virtual appliances. No specific integration exists between VMware vSAN and VMware vSphere VR.
As of vSAN 7.0 vSphere Replication objects are visible in the vSAN capacity view. Objects are recognized as vSphere replica type, and space usage is accounted for under the Replication category.
|
|
Remote Replication Scope
Details
|
VR: To remote sites, To VMware clouds
vSAN allows for replication of VMs to a different vSAN cluster on a remote site or to any supported VMware Cloud Service Provider (vSphere Replication to Cloud). This includes VMware on AWS and VMware on IBM Cloud.
|
VR: To remote sites, To VMware clouds
HR: To remote sites, to Microsoft Azure (not part of Windows Server 2019)
VMware vSphere Replication (VR): VMware vSphere Replication allows for replication of VMs to a different vSphere cluster on a remote site or to any supported VMware Cloud Service Provider (vSphere Replication to Cloud). This includes VMware on AWS and VMware on IBM Cloud.
Hyper-V Replica (HR): Hyper-V Replica is an integral part of the Hyper-V role. This feature enables block-level log-based replication of an active source VM to a passive destination VM located on another Hyper-V server or to Microsoft Azure (requires Azure Site Recovery, which is a paid external service, i.e. not part of Windows Server 2019).
Because Hyper-V Replica operates on the hypervisor layer, it is storage agnostic. This means that on one site you can have Hyper-V 2019 on SAN, whereas on the other site you can have Hyper-V 2019 on S2D.
|
VR: To remote sites, To VMware clouds
vSAN allows for replication of VMs to a different vSAN cluster on a remote site or to any supported VMware Cloud Service Provider (vSphere Replication to Cloud). This includes VMware on AWS and VMware on IBM Cloud.
|
|
Remote Replication Cloud Function
Details
|
VR: DR-site (VMware Clouds)
Because VMware on AWS and VMware on IBM Cloud are full vSphere implementations, replicated VMs can be started and run in a DR-scenario.
|
VR: DR-site (VMware Clouds)
HR: DR-site (Azure)
VMware vSphere Replication (VR): Because VMware on AWS and VMware on IBM Cloud are full vSphere implementations, replicated VMs can be started and run in a DR-scenario.
|
VR: DR-site (VMware Clouds)
Because VMware on AWS and VMware on IBM Cloud are full vSphere implementations, replicated VMs can be started and run in a DR-scenario.
|
|
Remote Replication Topologies
Details
|
VR: Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
|
VR: Single-site and multi-site
HR: Single-site and chained
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
Hyper-V Replica (HR): Besides 1-to-1 replication, Hyper-V Replica allows for extended (chained) replication. A VM can be replicated from a primary host to a secondary host, and then be replicated from the secondary host to a third host. Please note that it is not possible to replicate from the primary host directly to both the second and the third host (1-to-many).
|
VR: Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
|
|
Remote Replication Frequency
Details
|
VR: 5 minutes (Asynchronous)
vSAN: Continuous (Stretched Cluster)
The 'Stretched Cluster' feature is only available in the Enterprise edition.
|
VR: 5 minutes (Asynchronous)
HR: 30 seconds (Asynchronous)
SW: Continuous (Stretched Cluster)
Hyper-V Replica (HR): With Hyper-V Replica replication frequency can be set to 30 seconds, 5 minutes, or 15 minutes on a per-VM basis.
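As an illustration, the per-VM frequency can be set with the Hyper-V PowerShell module. This is a minimal sketch; the host and VM names are hypothetical:
# Enable replication of a VM to a replica server with a 30-second frequency (Kerberos over HTTP)
Enable-VMReplication -VMName "SQL01" -ReplicaServerName "hv-replica01.lab.local" -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 30
# Change the replication frequency of an already replicating VM to 5 minutes (300 seconds)
Set-VMReplication -VMName "SQL01" -ReplicationFrequencySec 300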
|
VR: 5 minutes (Asynchronous)
vSAN: Continuous (Stretched Cluster)
The 'Stretched Cluster' feature is only available in the Enterprise edition.
|
|
Remote Replication Granularity
Details
|
VR: VM
|
VR: VM
HR: VM
Both vSphere Replication (VR) and Hyper-V Replica operate on the VM level.
|
VR: VM
|
|
Consistency Groups
Details
|
VR: No
Protection is on a per-VM basis only.
|
VR: No
HR: No
|
VR: No
Protection is on a per-VM basis only.
|
|
|
VMware SRM (certified)
VMware Interoperability Matrix shows official support for SRM 8.3.
|
N/A
StarWind HCA does not have native asynchronous remote replication capabilities and does not provide a Storage Replication Adapter (SRA) for VMware SRM implementations.
|
VMware SRM (certified)
VMware Interoperability Matrix shows official support for SRM 8.3.
|
|
Stretched Cluster (SC)
Details
|
VMware vSphere: Yes (certified)
There is read-locality in place preventing sub-optimal cross-site data transfers.
vSAN 7.0 introduces redirection for all VM I/O from a capacity-strained site to the other site, until the capacity is freed up. This feature improves the uptime of VMs.
The Stretched Cluster feature is only available in the Enterprise edition of vSAN.
|
vSphere: Yes
Hyper-V: Yes
StarWind HCA supports active-active Stretched Clustering, which leverages native synchronous block-level replication.
|
VMware vSphere: Yes (certified)
There is read-locality in place preventing sub-optimal cross-site data transfers.
vSAN 7.0 introduces redirection for all VM I/O from a capacity-strained site to the other site, until the capacity is freed up. This feature improves the uptime of VMs.
|
|
|
3-sites: two active sites + tie-breaker in 3rd site
NEW
The use of the Stretched Cluster Witness Appliance automates failover decisions in order to avoid split-brain scenarios like network partitions and remote site failures. The witness is deployed as a VM within a third site.
vSAN 6.7 introduced the option to configure a dedicated VMkernel NIC for witness traffic. This enhances data security because witness traffic is isolated from vSAN data traffic.
vSAN 7.0 U1 introduces the vSAN Shared Witness. This feature allows end-user organizations to leverage a single Witness Appliance for up to 64 stretched clusters. This is especially useful in scenarios where many edge locations are involved. The size of the Witness Appliance determines the maximum number of clusters and components that can be managed.
|
vSphere: 2+ sites = minimum of two active sites + optional tie-breaker in 3rd site
Hyper-V: 2+ sites = minimum of two active sites + optional tie-breaker in 3rd site
|
3-sites: two active sites + tie-breaker in 3rd site
NEW
The use of the Stretched Cluster Witness Appliance automates failover decisions in order to avoid split-brain scenarios like network partitions and remote site failures. The witness is deployed as a VM within a third site.
vSAN 6.7 introduced the option to configure a dedicated VMkernel NIC for witness traffic. This enhances data security because witness traffic is isolated from vSAN data traffic.
vSAN 7.0 U1 introduces the vSAN Shared Witness. This feature allows end-user organizations to leverage a single Witness Appliance for up to 64 stretched clusters. This is especially useful in scenarios where many edge locations are involved. The size of the Witness Appliance determines the maximum number of clusters and components that can be managed.
|
|
|
<=5ms RTT
|
<=5ms RTT
|
<=5ms RTT
|
|
|
<=15 hosts at each active site
|
No set maximum number of nodes
|
<=15 nodes at each active site
For Dell EMC VxRail the minimum stretched cluster configuration is 3+3+1, meaning 3 nodes on the first site, 3 nodes on the second site and 1 tie-breaker VM on a third site. The maximum stretched cluster configuration is 15+15+1, meaning 15 nodes on the first site, 15 nodes on the second site and 1 tie-breaker VM on a third site.
|
|
SC Data Redundancy
Details
|
Replicas: 0-3 Replicas (1N-4N) at each active site
Erasure Coding: RAID5-6 at each active site
VMware vSAN 6.6 introduced enhanced stretched cluster availability with Local Fault Protection. You can provide local fault protection for virtual machine objects within a single site in a stretched cluster. You can define a Primary level of failures to tolerate for the cluster, and a Secondary level of failures to tolerate for objects within a single site. When one site is unavailable, vSAN maintains availability with local redundancy in the available site.
In the case of stretched clustering, selecting 0 replicas means that there is only one instance of the data available at each of the active sites.
Local Fault Protection is only available in the Enterprise edition of vSAN.
|
Replicas: 0-3 Replicas at each active site
|
Replicas: 0-3 Replicas (1N-4N) at each active site
Erasure Coding: RAID5-6 at each active site
VMware vSAN 6.6 and up provide enhanced stretched cluster availability with Local Fault Protection. You can provide local fault protection for virtual machine objects within a single site in a stretched cluster. You can define a Primary level of failures to tolerate for the cluster, and a Secondary level of failures to tolerate for objects within a single site. When one site is unavailable, vSAN maintains availability with local redundancy in the available site.
In the case of stretched clustering, selecting 0 replicas means that there is only one instance of the data available at each of the active sites.
Local Fault Protection is only available in the Enterprise edition of vSAN.
|
|
|
Data Services
|
Score:58.6% - Features:29
- Green(Full Support):12
- Amber(Partial):10
- Red(Not support):7
- Gray(FYI only):0
|
Score:50.0% - Features:29
- Green(Full Support):9
- Amber(Partial):11
- Red(Not support):9
- Gray(FYI only):0
|
Score:58.6% - Features:29
- Green(Full Support):12
- Amber(Partial):10
- Red(Not support):7
- Gray(FYI only):0
|
|
|
|
Efficiency |
|
|
Dedup/Compr. Engine
Details
|
All-Flash: Software
Hybrid: N/A
Deduplication and compression are only available in Enterprise and Advanced editions of vSAN.
Deduplication and compression are not available for vSAN hybrid configurations.
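As an illustration, deduplication and compression can be enabled cluster-wide with PowerCLI. This is a minimal sketch assuming a hypothetical vCenter address and cluster name; the parameter names follow the vSAN PowerCLI module and should be verified against the installed PowerCLI version:
# Connect to vCenter and enable deduplication & compression (space efficiency) on an all-flash vSAN cluster
Connect-VIServer -Server "vcenter.lab.local"
Set-VsanClusterConfiguration -Configuration (Get-Cluster -Name "vSAN-Cluster") -SpaceEfficiencyEnabled $true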
|
Software (integration; limited)
StarWind HCA does not have any native data efficiency (deduplication and/or compression) capabilities. Post-process deduplication and compression can be used by leveraging the native capabilities of the Windows Server OS. However, this option is only applicable to bare-metal StarWind HCA deployments and is not available to hypervisor-based StarWind HCA deployments.
|
All-Flash: Software
Hybrid: N/A
Deduplication and compression are only available in the VxRail All-Flash ('F') appliances.
|
|
Dedup/Compr. Function
Details
|
Efficiency (Space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focused on efficiency.
|
Efficiency (space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focused on efficiency.
|
Efficiency (Space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focused on efficiency.
|
|
Dedup/Compr. Process
Details
|
All-Flash: Inline (post-ack)
Hybrid: N/A
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write, eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
|
Post-Processing
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write, eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
Windows Server 2019 deduplication is performed outside of IO path (post-processing) and is multi-threaded to speed up processing and keep performance impact minimal.
|
All-Flash: Inline (post-ack)
Hybrid: N/A
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write, eg. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
|
|
Dedup/Compr. Type
Details
|
All-Flash: Optional
Hybrid: N/A
NEW
Compression occurs after deduplication and just before the data is actually written to the persistent data layer.
In vSAN 7.0 U1 and onwards there are three settings to choose from: 'None', 'Compression Only' or 'Deduplication'.
When choosing 'Compression only' deduplication is effectively disabled. This optimizes storage performance, resource usage as well as availability. When using 'Compression only' a single disk failing no longer impacts the entire disk group.
|
Optional
By default post-process deduplication and compression are turned off. Deduplication and compression can be enabled for selected volumes, either manually or scheduled.
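As an illustration, the Windows Server deduplication feature can be enabled per volume and run manually or on a schedule. This is a minimal sketch assuming a hypothetical data volume D::
# Install the deduplication feature and enable post-process dedup on the volume
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "D:" -UsageType Default
# Run an optimization job manually (a default background schedule also exists)
Start-DedupJob -Volume "D:" -Type Optimization
# Review the achieved space savings once the job has completed
Get-DedupStatus -Volume "D:"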
|
All-Flash: Optional
Hybrid: N/A
NEW
Compression occurs after deduplication and just before the data is actually written to the persistent data layer.
In vSAN 7.0 U1 and onwards there are three settings to choose from: 'None', 'Compression Only' or 'Deduplication'.
When choosing 'Compression only' deduplication is effectively disabled. This optimizes storage performance, resource usage as well as availability. When using 'Compression only' a single disk failing no longer impacts the entire disk group.
|
|
Dedup/Compr. Scope
Details
|
Persistent data layer
Deduplication and compression are not used for optimizing read/write cache
|
Persistent data layer
Microsoft Windows Server native deduplication and compression can be set for StarWind virtual devices formatted as NTFS.
Windows Server 2019 Deduplication only happens in the persistent data layer and not in the cache. The cache is not accessible from the file system and so deduplication cannot be applied to it.
|
Persistent data layer
Deduplication and compression are not used for optimizing read/write cache.
|
|
Dedup/Compr. Radius
Details
|
Disk Group
Deduplication and compression are enabled as a cluster-wide setting and are performed within each disk group. Redundant copies of a block within the same disk group are reduced to one copy. However, redundant blocks across multiple disk groups are not deduplicated.
|
Volume
Windows Server 2019 deduplication is highly scalable and can be used with volumes up to 64TB and files up to 4TB in size. Data deduplication identifies repeated patterns across files on that volume.
|
Disk Group
Deduplication and compression are enabled as a cluster-wide setting and are performed within each disk group. Redundant copies of a block within the same disk group are reduced to one copy. However, redundant blocks across multiple disk groups are not deduplicated.
|
|
Dedup/Compr. Granularity
Details
|
4 KB fixed block size
vSAN's deduplication algorithm utilizes a fixed 4 KB block size.
|
32-128 KB variable block size
By leveraging deduplication in Windows Server 2019, files within a deduplication-enabled volume are segmented into small variable-sized chunks (32–128 KB), duplicate chunks are identified, and only a single copy of each chunk is physically stored.
|
4 KB fixed block size
vSAN's deduplication algorithm utilizes a fixed 4 KB block size.
|
|
Dedup/Compr. Guarantee
Details
|
N/A
Enabling deduplication and compression can reduce the amount of storage consumed by as much as 7x (7:1 ratio).
|
N/A
|
N/A
Enabling deduplication and compression can reduce the amount of storage consumed by as much as 7x (7:1 ratio).
|
|
|
Full
Data can be redistributed evenly across all nodes in the cluster when a node is either added or removed.
For VMware vSAN data redistribution happens in two ways:
1. Automated: when physical disks are between 30% and 80% full and a node is added to the vSAN cluster, a health alert is generated that allows the end-user to execute an automated data rebalancing run. For this VMware uses the term 'proactive'.
2. Automatic: when physical disks are more than 80% full, vSAN executes a data rebalancing run fully automatically, without requiring any user intervention. For this VMware uses the term 'reactive'.
As data is written, all nodes in the cluster service RF copies even when no VMs are running on them, which ensures data is distributed evenly across all nodes in the cluster.
VMware vSAN 6.7 U3 included proactive rebalancing enhancements. All rebalancing activities can be automated with cluster-wide configuration and threshold settings. Prior to this release, proactive rebalancing was manually initiated after being alerted by vSAN health checks.
|
Full (optional)
Data rebalancing needs to be initiated manually. When a new StarWind HCA node is added, a StarWind Support Engineer assists with replicating data to the new partner.
|
Full
Data can be redistributed evenly across all nodes in the cluster when a node is either added or removed.
For VMware vSAN data redistribution happens in two ways:
1. Automated: when physical disks are between 30% and 80% full and a node is added to the vSAN cluster, a health alert is generated that allows the end-user to execute an automated data rebalancing run. For this VMware uses the term 'proactive'.
2. Automatic: when physical disks are more than 80% full, vSAN executes a data rebalancing run fully automatically, without requiring any user intervention. For this VMware uses the term 'reactive'.
As data is written, all nodes in the cluster service RF copies even when no VMs are running on them, which ensures data is distributed evenly across all nodes in the cluster.
VMware vSAN 6.7 U3 includes proactive rebalancing enhancements. All rebalancing activities can be automated with cluster-wide configuration and threshold settings. Prior to this release, proactive rebalancing was manually initiated after being alerted by vSAN health checks.
|
|
|
N/A
The VMware vSAN storage architecture does not include multiple persistent storage layers, but rather consists of a caching layer (fastest storage devices) and a persistent layer (slower/most cost-efficient storage devices).
|
Partial (integration; optional)
StarWind HCA can leverage the data tiering capabilities available in the Windows Server 2016/2019 OS (Storage Spaces). This is the case for Hyper-V and VMware vSphere environments.
|
N/A
The VMware vSAN storage architecture does not include multiple persistent storage layers, but rather consists of a caching layer (fastest storage devices) and a persistent layer (slower/most cost-efficient storage devices).
|
|
|
|
Performance |
|
|
|
vSphere: Integrated
VMware vSAN is an integral part of the VMware vSphere platform and as such is not a separate storage platform.
VMware vSAN 6.7 adds TRIM/UNMAP support: space that is no longer used can be automatically reclaimed, reducing the overall capacity needed for running workloads.
|
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; UNMAP/TRIM
StarWind HCA iSCSI is fully qualified for all VMware vSphere VAAI-Block capabilities that include: Thin Provisioning, HW Assisted Locking, Full Copy, Block Zero.
StarWind HCA is also fully qualified for Microsoft Hyper-V 2016 and 2019 ODX, as well as TRIM (for SATA SSDs) and UNMAP (for SAS SSDs).
Note: ODX is not used for files smaller than 256KB.
VAAI = VMware vSphere APIs for Array Integration
ODX = Offloaded Data Transfers
UNMAP/TRIM support allows the Windows operating system to communicate the inactive block IDs to the storage system. The storage system can wipe these unused blocks internally.
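As an illustration, Windows can be instructed to resend UNMAP/TRIM hints for all currently unused blocks on a volume. This is a minimal sketch assuming a hypothetical data volume D::
# Send UNMAP/TRIM hints for free space on volume D: so the underlying storage can reclaim the blocks
Optimize-Volume -DriveLetter D -ReTrim -Verbose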
|
vSphere: Integrated
VMware vSAN is an integral part of the VMware vSphere platform and as such is not a separate storage platform.
VMware vSAN 6.7 adds TRIM/UNMAP support: space that is no longer used can be automatically reclaimed, reducing the overall capacity needed for running workloads.
|
|
|
IOPs Limits (maximums)
QoS is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
vSAN currently supports only the first method and focuses on IOPs. 'MBps Limits' cannot be set. It is also not possible to guarantee a certain amount of IOPs for any given VM.
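As an illustration of how such an IOPS limit is typically applied, below is a minimal PowerCLI sketch that builds a storage policy with a limit rule. The capability identifier 'VSAN.iopsLimit' and the policy name are assumptions and should be verified against the capabilities exposed by the connected vCenter:
# Build a storage policy that caps vSAN objects at 500 normalized IOPS
$cap     = Get-SpbmCapability | Where-Object { $_.Name -eq "VSAN.iopsLimit" }   # assumed capability identifier
$rule    = New-SpbmRule -Capability $cap -Value 500
$ruleset = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "vSAN-IOPS-Limit-500" -AnyOfRuleSets $ruleset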
|
N/A
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
StarWind HCA currently does not offer any QoS mechanisms.
|
IOPs Limits (maximums)
QoS is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
The vSAN software inside VxRail currently supports only the first method and focuses on IOPs. 'MBps Limits' cannot be set. It is also not possible to guarantee a certain amount of IOPs for any given VM.
|
|
|
Per VM/Virtual Disk
Quality of Service (QoS) for vSAN is normalized to a 32KB block size, and treats reads the same as writes.
|
N/A
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
StarWind HCA currently does not offer any QoS mechanisms.
|
Per VM/Virtual Disk
Quality of Service (QoS) for vSAN is normalized to a 32KB block size, and treats reads the same as writes.
|
|
|
Cache Read Reservation: Per VM/Virtual Disk
With vSAN the Cache Read Reservation policy for a particular VM can be set to 100% to allow all data to also exist entirely on the flash layer. The difference with Nutanix 'VM Flash Mode' is that with 'VM Flash Mode' persistent data of the VM resides on flash and is never destaged to spinning disks. In contrast, with vSAN's Cache Read Reservation data exists twice: one instance on persistent magnetic disk storage and one instance within the SSD read cache.
|
N/A
|
Cache Read Reservation: Per VM/Virtual Disk
With VxRail the Cache Read Reservation policy for a particular VM can be set to 100% to allow all data to also exist entirely on the flash layer. The difference with Nutanix 'VM Flash Mode' is that with 'VM Flash Mode' persistent data of the VM resides on flash and is never destaged to spinning disks. In contrast, with VxRail's Cache Read Reservation data exists twice: one instance on persistent magnetic disk storage and one instance within the SSD read cache.
|
|
|
|
Security |
|
|
Data Encryption Type
Details
|
Built-in (native)
|
N/A
StarWind HCA does not have native data encryption capabilities. Adding data encryption requires encryption-capable storage hardware. Alternatively, software options such as Microsoft BitLocker or vSphere Virtual Machine Encryption can be leveraged.
VMware Virtual Machine Encryption requires a VMware vSphere Enterprise Plus or VMware vSphere Platinum license.
|
Built-in (native)
|
|
Data Encryption Options
Details
|
Hardware: N/A
Software: vSAN data encryption; HyTrust DataControl (validated)
NEW
Hardware: vSAN no longer supports self-encrypting drives (SEDs).
Software: vSAN supports native data-at-rest encryption of the vSAN datastore. When encryption is enabled, vSAN performs a rolling reformat of every disk group in the cluster. vSAN encryption requires a trusted connection between vCenter Server and a key management server (KMS). The KMS must support the Key Management Interoperability Protocol (KMIP) 1.1 standard. In contrast, vSAN native data-in-transit encryption does not require a KMS server. vSAN native data-at-rest and data-in-transit encryption are only available in the Enterprise edition.
vSAN encryption has been validated for the Federal Information Processing Standard (FIPS) 140-2 Level 1.
VMware has also validated the interoperability of HyTrust DataControl software encryption with its vSAN platform.
|
Hardware: Self-encrypting drives (SEDs)
Software: Microsoft BitLocker Drive Encryption; VMware Virtual machine Encryption
Hardware: In StarWind HCA deployments the encryption data service capabilities can be offloaded to hardware-based SED offerings available in server- and storage solutions.
Software: Microsoft BitLocker provides software encryption on standalone and cluster-based NTFS or ReFS(v2) volumes. Support for encrypting cluster volumes (CSV) was added in Windows Server 2012.
Microsoft BitLocker uses the Advanced Encryption Standard (AES) encryption algorithm with either 128-bit or 256-bit keys. It is generally recommended to use 256-bit keys because of their superior strength.
For pure Windows Server StarWind HCA deployments, BitLocker can be used to encrypt the local storage, encrypt Cluster Shared Volumes created on StarWind HA devices where VMs are located, or encrypt VMs with a Trusted Platform Module.
As for ESXi hosts, BitLocker can be used to encrypt the storage of Windows Server VMs (non-bootable partitions only). However, this does not provide full security. Alternatively, vSphere Virtual Machine Encryption can be used.
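As an illustration, BitLocker can be enabled on a data volume from PowerShell. This is a minimal sketch assuming a hypothetical data volume D: on a node whose OS volume is already BitLocker-protected:
# Encrypt the data volume with XTS-AES-256 and add a recovery password protector
Enable-BitLocker -MountPoint "D:" -EncryptionMethod XtsAes256 -UsedSpaceOnly -RecoveryPasswordProtector
# Optionally let the node unlock the volume automatically at boot (requires an encrypted OS volume)
Enable-BitLockerAutoUnlock -MountPoint "D:"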
|
Hardware: N/A
Software: vSAN data encryption; HyTrust DataControl (validated)
NEW
Hardware: vSAN no longer supports self-encrypting drives (SEDs).
Software: vSAN supports native data-at-rest encryption of the vSAN datastore. When encryption is enabled, vSAN performs a rolling reformat of every disk group in the cluster. vSAN encryption requires a trusted connection between vCenter Server and a key management server (KMS). The KMS must support the Key Management Interoperability Protocol (KMIP) 1.1 standard. In contrast, vSAN native data-in-transit encryption does not require a KMS server. vSAN native data-at-rest and data-in-transit encryption are only available in the Enterprise edition.
vSAN encryption has been validated for the Federal Information Processing Standard (FIPS) 140-2 Level 1.
VMware has also validated the interoperability of HyTrust DataControl software encryption with its vSAN platform.
|
|
Data Encryption Scope
Details
|
Hardware: N/A
Software vSAN: Data-at-rest + Data-in-transit
Software Hytrust: Data-at-rest + Data-in-transit
NEW
Hardware: N/A
Software: VMware vSAN 7.0 U1 encryption provides enhanced security for data on a drive as well as data in transit. Both are optional and can be enabled separately. HyTrust DataControl encryption also provides encryption for data-at-rest and data-in-transit.
|
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: SEDs provide encryption for data-at-rest; SEDs do not provide encryption for data-in-transit.
Software: Microsoft BitLocker provides encryption for data-at-rest as well as data-in-transit during live migration of a VM; VMware Virtual Machine Encryption provides encryption for data-at-rest.
|
Hardware: N/A
Software vSAN: Data-at-rest + Data-in-transit
Software Hytrust: Data-at-rest + Data-in-transit
NEW
Hardware: N/A
Software: VMware vSAN 7.0 U1 encryption provides enhanced security for data on a drive as well as data in transit. Both are optional and can be enabled separately. HyTrust DataControl encryption also provides encryption for data-at-rest and data-in-transit.
|
|
Data Encryption Compliance
Details
|
Hardware: N/A
Software: FIPS 140-2 Level 1 (vSAN); FIPS 140-2 Level 1 (HyTrust)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (BitLocker; VMware Virtual Machine Encryption)
Microsoft BitLocker has been validated for Federal Information Processing Standard (FIPS) 140-2 in March 2018.
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
Hardware: N/A
Software: FIPS 140-2 Level 1 (vSAN); FIPS 140-2 Level 1 (HyTrust)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
|
Data Encryption Efficiency Impact
Details
|
Hardware: N/A
Software: No (vSAN); Yes (HyTrust)
Hardware: N/A
Software vSAN: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software Hytrust: Because HyTrust DataControl is an end-to-end solution, encryption is performed at the start of the write path and some efficiency mechanisms (eg. deduplication and compression) are effectively negated.
|
Hardware: No
Software: No (Bitlocker); No (VMware Virtual Machine Encryption)
Hardware: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software: Microsoft BitLocker can be used to provide whole-disk encryption on a deduplicated disk since BitLocker sits at the end of the write path. VMware Virtual Machine Encryption encrypts the data on the host before it is written to storage, thus negatively impacting backend storage features such as deduplication and compression. However, Microsoft post-process deduplication is executed at the filesystem layer.
|
Hardware: N/A
Software: No (vSAN); Yes (HyTrust)
Hardware: N/A
Software vSAN: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software Hytrust: Because HyTrust DataControl is an end-to-end solution, encryption is performed at the start of the write path and some efficiency mechanisms (eg. deduplication and compression) are effectively negated.
|
|
|
|
Test/Dev |
|
|
|
No
VMware vSAN itself does not include fast cloning capabilities.
Cloning operations actually copy all the data to provide a second instance. When cloning a running VM on vSAN, all the VMDKs on the source VM are snapshotted first before cloning them to the destination VM.
VMware vSphere, however, does provide Instant Clone technology, which enables you to clone a VM instantly from both a CPU and a memory standpoint.
|
No
StarWind HCA itself does not include fast cloning capabilities. Cloning functionality is provided by the hypervisor platform.
|
No
Dell EMC VxRail does not include fast cloning capabilities.
Cloning operations actually copy all the data to provide a second instance. When cloning a running VM on VxRail, all the VMDKs on the source VM are snapshotted first before cloning them to the destination VM.
|
|
|
|
Portability |
|
|
Hypervisor Migration
Details
|
Hyper-V to ESXi (external)
VMware Converter 6.2 supports the following Guest Operating Systems for VM conversion from Hyper-V to vSphere:
- Windows 7, 8, 8.1, 10
- Windows 2008/R2, 2012/R2 and 2016
- RHEL 4.x, 5.x, 6.x, 7.x
- SUSE 10.x, 11.x
- Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS
- CentOS 6.x, 7.0
The VMs have to be in a powered-off state in order to be migrated across hypervisor platforms.
|
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
StarWind V2V Converter is a StarWind proprietary tool that can be leveraged for virtual-to-virtual (V2V) use cases as well as physical-to-virtual (P2V) use cases. The tool supports all major VM formats: VHD/VHDX, VMDK, QCOW2, and StarWind native IMG. Both the source and target VM copies exist simultaneously because the conversion procedure is more like a cloning process than a replacement. As a convenient side effect, StarWind V2V Converter basically creates a backup copy of the VMs, making the process even safer.
When converting the VM to VHDX format, StarWind V2V Converter enables the activation of Windows Repair Mode. This way the virtual machine will automatically adapt to the given hardware environment and negate any compatibility problems.
|
Hyper-V to ESXi (external)
VMware Converter 6.2 supports the following Guest Operating Systems for VM conversion from Hyper-V to vSphere:
- Windows 7, 8, 8.1, 10
- Windows 2008/R2, 2012/R2 and 2016
- RHEL 4.x, 5.x, 6.x, 7.x
- SUSE 10.x, 11.x
- Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS
- CentOS 6.x, 7.0
The VMs have to be in a powered-off state in order to be migrated across hypervisor platforms.
|
|
|
|
File Services |
|
|
|
Built-in (native)
External (vSAN Certified)
NEW
vSAN 7.0 U1 has integrated file services. vSAN File Services leverages a scale-out architecture by deploying an Agent/Appliance VM (OVF templates) on individual ESXi hosts. Within each Agent/Appliance VM a container, or “protocol stack”, is running. The 'protocol stack' creates a file system that is spread across the VMware vSAN Virtual Distributed File System (VDFS), and exposes the file system as an NFS file share. The file shares support NFSv3, NFSv4.1, SMBv2.1 and SMBv3 by default. A file share has a 1:1 relationship with a VDFS volume and is formed out of vSAN objects. The minimum number of containers that need to be deployed is 3, the maximum is 32 in any given cluster. vSAN 7.0 File Services is deployed through the vSAN File Service wizard.
vSAN File Services currently has the following restrictions:
- not supported on 2-node clusters,
- not supported on stretched clusters,
- not supported in combination with vLCM (vSphere Lifecycle Manager),
- mounting the NFS share from an ESXi host is not supported,
- no integration with vSAN Fault Domains.
The alternative to vSAN File Services is to provide file services through Windows guest VMs (SMB) and/or Linux guest VMs (NFS) on top of vSAN. These file services can be made highly available by using clustering techniques.
Another alternative is to use virtual storage appliances from a third-party to host file services on top of vSAN. The following 3rd party File Services partner products are certified with vSAN 6.7:
- Cohesity DataPlatform 6.1
- Dell EMC Unity VSA 4.4
- NetApp ONTAP Select vNAS 9.5
- Nexenta NexentaStor VSA 5.1.2 and 5.2.0VM
- Panzura Freedom Filer VSA 7.1.9.3
However, none of the mentioned platforms have been certified for vSAN 7.0 or 7.0U1 (yet).
|
Built-in (native)
StarWind HCA delivers out-of-the-box (OOB) file services by leveraging Windows native SMB/NFS and Scale-out File Services capabilities. StarWind HCA is capable of simultaneously handling highly available block- and file-level services.
StarWind virtual devices formatted as NTFS volumes are provisioned to the Microsoft file services layer. This means any file services configuration, e.g. quotas, is performed from within the respective Windows service consoles.
|
Built-in (native)
External (vSAN Certified)
NEW
vSAN 7.0 U1 has integrated file services. vSAN File Services leverages a scale-out architecture by deploying an Agent/Appliance VM (OVF templates) on individual ESXi hosts. Within each Agent/Appliance VM a container, or “protocol stack”, is running. The 'protocol stack' creates a file system that is spread across the VMware vSAN Virtual Distributed File System (VDFS), and exposes the file system as an NFS file share. The file shares support NFSv3, NFSv4.1, SMBv2.1 and SMBv3 by default. A file share has a 1:1 relationship with a VDFS volume and is formed out of vSAN objects. The minimum number of containers that need to be deployed is 3, the maximum is 32 in any given cluster. vSAN 7.0 File Services is deployed through the vSAN File Service wizard.
vSAN File Services currently has the following restrictions:
- not supported on 2-node clusters,
- not supported on stretched clusters,
- not supported in combination with vLCM (vSphere Lifecycle Manager),
- mounting the NFS share from an ESXi host is not supported,
- no integration with vSAN Fault Domains.
The alternative to vSAN File Services is to provide file services through Windows guest VMs (SMB) and/or Linux guest VMs (NFS) on top of vSAN. These file services can be made highly available by using clustering techniques.
Another alternative is to use virtual storage appliances from a third-party to host file services on top of vSAN. The following 3rd party File Services partner products are certified with vSAN 6.7:
- Cohesity DataPlatform 6.1
- Dell EMC Unity VSA 4.4
- NetApp ONTAP Select vNAS 9.5
- Nexenta NexentaStor VSA 5.1.2 and 5.2.0VM
- Panzura Freedom Filer VSA 7.1.9.3
However, none of the mentioned platforms have been certified for vSAN 7.0 or 7.0U1 (yet).
|
|
Fileserver Compatibility
Details
|
Windows clients
Linux clients
NEW
vSAN 7.0 U1 File Services supports all client platforms that support NFS v3/v4.1 or SMB v2.1/v3. This includes traditional use cases as well as persistent volumes for Kubernetes on vSAN datastores.
vSAN 7.0 U1 File Services supports Microsoft Active Directory and Kerberos authentication for NFS.
VMware does not support leveraging vSAN 7.0 U1 File Services file shares as NFS datastores on which VMs can be stored and run.
|
Windows clients
Linux clients
Because StarWind HCA leverages Windows Server native CIFS/NFS and Scale-out File services, most Windows and Linux clients are able to connect.
|
Windows clients
Linux clients
NEW
vSAN 7.0 U1 File Services supports all client platforms that support NFS v3/v4.1 or SMB v2.1/v3. This includes traditional use cases as well as persistent volumes for Kubernetes on vSAN datastores.
vSAN 7.0 U1 File Services supports Microsoft Active Directory and Kerberos authentication for NFS.
VMware does not support leveraging vSAN 7.0 U1 File Services file shares as NFS datastores on which VMs can be stored and run.
|
|
Fileserver Interconnect
Details
|
SMB
NFS
NEW
vSAN 7.0 U1 File Services supports all client platforms that support NFS v3/v4.1 or SMB v2.1/v3. This includes traditional use cases as well as persistent volumes for Kubernetes on vSAN datastores.
vSAN 7.0 U1 File Services supports Microsoft Active Directory and Kerberos authentication for NFS.
VMware does not support leveraging vSAN 7.0 U1 File Services file shares as NFS datastores on which VMs can be stored and run.
|
SMB
NFS
Because StarWind HCA leverages Windows Server native CIFS/NFS and Scale-out File services, Windows Server platform compatibility applies:
SMB versions 1, 2 and 3 are supported, as are NFS versions 2, 3 and 4.1.
|
SMB
NFS
NEW
vSAN 7.0 U1 File Services supports all client platforms that support NFS v3/v4.1 or SMB v2.1/v3. This includes traditional use cases as well as persistent volumes for Kubernetes on vSAN datastores.
vSAN 7.0 U1 File Services supports Microsoft Active Directory and Kerberos authentication for NFS.
VMware does not support leveraging vSAN 7.0 U1 File Services file shares as NFS datastores on which VMs can be stored and run.
|
|
Fileserver Quotas
Details
|
Share Quotas
vSAN 7.0 File Services supports share quotas through the following settings:
- Share warning threshold: When the share reaches this threshold, a warning message is displayed.
- Share hard quota: When the share reaches this threshold, new block allocation is denied.
|
Share Quotas, User Quotas
Because StarWind HCA leverages Windows Server native CIFS/NFS and Scale-out File services, all Quota features available in Windows Server can be used.
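As an illustration, a quota can be placed on a share folder with File Server Resource Manager (FSRM). This is a minimal sketch assuming a hypothetical share path:
# Install FSRM and create a 50 GB soft (warning-only) quota on a share folder
Install-WindowsFeature -Name FS-Resource-Manager -IncludeManagementTools
New-FsrmQuota -Path "D:\Shares\Projects" -Size 50GB -SoftLimit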
|
Share Quotas
vSAN 7.0 File Services supports share quotas through the following settings:
- Share warning threshold: When the share reaches this threshold, a warning message is displayed.
- Share hard quota: When the share reaches this threshold, new block allocation is denied.
|
|
Fileserver Analytics
Details
|
Partial
vSAN 7.0 File Services provide some analytics capabilities:
- Amount of capacity consumed by vSAN File Services file shares,
- Skyline health monitoring with regard to infrastructure, file server and shares.
|
Partial
Because StarWind HCA leverages Windows Server native CIFS/NFS, Windows Server built-in auditing capabilities can be used.
|
Partial
vSAN 7.0 File Services provide some analytics capabilities:
- Amount of capacity consumed by vSAN File Services file shares,
- Skyline health monitoring with regard to infrastructure, file server and shares.
|
|
|
|
Object Services |
|
|
Object Storage Type
Details
|
N/A
VMware vSAN does not provide any object storage serving capabilities of its own.
|
N/A
StarWind HCA does not provide any object storage serving capabilities of its own.
|
N/A
Dell EMC VxRail does not provide any object storage serving capabilities of its own.
|
|
Object Storage Protection
Details
|
N/A
VMware vSAN does not provide any object storage serving capabilities of its own.
|
N/A
StarWind HCA does not provide any object storage serving capabilities of its own.
|
N/A
Dell EMC VxRail does not provide any object storage serving capabilities of its own.
|
|
Object Storage LT Retention
Details
|
N/A
VMware vSAN does not provide any object storage serving capabilities of its own.
|
N/A
StarWind HCA does not provide any object storage serving capabilities of its own.
|
N/A
Dell EMC VxRail does not provide any object storage serving capabilities of its own.
|
|
|
Management
|
Score:82.1% - Features:14
- Green(Full Support):10
- Amber(Partial):3
- Red(Not support):1
- Gray(FYI only):0
|
Score:60.7% - Features:14
- Green(Full Support):5
- Amber(Partial):7
- Red(Not support):2
- Gray(FYI only):0
|
Score:85.7% - Features:14
- Green(Full Support):11
- Amber(Partial):2
- Red(Not support):1
- Gray(FYI only):0
|
|
|
|
Interfaces |
|
|
GUI Functionality
Details
|
Centralized
vSAN management, capacity monitoring, performance monitoring and efficiency reporting are performed through the vSphere Web Client interface.
Other functionality such as backups and snapshots are also managed from the vSphere Web Client Interface.
|
Centralized
StarWind Web-based Management provides the ability to administrate the StarWind HCA infrastructure from any remote location using any HTML 5 web console.
|
Centralized
Management of the vSAN software, capacity monitoring, performance monitoring and efficiency reporting can be performed through the vSphere Web Client interface.
Other functionality such as backups and snapshots are also managed from the vSphere Web Client Interface.
|
|
|
Single-site and Multi-site
Centralized management of multicluster environments can be performed through the vSphere Web Client by using Enhanced Linked Mode.
Enhanced Linked Mode links multiple vCenter Server systems by using one or more Platform Services Controllers. Enhanced Linked Mode enables you to log in to all linked vCenter Server systems simultaneously with a single user name and password. You can view and search across all linked vCenter Server systems. Enhanced Linked Mode replicates roles, permissions, licenses, and other key data across systems. Enhanced Linked Mode requires the vCenter Server Standard licensing level, and is not supported with vCenter Server Foundation or vCenter Server Essentials.
|
Single-site and Multi-site
From within the StarWind Web Management Console servers from different clusters and sites can be added.
|
Single-site and Multi-site
Centralized management of multicluster environments can be performed through the vSphere Web Client by using Enhanced Linked Mode.
Enhanced Linked Mode links multiple vCenter Server systems by using one or more Platform Services Controllers. Enhanced Linked Mode enables you to log in to all linked vCenter Server systems simultaneously with a single user name and password. You can view and search across all linked vCenter Server systems. Enhanced Linked Mode replicates roles, permissions, licenses, and other key data across systems. Enhanced Linked Mode requires the vCenter Server Standard licensing level, and is not supported with vCenter Server Foundation or vCenter Server Essentials.
|
|
GUI Perf. Monitoring
Details
|
Advanced
NEW
Performance information can be viewed on the cluster level, the Host level and the VM level. Per VM there is also a view on backend performance. Performance graphs focus on IOPS, MB/s and Latency of Reads and Writes. Statistics for networking, resynchronization, and iSCSI are also included.
End-users can select saved time ranges in performance views. vSAN saves each selected time range when end-users run a performance query.
There is also a VMware vRealize Operations (vROps) Management Pack for vSAN that provides additional options for monitoring, managing and troubleshooting vSAN.
The vSphere 6.7 Client includes an embedded vRealize Operations (vROps) plugin that provides basic vSAN and vSphere operational dashboards. The vROps plugin does not require any additional vROps licensing. vRealize Operations within vCenter is only available in the Enterprise and Advanced editions of vSAN.
vSAN observer as of vSAN 6.6 is deprecated but still included. In its place, vSAN Support Analytics is provided to deliver more enhanced support capabilities, including performance diagnostics. Performance diagnostics analyzes previously executed benchmark tests. It detects issues, suggests remediation steps, and provides supporting performance graphs for further insight. Performance Diagnostics requires participation in the Customer Experience Improvement Program (CEIP).
vSAN 6.7 U3 introduced a vSAN CPU metric through the performance service, and provides a new command-line utility (vsantop) for real-time performance statistics of vSAN, similar to esxtop for vSphere.
vSAN 7.0 introduces vSAN Memory metric through the performance service and the API for measuring vSAN memory usage.
vSAN 7.0 U1 introduces vSAN IO Insight for investigating the storage performance of individual VMs. vSAN IO Insight generates the following performance statistics which can be viewed from within the vCenter console:
- IOPS (read/write/total)
- Throughput (read/write/total)
- Sequential & Random Throughput (sequential/random/total)
- Sequential & Random IO Ratio (sequential read IO/sequential write IO/sequential IO/random read IO/random write IO/random IO)
- 4K Aligned & Unaligned IO Ratio (4K aligned read IO/4K aligned write IO/4K aligned IO/4K unaligned read IO/4K unaligned write IO/4K unaligned IO)
- Read & Write IO Ratio (read IO/write IO)
- IO Size Distribution (read/write)
- IO Latency Distribution (read/write)
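In addition to the GUI views described above, basic capacity and configuration figures can also be retrieved through PowerCLI. This is a minimal sketch assuming a hypothetical vCenter address and cluster name:
# Pull cluster-level vSAN capacity figures, including space-efficiency savings
Connect-VIServer -Server "vcenter.lab.local"
Get-VsanSpaceUsage -Cluster "vSAN-Cluster" | Format-List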
|
Advanced
There are two options for monitoring in StarWind HCA. The StarWind Management Console can be used to view basic storage performance characteristics. The new StarWind Command Center provides advanced monitoring functionality. The StarWind Command Center monitoring and management tool is currently included with all StarWind HCAs running Hyper-V.
StarWind Management Console provides:
- counters on a per-host level: CPU/RAM load, CPU load, RAM load, Total IOPS, Total Bandwidth
- counters on a per-device level: Read Bandwidth, Write Bandwidth, Total Bandwidth, Total IOPS
- selectable are Server with all targets (StarWind devices) or each separate StarWind device
- retention time of performance information is last 24 hours
- refresh rate of performance metrics is every 30 seconds
StarWind Command Center provides:
- counters on a per VM-level: CPU, RAM, Storage usage, IOPS, IO latency
- counters on a per Storage-level: total usage, synchronization state, IOPS, disk throughput, average Queue Depth, Read/Write ratio, Peak disk throughput, Average IO size, Average IO Latency
- counters on a per-node level: CPU, RAM, Storage usage, VMs distribution among nodes, IOPS, Disk Throughput, Disk IO Latency
- retention time of performance information is maximum of one year
- refresh rate of performance metrics is every 3 minutes
- VM-ranking options based on CPU/RAM usage, IOPS or IO latency
- monitoring of cluster-wide performance
|
Advanced
NEW
Performance information can be viewed on the cluster level, the Host level and the VM level. Per VM there is also a view on backend performance. Performance graphs focus on IOPS, MB/s and Latency of Reads and Writes. Statistics for networking, resynchronization, and iSCSI are also included.
End-users can select saved time ranges in performance views. vSAN saves each selected time range when end-users run a performance query.
There is also a VMware vRealize Operations (vROps) Management Pack for vSAN that provides additional options for monitoring, managing and troubleshooting vSAN.
The vSphere 6.7 Client includes an embedded vRealize Operations (vROps) plugin that provides basic vSAN and vSphere operational dashboards. The vROps plugin does not require any additional vROps licensing. vRealize Operations within vCenter is only available in the Enterprise and Advanced editions of vSAN.
vSAN observer as of vSAN 6.6 is deprecated but still included. In its place, vSAN Support Analytics is provided to deliver more enhanced support capabilities, including performance diagnostics. Performance diagnostics analyzes previously executed benchmark tests. It detects issues, suggests remediation steps, and provides supporting performance graphs for further insight. Performance Diagnostics requires participation in the Customer Experience Improvement Program (CEIP).
vSAN 6.7 U3 introduced a vSAN CPU metric through the performance service, and provides a new command-line utility (vsantop) for real-time performance statistics of vSAN, similar to esxtop for vSphere.
vSAN 7.0 introduces vSAN Memory metric through the performance service and the API for measuring vSAN memory usage.
vSAN 7.0 U1 introduces vSAN IO Insight for investigating the storage performance of individual VMs. vSAN IO Insight generates the following performance statistics which can be viewed from within the vCenter console:
- IOPS (read/write/total)
- Throughput (read/write/total)
- Sequential & Random Throughput (sequential/random/total)
- Sequential & Random IO Ratio (sequential read IO/sequential write IO/sequential IO/random read IO/random write IO/random IO)
- 4K Aligned & Unaligned IO Ratio (4K aligned read IO/4K aligned write IO/4K aligned IO/4K unaligned read IO/4K unaligned write IO/4K unaligned IO)
- Read & Write IO Ratio (read IO/write IO)
- IO Size Distribution (read/write)
- IO Latency Distribution (read/write)
|
|
|
VMware HTML5 vSphere Client (integrated)
VMware vSphere Web Client (integrated)
vSAN 6.6 and up provide integration with the vCenter Server Appliance. End-users can create a vSAN cluster as they deploy a vCenter Server Appliance, and host the appliance on that cluster. The vCenter Server Appliance Installer enables end-users to create a one-host vSAN cluster, with disks claimed from the host. vCenter Server Appliance is deployed on the vSAN cluster.
vSAN 6.6 and up also support host-based vSAN monitoring. This means end-users can monitor vSAN health and basic configuration through the ESXi host client. This also allows end-users to correct configuration issues at the host level.
vSAN 6.7 introduced support for the HTML5-based vSphere Client that ships with vCenter Server. vSAN Configuration Assist and vSAN Updates are available only in the vSphere Web Client.
|
vSphere: Web Client (plugin)
Hyper-V: SCVMM 2016 (add-in)
The StarWind vCenter plug-in provides a 1-to-1 representation of the StarWind Web Console inside the VMware vSphere Web Client. It does not add additional StarWind HCA-related actions to any Web Client menus.
|
VMware vSphere Web Client (integrated)
VxRail 4.7.300 adds full native vCenter plug-in support for all core day-to-day operations including physical views and Life Cycle Management (LCM).
VxRail 4.7.100 and up provide a VxRail Manager Plugin for VMware vCenter. The plugin replaces the VxRail Manager web interface. Also full VxRail event details are presented in vCenter Event Viewer.
vSAN 6.7 provides support for the HTML5-based vSphere Client that ships with vCenter Server. vSAN Configuration Assist and vSAN Updates are available only in the vSphere Web Client.
vSAN 6.6 and up provide integration with the vCenter Server Appliance. End-users can create a vSAN cluster as they deploy a vCenter Server Appliance, and host the appliance on that cluster. The vCenter Server Appliance Installer enables end-users to create a one-host vSAN cluster, with disks claimed from the host. vCenter Server Appliance is deployed on the vSAN cluster.
vSAN 6.6 and up also support host-based vSAN monitoring. This means end-users can monitor vSAN health and basic configuration through the ESXi host client. This also allows end-users to correct configuration issues at the host level.
|
|
|
|
Programmability |
|
|
|
Full
Storage Policy-Based Management (SPBM) is a feature of vSAN that allows administrators to create storage profiles so that virtual machines (VMs) don't need to be individually provisioned/deployed and so that management can be automated. The creation of storage policies is fully integrated in the vSphere GUI.
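As an illustration of how SPBM lets protection settings be defined once and then applied to VMs, the following is a minimal PowerCLI sketch; the policy name, capability values and VM name are examples only, and the capability names should be verified with Get-SpbmCapability against the environment in use.
  # Minimal SPBM sketch: define the policy once, then assign it to VMs instead of configuring each VM.
  Connect-VIServer -Server vcenter.lab.local -Credential (Get-Credential)
  # Rule set: tolerate one host failure, stripe objects across two capacity devices.
  $rules = New-SpbmRuleSet -AllOfRules @(
      (New-SpbmRule -Capability (Get-SpbmCapability -Name 'VSAN.hostFailuresToTolerate') -Value 1),
      (New-SpbmRule -Capability (Get-SpbmCapability -Name 'VSAN.stripeWidth') -Value 2)
  )
  $policy = New-SpbmStoragePolicy -Name 'Gold-FTT1' -AnyOfRuleSets $rules
  # Assign the policy to an existing VM (VM home object; disks can be configured the same way).
  Get-VM -Name 'app01' | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy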
|
Partial (Protection)
|
Full
Storage Policy-Based Management (SPBM) is a feature of vSAN that allows administrators to create storage profiles so that virtual machines (VMs) don't need to be individually provisioned/deployed and so that management can be automated. The creation of storage policies is fully integrated in the vSphere GUI.
|
|
|
REST-APIs
Ruby vSphere Console (RVC)
PowerCLI
|
REST-APIs (through Swordfish)
PowerShell
StarWind HCA does not offer native REST APIs (this functionality is still under development). Alternatively, the StarWind Swordfish provider can be configured if programmatic access is a requirement.
For more information, please view: https://www.starwindsoftware.com/resource-library/starwind-swordfish-provider/
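As a hedged illustration of what programmatic access via the Swordfish provider could look like, the PowerShell sketch below issues generic Swordfish/Redfish-style REST calls; the host name, port and resource paths are assumptions based on the SNIA Swordfish layout, not StarWind-documented endpoints, and should be checked against the provider documentation linked above.
  # Minimal sketch: enumerate storage services and volumes exposed by the Swordfish provider.
  # Host, port and paths are placeholders/assumptions.
  $base = 'https://hca-node1.lab.local:8080/redfish/v1'
  $cred = Get-Credential
  $services = Invoke-RestMethod -Uri "$base/StorageServices" -Credential $cred -Method Get
  foreach ($member in $services.Members) {
      # Each member carries a relative '@odata.id' path to the service resource.
      $svc  = Invoke-RestMethod -Uri "https://hca-node1.lab.local:8080$($member.'@odata.id')" -Credential $cred
      $vols = Invoke-RestMethod -Uri "https://hca-node1.lab.local:8080$($svc.Volumes.'@odata.id')" -Credential $cred
      $vols.Members | ForEach-Object { $_.'@odata.id' }
  }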
|
REST-APIs
Ruby vSphere Console (RVC)
PowerCLI
Dell EMC VxRail 4.7.100 and up include RESTful API enhancements.
Dell EMC VxRail 4.7.300 and up provide full VxRail manager functionality using RESTful API.
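A minimal PowerShell sketch of calling the VxRail Manager RESTful API is shown below; the VxRail Manager address is a placeholder and the endpoint paths are assumptions that should be verified against the VxRail API reference for the installed release.
  # Minimal sketch: query system information and host inventory from VxRail Manager.
  # Address and endpoint paths are placeholders/assumptions.
  $vxm  = 'https://vxm.lab.local'
  $cred = Get-Credential   # account with VxRail Manager access
  Invoke-RestMethod -Uri "$vxm/rest/vxm/v1/system" -Credential $cred -Method Get
  Invoke-RestMethod -Uri "$vxm/rest/vxm/v1/hosts"  -Credential $cred -Method Get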
|
|
|
OpenStack
VMware vRealize Automation (vRA)
OpenStack integration is achieved through VMware Integrated OpenStack v2.0
vRealize Automation integration is available as of vRA 7.0.
|
N/A
|
OpenStack
VMware vRealize Automation (vRA)
OpenStack integration is achieved through VMware Integrated OpenStack v2.0
vRealize Automation integration is available as of vRA 7.0.
|
|
|
N/A (not part of vSAN license)
VMware vSAN does not provide any end-user self-service capabilities of its own.
A self-service portal enables end-users to provision and manage VMs from templates, eliminating administrator requests or activity.
Self-Service functionality can be enabled by leveraging VMware vRealize Automation (vRA). This requires a separate VMware license.
|
N/A
|
N/A (not part of VxRail license bundle)
VMware vSAN does not provide any end-user self-service capabilities of its own.
A self-service portal enables end-users to provision and manage VMs from templates, eliminating administrator requests or activity.
Self-Service functionality can be enabled by leveraging VMware vRealize Automation (vRA). This requires a separate VMware license.
|
|
|
|
Maintenance |
|
|
|
Partially Distributed
For a number of features and functions vSAN relies on other components that need to be installed and upgraded next to the core vSphere platform. This is mostly relevant for backup/restore and file services. As a result, some dependencies exist with other software.
|
Partially Distributed
For a number of features and functions StarWind HCA relies on other components that need to be installed and upgraded next to the core Windows platform. Examples are backup/restore and advanced management software. As a result some dependencies exist with other software.
|
Partially Distributed
For a number of features and functions the vSAN software inside the VxRail appliances relies on other components that need to be installed and upgraded next to the core vSphere platform. Examples are Avamar Virtual Edition (AVE), vSphere Replication (VR) and RecoverPoint for VMs (RPVM). As a result some dependencies exist with other software.
|
|
SW Upgrade Execution
Details
|
Rolling Upgrade (1-by-1)
Because vSAN is a kernel-based solution, upgrading vSAN requires upgrading the vSphere hypervisor. The VMware vSphere Update Manager (VUM) can be used to automate the upgrade process of hosts in a vSAN cluster. When upgrading, you can choose between two evacuation modes: Ensure Accessibility or Full Data Migration.
VMware Update Manager (VUM) builds recommendations for vSAN. Update Manager can scan the vSAN cluster and recommend host baselines that include updates, patches, and extensions. It manages recommended baselines, validates the support status from vSAN HCL, and downloads the correct ESXi ISO images from VMware.
vSAN 6.7 U1 performs a simulation of data evacuation to determine if the operation will succeed or fail before it starts. If the evacuation will fail, vSAN halts the operation before any resynchronization activity begins. In addition, the vSphere Client enables end users to modify the component repair delay timer.
vSAN 6.7 U3 includes an improved vSAN update recommendation experience from VUM, which allows users to configure the recommended baseline for a vSAN cluster to either stay within the current version and only apply available patches or updates, or upgrade to the latest ESXi version that is compatible with the cluster.
vSAN 6.7 U3 introduces vCenter forward compatibility with ESXi. vCenter Server can manage newer versions of ESXi hosts in a vSAN cluster, as long as both vCenter and its managed hosts have the same major vSphere version. Critical ESXi patches can be applied without updating vCenter Server to the same version.
vSAN 7.0 native File Services upgrades are also performed on a rolling basis. The file shares remain accessible during the upgrade as file server containers running on the virtual machines which are undergoing upgrade fail over to other virtual machines. During the upgrade some interruptions might be experienced while accessing the file shares.
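To illustrate the evacuation-mode choice described above, here is a minimal PowerCLI sketch of one step in a rolling (1-by-1) host upgrade; the host and vCenter names are placeholders, and the -VsanDataMigrationMode parameter and its values should be verified against the PowerCLI release in use.
  # Minimal sketch: evacuate one vSAN host, patch it, and return it to service.
  Connect-VIServer -Server vcenter.lab.local -Credential (Get-Credential)
  $esx = Get-VMHost -Name 'esx01.lab.local'
  # Choose 'EnsureAccessibility' (faster) or 'Full' (full data migration) as the evacuation mode.
  Set-VMHost -VMHost $esx -State Maintenance -VsanDataMigrationMode EnsureAccessibility
  # ...apply the ESXi update via VUM/vLCM and reboot if required...
  # Exit maintenance mode; vSAN resynchronizes components in the background.
  Set-VMHost -VMHost $esx -State Connected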
|
Manual Upgrade (1-by-1)
|
Rolling Upgrade (1-by-1)
The Dell EMC VxRail Manager software provides one-click, non-disruptive patches and upgrades for the entire solution stack.
End-user organizations no longer need professional services for stretched cluster upgrades when upgrading from VxRail 4.7.300 to the next release.
vSAN 7.0 native File Services upgrades are also performed on a rolling basis. The file shares remain accessible during the upgrade as file server containers running on the virtual machines which are undergoing upgrade fail over to other virtual machines. During the upgrade some interruptions might be experienced while accessing the file shares.
|
|
FW Upgrade Execution
Details
|
Rolling Upgrade (1-by-1)
VMware vSAN provides 1-click GUI-based support for installing/updating firmware from a growing list of server hardware vendors. Currently this works for Dell, Lenovo, Fujitsu, and SuperMicro servers.
Some other server hardware vendors offer rolling upgrade options with their base software or with a premium software suite. With the remaining server vendors, BIOS and Baseboard Management Controller (BMC) updates have to be performed manually, 1-by-1.
VMware Update Manager (VUM) builds recommendations for vSAN. Update Manager can scan the vSAN cluster and recommend host baselines that include updates, patches, and extensions. It manages recommended baselines, validates the support status from vSAN HCL, and downloads the correct ESXi ISO images from VMware.
HBA firmware update through VUM: Storage I/O controller firmware for vSAN hosts is now included as part of the VUM remediation workflow. This functionality was previously provided in a vSAN utility called Configuration Assist. VUM also supports custom ISOs that are provided by certain OEM vendors and vCenter Servers that do not have internet connectivity.
vSAN 7.0 introduces vSphere Lifecycle Manager (vLCM) support for Dell and HPE ReadyNodes. vSphere Lifecycle Manager (vLCM) uses a desired-state model that provides lifecycle management for the hypervisor and the full stack of drivers and firmware on ESXi hosts.
|
Manual Upgrade (1-by-1)
To perform driver updates, a remote session is scheduled with a StarWind Support engineer, who performs all the updates. Firmware and driver updates are first verified for any possible issues.
To keep the VMs and applications running during maintenance, StarWind performs the following steps (a PowerShell sketch of the node-side portion follows this list):
1. Check that StarWind devices are synchronized on both nodes and that all iSCSI connections are active.
2. Move VMs inside the cluster to the partner node.
3. If the process requires several restarts, set the StarWind service to a disabled state or to manual start on the required node.
4. Install the required updates/drivers.
5. Restart the node if required.
6. If required, shut down the node and perform a firmware update.
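The following PowerShell sketch covers the node-side portion of this procedure, assuming a Windows Server failover cluster; the node name and the StarWind service name are assumptions and must be verified in the actual deployment, and StarWind device synchronization itself is checked in the StarWind Management Console.
  # Minimal sketch: drain one HCA node before driver/firmware maintenance.
  Import-Module FailoverClusters
  # Steps 1-2: confirm node states, then drain roles/VMs to the partner node.
  Get-ClusterNode | Format-Table Name, State
  Suspend-ClusterNode -Name 'HCA-NODE1' -Drain -Wait
  # Step 3: keep the StarWind service (name assumed) from auto-starting during multiple reboots.
  Set-Service -Name 'StarWindService' -StartupType Manual
  # Steps 4-6: install updates/drivers, reboot, and apply firmware as required (vendor-specific).
  # After maintenance: restore the service start-up type and bring the node back into the cluster.
  Set-Service -Name 'StarWindService' -StartupType Automatic
  Resume-ClusterNode -Name 'HCA-NODE1' -Failback Immediate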
|
1-Click
The Dell EMC VxRail Manager software provides one-click, non-disruptive patches and upgrades for the entire solution stack.
VxRail 7.0 does not support vSphere Lifecycle Manager (vLCM); vLCM is disabled in VMware vCenter.
|
|
|
|
Support |
|
|
Single HW/SW Support
Details
|
Yes (most OEM vendors)
Most VMware OEM partners (e.g. Fujitsu, HPE, Dell) offer full support for both the server hardware and the VMware vSphere software, including vSAN. When VMware vSphere is delivered through an OEM, customers are required to buy both software and hardware from the server hardware vendor.
Because VMware vSAN is a software-only offering (SDS), end users still have the choice to acquire server hardware support and software support separately.
|
Yes
StarWind provides unified support for the entire native solution. This means StarWind is the single point-of-contact for any storage software (StarWind Virtual SAN) and server hardware (SuperMicro/Dell) related issues.
|
Yes
Dell EMC VxRail provides unified support for the entire native solution. This means Dell EMC is the single point-of-contact for both software and hardware issues. A prerequisite is that the separate VMware vSphere licenses have not been acquired through an OEM vendor.
|
|
Call-Home Function
Details
|
Partial (vSAN Support Insight)
With regard to vSAN as a software-only offering (SDS), VMware does not offer call-home for the entire solution consisting of both storage software and server hardware. This means storage software support (vSAN) and server hardware support are separate.
VMware vSAN Support Insight: for end-user organizations participating in the VMware Customer Experience Improvement Program (CEIP), anonymized telemetry data is collected hourly to help VMware Global Support Services (GSS) and Engineering understand usage patterns of features and hardware, identify common product issues faced by end users, and understand and diagnose an end-user organization's configuration and runtime state to expedite support request handling. All information is gathered from VMware vCenter and as such provides an indirect view of the server hardware state.
VMware vSAN Support Insight is available at no additional cost for all vSAN customers with an active support contract running vSphere 6.5 Patch 2 or higher.
Some server hardware vendors provide call-home support for hardware-related failures, e.g. failed disks or faulty network interface cards (NICs). In some cases this type of pro-active support requires an enhanced support contract.
|
Yes
StarWind ProActive Support combines both the 'Call Home' functionality and Predictive Analytics.
StarWind ProActive Support service notifies the StarWind Support team about any issues occurring in the StarWind HCA for storage, networking, compute and software layers. This is achieved by StarWind Agents running on each HCA and collecting the metrics from the servers and StarWind software.
|
Full
VxRail uses EMC Secure Remote Services (ESRS). ESRS maintains connectivity with the VxRail hardware components around the clock and automatically notifies EMC Support if a problem or potential problem occurs.
VxRail is also supported by Dell EMC Vision. Dell EMC Vision offers Multi-System Management, Health monitoring and Automated log collection for a multitude of products from the Dell EMC family.
|
|
Predictive Analytics
Details
|
Full (not part of vSAN license)
vRealize Operations (vROps) provides native vSAN support with multiple dashboards for proactive alerting, heat maps, device and cluster insights, and streamlined issue resolution. vROps also provides forward trending and forecasting for the vSAN datastore as well as any other datastore (SAN/NAS).
vSAN 6.7 introduced 'vRealize Operations (vROps) within vCenter'. This provides end users with 6 dashboards inside the vCenter console, giving insights but not actions. Three of these dashboards relate to vSAN: Overview, Cluster View, and Alerts. One of the widgets inside these dashboards displays 'Time remaining before capacity runs out'. Because this provides only very basic trending information, a full version of the vROps product is still required.
'vRealize Operations (vROps) within vCenter' is included with vSAN Enterprise and vSAN Advanced.
The full version of vRealize Operations (vROps) is licensed as a separate product from VMware vSAN.
VMware vSAN 6.7 U3 introduced increased hardening during capacity-strained scenarios. This entails new robust handling of capacity usage conditions for improved detection, prevention, and remediation of conditions where cluster capacity has exceeded recommended thresholds.
|
Partial
StarWind ProActive Support is capable of Predictive Analytics by analyzing abnormal patterns on the storage, networking, compute and software layers. This allows StarWind's Support team to automatically receive notifications about possible issues that might occur on the servers.
Additionally, any other pattern that caused issues in one environment is analyzed and integrated into the ProActive Support database, allowing the service to notify the StarWind Support team about the same patterns occurring in other clients' environments.
|
Full (not part of VxRail license bundle)
vRealize Operations (vROps) provides native vSAN support with multiple dashboards for proactive alerting, heat maps, device and cluster insights, and streamlined issue resolution. vROps also provides forward trending and forecasting for the vSAN datastore as well as any other datastore (SAN/NAS).
vSAN 6.7 introduced 'vRealize Operations (vROps) within vCenter'. This provides end users with 6 dashboards inside the vCenter console, giving insights but not actions. Three of these dashboards relate to vSAN: Overview, Cluster View, and Alerts. One of the widgets inside these dashboards displays 'Time remaining before capacity runs out'. Because this provides only very basic trending information, a full version of the vROps product is still required.
'vRealize Operations (vROps) within vCenter' is included with vSAN Enterprise and vSAN Advanced.
The full version of vRealize Operations (vROps) is licensed as a separate product from VMware vSAN.
In June 2019 an early access edition (= technical preview) of VxRail Analytical Consulting Engine (ACE) was released to the public for test and evaluation purposes. VxRail ACE is a cloud service that runs in a Dell EMC secure data lake and provides infrastructure machine learning for insights that can be viewed by end-user organizations on a Dell EMC managed web portal. On-premises VxRail clusters send advanced telemetry data to VxRail ACE by leveraging the existing SRS secure transport mechanism in order to provide the cloud platform with raw data input. VxRail ACE is built on Pivotal Cloud Foundry.
VxRail ACE provides:
- global visualization across all VxRail clusters and vCenter appliances;
- simplified health scores at the cluster and node levels;
- advanced capacity and performance metrics charting so problem areas (CPU, memory, disk, networking) can be pinpointed down to the VM level;
- future capacity planning by analyzing the previous 180 days of storage use data and projecting data usage for the next 90 days.
VxRail ACE supports VxRail 4.5.215 or later, as well as 4.7.0001 or later. Sending advanced telemetry data to VxRail ACE is optional and can be turned off. The default collection frequency is once every hour.
VxRail ACE is designed for extensibility so that future visibility between VxRail ACE and vRealize Operations Manager is possible.
Because VxRail ACE is not Generally Available (GA) at this time, it is not yet considered in the WhatMatrix evaluation/scoring mechanism.
|
|