|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
- + Extensive platform support
- + Extensive data protection capabilities
- + Flexible deployment options
|
- + Flexible architecture
- + Extensive platform support
- + Several Microsoft integration points
|
- + Broad range of hardware support
- + Strong VMware integration
- + Policy-based management
|
|
Cons
|
- - No native data integrity verification
- - Dedup/compr not performance optimized
- - Disk/node failure protection not capacity optimized
|
- - Minimal data protection capabilities
- - No Quality-of-service mechanisms
- - No native encryption capabilities
|
- - Single hypervisor support
- - Very limited native data protection capabilities
- - Dedup/compr not performance optimized
|
|
|
|
|
|
|
|
Assessment |
|
|
|
|
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
NEW
DataCore was founded in 1998 and began shipping its first software-defined storage (SDS) platform, SANsymphony (SSY), in 1999. DataCore launched a separate entry-level storage virtualization solution, SANmelody (v1.4), in 2004. This platform was also the foundation for DataCore's HCI solution. In 2014 DataCore formally announced Hyperconverged Virtual SAN as a separate product. In May 2018 changes to the software licensing model enabled consolidation of the product line; because the core software is the same, it has since been cumulatively called DataCore SANsymphony.
One year later, in 2019, DataCore expanded its software-defined storage portfolio with a solution aimed specifically at file virtualization. This additional SDS offering, DataCore vFilO, operates as a scale-out global file system across distributed sites, spanning on-premises and cloud-based NFS and SMB shares.
At the beginning of 2021, DataCore acquired Caringo and integrated its know-how and software-defined object storage offerings into the DataCore portfolio. The newest member of the DataCore SDS portfolio is called DataCore Swarm; together with its complementary offerings SwarmFS and DataCore FileFly, it enables customers to build on-premises object storage solutions that radically simplify the ability to manage, store, and protect data while allowing multi-protocol (S3/HTTP, API, NFS/SMB) access to any application, device, or end-user.
DataCore Software specializes in software solutions for block, file, and object storage. Compared to the other SDS/HCI vendors on the WhatMatrix, DataCore has by far the longest track record in software-defined storage.
In April 2021 the company had an install base of more than 10,000 customers worldwide and about 250 employees working for DataCore.
|
Name: StarWind Virtual SAN
Type: SDS
Development Start: 2003
First Product Release: 2011
StarWind Software is a privately held company which started in 2008 as a spin-off from Rocket Division Software, Ltd. (founded in 2003). It initially provided free Software-Defined Storage (SDS) offerings to early adopters in 2009. In 2011 the company released its product Native SAN (later rebranded to Virtual SAN). In 2015 StarWind executed a successful pivot from software-only company to hardware vendor, bringing hyperconvergence from the enterprise level to SMB and ROBO. Apart from its hyperconverged appliance (HCA) solutions, StarWind keeps its focus on developing and improving its Virtual SAN solution. In 2018, StarWind released Virtual SAN for vSphere, an SDS solution specifically aimed at VMware vSphere environments.
In March 2020 the company had a StarWind Virtual SAN install base of more than 4,500 customers worldwide. In June 2019 there were more than 250 employees working for StarWind.
|
Name: vSAN
Type: Software-only (SDS)
Development Start: Unknown
First Product Release: 2014
VMware was founded in 1998 and began to ship its first software-defined storage solution, Virtual SAN, in 2014. Virtual SAN was later rebranded to vSAN. The vSAN solution is fully integrated into the vSphere hypervisor platform. In 2015 VMware released the second iteration of the product with major updates. Close to the end of 2016 the third iteration was released.
At the end of May 2019 the company had a customer install base of more than 20,000 vSAN customers worldwide. This covers both vSAN and VxRail customers. At the end of May 2019 there were over 30,000 employees working for VMware worldwide.
|
|
|
|
GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
.
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
.
SSY 3.0: 1999
NEW
10th generation software. Among the SDS/HCI platforms compared on the WhatMatrix, DataCore currently has the most experience with SDS/HCI technology.
SANsymphony (SSY) version 3 was the first public release, hitting the market back in 1999. The product has evolved ever since; the current major release is version 10. The list includes only the milestone releases.
PSP = Product Support Package
U = Update
|
StarWind VSAN for vSphere Release Dates:
VSAN build 13170: oct 2019
VSAN build 12859: feb 2019
VSAN build 12658: dec 2018
VSAN build 12533: sep 2018
StarWind VSAN for Hyper-V Release Dates:
VSAN build 13279: oct 2019
VSAN build 13182: aug 2019
VSAN build 12767: feb 2019
VSAN build 12658: nov 2018
VSAN build 12585: oct 2018
VSAN build 12393: aug 2018
VSAN build 12166: may 2018
VSAN build 12146: apr 2018
VSAN build 11818: dec 2017
VSAN build 11456: aug 2017
VSAN build 11404: jul 2017
VSAN build 11156: may 2017
VSAN build 11071: may 2017
VSAN build 10927: apr 2017
VSAN build 10914: apr 2017
VSAN build 10833: apr 2017
VSAN build 10811: mar 2017
VSAN build 10799: mar 2017
VSAN build 10695: feb 2017
VSAN build 10547: jan 2017
VSAN build 9996: aug 2016
VSAN build 9980: aug 2016
VSAN build 9781: jun 2016
VSAN build 9611: jun 2016
VSAN build 9052: may 2016
VSAN build 8730: nov 2015
VSAN build 8716: nov 2015
VSAN build 8198: jun 2015
VSAN build 7929: apr 2015
VSAN build 7774: feb 2015
VSAN build 7509: dec 2014
VSAN build 7471: dec 2014
VSAN build 7354: nov 2014
VSAN build 7145
VSAN build 6884
8th generation software (Version 8, Release 10).
|
GA Release Dates:
vSAN 7.0 U1: oct 2020
vSAN 7.0: apr 2020
vSAN 6.7 U3: apr 2019
vSAN 6.7 U1: oct 2018
vSAN 6.7: may 2018
vSAN 6.6.1: jul 2017
vSAN 6.6: apr 2017
vSAN 6.5: nov 2016
vSAN 6.2: mar 2016
vSAN 6.1: aug 2015
vSAN 6.0: mar 2015
vSAN 5.5: mar 2014
NEW
7th generation software. vSAN's maturity has again increased through the addition of a set of advanced features.
vSAN is also a key element in both the VCE VxRail/VxRack and VMware EVO:SDDC propositions.
|
|
|
|
Pricing |
|
|
|
Hardware Pricing Model
Details
|
N/A
SANsymphony is sold by DataCore as a software-only solution. Server hardware must be acquired separately.
The entry point for all hardware and software compatibility statements is: https://www.datacore.com/products/sansymphony/tech/compatibility/
On this page links can be found to: Storage Devices, Servers, SANs, Operating Systems (Hosts), Networks, Hypervisors, Desktops.
Minimum server hardware requirements can be found at: https://www.datacore.com/products/sansymphony/tech/prerequisites/
|
N/A
StarWind Virtual SAN is sold by StarWind as a software-only solution. Server hardware must be acquired separately.
|
N/A
vSAN is sold by VMware as a software-only solution. Server hardware must be acquired separately.
VMware maintains an extensive Hardware Compatibility List (HCL) with supported hardware for VMware vSAN implementations.
For guidance on proper hardware configurations, VMware provides a vSAN Hardware Quick Reference Guide.
You can view both the HCL and the Quick Reference Guide by using the following link: http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vSAN
To help customers further, VMware also partners with multiple server hardware manufacturers to provide reference configurations in the VMware vSAN Ready Nodes document.
|
|
|
Software Pricing Model
Details
|
Capacity based (per TB)
NEW
DataCore SANsymphony is licensed in three different editions: Enterprise, Standard, and Business.
All editions are licensed by capacity (in 1 TB steps). Except for the Business edition, which has a fixed price per TB, the price per TB decreases as more capacity is licensed in each class.
Each edition includes a defined feature set.
Enterprise (EN) includes all available features plus expanded Parallel I/O.
Standard (ST) includes all Enterprise (EN) features, except FC connections, Encryption, Inline Deduplication & Compression and Shared Multi-Port Array (SMPA) support with regular Parallel I/O.
Business (BZ) as entry-offering includes all essential Enterprise (EN) features, except Asynchronous Replication & Site Recovery, Encryption, Deduplication & Compression, Random Write Accelerator (RWA) and Continuous Data Protection (CDP) with limited Parallel I/O.
Customers can choose between a perpetual licensing model or a term-based licensing model. Any initial license purchase for perpetual licensing includes Premier Support for either 1, 3 or 5 years. Alternatively, term-based licensing is available for either 1, 3 or 5 years, always including Premier Support as well, plus enhanced DataCore Insight Services (predictive analytics with actionable insights). In most regions, BZ is available as term license only.
Capacity can be expanded in 1 TB steps. There is a 10 TB minimum per installation for Business (BZ). Moreover, BZ is limited to 2 instances and a total capacity of 38 TB per installation, but one customer can have multiple BZ installations.
Cost neutral upgrades are available when upgrading from Business/Standard (BZ/ST) to Enterprise (EN).
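Because the Business (BZ) edition combines several numeric limits, a small sketch can make them concrete. The following is purely illustrative and not a DataCore tool; the constants simply encode the limits stated above.
```python
# Illustrative sketch (not a DataCore tool): checks a proposed Business (BZ)
# edition installation against the licensing limits described above.
def validate_bz_installation(instances: int, capacity_tb: int) -> list[str]:
    """Return a list of violations for one BZ installation (empty = OK)."""
    violations = []
    if capacity_tb < 10:
        violations.append("below the 10 TB minimum per installation")
    if capacity_tb > 38:
        violations.append("exceeds the 38 TB total capacity per installation")
    if instances > 2:
        violations.append("more than 2 instances per installation")
    return violations

print(validate_bz_installation(instances=2, capacity_tb=38))  # []
print(validate_bz_installation(instances=3, capacity_tb=40))  # 2 violations
```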
|
Hyper-V: Per Node + Per TB
vSphere: Per Node (storage capacity included)
There are two StarWind Virtual SAN editions: StarWind Virtual SAN for Hyper-V and StarWind Virtual SAN for vSphere. StarWind Virtual SAN for Hyper-V is licensed per node plus per amount of HA storage provisioned by StarWind Virtual SAN. StarWind Virtual SAN for vSphere is licensed per node; in that edition the amount of HA storage provisioned by StarWind Virtual SAN is always unlimited.
HA = Highly Available
|
Per CPU Socket
Per Desktop (VDI use cases only)
Per Used GB (VCPP only)
Editions:
vSAN Enterprise
vSAN Advanced
vSAN Standard
vSAN for ROBO Enterprise
vSAN for ROBO Advanced
vSAN for ROBO Standard
vSAN (for ROBO) Standard editions offer All-Flash Hardware, iSCSI Target Service, Storage Policy Based Management, Virtual Distributed Switch, Rack Awareness, Software Checksum and QoS - IOPS Limit.
vSAN (for ROBO) Enterprise and Advanced editions exclusively offer the All-Flash related features RAID-5/6 Erasure Coding and Deduplication+Compression, as well as vRealize Operations within vCenter; none of these are included in the vSAN (for ROBO) Standard editions.
vSAN (for ROBO) Enterprise editions exclusively offer the Stretched Cluster and Data-at-rest Encryption features, which are included in neither the vSAN (for ROBO) Standard nor Advanced editions.
VMware vSAN is priced per CPU socket for VSI workloads, but can also be acquired per desktop for VDI use cases; A 25VM Pack is exclusively available for ROBO use cases.
vSAN for Desktop is priced per named user or per concurrent user (CCU) in a virtual desktop environment and sold in packs of 10 and 100 licenses.
VMware vSAN is priced per Used GB when subscribing to vSAN from a VMware Cloud Provider (VCPP = VMware Cloud Partner Program).
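Since vSAN for Desktop is sold only in packs of 10 and 100 licenses, sizing a purchase is a small rounding exercise. The sketch below is illustrative only; it minimizes license overshoot, not cost, since pack prices are not given here.
```python
# Illustrative sketch: minimal number of 100- and 10-license packs needed
# to cover a given number of vSAN for Desktop users (named or concurrent).
import math

def desktop_packs(users: int) -> tuple[int, int]:
    """Return (packs_of_100, packs_of_10) covering `users` licenses."""
    packs_100 = users // 100
    packs_10 = math.ceil((users - packs_100 * 100) / 10)
    return packs_100, packs_10

print(desktop_packs(230))  # (2, 3) -> 230 licenses exactly
print(desktop_packs(275))  # (2, 8) -> 280 licenses, 5 spare
```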
|
|
|
Support Pricing Model
Details
|
Capacity based (per TB)
Support is always provided on a premium (24x7) basis, including free updates.
More information about DataCore's support policy can be found here:
http://datacore.custhelp.com/app/answers/detail/a_id/1270/~/what-is-datacores-support-policy-for-its-products
|
Per Node
StarWind provides three editions of StarWind Virtual SAN Support:
1. Standard Support: business days, business hours, up to 4-hour response time.
2. Premium Support: 24x7x365 coverage, up to 1-hour response time.
3. Proactive Support: includes Premium Support and additionally monitors the health of the system, proactively notifying about potential issues at the hardware, hypervisor and StarWind software levels.
|
Per CPU Socket
Per Desktop (VDI use cases only)
Subscriptions: Basic, Production, Business, Critical and Mission Critical
For details on the different support subscriptions, please use the following link: https://www.vmware.com/support/services/compare.html
|
|
|
Design & Deploy
|
|
|
|
|
|
|
Design |
|
|
|
Consolidation Scope
Details
|
Storage
Data Protection
Management
Automation&Orchestration
DataCore is storage-oriented.
SANsymphony Software-Defined Storage Services are focused on variable deployment models. The range covers classical storage virtualization, converged and hybrid-converged setups, and hyperconverged deployments, including seamless migration between them.
DataCore aims to provide all key components within a storage ecosystem including enhanced data protection and automation & orchestration.
|
Storage
Management
StarWind Virtual SAN consolidates storage from different servers by replicating and presenting it over the iSCSI protocol as a single pool.
|
Hypervisor
Compute
Storage
Data Protection (limited)
Management
Automation&Orchestration
VMware is stack-oriented, whereas the vSAN platform itself is heavily storage-focused.
With the vSAN/VxRail platforms VMware aims to provide all functionality required in a Private Cloud ecosystem.
|
|
|
|
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
The bandwidth required depends entirely on the specific workload needs.
SANsymphony 10 PSP11 introduced support for Emulex Gen 7 64 Gbps Fibre Channel HBAs.
SANsymphony 10 PSP8 introduced support for Gen6 16/32 Gbps ATTO Fibre Channel HBAs.
|
1, 10, 25, 40, 100 GbE
StarWind requires at least three dedicated network interfaces:
- one for StarWind synchronization
- one for iSCSI traffic/Heartbeat
- one for Management/Heartbeat
At least one Heartbeat interface must be on a separate network adapter and redundant.
For iSCSI and synchronization traffic, a minimum bandwidth of 1 GbE and latency under 5 ms are required.
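To make the stated minimums concrete, here is a small illustrative check of a planned NIC layout. This is not StarWind software; the NetworkInterface type and the role names are assumptions made for this example.
```python
# Illustrative check (not StarWind software) of a planned NIC layout against
# the minimums stated above.
from dataclasses import dataclass

@dataclass
class NetworkInterface:
    role: str          # "sync", "iscsi_heartbeat" or "management_heartbeat"
    speed_gbe: float   # link speed in GbE
    latency_ms: float  # measured round-trip latency in milliseconds

def validate_starwind_network(nics: list[NetworkInterface]) -> list[str]:
    """Return a list of violations (empty list = layout meets the minimums)."""
    issues = []
    roles = {nic.role for nic in nics}
    for required in ("sync", "iscsi_heartbeat", "management_heartbeat"):
        if required not in roles:
            issues.append(f"missing dedicated interface: {required}")
    for nic in nics:
        if nic.role in ("sync", "iscsi_heartbeat"):
            if nic.speed_gbe < 1:
                issues.append(f"{nic.role}: below the 1 GbE minimum")
            if nic.latency_ms >= 5:
                issues.append(f"{nic.role}: latency must be under 5 ms")
    return issues

nics = [
    NetworkInterface("sync", 10, 0.5),
    NetworkInterface("iscsi_heartbeat", 10, 0.5),
    NetworkInterface("management_heartbeat", 1, 1.0),
]
print(validate_starwind_network(nics))  # [] -> meets the stated minimums
```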
|
1, 10, 40 GbE
vSAN supports ethernet connectivity using SFP+ or Base-T. VMware recommends 10GbE or higher to avoid the network becoming a performance bottleneck.
|
|
|
Overall Design Complexity
Details
|
Medium
DataCore SANsymphony is able to meet many different use-cases because of its flexible technical architecture, however this also means there are a lot of design choices that need to be made. DataCore SANsymphony seeks to provide important capabilities either natively or tightly integrated, and this keeps the design process relatively simple. However, because many features in SANsymphony are optional and thus can be turned on/off, in effect each one needs to be taken into consideration when preparing a detailed design.
|
Medium
StarWind Virtual SAN is able to meet many different use-cases because of its flexible technical architecture, however this also means there are a lot of design choices that need to be made. Also, StarWind Virtual SAN does not encompass many native data protection capabilities and data services. A complete solution design therefore requires the presence of multiple technology platforms.
|
Medium
VMware vSAN was developed with simplicity in mind, both from a design and a deployment perspective. vSAN's uniform platform architecture is meant to be applicable to all virtualization use-cases and seeks to provide important capabilities either natively or by leveraging features already present in the VMware vSphere hypervisor on a per-VM basis. However, there are still some key areas where vSAN relies heavily on external products and where there is no tight integration involved (e.g. backup/restore). In these cases choices need to be made whether to incorporate 1st party or 3rd party solutions into the overall technical design.
|
|
|
External Performance Validation
Details
|
SPC (Jun 2016)
ESG Lab (Jan 2016)
SPC (Jun 2016)
Title: 'Dual Node, Fibre Channel SAN'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-connected, SSY 10.0, 4x All-Flash Dell MD1220 SAS Storage Arrays
SPC (Jun 2016)
Title: 'Dual Node, High Availability, Hyper-converged'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-interconnect, SSY 10.0
ESG Lab (Jan 2016)
Title: 'DataCore Application-adaptive Data Infrastructure Software'
Workloads: OLTP
Benchmark Tools: IOmeter
Hardware: Hybrid (Tiered) Dell PowerEdge R720, 2-node cluster, SSY 10.0
|
N/A
No StarWind Virtual SAN validated test reports have been published in 2016/2017/2018/2019.
|
StorageReview (aug 2018, aug 2016)
ESG Lab (aug 2018, apr 2016)
Evaluator Group (oct 2018, jul 2017, aug 2016)
StorageReview (Aug 2018)
Title: 'VMware vSAN with Intel Optane Review'
Workloads: MySQL OLTP, MSSQL OLTP, Generic profiles
Benchmark Tools: Sysbench (MySQL), TPC-C (MSSQL), Vdbench (generic)
Hardware: All-Flash Supermicro 2029U-TN24R4T+, 4-node cluster, vSAN 6.7
ESG Lab (Aug 2018)
Title: 'Optimize VMware vSAN with Western Digital NVMe SSDs and Supermicro Servers'.
Workloads: MSSQL OLTP
Benchmark Tools: HammerDB (MSSQL)
Hardware: All-Flash SuperMicro BigTwin, 4-node cluster, vSAN 6.6.1; All-Flash Lenovo X3650M5, 4-node cluster, vSAN 6.1
Evaluator Group (Jul 2017/Oct 2018)
Title: 'IOmark-VM-HC Test Report'
Workloads: Mix (MS Exchange, Olio, Web, Database)
Benchmark Tools: IOmark-VM (all)
Hardware: All-Flash Intel R2208WF, 4-node cluster, vSAN 6.6/6.7
Storage Review (Aug 2016)
Title: 'VMware VSAN 6.2 All-Flash Review'
Workloads: MySQL OLTP, MSSQL OLTP
Benchmark Tools: Sysbench (MySQL), TPC-C (MSSQL)
Hardware: All-Flash Dell PowerEdge R730xd, 4-node cluster, vSAN 6.2
Evaluator Group (Aug 2016)
Title: 'IOmark-VM-HC Test Report'
Workloads: Mix (MS Exchange, Olio, Web, Database)
Benchmark Tools: IOmark-VM (all)
Hardware: All-Flash Intel S2600WT, 4-node and 6-node cluster, vSAN 6.2
ESG Lab (Apr 2016)
Title: 'Optimize VMware Virtual SAN 6 with SanDisk SSDs'.
Workloads: MSSQL OLTP
Benchmark Tools: HammerDB (MSSQL)
Hardware: Hybrid/All-Flash Lenovo X3650M5, 4-node cluster, vSAN 6.2
|
|
|
Evaluation Methods
Details
|
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
SANsymphony is freely downloadable after registering online and offers full platform support (complete Enterprise feature set), but is scale- (4 nodes), capacity- (16 TB) and time- (30 days) restricted, all of which can be expanded upon request. The free trial version of SANsymphony can be installed on all commodity hardware platforms that meet the hardware requirements.
For more information please go here: https://www.datacore.com/try-it-now/
|
Community Edition (forever)
Trial (up to 30 days)
Proof-of-Concept
There are two ways for end-user organizations to evaluate StarWind:
1. StarWind Free. A free version of the software-only Virtual SAN product can be downloaded from the StarWind website. The free version has full functionality, but the StarWind Management Console works only in monitoring mode, without the ability to create or manage StarWind devices. All management is performed via PowerShell and a set of script templates.
The free version is intended to be self-supported or community-supported on public discussion forums.
2. StarWind Trial. The trial version has full functionality and all the management capabilities of the StarWind Management Console. It is limited to 30 days but can be extended if required.
|
Free Trial (60-days)
Online Lab
Proof-of-Concept (POC)
vSAN Evaluation is freely downloadable after registering online. Because it is embedded in the hypervisor, the free trial includes vSphere and vCenter Server. vSAN Evaluation can be installed on all commodity hardware platforms that meet the hardware requirements. vSAN Evaluation use is time-restricted (60 days) and is not for production environments.
VMware also offers a vSAN hosted hands-on lab that lets you deploy, configure and manage vSAN in a contained environment, after registering online.
|
|
|
|
Deploy |
|
|
|
Deployment Architecture
Details
|
Single-Layer
Dual-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
Single-Layer:
- SANsymphony is implemented as a virtual machine (VM) or, in the case of Hyper-V, as a service layer on the Hyper-V parent OS, managing internal and/or external storage devices and providing virtual disks back to the hypervisor cluster it is implemented in. DataCore calls this a hyper-converged deployment.
Dual-Layer:
- SANsymphony is implemented as bare metal nodes, managing external storage (SAN/NAS approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a traditional deployment.
- SANsymphony is implemented as bare metal nodes, managing internal storage devices (server-SAN approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a converged deployment.
Mixed:
- SANsymphony is implemented in any combination of the above 3 deployments within a single management entity (Server Group) acting as a unified storage grid. DataCore calls this a hybrid-converged deployment.
|
Single-Layer
Dual-Layer (secondary)
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
In a Single-Layer architecture, the servers with Virtual SAN software function as compute nodes as well as storage nodes; StarWind performs synchronous storage replication between them.
In a Dual-Layer architecture, StarWind Virtual SAN software replicates data in active-active mode between the dedicated storage servers and provides HA storage for use by separate compute nodes that do not have StarWind Virtual SAN installed.
|
Single-Layer (primary)
Dual-Layer (secondary)
Single-Layer: VMware vSAN is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
VMware vSAN can partially serve in a dual-layer model by also providing storage to other vSphere hosts within the same cluster that do not themselves contribute storage to vSAN, or to bare metal hosts. However, this is not a primary use case and it requires the other vSphere hosts to have vSAN enabled (please view the compute-only scale-out option for more information).
|
|
|
Deployment Method
Details
|
BYOS (some automation)
BYOS = Bring-Your-Own-Server-Hardware
Deployment of DataCore SANsymphony is made easy by a very straightforward implementation approach.
|
BYOS (some automation)
|
BYOS (fast, some automation)
Pre-installed (very fast, turnkey approach)
There are four methods to deploy vSAN:
1. Build-Your-Own.
2. vSAN Ready Nodes (Pre-installed).
3. vSAN Ready Nodes (non pre-installed).
4. Hardware Appliances aka HCI (Dell VxRail / Lenovo VX Series).
Each method has a different level of effort involved; the method that suits the end-user organization best is based on the technical level of the admin team, the IT architecture and state, as well as the desired level of customization. Where the 'Hardware Appliances' allow for the least amount of customization and the 'Build-Your-Own' for the most, in reality the majority of end-users choose either 'Pre-Installed vSAN Ready Nodes' or 'Hardware Appliances'.
Because of the tight integration with the VMware vSphere platform, vSAN itself is very easy to install and configure (just a few clicks in the vSphere GUI).
vSAN 6.7 U1 introduced a Quickstart wizard in the vSphere Client. The Quickstart workflow guides the end user through the deployment process for vSAN and non-vSAN clusters, and covers every aspect of the initial configuration, such as host, network, and vSphere settings. Quickstart also plays a part in the ongoing expansion of a vSAN cluster by allowing an end user to add additional hosts to the cluster.
BYOS = Bring-Your-Own-Server-Hardware
|
|
|
Workload Support
|
|
|
|
|
|
|
Virtualization |
|
|
|
Hypervisor Deployment
Details
|
Virtual Storage Controller
Kernel (Optional for Hyper-V)
The SANsymphony controller is deployed as a pre-configured virtual machine on top of each server that acts as part of the SANsymphony storage solution and commits its internal and/or externally connected storage to the shared resource pool. The Virtual Storage Controller (VSC) can be configured with direct access to the physical disks, so the hypervisor does not impede the I/O flow.
In Microsoft Hyper-V environments the SANsymphony software can also be installed in the Windows Server Root Partition. DataCore does not recommend installing SANsymphony in a Hyper-V guest VM as it introduces virtualization layer overhead and obstructs DataCore Software from directly accessing CPU, RAM and storage. This means that installing SANsymphony in the Windows Server Root Partition is the preferred deployment option. More information about the Windows Server Root Partition can be found here: https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/hyper-v-server/architecture
The DataCore software can be installed on Microsoft Windows Server 2019 or lower (all versions down to Microsoft Windows Server 2012/R2).
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires an upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Virtual Storage Controller (vSphere)
User-Space (Hyper-V)
StarWind VSAN for vSphere is deployed by downloading an OVF template of a preconfigured Linux-based VM. It should be deployed on each VMware vSphere node that takes part in StarWind replication. The OVF contains the StarWind SDS stack, pre-configured and pre-installed.
StarWind VSAN for Hyper-V is deployed by downloading the latest build of StarWind. The installation process will automatically install the required components. The build should be installed on each node that will take part in StarWind replication.
|
Kernel Integrated
vSAN is embedded into the VMware hypervisor. This means it does not require any Controller VMs to be deployed on top of the hypervisor platform.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires an upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
|
|
Hypervisor Compatibility
Details
|
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
'Not qualified' means there is no generic support qualification due to limited market footprint of the product. However, a customer can always individually qualify the system with a specific SANsymphony version and will get full support after passing the self-qualification process.
Only products explicitly labeled 'Not Supported' have failed qualification or have shown incompatibility.
|
VMware vSphere ESXi 6.0-6.7
Microsoft Hyper-V 2012-2019
StarWind is actively working on supporting KVM.
|
VMware vSphere ESXi 7.0 U1
NEW
VMware vSAN is an integral part of the VMware vSphere platform; As such it cannot be used with any other hypervisor platform.
vSAN supports a single hypervisor in contrast to other SDS/HCI products that support multiple hypervisors.
|
|
|
Hypervisor Interconnect
Details
|
iSCSI
FC
The SANsymphony software-only solution supports both iSCSI and FC protocols to present storage to hypervisor environments.
DataCore SANsymphony supports:
- iSCSI (Switched and point-to-point)
- Fibre Channel (Switched and point-to-point)
- Fibre Channel over Ethernet (FCoE): switched, where the host uses a Converged Network Adapter (CNA) and the switch outputs Fibre Channel
|
iSCSI
NFS
SMB3
iSCSI is the native StarWind VSAN storage protocol, as StarWind VSAN provides block-based storage.
NFS can be used as the storage protocol in VMware vSphere environments by leveraging the File Server role that the Windows OS provides.
SMB3 can be used as the storage protocol in Microsoft Hyper-V environments by leveraging the File Server role that the Windows OS provides.
In both VMware vSphere and Microsoft Hyper-V environments, iSCSI is used as a protocol to provide block-level storage access. It allows consolidating storage from multiple servers providing it as highly available storage to target servers. In the case of vSphere, VMware iSCSI Initiator allows connecting StarWind iSCSI devices to ESXi hosts and further create datastores on them. In the case of Hyper-V, Microsoft iSCSI Initiator is utilized to connect StarWind iSCSI devices to the servers and further provide HA storage to the cluster (i.e. CSV).
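As a concrete illustration of the Hyper-V flow just described, the sketch below drives the built-in Microsoft iSCSI initiator cmdlets from Python to discover and connect a StarWind target. This is not StarWind tooling, and the portal address and IQN shown are hypothetical placeholders.
```python
# Hypothetical helper (not StarWind tooling): connects a Windows/Hyper-V host
# to a StarWind iSCSI target using the built-in Microsoft iSCSI initiator
# cmdlets, driven from Python via PowerShell.
import subprocess

def run_ps(command: str) -> str:
    """Run a PowerShell command and return its standard output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Register the StarWind node as a target portal, then connect the target.
# Address and IQN below are placeholders.
run_ps("New-IscsiTargetPortal -TargetPortalAddress 192.168.1.10")
print(run_ps("Get-IscsiTarget"))  # lists the discovered target IQNs
run_ps('Connect-IscsiTarget -NodeAddress '
       '"iqn.2008-08.com.starwindsoftware:target1" -IsPersistent $true')
```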
In virtualized environments, In-Guest iSCSI support is still a hard requirement if one of the following scenarios is pursued:
- Microsoft Failover Clustering (MSFC) in a VMware vSphere environment
- A supported MS Exchange 2013 environment in a VMware vSphere environment
Microsoft explicitly does not support NFS in either scenario.
|
vSAN (incl. WSFC)
VMware uses a proprietary protocol for vSAN.
vSAN 6.1 and upwards support the use of Microsoft Failover Clustering (MSFC). This includes MS Exchange DAG and SQL Always-On clusters when a file share witness quorum is used. The use of a failover clustering instance (FCI) is not supported.
vSAN 6.7 and upwards support Windows Server Failover Clustering (WSFC) by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs.
vSAN 6.7 U3 introduced native support for SCSI-3 Persistent Reservations (PR), which enables Windows Server Failover Clusters (WSFC) to be directly deployed on native vSAN VMDKs. This capability enables migrations from legacy deployments on physical RDMs or external storage protocols to VMware vSAN.
|
|
|
|
Bare Metal |
|
|
|
Bare Metal Compatibility
Details
|
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Any operating system currently not qualified for support can always be individually qualified with a specific SANsymphony version and will get full support after passing the self-qualification process.
SANsymphony provides virtual disks (block storage LUNs) to all of the popular host operating systems that use standard disk drives with 512 byte or 4K byte sectors. These hosts can access the SANsymphony virtual disks via SAN protocols including iSCSI, Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE).
Mainframe operating systems such as IBM z/OS, z/TPF, z/VSE or z/VM are not supported.
SANsymphony itself runs on Microsoft Windows Server 2012/R2 or higher.
|
Microsoft Windows Server 2012/2012R2/2016/2019
StarWind VSAN provides highly available storage over iSCSI between the StarWind nodes and can additionally share storage over iSCSI to any OS that supports the iSCSI protocol.
|
Many
vSAN iSCSI Service enables hosts and physical workloads that reside outside the vSAN cluster to access the vSAN datastore by providing highly available block storage as iSCSI LUNs. The physical workloads can be stand-alone servers, Windows Failover Clusters (including MSSQL) or Oracle RAC.
vSAN iSCSI Service does not support other vSphere or ESXi clients or initiators, third-party hypervisors, or migrations using raw device mapping (RDMs).
vSAN 6.7 and upwards support Windows Server Failover Clustering (WSFC) by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs.
|
|
|
Bare Metal Interconnect
Details
|
iSCSI
FC
FCoE
|
iSCSI
|
iSCSI
vSAN iSCSI block storage acts as one or more targets for Windows or Linux operating systems running on a bare metal (physical) server.
vSAN iSCSI Service does not support other vSphere or ESXi clients or initiators, third-party hypervisors, or migrations using raw device mapping (RDMs).
vSAN 6.7 and upwards support Windows Server Failover Clustering (WSFC) by building WSFC targets on top of vSAN iSCSI targets. vSAN iSCSI target service supports SCSI-3 Persistent Reservations for shared disks and transparent failover for WSFC. WSFC can run on either physical servers or VMs.
vSAN 6.7 U3 enhances the vSAN iSCSI service by allowing dynamic resizing of iSCSI LUNs without disruption.
|
|
|
|
Containers |
|
|
|
Container Integration Type
Details
|
Built-in (native)
DataCore provides its own Volume Plugin for natively providing Docker container support, available on Docker Hub.
DataCore also has a native CSI integration with Kubernetes, available on GitHub.
|
N/A
StarWind Virtual SAN relies on the container support delivered by the hypervisor platform.
|
Built-in (Hypervisor-based, vSAN supported)
VMware vSphere Docker Volume Service (vDVS) technology enables running stateful containers backed by storage technology of choice in a vSphere environment.
vDVS comprises a Docker plugin and a vSphere Installation Bundle (VIB), which bridges the Docker and vSphere ecosystems.
vDVS abstracts underlying enterprise-class storage and makes it available as Docker volumes to a cluster of hosts running in a vSphere environment. vDVS can be used with enterprise-class storage technologies such as vSAN, VMFS, NFS and VVol.
vSAN 6.7 U3 introduced support for VMware Cloud Native Storage (CNS). When Cloud Native Storage is used, persistent storage for containerized stateful applications can be created that are capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives.
|
|
|
Container Platform Compatibility
Details
|
Docker CE/EE 18.03+
Docker EE = Docker Enterprise Edition
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
|
|
Container Platform Interconnect
Details
|
Docker Volume plugin (certified)
The DataCore SDS Docker Volume plugin (DVP) enables Docker containers to use storage persistently; in other words, it enables SANsymphony data volumes to persist beyond the lifetime of a container or a container host. DataCore leverages SANsymphony iSCSI and FC to provide storage to containers. This effectively means that the hypervisor layer is bypassed.
The DataCore SDS Volume plugin (DVP) is officially 'Docker Certified' and can be downloaded from the Docker Hub. The plugin is installed inside the Docker host, which can be either a VM or a bare metal host connected to a SANsymphony storage cluster.
For more information please go to: https://hub.docker.com/plugins/datacore-sds-volume-plugin
The Kubernetes CSI plugin can be downloaded from GitHub. The plugin is automatically deployed as several pods within the Kubernetes system.
For more information please go to: https://github.com/DataCoreSoftware/csi-plugin
Both plugins are supported with SANsymphony 10 PSP7 U2 and later.
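To show how such a volume plugin is typically consumed, here is a sketch using the Docker SDK for Python. The driver name matches the Docker Hub page referenced above; driver-specific options are omitted because their keys are not documented here, so consult the plugin documentation before use.
```python
# Sketch of consuming the DataCore volume plugin through the Docker SDK for
# Python (docker-py). Illustrative only; image and volume names are arbitrary.
import docker

client = docker.from_env()

# Create a named volume backed by a SANsymphony virtual disk.
volume = client.volumes.create(
    name="pgdata",
    driver="datacore-sds-volume-plugin",  # as published on Docker Hub
)

# Attach the volume to a container; the data persists beyond the lifetime
# of both the container and, with shared SAN storage, the container host.
client.containers.run(
    "postgres:13",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={volume.name: {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
```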
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
A StarWind HA VMFS datastore can be used for deploying containers just like a common VMFS datastore.
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
|
|
|
Container Host Compatibility
Details
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The DataCore native plug-ins are container-host centric and as such can be used across all SANsymphony-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, KVM, XenServer, Oracle VM Server) as well as on bare metal platforms.
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
|
|
Container Host OS Compatibility
Details
|
Linux
All Linux versions supported by Docker CE/EE 18.03+ or higher can be used.
|
Linux
Windows 10 or 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
Linux
Windows 10 or Windows Server 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
|
|
Container Orch. Compatibility
Details
|
Kubernetes 1.13+
|
Kubernetes 1.6.5+ on ESXi 6.0+
|
VCP: Kubernetes 1.6.5+ on ESXi 6.0+
CNS: Kubernetes 1.14+
vSAN 6.7 U3 introduced support for VMware Cloud Native Storage (CNS).
When Cloud Native Storage (CNS) is used, persistent storage for containerized stateful applications can be created that are capable of surviving restarts and outages. Stateful containers orchestrated by Kubernetes can leverage storage exposed by vSphere (vSAN, VMFS, NFS, vVols) while using standard Kubernetes volume, persistent volume, and dynamic provisioning primitives.
VCP = vSphere Cloud Provider
CSI = Container Storage Interface
|
|
|
Container Orch. Interconnect
Details
|
Kubernetes CSI plugin
The DataCore Kubernetes CSI plugin integrates SANsymphony storage into Kubernetes for containers to consume.
DataCore SANsymphony provides native industry-standard block protocol storage presented over either iSCSI or Fibre Channel. YAML files can be used to configure Kubernetes for use with DataCore SANsymphony.
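For illustration, the sketch below generates the kind of PersistentVolumeClaim manifest such YAML files contain, using PyYAML. The StorageClass name "sansymphony" is an assumed placeholder; the real class name is defined when the CSI plugin is deployed.
```python
# Minimal sketch of a PersistentVolumeClaim manifest of the kind used to
# request SANsymphony-backed storage in Kubernetes, generated with PyYAML.
import yaml  # pip install pyyaml

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "sansymphony",  # assumed placeholder
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

print(yaml.dump(pvc, sort_keys=False))
```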
|
Kubernetes Volume Plugin
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, VVol, VMFS or NFS datastores.
|
Kubernetes Volume Plugin
The VMware vSphere Container Storage Interface (CSI) Volume Driver for Kubernetes leverages vSAN block storage and vSAN file shares to provide scalable, persistent storage for stateful applications.
Kubernetes contains an in-tree CSI Volume Plug-In that allows the out-of-tree VMware vSphere CSI Volume Driver to gain access to containers and provide persistent-volume storage. The plugin runs in a pod and dynamically provisions requested PersistentVolumes (PVs) using vSAN block storage and vSAN native file shares dynamically provisioned by VMware vSAN File Services.
The VMware vSphere CSI Volume Driver requires Kubernetes v1.14 or later and VMware vSAN 6.7 U3 or later. vSAN File Services requires VMware vSAN/vSphere 7.0.
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, vVol, VMFS or NFS datastores.
|
|
|
|
VDI |
|
|
|
VDI Compatibility
Details
|
VMware Horizon
Citrix XenDesktop
There is no validation check being performed by SANsymphony for VMware Horizon or Citrix XenDesktop VDI platforms. This means that all versions supported by these vendors are supported by DataCore.
|
VMware Horizon
Citrix XenDesktop
Although StarWind supports both VMware and Citrix VDI deployments on top of StarWind VSAN HA storage, there is currently no specific documentation available.
|
VMware Horizon
Citrix XenDesktop
VMware has vSAN related Reference Architecture whitepapers available for both VMware Horizon and Citrix VDI platforms.
|
|
|
|
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
DataCore has not published any recent VDI reference architecture whitepapers. The only VDI-related paper that includes a Login VSI benchmark dates back to December 2010. In that paper a 2-node SANsymphony cluster was able to sustain a load of 220 VMs based on the Login VSI 2.0.1 benchmark.
|
VMware: up to 260 virtual desktops/node
Citrix: up to 220 virtual desktops/node
The load-bearing numbers are based on approximate calculations of the VDI infrastructure that StarWind Virtual SAN can support.
There are no Login VSI benchmark numbers on record for StarWind Virtual SAN as of yet.
|
VMware: up to 200 virtual desktops/node
Citrix: up to 90 virtual desktops/node
VMware Horizon 7.0: Load bearing number is based on Login VSI tests performed on all-flash rack servers using 2vCPU Windows 7 desktops and the Knowledge Worker profile.
Citrix XenDesktop 7.9: Load bearing number is based on Login VSI tests performed on hybrid VxRail 160 appliances using 2vCPU Windows 7 desktops and the Knowledge Worker profile.
For detailed information please read the corresponding whitepapers.
|
|
|
Server Support
|
|
|
|
|
|
|
Server/Node |
|
|
|
Hardware Vendor Choice
Details
|
Many
SANsymphony runs on all server hardware that supports x86-64.
DataCore provides minimum requirements for hardware resources.
|
Many
StarWind Virtual SAN is a hardware-agnostic solution and does not have a strict HCL or supported hardware platforms.
|
Many
The following server hardware vendors provide vSAN Ready Nodes: Cisco, DELL, Fujitsu, Hitachi, HPE, Huawei, Lenovo, Supermicro.
|
|
|
|
Many
SANsymphony runs on all server hardware that supports x86-64.
DataCore provides minimum requirements for hardware resources.
|
Many
StarWind Virtual SAN is a hardware-agnostic solution and does not have a strict HCL or supported hardware platforms.
|
Many
The following server hardware vendors provide vSAN Ready Nodes: Cisco, DELL, Fujitsu, Hitachi, HPE, Huawei, Lenovo, Supermicro.
|
|
|
|
1, 2 or 4 nodes per chassis
Note: Because SANsymphony is mostly hardware agnostic, customers can opt for multiple server densities.
Note: In most cases 1U or 2U building blocks are used.
Supermicro also offers a 2U chassis that can house 4 compute nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power, heat and cooling is not necessarily reduced in the same way, and that the concentration of nodes can potentially pose other challenges.
|
1, 2 or 4 nodes per chassis
StarWind Virtual SAN is a hardware-agnostic solution and does not have strict HCL or supported hardware platforms.
|
1, 2 or 4 nodes per chassis
Because vSAN has an extensive HCL, customers can opt for multiple server densities.
Note: vSAN Ready Nodes are mostly based on standard 2U rack server configurations.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power, heat and cooling is not necessarily reduced in the same way, and that the concentration of nodes can potentially pose other challenges.
|
|
|
|
Yes
DataCore does not explicitly recommend using different hardware platforms, but as long as the hardware specs are comparable, there is no reason to insist on a single hardware vendor. This is proven in practice: some customers run their production DataCore environment on comparable servers from different vendors.
|
Yes
Although StarWind does not recommend asymmetric configurations (mixing nodes with different CPUs, storage types and networks in the same replica) for Virtual SAN environments, different servers can be mixed in a single solution if the customer understands the possible performance implications (any solution that provides active-active storage replication performs at the speed of the slowest component).
|
Yes
Mixing is allowed, although this is not advised within a single vSAN cluster for a consistent performance experience.
|
|
|
|
Components |
|
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to:
https://www.starwindsoftware.com/system-requirements
|
Flexible
For each hardware component the vSAN Hardware Quick Reference Guide and the VMware vSAN Ready Nodes document provide sizing guidelines.
VMware vSphere supports 2nd generation Intel Xeon Scalable (Cascade Lake) processors in version 6.7U1 and upwards.
|
|
|
|
Flexible
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to:
https://www.starwindsoftware.com/system-requirements
|
Flexible
For each hardware component the vSAN Hardware Quick Reference Guide and the VMware vSAN Ready Nodes document provide sizing guidelines.
|
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to:
https://www.starwindsoftware.com/system-requirements
|
Flexible: number of disks + capacity
For each hardware component the vSAN Hardware Quick Reference Guide and the VMware vSAN Ready Nodes document provide sizing guidelines.
vSAN 7.0 provides support for 32TB physical capacity drives. This extends the logical capacity up to 1PB when deduplication and compression is enabled.
|
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to:
https://www.starwindsoftware.com/system-requirements
|
Flexible
For each hardware component the vSAN Hardware Quick Reference Guide and the VMware vSAN Ready Nodes document provide sizing guidelines.
|
|
|
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
DataCore SANsymphony supports the hardware that is on the hypervisor HCL.
VMware vSphere 6.5U1 officially supports several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
Windows Server 2016 supports two graphics virtualization technologies available with Hyper-V to leverage GPU hardware:
- Discrete Device Assignment
- RemoteFX vGPU
More information is provided here: https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-graphics-virtualization
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
StarWind Virtual SAN supports the hardware that is on the hypervisor HCL.
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
VMware vSphere 6.5U1 and up officially support several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
|
|
|
Scaling |
|
|
|
|
CPU
Memory
Storage
GPU
The SANsymphony platform allows for the expansion of all server hardware resources.
|
CPU
Memory
Storage
GPU
StarWind Virtual SAN allows expansion of all server hardware resources.
|
CPU
Memory
Storage
GPU
VMware allows for the on-the-fly (non-disruptive) addition/removal of individual disks in existing disk groups.
There is a maximum of 5 disk groups (flash cache device + capacity devices) on an individual ESXi host participating in a vSAN cluster. In a hybrid configuration each disk group consists of 1 flash device plus a maximum of 7 capacity devices. This totals 40 devices per ESXi host, although an average rack server has room for only up to 24 devices.
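The per-host device maximum follows directly from the stated limits; a quick sketch of the arithmetic:
```python
# Quick arithmetic behind the per-host device maximum stated above.
DISK_GROUPS_MAX = 5         # disk groups per ESXi host in a vSAN cluster
CACHE_PER_GROUP = 1         # one flash cache device per disk group
CAPACITY_PER_GROUP_MAX = 7  # maximum capacity devices per disk group

devices_per_host = DISK_GROUPS_MAX * (CACHE_PER_GROUP + CAPACITY_PER_GROUP_MAX)
print(devices_per_host)  # 40
```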
|
|
|
|
Storage+Compute
Compute-only
Storage-only
Storage+Compute: In a single-layer deployment existing SANsymphony clusters can be expanded by adding additional nodes running SANsymphony, which adds additional compute and storage resources to the shared pool. In a dual-layer deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded simultaneously.
Compute-only: Because SANsymphony leverages virtual block volumes (LUNs), storage can be presented to hypervisor hosts not participating in the SANsymphony cluster. This is also beneficial to migrations, since it allows for online storage vMotions between SANsymphony and non-SANsymphony storage platforms.
Storage-only: In a dual-layer or mixed deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded independent from each other.
|
Storage+Compute
Compute-only
Storage-only
In the case of storage-only scale-out, StarWind Virtual SAN storage nodes are based on Windows Server bare metal with no hypervisor software (e.g. VMware ESXi) installed. The StarWind software runs as a Windows-native application and provides storage to the hypervisor hosts.
|
Compute+storage
Compute-only (vSAN VMKernel)
Storage+Compute: Existing vSAN clusters can be expanded by adding additional vSAN nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: When the vSAN VMkernel is installed and enabled on a host that is not contributing storage but resides in the same cluster as contributing hosts, vSAN datastores can be presented to these hypervisor hosts as well. This is also beneficial to storage migrations as it allows for online storage vMotions between vSAN storage and non-vSAN storage platforms. The use of the vSAN VMkernel requires a vSAN license for this host.
Storage-only: N/A; A vSAN node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
|
|
|
|
1-64 nodes in 1-node increments
There is a maximum of 64 nodes within a single cluster. Multiple clusters can be managed through a single SANsymphony management instance.
|
2-64 nodes in 1-node increments
The 64-node limit applies to both VMware vSphere and Microsoft Hyper-V environments.
|
2-64 nodes in 1-node increments
The hypervisor cluster scale-out limits still apply: 64 hosts for VMware vSphere.
|
|
|
Small-scale (ROBO)
Details
|
2 Node minimum
DataCore prevents split-brain scenarios by always having an active-active configuration of SANsymphony with a primary and an alternate path.
If the SANsymphony servers are fully operational but do not see each other, the application host will still be able to read and write data via the primary path (no switch to secondary). The mirroring is interrupted because of the lost connection and the administrator is informed accordingly. All writes are stored on the locally available storage (primary path) and all changes are tracked. As soon as the connection between the SANsymphony servers is restored, the mirror recovers automatically based on these tracked changes.
Dual updates due to misconfiguration are detected automatically, and data corruption is prevented by freezing the vDisk and waiting for user input to resolve the conflict. Possible resolutions are declaring one side of the mirror to be the new active data set and discarding all tracked changes on the other side, or splitting the mirror and manually merging the two data sets into a third vDisk.
|
1 Node minimum
StarWind Virtual SAN can be used as a standalone iSCSI target providing storage from one node (no HA). For HA, two nodes are required. StarWind Virtual SAN can be further scaled by adding more storage or new nodes to the storage cluster.
|
2 Node minimum
The use of the witness virtual appliance eliminates the requirement of a third physical node in vSAN ROBO deployments. vSAN for ROBO edition licensing is best suited for this type of deployment.
|
|
|
Storage Support
|
|
|
|
|
|
|
General |
|
|
|
|
Block Storage Pool
SANsymphony only serves block devices to the supported OS platforms.
|
Block Storage Pool
StarWind VSAN only serves block devices as storage volumes to the supported OS platforms.
The underlying storage is first aggregated with hardware or software RAID. Then, the storage is replicated by StarWind at the block level across 2 or 3 nodes and further provided as a single pool (single StarWind virtual device) or as multiple pools (multiple StarWind devices).
|
Object Storage File System (OSFS)
|
|
|
|
Partial
DataCore's core approach is to provide storage resources to the applications without having to worry about data locality. But if data locality is explicitly requested, the solution can partially be designed that way by configuring the first instance of all data to be stored on locally available storage (primary path) and the mirrored instance on the alternate path (secondary path). Furthermore, every hypervisor host can have a local preferred path, indicated by the ALUA path preference.
By default data does not automatically follow the VM when the VM is moved to another node. However, virtual disks can be relocated on the fly to another DataCore node without losing I/O access, although the relocation takes some time due to the data copy operations required. This kind of relocation is usually performed manually, but DataCore allows such tasks to be automated and integrated with VM orchestration, for example through PowerShell (a sketch follows below).
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors that choose not to use data locality advocate that the additional network latency is negligible.
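A hedged sketch of the scripted relocation mentioned above, using DataCore's PowerShell cmdlets. Connect-DcsServer and Get-DcsVirtualDisk ship with the SANsymphony cmdlet library (install path and parameter names can differ per version); Move-DcsVirtualDisk is a hypothetical stand-in for the relocation cmdlet of the installed SSY release.

# Load the DataCore cmdlet library (install path may differ).
Import-Module "C:\Program Files\DataCore\SANsymphony\DataCore.Executive.Cmdlets.dll"

Connect-DcsServer -Server "ssy-node1" -UserName Administrator -Password $pw

# Pick the virtual disk that should follow a migrated VM...
$vd = Get-DcsVirtualDisk | Where-Object { $_.Alias -eq "vm42-data" }

# ...and relocate it to a pool on the node now hosting the VM.
# NOTE: hypothetical cmdlet name; check the SSY cmdlet reference.
Move-DcsVirtualDisk -VirtualDisk $vd -Pool "ssy-node2-pool1"

Disconnect-DcsServer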
|
Partial
StarWind's core approach is to keep data close (local) to the VM in order to avoid slow data transfers through the network and achieve the highest performance the setup can provide. The solution is designed to store the first instance of all data on locally available storage (primary path) and the mirrored instance on the alternate path (secondary path). Furthermore, every hypervisor host can have a local preferred path, indicated by the ALUA path preference. Data does not automatically follow the VM when the VM is moved to another node.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors that choose not to use data locality advocate that the additional network latency is negligible.
|
Partial
Each host within a vSAN cluster has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example. Apart from the read cache, vSAN only uses data locality in stretched clusters to avoid high latency.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors that choose not to use data locality advocate that the additional network latency is negligible.
|
|
|
|
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
VoV = Volume-on-Volume; The Virtual Storage Controller uses virtual disks provided by the hypervisor platform.
|
Direct-attached (Raw)
SAN or NAS
Direct-attached: StarWind can take control of formatted disks (NTFS). StarWind software can also present raw unformatted disks over the SCSI Pass-Through Interface, which enables remote initiator clients to use any type of hard drive (PATA/SATA/RAID).
External SAN/NAS Storage: SAN/NAS systems can be connected over Ethernet and used for StarWind Virtual SAN as long as they are provided as block storage (iSCSI). If data must be replicated between NAS systems, two NAS systems should be connected to the nodes used for StarWind Virtual SAN.
|
Direct-attached (Raw)
Remote vSAN datastores (HCI Mesh)
NEW
The software takes ownership of the unformatted physical disks available inside the host.
VMware vSAN 7.0 U1 introduces the HCI Mesh concept. With VMware HCI Mesh a vSAN cluster can leverage the storage of remote vSAN clusters for hosting VMs without sacrificing important features such as HA and DR. Up to 5 remote vSAN datastores can be mounted by a single vSAN cluster. HCI Mesh works by using the existing vSAN VMkernel ports and transport protocols. It is fully software-based and does not require any specialized hardware.
|
|
|
|
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
NEW
|
Magnetic-only
Hybrid (Flash+Magnetic)
All-Flash
|
Hybrid (Flash+Magnetic)
All-Flash
The vSAN Advanced and Enterprise editions offer the All-Flash related features Erasure Coding and Data Reduction (Deduplication+Compression).
Hybrid hosts cannot be mixed with All-Flash hosts in the same vSAN cluster.
vSAN 7.0 provides support for 32TB physical capacity drives. This extends the logical capacity up to 1PB when deduplication and compression is enabled.
|
|
|
Hypervisor OS Layer
Details
|
SD, USB, DOM, SSD/HDD
|
SD, USB, DOM, SSD/HDD
|
SD, USB, DOM, HDD or SSD
|
|
|
|
Memory |
|
|
|
|
DRAM
|
DRAM
DRAM can be used for caching in a write-back or write-through mode. Additionally, it can be used for creating StarWind RAM disks.
For further information, please visit:
https://www.starwindsoftware.com/high-performance-ram-disk-emulator
|
DRAM
Each host within a vSAN cluster has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example.
|
|
|
|
Read/Write Cache
DataCore SANsymphony accelerates reads and writes by leveraging the powerful processors and large DRAM memory inside current generation x86-64bit servers on which it runs. Up to 8 Terabytes of cache memory may be configured on each DataCore node, enabling it to perform at solid-state disk speeds without the expense. SANsymphony uses a common cache pool to store both reads and writes.
SANsymphony read caching essentially recognizes I/O patterns to anticipate which blocks to read next into RAM from the physical back-end disks. That way the next request can be served from memory.
When hosts write to a virtual disk, the data first goes into DRAM memory and is later destaged to disk, often grouped with other writes to minimize delays when storing the data to the persistent disk layer. Written data stays in cache for re-reads.
The cache is cleaned on a first-in-first-out (FIFO) basis. Segment overwrites are performed on the oldest data first for both read- and write-cache segment requests.
SANsymphony prevents write data from flooding the entire cache. If the amount of write data rises above a certain watermark percentage of the entire cache, the write cache is temporarily switched to write-through mode in order to regain balance. This happens fully automatically and is self-adjusting, per virtual disk as well as on a global level.
|
Read/Write Cache
StarWind Virtual SAN accelerates reads and writes by leveraging conventional RAM.
The memory cache is filled mainly during write operations. During read operations, data enters the cache only if the cache contains empty memory blocks, or lines that were allocated for those entries earlier and have not yet been fully exhausted.
StarWind VSAN supports two Memory (L1 Cache) Policies:
1. Write-Back, caches writes in DRAM only and acknowledges back to the originator when complete in DRAM.
2. Write-Through, caches writes in both DRAM and underlying storage, and acknowledges back to the originator when complete in the underlying storage.
This means that caching writes exclusively in conventional memory is optional. When the Write-Through policy is used, DRAM is primarily used for caching reads (see the StarWindX sketch below).
To change the cache size, the StarWind service must first be stopped on one node, the cache setting changed, and the service started again; the same process is then repeated on the partner node. This allows VMs to stay up and running during cache changes.
In the majority of use cases, there is no need to assign L1 cache for all-flash storage arrays.
Note: When using the Write-Back policy for DRAM, UPS units have to be installed to ensure a correct shutdown of the StarWind Virtual SAN nodes. This prevents the loss of cached data if a power outage occurs. The UPS capacity must cover the time required for flushing the cached data to the underlying storage.
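A hedged StarWindX sketch of the L1 policies described above: it creates an image-file device with a RAM cache in write-through mode. New-SWServer, New-ImageFile and New-ImageDevice follow the StarWindX sample scripts shipped with the product; the cache parameters (-CacheMode, -CacheSizeMB) are assumptions and may be named differently in a given StarWindX version.

Import-Module StarWindX

$server = New-SWServer -host 127.0.0.1 -port 3261 -user root -password starwind
$server.Connect()
try {
    # Create a 10 GB image file and expose it as a device with a 1 GB
    # write-through L1 (RAM) cache; cache parameters are assumed names.
    New-ImageFile -server $server -path "My Computer\C\starwind" -fileName "data1" -size 10240
    New-ImageDevice -server $server -path "My Computer\C\starwind" -fileName "data1" `
        -sectorSize 512 -CacheMode "wt" -CacheSizeMB 1024
}
finally {
    $server.Disconnect()
}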
|
Read Cache
|
|
|
|
Up to 8 TB
The actual size that can be configured depends on the server hardware that is used.
|
Configurable
The size of the L1 cache should equal the size of the average working data set.
There are no default or maximum values for the RAM cache as such. The maximum size that can be assigned to the StarWind RAM cache is limited by the RAM available to the system; make sure that other running applications have enough RAM left for their operations.
Additionally, the total amount of L1 cache assigned influences the time required for system shutdown, so overprovisioning the L1 cache can cause StarWind service interruption and the loss of cached data. The minimum size that can be assigned for the RAM cache in either write-back or write-through mode is 1MB. StarWind recommends assigning a RAM cache size that matches the size of the working data set.
|
Non-configurable
Each host within a vSAN cluster has a local memory read cache that is 0.4% of the host's memory, up to 1GB. The read cache optimizes VDI I/O flows, for example.
|
|
|
|
Flash |
|
|
|
|
SSD, PCIe, UltraDIMM, NVMe
|
SSD, NVMe
|
SSD, PCIe, UltraDIMM, NVMe
VMware vSAN 6.6 offers support for Intel Optane (3D XPoint technology) NVMe SSDs.
|
|
|
|
Persistent Storage
SANsymphony supports TRIM/UNMAP capabilities for solid-state drives (SSDs) in order to reduce wear on those devices and optimize performance.
|
Read/Write Cache (hybrid)
Persistent storage (all-flash)
StarWind Virtual SAN supports a single Flash (L2 Cache) Policy:
1. Write-Through, caches writes in both Flash (SSD/NVMe) and the underlying storage, and acknowledges back to the originator when complete in the underlying storage.
With the write-through policy, new blocks are written both to cache layer and the underlying storage synchronously. However, in this mode, only the blocks that are read most frequently are kept in the cache layer, accelerating read operations. This is the only mode available for StarWind L2 cache.
With Write-Through, cached data does not need to be offloaded to the backing store when a device is removed or the service is stopped.
In the majority of use cases, if L1 cache is already assigned, there is no need to configure L2 cache.
A StarWind Virtual SAN solution with L2 cache configured should have more RAM available, since L2 cache needs 6.5 GB of RAM per 1 TB of L2 cache to store metadata; a 2 TB L2 cache, for example, requires roughly 13 GB of RAM.
|
Hybrid: Read/Write Cache
All-Flash: Write Cache + Storage Tier
In all vSAN configurations 1 separate SSD per disk group is used for caching purposes. The other disks in the disk group are used for persistent storage of data.
For all-flash configurations, the flash device(s) in the cache tier are used for write caching only (no read cache) as read performance from the capacity flash devices is more than sufficient.
Two different grades of flash devices are commonly used in an all-flash vSAN configuration: Lower capacity, higher endurance devices for the cache layer and more cost effective, higher capacity, lower endurance devices for the capacity layer. Writes are performed at the cache layer and then de-staged to the capacity layer, only as needed.
|
|
|
|
No limit, up to 1 PB per device
The definition of a device here is a raw flash device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
No limitations
The definition of a device here is a raw flash device that is presented to Virtual SAN as either a SCSI LUN or a SCSI disk.
|
Hybrid: 1-5 Flash devices per node (1 per disk group)
All-Flash: 40 Flash devices per node (8 per disk group, 1 for cache and 7 for capacity)
VMware vSAN 7.0 provides support for large flash devices (up to 32TB).
|
|
|
|
Magnetic |
|
|
|
|
SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
In this case SATA = NL-SAS = MDL SAS
|
SAS or SATA
|
Hybrid: SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
VMware vSAN supports the use of 512e drives. 512e magnetic hard disk drives (HDDs) use a physical sector size of 4096 bytes, but the logical sector size emulates a sector size of 512 bytes. Larger sectors enable the integration of stronger error correction algorithms to maintain data integrity at higher storage densities.
VMware vSAN 6.7 introduced support for 4K native (4Kn) mode.
VMware vSAN 7.0 provides support for 32TB physical capacity drives.
|
|
|
|
Persistent Storage
|
Persistent Storage
|
Persistent Storage
|
|
|
Magnetic Capacity
Details
|
No limit, up to 1 PB (per device)
The definition of a device here is a raw disk device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
No limitations
|
1-35 SAS/SATA HDDs per host/node
|
|
|
Data Availability
|
|
|
|
|
|
|
Reads/Writes |
|
|
|
Persistent Write Buffer
Details
|
DRAM (mirrored)
If caching is turned on (default=on), any write is only acknowledged back to the host after it has been successfully stored in the DRAM memory of two separate physical SANsymphony nodes. Based on de-staging algorithms, each of the nodes eventually copies the written data kept in DRAM to the persistent disk layer. Because DRAM outperforms both flash and spinning disks, applications experience much faster write behavior.
By default, the limit of dirty write data allowed per Virtual Disk is 128MB. This limit can be adjusted, but in practice there has never been a reason to do so. Individual Virtual Disks can be configured to act in write-through mode, which means that the dirty-write-data limit is set to 0MB, so effectively the data is written directly to the persistent disk layer.
DataCore recommends that all servers running SANsymphony software are UPS protected to avoid data loss through unplanned power outages. Whenever a power loss is detected, the UPS automatically signals this to the SANsymphony node and write behavior is switched from write-back to write-through mode for all Virtual Disks. As soon as the UPS signals that power has been restored, the write behavior is switched to write-back again.
|
DRAM (mirrored)
Flash Layer (SSD, NVMe)
|
Flash Layer (SSD, PCIe, NVMe)
|
|
|
Disk Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. With 3-way mirroring this effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When an active copy in a 3-way mirror fails, the backup copy is promoted to the active state. When the failed mirror copy is repaired, it automatically assumes the backup state. Roles can be changed manually on the fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks without momentary breaks in the mirror paths.
|
1-2 Replicas (2N-3N)
+ Hardware RAID (1, 5, 10)
StarWind Virtual SAN replicates the storage to protect data within the cluster. In addition, hardware or software RAID is implemented to enhance the robustness of individual nodes.
Replicas+Hardware RAID: Before any write is acknowledged to the host, it is synchronously replicated to one or two designated partner nodes. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. With 3N one instance of data that is written is stored on the local node, and two other instances of that data are stored on designated partner nodes in the cluster. When a physical disk fails, hardware RAID maintains data availability.
|
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N), Host Pinning (1N)
All-Flash: Erasure Coding (RAID5-6)
VMware's implementation of Erasure Coding only applies to All-Flash configurations and is similar to RAID-5 and RAID-6 protection. RAID-5 requires a minimum of 4 nodes (3+1) and protects against a single node failure; RAID-6 requires a minimum of 6 nodes and protects against two node failures. Erasure Coding is only available in the vSAN Enterprise and Advanced editions, and is only configurable for All-Flash configurations.
VMware's implementation of replicas is called NumberOfFailuresToTolerate (0, 1, 2 or 3). It applies to both disk and node failures. Optionally, nodes can be assigned to a logical grouping called Failure Domains. The use of 0 replicas within a single site is only available when using Stretched Clustering, which is only available in the Enterprise editions. A PowerCLI sketch of policy assignment follows at the end of this entry.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. For both instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When an entire node fails, VMs need to be restarted and data is read from the surviving instances on other nodes within the vSAN cluster instead. At the same time data re-replication of the associated replicas needs to occur in order to restore the desired NumberOfFailuresToTolerate.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configuration.
vSAN provides increased support for locator LEDs on vSAN disks. Gen-9 HPE controllers in pass-through mode support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives.
vSAN 6.7 introduced the Host Pinning storage policy that can be used for next-generation, shared-nothing applications. When using Host Pinning, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment.
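The protection levels above are applied through SPBM storage policies. The following PowerCLI sketch defines a policy with NumberOfFailuresToTolerate=2 using RAID-5/6 erasure coding and assigns it to a VM; the vCenter, policy and VM names are placeholders, and the capability values should be verified against the vSAN version in use.

Import-Module VMware.PowerCLI
Connect-VIServer -Server vcenter.lab.local

# FTT=2 with RAID-5/6 erasure coding (All-Flash only, min. 6 nodes).
$rules = @(
    New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 2
    New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.replicaPreference") `
                 -Value "RAID-5/6 (Erasure Coding) - Capacity"
)
$policy = New-SpbmStoragePolicy -Name "FTT2-EC" -AnyOfRuleSets (New-SpbmRuleSet -AllOfRules $rules)

# Apply the policy to the VM home object (disks can be handled the
# same way via the -HardDisk parameter of Get-SpbmEntityConfiguration).
Get-SpbmEntityConfiguration -VM (Get-VM "db01") |
    Set-SpbmEntityConfiguration -StoragePolicy $policy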
|
|
|
Node Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1)
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. With 3-way mirroring this effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When an active copy in a 3-way mirror fails, the backup copy is promoted to the active state. When the failed mirror copy is repaired, it automatically assumes the backup state. Roles can be changed manually on the fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks without momentary breaks in the mirror paths.
|
1-2 Replicas (2N-3N)
Replicas: Before any write is acknowledged to the host, it is synchronously replicated to one or two designated partner nodes. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. With 3N one instance of data that is written is stored on the local node, and two other instances of that data are stored on designated partner nodes in the cluster.
|
Hybrid/All-Flash: 0-3 Replicas (RAID1; 1N-4N), Host Pinning (1N)
All-Flash: Erasure Coding (RAID5-6)
VMware's implementation of Erasure Coding only applies to All-Flash configurations and is similar to RAID-5 and RAID-6 protection. RAID-5 requires a minimum of 4 nodes (3+1) and protects against a single node failure; RAID-6 requires a minimum of 6 nodes and protects against two node failures. Erasure Coding is only available in the vSAN Enterprise and Advanced editions, and is only configurable for All-Flash configurations.
VMware's implementation of replicas is called NumberOfFailuresToTolerate (0, 1, 2 or 3). It applies to both disk and node failures. Optionally, nodes can be assigned to a logical grouping called Failure Domains. The use of 0 replicas within a single site is only available when using Stretched Clustering, which is only available in the Enterprise editions.
Replicas: Before any write is acknowledged to the host, it is synchronously replicated on an adjacent node. All nodes in the cluster participate in replication. This means that with 2N one instance of data that is written is stored on one node and another instance of that data is stored on a different node in the cluster. For both instances this happens in a fully distributed manner, in other words, there is no dedicated partner node. When an entire node fails, VMs need to be restarted and data is read from the surviving instances on other nodes within the vSAN cluster instead. At the same time data re-replication of the associated replicas needs to occur in order to restore the desired NumberOfFailuresToTolerate.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configuration.
vSAN provides increased support for locator LEDs on vSAN disks. Gen-9 HPE controllers in pass-through mode support vSAN activation of locator LEDs. Blinking LEDs help to identify and isolate specific drives.
vSAN 6.7 introduced the Host Pinning storage policy that can be used for next-generation, shared-nothing applications. When using Host Pinning, vSAN maintains a single copy of the data and stores the data blocks local to the ESXi host running the VM. This policy is offered as a deployment choice for Big Data (Hadoop, Spark), NoSQL, and other such applications that maintain data redundancy at the application layer. vSAN Host Pinning has specific requirements and guidelines that require VMware validation to ensure proper deployment.
|
|
|
Block Failure Protection
Details
|
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk. However, block failure protection is in most cases irrelevant, as 1-node appliances are used as building blocks.
SANsymphony works on an N+1 redundancy design, allowing any node to acquire any other node as a redundancy peer per virtual device. Peers are replaceable/interchangeable on a per-Virtual Disk level.
|
Not relevant (usually 1-node appliances)
In a 3-node cluster, StarWind can provide 3-way mirroring, allowing the cluster to withstand a failure of two nodes without losing data accessibility. In clusters of more than 3 nodes, a grid architecture can be configured to withstand the failure of two nodes.
|
Failure Domains
Block failure protection can be achieved by assigning nodes in the same appliance to different Failure Domains.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configurations.
|
|
|
Rack Failure Protection
Details
|
Manual configuration
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk.
|
N/A
|
Failure Domains
Rack failure protection can be achieved by assigning nodes within the same rack to different Failure Domains.
Failure Domains: When using Failure Domains, one instance of the data is kept within the local Failure Domain and another instance of the data is kept within another Failure Domain. By applying Failure Domains, rack failure protection can be achieved as well as site failure protection in stretched configurations.
|
|
|
Protection Capacity Overhead
Details
|
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
|
Replica (2N) + RAID1/10: 200%
Replica (2N) + RAID5: 125-133%
Replica (3N) + RAID1/10: 300%
Replica (3N) + RAID5: 225-233%
|
Host Pinning (1N): Dependent on # of VMs
Replicas (2N): 100%
Replicas (3N): 200%
Erasure Coding (RAID5): 33%
Erasure Coding (RAID6): 50%
RAID5: The stripe size used by vSAN for RAID5 is 3+1 (33% capacity overhead for data protection) and is independent of the cluster size. The minimum cluster size for RAID5 is 4 nodes.
RAID6: The stripe size used by vSAN for RAID6 is 4+2 (50% capacity overhead for data protection) and is independent of the cluster size. The minimum cluster size for RAID6 is 6 nodes.
RAID5/6 can only be leveraged in vSAN All-flash configurations because of I/O amplification.
|
|
|
Data Corruption Detection
Details
|
N/A (hardware dependent)
SANsymphony fully relies on the hardware layer to protect data integrity. This means that the SANsymphony software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
N/A (hardware dependent)
StarWind Virtual SAN fully relies on the hardware layer to protect data integrity. This means that the StarWind software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
Read integrity checks
Disk scrubbing (software)
End-to-end checksums provide automatic detection and resolution of silent disk errors. Creation of checksums is enabled by default, but can be disabled through policy on a per-VM (or per-virtual-disk) basis if desired. In case of a checksum verification failure, the data is fetched from another copy.
The disk scrubbing process runs in the background.
|
|
|
|
Points-in-Time |
|
|
|
|
Built-in (native)
|
N/A
StarWind Virtual SAN does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
Built-in (native)
VMware vSAN uses the 'vSANSparse' snapshot format that leverages VirstoFS technology as well as in-memory metadata cache for lookups. vSANSparse offers greatly improved performance when compared to previous virtual machine snapshot implementations.
|
|
|
|
Local + Remote
SANsymphony snapshots are always created on one side only. However, SANsymphony allows you to create a snapshot for the data on each side by configuring two snapshot schedules, one for the local volume and one for the remote volume. Both snapshot entities are independent and can be deleted independently allowing different retention times if needed.
There is also the capability to pair the snapshot feature with asynchronous replication, which provides the ability to keep a long-distance remote copy at a third site with its own retention time.
|
N/A
StarWind Virtual SAN does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
Local
|
|
|
Snapshot Frequency
Details
|
1 Minute
The snapshot lifecycle can be automatically configured using the integrated Automation Scheduler.
|
N/A
StarWind Virtual SAN does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
GUI: 1 hour
vSAN snapshots are invoked using the existing snapshot options in the VMware vSphere GUI.
To create a snapshot schedule using the vCenter (Web) Client: Click on a VM, then inside the Monitoring tab select Tasks & Events, Scheduled Tasks, 'Take Snapshots…'.
A single snapshot schedule allows a minimum frequency of 1 hour. Manual snapshots can be taken at any time.
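Manual snapshots can also be scripted with PowerCLI instead of the Web Client, as in this minimal sketch (vCenter, VM and snapshot names are placeholders):

Import-Module VMware.PowerCLI
Connect-VIServer -Server vcenter.lab.local

# Crash-consistent snapshot before a change window:
New-Snapshot -VM (Get-VM "app01") -Name "pre-change"

# Quiesced (VSS) snapshot for file-system consistency:
New-Snapshot -VM (Get-VM "db01") -Name "backup-window" -Quiesce

# Prune snapshots older than 7 days:
Get-Snapshot -VM "app01" | Where-Object { $_.Created -lt (Get-Date).AddDays(-7) } |
    Remove-Snapshot -Confirm:$false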
|
|
|
Snapshot Granularity
Details
|
Per VM (VVols) or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block storage, the platform is capable of attaining per-VM granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is certified for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
N/A
StarWind Virtual SAN does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
Per VM
|
|
|
|
Built-in (native)
DataCore SANsymphony incorporates Continuous Data Protection (CDP) and leverages this as an advanced backup mechanism. As the term implies, CDP continuously logs and timestamps I/Os to designated virtual disks, allowing end-users to restore the environment to an arbitrary point-in-time within that log.
Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage. Several of these markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent or crash-consistent restore point from just before the incident occurred.
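A hedged sketch of such a scripted marker: quiesce the application, then call the DataCore marker cmdlet. Connect-DcsServer ships with the SANsymphony cmdlet library (install path and parameter names can differ per version); Add-DcsRollbackMarker is a hypothetical stand-in for the marker cmdlet of the installed SSY release, and the quiesce step is a placeholder for application-specific freeze/thaw logic.

# Load the DataCore cmdlet library (install path may differ).
Import-Module "C:\Program Files\DataCore\SANsymphony\DataCore.Executive.Cmdlets.dll"
Connect-DcsServer -Server "ssy-node1" -UserName Administrator -Password $pw

Invoke-Command -ComputerName "sql01" -ScriptBlock {
    # Placeholder: quiesce the application and flush its buffers here,
    # e.g. via VSS or the application's own freeze/thaw commands.
}

# Stamp an application-consistent restore point in the CDP log.
# NOTE: hypothetical cmdlet name; check the SSY cmdlet reference.
Add-DcsRollbackMarker -VirtualDisk "sql01-data" -Name "nightly-consistent"

Disconnect-DcsServer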
|
External
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
External (vSAN Certified)
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is still in beta and was originally expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup & Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
|
|
Local or Remote
All available storage within the SANsymphony group can be configured as targets for back-up jobs.
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is still in beta and was originally expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup & Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
|
|
Continuously
As Continuous Data Protection (CDP) is leveraged, I/Os are logged and timestamped in a continuous fashion, so end-users can restore to virtually any point in time.
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is still in beta and was originally expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup & Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
|
Backup Consistency
Details
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
By default CDP creates crash consistent restore points. Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage.
Several CDP Rollback Markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent, filesystem-consistent or crash-consistent restore point from (just) before the incident occurred.
In a VMware vSphere environment, the DataCore VMware vCenter plug-in can be used to create snapshot schedules for datastores and select the VMs that you want to enable VSS filesystem/application consistency for.
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is still in beta and was originally expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup & Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
|
Restore Granularity
Details
|
Entire VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block storage, the platform is capable of attaining per-VM granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
When configuring the virtual environment as described above, effectively VM-restores are possible.
For file-level restores, a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many rollback points for the same Virtual Disk can coexist simultaneously, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
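A hedged sketch of that flow: materialize a rollback point and expose it to a utility host from which the file can be copied back. Add-DcsRollback and Serve-DcsVirtualDisk are hypothetical stand-ins for the corresponding cmdlets of the installed SSY release.

# Load the DataCore cmdlet library (install path may differ).
Import-Module "C:\Program Files\DataCore\SANsymphony\DataCore.Executive.Cmdlets.dll"
Connect-DcsServer -Server "ssy-node1" -UserName Administrator -Password $pw

# Materialize a rollback point (does not alter the original vdisk).
# NOTE: hypothetical cmdlet names; check the SSY cmdlet reference.
$rb = Add-DcsRollback -VirtualDisk "vm42-data" -TimeStamp "2021-04-01 03:00"

# Serve it to a utility host; mount it there and copy the file back.
Serve-DcsVirtualDisk -VirtualDisk $rb -Machine "restore-host"

Disconnect-DcsServer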
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
N/A
VMware vSAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing data protection solutions like Dell EMC Avamar Virtual Edition (AVE) backup software or any other vSphere compatible 3rd party backup application. AVE is not part of the licenses bundled with VxRail and thus needs to be purchased separately. AVE requires the deployment of virtual backup appliances on top of vSphere.
No specific integration exists between VMware vSAN and Dell EMC AVE.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
VMware is working on native vSAN data protection, which is still in beta and was originally expected to go live in the first half of 2019.
The following 3rd party Data Protection partner products are certified with vSAN 6.7:
- Cohesity DataProtect 6.1
- CommVault 11
- Dell EMC Avamar 18.1
- Dell EMC NetWorker 18.1
- Hitachi Data Instance Director 6.7
- Rubrik Cloud Data Management 4.2
- Veeam Backup & Replication 9.5 U4a
- Veritas NetBackup 8.1.2
|
|
|
| |