|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
- + Extensive platform support
- + Extensive data protection capabilities
- + Flexible deployment options
|
- + Extensive data protection capabilities
- + Policy-based management
- + Fast streamlined deployment
|
- + Flexible architecture
- + Extensive platform support
- + Several Microsoft integration points
|
|
Cons
|
- - No native data integrity verification
- - Dedup/compr not performance optimized
- - Disk/node failure protection not capacity optimized
|
- - Single hypervisor and server hardware
- - No bare-metal support
- - No hybrid configurations
|
- - Minimal data protection capabilities
- - No Quality-of-service mechanisms
- - No native encryption capabilities
|
|
|
|
Content |
|
|
|
|
WhatMatrix
|
WhatMatrix
|
WhatMatrix
|
|
|
|
Assessment |
|
|
|
|
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
DataCore was founded in 1998 and began to ship its first software-defined storage (SDS) platform, SANsymphony (SSY), in 1999. DataCore launched a separate entry-level storage virtualization solution, SANmelody (v1.4), in 2004. This platform was also the foundation for DataCore's HCI solution. In 2014 DataCore formally announced Hyperconverged Virtual SAN as a separate product. In May 2018 changes to the software licensing model enabled consolidation of the products; because the core software is the same, the combined offering has since been called DataCore SANsymphony.
One year later, in 2019, DataCore expanded its software-defined storage portfolio with a solution dedicated to file virtualization. This additional SDS offering is called DataCore vFilO and operates as a scale-out global file system across distributed sites, spanning on-premises and cloud-based NFS and SMB shares.
At the beginning of 2021, DataCore acquired Caringo and integrated its know-how and software-defined object storage offerings into the DataCore portfolio. The newest member of the DataCore SDS portfolio is called DataCore Swarm and, together with its complementary offerings SwarmFS and DataCore FileFly, it enables customers to build on-premises object storage solutions that radically simplify the ability to manage, store, and protect data while allowing multi-protocol (S3/HTTP, API, NFS/SMB) access to any application, device, or end-user.
DataCore Software specializes in software solutions for block, file, and object storage. Compared to the other SDS/HCI vendors on the WhatMatrix, DataCore has by far the longest track record in software-defined storage.
In April 2021 the company had an install base of more than 10,000 customers worldwide and there were about 250 employees working for DataCore.
|
Name: HPE SimpliVity 2600
Type: Hardware+Software (HCI)
Development Start: 2009
First Product Release: 2018
SimpliVity was founded late 2009 and began to ship its first Hyper-Converged Infrastructure (HCI) solution, OmniStack, in 2013. The core of the SimpliVity solution is the OmniStack OS combined with the OmniStack Accelerator PCIe card. In January 2017 SimpliVity was acquired by HPE. In the second quarter of 2017 HPE introduced SimpliVity on HPE ProLiant server hardware and the platform was rebranded to HPE SimpliVity 380. In July 2018 HPE extended the HPE SimpliVity product family with the HPE SimpliVity 2600, featuring software-only deduplication and compression.
In January 2018 HPE SimpliVity had an install base of approximately 2,000 customers worldwide. The number of employees working in the HPE SimpliVity division is unknown at this time.
|
Name: StarWind Virtual SAN
Type: SDS
Development Start: 2003
First Product Release: 2011
StarWind Software is a privately held company which started in 2008 as a spin-off from Rocket Division Software, Ltd. (founded in 2003). It initially provided free Software-Defined Storage (SDS) offerings to early adopters in 2009. In 2011 the company released its product Native SAN (later rebranded to Virtual SAN). In 2015 StarWind executed a successful pivot from a software-only company to a hardware vendor, bringing Hyper-Convergence from the enterprise level to SMB and ROBO. Apart from its HCA solutions, StarWind keeps its focus on developing and improving the Virtual SAN solution. In 2018, StarWind released Virtual SAN for vSphere - an SDS solution specifically aimed at VMware vSphere environments.
In March 2020 the company had a StarWind Virtual SAN install base of more than 4,500 customers worldwide. In June 2019 there were more than 250 employees working for StarWind.
|
|
|
|
GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
.
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
.
SSY 3.0: 1999
10th Generation software. DataCore currently has the most experience with SDS/HCI technology when comparing SANsymphony to other SDS/HCI platforms.
SANsymphony (SSY) version 3 was the first public release that hit the market back in 1999. The product has evolved ever since and the current major release is version 10. The list includes only the milestone releases.
PSP = Product Support Package
U = Update
|
GA Release Dates:
OmniStack 4.0.1 U1: dec 2020
OmniStack 4.0.1: apr 2020
OmniStack 4.0.0: jan 2020
OmniStack 3.7.10: sep 2019
OmniStack 3.7.9: jun 2019
OmniStack 3.7.8: mar 2019
OmniStack 3.7.7: dec 2018
OmniStack 3.7.6 U1: oct 2018
OmniStack 3.7.6: sep 2018
OmniStack 3.7.5: jul 2018
OmniStack 3.7.4: may 2018
OmniStack 3.7.3: mar 2018
OmniStack 3.7.2: dec 2017
OmniStack 3.7.1: oct 2017
OmniStack 3.7.0: jun 2017
OmniStack 3.6.2: mar 2017
OmniStack 3.6.1: jan 2017
OmniStack 3.5.3: nov 2016
OmniStack 3.5.2: jul 2016
OmniStack 3.5.1: may 2016
OmniStack 3.0.7: aug 2015
OmniStack 2.1.0: jan 2014
OmniStack 1.1.0: aug 2013
4th Generation software on 9th and 10th Generation HPE server hardware. The HPE SimpliVity 2600 platform has shaped up to be a full-featured platform in virtualized datacenter environments.
RapidDR 3.5.1: dec 2020
RapidDR 3.5.0: oct 2020
RapidDR 3.1.1: feb 2020
RapidDR 3.1: jan 2020
RapidDR 3.0.1: sep 2019
RapidDR 3.0: jun 2019
RapidDR 2.5.1: dec 2018
RapidDR 2.5: oct 2018
RapidDR 2.1.1: jun 2018
RapidDR 2.1: mar 2018
RapidDR 2.0: oct 2017
RapidDR 1.5: feb 2017
RapidDR 1.2: oct 2016
|
StarWind VSAN for vSphere Release Dates:
VSAN build 13170: oct 2019
VSAN build 12859: feb 2019
VSAN build 12658: dec 2018
VSAN build 12533: sep 2018
StarWind VSAN for Hyper-V Release Dates:
VSAN build 13279: oct 2019
VSAN build 13182: aug 2019
VSAN build 12767: feb 2019
VSAN build 12658: nov 2018
VSAN build 12585: oct 2018
VSAN build 12393: aug 2018
VSAN build 12166: may 2018
VSAN build 12146: apr 2018
VSAN build 11818: dec 2017
VSAN build 11456: aug 2017
VSAN build 11404: jul 2017
VSAN build 11156: may 2017
VSAN build 11071: may 2017
VSAN build 10927: apr 2017
VSAN build 10914: apr 2017
VSAN build 10833: apr 2017
VSAN build 10811: mar 2017
VSAN build 10799: mar 2017
VSAN build 10695: feb 2017
VSAN build 10547: jan 2017
VSAN build 9996: aug 2016
VSAN build 9980: aug 2016
VSAN build 9781: jun 2016
VSAN build 9611: jun 2016
VSAN build 9052: may 2016
VSAN build 8730: nov 2015
VSAN build 8716: nov 2015
VSAN build 8198: jun 2015
VSAN build 7929: apr 2015
VSAN build 7774: feb 2015
VSAN build 7509: dec 2014
VSAN build 7471: dec 2014
VSAN build 7354: nov 2014
VSAN build 7145
VSAN build 6884
Version 8 Release 10 StarWind software.
|
|
|
|
Pricing |
|
|
|
Hardware Pricing Model
Details
|
N/A
SANsymphony is sold by DataCore as a software-only solution. Server hardware must be acquired separately.
The entry point for all hardware and software compatibility statements is: https://www.datacore.com/products/sansymphony/tech/compatibility/
On this page links can be found to: Storage Devices, Servers, SANs, Operating Systems (Hosts), Networks, Hypervisors, Desktops.
Minimum server hardware requirements can be found at: https://www.datacore.com/products/sansymphony/tech/prerequisites/
|
Per Node
|
N/A
StarWind Virtual SAN is sold by StarWind as a software-only solution. Server hardware must be acquired separately.
|
|
|
Software Pricing Model
Details
|
Capacity based (per TB)
DataCore SANsymphony is licensed in three different editions: Enterprise, Standard, and Business.
All editions are licensed by capacity (in 1 TB steps). The more capacity an end-user licenses in each class, the lower the price per TB; the exception is the Business edition, which has a fixed price per TB.
Each edition includes a defined feature set.
Enterprise (EN) includes all available features plus expanded Parallel I/O.
Standard (ST) includes all Enterprise (EN) features, except FC connections, Encryption, Inline Deduplication & Compression and Shared Multi-Port Array (SMPA) support with regular Parallel I/O.
Business (BZ) as entry-offering includes all essential Enterprise (EN) features, except Asynchronous Replication & Site Recovery, Encryption, Deduplication & Compression, Random Write Accelerator (RWA) and Continuous Data Protection (CDP) with limited Parallel I/O.
Customers can choose between a perpetual licensing model or a term-based licensing model. Any initial license purchase for perpetual licensing includes Premier Support for either 1, 3 or 5 years. Alternatively, term-based licensing is available for either 1, 3 or 5 years, always including Premier Support as well, plus enhanced DataCore Insight Services (predictive analytics with actionable insights). In most regions, BZ is available as term license only.
Capacity can be expanded in 1 TB steps. There is a 10 TB minimum per installation for Business (BZ). Moreover, BZ is limited to 2 instances and a total capacity of 38 TB per installation, but one customer can have multiple BZ installations (a small capacity example follows below).
Cost neutral upgrades are available when upgrading from Business/Standard (BZ/ST) to Enterprise (EN).
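As a purely illustrative aid (not DataCore pricing logic), the 1 TB rounding steps and the Business-edition limits described above can be expressed as a short PowerShell sketch; the function name is hypothetical:
function Get-LicensedTB {
    param(
        [double]$UsedTB,
        [ValidateSet('EN','ST','BZ')][string]$Edition
    )
    $tb = [math]::Ceiling($UsedTB)                    # capacity is licensed in whole 1 TB steps
    if ($Edition -eq 'BZ') {
        if ($tb -lt 10) { $tb = 10 }                  # 10 TB minimum per BZ installation
        if ($tb -gt 38) { throw 'BZ is limited to 38 TB per installation' }
    }
    return $tb
}
Get-LicensedTB -UsedTB 7.3  -Edition BZ   # -> 10
Get-LicensedTB -UsedTB 41.5 -Edition EN   # -> 42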
|
Per Node (all-inclusive)
Add-ons: RapidDR (per VM)
There is no separate software licensing for most platform integrated features. By default all software functionality is available regardless of the hardware model purchased.
The HPE SimpliVity 2600 per node software license is tied to the selected CPU and storage configuration.
License Add-ons:
- RapidDR feature
RapidDR uses a 'per VM' licensing model and is available in 25 VM and 100 VM license packs.
|
Hyper-V: Per Node + Per TB
vSphere: Per Node (storage capacity included)
There are two StarWind Virtual SAN editions: StarWind Virtual SAN for Hyper-V and StarWind Virtual SAN for vSphere. StarWind Virtual SAN for Hyper-V is licensed per node plus per amount of HA storage provisioned by StarWind Virtual SAN. StarWind Virtual SAN for vSphere is licensed per node; in that edition the amount of HA storage provisioned by StarWind Virtual SAN is always unlimited.
HA = Highly Available
|
|
|
Support Pricing Model
Details
|
Capacity based (per TB)
Support is always provided on a premium (24x7) basis, including free updates.
More information about DataCore's support policy can be found here:
http://datacore.custhelp.com/app/answers/detail/a_id/1270/~/what-is-datacores-support-policy-for-its-products
|
Per Node
3-year HPE SimpliVity 2600 solution support (24x7x365) is mandatory.
|
Per Node
StarWind provides three editions of StarWind Virtual SAN Support:
1. Standard Support: business days, business hours, up to 4-hour response time
2. Premium Support: 24x7x365 coverage, up to 1-hour response time
3. Proactive Support: includes Premium Support and additionally monitors the health of the system, proactively notifying about potential issues at the hardware, hypervisor and StarWind software level.
|
|
|
Design & Deploy
|
|
|
|
|
|
|
Design |
|
|
|
Consolidation Scope
Details
|
Storage
Data Protection
Management
Automation&Orchestration
DataCore is storage-oriented.
SANsymphony Software-Defined Storage Services are focused on variable deployment models. The range spans from classical storage virtualization through converged and hybrid-converged to hyperconverged deployments, including seamless migration between them.
DataCore aims to provide all key components within a storage ecosystem including enhanced data protection and automation & orchestration.
|
Compute
Storage
Data Protection (full)
Management
Automation&Orchestration (DR)
HPE is stack-oriented, whereas the SimpliVity 2600 platform itself is heavily storage- and protection-focused.
HPE SimpliVity 2600 aims to provide key components within a Private Cloud ecosystem as well as integration with existing hypervisors and applications.
|
Storage
Management
StarWind Virtual SAN consolidates storage from different servers by replicating and presenting it over the iSCSI protocol as a single pool.
|
|
|
|
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
The bandwidth required depends entirely on the specific workload needs.
SANsymphony 10 PSP11 introduced support for Emulex Gen 7 64 Gbps Fibre Channel HBAs.
SANsymphony 10 PSP8 introduced support for Gen6 16/32 Gbps ATTO Fibre Channel HBAs.
|
1, 10 GbE
HPE SimpliVity 2600 hardware models include Ethernet connectivity using SFP+. HPE recommends 10GbE to avoid the network becoming a performance bottleneck.
|
1, 10, 25, 40, 100 GbE
StarWind requires at least three dedicated network interfaces:
- one for StarWind Synchronization
- one for iSCSI traffic/Heartbeat
- one for Management/Heartbeat
At least one Heartbeat interface must be on a separate network adapter and redundant.
For iSCSI and synchronization traffic, a minimum of 1 GbE bandwidth and a latency under 5 ms are required (see the connectivity check sketched below).
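For illustration, a minimal PowerShell check of these two requirements is sketched below; it uses only standard Windows cmdlets, and the adapter name and partner address are hypothetical (this is not StarWind tooling):
$partner = '10.0.0.2'                                   # hypothetical partner node address
$nic = Get-NetAdapter -Name 'iSCSI'                     # hypothetical adapter name
if ($nic.LinkSpeed -notmatch 'Gbps') {
    Write-Warning 'iSCSI/Sync link is below 1 GbE'
}
$rtt = (Test-Connection -ComputerName $partner -Count 10 |
        Measure-Object -Property ResponseTime -Average).Average   # ResponseTime property: Windows PowerShell 5.1
if ($rtt -ge 5) {
    Write-Warning "Average latency of $rtt ms exceeds the 5 ms requirement"
}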
|
|
|
Overall Design Complexity
Details
|
Medium
DataCore SANsymphony is able to meet many different use-cases because of its flexible technical architecture; however, this also means there are a lot of design choices that need to be made. DataCore SANsymphony seeks to provide important capabilities either natively or tightly integrated, which keeps the design process relatively simple. However, because many features in SANsymphony are optional and can be turned on/off, each one needs to be taken into consideration when preparing a detailed design.
|
Low
HPE SimpliVity was developed with simplicity in mind, both from a design and a deployment perspective. HPE SimpliVity's uniform platform architecture is meant to be applicable to all virtualization use-cases and seeks to provide important capabilities natively and on a per-VM basis. There are only a handful of storage building blocks to choose from, and many advanced capabilities like deduplication and compression are always turned on. This minimizes the number of design choices as well as the number of deployment steps.
|
Medium
StarWind Virtual SAN is able to meet many different use-cases because of its flexible technical architecture; however, this also means there are a lot of design choices that need to be made. In addition, StarWind Virtual SAN does not include many native data protection capabilities and data services. A complete solution design therefore requires the presence of multiple technology platforms.
|
|
|
External Performance Validation
Details
|
SPC (Jun 2016)
ESG Lab (Jan 2016)
SPC (Jun 2016)
Title: 'Dual Node, Fibre Channel SAN'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-connected, SSY 10.0, 4x All-Flash Dell MD1220 SAS Storage Arrays
SPC (Jun 2016)
Title: 'Dual Node, High Availability, Hyper-converged'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-interconnect, SSY 10.0
ESG Lab (Jan 2016)
Title: 'DataCore Application-adaptive Data Infrastructure Software'
Workloads: OLTP
Benchmark Tools: IOmeter
Hardware: Hybrid (Tiered) Dell PowerEdge R720, 2-node cluster, SSY 10.0
|
Login VSI (Jun 2018)
Login VSI (Jun 2018)
Title: 'VMware Horizon 7.4 on HPE SimpliVity 2600'
Workloads: VMware Horizon VDI
Benchmark Tools: Login VSI (VDI)
Hardware: All-Flash HPE SimpliVity 170 Gen10, 4-node cluster / 6-node cluster, OmniStack 3.7.5
|
N/A
No StarWind Virtual SAN validated test reports have been published in 2016/2017/2018/2019.
|
|
|
Evaluation Methods
Details
|
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
SANsymphony is freely downloadable after registering online and offers full platform support (complete Enterprise feature set), but is restricted in scale (4 nodes), capacity (16 TB) and time (30 days); all of these restrictions can be expanded upon request. The free trial version of SANsymphony can be installed on any commodity hardware platform that meets the hardware requirements.
For more information please go here: https://www.datacore.com/try-it-now/
|
Cloud Technology Showcase (CTS)
Proof-of-Concept (POC)
HPE offers no Community Edition or Free Trial edition of their hyperconverged software.
However, HPE maintains a cloud-based evaluation environment in which demos can be conducted and where potential customers can load up their own workloads to execute Proof-of-Concepts. This is called the Cloud Technology Showcase (CTS).
|
Community Edition (forever)
Trial (up to 30 days)
Proof-of-Concept
There are 2 ways for end-user organizations to evaluate StarWind:
1. StarWind Free. A free version of the software-only Virtual SAN product can be downloaded from the StarWind website. The free version has full functionality, but the StarWind Management Console works only in monitoring mode, without the ability to create or manage StarWind devices. All management is performed via PowerShell and a set of script templates.
The free version is intended to be self-supported or community-supported on public discussion forums.
2. StarWind Trial. The trial version has full functionality, including all StarWind Management Console management capabilities, and is limited to 30 days, which can be prolonged if required.
|
|
|
|
Deploy |
|
|
|
Deployment Architecture
Details
|
Single-Layer
Dual-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
Single-Layer:
- SANsymphony is implemented as a virtual machine (VM) or, in the case of Hyper-V, as a service layer on the Hyper-V parent OS, managing internal and/or external storage devices and providing virtual disks back to the hypervisor cluster it is implemented in. DataCore calls this a hyper-converged deployment.
Dual-Layer:
- SANsymphony is implemented as bare metal nodes, managing external storage (SAN/NAS approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a traditional deployment.
- SANsymphony is implemented as bare metal nodes, managing internal storage devices (server-SAN approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a converged deployment.
Mixed:
- SANsymphony is implemented in any combination of the above 3 deployments within a single management entity (Server Group) acting as a unified storage grid. DataCore calls this a hybrid-converged deployment.
|
Single-Layer
Single-Layer: HPE SimpliVity 2600 is meant to be used as a storage platform as well as a compute platform at the same time. This effectively means that applications, hypervisor and storage software are all running on top of the same server hardware (=single infrastructure layer).
Although HPE SimpliVity 2600 can serve in a dual-layer model by providing storage to non-HPE SimpliVity 2600 hypervisor hosts, this would negate many of the platform's benefits as well as the financial business case. (Please view the compute-only scale-out option for more information).
|
Single-Layer
Dual-Layer (secondary)
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
In a Single-Layer architecture, the servers with Virtual SAN software function as compute nodes as well as storage nodes; StarWind performs synchronous storage replication between them.
In a Dual-Layer architecture, StarWind Virtual SAN software replicates data in active-active mode between the dedicated storage servers and provides HA storage for use by separate compute nodes that do not have StarWind Virtual SAN installed.
|
|
|
Deployment Method
Details
|
BYOS (some automation)
BYOS = Bring-Your-Own-Server-Hardware
Deployment of DataCore SANsymphony is made easy by a very straightforward implementation approach.
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by HPE SimpliVity 2600, customer deployments can be executed in hours instead of days.
In HPE SimpliVity 3.7.10 the Deployment Manager allows configuring NIC teaming during the deployment process. With this feature, one or more NICs can be assigned to the Management network, and one or more NICs to the Storage and Federation networks (shared between the two).
|
BYOS (some automation)
|
|
|
Workload Support
|
|
|
|
|
|
|
Virtualization |
|
|
|
Hypervisor Deployment
Details
|
Virtual Storage Controller
Kernel (Optional for Hyper-V)
The SANsymphony Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the SANsymphony storage solution and commits its internal storage and/or externally connected storage to the shared resource pool. The Virtual Storage Controller (VSC) can be configured with direct access to the physical disks, so the hypervisor does not impede the I/O flow.
In Microsoft Hyper-V environments the SANsymphony software can also be installed in the Windows Server Root Partition. DataCore does not recommend installing SANsymphony in a Hyper-V guest VM, as this introduces virtualization-layer overhead and prevents the DataCore software from directly accessing CPU, RAM and storage. This means that installing SANsymphony in the Windows Server Root Partition is the preferred deployment option. More information about the Windows Server Root Partition can be found here: https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/hyper-v-server/architecture
The DataCore software can be installed on Microsoft Windows Server 2019 or lower (all versions down to Microsoft Windows Server 2012/R2).
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires an upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Virtual Storage Controller
The HPE SimpliVity 2600 OmniStack Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the HPE SimpliVity 2600 storage solution and commits its internal storage to the shared resource pool. The Virtual Storage Controller (VSC) has direct access to the physical disks, so the hypervisor is not impeding the I/O flow.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (e.g. most VSCs do not like snapshots). On the other hand, Kernel Integrated solutions are less flexible because a new version requires an upgrade of the entire hypervisor platform. VIBs occupy the middle ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Virtual Storage Controller (vSphere)
User-Space (Hyper-V)
StarWind VSAN for vSphere is deployed by downloading an OVF template of a preconfigured Linux-based VM. It should be deployed on each VMware vSphere node that takes part in StarWind replication. The OVF contains StarWind SDS stack pre-configured and pre-installed.
StarWind VSAN for Hyper-V is deployed by downloading the latest build of StarWind. The installation process will automatically install the required components. The build should be installed on each node that will take part in StarWind replication.
|
|
|
Hypervisor Compatibility
Details
|
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
'Not qualified' means there is no generic support qualification due to limited market footprint of the product. However, a customer can always individually qualify the system with a specific SANsymphony version and will get full support after passing the self-qualification process.
Only products explicitly labeled 'Not Supported' have failed qualification or have shown incompatibility.
|
VMware vSphere ESXi 6.5U2-6.7U3
HPE SimpliVity OmniStack 4.0.0 introduced support for VMware vSphere 6.7 Update 3.
HPE SimpliVity 2600 currently does not support Microsoft Hyper-V, whereas HPE SimpliVity 380 does.
|
VMware vSphere ESXi 6.0-6.7
Microsoft Hyper-V 2012-2019
StarWind is actively working on supporting KVM.
|
|
|
Hypervisor Interconnect
Details
|
iSCSI
FC
The SANsymphony software-only solution supports both iSCSI and FC protocols to present storage to hypervisor environments.
DataCore SANsymphony supports:
- iSCSI (Switched and point-to-point)
- Fibre Channel (Switched and point-to-point)
- Fibre Channel over Ethernet (FCoE)
- Switched, where host uses Converged Network Adapter (CNA), and switch outputs Fibre Channel
|
NFS
NFS is used as the storage protocol in vSphere environments.
In virtualized environments In-Guest iSCSI support is still a hard requirement if one of the following scenarios is pursued:
- Microsoft Failover Clustering (MSFC) in a VMware vSphere environment
- A supported MS Exchange 2013 Environment in a VMware vSphere environment
Microsoft explicitly does not support NFS in either of these scenarios.
|
iSCSI
NFS
SMB3
iSCSI is the native StarWind VSAN storage protocol, as StarWind VSAN provides block-based storage.
NFS can be used as the storage protocol in VMware vSphere environments by leveraging the File Server role that the Windows OS provides.
SMB3 can be used as the storage protocol in Microsoft Hyper-V environments by leveraging the File Server role that the Windows OS provides.
In both VMware vSphere and Microsoft Hyper-V environments, iSCSI is used as a protocol to provide block-level storage access. It allows consolidating storage from multiple servers providing it as highly available storage to target servers. In the case of vSphere, VMware iSCSI Initiator allows connecting StarWind iSCSI devices to ESXi hosts and further create datastores on them. In the case of Hyper-V, Microsoft iSCSI Initiator is utilized to connect StarWind iSCSI devices to the servers and further provide HA storage to the cluster (i.e. CSV).
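As an illustration of the Hyper-V case, the standard Microsoft iSCSI initiator cmdlets can be used to connect a host to a StarWind target; the portal address and the target name filter below are hypothetical:
New-IscsiTargetPortal -TargetPortalAddress '172.16.10.11'              # hypothetical StarWind node address
$target = Get-IscsiTarget | Where-Object { $_.NodeAddress -like '*starwind*' }
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true
Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' }                     # the new LUN can then be brought online and added as a CSV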
In virtualized environments In-Guest iSCSI support is still a hard requirement if one of the following scenarios is pursued:
- Microsoft Failover Clustering (MSFC) in a VMware vSphere environment
- A supported MS Exchange 2013 Environment in a VMware vSphere environment
Microsoft explicitly does not support NFS in either of these scenarios.
|
|
|
|
Bare Metal |
|
|
|
Bare Metal Compatibility
Details
|
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Any operating system currently not qualified for support can always be individually qualified with a specific SANsymphony version and will get full support after passing the self-qualification process.
SANsymphony provides virtual disks (block storage LUNs) to all of the popular host operating systems that use standard disk drives with 512 byte or 4K byte sectors. These hosts can access the SANsymphony virtual disks via SAN protocols including iSCSI, Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE).
Mainframe operating systems such as IBM z/OS, z/TPF, z/VSE or z/VM are not supported.
SANsymphony itself runs on Microsoft Windows Server 2012/R2 or higher.
|
N/A
HPE SimpliVity 2600 does not support any non-hypervisor platforms.
|
Microsoft Windows Server 2012/2012R2/2016/2019
StarWind VSAN provides highly available storage over iSCSI between the StarWind nodes and can additionally share storage over iSCSI to any OS that supports the iSCSI protocol.
|
|
|
Bare Metal Interconnect
Details
|
iSCSI
FC
FCoE
|
N/A
HPE SimpliVity 2600 does not support any non-hypervisor platforms.
|
iSCSI
|
|
|
|
Containers |
|
|
|
Container Integration Type
Details
|
Built-in (native)
DataCore provides its own Volume Plugin for natively providing Docker container support, available on Docker Hub.
DataCore also has a native CSI integration with Kubernetes, available on Github.
|
N/A
HPE SimpliVity 2600 relies on the container support delivered by the hypervisor platform.
|
N/A
StarWind Virtual SAN relies on the container support delivered by the hypervisor platform.
|
|
|
Container Platform Compatibility
Details
|
Docker CE/EE 18.03+
Docker EE = Docker Enterprise Edition
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
|
|
Container Platform Interconnect
Details
|
Docker Volume plugin (certified)
The DataCore SDS Docker Volume plugin (DVP) enables Docker Containers to use storage persistently, in other words enables SANsymphony data volumes to persist beyond the lifetime of both a container or a container host. DataCore leverages SANsymphony iSCSI and FC to provide storage to containers. This effectively means that the hypervisor layer is bypassed.
The DataCore SDS Docker Volume plugin (DVP) is officially 'Docker Certified' and can be downloaded from the Docker Hub. The plugin is installed inside the Docker host, which can be either a VM or a bare-metal host connected to a SANsymphony storage cluster.
For more information please go to: https://hub.docker.com/plugins/datacore-sds-volume-plugin
The Kubernetes CSI plugin can be downloaded from GitHub. The plugin is automatically deployed as several pods within the Kubernetes system.
For more information please go to: https://github.com/DataCoreSoftware/csi-plugin
Both plugins are supported with SANsymphony 10 PSP7 U2 and later.
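A hedged usage sketch follows; the plugin alias and the volume option name are assumptions based on the Docker Hub page referenced above and should be verified against DataCore's documentation before use:
docker plugin install datacoresoftware/datacore-sds-volume-plugin      # plugin alias is an assumption
docker volume create -d datacore-sds-volume-plugin -o size=50GB vol01  # driver/option names are assumptions
docker run -it -v vol01:/data alpine sh                                # the volume persists beyond this container's lifetime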
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
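A hedged sketch of these two installation steps follows; the bundle path, plugin reference and volume driver/option names are assumptions and should be checked against the current VMware vDVS documentation:
# Step 1 - on each ESXi host (run from an ESXi shell; bundle path is an assumption):
esxcli software vib install -d /tmp/vDVS_bundle.zip
# Step 2 - inside each container-host VM:
docker plugin install --grant-all-permissions vmware/vsphere-storage-for-docker:latest   # plugin reference is an assumption
docker volume create -d vsphere -o size=20gb demo_vol                                    # driver alias/option are assumptions
docker run -it -v demo_vol:/data alpine sh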
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
The StarWind HA VMFS (datastore) can be used for deploying containers just as on common VMFS datastore.
|
|
|
Container Host Compatibility
Details
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The DataCore native plug-ins are container-host centric and as such can be used across all SANsymphony-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, KVM, XenServer, Oracle VM Server) as well as on bare metal platforms.
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
|
|
Container Host OS Compatibility
Details
|
Linux
All Linux versions supported by Docker CE/EE 18.03 or higher can be used.
|
Linux
Windows 10 or 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
Linux
Windows 10 or 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
|
|
Container Orch. Compatibility
Details
|
Kubernetes 1.13+
|
Kubernetes 1.6.5+ on ESXi 6.0+
|
Kubernetes 1.6.5+ on ESXi 6.0+
|
|
|
Container Orch. Interconnect
Details
|
Kubernetes CSI plugin
The Kubernetes CSI plugin provides several plugins for integrating storage into Kubernetes for containers to consume.
DataCore SANsymphony provides native industry standard block protocol storage presented over either iSCSI or Fibre Channel. YAML files can be used to configure Kubernetes for use with DataCore SANsymphony.
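As a hedged illustration (not DataCore's published manifests), a StorageClass and PersistentVolumeClaim could be applied from PowerShell as shown below; the provisioner string, class name and claim name are placeholders that should be taken from the csi-plugin repository on GitHub referenced above:
$manifest = @'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sansymphony-mirrored
provisioner: csi.datacore.example        # placeholder provisioner name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: sansymphony-mirrored
  resources:
    requests:
      storage: 50Gi
'@
$manifest | kubectl apply -f -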
|
Kubernetes Volume Plugin
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, VVol, VMFS or NFS datastores.
|
Kubernetes Volume Plugin
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, VVol, VMFS or NFS datastores.
|
|
|
|
VDI |
|
|
|
VDI Compatibility
Details
|
VMware Horizon
Citrix XenDesktop
There is no validation check being performed by SANsymphony for VMware Horizon or Citrix XenDesktop VDI platforms. This means that all versions supported by these vendors are supported by DataCore.
|
VMware Horizon
Citrix XenDesktop
HPE SimpliVity OmniStack 3.7.8 introduces support for VMware Horizon Instant Clone provisioning technology for vSphere 6.7.
HPE has published a Reference Architecture whitepaper for VMware Horizon 7.4 on HPE SimpliVity 2600. HPE SimpliVity 2600 has been validated by Login VSI.
|
VMware Horizon
Citrix XenDesktop
Although StarWind supports both VMware and Citrix VDI deployments on top of StarWind VSAN HA storage, there is currently no specific documentation available.
|
|
|
|
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
DataCore has not published any recent VDI reference architecture whitepapers. The only VDI-related paper that includes a Login VSI benchmark dates back to December 2010; there, a 2-node SANsymphony cluster was able to sustain a load of 220 VMs based on the Login VSI 2.0.1 benchmark.
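For illustration only, a simple PowerShell sizing sketch based on such per-node density figures, adding one node of N+1 failover headroom; the density values in this section are vendor estimates, not guarantees, and the function name is hypothetical:
function Get-RequiredNodes {
    param([int]$Desktops, [int]$DesktopsPerNode)
    [math]::Ceiling($Desktops / $DesktopsPerNode) + 1   # +1 node for N+1 headroom
}
Get-RequiredNodes -Desktops 500 -DesktopsPerNode 110    # SANsymphony estimate -> 6 nodes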
|
VMware: up to 175 virtual desktops/node
Citrix: unknown
VMware Horizon 7.4: Load bearing number is based on Login VSI tests performed on HPE SimpliVity 170 Gen10 all-flash model using 2vCPU Windows 10 desktops and the Knowledge Worker profile.
|
VMware: up to 260 virtual desktops/node
Citrix: up to 220 virtual desktops/node
The load bearing numbers are based on approximate calculations of VDI infrastructure that StarWind Virtual SAN can support.
There are no Login VSI benchmark numbers on record for StarWind Virtual SAN as yet.
|
|
|
Server Support
|
|
|
|
|
|
|
Server/Node |
|
|
|
Hardware Vendor Choice
Details
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
HPE
HPE SimpliVity 2600 deployments are solely based on HPE Apollo 2600 Gen10 server hardware.
|
Many
StarWind Virtual SAN is a hardware-agnostic solution and does not have a strict HCL or supported hardware platforms.
|
|
|
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
2 models
HPE SimpliVity 2600 is available in 2 series:
170 Gen10-series
190 Gen10-series
HPE positions HPE SimpliVity 2600 as a VDI-optimized hyperconverged solution. The platform is also beneficial for compute-intensive workload use cases in high-density environments.
The Gen10 6000-series is best for high-performance, IO-intensive mixed workloads, whereas the Gen10 4000-series is best for typical workloads (heavy reads/lower ratio of writes) at a lower cost than the 6000-series. The difference between the 4000- and 6000-series is the SSD type that is inserted in the server hardware.
There are no HPE SimpliVity 2600 Hybrid (SSD+HDD) models to choose from.
|
Many
StarWind Virtual SAN is a hardware-agnostic solution and does not have a strict HCL or supported hardware platforms.
|
|
|
|
1, 2 or 4 nodes per chassis
Note: Because SANsymphony is mostly hardware agnostic, customers can opt for multiple server densities.
Note: In most cases 1U or 2U building blocks are used.
Also Super Micro offers 2U chassis that can house 4 compute nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and cooling is not necessarily reduced in the same way, and that the concentration of nodes can potentially pose other challenges.
|
170-series: 3-4 nodes per chassis
190-series: 2 nodes per chassis
The HPE Apollo 2600 server is a 2U/4-node building block. The nodes in each system have an identical hardware configuration.
Up to 4 slots in each chassis may be used for placement of HPE SimpliVity 2600 170-series nodes.
Up to 2 slots in each chassis may be used for placement of HPE SimpliVity 2600 190-series nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and cooling is not necessarily reduced in the same way, and that the concentration of nodes can potentially pose other challenges.
|
1, 2 or 4 nodes per chassis
StarWind Virtual SAN is a hardware-agnostic solution and does not have a strict HCL or supported hardware platforms.
|
|
|
|
Yes
DataCore does not explicitly recommend using different hardware platforms, but as long as the hardware specifications are roughly comparable, there is no reason to insist on a single hardware vendor. This is proven in practice: some customers run their production DataCore environment on comparable servers from different vendors.
|
Partial
For mixing HPE SimpliVity 2600 nodes in a cluster, HPE recommends following these general guidelines:
- Only models of equal socket count are supported.
- All hosts should contain equal amounts of CPU & Memory.
- As a best practice, it’s recommended to use the same CPU model within a single cluster.
Heterogenous Federation Support: Although HPE SimpliVity 380 nodes cannot be mixed with HPE SimpliVity 2600 nodes or legacy SimpliVity nodes within the same cluster, they can coexist with such clusters within the same Federation.
HPE OmniStack 3.7.9 introduces support for using different versions of the OmniStack software within a federation. Some clusters can have hosts using HPE OmniStack 3.7.9 while other clusters have hosts using HPE OmniStack 3.7.8 and above. The hosts in each datacenter and cluster must use the same version of the software.
|
Yes
Although StarWind does not recommend asymmetric configurations (mixing nodes with different CPUs, storage types and networks in the same replica) for Virtual SAN environments, different servers can be mixed in a single solution if a customer understands the possible performance implications (any solution that provides active-active storage replication performs at the speed of the slowest component).
|
|
|
|
Components |
|
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible
HPE SimpliVity 2600 on Apollo Gen10 server hardware:
- Choice of Intel Xeon Scalable (Skylake) Silver and Gold processors (2x per node), 12 to 22 cores selectable.
- Single socket option or dual socket option with less cores (8 or 10) for ROBO deployments only.
Although HPE does support 2nd generation Intel Xeon Scalable (Cascade Lake) processors in its ProLiant server line-up as of April 2019, HPE SimpliVity nodes do not yet ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to:
https://www.starwindsoftware.com/system-requirements
|
|
|
|
Flexible
|
Flexible
HPE SimpliVity 2600 on Apollo Gen10 server hardware:
- 384GB to 768GB per node selectable.
- 128GB per node selectable for ROBO deployments only.
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to:
https://www.starwindsoftware.com/system-requirements
|
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Fixed: number of disks + capacity
For HPE SimpliVity 2600 on Apollo Gen10 server hardware, the following kits are selectable per node:
XS (6x 1.92TB SSD in RAID1+0)
The always-on inline deduplication and compression allows HPE SimpliVity 2600 to offer a much higher amount of effective storage capacity on a single node than the raw disk capacity would indicate.
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to:
https://www.starwindsoftware.com/system-requirements
|
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible: additional 10Gbps (190-series only)
HPE SimpliVity 2600 190 Gen10-series:
One optional 10 Gbps or 10/25 Gbps PCI adapter can be added to a node.
In HPE SimpliVity 3.7.10 the Deployment Manager allows configuring NIC teaming during the deployment process. With this feature, one or more NICs can be assigned to the Management network, and one or more NICs to the Storage and Federation networks (shared between the two).
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to:
https://www.starwindsoftware.com/system-requirements
|
|
|
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
DataCore SANsymphony supports the hardware that is on the hypervisor HCL.
VMware vSphere 6.5U1 officially supports several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
Windows 2016 supports two graphics virtualization technologies available with Hyper-V to leverage GPU hardware:
- Discrete Device Assignment
- RemoteFX vGPU
More information is provided here: https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-graphics-virtualization
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
NVIDIA Tesla (190-series only)
HPE SimpliVity 2600 offers a GPU option in the HPE SimpliVity 190 Gen10-series for leveraging vGPU in virtual desktop/application environments.
Currently HPE SimpliVity 2600 190 Gen10-series supports the following GPUs in a single server:
2x NVIDIA Tesla M10
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
StarWind Virtual SAN supports the hardware that is on the hypervisor HCL.
|
|
|
|
Scaling |
|
|
|
|
CPU
Memory
Storage
GPU
The SANsymphony platform allows for expanding of all server hardware resources.
|
CPU
Memory
GPU
|
CPU
Memory
Storage
GPU
StarWind Virtual SAN allows expansion of all server hardware resources.
|
|
|
|
Storage+Compute
Compute-only
Storage-only
Storage+Compute: In a single-layer deployment existing SANsymphony clusters can be expanded by adding additional nodes running SANsymphony, which adds additional compute and storage resources to the shared pool. In a dual-layer deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded simultaneously.
Compute-only: Because SANsymphony leverages virtual block volumes (LUNs), storage can be presented to hypervisor hosts not participating in the SANsymphony cluster. This is also beneficial to migrations, since it allows for online storage vMotions between SANsymphony and non-SANsymphony storage platforms.
Storage-only: In a dual-layer or mixed deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded independent from each other.
|
Storage+Compute
Compute-only
Storage+Compute: Existing HPE SimpliVity 2600 federations can be expanded by adding additional HPE SimpliVity 2600 nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: Because HPE SimpliVity 2600 leverages a file-level protocol (NFS), storage can be presented to hypervisor hosts not participating in the HPE SimpliVity 2600 cluster. This is also beneficial to migrations, since it allows for online storage vMotions between HPE SimpliVity 2600 and non-HPE SimpliVity 2600 storage platforms.
Storage-only: N/A; A HPE SimpliVity 2600 node always takes active part in the hypervisor (compute) cluster as well as the storage cluster.
|
Storage+Compute
Compute-only
Storage-only
In the case of storage-only scale-out, StarWind Virtual SAN storage nodes are based on bare-metal Windows Server with no hypervisor software (e.g. VMware ESXi) installed. The StarWind software runs as a Windows-native application and provides storage to the hypervisor hosts.
|
|
|
|
1-64 nodes in 1-node increments
There is a maximum of 64 nodes within a single cluster. Multiple clusters can be managed through a single SANsymphony management instance.
|
vSphere: 2-16 storage nodes (cluster); 2-8 storage nodes (stretched cluster); 2-32+ storage nodes (Federation) in 1-node increments
HPE SimpliVity 2600 currently offers support for up to 16 storage nodes and 720 VMs within a single VSI cluster, and up to 8 storage nodes within a single VDI cluster. Up to 32 storage nodes are supported within a single Federation. Multiple Federations can be used in either single-site or multi-site deployments, allowing for a scalable as well as a flexible solution. Data protection can be configured to run in between federations.
Cluster scale enhancements (16 nodes instead of 8 nodes within a single cluster) apply to new as well as existing SimpliVity clusters that run OmniStack 3.7.7.
For specific use-cases a Request for Product Qualification (RPQ) process can be initiated to authorize more than 32 storage nodes within a single Federation.
HPE SimpliVity 2600 also supports adding compute nodes to a storage node cluster in vSphere environments.
Stretched Clusters with Availability Zones remain supported for up to 8 HPE OmniStack hosts (4 per Availability Zone).
OmniStack 3.7.7 introduces support for multi-host deployment at one time to a cluster.
HPE OmniStack 3.7.8 introduces support for 48 clusters per Federation (previously 16 clusters) when using multiple vCenter Servers in Enhanced Linked Mode.
VDI = Virtual Desktop Infrastructure
VSI = Virtual Server Infrastructure
|
2-64 nodes in 1-node increments
The 64-node limit applies to both VMware vSphere and Microsoft Hyper-V environments.
|
|
|
Small-scale (ROBO)
Details
|
2 Node minimum
DataCore prevents split-brain scenarios by always having an active-active configuration of SANsymphony with a primary and an alternate path.
In case the SANsymphony servers are fully operational but cannot see each other, the application host will still be able to read and write data via the primary path (no switch to secondary). The mirroring is interrupted because of the lost connection and the administrator is informed accordingly. All writes are stored on the locally available storage (primary path) and all changes are tracked. As soon as the connection between the SANsymphony servers is restored, the mirror recovers automatically based on these tracked changes.
Dual updates due to misconfiguration are detected automatically, and data corruption is prevented by freezing the vDisk and waiting for user input to solve the conflict. Possible resolutions are to declare one side of the mirror the new active data set and discard all tracked changes on the other side, or to split the mirror and merge the two data sets into a third vDisk manually.
|
2 Node minimum
HPE SimpliVity 2600 supports 2-node configurations without sacrificing any of the data reduction and data protection capabilities. The HPE SimpliVity 2600 is ideal for ROBO deployments.
All the remote sites can be centrally managed from a single dashboard at the central site.
|
1 Node minimum
StarWind Virtual SAN can be used as standalone iSCSI target providing storage from one node (no HA). For HA two nodes are required. StarWind Virtual SAN can be further scaled by adding more storage or new nodes into the storage cluster.
|
|
|
Storage Support
|
|
|
|
|
|
|
General |
|
|
|
|
Block Storage Pool
SANsymphony only serves block devices to the supported OS platforms.
|
Parallel File System
on top of Object Store
Both the File System and the Object Store have been developed in-house by HPE SimpliVity.
|
Block Storage Pool
StarWind VSAN only serves block devices as storage volumes to the supported OS platforms.
The underlying storage is first aggregated with hardware or software RAID. Then, the storage is replicated by StarWind at the block level across 2 or 3 nodes and further provided as a single pool (single StarWind virtual device) or as multiple pools (multiple StarWind devices).
|
|
|
|
Partial
DataCore's core approach is to provide storage resources to the applications without having to worry about data locality. But if data locality is explicitly requested, the solution can partially be designed that way by configuring the first instance of all data to be stored on locally available storage (primary path) and the mirrored instance to be stored on the alternate path (secondary path). Furthermore, every hypervisor host can have a local preferred path, indicated by the ALUA path preference.
By default data does not automatically follow the VM when the VM is moved to another node. However, virtual disks can be relocated on the fly to another DataCore node without losing I/O access, although this relocation takes some time due to the data copy operations required. This kind of relocation is usually done manually, but such tasks can be automated and integrated with VM orchestration using PowerShell, for example.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
Full
When a VM is created, it is optimally placed on the best 2 nodes available. When data is written, it is deduplicated and compressed on arrival, and then stored on the local node as well as a dedicated partner node. To keep the performance optimal throughout the VM's lifecycle, OmniStack automatically creates VMware vSphere DRS affinity rules and policies. This is called Intelligent Workload Optimization. VMware DRS is made aware of where the data of an individual VM is. In effect, the VM follows the data rather than having the data follow the VM, as this prevents heavy moves of data to the VM. When an HPE SimpliVity 2600 node is added to the federation, the DRS rules related to OmniStack are automatically re-evaluated.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
Partial
StarWind's core approach is to keep data close (local) to the VM in order to avoid slow data transfers through the network and achieve the highest performance the setup can provide. The solution is designed to store the first instance of all data on locally available storage (primary path) and the mirrored instance on the alternate path (secondary path). Furthermore, every hypervisor host can have a local preferred path, indicated by the ALUA path preference. Data does not automatically follow the VM when the VM is moved to another node.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
|
|
|
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
VoV = Volume-on-Volume; The Virtual Storage Controller uses virtual disks provided by the hypervisor platform.
|
Direct-attached (RAID)
The software takes ownership of the RAID groups provisioned by the server's hardware RAID controller.
|
Direct-attached (Raw)
SAN or NAS
Direct-attached: StarWind can take control of formatted disks (NTFS). StarWind software can also present raw unformatted disks over the SCSI Pass-Through Interface, which enables remote initiator clients to use any type of hard drive (PATA/SATA/RAID).
External SAN/NAS Storage: SAN/NAS systems can be connected over Ethernet and used by StarWind Virtual SAN as long as they are presented as block storage (iSCSI). If data needs to be replicated between NAS systems, two NAS systems should be connected to the nodes used for StarWind Virtual SAN.
|
|
|
|
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D XPoint and/or Flash and/or Magnetic)
NEW
|
All-Flash (SSD-only)
HPE has exclusively released all-flash models for the HPE SimpliVity 2600 platform. These models facilitate a variety of workloads, including those that demand high performance and ultra-low latency.
|
Magnetic-only
Hybrid (Flash+Magnetic)
All-Flash
|
|
|
Hypervisor OS Layer
Details
|
SD, USB, DOM, SSD/HDD
|
SSD
1x 480GB M.2 SSD is used for system boot.
|
SD, USB, DOM, SSD/HDD
|
|
|
|
Memory |
|
|
|
|
DRAM
|
DRAM (VSC)
|
DRAM
DRAM can be used for caching in a write-back or write-through mode. Additionally, it can be used for creating StarWind RAM disks.
For further information, please visit:
https://www.starwindsoftware.com/high-performance-ram-disk-emulator
|
|
|
|
Read/Write Cache
DataCore SANsymphony accelerates reads and writes by leveraging the powerful processors and large DRAM memory inside current generation x86-64bit servers on which it runs. Up to 8 Terabytes of cache memory may be configured on each DataCore node, enabling it to perform at solid state disk speeds without the expense. SANsymphony uses a common cache pool to store reads and writes in.
SANsymphony read caching essentially recognizes I/O patterns to anticipate which blocks to read next into RAM from the physical back-end disks. That way the next request can be served from memory.
When hosts write to a virtual disk, the data first goes into DRAM memory and is later destaged to disk, often grouped with other writes to minimize delays when storing the data to the persistent disk layer. Written data stays in cache for re-reads.
The cache is cleaned on a first-in, first-out (FIFO) basis. Segment overwrites are performed on the oldest data first, for both read and write cache segment requests.
SANsymphony prevents the write cache data from flooding the entire cache. If the amount of write data rises above a certain percentage watermark of the entire cache, the write cache is temporarily switched to write-through mode in order to regain balance. This is performed fully automatically and is self-adjusting, per virtual disk as well as on a global level.
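As a rough illustration of the watermark behaviour described above, the Python sketch below switches a cache between write-back and write-through once dirty write data exceeds a percentage of the total cache; the class name and the 50% threshold are assumptions for the example, not DataCore's actual values.

    # Illustrative watermark logic (not DataCore code): if dirty write data exceeds a
    # percentage of the total cache, fall back to write-through until enough has been destaged.
    class CachePolicy:
        def __init__(self, cache_size_bytes, watermark_pct=50):
            self.cache_size = cache_size_bytes
            self.watermark = watermark_pct / 100.0
            self.dirty_bytes = 0
            self.mode = "write-back"

        def on_write(self, nbytes):
            if self.mode == "write-back":
                self.dirty_bytes += nbytes      # acknowledged from DRAM, destaged later
            self._adjust_mode()

        def on_destage(self, nbytes):
            self.dirty_bytes = max(0, self.dirty_bytes - nbytes)
            self._adjust_mode()

        def _adjust_mode(self):
            # Switch to write-through when write data floods the cache; switch back
            # automatically once enough data has been destaged.
            over = self.dirty_bytes > self.watermark * self.cache_size
            self.mode = "write-through" if over else "write-back"

    cache = CachePolicy(cache_size_bytes=8 * 2**30)   # e.g. an 8 GB cache
    cache.on_write(5 * 2**30)
    print(cache.mode)                                 # -> write-through (above the watermark)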
|
DRAM (VSC): Read Cache
|
Read/Write Cache
StarWind Virtual SAN accelerates reads and writes by leveraging conventional RAM.
The memory cache is populated mainly during write operations. During read operations, data enters the cache only if it still contains empty memory blocks, or if the cache lines allocated for those entries earlier have not yet been fully exhausted.
StarWind VSAN supports two Memory (L1 Cache) Policies:
1. Write-Back, caches writes in DRAM only and acknowledges back to the originator when complete in DRAM.
2. Write-Through, caches writes in both DRAM and underlying storage, and acknowledges back to the originator when complete in the underlying storage.
This means that caching writes exclusively in conventional memory is optional. When the Write-Through policy is used, DRAM is primarily used for caching reads. (Both acknowledgment paths are sketched below.)
To change the cache size, the StarWind service must first be stopped on one node, the cache size changed, and the service started again; the same process is then repeated on the partner node. This allows VMs to be kept up and running during cache changes.
In the majority of use cases, there is no need to assign L1 cache for all-flash storage arrays.
Note: In case of using the Write-Back policy for DRAM, UPS units have to be installed to ensure the correct shutdown of StarWind Virtual SAN nodes. If a power outage occurs, this will prevent the loss of cached data. The UPS capacity must cover the time required for flushing the cached data to the underlying storage.
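The two acknowledgment paths can be contrasted with a short, purely illustrative Python sketch (not StarWind code): Write-Back acknowledges as soon as the data is in DRAM, while Write-Through acknowledges only after the underlying storage has also stored it.

    # Illustrative contrast of the two L1 cache policies (not StarWind code).
    from collections import namedtuple

    Block = namedtuple("Block", ["id", "data"])

    class Disk:
        def __init__(self):
            self.blocks = {}
        def write(self, block):
            self.blocks[block.id] = block.data     # persisted on the underlying storage

    def write_back(block, dram, destage_queue):
        dram[block.id] = block.data                # cached in DRAM
        destage_queue.append(block)                # flushed to the underlying storage later
        return "ACK"                               # acknowledged as soon as DRAM holds the data

    def write_through(block, dram, disk):
        dram[block.id] = block.data                # cached in DRAM as well...
        disk.write(block)                          # ...but the write also goes to storage now
        return "ACK"                               # acknowledged only after storage confirms

    dram, queue, disk = {}, [], Disk()
    print(write_back(Block("b1", b"x"), dram, queue))
    print(write_through(Block("b2", b"y"), dram, disk))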
|
|
|
|
Up to 8 TB
The actual size that can be configured depends on the server hardware that is used.
|
DRAM (VSC): 16-48GB for Read Cache
Each Virtual Storage Controller (VSC) is equipped with 48-100GB total memory capacity, of which 16-48GB is used as read cache. The amount of memory allocated is fixed (non-configurable) and model dependent.
|
Configurable
The size of the L1 cache should equal the average working data set.
There are no default or maximum values for the RAM cache as such. The maximum size that can be assigned to the StarWind RAM cache is limited by the RAM available to the system; it must also be ensured that other running applications have enough RAM for their operations.
Additionally, the total amount of L1 cache assigned influences the time required for system shutdown, so overprovisioning the L1 cache can cause StarWind service interruption and the loss of cached data. The minimum size that can be assigned to the RAM cache in either write-back or write-through mode is 1MB. However, StarWind recommends assigning a StarWind RAM cache size that matches the size of the working data set.
|
|
|
|
Flash |
|
|
|
|
SSD, PCIe, UltraDIMM, NVMe
|
SSD
|
SSD, NVMe
|
|
|
|
Persistent Storage
SANsymphony supports TRIM/UNMAP capabilities for solid-state drives (SSDs) in order to reduce wear on those devices and optimize performance.
|
All-Flash: Metadata + Write Buffer + Persistent Storage Tier
Read cache is not necessary in All-flash configurations.
|
Read/Write Cache (hybrid)
Persistent storage (all-flash)
StarWind Virtual SAN supports a single Flash (L2 Cache) Policy:
1. Write-Through, caches writes in both Flash (SSD/NVMe) and the underlying storage, and acknowledges back to the originator when complete in the underlying storage.
With the write-through policy, new blocks are written both to cache layer and the underlying storage synchronously. However, in this mode, only the blocks that are read most frequently are kept in the cache layer, accelerating read operations. This is the only mode available for StarWind L2 cache.
In the case of Write-Through, cached data does not need to be offloaded to the backing store when a device is removed or the service is stopped.
In the majority of use cases, if L1 cache is already assigned, there is no need to configure L2 cache.
A StarWind Virtual SAN solution with L2 cache configured should have more RAM available, since L2 cache needs 6.5 GB of RAM per 1 TB of L2 cache to store metadata.
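A quick worked example of the stated ratio (6.5 GB of RAM per 1 TB of L2 cache); the Python snippet below simply applies that ratio to an assumed cache size.

    # Apply the documented ratio: 6.5 GB of RAM per 1 TB of L2 cache (for metadata).
    def l2_metadata_ram_gb(l2_cache_tb):
        return l2_cache_tb * 6.5

    # Example: a 2 TB L2 (SSD/NVMe) cache would need roughly 13 GB of RAM for metadata.
    print(l2_metadata_ram_gb(2))   # -> 13.0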
|
|
|
|
No limit, up to 1 PB per device
The definition of a device here is a raw flash device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
All-Flash: 6 SSDs per node
All-Flash SSD configurations:
170-series: 6x 1.92TB
190-series: 6x 1.92TB
|
No limitations
The definition of a device here is a raw flash device that is presented to Virtual SAN as either a SCSI LUN or a SCSI disk.
|
|
|
|
Magnetic |
|
|
|
|
SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
In this case SATA = NL-SAS = MDL SAS
|
N/A
|
SAS or SATA
|
|
|
|
Persistent Storage
|
N/A
|
Persistent Storage
|
|
|
Magnetic Capacity
Details
|
No limit, up to 1 PB (per device)
The definition of a device here is a raw magnetic device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
N/A
|
No limitations
|
|
|
Data Availability
|
|
|
|
|
|
|
Reads/Writes |
|
|
|
Persistent Write Buffer
Details
|
DRAM (mirrored)
If caching is turned on (default=on), any write will only be acknowledged back to the host after it has been successfully stored in the DRAM memory of two separate physical SANsymphony nodes. Based on de-staging algorithms, each of the nodes eventually copies the written data that is kept in DRAM to the persistent disk layer. Because DRAM outperforms both flash and spinning disks, the applications experience much faster write behavior.
By default, the limit of dirty write data allowed per Virtual Disk is 128MB. This limit can be adjusted, although in practice there has never been a reason to do so. Individual Virtual Disks can be configured to act in write-through mode, which means the dirty-write-data limit is set to 0MB, so the data is effectively written directly to the persistent disk layer.
DataCore recommends that all servers running SANsymphony software are UPS protected to avoid data loss through unplanned power outages. Whenever a power loss is detected, the UPS automatically signals this to the SANsymphony node and write behavior is switched from write-back to write-through mode for all Virtual Disks. As soon as the UPS signals that power has been restored, the write behavior is switched to write-back again.
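The UPS-driven mode switch described above can be sketched as follows (Python, illustrative only; the names are assumptions, not DataCore code): on power loss every Virtual Disk's dirty-write-data limit drops to 0MB (write-through), and on power restore the default 128MB write-back limit is reinstated.

    # Illustrative UPS event handling (not DataCore code): power loss forces write-through
    # on all Virtual Disks; power restore reinstates the default write-back limit.
    DEFAULT_DIRTY_LIMIT_MB = 128

    class VirtualDisk:
        def __init__(self, name, dirty_limit_mb=DEFAULT_DIRTY_LIMIT_MB):
            self.name = name
            self.dirty_limit_mb = dirty_limit_mb   # 0 means write-through

    def on_ups_event(virtual_disks, power_ok):
        for vd in virtual_disks:
            vd.dirty_limit_mb = DEFAULT_DIRTY_LIMIT_MB if power_ok else 0

    vdisks = [VirtualDisk("vd01"), VirtualDisk("vd02")]
    on_ups_event(vdisks, power_ok=False)   # power lost  -> write-through everywhere
    on_ups_event(vdisks, power_ok=True)    # power back  -> write-back again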
|
Flash Layer (SSD)
The HPE SimpliVity 2600 does not contain a proprietary PCIe-based HPE OmniStack Accelerator Card.
|
DRAM (mirrored)
Flash Layer (SSD, NVMe)
|
|
|
Disk Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) but written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of the existing storage resources that are used for the active copies. Because of the synchronous updating, all mirror copies should be equal in storage performance.
When an active copy in a 3-way mirror fails, the backup copy is promoted to the active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds a seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). The seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
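The round-robin transport over available mirror ports, with rerouting around lost connections, can be illustrated conceptually (Python sketch, not DataCore code; the port names are made up).

    # Conceptual round-robin over mirror ports, skipping ports that are down (not DataCore code).
    from itertools import cycle

    class MirrorTransport:
        def __init__(self, ports):
            self.ports = ports                 # e.g. {"mp1": True, "mp2": True} (True = up)
            self._rr = cycle(sorted(ports))

        def next_port(self):
            for _ in range(len(self.ports)):
                port = next(self._rr)
                if self.ports[port]:           # reroute around lost ports/connections
                    return port
            raise RuntimeError("no mirror ports available")

    t = MirrorTransport({"mp1": True, "mp2": True, "mp3": True})
    t.ports["mp2"] = False                     # simulate a lost connection
    print([t.next_port() for _ in range(4)])   # mirror traffic continues over mp1/mp3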
|
1 Replica (2N)
+ Hardware RAID (5 or 6)
HPE SimpliVity 2600 uses replicas to protect data within the cluster. In addition, hardware RAID is implemented to enhance the robustness of individual nodes.
Replicas+Hardware RAID: Before any write is acknowledged to the host, it is synchronously replicated on a designated partner node. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. When a physical disk fails, hardware RAID maintains data availability.
Only when more than two disks fail within the same node does data have to be read from the partner node instead. Given the high level of redundancy within and across nodes, the desire to reduce unnecessary I/O, and the fact that most node outages are easily recoverable, node-level redundancy is re-established through a user-initiated action.
|
1-2 Replicas (2N-3N)
+ Hardware RAID (1, 5, 10)
StarWind Virtual SAN replicates the storage to protect data within the cluster. In addition, hardware or software RAID is implemented to enhance the robustness of individual nodes.
Replicas+Hardware RAID: Before any write is acknowledged to the host, it is synchronously replicated to one or two designated partner nodes. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. With 3N one instance of data that is written is stored on the local node, and two other instances of that data are stored on designated partner nodes in the cluster. When a physical disk fails, hardware RAID maintains data availability.
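A minimal sketch of the synchronous write path just described (Python, illustrative only; not StarWind code): the acknowledgment is returned to the host only after the local node and every designated partner node (one for 2N, two for 3N) have stored the data.

    # Synchronous 2N/3N replication sketch (not StarWind code): acknowledge the host only
    # after the local node and all designated partner nodes have stored the write.
    class Node:
        def __init__(self, name):
            self.name, self.blocks = name, []
        def store(self, block):
            self.blocks.append(block)

    def synchronous_write(block, local_node, partner_nodes):
        local_node.store(block)                # first instance on the local node
        for partner in partner_nodes:          # one partner (2N) or two partners (3N)
            partner.store(block)               # each partner stores another instance
        return "ACK"                           # returned only after all instances exist

    print(synchronous_write("block-42", Node("n1"), [Node("n2"), Node("n3")]))   # 3N example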
|
|
|
Node Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1)
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) but written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of the existing storage resources that are used for the active copies. Because of the synchronous updating, all mirror copies should be equal in storage performance.
When an active copy in a 3-way mirror fails, the backup copy is promoted to the active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds a seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). The seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
1 Replica (2N)
+ Hardware RAID (5 or 6)
HPE SimpliVity 2600 uses replicas to protect data within the cluster. In addition, hardware RAID is implemented to enhance the robustness of individual nodes.
Replicas+Hardware RAID: Before any write is acknowledged to the host, it is synchronously replicated on a designated partner node. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. When a physical node fails, VMs need to be restarted and data is read from the partner node instead. Given the high level of redundancy within and across nodes, the desire to reduce unnecessary I/O, and the fact that most node outages are easily recoverable, node-level redundancy is re-established through a user-initiated action.
|
1-2 Replicas (2N-3N)
Replicas: Before any write is acknowledged to the host, it is synchronously replicated to one or two designated partner nodes. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. With 3N one instance of data that is written is stored on the local node, and two other instances of that data are stored on designated partner nodes in the cluster.
|
|
|
Block Failure Protection
Details
|
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk. However, block failure protection is in most cases irrelevant as 1-node appliances are used as building blocks.
SANsymphony works on an N+1 redundancy design, allowing any node to acquire any other node as a redundancy peer per virtual device. Peers are replaceable/interchangeable on a per Virtual Disk level.
|
Not relevant (1-node chassis only)
HPE SimpliVity 2600 building blocks are based on 1-node chassis only. Therefore multi-node block (appliance) level protection is not relevant for this solution as Node Failure Protection applies.
|
Not relevant (usually 1-node appliances)
In a 3-node cluster, StarWind can provide 3-way mirroring, allowing the cluster to withstand a failure of two nodes without losing data accessibility. In a cluster of more than 3 nodes, a grid architecture can be configured to withstand the failure of two nodes.
|
|
|
Rack Failure Protection
Details
|
Manual configuration
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk.
|
Group Placement
HPE SimpliVity 2600 intelligent software features include rack failure protection. Both rack-level and site-level protection within a cluster are administratively determined by placing hosts into groups. Data is balanced appropriately to ensure that each VM is redundantly stored across two separate groups.
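The group-based placement can be illustrated with a short Python sketch (hypothetical group and host names, not HPE code): the two copies of a VM's data are placed on hosts from two different groups, so the loss of an entire rack or site (one group) never takes out both copies.

    # Illustrative rack/site-aware placement (not HPE code): pick one host from each of
    # two different groups so both data copies never share a rack or site.
    def place_replicas(groups):
        """groups: dict mapping group name -> list of hosts; returns two hosts."""
        if len(groups) < 2:
            raise ValueError("need at least two groups for rack/site protection")
        (_, hosts_a), (_, hosts_b) = sorted(groups.items())[:2]
        return hosts_a[0], hosts_b[0]          # copies land in two separate groups

    print(place_replicas({"rack-A": ["host1", "host2"], "rack-B": ["host3"]}))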
|
N/A
|
|
|
Protection Capacity Overhead
Details
|
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
|
Replica (2N) + RAID5: 125-133%
Replica (2N) + RAID6: 120-133%
Replica (2N) + RAID60: 125-140%
The hardware RAID level that is applied depends on drive count in an individual node:
2 drives = RAID1
4-5 drives = RAID5
8-12 drives = RAID6
14-20 drives = RAID60 (2 per RAID6 set)
|
Replica (2N) + RAID1/10: 200%
Replica (2N) + RAID5: 125-133%
Replica (3N) + RAID1/10: 300%
Replica (3N) + RAID5: 225-233%
|
|
|
Data Corruption Detection
Details
|
N/A (hardware dependent)
SANsymphony fully relies on the hardware layer to protect data integrity. This means that the SANsymphony software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
Read integrity checks (CLI)
Disk scrubbing (software)
While writing data, checksums are created and stored as part of the inline deduplication process. When one of the underlying layers detects data corruption, a checksum comparison is performed and, when required, another copy of the data is used to repair the corrupted copy in order to stay compliant with the configured protection level.
Read integrity checks can be enabled through the CLI.
Disk scrubbing, termed 'RAID Patrol' by HPE SimpliVity 2600, is a background process that performs checksum comparisons of all data stored within the solution. This way, stale data is also verified for corruption.
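Conceptually, the checksum comparison and background scrubbing work along the lines of the Python sketch below (illustrative only, not HPE code): a checksum stored at write time is recomputed later, and on mismatch the block is repaired from another copy.

    # Illustrative checksum-based corruption detection and repair (not HPE code).
    import zlib

    def write_block(store, block_id, data):
        store[block_id] = {"data": data, "crc": zlib.crc32(data)}   # checksum kept from write time

    def scrub(store, replica_store):
        for block_id, entry in store.items():
            if zlib.crc32(entry["data"]) != entry["crc"]:           # corruption detected
                entry["data"] = replica_store[block_id]["data"]     # repair from another copy
                entry["crc"] = zlib.crc32(entry["data"])

    primary, replica = {}, {}
    write_block(primary, "b1", b"payload")
    write_block(replica, "b1", b"payload")
    primary["b1"]["data"] = b"corrupt!"        # simulate silent corruption
    scrub(primary, replica)
    print(primary["b1"]["data"])               # -> b'payload'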
|
N/A (hardware dependent)
StarWind Virtual SAN fully relies on the hardware layer to protect data integrity. This means that the StarWind software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
|
|
|
Points-in-Time |
|
|
|
|
Built-in (native)
|
Built-in (native)
HPE SimpliVity 2600 data protection capabilities are entirely integrated in its approach to backup/restore, so there is no need for additional Point-in-Time (PiT) capabilities.
Traditional snapshots can still be created using the features natively available in the hypervisor platform (e.g. VMware Snapshots).
|
N/A
StarWind Virtual SAN does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
|
|
|
Local + Remote
SANsymphony snapshots are always created on one side only. However, SANsymphony allows you to create a snapshot for the data on each side by configuring two snapshot schedules, one for the local volume and one for the remote volume. Both snapshot entities are independent and can be deleted independently allowing different retention times if needed.
The snapshot feature can also be paired with asynchronous replication, which provides the ability to maintain a long-distance remote copy at a third site with its own retention time.
|
Local + Remote
HPE SimpliVity 2600 data protection capabilities are entirely integrated in its approach to backup/restore, so there is no need for additional Point-in-Time (PiT) capabilities.
Traditional snapshots can still be created using the features natively available in the hypervisor platform (e.g. VMware Snapshots).
|
N/A
StarWind Virtual SAN does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
|
|
Snapshot Frequency
Details
|
1 Minute
The snapshot lifecycle can be automatically configured using the integrated Automation Scheduler.
|
GUI: 10 minutes (Policy-based)
CLI: 1 minute
Backups can be scheduled.
Although setting the backup frequency below 10 minutes is possible through the Command Line Interface (CLI), it should not be used for a large number of protected VMs as this could severely impact performance.
|
N/A
StarWind Virtual SAN does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
|
|
Snapshot Granularity
Details
|
Per VM (Vvols) or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is certified for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
Per VM
|
N/A
StarWind Virtual SAN does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
|
|
|
Built-in (native)
DataCore SANsymphony incorporates Continuous Data Protection (CDP) and leverages this as an advanced backup mechanism. As the term implies, CDP continuously logs and timestamps I/Os to designated virtual disks, allowing end-users to restore the environment to an arbitrary point-in-time within that log.
Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage. Several of these markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent or crash-consistent restore point from just before the incident occurred.
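The rolling, timestamped I/O log with rollback markers can be pictured with the conceptual Python sketch below (not DataCore code; the names are illustrative): every write is timestamped, markers tag quiesced points, and a restore replays the log up to the chosen point in time.

    # Conceptual CDP sketch (not DataCore code): timestamped writes, rollback markers,
    # and a restore that replays the log up to a chosen point in time.
    class CdpLog:
        def __init__(self):
            self.entries = []                  # (timestamp, kind, payload), appended in time order

        def log_write(self, ts, block_id, data):
            self.entries.append((ts, "write", (block_id, data)))

        def add_marker(self, ts, label):
            self.entries.append((ts, "marker", label))   # e.g. after app quiesce + cache flush

        def restore(self, up_to_ts):
            image = {}
            for ts, kind, payload in self.entries:
                if ts > up_to_ts:
                    break
                if kind == "write":
                    block_id, data = payload
                    image[block_id] = data
            return image                       # virtual disk image at that point in time

    log = CdpLog()
    log.log_write(1, "b1", "v1")
    log.add_marker(2, "app-consistent")
    log.log_write(3, "b1", "v2")
    print(log.restore(up_to_ts=2))             # -> {'b1': 'v1'} (state at the marker)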
|
Built-in (native)
HPE SimpliVity 2600 provides native backup capabilities. Its backup feature supports remote-replication, is deduplication aware and data is compressed over the wire.
A snapshot is not a backup:
1. For a data copy to be considered a backup, it must at the very least reside on a different physical platform (=controller+disks) to avoid dependencies. If the source fails or gets corrupted, a backup copy should still be accessible for recovery purposes.
2. To avoid further dependencies, a backup copy should reside in a different physical datacenter - away from the source. If the primary datacenter becomes unavailable for whatever reason, a backup copy should still be accessible for recovery purposes.
When considering the above prerequisites, a backup copy can be created by combining snapshot functionality with remote replication functionality to create independent point-in-time data copies on other SDS/HCI clusters or within the public cloud. In ideal situations, the retention policies can be set independently for local and remote point-in-time data copies, so an organization can differentiate between how long the separate backup copies need to be retained.
|
External
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore, it relies on existing 3rd-party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End of Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
|
Local or Remote
All available storage within the SANsymphony group can be configured as targets for back-up jobs.
|
Locally
To other SimpliVity sites
To Service Providers
Backup remote-replication is deduplication aware + data is compressed over the wire.
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore, it relies on existing 3rd-party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End of Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
|
Continuously
As Continuous Data Protection (CDP) is leveraged, I/Os are logged and timestamped in a continuous fashion, so end-users can restore to virtually any point in time.
|
GUI: 10 minutes (Policy-based)
CLI: 1 minute
Although setting the backup frequency below 10 minutes is possible through the Command Line Interface (CLI), it should not be used for a large number of protected VMs as this could severely impact performance.
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore, it relies on existing 3rd-party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End of Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
Backup Consistency
Details
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
By default CDP creates crash consistent restore points. Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage.
Several CDP Rollback Markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent, filesystem-consistent or crash-consistent restore point from (just) before the incident occurred.
In a VMware vSphere environment, the DataCore VMware vCenter plug-in can be used to create snapshot schedules for datastores and to select the VMs for which VSS filesystem/application consistency should be enabled.
|
vSphere: File System Consistent (Windows), Application Consistent (MS Apps on Windows)
Hyper-V: File System Consistent (Windows)
HPE SimpliVity 2600 provides the option to enable Microsoft VSS integration when configuring a backup policy or when initiating manual backups using the CLI backup command. This ensures application-consistent backups are created for MS Exchange and MS SQL database environments.
In OmniStack 3.6.1 support was added for VSS on virtual machines running SQL Server 2012/2016 on the Windows Server 2012 R2 operating system.
HPE SimpliVity 2600 also still provides the option to create crash-consistent backups by setting consistency to 'none'.
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore, it relies on existing 3rd-party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End of Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
Restore Granularity
Details
|
Entire VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
When configuring the virtual environment as described above, effectively VM-restores are possible.
For file-level restores, a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
vSphere: Entire VM or Single File
Hyper-V: Entire VM
With HPE SimpliVity 2600, snapshots and backups are the same.
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore, it relies on existing 3rd-party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End of Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
Restore Ease-of-use
Details
|
Entire VM or Volume: GUI
Single File: Multi-step
Restoring VMs or single files from volume-based storage snapshots requires a multi-step approach.
For file-level restores, a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
Entire VM: GUI, CLI and API
Single File: GUI
Single File restores can be performed entirely from the vSphere Web Client GUI due to the plug-in integration.
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore, it relies on existing 3rd-party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End of Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
|
Disaster Recovery |
|
|
|
Remote Replication Type
Details
|
Built-in (native)
DataCore SANsymphony's remote replication function, Asynchronous Replication, is called upon when secondary copies will be housed beyond the reach of Synchronous Mirroring, as in distant Disaster Recovery (DR) sites. It relies on a basic IP connection between locations and works in both directions; that is, each site can act as the disaster recovery facility for the other. The software operates near-synchronously, meaning that it does not hold up the application waiting on confirmation from the remote end that the update has been stored remotely.
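A minimal sketch of that near-synchronous behaviour (Python, illustrative only; not DataCore code): writes are acknowledged locally and queued for transmission over the IP link, so the application never waits on the remote site.

    # Illustrative asynchronous replication (not DataCore code): acknowledge locally,
    # ship queued updates to the DR site when the link allows.
    from collections import deque

    class AsyncReplicator:
        def __init__(self):
            self.pending = deque()             # updates not yet confirmed by the remote site

        def on_local_write(self, block):
            self.pending.append(block)         # queued for the remote site
            return "ACK"                       # host acknowledged without waiting on the WAN

        def drain(self, remote_site):
            while self.pending:
                remote_site.append(self.pending.popleft())   # sent over the IP link

    repl, dr_site = AsyncReplicator(), []
    print(repl.on_local_write("block-7"))      # local write acknowledged immediately
    repl.drain(dr_site)                        # later, the update reaches the DR site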
|
Built-in (native)
HPE SimpliVity 2600 provides native DR and replication capabilities.
| |