|
General
|
|
|
- Fully Supported
- Limitation
- Not Supported
- Information Only
|
|
Pros
|
- + Extensive platform support
- + Extensive data protection capabilities
- + Flexible deployment options
|
- + Extensive QoS capabilities
- + Extensive data protection capabilities
- + Small form factor
|
- + Flexible architecture
- + Extensive platform support
- + Several Microsoft integration points
|
|
Cons
|
- - No native data integrity verification
- - Dedup/compr not performance optimized
- - Disk/node failure protection not capacity optimized
|
- - No hybrid configurations
- - No stretched clustering
- - Single hypervisor support
|
- - Minimal data protection capabilities
- - No Quality-of-service mechanisms
- - No native encryption capabilities
|
|
|
|
Content |
|
|
|
WhatMatrix
|
WhatMatrix
|
WhatMatrix
|
|
|
|
Assessment |
|
|
|
Name: SANsymphony
Type: Software-only (SDS)
Development Start: 1998
First Product Release: 1999
NEW
DataCore was founded in 1998 and began to ship its first software-defined storage (SDS) platform, SANsymphony (SSY), in 1999. DataCore launched a separate entry-level storage virtualization solution, SANmelody (v1.4), in 2004. This platform was also the foundation for DataCore's HCI solution. In 2014 DataCore formally announced Hyperconverged Virtual SAN as a separate product. In May 2018, changes to the software licensing model enabled consolidation of the products; because the core software is the same, the combined offering has since been called DataCore SANsymphony.
One year later, in 2019, DataCore expanded its software-defined storage portfolio with a solution dedicated to file virtualization. This additional SDS offering is called DataCore vFilO and operates as a scale-out global file system across distributed sites, spanning on-premises and cloud-based NFS and SMB shares.
At the beginning of 2021, DataCore acquired Caringo and integrated its know-how and software-defined object storage offerings into the DataCore portfolio. The newest member of the DataCore SDS portfolio is called DataCore Swarm; together with its complementary offerings SwarmFS and DataCore FileFly, it enables customers to build on-premises object storage solutions that radically simplify the ability to manage, store, and protect data while allowing multi-protocol (S3/HTTP, API, NFS/SMB) access to any application, device, or end-user.
DataCore Software specializes in software solutions for block, file, and object storage. DataCore has by far the longest track record in software-defined storage of the SDS/HCI vendors on the WhatMatrix.
In April 2021 the company had an install base of more than 10,000 customers worldwide and there were about 250 employees working for DataCore.
|
Name: NetApp HCI
Type: Hardware+Software (HCI)
Development Start: 2016
First Product Release: 2017
SolidFire, founded at the end of 2009, released its first Software Defined Storage (SDS) solution in November 2012. In February 2016 NetApp officially completed its acquisition of SolidFire, thus gaining access to the SDS/HCI market.
NetApp HCI's worldwide customer install base is unknown at this time. The number of employees working in the NetApp HCI division is also unknown at this time.
|
Name: StarWind Virtual SAN
Type: SDS
Development Start: 2003
First Product Release: 2011
StarWind Software is a privately held company which started in 2008 as a spin-off from Rocket Division Software, Ltd. (founded in 2003). It initially provided free Software Defined Storage (SDS) offerings to early adopters in 2009. In 2011 the company released its product Native SAN (later rebranded to Virtual SAN). In 2015 StarWind executed a successful 'pivot shift' from a software-only company to a hardware vendor and brought Hyper-Convergence from the Enterprise level to SMB and ROBO. Apart from HCA solutions, StarWind keeps its focus on developing and improving its Virtual SAN solution. In 2018, StarWind released Virtual SAN for vSphere - an SDS solution specifically aimed at VMware vSphere environments.
In March 2020 the company had a StarWind Virtual SAN install base of more than 4,500 customers worldwide. In June 2019 there were more than 250 employees working for StarWind.
|
|
|
GA Release Dates:
SSY 10.0 PSP12: jan 2021
SSY 10.0 PSP11: aug 2020
SSY 10.0 PSP10: dec 2019
SSY 10.0 PSP9: jul 2019
SSY 10.0 PSP8: sep 2018
SSY 10.0 PSP7: dec 2017
SSY 10.0 PSP6 U5: aug 2017
.
SSY 10.0: jun 2014
SSY 9.0: jul 2012
SSY 8.1: aug 2011
SSY 8.0: dec 2010
SSY 7.0: apr 2009
.
SSY 3.0: 1999
NEW
10th Generation software. DataCore currently has the most experience with SDS/HCI technology when comparing SANsymphony to the other SDS/HCI platforms.
SANsymphony (SSY) version 3 was the first public release that hit the market back in 1999. The product has evolved ever since and the current major release is version 10. The list includes only the milestone releases.
PSP = Product Support Package
U = Update
|
GA Release Dates:
NetApp HCI 1.8P1: oct 2020
NetApp HCI 1.8: may 2020
NetApp HCI 1.7: sep 2019
NetApp HCI 1.6: jul 2019
NetApp HCI 1.4: nov 2018
NetApp HCI 1.3: jun 2018
NetApp HCI 1.2: mar 2018
NetApp HCI 1.1: dec 2017
NetApp HCI 1.0: oct 2017
NEW
12th Generation SolidFire software on whitelabel server hardware.
NetApp HCI is fueled by SolidFire Element OS software. SolidFire's maturity has increased with every iteration since the first release, expanding its range of features with advanced functionality.
NetApp HCI v1.8P1 is based on Element OS 12.2.
Element OS 12.2: sep 2020
Element OS 12.0: may 2020
Element OS 11.5: sep 2019
Element OS 11.3: jul 2019
Element OS 11.1: mar 2019
Element OS 11.0: nov 2018
Element OS 10.4: aug 2018
Element OS 10.3: jun 2018
Element OS 10.2: mar 2018 (NetApp HCI-only)
Element OS 10.1: dec 2017
Element OS 10.0: nov 2017
Element OS 9.0: oct 2016
Element OS 8.0: jun 2015
Element OS 7.0: nov 2014
Element OS 6.0: apr 2014
|
StarWind VSAN for vSphere Release Dates:
VSAN build 13170: oct 2019
VSAN build 12859: feb 2019
VSAN build 12658: dec 2018
VSAN build 12533: sep 2018
StarWind VSAN for Hyper-V Release Dates:
VSAN build 13279: oct 2019
VSAN build 13182: aug 2019
VSAN build 12767: feb 2019
VSAN build 12658: nov 2018
VSAN build 12585: oct 2018
VSAN build 12393: aug 2018
VSAN build 12166: may 2018
VSAN build 12146: apr 2018
VSAN build 11818: dec 2017
VSAN build 11456: aug 2017
VSAN build 11404: jul 2017
VSAN build 11156: may 2017
VSAN build 11071: may 2017
VSAN build 10927: apr 2017
VSAN build 10914: apr 2017
VSAN build 10833: apr 2017
VSAN build 10811: mar 2017
VSAN build 10799: mar 2017
VSAN build 10695: feb 2017
VSAN build 10547: jan 2017
VSAN build 9996: aug 2016
VSAN build 9980: aug 2016
VSAN build 9781: jun 2016
VSAN build 9611: jun 2016
VSAN build 9052: may 2016
VSAN build 8730: nov 2015
VSAN build 8716: nov 2015
VSAN build 8198: jun 2015
VSAN build 7929: apr 2015
VSAN build 7774: feb 2015
VSAN build 7509: dec 2014
VSAN build 7471: dec 2014
VSAN build 7354: nov 2014
VSAN build 7145
VSAN build 6884
Version 8 Release 10 StarWind software.
|
|
|
|
Pricing |
|
|
Hardware Pricing Model
Details
|
N/A
SANsymphony is sold by DataCore as a software-only solution. Server hardware must be acquired separately.
The entry point for all hardware and software compatibility statements is: https://www.datacore.com/products/sansymphony/tech/compatibility/
On this page links can be found to: Storage Devices, Servers, SANs, Operating Systems (Hosts), Networks, Hypervisors, Desktops.
Minimum server hardware requirements can be found at: https://www.datacore.com/products/sansymphony/tech/prerequisites/
|
Per Node
|
N/A
StarWind Virtual SAN is sold by StarWind as a software-only solution. Server hardware must be acquired separately.
|
|
Software Pricing Model
Details
|
Capacity based (per TB)
NEW
DataCore SANsymphony is licensed in three different editions: Enterprise, Standard, and Business.
All editions are licensed by capacity (in 1 TB steps). Except for the Business edition, which has a fixed price per TB, the price per TB decreases as more capacity is licensed in each edition.
Each edition includes a defined feature set.
Enterprise (EN) includes all available features plus expanded Parallel I/O.
Standard (ST) includes all Enterprise (EN) features, except FC connections, Encryption, Inline Deduplication & Compression and Shared Multi-Port Array (SMPA) support with regular Parallel I/O.
Business (BZ) as entry-offering includes all essential Enterprise (EN) features, except Asynchronous Replication & Site Recovery, Encryption, Deduplication & Compression, Random Write Accelerator (RWA) and Continuous Data Protection (CDP) with limited Parallel I/O.
Customers can choose between a perpetual licensing model or a term-based licensing model. Any initial license purchase for perpetual licensing includes Premier Support for either 1, 3 or 5 years. Alternatively, term-based licensing is available for either 1, 3 or 5 years, always including Premier Support as well, plus enhanced DataCore Insight Services (predictive analytics with actionable insights). In most regions, BZ is available as term license only.
Capacity can be expanded in 1 TB steps. There exists a 10 TB minimum per installation for Business (BZ). Moreover, BZ is limited to 2 instances and a total capacity of 38 TB per installation, but one customer can have multiple BZ installations.
Cost neutral upgrades are available when upgrading from Business/Standard (BZ/ST) to Enterprise (EN).
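As a worked illustration of the capacity-based licensing constraints described above, the sketch below validates a hypothetical Business (BZ) edition order (1 TB steps, 10 TB minimum, 38 TB and 2 instances maximum per installation); the per-TB price is a made-up placeholder, not DataCore list pricing.
```python
# Minimal sketch: validate a hypothetical SANsymphony Business (BZ) edition
# capacity order against the constraints stated above. The price per TB is a
# made-up placeholder, NOT actual DataCore list pricing.

BZ_MIN_TB = 10          # 10 TB minimum per installation
BZ_MAX_TB = 38          # BZ capped at a total of 38 TB per installation
BZ_MAX_INSTANCES = 2    # BZ limited to 2 instances per installation
PRICE_PER_TB = 100      # hypothetical fixed price per TB (BZ uses a flat rate)

def quote_bz_installation(capacity_tb: int, instances: int) -> int:
    """Return the hypothetical license cost for one BZ installation, or raise if the order violates the stated limits."""
    if capacity_tb < BZ_MIN_TB:
        raise ValueError(f"BZ requires at least {BZ_MIN_TB} TB per installation")
    if capacity_tb > BZ_MAX_TB:
        raise ValueError(f"BZ is limited to {BZ_MAX_TB} TB per installation")
    if instances > BZ_MAX_INSTANCES:
        raise ValueError(f"BZ is limited to {BZ_MAX_INSTANCES} instances per installation")
    return capacity_tb * PRICE_PER_TB

print(quote_bz_installation(capacity_tb=20, instances=2))   # 2000 with the placeholder rate
```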
|
Per Node
Capacity based (per TB)
NetApp HCI software licensing is separate from hardware pricing. The decoupling provides a choice between per node and capacity pricing. The capacity pricing model allows usage of the licensed storage capacity across an entire enterprise (multiple sites, multiple solutions). Currently 25TB and 100TB packs exist.
There are no separate software editions with regards to NetApp HCI. Each node comes equipped with an all-inclusive feature set. This means that without exception all storage software capabilities are available for use.
NetApp HCI can be connected to NetApp Cloud Central to enable additional features and services such as Kubernetes-as-a-Service with NetApp Kubernetes Service and fileservices-as-a-Service with NetApp Cloud Volumes. Some cloud services come with an additional charge.
The NetApp HCI solution comes with a 90-day trial license for VMware, which allows the NetApp Deployment Engine (NDE) to install and configure the VMware software. VMware software licenses must be purchased separately.
|
Hyper-V: Per Node + Per TB
vSphere: Per Node (storage capacity included)
There are two StarWind Virtual SAN editions: StarWind Virtual SAN for Hyper-V and StarWind Virtual SAN for vSphere. StarWind Virtual SAN for Hyper-V is licensed per node plus per amount of HA storage provisioned by StarWind Virtual SAN. StarWind Virtual SAN for vSphere is licensed per node; there the amount of HA storage provisioned by StarWind Virtual SAN is always unlimited.
HA = Highly Available
|
|
Support Pricing Model
Details
|
Capacity based (per TB)
Support is always provided on a premium (24x7) basis, including free updates.
More information about DataCore's support policy can be found here:
http://datacore.custhelp.com/app/answers/detail/a_id/1270/~/what-is-datacores-support-policy-for-its-products
|
Per Node
With regards to NetApp HCI there are three support offerings:
- SupportEdge Standard
- SupportEdge Premium
- SupportEdge Secure for Government (onsite and parts delivery)
All three options include hardware support (both compute and storage), software support, VMware support, upgrades and patch entitlement, access to NetApp Support site assets and the Knowledge Base, and full access to the HCI Technical Support team.
SupportEdge Standard:
- Next-Business-Day (NBD) hardware replacement.
SupportEdge Premium:
- 4-Hour hardware replacement
- Priority placement in the technical support queuing system
Customers may call VMware directly or call NetApp Technical Support when it comes to VMware-specific support questions.
|
Per Node
StarWind provides three editions of StarWind Virtual SAN Support:
1. Standard Support covers business days, business hours, with up to 4-hour response time.
2. Premium Support covers 24x7x365 support with up to 1-hour response time.
3. Proactive Support provides Premium Support and additionally monitors the health of the system, proactively notifying about potential issues at the hardware level, hypervisor level and StarWind software level.
|
|
|
Design & Deploy
|
|
|
|
|
|
|
Design |
|
|
Consolidation Scope
Details
|
Storage
Data Protection
Management
Automation&Orchestration
DataCore is storage-oriented.
SANsymphony's software-defined storage services are focused on flexible deployment models. The range spans classical storage virtualization, converged and hybrid-converged setups, and hyperconverged deployments, with seamless migration between them.
DataCore aims to provide all key components within a storage ecosystem including enhanced data protection and automation & orchestration.
|
Compute
Storage
Data Protection
Management
Automation&Orchestration
Both NetApp and the NetApp HCI platform are storage-oriented.
With the NetApp HCI platform NetApp aims to provide key components within a Private Cloud ecosystem as well as integration with existing hypervisors and applications.
|
Storage
Management
StarWind Virtual SAN consolidates storage from different servers by replicating and presenting it over the iSCSI protocol as a single pool.
|
|
|
1, 10, 25, 40, 100 GbE (iSCSI)
8, 16, 32, 64 Gbps (FC)
The bandwidth required depends entirely on the specific workload needs.
SANsymphony 10 PSP11 introduced support for Emulex Gen 7 64 Gbps Fibre Channel HBAs.
SANsymphony 10 PSP8 introduced support for Gen6 16/32 Gbps ATTO Fibre Channel HBAs.
|
10/25 GbE (iSCSI)
NetApp HCI compute and storage nodes use Ethernet connectivity for storage traffic; two SFP+ (10 GbE) or SFP28 (25GbE) interfaces can be dedicated to storage. On compute nodes, all traffic including storage can also be converged on just two SFP+/SFP28 interfaces.
|
1, 10, 25, 40, 100 GbE
StarWind requires at least three dedicated network interfaces:
- one for StarWind Synchronization
- one for iSCSI traffic/Heartbeat
- one for Management/Heartbeat
At least one Heartbeat interface must reside on a separate, redundant network adapter.
For iSCSI and synchronization traffic, a minimum of 1 GbE bandwidth and a latency under 5 ms are required.
|
|
Overall Design Complexity
Details
|
Medium
DataCore SANsymphony is able to meet many different use-cases because of its flexible technical architecture, however this also means there are a lot of design choices that need to be made. DataCore SANsymphony seeks to provide important capabilities either natively or tightly integrated, and this keeps the design process relatively simple. However, because many features in SANsymphony are optional and thus can be turned on/off, in effect each one needs to be taken into consideration when preparing a detailed design.
|
Low
NetApp HCI was developed with simplicity in mind, both from a design and a deployment perspective. NetApp HCIs uniform platform architecture is meant to be applicable to a wide variety of use-cases and seeks to provide important capabilities natively. There are only a handful of storage building blocks to choose from, and many advanced capabilities like deduplication and compression are always turned on. This minimizes the amount of design choices as well as the number of deployment steps.
|
Medium
StarWind Virtual SAN is able to meet many different use-cases because of its flexible technical architecture, however this also means there are a lot of design choices that need to be made. Also, StarWind Virtual SAN does not encompass many native data protection capabilities and data services. A complete solution design therefore requires the presence of multiple technology platforms.
|
|
External Performance Validation
Details
|
SPC (Jun 2016)
ESG Lab (Jan 2016)
SPC (Jun 2016)
Title: 'Dual Node, Fibre Channel SAN'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-connected, SSY 10.0, 4x All-Flash Dell MD1220 SAS Storage Arrays
SPC (Jun 2016)
Title: 'Dual Node, High Availability, Hyper-converged'
Workloads: SPC-1
Benchmark Tools: SPC-1 Workload Generator
Hardware: All-Flash Lenovo x3650, 2-node cluster, FC-interconnect, SSY 10.0
ESG Lab (Jan 2016)
Title: 'DataCore Application-adaptive Data Infrastructure Software'
Workloads: OLTP
Benchmark Tools: IOmeter
Hardware: Hybrid (Tiered) Dell PowerEdge R720, 2-node cluster, SSY 10.0
|
Evaluator Group (Aug 2019)
Evaluator Group (Aug 2019)
Title: 'Scaling Performance for Enterprise HCI Environments'
Workloads: Mix (MS Exchange, Olio, Web, Database), VDI
Benchmark Tools: IOmark-VM (all), IOmark-VDI
Hardware: H410S, 5-node cluster, NetApp HCI 1.x
|
N/A
No StarWind Virtual SAN validated test reports have been published in 2016/2017/2018/2019.
|
|
Evaluation Methods
Details
|
Free Trial (30-days)
Proof-of-Concept (PoC; up to 12 months)
SANsymphony is freely downloadable after registering online and offers full platform support (complete Enterprise feature set), but is scale (4 nodes), capacity (16TB) and time (30 days) restricted, all of which can be extended upon request. The free trial version of SANsymphony can be installed on all commodity hardware platforms that meet the hardware requirements.
For more information please go here: https://www.datacore.com/try-it-now/
|
Proof-of-Concept (PoC)
|
Community Edition (forever)
Trial (up to 30 days)
Proof-of-Concept
There are 2 ways for end-user organizations to evaluate StarWind:
1. StarWind Free. A free version of the software-only Virtual SAN product can be downloaded from the StarWind website. The free version has full functionality, but StarWind Management Console works only in monitoring mode without the ability to create or manage StarWind devices; all management is performed via PowerShell and a set of script templates.
The free version is intended to be self-supported or community-supported on public discussion forums.
2. StarWind Trial. The trial version has full functionality and all the management capabilities of StarWind Management Console; it is limited to 30 days but can be extended if required.
|
|
|
|
Deploy |
|
|
Deployment Architecture
Details
|
Single-Layer
Dual-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
Single-Layer:
- SANsymphony is implemented as a virtual machine (VM) or, in the case of Hyper-V, as a service layer on the Hyper-V parent OS, managing internal and/or external storage devices and providing virtual disks back to the hypervisor cluster it is implemented in. DataCore calls this a hyper-converged deployment.
Dual-Layer:
- SANsymphony is implemented as bare metal nodes, managing external storage (SAN/NAS approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a traditional deployment.
- SANsymphony is implemented as bare metal nodes, managing internal storage devices (server-SAN approach) and providing virtual disks to external hosts which can be either bare metal OS systems and/or hypervisors. DataCore calls this a converged deployment.
Mixed:
- SANsymphony is implemented in any combination of the above 3 deployments within a single management entity (Server Group) acting as a unified storage grid. DataCore calls this a hybrid-converged deployment.
|
Dual-Layer
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
Dual-Layer: NetApp HCI is implemented exclusively in a dual-layer architecture consisting of compute-only nodes in one layer and storage-only nodes in a separate layer. This design choice was intentional as to allow for maximum flexibility and performance predictability.
|
Single-Layer
Dual-Layer (secondary)
Single-Layer = servers function as compute nodes as well as storage nodes.
Dual-Layer = servers function only as storage nodes; compute runs on different nodes.
In a Single-Layer architecture, the servers with Virtual SAN software function as compute nodes as well as storage nodes; StarWind performs synchronous storage replication between them.
In a Dual-Layer architecture, StarWind Virtual SAN software replicates data in active-active mode between the dedicated storage servers and provides HA storage for use by separate compute nodes that do not have StarWind Virtual SAN installed.
|
|
Deployment Method
Details
|
BYOS (some automation)
BYOS = Bring-Your-Own-Server-Hardware
Deployment of DataCore SANsymphony is kept easy by a very straightforward implementation approach.
|
Turnkey (very fast; highly automated)
Because of the ready-to-go Hyper Converged Infrastructure (HCI) building blocks and the setup wizard provided by NetApp, customer deployments can be executed in hours instead of days.
|
BYOS (some automation)
|
|
|
Workload Support
|
|
|
|
|
|
|
Virtualization |
|
|
Hypervisor Deployment
Details
|
Virtual Storage Controller
Kernel (Optional for Hyper-V)
The SANsymphony Controller is deployed as a pre-configured Virtual Machine on top of each server that acts as a part of the SANsymphony storage solution and commits its internal storage and/or externally connected storage to the shared resource pool. The Virtual Storage Controller (VSC) can be configured with direct access to the physical disks, so the hypervisor does not impede the I/O flow.
In Microsoft Hyper-V environments the SANsymphony software can also be installed in the Windows Server Root Partition. DataCore does not recommend installing SANsymphony in a Hyper-V guest VM, as it introduces virtualization-layer overhead and prevents the DataCore software from directly accessing CPU, RAM and storage. This means that installing SANsymphony in the Windows Server Root Partition is the preferred deployment option. More information about the Windows Server Root Partition can be found here: https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/hyper-v-server/architecture
The DataCore software can be installed on Microsoft Windows Server 2019 or lower (all versions down to Microsoft Windows Server 2012/R2).
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (eg. most VSCs do not like snapshots). On the other hand Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform. VIBs have the middle-ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
None
Because of the design choice to keep compute and storage fully separated, the storage is served from the storage node cluster to the compute node cluster through the iSCSI storage protocol. In effect no solution components, eg. virtual storage controllers, need to be deployed on top of the compute nodes.
Kernel Integrated, Virtual Controller and VIB are each distributed architectures, having one active component per virtualization host that work together as a group. All three architectures are capable of delivering a complete set of storage services and good performance. Kernel Integrated solutions reside within the protected lower layer, VIBs reside just above the protected kernel layer, and Virtual Controller solutions reside in the upper user layer. This makes Virtual Controller solutions somewhat more prone to external actions (eg. most VSCs do not like snapshots). On the other hand Kernel Integrated solutions are less flexible because a new version requires the upgrade of the entire hypervisor platform. VIBs have the middle-ground, as they provide more flexibility than kernel integrated solutions and remain relatively shielded from the user level.
|
Virtual Storage Controller (vSphere)
User-Space (Hyper-V)
StarWind VSAN for vSphere is deployed by downloading an OVF template of a preconfigured Linux-based VM. It should be deployed on each VMware vSphere node that takes part in StarWind replication. The OVF contains StarWind SDS stack pre-configured and pre-installed.
StarWind VSAN for Hyper-V is deployed by downloading the latest build of StarWind. The installation process will automatically install the required components. The build should be installed on each node that will take part in StarWind replication.
|
|
Hypervisor Compatibility
Details
|
VMware vSphere ESXi 5.5-7.0U1
Microsoft Hyper-V 2012R2/2016/2019
Linux KVM
Citrix Hypervisor 7.1.2/7.6/8.0 (XenServer)
'Not qualified' means there is no generic support qualification due to limited market footprint of the product. However, a customer can always individually qualify the system with a specific SANsymphony version and will get full support after passing the self-qualification process.
Only products explicitly labeled 'Not Supported' have failed qualification or have shown incompatibility.
|
VMware vSphere ESXi 6.5U2-7.0
(Microsoft Hyper-V)
(Red Hat KVM)
The NetApp HCI Deployment Engine currently supports a single hypervisor (VMware vSphere), whereas other platforms support multiple hypervisors.
NetApp HCI v1.8 supports VMware vSphere 6.5U2-6.5U3, 6.7U1-6.7U3 and 7.0.
VMware vSphere 6.5U2 and 6.7U1 are also supported for initial deployments as well as expansions of existing environments. VMware vSphere 6.0 is no longer supported as it reached end-of-life on March 12, 2020.
Microsoft Hyper-V or Red Hat KVM hypervisors can be manually installed on the NetApp HCI compute nodes. However, support for such a particular combination can only be obtained via the NetApp Feature Product Variance Request (FPVR) process.
|
VMware vSphere ESXi 6.0-6.7
Microsoft Hyper-V 2012-2019
StarWind is actively working on supporting KVM.
|
|
Hypervisor Interconnect
Details
|
iSCSI
FC
The SANsymphony software-only solution supports both iSCSI and FC protocols to present storage to hypervisor environments.
DataCore SANsymphony supports:
- iSCSI (Switched and point-to-point)
- Fibre Channel (Switched and point-to-point)
- Fibre Channel over Ethernet (FCoE)
- Switched, where host uses Converged Network Adapter (CNA), and switch outputs Fibre Channel
|
iSCSI
NetApp HCI only supports the iSCSI protocol in order to present storage to hypervisor hosts, bare-metal hosts and VMs.
|
iSCSI
NFS
SMB3
iSCSI is the native StarWind VSAN storage protocol, as StarWind VSAN provides block-based storage.
NFS can be used as the storage protocol in VMware vSphere environments by leveraging the File Server role that the Windows OS provides.
SMB3 can be used as the storage protocol in Microsoft Hyper-V environments by leveraging the File Server role that the Windows OS provides.
In both VMware vSphere and Microsoft Hyper-V environments, iSCSI is used as a protocol to provide block-level storage access. It allows consolidating storage from multiple servers providing it as highly available storage to target servers. In the case of vSphere, VMware iSCSI Initiator allows connecting StarWind iSCSI devices to ESXi hosts and further create datastores on them. In the case of Hyper-V, Microsoft iSCSI Initiator is utilized to connect StarWind iSCSI devices to the servers and further provide HA storage to the cluster (i.e. CSV).
In virtualized environments, In-Guest iSCSI support is still a hard requirement if one of the following scenarios is pursued:
- Microsoft Failover Clustering (MSFC) in a VMware vSphere environment
- A supported MS Exchange 2013 Environment in a VMware vSphere environment
Microsoft explicitly does not support NFS in either scenario.
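To illustrate the in-guest iSCSI connectivity referred to above, here is a minimal sketch of discovering and logging in to an iSCSI target from a Linux guest, driving the standard open-iscsi command-line tool (iscsiadm) from Python; the portal address and target IQN are placeholders, not values from an actual StarWind installation.
```python
# Minimal sketch: in-guest iSCSI discovery and login on a Linux guest using
# the open-iscsi command line tool (iscsiadm). The portal IP and target IQN
# below are placeholders for illustration only.
import subprocess

PORTAL = "192.168.10.11"                                  # placeholder portal IP
TARGET = "iqn.2008-08.com.starwindsoftware:sw1-storage"   # placeholder target IQN

def run(cmd):
    """Run a command and return its stdout, raising on a non-zero exit code."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Discover targets exposed by the portal (SendTargets discovery).
print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))

# 2. Log in to the discovered target; the block device then appears to the guest.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
```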
|
|
|
|
Bare Metal |
|
|
Bare Metal Compatibility
Details
|
Microsoft Windows Server 2012R2/2016/2019
Red Hat Enterprise Linux (RHEL) 6.5/6.6/7.3
SUSE Linux Enterprise Server 11.0SP3+4/12.0SP1
Ubuntu Linux 16.04 LTS
CentOS 6.5/6.6/7.3
Oracle Solaris 10.0/11.1/11.2/11.3
Any operating system currently not qualified for support can always be individually qualified with a specific SANsymphony version and will get full support after passing the self-qualification process.
SANsymphony provides virtual disks (block storage LUNs) to all of the popular host operating systems that use standard disk drives with 512 byte or 4K byte sectors. These hosts can access the SANsymphony virtual disks via SAN protocols including iSCSI, Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE).
Mainframe operating systems such as IBM z/OS, z/TPF, z/VSE or z/VM are not supported.
SANsymphony itself runs on Microsoft Windows Server 2012/R2 or higher.
|
RHEL 7.4-8.1 (KVM)
Windows Server 2016-2019 (Hyper-V)
VMware vSphere 6.5-7.0
Citrix XenServer 7.4-7.6
Citrix Hypervisor 8.0-8.1
NetApp HCI volumes can be allocated to external hosts that support the iSCSI protocol, such as external (=non-NetApp HCI) VMware clusters, Hyper-V hosts, OpenStack hosts, Bare Metal Windows hosts and Bare Metal Linux hosts.
|
Microsoft Windows Server 2012/2012R2/2016/2019
StarWind VSAN provides highly available storage over iSCSI between the StarWind nodes and can additionally share storage over iSCSI to any OS that supports the iSCSI protocol.
|
|
Bare Metal Interconnect
Details
|
iSCSI
FC
FCoE
|
iSCSI
|
iSCSI
|
|
|
|
Containers |
|
|
Container Integration Type
Details
|
Built-in (native)
DataCore provides its own Volume Plugin for natively providing Docker container support, available on Docker Hub.
DataCore also has a native CSI integration with Kubernetes, available on Github.
|
Built-in (native)
NetApp provides its own software plugins for container support (both Docker and Kubernetes) through its Trident open-source project.
More information can be found here: https://netapp-trident.readthedocs.io
NetApp also developed its own container platform software called 'NetApp Kubernetes Service'. NKS provides on-premises Kubernetes-as-a-Service (KaaS) in order to enable end-users to quickly adopt container services.
NetApp Kubernetes Service (NKS) is not a hard requirement for running Docker containers and Kubernetes on top of NetApp HCI, however it does make it easier to use and consume.
NKS system size requirements for NetApp HCI environments:
- 2x4 systems are not supported for production use. However, these can be used for demo work.
- 3x4 systems are the minimum production system size supported by NetApp.
- 4x4 systems are the recommended minimum size.
|
N/A
StarWind Virtual SAN relies on the container support delivered by the hypervisor platform.
|
|
Container Platform Compatibility
Details
|
Docker CE/EE 18.03+
Docker EE = Docker Enterprise Edition
|
Docker EE 17.06+
Trident should work with any distribution of Docker or Kubernetes that uses one of the supported versions as a base, such as Rancher or Tectonic.
NetApp Kubernetes Service (NKS) is a Pure Upstream K8s play. This means that NetApp will support the latest release within a week.
|
Docker CE 17.06.1+ for Linux on ESXi 6.0+
Docker EE/Docker for Windows 17.06+ on ESXi 6.0+
Docker CE = Docker Community Edition
Docker EE = Docker Enterprise Edition
|
|
Container Platform Interconnect
Details
|
Docker Volume plugin (certified)
The DataCore SDS Docker Volume plugin (DVP) enables Docker Containers to use storage persistently, in other words enables SANsymphony data volumes to persist beyond the lifetime of both a container or a container host. DataCore leverages SANsymphony iSCSI and FC to provide storage to containers. This effectively means that the hypervisor layer is bypassed.
The DataCore SDS Volume plugin (DVP) is officially 'Docker Certified' and can be downloaded from the Docker Hub. The plugin is installed inside the Docker host, which can be either a VM or a Bare Metal host connected to a SANsymphony storage cluster.
For more information please go to: https://hub.docker.com/plugins/datacore-sds-volume-plugin
The Kubernetes CSI plugin can be downloaded from GitHub. The plugin is automatically deployed as several pods within the Kubernetes system.
For more information please go to: https://github.com/DataCoreSoftware/csi-plugin
Both plugins are supported with SANsymphony 10 PSP7 U2 and later.
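As an illustration of how a container host consumes persistent storage through a Docker volume plugin such as the one above, here is a minimal sketch using the Docker SDK for Python; the driver reference and driver options are assumptions for illustration only and should be taken from the plugin documentation on Docker Hub.
```python
# Minimal sketch: create a persistent volume through a named Docker volume
# plugin and mount it into a container, using the Docker SDK for Python.
# The driver reference and driver_opts keys are illustrative assumptions only;
# consult the DataCore plugin documentation for the real values.
import docker

client = docker.from_env()

volume = client.volumes.create(
    name="app-data",
    driver="datacoresoftware/datacore-sds-volume-plugin:latest",  # assumed plugin reference
    driver_opts={"size": "10GB"},                                 # hypothetical option
)

container = client.containers.run(
    "postgres:13",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={volume.name: {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print(container.id)
```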
|
Docker Volume Plugin (certified)
The NetApp 'Trident' plugin, previously known as 'NetApp Docker Volume Plugin (nDVP)', is a block volume plugin that connects containers to persistent storage served by NetApp HCI/SolidFire.
The 'NetApp Docker Volume Plugin (nDVP)' plugin is officially 'Docker Certified' and can be downloaded from the online Docker Store.
|
Docker Volume Plugin (certified) + VMware VIB
vSphere Docker Volume Service (vDVS) can be used with VMware vSAN, as well as VMFS datastores and NFS datastores served by VMware vSphere-compatible storage systems.
The vSphere Docker Volume Service (vDVS) installation has two parts:
1. Installation of the vSphere Installation Bundle (VIB) on ESXi.
2. Installation of Docker plugin on the virtualized hosts (VMs) where you plan to run containers with storage needs.
The vSphere Docker Volume Service (vDVS) is officially 'Docker Certified' and can be downloaded from the online Docker Store.
The StarWind HA VMFS (datastore) can be used for deploying containers just as on common VMFS datastore.
|
|
Container Host Compatibility
Details
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The DataCore native plug-ins are container-host centric and as such can be used across all SANsymphony-supported hypervisor platforms (VMware vSphere, Microsoft Hyper-V, KVM, XenServer, Oracle VM Server) as well as on bare metal platforms.
|
Virtualized container hosts on all supported hypervisors
Bare Metal container hosts
The NetApp Trident native plugins are container-host centric and as such can be used across all NetApp HCI-supported hypervisor platforms (VMware vSphere) as well as on bare metal platforms.
|
Virtualized container hosts on VMware vSphere hypervisor
Because the vSphere Docker Volume Service (vDVS) and vSphere Cloud Provider (VCP) are tied to the VMware vSphere platform, they cannot be used for bare metal hosts running containers.
|
|
Container Host OS Compatibility
Details
|
Linux
All Linux versions supported by Docker CE/EE 18.03 or higher can be used.
|
RHEL
CentOS
Ubuntu
Debian
NetApp Trident has been qualified for the mentioned Linux operating systems.
Container hosts running the Windows OS are not (yet) supported.
|
Linux
Windows 10 or 2016
Any Linux distribution running version 3.10+ of the Linux kernel can run Docker.
vSphere Storage for Docker can be installed on Windows Server 2016/Windows 10 VMs using the PowerShell installer.
|
|
Container Orch. Compatibility
Details
|
Kubernetes 1.13+
|
Kubernetes 1.9+
NetApp HCI v1.6 and up provide the ability to deploy managed Kubernetes clusters by leveraging NetApp Kubernetes Service (NKS).
The SolidFire driver does not support Docker Swarm.
|
Kubernetes 1.6.5+ on ESXi 6.0+
|
|
Container Orch. Interconnect
Details
|
Kubernetes CSI plugin
The Kubernetes CSI plugin provides several plugins for integrating storage into Kubernetes for containers to consume.
DataCore SANsymphony provides native industry standard block protocol storage presented over either iSCSI or Fibre Channel. YAML files can be used to configure Kubernetes for use with DataCore SANsymphony.
|
Kubernetes Volume Plugin
The NetApp Trident Volume Plugin for Kubernetes allows deploying Pods with Persistent Volumes. Both Static Provisioning and Dynamic Provisioning are supported.
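As a sketch of the dynamic provisioning workflow mentioned above, the following uses the official Kubernetes Python client to request a PersistentVolumeClaim against a Trident-backed StorageClass; the StorageClass name 'solidfire-gold' is a hypothetical example, not a name shipped by NetApp.
```python
# Minimal sketch: dynamically provision a persistent volume by creating a
# PersistentVolumeClaim against a Trident-backed StorageClass, using the
# official Kubernetes Python client. The StorageClass name below is a
# hypothetical example.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core_v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="solidfire-gold",  # hypothetical Trident StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```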
|
Kubernetes Volume Plugin
vSphere Cloud Provider (VCP) for Kubernetes allows Pods to use enterprise grade persistent storage. VCP supports every storage primitive exposed by Kubernetes:
- Volumes
- Persistent Volumes (PV)
- Persistent Volumes Claims (PVC)
- Storage Class
- Stateful Sets
Persistent volumes requested by stateful containerized applications can be provisioned on vSAN, VVol, VMFS or NFS datastores.
|
|
|
|
VDI |
|
|
VDI Compatibility
Details
|
VMware Horizon
Citrix XenDesktop
There is no validation check being performed by SANsymphony for VMware Horizon or Citrix XenDesktop VDI platforms. This means that all versions supported by these vendors are supported by DataCore.
|
VMware Horizon
Citrix XenDesktop
Support for VMware Horizon and Citrix XenDesktop is a function of the underlying hypervisor support; ESXi 6.x is supported by NetApp HCI. In addition to the broad support, there is an additional whitepaper on VMware Horizon 7 on NetApp HCI: https://www.netapp.com/us/media/tr-4630.pdf
|
VMware Horizon
Citrix XenDesktop
Although StarWind supports both VMware and Citrix VDI deployments on top of StarWind VSAN HA storage, there is currently no specific documentation available.
|
|
|
VMware: 110 virtual desktops/node
Citrix: 110 virtual desktops/node
DataCore has not published any recent VDI reference architecture whitepapers. The only VDI-related paper that includes a Login VSI benchmark dates back to December 2010, where a 2-node SANsymphony cluster was able to sustain a load of 220 VMs based on the Login VSI 2.0.1 benchmark.
|
VMware: up to 139 virtual desktops/node
Citrix: unknown
VMware Horizon 7.6: Load bearing number is based on Login VSI tests performed on small and large compute+storage appliances using 2vCPU Windows 10 1803 desktops and the Knowledge Worker profile. The referenced test result of 110 desktop VMs is based on a NetApp HCI H410C 8-node compute cluster and a NetApp HCI H500S 4-node storage cluster configuration.
For detailed information please view the corresponding whitepaper (NVA-1132-DEPLOY) dated May 2019.
VMware Horizon 7.2: Load bearing number is based on Login VSI tests performed on small and large compute+storage appliances using 2vCPU Windows 10 desktops and the Knowledge Worker profile. The referenced test result of 139 desktop VMs is based on a NetApp HCI H700E Spectre and Meltdown postpatch configuration.
For detailed information please view the corresponding whitepaper (tr-4630) dated July 2018.
|
VMware: up to 260 virtual desktops/node
Citrix: up to 220 virtual desktops/node
The load bearing numbers are based on approximate calculations of VDI infrastructure that StarWind Virtual SAN can support.
There are no Login VSI benchmark numbers on record for StarWind Virtual SAN as of yet.
|
|
|
Server Support
|
|
|
|
|
|
|
Server/Node |
|
|
Hardware Vendor Choice
Details
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
Super Micro
Quanta Cloud Technology
NetApp HCI uses Super Micro Twin Squared (2U-4node) chassis for H300E/H500E/H700E/H410C compute nodes and H410S-0, H410S-1 and H410S-2 storage nodes.
NetApp HCI uses QCT QuantaGrid 1U servers for H610S storage nodes, and 2U servers for H610C compute nodes.
All types of nodes can be mixed and matched in a cluster.
|
Many
StarWind Virtual SAN is a hardware-agnostic solution and does not have a strict HCL or supported hardware platforms.
|
|
|
Many
SANsymphony runs on all server hardware that supports x86 - 64bit.
DataCore provides minimum requirements for hardware resources.
|
6 models (3x compute + 3x storage) + several sub-models
There are 3 compute models to choose from:
H410C (Small/Medium/Large)
H610C (Graphic) 2U
H615C (Graphic) 1U
There are 3 storage models to choose from:
H410S (General Purpose) 2U/4nodes
H610S (High Perf Large) 1U
H610S-2F (FIPS 140-2 Drive Encryption) 1U
|
Many
StarWind Virtual SAN is a hardware-agnostic solution and does not have a strict HCL or supported hardware platforms.
|
|
|
1, 2 or 4 nodes per chassis
Note: Because SANsymphony is mostly hardware agnostic, customers can opt for multiple server densities.
Note: In most cases 1U or 2U building blocks are used.
Also Super Micro offers 2U chassis that can house 4 compute nodes.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
1 node per chassis
4 nodes per chassis
NetApp HCI 410C compute nodes and 410S storage nodes are delivered in 2U/4-node building blocks. Compute nodes and storage nodes can be mixed within the same chassis.
NetApp HCI H615C compute nodes and H610S storage nodes are delivered in 1U building blocks.
NetApp HCI H610C compute nodes are delivered in 2U building blocks.
Denser nodes provide a smaller datacenter footprint where space is a concern. However, keep in mind that the footprint for other datacenter resources such as power and heat and cooling is not necessarily reduced in the same way and that the concentration of nodes can potentially pose other challenges.
|
1, 2 or 4 nodes per chassis
StarWind Virtual SAN is a hardware-agnostic solution and does not have strict HCL or supported hardware platforms.
|
|
|
Yes
DataCore does not explicitly recommend using different hardware platforms, but as long as the hardware specs are comparable, there is no reason to insist on one hardware vendor or another. This is proven in practice: some customers run their production DataCore environments on comparable servers from different vendors.
|
Yes
Compute and storage nodes can be mixed and matched as long as best practices are adhered to.
NetApp HCI allows any storage node type to be mixed within a single cluster, however one important rule is mandatory: no single storage node can represent more than one-third of the total cluster capacity, for high-availability and self-healing purposes.
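A minimal sketch of this one-third capacity rule, using made-up node capacities (in TB) for illustration:
```python
# Minimal sketch: check that no single storage node exceeds one-third of the
# total cluster capacity (the mixing rule described above). Capacities are
# made-up example values in TB.
def violates_one_third_rule(node_capacities_tb):
    total = sum(node_capacities_tb)
    return any(node > total / 3 for node in node_capacities_tb)

print(violates_one_third_rule([5.76, 5.76, 11.52, 11.52]))  # False: largest node <= 1/3 of total
print(violates_one_third_rule([5.76, 5.76, 23.04]))         # True: largest node > 1/3 of total
```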
|
Yes
Although StarWind does not recommend asymmetric configurations (mixing nodes with different CPUs, storage types and networks in the same replica) for Virtual SAN environments, different servers can be mixed in a single solution if the customer understands the possible performance implications (any solution that provides active-active storage replication performs at the speed of the slowest component).
|
|
|
|
Components |
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible (up to 5 options)
NetApp HCI Compute nodes:
H410C (Small/Medium/Large): 2x Intel Xeon Silver 4110, Gold 5120, Gold 5122, Gold 6138
H610C (Graphic): 2x Intel Xeon Gold 6130
H615C (Graphic): 2x Intel Xeon Silver 4214, Gold 5222, Gold 6242, Gold 6252, Gold 6240Y
H615C compute nodes ship with 2nd generation Intel Xeon Scalable (Cascade Lake) processors.
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to:
https://www.starwindsoftware.com/system-requirements
|
|
|
Flexible
|
Fixed: H610C
Flexible: H410C/H615C
NetApp HCI Compute nodes:
H410C (Small/Medium/Large): 384GB-1TB
H610C (Graphic): 512GB
H615C (Graphic): 384GB-1.5TB
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to:
https://www.starwindsoftware.com/system-requirements
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Flexible (3 options)
NetApp HCI Storage nodes:
H410S (General Purpose): 6x 480GB/960GB/1.92TB NVMe
H610S (Capacity Optimized): 12x 960GB/1.92TB/3.84TB NVMe
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to:
https://www.starwindsoftware.com/system-requirements
|
|
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to: https://www.datacore.com/products/sansymphony/tech/compatibility/
|
Fixed (10 or 25Gbps)
NetApp HCI compute nodes come in various network configurations: H410C nodes use two RJ45 interfaces (1/10GbE) for management, two SFP+/SFP28 (10/25GbE) interfaces for storage, and two SFP+/SFP28 interfaces for vMotion and virtual machines; alternatively, all network functions can be converged onto two SFP+/SFP28 interfaces. H610C and H615C compute nodes come with two SFP+/SFP28 interfaces that are used for all traffic classes in NetApp HCI. All compute nodes support an additional RJ45 interface for out-of-band management.
NetApp HCI storage nodes (H410S, H610S) come with two RJ45 interfaces (1/10GbE) for management and two SFP+/SFP28 (10/25GbE) interfaces for storage.
|
Flexible
Minimum hardware requirements need to be fulfilled.
For more information please go to:
https://www.starwindsoftware.com/system-requirements
|
|
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
DataCore SANsymphony supports the hardware that is on the hypervisor HCL.
VMware vSphere 6.5U1 officially supports several GPUs for VMware Horizon 7 environments:
NVIDIA Tesla M6 / M10 / M60
NVIDIA Tesla P4 / P6 / P40 / P100
AMD FirePro S7100X / S7150 / S7150X2
Intel Iris Pro Graphics P580
More information on GPU support can be found in the online VMware Compatibility Guide.
Windows 2016 supports two graphics virtualization technologies available with Hyper-V to leverage GPU hardware:
- Discrete Device Assignment
- RemoteFX vGPU
More information is provided here: https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-graphics-virtualization
The NVIDIA website contains a listing of GRID certified servers and the maximum number of GPUs supported inside a single server.
Server hardware vendor websites also contain more detailed information on the GPU brands and models supported.
|
H610C/H615C: NVIDIA Tesla
The following NVIDIA GPU card configurations can be ordered along with the H610C 2U compute model:
1x-2x NVIDIA Tesla M10 (VDI workloads)
The following NVIDIA GPU card configurations can be ordered along with the H615C 1U compute model:
1x-3x NVIDIA Tesla T4 (ML/AI workloads)
GPUs come factory integrated with the compute nodes, and cannot be added at a later point in time. However, NetApp HCI supports an open storage model in which it is possible to connect third-party servers with GPUs to the storage cluster in NetApp HCI.
|
NVIDIA Tesla
AMD FirePro
Intel Iris Pro
StarWind Virtual SAN supports the hardware that is on the hypervisor HCL.
|
|
|
|
Scaling |
|
|
|
CPU
Memory
Storage
GPU
The SANsymphony platform allows for expanding of all server hardware resources.
|
N/A
Because of the fixed hardware composition of the compute and storage nodes and taking into account the small form factor, none of the allocated hardware resources (CPU, Memory, Network, Storage) can be expanded.
|
CPU
Memory
Storage
GPU
StarWind Virtual SAN allows expansion of all server hardware resources.
|
|
|
Storage+Compute
Compute-only
Storage-only
Storage+Compute: In a single-layer deployment existing SANsymphony clusters can be expanded by adding additional nodes running SANsymphony, which adds additional compute and storage resources to the shared pool. In a dual-layer deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded simultaneously.
Compute-only: Because SANsymphony leverages virtual block volumes (LUNs), storage can be presented to hypervisor hosts not participating in the SANsymphony cluster. This is also beneficial to migrations, since it allows for online storage vMotions between SANsymphony and non-SANsymphony storage platforms.
Storage-only: In a dual-layer or mixed deployment both the storage-only SANsymphony clusters and the compute clusters can be expanded independent from each other.
|
Storage+Compute
Compute-only
Storage-only
NetApp HCIs hardware architecture fully separates compute from storage. This effectively means the platform has the capability to scale compute and storage as needed.
Storage+Compute: Existing NetApp HCI clusters can be expanded by adding additional Compute and Storage nodes, which adds additional compute and storage resources to the shared pool.
Compute-only: Existing NetApp HCI clusters can be expanded by adding additional Compute nodes, which adds additional CPU and Memory resources for VMs to the shared pool.
Storage-only: Existing NetApp HCI clusters can be expanded by adding additional Storage nodes, which adds additional storage performance and capacity resources to the shared pool.
|
Storage+Compute
Compute-only
Storage-only
In the case of Storage-only scale-out, StarWind Virtual SAN Storage Nodes are based on bare metal Windows Server with no hypervisor software (e.g. VMware ESXi) installed. The StarWind software runs as a Windows-native application and provides storage to the hypervisor hosts.
|
|
|
1-64 nodes in 1-node increments
There is a maximum of 64 nodes within a single cluster. Multiple clusters can be managed through a single SANsymphony management instance.
|
2-64 compute nodes and 2-40 storage nodes in 1-node increments
Compute nodes: The hypervisor cluster scale-out limits still apply eg. 64 hosts for VMware vSphere in a single cluster.
Storage nodes: For specific use-cases a Request for Product Qualification (RPQ) process can be initiated to authorize more than 40 storage nodes within a single cluster.
|
2-64 nodes in 1-node increments
The 64-node limit applies to both VMware vSphere and Microsoft Hyper-V environments.
|
|
Small-scale (ROBO)
Details
|
2 Node minimum
DataCore prevents split-brain scenarios by always having an active-active configuration of SANsymphony with a primary and an alternate path.
In the case SANsymphony servers are fully operating but do not see each other, the application host will still be able to read and write data via the primary path (no switch to secondary). The mirroring is interrupted because of the lost connection and the administrator is informed accordingly. All writes are stored on the locally available storage (primary path) and all changes are tracked. As soon as the connection between the SANsymphony servers is restored, the mirror will recover automatically based on these tracked changes.
Dual updates due to misconfiguration are detected automatically and data corruption is prevented by freezing the vDisk and waiting for user input to solve the conflict. Conflict resolution options include declaring one side of the mirror to be the new active data set and discarding all tracked changes on the other side, or splitting the mirror and manually merging the two data sets into a third vDisk.
|
4 Node minimum (2 storage + 2 compute)
Minimum configuration is 1 chassis with 4 nodes, consisting of 2 storage nodes and 2 compute nodes.
When leveraging a 2-node or 3-node storage cluster, 2 NetApp Witness nodes are deployed as virtual machines for arbitration purposes.
|
1 Node minimum
StarWind Virtual SAN can be used as standalone iSCSI target providing storage from one node (no HA). For HA two nodes are required. StarWind Virtual SAN can be further scaled by adding more storage or new nodes into the storage cluster.
|
|
|
Storage Support
|
|
|
|
|
|
|
General |
|
|
|
Block Storage Pool
SANsymphony only serves block devices to the supported OS platforms.
|
Block Pool
Element OS aggregates all storage resources into a single pool of storage from which block storage volumes can be provisioned.
|
Block Storage Pool
StarWind VSAN only serves block devices as storage volumes to the supported OS platforms.
The underlying storage is first aggregated with hardware or software RAID. Then, the storage is replicated by StarWind at the block level across 2 or 3 nodes and further provided as a single pool (single StarWind virtual device) or as multiple pools (multiple StarWind devices).
|
|
|
Partial
DataCore's core approach is to provide storage resources to the applications without having to worry about data locality. But if data locality is explicitly requested, the solution can partially be designed that way by configuring the first instance of all data to be stored on locally available storage (primary path) and the mirrored instance to be stored on the alternate path (secondary path). Furthermore every hypervisor host can have a local preferred path, indicated by the ALUA path preference.
By default data does not automatically follow the VM when the VM is moved to another node. However, virtual disks can be relocated on the fly to another DataCore node without losing I/O access, although this relocation takes some time due to the data copy operations required. This kind of relocation is usually done manually, but such tasks can be automated and integrated with VM orchestration using PowerShell, for example.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
None
NetApp HCI is based on a shared nothing storage architecture. NetApp HCI enables every drive in every storage node throughout the cluster to contribute to the storage performance and capacity of every volume presented by the Element software storage layer. When a VM is moved to another compute node, data remains in place and does not follow the VM because data is stored and available across all nodes residing in the cluster.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
Partial
StarWind's core approach is to keep data close (local) to the VM in order to avoid slow data transfers through the network and achieve the highest performance the setup can provide. The solution is designed to store the first instance of all data on locally available storage (primary path) and the mirrored instance on the alternate path (secondary path). Furthermore every hypervisor host can have a local preferred path, indicated by the ALUA path preference. Data does not automatically follow the VM when the VM is moved to another node.
Whether data locality is a good or a bad thing has turned into a philosophical debate. It's true that data locality can prevent a lot of network traffic between nodes, because the data is physically located at the same node where the VM resides. However, in dynamic environments where VMs move to different hosts on a frequent basis, data locality in most cases requires a lot of data to be copied between nodes in order to maintain the physical VM-data relationship. The SDS/HCI vendors today that choose not to use data locality advocate that the additional network latency is negligible.
|
|
|
Direct-attached (Raw)
Direct-attached (VoV)
SAN or NAS
VoV = Volume-on-Volume; The Virtual Storage Controller uses virtual disks provided by the hypervisor platform.
|
Direct-Attached (Raw)
Direct-attached SSD (NVMe & SATA): The Element software takes ownership of the unformatted physical disks and creates a pool of block storage that is offered to clients via iSCSI.
|
Direct-attached (Raw)
SAN or NAS
Direct-attached: StarWind can take control of formatted disks (NTFS). Also, StarWind software can present RAW unformatted disks over SCSI Pass Through Interface that enables remote initiator clients to use any type of a hard drive (PATA/SATA/RAID).
External SAN/NAS Storage: SAN/NAS can be connected over Ethernet connectivity and can be used for StarWind Virtual SAN as soon as they are provided as block storage (iSCSI). In case it is required to replicate data between NAS systems, there should be 2 NAS systems connected to nodes that will be used for StarWind Virtual SAN.
|
|
|
Magnetic-only
All-Flash
3D XPoint
Hybrid (3D Xpoint and/or Flash and/or Magnetic)
NEW
|
All-Flash (SSD-only)
NetApp has exclusively released all-flash models for the NetApp HCI platform. These models facilitate a variety of workloads including those that demand ultra-high performance.
|
Magnetic-only
Hybrid (Flash+Magnetic)
All-Flash
|
|
Hypervisor OS Layer
Details
|
SD, USB, DOM, SSD/HDD
|
SSD
Compute Node: 2x 240GB M.2 6Gb/s SATA MLC SSDs are used for system boot. VMware ESXi boots from these drives.
|
SD, USB, DOM, SSD/HDD
|
|
|
|
Memory |
|
|
|
DRAM
|
NVRAM (PCIe card) or NVDIMM
DRAM
PCIe card-based NVRAM is utilized in H410S storage nodes.
NVDIMM is utilized in H610S storage nodes.
|
DRAM
DRAM can be used for caching in a write-back or write-through mode. Additionally, it can be used for creating StarWind RAM disks.
For further information, please visit:
https://www.starwindsoftware.com/high-performance-ram-disk-emulator
|
|
|
Read/Write Cache
DataCore SANsymphony accelerates reads and writes by leveraging the powerful processors and large DRAM memory inside current generation x86-64bit servers on which it runs. Up to 8 Terabytes of cache memory may be configured on each DataCore node, enabling it to perform at solid state disk speeds without the expense. SANsymphony uses a common cache pool to store reads and writes in.
SANsymphony read caching essentially recognizes I/O patterns to anticipate which blocks to read next into RAM from the physical back-end disks. That way the next request can be served from memory.
When hosts write to a virtual disk, the data first goes into DRAM memory and is later destaged to disk, often grouped with other writes to minimize delays when storing the data to the persistent disk layer. Written data stays in cache for re-reads.
The cache is cleaned on a first-in-first-out (FIFO) basis. Segment overwrites are performed on the oldest data first for both read and write cache segment requests.
SANsymphony prevents the write cache data from flooding the entire cache. In case the write data amount runs above a certain percentage watermark of the entire cache amount, then the write cache will temporarily be switched to write-through mode in order to regain balance. This is performed fully automatically and is self-adjusting, per virtual disk as well as on a global level.
|
NVRAM/NVDIMM: Write Buffer
DRAM: Metadata
PCIe card-based NVRAM is utilized in H410S storage nodes.
NVDIMM is utilized in H610S storage nodes.
|
Read/Write Cache
StarWind Virtual SAN accelerates reads and writes by leveraging conventional RAM.
The memory cache is filled with data mainly during write operations. During read operations, data enters the cache only if the cache contains empty memory blocks or cache lines that were allocated for these entries earlier and have not yet been fully exhausted.
StarWind VSAN supports two Memory (L1 Cache) Policies:
1. Write-Back, caches writes in DRAM only and acknowledges back to the originator when complete in DRAM.
2. Write-Through, caches writes in both DRAM and underlying storage, and acknowledges back to the originator when complete in the underlying storage.
This means that exclusively caching writes in conventional memory is optional. When the Write-Through policy is used, DRAM is used primarily for caching reads.
To change the cache size, the StarWind service should first be stopped on one node, the cache size changed, and the service started again; the same process is then repeated on the partner node. This keeps VMs up and running during cache changes.
In the majority of use cases, there is no need to assign L1 cache for all-flash storage arrays.
Note: In case of using the Write-Back policy for DRAM, UPS units have to be installed to ensure the correct shutdown of StarWind Virtual SAN nodes. If a power outage occurs, this will prevent the loss of cached data. The UPS capacity must cover the time required for flushing the cached data to the underlying storage.
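To make the difference between the two memory cache policies concrete, below is a minimal Python sketch of the acknowledgment path of each mode. It is purely illustrative; the names and the dictionary-based stores are assumptions for the example and do not represent StarWind's actual implementation.
```python
# Illustrative sketch of Write-Back vs Write-Through L1 caching (not StarWind code).
# Assumption: dram_cache and backing_store are simple dict stand-ins for DRAM
# and the underlying persistent storage.

dram_cache = {}
backing_store = {}

def write_back(block_id, data):
    """Write-Back: acknowledge once the block is in DRAM; destage later."""
    dram_cache[block_id] = data
    return "ACK"  # data is not yet persistent at this point

def write_through(block_id, data):
    """Write-Through: acknowledge only after the underlying storage has the block."""
    dram_cache[block_id] = data      # cache still serves subsequent reads
    backing_store[block_id] = data   # synchronous write to persistent storage
    return "ACK"  # data is persistent at this point

def flush():
    """Destage any blocks still only in DRAM, e.g. during a UPS-triggered shutdown."""
    backing_store.update(dram_cache)
```
The sketch also shows why UPS protection matters for the Write-Back policy: any block that exists only in dram_cache when power is lost is gone unless flush() has completed first.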
|
|
|
Up to 8 TB
The actual size that can be configured depends on the server hardware that is used.
|
NVRAM/NVDIMM: Non-configurable
DRAM: Unknown
NVRAM (PCIe): 8GB for H410S storage nodes.
NVDIMM: 32GB for H610S storage nodes.
|
Configurable
The size of L1 cache should be equal to the amount of the average working data set.
There are no default or maximum values for RAM cache as such. The maximum size that can be assigned to StarWind RAM cache is limited by the RAM available to the system; you also need to make sure that other running applications have enough RAM for their operations.
Additionally, the total amount of L1 cache assigned influences the time required for system shutdown, so overprovisioning the L1 cache can cause StarWind service interruption and the loss of cached data. The minimum size assigned for RAM cache in either write-back or write-through mode is 1MB. However, StarWind recommends assigning a StarWind RAM cache size that matches the size of the working data set.
|
|
|
|
Flash |
|
|
|
SSD, PCIe, UltraDIMM, NVMe
|
SSD, NVMe
The Element software uses a log-structured approach in writing to disk. This optimizes utilization and performance of SSDs, significantly improves the lifespan of SSDs, and, most importantly, enables the use of less expensive consumer-grade MLC SSDs.
The H610S Storage Node introduces support for NVMe U.2 SSDs.
MLC = Multi-Level Cell
|
SSD, NVMe
|
|
|
Persistent Storage
SANsymphony supports new TRIM / UNMAP capabilities for solid-state drives (SSD) in order to reduce wear on those devices and optimize performance.
|
All-Flash: Metadata + Persistent Storage Tier
Write buffer is provided by the PCIe card (NVRAM).
Read cache is not necessary in All-flash configurations.
|
Read/Write Cache (hybrid)
Persistent storage (all-flash)
StarWind Virtual SAN supports a single Flash (L2 Cache) Policy:
1. Write-Through, caches writes in both Flash (SSD/NVMe) and the underlying storage, and acknowledges back to the originator when complete in the underlying storage.
With the write-through policy, new blocks are written to both the cache layer and the underlying storage synchronously. However, in this mode, only the blocks that are read most frequently are kept in the cache layer, accelerating read operations. This is the only mode available for StarWind L2 cache.
In the case of Write-Through, cached data does not need to be offloaded to the backing store when a device is removed or the service is stopped.
In the majority of use cases, if L1 cache is already assigned, there is no need to configure L2 cache.
A StarWind Virtual SAN solution with L2 cache configured should have more RAM available, since L2 cache needs 6.5 GB of RAM per 1 TB of L2 cache to store metadata.
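As a simple aid, the calculation below applies the stated ratio of 6.5 GB of RAM per 1 TB of L2 cache; the chosen cache size is an arbitrary example.
```python
# Illustrative only: extra RAM needed to hold L2 cache metadata,
# based on the stated 6.5 GB RAM per 1 TB of L2 cache.
l2_cache_tb = 2                          # assumed L2 cache size in TB
metadata_ram_gb = 6.5 * l2_cache_tb      # -> 13.0 GB of additional RAM
print(f"Reserve ~{metadata_ram_gb} GB of RAM for L2 cache metadata")
```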
|
|
|
No limit, up to 1 PB per device
The definition of a device here is a raw flash device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
All-Flash: 6 SSDs/12 NVMe per storage node
All-Flash SSD configurations:
H410S (General Purpose): 6x 480GB/960GB/1.92TB SSD
H610S (Capacity Optimized): 12x 960GB/1.92TB/3.84TB NVMe
With H410S/H610S nodes there is a choice between Encrypting and Non-Encrypting drives.
|
No limitations
The definition of a device here is a raw flash device that is presented to Virtual SAN as either a SCSI LUN or a SCSI disk.
|
|
|
|
Magnetic |
|
|
|
SAS or SATA
SAS = 10k or 15k RPM = Medium-capacity medium-speed drives
SATA = NL-SAS = 7.2k RPM = High-capacity low-speed drives
In this case SATA = NL-SAS = MDL SAS
|
N/A
|
SAS or SATA
|
|
|
Persistent Storage
|
N/A
|
Persistent Storage
|
|
Magnetic Capacity
Details
|
No limit, up to 1 PB (per device)
The definition of a device here is a raw magnetic device that is presented to SANsymphony as either a SCSI LUN or a SCSI disk.
|
N/A
|
No limitations
|
|
|
Data Availability
|
|
|
|
|
|
|
Reads/Writes |
|
|
Persistent Write Buffer
Details
|
DRAM (mirrored)
If caching is turned on (default=on), any write will only be acknowledged back to the host after it has been successfully stored in DRAM memory of two separate physical SANsymphony nodes. Based on de-staging algorithms each of the nodes eventually copies the written data that is kept in DRAM to the persistent disk layer. Because DRAM outperforms both flash and spinning disks, the applications experience much faster write behavior.
Per default, the limit of dirty-write-data allowed per Virtual Disk is 128MB. This limit could be adjusted, but there has never been a reason to do so in the real world. Individual Virtual Disks can be configured to act in write-through mode, which means that the dirty-write-data limit is set to 0MB so effectively the data is directly written to the persistent disk layer.
DataCore recommends that all servers running SANsymphony software are UPS protected to avoid data loss through unplanned power outages. Whenever a power loss is detected, the UPS automatically signals this to the SANsymphony node and write behavior is switched from write-back to write-through mode for all Virtual Disks. As soon as the UPS signals that power has been restored, the write behavior is switched to write-back again.
|
NVRAM (mirrored)
NVRAM serves as a very performant storage medium for a read/write mirrored journal that is split between two NetApp HCI storage nodes.
When an application sends a write request, it is mirrored between the NVRAM on two NetApp HCI storage nodes for high availability and redundancy. Once both copies are stored, the primary node acknowledges the write completion to the host. Once the write is acknowledged, the system will copy the data from NVRAM to the persistent storage layer (SSD).
|
DRAM (mirrored)
Flash Layer (SSD, NVMe)
|
|
Disk Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1) + opt. Hardware RAID
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
1 Replica (2N)
NEW
NetApp HCI Double Helix data protection is a RAID-less data protection solution designed to maintain both data availability and performance regardless of failure condition. Helix data protection is a distributed replication algorithm that spreads at least two redundant copies of data across all drives in the system. This approach allows the system to absorb multiple failures across all levels of the storage solution while maintaining data redundancy and QoS settings.
If a drive should go down, NetApp HCI automatically rebuilds redundant data across all remaining drives in the cluster in minutes to maintain high availability with minimal impact to performance due to the platform's architecture. The more nodes in the cluster, the faster the activity occurs and the lower the overall impact. No matter where the failure occurs (drive, node, backplane, network or software failure), the recovery process is the same.
Element OS 12.2 introduces periodic health checks on SolidFire appliance drives using SMART health data from the drives. A drive that fails the SMART health check might be close to failure. If a drive fails the SMART health check, a new critical severity cluster fault appears.
|
1-2 Replicas (2N-3N)
+ Hardware RAID (1, 5, 10)
StarWind Virtual SAN replicates the storage to protect data within the cluster. In addition, hardware or software RAID is implemented to enhance the robustness of individual nodes.
Replicas+Hardware RAID: Before any write is acknowledged to the host, it is synchronously replicated to one or two designated partner nodes. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. With 3N one instance of data that is written is stored on the local node, and two other instances of that data are stored on designated partner nodes in the cluster. When a physical disk fails, hardware RAID maintains data availability.
|
|
Node Failure Protection
Details
|
2-way and 3-way Mirroring (RAID-1)
DataCore SANsymphony software primarily uses mirroring techniques (RAID-1) to protect data within the cluster. This effectively means the SANsymphony storage platform can withstand a failure of any two disks or any two nodes within the storage cluster. Optionally, hardware RAID can be implemented to enhance the robustness of individual nodes.
SANsymphony supports Dynamic Data Resilience. Data redundancy (none, 2-way or 3-way) can be added or removed on-the-fly at the vdisk level.
A 2-way mirror acts as active-active, where both copies are accessible to the host and written to. Updating of the mirror is synchronous and bi-directional.
A 3-way mirror acts as active-active-backup, where the active copies are accessible to the host and written to, and the backup copy is inaccessible to the host (paths not presented) and written to. Updating of the mirror's active copies is synchronous and bi-directional. Updating of the mirror's backup copy is synchronous and unidirectional (receive only).
In a 3-way mirror the backup copy should be independent of existing storage resources that are used for the active copies. Because of the synchronous updating all mirror copies should be equal in storage performance.
When in a 3-way mirror an active copy fails, the backup copy is promoted to active state. When the failed mirror copy is repaired, it automatically assumes a backup state. Roles can be changed manually on-the-fly by the end-user.
DataCore SANsymphony 10.0 PSP9 U1 introduced System Managed Mirroring (SMM). A multi-copy virtual disk is created from a storage source (disk pool or pass-through disk) from two or three DataCore Servers in the same server group. Data is synchronously mirrored between the servers to maintain redundancy and high availability of the data. System Managed Mirroring (SMM) addresses the complexity of managing multiple mirror paths for numerous virtual disks. This feature also addresses the 256 LUN limitation by allowing thousands of LUNs to be handled per network adapter. The software transports data in a round robin mode through available mirror ports to maximize throughput and can dynamically reroute mirror traffic in the event of lost ports or lost connections. Mirror paths are automatically and silently managed by the software.
The System Managed Mirroring (SMM) feature is disabled by default. This feature may be enabled or disabled for the server group.
SANsymphony 10.0 PSP10 adds seamless transition when converting Mirrored Virtual Disks (MVD) to System Managed Mirroring (SMM). Seamless transition converts and replaces mirror paths on virtual disks in a manner in which there are no momentary breaks in mirror paths.
|
1 Replica (2N)
NetApp HCI Double Helix data protection is a RAID-less data protection solution designed to maintain both data availability and performance regardless of failure condition. Helix data protection is a distributed replication algorithm that spreads at least two redundant copies of data across all drives in the system. This approach allows the system to absorb multiple failures across all levels of the storage solution while maintaining data redundancy and QoS settings.
If a storage node should go down, NetApp HCI automatically rebuilds redundant data across all remaining nodes in minutes to maintain high availability with minimal impact to performance due to the platform's architecture. The more nodes in the cluster, the faster the activity occurs and the lower the overall impact. No matter where the failure occurs (drive, node, backplane, network or software failure), the recovery process is the same.
|
1-2 Replicas (2N-3N)
Replicas: Before any write is acknowledged to the host, it is synchronously replicated to one or two designated partner nodes. This means that with 2N one instance of data that is written is stored on the local node and another instance of that data is stored on the designated partner node in the cluster. With 3N one instance of data that is written is stored on the local node, and two other instances of that data are stored on designated partner nodes in the cluster.
|
|
Block Failure Protection
Details
|
Not relevant (usually 1-node appliances)
Manual configuration (optional)
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk. However, block failure protection is in most cases irrelevant as 1-node appliances are used as building blocks.
SANsymphony works on an N+1 redundancy design allowing any node to acquire any other node as a redundancy peer per virtual device. Peers are replaceable/interchangeable on a per Virtual Disk level.
|
Protection Domains
Block failure protection can be achieved by assigning chassis to different Protection Domains (aka Zones).
Protection Domains: Protection domains at the chassis level allow scaling of NetApp HCI clusters in such a way that the failure of an entire chassis can be sustained even if there is more than one storage node in the chassis. The minimum allowed configuration is 3 chassis with 2 storage nodes each. The system automatically lays out data correctly when the configuration is compatible with Protection Domains.
Beginning with Element OS 12.0, protection domain layouts can be customized to cover zones of storage nodes within a rack, or between multiple racks. This enables more flexibility in data resiliency and improves storage availability, especially in large scale installations.
|
Not relevant (usually 1-node appliances)
In a 3-node cluster, StarWind can provide 3-way mirroring, allowing the cluster to withstand a failure of two nodes without losing data accessibility. In a cluster of more than 3 nodes, a grid architecture can be configured to withstand the failure of two nodes.
|
|
Rack Failure Protection
Details
|
Manual configuration
Manual designation per Virtual Disk is required to accomplish this. The end-user is able to define which node is paired to which node for that particular Virtual Disk.
|
Protection Domains
Rack failure protection can be achieved by assigning chassis to different Protection Domains (aka Zones).
Protection Domains: Protection domains at the chassis level allow scaling of NetApp HCI clusters in such a way that the failure of an entire chassis can be sustained even if there is more than one storage node in the chassis. The minimum allowed configuration is 3 chassis with 2 storage nodes each. The system automatically lays out data correctly when the configuration is compatible with Protection Domains.
Beginning with Element OS 12.0, protection domain layouts can be customized to cover zones of storage nodes within a rack, or between multiple racks. This enables more flexibility in data resiliency and improves storage availability, especially in large scale installations.
|
N/A
|
|
Protection Capacity Overhead
Details
|
Mirroring (2N) (primary): 100%
Mirroring (3N) (primary): 200%
+ Hardware RAID5/6 overhead (optional)
|
Replica (2N): 100%
|
Replica (2N) + RAID1/10: 200%
Replica (2N) + RAID5: 125-133%
Replica (3N) + RAID1/10: 300%
Replica (3N) + RAID5: 225-233%
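The listed figures add the replication overhead to the local RAID overhead. The sketch below reproduces the 2N + RAID5 range under the assumption of a 4- or 5-disk RAID5 set; the group sizes are illustrative, not a StarWind requirement.
```python
# Illustrative arithmetic for the 'Replica (2N) + RAID5: 125-133%' figure.
# Assumption: RAID5 groups of 4 or 5 disks (one parity disk per group).
replica_overhead = 1.00                  # second full copy on the partner node (100%)
for disks in (5, 4):
    raid5_overhead = 1 / (disks - 1)     # parity overhead relative to usable capacity
    total = (replica_overhead + raid5_overhead) * 100
    print(f"{disks}-disk RAID5 group: ~{total:.0f}% protection capacity overhead")
# -> ~125% and ~133%, matching the listed range
```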
|
|
Data Corruption Detection
Details
|
N/A (hardware dependent)
SANsymphony fully relies on the hardware layer to protect data integrity. This means that the SANsymphony software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
Read integrity checks
During writes, checksums of the data are created and stored. When the data is read again or accessed during system operations, the checksum is validated. If an issue is detected, the system automatically accesses the secondary copy and repairs the invalid copy.
|
N/A (hardware dependent)
StarWind Virtual SAN fully relies on the hardware layer to protect data integrity. This means that the StarWind software itself does not perform Read integrity checks and/or Disk scrubbing to verify and maintain data integrity.
|
|
|
|
Points-in-Time |
|
|
|
Built-in (native)
|
Built-in (native)
|
N/A
StarWind Virtual SAN does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
|
|
Local + Remote
SANsymphony snapshots are always created on one side only. However, SANsymphony allows you to create a snapshot for the data on each side by configuring two snapshot schedules, one for the local volume and one for the remote volume. Both snapshot entities are independent and can be deleted independently allowing different retention times if needed.
The snapshot feature can also be paired with asynchronous replication, providing a long-distance remote copy at a third site with its own retention time.
|
Local + Remote
NetApp HCI snapshots can be created and then replicated in real-time to a remote NetApp HCI cluster over an IP network when paired.
Up to 32 local Snapshot copies can be created for every individual volume.
|
N/A
StarWind Virtual SAN does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
|
Snapshot Frequency
Details
|
1 Minute
The snapshot lifecycle can be automatically configured using the integrated Automation Scheduler.
|
5 minutes
If a NetApp HCI snapshot is scheduled to run at a time period that is not divisible by 5 minutes, the snapshot will run at the next time period that is divisible by 5 minutes. You cannot schedule a snapshot to run at intervals of less than 5 minutes.
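A minimal sketch of the described rounding behavior (illustrative only, not Element OS code):
```python
import math

def next_snapshot_minute(scheduled_minute: int) -> int:
    """Round a scheduled minute up to the next 5-minute boundary."""
    return math.ceil(scheduled_minute / 5) * 5

print(next_snapshot_minute(12))  # -> 15 (runs at the next divisible-by-5 period)
print(next_snapshot_minute(20))  # -> 20 (already on a 5-minute boundary)
```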
|
N/A
StarWind Virtual SAN does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
|
Snapshot Granularity
Details
|
Per VM (Vvols) or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is certified for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
Per VM (Vvols) or Volume
Although NetApp HCI uses block-storage, the platform is capable of attaining per VM-granularity by leveraging VMware Virtual Volumes (Vvols).
NetApp HCI has VVols certification for VMware ESXi 6.5 U1 up to ESXi 7.0U1. NetApp VASA provider for SolidFire Element OS 2.10.1 is compatible with NetApp HCI's H410S and H610S storage nodes.
|
N/A
StarWind Virtual SAN does not have native snapshot capabilities. Hypervisor-native snapshot capabilities can be leveraged instead.
|
|
|
Built-in (native)
DataCore SANsymphony incorporates Continuous Data Protection (CDP) and leverages this as an advanced backup mechanism. As the term implies, CDP continuously logs and timestamps I/Os to designated virtual disks, allowing end-users to restore the environment to an arbitrary point-in-time within that log.
Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage. Several of these markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent or crash-consistent restore point from just before the incident occurred.
|
Built-in (native)
NetApp HCI volume backups can be created in two data formats:
- Native: a compressed format readable only by NetApp HCI storage systems.
- Uncompressed: an uncompressed format compatible with other systems.
|
External
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
Local or Remote
All available storage within the SANsymphony group can be configured as targets for back-up jobs.
|
Locally
To remote sites
To remote cloud object stores (Amazon S3, OpenStack Swift)
Volumes can be backed up and restored to other NetApp HCI clusters, as well as secondary object stores that are compatible with Amazon S3 or OpenStack Swift.
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
Continuously
As Continuous Data Protection (CDP) is being leveraged, I/Os are logged and timestamped in a continuous fashion, so end-users can restore to virtually any point in time.
|
5 minutes
NetApp HCI backups are based on volume snapshots. If a NetApp HCI snapshot is scheduled to run at a time period that is not divisible by 5 minutes, the snapshot will run at the next time period that is divisible by 5 minutes. You cannot schedule a snapshot to run at intervals of less than 5 minutes.
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
|
Backup Consistency
Details
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
By default CDP creates crash consistent restore points. Similar to snapshot requests, one can generate a CDP Rollback Marker by scripting a call to a PowerShell cmdlet when an application has been quiesced and the caches have been flushed to storage.
Several CDP Rollback Markers may be present throughout the 14-day rolling log. When rolling back a virtual disk image, one simply selects an application-consistent, filesystem-consistent or crash-consistent restore point from (just) before the incident occurred.
In a VMware vSphere environment, the DataCore VMware vCenter plug-in can be used to create snapshot schedules for datastores and select the VMs that you want to enable VSS filesystem/application consistency for.
|
Crash Consistent
File System Consistent (Windows)
Application Consistent (MS Apps on Windows)
By default NetApp HCI creates crash consistent restore points.
The NetApp HCI VSS hardware provider integrates VSS shadow copies with NetApp HCI Snapshot copies and clones. The provider runs on Microsoft Windows 2008 R2 and 2012 R2 editions and supports shadow copies created using DiskShadow and other VSS requesters. A GUI-based configuration utility is provided to add, modify, and remove cluster information used by the NetApp HCI VSS hardware provider.
Utilizing VSS Snapshot capabilities with the NetApp HCI VSS hardware provider makes sure that Snapshot copies are application consistent with business applications that use NetApp HCI volumes on a system. A coordinated effort between VSS components provides this functionality.
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
|
Restore Granularity
Details
|
Entire VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
When configuring the virtual environment as described above, effectively VM-restores are possible.
For file-level restores a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many simultaneous rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
Entire Volume
Administrators can perform VM and single-file restores by mounting a volume snapshot created by NetApp HCI.
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
|
Restore Ease-of-use
Details
|
Entire VM or Volume: GUI
Single File: Multi-step
Restoring VMs or single files from volume-based storage snapshots requires a multi-step approach.
For file-level restores a Virtual Disk snapshot needs to be mounted so the file can be read from the mount. Many simultaneous rollback points for the same Virtual Disk can coexist at the same time, allowing end-users to compare data states. Mounting and changing rollback points does not alter the original Virtual Disk.
|
Entire VM: Multi-step
Single File: Multi-step
Restoring a volume is a single action via API or GUI. However, restoring either VMs or single files from volume-based storage snapshots requires a multi-step approach.
|
N/A
StarWind Virtual SAN does not provide any backup/restore capabilities of its own. Therefore it relies on existing 3rd party data protection solutions.
VMware's free-of-charge backup software that comes with any vSphere license, VMware vSphere Data Protection (VDP), has been declared End-of-Availability and is not supported for VMware vSphere 6.7 and up.
|
|
|
|
Disaster Recovery |
|
|
Remote Replication Type
Details
|
Built-in (native)
DataCore SANsymphony's remote replication function, Asynchronous Replication, is called upon when secondary copies will be housed beyond the reach of Synchronous Mirroring, as in distant Disaster Recovery (DR) sites. It relies on a basic IP connection between locations and works in both directions. That is, each site can act as the disaster recovery facility for the other. The software operates near-synchronously, meaning that it does not hold up the application waiting on confirmation from the remote end that the update has been stored remotely.
|
Built-in (native)
NetApp HCI supports replication to another NetApp HCI via Element Real-Time Replication or to an ONTAP AFF/FAS cluster via SnapMirror. The SnapMirror feature also allows replication to systems running ONTAP Select or Cloud Volumes ONTAP.
|
Built-in (native; stretched clusters only)
External
StarWind Virtual SAN does not have any remote replication capabilities of its own. Stretched Clustering with synchronous replication is the exception.
Therefore, in non-stretched setups it relies on external remote replication mechanisms such as the ones natively available in the hypervisor platform (VMware vSphere Replication, Microsoft Hyper-V Replica) or any 3rd party remote replication application (e.g. Zerto VR, Veeam VM replica, Azure Site Recovery).
vSphere Replication requires the deployment of virtual appliances. No specific integration exists between StarWind VSAN and VMware vSphere VR.
|
|
Remote Replication Scope
Details
|
To remote sites
To MS Azure Cloud
On-premises deployments of DataCore SANsymphony can use Microsoft Azure cloud as an added replication location to safeguard highly available systems. For example, on-premises stretched clusters can replicate a third copy of the data to MS Azure to protect against data loss in the event of a major regional disaster. Critical data is continuously replicated asynchronously within the hybrid cloud configuration.
To allow quick and easy deployment a ready-to-go DataCore Cloud Replication instance can be acquired through the Azure Marketplace.
MS Azure can serve only as a data repository. This means that VMs cannot be restored and run in an Azure environment in case of a disaster recovery scenario.
|
To remote sites
NetApp HCI supports replication to another NetApp HCI via Element Real-Time Replication or to an ONTAP AFF/FAS cluster via SnapMirror. The SnapMirror feature also allows replication to systems running ONTAP Select or Cloud Volumes ONTAP.
|
VR: To remote sites, To VMware clouds
HR: To remote sites, to Microsoft Azure (not part of Windows Server 2019)
VMware vSphere Replication (VR): VMware vSphere Replication allows for replication of VMs to a different vSphere cluster on a remote site or to any supported VMware Cloud Service Provider (vSphere Replication to Cloud). This includes VMware on AWS and VMware on IBM Cloud.
Hyper-V Replica (HR): Hyper-V Replica is an integral part of the Hyper-V role. This feature enables block-level log-based replication of an active source VM to a passive destination VM located on another Hyper-V server or to Microsoft Azure (requires Azure Site Recovery, which is a paid external service, i.e. not part of Windows Server 2019).
Because Hyper-V Replica operates on the hypervisor layer, it is storage agnostic. This means that on one site you can have Hyper-V 2019 on SAN, whereas on the other site you can have Hyper-V 2019 on S2D.
|
|
Remote Replication Cloud Function
Details
|
Data repository
All public clouds can only serve as data repository when hosting a DataCore instance. This means that VMs cannot be restored and run in the public cloud environment in case of a disaster recovery scenario.
In the Microsoft Azure Marketplace there is a pre-installed DataCore instance (BYOL) available named DataCore Cloud Replication.
BYOL = Bring Your Own License
|
N/A
NetApp HCI's native DR and replication capabilities (Element Real-Time Replication) do not support replication to hyperscale public cloud targets (AWS, Azure, GCP).
|
VR: DR-site (VMware Clouds)
HR: DR-site (Azure)
VMware vSphere Replication (VR): Because VMware on AWS and VMware on IBM Cloud are full vSphere implementations, replicated VMs can be started and run in a DR-scenario.
|
|
Remote Replication Topologies
Details
|
Single-site and multi-site
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
|
Single-site and multi-site (limited)
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
A NetApp HCI cluster can be paired with up to four other clusters.
Volume pairings are always one-to-one.
|
VR: Single-site and multi-site
HR: Single-site and chained
Single Site DR = 1-to-1
Multiple Site DR = 1-to-many, many-to-1
Hyper-V Replica (HR): Besides 1-to-1 replications Hyper-V Replica allows for extended (chained) replication. A VM can be replicated from a primary host to a secondary host, and then be replicated from the secondary host to a third host. Please note that it is not possible to replicate from the primary host directly to the second and the third (1-to-many).
|
|
Remote Replication Frequency
Details
|
Continuous (near-synchronous)
SANsymphony Asynchronous Replication is not checkpoint-based but instead replicates continuously. This way data loss is kept to a minimum (seconds to minutes). End-users can inject custom consistency checkpoints based on CDP technology which has no minimum time slot/frequency.
|
Continuous (Synchronous, Asynchronous)
NetApp HCI Real-Time Asynchronous Replication is not checkpoint-based but instead replicates continuously. This way data loss is kept to a minimum (seconds to minutes). The actual RPO is dependent on line latency that exists between the source and the destination site.
ONTAP SnapMirror is snapshot-based and has a minimum replication interval of 15 minutes.
|
VR: 5 minutes (Asynchronous)
HR: 30 seconds (Asynchronous)
SW: Continuous (Stretched Cluster)
Hyper-V Replica (HR): With Hyper-V Replica replication frequency can be set to 30 seconds, 5 minutes, or 15 minutes on a per-VM basis.
|
|
Remote Replication Granularity
Details
|
VM or Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
Although DataCore SANsymphony uses block-storage, the platform is capable of attaining per VM-granularity if desired.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and snapshots can be created on the VM-level. The per-VM functionality is realized by installing the DataCore Storage Management Provider in SCVMM.
Because of the per-host storage limitations in VMware vSphere environments, VVols is leveraged to provide per VM-granularity. DataCore SANsymphony Provider v2.01 is VMware certified for ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
|
VM (Vvols) or Volume; Snapshots-only
NetApp HCI Snapshot technology allows only snapshots created on the source cluster to be replicated. Active writes from the source volume are not replicated.
|
VR: VM
HR: VM
Both vSphere Replication (VR) and Hyper-V Replica operate on the VM level.
|
|
Consistency Groups
Details
|
Yes
SANsymphony provides the option to use Virtual Disk Grouping to enable end-users to restore multiple Virtual Disks to the exact same point-in-time.
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
|
No
|
VR: No
HR: No
|
|
|
VMware SRM (certified)
DataCore provides a certified Storage Replication Adapter (SRA) for VMware Site Recovery Manager (SRM). DataCore SRA 2.0 (SANsymphony 10.0 FC/iSCSI) shows official support for SRM 6.5 only. It does not support SRM 8.2 or 8.1.
There is no integration with Microsoft Azure Site Recovery (ASR). However, SANsymphony can be used with the control and automation options provided by Microsoft System Center (e.g. Operations Manager combined with Virtual Machine Manager and Orchestrator) to build a DR orchestration solution.
|
VMware SRM (certified)
NetApp provides a certified Storage Replication Adapter (SRA) for VMware Site Recovery Manager (SRM). NetApp SolidFire SRA 2.0.1.16 shows official support for SRM versions 8.2, 8.1, 6.5 and 6.1.
The NetApp SolidFire SRA is compatible with all storage nodes in NetApp HCI.
|
N/A
StarWind Virtual SAN does not have asynchronous native remote replication capabilities and does not provide a Storage Replication Adapter (SRA) for VMware SRM implementations.
|
|
Stretched Cluster (SC)
Details
|
VMware vSphere: Yes (certified)
DataCore SANsymphony is certified by VMware as a VMware Metro Storage Cluster (vMSC) solution. For more information, please view https://kb.vmware.com/kb/2149740.
|
N/A
At this time NetApp does not support NetApp HCI clusters that are stretched across data centers.
|
vSphere: Yes
Hyper-V: Yes
StarWind Virtual SAN supports active-active Stretched Clustering which leverages native synchronous block-level replication.
|
|
|
2+sites = two or more active sites, 0/1 or more tie-breakers
Theoretically up to 64 sites are supported.
SANsymphony does not require a quorum or tie-breaker in stretched cluster configurations, but one can be used as an optional component. The Virtual Disk Witness can provide a tie-breaker role if, for instance, redundant inter-site paths are not implemented. The tie-breaker node (server or device) must be different from the two nodes presenting a virtual disk. Access to the Virtual Disk Witness is leading for storage node behavior.
There are 3 ways to configure the stretched cluster without any tie-breakers:
1. Default: in a split-brain scenario both sides stay active allowing upper infrastructure layers (OS/database/application) to make a decision (eg. clustering principles). In any case SANsymphony prevents a merge when there is a risk to data integrity, and the end-user has to make the choice on how to proceed next (which side is true)
2. Select one side to go inaccessible
3. Select both sides to go inaccessible.
|
N/A
At this time NetApp does not support NetApp HCI clusters that are stretched across data centers.
|
vSphere: 2+ sites = minimum of two active sites + optional tie-breaker in 3rd site
Hyper-V: 2+ sites = minimum of two active sites + optional tie-breaker in 3rd site
|
|
|
<=5ms RTT (targeted, not required)
RTT = Round Trip Time
In truth the user/app with the least tolerated write latency defines the acceptable RTT or distance.
|
N/A
At this time NetApp does not support NetApp HCI clusters that are stretched across data centers.
|
<=5ms RTT
|
|
|
<=32 hosts at each active site (per cluster)
The maximum is per cluster. The SANsymphony solution can consist of multiple stretched clusters with a maximum of 64 nodes each.
|
N/A
At this time NetApp does not support NetApp HCI clusters that are stretched across data centers.
|
No set maximum number of nodes
|
|
SC Data Redundancy
Details
|
Replicas: 1N-2N at each active site
DataCore SANsymphony provides enhanced stretched cluster availability by offering local fault protection with In Pool Mirroring. With In Pool Mirroring you can choose to mirror the data inside the local Disk Pool as well as mirror the data across sites to a remote Disk Pool. In the remote Disk Pool data is then also mirrored. All mirroring happens synchronously.
1N-2N: With SANsymphony Stretched Clustering, there can be either 1 instance of the data at each site (no In Pool Mirroring) or 2 instances of the data at each site (In Pool RAID-1 Mirroring).
|
N/A
At this time NetApp does not support NetApp HCI clusters that are stretched across data centers.
|
Replicas: 0-3 Replicas at each active site
|
|
|
Data Services
|
|
|
|
|
|
|
Efficiency |
|
|
Dedup/Compr. Engine
Details
|
Software (integration)
NEW
SANsymphony provides integrated and individually selectable inline deduplication and compression. In addition, SANsymphony is able to leverage post-processing deduplication and compression options available in Windows 2016/2019 as an alternative approach.
|
Software
NetApp HCI includes data reduction technology that compresses and deduplicates all incoming data cluster-wide across all volumes.
Compression is completely inline, with no performance impact. During a write, a block is compressed, and during the recycling process blocks are recompressed to provide even greater efficiency.
An internal garbage collection process cleans up blocks that are no longer referenced by any metadata, including Snapshot copies.
|
vSphere: Software (native)
Hyper-V: Software (integration)
StarWind Virtual SAN for vSphere provides inline deduplication and compression at the software level. In Hyper-V environments, post-process deduplication and compression can be used by leveraging Windows Server OS native capabilities.
|
|
Dedup/Compr. Function
Details
|
Efficiency (space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focused on efficiency.
|
Efficiency and Performance
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focused on efficiency.
NetApp HCI focuses on both aspects.
|
Efficiency (space savings)
Deduplication and compression can provide two main advantages:
1. Efficiency (space savings)
2. Performance (speed)
Most of the time deduplication/compression is primarily focused on efficiency.
|
|
Dedup/Compr. Process
Details
|
Deduplication: Inline (post-ack)
Compression: Inline (post-ack)
Deduplication/Compression: Post-Processing (post process)
NEW
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - e.g. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
DataCore SANsymphony 10 PSP12 and above leverage both inline deduplication and compression, as well as post-process deduplication and compression techniques.
With inline deduplication incoming writes first hit the memory cache of the primary host and are replicated to the cache of a secondary host in an un-deduplicated state. After the blocks have been written to both memory caches, the primary host acknowledges the writes back to the originator. Each host then destages the written blocks to the persistent storage layer. During destaging, written blocks are deduplicated and/or compressed.
Windows Server 2019 deduplication is performed outside of the IO path (post-processing) and is multi-threaded to speed up processing and keep performance impact minimal.
|
Deduplication: inline (pre-ack)
Compression: inline (pre-ack) + post process
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - e.g. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
NetApp HCI leverages global inline deduplication as well as global inline and post-process compression techniques. Incoming writes are broken into 4K blocks, assigned a hash (BlockID), compressed and then written to the NVRAM of two storage nodes in order to protect the data. When the internal Block Service identifies that the BlockID already exists, which means that the data has already been committed to the persistent flash layer, it does not destage the data twice. When destaging unique compressed 4K blocks from NVRAM to the persistent flash layer, blocks going to the same SSD are consolidated into 1MB chunks.
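The minimal Python sketch below illustrates the general content-addressing idea described above: incoming data is split into 4K blocks, each block is hashed to a BlockID, and only blocks whose ID has not been seen before are compressed and stored. It is a conceptual illustration, not Element software code; SHA-256 and zlib are arbitrary example choices.
```python
import hashlib
import zlib

BLOCK_SIZE = 4096    # 4K blocks, as described above
block_store = {}     # BlockID -> compressed block (stand-in for the persistent flash layer)

def write(data: bytes) -> list:
    """Deduplicate and compress an incoming write; return the referencing BlockIDs."""
    block_ids = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        block_id = hashlib.sha256(block).hexdigest()   # content-derived BlockID
        if block_id not in block_store:                # only unique blocks are stored
            block_store[block_id] = zlib.compress(block)
        block_ids.append(block_id)                     # volume metadata references the block
    return block_ids
```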
|
vSphere: Inline
Hyper-V: Post-Processing
Deduplication can be performed in 4 ways:
1. Immediately when the write is processed (inline) and before the write is acknowledged back to the originator of the write (pre-ack).
2. Immediately when the write is processed (inline) and in parallel to the write being acknowledged back to the originator of the write (on-ack).
3. A short time after the write is processed (inline), so after the write is acknowledged back to the originator of the write - e.g. when flushing the write buffer to persistent storage (post-ack).
4. After the write has been committed to the persistent storage layer (post-process).
The first and second methods, when properly integrated into the solution, are most likely to offer both performance and capacity benefits. The third and fourth methods are primarily used for capacity benefits only.
StarWind Virtual SAN for vSphere inline deduplication works in the following manner:
1. In the initial phase, any blocks that consist entirely of zeros are identified and recorded only in metadata.
2. In the second phase, the incoming data is processed to determine whether it is redundant data (data that has been written before) or not. The redundancy of this data is checked through metadata maintained by the kernel module. Any block of data that is found to be redundant will not be written out. Instead, metadata will be updated to point to the original copy of the block already stored on media.
3. Once the initial and second phases are completed, compression is applied to the remaining individual data blocks. The compressed data blocks are then packed together into fixed length (4KB) blocks and stored on media.
Windows Server 2019 deduplication is performed outside of the IO path (post-processing) and is multi-threaded to speed up processing and keep performance impact minimal.
|
|
Dedup/Compr. Type
Details
|
Optional
NEW
By default, deduplication and compression are turned off. For both inline and post-process, deduplication and compression can be enabled.
For inline deduplication and compression the feature can be turned on per node. The entire node represents a global deduplication domain. Deduplication and compression work across pools and across vDisks. Individual pools can be selected to participate in capacity optimization. Either deduplication or compression or both can be selected per individual vDisk. Pools can host both capacity optimized and non-capacity optimized vDisks at the same time. The optional capacity optimization settings can be added/changed/removed during operation for each vDisk.
For post-processing the feature can be enabled per pool. All vDisks in that pool would be deduplicated and compressed. Each pool is an independent deduplication domain. This means only data in the pool is capacity optimized, but not across pools. Additionally, for post-processing capacity optimization can be scheduled so admins can decide when deduplication should run.
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
|
Always-on
NetApp HCI's data deduplication and compression features are always on and cannot be disabled, as they are an integral component of the platform architecture providing both performance and efficiency. Because the storage subsystem is completely separated from the compute subsystem in NetApp HCI, deduplication and compression do not subtract from compute resources. It also provides end-user simplicity.
|
Optional
vSphere: By default inline data deduplication is turned off and can be enabled.
Hyper-V: By default post-process deduplication and compression are turned off. Deduplication and compression can be enabled for selected volumes, either manually or scheduled.
|
|
Dedup/Compr. Scope
Details
|
Persistent data layer
|
Read and Write caches + Persistent data layers
Deduplication and compression is used for optimizing read/write cache and persistent storage capacity.
|
Persistent data layer
StarWind Virtual SAN for vSphere inline deduplication is performed on the persistent storage layer (XFS partition with inline deduplication enabled on which StarWind devices are located).
Microsoft Windows Server native deduplication and compression can be set for StarWind virtual devices formatted as NTFS.
Windows Server 2019 Deduplication only happens in the persistent data layer and not in the cache. The cache is not accessible from the file system and so deduplication cannot be applied to it.
|
|
Dedup/Compr. Radius
Details
|
Pool (post-processing deduplication domain)
Node (inline deduplication domain)
NEW
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
For inline deduplication and compression raw physical disks are added to a capacity optimization pool. The entire node represents a global deduplication domain. Deduplication and compression work across pools and across vDisks. Individual pools can be selected to participate in capacity optimization.
The post-processing capability provided through Windows Server 2016/2019 is highly scalable and can be used with volumes up to 64 TB and files up to 1 TB in size. Data deduplication identifies repeated patterns across files on that volume.
|
Storage Cluster
NetApp HCI inline deduplication works globally, which means that deduplication happens across all nodes within a storage cluster.
|
Volume
StarWind Virtual SAN for vSphere inline deduplication is scalable up to 256TB volume size.
Windows Server 2019 deduplication is highly scalable and can be used with volumes up to 64TB and files up to 4TB in size. Data deduplication identifies repeated patterns across files on that volume.
|
|
Dedup/Compr. Granularity
Details
|
4-128 KB variable block size (inline)
32-128 KB variable block size (post-processing)
NEW
With inline deduplication and compression, the data is organized in 128 KB segments. Depending on the optimization setting, a write into such a segment first gets compressed (when compression is selected) and then a hash is generated. If the hash is unique, the 128 KB segment is written back and the hash is added to the deduplication hash-table. If the hash is not unique, the segment is referenced in the deduplication hash table and discarded. The smallest chunk in the segment can be 4 KB.
For post-processing the system leverages deduplication in Windows Server 2016/2019: files within a deduplication-enabled volume are segmented into small variable-sized chunks (32–128 KB), duplicate chunks are identified, and only a single copy of each chunk is physically stored.
|
4 KB fixed block size
NetApp HCI deduplication and compression use 4K fixed block segments.
|
vSphere: 4KB fixed block size
Hyper-V: 32-128 KB variable block size
Inline deduplication in StarWind VSAN for vSphere stores deduplicated and compressed blocks at a 4K block size.
By leveraging deduplication in Windows Server 2019, files within a deduplication-enabled volume are segmented into small variable-sized chunks (32–128 KB), duplicate chunks are identified, and only a single copy of each chunk is physically stored.
|
|
Dedup/Compr. Guarantee
Details
|
N/A
Microsoft provides the Deduplication Evaluation Tool (DDPEVAL) to assess the data in a particular volume and predict the dedup ratio.
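DDPEVAL ships with the Windows Server Data Deduplication feature and can be pointed at a volume, folder or share to estimate the expected savings; the path below is a placeholder:

  # Estimate deduplication savings for a volume before enabling the feature
  & "$env:SystemRoot\System32\DDPEval.exe" E:\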
|
Workload/Data Type dependent
NetApp uses pre-defined reduction ratios per workload for NetApp HCI based on testing. A capacity guarantee is provided to customers as part of the purchase. In exceptional scenarios a higher ratio may be agreed upon between NetApp and the individual customer organization.
|
N/A
|
|
|
Full (optional)
Data rebalancing needs to be initiated manually by the end-user. Whether this makes sense depends on the specific use case and end-user environment. When end-users want to isolate new workloads and corresponding data on new nodes, data rebalancing is not used.
|
Full
NetApp HCI automatically rebalances data across nodes when a node is either added or removed. There is no user-intervention required for these redistribution activities.
|
Full (optional)
Data rebalancing needs to be initiated manually. When a new StarWind Virtual SAN node is added, a StarWind Support Engineer assists with replicating data to the new partner.
|
|
|
Yes
DataCore SANsymphonys Auto-Tiering is a real-time intelligent mechanism that continuously positions data on the appropriate class of storage based on how frequently the data is accessed. Auto-Tiering leverages any combination of Flash and traditional disk technologies, whether it is internal or array based, with up to 15 different storage tiers that can be defined.
As more advanced storage technologies become available, existing tiers can be modified as necessary and additional tiers can be added to further diversify the tiering architecture.
|
N/A
The NetApp HCI storage architecture is based on a single storage layer (SSD) and hence does not include multiple persistent storage layers to distribute data across.
|
Partial (integration; optional)
StarWind Virtual SAN can leverage the data tiering capabilities available in the Windows Server 2016/2019 OS (Storage Spaces). This is the case for Hyper-V and VMware vSphere environments.
|
|
|
|
Performance |
|
|
|
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; Space Reclamation (T10 SCSI UNMAP)
DataCore SANsymphony iSCSI and FC are fully qualified for all VMware vSphere VAAI-Block capabilities that include: Thin Provisioning, HW Assisted Locking, Full Copy, Block Zero
Note: DataCore SANsymphony does not support Thick LUNs.
DataCore SANsymphony is also fully qualified for Microsoft Hyper-V 2012 R2 and 2016/2019 ODX and UNMAP/TRIM.
Note: ODX is not used for files smaller than 256KB.
VAAI = VMware vSphere APIs for Array Integration
ODX = Offloaded Data Transfers
UNMAP/TRIM support allows the Windows operating system to communicate the inactive block IDs to the storage system. The storage system can wipe these unused blocks internally.
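On the Windows side this can be verified and exercised with standard tooling; shown for illustration only, with the drive letter as a placeholder:

  # 0 means delete notifications (UNMAP/TRIM) are passed down by this host
  fsutil behavior query DisableDeleteNotify
  # Manually send retrim/UNMAP requests for free space on a volume
  Optimize-Volume -DriveLetter D -ReTrim -Verbose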
|
vSphere: VMware VAAI-Block (full)
NetApp HCI iSCSI is fully qualified for all VMware vSphere VAAI-Block capabilities that include: Thin Provisioning, HW Assisted Locking, Full Copy, Block Zero. NetApp HCIs VAAI-Block capabilities are certified with VMware vSphere 6.0-6.7.
VAAI = VMware vSphere APIs for Array Integration.
|
vSphere: VMware VAAI-Block (full)
Hyper-V: Microsoft ODX; UNMAP/TRIM
StarWind Virtual SAN iSCSI is fully qualified for all VMware vSphere VAAI-Block capabilities that include: Thin Provisioning, HW Assisted Locking, Full Copy, Block Zero.
StarWind Virtual SAN is also fully qualified for Microsoft Hyper-V 2016 and 2019 ODX, as well as TRIM (for SATA SSDs) and UNMAP (for SAS SSDs).
Note: ODX is not used for files smaller than 256KB.
VAAI = VMware vSphere APIs for Array Integration
ODX = Offloaded Data Transfers
UNMAP/TRIM support allows the Windows operating system to communicate the inactive block IDs to the storage system. The storage system can wipe these unused blocks internally.
|
|
|
IOPs and/or MBps Limits
QoS is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical clients/hosts.
2. Ability to set guarantees to ensure service levels for mission-critical clients/hosts.
SANsymphony currently supports only the first method. Although SANsymphony does not provide support for the second method, the platform does offer some options for optimizing performance for selected workloads.
For streaming applications which burst data, it’s best to regulate the data transfer rate (MBps) to minimize their impact. For transaction-oriented applications (OLTP), limiting the IOPs makes most sense. Both parameters may be used simultaneously.
DataCore SANsymphony ensures that high-priority workloads competing for access to storage can meet their service level agreements (SLAs) with predictable I/O performance. QoS Controls regulate the resources consumed by workloads of lower priority. Without QoS Controls, I/O traffic generated by less important applications could monopolize I/O ports and bandwidth, adversely affecting the response and throughput experienced by more critical applications. To minimize contention in multi-tenant environments, the data transfer rate (MBps) and IOPs for less important applications are capped to limits set by the system administrator. QoS Controls enable IT organizations to efficiently manage their shared storage infrastructure using a private cloud model.
More information can be found here: https://docs.datacore.com/SSV-WebHelp/quality_of_service.htm
In order to achieve consistent performance for a workload, a separate Pool can be created where selected vDisks are placed. Alternatively 'Performance Classes' can be assigned to differentiate between data placement of multiple workloads.
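Conceptually (this is an illustration, not DataCore's implementation), an IOPs cap behaves like a token bucket that refills once per second and delays I/O issued above the limit:

  # Illustrative token-bucket IOPS cap; the limit value is an example
  $capIops = 500
  $script:tokens = $capIops
  $script:window = Get-Date
  function Submit-LimitedIo {
      param([scriptblock]$IoOperation)
      while ($script:tokens -le 0) {
          Start-Sleep -Milliseconds 20                 # back-pressure: wait for the next refill
          if (((Get-Date) - $script:window).TotalSeconds -ge 1) {
              $script:tokens = $capIops
              $script:window = Get-Date
          }
      }
      $script:tokens--
      & $IoOperation                                   # issue the actual I/O
  }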
|
IOPs Limits (maximums)
IOPs Guarantees (minimums)
QoS is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical clients/hosts.
2. Ability to set guarantees to ensure service levels for mission-critical clients/hosts.
NetApp HCI supports both methods through pre-defined performance policies per volume. These policies can be changed at any point in time and on-the-fly.
NetApp HCI provides the following three-dimensional QoS settings for every individual volume/Vvol:
- Minimum IOPS: The minimum number of IOPS guaranteed for the volume.
- Maximum IOPS: The maximum number of IOPS allowed for the volume.
- Burst IOPS: The maximum number of IOPS allowed over a short period of time for the volume.
The IOPS values entered are normalized to a 4K IO size. The configuration screen shows the calculation for 8K, 16K and 256K IO sizes, as well as MBps for Maximum Bandwidth.
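As a sketch of applying such a policy programmatically, volume QoS is exposed through the Element JSON-RPC API; the cluster address, API version, credentials and volume ID below are placeholder assumptions:

  # Sketch: set min/max/burst IOPS on one volume via the Element API
  $body = @{
      method = "ModifyVolume"
      params = @{
          volumeID = 42                                # placeholder volume ID
          qos      = @{ minIOPS = 1000; maxIOPS = 5000; burstIOPS = 8000 }
      }
      id = 1
  } | ConvertTo-Json -Depth 5
  Invoke-RestMethod -Uri "https://mvip.example.local/json-rpc/12.2" -Method Post `
                    -Body $body -ContentType "application/json" -Credential (Get-Credential)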
|
N/A
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
StarWind Virtual SAN currently does not offer any QoS mechanisms.
|
|
|
Virtual Disk Groups and/or Host Groups
SANsymphony QoS parameters can be set for individual hosts or groups of hosts as well as for groups of Virtual Disks for fine grained control.
In a VMware VVols (=Virtual Volumes) environment a vDisk corresponds 1-to-1 to a virtual disk (.vmdk). Thus virtual disks can be placed in a Disk Group and a QoS Limit can then be assigned to it. DataCore SANsymphony Provider v2.01 has VVols certification for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
In Microsoft Hyper-V environments, when a VM with vdisks is created through SCVMM, DataCore can be instructed to automatically carve out a Virtual Disk (=storage volume) for every individual vdisk. This way there is a 1-to-1 alignment from end-to-end and QoS Limits can be applied on the virtual disk level. The 1-to-1 alignment is realized by installing the DataCore Storage Management Provider in SCVMM.
|
Per volume
Per vdisk (Vvols)
Because NetApp HCI presents block-based storage volumes, QoS Policies can be applied to VMware datastores and Raw Device Mappings (RDMs). This also extends to individual vdisks when VVols are leveraged.
|
N/A
Quality-of-Service (QoS) is a means to ensure specific performance levels for applications and workloads. There are two ways to accomplish this:
1. Ability to set limitations to avoid unwanted behavior from non-critical VMs.
2. Ability to set guarantees to ensure service levels for mission-critical VMs.
StarWind Virtual SAN currently does not offer any QoS mechanisms.
|
|
|
Per VM/Virtual Disk/Volume
With SANsymphony the rough hierarchy is: physical disk(s) or LUNs -> Disk Pool -> Virtual Disk (=logical volume).
In SANsymphony 'Flash Pinning' can be achieved using one of the following methods:
Method #1: Create a flash-only pool and migrate the individual vDisks that require flash pinning to the flash-only pool. When using a VVOL configuration in a VMware environment, each vDisk represents a virtual disk (.vmdk). This method guarantees all application data will be stored in flash.
Method #2: Create auto-tiering pools with at least 1 flash tier. Assign the Performance Class “Critical” to the vDisks that require flash pinning and place them in the auto-tiering pool. This will effectively and intelligently put as much of the data that resides in the vDisk in the flash tier, as long as the flash tier has enough space available. Therefore this method is on a best-effort basis and dependent on correct sizing of the flash tier(s).
Methods #1 and #2 can be used side-by-side in the same DataCore environment.
|
Not relevant (All-Flash only)
The NetApp HCI platform is not available as a hybrid (flash+magnetic) configuration and as such has no need for a 'Flash Pinning' feature.
|
N/A
|
|
|
|
Security |
|
|
Data Encryption Type
Details
|
Built-in (native)
SANsymphony 10.0 PSP9 introduced native encryption when running on Windows Server 2016/2019.
|
Built-in (native)
|
N/A
StarWind Virtual SAN does not have native data encryption capabilities. Adding data encryption capabilities requires encryption storage hardware. Alternatively, software options such as Microsoft Bitlocker or vSphere Virtual Machine Encryption can be leveraged.
VMware Virtual Machine Encryption requires VMware vSphere Enterprise Plus or VMware vSphere Platinum licenses.
|
|
Data Encryption Options
Details
|
Hardware: Self-encrypting drives (SEDs)
Software: SANsymphony Encryption
Hardware: In SANsymphony deployments the encryption data service capabilities can be offloaded to hardware-based SED offerings available in server- and storage solutions.
Software: SANsymphony provides software-based data-at-rest encryption that is XTS-AES 256bit compliant.
|
Hardware: Self-encrypting drives (SEDs)
Software: Element OS encryption
NEW
Hardware-based encryption: NetApp HCI allows encryption of all data stored within the cluster. Self-encrypting drives are available on H410S/H610S storage nodes, with FIPS-certified drives in H610S-2F storage nodes.
All drives in NetApp HCI storage nodes leverage AES 256-bit encryption at the drive level. Each drive has its own encryption key, which is created when the drive is first initialized. When you enable the encryption feature, a cluster-wide password is created, and chunks of the password are then distributed to all nodes in the cluster. No single node stores the entire password. The password is then used to protect all access to the drives and must be supplied for every read and write operation to the drive.
Enabling the encryption-at-rest feature does not affect performance or efficiency on the cluster. Additionally, if an encryption-enabled drive or node is removed from the cluster with the API or web UI, Encryption-at-Rest will be disabled on the drives.
Software-based encryption: Element 12.2 introduces software encryption at rest, which can be enabled when creating a new storage cluster (and is enabled by default when creating a SolidFire Enterprise SDS storage cluster). The encryption feature encrypts all data stored on the SSDs in the storage nodes and causes only a very small (~2%) performance impact on client IO.
|
Hardware: Self-encrypting drives (SEDs)
Software: Microsoft BitLocker Drive Encryption; VMware Virtual machine Encryption
Hardware: In StarWind Virtual SAN deployments the encryption data service capabilities can be offloaded to hardware-based SED offerings available in server- and storage solutions.
Software: Microsoft BitLocker provides software encryption on standalone and cluster based NTFS or ReFS(v2) volumes. Cluster Shared Volume (CSV) encryption support was added in Windows Server 2012.
Microsoft BitLocker uses the Advanced Encryption Standard (AES) encryption algorithm with either 128-bit or 256-bit keys. It is generally recommended to use 256-bit keys because of their superior strength.
Bitlocker can be used to encrypt the local storage, encrypt Cluster Shared Volumes created on StarWind HA devices where VMs are located or encrypt VMs with Trusted Platform Module.
As for ESXi hosts, Bitlocker can be used to encrypt the Windows Server VMs storage (only non-bootable partitions). However, this does not provide full security. Alternatively, vSphere Virtual Machine Encryption can be used.
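For the software option, BitLocker is enabled per volume with the native cmdlets; the mount point below is a placeholder and a TPM or recovery-password protector must be chosen to suit the environment:

  # Enable BitLocker with XTS-AES 256 on a data volume hosting StarWind devices
  Enable-BitLocker -MountPoint "D:" -EncryptionMethod XtsAes256 -UsedSpaceOnly -RecoveryPasswordProtector
  # Check encryption progress and protection status
  Get-BitLockerVolume -MountPoint "D:"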
|
|
Data Encryption Scope
Details
|
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: SEDs provide encryption for data-at-rest; SEDs do not provide encryption for data-in-transit.
Software: SANsymphony provides encryption for data-at-rest; it does not provide encryption for data-in-transit. Encryption can be enabled per individual virtual disk.
|
Hardware: Data-at-rest
Software: Data-at-rest
NEW
Hardware: NetApp HCI provides encryption for data-at-rest through the use of self-encrypting drives (SEDs); NetApp HCI does not provide encryption for data-in-transit.
Software: NetApp HCI provides encryption for data-at-rest through the use of Element OS; NetApp HCI does not provide encryption for data-in-transit.
|
Hardware: Data-at-rest
Software: Data-at-rest
Hardware: SEDs provide encryption for data-at-rest; SEDs do not provide encryption for data-in-transit.
Software: Microsoft BitLocker provides encryption for data-at-rest as well as data-in-transit during live migration of a VM; VMware Virtual Machine Encryption provides encryption for data-at-rest.
|
|
Data Encryption Compliance
Details
|
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (SANsymphony)
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
Hardware: FIPS 140-2 Level 1 (SEDs)
Software: N/A
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
NetApp HCI 1.7 supports FIPS 140-2 drive encryption for FIPS-compliant drives when installed in the H610S-2F storage node. FIPS drive encryption requires all drives in the storage cluster to be FIPS-capable.
|
Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (BitLocker; VMware Virtual Machine Encryption)
Microsoft BitLocker has been validated for Federal Information Processing Standard (FIPS) 140-2 in March 2018.
FIPS = Federal Information Processing Standard
FIPS 140-2 defines four levels of security:
Level 1 > Basic security requirements are specified for a cryptographic module (eg. at least one Approved algorithm or Approved security function shall be used).
Level 2 > Also has features that show evidence of tampering.
Level 3 > Also prevents the intruder from gaining access to critical security parameters (CSPs) held within the cryptographic module.
Level 4 > Provides a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access.
|
|
Data Encryption Efficiency Impact
Details
|
Hardware: No
Software: No
Hardware: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
|
Hardware: No
Software: Yes (very limited)
NEW
Hardware: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software: Element OS encryption causes only a very small (~2%) performance impact on client IO.
|
Hardware: No
Software: No (BitLocker); Yes (VMware Virtual Machine Encryption)
Hardware: Because data encryption is performed at the end of the write path, storage efficiency mechanisms are not impaired.
Software: Microsoft BitLocker can be used to provide whole-disk encryption on a deduplicated disk since BitLocker sits at the end of the write path. VMware Virtual Machine Encryption encrypts the data on the host before it is written to storage, thus negatively impacting backend storage features such as deduplication and compression. However, Microsoft post-process deduplication is executed at the filesystem layer.
|
|
|
|
Test/Dev |
|
|
|
Yes
Support for fast VM cloning via VMware VAAI and Microsoft ODX.
|
Yes
Support for fast VM cloning via VMware VAAI.
|
No
StarWind VSAN is an SDS solution. The cloning functionality is not related to storage capabilities and is performed on the hypervisor level.
|
|
|
|
Portability |
|
|
Hypervisor Migration
Details
|
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
VMware Converter 6.2 supports the following Guest Operating Systems for VM conversion from Hyper-V to vSphere:
- Windows 7, 8, 8.1, 10
- Windows 2008/R2, 2012/R2 and 2016
- RHEL 4.x, 5.x, 6.x, 7.x
- SUSE 10.x, 11.x
- Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS
- CentOS 6.x, 7.0
The VMs have to be in a powered-off state in order to be migrated across hypervisor platforms.
Microsoft Virtual Machine Converter (MVMC) supports conversion of VMware VMs and vdisks to Hyper-V VMs and vdisks. It is also possible to convert physical machines and disks to Hyper-V VMs and vdisks.
MVMC has been officially retired and can only be used for converting VMs up to version 6.0.
Microsoft System Center Virtual Machine Manager (SCVMM) 2016 also supports conversion of VMs up to version 6.0 only.
|
Hyper-V to ESXi (external)
VMware Converter 6.2 supports the following Guest Operating Systems for VM conversion from Hyper-V to vSphere:
- Windows 7, 8, 8.1, 10
- Windows 2008/R2, 2012/R2 and 2016
- RHEL 4.x, 5.x, 6.x, 7.x
- SUSE 10.x, 11.x
- Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS
- CentOS 6.x, 7.0
The VMs have to be in a powered-off state in order to be migrated across hypervisor platforms.
|
Hyper-V to ESXi (external)
ESXi to Hyper-V (external)
StarWind V2V Converter is a StarWind proprietary tool that can be leveraged for virtual-to-virtual (V2V) use cases as well as physical-to-virtual (P2V) use cases. The tool supports all major VM formats: VHD/VHDX, VMDK, QCOW2, and StarWind native IMG. Both the source and target VM copies exist simultaneously because the conversion procedure is more like a cloning process than a replacement. As a convenient side effect, StarWind V2V Converter basically creates a backup copy of the VMs, making the process even safer.
When converting the VM to VHDX format, StarWind V2V Converter enables the activation of Windows Repair Mode. This way the virtual machine will automatically adapt to the given hardware environment and negate any compatibility problems.
|
|
|
|
File Services |
|
|
|
Built-in (native)
SANsymphony delivers out-of-box (OOB) file services by leveraging Windows native SMB/NFS and Scale-out File Services capabilities. SANsymphony is capable of simultaneously handling highly-available block and file level services.
Raw storage is provisioned from within the SANsymphony GUI to the Microsoft file services layer, similar to provisioning Storage Spaces Volumes to the file services layer. This means any file services configuration is performed from within the respective Windows service consoles e.g. quotas.
More information can be found under: https://www.datacore.com/products/features/high-availability-nas-cluster-file-sharing.aspx
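Because the file services layer is plain Windows Server, share provisioning on top of a SANsymphony-provisioned volume uses the standard cmdlets; the path, share name and group below are placeholders:

  # Create an SMB share on a volume provisioned from SANsymphony
  New-SmbShare -Name "TeamShare" -Path "E:\Shares\TeamShare" -FullAccess "DOMAIN\FileAdmins"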
|
Built-in (native)
NetApp ONTAP Select is an optional service provided with NetApp HCI. Deploying a single/dual node cluster of ONTAP Select within the NetApp HCI environment allows provisioning of file shares (SMB and NFS) via the ONTAP Select user interface on top of the NetApp HCI block storage. The ideal use cases are home directories and departmental file shares.
ONTAP Select is installed as part of a customer-driven workflow after the NetApp Deployment Engine (NDE) has completed.
NetApp HCI v1.8 allows the automatic configuration of NetApp ONTAP Select 9.7 file services as part of the NetApp HCI deployment process streamlined by NetApp Deployment Engine (NDE). NDE v1.8 deploys a single-node NetApp ONTAP Select cluster. A second node can be added afterwards to form an HA-pair with the first node.
NetApp ONTAP Select nodes are VMware virtual machines deployed on top of the NetApp HCI Compute node cluster. The NetApp ONTAP Select clusters initial licensing supports 2-8TB of raw storage capacity. ONTAP Select datastores are mapped to VMware virtual disks (VMDKs) that reside on the NetApp HCI storage cluster.
NetApp ONTAP Select requires a separate license (Standard or Premium).
|
vSphere: N/A
Hyper-V: Built-in (native)
StarWind Virtual SAN for Hyper-V delivers out-of-box (OOB) file services by leveraging Windows native SMB/NFS and Scale-out File Services capabilities. StarWind Virtual SAN is capable of simultaneously handling highly-available block and file level services.
StarWind virtual devices formatted as NTFS volumes are provisioned to the Microsoft file services layer. This means any file services configuration is performed from within the respective Windows service consoles e.g. quotas.
|
|
Fileserver Compatibility
Details
|
Windows clients
Linux clients
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, most Windows and Linux clients are able to connect.
|
Windows clients
Linux clients
|
Windows clients
Linux clients
Because StarWind Virtual SAN leverages Windows Server native CIFS/NFS and Scale-out File services, most Windows and Linux clients are able to connect.
|
|
Fileserver Interconnect
Details
|
SMB
NFS
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, Windows Server platform compatibility applies:
SMB versions 1, 2 and 3 are supported, as are NFS versions 2, 3 and 4.1.
|
SMB
NFS
Supported SMB versions: 1.0, 2.0, 2.1, 3.0, 3.1.1
Supported NFS versions: v3, v4.0, v4.1, pNFS
|
SMB
NFS
Because StarWind Virtual SAN leverages Windows Server native CIFS/NFS and Scale-out File services, Windows Server platform compatibility applies:
SMB versions 1, 2 and 3 are supported, as are NFS versions 2, 3 and 4.1.
|
|
Fileserver Quotas
Details
|
Share Quotas, User Quotas
Because SANsymphony leverages Windows Server native CIFS/NFS and Scale-out File services, all Quota features available in Windows Server can be used.
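For example, quotas are configured through the Windows File Server Resource Manager cmdlets rather than in SANsymphony itself; the path and size below are placeholders:

  # Apply a 5 GB hard quota to a shared folder using FSRM
  New-FsrmQuota -Path "E:\Shares\TeamShare" -Size 5GB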
|
User Quotas
User quotas can be configured and applied on a volume in order to restrict space usage for specific users and/or user groups.
|
Share Quotas, User Quotas
Because StarWind Virtual SAN leverages Windows Server native CIFS/NFS and Scale-out File services, all Quota features available in Windows Server can be used.
|
|
Fileserver Analytics
Details
|
Partial
Because SANsymphony leverages Windows Server native CIFS/NFS, Windows Server built-in auditing capabilities can be used.
|
N/A
NetApp ONTAP Select currently does not have advanced file analytics capabilities.
|
Partial
Because StarWind Virtual SAN leverages Windows Server native CIFS/NFS, Windows Server built-in auditing capabilities can be used.
|
|
|
|
Object Services |
|
|
Object Storage Type
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
N/A
NetApp HCI does not provide any object storage serving capabilities of its own. However, NetApp StorageGRID can be leveraged as VMware virtual machines (minimum of 3 nodes) to deliver S3-compatible object storage. No direct integrations exist between NetApp HCI and NetApp StorageGRID.
|
N/A
StarWind Virtual SAN does not provide any object storage serving capabilities of its own.
|
|
Object Storage Protection
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
N/A
NetApp HCI does not provide any object storage serving capabilities of its own. However, NetApp StorageGRID can be leveraged as VMware virtual machines (minimum of 3 nodes) to deliver S3-compatible object storage. No direct integrations exist between NetApp HCI and NetApp StorageGRID.
|
N/A
StarWind Virtual SAN does not provide any object storage serving capabilities of its own.
|
|
Object Storage LT Retention
Details
|
N/A
DataCore SANsymphony does not provide any object storage serving capabilities of its own.
|
N/A
NetApp HCI does not provide any object storage serving capabilities of its own. However, NetApp StorageGRID can be leveraged as VMware virtual machines (minimum of 3 nodes) to deliver S3-compatible object storage. No direct integrations exist between NetApp HCI and NetApp StorageGRID.
|
N/A
StarWind Virtual SAN does not provide any object storage serving capabilities of its own.
|
|
|
Management
|
|
|
|
|
|
|
Interfaces |
|
|
GUI Functionality
Details
|
Centralized
SANsymphonys graphical user interface (GUI) is highly configurable to accommodate individual preferences and includes guided wizards and workflows to simplify administration. All actions available from the GUI may also be scripted with PowerShell Commandlets to orchestrate workflows with other tools and applications.
|
Centralized
NetApp HCI management, capacity monitoring, performance monitoring and efficiency reporting is performed through the vSphere Web Client interface.
|
Centralized
StarWind Web-based Management provides the ability to administrate the StarWind Virtual SAN infrastructure from any remote location using any HTML 5 web console.
|
|
|
Single-site and Multi-site
|
Single-site and Multi-site
Centralized management of one or multiple NetApp HCI clusters can be performed from a single dashboard. This means that global implementations with multiple sites in multiple countries can be easily managed. The NetApp HCI vCenter Plug-in can be used to manage NetApp HCI cluster resources from other vCenter Servers using vCenter Linked Mode.
|
Single-site and Multi-site
From within the StarWind Web Management Console servers from different clusters and sites can be added.
|
|
GUI Perf. Monitoring
Details
|
Advanced
SANsymphony has visibility into the performance of all connected devices including front-end channels, back-end channels, cache, physical disks, and virtual disks. Metrics include Read/write IOPs, Read/write MBps and Read/Write Latency at all levels. These metrics can be exported to the Windows Performance Monitoring (Perfmon) utility where other server parameters are being tracked.
The frequency at which performance metrics are captured and reported is configurable, from real-time at 1-second intervals to long-term recording at 2-minute granularity.
When a trend analysis is required, an end-user can simply enable a recording session to capture metrics over a longer period of time.
|
Advanced
Individual volume details include: Actual IOPS, Average IOP Size, Burst IOP Size, Client Queue Depth, Latency (microseconds), Read Bytes, Read Latency (microseconds), Read operations, Write Bytes, Write Latency (microseconds), Write operations, Total Latency (microseconds), Throttle, Volume Utilization.
Furthermore, Performance Utilization show the percentage of cluster IOPS being consumed.
HCI v1.3 and up enable Active IQ cloud monitoring to also display performance data for Virtual Volumes (VVols) configured on a NetApp HCI cluster.
|
Basic
StarWind Management Console can be used to view basic storage performance characteristics.
StarWind Management Console provides:
- counters on a per-host level: CPU/RAM load, CPU load, RAM load, Total IOPS, Total Bandwidth
- counters on a per-device level: Read Bandwidth, Write Bandwidth, Total Bandwidth, Total IOPS
- selectable are Server with all targets (StarWind devices) or each separate StarWind device
- retention time of performance information is last 24 hours
- refresh rate of performance metrics is every 30 seconds
|
|
|
VMware vSphere Web Client (plugin)
VMware vCenter plug-in for SANsymphony
SCVMM DataCore Storage Management Provider
Microsoft System Center Monitoring Pack
DataCore offers deep integration with VMware vSphere and Microsoft Hyper-V, as well as their respective systems management tools, vCenter and System Center.
SCVMM = Microsoft System Center Virtual Machine Manager
|
VMware vSphere Web Client (plugin)
The NetApp Element plug-in for VMware vCenter allows day-to-day storage management tasks to be performed from within VMware vCenter, such as:
- Viewing of event logs and report overviews.
- Creation of a datastore or a volume with individualized min, max, and burst QoS per application.
- Management of accounts, clusters, remote replication, data protection and VVols.
- Registration of alerts about the health of NetApp HCI in the alarm section.
VMware ESXi 6.0, 6.5, 6.7 or 7.0 is required to use the NetApp Element Plug-in for vCenter Server.
|
vSphere: Web Client (plugin)
Hyper-V: SCVMM 2016 (add-in)
The StarWind vCenter plug-in provides a 1-to-1 representation of the StarWind Web Console inside the VMware vSphere Web Client. It does not add additional StarWind Virtual SAN-related actions to any Web Client menus.
StarWind Virtual SAN for vSphere environments can be additionally managed from the VM web console providing storage and networking management capabilities as well as performance metrics.
|
|
|
|
Programmability |
|
|
|
Full
Using DataCore's native management console, Virtual Disk Templates can be leveraged to populate storage policies. Available configuration items: Storage profile, Virtual disk size, Sector size, Reserved space, Write-through enabled/disabled, Storage sources, Preferred snapshot pool, Accelerator enabled/disabled, CDP enabled/disabled.
Virtual Disk Templates integrate with System Center Virtual Machine Manager (SCVMM), VMware Virtual Volumes (VVol) and OpenStack. Virtual Disk Templates are also fully supported by the REST-API allowing any third-party integration.
Using Virtual Volumes (VVols) defined through DataCore’s VASA provider, VMware administrators can self-provision datastores for virtual machines (VMs) directly from their familiar hypervisor interface. This is possible even for devices in the DataCore pool that don’t natively support VVols and never will, as SANsymphony can be used as a storage-virtualization layer for these devices/solutions. DataCore SANsymphony Provider v2.01 has VVols certification for VMware ESXi 6.5 U2/U3, ESXi 6.7 GA/U1/U2/U3 and ESXi 7.0 GA/U1.
Using Classifications and StoragePools defined through DataCore’s Storage Management Provider, Hyper-V administrators can self-provision virtual disks and pass-through LUNS for virtual machines (VMs) directly from their familiar SCVMM interface.
|
Full
|
Partial (Protection)
|
|
|
REST-APIs
PowerShell
The SANsymphony REST-APIs library includes more than 200 new representational state transfer (REST) operations, so automation can be leveraged more extensively. RESTful interfaces are used by products such as Lenovo XClarity, Cisco Embedded Resource Manager and Dell OpenManage to manage infrastructure in the enterprise.
SANsymphony provides its own Powershell cmdlets.
|
REST-APIs
CLI
NetApp HCI was designed from the ground up to be 100% programmable. Therefore NetApp HCI provides rich and at the same time easy-to-comprehend REST-APIs.
|
REST-APIs (through Swordfish)
PowerShell
StarWind Virtual SAN does not offer native REST APIs (functionality still under development). Alternatively, StarWind over Swordfish API can be configured if programming is a requirement.
For more information, please view: https://www.starwindsoftware.com/resource-library/starwind-swordfish-provider/
|
|
|
OpenStack
OpenStack: The SANsymphony storage solution includes a Cinder driver, which interfaces between SANsymphony and OpenStack, and presents volumes to OpenStack as block devices which are available for block storage.
Datacore SANsymphony programmability in VMware vRealize Automation and Microsoft System Center can be achieved by leveraging PowerShell and the SANsymphony specific cmdlets.
|
OpenStack
Orchestration (vRO plug-in)
Ansible playbooks
Element OS provides a driver for the OpenStack Block Storage (Cinder) service. Features include:
- Volume create/delete
- Volume attach/detach
- Extend volume
- Snapshot create/delete
- List snapshots
- Create volume from snapshot
NetApp HCI currently supports Red Hat OpenStack Platform 13-16.
NetApp HCI also provides a plug-in for VMware vRealize Orchestration (vRO). The plug-in models the storage API methods as vRO workflows so scheduling and automating NetApp HCI storage administration tasks becomes easy.
There are also Ansible playbooks for Element OS and VMware vSphere.
|
N/A
|
|
|
Full
The DataCore SANsymphony GUI offers delegated administration to secondary users through fine-grained Role-based Access Control (RBAC). The administrator is able to define Virtual Disk ownership as well as privileges associated with that particular ownership. Owners must have Virtual Disk privileges in an assigned role in order to perform operations on the virtual disk. Access can be very refined. For example, one owner may have the privilege to create a snapshot of a virtual disk, but not have the ability to serve or unserve the same virtual disk. Privilege sets define the operations that can be performed. For instance, in order for an owner to perform snapshot, rollback, or replication operations, they would require those privilege sets in an assigned role.
|
N/A
The NetApp HCI platform does not provide any end-user self service capabilities of its own.
A self service portal enables end-users to access a portal where they can provision and manage VMs from templates, eliminating administrator requests or activity.
Self-Service functionality can be enabled by leveraging VMware vRealize Orchestration (vRO). This requires a separate VMware license. NetApp HCI officially supports vRealize Orchestration (vRO) through a plug-in.
|
N/A
|
|
|
|
Maintenance |
|
|
|
Unified
All storage related features and functionality are built into the DataCore SANsymphony platform. This consolidation means that only one product needs to be installed and upgraded, and minimal dependencies exist with other software.
Integrations with 3rd party systems (e.g. OpenStack, vSphere, System Center) are delivered separately but are free-of-charge.
|
Unified
All storage related features and functionality are built into the NetApp HCI platform. This consolidation means that only one core product suite needs to be installed and upgraded, and minimal dependencies exist with other software.
|
Partially Distributed
For a number of features and functions StarWind Virtual SAN relies on other components that need to be installed and upgraded next to the core Windows platform. Examples are backup/restore and advanced management software. As a result some dependencies exist with other software.
|
|
SW Upgrade Execution
Details
|
Rolling Upgrade (1-by-1)
Each SANsymphony update is packaged in an installation wizard which contains a fully guided upgrade process. The upgrade process checks all system requirements and performs a system health check before starting the upgrade and before moving from one node to the next.
The user can also decide to upgrade a SANsymphony cluster manually and follow all steps that are outlined in the Release Notes.
|
Rolling Upgrade (1-by-1)
NEW
NetApp HCI clustered architecture allows nondisruptive storage software upgrades (Element OS) on a rolling node-by-node basis. Upgrades can be performed during production hours with little to no workload impact.
Element OS 12.2 introduces maintenance mode, which enables taking a storage node offline for maintenance such as software upgrades or host repairs, while preventing a full sync of all data. If one or more nodes need maintenance, I/O impact can be minimized to the rest of the storage cluster by enabling maintenance mode for those nodes before beginning.
Software upgrades on compute nodes are orchestrated by VMware Update Manager (VUM). This procedure covers driver upgrades.
|
Manual Upgrade (1-by-1)
When updating StarWind VSAN, the latest build can be downloaded and executed. StarWind will automatically update all the components of the SDS stack on a single node. The update process has to be launched manually and sequentially on every node in a VSAN cluster.
|
|
FW Upgrade Execution
Details
|
Hardware dependent
Some server hardware vendors offer rolling upgrade options with their base software or with a premium software suite. With some other server vendors, BIOS and Baseboard Management Controller (BMC) updates have to be performed manually and 1-by-1.
DataCore provides integrated firmware-control for FC-cards. This means the driver automatically loads the required firmware on demand.
|
Manual (written procedure)
NetApp HCI provides the option to update firmware, including BIOS and BMC, on compute nodes by booting into firmware update boot media (USB drive, virtual media via BMC). Today, this process is manual and performed on one compute node at a time. NetApp Support can assist administrators with performing this task.
On storage nodes, firmware is updated in conjunction with an update of the Element OS software.
|
Manual Upgrade (1-by-1)
To perform driver updates, a remote session is scheduled with a StarWind Support engineer, who performs all the updates. Firmware and driver updates are first verified for any possible issues.
To keep the VMs and applications running during maintenance, StarWind performs the following steps (a command-level sketch for a Hyper-V node follows the list):
1. Check that StarWind devices are synchronized on both nodes and that all iSCSI connections are active.
2. Move VMs inside the cluster to the partner node.
3. In case the process presumes several restarts, put StarWind service into a disabled state or manual start on the required node.
4. Install the required updates/drivers.
5. Restart the node if required.
6. In case required, shut down the node and perform a firmware update.
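On a Hyper-V node, steps 1 through 3 roughly correspond to the following commands; the cluster node name and service name are placeholder assumptions and the actual procedure is driven by the StarWind Support engineer:

  # Drain roles (VMs) from the node about to be maintained (node name is a placeholder)
  Suspend-ClusterNode -Name "HV-NODE1" -Drain
  # Prevent the StarWind service from auto-starting across the expected reboots
  Set-Service -Name "StarWindService" -StartupType Manual
  # ... install updates/drivers, reboot and update firmware as required ...
  # Return the node to the cluster once maintenance is complete
  Resume-ClusterNode -Name "HV-NODE1" -Failback Immediate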
|
|
|
|
Support |
|
|
Single HW/SW Support
Details
|
No
With regard to DataCore SANsymphony as a software-only offering (SDS), DataCore does not offer unified support for the entire solution. This means storage software support (SANsymphony) and server hardware support are separate.
|
Yes
The entire solution is owned by NetApp, so support for all solution components (storage, compute hardware, solution software) can be provided by a single point of contact. Support for the hypervisor is provided by a third-party like the vendor (VMware) or a partner who provides VMware support.
|
Yes (optional)
StarWind Virtual SAN can optionally be supplied with StarWind ProActive Support to preventively resolve issues on hardware, hypervisor, and the StarWind SDS stack. Additionally, StarWind offers Managed Services covering not only SDS stack but also applications and services running on top of it and near it.
|
|
Call-Home Function
Details
|
Partial (HW dependent)
With regard to DataCore SANsymphony as a software-only offering (SDS), DataCore does not offer call-home for the entire solution. This means storage software support (SANsymphony) and server hardware support are separate.
|
Full
NetApps call-home function is called 'Active IQ' and is fully integrated into the platform. The feature is configured automatically during NetApp HCI installation. NetApp HCI telemetry is dispatched to both vCenter and Active IQ.
Active IQ is a proactive and preventative service that continuously monitors and evaluates the health of a customer’s hyperconverged infrastructure.
Active IQ is a basic support service and is included with every support plan.
|
Yes
StarWind ProActive Support combines both the 'Call Home' functionality and Predictive Analytics.
StarWind ProActive Support service notifies the StarWind Support team about any issues occurring on the node where StarWind Virtual SAN is installed covering storage, networking, compute and software layers. This is achieved by StarWind Agents running on each StarWind node and collecting the metrics from the servers and StarWind software.
|
|
Predictive Analytics
Details
|
Partial
Capacity Management: DataCore SANsymphony Analysis and Reporting supports capacity depletion monitoring and complements pool space threshold warnings by regularly evaluating the rate of capacity consumption and estimating when space will be depleted. The regularly updated projections give you a chance to add more storage to the pool before you run out of storage. It also helps you do a better job of capacity planning with fewer surprises. To help allocate costs, especially in private cloud and hosted cloud services, SANsymphony generates reports quantifying the storage resources consumed by specific hosts or groups of hosts. The reports tally several parameters.
Health Monitoring: A combination of system health checks and access to device S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) alerts help to isolate performance and disk problems before they become serious.
DataCore Insight Services (DIS) offers additional capabilities including log-analytics for predictive failure analysis and actionable insights - including hardware.
DIS also provides predictive capacity trend analysis in order to pro-actively warn about licensing limitations being reached within x days and/or disk pools running out of capacity.
|
Partial (Capacity)
NEW
NetApp Active IQ provides capacity forecasting for NetApp HCI systems.
Element OS 12.2 introduces periodic health checks on SolidFire appliance drives using SMART health data from the drives. A drive that fails the SMART health check might be close to failure. If a drive fails the SMART health check, a new critical severity cluster fault appears.
|
Partial
StarWind ProActive support is capable of Predictive Analytics by analyzing abnormal patterns on storage, networking, compute and software layers. This allows StarWinds Support team to automatically receive notifications as to possible issues that might occur on the servers.
Additionally, any other pattern that caused issues in one environment is analyzed and integrated into the ProActive Support database allowing the service to notify the StarWind Support team about the same patterns occurring in other clients environments.
|
|